Also, the FHIR QUICKSTART online learning content here, https://learning.intersystems.com/enrol/index.php?id=1492, shows you how to do just what you want.
Steve
I tracked this issue to $System.OBJ.Load(), and it was only apparent on IRIS 2021.2. After upgrading to IRIS 2022.1, the issue went away.
Hope this helps others.
OK... Update: this definitely looks like a Microsoft bug.
Setting up the same classes/data in a new namespace on the same IRIS instance, and creating a fresh DSN in ODBC to use, works fine.
I even created a new namespace, global/package mapped all data from the original failing namespace, and created a fresh DSN in ODBC to use - that works fine too.
So - it's not the class/package/data definition in IRIS, or the IRIS version. There is something Power BI is holding onto, related to the original DSN name used, that is causing grief.
I'm sorted for now.
Hi Louis
Firstly, I don't think there is anything wrong with the code above, but can you verify that the x509 certificate you created includes a reference to the Certificate Authorities (the entire CA chain) that issued the sender's public key?
Steve
Hi Prashanth, you do need to import these classes into the IRIS cloud instance's database. (VSCode then accesses the cloud instance and pulls the code locally for editing in a local folder, or accesses IRIS remotely for editing there.)
To use VSCode, the VSCode connection to IRIS uses the IRIS SuperServer port to access IRIS in order to compile classes. Therefore, this port must be open on the cloud instance.
With this SuperServer port open, a local instance of IRIS Studio could be started, connected to IRIS, and then used to perform the import. Studio menu: Tools -> Import Local...
If you do not have Studio installed locally on your desktop yet, you can use the Management Portal, by going to the menu System Explorer -> Classes -> Import (after selecting your desired namespace). On the dialog window asking for a file, you should be able to specify 'My Local Machine', and the browser will look locally for the XML file to import into the instance.
Alternatively, you can copy the XML file to the cloud instance (using FTP or other means), and from an IRIS session, use the $SYSTEM.OBJ.Load() utility to import the XML file from the command line.
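For example, from an IRIS terminal session in the target namespace (a minimal sketch - the file path here is just a placeholder, and "ck" means compile and keep source):
Set sc = $SYSTEM.OBJ.Load("/tmp/MyClasses.xml","ck")
If 'sc Do $SYSTEM.Status.DisplayError(sc)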
Steve
Hi Tom. The REST API definition itself within IRIS is not where TLS is negotiated and terminated between client and server (and hence not where mTLS is defined).
Requiring HTTPS over HTTP, and insisting on mutual authentication, is defined at the web server layer, which in turn communicates with IRIS on a separate port and protocol.
You need to first install a supported web server, and then add the IRIS Web Gateway (see the docs) to it. Prove that regular HTTP requests to the API work.
Then, on the web server, enforce HTTPS and mutual authentication. See your chosen web server's documentation for how to do this (InterSystems does not provide this, of course). IIS, Apache and NGINX are supported.
Once that's done, clients can only access the IRIS REST API over HTTPS, which is negotiated/terminated against the web server, which can also insist on client authentication (mTLS).
Hopefully this post sets you on the right path.
Sincerely,
Steve.
Hi
For existing documentation, I believe others have already directed you. If, however, you are not doing an in-place migration (that is, you are installing on new servers), then one needs to get a bit more creative. (There are several reasons why you may choose to switch to new servers - for example, wanting to move to a different OS, or to a version supported by IRIS that you cannot simply upgrade to.)
Your InterSystems Sales Engineer, or even the WRC, is able to provide the detailed advice you need, which is often specific to your situation. The main things to consider are the downtime you can afford when you fail over to IRIS, and the architecture you have (do you have multiple Cache nodes, ECP, mirrors, etc.?).
In general, however, what I have done successfully is create IRIS asynchronous mirror servers of the Cache nodes, then (on switchover) shut down Cache and promote the IRIS servers to active mirror failover members, etc. You obviously need to ensure that the IRIS set of servers is readily configured with security, memory, etc. to take over as a valid async DR node.
Finally, note that this creation of IRIS nodes as async mirror members of Cache servers is not supported as a production configuration; it is only allowed for migration.
Please reach out to InterSystems WRC or Sales Engineering for more in-depth discussion and planning.
Steve
Hi,
That should work (as advertised), but if you want a workaround, call a method instead that implements the same thing:
- assuming 'MyNamespace' contains, as a variable, the namespace name you are setting up (or hard-code it).
Then include the method:
Steve
Hi Muhammad,
For FHIR capability, one needs to be running the healthcare version of IRIS - so in your case, you would need to run the "IRIS for Health" product, which is also available as a Community Edition.
This feature does not require additional licensing in order to appear. You just need to be on the right version of IRIS for Health.
The option you mentioned here ("Health > FHIR Configuration > Server Configuration") was first introduced with release 2020.4, so any version of IRIS for Health should be on this or a later release to enjoy this feature.
InterSystems' AWS Marketplace offering that launches IRIS for Health Community Edition, here, does in fact launch the correct release. You could use that instance.
InterSystems' Quick Start tutorial here, which helps people learn about FHIR, also uses the correct release, and starts an instance for you in the cloud to use. If you are learning about FHIR, I suggest the Quick Start tutorial is your friend there too.
You do not need to be an InterSystems customer to take advantage of this.
If you are a customer of InterSystems, you can download the version of IRIS for Health that first shipped with this feature (2020.4) via the Distributions pages of the WRC, but note that this was a Docker container release only.
Assuming you do not want to use Docker containers, the next non-container release with this feature is IRIS for Health 2021.1, and as soon as that is generally available, I would recommend progressing with that version.
Sincerely,
Steve
I have.
As Robert mentioned, disk I/O is one issue. The other issue is drive letter changes if the drive is moved between hosts (for example, the 'Journal File Location' path set to D:\ on one instance, then drive 'D:' not found when the flash drive is moved to another host). This can potentially be overcome with scripts of sorts, though I haven't tried to do that.
Hi Fernando,
Strange. I've never seen this before, and it seems like a bug to me. If no other suggestions appear here, I would reach out to the support team at InterSystems for advice.
If you are just defining classes and routines, VS Code is another alternative IDE. It is being developed further to bring it in line with the features of Atelier, and provides another formidable option.
Of course, the more mature IDE is IRIS Studio, another alternative for you if you are running on a Windows client.
Steve
Hi Wayne,
I'm not sure I can help you with an article, but for what it's worth, the error indicates there may be an issue with the SOAP security policies of the service.
For example, where the mock service is expecting to be called using HTTPS, but it is not.
To try and resolve this, it may be worth reviewing what the mock service is expecting security-wise, and matching that with how it's being invoked.
Steve
Hi,
My guess is you are calling the query with more arguments than it expects on those occasions when it fails.
Providing, or inspecting again, the code making the call from the client side would be a good start to fixing your issue.
Steve
Hi,
No sooner had I asked the question than I found the solution myself!
And that is: when added to a production, the EnsLib.MsgRouter.RoutingEngine class that executes the rule has a production setting, "Force Sync Send", to force all sends to be synchronous.
Steve
Hi Eric
If you want your service to be part of the framework, but not actually use any specific connection functionality typically offered by adapters (SQL, FILE, etc.), just ensure that the adapter is set to 'Ens.InboundAdapter':
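A minimal sketch of such a service (the class name here is just a placeholder):
Class My.TimerService Extends Ens.BusinessService
{
Parameter ADAPTER = "Ens.InboundAdapter";

Method OnProcessInput(pInput As %RegisteredObject, Output pOutput As %RegisteredObject) As %Status
{
    // Called every CallInterval seconds while the service is enabled
    Quit $$$OK
}
}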
And - make sure your PoolSize is set to 1, so a job is started with the production. Note that for every cycle of the Call Interval setting, the OnProcessInput method will be called.
If you want to do your work regularly (i.e. "go through a list of values in a global and compare dates. If criteria is met, it will send an email."), then do this in the OnProcessInput method, at your desired CallInterval.
However, as you said "on start...", I'm assuming you meant on start of the production as a whole. In this case, leave the OnProcessInput method empty, with just a Quit $$$OK
statement, and (as others mentioned here) put the logic in the OnInit() method of your service, which will be invoked on production startup or on enabling/disabling of the service.
Note that without the Adapter parameter setting above, and the pool size set to 1, neither OnInit nor OnProcessInput is called.
Now - productions are meant to keep on running. You may eventually move away from putting this logic in OnInit, or anywhere else that requires a production restart in order to execute, as this affects other running business hosts. To explore other options further, you can:
(a) Work with the CallInterval, which calls OnProcessInput every n seconds, and build in logic that determines whether a particular cycle should do nothing, or (say, on the change of the day, or based on other controlling factors, like the size of your global entries) should go ahead and send the emails. Note that you can define properties on your business service to record state - initialise them in OnInit, and update them regularly while the service is running if you need to.
(b) Look at the Scheduler feature. The Scheduler controls the running state of a business host: with it, you can elect to enable/disable any service on a pre-defined schedule. So you can enable your service (with OnInit code to check globals and send emails) at an interval of your choice, without needing to stop/start the production. Click here for documentation.
Sincerely -
Steve
Sure thing...
Use the %Net.HttpRequest class to make the HTTP request, and take the response's data:
Set httprequest=##class(%Net.HttpRequest).%New()
Set httprequest.Server="ws.audioscrobbler.com" // the Server property holds the host name only - no http:// prefix
Set URL="/2.0/?method=chart.gettopartists&api_key=65218c8cdd03ba3836f9fc8491fb6957&format=json&limit=1000&page=10"
Do httprequest.Get(URL)
The httprequest object has a property 'HttpResponse', now containing the HTTP response. The HttpResponse, in turn, has a stream property 'Data' containing the entire HTTP response body.
So, you could read off this Data stream to get your raw JSON and set up the jstring variable; however, as I see you want to call db.%FromJSON, and that method takes a stream object anyway, you can skip setting up the jstring variable and just do this directly, which should work:
Do db.%FromJSON(httprequest.HttpResponse.Data)
Steve
Hi Laura,
I would declare the class property without the underscore, as you have tried, given the way the generated code (in the class ending in ...Thread1.cls) interprets this and generates code.
Having said this, I'm interested to know how you are going about generating the JSON string. This is probably where you need to focus your efforts and set a JSON element of "status_id".
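For example, if you are on a version with dynamic objects (2016.2 or later), setting an element whose name contains an underscore is straightforward. A quick sketch, assuming 'tStatus' holds the value you want to send:
Set obj = {}
Do obj.%Set("status_id", tStatus)
Set jstring = obj.%ToJSON()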
How are you generating your JSON, and what version of Ensemble are you on? Answers to these questions will help others looking at your post to contribute a solution.
Thanks - Steve
Hi,
Rather than programmatically modifying the source code and recompiling the class: if your page already defines, by default, the full set of columns available, you can show and hide each and every one of them programmatically by manipulating the table's columns collection, which is already part of the DOM.
Save this class in the SAMPLES namespace. In this example, the buttons hide some columns and un-hide others on both the client and server sides:
Even if your default <tablePane> definition did not hard-code the full set of columns in the XData block for you to later manipulate, you can also programmatically add the table and its column objects, something like this, from a server-side instance method:
// Add Table of transactions
set nTable=##class(%ZEN.Component.tablePane).%New()
set nTable.id="tblTrans"
set nTable.queryClass="myAccounts.QryUtils"
set nTable.queryName="Transaction"
// Define a column and add to table
set column=##class(%ZEN.Auxiliary.column).%New()
set column.colName="Narrative"
set column.hidden=1
do nTable.%AddColumn(column)
do %page.%AddComponent(nTable)
- Steve
Hi,
I had a closer look, and there is not actually an "OnValidate" method per se that exists for use - but the feature does exist.
You need to create a <Setting>IsValid() method, something like this:
Class qmd.CustomAdapter Extends EnsLib.TCP.InboundAdapter [ ProcedureBlock ]
{
Parameter SETTINGS = "MyCustomSetting";

Property MyCustomSetting As %Integer;

ClassMethod MyCustomSettingIsValid(%val) As %Status
{
    If %val'?1.n Quit $$$ERROR(5001,"My Custom Setting should be an integer.")
    Quit $$$OK
}
}
Hope this helps.
Steve
Hi Satheesh,
ECP allows you to 'remotely mount' a database on one instance from another. For example, let's say you have (as you do) a Cache instance with a database 'C', and a second instance running Ensemble.
When the basic ECP configuration is done, you will then be able to define a 'remote' database on the Ensemble instance - the remote database being, the 'C' database managed by the Cache instance. The contents within the 'C' database on Cache will be available as a database on Ensemble just as though it was a locally mounted one.
In this scenario, the Cache instance is the ECP Server (or ECP Database server), and the Ensemble instance is the ECP Client (or ECP Application server instance).
The basic ECP configuration is carried out via the ECP settings accessible through the Management Portal, and is done on the ECP client side (the Ensemble side) - see Admin -> Settings -> ECP Settings. You need to tell Ensemble where the ECP Database Server (Cache) is.
See the documentation for details, but when that is done, your Cache databases will be available to Ensemble as databases that can be defined as remote databases.
Note that ECP works at the database level, not at the namespace level... so the configuration of the Ensemble namespace (where you are building your production) needs to be enhanced such that the Ensemble namespace sees the globals, packages and routines it needs. Edit the Ensemble namespace's global, package, and routine mapping definitions.
Regarding SQL projections when you are on the ECP client (Ensemble): if you do not package-map your class definitions, you won't have those classes available for use. With only global mappings defined between your Ensemble namespace and the remotely mounted Cache database, you will only get direct global access.
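If you prefer to script the mappings rather than define them in the portal, the Config classes can do it. A sketch, run from the %SYS namespace, assuming a namespace 'ENSPROD', a global 'MyData', a package 'MyApp' and a remote database named 'REMOTEC' (all placeholder names):
New $NAMESPACE Set $NAMESPACE="%SYS"
Set props("Database")="REMOTEC"
Set sc=##class(Config.MapGlobals).Create("ENSPROD","MyData",.props)
Set sc=##class(Config.MapPackages).Create("ENSPROD","MyApp",.props)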
The online documentation for ECP overview is here: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GDDM_overview
Finally, be aware that ECP is an enterprise feature, and that the corresponding license keys for your instances should include the Multi-Server feature and be ECP capable.
Sincerely,
Steve
Hi Tom,
For much of what you requested, you should look at InterSystems IRIS's new Document database model.
The Document database accepts and persists collections of JSON documents inside InterSystems IRIS. Additionally, it allows you to define a set of properties for documents in this document database. These properties correspond to elements in the inserted documents (i.e. you can define "LastName" as being the element 'lastname'), which effectively updates an index for every document added to the collection that has an element 'lastname'.
You can then perform an SQL query, using any one of these columns, like 'lastname', within the query to restrict/select the JSON documents you are after.
I think this new feature will satisfy your requirements. There is even a generic REST API that allows you to add/delete/update JSON documents.
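A quick sketch of the API (the database and property names here are just examples):
Set db = ##class(%DocDB.Database).%CreateDatabase("MyApp.People")
Do db.%CreateProperty("LastName","%String","$.lastname")
Do db.%SaveDocument({"firstname":"Jo","lastname":"Smith"})
You can then use 'LastName' in SQL queries to select matching documents.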
See: https://learning.intersystems.com/course/view.php?id=687
HTH,
Steve
Hi,
If you want to protect the database, start by creating a resource via Security Management > Resources in the Management Portal, giving it an appropriate name that makes sense to you - for example, if your database is called "myAppDB", create a security resource "%DB_MYAPPDB".
Prefixing the name with '%DB_' is convention, not a requirement. During setup, add a description, and select whether, by default, users have Read, Write and/or Use privileges.
This is only the first step. Now that you have an identifiable security asset you want to secure, you can proceed.
You need to decide how users that fall under this new role of yours will interact with this DB, and build up the role definition accordingly. Using the Security Management > Roles section, create and select your new role, and add the database resource that protects your database (in my example above, '%DB_MYAPPDB'), identifying whether users of this role can only READ, or can also WRITE, data in this database.
This action assigns, to users who belong to this new role, the privileges afforded for this database.
Actually working with this database, however, would require that you add some more resources to this role. For example, if these users are developers, and you want to give them access for development, then add the %Development resource to your new role too.
You will also, more than likely, need to add a %Service_-type resource that allows users of this role service access into Cache, for example via TELNET, or via ODBC, etc. Your requirements will differ from others', but if Studio access is required, definitely include %Service_Object (Use).
Finally, have a look at the pre-defined role on the system called "%Developer", which is set up by default on most installations and is something you can use for reference. Looking at this role and its resources+permissions (privileges), you will see it has some databases under protection, allows %Development, and includes a bunch of %Service_ resource types allowing different kinds of access, as explained above.
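Incidentally, if you prefer to script this setup, the Security classes in the %SYS namespace can create the resource and the role for you. A sketch (the names and permissions are placeholders):
New $NAMESPACE Set $NAMESPACE="%SYS"
Set sc=##class(Security.Resources).Create("%DB_MYAPPDB","Protects myAppDB","")
Set sc=##class(Security.Roles).Create("MyAppDeveloper","Developers of myAppDB","%DB_MYAPPDB:RW,%Development:U")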
Sincerely,
Steve
Hi,
If I understand you correctly, I think you want to know whether there is a function you can invoke, from the condition field of a business rule, that checks if a given date is between two other dates.
If you want to see what functions are available, the best way is to get assistance from the Expression Editor (invoked when you double-click in the Condition field, or click 'fx' on the main Business Rules definition window while the condition field is in focus).
While in the Expression Editor, clicking 'fx' again will list the available functions, and the editor will help you through the process of building an expression that includes these functions, wrapping them with AND, OR and other operators.
I suspect, however, that by default we do not supply a built-in Business Rules function for your scenario. However (and I think this is a powerful feature), you can create whatever custom function you need and add it to the ones already available. When you create your function, it will appear in the Expression Editor and can be used just as if it came with Ensemble.
Building your own function for this purpose is easy: it is just a class that extends Ens.Rule.FunctionSet, and must follow some basic rules. See the documentation here:
Your function would take 3 arguments - value, date1 and date2 - to determine whether the date in 'value' is between date1 and date2. Here I'm assuming the date format is CCYYMMDD, and I'm converting the incoming dates to numeric $HOROLOG dates using the $zdh (or $zdateh) function. Change accordingly if your date format is different.
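For example, a minimal sketch (the class and method names are placeholders):
Class My.RuleFunctions Extends Ens.Rule.FunctionSet
{
/// Returns 1 if value (CCYYMMDD) falls on or between date1 and date2
ClassMethod DateIsBetween(value As %String, date1 As %String, date2 As %String) As %Boolean
{
    Quit ($zdh(value,8)>=$zdh(date1,8))&&($zdh(value,8)<=$zdh(date2,8))
}
}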
Defining a class like the sample above in your Ensemble-enabled namespace, and compiling it, will expose this function (and the comment after the three '/') to the Ensemble Expression Editor like any other function that already comes with Ensemble.
Hope this helps.
Steve
Hi Soufiane
The graphical rules editor used in the UI generates a class, with an XML block in that class, to represent your rules.
If you want a target to be different based on some code, why don't you create multiple conditions for the different targets you have, then base each condition on some database setting?
The rule itself remains static (except when you need to define a new target) and it will show all the possible paths that can be taken by the rule.
Then, programmatically, you change that value in the database which sits behind all the conditional statements you have - and thus, programmatically, you effect a target change.
The other alternative is to use a Business Process. You can send your message to be handled and routed by a Business Process in BPL. The process can programmatically resolve the name of the target component into a context variable (let's say, context.TargetName). After context.TargetName has been programmatically resolved, make a BPL Call action, and for the Call action's "target" property, don't hardcode a value.
Instead, supply "@context.TargetName", and the message will be sent to whatever the value of context.TargetName is at that point in time.
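For illustration, the relevant fragment of the BPL might look like this (a sketch - the component name, request type and context property are placeholders):
<code><![CDATA[ Set context.TargetName = "MyApp.SomeOperation" ]]></code>
<call name="SendToResolvedTarget" target="@context.TargetName" async="1">
<request type="Ens.Request"/>
</call>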
Steve
Hi Carey,
As you probably realised, the intention of the business rule (and the rule sets within a rules class) is typically to have one set of rules that is applied to a message, but for which you could have multiple versions, based on an effective date/time period. That is: if the date is X, run rule set #1, but if the date is Y, run rule set #2.
If date ranges overlap, then whichever is the first valid date range, in the order in which the rule sets are arranged, defines which set of rules is executed. This is important to remember for later...
You can create 'Rule Set #2' ahead of time, making sure that the effectiveBegin of rule set #2 is immediately after the effectiveEnd of rule set #1 (as you indicated you wanted).
However, the second rule set would need to be, essentially, a copy of the first set, except for the changes to the individual rule item(s) whose behavior you want to differ.
Admittedly, cloning a whole rule set is labor-intensive via the UI, but extremely easy if you open the generated class in Studio, as you can just copy/paste the XML elements between the <ruleSet> tags, save, and re-compile. After you have your second set, make your edits to rule set #2's effective date range and whatever rule changes are needed.
The Delegate action sends the message to another rule definition altogether, and Send sends the message to any other business component (which can be a routing rule). Using these two actions, based on any condition whatsoever, would require you to build a new rule definition or component to act as their target - which you said you did not want to do - so they are out.
Now, what you *can* do is write a user-defined function to retrieve a rule definition's second rule set date range, and use the value from this function in a conditional statement that drives the behavior of any of your first rule set's rule items. You must ensure, however, that rule set #1 is always the one that gets executed, and that the system never falls through to running rule set #2 - so even though you put a date range on rule set #2, leave the date ranges for rule set #1 blank or wide open, so it will be the first and only one to qualify for execution every time.
Warning: this is not standard use or best practice, and I'm not recommending you do it, as it will effectively negate the rule-sets-with-effective-periods feature, which you may want to use in the future. I would stick with the approach of cloning rule set #1 into rule set #2, setting an appropriate effectiveBegin/effectiveEnd ahead of time, and making the individual changes you need take effect in the future.
Steve
Hi Ryan,
The message that you pass to your SOAP-based business operation should (as you indicated in your 3rd bullet point) contain both the extracted HL7 data and the authorization key you retrieved in the previous step.
I'm assuming the SOAP business operation you are using in the last step was automatically generated by the Studio wizards, so you will have a class method for each web method of the SOAP service.
You need to edit the default generated versions of these methods the wizard gives you, in order to add your SOAP header.
You can access ..Adapter.%Client in this business operation to get at the private instance of the web service client class. So, using something along these lines (a sketch - the header class and its property are placeholders you would define to match the service's WSDL):
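Set tHeader=##class(MyApp.AuthHeader).%New() // a class you define, extending %SOAP.Header
Set tHeader.AuthorizationKey=pRequest.AuthKey
Do ..Adapter.%Client.HeadersOut.SetAt(tHeader,"AuthHeader")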
you can set the headers for that particular message invocation.
Sincerely - Steve
Hi Scott,
As far as I'm aware, you do not need a license to use the DeepSee User Portal, as long as you don't need to use DeepSee cubes or DeepSee queries (which need to be licensed).
So if the information you want to present in a DeepSee User Portal-style dashboard can be sourced directly via SQL queries, etc., you can use either:
(a) Ensemble DeepSee Dashboards (receiving data from Business Metric classes). See http://docs.intersystems.com/ens20152/csp/docbook/DocBook.UI.Page.cls?KEY=EGDV_bmetric#EGDV_bmetric_dashboard
or
(b) DeepSee Dashboards based on a KPI class, which is designed to use only SQL to extract a result set (see the sketch below).
See: http://docs.intersystems.com/ens20152/csp/docbook/DocBook.UI.Page.cls?KEY=D2MODADV_ch_kpi
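For example, a minimal SQL-based KPI class might look like this (a sketch - the class name and query are placeholders):
Class My.SalesKPI Extends %DeepSee.KPI
{
XData KPI [ XMLNamespace = "http://www.intersystems.com/deepsee/kpi" ]
{
<kpi name="SalesKPI" sourceType="sql"
 sql="SELECT Region, SUM(Amount) Total FROM MyApp.Sales GROUP BY Region">
<property name="Total" columnNo="2"/>
</kpi>
}
}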
Of course, if you want more control over the UI, then go down the ZEN, ZEN Mojo, etc. route.
Steve