Aaron,

In my experience you need to purchase the schema.  You can go to http://x12.org to find out more.  Depending on your usage and needs the licensing can be a bit pricey, especially if you only need one schema.  If you have a partner that has access, you could piggyback on their license, assuming you have a working relationship with them.  Unfortunately I did not see any option to purchase just a single schema, though something like that existed in the past, as I have a colleague who did exactly that.  You can contact the organization to inquire.

There is no general rule of thumb; it really depends on the nature of your application and its usage patterns.  For example, if this is primarily a REST-based service, I would set the maximum to your license capacity minus a reserve for administrative functions, background tasks, and reporting (e.g., with a 100-connection license you might cap web connections at 80 and hold 20 in reserve).

In a mixed environment you need a more balanced split between REST (web) calls and direct connections.

As I indicated, you really need to have an idea of your application's usage patterns.

One thought that occurs to me is to check the 'Maximum Server Connections' setting in the CSP Gateway under Server Access.  This is the maximum number of connections allowed; when it is reached, further requests are queued until a connection is available or the client times out.  This is often used with REST services to keep clients from receiving a licensing error (which is never desirable) or to keep CSP calls from consuming the entire system.  If this is the case, examine your license usage and CSP sessions to see what might be overloading the server.

Of course if this is a very active system then you may need to increase the maximum connections allowed and possibly the license as well.

Hope that helps.

Diego,

As I understand it, you have an existing VM on AWS that you have access to.  When you run the ssh command you indicated, you end up at a Linux command prompt, and you need to know how to install IRIS in that environment.  This is a straight installation, not one running in a Docker container.

All that being the case you need to do the following:

  1. Download the appropriate installation kit from https://wrc.intersystems.com/wrc/coDistIRIS.csp.  You can filter the name by 'Health' and the OS by 'Ubuntu' to get a shorter list.
  2. Now use scp to copy that installation file to your AWS environment.  Note that scp's syntax is similar to that of the ssh and cp commands; something like scp <kit-file>.tar.gz <user>@<your-aws-host>:/tmp.  A quick search will turn up many resources for this command.
  3. Finally, read the documentation on installation: https://docs.intersystems.com/iris20221/csp/docbook/DocBook.UI.Page.cls?... https://docs.intersystems.com/iris20221/csp/docbook/DocBook.UI.Page.cls?...

If you get stuck on any issues come back here or call directly into the Worldwide Response Center https://www.intersystems.com/support.

Finally, if you DON'T have an existing environment you can look at InterSystems Cloud Manager to provision and deploy IRIS for Health to a cloud environment.

https://docs.intersystems.com/iris20221/csp/docbook/DocBook.UI.Page.cls?...

Much of the documentation refers to deployment with containers; however, there is a containerless option too, which is covered in one of the subsections off this main page.

I hope this helped you get started.

Daniel,

Ok, so the data is basically a parsed report, and each daily run replaces the previous run with no delta or net-change processing.  So on error they could simply rerun the process to get the data.

One suggestion on the alerting: use a distribution group as the address to send to, and make sure at least one of the business consumers of the data is on that list.  Alerting only a technical resource could lead the business side to use incomplete data, as they would not know that something went wrong.  Relying on the technical resource to pass on this information can be risky.  Not a dig at the customer, just reality.

So, back to your question: I would still lean towards a solution that separates the loading of the data from the export to MS SQL Server.  Since you want to avoid the use of messages for the data, I would still lean towards a global mapped to IRISTEMP; note NOT a process-private global, as the business hosts will run in separate processes.  This is a repetitive process, so the fact that the database will grow should not be an issue; it will reach the size necessary to handle the load and stay there.  You can, if needed, programmatically manage the mapping using the Config.MapGlobals API ( https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?&LIBRARY=%25SYS&CLASSNAME=Config.MapGlobals).
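As a minimal sketch, run from the %SYS namespace (the namespace MYAPP and global name MyApp.Staging are placeholders for illustration):

new $NAMESPACE
set $NAMESPACE = "%SYS"
// map ^MyApp.Staging into IRISTEMP for namespace MYAPP
set props("Database") = "IRISTEMP"
set sc = ##class(Config.MapGlobals).Create("MYAPP", "MyApp.Staging", .props)
if 'sc { do $system.Status.DisplayError(sc) }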

So this would look something like this:

  1. A Business Service accepts and processes the data into a non-journaled global on a schedule.  When the data has been consumed, a message is generated and sent to the Business Operation to trigger the next phase.
  2. The Business Operation that writes to MS SQL Server reacts to receiving that message, which includes how to identify the dataset to load.  It consumes the data out of the global, writes it to SQL Server, and once complete purges the data in the global for this run.
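A rough sketch of that handoff; the global, message class, and configured target names below are all hypothetical:

Class MyApp.Msg.RunComplete Extends Ens.Request
{
Property RunId As %String;
}

// in the Business Service, once the run has been loaded into the staging global
set runId = $zdate($horolog, 8)                  // key each run, e.g. by date
// ... set ^MyApp.Staging(runId, seq) = record, for each record ...
set msg = ##class(MyApp.Msg.RunComplete).%New()
set msg.RunId = runId
quit ..SendRequestAsync("To.MSSQL.Export", msg)

// in the Business Operation's OnMessage(), consume and then purge the run
set seq = ""
for {
    set seq = $order(^MyApp.Staging(pRequest.RunId, seq), 1, record)
    quit:seq=""
    // ... write record to SQL Server via the adapter ...
}
kill ^MyApp.Staging(pRequest.RunId)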

Anyway that is my two cents on this.  I am sure others will have different ideas.

First let's get the adapter concerns out of the way.  An adapter, in general, is only the "how do I connect" part of the picture.  What you need to do is write a Business Operation that has this set as its adapter.  Within that operation you can do anything you want with the payload that is returned.
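The general shape is something like this (the class name is a placeholder, and the SQL outbound adapter is used here only as an example; substitute whichever adapter you are working with):

Class MyApp.MyOperation Extends Ens.BusinessOperation
{

Parameter ADAPTER = "EnsLib.SQL.OutboundAdapter";

Property Adapter As EnsLib.SQL.OutboundAdapter;

Method OnMessage(pRequest As Ens.Request, Output pResponse As Ens.Response) As %Status
{
    // ..Adapter covers the "how do I connect" part;
    // what you do with the returned payload happens here
    quit $$$OK
}

}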

If you are not familiar with how this works I would start with our documentation.  Here are a couple of links to get you started:

https://docs.intersystems.com/iris20212/csp/docbook/DocBook.UI.Page.cls?KEY=EGDV_intro

https://docs.intersystems.com/iris20212/csp/docbook/Doc.View.cls?KEY=EGDV_busop

There are also Learning Service resources you could use.

Now to your process.  First off, what is the reason you don't want messages or any tracing?  There are a number of good reasons to utilize the structure of IRIS interoperability, including the ability to see a trace if something goes wrong, and queue persistence to provide resilience in delivery.

If messaging is still not viable for any reason, I would consider creating a temporary holding class/table for the data, then sending a message to another Business Operation, based on the SQL adapter, that will pick this data up, write it to MS SQL Server, and then remove it from the temporary storage.  You can map this data to either a non-journaled database or to IRISTEMP, depending on your needs.  Also allow for the fact that there could be multiple batches in progress, so how you key the temporary storage will be important.
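A minimal sketch of such a holding class, keyed so that concurrent batches stay separate (all names hypothetical):

Class MyApp.Staging.Row Extends %Persistent
{

Property BatchId As %String [ Required ];

Property Seq As %Integer;

Property Payload As %String(MAXLEN = "");

Index BatchIdx On (BatchId, Seq);

}

Map the class's data global to a non-journaled database or IRISTEMP as discussed, and have the SQL operation remove the batch (e.g. DELETE FROM MyApp_Staging.Row WHERE BatchId = ?) once it has been written.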

Some important questions:

What happens to the data if the process aborts in the middle?

Would missing any of the data being pulled have a negative effect on the business?

Is there any concern over data lineage for security or external auditing?

Here is a quick example program I wrote a couple of months ago.  This uses JDBC and the JayDeBeApi library others have mentioned.  Note that the credentials import (a local credentials.py file) provides a set of login credentials in the following format:

LocalCreds = {"user":"SuperUser", "password":"SYS"}

Here is the code:

import jaydebeapi
import credentials

def get_database_connection(inpDBInstance):
    IRIS_JARFILE = "/home/ritaylor/InterSystemsJDBC/intersystems-jdbc-3.2.0.jar"
    IRIS_DRIVER = "com.intersystems.jdbc.IRISDriver"
    AA_JARFILE = "/home/ritaylor/Downloads/AtScale/hive-jdbc-uber-2.6.5.0-292.jar"
    AA_DRIVER = "org.apache.hive.jdbc.HiveDriver"
    # Note: connecting to two data sources in the same program requires that you
    # reference the paths to BOTH jar files in a list in every call. Otherwise the
    # second connection attempt will fail. It appears the paths only get added
    # once within a process.
    JDBC_JARFILES = [IRIS_JARFILE, AA_JARFILE]
    # Database settings - these should live in a config file somewhere
    if inpDBInstance == "local":
        dbConn = jaydebeapi.connect(IRIS_DRIVER,
                                    "jdbc:IRIS://18.119.2.28:1972/USER",
                                    credentials.LocalCreds,
                                    JDBC_JARFILES)
    else:
        dbConn = None
    return dbConn

def run_database_query(inpQuery, inpDBInstance):
    resultSet = None
    dbConn = get_database_connection(inpDBInstance)
    if dbConn is None:
        return resultSet              # unknown instance name - nothing to query
    try:
        cursor = dbConn.cursor()
        cursor.execute(inpQuery)
        resultSet = cursor.fetchall()
        cursor.close()
    finally:
        dbConn.close()                # always release the JDBC connection
    return resultSet

def print_db_result_set(resultSet):
    if resultSet:                     # None and an empty list both fall through
        for row in resultSet:
            print(row)
    else:
        print("Input result set is empty")

if __name__ == "__main__":
    # quick demonstration - substitute a query that exists on your instance
    rows = run_database_query("SELECT TOP 5 * FROM Sample.Person", "local")
    print_db_result_set(rows)

Daniel,

One thing I always recommend is to get familiar with the API using a standalone tool before attempting to code the programmatic interface.  I like Postman, as it has a pretty good UI to work with.  If you are at the command line, you can use curl.

https://www.postman.com/

Postman can also generate the code for different programming environments.  Unfortunately not ObjectScript, but it is fairly easy to translate from the examples you can see.
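For example, a simple GET translates to ObjectScript along these lines (the host, path, and TLS configuration name are placeholders):

set req = ##class(%Net.HttpRequest).%New()
set req.Server = "api.example.com"
set req.Https = 1
set req.SSLConfiguration = "MySSLConfig"   // an SSL/TLS configuration defined on your instance
set sc = req.Get("/v1/widgets")
if sc { write req.HttpResponse.Data.Read(32000) }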

Marcos,

I was going through some unanswered questions and came across yours. If you could share your spec or DM it to me, I can take a look.

However, understand that the %Stream.Object probably contains the JSON payload that you need.  As such, you can get your dynamic object with the following command:

set dyObject = {}.%FromJSON(body)
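From there you can work with the payload directly, for example (the property name is hypothetical):

write dyObject.patientId, !
// or walk every top-level property
set iter = dyObject.%GetIterator()
while iter.%GetNext(.key, .value) { write key, ": ", value, ! }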

Hope that helps.

One way to be sure whether or not the request is reaching IRIS is to go into the IRIS Web Gateway on the web server and use the trace utility to see if the request is coming in and what it looks like.  Turn on the trace, make a request, and then come back and turn off the trace.  Also, the default on the Web Gateway server definition is to use gzip compression, which will make the body unreadable; you can temporarily turn this off while you run this test.

Hopefully you are doing this in a development environment so this will have no impact on production.

UPDATE:  One other thing is to check the audit logs for any security issues you may be hitting.  This does not sound like the issue for you, but it's worth checking.

You should understand that while InterSystems employees are on the community, this is really a public forum and not an "official" support path.  Questions are answered by the community at large when they can.  For people in the forum to help, more information is needed.  You indicated you are working with HealthShare; however, this is really a family of solutions.  Which specific product are you referring to?  What part of that product are you trying to understand better?  The more specific you can be, the easier it is for the community to help.

If you have an immediate need, I would suggest that you contact the Worldwide Response Center (referred to as the WRC) for support.  Here is the contact information:

Phone:
+1-617-621-0700
+44 (0) 844 854 2917
0800615658 (NZ Toll Free)
1800 628 181 (Aus Toll Free)

Email:
support@intersystems.com

Finally, learning services (learning.intersystems.com) and documentation (docs.intersystems.com) can be of great help.  For HealthShare-specific areas you do need to be a registered HealthShare user; if you are not, work with your organization and the WRC to get that updated.

Is it expected that this will be a single socket connection that is continually available for bi-directional communication?  I ask because my initial thought was that we have two completely separate interfaces here.  On the remote side (ip) there is a listener on the indicated port number; you will be connecting to this ip+port to send your ADT.

A completely separate communication is initiated by the remote system to YOUR IP address, where you would have a listener on the agreed port, limited to accepting connections only from the remote IP.  The remote system would send the A19 over this connection.  If this is the case, then you can simply use our built-in HL7 TCP operation and service to accomplish this.
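In production terms that is two items along these lines (names, addresses, and ports are placeholders):

ADT_Out : EnsLib.HL7.Operation.TCPOperation
    adapter settings: IPAddress = <remote ip>, Port = <remote port>
A19_In : EnsLib.HL7.Service.TCPService
    adapter settings: Port = <your listening port>, AllowedIPAddresses = <remote ip>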

If this is truly bi-directional communication over the same open TCP connection, then @Jeffrey Drumm is correct: they would need to provide the custom protocol they use to manage the communications.

I think John is basically correct, but I don't see this as an issue.  When you are doing client-side editing, which is what I normally do, you need to export code to your project to edit it.  This is a deliberate action, as not every definition you look at should become part of your project.  When you choose 'Go To Definition', the InterSystems extension looks to see if the definition is local (meaning client-side) and opens that copy if it is.  If not, it opens it from the server, which is always read-only when working in client-side editing.  To edit it, export it and then open the local copy.