Whether to use the Adapter or to instantiate %Net.HttpRequest directly really depends on your needs.  The adapter seeks to make things "simpler" in some ways.  However, if you need greater control over the process and the response, using %Net.HttpRequest directly is also a reasonable direction.  I have done both, depending on my needs.
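
For example, a direct call with %Net.HttpRequest might look something like this (host, port, path, and payload below are just placeholders):

Set request = ##class(%Net.HttpRequest).%New()
Set request.Server = "myserver.example.com"    // placeholder host
Set request.Port = 8080                        // placeholder port
Set request.Timeout = 30                       // seconds to wait for a response
Set request.ContentType = "application/json"
Do request.EntityBody.Write("{""id"":123}")    // request payload
Set sc = request.Post("/api/resource")         // placeholder path
If $System.Status.IsOK(sc) {
    Set responseBody = request.HttpResponse.Data.Read()
}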

Glad it is working for you in a manner that is maintainable.  That is what is important.

Sorry if I am a bit late to the discussion.  Let me address your issues in order:

  • This is not necessarily a problem, but I noticed that the method SendPostRequest is executed again from start to finish for each reply, reinitializing all variables as if it were the first execution, except for the RetryCount variable, which indicates the current resend count.
    • This is expected behavior.  Messages are handled as a specific unit of work (atomic).  When a message errors and is going to be retried, it is basically put back on the queue to process.  When the retry occurs, it is treated as a new message with the exception, as you noted, of the retry count.
  • Setting Retry = 0 prevents message resending, even with active Reply Code Actions (e.g., E=R).
    • Also expected.  With this configuration the reply action code may be set to retry, but this setting indicates that no (0) retries are to be attempted.
  • The timeouts are not respected. Instead, the message is resent every 30 seconds instead of 18 (I also tried other Response Timeout values like 20 or 35 seconds, but nothing changes). The Retry Interval is not respected either; the resending is attempted a number of times equal to FailureTimeout/30 instead of FailureTimeout/RetryInterval, as written in the documentation.
    • This seems to be the main issue.  ResponseTimeout is a property of the Adapter, and its default value is 30 seconds, which is what you are seeing.  It appears that you have a custom Business Operation here that uses the EnsLib.HTTP.OutboundAdapter and that you assign this adapter in code rather than using the ADAPTER parameter.  When you initialize the adapter this way, you must assign the reference to the Adapter property of the Business Operation; then, when you want to set the response timeout in code, you update that property on the Adapter.  I would highly recommend using the ADAPTER parameter unless you need to change the adapter dynamically at runtime.  If it is done in code at runtime, you need to be sure that everything is initialized properly (see the sketch after this list).
  • After the Failure Timeout expires, an error is generated even if one or more responses have been received. Despite the responses arriving, the BO does not detect them and generates the error: "ERROR 5922: Timed out waiting for response". The message continues to be resent until the Failure Timeout expires, and finally generates "ERROR <Ens>ErrFailureTimeout: FailureTimeout of 90 seconds exceeded in lombardia.OMR.BO.Sender; status from last attempt was OK".
    • What is being reported here is the last known status of the operation; in this case that is the error indicated.  There is a difference between a reply and a response.  The Visual Trace is indicating that the operation received a reply, and that reply was an error.  The 'Response' is what is received from the web server; this never arrived, hence the error.  Business Operations will always receive a reply of some kind; this reply may indicate an error, as in this case, or success.
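
To illustrate, here is a minimal sketch of a Business Operation that declares its adapter through the ADAPTER parameter (class, message, and property names here are only placeholders); the framework then instantiates the adapter and exposes it as ..Adapter:

Class Demo.HTTP.Sender Extends Ens.BusinessOperation
{

Parameter ADAPTER = "EnsLib.HTTP.OutboundAdapter";

Parameter INVOCATION = "Queue";

/// Re-type the inherited Adapter property so adapter settings are easy to reference in code
Property Adapter As EnsLib.HTTP.OutboundAdapter;

Method SendPostRequest(pRequest As Ens.StringRequest, Output pResponse As Ens.StringResponse) As %Status
{
    // The framework has already initialized ..Adapter, so its settings
    // (for example, ResponseTimeout) can be read or adjusted here
    Set ..Adapter.ResponseTimeout = 18
    // POST the payload (no form variables) and capture the HTTP response
    Set tSC = ..Adapter.Post(.tHttpResponse, "", pRequest.StringValue)
    If $$$ISERR(tSC) Quit tSC
    Set pResponse = ##class(Ens.StringResponse).%New()
    Set pResponse.StringValue = tHttpResponse.Data.Read()
    Quit $$$OK
}

XData MessageMap
{
<MapItems>
  <MapItem MessageType="Ens.StringRequest">
    <Method>SendPostRequest</Method>
  </MapItem>
</MapItems>
}

}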

If you continue to have issues, I would encourage you to reach out to the Worldwide Response Center (WRC) for support.  Additionally, you could contact your assigned Sales Engineer.

Regards,

Rich Taylor

You can also use the iris.system object in Python.

iris.system.Version.GetVersion() returns the full version ($ZV)

IRIS for UNIX (Ubuntu Server LTS for x86-64 Containers) 2022.2 (Build 356U) Thu Oct 6 2022 22:56:28 EDT

There are other methods to call here, such as GetMajor(), which would return '2022', and GetPlatform(), which returns 'Ubuntu Server LTS for x86-64 Containers'.

Run dir(iris.system.Version) at the Python shell to see them all.

Prudhvi,

The simplest answer is no.  ZEN, which has been a deprecated technology for some time, is a complete framework deeply integrated into Caché.  It relies heavily on synchronous communications, which has been deprecated by, though it is still available in, most browsers.  For both reasons, new development should not be done using ZEN.

Angular, on the other hand, is a modern framework for web application development.  It is inherently asynchronous and therefore disconnected from the back-end.  As Eduard stated, the typical communication methodology is for the back-end to provide a RESTful interface, which is quite easy to develop and publish either directly from IRIS or via the InterSystems API Manager.

I am going to make the assumption that this is part of an existing application that you are looking to do new development on.  As stated, ZEN is deprecated, so it would be a good idea to develop a roadmap to migrate from ZEN to something like Angular.  I would take this new project and use it to explore the move to Angular.

You should also note that at some unknown point in time browsers could remove support for synchronous communications.  That will likely not happen for quite some time, as many older applications still rely on it.  Several years back one browser, Firefox I think, tried to remove this support and had to quickly backtrack as too many applications stopped working.  So you have time.

There is no general rule of thumb.  It really depends on the nature of your application and usage patterns.  For example, if this is primarily a REST-based service, I would set the maximum to your license capacity minus a reserve for administrative functions, background tasks, and reporting.

In a mixed environment you need a more balanced split between REST (web) calls and direct connections.

As I indicated, you really need to have an idea of the usage patterns of your application.

One thought that occurs to me is to check the 'Maximum Server Connections' setting in the CSP Gateway under Server Access.  This is the maximum number of connections allowed; when it is reached, further requests are queued until a connection is available or the client times out.  This is often used with REST services to keep clients from receiving a licensing error, which is not desirable, or to keep CSP calls from consuming the entire system.  If this is the case, you should examine your license usage and CSP sessions to see what might be overloading the server.

Of course if this is a very active system then you may need to increase the maximum connections allowed and possibly the license as well.

Hope that helps.

Diego,

As I understand it, you have an existing VM on AWS that you have access to.  When you run the SSH command you indicated, you end up at a Linux command prompt and need to know how to install IRIS in that environment.  This is a straight installation, not running in a Docker container.

All that being the case, you need to do the following:

  1. Download the appropriate installation kit from https://wrc.intersystems.com/wrc/coDistIRIS.csp.  You can filter the name by 'Health' and the OS by 'Ubuntu' to get a shorter list.
  2. Now use scp to copy that installation file to your AWS environment.  Note that scp has similar syntax to the ssh and cp commands; Google it and you will find many resources for this command.  A sample command follows this list.
  3. Finally, read the documentation on installation: https://docs.intersystems.com/iris20221/csp/docbook/DocBook.UI.Page.cls?... https://docs.intersystems.com/iris20221/csp/docbook/DocBook.UI.Page.cls?...
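
For example, an scp command might look like this (key file, kit file, and host are placeholders for your own values):

scp -i <your-key>.pem <iris-install-kit>.tar.gz ubuntu@<your-aws-host>:~/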

If you get stuck on any issues, come back here or call directly into the Worldwide Response Center: https://www.intersystems.com/support.

Finally, if you DON'T have an existing environment you can look at InterSystems Cloud Manager to provision and deploy IRIS for Health to a cloud environment.

https://docs.intersystems.com/iris20221/csp/docbook/DocBook.UI.Page.cls?...

Much of the documentation refers to deployment with containers.  However, there is a containerless option too, which is covered in one of the subsections off this main page.

I hope this helped you get started.

Daniel,

OK, so the data is basically a parsed report, and each daily run replaces the previous run with no delta or net-change processing.  So, on error, they could just rerun the process to get the data.

One suggestion on the alerting: use a distribution group as the address to send to, and make sure at least one of the business consumers of the data is on that list.  Just alerting a technical resource could lead the business side to use incomplete data, as they don't know that something went wrong.  Relying on the technical resource to pass on this information can be risky.  Not a dig at the customer, just reality.

So, back to your question, I would still lean towards a solution that separates the loading of the data from the export to MS SQL Server.  Since you want to avoid the use of messages for the data, I would still lean towards a global mapped to IRISTEMP; note, NOT a process-private global, as the business hosts will be in separate processes.  This is a repetitive process, so the fact that the database will grow should not be an issue; it will reach the size necessary to handle the load and stay there.  You can, if needed, programmatically handle the mapping using the Config.MapGlobals API ( https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?&LIBRARY=%25SYS&CLASSNAME=Config.MapGlobals).
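
As a rough sketch (namespace and global names are placeholders; the mapping API is called from the %SYS namespace), creating such a mapping in code could look like this:

New $Namespace
Set $Namespace = "%SYS"
// Map the global ^StagingData in the MYAPP namespace to the IRISTEMP database
Set props("Database") = "IRISTEMP"
Set sc = ##class(Config.MapGlobals).Create("MYAPP", "StagingData", .props)
If $System.Status.IsError(sc) Write $System.Status.GetErrorText(sc),!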

So this would look something like this:

  1. A Business Service accepts and processes the data into a non-journaled global on a schedule.  When the data has been consumed, a message is generated and sent to the Business Operation to trigger the next phase.
  2. A Business Operation that writes to MS SQL Server reacts to receiving a message to process, which includes how to identify the dataset to load.  It consumes the data out of the global and writes it to SQL Server.  Once complete, the data in the global for this run is purged.

Anyway, that is my two cents on this.  I am sure others will have different ideas.

Here is a quick example program I wrote a couple of months ago.  It uses JDBC and the JayDeBeApi library others have mentioned.  Note that the credentials import provides a set of login credentials in the following format.

LocalCreds = {"user":"SuperUser", "password":"SYS"}

Here is the code:

import jaydebeapi
import credentials


def get_database_connection(inpDBInstance):
    IRIS_JARFILE = "/home/ritaylor/InterSystemsJDBC/intersystems-jdbc-3.2.0.jar"
    IRIS_DRIVER = "com.intersystems.jdbc.IRISDriver"
    AA_JARFILE = "/home/ritaylor/Downloads/AtScale/hive-jdbc-uber-2.6.5.0-292.jar"
    AA_DRIVER = "org.apache.hive.jdbc.HiveDriver"
    # Note: connecting to two data sources in the same program requires that you
    # reference the paths to BOTH jar files in a list in every call. Otherwise the
    # second connection attempt will fail.  It appears that the paths only get
    # added once within a process.
    JDBC_JARFILES = [IRIS_JARFILE, AA_JARFILE]
    # Database settings - these should be in a config file somewhere
    if inpDBInstance == "local":
        dbConn = jaydebeapi.connect(IRIS_DRIVER,
                                    "jdbc:IRIS://18.119.2.28:1972/USER",
                                    credentials.LocalCreds,
                                    JDBC_JARFILES)
    else:
        dbConn = None
    return dbConn


def run_database_query(inpQuery, inpDBInstance):
    # Returns the full result set as a list of rows, or None if no connection was made
    resultSet = None
    dbConn = get_database_connection(inpDBInstance)
    if dbConn is not None:
        cursor = dbConn.cursor()
        cursor.execute(inpQuery)
        resultSet = cursor.fetchall()
        cursor.close()
        dbConn.close()
    return resultSet


def print_db_result_set(resultSet):
    if resultSet is not None:
        for row in resultSet:
            print(row)
    else:
        print("Input result set is empty")

Daniel,

One thing I always recommend is to get familiar with the API using a standalone tool before attempting to code the programmatic interface.  I like Postman, as it has a pretty good UI to work with.  If you are at the command line, you can use curl.

https://www.postman.com/

Postman can also give you the code for different programming environments.  Unfortunately not ObjectScript, but it is fairly easy to translate from the examples you can see.

Marcos,

I was going through some unanswered questions and came across yours.  If you could share your spec or DM it to me, I can take a look.

However, understand that the %Stream.Object probably contains the JSON payload that you need.  As such, you can get your dynamic object with the following command:

set dyObject = {}.%FromJSON(body)

Hope that helps.

One way to be sure that the request is or is not reaching IRIS is to go into the IRIS Web Gateway on the web server and use the trace utility to see whether the request is coming in and what it looks like.  Turn on the trace, make a request, and then come back and turn off the trace.  Also, the default on the Web Gateway server definition is to use gzip compression, which will make the body unreadable; you can temporarily turn this off while you run this test.

Hopefully you are doing this in a development environment so this will have no impact on production.

UPDATE:  One other thing is to check the audit logs for any security issues that you may be hitting.  This does not sound like the issue for you, but it's worth checking.

You should understand that while InterSystems employees are on the community, this is really a public forum and not an "official" support path.  Questions are answered by the community at large when they can.  For people in the forum to help, more information is needed.  You indicated you are working with HealthShare; however, this is really a family of solutions.  Which specific product are you referring to?  What part of that product are you trying to understand better?  The more specific you can be, the easier it is for the community to help.

If you have an immediate need, I would suggest that you contact the Worldwide Response Center (referred to as the WRC) for support.  Here is the contact information:

Phone:
+1-617-621-0700
+44 (0) 844 854 2917
0800615658 (NZ Toll Free)
1800 628 181 (Aus Toll Free)

Email:
support@intersystems.com

Finally, learning services (learning.intersystems.com) and documentation (docs.intersystems.com) can be of great help.  For HealthShare-specific areas you do need to be a registered HealthShare user.  If you are not, work with your organization and the WRC to get that updated.

Is it expected that this will be a single socket connection that is continually available for bi-directional communications?  I ask because my initial thought was that we have two completely separate interfaces here.  On the remote side (IP) there is a listener on the indicated port number.  You will be connecting to this IP+port to send your ADT.

A completely separate communication is initiated by the remote system to YOUR IP address, where you would have a listener on the same port.  This would be limited to accepting communications only from the remote IP.  The remote system would send the A19 over this connection.  If this is the case, then you can simply use our built-in HL7 TCP operation and service to accomplish this.

If this is truly bi-directional communication over the same open TCP connection, then @Jeffrey Drumm is correct: they would need to provide the custom protocol they use to manage the communications.

You could try %SYS.MONLBL.  This is intended as a performance trace, but it MAY work for this purpose too.  You would have your application running in one session and get the process ID for that session.  Open a new session and run %SYS.MONLBL, choosing the options to monitor ALL routines and the specific process ID (PID) where you are running your application.  Go back to the application session and perform the function you want to trace.  Immediately go back to %SYS.MONLBL and generate the report before doing anything else in the application.
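
Roughly, the terminal steps look like this:

; In the application session, note its process ID
Write $JOB

; In a second terminal session, start the line-by-line monitor and follow the
; prompts to select all routines and the PID noted above
Do ^%SYS.MONLBL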

NOTE: This might not work with deployed code, and even if it does, it will likely not provide any details of the deployed routines.  Hopefully it will at least show an entry for the deployed routine so you can see what is being called.

Good luck

https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls...

I have not tried this, but if you enable LDAP security and select it for that Web Application, then passing a username and password may work.  You would set up LDAP per the documentation for Caché.

As I said, I am not sure that this would work in this context.  An alternative is to use Delegated Authentication.  Here is a link to a Global Summit presentation on dealing with LDAP in the Delegated Authentication zAuthenticate routine: https://community.intersystems.com/post/global-summit-2016-ldap-beyond-s...

This is a bit old and focused on dealing with custom LDAP schemas, but it will help you understand how to work with LDAP in code.

Please expand a bit on what you are attempting to do.  Are you trying to execute queries FROM IRIS against the HP NonStop SQL environment?  Or are you trying to execute NonStop SQL syntax against IRIS?  If it is the latter, understand that IRIS implements ANSI-standard SQL with some IRIS-specific extensions; NonStop SQL syntax may not work entirely against IRIS, as that environment has its own extensions and syntax, which may not be standard.

It would help if you could post the SQL you are attempting to run along with any error messages you are receiving.