Please expand a bit on what you are attempting to do. Are you trying to execute queries FROM IRIS against the HP NonStop SQL environment? Or are you trying to execute NonStop SQL syntax against IRIS? If it is the latter, understand that IRIS implements ANSI-standard SQL with some IRIS-specific extensions. NonStop SQL syntax may not work entirely against IRIS, as that environment has its own extensions and syntax which may not be standard.

It would help if you could post the SQL you are attempting to run along with any error messages you are receiving.

Let me elaborate a bit more on Dmitry's suggestion. IRIS for Health has a full FHIR server capability built in. Rather than implementing the API yourself and having to keep up with changing FHIR versions, you could use that. Where the data comes from is a separate issue. For this you can use the interoperability capabilities of IRIS to reach out to your external systems and supply the data needed to complete the FHIR request. This stays with your use case of IRIS as an ESB to the rest of your environment. You can still use InterSystems API Manager to provide access to the service and manage that interface.
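
For illustration only, here is a skeleton of the kind of business operation the FHIR server's interoperability layer could hand requests off to. The class, method, and message names are placeholders of mine, not a prescribed pattern, and the adapter call is only a sketch; check the EnsLib.HTTP.OutboundAdapter class reference for the exact signatures in your version.

```
Class Demo.ExternalData.Operation Extends Ens.BusinessOperation
{

Parameter ADAPTER = "EnsLib.HTTP.OutboundAdapter";

Property Adapter As EnsLib.HTTP.OutboundAdapter;

/// Hypothetical handler: fetch whatever the FHIR request needs from an
/// external system over HTTP. Request/response classes are placeholders.
Method FetchExternalData(pRequest As Ens.Request, Output pResponse As Ens.Response) As %Status
{
    Set pResponse = ##class(Ens.Response).%New()
    // Call the external system via the configured outbound adapter
    Set tSC = ..Adapter.Get(.tHttpResponse, "")
    Quit tSC
}

XData MessageMap
{
<MapItems>
  <MapItem MessageType="Ens.Request">
    <Method>FetchExternalData</Method>
  </MapItem>
</MapItems>
}

}
```

How a FHIR request gets routed to an operation like this depends on how you customize the FHIR server's request processing; the documentation and your sales engineer can walk you through that part.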

Alexey,

I feel that this would be counterproductive. Let me explain why. There is a fundamental difference in purpose between journaling and auditing. Journals protect against data loss, and the developers are in a position to determine whether or not a particular update to the database is important to the integrity of the system. Auditing is there to help protect the security of the data. Giving a developer the opportunity to turn off an auditing event deemed important to capture rather defeats that purpose.

It might be worth looking into what this external program is. Perhaps there is a native API that would accomplish this. You could also take a look at our gateways to see if you could bring this external functionality in for direct use in Cache.

I'd also look at our IRIS product to see if a migration to that platform would provide the needed functionality or a better pathway to utilizing the external program.

Finally, look at why this external program is called so often.  Perhaps the calls can be optimized to reduce the audit events if this is a major issue.
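
For example, if the external program tends to be called repeatedly with the same inputs, a simple cache can cut both the call volume and the audit volume. A rough sketch, where CallExternalProgram is a placeholder for however you invoke the external program today:

```
/// Hypothetical wrapper around the external call: reuse a result we have
/// already computed in this process instead of calling out again.
ClassMethod CachedExternalCall(input As %String) As %String
{
    // ^||ExtCache is a process-private global; use a regular global if
    // results can safely be shared across processes
    If $Data(^||ExtCache(input)) Quit ^||ExtCache(input)
    Set result = ..CallExternalProgram(input)  // placeholder for the real external call
    Set ^||ExtCache(input) = result
    Quit result
}
```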

Weird, I don't see a log. That message pretty definitively says we have a license issue. I had based my earlier response on the fact that he seemed to be able to get some jobs working, which would imply that the instance was running. That wouldn't happen if a license limit had been exceeded on startup. As the message indicates, the instance just shuts down.

Mohana, have you been trying this in different environments?

To echo Erik,  please let us know how you are making out!

The Community edition uses a core-based license. It appears that your instance is running successfully and that some routines do execute, so I do not believe this is a license issue. If you had exceeded the number of allowed cores, the instance would not start.

I would look at the routines that are not executing successfully in the background. It is possible that they are using Cache syntax that is no longer supported or has changed names. Try executing these routines in the foreground instead of as a background job and verify that you get the results you expect. If that works, try jobbing them off from the terminal session to see if they will run in the background at all.
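
For example, from a terminal session in the appropriate namespace (^MyRoutine is a stand-in for one of your routines):

```
; run it in the foreground first and confirm the output
do ^MyRoutine

; if that works, job it off and note the child process ID
job ^MyRoutine
write "background PID: ", $zchild, !
```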

I would also examine the log files to see if any errors are captured from the background execution.

The best way to approach this would be to engage with your sales engineer. We are always available to help evaluate use cases for our technology and to assist you in understanding the implementation of such interfaces.

You can additionally begin your investigation with our documentation and learning resources. Here are a couple of links to get you started.

Enabling Productions to Use Managed File Transfer Services

First Look: Managed File Transfer (MFT) with Interoperability Productions

Managed File Transfer video

@John Murray 
Interesting.  I had not seen this update as I am fairly sure the earlier versions didn't allow that functionality.  That is an interesting option.  Though, I think many will not make the effort to configure this way without a clear need to do so.  I will definitely try playing around with it.

Scott,

One thing to keep in mind is that all code you create or edit in VSCode is stored locally on your development machine as well as on the server (when saved and compiled). There is no need to export the code as it is already "exported", just not packaged up into a single file.

As to the question of how to get this project into production: the "proper" way is to have source control enabled with a proper development workflow (DevOps / continuous integration), such that you would just promote the work to the production stage. The implementation of your workflow should take care of moving the artifacts of your development into production.

Given the way you present the question, I am going to assume that you don't have source control or a development workflow in place. So, to take all the code you have carefully developed and tested in the project and move it to production, you can take one of two approaches. Keep in mind that this will only get the code that you have changed, and not other artifacts like configuration globals or settings.

  1. (Safest, and I use the word "safe" very loosely here; refer back to my "proper" comment above.) Take the folder/directory where your project currently lives and copy it to a new location. Edit the connection settings, then "import and compile" the code. Do this by right-clicking the folder where all your code lives ('src') and selecting "import and compile" (a terminal-based alternative is sketched after this list).
  2. Same as #1, except you edit the connection settings in the same folder where you did development. THIS IS DANGEROUS if you forget and start doing more work.
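
As a terminal-based alternative to the right-click menu, something along these lines should import and compile everything under the copied folder. The path is a placeholder, and you should confirm the $system.OBJ.ImportDir arguments against the class reference for your version:

```
; run this in a terminal on the target instance, in the correct namespace;
; the directory below is a placeholder for wherever the copied 'src' folder lives
set dir = "/path/to/copied/src"

; "ck" = compile and keep source; the final 1 recurses into subdirectories
do $system.OBJ.ImportDir(dir, "*.cls;*.mac;*.inc", "ck", .errors, 1)

; errors is populated if anything failed to load or compile
if $get(errors) zwrite errors
```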

A PROPER SOURCE CONTROL AND WORKFLOW process is really the better way to go. It will take a little effort to configure for your desired flow. Again, I am assuming that you are not using Docker containers, so automating the process will be a little more involved. Tools like Chef and Puppet will help; you will need to research what works best for you. As I said, this will take some effort to set up, but in the end it will pay off in time saved and consistency of process.

Take a look at this article series on the community, which may help:

https://community.intersystems.com/post/continuous-delivery-your-intersy...