Rich Taylor · Mar 12, 2021

Alexey,

I feel that this would be counterproductive.  Let me explain why.  There is a fundamental difference in purpose between journaling and auditing.  Journals protect against data loss.  The developers are in a position to determine whether or not a particular update to the database is important to the integrity of the system.  Auditing is there to help protect the security of the data.  Giving a developer the opportunity to turn off an auditing event deemed important to capture rather defeats that purpose.

It might be worth looking into what this external program is.  Perhaps there is a native API that would accomplish this.  You could also take a look at our gateways to see if you could bring this external functionality in to use directly in Cache.

I'd also look at our IRIS product to see if a migration to that platform would provide the needed functionality or a better pathway to utilizing the external program.

Finally, look at why this external program is called so often.  Perhaps the calls can be optimized to reduce the audit events if this is a major issue.

Rich Taylor · Feb 19, 2021

Weird, I don't see a log.  That message pretty definitively says we have a license issue.  I had based my earlier response on the fact that he seemed to be able to get some jobs working, which would imply that the instance was running.  That wouldn't happen if a license limitation was exceeded on startup.  As the message indicates, the instance just shuts down.

Mohana, have you been trying this in different environments?

To echo Erik,  please let us know how you are making out!

Rich Taylor · Feb 18, 2021

The Community edition uses a core-based license.  It appears that your instance is running successfully and that some routines do execute.  Therefore I do not believe that this is a license issue.  If you had exceeded the number of allowed cores then the instance would not start.

I would look at the routines that are not executing successfully in the background.  It is possible that they are using Cache syntax that is no longer supported or has changed names.   Try executing these routines in the foreground instead of as a background job.  Verify that you get the results you expect.  If that works, try jobbing them off from the terminal session to see if they will run in the background at all.
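A minimal sketch of that test from a terminal session (the routine name is an assumed example):

// run it in the foreground first and verify the output
Do ^MyBackgroundTask

// if that works, job it off from the same terminal and note the PID
Job ^MyBackgroundTask
Write "Jobbed as PID: ", $ZCHILD, !

You can then look for that PID in the process list in the Management Portal to see whether the job is still running.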

 I would also examine the log files to see if you are getting any errors that are captured from the background execution.

Rich Taylor · Jan 6, 2021

The best way to approach this would be to engage with your sales engineer.  We are always available to help evaluate use cases for our technology and to assist you in understanding the implementation of such interfaces.

You can additionally begin your investigation with our documentation and learning resources.  Here are a couple of links to get you started.

Enabling Productions to Use Managed File Transfer Services

First Look: Managed File Transfer (MFT) with Interoperability Productions

Managed File Transfer video

Rich Taylor · Dec 11, 2020

@John Murray 
Interesting.  I had not seen this update; I am fairly sure the earlier versions didn't allow that functionality.  Though I think many will not make the effort to configure things this way without a clear need to do so, it is an interesting option and I will definitely try playing around with it.

Rich Taylor · Dec 11, 2020

Scott,

One thing to keep in mind is that all code that you create or edit in VSCode is stored locally on your development machine as well as on the server (when saved and compiled).  There is no need to export the code as it is already "exported", just not packaged up into a single file.

To the question of how to get this project into production: the "proper" way is to have source control enabled with a proper development workflow (DevOps / continuous integration) such that you would just promote the work to the production stage.   The implementation of your workflow should take care of moving the artifacts of your development into production.

Given the way you present the question I am going to assume that you don't have source control or a development workflow in place.  So to take all the code you have carefully developed and tested in the project and move it to production, you can take one of two approaches.  Keep in mind that either will only move the code that you have changed, and not other artifacts like configuration globals or settings.

  1. The safest (and I use the word "safe" very loosely here; refer back to my "proper" comment above) is to copy the folder/directory where your project currently lives to a new location.  Edit the connection settings there, then import and compile the code by right-clicking on the folder where all your code lives ('src') and selecting "Import and Compile" (a terminal alternative is sketched just after this list).
  2. The same as #1, except that you edit the connection settings in the same folder where you did the development.  THIS IS DANGEROUS if you forget and start doing more work.
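For either approach, the import and compile can also be done from a terminal session in the target namespace rather than through the right-click menu.  A minimal sketch, with an assumed path:

// import and compile every class, routine, and include file found under
// the directory, recursing into subdirectories ("ck" = compile and keep source)
Do $System.OBJ.ImportDir("/path/to/copied/src", "*.cls;*.mac;*.inc", "ck", .errors, 1)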

A PROPER SOURCE CONTROL AND WORKFLOW process is really the better way to go.  It will take a little effort to configure for your desired flow.  Again, I am making the assumption that you are not using Docker containers, so automating the process will be a little more involved.  Tools like Chef and Puppet will help; you will need to research what would work best for you.  As I said, this will take some effort to set up, but in the end it will save you time and give you consistency of process.

Take a look at this article series on the community which may help:  

https://community.intersystems.com/post/continuous-delivery-your-inters…

Rich Taylor · Nov 18, 2020

Then that wonderful Ian Fleming intro gets reduced down to "vodka martini, shaken not stirred"

Rich Taylor · Jun 4, 2020

Great write-up!  Thanks, saved me lots of googling and trying methods that don't really work :)

Rich Taylor · Jun 1, 2020

@Armin Gayl   The answer depends a lot on background.  If you are a team experienced with Cache then you are probably already comfortable with Studio.  Studio is stable and works well with InterSystems products.  If you work primarily in a Windows environment, mostly with Cache/IRIS classes and ObjectScript, staying with this IDE is fine.

On the other hand if you:

  1. work across platforms (windows, linux, mac) OR
  2. want to easily integrate source control OR
  3. need to program in multiple languages and/or different components (docker, angular cli, ...) OR
  4. need to attract new talent

If any of the above is true I would recommend VSCode with the Objectscript plug-in.  Why?

  • multi-platform (I work with a Linux desktop so this was important to me)
  • easy source control integrations
  • much more resource friendly than Eclipse, which I found to be a resource hog
  • plug-ins just seem to work better.  For example, the Docker plugin(s) on Eclipse were a disaster; the one I have on VSCode is great
  • well accepted in the market

Let me delve into point 4 above and the last bullet point, as I think these get overlooked or considered unimportant to the IDE question because "they have to learn a new language (ObjectScript) anyway".   My opinion is that the issue of learning a new language is really overstated.   Any programmer today can't even get out of bed without knowing several development languages.   The real problem is HOW you work with those languages; in other words, the IDE.    If you can bring in someone who is already familiar with how to work with the IDE and the general workflow, which is similar regardless of language, then you remove one barrier to entry.  Now it becomes learning just another scripting language, which is a process that new developers are used to.

VSCode is quite popular and is trending up.  I participated in a hackathon with InterSystems several months ago.  In the end, 70 teams submitted projects; figure an average of 3 people per team as a rough estimate.  Every single developer that we interacted with was using VSCode.    This, along with some great templates from @Evgeny Shvarov, made it easy for them to get working with IRIS.  In fact something like 12 teams used IRIS, including the second-place team (the first-place team's solution was related to a process where they were forbidden by law from storing any data).

So that's my $0.02.  I would stay with Studio if that is your comfort zone and you work primarily within InterSystems technology.  If not, go with Visual Studio Code.  I would not consider Eclipse, for many of the reasons others have stated and because it is really resource heavy.

Rich Taylor · Apr 21, 2020

Kevin,

The best option is to work with IRIS for Health Community Edition, which is free for development and education.  You can get this from Docker as a container you can run on your system, or from AWS, Azure, or GCP if you want to work in the cloud.  AWS, at least, has a free tier that is good for 750 hours a month for up to a year.  This is more than adequate for education and simple development.  I have used it for demos for some time.

https://hub.docker.com/_/intersystems-iris-for-health
https://aws.amazon.com/marketplace/pp/B07N87JLMW?qid=1587469562959&sr=0-3&ref_=srh_res_product_title
https://azuremarketplace.microsoft.com/en-us/marketplace/apps/intersystems.intersystems-iris-health-community?tab=Overview
https://console.cloud.google.com/marketplace/details/intersystems-launcher/intersystems-iris-health-community-edition?filter=category:database&filter=price:free&id=31edacf5-553a-4762-9efc-6a4272c5a13c&pli=1
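To get started locally, running the container is a one-liner.  A minimal sketch (the image name and tag are assumed examples; check the Docker Hub page above for the current ones):

# pull and run IRIS for Health Community Edition, publishing the web port
docker run --name iris4h -d -p 52773:52773 store/intersystems/irishealth-community:2020.1.0.199.0

Once it is up, the Management Portal is at http://localhost:52773/csp/sys/UtilHome.csp.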
 

If you follow the link in the top bar for 'Learning' you will find many education resources, including some quick start topics on IRIS.  And, of course, you can ask questions here.

Rich Taylor · Apr 16, 2020

No problem.  The newer JSON handling is quite flexible (dynamic objects).  When you move to IRIS you also get the %JSON.Adaptor class.  Add that to the inheritance of any class and you extend that flexibility to whole object structures.
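A minimal sketch of what that looks like (class and property names are assumed examples):

Class Demo.Person Extends (%Persistent, %JSON.Adaptor)
{

Property Name As %String;

Property State As %String;

}

// export an instance to JSON...
Set person = ##class(Demo.Person).%New()
Set person.Name = "RICH", person.State = "MA"
Do person.%JSONExportToString(.json)
Write json                        // {"Name":"RICH","State":"MA"}

// ...and import JSON back into an object
Set person2 = ##class(Demo.Person).%New()
Do person2.%JSONImport(json)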

Rich Taylor · Feb 21, 2020

Damiano,

Keep in mind that Studio is VERY namespace centric.  A single running instance of CStudio can only talk to a single namespace at a time.  Even running multiple copies of CStudio can run into issues related to this and to how projects track their information.

As Dmitriy Maslennikov has indicated, you can look at Visual Studio Code with the VSCode-ObjectScript plug-in as long as you are on Cache 2016.2+ or IRIS.  You can also use the Atelier plugin for Eclipse (Photon version only), which has much the same capabilities.

One last thought: why do you have two namespaces?  If this is just to separate the code from the application data then you really don't need two; you can configure a single namespace to reference the two databases, one for data and one for code.  I would review the namespace documentation to be sure you are on the right track.

https://cedocs.intersystems.com/ens201813/csp/docbook/DocBook.UI.Page.cls?KEY=GSA_config#GSA_config_namespace
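If you do restructure, a %Installer manifest is one way to script that configuration.  A minimal sketch defining a single namespace backed by separate code and data databases (all names and paths are assumed examples):

Class App.Installer
{

XData Install [ XMLNamespace = INSTALLER ]
{
<Manifest>
  <!-- one namespace, two databases: routines/classes in one, globals in the other -->
  <Namespace Name="MYAPP" Create="yes" Code="MYAPP-CODE" Data="MYAPP-DATA">
    <Configuration>
      <Database Name="MYAPP-CODE" Dir="/db/myapp-code" Create="yes"/>
      <Database Name="MYAPP-DATA" Dir="/db/myapp-data" Create="yes"/>
    </Configuration>
  </Namespace>
</Manifest>
}

ClassMethod setup(ByRef pVars, pLogLevel As %Integer = 3, pInstaller As %Installer.Installer) As %Status [ CodeMode = objectgenerator, Internal ]
{
  Quit ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "Install")
}

}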
 

I would also encourage you to engage with your sales engineer to review your architecture and development direction.

Rich Taylor · Feb 20, 2020

I don't think there is any option to do this.  By executing this again from the command line you are indicating that you want to open CStudio.

Why not just use File (menu) -> Open in CStudio and navigate to the class you want?

Rich Taylor · Aug 5, 2019

If you used package mapping you may have forgotten to map the global too.  Examine the class definition to find the global name to map.  If you map the class, but not the global, you get the code of the class but the storage remains local.  This allows the sharing of definitions across namespaces without sharing the data.  Add the global mapping to share the data too.
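The global names live in the storage definition at the bottom of the class.  A sketch of where to look (class and global names are assumed examples); typically you would map the data, index, and stream globals:

Class MyApp.Person Extends %Persistent
{

Property Name As %String;

Storage Default
{
<DataLocation>^MyApp.PersonD</DataLocation>
<IdLocation>^MyApp.PersonD</IdLocation>
<IndexLocation>^MyApp.PersonI</IndexLocation>
<StreamLocation>^MyApp.PersonS</StreamLocation>
<Type>%Storage.Persistent</Type>
}

}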

Rich Taylor · Jul 1, 2019

Dmitry,

The method of logging to a global was primarily to match the original use case.  Having a full class for logging including indexes would be a better option.

However, my personal belief is that suspending transactions, or even worse stopping and starting journaling, is not really a good option.  Even if the actual coding does not appear to be complex, there is the potential for reach beyond the current transaction if you don't cover all the bases.  Holding the logs in a temporary global that then gets written out after the transaction ensures that the application logic is intact.  You could even encapsulate this functionality in class methods.  If there is any failure the impact would be on the logging and not on the actual logic of the transaction.

Now, if we could perform an update/insert/save that could be intentionally and deliberately excluded from the transaction, OR if the suspend were only for the current transaction so that no lasting impact was even possible, then I would feel more comfortable with that approach.

I will add that this is a matter of design and the programmer's approach.  Either method works; the decision is where you wish to carry the risk, no matter how small that risk is.  My preference is to be absolutely sure the integrity of the transaction is maintained over the logging.

Rich Taylor · Jul 1, 2019

First let me state for clarity that the life of a transaction should be short; that is, a single logical transaction to the application.  I have seen cases where an entire batch update was treated as a single transaction.  This can create many problems that are not easy to diagnose.

I also agree with Fabian that suspending transactions is a path that can lead to complexity in the code and is best avoided.  The nature of a transaction is to track EVERYTHING that happens in the transaction.  Messing with that will lead to issues with maintainability and debugging later in the life of the application.

My suggestion is to do your logging to an in-memory structure (a local array in the examples below) while in the transaction.  Then you can permanently write it out after the transaction commits or rolls back.   Two things to be sure of with this method:

  1. always explicitly commit or roll back the transaction.   Don't rely on the implied rollback that would occur when a process ends
  2. enclose your transaction in a try/catch block.  This will ensure that you have the opportunity to commit your logs in the case of a system fault.  This could be an existing try/catch in the program; however, for control and clarity I recommend a separate try/catch for the transaction.

How you commit your log updates will depend on your error handling.  In this example the logs are committed in two places, after the TCOMMIT and in the catch block, since the catch block throws the exception up the stack.  If that were not the case, a simpler approach would be to write the logs out after the catch block.

Example (just typed here for illustration; not a running program) with the exception passed up the stack:

Try {
    set ^LOG($Increment(^LOG)) = "starting a transaction"
    TSTART
    // do some program logic
    // do some logging to a local array, keyed off the same ^LOG counter
    set tLog($Increment(^LOG)) = "some application trace"

    // if application error
    throw ##class(%Exception.StatusException).CreateFromStatus(tSC)

    TCOMMIT
    // the transaction is committed; now persist the buffered log entries
    Merge ^LOG = tLog
} catch except {
    TROLLBACK
    // the rollback does not touch tLog, so the trace survives
    Merge ^LOG = tLog
    Throw except
}
 

Example with internal error handling only

Try {
    set ^LOG($Increment(^LOG)) = "starting a transaction"
    TSTART
    // do some program logic
    // do some logging to a local array, keyed off the same ^LOG counter
    set tLog($Increment(^LOG)) = "some application trace"

    // if application error
    throw ##class(%Exception.StatusException).CreateFromStatus(tSC)

    TCOMMIT
} catch except {
    TROLLBACK
}
// commit or rollback, the buffered log entries are persisted either way
Merge ^LOG = tLog
 

Rich Taylor · Jun 12, 2019

Murray,

This looks great.  However I am having a problem getting it to work with a recent pButtons report I received.  The error is as follows.  Any thoughts?

docker container run --rm -v "$(pwd)":/data yape/yape --mgstat --vmstat -c /data/dbmirror2-c_CACHE_20190131_161057_24hours.html
INFO:root:Profile run "24hours" started at 16:10:57 on Jan 31 19.
Traceback (most recent call last):
  File "/usr/local/bin/yape", line 11, in <module>
    load_entry_point('yape', 'console_scripts', 'yape')()
  File "/src/yape/yape/command_line.py", line 6, in main
    yape.yape2()
  File "/src/yape/yape/main.py", line 248, in yape2
    parsepbuttons(args.pButtons_file_name, db)
  File "/src/yape/yape/parsepbuttons.py", line 382, in parsepbuttons
    StartDateStr = datetime.strptime(StartDateStr, "%b %d %Y").strftime(
  File "/usr/local/lib/python3.7/_strptime.py", line 577, in _strptime_datetime
    tt, fraction, gmtoff_fraction = _strptime(data_string, format)
  File "/usr/local/lib/python3.7/_strptime.py", line 359, in _strptime
    (data_string, format))
ValueError: time data 'Jan 31 19' does not match format '%b %d %Y'
 

Rich Taylor · Jun 5, 2019

If that is the case you can use BPL to create a business process with greater control over the orchestration.  You would set it up so that it sends to each end point synchronously, so that the first send has to complete before the second begins.  Of course the receiving side, with separate end points, would then have to be sure that the messages stay in order; that is out of your control however.
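The same ordering can also be had in a custom business process without BPL; a minimal sketch (class and endpoint names are assumed examples):

Class Demo.OrderedForwarder Extends Ens.BusinessProcess
{

Method OnRequest(pRequest As EnsLib.HL7.Message, Output pResponse As Ens.Response) As %Status
{
    // send to the first endpoint and wait for its reply before sending
    // to the second, so the two sends cannot overlap or reorder
    Set tSC = ..SendRequestSync("EndpointOne", pRequest, .tResponse1)
    If $$$ISERR(tSC) Quit tSC
    Quit ..SendRequestSync("EndpointTwo", pRequest, .tResponse2)
}

}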

Rich Taylor · Jun 5, 2019

Well, one thing is to be sure that the external database implements a FHIR server and that you have the necessary access information and credentials.   It has to be able to accept REST calls per the FHIR standard.  If this is not in place, all is not lost.  You can still use other methods to access the external database; the method depends on what options are provided.  You just could not use FHIR.

BTW, if I understand you correctly Health Connect would be acting as a FHIR client in this usage, not a server.

Rich Taylor · Jun 5, 2019

Edrian,

You state that Request.JSON is a simple string.  The %ToJSON() method only exists as part of the dynamic object and dynamic array classes, so I am surprised that this is not failing completely before you even send the request.  If your variable already has well-formed JSON as its value then you can just write that into the EntityBody.
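A minimal sketch of that (server and path are assumed examples):

Set req = ##class(%Net.HttpRequest).%New()
Set req.Server = "api.example.com"
Set req.ContentType = "application/json"
// jsonStr already holds well-formed JSON, so write it straight into the body
Do req.EntityBody.Write(jsonStr)
Set tSC = req.Post("/some/endpoint")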

BTW, when you deal with JSON in the future you may find an advantage in using the dynamic object handling for JSON.  For example, take the following JSON: {"NAME":"RICH","STATE":"MA"}

If this is in a variable, say jsonStr, I can load this into a Dynamic Object using the following command:

set obj = {}.%FromJSON(jsonStr)

Now you can use object references to work with the JSON

Write obj.NAME   -> displays RICH

I can add new properties dynamically

set obj.ZIPCODE = "99999"

Finally convert this back to a json string with:

write obj.%ToJSON()

which would display  {"NAME":"RICH","STATE":"MA","ZIPCODE":"99999"}

See the documentation at https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GJSON_preface

Rich Taylor · May 21, 2019

I would say that this is definitely not recommended; it violates the intent of the object design.  You are probably not privy to why this was marked private and internal, and providing a method to bypass that could create problems.  Of course, if this is only for debugging a class that you don't have the ability to edit, it may be acceptable.

Rich Taylor · Apr 9, 2019

I find the easiest way is to log into the CSP Gateway for the web server you are using.  If this is a development machine and you are using the stripped-down web server internal to Cache, you can access this from the Management Portal.  The path is System Administration -> Configuration -> CSP Gateway Management.  If you are looking to do this against the traffic on an external web server then you need the path to the Module.xxx.  On my Windows VM this is http://192.168.x.xxx/csp/bin/Systems/Module.cxw.

You will need the Web Gateway Management username and password.  The user is typically CSPSystem.  Once logged in look for the View HTTP Trace on the left hand menu.

Click on that and you will see a screen with 'Trace OFF' and 'Trace ON' at the top of the left-hand menu.  You will also see options to refresh and clear.  Below that will appear any requests that have been traced; this is probably blank at this time.  Click 'Trace ON' (it should change to red).  Now go make the request that you want to trace.  Once your request is complete, go back and turn off the trace so you don't get a bunch of requests that have nothing to do with what you want to examine.  I did this and made a request for the System Management Portal.  Here is the list I get.

Note that I see two requests.  Only one is what I want to look at, which in my case is the first.  When you select a trace you will see the request at the top followed by the response.  Note that if the body of the response is not readable you likely have gzip compression on.  Go to the application settings in the Web Gateway and turn this off to be able to see the body.  Remember to turn it back on later though.

Here are my results (truncated).  Hope this helps you.

Rich Taylor · Mar 19, 2019

John,

I thought I would add another example to this, as I have never needed to use the ccontrol wrapper solution.  I found I had to add gzip as an installed package in my docker file.  Below is a Docker image built for CentOS.  The installation is HealthShare rather than Cache, but that is just a change in the InterSystems installer referenced.   The overall process should work with values that make sense for your environment.  One very important change is that the entry point should be ccontainermain.  The reason for this is that the entry point needs to stay running to keep the container up; ccontrol will start Cache and then exit, which would shut down the container.  There is a link to where to find this program at https://community.intersystems.com/post/cache-db-docker-container

# pull from this repository
# note that if you don't have the distribution you're after it will be automatically
# downloaded from Docker central hub repository (you'll have to create a user there)
#
FROM centos:latest

# setup vars section___________________________________________________________________
#
ENV TMP_INSTALL_DIR=/tmp/distrib

# vars for Caché silent install
ENV ISC_PACKAGE_INSTANCENAME="HSPI" \
    ISC_PACKAGE_INSTALLDIR="/opt/intersystems/healthshare" \
    ISC_PACKAGE_INITIAL_SECURITY="Normal" \
    ISC_PACKAGE_CLIENT_COMPONENTS=""  \
    ISC_PACKAGE_USER_PASSWORD="xxxx" \
    ISC_PACKAGE_CSPSYSTEM_PASSWORD="xxxx" \
    ISC_PACKAGE_HSMODULES="coretech,hscore,mprl,hspi"

# distribution file________________________________________________________________
# set-up and install from distrib_tmp dir
RUN mkdir ${TMP_INSTALL_DIR} && \
    mkdir -p ${ISC_PACKAGE_INSTALLDIR}/mgr
WORKDIR ${TMP_INSTALL_DIR}

# update OS + dependencies & run silent install___________________________________
RUN yum -y update && \
    yum -y install tar gzip which java-1.8.0-openjdk

ADD HealthShare-2018.1-Exchange_Insight_Index-b7718-lnxrhx64.tar.gz .

RUN ./HealthShare-*/cinstall_silent && \
    rm -rf ${TMP_INSTALL_DIR}/* && \
    ccontrol stop $ISC_PACKAGE_INSTANCENAME quietly 
COPY cache.key $ISC_PACKAGE_INSTALLDIR/mgr/

# TCP sockets that can be accessed if user wants to (see 'docker run -p' flag)
EXPOSE 57772 1972

# container main process PID 1 (https://github.com/zrml/ccontainermain)
WORKDIR /
ADD ccontainermain .

ENTRYPOINT  ["/ccontainermain","-cconsole","-i", "HSPI"]
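To build and start it (image and container names are assumed examples):

# build the image from the directory containing the Dockerfile and key
docker build -t healthshare-hspi .

# run it, publishing the web and superserver ports exposed above
docker run -d --name hspi -p 57772:57772 -p 1972:1972 healthshare-hspi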

Rich Taylor · Mar 15, 2019

Leo,

I would go to the "Building Your First HL7 Production" learning path in the links I sent earlier.  There are several learning resources listed there.  If you are in a hurry you can skip the introduction components and go directly to the "Integration Architecture" course.  Then follow along with the other courses in order.  I would recommend at least the:

  • HL7 I/O course
  • all three courses under the message router section
  • Data Transformation Basics
  • Practice building data transformations
  • the two courses under troubleshooting would be advisable too
  • Do the Final Exercise.

You can always go back  and review other courses as needed.   Also search our Learning Services area (Learning on the top bar) for other courses and presentations.

You can contact me directly (my email is in my profile) if you want to take this offline.

Rich Taylor · Mar 13, 2019

Yes, that would be a good course.  There is a date coming up on April 8th.  In the meantime the online learning can get you started.  You should be able to accomplish quite a bit without any programming.  Where you will need to do some is in creating a message class to contain the extracted data that is to be sent to SQL Server, and in creating the actual SQL operation to update SQL Server.  Neither is difficult to do; a sketch of the message class follows.  I would definitely suggest engaging with your InterSystems Sales Engineer for guidance.
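A minimal sketch of such a message class (the name and properties are assumed examples):

Class Demo.PatientMsg Extends Ens.Request
{

Property MRN As %String;

Property PatientName As %String;

}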

Rich Taylor · Mar 13, 2019

Not to worry.  You don't need to do C# programming or deal with obscure libraries.  With Ensemble you can accomplish much of what you need to do without a lot of programming.  I would start with the "Build Your First HL7 Production" learning path to discover how to create an HL7 integration.  There is also classroom training that you can sign up for; check our Learning Services site for class schedules.

To update SQL Server you would create a business operation (which will require some coding in ObjectScript) that uses our SQL outbound adapter to update SQL Server; a sketch is below.  Here is a link to that documentation:  Using the SQL Outbound Adapter
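A minimal sketch of what such an operation can look like (class, message, and table names are assumed examples):

Class Demo.SQLServerOperation Extends Ens.BusinessOperation
{

Parameter ADAPTER = "EnsLib.SQL.OutboundAdapter";

Property Adapter As EnsLib.SQL.OutboundAdapter;

Method InsertPatient(pRequest As Demo.PatientMsg, Output pResponse As Ens.Response) As %Status
{
    // parameterized INSERT through the configured ODBC/JDBC connection
    Set tSC = ..Adapter.ExecuteUpdate(.tRowsAffected, "INSERT INTO Patients (MRN, PatientName) VALUES (?, ?)", pRequest.MRN, pRequest.PatientName)
    Quit tSC
}

XData MessageMap
{
<MapItems>
  <MapItem MessageType="Demo.PatientMsg">
    <Method>InsertPatient</Method>
  </MapItem>
</MapItems>
}

}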

Finally, if you need more direct assistance I would recommend engaging with the Sales Engineer assigned to your account.   We're always here to help get you moving in the right direction.

Rich Taylor · Mar 13, 2019

Leo,

First, can I ask which InterSystems product you are working with?  Ensemble, Health Connect, and IRIS for Health all provide tools that make the handling of HL7 much easier.  Further, these are all interoperability platforms which provide tools for building these types of integrations efficiently and quickly.

A further question is what you are intending to do with this HL7 message.  Just put the entire message file contents into SQL Server as a blob, or pull out specific data to put into columns in a SQL table?  From your further comments I believe it is the latter, but clarity would help here.

For the moment I will assume that you are using either Ensemble or Health Connect and point you at some documentation that can help you.

Ensemble HL7 Version 2 Development Guide

Also some Learning Services links

Integration Architecture

Building Your First HL7 Production learning path

Hope this helps