Question
· Aug 8, 2022
FHIR Package Loading Sadness

I'm playing whack-a-mole importing an IG through FHIR packages (a fairly typical exercise), and I've hit some store errors at a few points that I can't seem to work around...

I'm getting a MAXSTRING error on `hl7.terminology.r4`:


I upgraded IRIS/Connect to 2022.1 and /api/atelier no longer works through the Web/CSP Gateway. I also upgraded the Web/CSP Gateway to WebGateway-2022.1.0.152.0 on Ubuntu, with HTTPD Server version Apache/2.4.29 (Ubuntu), with no luck either.

It doesn't seem to matter whether I add /api/atelier or /api/monitor to the enabled applications list; those routes never make it back to the instance, while /csp and /csp/sys still do.


We have an implementation with many users and many namespaces, both of which are added and removed frequently, and the users have restrictive permissions in those namespaces (let's just say, not %All)... and the users are using the VS Code extension for development.

Per the instructions and the user experience, we need to run:

GRANT EXECUTE ON %Library.RoutineMgr_StudioOpenDialog TO ${user}

For ... each Namespace, and additionally %SYS for Web Apps.
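
A minimal sketch (not from the original post) of scripting that grant across every namespace: the user name is illustrative, and it assumes the code runs with sufficient privileges to switch namespaces and issue the GRANT.

Set tUser = "APPDEV"                                 // illustrative user name
Set tOrigNS = $Namespace
Do ##class(%SYS.Namespace).ListAll(.tNamespaces)     // enumerate all namespaces
Set tNS = ""
For {
    Set tNS = $Order(tNamespaces(tNS))  Quit:tNS=""
    Set $Namespace = tNS
    // issue the grant in this namespace via dynamic SQL
    Set tResult = ##class(%SQL.Statement).%ExecDirect(, "GRANT EXECUTE ON %Library.RoutineMgr_StudioOpenDialog TO "_tUser)
    If tResult.%SQLCODE < 0 Write tNS, ": SQLCODE ", tResult.%SQLCODE, !
}
Set $Namespace = tOrigNS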


If anybody could give me some insight into creating the %All namespace programmatically, I would appreciate it. There are quite a few posts I found that reference creating it through the UI, but I can't seem to get past some validations with any form of the below:
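
The snippet referenced above isn't included in this excerpt. For reference, here is a minimal sketch of the usual programmatic route (the Config.Namespaces API, run in %SYS); the backing database name is an assumption, and the leading % in the namespace name may be exactly what the validation objects to.

Set $Namespace = "%SYS"
// illustrative: back the new namespace with an existing database named USER
Set tProps("Globals") = "USER"
Set tProps("Routines") = "USER"
Set tSC = ##class(Config.Namespaces).Create("%All", .tProps)
If 'tSC Write $System.Status.GetErrorText(tSC), !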


Deploying InterSystems HealthShare code, supporting lookups, and artifacts like SSL certs and keys is relatively straightforward using GitLab Runners. Not only does this approach enable managing the code base and deploying with Git-style workflows, it also lends itself to speedy recovery and repeatable environments for some implementations.


We are ridiculously good at mastering data. The data is clean, multi-sourced, and related, and we publish it with levels of decay low enough to guarantee the data is current. We chose the HL7 Reference Information Model (RIM) to land the data and enable exchange of the data through Fast Healthcare Interoperability Resources (FHIR®).

Article
· Dec 7, 2020 6m read
IRIS Python Native API in AWS Lambda

If you are looking for a slick way to integrate your IRIS solution into the Amazon Web Services ecosystem, a serverless application, or a boto3-powered Python script, the IRIS Python Native API could be the way to go. You won't get far into a production implementation before you need to reach out and get or set something in IRIS to make your application do its awesome sauce, so hopefully you will find value in this article and build something that matters, or that matters to nobody but you, which is equally important.

Question
· Dec 12, 2019
ComplexMap Riddle

I'm usually pretty good with ComplexMaps and have implemented a couple, but this one is stumping me. My problem is that I have no real "leading data" to key off of and need something else...

It goes a little bit like this:

D123456 THING1 THING2 THING3 THING4

D789101 THING1 THING2 THING3 THING4


Loading your IRIS data into your Google Cloud BigQuery data warehouse and keeping it current can be a hassle with bulky commercial third-party off-the-shelf ETL platforms, but it is made dead simple with the iris2bq utility.

Let's say IRIS is contributing to the workload for a hospital system: routing DICOM images, ingesting HL7 messages, posting FHIR resources, or pushing CCDAs to the next provider in a transition of care. Natively, IRIS persists these objects at various stages of the pipeline through the nature of the business processes and anything you included along the way. Let's send that up to Google BigQuery to augment and complement the rest of our data warehouse data, and ETL (Extract, Transform, Load) or ELT (Extract, Load, Transform) to our hearts' desire.

A reference architecture diagram may be worth a thousand words, but 3 bullet points may work out a little bit better:

  • It exports the data from IRIS into DataFrames
  • It saves them into GCS as .avro to keep the schema along with the data: this avoids having to specify/create the BigQuery table schema beforehand.
  • It starts BigQuery jobs to import those .avro into the respective BigQuery tables you specify.


Hello,

I am pretty stuck here and would appreciate any help or advice on an approach to this...

I have ingested a single claim file that has 7 claims inside of it. I am pulling each claim out based on a qualifier, and then I want to remove all of the others and do something with the one that is left over.

My problem is I can't seem to figure out how to remove the claims programmatically...


Hello,

I am running a DTL transform through COS that transforms an EDI document to JSON, and I would like to validate the document and throw an error if it does not validate or build a map.

I am looking at available methods and haven't been able to find one that sort of does: Set tSC = ##class(*).Validate(tDoc,"HIPAA_5100").

Any help here would be appreciated; it seems like I am missing something simple.

Question
· Mar 12, 2018
Compilation Status after Import

Is there a way to enumerate the compilation status of a package?

Currently, after deployment, we are doing something like this to validate a successful load and compile of classes:

successful_compilation_count=`grep -a "Compilation finished successfully" output.log | wc -l`
successful_load_count=`grep -a "Load finished successfully" output.log | wc -l`

Is there a way to do this that is a little more elegant/dynamic, without having to maintain counts for comparison?
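
One hedged alternative, not from the original question: lean on the %Status returned by $System.OBJ.Load and on the %Dictionary projections for the package, rather than counting log lines. The file and package names below are illustrative.

Set tSC = $System.OBJ.Load("/tmp/export.xml", "ck", .tErrors, .tLoaded)
If 'tSC Write "Load/compile failed: ", $System.Status.GetErrorText(tSC), !
// compare defined vs compiled class counts for the package
Set tDef = ##class(%SQL.Statement).%ExecDirect(, "SELECT COUNT(*) AS C FROM %Dictionary.ClassDefinition WHERE Name %STARTSWITH ?", "My.Package.")
Do tDef.%Next()
Set tCmp = ##class(%SQL.Statement).%ExecDirect(, "SELECT COUNT(*) AS C FROM %Dictionary.CompiledClass WHERE Name %STARTSWITH ?", "My.Package.")
Do tCmp.%Next()
Write "Defined: ", tDef.%Get("C"), "  Compiled: ", tCmp.%Get("C"), !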
