
OMOP Odyssey - InterSystems OMOP, The Cloud Service (Troy)




An implementer's approach to the OHDSI (pronounced "Odyssey") community through an expiring trial of the InterSystems OMOP Cloud Service.

What is it? 

The InterSystems OMOP, available as a HealthShare service through the InterSystems Cloud Services Portal, transforms HL7® FHIR® data into the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM). The service reads FHIR data stored in an S3 bucket, then automatically transforms and sends the data to the cloud-hosted repository in the OMOP CDM format. You can then use external Observational Health Data Sciences and Informatics (OHDSI) tools, such as ATLAS or HADES, in conjunction with a database driver, such as JDBC, to perform analytical tasks on your data.

Abridged: it transforms S3-hosted FHIR Bulk Export data into the OMOP CDM, landing in a cloud-hosted IRIS database or a Postgres-flavored database of your choice.

Going to take the above for a spin here, "soup to nuts" as they say, and go end to end with an implementation surrounded by modern, powerful tools and the incredible ecosystem of applications from the OHDSI Community. I'll try not to re-hash the docs, here or there, and will surface some foot guns 👣 🔫 along the way.

Everything Good Starts with a Bucket

When you first provision the service, you may feel you are in a chicken-and-egg situation when you reach the creation dialog and are prompted for S3 information right out of the gate. You can fake this as best you can and update it later, or take a less hurried approach and understand how you are provisioning an Amazon S3 bucket for transformation use. It's a modern approach implemented in most cloud-based data solutions that share data: you provision the source location yourself, then grant the service access to interact with it.

  • Provision Bucket and Initial Policy Stack
  • Create the Deployment for the Service
  • Update the Bucket Policy constrained to the Deployment

We can click the console to death, or do this with an example stack.

 
s3-omop-fhir-stack.yaml
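If you want a feel for what that stack contains before grabbing the attachment, a minimal sketch is below. It is an approximation rather than the attached template itself: the resource names are illustrative, the BucketName and PolicyfromOMOPConfiguration parameters line up with the CLI calls that follow, and the exact S3 actions the deployment role needs should come from the policy the service hands you on its Configurations page.

AWSTemplateFormatVersion: "2010-09-09"
Description: Source S3 bucket for the InterSystems OMOP FHIR-to-OMOP pipeline (illustrative sketch)

Parameters:
  BucketName:
    Type: String
  PolicyfromOMOPConfiguration:
    Type: String
    Default: ""
    Description: Deployment role ARN copied from the service's Configurations page

Conditions:
  HasDeploymentRole: !Not [!Equals [!Ref PolicyfromOMOPConfiguration, ""]]

Resources:
  OmopFhirBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Ref BucketName

  OmopFhirBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Condition: HasDeploymentRole
    Properties:
      Bucket: !Ref OmopFhirBucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              AWS: !Ref PolicyfromOMOPConfiguration
            Action:
              - s3:GetObject
              - s3:PutObject
              - s3:ListBucket
            Resource:
              - !GetAtt OmopFhirBucket.Arn
              - !Sub "${OmopFhirBucket.Arn}/*"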

Create the stack any way you want to; one way is to use the AWS CLI.

aws cloudformation create-stack --stack-name omopfhir --template-body file://s3-omop-fhir-stack.yaml --parameters ParameterKey=BucketName,ParameterValue=omop-fhir

Create some initial keys in the bucket to use for provisioning and the source folder for FHIR ingestion.

aws s3api put-object --bucket omop-fhir --key Transactions/in --profile pidtoo
aws s3api put-object --bucket omop-fhir --key termz --profile pidtoo

You should now be set up to provision the service with the following. Pay attention to the field asking for the ARN: it is actually asking for the ARN of the bucket (e.g., arn:aws:s3:::omop-fhir), despite the description asking for the name... a small 👣🔫 here.



After the deployment is created, head over to the "Configurations" navigation item inside the "FHIR to OMOP Pipeline" deployment and grab the policy by copying it to your clipboard. You can follow the directions supplied there and wedge this into your current policy, or just snag the value of the role and update your stack.

aws cloudformation update-stack --stack-name omopfhir --template-body file://s3-omop-fhir-stack.yaml --parameters ParameterKey=BucketName,UsePreviousValue=true ParameterKey=PolicyfromOMOPConfiguration,ParameterValue="arn:aws:iam::1234567890:role/skipper-deployment-4a358965ec38ba179ebeeeeeeeeeee-Role"

Either way, you should end up with a policy that looks like this on your source bucket under permissions... (account number and role fuzzed)

 
 Recommended Policy
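The fuzzed screenshot isn't reproduced here, but the resulting bucket policy is shaped roughly like the sketch below. The account number, role name, and bucket name are placeholders, and the exact action list should be whatever the Configurations page gave you.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::1234567890:role/skipper-deployment-4a358965ec38ba179ebeeeeeeeeeee-Role"
      },
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::omop-fhir",
        "arn:aws:s3:::omop-fhir/*"
      ]
    }
  ]
}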

I used a more open policy that allows the root account but constrains access to the buckets. This way I could support multiple deployments with a single bucket (or multiple buckets). Not advised, I guess, but it makes a second example for reference that supports multiple environments in a single policy for IaC purposes.

 
Root Account
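Roughly, that more open variant swaps the deployment role for the account root as the principal and lists every bucket (and environment) you want covered. Again a sketch: the account number and bucket names are placeholders.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::1234567890:root" },
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::omop-fhir",
        "arn:aws:s3:::omop-fhir/*",
        "arn:aws:s3:::omop-fhir-dev",
        "arn:aws:s3:::omop-fhir-dev/*"
      ]
    }
  ]
}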

That's our source for the transformation; now let's move on to the target, the OMOP database.


Meet OMOP

Let's take a quick look over at the other deployment, "OMOP on IRIS," and meet the Common Data Model.

The OMOP (Observational Medical Outcomes Partnership) database is a monument to boiling the ridiculous complexity of multiple sources down into a common data model, referred to as the CDM. Any further explanation outside of the community would be an exercise in cut and paste (or even worse, generative content), and the documentation in this community is really, really well done.

Navigate to the "SQL Query Tools" navigation item and you can see the InterSystems implementation of the Common Data Model, shown here next to the infamous diagram of the OMOP schema from the OHDSI community.

That's as far as we go with this work of art; let's investigate another option for using the service for transformation purposes only.

BYODB (Bring Your Own Database)

We got a database for free when we provisioned last time, but if we want to target another database we can surely do that, as the service, at the time of writing, supports transforming to flavors of Postgres as well. For this we will outline how to wrangle an external database via Amazon RDS and connect it to the service.


Compute


I'll throw a flag here and call another 👣🔫 I refer to as "Biggie Smalls," regarding sizing your database for the service if you bring your own. InterSystems does a pretty good job of sizing the transform side to the database side, so you will have to follow suit: the speed of your transforms depends on the SQL instance you procure to write to, so size it accordingly. This may be obvious to some, but I witnessed it and thought I'd call it out, as I went cheap with RDS, Google Cloud SQL, et al., and the persistence times of the FHIR Bundles to the OMOP database suffered.

Having said all that, I do exactly the opposite here and give Jeff Bezos the least amount of money possible for the task, with a db.t4g.micro Postgres RDS instance.
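For the IaC-minded, creating something comparable from the CLI looks roughly like this; the instance identifier, password, and storage size are placeholders for illustration, not values the service requires.

aws rds create-db-instance \
  --db-instance-identifier extrdp-ops \
  --db-instance-class db.t4g.micro \
  --engine postgres \
  --master-username postgres \
  --master-user-password REDACTED \
  --allocated-storage 20 \
  --publicly-accessible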

We expose it publicly and head over to download the certificate bundle for the region your database is in... make sure it's in .pem format.
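For us-east-2, where this instance lives, the regional RDS certificate bundle can be pulled down with something like the following; it comes PEM-encoded, so no conversion should be needed. The URL below assumes AWS's published RDS trust store layout, so double-check it for your region.

curl -O https://truststore.pki.rds.amazonaws.com/us-east-2/us-east-2-bundle.pem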

Next, however you interact with databases these days, connect to your db instance and create a DATABASE and SCHEMA:
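The screenshot of that step isn't reproduced here; in plain SQL, using the database and schema names the R code below expects, it amounts to something like:

-- Create the target database for the OMOP CDM
CREATE DATABASE omopcdm54;

-- Reconnect to the new database (e.g. \c omopcdm54 in psql), then create the schema
CREATE SCHEMA omopcdm54;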




Load OMOP CDM 5.4

Now we get a little help from our friends in the OHDSI Community and provision the supported schema, version 5.4, using the OHDSI CommonDataModel tools in RStudio.

install.packages("devtools")
devtools::install_github("OHDSI/CommonDataModel")
install.packages("DatabaseConnector")
install.packages("SqlRender")
Sys.setenv("DATABASECONNECTOR_JAR_FOLDER" = "/home/sween/Desktop/OMOP/iris-omop-extdb/jars")
library(DatabaseConnector)
downloadJdbcDrivers("postgresql")

We now have what we need and can connect to our Postgres instance and create the tables in the OMOPCDM54 database we provisioned above.

Connect

cd <- DatabaseConnector::createConnectionDetails(dbms = "postgresql",
                                                 server = "extrdp-ops.biggie-smalls.us-east-2.rds.amazonaws.com/OMOPCDM54",
                                                 user = "postgres",
                                                 password = "REDACTED",
                                                 pathToDriver = "./jars"
                                                 )


Create

CommonDataModel::executeDdl(connectionDetails = cd,
                            cdmVersion = "5.4",
                            cdmDatabaseSchema = "omopcdm54"
                            )

Barring a "sea of red", it should have executed successfully.



Now let's check our work; we should have an external Postgres OMOP database suitable for use with the service.
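One quick, hedged way to verify from R, reusing the connection details from above, is to count the tables that executeDdl created in the schema:

conn <- DatabaseConnector::connect(cd)

# The CDM 5.4 DDL creates a few dozen tables in the omopcdm54 schema
DatabaseConnector::querySql(
  conn,
  "SELECT COUNT(*) AS table_count
     FROM information_schema.tables
    WHERE table_schema = 'omopcdm54'"
)

DatabaseConnector::disconnect(conn)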


Configure OMOP Cloud Service

We have the sources, we have the targets; let's configure the service to glue them together and complete the transformation pipeline from FHIR to the external database.


The InterSystems OMOP Cloud Service should be all set!

The OMOP Journey continues...
