Keywords: ChatGPT, COS, Lookup Table, IRIS, AI


It has been more than a month since I imagined how ChatGPT might automate some daily engineering work. Time seems warped these days. GPT-4 came out last week, able to process 32K tokens (around 25K words on average) and handle image inputs, compared with ChatGPT's 4K token limit. Last month feels like last year, and last year like last century. Before moving on from ChatGPT, I just wish to share some quick appreciation of how it has actually helped already, and to recap some possible quick wins and gaps with GPTs.


One of the simplest possible tasks could be a Lookup table, right?

Occasionally you might also have to turn tediously long CSV or Excel code tables into an Ensemble/I4H Lookup table in XML, manually. Then the clinical teams change it, then change it again and again.

Can ChatGPT help? Can I just tell it what I want, and have it give me a tool that turns that messy long CSV into an XML lookup table? Here below is a quick try.


The following prompt was sent to ChatGPT:


You are a program to automatically convert a CSV file into an XML lookup table file.

The input is a comma-delimited CSV file with 3 data columns, such as:

The output should be an XML file in the form of:
<?xml version="1.0"?>
<entry table="HIE.ICE.FilterOBR4" key="XCOM">Blood Science</entry>
<entry table="HIE.ICE.FilterOBR4" key="ADX ">Blood Science</entry>
<entry table="HIE.ICE.FilterOBR4" key="DARA">Blood Transfusion</entry>
<entry table="HIE.ICE.FilterOBR4" key="BT1">Blood Transfusion</entry>
Please note: the sequence of the XML attributes in each entry should be kept as they are


Here's a Python script that reads in the CSV file, processes it, and outputs the resulting XML file:

import csv

def convert_csv_to_xml(input_file, output_file):
    """Convert a CSV file to an XML lookup table file.

    Args:
        input_file (str): Path to the input CSV file.
        output_file (str): Path to the output XML file.

    Returns:
        int: 0 if the conversion was successful, 1 if there was an error.
    """
    try:
        # Create a dictionary to hold the lookup table data
        lookup_table = {}

        # Read in the CSV file and process the data
        with open(input_file, "r") as csvfile:
            csvreader = csv.reader(csvfile)
            for row in csvreader:
                category = row[0].strip()
                key = row[1].strip()
                value = row[2].strip()
                lookup_table[key] = category

        # Generate the XML file
        with open(output_file, "w") as xmlfile:
            xmlfile.write('<?xml version="1.0"?>\n')
            for key in lookup_table:
                xmlfile.write(f'<entry table="HIE.ICE.FilterOBR4" key="{key}">{lookup_table[key]}</entry>\n')

        return 0
    except Exception as e:
        print("Error:", e)
        return 1

This script first reads in the CSV file using the csv module and creates a dictionary to hold the lookup table data. It then processes each row of the CSV file and adds the relevant data to the dictionary.

Finally, the script generates the XML file by iterating through the dictionary and writing the appropriate XML tags and attributes to the output file. Note that the f-string syntax is used to dynamically insert the key and category data into the XML tags.
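For a quick sanity check without touching files, the same conversion can be sketched in memory. This is a minimal variant of the idea above, not ChatGPT's output; the sample rows and the helper name `csv_rows_to_lookup_xml` are made up for illustration:

```python
import csv
import io

def csv_rows_to_lookup_xml(csv_text, table="HIE.ICE.FilterOBR4"):
    """Turn CSV text (category, key, value per row) into XML <entry> lines."""
    lines = ['<?xml version="1.0"?>']
    for row in csv.reader(io.StringIO(csv_text)):
        if len(row) < 2:
            continue  # skip blank or malformed rows
        category, key = row[0].strip(), row[1].strip()
        lines.append(f'<entry table="{table}" key="{key}">{category}</entry>')
    return "\n".join(lines)

# Hypothetical sample rows; real code tables come from the clinical teams
sample = "Blood Science,XCOM,Crossmatch\nBlood Transfusion,DARA ,Antibody screen\n"
print(csv_rows_to_lookup_xml(sample))
```

Note that `strip()` removes stray blanks around keys (such as the trailing space after "DARA"), just as the generated script does.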

So it gave Python code by default, without being asked for a specific language. Fine. I ran it briefly in a Jupyter notebook and it checked out, apart from a tiny tweak and a bit of local testing. It also stripped out the blanks. It gave me a tool for handling really lengthy CSVs, without being limited by the token size.

In this specific case, I'd say it increased my productivity by 10+ times, particularly when you need to handle revised CSVs quite a few times over the next few weeks.


InterSystems FAQ rubric

You can programmatically retrieve routine dates and sizes using the RoutineList query of the %Library.Routine (or just %Routine) class.

The RoutineList query takes an argument: the routine name to search for, which can be specified by prefix match or partial match. (For wildcards, specify * or ?)

In the example below, *.MAC is specified as an argument.
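A minimal sketch of running that query from an IRIS terminal session follows; the result-set column names (Name, TimeStamp, Size) are assumed from the query's stated purpose, so check the %Library.Routine class reference in your version:

```objectscript
set stmt = ##class(%SQL.Statement).%New()
do stmt.%PrepareClassQuery("%Routine", "RoutineList")
set rs = stmt.%Execute("*.MAC")
while rs.%Next() {
    write rs.%Get("Name"), "  ", rs.%Get("TimeStamp"), "  ", rs.%Get("Size"), !
}
```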


In the previous article we reviewed how to install our EMPI in standalone mode, so we are ready to start the basic configuration of our EMPI.

First of all we have to do an initial basic configuration, which we can access from the Configuration menu of our Registry.

Selecting that option will allow us to edit the basic configuration table of the Registry:


Hi folks!

Recently I needed to set up a local FHIR server using IRIS For Health, and I think I found the easiest and simplest way ever.

Just run the two lines below in a terminal:

docker run --rm --name my-iris -d --publish 9091:1972 --publish 9092:52773 intersystemsdc/irishealth-community


docker exec -it my-iris iris session iris -U "USER" '##class(%ZPM.PackageManager).Shell("install fhir-server")'

And you'll have a FHIR server running locally at http://localhost:9092/fhir/r4.
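To confirm the server is up, you can request its CapabilityStatement; /metadata is the standard FHIR endpoint for this, and the port matches the docker run command above. Depending on the web application's security settings you may need to pass credentials:

```shell
# Fetch the CapabilityStatement from the local FHIR server
curl -s http://localhost:9092/fhir/r4/metadata
```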

That's it!

The FHIR server will use the latest build of InterSystems IRIS for Health Community Edition and will deploy the FHIR server from this app via an IPM package in the FHIRSERVER namespace.

This is for Mac, so please add in the comments how it works on Windows.

This is a very short article as it is really easy to setup a local FHIR server with InterSystems IRIS for Health and IPM Package Manager.
