Hi

I started writing a response to your questions yesterday afternoon, using LabTrak as the basis for a lengthy explanation. I had written about 400 lines when my laptop rebooted to install some updates and I lost everything I wrote. So I am going to answer, but I will abbreviate the explanation as I don't have enough time to rewrite the whole document from yesterday.

1) Different database Storage Models

  • Default Storage
    • Is the Storage model used when you create a new Class/Table in Ensemble/IRIS.
    • Stores data in a Global whose name ends with a "D". It stores Indices for the Class/Table in a Global ending with an "I" and puts Stream data into a Global where the Name ends with an "S"
    • Data is stored in a $list structure. $list prefixes each field with a couple of bytes that record how long the field is and its base datatype: String, Number, Boolean or Bit.
    • By default, the RowId is a Sequential Contiguous Integer starting from 1. The RowId is referenced by the column named "ID"
    • The RowId can be overridden by creating an Index on a property of the class and assigning it the attributes [Unique, PrimaryKey, IDKey]. PrimaryKey says that the property value is the RowId for each row in the Table. IDKey says that the property Value is the Object.%Id() and is the Object Identifier. Unique is implicit in that you cannot have two different records with the same RowId value.
    • The Global reference for the Data Global looks like this:
    • ^GlobalD(RowId)=$lb(Property1,Property2,...,PropertyN)
    • An index on Property X will generate a Global Reference: ^GlobalI("{IndexName}",{Property_Value},{RowId_Value})="" (a short worked sketch of this layout follows at the end of this list)
    • The RowId, irrespective of whether it is system generated or based on the value of a specific property or properties is the "ID" in an SQL statement
  • SQLStorage
    • This model was implemented to allow InterSystems customers who had developed applications that used direct Global Access to create a Mapping of the Global Structures in such a way that the data in the global can be exposed as Class Objects or Table Rows.
    • The RowId can be referenced as "ID" in an SQL Statement. The RowId has to be generated by the application even if it is a sequential Integer; many applications, including LabTrak, typically subscript their Application Globals with a data value generated by the application. For example, the Episode Number in LabTrak is the 2-character Site Code followed by a string of numerics, e.g. AA1345363.
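Here is the worked sketch of the Default Storage layout referred to above. The class, property and data values are hypothetical, and the exact $list layout depends on the generated storage definition, but the D/I/S naming convention is the one described in the list:

Class Demo.Person Extends %Persistent
{
Property Name As %String;
Property DOB As %Date;
Index NameIdx On Name;
}

// Creating and saving an object:
Set person = ##class(Demo.Person).%New()
Set person.Name = "Smith", person.DOB = +$Horolog
Do person.%Save()

// produces globals roughly like this:
// ^Demo.PersonD(1) = $lb("","Smith",66000)        <- data global, RowId 1
// ^Demo.PersonI("NameIdx"," SMITH",1) = ""        <- index global (the default SQLUPPER collation prepends a space)
// ^Demo.PersonS would hold stream data, if the class had any stream properties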

2) Links between Tables

  • A Designative Reference is the term we use for a Property/Column whose value is the RowId of a row in another Class/Table. The most common examples are code tables. Suppose you have a code table CT_Country where the Country_Code is the RowId of the table; a column called CountryDR on the table EP_VisitNumber (a Trak Episode) is then linked to a row in CT_Country.
  • The CT_Country Global will look like this: ^User.CTCountryD({Country_Code})="{Country_Name}\{Country_Currency}\{Country_Telephone_Code}" and an example would be: ^User.CTCountryD("ZA")="South Africa\ZAR\+27"
  • The Episode Record will look like this: ^TEPI({Episode_Number})="field1\Field2\...\ZA\...\FieldN". The SQL query "SELECT Country_Code FROM SQLUser.EP_VisitNumber WHERE EP_VisitNumber='AA12347690'" will display ZA in the Country_Code column. You can use the "->" (points into) construct supported in Ensemble/IRIS SQL to access the Country_Name without having to add CT_Country to the FROM clause or join it with "WHERE EP_VisitNumber.Country_Code=CT_Country.ID".
  • The SELECT statement looks like this: "SELECT EP_VisitNumber,EP_VisitSurname,Country_Code->CountryName FROM SQLUser.EP_VisitNumber"
  • Designative References do not support Referential Integrity. That means you could delete the "ZA" row in CT_Country even though there may be thousands of Episodes that point to that row. To enforce Referential Integrity, you declare a ForeignKey definition on CountryCode in the Class Definition (see the sketch after this list). Ensemble will generate an Index of Country Codes and Episode Numbers, and if there are any entries in this Index for the Country Code "ZA", i.e. Episodes linked to the Code "ZA", then an Error Status is generated indicating that the ForeignKey Constraint prevents the delete of the "ZA" row in CT_Country because there are rows in other Tables that reference that Row.
  • LabTrak doesn't generally use ForeignKeys; the application does, however, prevent you from deleting rows in Code Tables if those Codes are referenced by other LabTrak Tables.
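This is the sketch referred to above: a minimal, hypothetical ForeignKey declaration (LabTrak itself doesn't generally do this, and the class names here are made up for illustration):

Class User.EPVisitNumber Extends %Persistent [ SqlTableName = EP_VisitNumber ]
{
Property CountryDR As User.CTCountry;
ForeignKey CountryFK(CountryDR) References User.CTCountry();
}

With this in place, an attempt to DELETE the "ZA" row from SQLUser.CT_Country fails with a foreign key constraint error for as long as any EP_VisitNumber row still references it, and Ensemble maintains the supporting index of CountryDR values automatically.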

3) Relationships

There are two types of relationships most commonly used in Ensemble/IRIS Classes/Tables: Parent-Child and One-Many. Many-Many relationships can be defined. Refer to the documentation on Relationships to see how Many-Many relationships are defined. I can't think of any examples of Many-Many relationships in LabTrak so instead, I will just focus on the two main relationship Types.

Parent-Child Relationships

If you look at the TEPI global in LabTrak, which is the global that stores Episodes, you will see something like this:

^TEPI({Episode_Number})="{Episode Fields}"

^TEPI({Episode_Number},1,{Test_Set_Code})="{Test_Set_Fields}"

^TEPI({Episode_Number},1,{Test_Set_Code},"DATA",{Test_Item_Code})="{Test_Item_Fields}"

for example:

^TEPI("AA00000401")=""SALM\NIGEL\M\44453\\\S2\61839\646\61838\480\61839\0DR\\R\\\1039444\\41C028W000\\\\\40\\\F\\fred.smith\\61839\646\N\\AA\,690425\\\\\\N\0\\\\N\H\38768\\\\41C028W000\\\\\\\\\\\\\\\Z03.9\52ZETH000003" - This is the Episode Record

^TEPI("AA00000401",1,"H065",1)="61842\594\jane.doe\61842\597\jane.doe\\N\\\\\\\\\\\\\61839\646\\\N\\\Y\\\A\~~~\ \N\\fred.smith\\\\\\N\\61838\28800\\\1\\\\\Y\\\\61839\38760\\\N\\\\\\\N\61843\8122\\\P\Z03.9\" - This is a Test Set Record within the Episode

^TEPI("AA00000401",1,"H065",1,"DATA","H0440")="4.14\\\\\AAH02\\\\\\\\\" - This is a Test Item Record within the Test Set.

As you can see, this global is a Multidimensional Array with many subscript levels. The ability to create Arrays like this is one of the features that differentiates Cache/Ensemble/IRIS from other Database Technologies. It is one of the reasons why Ensemble/IRIS creates databases that are, on average, half the size of other database technologies on a like-for-like basis. Partly this is because the data records do not use fixed-length fields: by using Delimiters or $List we do not need to pad out field values with leading '0's, trailing spaces or NULL characters. Secondly, storing the Test Sets within the Episode means that when IRIS loads the data page containing that Episode into memory it brings the Test Sets and Test Items with it. When you walk through the Test Sets, the Test Set Records are already in memory. We haven't had to go to another data structure in the Database to fetch the Test Set Records from a Test Set Table, as you would need to do in other Relational Database Technologies.

If you delete an Episode the Test Sets and Test Items are deleted as well. You do not need to delete the Test Items first, then delete the Test Sets and then, finally, delete the Episode.

Nor do you need to maintain Indices to link Test Items to Test Sets or Test Sets to Episode.

In LabTrak the property in the Parent Table that references the Child rows in the Child Table is prefixed with the word "Child", and the corresponding property in the Child Table that points to the row in the Parent Table is suffixed with the word "ParRef".
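A minimal sketch (class names are hypothetical) of how a Parent-Child relationship is declared. With default storage the Test Set data ends up nested under the Episode's node, much like the ^TEPI structure shown above, and deleting the Episode removes its Test Sets with it:

Class Demo.Episode Extends %Persistent
{
Property EpisodeNumber As %String;
Relationship ChildTestSets As Demo.TestSet [ Cardinality = children, Inverse = ParRef ];
}

Class Demo.TestSet Extends %Persistent
{
Property TestSetCode As %String;
Relationship ParRef As Demo.Episode [ Cardinality = parent, Inverse = ChildTestSets ];
}

The RowId of a child row has the form {ParentRowId}||{childsub}, which is analogous to the LabTrak Test Set RowIds you will see further down (AA00000005||H065||1).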

One-Many Relationships

In the One-to-Many relationship there are two tables; a row in the "One" table is linked to many rows in the "Many" table. From an Objects point of view, there is a Property in the "One" class that points to the related rows in the "Many" table. In LabTrak that property name is prefixed with the word "Child". There is a corresponding property in the "Many" table whose name is suffixed with the word "ParRef".
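The equivalent sketch for a One-Many relationship (class names again hypothetical): here each class keeps its own data global, and the "Many" side simply stores a reference back to its "One" row:

Class Demo.Clinician Extends %Persistent
{
Property Name As %String;
Relationship ChildVisits As Demo.Visit [ Cardinality = many, Inverse = ParRefClinician ];
}

Class Demo.Visit Extends %Persistent
{
Property VisitDate As %Date;
Relationship ParRefClinician As Demo.Clinician [ Cardinality = one, Inverse = ChildVisits ];
}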

Your Question

When I read your question and I looked at the Schema Diagram I was a bit confused as there is no SQLUser.CT_Loc table in LabTrak. There is however a table called SQLUser.CT_UserLocation.

If you use the Management Portal -> System Explorer -> SQL in the LabTrak namespace and select the SQLUser schema in the Schema drop-down list on the left of the SQL form, you will be presented with a list of all Tables in the SQLUser schema. If you scroll down, find the table "CT_UserLocation" and drag it into the "Execute SQL" text area on the form, it will display the following SQL Statement:

SELECT 
CTUSL_RowId, CTUSL_Code, CTUSL_Desc, CTUSL_Destination_DR, CTUSL_UniqueSite, CTUSL_MoveDirectory, CTUSL_AccreditationNumber, CTUSL_WebReportFooter, CTUSL_DisplaySequence, CTUSL_RegionCode, CTUSL_Address1, CTUSL_Address2, CTUSL_Address3_Suburb, CTUSL_Address4_State_DR, CTUSL_Address5_PostCode, CTUSL_Phone, CTUSL_Fax, CTUSL_email, CTUSL_StructuredCode, CTUSL_StructuredCodeLength, CTUSL_StructuredCodeLevel, CTUSL_Address, CTUSL_DocCourierRun_DR, CTUSL_LocCourierRun_DR, CTUSL_ActiveFlag, CTUSL_DefaultNID, CTUSL_UnixMoveDirectory, CTUSL_PFforBatchEntry
FROM SQLUser.CT_UserLocation

Note the Columns CTUSL_Address4_State_DR, CTUSL_DocCourierRun_DR, CTUSL_LocCourierRun_DR. These are all Designative References to rows in other Tables.

The EP_VisitTestSet table contains the following Columns:

SELECT 
VISTS_ParRef, VISTS_RowId, VISTS_TestSetCounter, VISTS_TestSet_DR, VISTS_DateOfEntry, VISTS_TimeOfEntry, VISTS_UserEntered_DR, VISTS_DateOfAuthorisation, VISTS_TimeOfAuthorisation, VISTS_UserAuthorised_DR, VISTS_PathologistID_DR, VISTS_ExcludeFromCCR, VISTS_Rule3Exempt_Sequence, VISTS_Priority_DR, VISTS_Rule3Exempt_Max, VISTS_TherapeutDosage, VISTS_TimeOfDosage, VISTS_24HUVolume, VISTS_24HUTimePeriod, VISTS_BB_TransfEvents_DR, VISTS_Rule3Exempt_Date, VISTS_DateOfPathologistAtt, VISTS_TimeOfPathologistAtt, VISTS_StaffNotes, VISTS_DateOfCreation, VISTS_TimeOfCreation, VISTS_StandardLettersChecked, VISTS_BB_DateRequired, VISTS_ExcludeFromPatientMean, VISTS_UserSite_DR, VISTS_Machine, VISTS_Printed, VISTS_SuperSet_DR, VISTS_StatusResult, VISTS_DFT_TimeOfFirstCollection, VISTS_HISTO_Extra, VISTS_HISTO_BillingItem, VISTS_SupressBilling, VISTS_HospitalRefNumber, VISTS_UserCreated_DR, VISTS_MoveToReferralLab_DR, VISTS_MoveToUserSite_DR, VISTS_DFT_DR, VISTS_DFT_Position, VISTS_DFT_DateOfFirstCollection, VISTS_StatusEntry, VISTS_Confidential, VISTS_SpecimenType_DR, VISTS_SpecimenNo, VISTS_DateOfCollection, VISTS_TimeOfCollection, VISTS_CollectedBy_DR, VISTS_DisplaySequence, VISTS_SupressReason, VISTS_UserSupress_DR, VISTS_DateOfSupressBilling, VISTS_Rule3Exempt_Comments, VISTS_AnatomicalSite_DR, VISTS_Reason_DR, VISTS_DateOfReason, VISTS_TimeOfReason, VISTS_UserReason_DR, VISTS_DateOfReceive, VISTS_TimeOfReceive, VISTS_PaymentCode_DR, VISTS_SpecimenGroup_DR, VISTS_Document_DR, VISTS_BB_Neonatal, VISTS_RR_Date, VISTS_RR_Time, VISTS_RR_User_DR, VISTS_RV_Date, VISTS_RV_Time, VISTS_RV_User_DR, VISTS_BB_DoNotFile, VISTS_DateOfLastChange, VISTS_TimeOfLastChange, VISTS_UserOfLastChange_DR, VISTS_DateOfSentSTM, VISTS_TimeOfSentSTM, VISTS_DateOfReceivedSTM, VISTS_TimeOfReceivedSTM, VISTS_MovementStatus, VISTS_AddedByAction, VISTS_PricingStatus, VISTS_ICD10List, VISTS_LongTerm, VISTS_RequestBy_DR, VISTS_LongTermReason, VISTS_PairedSeraQueue, VISTS_DoctorDR, VISTS_BB_TimeRequired, VISTS_BB_Tags, VISTS_DTOfResultChange, VISTS_DateOfFirstAuthorisation, VISTS_TimeOfFirstAuthorisation, VISTS_AnatomicalSiteFT, VISTS_ReasonReportedTo, VISTS_ReasonTelephone, VISTS_ReasonObservations, VISTS_ManualPCEntry, VISTS_ReasonClearResults, VISTS_ReasonClearSN, VISTS_ReasonResultsNR
FROM SQLUser.EP_VisitTestSet

If I add a WHERE clause: WHERE VISTS_ParRef %STARTSWITH 'AA' it will return records from the site whose Site Code is "AA".

If you run this query you will see rows of data that look like this

AA00000005 AA00000005||H065||1 1 H065 04/19/2010 1209

The ParRef is the Episode Number. The RowId for the Test Set is AA00000005||H065||1, i.e. {Episode}||{TestSetCode}||{TestSetCounter}.
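Putting those columns and the arrow syntax from section 2 together, a query like the following (untested here; the column names are taken from the SELECT lists above) pulls the Test Sets for site "AA" along with the surname from the parent Episode row:

SELECT VISTS_ParRef, VISTS_ParRef->EP_VisitSurname, VISTS_TestSet_DR, VISTS_DateOfEntry
FROM SQLUser.EP_VisitTestSet
WHERE VISTS_ParRef %STARTSWITH 'AA'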

I created an ODBC connection to a LabTrak Namespace on our UAT server. I specified the Server IP address, the Super Server Port Number, the Namespace and my Credentials, and then tested the connection, which was successful. The ODBC connection can be used by Excel and other reporting tools to create Reports and Charts.

I then used an application called DBeaver (available on the Open Exchange site) to create ERD diagrams of the LabTrak classes. DBeaver uses the InterSystems JDBC driver and you set up the connection in DBeaver itself.

Hi

If you are using the HS FHIR Resource classes in IRIS for Health then you will see that attributes such as Patient.Picture are BinaryStreams. If you are using a BinaryStream you need to convert it into Base64 encoding, which generates a stream of characters that does not include any control characters that might cause a JSON or XML parser to think it has encountered a line terminator. If you look at the FHIR to SDA and SDA to FHIR data transformations you will see how they transform the inbound stream from Base64 to Binary, and from Binary to Base64 encoding for the outbound SDA to FHIR JSON or XML.
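As an illustration only (this is a minimal sketch, not the exact code the HS transformations use), converting a binary stream into a Base64-encoded character stream that is safe to embed in JSON or XML looks roughly like this:

Set b64Stream = ##class(%Stream.GlobalCharacter).%New()
Do binaryStream.Rewind()
While 'binaryStream.AtEnd {
    // read a chunk whose length is a multiple of 3 so that no "=" padding appears mid-stream
    Set chunk = binaryStream.Read(3000)
    // second argument 1 = do not insert line breaks into the encoded output
    Do b64Stream.Write($System.Encryption.Base64Encode(chunk, 1))
}

Decoding on the inbound side is the mirror image using $System.Encryption.Base64Decode().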

Hi

I have spent the last 4 months developing a generic HL7 and FHIR Codebase. The way it works is as follows:

1) I have a Codebase Namespace that consists of a number of common classes:

a) A Message Queue Service that processes messages in a Message Queue. The Message Queue class has a number of standard properties including: CreateTS, ProcessTS, CompletedTS, Target Message Type (HL7 or FHIR), fields for the generated Request HL7 or FHIR message, fields for the Response HL7 or FHIR message, fields for the HL7 ACK/NACK Code and Text, the HTTP Response Status, and the Overall Message Status.

b) There are a number of standard methods for every Message Queue: CreateMessage(), UpdateMessage(), GetNextMessage(), CompleteMessage(), ResendMessage(), ResendMessageRange() and PurgeMessages(). These methods are called by the Business Service that processes the Message Queue and by the Business Process to resend or complete a message; the UpdateMessage() method is used to update the fields in the message with data collected as the message passes through the Production.
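To give a feel for the shape of such a class (everything here is hypothetical; the real codebase is not published), the standard portion looks roughly like this:

Class MyOrg.Interop.MessageQueue Extends %Persistent
{
Property CreateTS As %TimeStamp;
Property ProcessTS As %TimeStamp;
Property CompletedTS As %TimeStamp;
Property TargetMessageType As %String(VALUELIST = ",HL7,FHIR");
Property RequestContent As %Stream.GlobalCharacter;
Property ResponseContent As %Stream.GlobalCharacter;
Property ACKCode As %String;
Property ACKText As %String;
Property HTTPResponseStatus As %String;
Property OverallStatus As %String;

ClassMethod GetNextMessage() As MyOrg.Interop.MessageQueue
{
    // hypothetical: return the oldest entry whose ProcessTS has not yet been set
    Quit ""
}

Method CompleteMessage() As %Status
{
    Set ..CompletedTS = $ZDateTime($Horolog, 3)
    Set ..OverallStatus = "Completed"
    Quit ..%Save()
}
}

Interface-specific subclasses then add their own identifier properties on top of these standard ones.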

c) Code tables to convert some application field values into HL7 or FHIR equivalents.

d) The following Business Operations: HL7HTTPOperation, HL7HTTPSOperation, FHIRHTTPOperation, FHIRHTTPSOperation, HL7FileOperation and FHIRFileOperation 

e) A Business Service that calls the GetNextMessage() method and, if it finds a message, puts the MessageId into an Ensemble Request Message and sends it to the Business Process.
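A sketch of that Business Service (the request class and its MessageId property are hypothetical names):

Class MyOrg.Interop.QueueService Extends Ens.BusinessService
{
Parameter ADAPTER = "Ens.InboundAdapter";

Property TargetProcess As Ens.DataType.ConfigName;

Parameter SETTINGS = "TargetProcess";

Method OnProcessInput(pInput As %RegisteredObject, Output pOutput As %RegisteredObject) As %Status
{
    // poll the Message Queue; do nothing if it is empty
    Set queueItem = ##class(MyOrg.Interop.MessageQueue).GetNextMessage()
    Quit:'$IsObject(queueItem) $$$OK
    // hand only the MessageId to the Business Process
    Set request = ##class(MyOrg.Interop.QueueRequest).%New()
    Set request.MessageId = queueItem.%Id()
    Quit ..SendRequestAsync(..TargetProcess, request)
}
}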

f) The Business Process uses the MessageId to open the Message Queue Object which, apart from the standard fields mentioned in (a), contains any fields derived from the source application that identify the source data you want to transform into an HL7 or FHIR Message. In the OnRequest() method of the Process you use the fields in the Message to retrieve the data from the data source and then invoke a DTL to transform the data into an HL7 or FHIR Message. The resultant message is written to the Message Queue Object. Then, depending on how the Interface is configured, the Ensemble Request Message is sent on to any of the Business Operations I listed. The Business Operations do the work of writing the HL7 or FHIR message into the HTTP Request Message. The HTTP/S Operations then send the HTTP request to the Target Server. When a response is received the response status is noted and the Message Queue Object is updated. If there is content in the response then that too is written into the Message Queue Object, and an Ensemble response message is instantiated and sent back to the Business Process.
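The OnRequest() side then looks something like this sketch (the DTL class, request class and operation names are placeholders, not the real ones):

Class MyOrg.Interop.QueueProcess Extends Ens.BusinessProcess
{
Method OnRequest(pRequest As MyOrg.Interop.QueueRequest, Output pResponse As Ens.Response) As %Status
{
    // open the queue entry identified by the service
    Set queueItem = ##class(MyOrg.Interop.MessageQueue).%OpenId(pRequest.MessageId)
    // transform the source data into an HL7 (or FHIR) message via a DTL
    Set tSC = ##class(MyOrg.DTL.SourceToHL7).Transform(queueItem, .hl7Message)
    Quit:$$$ISERR(tSC) tSC
    // forward it to whichever Business Operation the configuration object names
    Quit ..SendRequestAsync("HL7HTTPOperation", hl7Message, 1, "QueueItem")
}
}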

g) The Business Process OnResponse() method processes the response it receives from the HTTP Operation and updates the Message Queue Object with the overall status of the transaction. It also calls the CompleteMessage() method, which tidies everything up and sets the CompletedTS to the current Date/Time. There are two calculated fields: TimeFromCreateToComplete and TimeFromProcessToComplete. The second is based on the time from when the message was picked up by the Business Service through to completion; the first records the time from when the message was originally created through to completion.

h) There is a Housekeeping Business Service that purges the Message Queue, Ensemble Messages, Ensemble Log Files ($$$TRACE etc.) and the inbound and outbound file directories. The purges are based on configurable "Number of Days to Keep {messages/logs/etc...}" settings.

i) There is a Debug Logger that you can call anywhere in your code to record some debug information. I did this because you can't use $$$TRACE or any of the $$$LOG... calls in non-Production classes. It also encourages developers to use a standard debugging approach with a standard format, rather than having lots of ad-hoc lines like set ^Nigel($i(^Nigel),$zdt($h,3))="The value of variable x is "_$g(x).
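The idea is no more complicated than this kind of sketch (the global, class and method names are hypothetical):

Class MyOrg.Util.DebugLogger Extends %RegisteredObject
{
ClassMethod Log(pClass As %String, pMethod As %String, pText As %String)
{
    // one global, one format; trimmed by the housekeeping purge described in (h)
    Set idx = $Increment(^MyOrg.DebugLog)
    Set ^MyOrg.DebugLog(idx) = $ListBuild($ZDateTime($Horolog, 3), pClass, pMethod, pText)
}
}

// usage anywhere, Production class or not:
Do ##class(MyOrg.Util.DebugLogger).Log("MyPkg.MyClass", "MyMethod", "the value of x is "_$Get(x))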

j) There is an Alert Notification System. I have the concept of Error Alert Notifications, such as a comms failure in the HTTP Operation or an exception caught in a Try/Catch. Then there are Condition Notifications. They are intended for a different audience; an example would be when the Interface discovers that there is no entry in a code table for a specific code. An email will be sent to the appropriate person/people to update the code table, and if the code still isn't there a day later a follow-up email will be sent. The Alert Notification Service also checks the number of items in the Production Queues and will send emails to notify someone if the queues are empty (which could mean that something has broken in the source database/application and it is no longer creating messages for the message queue).

k) There is a single configuration object. There is not a single hard-coded value in the entire codebase; every element of the interface gets its context from the configuration object. The configuration object has about 40 properties, including the name of the message queue class, the Business Process Production Item Name and Business Process Class Name, the inbound and outbound file directory names, and the HL7 or FHIR Request and Response file names (if you are using the File Operation, which is intended mainly for testing purposes but can also be used if you are using FTP to send a message on to some other system).

l) The Codebase sits in one database along with the code tables and code table mapping classes and their contents.

m) When you want to create a new Interface you create a new Namespace and map the Codebase classes into that Namespace. You also map the code tables and Code table mapping globals into the new Interface Namespace.

n) In the source application database you import one class: the Interface Mapping Class. If you have an application or Ensemble interface in your source, then in some appropriate class where you have the details of a Patient or other entity that you want to send into the new interface, you add two lines of code that populate an array of name/value pairs corresponding to the message-queue-specific fields in the Message Queue class created for that interface. As mentioned earlier, the Message Queue class has a number of standard properties and methods. When you create a new Interface you create a new Message Queue class, inherit the standard properties and methods, and then add properties for any identifiers that the interface will use to access the data that will be used in the Data Transformation to create your HL7 or FHIR message. The name/value pairs in the array you pass to the Interface Mapping class method correspond to those identifying property names.

o) The alternative is that an Interface will access the source database and walk through a given table and for every entry it finds in that table it will call the Interface Mapping class method to create a new Message Queue Object with the identifier for that item in the table you are processing. This is really useful for Bulk Loads of data from one source into a target destination.

p) There is also a testing module that will take data from any source. You can define test conditions, for example: scramble the data in one of the source table fields, generate random data for any field, insert or remove items from any collection, or remove values from properties. This module uses a combination of the Populate utility methods, a whole host of Patient Names, Addresses and Identifiers, and custom methods you can write to manipulate the data from the data source. Tests are conducted on one or more source records which are grouped into a "Manifest". You can tell the Manifest how many records you want to process and where to get the records from. The Manifest Records are then processed by the Test Interface, which will run some SQL or object code to get the source data for each Manifest Record. It then calls the DTL you have created to convert that source data into an HL7 or FHIR message, runs through the test conditions, and applies them to each generated message before the message is sent to the HTTP Operations and optionally the File Operations. Details of the Request and Response messages are captured in the Manifest Records. You can also specify documentation on what you expect to happen when the transformed data is processed by the target application you are testing, and once completed there is a manifest report that will generate a .csv file containing the data from every record in the manifest. Having analysed this, you update the manifest with the outcome; if the outcome differs from the expected outcome then you either send the whole manifest back to the developers and tell them to fix the bug, or you look at why the tests produced a different outcome and whether that is due to issues in the source database or application. Tests are defined in a Tests Table and are then grouped into a Test Set which is linked to a Manifest, which means that Tests can be reused for other test scenarios.

The entire collection of classes is extensively documented. Once I have monitored the first Interface that is going into testing in the very near future and resolved any bugs that haven't been picked up by the various unit tests I have done so far, then assuming it is stable (lol, I wrote it, how could it not be) it will take 1- days to generate any new interface. I wrote this architecture because I have 14 HL7 and FHIR Interfaces to write and they are all almost identical.

The whole framework comes with a few CSP pages that I created to maintain the code tables, code table mappings and the configuration class. I am busy learning Angular at the moment and once I have that under my belt then I will create a REST API that any UI can call so that anyone who uses the Framework can build their own UI on top of it. 

If anyone is interested in this framework please contact me. If there are enough people interested then I will do the work to publish this on Open Exchange.

If anyone wants the framework but doesn't want to do any of the development to make it work in their environment, then I will do the implementation, create the interfaces and whatever else is required on a contract basis.

There are some aspects of this framework that could make better use of features in Ensemble, such as Code Tables, uploading CSV files, and the Managed Alert Monitoring System (which I rather like but have not built into this framework, as my Alert Notification Module does pretty much the same thing, although it does not have the "Help Desk" feel of the Ensemble Alert Monitoring System).

If there are other adapters that are required then I can add them into the framework codebase.

Nigel

$$$ASSERT(condition) Writes an entry of type Assert, if the argument is false. condition is an ObjectScript expression that evaluates to true or false.

$$$LOGASSERT(condition) Writes an entry of type Assert, for any value of the argument. condition is an ObjectScript expression that evaluates to true or false.

So, in the same way that you would have a $$$LOGERROR() or $$$TRACE statement in your code, $$$ASSERT() and $$$LOGASSERT() will write to ^Ens.Log.UtilD ($$$ASSERT() only when its condition is false).

The exact code for each is defined in the Ensemble.inc Include file

#define ASSERT(%condition) If '(%condition) { $$$LOGASSERT("Assert Condition Failed: "_##quote(%condition)_$char(13,10)) BREAK }

#define LOGALERT(%arg) Do ##class(Ens.Util.Log).LogAlert($$$CurrentClass,$$$CurrentMethod,%arg)

Interesting: the $$$ASSERT macro has a BREAK statement in it.

So if you have enabled breakpoints with BREAK L/L+/S/S+ and the task is running in the foreground, it will automatically BREAK at that point.

If the Production item is not running in the foreground the BREAK will be ignored.

It's basically the same as saying 'If condition $$$LOGINFO("message")'. I suspect it is probably most useful in BPLs, where writing an IF construct is tedious in comparison with a single $$$... statement in ObjectScript.
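So, in a Business Host method (any class that includes Ensemble.inc; patientId here is just a placeholder variable), the usage difference is simply:

// writes an Assert entry only if the condition is false, and BREAKs if running in the foreground
$$$ASSERT(patientId '= "")

// writes an Assert entry whatever the condition evaluates to
$$$LOGASSERT(patientId '= "")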

Hi

Just one additional comment. If the databases/namespaces are in different instances on the same or different servers, then you need to create a remote database connection to the databases on the secondary server (the server instance which contains the tables you want to use in queries on your primary server instance). Once you have made the secondary databases available to your primary server, you can create a namespace on the primary server to include the database or databases (if you split your globals into one database and your routines into another), and then use Package Mapping on the primary server Namespace to map the Class/Table packages into the primary server instance. What you don't want to do is map the globals associated with the secondary server namespace into the primary server namespace, as this will cause the globals to be created in the primary server namespace database, which means that if you ever disconnect the servers then the data that rightfully belongs to the secondary server will be orphaned in the databases of the primary server.

Yours

Nigel

Hi

Can you include the tags where you reference the style sheets? I see you use a mix of style="", class=".{name}" and id="#{name}".

Nigel

Undoubtedly VS Code. There are new ObjectScript plugins being developed all the time, and now that you can start using Python and R in the latest IRIS 2021.1 (in the Early Adopter Program) you can just add the appropriate extensions for those languages as well. It is very obvious that VS Code is growing in popularity just from the sheer volume of extensions available. The extensions for ObjectScript include the Server Manager, gi::locate, gi::connect, the ObjectScript Language Server, and CacheQualityQ for both VS Code and Cache Studio, which helps you write better code. The built-in Source Control connects to GitHub or private GitHub servers and there are a number of Git extensions. I'm just waiting for someone to add support for the graphical DTL, BPL and Business Rules editors (though of course you can use those through the Management Portal, which you can also run within VS Code). Personally, I hated Atelier; as I had used Cache Studio for roughly 30 years the idea that I might want to use a different editor never crossed my mind, and Atelier was just scary for a simple developer like me, but VS Code changed all that. I still use Cache Studio though (it's a hard habit to break, not that I particularly want to break it).

Nigel

That sounds really cool. I have a friend who had a house in Cape Town and he had installed solar panels and water collection systems, and everything in his house, from his music system, alarm system, gates, garage doors, lights and curtains to his fridge, was controlled through an app on his phone. His solar panels and storage cells effectively covered all of his electricity needs, and his City Power electricity meter cost him about R5 a month.

Hi

Glad to hear that you have IRIS on your Raspberry Pi. Before I send you some suggestions, can you give me a bit of background on what aspects of IRIS you are most interested in? I take it you have access to the Open Exchange; also, how much experience do you have with IRIS or Ensemble?

In the meantime here are some suggestions:

https://openexchange.intersystems.com/package/sql-rest-api

https://openexchange.intersystems.com/package/ObjectScript-Package-Manager (ZPM is the most useful tool for installing Open Exchange applications)

https://openexchange.intersystems.com/package/zpm-registry This will show you a list of OEX apps that are zpm ready

Are you using Cache Studio or VS Code, and do you use Git? If you use VS Code then there are a number of ObjectScript extensions available and they are all useful.

https://openexchange.intersystems.com/package/intersystems-iris-dev-temp...

https://openexchange.intersystems.com/package/Trying-Embedded-Python  There are going to be a lot of Python apps appearing on OEX now that we have a version with Python fully integrated into IRIS (the first implementation of Native Python within IRIS).

Python opens the gateway to more adventurous use of ML, NLP and AI, and there are a number of ML and AI examples (with or without Python).

Some useful information on Python at

https://www.geeksforgeeks.org/defaultdict-in-python/

https://openexchange.intersystems.com/package/OCR-Service

DBeaver is an excellent Database Viewer/Creator with native support for IRIS JDBC, and it's free:

https://openexchange.intersystems.com/package/DBeaver

https://openexchange.intersystems.com/package/integratedml-demo-template

That should get you started.

Please let me know how you get on with these, and if there is anything more I can help you with just message me.

Nigel

Hi

I have done some more experimenting. In the Contact class I have a ContactPhoneNumbers property which I defined as %ListOfDataTypes, and I noticed that the values were being generated but not exported to JSON, so I changed the type to %ArrayOfDataTypes and that didn't work either. I played around with the %JSON attributes to no avail. I read the documentation on the %JSON.Adaptor class; there are strict rules that Arrays and Lists must contain literals or objects, so I wrapped the Phone Numbers in quotes (even though I was generating them as +27nn nnn nnn) but that made no difference. I suspect that the ElementType attribute should be set. In the parent class I specify that the array of object OIDs has an ElementType of %Persistent (the default is %RegisteredObject) and I think that I should do the same with the Phone Number array/list.
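In case it helps anyone following along, this is the sort of declaration I am trying next, using collection syntax for the property rather than a %ListOfDataTypes object (the class and property names here are simplified placeholders, and I have not yet confirmed this against my actual classes):

Class Demo.Contact Extends (%Persistent, %JSON.Adaptor)
{
Property Name As %String;
Property ContactPhoneNumbers As list Of %String;
}

// building and exporting an instance:
Set contact = ##class(Demo.Contact).%New()
Set contact.Name = "Fred Smith"
Do contact.ContactPhoneNumbers.Insert("+27 11 555 0100")
Do contact.%JSONExport()
// expected output, roughly: {"Name":"Fred Smith","ContactPhoneNumbers":["+27 11 555 0100"]}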

Nigel