Check this discussion. % variables are the simplest singletons there are.
%Close is called automatically. Consider the following example:

Class Utils.GC Extends %RegisteredObject
{

Property Type As %String;

/// do ##class(Utils.GC).Test()
ClassMethod Test()
{
    set obj = ..%New("explicit")
    kill obj
    do ..Implicit()
}

ClassMethod Implicit()
{
    set obj = ..%New("implicit")
    // obj will be removed from memory after we exit the current method/frame.
}

Method %OnClose() As %Status [ Private, ServerOnly = 1 ]
{
    Write "%Close is running: ", ..Type, !
    Quit $$$OK
}

Method %OnNew(type) As %Status [ Private, ServerOnly = 1 ]
{
    Set ..Type = type
    Quit $$$OK
}

}

Here's the output from the Test method:

HCC>do ##class(Utils.GC).Test()
%Close is running: explicit
%Close is running: implicit

Great advice, Jordan! Thank you!
What about collisions?
In the router, Force Sync Send should be set to 1.
The default ReplyCodeActions (RCA) behavior is ':?R=RF,:?E=S,:~=S,:?A=C,:*=S,:I?=W,:T?=C'
This means that NACKs received with error code AR or CR are retried, while codes AE or CE suspend the current outbound message and move on to the next. I suppose you want ':?R=RF,:?E=F,:~=S,:?A=C,:*=S,:I?=W,:T?=C'
To avoid getting unrelated HTTP errors in xDBC testing, test in the terminal:

set sc = ##class(%SYSTEM.SQLGateway).TestConnection(name, 0, 0, .err)
set sc = ##class(%SQL.Manager.API).TestDecodeDSN(name, usr, password, 0, .err)

Are you on Linux? Use JDBC. The Snowflake ODBC driver is incompatible with IRIS because it uses a backtrace() syscall, which causes signal 11 in the IRIS process if the process is running in the background.
The Snowflake ODBC driver uses backtrace() to determine the Driver Manager on Linux, so currently it does not work on Linux at all with IRIS.
Great!
In IRIS 2019.4.0, compiling a Business Host class triggers all Config Items of that class to be automatically restarted.
Which compile flag is that?
Properties are case-sensitive.
What would be the use case for that?
Timings heavily depend on your setup. You can always start by collecting the data for an hour.
You can also use defaults for that:
w responseData.%Get("items",[]).%Get(0,{}).%Get("titles",[]).%Get(0,{}).%Get("value",{}).%Get("en_US")

%SQLConnection inherits from %XML.Adaptor, so you can also use XML export/import.
There are three property types which result in custom selectors:
None of them is a directory, unfortunately. A Path datatype would be nice to have. Please submit a WRC request for it.
You're welcome. Here's a bit more info about how BPL BPs work. After you compile a BPL BP, two classes get created in the package with the same name as the full BPL class name:

- The Thread1 class contains methods S1, S2, ..., SN, which correspond to activities within the BPL.
- The Context class has all context variables and also the next state which the BPL would execute (i.e., S5).

Also, the BPL class is persistent and stores requests currently being processed.
BPL works by executing S methods in a Thread class and correspondingly updating the BPL class table, Context table, and Thread1 table where one message "being processed" is one row in a BPL table. After the request is processed, BPL deletes the BPL, Context, and Thread entries. Since BPL BPs are asynchronous, one BPL job can simultaneously process many requests by saving information between S calls and switching between different requests.
For example, say BPL processes one request until it gets to a sync activity - waiting for an answer from a BO. It would save the current context to disk, with the %NextState property (in the Thread1 class) set to the response activity's S method, and work on other requests until the BO answers. After the BO answers, BPL would load the Context into memory and execute the method corresponding to the state saved in %NextState.
That's why registered objects as context properties can (and would) be lost between states - as they are not persisted.
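Schematically, a generated Thread1 state method can be pictured like this (an illustrative sketch only, not the actual generated code - the method signature and names here are assumptions):

```objectscript
/// Illustrative sketch only - not the real generated code.
/// Each BPL activity becomes an S method; %NextState tells the engine
/// which S method to run next for this particular request.
Method S3(process As Ens.BusinessProcess, context As Ens.BP.Context) As %Status
{
    // ... code for a <call> activity: send the request asynchronously ...
    set ..%NextState = "S4"   // resume at S4 when the response arrives
    quit $$$OK                // context and thread rows get saved to disk
}
```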
HS.FHIR.DTL.vR4.Model.Resource.Patient is a registered object and not a persistent object, so there's no guarantee it will exist beyond the current BP state. You set context.patient in "Transform 1", so it will be gone from process memory after "Send to FHIR Repo 1" sends the request but before it gets the reply back.
As a solution, you can serialize the FHIR resource to JSON and persist that.
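For instance (a sketch - context.patientJson is an assumed %VarString context property, and ToJSON()/FromJSON() are placeholder serialization calls; verify against the actual API of the model class):

```objectscript
// Sketch: keep the resource as a JSON string in context, which IS persisted.
// context.patientJson is an assumed %VarString context property;
// ToJSON()/FromJSON() are placeholder calls - check the actual
// HS.FHIR.DTL.vR4.Model.Resource.Patient API.

// In "Transform 1", instead of: set context.patient = patient
set context.patientJson = patient.ToJSON()

// In a later state, after the reply from "Send to FHIR Repo 1" arrives:
set patient = ##class(HS.FHIR.DTL.vR4.Model.Resource.Patient).FromJSON(context.patientJson)
```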
Can you provide a minimal example, please?
I assume you do not mutate/reuse objects.
Is it possible to search all globals within a namespace/db or at least to do a full text search on a list of globals?
Can't reproduce in IRIS for Windows (x86-64) 2022.1 (Build 209U) Tue May 31 2022 12:16:40 EDT:
But can reproduce on IRIS for Windows (x86-64) 2025.1 (Build 223U) Tue Mar 11 2025 18:14:42 EDT
Please file a WRC. Looks like something changed between 2022.1 and 2024.1.1
%Stream.Dynamic* streams are read-only pointers to %DynamicAbstractObject (DAO) data. %Stream.Tmp* streams are full-fledged (read/write) in-memory streams.
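For instance (a sketch): %Get with the "stream" type returns a read-only %Stream.DynamicCharacter; copy it into a %Stream.TmpCharacter if you need read/write access:

```objectscript
set obj = {"payload":"a large value"}

// Read-only view over the dynamic object's data:
set roStream = obj.%Get("payload", , "stream")   // %Stream.DynamicCharacter

// Full-fledged in-memory stream you can modify:
set rwStream = ##class(%Stream.TmpCharacter).%New()
do rwStream.CopyFrom(roStream)
do rwStream.Write(" - appended")
```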
You can still use classes, just add %JSONIGNOREINVALIDFIELD to ignore unknown properties:

Parameter %JSONIGNOREINVALIDFIELD As BOOLEAN = 1;

Here's how your example can look:
Class dc.Item Extends (%RegisteredObject, %JSON.Adaptor)
{

Parameter %JSONIGNOREINVALIDFIELD As BOOLEAN = 1;

Property pureId As %Integer;

Property portalUrl As %VarString;

}

and the main class:

Class dc.Response Extends (%RegisteredObject, %JSON.Adaptor)
{

Parameter %JSONIGNOREINVALIDFIELD As BOOLEAN = 1;

Property items As list Of Item;

/// do ##class(dc.Response).Test()
ClassMethod Test()
{
set json = ..Sample()
set obj = ..%New()
$$$TOE(sc, obj.%JSONImport(json))
do obj.DisplayItems()
}
Method DisplayItems()
{
for i=1:1:..items.Count() {
set item = ..items.GetAt(i)
zw item
}
}
ClassMethod Sample() As %String [ CodeMode = expression ]
{
{
"count": 0,
"pageInformation": {
"offset": 0,
"size": 0
},
"items": [
{
"pureId": 0,
"uuid": "196ab1c9-6e60-4000-88cb-4b1795761180",
"createdBy": "string",
"createdDate": "1970-01-01T00:00:00.000Z",
"modifiedBy": "string",
"modifiedDate": "1970-01-01T00:00:00.000Z",
"portalUrl": "string",
"prettyUrlIdentifiers": [
"string"
],
"previousUuids": [
"string"
],
"version": "string",
"startDateAsResearcher": "1970-01-01",
"affiliationNote": "string",
"dateOfBirth": "1970-01-01",
"employeeStartDate": "1970-01-01",
"employeeEndDate": "1970-01-01",
"externalPositions": [
{
"pureId": 0,
"appointment": {
"uri": "string",
"term": {
"en_GB": "Some text"
}
},
"appointmentString": {
"en_GB": "Some text"
},
"period": {
"startDate": {
"year": 0,
"month": 1,
"day": 1
},
"endDate": {
"year": 0,
"month": 1,
"day": 1
}
},
"externalOrganization": {
"uuid": "196ab1c9-6e60-4000-8b89-29269178a480",
"systemName": "string"
}
}
]
}
]
}.%ToJSON()
}
}

irissession can do it (assumes OS authentication is enabled):

irissession <INSTANCE> -U<NAMESPACE> '##class(%SYSTEM.OBJ).Load("<file from linux folder>","ck")'

Note that there must be no whitespace in the command argument.
Something along these lines should work.
ClassMethod ToEnsStream(obj As %RegisteredObject, Output req As Ens.StreamContainer) As %Status
{
    #dim sc As %Status = $$$OK
    try {
        set stream = ##class(%Stream.GlobalCharacter).%New()
        if obj.%Extends("%JSON.Adaptor") {
            $$$TOE(sc, obj.%JSONExportToStream(.stream))
        } elseif obj.%Extends(##class(%DynamicAbstractObject).%ClassName(1)) {
            do obj.%ToJSON(.stream)
        } else {
            // try %ZEN.Auxiliary.altJSONProvider:%ObjectToAET?
            throw ##class(%Exception.General).%New("<JSON>")
        }
        set req = ##class(Ens.StreamContainer).%New(stream)
    } catch ex {
        set sc = ex.AsStatus()
    }
    quit sc
}

set obj = ##class(YourPackage.YourClass).%New()
set sc = obj.%JSONImport(jsonstring)

Starting with 2025.1, InterSystems adds support for two alternative syntax flavors: LIMIT ... OFFSET ..., which is commonly used in other database platforms, and OFFSET ... FETCH ..., which is the official ANSI standard. Documentation.
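A quick sketch of both flavors (the Sample.Person table is illustrative):

```objectscript
// LIMIT ... OFFSET ... flavor (common in other databases):
set result = ##class(%SQL.Statement).%ExecDirect(,
    "SELECT ID, Name FROM Sample.Person ORDER BY ID LIMIT 10 OFFSET 20")
while result.%Next() { write result.Name, ! }

// OFFSET ... FETCH ... flavor (the ANSI standard):
set result = ##class(%SQL.Statement).%ExecDirect(,
    "SELECT ID, Name FROM Sample.Person ORDER BY ID OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY")
while result.%Next() { write result.Name, ! }
```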
Well, I have great news for you, Otto!
Starting from 2025.1, we have automatic database download from a mirror member. Documentation. No copying required.
Use I/O operations like write, zwrite, zzdump - their output will be written to an output file automatically, if one is set.
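A minimal sketch (the file path is illustrative) - once the current device is switched to a file, write/zwrite/zzdump output goes there:

```objectscript
set file = "/tmp/debug.log"   // illustrative path
open file:("WNS"):2           // open for writing in stream mode, 2-second timeout
use file
set x = 42
write "current value: ", x, !
zwrite x                      // readable dump of the variable
close file                    // output reverts to the principal device
```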
Would canonicalization work for you?
Also consider storing your XMLs as gzipped streams (%Stream.GblChrCompress) or compressed strings ($system.Util.Compress). I think it will be more effective as a storage-space-saving strategy.
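A sketch of both approaches (the XML content is illustrative):

```objectscript
// 1. Gzipped global stream - write as usual, stored compressed on disk:
set stream = ##class(%Stream.GblChrCompress).%New()
do stream.Write("<root><item>value</item></root>")
do stream.%Save()

// 2. Compressed string - handy for shorter documents:
set xml = "<root><item>value</item></root>"
set packed = $system.Util.Compress(xml)
set unpacked = $system.Util.Decompress(packed)  // round-trips back to xml
```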