Sylvain Guilbaud · Feb 25, 2022

Thanks Dmitry for your reply. Actually, I know all of this; that's why I don't understand why it's not working any more...

gh repo clone intersystems-community/sam
cd sam
tar xvzf sam-1.0.0.115-unix.tar.gz
cd sam-1.0.0.115-unix
./start.sh

Then I create a cluster + a target on my local instance (non-container):

iris list irishealth

Configuration 'IRISHEALTH'
    directory:    /Users/guilbaud/is/irishealth
    versionid:    2021.2.0.649.0
    datadir:      /Users/guilbaud/is/irishealth
    conf file:    iris.cpf  (SuperServer port = 61773, WebServer = 52773)
    status:       running, since Fri Feb 25 15:35:32 2022
    state:        ok
    product:      InterSystems IRISHealth

I check that /api/monitor/metrics runs well:

curl http://127.0.0.1:52773/api/monitor/metrics -o metrics
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 17634  100 17634    0     0  14174      0  0:00:01  0:00:01 --:--:-- 14383
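For reference, a healthy /api/monitor/metrics response body is plain-text Prometheus exposition format, one metric per line; a minimal illustrative sample (metric names and values here are examples, not output from the instance above):

iris_cpu_usage 2
iris_cache_efficiency 13449
iris_glo_ref_per_sec 174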
Sylvain Guilbaud · Feb 22, 2022

Thanks Robert for your comment. Merging globals is exactly what the toArchive method does here:

Class data.archive.person Extends (%Persistent, data.current.person)
{

Parameter DEFAULTGLOBAL = "^off.person";

/// Copy each row of the source (current) class into this archive class through the object layer.
ClassMethod archive(purgeArchive As %Integer = 0, purgeSource As %Integer = 0) As %Status
{
    set sc = $$$OK, tableName = ""
    set (archived, archivedErrors, severity) = 0
    set sourceClassName = $PIECE(##class(%Dictionary.ClassDefinition).%OpenId(..%ClassName(1)).Super,",",2)
    set targetClassName = ..%ClassName(1)
    set sourceClass = ##class(%Dictionary.ClassDefinition).%OpenId(sourceClassName)
    set targetClass = ##class(%Dictionary.ClassDefinition).%OpenId(targetClassName)
    set sourceDataLocation = sourceClass.Storages.GetAt(1).DataLocation
    set sourceIndexLocation = sourceClass.Storages.GetAt(1).IndexLocation
    set sourceStreamLocation = sourceClass.Storages.GetAt(1).StreamLocation
    set targetDataLocation = targetClass.Storages.GetAt(1).DataLocation
    set targetIndexLocation = targetClass.Storages.GetAt(1).IndexLocation
    set targetStreamLocation = targetClass.Storages.GetAt(1).StreamLocation
    set tableName = $$$CLASSsqlschemaname($$$gWRK,sourceClassName) _"."_ $$$CLASSsqltablename($$$gWRK,sourceClassName)
    if $ISOBJECT(sourceClass) & $ISOBJECT(targetClass) & (tableName '= "") {
        if $ISOBJECT(sourceClass.Storages.GetAt(1)) & $ISOBJECT(targetClass.Storages.GetAt(1)) {
            set tStatement = ##class(%SQL.Statement).%New(1)
            kill sql
            set sql($i(sql)) = "SELECT"
            set sql($i(sql)) = "id"
            set sql($i(sql)) = "FROM"
            set sql($i(sql)) = tableName
            set sc = tStatement.%Prepare(.sql)
            set result = tStatement.%Execute()
            kill:purgeArchive @targetDataLocation, @targetIndexLocation, @targetStreamLocation
            while result.%Next() {
                set source = $CLASSMETHOD(sourceClassName,"%OpenId",result.%Get("id"))
                if $ISOBJECT(source) {
                    set archive = $CLASSMETHOD(targetClassName,"%New")
                    // copy every property of the source object into the archive object
                    for i = 1:1:sourceClass.Properties.Count() {
                        set propertyName = sourceClass.Properties.GetAt(i).Name
                        set $PROPERTY(archive,propertyName) = $PROPERTY(source,propertyName)
                    }
                    set sc = archive.%Save()
                    if sc {
                        set archived = archived + 1
                    } else {
                        set archivedErrors = archivedErrors + 1
                    }
                }
            }
            kill:purgeSource @sourceDataLocation, @sourceIndexLocation, @sourceStreamLocation
            set msg = "archive data from " _ sourceClassName _ " into "_ targetClassName _ " result:" _ archived _ " archived (errors:" _ archivedErrors _ ")"
        } else {
            set severity = 1
            set msg = "ERROR WHILE ARCHIVING " _ sourceClassName _ " in "_ targetClassName _ " : classes have no storage definition"
        }
    } else {
        set severity = 1
        set msg = "ERROR WHILE ARCHIVING " _ sourceClassName _ " in "_ targetClassName _ " : classes not found in %Dictionary.ClassDefinition"
    }
    do ##class(%SYS.System).WriteToConsoleLog(msg,0,severity)
    return sc
}

/// Archive by merging the source storage globals directly into the target globals.
ClassMethod toArchive(purgeArchive As %Integer = 0, purgeSource As %Integer = 0) As %Status
{
    set sc = $$$OK
    set sourceClassName = $PIECE(##class(%Dictionary.ClassDefinition).%OpenId(..%ClassName(1)).Super,",",2)
    set targetClassName = ..%ClassName(1)
    set sourceClass = ##class(%Dictionary.ClassDefinition).%OpenId(sourceClassName)
    set targetClass = ##class(%Dictionary.ClassDefinition).%OpenId(targetClassName)
    if $ISOBJECT(sourceClass) & $ISOBJECT(targetClass) {
        if $ISOBJECT(sourceClass.Storages.GetAt(1)) & $ISOBJECT(targetClass.Storages.GetAt(1)) {
            set sourceDataLocation = sourceClass.Storages.GetAt(1).DataLocation
            set sourceIndexLocation = sourceClass.Storages.GetAt(1).IndexLocation
            set sourceStreamLocation = sourceClass.Storages.GetAt(1).StreamLocation
            set targetDataLocation = targetClass.Storages.GetAt(1).DataLocation
            set targetIndexLocation = targetClass.Storages.GetAt(1).IndexLocation
            set targetStreamLocation = targetClass.Storages.GetAt(1).StreamLocation
            kill:purgeArchive @targetDataLocation, @targetIndexLocation, @targetStreamLocation
            merge @targetDataLocation = @sourceDataLocation
            merge @targetIndexLocation = @sourceIndexLocation
            merge @targetStreamLocation = @sourceStreamLocation
            set ^mergeTrace($i(^mergeTrace)) = $lb($zdt($h,3),sourceDataLocation)
            kill:purgeSource @sourceDataLocation, @sourceIndexLocation, @sourceStreamLocation
            set severity = 0
            set msg = "ARCHIVING " _ sourceClassName _ " in "_ targetClassName _ " SUCCESSFULLY"
        } else {
            set severity = 1
            set msg = "ERROR WHILE ARCHIVING " _ sourceClassName _ " in "_ targetClassName _ " : classes have no storage definition"
        }
    } else {
        set severity = 1
        set msg = "ERROR WHILE ARCHIVING " _ sourceClassName _ " in "_ targetClassName _ " : classes not found in %Dictionary.ClassDefinition"
    }
    do ##class(%SYS.System).WriteToConsoleLog(msg,0,severity)
    return sc
}

Storage Default
{
<data name="personDefaultData">
<value name="1">
<value>%%CLASSNAME</value>
</value>
<value name="2">
<value>name</value>
</value>
<value name="3">
<value>dob</value>
</value>
<value name="4">
<value>activ</value>
</value>
<value name="5">
<value>created</value>
</value>
</data>
<datalocation>^off.personD</datalocation>
<defaultdata>personDefaultData</defaultdata>
<idlocation>^off.personD</idlocation>
<indexlocation>^off.personI</indexlocation>
<streamlocation>^off.personS</streamlocation>
<type>%Storage.Persistent</type>
}

}
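For context, a minimal usage sketch of the two methods above (the purge flags are the optional arguments; 1 = kill the corresponding globals):

// Merge the source globals into the archive globals, purging the archive before and the source after
do ##class(data.archive.person).toArchive(1,1)

// Or copy row by row through the object layer, keeping the source intact
set sc = ##class(data.archive.person).archive(1,0)
if 'sc { do $system.Status.DisplayError(sc) }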
Sylvain Guilbaud · Feb 22, 2022

Thanks Eduard for sharing your code implementing a very powerful approach to data snapshots.
Sylvain Guilbaud · Feb 21, 2022

You can find the full definitions of the current and archive classes on GitHub.
Sylvain Guilbaud · Feb 19, 2022

Thanks for sharing this explanation. If you want to avoid adding the WITH clause to every DDL statement, you can also change this default behavior with:

SET status=$SYSTEM.SQL.Util.SetOption("DDLUseExtentSet",0,.oldval)
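A minimal sketch of the full round trip, assuming the companion GetOption() call is available on your version; oldval receives the previous setting so it can be restored:

// Read the current setting before changing it
write $SYSTEM.SQL.Util.GetOption("DDLUseExtentSet"),!

// Disable extent-set storage naming for DDL-created tables
set status = $SYSTEM.SQL.Util.SetOption("DDLUseExtentSet",0,.oldval)

// Restore the previous value when done
set status = $SYSTEM.SQL.Util.SetOption("DDLUseExtentSet",oldval)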
Sylvain Guilbaud · Feb 11, 2022

Did you try to pull the containers after having first logged in successfully?

echo $PASSWORD | docker login -u=your-login --password-stdin containers.intersystems.com
docker pull containers.intersystems.com/intersystems/iris:2022.1.0.114.0
Sylvain Guilbaud · Nov 22, 2021

That's a really significant milestone. Congrats!!! 10K in less than 6 years means an approximate rate of 140 new members each month. I'm confident that it will take less than 6 years to reach the next 10K members.
Sylvain Guilbaud · Sep 2, 2020

Hello @Robert Cemper, thanks for your reply to this 5-year-old question. My question was more about references to using Data Connectors in production. We can update cubes based on external tables using data connectors through ProcessFact(). Otherwise, I'm doing very well, thank you, and I hope the same goes for you. Seeing your tireless activity, I guess you are doing fine. Greetings from France, Sylvain
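A minimal sketch of that ProcessFact() path, assuming %DeepSee.Utils.%ProcessFact() as the entry point; the cube name and fact ID below are illustrative:

// Recompute a single fact of a cube after its external source row changed
// "PersonCube" and fact ID 42 are illustrative values
set sc = ##class(%DeepSee.Utils).%ProcessFact("PersonCube",42)
if 'sc { do $system.Status.DisplayError(sc) }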
Sylvain Guilbaud · Mar 5, 2020

Hi Ron, thanks for this great article. There's a typo which raises an amusing question about the potential of Google Cloud:

Using the Google Cloud Console (Easiest) https://could.google.com
Sylvain Guilbaud · Aug 12, 2019

I agree; that's a drawback. BTW, if you're a partner with a Software Update contract, you can download the IRIS for Health containers directly from the WRC Containers Distributions web site.
Sylvain Guilbaud · Aug 12, 2019

Hi Duncan, before getting IRIS for Health on the Docker Store, you can start with the IRIS for Health Community Edition available on AWS, Azure, and GCP.

Kind regards, Sylvain
Sylvain Guilbaud · Nov 8, 2016

The alternative installation of Caché on Mac OS X is much like the installation on any UNIX® platform. To install Caché (a terminal sketch follows this list):

1. Obtain the installation kit from InterSystems and install it on the desktop (tar.gz).
2. Log in as user ID root. It is acceptable to su (superuser) to root while logged in from another account.
3. See Adjustments for Large Number of Concurrent Processes and make adjustments if needed.
4. Follow the instructions in the Run the Installation Script section and subsequent sections of the “Installing Caché on UNIX and Linux” chapter of this guide.
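A minimal terminal sketch of those steps (the kit file name is illustrative; cinstall is the installer script shipped in Caché UNIX kits):

# unpack the kit obtained from InterSystems (file name illustrative)
tar xzf cache-macosx.tar.gz
cd cache-macosx

# become root, then run the installation script and follow the prompts
sudo ./cinstall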
Sylvain Guilbaud · Oct 5, 2016

Hi Evgeny, this code was written while upgrading a remote DeepSee instance to an async mirror (originally based on a shadow-server configuration plus ECP access to the ^OBJ.DSTIME global from the DeepSee instance to the production one; it was before DSINTERVAL was created). Of course, this sample can be modified to add/remove/modify any other parameter by changing the query on %Dictionary.ParameterDefinition to filter for whichever parameter you are trying to add/remove/modify.
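A minimal sketch of such a query, assuming the goal is to list every class that defines a given parameter (DSTIME here; the parameter name is just an example):

// List all classes defining a DSTIME parameter, via the class dictionary
set tStatement = ##class(%SQL.Statement).%New()
set sc = tStatement.%Prepare("SELECT parent, Name FROM %Dictionary.ParameterDefinition WHERE Name = ?")
if sc {
    set result = tStatement.%Execute("DSTIME")
    while result.%Next() {
        write result.%Get("parent"), " : ", result.%Get("Name"), !
    }
}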
Sylvain Guilbaud · Mar 10, 2016

To export globals in XML format, use $system.OBJ.Export:

d $system.OBJ.Export("DeepSee.TermList.GBL","/data/TermList.xml")
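The matching import call is $system.OBJ.Load; a minimal sketch using the path from the export above:

// Load the exported global back into the current namespace
d $system.OBJ.Load("/data/TermList.xml")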