I'm running the latest version of Docker Desktop: 4.5.0 (74594)

 
 docker version

After a reboot, I can now again reach only 1 local instance (out of 2) and 0 containers, plus the IRIS-SAM instance.

 

 
docker-compose.yml

docker ps

isc_prometheus.yml

Thanks Dmitry for your reply.

Actually, I know all of this; that's why I don't understand why it isn't working anymore...

  1. gh repo clone intersystems-community/sam
  2. cd sam
  3. tar xvzf sam-1.0.0.115-unix.tar.gz
  4. cd sam-1.0.0.115-unix
  5. ./start.sh

Then I create a cluster + a target on my local (non-containerized) instance:

iris list irishealth

Configuration 'IRISHEALTH'
directory:    /Users/guilbaud/is/irishealth
versionid:    2021.2.0.649.0
datadir:      /Users/guilbaud/is/irishealth
conf file:    iris.cpf  (SuperServer port = 61773, WebServer = 52773)
status:       running, since Fri Feb 25 15:35:32 2022
state:        ok
product:      InterSystems IRISHealth

I check that /api/monitor/metrics responds correctly:

curl http://127.0.0.1:52773/api/monitor/metrics -o metrics

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 17634  100 17634    0     0  14174      0  0:00:01  0:00:01 --:--:-- 14383
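The file returned above is in the Prometheus text exposition format, one `name value` pair per line. A quick sanity check is to grep the download for a metric you expect to be there; the sample below is illustrative (metric names such as iris_cpu_usage do appear in the endpoint's output, but the values here are made up):

```shell
# Write a small sample in the same format as /api/monitor/metrics
# (illustrative values only), then check that a given metric is present.
cat > metrics.sample <<'EOF'
iris_cpu_usage 1
iris_glo_ref_per_sec 147
iris_process_count 62
EOF

# grep exits 0 (and prints the matching line) when the metric is found.
grep '^iris_cpu_usage' metrics.sample
```

If the grep comes back empty on the real download, the endpoint is up but not serving the metric you expected.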

 

Thanks Robert for your comment.

Merging globals is exactly what the toArchive method does here:
Class data.archive.person Extends (%Persistent, data.current.person)
{

Parameter DEFAULTGLOBAL = "^off.person";

/// Archive rows object by object: open each source row, copy every
/// property into a new target object, and save it.
ClassMethod archive(purgeArchive As %Integer = 0, purgeSource As %Integer = 0) As %Status
{
    set sc = $$$OK , tableName = ""
    set (archived,archivedErrors, severity) = 0

    set sourceClassName = $PIECE(##class(%Dictionary.ClassDefinition).%OpenId(..%ClassName(1)).Super,",",2)
    set targetClassName = ..%ClassName(1)

    set sourceClass = ##class(%Dictionary.ClassDefinition).%OpenId(sourceClassName) 
    set targetClass = ##class(%Dictionary.ClassDefinition).%OpenId(targetClassName)

    set tableName = $$$CLASSsqlschemaname($$$gWRK,sourceClassName) _"."_  $$$CLASSsqltablename($$$gWRK,sourceClassName)

    if $ISOBJECT(sourceClass) 
     & $ISOBJECT(targetClass)
     & tableName '= "" {
        if $ISOBJECT(sourceClass.Storages.GetAt(1)) 
         & $ISOBJECT(targetClass.Storages.GetAt(1))
         {
            // Read the storage locations only after checking that both
            // classes exist and have a storage definition; dereferencing
            // a null oref here would throw an <INVALID OREF> error.
            set sourceDataLocation = sourceClass.Storages.GetAt(1).DataLocation
            set sourceIndexLocation = sourceClass.Storages.GetAt(1).IndexLocation
            set sourceStreamLocation = sourceClass.Storages.GetAt(1).StreamLocation

            set targetDataLocation = targetClass.Storages.GetAt(1).DataLocation
            set targetIndexLocation = targetClass.Storages.GetAt(1).IndexLocation
            set targetStreamLocation = targetClass.Storages.GetAt(1).StreamLocation

            set tStatement=##class(%SQL.Statement).%New(1) 
            kill sql
            set sql($i(sql)) = "SELECT" 
            set sql($i(sql)) = "id"  
            set sql($i(sql)) = "FROM"
            set sql($i(sql)) = tableName
            set sc = tStatement.%Prepare(.sql) 
            set result = tStatement.%Execute()

            kill:purgeArchive @targetDataLocation, @targetIndexLocation, @targetStreamLocation 

            while result.%Next() {
                set source = $CLASSMETHOD(sourceClassName,"%OpenId",result.%Get("id"))

                if $ISOBJECT(source) {
                    set archive = $CLASSMETHOD(targetClassName,"%New")

                    for i = 1:1:sourceClass.Properties.Count() {
                        set propertyName = sourceClass.Properties.GetAt(i).Name
                        set $PROPERTY(archive,propertyName) = $PROPERTY(source,propertyName)
                    }

                    set sc = archive.%Save()
                    if $$$ISOK(sc) {
                        set archived = archived + 1
                    } else {
                        set archivedErrors = archivedErrors + 1
                    }
                }
            }

            kill:purgeSource @sourceDataLocation, @sourceIndexLocation, @sourceStreamLocation

            set msg ="archive data from " _ sourceClassName _ " into "_ targetClassName _ " result:" _ archived _ " archived (errors:" _ archivedErrors _ ")"

       } else {
            set severity = 1
            set msg = "ERROR WHILE ARCHIVING " _ sourceClassName _ " in "_ targetClassName _ " : classes have no storage definition"
        }
    } else {
        set severity = 1
        set msg = "ERROR WHILE ARCHIVING " _ sourceClassName _ " in "_ targetClassName _ " : classes not found in %Dictionary.ClassDefinition"
    }
    do ##class(%SYS.System).WriteToConsoleLog(msg,0,severity)
    Return sc
}

ClassMethod toArchive(purgeArchive As %Integer = 0, purgeSource As %Integer = 0) As %Status
{
    set sc=$$$OK

    set sourceClassName = $PIECE(##class(%Dictionary.ClassDefinition).%OpenId(..%ClassName(1)).Super,",",2)
    set targetClassName = ..%ClassName(1)
    set sourceClass = ##class(%Dictionary.ClassDefinition).%OpenId(sourceClassName) 
    set targetClass = ##class(%Dictionary.ClassDefinition).%OpenId(targetClassName)

    if $ISOBJECT(sourceClass) 
     & $ISOBJECT(targetClass) {
        if $ISOBJECT(sourceClass.Storages.GetAt(1)) 
         & $ISOBJECT(targetClass.Storages.GetAt(1))
         {
    
            set sourceDataLocation = sourceClass.Storages.GetAt(1).DataLocation
            set sourceIndexLocation = sourceClass.Storages.GetAt(1).IndexLocation
            set sourceStreamLocation = sourceClass.Storages.GetAt(1).StreamLocation

            set targetDataLocation = targetClass.Storages.GetAt(1).DataLocation
            set targetIndexLocation = targetClass.Storages.GetAt(1).IndexLocation
            set targetStreamLocation = targetClass.Storages.GetAt(1).StreamLocation

            kill:purgeArchive @targetDataLocation, @targetIndexLocation, @targetStreamLocation 

            merge @targetDataLocation = @sourceDataLocation
            merge @targetIndexLocation = @sourceIndexLocation
            merge @targetStreamLocation = @sourceStreamLocation

            set ^mergeTrace($i(^mergeTrace)) = $lb($zdt($h,3),sourceDataLocation)

            kill:purgeSource @sourceDataLocation, @sourceIndexLocation, @sourceStreamLocation

            set severity = 0
            set msg = "ARCHIVED " _ sourceClassName _ " in "_ targetClassName _ " SUCCESSFULLY"
                    

        } else {
            set severity = 1
            set msg = "ERROR WHILE ARCHIVING " _ sourceClassName _ " in "_ targetClassName _ " : classes have no storage definition"
        }
    } else {
        set severity = 1
        set msg = "ERROR WHILE ARCHIVING " _ sourceClassName _ " in "_ targetClassName _ " : classes not found in %Dictionary.ClassDefinition"
    }
    do ##class(%SYS.System).WriteToConsoleLog(msg,0,severity)
    return sc
}

Storage Default
{
<data name="personDefaultData">
<value name="1">
<value>%%CLASSNAME</value>
</value>
<value name="2">
<value>name</value>
</value>
<value name="3">
<value>dob</value>
</value>
<value name="4">
<value>activ</value>
</value>
<value name="5">
<value>created</value>
</value>
</data>
<datalocation>^off.personD</datalocation>
<defaultdata>personDefaultData</defaultdata>
<idlocation>^off.personD</idlocation>
<indexlocation>^off.personI</indexlocation>
<streamlocation>^off.personS</streamlocation>
<type>%Storage.Persistent</type>
}

}
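For reference, a minimal sketch of how either method would be invoked from a terminal session; the purge flags shown are just examples:

```objectscript
// Copy rows object by object into the archive class,
// clearing any previous archive content first (purgeArchive = 1):
do ##class(data.archive.person).archive(1, 0)

// Or merge the underlying globals directly and purge the
// source globals once they are merged (purgeSource = 1):
do ##class(data.archive.person).toArchive(0, 1)
```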

Hello @Robert Cemper,

thanks for your reply to this 5-year-old question!

My question was more about references that use Data Connectors in production.

We can update cubes based on external tables using data connectors through ProcessFact().
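For instance, a single fact can be reprocessed through %DeepSee.Utils; the cube name and source ID below are hypothetical:

```objectscript
// Rebuild the fact for source row 17 of the hypothetical cube "Patients",
// pulling the current values through its data connector.
set sc = ##class(%DeepSee.Utils).%ProcessFact("Patients", 17)
if 'sc { do $system.Status.DisplayError(sc) }
```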

Otherwise, I'm doing very well, thank you, and I hope the same goes for you. Seeing your tireless activity, I gather that you're doing well.

Greetings from France,

Sylvain

The alternative installation of Caché on Mac OS X is much like the installation on any UNIX® platform.

To install Caché:
  1. Obtain the installation kit from InterSystems and install it on the desktop (tar.gz)
  2. Log in as user ID root. It is acceptable to su (superuser) to root while logged in from another account.
  3. See Adjustments for Large Number of Concurrent Processes and make adjustments if needed.
  4. Follow the instructions in the Run the Installation Script section and subsequent sections of the “Installing Caché on UNIX and Linux” chapter of this guide.

Hi Evgeny, 

this code was written while upgrading a remote DeepSee instance to an async mirror (it was originally based on a shadow server configuration plus ECP access to the ^OBJ.DSTIME global from the DeepSee instance to production; this was before DSINTERVAL was introduced).

Of course, this sample can be adapted to add, remove, or modify any other parameter: just adjust the query on %Dictionary.ParameterDefinition to filter for the parameter you are targeting.
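As an illustration, a sketch of such a query; filtering on DSTIME here is just an example, and any parameter name can be substituted:

```objectscript
// List every class that defines a parameter with the given name.
set stmt = ##class(%SQL.Statement).%New()
set sc = stmt.%Prepare("SELECT parent FROM %Dictionary.ParameterDefinition WHERE Name = ?")
if sc {
    set rs = stmt.%Execute("DSTIME")
    while rs.%Next() {
        write rs.%Get("parent"), !
    }
}
```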