Sylvain Guilbaud · Feb 25, 2022

I'm running the very latest version of Docker: 4.5.0 (74594).

```
$ docker version
Client:
 Cloud integration: v1.0.22
 Version:           20.10.12
 API version:       1.41
 Go version:        go1.16.12
 Git commit:        e91ed57
 Built:             Mon Dec 13 11:46:56 2021
 OS/Arch:           darwin/amd64
 Context:           default
 Experimental:      true

Server: Docker Desktop 4.5.0 (74594)
 Engine:
  Version:          20.10.12
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.12
  Git commit:       459d0df
  Built:            Mon Dec 13 11:43:56 2021
  OS/Arch:          linux/amd64
  Experimental:     true
 containerd:
  Version:          1.4.12
  GitCommit:        7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
```
Sylvain Guilbaud · Feb 25, 2022

After a reboot, I'm now again able to reach 1 local instance (out of 2) and 0 containers, plus the IRIS-SAM instance.

docker-compose.yml

```yaml
version: '3.7'

networks:
  dockernet:
    ipam:
      driver: default
      config:
        - subnet: 172.19.0.0/24

services:
  arbiter:
    image: containers.intersystems.com/intersystems/arbiter:2022.1.0.131.0
    init: true
    command:
      - /usr/local/etc/irissys/startISCAgent.sh 2188
    hostname: arbiter
    container_name: arbiter
    ports:
      - 50100:2188
    networks:
      dockernet:
        ipv4_address: 172.19.0.100

  iris-a:
    init: true
    build:
      context: .
    image: iris:2022.1.0.114.0
    hostname: iris-a
    container_name: iris-a
    environment:
      - ISC_DATA_DIRECTORY=/InterSystems
    volumes:
      - ./data:/data
      - ./volumes/InterSystems:/InterSystems
      - ./keys/iris.key:/usr/irissys/mgr/iris.key
    ports:
      - 50004:52773
      - 50005:1972
    networks:
      dockernet:
        ipv4_address: 172.19.0.10

  iris-b:
    init: true
    build:
      context: .
    image: iris:2022.1.0.114.0
    hostname: iris-b
    container_name: iris-b
    environment:
      - ISC_DATA_DIRECTORY=/InterSystems
    volumes:
      - ./data:/data
      - ./volumes/InterSystems-b:/InterSystems
      - ./keys/iris.key:/usr/irissys/mgr/iris.key
    ports:
      - 50014:52773
      - 50015:1972
    networks:
      dockernet:
        ipv4_address: 172.19.0.20

  webgateway:
    hostname: webgateway
    container_name: webgateway
    depends_on:
      - iris-a
      - iris-b
      - arbiter
    image: containers.intersystems.com/intersystems/webgateway:2022.1.0.131.0
    ports:
      - 50243:443
      - 50200:80
    environment:
      - ISC_DATA_DIRECTORY=/webgateway
      - IRIS_USER=CSPsystem
      - IRIS_PASSWORD=SYS
    networks:
      dockernet:
        ipv4_address: 172.19.0.200
    volumes:
      - "./volumes/webgateway:/webgateway"

  postgres:
    container_name: postgres
    image: postgres:13.4-alpine3.14
    environment:
      POSTGRES_PASSWORD: postgres
    volumes:
      - ./src/sql/postgreSQL:/docker-entrypoint-initdb.d/
      - ./volumes/postgreSQL:/var/lib/postgresql/data
    ports:
      - 50006:5432
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "postgres"]
      interval: 30s
      timeout: 30s
      retries: 3
    networks:
      dockernet:
        ipv4_address: 172.19.0.11

  mssql:
    container_name: mssql
    image: 'mcr.microsoft.com/mssql/server:2019-latest'
    ports:
      - '50007:1433'
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=Secret1234
    volumes:
      - './volumes/mssql:/var/opt/mssql'
    networks:
      dockernet:
        ipv4_address: 172.19.0.12

  sam-alertmanager:
    container_name: sam-alertmanager
    command:
      - --config.file=/config/isc_alertmanager.yml
      - --data.retention=24h
      - --cluster.listen-address=
    depends_on:
      - sam-iris
      - sam-prometheus
    expose:
      - '9093'
    image: prom/alertmanager:v0.20.0
    restart: on-failure
    volumes:
      - ./sam/config/alertmanager:/config
    networks:
      - dockernet

  sam-grafana:
    container_name: sam-grafana
    depends_on:
      - sam-prometheus
    expose:
      - '3000'
    image: grafana/grafana:6.7.1
    restart: on-failure
    volumes:
      - ./sam/data/grafana:/var/lib/grafana
      - ./sam/config/grafana/grafana.ini:/etc/grafana/grafana.ini
      - ./sam/config/grafana/datasource.yml:/etc/grafana/provisioning/datasources/datasource.yml
      - ./sam/config/grafana/dashboard-provider.yml:/etc/grafana/provisioning/dashboards/dashboard-provider.yml
      - ./sam/config/grafana/dashboard.json:/var/lib/grafana/dashboards/dashboard.json
    networks:
      - dockernet

  sam-iris:
    container_name: sam-iris
    environment:
      - ISC_DATA_DIRECTORY=/dur/iconfig
    expose:
      - '51773'
      - '52773'
    hostname: IRIS
    image: store/intersystems/sam:1.0.0.115
    init: true
    restart: on-failure
    volumes:
      - ./sam/data/iris:/dur
      - ./sam/config:/config
    networks:
      - dockernet

  sam-nginx:
    container_name: sam-nginx
    depends_on:
      - sam-iris
      - sam-prometheus
      - sam-grafana
    image: nginx:1.17.9-alpine
    ports:
      - 8080:8080
    restart: on-failure
    volumes:
      - ./sam/config/nginx/nginx.conf:/etc/nginx/nginx.conf
    networks:
      - dockernet

  sam-prometheus:
    container_name: sam-prometheus
    command:
      - --web.enable-lifecycle
      - --config.file=/config/isc_prometheus.yml
      - --storage.tsdb.retention.time=2h
    networks:
      - dockernet
    depends_on:
      - sam-iris
    expose:
      - '9090'
    image: prom/prometheus:v2.17.1
    restart: on-failure
    volumes:
      - ./sam/config/prometheus:/config

  # openldap:
  #   image: bitnami/openldap:2
  #   ports:
  #     - '50008:1389'
  #     - '50009:1636'
  #   environment:
  #     - LDAP_ADMIN_USERNAME=admin
  #     - LDAP_ADMIN_PASSWORD=adminpassword
  #     - LDAP_USERS=user01,user02
  #     - LDAP_PASSWORDS=password1,password2
  #   volumes:
  #     - ./volumes/openldap_data:/bitnami/openldap
  #   networks:
  #     dockernet:
  #       ipv4_address: 172.19.0.172
```

docker ps

```
$ docker ps
CONTAINER ID  IMAGE                                                                COMMAND                 CREATED             STATUS                                PORTS                                                                              NAMES
5ee1a03673cb  nginx:1.17.9-alpine                                                  "nginx -g 'daemon of…"  About a minute ago  Up About a minute                     80/tcp, 0.0.0.0:8080->8080/tcp                                                     sam-nginx
cce6e64a4fd6  grafana/grafana:6.7.1                                                "/run.sh"               About a minute ago  Up About a minute                     3000/tcp                                                                           sam-grafana
bbadfa5c326e  prom/alertmanager:v0.20.0                                            "/bin/alertmanager -…"  About a minute ago  Up About a minute                     9093/tcp                                                                           sam-alertmanager
89df4b965a3b  containers.intersystems.com/intersystems/webgateway:2022.1.0.131.0   "/startWebGateway"      About a minute ago  Up About a minute (healthy)           0.0.0.0:50200->80/tcp, 0.0.0.0:50243->443/tcp                                      webgateway
260b51880b60  prom/prometheus:v2.17.1                                              "/bin/prometheus --w…"  About a minute ago  Up About a minute                     9090/tcp                                                                           sam-prometheus
961b84b9ff4a  iris:2022.1.0.114.0                                                  "/iris-main"            About a minute ago  Up About a minute (health: starting)  2188/tcp, 53773/tcp, 54773/tcp, 0.0.0.0:50015->1972/tcp, 0.0.0.0:50014->52773/tcp  iris-b
b0c23098794a  postgres:13.4-alpine3.14                                             "docker-entrypoint.s…"  About a minute ago  Up About a minute (healthy)           0.0.0.0:50006->5432/tcp                                                            postgres
4cb33c8c36e3  store/intersystems/sam:1.0.0.115                                     "/iris-main"            About a minute ago  Up About a minute (healthy)           2188/tcp, 51773/tcp, 52773/tcp, 53773/tcp, 54773/tcp                               sam-iris
54e117a4e856  iris:2022.1.0.114.0                                                  "/iris-main"            About a minute ago  Up About a minute (healthy)           2188/tcp, 53773/tcp, 54773/tcp, 0.0.0.0:50005->1972/tcp, 0.0.0.0:50004->52773/tcp  iris-a
2c5133d95693  containers.intersystems.com/intersystems/arbiter:2022.1.0.131.0      "/arbiterEntryPoint.…"  About a minute ago  Up About a minute (healthy)           0.0.0.0:50100->2188/tcp                                                            arbiter
307e59e12fdc  mcr.microsoft.com/mssql/server:2019-latest                           "/opt/mssql/bin/perm…"  About a minute ago  Up About a minute                     0.0.0.0:50007->1433/tcp                                                            mssql
```

isc_prometheus.yml

```yaml
alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - alertmanager:9093
global:
  evaluation_interval: 15s
  scrape_interval: 15s
remote_read:
  - url: http://iris:52773/api/sam/private/db/read
remote_write:
  - url: http://iris:52773/api/sam/private/db/write
rule_files:
  - ./isc_alert_rules.yml
scrape_configs:
  - job_name: SAM
    metrics_path: /api/monitor/metrics
    scheme: http
    static_configs:
      - labels:
          cluster: "2"
        targets:
          - 192.168.1.100:52774
          - 192.168.1.100:52773
      - labels:
          cluster: "3"
        targets:
          - 192.168.1.100:50004
          - 192.168.1.100:50014
          - 0.0.0.0:50004
          - 0.0.0.0:50014
      - labels:
          cluster: "4"
        targets:
          - 192.168.1.100:8080
```
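As a sanity check on the `scrape_configs` above, the sketch below rebuilds the URLs Prometheus composes from `scheme`, `metrics_path`, and each `targets` entry, so you can `curl` them one by one. The helper name `scrape_urls` is mine, not part of Prometheus or SAM; note also that the `0.0.0.0:…` targets resolve to the Prometheus container itself rather than the Docker host, which is a likely reason those targets never answer.

```python
# Rebuild the scrape URLs Prometheus derives from a static_config:
# url = <scheme>://<target><metrics_path>
# Targets copied from the isc_prometheus.yml above (0.0.0.0 entries omitted).
scheme = "http"
metrics_path = "/api/monitor/metrics"
targets = {
    "2": ["192.168.1.100:52774", "192.168.1.100:52773"],
    "3": ["192.168.1.100:50004", "192.168.1.100:50014"],
    "4": ["192.168.1.100:8080"],
}

def scrape_urls(scheme, path, targets_by_cluster):
    """Return {cluster: [url, ...]} the way Prometheus composes them."""
    return {
        cluster: [f"{scheme}://{t}{path}" for t in ts]
        for cluster, ts in targets_by_cluster.items()
    }

urls = scrape_urls(scheme, metrics_path, targets)
print(urls["3"][0])  # http://192.168.1.100:50004/api/monitor/metrics
```

Each printed URL should return the IRIS metrics page when the target is healthy.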
Sylvain Guilbaud · Feb 25, 2022

For containers, I'm using docker-compose. I've tried putting SAM in the same yml file so that everything is on the same network, but nothing works (0.0.0.0, etc.).
Sylvain Guilbaud · Feb 25, 2022

Thanks Dmitry for your reply. Actually, I know all of this; that's why I don't understand why it's not working any more...

```
gh repo clone intersystems-community/sam
cd sam
tar xvzf sam-1.0.0.115-unix.tar.gz
cd sam-1.0.0.115-unix
./start.sh
```

Then I create a cluster plus a target on my local (non-container) instance:

```
$ iris list irishealth
Configuration 'IRISHEALTH'
    directory:    /Users/guilbaud/is/irishealth
    versionid:    2021.2.0.649.0
    datadir:      /Users/guilbaud/is/irishealth
    conf file:    iris.cpf  (SuperServer port = 61773, WebServer = 52773)
    status:       running, since Fri Feb 25 15:35:32 2022
    state:        ok
    product:      InterSystems IRISHealth
```

I check that /api/monitor/metrics responds correctly:

```
$ curl http://127.0.0.1:52773/api/monitor/metrics -o metrics
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 17634  100 17634    0     0  14174      0  0:00:01  0:00:01 --:--:-- 14383
```
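Beyond checking that the download succeeds, it can help to verify that the file actually contains Prometheus-format samples. The sketch below counts sample lines in the exposition format; the payload and the `count_metrics` helper are illustrative, not real output from an IRIS instance.

```python
# Minimal check of the Prometheus exposition format served by
# /api/monitor/metrics: metric samples are one per line, while
# comments (#) and blank lines carry no samples.
sample = """\
iris_cpu_usage 2
iris_csp_activity{id="127.0.0.1:52773"} 10
# HELP lines and blanks are ignored

iris_glo_ref_per_sec 155
"""

def count_metrics(text):
    """Count metric sample lines, skipping comments and blank lines."""
    return sum(
        1 for line in text.splitlines()
        if line.strip() and not line.startswith("#")
    )

print(count_metrics(sample))  # 3
```

A zero count on a downloaded `metrics` file would mean the endpoint answered but returned no usable samples.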
Sylvain Guilbaud · Feb 22, 2022

Thanks Robert for your comment. Merging globals is exactly what the toArchive method does here:

```
Class data.archive.person Extends (%Persistent, data.current.person)
{

Parameter DEFAULTGLOBAL = "^off.person";

/// Description
ClassMethod archive(purgeArchive As %Integer = 0, purgeSource As %Integer = 0) As %Status
{
    set sc = $$$OK , tableName = ""
    set (archived,archivedErrors,severity) = 0
    set sourceClassName = $PIECE(##class(%Dictionary.ClassDefinition).%OpenId(..%ClassName(1)).Super,",",2)
    set targetClassName = ..%ClassName(1)
    set sourceClass = ##class(%Dictionary.ClassDefinition).%OpenId(sourceClassName)
    set targetClass = ##class(%Dictionary.ClassDefinition).%OpenId(targetClassName)
    set sourceDataLocation = sourceClass.Storages.GetAt(1).DataLocation
    set sourceIndexLocation = sourceClass.Storages.GetAt(1).IndexLocation
    set sourceStreamLocation = sourceClass.Storages.GetAt(1).StreamLocation
    set targetDataLocation = targetClass.Storages.GetAt(1).DataLocation
    set targetIndexLocation = targetClass.Storages.GetAt(1).IndexLocation
    set targetStreamLocation = targetClass.Storages.GetAt(1).StreamLocation
    set tableName = $$$CLASSsqlschemaname($$$gWRK,sourceClassName) _"."_ $$$CLASSsqltablename($$$gWRK,sourceClassName)
    if $ISOBJECT(sourceClass) & $ISOBJECT(targetClass) & (tableName '= "") {
        if $ISOBJECT(sourceClass.Storages.GetAt(1)) & $ISOBJECT(targetClass.Storages.GetAt(1)) {
            set tStatement = ##class(%SQL.Statement).%New(1)
            kill sql
            set sql($i(sql)) = "SELECT"
            set sql($i(sql)) = "id"
            set sql($i(sql)) = "FROM"
            set sql($i(sql)) = tableName
            set sc = tStatement.%Prepare(.sql)
            set result = tStatement.%Execute()
            kill:purgeArchive @targetDataLocation, @targetIndexLocation, @targetStreamLocation
            while result.%Next() {
                set source = $CLASSMETHOD(sourceClassName,"%OpenId",result.%Get("id"))
                if $ISOBJECT(source) {
                    set archive = $CLASSMETHOD(targetClassName,"%New")
                    for i = 1:1:sourceClass.Properties.Count() {
                        set propertyName = sourceClass.Properties.GetAt(i).Name
                        set $PROPERTY(archive,propertyName) = $PROPERTY(source,propertyName)
                    }
                    set sc = archive.%Save()
                    if sc {
                        set archived = archived + 1
                    } else {
                        set archivedErrors = archivedErrors + 1
                    }
                }
            }
            kill:purgeSource @sourceDataLocation, @sourceIndexLocation, @sourceStreamLocation
            set msg = "archive data from " _ sourceClassName _ " into "_ targetClassName _ " result:" _ archived _ " archived (errors:" _ archivedErrors _ ")"
        } else {
            set severity = 1
            set msg = "ERROR WHILE ARCHIVING " _ sourceClassName _ " in "_ targetClassName _ " : " _ " classes have not storage definition"
        }
    } else {
        set severity = 1
        set msg = "ERROR WHILE ARCHIVING " _ sourceClassName _ " in "_ targetClassName _ " : " _ " classes not found in %Dictionary.ClassDefinition"
    }
    do ##class(%SYS.System).WriteToConsoleLog(msg,0,severity)
    Return sc
}

ClassMethod toArchive(purgeArchive As %Integer = 0, purgeSource As %Integer = 0) As %Status
{
    set sc = $$$OK
    set sourceClassName = $PIECE(##class(%Dictionary.ClassDefinition).%OpenId(..%ClassName(1)).Super,",",2)
    set targetClassName = ..%ClassName(1)
    set sourceClass = ##class(%Dictionary.ClassDefinition).%OpenId(sourceClassName)
    set targetClass = ##class(%Dictionary.ClassDefinition).%OpenId(targetClassName)
    if $ISOBJECT(sourceClass) & $ISOBJECT(targetClass) {
        if $ISOBJECT(sourceClass.Storages.GetAt(1)) & $ISOBJECT(targetClass.Storages.GetAt(1)) {
            set sourceDataLocation = sourceClass.Storages.GetAt(1).DataLocation
            set sourceIndexLocation = sourceClass.Storages.GetAt(1).IndexLocation
            set sourceStreamLocation = sourceClass.Storages.GetAt(1).StreamLocation
            set targetDataLocation = targetClass.Storages.GetAt(1).DataLocation
            set targetIndexLocation = targetClass.Storages.GetAt(1).IndexLocation
            set targetStreamLocation = targetClass.Storages.GetAt(1).StreamLocation
            kill:purgeArchive @targetDataLocation, @targetIndexLocation, @targetStreamLocation
            merge @targetDataLocation = @sourceDataLocation
            merge @targetIndexLocation = @sourceIndexLocation
            merge @targetStreamLocation = @sourceStreamLocation
            set ^mergeTrace($i(^mergeTrace)) = $lb($zdt($h,3),sourceDataLocation)
            kill:purgeSource @sourceDataLocation, @sourceIndexLocation, @sourceStreamLocation
            set severity = 0
            set msg = "ARCHIVING " _ sourceClassName _ " in "_ targetClassName _ " SUCCESSFULLY"
        } else {
            set severity = 1
            set msg = "ERROR WHILE ARCHIVING " _ sourceClassName _ " in "_ targetClassName _ " : " _ " classes have not storage definition"
        }
    } else {
        set severity = 1
        set msg = "ERROR WHILE ARCHIVING " _ sourceClassName _ " in "_ targetClassName _ " : " _ " classes not found in %Dictionary.ClassDefinition"
    }
    do ##class(%SYS.System).WriteToConsoleLog(msg,0,severity)
    return sc
}

Storage Default
{
<data name="personDefaultData">
<value name="1">
<value>%%CLASSNAME</value>
</value>
<value name="2">
<value>name</value>
</value>
<value name="3">
<value>dob</value>
</value>
<value name="4">
<value>activ</value>
</value>
<value name="5">
<value>created</value>
</value>
</data>
<datalocation>^off.personD</datalocation>
<defaultdata>personDefaultData</defaultdata>
<idlocation>^off.personD</idlocation>
<indexlocation>^off.personI</indexlocation>
<streamlocation>^off.personS</streamlocation>
<type>%Storage.Persistent</type>
}

}
```
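For readers not fluent in ObjectScript, the core of `toArchive()` is a kill/merge/kill sequence on whole globals. The sketch below is only an analogy, with Python dicts standing in for IRIS globals (it is not an InterSystems API): an optional purge of the target, a MERGE that copies the entire subtree of the source into the target, then an optional purge of the source.

```python
# Analogy of toArchive(): dicts play the role of the ^off.person* globals.
def to_archive(source, target, purge_archive=False, purge_source=False):
    if purge_archive:
        target.clear()        # kill:purgeArchive @targetDataLocation
    target.update(source)     # merge @targetDataLocation = @sourceDataLocation
    if purge_source:
        source.clear()        # kill:purgeSource @sourceDataLocation
    return target

person = {1: ("Alice",), 2: ("Bob",)}   # stands in for the source global
off_person = {}                         # stands in for ^off.personD
to_archive(person, off_person, purge_source=True)
print(off_person)  # {1: ('Alice',), 2: ('Bob',)}
print(person)      # {}
```

The real MERGE operates at the storage level, which is why it is so much faster than the row-by-row `%OpenId`/`%Save` loop in `archive()`.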
Sylvain Guilbaud · Feb 22, 2022

Thanks Eduard for sharing your code implementing a very powerful approach to data snapshots.
Sylvain Guilbaud · Feb 21, 2022

You can find the full definitions of the current and archive classes on GitHub.
Sylvain Guilbaud · Feb 19, 2022

Thanks for sharing this explanation. If you want to avoid adding the WITH clause to every DDL statement, you can also change this default behavior with:

```
SET status=$SYSTEM.SQL.Util.SetOption("DDLUseExtentSet",0,.oldval)
```
Sylvain Guilbaud · Feb 11, 2022

Did you try pulling the containers after first logging in successfully?

```
echo $PASSWORD | docker login -u=your-login --password-stdin containers.intersystems.com
docker pull containers.intersystems.com/intersystems/iris:2022.1.0.114.0
```
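A common source of pull failures is logging in to a different registry than the one named in the image reference. The sketch below splits a fully qualified reference like the one above into registry, repository, and tag so the two can be compared; `parse_image_ref` is a hypothetical helper using a simplified version of Docker's naming rules (Docker itself treats the first component as a registry only when it contains a `.` or `:`).

```python
# Split "registry/repository:tag" so the login host can be checked
# against the host embedded in the image reference.
def parse_image_ref(ref):
    name, _, tag = ref.rpartition(":")        # tag follows the last ':'
    registry, _, repo = name.partition("/")   # simplified: first component = registry
    return registry, repo, tag

ref = "containers.intersystems.com/intersystems/iris:2022.1.0.114.0"
registry, repo, tag = parse_image_ref(ref)
print(registry)  # containers.intersystems.com
print(repo)      # intersystems/iris
print(tag)       # 2022.1.0.114.0
```

If `docker login` was run against a host other than the printed registry, the pull will fail with an authentication error.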
Sylvain Guilbaud · Nov 22, 2021

That's a really significant milestone. Congrats!!! 10K members in less than 6 years means a rate of approximately 140 new members each month. I'm confident that it will take less than 6 years to reach the next 10K members.
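The quoted rate is a simple back-of-the-envelope division, sketched here for the record:

```python
# 10,000 members over (just under) 6 years = 72 months
members = 10_000
months = 6 * 12
rate = members / months
print(round(rate))  # 139, i.e. roughly 140 new members per month
```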
Sylvain Guilbaud · Sep 2, 2020

Hello @Robert Cemper, thanks for your reply to this 5-year-old question. My question was more about references using Data Connectors in production. We can update cubes based on external tables using data connectors through ProcessFact(). Otherwise, I'm doing very well, thank you, and I hope the same goes for you. Seeing your tireless activity, I gather you're doing fine. Greetings from France, Sylvain
Sylvain Guilbaud · Mar 5, 2020

Hi Ron, thanks for this great article. There's a typo that raises an amusing question about the potential of Google Cloud: Using the Google Cloud Console (Easiest) https://could.google.com
Sylvain Guilbaud · Aug 12, 2019

I agree; that's a drawback. BTW, if you're a partner with a Software Update contract, you can download the IRIS for Health containers directly from the WRC Containers Distributions web site.
Sylvain Guilbaud · Aug 12, 2019

Hi Duncan, before getting IRIS for Health on Docker Store, you can start with the IRIS for Health Community Edition available on AWS, Azure, and GCP. Kind regards, Sylvain
Sylvain Guilbaud · Nov 8, 2016

The alternative installation of Caché on Mac OS X is much like the installation on any UNIX® platform. To install Caché:

- Obtain the installation kit from InterSystems and install it on the desktop (tar.gz).
- Log in as user ID root. It is acceptable to su (superuser) to root while logged in from another account.
- See Adjustments for Large Number of Concurrent Processes and make adjustments if needed.
- Follow the instructions in the Run the Installation Script section and subsequent sections of the “Installing Caché on UNIX and Linux” chapter of this guide.
Sylvain Guilbaud · Oct 5, 2016

Hi Evgeny, this code was written while upgrading a remote DeepSee instance to an async mirror (it was originally based on a shadow server configuration plus ECP access to the ^OBJ.DSTIME global from the DeepSee instance to the production; this was before DSINTERVAL was introduced). Of course, this sample can be modified to add/remove/modify any other parameter by changing the query on %Dictionary.ParameterDefinition to filter whichever parameter you are trying to add/remove/modify.