Article
Evgeny Shvarov · Jun 8, 2023

Making your own Chat with Open AI ChatGPT in Telegram Using InterSystems Interoperability

Hi Community!

Just want to share with you an exercise I made to create "my own" chat with GPT in Telegram. It became possible because of two components on Open Exchange: Telegram Adapter by @Nikolay.Soloviev and IRIS Open-AI by @Francisco.López1549. With this example you can set up your own chat with ChatGPT in Telegram. Let's see how to make it work!

Prerequisites

Create a bot using the @BotFather account and get the Bot Token. Then add the bot to a Telegram chat or channel and give it admin rights. Learn more at https://core.telegram.org/bots/api

Open an account on https://platform.openai.com/ (create one if you don't have it) and get your Open AI API Key and Organization id.

Make sure you have IPM installed in your InterSystems IRIS. If not, here is a one-liner to install it:

USER> s r=##class(%Net.HttpRequest).%New(),r.Server="pm.community.intersystems.com",r.SSLConfiguration="ISC.FeatureTracker.SSL.Config" d r.Get("/packages/zpm/latest/installer"),$system.OBJ.LoadStream(r.HttpResponse.Data,"c")

Or you can use a community docker image with IPM on board, like this:

$ docker run --rm --name iris-demo -d -p 9092:52797 -e IRIS_USERNAME=demo -e IRIS_PASSWORD=demo intersystemsdc/iris-community:latest
$ docker exec -it iris-demo iris session iris -U USER
USER>

Installation

Install the IPM package in a namespace with Interoperability enabled:

USER>zpm "install telegram-gpt"

Usage

Open the production. Put your bot's Telegram Token into both the Telegram business service and the Telegram business operation. Also initialize the St.OpenAi.BO.Api.Connect operation with your ChatGPT API key and Organization id. Start the production.

Ask any question in the Telegram chat. You'll get an answer via ChatGPT. Enjoy! And in visual trace:

Details

This example uses version 3.5 of Open AI's ChatGPT. The model can be altered in the data-transformation rule via the Model parameter.

pic 1 didn't show up - How about now? It looks like the organization field for the Open AI integration is not mandatory, so only the Telegram Token and the ChatGPT key are needed. Great!!! Good job. Thank you, @Francisco.López1549! And thanks for introducing the chatGPT package to the community!

In a new version it can also be installed as:

USER>zpm "install telegram-gpt -D TgToken=your_telegram_token -D GPTKey=your_ChatGPT_key"

so you can pass the Telegram Token and ChatGPT API key as production parameters. A new version is coming soon... New features 😉 Looking forward!
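For context, a sketch of the kind of raw request involved (assuming the OpenAI chat completions endpoint and the gpt-3.5-turbo model referenced above; the key is a placeholder, and the St.OpenAi component builds this call for you):

$ curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_OPENAI_API_KEY" \
  -d '{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hello!"}]}'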
Announcement
Jacquie Clermont · Nov 30, 2022

InterSystems a Leader in Latest Forrester Wave Report: Translytical Data Platforms Q4 2022

Hi Community: We're pleased to let you know that in Forrester's latest "Wave" report on translytical data platforms, InterSystems has been designated a "Leader." You can learn more from this InterSystems Press Release or, even better, read The Forrester Wave™: Translytical Data Platforms, Q4 2022.
Article
Megumi Kakechi · May 4, 2023

Which processes do you need to monitor in Windows to check that InterSystems IRIS is working properly

InterSystems FAQ rubric

In Windows, set the processes with the following image names as monitoring targets.

[irisdb.exe] Contains important system processes. * Please refer to the attachment for how to check the important system processes that should be monitored.

[IRISservice.exe] The process for handling IRIS instances via services. When this process ends, it does not directly affect the IRIS instance itself, but stopping IRIS (stopping the service) is no longer possible.

[ctelnetd.exe] Starts when the %Service_Telnet service is enabled and becomes a daemon process for accessing IRIS via Telnet. Once this process ends, Telnet access to the IRIS instance is no longer possible.

[iristrmd.exe] A daemon process that starts when the %Service_Console service is enabled (enabled by default) and provides access to IRIS from the server's local terminal (from the server's IRIS launcher to the terminal). After this process ends, local terminal access to the IRIS instance is no longer possible.

[iristray.exe] The process for the IRIS Launcher that appears in the system tray. Its presence or absence does not affect the IRIS instance, so there is no particular need to monitor it.

[licmanager.exe] The license server process, started when using multi-server type licenses.

[httpd.exe] The Apache process for the Management Portal.

To obtain a list of processes inside IRIS, select the following menu of the Management Portal: [Management Portal] > [System Operation] > [Process]. You can also check the process list from the command line by launching a terminal:

do ALL^%SS

Please see the following documents for details: IRIS Core Processes
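As an illustration (not part of the original FAQ), you could spot-check these image names from a Windows command prompt with tasklist; a monitoring script would alert when an expected process is missing:

tasklist /FI "IMAGENAME eq irisdb.exe"
tasklist /FI "IMAGENAME eq IRISservice.exe"
tasklist /FI "IMAGENAME eq licmanager.exe"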
Announcement
Shane Nowack · Feb 14, 2023

Beta testers needed for our upcoming InterSystems HL7 Interface Specialist certification exam

Hello InterSystems HL7 Community,

InterSystems Certification is developing a certification exam for InterSystems HL7 interface specialists and, if you match the exam candidate description given below, we would like you to beta test the exam. The exam will be available for beta testing March 28 - April 30, 2023. Interested beta testers should sign up now by emailing certification@intersystems.com (see below for more details).

Note: The InterSystems HL7 Interface Specialist certification exam is version 2.0 of our HealthShare Health Connect HL7 Interface Specialist certification exam, which will be retired at the time of the release of this exam. The expiration dates for individuals currently holding the HealthShare Health Connect HL7 Interface Specialist credential will NOT change from when the credential was earned. However, the name on your digital badge will be updated to reflect the new name at the time of release.

FAQs

What are my responsibilities as a beta tester? You will be assigned the exam and will need to take it within one month of the beta release. The exam will be administered in an online proctored environment, free of charge (the standard fee of $150 per exam is waived for all beta testers). The InterSystems Certification Team will then perform a careful statistical analysis of all beta test data to set a passing score for the exam. The analysis of the beta test results will take 6-8 weeks, and after the passing score is established, you will receive an email notification from InterSystems Certification informing you of the results. If your score on the exam is at or above the passing score, you will have earned the certification! Note: beta test scores are completely confidential.

Can I beta test the exam if I am already a certified HealthShare Health Connect HL7 Interface Specialist? Yes! If you receive a passing score on your beta test, your credential expiration date will be updated to be 5 years from the release date of the new exam.

Exam Details

Exam title: InterSystems HL7 Interface Specialist

Candidate description: An IT professional who designs, builds, and performs basic troubleshooting of HL7 interfaces with InterSystems products, and has at least six months of full-time experience in the technology.

Number of questions: 68

Time allotted to take exam: 2 hours

Recommended preparation: The classroom course Building and Managing HL7 Integrations, or equivalent experience. The online courses Configuring Validation of HL7 V2 Messages in Productions, Building Basic HL7 V2 Integrations with InterSystems, and Using HL7 V2 Bracket and Parentheses Syntax To Identify Virtual Properties are recommended, as well as experience searching InterSystems Documentation.

Recommended practical experience: 6 months - 1 year designing, building, and performing basic troubleshooting of HL7 interfaces with InterSystems products version 2019.1 or higher.

Exam practice questions: A set of practice questions will be sent to you via email when we notify you of the beta release.

Exam format: The questions are presented in two formats: multiple choice and multiple response.

System requirements for beta testing: Version 6.1.34.2 or later of Questionmark Secure, and Adobe Acrobat set as the default PDF viewer on your system.

Exam topics and content: The exam contains question items that cover the areas for the stated role, as shown in the KSA (Knowledge, Skills, Abilities) chart immediately below.
The chart below lists, for each KSA group (T1-T3), the KSAs and their target items.

T1 - Designs HL7 productions

- Interprets interface requirements
  - Determines productions and namespaces needed
  - Determines appropriate production components and the flow of messages
  - Determines production needs from interface specifications
  - Determines data transformation needs
  - Determines validation settings
  - Designs routing rules
- Chooses production architecture components
  - Identifies basic functionality of production components
  - Identifies adapters used by built-in HL7 components
  - Identifies the components in a production diagram
  - Names production components, rules, and DTLs according to conventions
- Designs custom schemas
  - Identifies custom segments in custom schema categories
  - Determines where sample messages deviate from schema requirements

T2 - Builds HL7 productions

- Adds production components to build interfaces
  - Adds production components to productions
  - Imports and exports productions and their components using the deploy tool
- Creates and applies routing rules
  - Creates and interprets rule sets
  - Accesses HL7 message contents using expression editor
  - Identifies how constraints affect code completion in the expression editor
  - Uses virtual property path syntax to implement rule conditions
  - Uses virtual property syntax and utility functions to retrieve HL7 data
  - Applies foreach actions
  - Determines problems within routing rules
- Applies key configuration settings in productions
  - Identifies key configuration settings in business services and operations
  - Maps key settings to correct components
  - Configures pool size and actor pool size settings to ensure FIFO
  - Configures alert configuration settings
  - Configures failure timeout setting to ensure FIFO
  - Configures and uses credentials
  - Identifies behavior caused by using system default settings
- Uses DTL Editor to build DTLs
  - Configures source and target message classes
  - Adds functions to DTL expressions
  - Differentiates between Create New versus Create Copy settings
  - Applies foreach actions
  - Applies if actions
  - Applies group actions
  - Applies switch actions
  - Tests DTLs
- Creates custom schemas
  - Determines custom schema requirements
  - Creates new custom schemas based on requirements
  - Identifies segment characteristics from message structure
- Applies ACK/NACK functionality
  - Selects appropriate ACK mode settings
  - Identifies default ACK/NACK settings for business service
  - Determines reply code actions for desired behaviors
- Manages messages
  - Purges messages manually
  - Purges messages automatically
  - Ensures purge tasks are running

T3 - Troubleshoots HL7 productions

- Identifies and uses tools for troubleshooting
  - Uses production configuration page
  - Configures Archive I/O setting
  - Identifies the name of the central alert component
  - Uses bad message handler
  - Uses Jobs page, Messages page, Production Monitor page, and Queues page
  - Identifies root cause of production problem states
  - Tests message routing rules using testing tool
- Uses Visual Trace
  - Locates session ID of a message
  - Interprets information displayed in Visual Trace
  - Interprets different icons in the Visual Trace
  - Locates information in tabs of Visual Trace
  - Determines causes of alerts
  - Troubleshoots production configuration problems
- Uses Message Viewer
  - Optimizes search options
  - Searches messages using Basic Criteria
  - Searches messages using Extended Criteria
  - Uses search tables in productions
  - Resends messages
  - Troubleshoots display problems in Message Viewer
- Uses logs for debugging
  - Uses Business Rule Log
  - Uses the Event Log to examine log entries
  - Identifies auditable events
  - Searches the Event Log

Interested in participating? Email certification@intersystems.com now to sign up!

Is it still possible to participate?
Article
Eduard Lebedyuk · Apr 6, 2018

Continuous Delivery of your InterSystems solution using GitLab - Part VI: Containers infrastructure

In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:

- Git 101
- Git flow (development process)
- GitLab installation
- GitLab Workflow
- Continuous Delivery
- GitLab installation and configuration
- GitLab CI/CD
- Why containers?
- Containers infrastructure
- GitLab CI/CD using containers

In the first article, we covered Git basics, why a high-level understanding of Git concepts is important for modern software development, and how Git can be used to develop software. In the second article, we covered GitLab Workflow - a complete software life cycle process and Continuous Delivery. In the third article, we covered GitLab installation and configuration and connecting your environments to GitLab. In the fourth article, we wrote a CD configuration. In the fifth article, we talked about containers and how (and why) they can be used. In this article, let's discuss the main components you'll need to run a continuous delivery pipeline with containers and how they all work together.

Our configuration would look like this. Here we can see the separation of the three main stages:

- Build
- Ship
- Run

Build

In the previous parts, the build was often incremental - we calculated the difference between the current environment and the current codebase and modified our environment to correspond to the codebase. With containers, each build is a full build. The result of a build is an image that can be run anywhere, with no external dependencies.

Ship

After our image is built and has passed the tests, it is uploaded into the registry - a specialized server to host docker images. There it can replace the previous image with the same tag. For example, due to a new commit to the master branch we build a new image (project/version:master), and if the tests pass we can replace the image in the registry with the new one under the same tag, so everyone who pulls project/version:master gets the new version.

Run

Finally, our images are deployed. A CI solution such as GitLab, or a specialized orchestrator, can control that, but the point is the same - some images are executed, periodically checked for health, and updated if a new image version becomes available.

Check out the docker webinar explaining these different stages.

Alternatively, from the commit point of view, in our delivery configuration we would:

- Push code into the GitLab repository
- Build a docker image
- Test it
- Publish the image to our docker registry
- Swap the old container with the new version from the registry

To do that we'll need:

- Docker
- A Docker registry
- A registered domain (optional but preferable)
- GUI tools (optional)

Docker

First of all, we need to run docker somewhere. I'd recommend starting with one server running a more mainstream Linux flavor like Ubuntu, RHEL or SUSE. Don't use cloud-oriented distributions like CoreOS, RancherOS etc. - they are not really aimed at beginners. Don't forget to switch the storage driver to devicemapper. If we're talking about big deployments, then container orchestration tools like Kubernetes, Rancher or Swarm can automate most tasks, but we're not going to discuss them (at least in this part).
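To make the Build and Ship stages concrete, here is a minimal sketch using plain docker commands (the image name project/version and the registry address docker.domain.com are the placeholders used throughout this article):

$ docker build -t docker.domain.com/project/version:master .
$ docker push docker.domain.com/project/version:master
$ docker run -d --name version-master docker.domain.com/project/version:master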
Docker registry

That's the first container we need to run - a stateless, scalable server-side application that stores and lets you distribute Docker images. You should use the Registry if you want to:

- tightly control where your images are being stored
- fully own your image distribution pipeline
- integrate image storage and distribution tightly into your in-house development workflow

Here's the registry documentation.

Connecting the registry and GitLab

Note: GitLab includes a built-in registry. You can run it instead of an external registry. Read the GitLab docs linked in this paragraph.

To connect your registry to GitLab, you'll need to run your registry with HTTPS support - I use Let's Encrypt to get certificates, and I followed this Gist to get certificates and pass them into a container. After making sure that the registry is available over HTTPS (you can check from a browser), follow these instructions on connecting the registry to GitLab. These instructions differ based on what you need and your GitLab installation; in my case the configuration meant adding the registry certificate and key (properly named and with correct permissions) to /etc/gitlab/ssl, and these lines to /etc/gitlab/gitlab.rb:

registry_external_url 'https://docker.domain.com'
gitlab_rails['registry_api_url'] = "https://docker.domain.com"

After reconfiguring GitLab, I could see a new Registry tab, where we're provided with information on how to properly tag newly built images so that they appear here.

Domain

In our Continuous Delivery configuration, we would automatically build an image per branch, and if the image passes the tests, we'd publish it in the registry and run it automatically, so our application would be automatically available in all "states". For example, we can access:

- several feature branches at <featureName>.docker.domain.com
- the test version at master.docker.domain.com
- the preprod version at preprod.docker.domain.com
- the prod version at prod.docker.domain.com

To do that we need a domain name and a wildcard DNS record that points *.docker.domain.com to the IP address of docker.domain.com. Another option would be to use different ports.

Nginx proxy

As we have several feature branches, we need to redirect subdomains automatically to the correct container. To do that we can use Nginx as a reverse proxy. Here's a guide. A minimal configuration sketch follows at the end of this article.

GUI tools

To start working with containers you can use either the command line or one of the GUI interfaces. There are many available, for example:

- Rancher
- MicroBadger
- Portainer
- Simple Docker UI
- ...

They allow you to create and manage containers from the GUI instead of the CLI. Here's what Rancher looks like:

GitLab runner

As before, to execute scripts on other servers we'll need to install the GitLab runner. I discussed that in the third article. Note that you'll need to use the Shell executor and not the Docker executor. The Docker executor is used when you need something from inside the image - for example, you're building an Android application in a Java container and you only need an apk. In our case we need a whole container, and for that we need the Shell executor.
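For reference, registering such a shell-executor runner might look like this (a sketch; the URL, token, and tag are placeholders for your own installation):

$ gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.domain.com/" \
  --registration-token "YOUR_TOKEN" \
  --executor "shell" \
  --description "docker host runner" \
  --tag-list "test"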
Conclusion

It's easy to start running containers, and there are many tools to choose from. Continuous Delivery using containers differs from the usual Continuous Delivery configuration in several ways:

- Dependencies are satisfied at build time; after the image is built, you don't need to think about dependencies.
- Reproducibility - you can easily reproduce any existing environment by running the same container locally.
- Speed - as containers contain nothing except what you explicitly added, they can be built faster; more importantly, they are built once and used whenever required.
- Efficiency - as above, containers produce less overhead than, for example, VMs.
- Scalability - with orchestration tools you can automatically scale your application to the workload and consume only the resources you need right now.

What's next

In the next article, we'll create a CD configuration that leverages the InterSystems IRIS Docker container.
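Going back to the Nginx reverse proxy mentioned above, here is a minimal sketch of one per-branch server block (the subdomain and the published container port are assumptions; in practice you'd generate one such block per running branch container):

server {
    listen 80;
    server_name master.docker.domain.com;

    location / {
        # forward to the port the master-branch container publishes
        proxy_pass http://127.0.0.1:52001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}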
Announcement
Evgeny Shvarov · Apr 11, 2018

Git Flows and Continuous Delivery For InterSystems Data Platforms Webinar, 24th of April 2018

Hi, Community!

Continuous Delivery is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time. It aims at building, testing, and releasing software faster and more frequently. The approach helps reduce the cost, time, and risk of delivering changes by allowing for more incremental updates to applications in production.

Join us at 07:00 UTC on April 24th for a webinar with a live demo, "Git flows and Continuous Delivery", by @Eduard.Lebedyuk. The language of the webinar is Russian. Also, see the related articles on DC.

This webinar recording is available in a dedicated Webinars in Russian playlist on the InterSystems Developers YouTube Channel. Enjoy it!
Article
Eduard Lebedyuk · Mar 20, 2018

Continuous Delivery of your InterSystems solution using GitLab - Part IV: CD configuration

In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:

- Git 101
- Git flow (development process)
- GitLab installation
- GitLab Workflow
- Continuous Delivery
- GitLab installation and configuration
- GitLab CI/CD

In the first article, we covered Git basics, why a high-level understanding of Git concepts is important for modern software development, and how Git can be used to develop software. In the second article, we covered GitLab Workflow - a complete software life cycle process and Continuous Delivery. In the third article, we covered GitLab installation and configuration and connecting your environments to GitLab. In this article we'll finally write a CD configuration.

Plan

Environments

First of all, we need several environments and the branches that correspond to them:

Environment | Branch  | Delivery                                | Who can commit     | Who can merge
Test        | master  | Automatic                               | Developers, Owners | Developers, Owners
Preprod     | preprod | Automatic                               | No one             | Owners
Prod        | prod    | Semiautomatic (press button to deliver) | No one             | Owners

Development cycle

As an example, we'll develop one new feature using GitLab flow and deliver it using GitLab CD.

1. The feature is developed in a feature branch.
2. The feature branch is reviewed and merged into the master branch.
3. After a while (several features merged), master is merged into preprod.
4. After a while (user testing, etc.), preprod is merged into prod.

Here's how it would look (I have marked the parts that we need to develop for CD in italics):

Development and testing

- The developer commits the code for the new feature into a separate feature branch.
- After the feature becomes stable, the developer merges the feature branch into the master branch.
- Code from the master branch is delivered to the Test environment, where it's loaded and tested.

Delivery to the Preprod environment

- The developer creates a merge request from the master branch into the preprod branch.
- The repository Owner, after some time, approves the merge request.
- Code from the preprod branch is delivered to the Preprod environment.

Delivery to the Prod environment

- The developer creates a merge request from the preprod branch into the prod branch.
- The repository Owner, after some time, approves the merge request.
- The repository Owner presses the "Deploy" button.
- Code from the prod branch is delivered to the Prod environment.

Or the same in graphic form:

Application

Our application consists of two parts:

- A REST API developed on an InterSystems platform
- A client JavaScript web application

Stages

From the plan above we can determine the stages that we need to define in our Continuous Delivery configuration:

- Load - to import server-side code into InterSystems IRIS
- Test - to test client and server code
- Package - to build client code
- Deploy - to "publish" client code using a web server

Here's how that looks in the .gitlab-ci.yml configuration file:

stages:
  - load
  - test
  - package
  - deploy

Scripts

Load

Next, let's define the scripts (scripts docs). First, let's define a script "load server" that loads server-side code:

load server:
  environment:
    name: test
    url: http://test.hostname.com
  only:
    - master
  tags:
    - test
  stage: load
  script: csession IRIS "##class(isc.git.GitLab).load()"

What happens here?

- load server is the script name;
- next, we describe the environment where this script runs;
- only: master tells GitLab that this script should run only when there's a commit to the master branch;
- tags: test specifies that this script should run only on a runner which has the test tag;
- stage specifies the stage for the script;
- script defines the code to execute. In our case, we call the classmethod load from the isc.git.GitLab class.

Important note: for InterSystems IRIS, replace csession with iris session. For Windows use:

irisdb -s ../mgr -U TEST "##class(isc.git.GitLab).load()"

Now let's write the corresponding isc.git.GitLab class. All entry points in this class look like this:

ClassMethod method()
{
    try {
        // code
        halt
    } catch ex {
        write !,$System.Status.GetErrorText(ex.AsStatus()),!
        do $system.Process.Terminate(, 1)
    }
}

Note that this method can end in two ways:

- by halting the current process, which GitLab registers as a successful completion;
- by calling $system.Process.Terminate, which terminates the process abnormally and GitLab registers as an error.

That said, here's our load code:

/// Do a full load
/// do ##class(isc.git.GitLab).load()
ClassMethod load()
{
    try {
        set dir = ..getDir()
        do ..log("Importing dir " _ dir)
        do $system.OBJ.ImportDir(dir, ..getExtWildcard(), "c", .errors, 1)
        throw:$get(errors,0)'=0 ##class(%Exception.General).%New("Load error")
        halt
    } catch ex {
        write !,$System.Status.GetErrorText(ex.AsStatus()),!
        do $system.Process.Terminate(, 1)
    }
}

Two utility methods are called:

- getExtWildcard - to get a list of relevant file extensions
- getDir - to get the repository directory

How can we get the directory? When GitLab executes a script, it first specifies a lot of environment variables. One of them is CI_PROJECT_DIR - the full path where the repository is cloned and where the job is run. We can easily get it in our getDir method:

ClassMethod getDir() [ CodeMode = expression ]
{
##class(%File).NormalizeDirectory($system.Util.GetEnviron("CI_PROJECT_DIR"))
}

Tests

Here's the test script:

load test:
  environment:
    name: test
    url: http://test.hostname.com
  only:
    - master
  tags:
    - test
  stage: test
  script: csession IRIS "##class(isc.git.GitLab).test()"
  artifacts:
    paths:
      - tests.html

What changed? The name and script code, of course, but an artifact was also added. An artifact is a list of files and directories which are attached to a job after it completes successfully. In our case, after the tests are completed, we can generate an HTML page redirecting to the test results and make it available from GitLab.

Note that there's a lot of copy-paste from the load stage - the environment is the same. Script parts, such as environments, can be labeled separately and attached to a script. Let's define a test environment:

.env_test: &env_test
  environment:
    name: test
    url: http://test.hostname.com
  only:
    - master
  tags:
    - test

Now our test script looks like this:

load test:
  <<: *env_test
  script: csession IRIS "##class(isc.git.GitLab).test()"
  artifacts:
    paths:
      - tests.html

Next, let's execute the tests using the UnitTest framework:

/// do ##class(isc.git.GitLab).test()
ClassMethod test()
{
    try {
        set tests = ##class(isc.git.Settings).getSetting("tests")
        if (tests'="") {
            set dir = ..getDir()
            set ^UnitTestRoot = dir
            $$$TOE(sc, ##class(%UnitTest.Manager).RunTest(tests, "/nodelete"))
            $$$TOE(sc, ..writeTestHTML())
            throw:'..isLastTestOk() ##class(%Exception.General).%New("Tests error")
        }
        halt
    } catch ex {
        do ..logException(ex)
        do $system.Process.Terminate(, 1)
    }
}

The tests setting, in this case, is a path relative to the repository root where unit tests are stored. If it's empty, we skip the tests. The writeTestHTML method is used to output HTML with a redirect to the test results:

ClassMethod writeTestHTML()
{
    set text = ##class(%Dictionary.XDataDefinition).IDKEYOpen($classname(), "html").Data.Read()
    set text = $replace(text, "!!!", ..getURL())
    set file = ##class(%Stream.FileCharacter).%New()
    set name = ..getDir() _ "tests.html"
    do file.LinkToFile(name)
    do file.Write(text)
    quit file.%Save()
}

ClassMethod getURL()
{
    set url = ##class(isc.git.Settings).getSetting("url")
    set url = url _ $system.CSP.GetDefaultApp("%SYS")
    set url = url_"/%25UnitTest.Portal.Indices.cls?Index="_ $g(^UnitTest.Result, 1) _ "&$NAMESPACE=" _ $zconvert($namespace,"O","URL")
    quit url
}

ClassMethod isLastTestOk() As %Boolean
{
    set in = ##class(%UnitTest.Result.TestInstance).%OpenId(^UnitTest.Result)
    for i=1:1:in.TestSuites.Count() {
        #dim suite As %UnitTest.Result.TestSuite
        set suite = in.TestSuites.GetAt(i)
        return:suite.Status=0 $$$NO
    }
    quit $$$YES
}

XData html
{
<html lang="en-US">
<head>
<meta charset="UTF-8"/>
<meta http-equiv="refresh" content="0; url=!!!"/>
<script type="text/javascript">
window.location.href = "!!!"
</script>
</head>
<body>
If you are not redirected automatically, follow this <a href='!!!'>link to tests</a>.
</body>
</html>
}

Package

Our client is a simple HTML page:

<html>
<head>
<script type="text/javascript">
function initializePage() {
    var xhr = new XMLHttpRequest();
    var url = "${CI_ENVIRONMENT_URL}:57772/MyApp/version";
    xhr.open("GET", url, true);
    xhr.send();
    xhr.onloadend = function (data) {
        document.getElementById("version").innerHTML = "Version: " + this.response;
    };
    var xhr = new XMLHttpRequest();
    var url = "${CI_ENVIRONMENT_URL}:57772/MyApp/author";
    xhr.open("GET", url, true);
    xhr.send();
    xhr.onloadend = function (data) {
        document.getElementById("author").innerHTML = "Author: " + this.response;
    };
}
</script>
</head>
<body onload="initializePage()">
<div id = "version"></div>
<div id = "author"></div>
</body>
</html>

To build it, we need to replace ${CI_ENVIRONMENT_URL} with its value. Of course, a real-world application would probably require npm, but this is just an example. Here's the script:

package client:
  <<: *env_test
  stage: package
  script: envsubst < client/index.html > index.html
  artifacts:
    paths:
      - index.html

Deploy

And finally, we deploy our client by copying index.html into the web server root directory:

deploy client:
  <<: *env_test
  stage: deploy
  script: cp -f index.html /var/www/html/index.html

That's it!

Several environments

What if you need to execute the same (or a similar) script in several environments? Script parts can also be labels, so here's a sample configuration that loads code in the test and preprod environments:

stages:
  - load
  - test

.env_test: &env_test
  environment:
    name: test
    url: http://test.hostname.com
  only:
    - master
  tags:
    - test

.env_preprod: &env_preprod
  environment:
    name: preprod
    url: http://preprod.hostname.com
  only:
    - preprod
  tags:
    - preprod

.script_load: &script_load
  stage: load
  script: csession IRIS "##class(isc.git.GitLab).loadDiff()"

load test:
  <<: *env_test
  <<: *script_load

load preprod:
  <<: *env_preprod
  <<: *script_load

This way we can avoid copy-pasting the code. The complete CD configuration is available here. It follows the original plan of moving code between the test, preprod and prod environments.

Conclusion

Continuous Delivery can be configured to automate any required development workflow.
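The plan above calls for a semiautomatic (button-press) delivery to Prod; the article doesn't show that job, but in GitLab CI it is typically expressed with when: manual. A sketch, assuming a prod environment and a runner tagged prod, mirroring the deploy job above:

deploy prod:
  environment:
    name: prod
    url: http://prod.hostname.com
  only:
    - prod
  tags:
    - prod
  stage: deploy
  when: manual
  script: cp -f index.html /var/www/html/index.html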
Links

- Hooks repository (and sample configuration)
- Test repository
- Scripts docs
- Available environment variables

What's next

In the next article, we'll create a CD configuration that leverages the InterSystems IRIS Docker container.

Hi Eduard, I had a look at your continuous delivery articles and found them awesome! I tried to set up a similar environment, but I'm struggling with a detail... Hope you'll be able to help me out.

I currently have a working gitlab-runner installed on my Windows laptop, with a working Ensemble 2018.1.1 and the isc.gitlab package you provided.

C:\Users\gena6950>csession ENS2K18 -U ENSCHSPROD1 "##class(isc.git.GitLab).test()"
===============================================================================
Directory: C:\Gitlab-Runner\builds\ijGUv41q\0\ciussse-drit-srd\ensemble-continuous-integration-tests\Src\ENSCHS1\Tests\Unit\
===============================================================================
[...]
Use the following URL to view the result:
http://10.225.31.79:8971/csp/sys/%25UnitTest.Portal.Indices.cls?Index=23&$NAMESPACE=ENSCHSPROD1
All PASSED

C:\Users\gena6950>

I had to manually "alter" the .yml file because of a new bug with parentheses in the gitlab-runner shell commands (see https://gitlab.com/gitlab-org/gitlab-runner/issues/1941). The relevant parts of this file are below (the file itself is larger, but I think the rest is irrelevant). I put an "echo" there to see how the command was received by the runner.

stages:
  - load
  - test
  - package

variables:
  LOAD_DIFF_CMD: "##class(isc.git.GitLab).loadDiff()"
  TEST_CMD: "##class(isc.git.GitLab).test()"
  PACKAGE_CMD: "##class(isc.git.GitLab).package()"

.script_test: &script_test
  stage: test
  script:
    - echo csession ENS2K18 -U ENSCHSPROD1 "%TEST_CMD%"
    - csession ENS2K18 -U ENSCHSPROD1 "%TEST_CMD%"
  artifacts:
    paths:
      - tests.html

And this is the output seen in GitLab:

Running with gitlab-runner 11.7.0 (8bb608ff) on Laptop ACGendron ijGUv41q
Using Shell executor...
Running on CH05CHUSHDP1609...
Fetching changes...
Removing tests.html
HEAD is now at b1ef284 Ajout du fichier de config du pipeline Gitlab
Checking out b1ef284e as master...
Skipping Git submodules setup
$ echo csession ENS2K18 -U ENSCHSPROD1 "%TEST_CMD%"
csession ENS2K18 -U ENSCHSPROD1 "##class(isc.git.GitLab).test()"
$ csession ENS2K18 -U ENSCHSPROD1 "%TEST_CMD%"
<NOTOPEN>
ERROR: Job failed: exit status 1

I'm pretty sure it must be a small thing, but I can't put my finger on it! Hope you'll be able to help! Kind regards, Andre-Claude

I have not tested the code on Windows, but here's my idea. As you can see in the code for the test method, in case of exceptions I end all my processes with:

do $system.Process.Terminate(, 1)

It seems this path is getting hit. To fix this exception:

- Check that the test method actually gets called. Write to a global in the first line.
- In the exception handler, add do ex.Log() and check the application error log to see the exception details.

Thank you, I will have a look at that. Also, I'm replicating my setup on a Red Hat gitlab-runner. I'll update this post if I find my way out of this on Windows. I also noticed that the "ENVIRONMENT" variables were not passed in a way that csession understands. The $system.Util.GetEnviron("CI_PROJECT_DIR") and $system.Util.GetEnviron("CI_COMMIT_SHA") calls both return an empty string. Perhaps the <NOTOPEN> is related to the way stdout is read on Windows.
Thanks for this detailed article, but I have a few questions in mind (you have probably faced/answered them when implementing this solution):

1. Are these CI/CD builds on the corresponding servers clean builds? If a commit deletes 4-5 classes, then unless we do a "Delete Exclusive/all files" on the servers, a regular load from the sandbox may not overwrite/delete the files that are present on the server where we are building.

2. Are the Ensemble Production classes stored as one production file, but in different versions (with different settings) in the respective branches? Or is there a dedicated production file for each branch/server, so that developers merge items (Business Service, Process, etc.) as they move them from one branch to another? What is the best approach that supports the above model?

Per your second question, best practice is generally to use System Defaults, which are set in your namespace and store the production settings (rather than storing them in the Production class). This allows you to avoid having differences in the Production class between branches.

Interesting questions. There are several ways to achieve clean and pseudo-clean builds:

- Containers. Clean builds every time. The next articles in the series explore how containers can be used for CI/CD.
- Hooks. Currently I have implemented one-time and every-time hooks before and after the build. They can be used to do deletion, configuration, etc.
- Recreate. Add an action to delete before the build: DBs, namespaces, roles, web apps, and anything else you created.

I agree with @Benjamin.Spead here. System default settings are the way to go. If you're working outside of the Ensemble architecture, you can create a small settings class which gets the data from a global/table and use that (example; see also the sketch after this thread).

I agree, but unfortunately the Portal's edit forms for config items always apply settings into the production class (the XData block). Even worse, the Portal ignores the source control status of the production class, so you can't prevent these changes. Portal users have to go elsewhere to add/edit System Defaults values. It's far from obvious, nor is it easy. And because they don't have to (i.e. the users can just make the edit in the Portal panel and save it), nobody does. We raised all this years ago, but so far it's remained unaddressed. See also https://community.intersystems.com/post/system-default-settings-versus-production-settings#answer-429986

A solution for this issue was posted here.
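To illustrate the settings-class idea from the comments above, here is a hypothetical minimal sketch (this is not the actual isc.git.Settings implementation; the class name my.Settings and the global ^Settings are assumptions for illustration):

Class my.Settings Extends %RegisteredObject
{

/// Read an environment-specific setting from a global,
/// e.g. set host = ##class(my.Settings).get("host")
ClassMethod get(name As %String) As %String
{
    // ^Settings is a hypothetical global holding per-environment values
    quit $get(^Settings(name))
}

/// Store a setting, e.g. do ##class(my.Settings).set("host", "test.hostname.com")
ClassMethod set(name As %String, value As %String)
{
    set ^Settings(name) = value
}

}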
Article
Eduard Lebedyuk · Mar 26, 2018

Continuous Delivery of your InterSystems solution using GitLab - Part V: Why containers?

In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:

- Git 101
- Git flow (development process)
- GitLab installation
- GitLab Workflow
- Continuous Delivery
- GitLab installation and configuration
- GitLab CI/CD
- Why containers?
- GitLab CI/CD using containers

In the first article, we covered Git basics, why a high-level understanding of Git concepts is important for modern software development, and how Git can be used to develop software. In the second article, we covered GitLab Workflow - a complete software life cycle process and Continuous Delivery. In the third article, we covered GitLab installation and configuration and connecting your environments to GitLab. In the fourth article, we wrote a CD configuration. In this article, let's talk about containers and how (and why) they can be used.

This article assumes familiarity with the concepts of docker and containers. Check out these articles by @Luca.Ravazzolo if you want to read about containers and images.

Advantages

There are many advantages to using containers:

- Portability
- Efficiency
- Isolation
- Lightweight
- Immutability

Let's talk about them in detail.

Portability

A container wraps up an application with everything it needs to run, like configuration files and dependencies. This enables you to easily and reliably run applications in different environments, such as your local desktop, physical servers, virtual servers, testing, staging and production environments, and public or private clouds. Another part of portability is that once you have built your Docker image and checked that it runs correctly, it will run anywhere else that runs docker - which today means Windows, Linux and macOS servers.

Efficiency

You really only need your application process to run, not all the system software, etc. Containers provide exactly that - they run only the processes you explicitly need and nothing else. Since containers do not require a separate operating system, they use up fewer resources. While a VM often measures several gigabytes in size, a container usually measures only a few hundred megabytes, making it possible to run many more containers than VMs on a single server. Since containers achieve a higher utilization level of the underlying hardware, you require less hardware, resulting in a reduction of bare metal costs as well as datacenter costs.

Isolation

Containers isolate your application from everything else, and while several containers can run on the same server, they can be completely independent of each other. Any interaction between containers should be explicitly declared as such. If one container fails, it doesn't affect the others and can be quickly restarted. Security also benefits from such isolation. For example, exploiting a web server vulnerability on bare metal can potentially give an attacker access to the whole server, but in the case of containers, the attacker would only get access to the web server container.

Lightweight

Since containers do not require a separate OS, they can be started, stopped or rebooted in a matter of seconds, which speeds up all related development pipelines and time to production. You can start working sooner and spend zero time on configuration.

Immutability

Immutable infrastructure is comprised of immutable components that are replaced for every deployment, rather than being updated in place. Those components are started from a common image that is built once per deployment and can be tested and validated.
Immutability reduces inconsistency and allows replication and moving between different states of your application with ease. More on immutability.

New possibilities

All these advantages allow us to manage our infrastructure and workflow in entirely new ways.

Orchestration

There is a problem with bare metal or VM environments - they gain individuality, which brings many, usually unpleasant, surprises down the road. The answer to that is Infrastructure as Code - management of infrastructure in a descriptive model, using the same versioning as the DevOps team uses for source code. With Infrastructure as Code, a deployment command always sets the target environment into the same configuration, regardless of the environment's starting state. This is achieved either by automatically configuring the existing target or by discarding the existing target and recreating a fresh environment. Accordingly, with Infrastructure as Code, teams make changes to the environment description and version the configuration model, which is typically in well-documented code formats such as JSON. The release pipeline executes the model to configure the target environments. If the team needs to make changes, they edit the source, not the target. All this is possible and much easier to do with containers. Shutting down a container and starting a new one takes a few seconds, while provisioning a new VM takes a few minutes. And I'm not even talking about rolling a server back into a clean state.

Scaling

From the previous point, you may get the idea that infrastructure as code is static by itself. That's not so, as orchestration tools can also provide horizontal scaling (provisioning more of the same) based on the current workload. You should run only what is currently required and scale your application accordingly. This can also reduce costs.

Conclusion

Containers can streamline your development pipeline. Elimination of inconsistencies between environments allows for easier testing and debugging. Orchestration allows you to build scalable applications. Deployment or rollback to any point of immutable history is possible and easy. Organizations want to work at a higher level, where all the issues listed above are already solved, and where we find schedulers and orchestrators handling more things in an automated way for us.

What's next

In the next article, we'll talk about provisioning with containers and create a CD configuration that leverages the InterSystems IRIS Docker container.
Article
Mikhail Khomenko · Aug 16, 2017

Grafana-based GUI for mgstat, a system monitoring tool for InterSystems Caché, Ensemble or HealthShare

Hello! This article continues the article "Making Prometheus Monitoring for InterSystems Caché". We will take a look at one way of visualizing the results of the work of the ^mgstat tool. This tool provides the statistics of Caché performance, and specifically the number of calls for globals and routines (local and over ECP), the length of the write daemon’s queue, the number of blocks saved to the disk and read from it, amount of ECP traffic and more. ^mgstat can be launched separately (interactively or by a job), and in parallel with another performance measurement tool, ^pButtons.I’d like to break the narrative into two parts: the first part will graphically demonstrate the statistics collected by ^mgstat, the second one will concentrate on how exactly these stats are collected. In short, we are using $zu-functions. However, there is an object interface for the majority of collected parameters accessible via the classes of the of the SYS.Stats package. Just a fraction of the parameters that you can collect are shown in ^mgstat. Later on, we will try to show all of them on Grafana dashboards. This time, we will only work with the ones provided by ^mgstat. Apart from this, we’ll take a bite of Docker containers to see what they are.Installing DockerThe first part is about installing Prometheus and Grafana from tarballs. We will now show how to launch a monitoring server using Docker’s capabilities. This is the demo host machine:# uname -r4.8.16-200.fc24.x86_64# cat /etc/fedora-releaseFedora release 24 (Twenty Four)Two more virtual machines will be used (192.168.42.131 and 192.168.42.132) in the VMWare Workstation Pro 12.0 environment, both with Caché on board. These are the machines we will be monitoring. Versions:# uname -r3.10.0-327.el7.x86_64# cat /etc/redhat-releaseRed Hat Enterprise Linux Server release 7.2 (Maipo)…USER>write $zversionCache for UNIX (Red Hat Enterprise Linux for x86-64) 2016.2 (Build 721U) Wed Aug 17 2016 20:19:48 EDTLet’s install Docker on the host machine and launch it:# dnf install -y docker# systemctl start docker# systemctl status docker● docker.service — Docker Application Container EngineLoaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)Active: active (running) since Wed 2017-06-21 15:08:28 EEST; 3 days ago...Launching Prometheus in a Docker containerLet’s load the last Prometheus image:# docker pull docker.io/prom/prometheusIf we look at the Docker file, we will see that the image reads the configuration from its /etc/prometheus/prometheus.yml file, and collected statistics are saved to the /prometheus folder:…CMD [ "-config.file=/etc/prometheus/prometheus.yml", \"-storage.local.path=/prometheus", \...When starting Prometheus in a Docker container, let’s make it load the configuration file and the metrics database from the host machine. This will help us “survive” the restart of the container. Let’s now create folders for Prometheus on the host machine:# mkdir -p /opt/prometheus/data /opt/prometheus/etcLet’s create a Prometheus configuration file:# cat /opt/prometheus/etc/prometheus.ymlglobal: scrape_interval: 10sscrape_configs: - job_name: 'isc_cache' metrics_path: '/mgstat/5' # Tail 5 (sec) it's a diff time for ^mgstat. Should be less than scrape interval. 
static_configs: - targets: ['192.168.42.131:57772','192.168.42.132:57772'] basic_auth: username: 'PromUser' password: 'Secret'We can now launch a container with Prometheus:# docker run -d --name prometheus \--hostname prometheus -p 9090:9090 \-v /opt/prometheus/etc/prometheus.yml:/etc/prometheus/prometheus.yml \-v /opt/prometheus/data/:/prometheus \docker.io/prom/prometheusCheck if it launched fine:# docker ps --format "{{.ID}}: {{.Command}} {{.Status}} {{.Names}}"d3a1db5dec1a: "/bin/prometheus -con" Up 5 minutes prometheusLaunching Grafana in a Docker containerFirst, let’s download the most recent image:# docker pull docker.io/grafana/grafanaWe’ll then launch it, specifying that the Grafana database (SQLite by default) will be stored on the host machine. We’ll also make a link to the container with Prometheus, so that we can then link to it from the one containing Grafana:# mkdir -p /opt/grafana/db# docker run -d --name grafana \--hostname grafana -p 3000:3000 \--link prometheus \-v /opt/grafana/db:/var/lib/grafana \docker.io/grafana/grafana# docker ps --format "{{.ID}}: {{.Command}} {{.Status}} {{.Names}}"fe6941ce3d15: "/run.sh" Up 3 seconds grafanad3a1db5dec1a: "/bin/prometheus -con" Up 14 minutes prometheusUsing Docker-composeBoth our containers are launched separately. A more convenient method of launching several containers at once is the use of Docker-compose. Let’s install it and suspend the current two containers, then reconfigure their restart via Docker-compose and launch them again.The same in the cli language:# dnf install -y docker-compose# docker stop $(docker ps -a -q)# docker rm $(docker ps -a -q)# mkdir /opt/docker-compose# cat /opt/docker-compose/docker-compose.ymlversion: '2'services: prometheus: image: docker.io/prom/prometheus container_name: prometheus hostname: prometheus ports: - 9090:9090 volumes: - /opt/prometheus/etc/prometheus.yml:/etc/prometheus/prometheus.yml - /opt/prometheus/data/:/prometheus grafana: image: docker.io/grafana/grafana container_name: grafana hostname: grafana ports: - 3000:3000 volumes: - /opt/grafana/db:/var/lib/grafana# docker-compose -f /opt/docker-compose/docker-compose.yml up -d# # Both containers can be disabled and removed with the following command: # # docker-compose -f /opt/docker-compose/docker-compose.yml down# docker ps --format "{{.ID}}: {{.Command}} {{.Status}} {{.Names}}"620e3cb4a5c3: "/run.sh" Up 11 seconds grafanae63416e6c247: "/bin/prometheus -con" Up 12 seconds prometheusPost-installation proceduresAfter starting Grafana for the first time, you need to do two more things: change the admin password for the web interface (by default, the login/password combination is admin/admin) and add Prometheus as a data source. This can be done either from the web interface or by directly editing the Grafana SQLite database (located by default at /opt/grafana/db/grafana.db), or using REST requests.Let me show the third option:# curl -XPUT "admin:admin@localhost:3000/api/user/password" \-H "Content-Type:application/json" \-d '{"oldPassword":"admin","newPassword":"TopSecret","confirmNew":"TopSecret"}'If the password has been changed successfully, you will get the following response:{"message":"User password changed"}Response of the following type:curl: (56) Recv failure: Connection reset by peermeans that the Grafana server hasn’t started yet and we just need to wait a little longer before running the previous command again. 
You can wait like this, for example:# until curl -sf admin:admin@localhost:3000 > /dev/null; do sleep 1; echo "Grafana is not started yet";done; echo "Grafana is started"Once you’ve successfully changed the password, add a Prometheus data source:# curl -XPOST "admin:TopSecret@localhost:3000/api/datasources" \-H "Content-Type:application/json" \-d '{"name":"Prometheus","type":"prometheus","url":"http://prometheus:9090","access":"proxy"}'If the data source has been added successfully, you will get a response:{"id":1,"message":"Datasource added","name":"Prometheus"}Creating an equivalent of ^mgstat^mgstat saves output to a file and the terminal in an interactive mode. We don’t care about output to a file. This is why we are going to use the Studio to create and compile a class called my.Metrics containing some an object-oriented implementation of ^mgstat in the USER space. /// This class is an object-oriented implementation of ^mgstat routine./// Unlike the last the Caché version check is skipped./// If you want to monitor seizes you should set parameter ISSEIZEGATHERED = 1./// Unlike ^mgstat routine Seizes metrics show as diff (not as percentage)./// Some of $zutil functions are unknown for me, but they are used in ^mgstat so they're leaved here.Class my.Metrics Extends %RegisteredObject{/// Metrics prefixParameter PREFIX = "isc_cache_mgstat_";/// Metrics for Prometheus must be divided by 'new line'Parameter NL As COSEXPRESSION = "$c(10)";/// Unknown parameter -). Used as in ^mgstat.intParameter MAXVALUE = 1000000000;/// 2**64 - 10. Why minus 10? It's a question -) Used as in ^mgstat.intParameter MAXVALGLO = 18446744073709551610;/// Resources that we are interested to monitor. You can change this listParameter SEIZENAMES = "Global,ObjClass,Per-BDB";/// Default value - $zutil(69,74). You can start gather seize statistics it by setting "1"Parameter ISSEIZEGATHERED = 0;Parameter MAXECPCONN As COSEXPRESSION = "$system.ECP.MaxClientConnections()";/// Number of global buffers types (8K, 16K etc.)Parameter NUMBUFF As COSEXPRESSION = "$zutil(190, 2)";/// Memory offset (for what? 
Parameter WDWCHECK As COSEXPRESSION = "$zutil(40, 2, 146)";

/// Memory offset for write daemon phase
Parameter WDPHASEOFFSET As COSEXPRESSION = "$zutil(40, 2, 145)";

/// Memory offset for journals
Parameter JOURNALBASE As COSEXPRESSION = "$zutil(40, 2, 94)";

ClassMethod getSamples(delay As %Integer = 2) As %Status
{
    set sc = $$$OK
    try {
        set sc = ..gather(.oldValues)
        hang delay
        set sc = ..gather(.newValues)
        set sc = ..diff(delay, .oldValues, .newValues, .displayValues)
        set sc = ..output(.displayValues)
    } catch e {
        write "Error: "_e.Name_"_"_e.Location, ..#NL
    }
    quit sc
}

ClassMethod gather(Output values) As %Status
{
    set sc = $$$OK

    // Get statistics for globals
    set sc = ..getGlobalStat(.values)

    // Get write daemon statistics
    set sc = ..getWDStat(.values)

    // Get journal writes
    set values("journal_writes") = ..getJournalWrites()

    // Get seizes statistics
    set sc = ..getSeizeStat(.values)

    // Get ECP statistics
    set sc = ..getECPStat(.values)

    quit sc
}

ClassMethod diff(delay As %Integer = 2, ByRef oldValues, ByRef newValues, Output displayValues) As %Status
{
    set sc = $$$OK

    // Process metrics for globals
    set sc = ..loopGlobal("global", .oldValues, .newValues, delay, 1, .displayValues)
    set displayValues("read_ratio") = $select(
        displayValues("physical_reads") = 0: 0,
        1: $number(displayValues("logical_block_requests") / displayValues("physical_reads"),2)
    )
    set displayValues("global_remote_ratio") = $select(
        displayValues("remote_global_refs") = 0: 0,
        1: $number(displayValues("global_refs") / displayValues("remote_global_refs"),2)
    )

    // Process metrics for write daemon (not per second)
    set sc = ..loopGlobal("wd", .oldValues, .newValues, delay, 0, .displayValues)

    // Process journal writes
    set displayValues("journal_writes") = ..getDiff(oldValues("journal_writes"), newValues("journal_writes"), delay)

    // Process seize metrics
    set sc = ..loopGlobal("seize", .oldValues, .newValues, delay, 1, .displayValues)

    // Process ECP client metrics
    set sc = ..loopGlobal("ecp", .oldValues, .newValues, delay, 1, .displayValues)
    set displayValues("act_ecp") = newValues("act_ecp")

    quit sc
}

ClassMethod getDiff(oldValue As %Integer, newValue As %Integer, delay As %Integer = 2) As %Integer
{
    if (newValue < oldValue) {
        set diff = (..#MAXVALGLO - oldValue + newValue) \ delay
        if (diff > ..#MAXVALUE) set diff = newValue \ delay
    } else {
        set diff = (newValue - oldValue) \ delay
    }
    quit diff
}

ClassMethod loopGlobal(subscript As %String, ByRef oldValues, ByRef newValues, delay As %Integer = 2, perSecond As %Boolean = 1, Output displayValues) As %Status
{
    set sc = $$$OK
    set i = ""
    for {
        set i = $order(newValues(subscript, i))
        quit:(i = "")
        if (perSecond = 1) {
            set displayValues(i) = ..getDiff(oldValues(subscript, i), newValues(subscript, i), delay)
        } else {
            set displayValues(i) = newValues(subscript, i)
        }
    }
    quit sc
}

ClassMethod output(ByRef displayValues) As %Status
{
    set sc = $$$OK
    set i = ""
    for {
        set i = $order(displayValues(i))
        quit:(i = "")
        write ..#PREFIX_i," ", displayValues(i),..#NL
    }
    write ..#NL
    quit sc
}

ClassMethod getGlobalStat(ByRef values) As %Status
{
    set sc = $$$OK

    set gloStatDesc = "routine_refs,remote_routine_refs,routine_loads_and_saves,"_
        "remote_routine_loads_and_saves,global_refs,remote_global_refs,"_
        "logical_block_requests,physical_reads,physical_writes,"_
        "global_updates,remote_global_updates,routine_commands,"_
        "wij_writes,routine_cache_misses,object_cache_hit,"_
        "object_cache_miss,object_cache_load,object_references_newed,"_
        "object_references_del,process_private_global_refs,process_private_global_updates"

    set gloStat = $zutil(190, 6, 1)

    for i = 1:1:$length(gloStat, ",") {
        set values("global", $piece(gloStatDesc, ",", i)) = $piece(gloStat, ",", i)
    }
    quit sc
}

ClassMethod getWDStat(ByRef values) As %Status
{
    set sc = $$$OK

    set tempWdQueue = 0
    for b = 1:1:..#NUMBUFF {
        set tempWdQueue = tempWdQueue + $piece($zutil(190, 2, b), ",", 10)
    }

    set wdInfo = $zutil(190, 13)
    set wdPass = $piece(wdInfo, ",")
    set wdQueueSize = $piece(wdInfo, ",", 2)

    set tempWdQueue = tempWdQueue - wdQueueSize
    if (tempWdQueue < 0) set tempWdQueue = 0

    set misc = $zutil(190, 4)
    set ijuLock = $piece(misc, ",", 4)
    set ijuCount = $piece(misc, ",", 5)

    set wdPhase = 0
    if (($view(..#WDWCHECK, -2, 4)) && (..#WDPHASEOFFSET)) {
        set wdPhase = $view(..#WDPHASEOFFSET, -2, 4)
    }

    set wdStatDesc = "write_daemon_queue_size,write_daemon_temp_queue,"_
        "write_daemon_pass,write_daemon_phase,iju_lock,iju_count"

    set wdStat = wdQueueSize_","_tempWdQueue_","_wdPass_","_wdPhase_","_ijuLock_","_ijuCount

    for i = 1:1:$length(wdStat, ",") {
        set values("wd", $piece(wdStatDesc, ",", i)) = $piece(wdStat, ",", i)
    }
    quit sc
}

ClassMethod getJournalWrites() As %String
{
    quit $view(..#JOURNALBASE, -2, 4)
}

ClassMethod getSeizeStat(ByRef values) As %Status
{
    set sc = $$$OK

    set seizeStat = "", seizeStatDescList = ""
    set selectedNames = ..#SEIZENAMES
    set seizeNumbers = ..getSeizeNumbers(selectedNames)

    // seize statistics
    set isSeizeGatherEnabled = ..#ISSEIZEGATHERED

    if (seizeNumbers = "") {
        set SeizeCount = 0
    } else {
        set SeizeCount = isSeizeGatherEnabled * $length(seizeNumbers, ",")
    }

    for i = 1:1:SeizeCount {
        set resource = $piece(seizeNumbers, ",", i)
        set resourceName = ..getSeizeLowerCaseName($piece(selectedNames, ",", i))
        set resourceStat = $zutil(162, 3, resource)

        set seizeStat = seizeStat_$listbuild($piece(resourceStat, ","))
        set seizeStat = seizeStat_$listbuild($piece(resourceStat, ",", 2))
        set seizeStat = seizeStat_$listbuild($piece(resourceStat, ",", 3))

        set seizeStatDescList = seizeStatDescList_$listbuild(
            resourceName_"_seizes",
            resourceName_"_n_seizes",
            resourceName_"_a_seizes"
        )
    }

    set seizeStatDesc = $listtostring(seizeStatDescList, ",")
    set seizeStat = $listtostring(seizeStat, ",")

    if (seizeStat '= "") {
        for k = 1:1:$length(seizeStat, ",") {
            set values("seize", $piece(seizeStatDesc, ",", k)) = $piece(seizeStat, ",", k)
        }
    }
    quit sc
}

ClassMethod getSeizeNumbers(selectedNames As %String) As %String
{
    /// USER>write $zu(162,0)
    /// Pid,Routine,Lock,Global,Dirset,SatMap,Journal,Stat,GfileTab,Misc,LockDev,ObjClass...

    set allSeizeNames = $zutil(162,0)_"," // Returns all resources names
    set seizeNumbers = ""

    for i = 1:1:$length(selectedNames, ",") {
        set resourceName = $piece(selectedNames,",",i)
        continue:(resourceName = "")||(resourceName = "Unused")

        set resourceNumber = $length($extract(allSeizeNames, 1, $find(allSeizeNames, resourceName)), ",") - 1
        continue:(resourceNumber = 0)

        if (seizeNumbers = "") {
            set seizeNumbers = resourceNumber
        } else {
            set seizeNumbers = seizeNumbers_","_resourceNumber
        }
    }
    quit seizeNumbers
}

ClassMethod getSeizeLowerCaseName(seizeName As %String) As %String
{
    quit $tr($zcvt(seizeName, "l"), "-", "_")
}

ClassMethod getECPStat(ByRef values) As %Status
{
    set sc = $$$OK

    set ecpStat = ""
    if (..#MAXECPCONN '= 0) {
        set fullECPStat = $piece($system.ECP.GetProperty("ClientStats"), ",", 1, 21)
        set activeEcpConn = $system.ECP.NumClientConnections()
        set addBlocks = $piece(fullECPStat, ",", 2)
        set purgeBuffersByLocal = $piece(fullECPStat, ",", 6)
        set purgeBuffersByRemote = $piece(fullECPStat, ",", 7)
        set bytesSent = $piece(fullECPStat, ",", 19)
        set bytesReceived = $piece(fullECPStat, ",", 20)
    }

    set ecpStatDesc = "add_blocks,purge_buffers_local,"_
        "purge_server_remote,bytes_sent,bytes_received"

    set ecpStat = addBlocks_","_purgeBuffersByLocal_","_
        purgeBuffersByRemote_","_bytesSent_","_bytesReceived

    if (ecpStat '= "") {
        for l = 1:1:$length(ecpStat, ",") {
            set values("ecp", $piece(ecpStatDesc, ",", l)) = $piece(ecpStat, ",", l)
        }
        set values("act_ecp") = activeEcpConn
    }
    quit sc
}

}

In order to call my.Metrics via REST, let's create a wrapper class for it in the USER space.

Class my.Mgstat Extends %CSP.REST
{

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<Route Url="/:delay" Method="GET" Call="getMgstat"/>
</Routes>
}

ClassMethod getMgstat(delay As %Integer = 2) As %Status
{
    // By default, we use 2 second interval for averaging
    quit ##class(my.Metrics).getSamples(delay)
}

}

Creating a resource, a user and a web application

Now that we have a class feeding us metrics, we can create a RESTful web application. Like in the first article, we'll allocate a resource to this web application and create a user who will use it and on whose behalf Prometheus will be collecting metrics. Once done, let's grant the user rights to particular databases. In comparison with the first article, we have added a permission to write to the CACHESYS database (to avoid the <UNDEFINED>loop+1^mymgstat *gmethod" error) and the possibility to use the %Admin_Manage resource (to avoid the <PROTECT>gather+10^mymgstat *GetProperty,%SYSTEM.ECP" error). Let's repeat these steps on both virtual servers, 192.168.42.131 and 192.168.42.132.
Before doing that, we'll upload our code (the my.Metrics class and the my.Mgstat class) to the USER namespace on both servers; the code is available on github. That is, we perform the following steps on each virtual server:

# cd /tmp
# wget https://raw.githubusercontent.com/myardyas/prometheus/master/mgstat/cos/udl/Metrics.cls
# wget https://raw.githubusercontent.com/myardyas/prometheus/master/mgstat/cos/udl/Mgstat.cls
#
# If servers are not connected to the Internet, copy the program and the class locally, then use scp.
#
# csession <instance_name> -U user

USER>do $system.OBJ.Load("/tmp/Metrics.cls*/tmp/Mgstat.cls","ck")
USER>zn "%sys"
%SYS>write ##class(Security.Resources).Create("PromResource","Resource for Metrics web page","")
1
%SYS>write ##class(Security.Roles).Create("PromRole","Role for PromResource","PromResource:U,%Admin_Manage:U,%DB_USER:RW,%DB_CACHESYS:RW")
1
%SYS>write ##class(Security.Users).Create("PromUser","PromRole","Secret")
1
%SYS>set properties("NameSpace") = "USER"
%SYS>set properties("Description") = "RESTfull web-interface for mgstat"
%SYS>set properties("AutheEnabled") = 32 ; See description
%SYS>set properties("Resource") = "PromResource"
%SYS>set properties("DispatchClass") = "my.Mgstat"
%SYS>write ##class(Security.Applications).Create("/mgstat",.properties)
1

Check the availability of metrics using curl

(Don't forget to open port 57772 in the firewall.)

# curl --user PromUser:Secret -XGET http://192.168.42.131:57772/mgstat/5
isc_cache_mgstat_global_refs 347
isc_cache_mgstat_remote_global_refs 0
isc_cache_mgstat_global_remote_ratio 0
…

# curl --user PromUser:Secret -XGET http://192.168.42.132:57772/mgstat/5
isc_cache_mgstat_global_refs 130
isc_cache_mgstat_remote_global_refs 0
isc_cache_mgstat_global_remote_ratio 0
...

Check the availability of metrics from Prometheus

Prometheus listens on port 9090. Let's check the status of Targets first:

Then look at a random metric:

Showing one metric

We'll now show just one metric, for example isc_cache_mgstat_global_refs, as a graph. We'll need to update the dashboard and insert the graph there. To do this, we go to Grafana (http://localhost:3000, login/pass: admin/TopSecret) and add a new dashboard:

Add a graph:

Edit it by clicking on "Panel title", then "Edit":

Set Prometheus as the data source and pick our metric, isc_cache_mgstat_global_refs. Set the resolution to 1/1:

Let's give our graph a name:

Add a legend:

Click the "Save" button at the top of the window and type the dashboard's name:

We end up having something like this:

Showing all metrics

Let's add the rest of the metrics the same way. There will be two text metrics, shown as Singlestat panels. As a result, we get the following dashboard (upper and lower parts shown):

Two things obviously don't seem right:

- Scrollbars in the legend (as the number of servers goes up, you'll have to do a lot of scrolling);
- Absence of data in the Singlestat panels (which, of course, imply a single value). We have two servers and, accordingly, two values.

Adding the use of a template

Let's try fixing these issues by introducing a template for instances. To do that, we'll need to create a variable storing the value of the instance, and slightly edit our requests to Prometheus according to the rules; that is, instead of the "isc_cache_mgstat_global_refs" request, we should use "isc_cache_mgstat_global_refs{instance="[[instance]]"}" after creating an "instance" variable.

Creating a variable:

In our request to Prometheus, let's select the values of the instance labels from each metric.
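With a Prometheus data source, such a variable query typically uses Grafana's label_values helper; a minimal sketch (the metric name here is just one of ours that carries the instance label) would be:

label_values(isc_cache_mgstat_global_refs, instance)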
In the lower part, we can see that the values of our two instances have been identified. Click the "Add" button:

A variable with possible values has been added to the upper part of the dashboard:

Let us now add this variable to the requests for each panel on the dashboard; that is, turn a request like "isc_cache_mgstat_global_refs" into "isc_cache_mgstat_global_refs{instance="[[instance]]"}". The resulting dashboard will look like this (instance names have been left next to the legend on purpose):

Singlestat panels are already working:

The template of this dashboard can be downloaded from github. The process of importing it to Grafana was described in part 1 of the article.

Finally, let's make server 192.168.42.132 the ECP client for 192.168.42.131 and create globals to generate ECP traffic. We can see that ECP client monitoring is working:

Conclusion

We can replace the display of ^mgstat results in Excel with a dashboard full of nice-looking graphs that are available online. The downside is the need to use an alternative version of ^mgstat: in general, the code of the source tool can change, and that wasn't taken into account here. However, we get a really convenient method of monitoring Caché's performance.

Thank you for your attention!

To be continued...

P.S. The demo (for one instance) is available here, no login/password required.

Nice! Thanks for this! Here's what I'm currently using to deal with pButtons/mgstat output: https://community.intersystems.com/post/visualizing-data-jungle-part-iv-running-yape-docker-image Cheers, Fab

Fabian, thanks for the link! I'll try it.

Thank you for sharing. Good job.

Hi Mikhail, you've done a really nice job! I'm just curious about one point: "We don't care about output to a file." Wasn't it easier to parse mgstat's output file?

Hi, Alexey! Thanks! Regarding your question: I think in that case we would have to 1) run mgstat continuously and 2) parse its output file. Although both of these steps are not difficult, a REST interface lets us merge them into one step and run the class whenever we want. Besides, we can always extend our metrics set; for example, it's worth adding monitoring of database sizes as well as Mirroring, Shadowing, etc.

Mikhail, have you considered keeping the time series in Caché and using Grafana directly? @David Loveluck seems to have got something working in this direction: https://community.intersystems.com/post/using-grafana-directly-iris. Caché/IRIS is a powerful database, so another database in the middle feels like Boeing using parts from Airbus.

@Arto.Alatalo At that time I used Prometheus as a central monitoring system for hosts and services as well as for Caché. Also, the Simple JSON plugin would need to be improved to provide functionality similar to Prometheus/Grafana, at least as of the last time I looked at it.

Did someone keep the JSON of this dashboard?

Hi Mikhail, where did you get the information about the meaning of the values returned by $system.ECP.GetProperty("ClientStats")? Nice job, anyway ;-)

Hi David, thanks :-) The meaning is taken from the mgstat source code (%SYS, routine mgstat.int). The starting point was line 159 in my local Caché 2017.1:

i maxeccon s estats=$p($system.ECP.GetProperty("ClientStats"),",",1,21),array($i(i))=+$system.ECP.NumClientConnections(),array($i(i))=$p(estats,",",2),array($i(i))=$p(estats,",",6),array($i(i))=$p(estats,",",7),array($i(i))=$p(estats,",",19),array($i(i))=$p(estats,",",20)

Then I guessed the meanings from the subroutine "heading" (line 289). But the best option for you, I think, is to ask the WRC. Support is very good.
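For quick reference, the specific pieces used by the getECPStat method above can be inspected in a terminal; the piece-to-meaning mapping below is the one inferred from mgstat.int as discussed, not an officially documented API:

USER>set stats = $system.ECP.GetProperty("ClientStats")
USER>write $piece(stats, ",", 2)  ; add blocks
USER>write $piece(stats, ",", 6)  ; purge buffers by local
USER>write $piece(stats, ",", 7)  ; purge buffers by remote
USER>write $piece(stats, ",", 19) ; bytes sent
USER>write $piece(stats, ",", 20) ; bytes received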
Article
Eduard Lebedyuk · Aug 14, 2018

Continuous Delivery of your InterSystems solution using GitLab - Part IX: Container architecture

In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:

- Git 101
- Git flow (development process)
- GitLab installation
- GitLab Workflow
- Continuous Delivery
- GitLab installation and configuration
- GitLab CI/CD
- Why containers?
- Containers infrastructure
- CD using containers
- CD using ICM
- Container architecture

In this article, we'll talk about building your own container and deploying it.

Durable %SYS

Since containers are rather ephemeral, they should not store any application data. The Durable %SYS feature enables us to do just that: store settings, configuration, %SYS data, and so on, on a host volume, namely:

- The iris.cpf file.
- The /csp directory, containing the web gateway configuration and log files.
- The file /httpd/httpd.conf, the configuration file for the instance's private web server.
- The /mgr directory, containing the following:
  - The IRISSYS system database, comprising the IRIS.DAT and iris.lck files and the stream directory, and the iristemp, irisaudit, iris and user directories containing the IRISTEMP, IRISAUDIT, IRIS and USER system databases.
  - The write image journaling file, IRIS.WIJ.
  - The /journal directory containing journal files.
  - The /temp directory for temporary files.
  - Log files including messages.log, journal.log, and SystemMonitor.log.

Container architecture

On the other hand, we need to store application code inside our container so that we can upgrade it when required.

All that brings us to this architecture:

To achieve this, at build time we need, at the minimum, to create one additional database (to store application code) and map it into our application namespace. In my example, I'll use the USER namespace to hold application data, as it already exists and is durable.

Installer

Based on the above, our installer needs to:

- Create the APP namespace/database
- Load code into the APP namespace
- Map our application classes to the USER namespace
- Do all other installation steps (in this case I created a CSP web app and a REST app)

Class MyApp.Hooks.Local
{

Parameter Namespace = "APP";

/// See generated code in zsetup+1^MyApp.Hooks.Local.1
XData Install [ XMLNamespace = INSTALLER ]
{
<Manifest>
  <Log Text="Creating namespace ${Namespace}" Level="0"/>
  <Namespace Name="${Namespace}" Create="yes" Code="${Namespace}" Ensemble="" Data="IRISTEMP">
    <Configuration>
      <Database Name="${Namespace}" Dir="/usr/irissys/mgr/${Namespace}" Create="yes" MountRequired="true" Resource="%DB_${Namespace}" PublicPermissions="RW" MountAtStartup="true"/>
    </Configuration>
    <Import File="${Dir}Form" Recurse="1" Flags="cdk" IgnoreErrors="1" />
  </Namespace>
  <Log Text="End Creating namespace ${Namespace}" Level="0"/>

  <Log Text="Mapping to USER" Level="0"/>
  <Namespace Name="USER" Create="no" Code="USER" Data="USER" Ensemble="0">
    <Configuration>
      <Log Text="Mapping Form package to USER namespace" Level="0"/>
      <ClassMapping From="${Namespace}" Package="Form"/>
      <RoutineMapping From="${Namespace}" Routines="Form" />
    </Configuration>
    <CSPApplication Url="/" Directory="${Dir}client" AuthenticationMethods="64" IsNamespaceDefault="false" Grant="%ALL" Recurse="1" />
  </Namespace>
</Manifest>
}

/// This is a method generator whose code is generated by XGL.
/// Main setup method
/// set vars("Namespace")="TEMP3"
/// do ##class(MyApp.Hooks.Global).setup(.vars)
ClassMethod setup(ByRef pVars, pLogLevel As %Integer = 0, pInstaller As %Installer.Installer) As %Status [ CodeMode = objectgenerator, Internal ]
{
    Quit ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "Install")
}

/// Entry point
ClassMethod onAfter() As %Status
{
    try {
        write "START INSTALLER",!
        set vars("Namespace") = ..#Namespace
        set vars("Dir") = ..getDir()
        set sc = ..setup(.vars)
        write !,$System.Status.GetErrorText(sc),!

        set sc = ..createWebApp()
    } catch ex {
        set sc = ex.AsStatus()
        write !,$System.Status.GetErrorText(sc),!
    }
    quit sc
}

/// Modify web app REST
ClassMethod createWebApp(appName As %String = "/forms") As %Status
{
    set:$e(appName)'="/" appName = "/" _ appName
    #dim sc As %Status = $$$OK
    new $namespace
    set $namespace = "%SYS"
    if '##class(Security.Applications).Exists(appName) {
        set props("AutheEnabled") = $$$AutheUnauthenticated
        set props("NameSpace") = "USER"
        set props("IsNameSpaceDefault") = $$$NO
        set props("DispatchClass") = "Form.REST.Main"
        set props("MatchRoles") = ":" _ $$$AllRoleName
        set sc = ##class(Security.Applications).Create(appName, .props)
    }
    quit sc
}

ClassMethod getDir() [ CodeMode = expression ]
{
##class(%File).NormalizeDirectory($system.Util.GetEnviron("CI_PROJECT_DIR"))
}

}

To create the non-durable database, I use a subdirectory of /usr/irissys/mgr, which is not persistent. Note that the call to ##class(%File).ManagerDirectory() returns the path to the durable directory, not the path to the directory inside the container.

Continuous delivery configuration

Check part VII for complete info; all we need to do is add two options (the --volume and ISC_DATA_DIRECTORY lines, shown in bold in the original) to our existing configuration:

run image:
  stage: run
  environment:
    name: $CI_COMMIT_REF_NAME
    url: http://$CI_COMMIT_REF_SLUG.docker.eduard.win/index.html
  tags:
    - test
  script:
    - docker run -d --expose 52773 --volume /InterSystems/durable/$CI_COMMIT_REF_SLUG:/data --env ISC_DATA_DIRECTORY=/data/sys --env VIRTUAL_HOST=$CI_COMMIT_REF_SLUG.docker.eduard.win --name iris-$CI_COMMIT_REF_NAME docker.eduard.win/test/docker:$CI_COMMIT_REF_NAME --log $ISC_PACKAGE_INSTALLDIR/mgr/messages.log

The volume argument mounts a host directory into the container, and the ISC_DATA_DIRECTORY environment variable tells InterSystems IRIS which directory to use. To quote the documentation:

When you run an InterSystems IRIS container using these options, the following occurs:

- The specified external volume is mounted.
- If the durable %SYS directory specified by the ISC_DATA_DIRECTORY environment variable, iconfig/ in the preceding example, already exists and contains durable %SYS data, all of the instance's internal pointers are reset to that directory and the instance uses the data it contains.
- If the durable %SYS directory specified by the ISC_DATA_DIRECTORY environment variable already exists but does not contain durable %SYS data, no data is copied and the instance runs using the data in the installation tree inside the container, which means that the instance-specific data is not persistent. For this reason, you may want to include in scripts a check for this condition prior to running the container.
- If the durable %SYS directory specified by ISC_DATA_DIRECTORY does not exist:
  - The specified durable %SYS directory is created.
  - The directories and files listed in Contents of the Durable %SYS Directory are copied from their installed locations to the durable %SYS directory (the originals remain in place).
  - All of the instance's internal pointers are reset to the durable %SYS directory and the instance uses the data it contains.

Updates

When an application evolves and a new version (container) gets released, you may need to run some code: pre/post compile hooks, schema migrations, unit tests, and so on. It all boils down to the need to run arbitrary code, and that's why you need a framework that manages your application. In previous articles, I outlined a basic structure of this framework, but it can, of course, be considerably extended to fit specific application requirements.

Conclusion

Creating a containerized application takes some thought, but InterSystems IRIS offers several features to make this process easier.

Links

- Index
- Code for the article
- Test project
- Complete CD configuration
- Complete CD configuration with durable %SYS

Great article, Eduard, thanks! It's not clear, though, what I need to have on a production host initially. In my understanding, installed Docker and a container with IRIS are enough. In that case, how do Durable %SYS and application data (USER namespace) appear on the production host for the first time?

what do I need to have on a production host initially?

First of all you need to have:

- FQDN
- GitLab server
- Docker Registry
- InterSystems IRIS container inside your Docker Registry

They could be anywhere as long as they are accessible from the production host. After that, on the production host (and on every separate host you want to use), you need to have:

- Docker
- GitLab Runner
- Nginx reverse proxy container

After all these conditions are met, you can create a Continuous Delivery configuration in GitLab, and it would build and deploy your container to the production host.

In that case how Durable %SYS and Application data (USER NAMESPACE) appear on the production host for the first time?

When an InterSystems IRIS container is started in Durable %SYS mode, it checks the directory for durable data; if it does not exist, InterSystems IRIS creates it and copies the data from inside the container. If the directory already exists and contains databases/config/etc., then it is used. By default, InterSystems IRIS has all configs inside.

Thanks, Eduard! But why GitLab? Can I manage all the Continuous Delivery, say, with GitHub?

GitHub does not offer CI functionality but can be integrated with a CI engine.

Another question: can I deploy a container manually on a production host with a reasonable number of steps, or is CI the only option?

can I deploy a container manually

Sure, to deploy a container manually it's enough to execute this command:

docker run -d --expose 52773 --volume /InterSystems/durable/master:/data --env ISC_DATA_DIRECTORY=/data/sys --name iris-master docker.eduard.win/test/docker:master --log $ISC_PACKAGE_INSTALLDIR/mgr/messages.log

Alternatively, you can use GUI container management tools to configure the container you want to deploy. For example, here's the Portainer web interface; you can define volumes, variables, etc. there:

It also allows browsing the registry and inspecting your running containers, among other things:

Wow. Do you have any experience with Portainer + IRIS containers? I think it deserves a new article.

I ran InterSystems IRIS containers via Rancher and Portainer; it's all the same stuff: a GUI over docker run.

Excellent work! Way to go!
Thanks for sharing, Eduard. That is a good demonstration of the use of --volume in the docker run command.

In the situation of bringing up multiple containers from the same InterSystems IRIS image (such as starting up a number of sharding nodes), the system administrator may consider organizing the host so that some files/folders are shared among instances as read-only (such as the license key), and some are isolated yet temporary. Maybe we can go further and enhance the docker run command in this direction, so that the same command with a small change (such as the port number and the container name) is enough to start up a cluster of instances quickly.

I think you'll need a full-fledged orchestrator for that. docker run always starts one container, but several volumes can be mounted, some of them RW, some RO. I'm not sure about mounting the same folder into several containers (again, orchestration). You can also use the volumes_from argument to mount a directory from one container into another.

Thank you, Luca! That's right. Maybe we can consider running docker-compose as an option for such an orchestration.
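To illustrate that idea, here is a minimal docker-compose sketch; the image tag, host paths, and in-container license location are assumptions for illustration, not taken from the article:

version: '3'
services:
  iris1:
    image: docker.eduard.win/test/docker:master    # illustrative image tag
    ports:
      - "52773:52773"
    environment:
      - ISC_DATA_DIRECTORY=/data/sys
    volumes:
      - /InterSystems/durable/iris1:/data          # per-instance durable data
      - /InterSystems/license:/license:ro          # shared, read-only (hypothetical path)
  iris2:
    image: docker.eduard.win/test/docker:master
    ports:
      - "52774:52773"                              # only the host port differs
    environment:
      - ISC_DATA_DIRECTORY=/data/sys
    volumes:
      - /InterSystems/durable/iris2:/data
      - /InterSystems/license:/license:ro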
Announcement
Fabiano Sanches · Jul 12, 2023

InterSystems IRIS, IRIS for Health, & HealthShare Health Connect 2023.1.1 are available

The extended maintenance releases of InterSystems IRIS, InterSystems IRIS for Health, and HealthShare Health Connect 2023.1.1 are now available. This release provides bug fixes for the previous 2023.1.0 releases. You can find the detailed change lists / upgrade checklists on these pages:

- InterSystems IRIS
- InterSystems IRIS for Health
- HealthShare Health Connect

How to get the software

The software is available as both classic installation packages and container images. For the complete list of available installers and container images, please refer to the Supported Platforms webpage.

Full installation packages for both InterSystems IRIS and InterSystems IRIS for Health are available from the WRC's InterSystems IRIS Data Platform Full Kits page. HealthShare Health Connect kits are available from the WRC's HealthShare Full Kits page. Container images are available from the InterSystems Container Registry.

There are no Community Edition kits or containers available for this release. The build number for all kits and containers in this release is 2023.1.1.380.0.
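For example, pulling one of the container images from the InterSystems Container Registry could look like this; the exact repository path and tag should be verified against the ICR listing, so treat this as an illustration:

docker login containers.intersystems.com
docker pull containers.intersystems.com/intersystems/iris:2023.1.1.380.0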
Announcement
Sam Schafer · Jun 21, 2023

Managing InterSystems Servers: In-Person July 10-14, 2023 - Registration space available

Course: Managing InterSystems Servers

When: July 10-14, 2023, 9am to 5pm US Eastern Time (ET)

Where: In-person only at InterSystems Headquarters in Cambridge, MA.

This five-day course teaches system and database administrators how to install, configure and secure InterSystems server software, configure for high availability and disaster recovery, and monitor the system. Students also learn troubleshooting techniques. This course is applicable to both InterSystems IRIS and Caché. Although the course is mostly platform independent, students can complete the exercises using either Windows or Ubuntu.

Self-register on our registration site

Email education@intersystems.com with questions
Announcement
Celeste Canzano · Mar 24

Reminder: Beta Testers Needed for Our Upcoming InterSystems IRIS Developer Professional Certification Exam

Hello again IRIS community, We have officially released our InterSystems IRIS Developer Professional certification exam for beta testing. The beta test will be available until April 20, 2025. As a beta tester, you have the chance to earn the certification for free! Interested in beta testing? See the InterSystems IRIS Developer Professional Beta Test Developer Community post for exam details, recommended preparation, and instructions on how to schedule and take the beta exam. Thank you!
Announcement
Larry Finlayson · Mar 26

Managing InterSystems Servers – In-Person April 14-18, 2025 / Registration space available

Managing InterSystems Servers – In-Person April 14-18, 2025

Configure, manage, plan, and monitor system operations of InterSystems Data Platform technology

This five-day course teaches system and database administrators how to install, configure and secure InterSystems server software, configure for high availability and disaster recovery, and monitor the system. Students also learn troubleshooting techniques. This course is applicable to both InterSystems IRIS and Caché. Although the course is mostly platform independent, students can complete the exercises using either Windows or Ubuntu.

Self-Register Here
Announcement
Anastasia Dyubaylo · Mar 25

[Video] Building and Deploying React Frontend for the InterSystems FHIR server in 20 minutes with Lovable AI

Hi Community,

Check out this new video that shows how to build the UI for the InterSystems FHIR server from scratch using AI, with no previous frontend experience:

📺 Building and Deploying React Frontend for the InterSystems FHIR server in 20 minutes with Lovable AI

🗣 Presenter: @Evgeny Shvarov, Senior Manager of Developer and Startup Programs, InterSystems

The tools and products used:

- InterSystems IRIS for Health (FHIR Server)
- Lovable.dev (Frontend)
- VSCode (dev env)
- Github (code repository)
- ngrok (domain forwarding)

Share your thoughts or questions in the comments to this post. Enjoy!

Here is the related demo project on OEX.