Announcement
Rubens Silva · Mar 16, 2020

IRIS-CI: A docker image for running InterSystems IRIS in CI environments

Hello all! As we ObjectScript developers have been experiencing, preparing an environment to run CI-related tasks can be quite a chore. That's why I have been thinking about how we could improve this workflow, and the result of that effort is [IRIS-CI](https://openexchange.intersystems.com/package/iris-ci). See how it works [here](https://imgur.com/N7uVDNK).

### Quickstart

1. Download the image from the Docker Hub registry:

```
docker pull rfns/iris-ci:0.5.3
```

2. Run the container (with the default settings):

```
docker run --rm --name ci -t -v /path/to/your/app:/opt/ci/app rfns/iris-ci:0.5.3
```

Notice the volume mounted at `/path/to/your/app`? This is where the app should be. And that's it: the only thing required to start running the test suites is the path of the application. Also, since this is supposed to be an ephemeral, run-once container, there's no need to keep it listed after it executes; that's why the `--rm` flag is there.

### TL;DR

If you want an example of how to use it, check the usage in another of my projects, [dotenv](https://github.com/rfns/dotenv/blob/master/.github/workflows/ci.yml).

### Advanced setup

Some projects might need a more sophisticated setup in order to run their test suites. For such circumstances there are two customization levels:

1. Environment variables
2. Volume overwrite

### Environment variables

Environment variables are the simplest customization format and should suffice for most situations. There are two ways to provide an environment variable:

* `-e VAR_NAME="var value"` when using `docker run`.
* By providing a `.env` file, mounted as an extra volume for `docker run` like this: `-v /my/app/.env:/opt/ci/.env`.

> NOTE: If a variable is defined in both formats, the `-e` format takes precedence over the `.env` file.

### Types of environment variables

* Variables prefixed with `CI_{NAME}` are passed down as `name` to the installer manifest.
* Variables prefixed with `TESPARAM_{NAME}` are passed down as `NAME` to the unit test manager's UserFields property.
* `TEST_SUITE` and `TEST_CASE` control where to locate the tests and which test case to target.

Every variable is available to read from the `configuration.Envs` list, which is [passed](https://github.com/rfns/iris-ci/blob/master/ci/Runner.cls#L6) [down](https://github.com/rfns/iris-ci/blob/master/ci/Runner.cls#L53) through the `Run` and `OnAfterRun` class methods. If `TEST_CASE` is not specified, the `recursive` flag will be set. In a project with many classes it might be worthwhile to at least define `TEST_SUITE` to narrow the search scope for performance reasons.

### Volume overwrite

This image ships with a default installer that's focused on running test suites, but it's possible to overwrite the following files in order to make it execute different tasks, such as generating an XML export file for old Caché versions:

* `/opt/ci/App/Installer.cls`
* `/opt/ci/Runner.cls`

For more details on how to implement them, please check the default implementations:

* [Installer.cls](https://github.com/rfns/iris-ci/blob/master/ci/App/Installer.cls)
* [Runner.cls](https://github.com/rfns/iris-ci/blob/master/ci/Runner.cls)

> TIP: Before overwriting the default Installer.cls, check whether you really need to, because the current implementation [also allows creating configured web applications](https://github.com/rfns/iris-ci#using-the-default-installer-manifest-for-unit-tests).

EDIT: Link added.
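To make the environment-variable options described above concrete, here is a minimal sketch of a `docker run` invocation combining both customization levels. Only the `CI_`/`TESPARAM_` prefixes, `TEST_SUITE`/`TEST_CASE`, and the mount points come from the post; the specific variable names and values are hypothetical.

```
# Hypothetical example: run iris-ci with an extra .env file plus inline variables.
# The names after the CI_/TESPARAM_ prefixes and all values are made up for illustration.
docker run --rm --name ci -t \
  -v /path/to/your/app:/opt/ci/app \
  -v /path/to/your/app/.env:/opt/ci/.env \
  -e TEST_SUITE="/opt/ci/app/tests" \
  -e TEST_CASE="UnitTest.MyClass" \
  -e CI_NAMESPACE="CI" \
  -e TESPARAM_SKIPLINT="1" \
  rfns/iris-ci:0.5.3
```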
Announcement
Ksenia Samokhvalova · Mar 19, 2020

Share your thoughts about InterSystems Documentation by filling out a survey!

Hello Developer Community! We are looking to better understand how our users use the Documentation. If you have a few minutes, please fill out this quick survey - https://www.surveymonkey.com/r/HK7F5P7! Feedback from real users like you is invaluable to us and helps us create a better product. Your feedback can go further than the survey - we would love to interview you about your experience; just indicate in the survey that you're open to talking to us! Thank you so much! If you have any questions, please contact me at Ksenia.samokhvalova@intersystems.com. I look forward to hearing from you!

Ksenia

Ksenia Samokhvalova
UX Designer | InterSystems
Ksenia.samokhvalova@intersystems.com
Announcement
Anastasia Dyubaylo · May 8, 2020

InterSystems IRIS 2020.1 Tech Talk: Integrated Development Environments

Hi Community, We're pleased to invite you to join the upcoming InterSystems IRIS 2020.1 Tech Talk: Integrated Development Environments on May 19 at 10:00 AM EDT! In this edition of InterSystems IRIS 2020.1 Tech Talks, we put the spotlight on Integrated Development Environments (IDEs). We'll talk about InterSystems' latest initiative with the open source ObjectScript extension for Visual Studio Code, discussing which workflows are particularly suited to this IDE, how development, support, and enhancement requests will work in an open source ecosystem, and more.

Speakers:
🗣 @Raj.Singh5479, InterSystems Product Manager, Developer Experience
🗣 @Brett.Saviano, InterSystems Developer

Date: Tuesday, May 19, 2020
Time: 10:00 AM EDT

➡️ JOIN THE TECH TALK!

Additional Resources: ObjectScript IDEs [Documentation], Using InterSystems IDEs [Learning Course]

Hi Community! Join the Tech Talk today. 😉 ➡️ You still have time to REGISTER.
Announcement
Olga Zavrazhnova · Aug 25, 2020

Global Masters Reward: 1.5-hour consultation with InterSystems Expert

Hi Community, As you may know, on Global Masters you can redeem a consultation with an InterSystems expert on any InterSystems product: InterSystems IRIS, IRIS for Health, Interoperability (Ensemble), IRIS Analytics (DeepSee), Caché, HealthShare. And we have exciting news for you: these consultations are now available in the following languages: English, Portuguese, Russian, German, French, Italian, Spanish, Japanese, Chinese. Also! The duration has been extended to 1.5 hours for your deep dive into the topic. If you are interested, don't hesitate to redeem the reward on Global Masters! If you are not a member of Global Masters yet, you are very welcome to join here (click on the InterSystems login button and use your InterSystems WRC credentials). To learn more about Global Masters, read this article: Global Masters Advocate Hub - Start Here! See you on InterSystems Global Masters today! 🙂
Article
Timothy Leavitt · Aug 27, 2020

Continuous Integration with the InterSystems Package Manager, GitHub Actions, and Docker

### Introduction

In a previous article, I discussed patterns for running unit tests via the InterSystems Package Manager. This article goes a step further, using GitHub Actions to drive test execution and reporting. The motivating use case is running CI for one of my Open Exchange projects, AppS.REST (see the introductory article for it here). You can see the full implementation from which the snippets in this article were taken on GitHub; it could easily serve as a template for running CI for other projects using the ObjectScript package manager. Features demonstrated in this implementation include:

* Building and testing an ObjectScript package
* Reporting test coverage measurement (using the TestCoverage package) via codecov.io
* Uploading a report on test results as a build artifact

### The Build Environment

There's comprehensive documentation on GitHub Actions here; for purposes of this article, we'll just explore the aspects demonstrated in this example. A workflow in GitHub Actions is triggered by a configurable set of events and consists of a number of jobs that can run sequentially or in parallel. Each job has a set of steps - we'll go into the details of the steps for our example action in a bit. These steps consist of references to actions available on GitHub, or may just be shell commands. A snippet of the initial boilerplate in our example looks like:

```
# Continuous integration workflow
name: CI

# Controls when the action will run. Triggers the workflow on push or pull request
# events in all branches
on: [push, pull_request]

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest

    env:
      # Environment variables usable throughout the "build" job, e.g. in OS-level commands
      package: apps.rest
      container_image: intersystemsdc/iris-community:2019.4.0.383.0-zpm
      # More of these will be discussed later...

    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
    # These will be shown later...
```

For this example, there are a number of environment variables in use. To apply this example to other packages using the ObjectScript Package Manager, many of these wouldn't need to change at all, though some would.

```
env:
  # ** FOR GENERAL USE, LIKELY NEED TO CHANGE: **
  package: apps.rest
  container_image: intersystemsdc/iris-community:2019.4.0.383.0-zpm

  # ** FOR GENERAL USE, MAY NEED TO CHANGE: **
  build_flags: -dev -verbose # Load in -dev mode to get unit test code preloaded
  test_package: UnitTest

  # ** FOR GENERAL USE, SHOULD NOT NEED TO CHANGE: **
  instance: iris
  # Note: test_reports value is duplicated in test_flags environment variable
  test_reports: test-reports
  test_flags: >-
    -verbose -DUnitTest.ManagerClass=TestCoverage.Manager
    -DUnitTest.JUnitOutput=/test-reports/junit.xml
    -DUnitTest.FailuresAreFatal=1 -DUnitTest.Manager=TestCoverage.Manager
    -DUnitTest.UserParam.CoverageReportClass=TestCoverage.Report.Cobertura.ReportGenerator
    -DUnitTest.UserParam.CoverageReportFile=/source/coverage.xml
```

If you want to adapt this to your own package, just drop in your own package name and preferred container image (must include zpm - see https://hub.docker.com/r/intersystemsdc/iris-community).
You might also want to change the unit test package to match your own package's convention (if you need to load and compile unit tests before running them to deal with any load/compile dependencies; I had some weird issues specific to the unit tests for this package, so it might not even be relevant in other cases). The instance name and test_reports directory shouldn't need to be modified for other use, and the test_flags provide a good set of defaults - these support having unit test failures flag the build as failing, and also handle export of jUnit-formatted test results and a code coverage report.

### Build Steps

#### Checking out GitHub Repositories

In our motivating example, two repositories need to be checked out - the one being tested, and also my fork of Forgery (because the unit tests need it).

```
# Checks out this repository under $GITHUB_WORKSPACE, so your job can access it
- uses: actions/checkout@v2

# Also need to check out timleavitt/forgery until the official version installable via ZPM
- uses: actions/checkout@v2
  with:
    repository: timleavitt/forgery
    path: forgery
```

$GITHUB_WORKSPACE is a very important environment variable, representing the root directory where all of this runs. From a permissions perspective, you can do pretty much whatever you want within that directory; elsewhere, you may run into issues.

#### Running the InterSystems IRIS Container

After setting up a directory where we'll end up putting our test result reports, we'll run the InterSystems IRIS Community Edition (+ZPM) container for our build.

```
- name: Run Container
  run: |
    # Create test_reports directory to share test results before running container
    mkdir $test_reports
    chmod 777 $test_reports
    # Run InterSystems IRIS instance
    docker pull $container_image
    docker run -d -h $instance --name $instance -v $GITHUB_WORKSPACE:/source -v $GITHUB_WORKSPACE/$test_reports:/$test_reports --init $container_image
    echo halt > wait
    # Wait for instance to be ready
    until docker exec --interactive $instance iris session $instance < wait; do sleep 1; done
```

There are two volumes shared with the container - the GitHub workspace (so that the code can be loaded; we'll also report test coverage info back to there), and a separate directory where we'll put the jUnit test results. After "docker run" finishes, that doesn't mean the instance is fully started and ready to accept commands yet. To wait for the instance to be ready, we'll keep trying to run a "halt" command via iris session; this will fail and keep retrying once per second until it (eventually) succeeds, indicating that the instance is ready.

#### Installing test-related libraries

For our motivating use case, we'll be using two other libraries for testing - TestCoverage and Forgery. TestCoverage can be installed directly via the Community Package Manager; Forgery (currently) needs to be loaded via zpm "load"; but both approaches are valid.

```
- name: Install TestCoverage
  run: |
    echo "zpm \"install testcoverage\":1:1" > install-testcoverage
    docker exec --interactive $instance iris session $instance -B < install-testcoverage
    # Workaround for permissions issues in TestCoverage (creating directory for source export)
    chmod 777 $GITHUB_WORKSPACE

- name: Install Forgery
  run: |
    echo "zpm \"load /source/forgery\":1:1" > load-forgery
    docker exec --interactive $instance iris session $instance -B < load-forgery
```

The general approach is to write out commands to a file, then run them via iris session.
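To see this write-commands-to-a-file pattern in isolation, here is a rough, hypothetical local approximation of the same steps outside of GitHub Actions. The container name, working directory, and package name mirror the workflow's env values but are assumptions for this sketch, not part of the original workflow.

```
# Hypothetical local approximation of the pattern above (not part of the workflow itself):
# start a zpm-enabled IRIS community container, then pipe command files into iris session.
docker run -d -h iris --name iris -v "$PWD":/source --init intersystemsdc/iris-community:2019.4.0.383.0-zpm

# Wait until the instance accepts commands (same trick as the workflow)
echo halt > wait
until docker exec --interactive iris iris session iris < wait; do sleep 1; done

# Write the zpm commands to files and run them
echo 'zpm "load /source -dev -verbose":1:1' > build
echo 'zpm "apps.rest test -only":1:1' > test
docker exec --interactive iris iris session iris -B < build
docker exec --interactive iris iris session iris -B < test
```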
The extra ":1:1" in the ZPM commands indicates that the command should exit the process with an error code if an error occurs, and halt at the end if no errors occur; this means that if an error occurs, it will be reported as a failed build step, and we don't need to add a "halt" command at the end of each file.

#### Building and Testing the Package

Finally, we can actually build and run tests for our package. This is pretty simple - note the use of the $build_flags/$test_flags environment variables we defined earlier.

```
# Runs a set of commands using the runners shell
- name: Build and Test
  run: |
    # Run build
    echo "zpm \"load /source $build_flags\":1:1" > build
    # Test package is compiled first as a workaround for some dependency issues.
    echo "do \$System.OBJ.CompilePackage(\"$test_package\",\"ckd\") " > test
    # Run tests
    echo "zpm \"$package test -only $test_flags\":1:1" >> test
    docker exec --interactive $instance iris session $instance -B < build && docker exec --interactive $instance iris session $instance -B < test && bash <(curl -s https://codecov.io/bash)
```

This follows the same pattern we've seen, writing out commands to a file and then using that file as input to iris session. The last part of the last line uploads code coverage results to codecov.io. Super easy!

#### Uploading Unit Test Results

Suppose a unit test fails. It'd be really annoying to have to go back through the build log to find out what went wrong, though this may still provide useful context. To make life easier, we can upload our jUnit-formatted results and even run a third-party program to turn them into a pretty HTML report.

```
# Generate and Upload HTML xUnit report
- name: XUnit Viewer
  id: xunit-viewer
  uses: AutoModality/action-xunit-viewer@v1
  if: always()
  with:
    # With -DUnitTest.FailuresAreFatal=1 a failed unit test will fail the build before this point.
    # This action would otherwise misinterpret our xUnit style output and fail the build even if
    # all tests passed.
    fail: false
- name: Attach the report
  uses: actions/upload-artifact@v1
  if: always()
  with:
    name: ${{ steps.xunit-viewer.outputs.report-name }}
    path: ${{ steps.xunit-viewer.outputs.report-dir }}
```

This is mostly taken from the readme at https://github.com/AutoModality/action-xunit-viewer.

### The End Result

If you want to see the results of this workflow, check out:

* Logs for the CI job on intersystems/apps-rest (including build artifacts): https://github.com/intersystems/apps-rest/actions?query=workflow%3ACI
* Test coverage reports: https://codecov.io/gh/intersystems/apps-rest

Please let me know if you have any questions!
Announcement
Olga Zavrazhnova · Feb 26, 2021

Global Masters Challenge: Record a Testimonial Video About InterSystems IRIS

Hi Developers! An exciting new challenge has been introduced for Global Masters members of "Advocate" level and above: we invite you to record a 30-60 sec video answering our question: ➥ What is the value of InterSystems IRIS to you? 🎁 Reward of your choice for doing the interview: a $50 Gift Card (VISA/Amazon) or 12,000 points! Follow this direct link to the challenge for more information. Please note that the link will only work for GM members of "Advocate" level and above. You can read more about GM levels here. We would love to hear from you! See you on the Global Masters Advocate Hub today!
Discussion
Matthew Waddingham · May 17, 2021

Should we store external files in InterSystems %Stream or Windows folders

We've been tasked with developing a file upload module as part of our wider system, storing scanned documents against a patient's profile. Our InterSystems manager suggested that storing those files in the DB as streams would be the best approach, and it sounded like a solid idea: the data can be encrypted, indexed in complex ways, it's optimized for large files, and so on. However, the stakeholder questioned why we would want to do that over storing them in Windows folders, and said that putting them in the DB was nuts. So we were wondering what everyone else has done in this situation and what made them take that route.

The nice advantage of storing them in the DB is that it makes the following easier:

- refreshing earlier environments for testing
- mirroring the file contents
- encryption
- simpler, consistent backups

However, if you're talking about hundreds of GBs of data, then you can run into issues which you should weigh against the above:

- journaling volume
- .dat size
- .dat restore time

One way to help mitigate the above for larger-volume file management is to map the classes that store the stream properties into their own .DAT so they can be managed separately from other application data, and then you can even use subscript-level mapping to cap the size of the file .DATs. Hope that helps.

I can't disagree with Ben: there is a cut-off point where it makes more sense to store the files external to IRIS. However, it should be noted that if I were working with any other database technology, such as Oracle or SQL Server, I wouldn't even consider storing 'Blobs' in the database. Caché/Ensemble/IRIS, though, is extremely efficient at storing stream data, especially binary streams. I agree with Ben that by storing the files in the database you get the benefits of journaling and backups, which support 24/7 uptime. If you are using mirroring as part of your disaster recovery strategy, then restoring your system will be faster. If you store the files externally, you will need to back up the files as a separate process from your Caché/Ensemble/IRIS backups. I assume that you would have a separate file server, as you wouldn't want to keep the external files on the same server as your Caché/Ensemble/IRIS server, for several reasons:

1) You would not want the files to be stored on the same disk as your database .dat files, as the disk I/O might be compromised.
2) If your database server crashes, you may lose the external files unless they are on a separate server.
3) You would have to back up your file server to another server or suitable media.
4) If the stream data is stored in IRIS, then you can use iFind and iKnow on the file content, which leads you into the realms of ML, NLP and AI.
5) If your Cache.dat files and the external files are stored on the same disk system, you potentially run into disk fragmentation issues over time, and the system will get slower as the fragmentation gets worse. It is far better to have your Cache.dat files on a disk system of their own where the database growth factor is set quite high: the database growth will be contiguous, fragmentation is considerably reduced, and the stream data will be managed as effectively as any other global structure in Caché/Ensemble/IRIS.

Yours, Nigel

Fragmentation issues are not really an issue anymore with SSD disks. But in any case, I agree with storing files in the database. I have a system in production where we have about 100TB of data, and more than half of it is just files stored in the database.
Some of our .dat files are, through mapping, used exclusively for streams, and we take care of them: periodically we cut them off at some point and continue with an empty database. Mirroring helps us not to worry too much about backups. But if we had to store that amount of files on the filesystem, we would lose our minds caring about backups and integrity.

Great data point! Thanks @Dmitry.Maslennikov :)

I'm throwing in another vote for streams, for all the reasons in the above reply chain, plus two more:

1. More efficient hard drive usage. If you have a ton of tiny files and your hard drive is formatted with a larger allocation unit, you're going to use a lot of space very inefficiently and very quickly.
2. At my previous job, we got hit by ransomware years ago that encrypted every document on our network. (Fortunately, we had a small amount of data and a good offline backup process, so we were able to recover fairly quickly!) We were also using a document management solution that ran on Caché and stored the files as Stream objects, and they were left untouched. I'm obviously not going to say streams are ransomware-proof, but that extra layer of security can't hurt!

Thank you all for your input; it's all sound reasoning that I can agree with.

It's not a good idea to store files in the DB that you'll simply be reading back in full. The main issue you'll suffer from if you do hold them in the database (which nobody else seems to have picked up on) is that you'll needlessly flush/replace global buffers every time you read them back (the bigger the files, the worse this will be). Global buffers are one of the keys to performance. Save the files as files and use the database to store their file paths as data and indices.

Hi Rob, what factors play a part in this, though? We'd only be retrieving a single file at a time (per user session, obviously) and the boxes have around 96-128 GB of memory each (2 app, 2 db), if that has any effect on your answer.

I've mentioned above a system with a significant amount of streams stored in the database. I just checked how the global buffers are used there, and streams account for only around 6%. The system is very active, including the files: tons of objects are created every minute, files get attached, files get changed (yes, our users can change MS Word files online on the fly, and we keep all the versions). So, I still see no reason to change it, and I still see tons of benefits in keeping it as is.

Hey Matthew, no technical suggestions from me, but I would say that there are pros/cons to file / global streams which have been covered quite well by the other commenters. For the performance concern in particular, it is difficult to compare different environments and use patterns. It might be helpful to test using file / global streams and see how the performance for your expected stream usage, combined with your system activity, plays into your decision to go with one or the other.

I agree; for our own trust we'll most likely go with streams. However, I've suggested we plan to build both options for customers: we'll just reference the links to files, and then they can implement backup etc. as they see fit.

Great! This was an interesting topic and I'm sure one that will help future viewers of the community.

There are a lot of considerations. Questions: Can you describe what you are going to do with those streams (or files, I guess)? Are they immutable? Are they text or binary? Are they already encrypted or zipped? What is the average stream size?
> Global buffers are one of the keys to performance.

Yes, that's why, if streams are to be stored in the db, they should be stored in a separate db with a distinct block size and separate global buffers. Having multiple global buffers for different block sizes does not make sense: IRIS will use the bigger block size for smaller blocks inefficiently. The only way to separate them is to use a separate server just for streams.

For us it will be scanned documents (to create a more complete picture of a patient's record in one place), so we can estimate a few of the constants involved to test how it will perform under load.

I'm not sure what you mean by this. On an IRIS instance configured with global buffers of different sizes, the different-sized buffers are organized into separate pools. Each database is assigned to a pool based on the smallest size available that can handle that database. If a system is configured with 8KB and 32KB buffers, the 32KB buffers could be assigned to handle 16KB or 32KB databases, but never 8KB databases.

It depends. I would prefer to store the files in the Linux filesystem, with a directory structure based on a hash of the file, and only store the meta-information (like filename, size, hash, path, author, title, etc.) in the database. In my humble opinion this has the following advantages over storing the files in the database:

* The restore process for a single file runs faster than the restore of a complete database with all files.
* Using version control (e.g. svn or git) for the files is possible, with history.
* Bitrot will only destroy single files. This should be no problem if a filesystem with integrated checksums (e.g. btrfs) is used.
* Only a webserver, and no database, is needed to serve the files.
* You can move the files behind a proxy or a load balancer to increase availability without having to use an HA setup of Caché/IRIS.
* Better usage of the filesystem cache.
* Better support for rsync.
* Better support for incremental/differential backups.

But the pros and cons may vary depending on the size and number of files and your server setup. I suggest building two PoCs, loading a reasonable amount of files into each one, and doing some benchmarks to get some figures about the performance, as well as testing some DR and restore scenarios.

Jeffrey, thanks. But suppose I have only 16KB block buffers configured and a mix of databases: 8KB (mostly system, or CACHETEMP/IRISTEMP) plus some of my application data stored in 16KB blocks. The 8KB databases will get buffered in the 16KB buffers anyway, and they will be stored one to one: 8KB of data in a 16KB buffer. Is that correct? So, if I need to separate global buffers for streams, I just need a block size separate from any other data and a fairly small amount of global buffer for that block size, and that will be enough for more efficient usage of the global buffer - at least for non-stream data, with a higher priority?

Yes, if you have only 16KB buffers configured and both 8KB and 16KB databases, then the 16KB buffers will be used to hold 8KB blocks - one 8KB block stored in one 16KB buffer, using only 1/2 the space... If you allocate both 8KB and 16KB buffers, then (for better or worse) you get to control the buffer allocation between the 8KB and 16KB databases. I'm just suggesting that this is an alternative to standing up a 2nd server to handle streams stored in a database with a different block size.
One more consideration for whether to store the files inside the database or not is how much space gets wasted due to the block size. Files stored in the filesystem get their size rounded up to the block size of the device. For Linux this tends to be around 512 bytes (blockdev --getbsz /dev/...). Files stored in the database as streams are probably* stored using "big string blocks". Depending on how large the streams are, the total space consumed (used + unused) may be higher when stored in a database. ^REPAIR will show you the organization of a data block.

*This assumes that the streams are large enough to be stored as big string blocks - if the streams are small and are stored in the data block, then there will probably be little wasted space per block, as multiple streams can be packed into a single data block.

There is some info about blocks in this article and the others in that series.

In my opinion, it is much better and faster to store binary files outside of the database. I have an application with hundreds of thousands of images. To get faster access on a Windows OS, they are stored in YYMM folders (to prevent having too many files in one folder, which might slow down access), while the file path and file name are, of course, stored inside the database for quick access (using indices). As those images are read many times, I did not want to "waste" the cache buffers on those reads, hence storing them outside the database was the perfect solution.

Hi, I keep everything I need in Windows folders; I'm very comfortable with that and I have everything organized. But maybe what you suggest wouldn't look bad and would be decent in terms of convenience!

It depends on the file type, content, use frequency and so on; each way has its advantages.
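As a rough illustration of the small-files/allocation-unit point raised earlier in this thread, here is a minimal shell sketch (assuming a Linux box with GNU coreutils; the paths and file counts are made up for the example) that compares logical size with actual on-disk usage:

```
# Create 1,000 files of ~100 bytes each, then compare logical size vs. on-disk usage.
mkdir -p /tmp/tiny-files
for i in $(seq 1 1000); do head -c 100 /dev/urandom > /tmp/tiny-files/f$i; done

du -sh --apparent-size /tmp/tiny-files   # ~100 KB of actual data
du -sh /tmp/tiny-files                   # typically several MB: each file is rounded up to a filesystem block
```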
Announcement
Anastasia Dyubaylo · Mar 4, 2021

New Video: Getting Started with the InterSystems IRIS FHIR Server on AWS

Hey Developers, See how the InterSystems IRIS FHIR Server allows you to develop and deploy your FHIR applications on AWS without manual configuration and deployment: ⏯ Getting Started with the InterSystems IRIS FHIR Server on AWS 👉🏼 Subscribe to InterSystems Developers YouTube. Enjoy and stay tuned!
Announcement
Anastasia Dyubaylo · Mar 19, 2021

New Video: Deploying InterSystems IRIS Solutions into Kubernetes Google Cloud

Hi Community, Please welcome the new video on InterSystems Developers YouTube: ⏯ Deploying InterSystems IRIS Solutions into Kubernetes Google Cloud. See how an InterSystems IRIS data platform application is deployed into a Kubernetes cluster, specifically on Google Kubernetes Engine (GKE), using Terraform to create the cluster and a CI/CD GitHub implementation called GitHub Actions to automate the deployment steps. ⬇️ Access all code samples here. 🗣 Presenter: @Mikhail.Khomenko, DevOps Engineer. You can find additional materials for this video in this InterSystems Online Learning Course. Enjoy watching this video! 👍🏼
Announcement
Anastasia Dyubaylo · May 4, 2021

New Video: Package First Development Approach with InterSystems IRIS and ZPM

Hi Developers, Enjoy watching this new video presented by @Evgeny.Shvarov: ⏯ Package First Development Approach with InterSystems IRIS and ZPM. It's a demo of the package-first development approach with InterSystems IRIS and the ZPM package manager: develop the code as if it were already deployed. ⬇️ ObjectScript Package Manager on Open Exchange ➡️ Join the ZPM discussion in our Discord ✅ Follow the DC ZPM tag to stay up to date with the latest posts on ZPM. Stay tuned!
Announcement
Anastasia Dyubaylo · Nov 28, 2022

Time to vote in InterSystems IRIS for Health Contest: FHIR for Women's Health

Hi Community,

It's voting time! Cast your votes for the best applications in our IRIS for Health Programming Contest focused on building FHIR solutions for Women's Health:

🔥 VOTE FOR THE BEST APPS 🔥

How to vote? Details below.

Experts nomination: InterSystems' experienced jury will choose the best apps to nominate for the prizes in the Experts Nomination. Please welcome our experts:

⭐️ @Alexander.Koblov, Support Specialist
⭐️ @Alexander.Woodhead, Technical Specialist
⭐️ @Guillaume.Rongier7183, Sales Engineer
⭐️ @Alberto.Fuentes, Sales Engineer
⭐️ @Dmitry.Zasypkin, Senior Sales Engineer
⭐️ @Daniel.Kutac, Senior Sales Engineer
⭐️ @Eduard.Lebedyuk, Senior Cloud Engineer
⭐️ @Steve.Pisani, Senior Solution Architect
⭐️ @Patrick.Jamieson3621, Product Manager
⭐️ @Nicholai.Mitchko, Manager, Solution Partner Sales Engineering
⭐️ @Timothy.Leavitt, Development Manager
⭐️ @Benjamin.DeBoe, Product Manager
⭐️ @Robert.Kuszewski, Product Manager
⭐️ @Stefan.Wittmann, Product Manager
⭐️ @Raj.Singh5479, Product Manager
⭐️ @Jeffrey.Fried, Director of Product Management
⭐️ @Aya.Heshmat, Product Specialist
⭐️ @Evgeny.Shvarov, Developer Ecosystem Manager
⭐️ @Dean.Andrews2971, Head of Developer Relations

Community nomination: For each user, the higher score is selected from the two categories below:

| Conditions | 1st place | 2nd place | 3rd place |
| --- | --- | --- | --- |
| If you have an article posted on DC and an app uploaded to Open Exchange (OEX) | 9 | 6 | 3 |
| If you have at least 1 article posted on DC or 1 app uploaded to OEX | 6 | 4 | 2 |
| If you make any valid contribution to DC (posted a comment/question, etc.) | 3 | 2 | 1 |

| Level | 1st place | 2nd place | 3rd place |
| --- | --- | --- | --- |
| VIP Global Masters level or ISC Product Managers | 15 | 10 | 5 |
| Ambassador GM level | 12 | 8 | 4 |
| Expert GM level or DC Moderators | 9 | 6 | 3 |
| Specialist GM level | 6 | 4 | 2 |
| Advocate GM level or ISC Employees | 3 | 2 | 1 |

Blind vote! The number of votes for each app will be hidden from everyone. Once a day we will publish the leaderboard in the comments to this post. The order of projects on the contest page will be as follows: the earlier an application was submitted to the competition, the higher it will be on the list.

P.S. Don't forget to subscribe to this post (click on the bell icon) to be notified of new comments.

To take part in the voting, you need to:

* Sign in to Open Exchange – DC credentials will work.
* Make any valid contribution to the Developer Community – answer or ask questions, write an article, contribute applications on Open Exchange – and you'll be able to vote. Check this post on the options to make helpful contributions to the Developer Community.

If you change your mind, cancel your choice and give your vote to another application! Support the application you like!

Note: contest participants are allowed to fix bugs and make improvements to their applications during the voting week, so don't miss out; subscribe to application releases!

Hello everyone, check out the V2 of my application for the InterSystems FHIR contest!! This time, we will see how to go from CSV to FHIR to SQL to JUPYTER all in one go and using only Python!!! Check this out here: https://community.intersystems.com/post/incredible-csv-fhir-sql-jupyter-fhir-contest-v2

Thanks for your contributions, Lucas! :)

Hey, Developers! Since the start of the contest, here are the top 5 apps!

Expert Nomination, Top 5

1. Pregnancy Symptoms Tracker by @José.Pereira
2. fhir-healthy-pregnancy by @Edmara.Francisco
3. FemTech Reminder by @KATSIARYNA.Shaustruk
4. FHIR Questionnaires by @Yuri.Gomes
5. Contest-FHIR by @Lucas.Enard2487

➡️ Voting is here.
Community Nomination, Top 5

1. Pregnancy Symptoms Tracker by @José.Pereira
2. FHIR Questionnaires by @Yuri.Gomes
3. fhir-healthy-pregnancy by @Edmara.Francisco
4. FemTech Reminder by @KATSIARYNA.Shaustruk
5. Contest-FHIR by @Lucas.Enard2487

➡️ Voting is here. Support the application you like!

Devs! Here are the results after two days of voting!

Expert Nomination, Top 5

1. Pregnancy Symptoms Tracker by @José Roberto Pereira
2. fhir-healthy-pregnancy by @Edmara Francisco
3. FemTech Reminder by @Katsiaryna Shaustruk
4. Beat Savior by @Jan.Skála
5. Contest-FHIR by @Lucas Enard

➡️ Voting is here.

Community Nomination, Top 5

1. Pregnancy Symptoms Tracker by @José Roberto Pereira
2. fhir-healthy-pregnancy by @Edmara Francisco
3. FemTech Reminder by @Katsiaryna Shaustruk
4. Contest-FHIR by @Lucas Enard
5. Beat Savior by @Jan.Skála

➡️ Voting is here.

Hi Developers! At the moment we can see the following results of the voting:

Expert Nomination, Top 5

1. FemTech Reminder by @Katsiaryna Shaustruk
2. Pregnancy Symptoms Tracker by @José Roberto Pereira
3. fhir-healthy-pregnancy by @Edmara Francisco
4. iris-fhir-app by @Oliver.Wilms
5. NeuraHeart by @Grzegorz.Koperwas

➡️ Voting is here.

Community Nomination, Top 5

1. Pregnancy Symptoms Tracker by @José Roberto Pereira
2. FemTech Reminder by @Katsiaryna Shaustruk
3. fhir-healthy-pregnancy by @Edmara Francisco
4. FHIR Questionnaires by @Yuri.Gomes
5. iris-fhir-app by @Oliver.Wilms

➡️ Voting is here. Support participants with your votes!

Developers! Last call! Only a few hours left to the end of voting! Cast your votes for the applications you like!
Announcement
Shane Nowack · Oct 19, 2022

InterSystems IRIS System Administration Specialist Certification Exam is now LIVE!

Get certified on InterSystems IRIS System Administration! Hello Community, After beta testing the new InterSystems IRIS System Administration Specialist exam, the Certification Team of InterSystems Learning Services has performed the necessary calibration and adjustments to release it to our community. It is now ready for purchase and scheduling in the InterSystems certification exam catalog. Potential candidates can review the exam topics and the practice questions to help orient them to exam question approaches and content. Passing the exam allows you to claim an electronic certification badge that can be embedded in social media accounts such as LinkedIn. If you are new to InterSystems Certification, please review our program pages, which include information on taking exams, exam policies, FAQs and more. Also, check out our Organizational Certification, which can help your organization access valuable business opportunities and establish your organization as a solid provider of InterSystems solutions in our marketplace. The Certification Team of InterSystems Learning Services is excited about this new exam, and we are also looking forward to working with you to create new certifications that can help you advance your career. We are always open to ideas and suggestions at certification@intersystems.com. Looking forward to celebrating your success, Shane Nowack - Certification Exam Developer, InterSystems

@Shane.Nowack - congratulations on this launch!! Very exciting and a great addition to the Professional Certification Exam portfolio for ISC technology :)
Announcement
Anastasia Dyubaylo · Mar 24, 2023

[Video] Git Source Control for InterSystems IRIS Interoperability with Docker and VSCode

Hi Developers,

Often we create and edit InterSystems IRIS Interoperability solutions via the set of UI tools provided with IRIS. But it is sometimes difficult to set up the development environment so that changes made in the UI are captured in source control. This video illustrates how git-source-control helps keep Interoperability components under source control while you change them in the UI.

⏯ Git Source Control for InterSystems IRIS Interoperability with Docker and VSCode

Add these two lines to your iris.script during docker build:

```
zpm "install git-source-control"
do ##class(%Studio.SourceControl.Interface).SourceControlClassSet("SourceControl.Git.Extension")
```

And Interoperability UI components will start working with git. Example application.
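For context, here is a minimal sketch of where such an iris.script typically runs during a Docker build; this assumes the common community Dockerfile pattern and is not taken from the post itself (the script path and instance name are illustrative).

```
# Hypothetical sketch of the commands a Dockerfile RUN step usually executes
# to apply iris.script during the image build (assumed pattern, not from the post):
iris start IRIS
iris session IRIS < /tmp/iris.script   # iris.script contains the two lines above
iris stop IRIS quietly
```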
Announcement
Anastasia Dyubaylo · Apr 17, 2023

Time to vote in InterSystems IRIS Cloud SQL and IntegratedML Contest

Hi Community,

It's voting time! Cast your votes for the best applications in our InterSystems IRIS Cloud SQL and IntegratedML Contest:

🔥 VOTE FOR THE BEST APPS 🔥

How to vote? Details below.

Experts nomination: InterSystems' experienced jury will choose the best apps to nominate for the prizes in the Experts Nomination. Please welcome our experts:

⭐️ @Alexander.Koblov, Support Specialist
⭐️ @Guillaume.Rongier7183, Sales Engineer
⭐️ @Eduard.Lebedyuk, Senior Cloud Engineer
⭐️ @Steve.Pisani, Senior Solution Architect
⭐️ @Timothy.Leavitt, Development Manager
⭐️ @Evgeny.Shvarov, Developer Ecosystem Manager
⭐️ @Dean.Andrews2971, Head of Developer Relations
⭐️ @Alexander.Woodhead, Senior Systems Developer
⭐️ @Andreas.Dieckow, Principal Product Manager
⭐️ @Aya.Heshmat, Product Specialist
⭐️ @Benjamin.DeBoe, Product Manager
⭐️ @Robert.Kuszewski, Product Manager
⭐️ @Carmen.Logue, Product Manager
⭐️ @Jeffrey.Fried, Director of Product Management
⭐️ @Luca.Ravazzolo, Product Manager
⭐️ @Raj.Singh5479, Product Manager
⭐️ @Patrick.Jamieson3621, Product Manager
⭐️ @Stefan.Wittmann, Product Manager
⭐️ @Steven.LeBlanc, Product Specialist
⭐️ @Thomas.Dyar, Product Specialist

Community nomination: For each user, the higher score is selected from the two categories below:

| Conditions | 1st place | 2nd place | 3rd place |
| --- | --- | --- | --- |
| If you have an article posted on DC and an app uploaded to Open Exchange (OEX) | 9 | 6 | 3 |
| If you have at least 1 article posted on DC or 1 app uploaded to OEX | 6 | 4 | 2 |
| If you make any valid contribution to DC (posted a comment/question, etc.) | 3 | 2 | 1 |

| Level | 1st place | 2nd place | 3rd place |
| --- | --- | --- | --- |
| VIP Global Masters level or ISC Product Managers | 15 | 10 | 5 |
| Ambassador GM level | 12 | 8 | 4 |
| Expert GM level or DC Moderators | 9 | 6 | 3 |
| Specialist GM level | 6 | 4 | 2 |
| Advocate GM level or ISC Employees | 3 | 2 | 1 |

Blind vote! The number of votes for each app will be hidden from everyone. Once a day we will publish the leaderboard in the comments to this post. The order of projects on the contest page will be as follows: the earlier an application was submitted to the competition, the higher it will be on the list.

P.S. Don't forget to subscribe to this post (click on the bell icon) to be notified of new comments.

To take part in the voting, you need to:

* Sign in to Open Exchange – DC credentials will work.
* Make any valid contribution to the Developer Community – answer or ask questions, write an article, contribute applications on Open Exchange – and you'll be able to vote. Check this post on the options to make helpful contributions to the Developer Community.

If you change your mind, cancel your choice and give your vote to another application! Support the application you like!

Note: contest participants are allowed to fix bugs and make improvements to their applications during the voting week, so don't miss out; subscribe to application releases!

Since the beginning of the voting, we have these results:

Expert Nomination, Top 5

1. superset-iris by @Dmitry.Maslennikov
2. Sheep’s Galaxy by @Maria.Gladkova
3. iris-mlm-explainer by @Muhammad.Waseem
4. IntegratedML-IRIS-Cloud-Height-prediction by @珊珊.喻
5. Customer churn predictor by @Oleh.Dontsov

➡️ Voting is here.

Community Nomination, Top 5

1. superset-iris by @Dmitry.Maslennikov
2. Sheep’s Galaxy by @Maria.Gladkova
3. iris-mlm-explainer by @Muhammad.Waseem
4. IntegratedML-IRIS-Cloud-Height-prediction by @珊珊.喻
5. Customer churn predictor by @Oleh.Dontsov

➡️ Voting is here.

So, the voting continues. Please support the application you like! Devs!
Here are the top 5 for now:

Expert Nomination, Top 5

1. superset-iris by @Dmitry Maslennikov
2. Sheep’s Galaxy by @Maria Gladkova
3. Customer churn predictor by @Oleh Dontsov
4. audit-consolidator by @Oliver.Wilms
5. iris-mlm-explainer by @Muhammad Waseem

➡️ Voting is here.

Community Nomination, Top 5

1. superset-iris by @Dmitry Maslennikov
2. IntegratedML-IRIS-Cloud-Height-prediction by @Shanshan Yu
3. Sheep’s Galaxy by @Maria Gladkova
4. iris-mlm-explainer by @Muhammad Waseem
5. AI text detection by @Oleh Dontsov

➡️ Voting is here.

Experts, we are waiting for your votes! 🔥 Support our participants with your votes!

Hi Developers! At the moment we can see the following results of the voting:

Expert Nomination, Top 5

1. Sheep’s Galaxy by @Maria Gladkova
2. superset-iris by @Dmitry Maslennikov
3. AI text detection by @Oleh Dontsov
4. iris-mlm-explainer by @Muhammad Waseem
5. Customer churn predictor by @Oleh Dontsov

➡️ Voting is here.

Community Nomination, Top 5

1. Sheep’s Galaxy by @Maria Gladkova
2. superset-iris by @Dmitry Maslennikov
3. IntegratedML-IRIS-Cloud-Height-prediction by @Shanshan Yu
4. AI text detection by @Oleh Dontsov
5. iris-mlm-explainer by @Muhammad Waseem

➡️ Voting is here.

Hi, Devs! And here are the results at the moment:

Expert Nomination, Top 5

1. Sheep’s Galaxy by @Maria Gladkova
2. AI text detection by @Oleh Dontsov
3. superset-iris by @Dmitry Maslennikov
4. Customer churn predictor by @Oleh Dontsov
5. iris-mlm-explainer by @Muhammad Waseem

➡️ Voting is here.

Community Nomination, Top 5

1. Sheep’s Galaxy by @Maria Gladkova
2. superset-iris by @Dmitry Maslennikov
3. IntegratedML-IRIS-Cloud-Height-prediction by @Shanshan Yu
4. AI text detection by @Oleh Dontsov
5. Customer churn predictor by @Oleh Dontsov

➡️ Voting is here.

Developers, only two days left to the end of the voting! Cast your votes for the application you like!

Last day of voting! ⌛ Please check out the Contest Board. Our contestants need your votes! 📢
Announcement
Evgeny Shvarov · Apr 3, 2023

Bonuses For InterSystems IRIS Cloud SQL and IntegratedML Contest 2023

Here're the technology bonuses for the InterSystems IRIS Cloud SQL and IntegratedML Contest 2023 that will give you extra points in the voting: IntegratedML usage Online Demo Article on Developer Community The second article on Developer Community Video on YouTube First Time Contribution Community Idea Implementation IRIS Cloud SQL Survey See the details below. IntegratedML usage - 5 points Use IntegratedML SQL extension of IRIS Cloud SQL and collect 5 extra bonus points. Online Demo of your project - 2 pointsCollect 3 more bonus points if you provision your project to the cloud as an online demo. You can do it on your own or you can use this template - here is an Example. Here is the video on how to use it. Article on Developer Community - 2 points Post an article on Developer Community that describes the features of your project. Collect 2 points for each article. Translations to different languages work too. The Second article on Developer Community - 1 point You can collect one more bonus point for the second article or the translation regarding the application. The 3rd and more will not bring more points but the attention will all be yours. Video on YouTube - 3 points Make the Youtube video that demonstrates your product in action and collect 3 bonus points per each. Examples. First Time Contribution - 3 points Collect 3 bonus points if you participate in InterSystems Open Exchange contests for the first time! Community Idea Implementation - 3 points You can get 3 extra bonus points if the dev tool implements one of the ideas listed as Community Opportunity on the InterSystems Idea portal. IRIS Cloud SQL Survey - 2 points Please complete a survey with your feedback on your experience with InterSystems IRIS Cloud SQL and collect 2 bonus points! You should receive the survey on our email as a participant. If not please raise the question here on in discord. The list of bonuses is subject to change. Stay tuned! Good luck with the competition! Hello, I have written two articles and I will write one more. I also deployed online demo for audit-consolidator. Thank you We added IRIS Cloud SQL survey bonus! Don't forget to collect one!