Article
· May 28, 2021 1m read
Fetch Upstream in GitHub

Hi colleagues!

Often when we collaborate on someone's repo in GitHub, we follow this cycle:

Fork-Clone-Change-Commit-Push-Pull-Request-Merge to the original repo.

This is all great and works fine!

And if we want to contribute again right after the merge, we first need to perform a "Fetch upstream" on the forked repo to "ingest" our own pull request from the original repo.

Geeky git professionals do it with ease, but it was always a headache for me, so I usually just deleted the fork and created a new one.

And today I discovered that GitHub has added a new UI feature: I can easily fetch upstream for my fork from the original repo, keeping it up to date and ready for new pull requests.

Here is where the button is:

This is a relief! )
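
For the record, the command-line equivalent of that button is only a few git commands (a minimal sketch, assuming the original repo is registered as a remote named upstream and the default branch is main):

```
# One-time setup: point a remote at the original repository.
git remote add upstream https://github.com/ORIGINAL_OWNER/ORIGINAL_REPO.git

# Sync the fork with the original repo:
git fetch upstream
git checkout main
git merge upstream/main
git push origin main
```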

Wanted to share this relief and productivity tip with you!

Bring more collaborations to Github repos!

And speaking of PRs: I just made a PR adding a Docker-based Google Cloud Run deployment to the FHIRaaS demo made by @Anton Umnikov for the current FHIR Contest! Looking forward to more of your contributions!


The first installment of this article series discussed how to read a big chunk of data from the raw body of an HTTP POST method and save it to a database as a stream property of a class. The second installment discussed how to send files and their names wrapped in JSON format.

Now let’s look closer at the idea of sending large files to the server in parts. There are several approaches we can use to do this. This article discusses using the Transfer-Encoding header to indicate chunked transfer. The HTTP/1.1 specification introduced the Transfer-Encoding header, described in RFC 7230, Section 4.1; it is absent from the HTTP/2 specification.
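
As a quick illustration of what "chunked" looks like on the wire (a hand-written sketch, not taken from the article): each chunk is prefixed with its size in hexadecimal, a zero-length chunk terminates the body, and every line shown ends with CRLF:

```
POST /upload HTTP/1.1
Host: example.com
Content-Type: application/octet-stream
Transfer-Encoding: chunked

1b
first chunk of binary data.
f
second chunk...
0

```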


I work as an Integration Engineer for the United States Department of Veterans Affairs (VA). I work on a Health Connect production that processes many RecordMap files. I do not fully understand RecordMaps, so I wanted to develop an application for the Interoperability contest where I could learn more about working with them. I browsed the InterSystems documentation for inspiration on how to start and was happy to find the CSV Record Wizard.
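
As a hypothetical example, the CSV Record Wizard takes a delimited sample like the one below and generates a RecordMap class whose properties match the columns:

```
MRN,LastName,FirstName,DOB
100234,Doe,Jane,1980-04-12
100235,Smith,John,1975-09-30
```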


Requirement: Transform a source XML message to target JSON.

Step 1: First, create a JSON-equivalent XML using any online tool (a tiny sample follows after these steps).

Step 2: Use the Studio add-in utility to create persistent classes from the JSON-equivalent XML and compile them. The target persistent classes are now ready.

Step 3: Do the same for the source XML (or XSD) to generate the source persistent classes, and compile them.

Step 4: Complete the DTL by selecting the root nodes of the source and target persistent classes, finish the mappings, and compile it.
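
As a tiny illustration of Step 1, with hypothetical data: for the target JSON below, the JSON-equivalent XML simply mirrors the keys as elements, and that XML is what the add-in turns into persistent classes:

```
{ "patient": { "name": "Jane Doe", "age": 42 } }

<patient>
  <name>Jane Doe</name>
  <age>42</age>
</patient>
```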

Article
· Jul 21, 2022 11m read
ECP With Docker

Hi community,

This is the third article in the series about initializing IRIS instances with Docker. This time, we will focus on Enterprise Cache Protocol (ECP).

In a very simplified way, ECP allows configuring some IRIS instances as application servers and others as data servers. Detailed technical information can be found in the official documentation.
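
As a rough scaffold (my own sketch, not the article's actual setup; the image tag and service names are assumptions), two IRIS containers that will play the two roles can be started with docker-compose, with the ECP wiring itself applied afterwards, for example through configuration merge files:

```yaml
version: "3.7"
services:
  data:                                      # intended role: ECP data server
    image: intersystems/iris-community:latest
    ports:
      - "52773:52773"                        # Management Portal
  app:                                       # intended role: ECP application server
    image: intersystems/iris-community:latest
    ports:
      - "52774:52773"
    depends_on:
      - data
```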

This article aims to describe:


One of the most important tasks of dashboards is, on the one hand, to help you perceive data in an aggregated form and, on the other hand, not to lose depth of immersion in the data when you need it. One of the tools that helps us achieve this is drill down. It enables us to display several hierarchical levels of data, from aggregated to detailed.


Welcome, community members, to a new article! This time we are going to test the interoperability capabilities of IRIS for Health for working with DICOM files.

Let's configure a short workshop using Docker. At the end of the article you'll find the URL of the GitHub repository, in case you want to run it on your own computer.

Before any configuration, let's explain what DICOM is:


These days the vast majority of applications are deployed on public cloud services. There are multiple advantages, including the reduction in human and material resources needed, the ability to grow quickly and cheaply, greater availability, reliability, elastic scalability, and options to improve the protection of digital assets. One of the most favored options is Google Cloud. It lets us deploy our applications using virtual machines (Compute Engine), Docker containers (Cloud Run), or Kubernetes (Kubernetes Engine). The first one does not use Docker.
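
For instance, deploying a container image to Cloud Run can be a single command (the service name, image path, and region here are hypothetical):

```
gcloud run deploy my-service \
  --image gcr.io/my-project/my-image:latest \
  --region us-central1 \
  --platform managed \
  --allow-unauthenticated
```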


Recently I needed to restore a version of a production class that was overwritten by compilation and by running UpdateProduction. As the correct version was unavailable in source control, I used journals to restore the data. Journals store a plethora of information about what's happening in the system and are quite a powerful tool. This article explains how to work with journals to extract the data you require.
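
For orientation, the interactive journal dump utility can be started from the %SYS namespace in an IRIS terminal; it lets you browse journal files and inspect individual SET/KILL records (the %SYS.Journal classes offer the same information programmatically):

```
%SYS>do ^JRNDUMP
```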


As an AI language model, ChatGPT is capable of performing a variety of tasks like language translation, writing songs, answering research questions, and even generating computer code. With its impressive abilities, ChatGPT has quickly become a popular tool for various applications, from chatbots to content creation.
But despite its advanced capabilities, ChatGPT cannot access your personal data. So in this article, I will demonstrate the steps below to build a custom ChatGPT AI using the LangChain framework:


We return to the attack with our EMPI!

In previous articles, we saw how to configure and customize our EMPI and how to include new patients in our system through HL7 messaging. But of course, not everything in this life is HL7 v2! How could we configure our EMPI instance to work with FHIR messaging?


We now get to make use of the IKO.

Below we define the environment we will be creating via a Custom Resource Definition (CRD). It lets us define something outside the realm of what standard Kubernetes knows (that is, objects such as pods, services, persistent volumes and claims, configmaps, secrets, and lots more). We are building a new kind of object: an IrisCluster object.
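
For reference, a minimal IrisCluster definition looks roughly like this (a sketch based on the IKO samples; the secret name and image tag are assumptions, so check the IKO documentation for the exact fields):

```yaml
apiVersion: intersystems.com/v1alpha1
kind: IrisCluster
metadata:
  name: sample
spec:
  licenseKeySecret:
    name: iris-key-secret        # assumed secret containing the iris.key
  topology:
    data:
      image: containers.intersystems.com/intersystems/iris:2023.1.0.235.1  # assumed tag
```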


The IKO allows for sidecars. The idea behind them is to have direct access to a specific instance of IRIS. If we have mirrored data nodes, the web gateway will (correctly) only give us access to the primary node. But perhaps we need access to a specific instance. The sidecar is the solution.

Building on the example from the previous article, we introduce the sidecar by using a mirrored data node and, of course, an arbiter.


Introduction

Application integration at its simplest is often just one application sending a message to another to notify it of some change. Perhaps when a patient arrives at a hospital, the registration system sends a message to the clinical systems so they have all demographic data ready to use. Or perhaps it is just a nightly file transfer from the sales system to the accounting system.

But modern application integration platforms or suites can do a lot more than this to help applications work together and add real value to the enterprise.


Abstract

In a recent benchmark test of an application based on InterSystems Caché, a sustainable rate of 8.9 million database accesses per second, with peaks of 16.9 million database accesses per second, was achieved. These results came from a test performed on a connected system of eight application servers, using Intel Xeon 5570 processors and running Linux as the operating system. This benchmark shows that:

Article
· Feb 2, 2016 1m read
Caché databases as UNIX sparse files

Some third-party backup products may, by default, restore CACHE.DAT files as UNIX sparse files when there are trailing zeroes in the backup file.

Support for sparse files varies across UNIX distributions and file system types. Sparse files attempt to use file system space more efficiently when the blocks allocated to a file are mostly empty, similar to thin-provisioned storage. The file system transparently converts metadata representing empty blocks into "real" blocks filled with zero bytes at runtime. The application is supposed to be unaware of this conversion.
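
A quick way to spot a sparse file on Linux is to compare its allocated size with its apparent size (the file name is hypothetical; the flags are GNU coreutils):

```
du -h CACHE.DAT                    # actual disk usage
du -h --apparent-size CACHE.DAT    # logical size; much larger if the file is sparse
ls -ls CACHE.DAT                   # first column shows allocated blocks
```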


The DeepSee Shell Best Practices Series: Executing an MDX query in the DeepSee Shell with/without the results cache

This cache is different from "cache reset". "Cache reset" clears everything in the namespace, while "cache off" only clears the cache in the ^DeepSee.Cache.Results and ^DeepSee.Cache.Axis global nodes for the corresponding cube. The difference is quite small in the case below, but in some cases it can be significant.
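
For context, a minimal session might look like this (the cube name is hypothetical; the shell is opened via %DeepSee.Utils):

```
USER>do ##class(%DeepSee.Utils).%Shell()
>>cache off
>>SELECT MEASURES.[%COUNT] ON 0 FROM [PATIENTS]
```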
