Created by Daniel Kutac, Sales Engineer, InterSystems

 

Part 3. Appendix

InterSystems IRIS OAUTH classes explained

In the previous part of our series we learned about configuring InterSystems IRIS to act as an OAuth client as well as an authorization and authentication server (by means of OpenID Connect). In this final part of our series we are going to describe the classes implementing the InterSystems IRIS OAuth 2.0 framework. We will also discuss use cases for selected methods of the API classes.

The API classes implementing OAuth 2.0 can be separated into three different groups according to their purpose. All classes are implemented in the %SYS namespace. Some of them are public (via the % package); others are not and should not be called by developers directly.
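
To give a flavor of the public client-side API before going through the groups, here is a minimal sketch of checking whether the current session already holds a valid access token. The application name "demo.client" and the scope string are illustrative assumptions, not values from the article.

 // minimal sketch: check whether this session already has a valid access token
 // "demo.client" and the scope string are illustrative assumptions
 set isAuth = ##class(%SYS.OAuth2.AccessToken).IsAuthorized("demo.client",,"openid profile",.accessToken,.idtoken,.responseProperties,.error)
 if isAuth {
     write "Access token present for this session",!
 } elseif $isobject($get(error)) {
     write "Authorization error: ",error.Error,!
 }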


In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:

  • Git 101
  • Git flow (development process)
  • GitLab installation
  • GitLab Workflow
  • Continuous Delivery
  • GitLab installation and configuration
  • GitLab CI/CD
  • Why containers?
  • Containers infrastructure
  • GitLab CI/CD using containers

In the first article, we covered Git basics, why a high-level understanding of Git concepts is important for modern software development, and how Git can be used to develop software.

In the second article, we covered GitLab Workflow - a complete software life cycle process and Continuous Delivery.

In the third article, we covered GitLab installation and configuration and connecting your environments to GitLab.

In the fourth article, we wrote a CD configuration.

In the fifth article, we talked about containers and how (and why) they can be used.

In this article, let's discuss the main components you'll need to run a continuous delivery pipeline with containers and how they all work together.

Article
Michael Breen · Nov 9, 2016 5m read
How the Ensemble Scheduler Works

The Ensemble Scheduler is used to automatically turn business hosts on and off at certain dates and times. You could use it if, for example, you wanted to run a business host only from 9am to 5pm every day. Conversely, if you want to trigger an event at a specific time, for example, a job running at 1am to batch up and send off all the previous day's transactions in one file, we recommend other methods such as the Task Manager.
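
For the Task Manager route, a custom task is usually a class extending %SYS.Task.Definition; the sketch below is only illustrative (the class name and the ExportYesterday helper are assumptions), showing the shape of a task you could then schedule for 1am:

Class Demo.DailyExport Extends %SYS.Task.Definition
{

/// Name shown in the Task Manager UI
Parameter TaskName = "Daily transaction export";

/// Called by the Task Manager at the scheduled time
Method OnTask() As %Status
{
    // illustrative assumption: ExportYesterday() batches up and sends
    // the previous day's transactions in one file
    quit ##class(Demo.TransactionExport).ExportYesterday()
}

}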


Generally speaking, InterSystems products have supported dynamic objects and JSON for a long while, but version 2016.2 came with a completely new implementation of these features, and the corresponding code was moved from the ObjectScript level to the kernel/C level, which made for a substantial performance boost in these areas. This article is about innovations in the new version and the migration process (including ways of preserving backward compatibility).
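
For readers who have not yet tried the new implementation, a minimal sketch of the 2016.2+ syntax looks like this (the property names are purely illustrative):

 // construct a dynamic object with the JSON constructor syntax
 set person = {"name":"John", "age":42}
 // add a property dynamically
 set person.city = "Boston"
 // prints {"name":"John","age":42,"city":"Boston"}
 write person.%ToJSON()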

Article
Sean McKenna · Aug 5, 2016 8m read
HealthShare's new SDA extensions

Creating and working with the new SDA extensions for storage of custom data elements

 

In HSCore 15.01, there is a new way to store custom data elements. HealthShare now has the ability to use custom extensions on many SDA elements.

This article will:

  1. Show how to set up your system to use SDA extensions
  2. Create a new SDA extension property
  3. Use the new SDA extension property in HL7 transactions
  4. Interact with the new data
  5. Show the new SDA extension used in a customization of the Patient Summary Report
 
  


In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:

  • Git 101
  • Git flow (development process)
  • GitLab installation
  • GitLab Workflow
  • Continuous Delivery
  • GitLab installation and configuration
  • GitLab CI/CD

In the previous article, we covered Git basics, why a high-level understanding of Git concepts is important for modern software development, and how Git can be used to develop software. There our focus was on the implementation part of software development; this part presents:

  • GitLab Workflow - a complete software life cycle process - from idea to user feedback
  • Continuous Delivery - software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time. It aims at building, testing, and releasing software faster and more frequently.

Article
Eduard Lebedyuk · Apr 17, 2017 4m read
Debugging Web

In this article I'll cover testing and debugging Caché web applications (mainly REST) with external tools. The second part covers Caché tools.

You wrote server-side code and want to test it from a client, or you already have a web application and it doesn't work. Here comes debugging. In this article I'll go from the easiest-to-use tools (the browser) to the most comprehensive (a packet analyzer), but first let's talk a little about the most common errors and how they can be resolved.
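
For context, here is a minimal sketch of the kind of server-side code you might be poking at with these tools; the class name and route are illustrative assumptions:

Class Demo.REST Extends %CSP.REST
{

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
  <Route Url="/time" Method="GET" Call="GetTime"/>
</Routes>
}

/// Returns the current server timestamp as JSON
ClassMethod GetTime() As %Status
{
    set %response.ContentType = "application/json"
    write "{""now"":"""_$zdatetime($horolog,3)_"""}"
    quit $$$OK
}

}

A browser or curl pointed at GET .../time is then the quickest first check before moving on to the heavier tools.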


As a developer, you have probably spent at least some time writing repetitive code. You may have even found yourself wishing you could generate the code programmatically. If this sounds familiar, this article is for you!

We'll start with an example. Note: the following examples use the %DynamicObject interface, which requires Caché 2016.2 or later. If you are unfamiliar with this class, check out the documentation here: Using JSON in Caché. It's really cool!
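
As a taste of where this is going, here is a minimal sketch of a method generator (the Demo.Settings class and its properties are illustrative assumptions) that emits one line of code per property at compile time:

Class Demo.Settings Extends %RegisteredObject
{

Property Name As %String;

Property Color As %String;

/// Compile-time generator: emits one "set" line per property of the class
ClassMethod ToDynamicObject(obj As Demo.Settings) As %DynamicObject [ CodeMode = objectgenerator ]
{
    do %code.WriteLine(" set json = {}")
    for i=1:1:%compiledclass.Properties.Count() {
        set prop = %compiledclass.Properties.GetAt(i).Name
        // skip internal % properties
        if $extract(prop)'="%" do %code.WriteLine(" set json."_prop_" = obj."_prop)
    }
    do %code.WriteLine(" quit json")
    quit $$$OK
}

}

At runtime, ##class(Demo.Settings).ToDynamicObject(mySettings).%ToJSON() then returns the JSON without any hand-written per-property code.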


Introduction

The field test of Caché 2016.2 has been available for quite some time and I would like to focus on one of the substantial features that is new in this version: the document data model. This model is a natural addition to the multiple ways we support for handling data including Objects, Tables and Multidimensional arrays. It makes the platform more flexible and suitable for even more use cases.


Terminal scripts can be used to run pre-designed commands in the terminal, like a batch file. You can write anything that can be executed in the terminal, such as for loops, if/else statements and so on, inside Terminal scripts. In this article, I will show you how to call Terminal scripts, how to use parameters in Terminal scripts, and how to avoid the session being disconnected when running Terminal scripts. If you have any information about how to use Terminal scripts or you have any feedback, please feel free to leave a comment.


The release of IBM POWER 8 processors with AIX 7.1 introduced up to 8 SMT threads per processor core (logical or physical).  Which SMT level (1, 2, 4, or 8) to use can be confusing and varies based on multiple factors.  This article is meant to help with a starting point for your specific application.

Firstly, if running on version 2014.x or older, it is advised to use SMT 4 or lower.  SMT 8 with those older versions of Caché has shown a decline in performance and scaling in benchmarking applications.


Hi!

I want to share with you a code snippet of a try-catch block I usually use in methods that should return a %Status. 


{
    set sc = $$$OK
    try {
        // $$$TOE sets sc and throws an exception if it is an error status
        $$$TOE(sc,StatusMethod())
    }
    catch e {
        // convert the exception back into a %Status and log it
        set sc = e.AsStatus()
        do e.Log()
    }
    quit sc
}

Here $$$TOE is a short form of the $$$THROWONERROR macro.

Inside the macro, StatusMethod is any method you call that returns a %Status value. This value will be placed into the sc variable.


Problem:

Caché prints to printers in a manner somewhat different from other Windows applications.  Caché sends the data directly to the GDI Printer, without the usual interface.  This is because the GUI interface can only be shown on a system desktop session and not in web browser and terminal sessions.  Some printer drivers have problems with this method of printing.

Is this the problem you are having?

Article
Alexander Koblov · May 20, 2016 12m read
Collations in Caché

Order is a necessity for everyone, but not everyone understands it in the same way
(Fausto Cercignani)

Disclaimer: This article uses Russian language and Cyrillic alphabet as examples, but is relevant for anyone who uses Caché in a non-English locale.
Please note that this article refers mostly to NLS collations, which are different from SQL collations. SQL collations (such as SQLUPPER, SQLSTRING, EXACT which means no collation, TRUNCATE, etc.) are actual functions that are explicitly applied to some values, and whose results are sometimes explicitly stored in the global subscripts. When stored in subscripts, these values naturally follow the NLS collation in effect (“SQL and NLS Collations”).
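
As a small illustration of the SQL side (the global name and data are made-up examples), SQLUPPER stores a value uppercased with a leading space, and it is that transformed value that then sorts in the subscript according to the NLS collation in effect:

 // SQLUPPER collation: the stored value is uppercased and prefixed with a space
 // (the global name and data below are illustrative)
 set ^PersonI("NameIdx"," SMITH,JOHN",1) = ""
 set ^PersonI("NameIdx"," ÖSTERBERG,ANNA",2) = ""
 // where " ÖSTERBERG,..." lands relative to " SMITH,..." when you $ORDER
 // through the subscripts is decided by the NLS collation of the locale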

Article
Eduard Lebedyuk · Mar 14, 2018 10m read
REST Design and Development

Intro

For many in today's interoperability landscape, REST reigns supreme. With the overabundance of tools and approaches to REST API development, what tools do you choose and what do you need to plan for before writing any code?
This article focuses on design patterns and considerations that allow you to build highly robust, adaptive, and consistent REST APIs. Viable approaches to challenges of CORS support and authentication management will be discussed, along with various tips and tricks and best tools for all stages of REST API development. Learn about the open-source REST APIs available for InterSystems IRIS Data Platform and how they tackle the challenge of ever-increasing API complexity.
The article is a write-up for a recent webinar on the same topic.


With the release of Caché 2016.1, JSON support was re-architected and made part of the core object model with the creation of the %Object and %Array classes, which allow you to create dynamic, JSON-enabled objects and arrays.

For a recent demonstration I was working on, I needed to create a REST web service that returned a JSON representation of a persistent object. After searching for methods that would allow me to accomplish this, I ultimately found none, until now.


Apache Spark has rapidly become one of the most exciting technologies for big data analytics and machine learning. Spark is a general data processing engine created for use in clustered computing environments. Its heart is the Resilient Distributed Dataset (RDD) which represents a distributed, fault tolerant, collection of data that can be operated on in parallel across the nodes of a cluster. Spark is implemented using a combination of Java and Scala and so comes as a library that can run on any JVM.


There have been a few use cases recently within InterSystems where we've needed to connect to Caché-based web services from PHP. The first of these was actually the Developer Community itself, which uses web services as part of Single Sign-On with other InterSystems sites/applications. The following example demonstrates how to connect to a Caché-based web service (particularly, the web service in the SAMPLES namespace) from PHP, using password authentication.


Database systems have very specific backup requirements that in enterprise deployments require forethought and planning. For database systems, the operational goal of a backup solution is to create a copy of the data in a state that is equivalent to when the application is shut down gracefully. Application-consistent backups meet these requirements, and Caché provides a set of APIs that facilitate integration with external solutions to achieve this level of backup consistency.
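
The best-known of these APIs are the external freeze/thaw calls; a minimal sketch of how an external snapshot script typically drives them (error handling trimmed to the essentials) looks like this:

 // quiesce database writes before the external snapshot is taken
 set sc = ##class(Backup.General).ExternalFreeze()
 if $$$ISERR(sc) { do $system.Status.DisplayError(sc)  quit }

 // ... take the storage or VM snapshot with your external tooling here ...

 // resume normal write daemon activity
 set sc = ##class(Backup.General).ExternalThaw()
 if $$$ISERR(sc) { do $system.Status.DisplayError(sc) }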


In my previous article, we reviewed possible use cases for macros, so let's now proceed to a more comprehensive example of using macros. In this article we will design and build a logging system.

Logging system

A logging system is a useful tool for monitoring the work of an application; it saves a lot of time during debugging and monitoring. Our system will consist of two parts (sketched below):

  • Storage class (for log records)
  • Set of macros that automatically add a new record to the log
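
To make the second part concrete up front, a minimal sketch of such macros might look like this (the App.Log class, its AddRecord method, and the macro names are assumptions to be designed properly later in the article):

// App.LogMacros.inc - illustrative macro definitions
#define LogEvent(%type, %message) do ##class(App.Log).AddRecord(%type, %message)
#define LogError(%message) $$$LogEvent("ERROR", %message)
#define LogInfo(%message) $$$LogEvent("INFO", %message)

// usage inside any class that includes App.LogMacros:
//   $$$LogInfo("Processing started")
//   $$$LogError("Could not open file "_filename)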


One of the great availability and scaling features of Caché is the Enterprise Cache Protocol (ECP). With some consideration during application development, distributed processing using ECP allows a scale-out architecture for Caché applications. Application processing can scale to very high rates, from a single application server to the processing power of up to 255 application servers, with no application changes.


In this article, I would like to talk about the spec-first approach to REST API development.

While traditional code-first REST API development goes like this:

  • Writing code
  • REST-enabling it
  • Documenting it (as a REST API)

Spec-first follows the same steps, but in reverse. We start with a spec, which also doubles as documentation, generate a boilerplate REST app from that, and finally write some business logic.

This is advantageous because:

  • You always have relevant and useful documentation for external or frontend developers who want to use your REST API
  • Specification created in OAS (Swagger) can be imported into a variety of tools allowing editing, client generation, API Management, Unit Testing and automation or simplification of many other tasks
  • Improved API architecture. In the code-first approach, an API is developed method by method, so a developer can easily lose track of the overall API architecture; with spec-first, the developer is forced to interact with the API from the position of an API consumer, which usually helps with designing a cleaner API architecture
  • Faster development - as all boilerplate code is automatically generated, you won't have to write it; all that's left is developing business logic.
  • Faster feedback loops - consumers can get a view of the API immediately, and they can more easily offer suggestions simply by modifying the spec

Let's develop our API in a spec-first approach!
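
On recent InterSystems IRIS versions, one way to do the generation step is the %REST.API class; a minimal sketch (the application name and spec path are assumptions) could look like this:

 // generate the boilerplate REST application from an OpenAPI (Swagger) spec
 // "demo.api" is the target package, /tmp/spec.json an assumed path to the spec
 set sc = ##class(%REST.API).CreateApplication("demo.api","/tmp/spec.json")
 if $$$ISERR(sc) { do $system.Status.DisplayError(sc) }
 // business logic then goes into the generated demo.api.impl class,
 // while demo.api.spec and demo.api.disp are regenerated from the spec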
