InterSystems IRIS


Hey folks,

I am new to IRIS and cloud platforms. I've done the InterSystems IRIS Experience on the learning site and read a lot of the online documentation. What I am unable to figure out is which type of package or option one should use on the cloud provider.

AWS, for example, has EC2, Elastic Beanstalk, and some other products geared towards Docker containers.
Azure has Kubernetes and some other options.

Are IRIS and Docker deployed using the "Docker" products of the cloud providers, or should one get VMs with CentOS installed and Docker on top of that? Any advice, guidance, or clarification on this would be great.

Last answer 4 August 2018 · Last comment 5 August 2018 · 175 views · 0 rating

Hi all. Today we are going to install Jupyter Notebook and connect it to Apache Spark and InterSystems IRIS.

Note: I have done the following on Ubuntu 18.04, Python 3.6.5.

Introduction

If you are looking for a well-known notebook that is widely used and especially popular among Python users, choose Jupyter Notebook instead of Apache Zeppelin. Jupyter Notebook is a powerful data science tool with a large community and a lot of additional software and integrations. It allows you to create and share documents that contain live code, equations, visualizations, and narrative text. Uses include data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more. Most importantly, that large community will help you solve the problems you face.

0 comments · 639 views · +5 rating

Hi Community!

A new video is already on the DC YouTube Channel:

Data Platform Scalability

 

0 comments · 59 views · 0 rating

Hi all. Today we are going to upload an ML model into IRIS Manager and test it.

Note: I have done the following on Ubuntu 18.04, Apache Zeppelin 0.8.0, Python 3.6.5.

Introduction

These days, many different data mining tools enable you to develop predictive models and analyze your data with unprecedented ease. InterSystems IRIS Data Platform provides a stable foundation for your big data and fast data applications, providing interoperability with modern data mining tools.

Last comment 30 July 2018 · 310 views · +5 rating

Hi all. Today we are going to use the k-means algorithm on the Iris dataset.

Note: I have done the following on Ubuntu 18.04, Apache Zeppelin 0.8.0, Python 3.6.5.

Introduction

K-Means is one of the simplest unsupervised learning algorithms that solves the clustering problem. It groups all the objects in such a way that objects in the same group (a group is a cluster) are more similar, in some sense, to each other than to those in other groups. For example, assume you have an image with a red ball on green grass. K-Means will split all pixels into two clusters: the first cluster will contain the pixels of the ball, and the second will contain the pixels of the grass.

0 comments · 2411 views · +6 rating

Hello,

I have imported my data with the following code (%DocDB).

SET filename = "/home/student/Dokumente/convertcsv.json"

// get the existing DocDB database, or create it if it does not exist yet
IF $SYSTEM.DocDB.Exists("Fitabase1") {
    SET db = ##class(%DocDB.Database).%GetDatabase("Fitabase1")
}
ELSE {
    SET db = ##class(%DocDB.Database).%CreateDatabase("Fitabase1")
}

// %FromJSON() parses a JSON string or stream (not a file path),
// so read the file through a file stream first
SET stream = ##class(%Stream.FileCharacter).%New()
DO stream.LinkToFile(filename)
SET arr = ##class(%DynamicAbstractObject).%FromJSON(stream)

SET jstring = arr.%ToJSON()
//SET doccount = db.%Size()
DO db.%FromJSON(jstring)

Now I have data sets like

Last answer 16 July 2018 · 0 comments · 127 views · 0 rating

The InterSystems technology architect team is often asked about recommended storage arrays or storage technologies. To provide this information to a wider audience as a reference, we are starting a new series presenting some of the results we have encountered with various storage technologies. As a general recommendation, all-flash storage is highly recommended with all InterSystems products to provide the lowest latency and predictable IOPS capabilities.

The first in the series is the recently tested NetApp AFF A300 storage array. This is a middle-tier storage array with several higher models above it. The A300 can support anything from a minimal configuration of only a few drives up to hundreds of drives per HA pair, and it can also be clustered with multiple controller pairs for tens of PBs of disk capacity and hundreds of thousands of IOPS or more.

0 comments · 933 views · +3 rating

In this series of articles, I'd like to present and discuss several possible approaches toward software development with InterSystems technologies and GitLab. I will cover such topics as:

  • Git 101
  • Git flow (development process)
  • GitLab installation
  • GitLab Workflow
  • Continuous Delivery
  • GitLab installation and configuration
  • GitLab CI/CD
  • Why containers?
  • Containers infrastructure
  • CD using containers
  • CD using ICM

In this article, we'll build Continuous Delivery with InterSystems Cloud Manager. ICM is a cloud provisioning and deployment solution for applications based on InterSystems IRIS. It allows you to define the desired deployment configuration, and ICM provisions it automatically. For more information, take a look at First Look: ICM.

0 comments · 298 views · +1 rating

Hi all. Yesterday I tried to connect Apache Spark, Apache Zeppelin, and InterSystems IRIS. During the process, I ran into trouble connecting it all together and did not find a useful guide, so I decided to write my own.

Introduction

Let's look at what Apache Spark and Apache Zeppelin are and how they work together. Apache Spark is an open-source cluster-computing framework. It provides an interface for programming entire clusters with implicit data parallelism and fault tolerance, so it is very useful when you need to work with big data. Apache Zeppelin is a notebook that provides a nice UI for working with analytics and machine learning. Together, they work like this: IRIS provides data, Spark reads the provided data, and in a notebook we work with the data.

Note: I have done the following on Windows 10.

0 comments · 400 views · +8 rating

Hi everyone,

I am still learning the platform for a student project and have to do some streaming and data analysis next. Since in my case I have no "live API", I wanted to just stream JSON files and output the data as it comes in from the files (basically to emulate an incoming-data scenario).

Thanks to the documentation and community posts, I have figured out how to create a stream and read data from a JSON file, but since I'm also new to JSON I have some parsing problems: I don't know how to access sub-arrays/sub-objects via ObjectScript.

Here is the structure of the JSON file; I will omit some data because it's large and the "rowset" has much more data.
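Since the full structure is omitted above, here is a minimal sketch of how nested elements can be reached with the %DynamicObject / %DynamicArray API, assuming the file contains an object with a "rowset" array (adjust the property names to your actual structure):

// read the JSON file through a file stream and parse it into a dynamic object
set stream = ##class(%Stream.FileCharacter).%New()
do stream.LinkToFile("/path/to/data.json")
set obj = ##class(%DynamicAbstractObject).%FromJSON(stream)

// %Get() returns nested objects/arrays as dynamic objects
set rowset = obj.%Get("rowset")
write "number of rows: ", rowset.%Size(), !

// iterate over the array; each element is itself a dynamic object
set iter = rowset.%GetIterator()
while iter.%GetNext(.idx, .row) {
    write "row ", idx, ": ", row.%ToJSON(), !
}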

Last answer 3 July 2018 · Last comment 4 July 2018 · 309 views · 0 rating

In this post, I am going to detail how to set up a mirror using SSL, including generating the certificates and keys via the Public Key Infrastructure built into InterSystems IRIS Data Platform. I did a similar post in the past for Caché, so feel free to check that out here if you are not running InterSystems IRIS. Much like the original, the goal is to take you from new installations to a working mirror with SSL, including a primary, backup, and DR async member, along with a mirrored database. I will not go into security recommendations or restricting access to the files; this is meant simply to get a mirror up and running. Example screenshots are taken on a 2018.1.1 version of IRIS, so yours may look slightly different.

0 comments · 178 views · +3 rating

Hi Community!

Come join us at DeveloperWeek in NYC on 18-20 June!

InterSystems has signed on for a high-level sponsorship and exhibitor space at this year's DeveloperWeek, billed as "New York City’s Largest Developer Conference & Expo". This is the first time we have participated in the event, which organizers expect will draw more than 3,000 developers from 18 to 20 June.

Last comment 19 June 2018 · 159 views · +5 rating

Let's say I have a user-generated document template with placeholders and I want to replace them with actual values.

Values could be:

  • scalars
  • tables
  • ...?

So far I have written a simple find/replace tool that works with the RTF format (because it's not a binary format); here's how it works:

set template = "D:\Cache\RTF\template.rtf"
set var("%title") = "Hello"
set var("%table") = $lb("Utils.RTF", "TestFunc")
set result = "D:\Cache\RTF\out.rtf"
set sc = ##class(Utils.RTF).replace(template, .var, result)

There should be two placeholders in the RTF template, %title and %table; they are replaced with "Hello" and the results of the Test query from the Utils.RTF class, serialized into a table.
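For the scalar case, the substitution can be as simple as reading the template line by line and swapping each placeholder for its value. Below is a minimal, hypothetical sketch of that part only (not the actual Utils.RTF implementation; table placeholders, which are stored as $lb(class, query), are skipped here):

ClassMethod ReplaceScalars(template As %String, ByRef var, result As %String) As %Status
{
    // source template and output file as character streams
    set in = ##class(%Stream.FileCharacter).%New()
    do in.LinkToFile(template)
    set out = ##class(%Stream.FileCharacter).%New()
    set out.Filename = result

    while 'in.AtEnd {
        set line = in.ReadLine()
        // walk the var() array and replace every scalar placeholder in this line
        set key = ""
        for {
            set key = $order(var(key))
            quit:key=""
            // table placeholders hold a $list; they need separate handling
            continue:$listvalid(var(key))
            set line = $replace(line, key, var(key))
        }
        do out.WriteLine(line)
    }
    quit out.%Save()
}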

Template:

Output

Last answer 18 June 2018 · Last comment 18 June 2018 · 92 views · 0 rating

++ Update: August 1, 2018

The use of the InterSystems Virtual IP (VIP) address built into Caché database mirroring has certain limitations. In particular, it can only be used when mirror members reside in the same network subnet. When multiple data centers are used, network subnets are often not “stretched” beyond the physical data center because of the added network complexity (a more detailed discussion is available here). For similar reasons, Virtual IP is often not usable when the database is hosted in the cloud.

Network traffic management appliances such as load balancers (physical or virtual) can be used to achieve the same level of transparency, presenting a single address to the client applications or devices. The network traffic manager automatically redirects clients to the current mirror primary’s real IP address. The automation is intended to meet the needs of both HA failover and DR promotion following a disaster. 
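One practical way to let such an appliance find the primary is to poll each mirror member over HTTP and have the instance itself report its mirror role. As a minimal sketch (hypothetical class and page name, not something prescribed by this article), a CSP page can answer the poll using $SYSTEM.Mirror.IsPrimary():

/// Hypothetical health-check page a load balancer could poll:
/// returns HTTP 200 when this member is the mirror primary, 503 otherwise.
Class Demo.MirrorStatus Extends %CSP.Page
{

ClassMethod OnPage() As %Status
{
    if $SYSTEM.Mirror.IsPrimary() {
        write "PRIMARY"
    } else {
        set %response.Status = "503 Service Unavailable"
        write "NOT PRIMARY"
    }
    quit $$$OK
}

}

The load balancer then marks only the member answering 200 as healthy, so clients are always directed to the current primary.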

Last comment 14 June 2018 · 2080 views · +11 rating

When you hear people talk about moving their applications to the cloud, are you unsure of what exactly they mean? Do you want a solution for migrating your local, physical servers to a flexible, efficient cloud infrastructure? 

Join Luca Ravazzolo for Introducing InterSystems Cloud Manager (May 17th, 2:00 p.m. EDT). In this webinar, Luca, Product Manager for InterSystems Cloud Manager, will explain cloud technology and how you can move your InterSystems IRIS infrastructure to the cloud in an operationally agile fashion. Following the webinar, he will also be able to answer your questions about this great new product from InterSystems!

Last comment 13 June 2018 · 226 views · +2 rating

This post provides useful links and an overview of best-practice configuration for low-latency storage IO by creating LVM Physical Extent (PE) stripes for database disks on InterSystems Data Platforms: InterSystems IRIS, Caché, and Ensemble.

Consistent low-latency storage is key to getting the best database application performance. For applications running on Linux, Logical Volume Manager (LVM) is often used for database disks, for example because of the ability to grow volumes and filesystems or to create snapshots for online backups. For database applications, LVM PE striped logical volumes can also increase performance for large sequential reads and writes by parallelizing writes and improving the efficiency of data I/O.

Last comment 25 May 2018 · 649 views · +5 rating

The Managed File Transfer (MFT) feature of InterSystems IRIS makes it easy to include a third-party file transfer service directly in an InterSystems IRIS production. Currently, Dropbox, Box, and Kiteworks cloud disks are available.

In this article, I'd like to describe how to add more cloud storage platforms.

Here's what we're going to talk about:

  • What is MFT
  • Reference: Dropbox
    • Connection
    • Interoperability
    • Direct access
  • Interfaces you need to implement
    • Connection
    • Logic
  • Installation
0 comments · 111 views · +1 rating

Modern businesses need new kinds of applications — ones that are smarter, faster, and can scale more quickly and cost-effectively to accommodate larger data sets, greater workloads, and more users.

With this in mind, we have unveiled InterSystems IRIS Data Platform™, a complete, unified solution that provides a comprehensive and consistent set of capabilities spanning data management, interoperability, transaction processing, and analytics. It redefines high performance for application developers, systems integrators, and end-user organizations who develop and deploy data-rich and mission-critical solutions.

 ​

Last comment 17 May 2018 · 1056 views · +9 rating

With the release of InterSystems IRIS, we're also making available a nifty bit of software that allows you to get the best out of your InterSystems IRIS cluster when working with Apache Spark for data processing, machine learning, and other data-heavy fun. Let's take a closer look at how we're making your life as a Data Scientist easier, as you're probably already facing tough big data challenges, just from the influx of job offers in your inbox!

Last comment 17 May 2018 · 453 views · +2 rating

So far, dozens of people have started the InterSystems IRIS Experience – we want to hear from you! How are you enjoying the Experience so far? Do you have any suggestions for future challenges or datasets you’d like to see? This is a space for you to interact with both InterSystems staff and your peers about the InterSystems IRIS Experience, so let us know what you think! 

Last comment 16 May 2018 · 0 answers · 212 views · +1 rating