Written by Tani Frankel, Sales Engineer Manager at InterSystems · Apr 20 · 8m read

IRIS Cloud Document - Beginner Guide & Sample : Part II - Sample (Dockerized) Java App

This is the second part of an article pair where I walk you through:

  • Part I - Intro and Quick Tour (the previous article)
    • What is it?
    • Spinning up an InterSystems IRIS Cloud Document deployment
    • Taking a quick tour of the service via the service UI
  • Part II - Sample (Dockerized) Java App (this article)
    • Grabbing the connection details and TLS certificate
    • Reviewing a simple Java sample that creates a collection, inserts documents, and queries them
    • Setting up and running the Java (Dockerized) end‑to‑end sample

As mentioned, the goal is to give you a smooth “first run” experience.

 

Previously we created an IRIS Cloud Document deployment (and took a quick tour), now let's see how we can interact with it from a Java app.

Assuming you want to take this for a spin and go hands-on, you'll need Docker and Git. Start by hopping over to the Open Exchange app and cloning the GitHub repo.

4. Note the connection details

Once the deployment is running, open it and look at the Overview page. There’s a table called Making External Connections that lists:

  • Hostname (e.g. k8s-your-hostname.elb.us-east-1.amazonaws.com)
  • Port (should be 443)
  • Namespace (should be USER)
  • SQL username (should be SQLAdmin)
  • Password (you set this when creating or configuring access)

Keep those values handy; we’ll plug them into the Java demo.

(These Docs might also help.)

  1. Enable external connections
    • Make sure external access is enabled: either for all IP addresses, or with your client IP (or IP range) allowed in the deployment firewall settings.
    • This is done in the Cloud Services Portal when creating the service, or in the section mentioned above.
  2. Download the TLS certificate
    • Cloud Document requires TLS. From the deployment overview there’s a link to download a self‑signed X.509 certificate for your deployment. You’ll use this certificate on your client side to establish a trusted TLS connection. Save it as something like: certs/certificateSQLaaS.pem

That’s all we need from the portal: host, port, namespace, credentials, and the certificate file.

5. Review the Sample Accessing Cloud Document from Java

In general the pattern looks like:

  1. Make a secure connection (Connecting - Docs) - Configure DataSource (server, port, namespace, user, password, TLS).
  2. Ingest some data (Using Document and Collections - Docs) - Get a Collection by name (created automatically on first use), build JSONObject/JSONArray instances, and insert them as Documents.
  3. Query / fetch data back (Querying - Docs) - Query using a ShorthandQuery (string that behaves like a WHERE clause on the collection).

If you’ve used other document databases, this should feel pretty familiar.

The Java driver for Cloud Document lives in the package com.intersystems.document. It gives you three main pieces:

  • DataSource – a connection pool to the Cloud Document server.
  • Document – base class for JSON documents; usually you’ll use its subclasses:
    • JSONObject – JSON object with put() methods for key/value pairs.
    • JSONArray – JSON array with add() methods.
  • Collection – represents a named collection; you can insert, get, getAll, drop, and run queries.

The code and data used in this sample are based directly on the examples provided in our Documentation.

5.1 Making the connection

First, the bits we need for a basic connection:

  • Hostname, port, namespace, user, password – from the deployment’s “external connections” information.
  • The deployment’s X.509 certificate, imported into a Java keystore.
  • A small SSLConfig.properties file so the driver knows which keystore to use.

Building a TLS-enabled DataSource

Here’s a compact example that focuses on the connection itself:

 

Java Connection Code

import com.intersystems.document.DataSource;

    // 1. Create and configure the DataSource (connection pool)
    DataSource pool = DataSource.createDataSource();
    pool.setServerName(serverName);
    pool.setPortNumber(port);
    pool.setDatabaseName(namespace);
    pool.setUser(user);
    pool.setPassword(password);

    // Require TLS – connectionSecurityLevel 10 enables TLS.
    pool.setConnectionSecurityLevel(10);

    pool.preStart(5);
    pool.getConnection();  // open a first connection over TLS (forces pool creation)

If SSLConfig.properties and keystore.jks are set up correctly, the first getConnection() call should establish a connection over TLS to your Cloud Document deployment.

The Cloud Document Java driver looks for this SSLConfig.properties file and uses it when you set connectionSecurityLevel to require TLS.

This is what this file would look like:

 

SSLConfig.properties file sample

# SSL/TLS configuration for InterSystems Java client
# This file MUST be named SSLConfig.properties and be in the application's working directory.
# The Docker image will create /app/keystore.jks at container startup.
debug=false
protocol=TLS
trustStore=keystore.jks
trustStorePassword=changeit
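As a quick sanity check that both files are in place and consistent with each other, you can exercise them with plain JDK classes. This is only a conceptual sketch of what the driver does with SSLConfig.properties (its actual internals may differ), and to keep the snippet self-contained it first writes a sample properties file and an empty throwaway keystore; in real use both files already exist:

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.FileReader;
import java.io.FileWriter;
import java.security.KeyStore;
import java.util.Properties;

public class TlsConfigCheck {
    public static void main(String[] args) throws Exception {
        char[] pass = "changeit".toCharArray();

        // --- demo setup only: write the two files so this snippet runs anywhere.
        // In real use, keystore.jks was built with keytool from your deployment's cert.
        KeyStore empty = KeyStore.getInstance("JKS");
        empty.load(null, pass);                          // create an empty keystore
        try (FileOutputStream out = new FileOutputStream("keystore.jks")) {
            empty.store(out, pass);
        }
        try (FileWriter w = new FileWriter("SSLConfig.properties")) {
            w.write("protocol=TLS\ntrustStore=keystore.jks\ntrustStorePassword=changeit\n");
        }

        // --- conceptually what the driver does: read the properties file,
        // then open the keystore it points at with the configured password.
        Properties ssl = new Properties();
        try (FileReader r = new FileReader("SSLConfig.properties")) {
            ssl.load(r);
        }
        KeyStore trust = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(ssl.getProperty("trustStore"))) {
            trust.load(in, ssl.getProperty("trustStorePassword").toCharArray());
        }
        System.out.println("Loaded trustStore '" + ssl.getProperty("trustStore")
            + "' with " + trust.size() + " certificate(s).");
    }
}
```

If this loads your real keystore.jks without an exception (and reports at least one certificate), the driver should be able to use it too.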

In the Docker sample I provided, there is a script that takes care of this for you.

If you're running your own samples, you can use a line like this one:

keytool -importcert -file /path/to/certs/cloud-document.pem -keystore keystore.jks

  • Answer yes when asked whether you want to trust the certificate.
  • Set a keystore password and remember it; it goes into SSLConfig.properties as trustStorePassword.

In the Docker sample, our script does this:

 

docker-entrypoint.sh certificate handling

echo "Importing certificate into keystore..."
keytool -importcert -noprompt -alias "$ALIAS_NAME" -file "$CERT_PATH" -keystore "$KEYSTORE_PATH" -storepass "$KEYSTORE_PASSWORD"

5.2 Ingesting data from Java

Once we have a DataSource, we work with collections and documents.

  • Collection is the named container, like colors or demoPeople.
  • A document is a JSONObject or a JSONArray, both subclasses of Document.

Here’s a small “ingest” example in the same spirit as the JSON file we imported in the UI earlier, this time using a demoPeople collection of name/age/city documents.

 

Java Ingest Code

import com.intersystems.document.*;   // Collection, Document, JSONObject, JSONArray, BulkResponse
import java.util.ArrayList;
import java.util.List;

    // 2. Get (or create) the collection
    Collection people = Collection.getCollection(pool, collectionName);
    if (people.size() > 0) {
        System.out.println("\nCollection '" + people.getName() + "' already has "
            + people.size() + " documents. Dropping them for a clean demo...");
        people.drop();
    }
    System.out.println("Using collection: " + people.getName());

    // 3. Insert a very simple array document
    Document docOne = new JSONArray()
        .add("Hello from Cloud Document (Docker demo)");
    
    String id1 = people.insert(docOne);
    System.out.println("\nInserted docOne (JSONArray) with id " + id1);

    // 4. Insert a JSONObject document
    Document docTwo = new JSONObject()
        .put("name", "John Doe")
        .put("age", 42)
        .put("city", "Boston");

    String id2 = people.insert(docTwo);
    System.out.println("Inserted docTwo (JSONObject) with id " + id2);

    // 5. Bulk insert of multiple JSONObject documents
    List<Document> batch = new ArrayList<>();
    batch.add(new JSONObject()
        .put("name", "Jane Doe")
        .put("age", 20)
        .put("city", "Seattle"));
    batch.add(new JSONObject()
        .put("name", "Anne Elk")
        .put("age", 38)
        .put("city", "London"));

    BulkResponse bulk = people.insert(batch);
    System.out.println("Bulk insert completed. New ids: " + bulk.getIds());

A few notes:

  • Collection.getCollection(pool, name) will create the collection on first use if it doesn’t exist.
  • insert() returns the document ID assigned by Cloud Document.
  • insert(List<Document>) does a bulk write and returns all the IDs in a BulkResponse.

This is the same basic pattern you’d use in an application ingesting JSON from a file, a queue, or an API.
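For example, when ingesting from a file or a queue you would typically accumulate documents and insert them in batches rather than one at a time. Here is a driver-free sketch of that loop; the flush method is a stand-in for building JSONObject instances and calling people.insert(batch) (the names here are illustrative, not driver API):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchIngestSketch {
    static final int BATCH_SIZE = 2;

    // Stand-in for building Documents and calling people.insert(batch) in the real app.
    static void flush(List<String> batch) {
        System.out.println("Inserting batch of " + batch.size() + " document(s)");
        batch.clear();
    }

    public static void main(String[] args) {
        // In a real app these lines would come from a file, a queue, or an API.
        String[] jsonLines = {
            "{\"name\":\"John Doe\",\"age\":42}",
            "{\"name\":\"Jane Doe\",\"age\":20}",
            "{\"name\":\"Anne Elk\",\"age\":38}"
        };
        List<String> batch = new ArrayList<>();
        for (String line : jsonLines) {
            batch.add(line);
            if (batch.size() == BATCH_SIZE) {
                flush(batch);        // people.insert(batch) in the real code
            }
        }
        if (!batch.isEmpty()) {
            flush(batch);            // don't forget the final partial batch
        }
    }
}
```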

5.3 Querying and fetching data

On the Java side you have two main options:

  1. Use the collection-centric APIs (getAll, createShorthandQuery, etc.).
  2. Use regular SQL (for example with JDBC directly) and JSON_TABLE when you want rich SQL projections.

For a first experience, the collection APIs are usually enough.

Listing all documents in a collection and searching for some

 

Java Fetch/Query Code

import com.intersystems.document.*;   // Document, ShorthandQuery, Cursor
import java.util.List;

    // 6. Retrieve and display all documents in the collection
    System.out.println("\nAll documents in collection '" + collectionName + "':");
    List<Document> allDocuments = people.getAll();
    for (Document d : allDocuments) {
        System.out.println("  " + d.getID() + ": " + d.toJSONString());
    }
    System.out.println("Collection size reported by server: " + people.size());

    // 7. Run a shorthand query
    String shorthand = "name > 'H' AND age >= 21";
    System.out.println("\nRunning shorthand query: " + shorthand);

    ShorthandQuery query = people.createShorthandQuery(shorthand);
    Cursor results = query.execute();

    System.out.println("Shorthand query returned " + results.count() + " result(s).");
    while (results.hasNext()) {
        Document d = results.next();
        System.out.println("  " + d.toJSONString());
    }

What’s happening here:

  • getAll() gives you every document in the collection as Document objects.
  • createShorthandQuery("name > 'H'") creates a query that’s conceptually similar to WHERE name > 'H' in SQL.
  • Cursor lets you iterate the results and also ask for a count.
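To make the “behaves like a WHERE clause” idea concrete, here is plain Java applying the same predicate by hand to the values we inserted earlier; this is conceptually what the shorthand query evaluates server-side (it is not driver code):

```java
public class ShorthandSemantics {
    public static void main(String[] args) {
        // The same demo records we inserted in section 5.2.
        String[] names = {"John Doe", "Jane Doe", "Anne Elk"};
        int[]    ages  = {42, 20, 38};

        // Equivalent of the shorthand query "name > 'H' AND age >= 21":
        // string comparison on name, numeric comparison on age.
        for (int i = 0; i < names.length; i++) {
            if (names[i].compareTo("H") > 0 && ages[i] >= 21) {
                System.out.println(names[i]);
            }
        }
    }
}
```

Only "John Doe" passes: "Jane Doe" fails the age check, and "Anne Elk" sorts before 'H'.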

If you later want to bring this into the SQL world, the same collections you touched here can be queried with JSON_TABLE in the SQL UI or via JDBC. That’s one of the nice aspects of Cloud Document: you don’t have to choose between “document API” and “SQL”; you get both.

6. Setting up and Running the Sample

As mentioned, I'm providing a Dockerized sample to ensure as smooth an experience as possible, without requiring you to manually download and install various components. But if you'd prefer, you can take the same sample and run it on your own setup.

The Open Exchange and related GitHub repository include detailed instructions for running, but at a high level it comes down to:

6.1 Update .env file and place TLS certificate

This is what your environment variables file might look like after you edit it:

 

Environment Variables .env Edited File (example)

# Copy this file to .env and fill in your values

# Cloud Document connection settings
IRIS_HOST=k8s-e8c99d11-a90ppp7q-333333jj22-2222o11o111oo1o1.elb.us-east-1.amazonaws.com
IRIS_PORT=443
IRIS_NAMESPACE=USER
IRIS_USER=SQLAdmin
IRIS_PASSWORD=verySECRETpassword12345*

# Optional: collection name (override default)
COLLECTION_NAME=demoPeople

# Absolute path on your host to the Cloud Document X.509 certificate
CERT_FILE_HOST_PATH=./cert/certificateSQLaaS.pem
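Inside the container, docker compose exposes these .env entries as environment variables, which the Java side can pick up with System.getenv. A minimal sketch, assuming the variable names above (the fallback defaults here are illustrative, not necessarily what the repo uses):

```java
public class EnvConfig {
    // Read an environment variable, falling back to a default when unset.
    static String env(String name, String fallback) {
        String value = System.getenv(name);
        return (value == null || value.isEmpty()) ? fallback : value;
    }

    public static void main(String[] args) {
        String host       = env("IRIS_HOST", "localhost");
        int    port       = Integer.parseInt(env("IRIS_PORT", "443"));
        String namespace  = env("IRIS_NAMESPACE", "USER");
        String collection = env("COLLECTION_NAME", "demoPeople");
        System.out.println("Connecting to " + host + ":" + port
            + "/" + namespace + ", collection " + collection);
    }
}
```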

6.2 Run docker compose

Just run docker compose up --build and the sample will run.

Behind the scenes, it will:

  • Stage 1: Use a Maven + JDK image to build a shaded JAR.
  • Stage 2: Use a slim JDK image, copy the JAR and SSLConfig.properties, create a keystore from your cert at container startup, then run the JAR.
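Put together, a two-stage Dockerfile along those lines might look roughly like this. It's a sketch rather than the repo's actual file (the image tags, paths, and JAR name are assumptions):

```dockerfile
# Stage 1: build a shaded JAR with Maven
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /build
COPY pom.xml .
COPY src ./src
RUN mvn -q package

# Stage 2: slim runtime image
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=build /build/target/app-shaded.jar app.jar
COPY SSLConfig.properties .
# docker-entrypoint.sh imports the mounted certificate into /app/keystore.jks
# at container startup, then runs: java -jar app.jar
COPY docker-entrypoint.sh .
ENTRYPOINT ["./docker-entrypoint.sh"]
```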

Here's a short video demonstrating this:

Wrapping up

If you’re new to InterSystems but not new to programming, the basic path to a good first experience with IRIS Cloud Document is:

  1. Bring the service up: create a deployment, note host/port/namespace/credentials, download the certificate.
  2. Kick the tires in the web portal: upload a JSON file, import into a collection, browse with the Collection Browser, and run a simple SQL query with JSON_TABLE.
  3. Wire it into Java:
    • create a TLS-enabled DataSource (with SSLConfig.properties + keystore),
    • use Collection and Document to ingest data,
    • and query with getAll and shorthand queries.

From there you can iterate toward more interesting things: updates, deletes, richer queries, combining Cloud Document data with relational data, or using other drivers like .NET.

But if you’ve followed along to this point and seen your own JSON documents come back from the Java code, you’ve already taken the most important step: you’re up and running in the InterSystems ecosystem.

Enjoy!