Zen users needing alternative solutions may find some of our (MGateway) technologies useful.  EWD has been around for longer than Zen and is still an actively-supported product that is used by some extremely large Cache/IRIS users.  Contact me if interested.

Alternatively, take a look here for our latest high-performance back-end integration technologies for IRIS, and also our glsdb JSON abstraction for JavaScript/Node.js (and Bun.js), which may provide a useful (and very simple yet effective) solution for your JSON needs:

https://github.com/robtweed/mg-showcase

You'll see our range of solutions summarised on our web site at www.mgateway.com

If you want to use IRIS with Node.js, I suggest you take a look at our mg_dbx_napi interface rather than the functionally limited and extremely slow Native JS API that ships with IRIS and that you've already tried out.

Full details are here:

https://github.com/chrisemunt/mg-dbx-napi
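
To give a flavour of what that looks like, here's a minimal connection and Global-access sketch.  It's based on my reading of the mg-dbx-napi README, so treat the open() parameters, the installation path and the credentials as placeholders to be adjusted for your own system:

      // Minimal sketch: open a connection to a local IRIS installation and
      // read/write a Global directly from Node.js.
      // The path and credentials below are placeholder assumptions.
      const { server, mglobal } = require('mg-dbx-napi');

      const db = new server();
      db.open({
        type: 'IRIS',
        path: '/usr/irissys/mgr',      // adjust to your IRIS installation directory
        username: '_SYSTEM',
        password: 'SYS',
        namespace: 'USER'
      });

      // Set and get a node of the persistent Greeting Global
      const greeting = new mglobal(db, 'Greeting');
      greeting.set('en', 'Hello world');
      console.log(greeting.get('en'));

      db.close();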

You can find additional information, benchmark comparisons with the Native API, and working examples and tutorials about mg_dbx_napi (and much more!) if you install and try out our mg-showcase Container for IRIS:

https://github.com/robtweed/mg-showcase

Hi Michael

Can I suggest you first read this article on our mg_web product, which is designed for anyone in exactly your situation:

https://github.com/chrisemunt/mg_web/blob/master/why_mg_web.md

Some additional historical background that may help:

Something you should note is that a lot of people refer generically to WebLink, but there were always two distinct parts to it:

- the core WebLink gateway, which provided the physical connection between the web server (usually either Apache or IIS) and one or more Cache databases

- an early web application framework that I designed and built on top of the WebLink gateway, known as WebLink Developer.  This pioneered many areas of functionality that have since become commonplace in web applications generally, and was created in no small part to help developers who were using WebLink directly and, in doing so, creating all manner of serious security issues for themselves.  The use of WebLink Developer was optional, and many people built their own Cache-based web applications using the core WebLink gateway and its APIs.

So, when you say you have a program involving WebLink, you first need to determine whether it's actually a WebLink Developer application or separately handcrafted logic that makes use of the core WebLink gateway.

Some technical background that may also help you:

Whilst WebLink Developer was superseded by CSP-based applications and other non-InterSystems frameworks (e.g. my own EWD framework), the core WebLink gateway continued in active service right up until the introduction of IRIS.  InterSystems made the commercial decision to no longer support WebLink in IRIS.

The architecture of the core WebLink gateway was an innovative queue/dispatch mechanism that ran as an add-on module in the web server and maintained persistent, stateless connections to one or more Cache databases.  This had an interesting effect in terms of licensing: each connection consumed a single Cache license, but the queue/dispatch mechanism allowed any number of concurrent web application users to be supported via the configured WebLink-Cache connections.  If there were too few connections to support the concurrent traffic, requests were simply queued and took longer to be handled, but you'd never actually run out of Cache licenses, no matter how many concurrent users you had: ideal for use on the open Internet, where user numbers were impossible to control.

In terms of technical capability, contrary to widespread belief, there was never anything that CSP's core gateway could do that WebLink's core gateway couldn't also do equally well.  On the Cache side of a WebLink connection, you had full access to all of Cache's capabilities via ObjectScript, including Objects and SQL.  My colleague Chris Munt designed and created the WebLink gateway and assisted InterSystems in the design and support of the core CSP gateway, so we have full knowledge of the comparative capabilities of both technologies.

CSP was designed by InterSystems to replace the core WebLink gateway and inherently included an application framework that was completely based around the use of Cache Objects.  Critically, CSP also added an embedded mechanism that enforced stricter licensing by artificially inflating the number of licenses needed to support a given number of concurrent users (leading to the infamous "grace period" and the resulting risk of running out of Cache licenses on a busy system).  When CSP was introduced, WebLink was deprecated by InterSystems and no longer actively promoted, but was retained as a supported product for any existing legacy users.

Unfortunately no viable migration pathway was ever available for legacy WebLink users other than a potentially risky, time-consuming and costly rewrite of applications to CSP, or a rip-and-replace migration to a (sadly) more commonplace mainstream back-end web architecture and database technology.  The result is that there are actually still quite a lot of WebLink users out there, and some are extremely large users indeed!

So, with all that in mind, you can hopefully understand our rationale for creating mg_web and why it's gaining traction with some of the big legacy WebLink users.  We (MGateway) provide technical support for mg_web and, having been instrumental in introducing WebLink in the first place, can provide expert technical assistance in WebLink migrations.  Please contact my colleague Chris Munt in the first instance - his contact details can be found via the mg_web repository.

Just bringing this thread back to life to point out that glsdb comes pre-installed in the mg-showcase repository:

https://github.com/robtweed/mg-showcase

As my original post said, glsdb offers a radical re-imagining of database access for back-end JavaScript developers.  Take this snippet from one of the mg-showcase examples:

      let person = new this.glsdb.node('Person').proxy;
      let key = person.nextId++;
      person.data[key] = messageObj.data.body;

Look: no database in sight.  It all appears to be just plain JavaScript objects, yet this code is incrementing a counter in the persistent Person Global and mapping the JSON from the request body straight into the specified Person's data property in that Global.
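
Reading the record back looks equally "database-free".  This is a hypothetical continuation of the snippet above, assuming the same proxy object and the key generated there, just to show the symmetry:

      // Hypothetical read-back, assuming the same proxy object and key as above:
      // the apparent object property access is actually fetching the stored JSON
      // back out of the persistent Person Global.
      let storedPerson = person.data[key];
      console.log(storedPerson);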

And all with the insane performance provided by mg-dbx-napi.

Try it out for yourself with the build-and-run mg-showcase IRIS Containers!

Of course, in the Node.js world, caching of key/value data is usually done using Redis, generally considered one of the fastest key/value stores available.  So how does it compare with mg-dbx-napi on an equivalent loop creating and reading key/value pairs?  Well, you can test it for yourself: the IRIS containers include a pre-installed copy of Redis and both benchmark tests.  On our M1 Mac Mini, using the standard Redis connector for Node.js, we get a mere 17,000/sec: that's for both reads and writes.  Even in pipelined mode it only maxes out at around 250,000/sec.
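
For reference, the Redis side of that kind of test is just a simple loop.  The sketch below uses the standard node-redis client and is only illustrative of the shape of the test (it isn't the actual mg-showcase benchmark code, and the key count is arbitrary):

      // Illustrative Redis key/value benchmark loop using the standard
      // node-redis client (not the actual mg-showcase benchmark code).
      const { createClient } = require('redis');

      async function run() {
        const client = createClient();        // assumes Redis on localhost:6379
        await client.connect();

        const n = 100000;                     // arbitrary number of key/value pairs
        const start = Date.now();

        for (let i = 0; i < n; i++) {
          await client.set('key' + i, 'value' + i);
        }
        for (let i = 0; i < n; i++) {
          await client.get('key' + i);
        }

        const seconds = (Date.now() - start) / 1000;
        console.log(Math.round((2 * n) / seconds) + ' operations/sec');

        await client.quit();
      }

      run();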

And of course, IRIS can do so much more than just key/value pairs.

Try our benchmarks out for yourself and perhaps let us know your results.

One important thing to note about mg-dbx-napi: it not only gives you access to the underlying Global storage, it also provides APIs for accessing IRIS Classes and SQL from within JavaScript.  See:

https://github.com/chrisemunt/mg-dbx-napi#direct-access-to-intersystems-...

https://github.com/chrisemunt/mg-dbx-napi#direct-access-to-sql-mgsql-and...
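
To give an idea of the class access, here's a rough sketch.  The method names are taken from my recollection of the mg-dbx-napi documentation, Sample.Person is just a hypothetical class, and it assumes an already-open server connection (db) as in the earlier connection example, so check the links above for the definitive signatures:

      // Rough sketch of IRIS class access from JavaScript via mg-dbx-napi.
      // Assumes an already-open server connection (db); Sample.Person is a
      // hypothetical class used purely for illustration.
      const person = db.classmethod('Sample.Person', '%New');
      person.setproperty('Name', 'John Smith');
      person.method('%Save');

      // ...and re-open the saved object by Id
      const saved = db.classmethod('Sample.Person', '%OpenId', 1);
      console.log(saved.getproperty('Name'));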

These APIs are also made available if you're using our QOper8 package for handling requests in a Node.js or Bun.js child process - via the this.mgdbx object.

Full stack developers don't actually need to learn ObjectScript to use IRIS: everything can be done in JavaScript.  Unfortunately this isn't something that is very well known or understood, which is one reason we've just released this repository, which will hopefully get you started and show you what's possible using our mg-dbx-napi interface:

https://github.com/robtweed/mg-showcase

OK, to answer my own question, it looks like I can create a Dockerfile like this simple example:

        FROM containers.intersystems.com/intersystems/iris-community-arm64:2023.2.0.227.0

        USER root

        RUN apt update && apt install -y \
          wget \
          curl \
          locate \
          nano

        USER irisowner

It appears to build a customised version of the community edition that can then be started in the usual way.  The key piece is switching USER to root and then back again to irisowner before the end of the Dockerfile.
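
For anyone wanting to reproduce this, the build and run steps are just the usual Docker ones (the image and container names below are arbitrary examples, and the ports are the standard IRIS superserver and management portal ports):

        docker build -t iris-community-custom .
        docker run -d --name my-iris -p 1972:1972 -p 52773:52773 iris-community-custom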

I'm happy, for this exercise, to limit it to just the ARM version.  I've currently customised the container manually, but I'd like to automate the process so I can restart it in a customised state without repeating the manual steps all over again.  Your example is therefore a lot more complex than I need.

I could, of course, add a bash script file that is accessed via a shared volume and executed to do the customisation, but a Dockerfile would be easier if it was possible.

I think this is something that others may want to be able to do too?

You should take a look at this alternative Open Source Node.js client for IRIS, mg_dbx:

https://github.com/chrisemunt/mg-dbx

It provides significantly better performance.

To use its synchronous APIs safely you need to look at either:

- QEWD: https://github.com/robtweed/qewd

- qoper8-fastify: https://github.com/robtweed/qoper8-fastify/blob/master/IRIS.md

An observation: the administrators of this forum regularly report the size of this community, currently an impressive number in excess of 12,000 developers, yet, as can be seen from the OP, the number of views of even the most popular posts is tiny by comparison.  From what I've seen, a typical post will get around 50 views, and each comment will add a further 50 or so: a meagre 0.4% of the total community.  This leads me to believe that there is actually only a small core (~50?) of regular contributors and readers (many of whom, I suspect, are actually ISC employees).

It does make me wonder what, if anything, most of those 12,000 Cache/IRIS developers read, and what site(s) they use instead. Perhaps they don't read anything? 

My reason for raising this is that, for a third-party developer of solutions for these technologies, reaching and informing those 12,000 developers is a significant problem, and it's fairly clear to me that this forum currently doesn't provide a very effective solution to it.

My question/challenge for 2023 is therefore: what do folks here (the regular 50?) think can be done to provide a more effective way to reach that reported developer base?