go to post
Glad you liked the articles, Ken. How IRIS (and Cache) handles the physical side of global storage is, of course, proprietary, but, as the article Alexander referred to explains, it's done using a fairly classic B-tree architecture. For performance, access to the physical database is buffered in memory, the amount of which you can configure. Additionally, IRIS and Cache both have ECP networking, which adds an amazing level of additional power and flexibility: Globals can be transparently abstracted across networked machines. Needless to say, the technicalities of ECP are a closely guarded proprietary secret!
As my articles explain, however, you can actually implement the basics of global storage on top of a number of other different databases, with BerkeleyDB being probably the closest example to how IRIS and Cache implement them.
Of course, for the average user, how the concept/abstraction of Global Storage is physically implemented is of less interest than how you can harness and make use of Global Storage to do the kinds of things you want to do. That, of course, was the focus of my articles, to show just some of the most common ways (and some of the lesser-known and very sophisticated ways) in which you can harness Global Storage.
I've sometimes described Global Storage as a "proto-database" - a very simple but powerful and flexible database engine on which you can model pretty much anything else and on which you can layer all the other stuff you need to create your specific database environment. As such, it's unique in the database marketplace, and it has always baffled me over the years why it's so little known about and used: it blows away everything else out there.
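For anyone who hasn't seen a global before, the abstraction itself can be sketched in a few lines of JavaScript. This is a toy, in-memory stand-in - emphatically not how IRIS physically implements globals, which is proprietary and B-tree based - but it shows the simple model that everything else gets layered on top of:

```javascript
// A toy sketch of the Global Storage model: a sparse, hierarchical
// key/value tree addressed by a global name plus a list of subscripts.
class GlobalStore {
  constructor() {
    this.root = new Map();
  }

  // set ^name(sub1, sub2, ...) = value
  set(name, subscripts, value) {
    this._descend(name, subscripts, true).value = value;
  }

  // get ^name(sub1, sub2, ...) -> value, or undefined if no such node
  get(name, subscripts) {
    const node = this._descend(name, subscripts, false);
    return node ? node.value : undefined;
  }

  _descend(name, subscripts, create) {
    let node = this._child(this.root, name, create);
    for (const sub of subscripts) {
      if (!node) return undefined;
      node = this._child(node.children, sub, create);
    }
    return node;
  }

  _child(map, key, create) {
    if (!map.has(key)) {
      if (!create) return undefined;
      map.set(key, { value: undefined, children: new Map() });
    }
    return map.get(key);
  }
}

// Model a record much as you might with a real global:
//   ^Patient(123,"name") = "Rob"
const db = new GlobalStore();
db.set('Patient', [123, 'name'], 'Rob');
db.set('Patient', [123, 'city'], 'Redhill');
```

Everything from relational tables to document stores can be modelled as patterns of such subscripted nodes, which is why I call it a proto-database.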
Anyway, a Happy New Year to all fans of IRIS and Global Storage!
Rob
go to post
If you want to use IRIS with Node.js, you might also want to look at QEWD. Rather than using the IRIS Native Node.js APIs, it uses the alternative Open Source mg-dbx interface and its APIs (https://github.com/chrisemunt/mg-dbx), and QEWD creates an architecture that allows the synchronous APIs to be used safely (via a queue/dispatch architecture, which gives QEWD its name). With QEWD and the mg-dbx interface, you can run Node.js either on the same machine/system as IRIS or on separate machines/systems via a networked connection, and you can run QEWD either as a native installation or via a pre-built/configured Dockerised version, so it's extremely flexible.
QEWD can be used to develop both Node.js REST services and interactive web applications, the latter connecting users' browsers and your back-end over WebSockets. QEWD does all the tricky technical work for you, leaving you to just focus on your application or API functionality.
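The essence of that queue/dispatch pattern can be sketched in plain JavaScript. This is illustrative only - in reality QEWD hands each queued request off to one of a pool of separate worker processes - but it shows why synchronous database APIs become safe: each worker handles exactly one request at a time, so synchronous calls never block or re-enter anything else:

```javascript
// Toy queue/dispatch sketch: requests are queued, and each "worker"
// slot processes one request at a time before the next is dispatched.
class Dispatcher {
  constructor(workerCount, handler) {
    this.queue = [];
    this.available = workerCount; // free worker slots
    this.handler = handler;       // worker-side logic; may use sync APIs
  }

  send(request, callback) {
    this.queue.push({ request, callback });
    this.dispatch();
  }

  dispatch() {
    if (this.available === 0 || this.queue.length === 0) return;
    const { request, callback } = this.queue.shift();
    this.available--;
    // In QEWD this is a hand-off to a separate worker process; because
    // a worker runs one request at a time, synchronous database calls
    // inside the handler are safe
    const result = this.handler(request);
    this.available++;
    callback(result);
    this.dispatch(); // pull the next queued request, if any
  }
}

const dispatcher = new Dispatcher(1, req => req.n * 2);
const results = [];
dispatcher.send({ n: 1 }, r => results.push(r));
dispatcher.send({ n: 2 }, r => results.push(r));
```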
Here's a few ways to get started with QEWD:
Native IRIS/QEWD: https://github.com/robtweed/qewd-baseline/blob/master/IRIS-WINDOWS.md
Using Dockerised IRIS: https://github.com/robtweed/qewd-baseline/blob/master/IRIS.md
Networked connection between QEWD/Node.js and IRIS: https://github.com/robtweed/qewd-starter-kit-iris-networked
Although it allows use of IRIS via its standard Object and SQL architecture and APIs, QEWD can also abstract the IRIS database as persistent JavaScript Objects/JSON, which allows you to access and use an IRIS database in a completely unique and powerful way that makes it a natural fit with JavaScript. To explore this aspect of QEWD with IRIS, check out:
https://github.com/robtweed/qewd-jsdb-kit-iris
These repositories and the documentation/examples/tutorials they contain should get you started
Rob
go to post
Calling any interested Python developers: it would be interesting for someone to take a look at what I've done in JavaScript/Node.js to implement what I refer to as persistent JSON objects: JavaScript objects/JSON that reside on disk rather than in memory, and that can be manipulated, traversed and modified in situ rather than being shuttled between disk and memory. It's a concept I've referred to as QEWD-JSdb. Given the native support for JSON that now exists in Python, it strikes me that the equivalent ought to be possible using the new embedded Python in IRIS. I'm not a Python developer, so I'm not the person to implement it, but all the logic for implementing this concept is available in my JavaScript repositories, and my assumption is that it should be a matter of recasting that logic in Python. The really cool part is the multi-model stuff you can then derive from it, such as the persistent DOM against which a standard XPath library can be applied for searches. For information on what I'm referring to, see:
https://github.com/robtweed/qewd-jsdb
and a more broad discussion of the underlying concepts:
https://github.com/robtweed/global_storage
If anyone is interested in such a project, all I ask is for the appropriate attribution for the original concept.
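By way of illustration, here's the essence of the idea in a few lines of self-contained JavaScript: a JSON object flattened into leaf nodes keyed by their key path (exactly the shape in which a global database stores it), and rebuilt from those nodes. The function names are just illustrative - the real implementation lives in the repositories above - but recasting something like this against IRIS globals is the project I'm describing:

```javascript
// Flatten a JSON object into leaf nodes whose "subscripts" are the key
// path - the shape such an object takes when stored in a global.
// (Arrays and in-situ traversal are handled in the real thing; this
// toy version sticks to plain nested objects.)
function flatten(obj, path = [], nodes = new Map()) {
  for (const [key, value] of Object.entries(obj)) {
    if (value !== null && typeof value === 'object') {
      flatten(value, path.concat(key), nodes);
    } else {
      nodes.set(JSON.stringify(path.concat(key)), value);
    }
  }
  return nodes;
}

// Reconstruct the object from its leaf nodes
function rebuild(nodes) {
  const obj = {};
  for (const [key, value] of nodes.entries()) {
    const path = JSON.parse(key);
    let node = obj;
    for (const part of path.slice(0, -1)) {
      node[part] = node[part] || {};
      node = node[part];
    }
    node[path[path.length - 1]] = value;
  }
  return obj;
}

const patient = { id: 123, name: 'Rob', address: { city: 'Redhill' } };
const nodes = flatten(patient);   // leaf nodes, like global references
const roundTrip = rebuild(nodes); // reconstructed object
```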
go to post
If you are using JWTs as bearer tokens, then you need to provide a logout REST API that invalidates the JWT at the back-end and returns a response without a bearer token. See the QEWD/mg_web Conduit application examples for how it's done
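As a sketch of the flow (with illustrative names only - see the Conduit examples for how it's actually done in QEWD/mg_web): maintain a server-side revocation list keyed on the token's jti claim, add the jti on logout, respond without issuing a fresh bearer token, and reject revoked or expired tokens thereafter. Signature verification is omitted here for brevity, but a real service would verify the JWT first:

```javascript
// Server-side JWT invalidation on logout, via a revocation list
const revoked = new Set();

function logout(claims) {
  // Invalidate the JWT at the back end ...
  revoked.add(claims.jti);
  // ... and answer WITHOUT a fresh bearer token in the response
  return { status: 200, body: { loggedOut: true } };
}

function isValid(claims, nowSeconds) {
  if (revoked.has(claims.jti)) return false;  // logged out
  if (claims.exp <= nowSeconds) return false; // expired
  return true;
}

const claims = { jti: 'abc-123', sub: 'rob', exp: 2000000000 };
```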
go to post
All this kind of stuff is built into both QEWD's REST services and mgweb-server with appropriate APIs for all the JWT/Bearer Token life-cycle. Find both on my Github repos:
https://github.com/robtweed/qewd
https://github.com/robtweed/mgweb-server
See the REST/IRIS examples based on the RealWorld/Conduit application which use JWT-based authentication carried as bearer tokens:
https://github.com/robtweed/mgweb-conduit
https://github.com/robtweed/qewd-conduit
go to post
IT techniques and trends go in cycles - it's always been that way and still is. Watch for the latest trend that's beginning to gain traction, known as "HTML over the wire". It's basically server-side code that generates fragments of markup which are delivered over either HTTP or WebSockets and dynamically update the DOM - stuff we were doing years ago in WebLink Developer and then EWD (which could be compiled to either (functionally identical) WebLink or CSP code). The capabilities at the back-end really haven't changed or improved since the mid-1990s - just lots of new buzzwords to describe the same old ways of skinning the cat. The key difference is the performance you'll get from an IRIS/Cache/M back-end compared with every other database-linked back-end out there. Anyway, the good news is that you have loads of choice these days for how you want to engineer and architect your web technology using IRIS, via what ISC provide and what others such as ourselves provide as Open Source solutions (eg see our mg_web, mgweb-server and QEWD technologies).
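The pattern itself is trivially simple, which is rather the point. A minimal sketch (illustrative only - not code from WebLink, EWD or any other product mentioned): the server renders a markup fragment from data, ships it over HTTP or a WebSocket, and the browser swaps it straight into the DOM, with no client-side templating or JSON round trip:

```javascript
// Server side: turn data into a ready-to-insert markup fragment
function renderRows(patients) {
  const rows = patients
    .map(p => `<tr><td>${p.id}</td><td>${p.name}</td></tr>`)
    .join('');
  return `<tbody id="patients">${rows}</tbody>`;
}

// The fragment would then be pushed over HTTP or a WebSocket, and the
// client would simply do something like:
//   document.getElementById('patients').outerHTML = fragment;
const fragment = renderRows([{ id: 1, name: 'Rob' }]);
```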
go to post
I have to agree with Herman. When WebLink was first acquired by ISC, and during the period 1996-2000, we were demonstrating things that no other web tech at the time could do. Chris's Event Broker technology, originally released as a Java-based technology that ran in the browser, preceded and delivered what much later became known as Ajax. I remember John Bertoglio - at the time a web developer from outside the ISC user community who had discovered what we were doing - telling me that, in his estimation, we had a lead of between 5 and 8 years over the rest of the industry. We did try to lead the horse to water and all that, but hey...
go to post
Something that isn't, I think, widely known or realised is that if you install a 64-bit version of Linux on a Raspberry Pi (eg Ubuntu 20.04 64-bit for ARM), and, if you have an M1 Apple Mac, install Parallels on it with an Ubuntu 20.04 VM, the two environments (Raspberry Pi and M1 Mac VM) are functionally identical. You can literally move the exact same code from the M1 Mac VM to the Raspberry Pi and it will work. Make it all even simpler by installing Docker on both the Raspberry Pi and the M1 Mac VM, and a container built on one will, again, work identically on the other.
The only difference, of course, is the massively faster speed of execution you'll get on the M1 Mac (and the huge difference in cost!).
I've had a lot of fun with the various QEWD-related Docker containers I've created: there's something quite magical about watching a container built on the M1 Mac working, without any change, on a Raspberry Pi! It's what, IMO, makes ARM-64 a very interesting platform for the future
go to post
Take a look at QEWD.js - it works extremely well (and fast!) with IRIS on the Raspberry Pi, and you'll be able to build REST APIs and interactive apps using it. See:
https://github.com/robtweed/qewd
but also look at the QEWD-related postings I've put in here on Open Exchange:
https://openexchange.intersystems.com/package/QEWD-js
https://openexchange.intersystems.com/package/qewd-jsdb-kit-iris
https://openexchange.intersystems.com/package/qewd-conduit
go to post
Take a look at the ObjectScript functions $NAME, $QUERY, $QSUBSCRIPT and $QLENGTH
https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=RC...
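To give a feel for what two of these do, here are rough JavaScript re-implementations of the semantics of $QLENGTH and $QSUBSCRIPT. The real functions are, of course, built into ObjectScript; this toy parser only handles simple references with numeric or double-quoted string subscripts:

```javascript
// Parse a global reference such as '^Patient(123,"name")' into its
// name and subscripts (toy version: no nesting, no $QSUBSCRIPT(-1))
function parseRef(ref) {
  const open = ref.indexOf('(');
  if (open === -1) return { name: ref, subs: [] };
  const name = ref.slice(0, open);
  const inner = ref.slice(open + 1, ref.lastIndexOf(')'));
  const subs = inner
    .match(/"[^"]*"|[^,]+/g) // quoted strings first, then bare values
    .map(s => (s.startsWith('"') ? s.slice(1, -1) : s.trim()));
  return { name, subs };
}

// $QLENGTH(ref): number of subscripts
const qlength = ref => parseRef(ref).subs.length;

// $QSUBSCRIPT(ref, n): the nth subscript; n = 0 gives the global name
const qsubscript = (ref, n) =>
  n === 0 ? parseRef(ref).name : parseRef(ref).subs[n - 1];
```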
go to post
The difference is described in full in these two documents:
https://github.com/robtweed/global_storage/blob/master/Subscripts.md
https://github.com/robtweed/global_storage/blob/master/Leaf_Nodes.md
go to post
It's not a good idea to store files in the DB that you'll simply be reading back in full. The main issue you'll suffer from if you do hold them in the database (which nobody else seems to have picked up on) is that you'll needlessly flush/replace global buffers every time you read them back (the bigger the files, the worse this will be). Global buffers are one of the keys to performance.
Save the files as files, and use the database to store their filepaths as data and indices.
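In other words, something like this (a plain Map standing in for the global, and with entirely illustrative names): the database holds only the small filepath and index entries, so reading a file back never churns the global buffers:

```javascript
// The "database": small data and index nodes, never the file contents
const fileIndex = new Map();

function registerFile(docId, filepath, meta) {
  // data node, like ^Files(docId) = filepath
  fileIndex.set(`Files:${docId}`, filepath);
  // index node, like ^FilesByType(type, docId) = ""
  fileIndex.set(`FilesByType:${meta.type}:${docId}`, '');
}

function getFilepath(docId) {
  // Read back just the (tiny) path; the large file itself is then
  // streamed from the filesystem, bypassing the global buffers
  return fileIndex.get(`Files:${docId}`);
}

registerFile(1, '/data/files/report.pdf', { type: 'pdf' });
```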
go to post
Yes, I think that everyone who uses Cache or IRIS should take the time to discover what lies beneath! The power and flexibility of Global Storage is way beyond anything else out there - something that has been true ever since I first started working with such databases way back in the early 1980s. In all that time, I've never come across any other database architecture that is as quick and simple to grasp and yet as devastatingly powerful. I'm hoping our efforts in putting together these resources will help a new generation of developers discover and harness that unique magic.
go to post
No disrespect to what you've created, but I have to say I'm always amazed and dismayed at how complex the Java community always seems to manage to make even the simplest of tasks - Spring/Hibernate is a classic example of making a crazy mountain out of what should technically be a very simple molehill.
By comparison, take a look at mgweb-server: https://github.com/robtweed/mgweb-server and its underlying mg_web interface (https://github.com/chrisemunt/mg_web) for probably the thinnest (and therefore the most performant) and simplest way possible of delivering REST services using IRIS.
go to post
The simplest way is to use QEWD. It will save you a lot of time trying to sort out all the moving parts required.
Start here:
https://github.com/robtweed/qewd
and check this out: https://groups.google.com/g/enterprise-web-developer-community/c/HQK-THx...
Rob
go to post
Awaiting my app/repo to be approved and then I'll submit it to the competition...
Rob
go to post
We have two Open Source products that will look after JWTs for you in the ways you are asking about (ie REST services with IRIS):
- QEWD, if you want to implement everything at the back-end in Node.js / JavaScript
- mgweb-server if you want to use ObjectScript logic for your back-end logic
For QEWD and IRIS, see:
https://github.com/robtweed/qewd-starter-kit-iris-networked
In particular for REST services, see:
https://github.com/robtweed/qewd-starter-kit-iris-networked/blob/master/...
and specifically this section:
https://github.com/robtweed/qewd-starter-kit-iris-networked/blob/master/...
For mgweb-server, see:
https://github.com/robtweed/mgweb-server
specifically using with IRIS:
https://github.com/robtweed/mgweb-server/blob/master/IRIS.md
and within that document, this section on JWTs:
https://github.com/robtweed/mgweb-server/blob/master/IRIS.md#using-json-...
Rob
go to post
If you're a JavaScript/Node.js developer, you can use the QEWD-JSdb abstractions of IRIS:
- Document database
- Persistent JavaScript Objects
- Redis-like Lists
- Redis-like key/object store
- Persistent XML DOM (with XPath querying)
See: https://github.com/robtweed/qewd-starter-kit-iris-networked
Take the introductory tutorial:
https://github.com/robtweed/qewd-starter-kit-iris-networked/blob/master/...
and then delve into the other database models:
- https://github.com/robtweed/qewd-starter-kit-iris-networked/blob/master/...
- https://github.com/robtweed/qewd-starter-kit-iris-networked/blob/master/...
- https://github.com/robtweed/qewd-starter-kit-iris-networked/blob/master/...
A whole world of multi-model opportunities to explore for this competition!
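To make the multi-model point concrete, here's a toy sketch (illustrative only - not QEWD-JSdb's actual API) of how one of those models, a Redis-like list, is simply a shape imposed on the same underlying key/value storage that also serves documents, persistent objects and DOMs:

```javascript
// A Redis-like list layered over a simple key/value store (a Map
// standing in for the global database)
class StoredList {
  constructor(store, name) {
    this.store = store;
    this.name = name;
    if (!store.has(`${name}:len`)) store.set(`${name}:len`, 0);
  }

  // Append a value; returns the new list length (like Redis RPUSH)
  rpush(value) {
    const len = this.store.get(`${this.name}:len`);
    this.store.set(`${this.name}:${len}`, value);
    this.store.set(`${this.name}:len`, len + 1);
    return len + 1;
  }

  // Return elements start..stop inclusive (like Redis LRANGE)
  lrange(start, stop) {
    const out = [];
    for (let i = start; i <= stop; i++) {
      const key = `${this.name}:${i}`;
      if (this.store.has(key)) out.push(this.store.get(key));
    }
    return out;
  }
}

const store = new Map();
const list = new StoredList(store, 'tasks');
list.rpush('first');
list.rpush('second');
```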
go to post
If you're interested in trying out the mgweb-server Docker Container (aka the mg_web Server Appliance) with an IRIS back-end (eg using the IRIS Community Edition Docker Container), I've put together a detailed user guide and tutorial that takes you through the entire process, step by step. It shows you how to create REST APIs, how to get it working with the pre-built mgweb-conduit back-end Demonstrator/Example on IRIS, and how to add the wc-conduit front-end to the mg_web Server Appliance.
It's all very quick and simple to get up and working, with lots of utility functions included in the kit that make it easy to create your REST APIs, use JWTs, secure passwords etc. It's an alternative approach to using IRIS for REST APIs that you might find interesting.
Why not take a look at:
https://github.com/robtweed/mgweb-server/blob/master/IRIS.md
go to post
Notice, by the way, the hefty performance penalty incurred by the SQL/Class option compared with the low-level ObjectScript functions ($extract and $translate).
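For anyone unfamiliar with those two functions, here are rough JavaScript equivalents of their semantics ($EXTRACT takes a 1-based, inclusive substring; $TRANSLATE maps characters positionally, dropping any with no counterpart in the replace string). These are illustrative sketches, not the ObjectScript originals:

```javascript
// $EXTRACT(s, from, to) - 1-based, inclusive character positions
const $extract = (s, from, to = from) => s.slice(from - 1, to);

// $TRANSLATE(s, search, replace) - per-character mapping; characters
// of `search` with no counterpart in `replace` are removed
function $translate(s, search, replace = '') {
  let out = '';
  for (const ch of s) {
    const i = search.indexOf(ch);
    if (i === -1) out += ch;            // not in search: keep as-is
    else if (i < replace.length) out += replace[i]; // mapped
    // else: no counterpart in replace, so the character is dropped
  }
  return out;
}
```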