Representing XML and JSON in Global Storage is an interesting area.

In the case of XML, things are a little more complex than described in your article, since there's an inherent ambiguity between information held in an element's text value and information held in one of its attributes.  For this reason, the better representation is to model the XML DOM itself in Global Storage nodes.  You'll find a pretty thorough (and free, Open Source) implementation here:

https://github.com/robtweed/EWD

This article provides more information:

https://groups.google.com/forum/#!searchin/enterprise-web-developer-comm...

Once in DOM format you can apply cool stuff such as XPath querying.  See:

https://groups.google.com/forum/#!searchin/enterprise-web-developer-comm...

The DOM is essentially modelled in Global Storage as a graph.  DOM programming is extremely powerful, allowing complex document manipulation and querying to be performed very efficiently and simply.
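
Purely to illustrate the point (this is NOT how EWD actually lays things out - see the repository above for the real implementation; the ^DOM global name, node ids and subscript names below are invented), here's a rough sketch of how a single element might end up as a set of global nodes, with the attribute value and the text value kept in distinct nodes and parent/child pointers turning the storage into a graph:

    // Hypothetical layout only - not the actual EWD DOM model.
    // <email type="home">rob@example.com</email>
    // Each DOM node gets an id; pointers between ids make the storage a graph.
    const globalNodes = [
      // ^DOM("docA","node",1,...)  - the element node
      ['DOM', 'docA', 'node', 1, 'nodeName', 'email'],
      ['DOM', 'docA', 'node', 1, 'attr', 'type', 'home'],     // attribute value
      ['DOM', 'docA', 'node', 1, 'firstChild', 2],            // graph pointer
      // ^DOM("docA","node",2,...)  - the text node child
      ['DOM', 'docA', 'node', 2, 'parent', 1],
      ['DOM', 'docA', 'node', 2, 'data', 'rob@example.com']   // text value
    ];

    // Print each entry as a global reference; the last element is the stored value
    globalNodes.forEach(function (entry) {
      const value = entry[entry.length - 1];
      const subscripts = entry.slice(1, -1).map(s => JSON.stringify(s)).join(',');
      console.log('^' + entry[0] + '(' + subscripts + ') = ' + JSON.stringify(value));
    });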

 

JSON is much simpler, being a pure hierarchy.  The only real mismatch with Global Storage is that in JSON only leaf nodes can hold data - intermediate nodes cannot.  Global nodes, however, can be intermediate nodes AND store data.  So whilst every JSON tree can be represented as a Global tree, not every Global tree can be represented as JSON.
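
To make that concrete, here's a minimal sketch (plain Node.js, nothing QEWD-specific) of flattening a JSON object into the leaf nodes of the equivalent Global tree, and why the reverse mapping doesn't always exist:

    // Every JSON leaf becomes a (subscripts, value) pair - i.e. a Global node.
    function toGlobalNodes(obj, subscripts, out) {
      subscripts = subscripts || [];
      out = out || [];
      for (const key of Object.keys(obj)) {
        const value = obj[key];
        if (value !== null && typeof value === 'object') {
          toGlobalNodes(value, subscripts.concat(key), out);               // intermediate node: no data
        } else {
          out.push({ subscripts: subscripts.concat(key), value: value });  // leaf node: holds the data
        }
      }
      return out;
    }

    console.log(toGlobalNodes({ patient: { '123': { name: 'Rob', address: { city: 'London' } } } }));
    // [ { subscripts: [ 'patient', '123', 'name' ], value: 'Rob' },
    //   { subscripts: [ 'patient', '123', 'address', 'city' ], value: 'London' } ]
    // i.e. the Global nodes ^patient(123,"name")="Rob" and ^patient(123,"address","city")="London".
    //
    // The reverse doesn't always hold: a Global can store
    //   ^patient(123)="Rob"  AND  ^patient(123,"city")="London"
    // - data on an intermediate node - which JSON has no way of expressing directly.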

The Node.js-based QEWD.js framework uses Global Storage as an embedded database to provide Session storage, persistent JavaScript Objects and a fine-grained Document Database.  To see how this is done, see the training course slide decks:

http://ec2.mgateway.com/ewd/ws/training.html

...specifically parts 17 to 27.
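
Just to give a flavour of what "persistent JavaScript objects" looks like in practice, here's a rough sketch of a QEWD back-end message handler using the document database abstraction.  Treat the API names (DocumentNode, setDocument, getDocument) as from memory - the slide decks above are the definitive reference:

    // Rough sketch of a QEWD back-end message handler (API names from memory -
    // check the training slide decks, parts 17 to 27, for the definitive usage)
    module.exports = {
      handlers: {
        saveDemo: function (messageObj, session, send, finished) {
          // this.documentStore abstracts the underlying Global Storage
          var doc = new this.documentStore.DocumentNode('demo', ['patient', 123]);

          // persist a JavaScript object as a sub-tree of Global nodes...
          doc.setDocument({ name: 'Rob', address: { city: 'London' } });

          // ...and read it back later as a plain object
          var patient = doc.getDocument();

          finished({ saved: true, patient: patient });
        }
      }
    };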

Keep up the great work on this series of articles!

Rob

Very good article - good to see Global Storage being discussed.  Thanks for the compliments!  We need this kind of discussion to reach a wider audience, rather than just preaching here to the (largely) already converted.

I despair when I go to technical conferences where not one single attendee I speak to has heard of Cache or Global Storage.  Yet it's a database technology and architecture that is ideally suited to today's requirements (and particularly suited to the burgeoning JavaScript world), and it's crying out to be more widely known about.  I do my best to generate new interest in the mainstream, but I feel like something of a lone voice in the wilderness.

The other thing I'd love to see is the same level of performance available from within Node.js as the article describes for native Cache ObjectScript.  It turns out there's just one Google V8 bottleneck in the way - if that could be sorted out, the idea of having a database in Node.js that could persist JSON data at speeds in excess of 1 million name/value pairs per second would blow every other database clean out of the water.  I would LOVE to see this rectified, and if it were fixed, it could create a huge wave of interest (people at those conferences I go to might actually want to find out about it!)

Here's the issue:

https://bugs.chromium.org/p/v8/issues/detail?id=5144#c1

Anyway, looking forward to part 2 of the article.

Someone should do a talk at the Developers Conference.... ??

I've just pushed out a new set of enhancements to QEWD that are described here:

https://robtweed.wordpress.com/2017/05/11/qewd-now-supports-koa-js-and-u...

I've upgraded the Cache-based RealWorld Conduit demo to make use of Koa.js.  As suggested in the article, take a look at the X-ResponseTime response headers in your browser's JavaScript console to get an idea of just how fast the combination of QEWD, Koa.js, Node.js + Cache really is.  The URL for the live demo is:

   http://34.201.135.122:8080
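
If you'd rather not hunt through the Network tab, you can also read the header programmatically from the demo page's own console - something like the following (I'm using /api/articles, one of the standard RealWorld Conduit endpoints the front-end calls; substitute any other call you prefer):

    // Run in the browser console while on the demo page (same origin,
    // so the custom response header is readable)
    fetch('/api/articles')
      .then(function (res) {
        console.log('X-ResponseTime:', res.headers.get('X-ResponseTime'));
      });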

Rob

Hi David

 

Glad to hear you've had success getting it working.

There's a right way and a somewhat dodgy way to do what you want to do.

The right way is to have separate instances of QEWD, each connected to a particular namespace and listening on a different port.  You could probably proxy them via some URL re-writing (e.g. with nginx at the front end).

The dodgy way, which I think should work, is to have a function wrapper around $zu(5) (or equivalent) to change namespace, and to call it in each of your back-end handler functions (see the sketch below).  If you do this, you need to make sure that you switch back to the original namespace before your finished() call and before returning from your function.  Be aware that if an unexpected error occurs, your worker process could end up stuck in the wrong namespace.

Your namespace-switching function would need to be callable from all your Cache namespaces - use routine mapping for this.

Doing this namespace switching is at your own risk - see how it goes.
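
To make the dodgy way concrete, here's the rough shape I'd suggest for each handler.  Note the assumptions: changeNS^ZNSUTIL is a hypothetical wrapper routine of your own around $zu(5) (or equivalent), routine-mapped into every namespace as described above, and I'm assuming the worker's cache.node connection is available to your handler as this.db with its function() call - adjust to whatever your QEWD version actually exposes:

    // Rough sketch only - changeNS^ZNSUTIL is a hypothetical routine of yours,
    // and the this.db / db.function() usage should be checked against your setup
    module.exports = {
      handlers: {
        getPatient: function (messageObj, session, send, finished) {
          var db = this.db;  // the worker's in-process cache.node connection

          // switch this worker into the namespace the request needs
          db.function({ function: 'changeNS^ZNSUTIL', arguments: ['OTHERNS'] });

          var results;
          try {
            // ... do the real work against the other namespace here ...
            results = { ok: true };
          }
          finally {
            // ALWAYS switch back before finished(), even if something threw -
            // otherwise the worker is left stuck in the wrong namespace
            db.function({ function: 'changeNS^ZNSUTIL', arguments: ['ORIGINALNS'] });
          }

          finished(results);
        }
      }
    };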

BTW, for this type of situation, where you want to do the same thing before and after every handler function, you might find the latest feature (beforeHandler and afterHandler) described here very useful:

https://groups.google.com/forum/#!topic/enterprise-web-developer-communi...

Rob

I've set up this instance of the Conduit Application:

http://34.201.135.122:8080

It uses: 

- Cache 2015.2, running on Ubuntu Linux and Node.js v6.10, on an AWS EC2 instance

- QEWD with 2 child processes

- qewd-conduit RealWorld Conduit back-end

- The React/Redux version of the front-end for Conduit, with its data served up via REST calls to the qewd-conduit back-end

Note: no changes were needed for the application to run with Cache.

Regarding the installation of QEWD, it's worth pointing out that the GitHub/NPM repository includes an installers folder containing a number of installer scripts for Linux-based systems.  There's even one for QEWD + Linux + Cache, although it shouldn't be run as a script, since you will probably already have Cache installed - use it instead as a guide to the commands you need to run to install the various bits and pieces.

See https://github.com/robtweed/qewd/tree/master/installers

There's even an installer for the Raspberry Pi, although that can't use Cache (sadly!)

Another alternative is to use the Docker appliance, but it would need to be adapted to work with Cache.  See https://github.com/robtweed/qewd/tree/master/docker

Very nice article, Ward!

For anyone wanting to create a Cache-based REST / Web Service back-end, QEWD + Node.js is definitely worth considering as a modern, lightweight but very high-performance, robust and scalable alternative to CSP.  I'd recommend that anyone interested should also go through this slide deck:

https://www.slideshare.net/robtweed/ewd-3-training-course-part-31-ewdxpr...

Chris Munt would be the person to ask, but I'm not aware that the lack of a resolution to the bottleneck issue is anything to do with security - it's more that Google don't "get" the issue or see it as a priority, since it's an unusual requirement.

Anyway, cache.node is an official InterSystems interface, so why not just use it instead of rolling your own?  Ideally, ISC should be pushing Google to fix the bottleneck.

cache.node will handle in excess of 90,000 global sets/sec in its in-process mode.  If that V8 bottleneck were sorted out, we'd have COS-level performance for JavaScript accessing a Cache database (i.e. more than 1,000,000 global sets/sec).
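
If anyone wants to sanity-check that in-process figure for themselves, a back-of-envelope timing loop along these lines will do it.  The connection parameters (path, username, password, namespace) and the require are illustrative only - adjust for your own installation and verify them against the cache.node documentation:

    // Illustrative only - verify the open() parameters against the cache.node docs
    var cache = require('cache');            // cache.node
    var db = new cache.Cache();

    db.open({
      path: '/opt/cache/mgr',                // your Cache instance's mgr directory
      username: '_SYSTEM',
      password: 'SYS',
      namespace: 'USER'
    });

    var n = 100000;
    var start = Date.now();
    for (var i = 0; i < n; i++) {
      db.set({ global: 'speedtest', subscripts: [i], data: 'value ' + i });
    }
    var elapsed = (Date.now() - start) / 1000;
    console.log(n + ' global sets in ' + elapsed + 's (' + Math.round(n / elapsed) + '/sec)');

    db.close();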

With QEWD (actually ewd-qoper8), you find that for simple message handling, optimum throughput is when the number of Node.js processes (master + workers) equals the number of CPU cores.  That's a Node.js context switching issue, nothing to do with Cache.

QEWD allows you to configure cache.node to work in networked mode, in which case you can have Cache running on a separate server from the Node.js machine.  I've not compared performance, but it's a simple QEWD startup-file setting change to switch modes, so it's easy to compare if you want to give it a try.
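
For reference, that setting lives in the QEWD startup file.  Something along these lines (this also shows the poolSize point from above; the in-process params shown are typical, and the exact params for networked mode are best checked against the QEWD documentation):

    // Indicative QEWD startup file - check the QEWD docs for the exact
    // database params, particularly for networked mode
    var config = {
      managementPassword: 'keepThisSecret!',
      serverName: 'QEWD Demo',
      port: 8080,
      poolSize: 2,                      // workers: master + workers ~ number of CPU cores
      database: {
        type: 'cache',
        params: {
          path: '/opt/cache/mgr',       // in-process mode: local Cache instance
          username: '_SYSTEM',
          password: 'SYS',
          namespace: 'USER'
          // for networked mode you'd point at the remote Cache server here instead
        }
      }
    };

    var qewd = require('qewd').master;
    qewd.start(config);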

QEWD is very quick and easy to set up on a Windows machine where Cache is already installed.  See:

https://www.slideshare.net/robtweed/installing-configuring-ewdxpress

Just to clarify - despite the bottleneck I've mentioned, cache.node will be a LOT faster than a TCP-based Node.js interface.  cache.node runs in-process with the Node.js process, and uses the Cache call-in interface.  As a result, QEWD is very fast and my guess is that it should outstrip an equivalent app using CSP.

Also, note that with QEWD, the socket.io connections are between the browsers and the QEWD master process, NOT the child/worker processes that connect to Cache.  So Cache is decoupled from the WebSocket and, indeed, the HTTP(S) interfacing.