Well spotted, I've updated my comment.
Hi Murali,
You're perfectly right.
You can have multiple namespaces on the same Caché instance for different purposes. These should have a naming convention to identify their purpose. That convention is really down to you. I normally postfix the name with -DEV and -TEST, e.g. FOO-DEV & FOO-TEST.
These namespaces will share commonly mapped code from the main library, but unless configured otherwise they are completely independent from each other. You can dev and test in them respectively without fear of polluting one another.
Tip
You can mark the mode of an instance via the Management Portal > System > Configuration > Memory and Startup. On that configuration page you will see a drop-down for "System Mode" with the options...
Live System
Test System
Development System
Failover System
The options are mostly inert, but what they will do is paint a box on the top of your management portal screens. If you mark it as live then you will get a red box that you can't miss.
Sean
link fixed
Via a terminal command prompt...
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KE…
How to create a namespace (instance)...
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KE…
How to create a production...
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KE…
If it's just for testing then it's fine.
My pleasure Richard.
If you've not done a course yet then you're doing a great job getting this far already.
If you're in the UK then I do know a very good consultant that can provide in-depth on-site training and ongoing developer support & mentoring...
Hi Richard,
It will be because you are trying to send %Net.MailMessage as an Ensemble message. This does not extend %Persistent which is the minimum required for an Ensemble message.
You will need to create your own custom class that extends Ens.Request and have reflective properties for each of the MailMessage properties that you need at the other end. The operation will then need to create a new MailMessage from the message class.
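As a sketch (the class and property names here are illustrative, not prescribed), the request class and the operation's rebuild step might look something like this...

```objectscript
/// Hypothetical request class - add whichever mail properties you need at the other end
Class Demo.Msg.MailRequest Extends Ens.Request
{
Property To As %String(MAXLEN = 255);
Property Subject As %String(MAXLEN = 255);
Property TextBody As %String(MAXLEN = "");
}

/// Inside the operation, rebuild the %Net.MailMessage from the request
Method OnMessage(pRequest As Demo.Msg.MailRequest, Output pResponse As Ens.Response) As %Status
{
    Set mail = ##class(%Net.MailMessage).%New()
    Do mail.To.Insert(pRequest.To)
    Set mail.Subject = pRequest.Subject
    Do mail.TextData.Write(pRequest.TextBody)
    // hand mail off to your configured %Net.SMTP sender here
    Quit $$$OK
}
```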
Sean.
In relation to stackoverflow I think the intention was to stop people abusing the implementation of the down vote.
I'm not sure that's relevant here, unless the main DC site started displaying some kind of reputation score in the same way as stackoverflow.
> Then downvotes will never be used, so why even have them?
I think dangerously incorrect is a poor description and perhaps needs down voting :)
It should just say "contains incorrect information".
Despite that, there are 1,000,000+ users on stackoverflow regularly using the down vote so it must be working.
Hi Evgeny,
I couldn't help notice that this post has a -1 rating.
I can't see anything wrong with the post so curious what the reason for a down vote would be.
On stack overflow they explain the use of down voting as...
This post has an interesting conversation around down votes...
https://meta.stackexchange.com/questions/135/encouraging-people-to-expl…
As the original poster explains...
Where the down-vote has been explained I've found it useful & it has improved my answer, or forced me to delete the answer if it was totally wrong
This initiated a change that prompts down voters with the message, "Please consider adding a comment if you think the post can be improved".
We all make mistakes so it's good to get this type of feedback. It also helps anyone landing on the answer months or years down the line and not realising the answer they are trying to implement has a mistake. Perhaps the DC site could do with something similar?
Sean.
Ahhh OK, the Page() method has been overridden so we lose all of the %CSP.Page event methods.
Since there are no event methods the only thing you can do is override either the Page() method or the DispatchRequest() method. At least the headers are not written to until after the method call.
I guess what you are doing will be as good as it gets. My only worry is if the implementation changes in a later release.
Ideally the class should have an OnBeforeRequest() and an OnAfterRequest() set of methods.
That's interesting. The REST page is just a %CSP.Page so you would think it would. I'll have a dig around.
Hi Murali,
The production settings are stored as XML inside a Cache class (as XDATA).
If you go to studio you can open the class and look at the XML.
If you are not sure what the class name is, go to your production view and look at the blue box in the top left hand corner; this will have the class name inside it.
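For illustration (the class and item names here are invented), the stored definition looks roughly like this...

```objectscript
Class Demo.FooProduction Extends Ens.Production
{
XData ProductionDefinition
{
<Production Name="Demo.FooProduction" TestingEnabled="true">
  <Setting Target="Production" Name="ShutdownTimeout">120</Setting>
  <Item Name="HL7.In" ClassName="EnsLib.HL7.Service.TCPService" Enabled="true">
    <Setting Target="Adapter" Name="Port">5000</Setting>
  </Item>
</Production>
}
}
```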
Sean.
Hi Eduard,
I would look at implementing the OnPreHTTP() and OnPostHTTP() methods.
Also, if you want to issue a 500 error from the dispatch request it will be too late, as the headers will have already been written. You will need to do that from OnPreHTTP().
Sean.
No problem. I know you said you didn't want to write your own custom code, so this is for anyone else landing on the question.
If you use the DTL editor (which I advise even for ardent COS developers), then you will most likely use the helper methods in Ens.Util.FunctionSet that your DTL extends from, e.g. ToUpper, ToLower etc.
Inevitably there will be other common functions that will be needed time and time again. A good example would be to select a patient ID in a repeating field of no known order. This can be achieved with a small amount of DTL logic, but many developers will break out and write some custom code for this. The problem however is that I see each developer doing their own thing and a system ends up with custom DTL logic all over the place, often repeating the same code over and over again.
The answer is to have one class for all of these functions and make that class extend Ens.Rule.FunctionSet. By extending this class, all of the ClassMethods in that class will magically appear in the drop-down list of the DTL function wizard. This way all developers across the team, past and future, will see what methods are available to them.
To see this in action, create your own class, something like this...
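Here is a minimal sketch (the class name is illustrative, and the SegmentExists logic is deliberately simple - just a non-empty test on the value the wizard passes in)...

```objectscript
Class Demo.Util.FunctionSet Extends Ens.Rule.FunctionSet
{
/// Returns 1 if the segment value passed in from the DTL wizard is non-empty
ClassMethod SegmentExists(pSegmentValue As %String) As %Boolean
{
    Quit pSegmentValue '= ""
}
}
```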
Then create a new HL7 DTL. Click on any segment on the source and add an "If" action to it. Now in the action tab, click on the condition wizard and select the function drop down box. The SegmentExists function will magically appear in the list. Select it and the wizard will inject the segment value into this function.
Whilst developers feel the need to type these things out by hand, they will never beat the precision of using these types of building blocks and tools. It also means that you can have data mappers with strong business logic and not so broad programming skills bashing out these transforms.
I've found Veeam can cause problems in a mirrored environment for Cache.
We invested a great deal of energy trying to get to the bottom of this (with ISC) but never truly fathomed it out.
As you can imagine we tried all sorts of freeze thaw scripts and QOS settings. Whilst we did reduce the frequency we could not completely eliminate the problem.
In the end we came up with this strategy.
1. Run Veeam on the backup member nightly
2. Run Veeam on the primary member only once a week during the day
3. Only use Veeam as a way to restore the OS
4. This would exclude all DAT, Journal and WIJ files
5. Take a nightly dat backup of both members and stash several days of Journal files
6. Ensure the mirror pair are on completely different sets of hardware and ideally locations
7. Configure the DAT and Journal files to be on different LUNs on the VM setup
8. Understand how to physically recover these files should the server crash and not recover
Conclusion for me was that Veeam is a nice to have tool in a VM set-up, but it should not be a replacement for the battle tested cache backup solution.
Hi Andre,
1. Description might be empty whilst NTE might actually exist, so it would be better to do...
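Something along these lines (a guess at the intended check, since the original snippet isn't shown) - test for the segment itself rather than one of its fields...

```objectscript
// NTE may exist even when its comment field is empty,
// so check the segment rather than the description field
If source.GetValueAt("NTE(1)") '= "" {
    // segment present - safe to process it
}
```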
2. If you are looking to shave off a few characters then take a look at $zdh...
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KE…
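For example, $zdh with date format 8 parses a delimiter-free YYYYMMDD value straight into $HOROLOG form...

```objectscript
Set hl7Date = "20170415"
// format 8 = YYYYMMDD; returns the +$H day count
Set hDate = $zdh(hl7Date, 8)
// and back out to a display format if needed (4 = DD/MM/YYYY)
Set display = $zd(hDate, 4)
```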
3. For time there is $zth, but it expects a : in the time (e.g. HH:MM).
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KE…
It doesn't look like you have a colon, so you could use $tr to reformat the time first. This will save you 10 or so characters...
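As a sketch, assuming the inbound time uses some non-colon delimiter such as a dot...

```objectscript
// e.g. "12.30.45" -> "12:30:45", which $zth can then parse
Set seconds = $zth($tr(time, ".", ":"))
// if the time has no delimiter at all (HHMMSS), $extract works instead:
// Set seconds = $zth($e(time,1,2)_":"_$e(time,3,4)_":"_$e(time,5,6))
```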
4. You can have more than one PID group, but you should not see more than one PID segment in a PID group. Not all message types define (or allow) a repeating PID group. You might see multiple PID groups in an ADT merge message. You might also see bulk records in an ORU message, but in the real world probably not. If you know the implementation only ever sends one PID group then you will see many developers just hard coding its ordinal key to 1, e.g.
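For example (the exact virtual property path depends on your schema and message structure, so treat this as illustrative)...

```objectscript
// hard-code the PID group ordinal to 1 and take the first repeat of PID:3
Set patientId = source.GetValueAt("PIDgrpgrp(1).PIDgrp.PID:3(1).1")
```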
Developing with HL7 can feel verbose to begin with. Tbh, what you are currently doing is all perfectly fine and acceptable.
Sean.
I can remember reading your previous post about the bottleneck before. I just wonder if such a fix would make the V8 less secure and as such would be unlikely to happen under the current architecture.
I did think about the problem for a while and I managed to get a basic AST JavaScript-to-Mumps compiler working. It's just a proof of concept and would require much more effort. Coercion mismatch is the biggest barrier. I came up with 1640 unit tests alone as an exercise to figure out how big the task would be. And that's just 1/20th of the overall unit tests it would require.
Effectively you would write JavaScript stored procedures inline that would run on the Cache side. I'm still not sure of the exact syntax, but it might be something like
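Purely as pseudocode (nothing like this exists today - the cache.proc and global helpers are invented for illustration), the inline stored-procedure idea might read along these lines...

```javascript
// hypothetical API - cache.proc() would compile the function body to M
// and execute it inside Caché, next to the data
const total = await cache.proc(function (patientId) {
  let sum = 0;
  // 'global' here is an imagined accessor over a Caché global
  for (const key of global("Orders", patientId)) {
    sum += global("Orders", patientId, key, "amount");
  }
  return sum;
}, 123);
```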
In terms of benchmarks, I would like to see Caché and its ilk on here...
https://www.techempower.com/benchmarks/#section=data-r13&hw=ph&test=upd…
You could buffer it up, but you will still have a period of writing to disk where the other collection process could grab it mid write.
I normally write the file to a temp folder or use a temp file name and then change the file name once it's been fully written to, making sure the collection process ignores the temp file extension or the temp folder location.
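As a sketch of that pattern (the paths and file names are invented)...

```objectscript
Set dir = "C:\out\"
Set tmpFile = dir _ "report.csv.tmp"
Set stream = ##class(%Stream.FileCharacter).%New()
Do stream.LinkToFile(tmpFile)
Do stream.WriteLine("some,csv,data")
Set sc = stream.%Save()
// the file only appears under its final name once fully written,
// so a collector that ignores *.tmp can never grab a half-written file
If $$$ISOK(sc) Do ##class(%File).Rename(tmpFile, dir _ "report.csv")
```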
Hi Nikita,
Sounds like an interesting plan.
I've developed my own desktop grade UI widget library. I used to use ExtJS but got fed up with the price and speed. I've got it to the stage where I can build the shell of an IDE that could be mistaken for a thick client installation. If you right click the image below and open in a new tab you will see that it has all the features you would expect of an IDE. Split panel editing, drag panels, accordions, trees, menus, border layouts etc, and an empty panel that needs a terminal!
I have syntax highlighting working for a variation of M that I have been working on for a few years. I can get the syntax highlighting working for COS no problem (well long form at least).
The hard stuff would be getting things like the studio inspector working like for like. Lots of back end COS required etc.
I've still got a few months working on an RPC messaging solution for the UI, but once that's done I would be open to collaborating on a back-end implementation for Cache.

Sean.
I had some interesting results using just a TCP binding between node and cache.
With just one single node process and one single cache worker process I was able to process 1200 JSON RPC 2.0 messages per second. This included Cache de-serialising the JSON, calling its internal target method, writing and reading some random data and then passing back a new JSON object. Adding a second Cache process nearly doubled that throughput.
I was running node, cache and the stress test tool on the same desktop machine with lots of other programs running. I started to hit limits that seemed to be related to the test tool, so I wasn't sure how high I could take these benchmarks with this set up.
Interestingly, when I bypassed node and used a CSP page to handle the requests I could only get the same test set up to process 340 messages per second. This I couldn't understand. I am sure it was to do with the test tool, but could not work out how to get this higher. I would have expected Cache to spin up lots of processes and see more than the 1200 that were limited by one process.
It did make me wonder whether, no matter how many processes you have, you can only process 2 to 3 at a time per 4 CPU cores, and that maybe Node was just much faster at dealing with the initial HTTP request handling, or that spreading the load between the two was a good symbiotic performance gain. Still, I was not expecting such a big difference.
Now I would have thought, if you put Node.JS on one box and Cache on a second box so they don't compete for the resources they need most, that the TCP connection would be much more efficient than binding Node and Cache in the same process on the same box?
Good to know QEWD has been using web sockets for a long time. Socket.io is a well maintained library with all of the fall back features that I worry are missing with Cache sockets alone. Makes it a lot easier when putting Node.JS in front of Cache. I guess I just want to avoid any additional moving parts as I bash out as many web applications per year as I have users on some of them. That's why I never use the likes of Zen, I just need things simple and fast and typically avoid web sockets for fear of headaches using them. But, it is 2017 and we can only hope the NHS masses will soon move on to an evergreen browser.
Do you have benchmarks for QEWD? I have my own Node to Cache binding that I have developed for an RPC solution and I found I could get twice as much throughput of marshalled TCP messages compared to CSP requests. But then I can never be sure with these types of benchmarks unless they are put into a production environment.
Hi Nikita,
Thanks for your detailed response. Good to hear a success story with WebSockets. Many of the problems that I have read about are very fringe. Most firewalls seem to allow the outgoing TCP sockets because they are running over 80 or 443, but it appears there are fringe cases of a firewall blocking the traffic. Also certain types of AV software can block it. I suspect these problems are more prominent in the node.js community because node is more prevalent than Caché and that Caché is more likely to be inside the firewall with the end users.
The main problem I still have is that I work on Caché and Ensemble inside healthcare organisations in the UK and they are always behind on browser versions for various reasons. Only recently have I been able to stop developing applications that needed to work on IE6. Many are still on IE8 or IE9 (sometimes running in IE7 emulation mode). Either way, websockets only work on IE10+. I can work around most browser problems with polyfills, but sockets require a client and server solution. That means you can't just drop in sockjs as an automatic fall-back library, because there is no server side implementation for it on Caché.
Without any library built for Cache, I think what is needed is a simple native emulation of the client socket library that implements simple long polling against Cache. If I then hit a scalability problem it would be time to put Node.JS in front of Cache, with all the additional admin overhead. A nice problem to have all the same. Still, I suspect it would take a large number of users and sockets to swamp Cache's resources.
Your WebTerminal looks very good. Not sure why I have not seen that before, looks like something I would use. I'm not sure why we can't have a web based IDE for Cache when I see a complex working web application such as this. I even have my own web based IDE that I use for that other M database (not sure I can mention it here :) which I keep thinking to port to Cache.
Just glancing comments.
You are trying to set a parameter. I'm no ZEN expert, but I am pretty sure parameters are immutable in all classes.
The other thing, if I was doing this in CSP. Setting the content-type in the OnPage method would be too late, the headers would already be written. It would have to be written before then. Not sure if ZEN is similar, but I would override the OnPreHTTP (or equivalent) and set %response.ContentType=myCONTENTTYPE in that method.
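For the CSP case, that override is only a few lines (the content type here is just an illustrative value)...

```objectscript
ClassMethod OnPreHTTP() As %Boolean
{
    // runs before any headers are written, so this is early enough
    Set %response.ContentType = "application/pdf"  // illustrative type
    Quit 1
}
```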
I remember the encode / decode limitation was 3.6MB not 1MB, I corrected my original message.
Having built several EDT document solutions in Ensemble (sending thousands of documents a day) I have not had to code around this limitation.
But if you have documents that are bigger then take a look at GetFieldStreamBase64() on the original HL7 message. I've not used it, but should be fairly simple to figure out. In which case you can use an Ens.StreamContainer to move the message.
Thinking about it, there is an even simpler solution: just send the HL7 message "as is" to the operation and do the extract and decode at the last second.
This is what I would do.
Create a custom process and extract the value using GetValueAt and put it into a string container. String containers are handy Ens.Request messages that you can use to move strings around without needing to create a custom Ens.Request message. Then just send it async to an operation that will decode the base64 and write it to a file. Two lines of code, nice and simple...
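Something like this inside the process (the field path and operation name are illustrative)...

```objectscript
Set tMsg = ##class(Ens.StringContainer).%New(request.GetValueAt("OBX(1):5"))
Set tSC = ..SendRequestAsync("PDF_File_Out", tMsg)
```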
To decode the base64 use this method inside your operation.
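The decode itself is a one-liner via $System.Encryption; a minimal operation might look like this (the file name is hard-coded purely for illustration)...

```objectscript
Method OnMessage(pRequest As Ens.StringContainer, Output pResponse As Ens.Response) As %Status
{
    // decode at the last second, just before writing to disk
    Set pdf = $System.Encryption.Base64Decode(pRequest.StringValue)
    Set file = ##class(%Stream.FileBinary).%New()
    Do file.LinkToFile("C:\docs\out.pdf")
    Do file.Write(pdf)
    Quit file.%Save()
}
```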
Things to consider...
1. You mention the message is 2.3, but the MSH has a 2.4
2. If you set your inbound service to use either of the default schemas for these two then you will have a problem with required EVN and PV1 segments
3. Therefore you will need to create a custom schema and make these optional.
4. The base 64 decode method is limited to a string, so your PDF documents can not be greater than 3.6MB (assuming large string support is on by default).
5. You probably don't want to decode the document into another message too soon, do this just before writing to a file
T02 to ITK should just be a matter of creating a new transform and dragging the OBX(1):5.5 field onto the reflective target field.
No problem. Eduard got us all on the right track with this.
Just adding a space will prevent the value being left unescaped...
SELECT JSON_OBJECT('id': ' {{}')
Which you can add to the end as well...
SELECT JSON_OBJECT('id': '{{} ')
Which basically suggests that if a value starts with a { and ends with a } then the JSON_OBJECT function assumes that it is a valid JSON object and does not escape it, for instance this...
SELECT JSON_OBJECT('id': '{"hello":"world"}')
will output this...
{"id":{"hello":"world"}}
In some ways I would say this is valid / wanted behaviour, except perhaps that there should be a special Caché type for raw json where there is then an implicit handling of that type in these functions.
An alternative workaround that works is to use the CONCAT function to append a trailing space...
SELECT JSON_OBJECT('id':{fn CONCAT('{{}',' ')})
Which produces...
{"id":"{{} "}
Which on the original query would need to be...
SELECT JSON_OBJECT('idSQL':id, 'content': {fn CONCAT(content,' ')} ) FROM DocBook.block
What happens if you run the same query without the JSON_OBJECT function...
SELECT idSQL,content FROM DocBook.block
That error is caused by the extra curly brace, try...
SELECT JSON_OBJECT('id': '{}')