We have been investigating how to set up an authorization server to generate tokens and an associated resource server to validate them. We actually worked this step out with the invaluable support of @Alberto Fuentes from InterSystems.
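For reference, a minimal sketch of what the resource-server side of that validation can look like in ObjectScript, assuming an OAuth2 client configuration of type resource server already exists on the instance ("ResourceServer" below is a placeholder name for that configuration):

// inside server-side code on the resource server, e.g. a REST dispatch method
set accessToken = ##class(%SYS.OAuth2.AccessToken).GetAccessTokenFromRequest(.sc)
if $$$ISERR(sc) quit
// ask the authorization server to introspect the token
set sc = ##class(%SYS.OAuth2.AccessToken).GetIntrospection("ResourceServer", accessToken, .jsonObject)
if $$$ISERR(sc) quit
// per RFC 7662, the "active" flag reports whether the token is valid and unexpired
set isValid = (jsonObject.active = 1) || (jsonObject.active = "true")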
My team is implementing an Interoperability solution using the InterSystems Kubernetes Operator on the Red Hat OpenShift container platform.
We are trying to determine how many messages we can process in a given period of time (a rough measurement sketch follows this description). We have a Feeder app running in 10 containers, each sending 50k messages to a load balancer, all starting at the same time.
Messages are received over HTTPS by the webgateway containers.
The Interoperability production runs in compute pods with persistent volumes for data, journals, and the WIJ.
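To put a number on messages processed per unit of time after such a run, one rough approach is to bucket the message headers by minute. This is only a sketch; it assumes the test messages are still present in Ens.MessageHeader (i.e. not yet purged), and note that TimeProcessed is stored in UTC:

set sql = "SELECT SUBSTRING(TimeProcessed,1,16) AS Minute, COUNT(*) AS Msgs"
set sql = sql_" FROM Ens.MessageHeader GROUP BY SUBSTRING(TimeProcessed,1,16) ORDER BY Minute"
set stmt = ##class(%SQL.Statement).%New()
set sc = stmt.%Prepare(sql)
if $$$ISERR(sc) { do $system.Status.DisplayError(sc) quit }
set rs = stmt.%Execute()
while rs.%Next() {
    // one row per minute bucket: messages whose processing finished in that minute
    write rs.Minute, ": ", rs.Msgs, " messages", !
}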
In terms of general throughput design and long-term support, what would be the best approach for creating multiple batch files in a few different layouts from the same data sets?
I'm getting a lot of hs_err_pid.mdmp and hs_err_pid.txt error files in the directory where the CACHE.DAT is located. From what I've found online, these appear to be Java error files, so I'm wondering what they have to do with Ensemble, and whether it's alright to just delete them?
I'm a new user learning to use IRIS and Ensemble. I'm trying to set up a TCP interface to send delimited data from Ensemble to another interface engine. I created a File.PassthroughService to pick up the file and send the data to a TCP.Framed.PassthroughOperation. The framing is MLLP, and an SSL configuration is used. It is able to process small files of around 50 KB, but when I drop a larger file, such as 5 MB, the operation does not get the ACK within the 60-second timeout.
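For comparison, the length of time the operation waits for that ACK is governed by the outbound TCP adapter's ResponseTimeout setting (assuming the framed passthrough operation exposes the standard EnsLib.TCP.OutboundAdapter settings). In the production class's XData ProductionDefinition that looks roughly like this, where 300 is only an illustrative value:

<Item Name="TCP.Framed.PassthroughOperation" ClassName="EnsLib.TCP.Framed.PassthroughOperation" PoolSize="1" Enabled="true">
  <Setting Target="Adapter" Name="ResponseTimeout">300</Setting>
</Item>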
I am currently working on integrating unit tests into a project. I am also attempting to test productions with the TestProduction class. This works great, but I noticed that no code coverage information is being gathered when I run the production tests.
Am I doing something wrong (did I forget to add something to coverage.list, for instance), or is TestProduction not intended for code coverage?
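For reference, coverage for these runs is typically gathered by driving the tests through the open-source TestCoverage package rather than %UnitTest.Manager directly, and since production hosts execute in their own processes, the coverage run likely needs to be told to monitor those processes as well (the TestCoverage README documents a parameter for which process IDs to track). A minimal sketch, assuming the TestCoverage package is installed and ^UnitTestRoot is set; the package and class names are placeholders:

// classes whose coverage should be tracked (placeholder names)
set params("CoverageClasses") = $listbuild("MyApp.Production", "MyApp.MyOperation")
// run the unit tests under the TestCoverage manager instead of %UnitTest.Manager
do ##class(TestCoverage.Manager).RunTest("MyApp.Tests", , .params)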
While the documentation on configuring Kerberos authentication for IRIS on Linux servers is sparse, for Docker I found no docs at all. I assumed I would be able to adapt the requirements from Linux to Docker (on a Linux host), but I had no success. Has anyone done this successfully?
I was at an HL7 Connectathon over the weekend, and the scramble headed us in the direction of trying out Preview 4 of I4H. We found that the USER namespace, and subsequent namespaces we created, do not have any mappings included with them.
Hi, I am passing an EDIFACT file from a service to an operation using the super class. I would like to prepopulate the 'Source' property with some information, but I'm not sure how to access it.
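For reference, Source is a property inherited from EnsLib.EDI.Document (which EnsLib.EDI.EDIFACT.Document extends), so it can be read or set wherever the document object is in hand. A minimal sketch from a custom business process placed between the service and the operation; the class name and target name are hypothetical:

Class Demo.EDIFACT.TagSource Extends Ens.BusinessProcess
{
Method OnRequest(pRequest As EnsLib.EDI.EDIFACT.Document, Output pResponse As Ens.Response) As %Status
{
    // work on a clone so the original message body stays untouched
    set doc = pRequest.%ConstructClone()
    $$$LOGINFO("Incoming Source: "_doc.Source)   // usually the original file name
    set doc.Source = "MyFeed_"_doc.Source        // prepopulate with custom information
    // forward the tagged document; "EDIFACT.Out" is a placeholder operation name
    quit ..SendRequestAsync("EDIFACT.Out", doc)
}
}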
Process mining (http://en.wikipedia.org/wiki/Process_mining) is becoming more and more popular in the IT industry. Since all processes are stored and managed in IRIS/Health Connect, I wonder whether anyone has developed similar functionality on IRIS? Thanks!
Thank you for taking the time to read and answer this question.
We need to find out how to display an EnsLib.DICOM.Document in the traces using $$$LOGINFO.
We have tried to use:
set xml = writer.GetXMLString()
$$$LOGINFO("..DocumentFromService in xml: "_xml)
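As a point of comparison, EnsLib.DICOM.Document supports the virtual-document GetValueAt() interface, so individual elements can be pulled out and logged. A minimal sketch, assuming pDocIn is the document in hand; the property path is only illustrative and should be adjusted to your DocType:

set value = pDocIn.GetValueAt("DataSet.PatientName", , .tSC)
if $$$ISERR(tSC) {
    $$$LOGWARNING("Could not read PatientName: "_$system.Status.GetErrorText(tSC))
} else {
    $$$LOGINFO("DICOM PatientName: "_value)
}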
We are embarking on a project in which we are ingesting raw EDI files (837s to start with) into HS.
We have an inbound X12 adapter to take in the raw *.edi file, and we have a business process that is mostly pass-through. We have been unable to find any DTL to map X12 to SDA/FHIR (similar to the ones that exist for HL7, CDA, and CCD). If anybody has done anything on this front, we would appreciate any tips.
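For anyone sketching the custom route, the general shape of a DTL from an X12 virtual document into an SDA3 container is roughly as follows. This is only a sketch: the class name is hypothetical, and the segment path in the single assign is a placeholder that would need to come from the actual 837 schema structure:

Class Demo.X12.ToSDA Extends Ens.DataTransformDTL [ DependsOn = (EnsLib.EDI.X12.Document, HS.SDA3.Container) ]
{

XData DTL [ XMLNamespace = "http://www.intersystems.com/dtl" ]
{
<transform sourceClass='EnsLib.EDI.X12.Document' targetClass='HS.SDA3.Container' create='new' language='objectscript'>
<assign property='target.Patient.Name.FamilyName' value='source.GetValueAt("loop2000B.loop2010BA.NM1:NameLastOrOrganizationName")' action='set'/>
</transform>
}

}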