Hi guys, I've set up an FTP passthrough service and operation to move two files from one SFTP server to another. The files are moved and I can confirm with FileZilla that the whole operation went well. However, I need to raise errors and send an email if one of the SFTP servers is down for whatever reason, or if the transfer didn't go well. I ticked the "Alert on Error" box for the passthrough BS and BO, and the errors are indeed forwarded to an alert management system. My real problem is with the check on the SFTP servers.
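One idea I'm considering (just a minimal sketch; %Net.SSH.Session is a real class, but the helper name, host and credential parameters below are placeholders for my setup) is a small connectivity check I could call from a scheduled task or from the service, raising an alert when it fails:

ClassMethod CheckSFTP(pHost As %String, pPort As %Integer = 22, pUser As %String, pPass As %String) As %Status
{
    // try to reach and authenticate against the SFTP server
    Set tSSH = ##class(%Net.SSH.Session).%New()
    Set tSC = tSSH.Connect(pHost, pPort)
    If $$$ISERR(tSC) Quit tSC  // server unreachable, so the caller raises the alert / sends the email
    Set tSC = tSSH.AuthenticateWithUsername(pUser, pPass)
    Do tSSH.Disconnect()
    Quit tSC
}

Would something like this be a reasonable way to detect that one of the SFTP servers is down, or is there a built-in mechanism I'm missing?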
My team is implementing an Interoperability solution using the InterSystems Kubernetes Operator on the Red Hat OpenShift container platform.
We are trying to determine how many messages we can process in a given amount of time. We have a Feeder app running in 10 containers, each sending 50k messages to a load balancer, all starting at the same time.
Messages are received via HTTPS protocol by webgateway containers.
The Interoperability production runs in compute pods with persistent volumes for data, journals, and the WIJ.
I have a use case where I'm using embedded SQL within a Business Process to interact with a SQL table. However, when it comes to deployment into our production environment, the table won't form part of the deployment package created from the production.
Beyond manually creating the table on the production system, is there a standard way of ensuring that a table needed for a class is created during deployment?
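What I have in mind so far (a sketch only, assuming the table is defined by a persistent class; MyApp.Data.MyTable, MyApp.MyProduction and the paths are hypothetical names for my classes) is exporting the table's class alongside the production, so that compiling it on the target creates the table:

    Do $SYSTEM.OBJ.Export("MyApp.Data.MyTable.cls,MyApp.MyProduction.cls", "C:\deploy\package.xml")
    // on the production system, loading with "ck" compiles the classes and creates the table
    Do $SYSTEM.OBJ.Load("C:\deploy\package.xml", "ck")

Is that the normal approach, or is there something more automatic tied to the production deployment package?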
We currently need this format of URL with the port included. My guess is that if we specify 57772 as the default port in the web server (which I'm assuming is Apache), we wouldn't have to specify the port in our URLs. So how can I set a default port in Apache?
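One idea I'm toying with (a sketch, assuming the private web server listens on 57772 and that mod_proxy / mod_proxy_http are enabled; the paths and ports are just examples) is having Apache listen on the default port 80 and proxy through to 57772, so the port disappears from the URL:

Listen 80
<VirtualHost *:80>
    # forward portal/CSP traffic to the private web server on 57772
    ProxyPass        /csp/ http://localhost:57772/csp/
    ProxyPassReverse /csp/ http://localhost:57772/csp/
</VirtualHost>

Is that the right direction, or is there a better-supported way of doing this?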
Is there a way to insert a new key/value pair into an existing Lookup Table via DTL code? The only thing I found in the documentation is that we could use the following command: SELECT KeyName, DataValue FROM Ens_Util.LookupTable WHERE TableName = 'myTab'. In the meantime I just created a table and used it in my DTL to insert new values and validate whether the key exists.
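What I would like to do is something along these lines (a sketch only, writing through the same Ens_Util.LookupTable projection the SELECT above reads from, and assuming direct inserts are allowed; :tKey and :tValue are placeholders for the variables I would bind in the DTL or code action):

&sql(INSERT INTO Ens_Util.LookupTable (TableName, KeyName, DataValue)
     VALUES ('myTab', :tKey, :tValue))
If SQLCODE'=0 {
    // e.g. the key already exists, or the insert failed for another reason
}

Is this considered safe, or is there a supported API for maintaining lookup tables programmatically?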
Has anybody tried to write custom code to empty out queues when Interoperability shuts down? We run IRIS in a Kubernetes cluster and have compute pods scaling up and down. We have a Message Bank operation to keep all messages in one place, and we want to see all messages in the Message Bank.
On adding two new custom settings to a process, I'm looking to improve the labelling on the component so that it includes spaces, i.e. "Archive Path" instead of "ArchivePath".
I've looked at the following article, specifically at the suggestion:
set ^CacheMsg("EnsColumns","en","ArchivePath")="Archive Path"
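(If this needs to run on IRIS rather than Caché, I'm assuming the equivalent global would be ^IRIS.Msg, i.e. set ^IRIS.Msg("EnsColumns","en","ArchivePath")="Archive Path", but I'm not certain of that.)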
I'm trying to get my head around this principle of instance variables and the advantage of it.
I found this principle used a lot by my predecessor in some property definitions, and I'm wondering why we don't just use the property as simply as it is. It creates two properties: sCtg, which contains the value, and Ctg, which is calculated to return the value of sCtg. Is there an advantage such as faster access, or something else?
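To make it concrete, the pattern looks roughly like this (my reconstruction of what the class definitions do; the actual types and getter body in my predecessor's code differ):

Property sCtg As %String;

Property Ctg As %String [ Calculated ];

Method CtgGet() As %String
{
    // the calculated property just hands back the stored value
    Quit ..sCtg
}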
I'm testing an HL7 2.4 -> HL7 2.3.1 set of transformations. For the time being the source (service) and sink (operation) are file adapters. What I'd really like is to be able to save the output file with a name matching/containing the input file name - but as the DTL transformation in between uses "new" rather than "copy" it looks like I'm losing (some of?) the metadata, including the "Source" field (Body tab, message viewer).
Is there any way of preserving the Source field so the OutboundAdapter has access to it?
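One thing I'm wondering about (just a sketch; I'm assuming both sides are EnsLib.HL7.Message, whose Source property is inherited from EnsLib.EDI.Document) is whether an explicit assign at the end of the DTL would carry the original file name across even with create="new":

<assign property='target.Source' value='source.Source' action='set'/>

Would the outbound adapter then be able to use it, or is there a cleaner way?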
Has anybody tried to extend the menu on the Management Portal? I'd like to add a new page, or a dashboard that will be created soon, to the Management Portal and allow others to use it too. I understand there are risks that I could lose things during an upgrade. I am okay with that. Does InterSystems support such an effort?
I would appreciate some advice please, hopefully there are plenty of experts out there.
We are setting up an SFTP share between hospital trusts here in the UK, and I have set up the outbound operation using a custom extension of EnsLib.HL7.Operation.FTPOperation.
We are also configuring a VPN tunnel to run between the sites, so there is a bit of firewall / network routing to take place to enable the connection. To add a complication, we are on a mirrored cluster and usually present on a virtual IP address.
I have created a business service that uses an adapter I wrote by extending Ens.InboundAdapter (i.e. the ADAPTER parameter is set to my custom adapter).
The OnTask() method of my adapter polls our IRIS database to determine which records are ready to be sent out of our system to an external target system.
If a record is ready, the OnTask() method creates an instance of the business service and then calls its OnProcess() method, passing in the record as the input.
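For comparison, the pattern I thought was conventional looks roughly like this (a sketch; the SQL, class names and Ready flag are placeholders for my record model): OnTask() hands the record to the hosting service via ProcessInput() instead of instantiating the service itself.

Method OnTask() As %Status
{
    Set tSC = $$$OK
    // hypothetical query for the next record flagged as ready to send
    &sql(SELECT TOP 1 ID INTO :tId FROM MyApp_Data.OutboundRecord WHERE Ready = 1)
    If SQLCODE=0 {
        Set tRecord = ##class(MyApp.Data.OutboundRecord).%OpenId(tId)
        // BusinessHost is the service this adapter is attached to;
        // ProcessInput() ends up calling its OnProcessInput() callback
        Set tSC = ..BusinessHost.ProcessInput(tRecord)
    }
    Quit tSC
}

Is there anything wrong with the approach I described above, or should I switch to this pattern?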
I am working on sending Gmail with error details when any error occurs in the Ensemble production.
I am facing the issues below while doing it:
1. I have Ens.Alert (business process) using the class Ens.Alerting.AlertManager and EmailOperation (business operation) using the class EnsLib.EMail.AlertOperation. My business process is not sending the alert request to the business operation, even though I am using a rule to route to the business operation.
2. What SMTP server details need to be given for Gmail?
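On question 2, what I have gathered so far (please correct me if I have this wrong; the setting names are what I see on the EnsLib.EMail.AlertOperation adapter in my instance) is roughly:

SMTPServer  = smtp.gmail.com
SMTPPort    = 465 (implicit SSL/TLS) or 587 (STARTTLS)
Credentials = a credential set whose password is a Gmail "app password" (a normal account password is rejected when 2-step verification is on)
SSLConfig   = an SSL/TLS configuration defined in the Management Portal under System Administration > Security

Does that look right, and does it explain why nothing reaches the operation, or is that a separate routing problem?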
Still a newbie with Ensemble, I am trying to convert an XML message to an HL7 message. I am using a custom schema for the XML structure which includes the MSH and PID segments from the HL7 message.
These are the service, process, and operation classes I am using:
Business service - EnsLib.EDI.XML.Service.FileService
Business process - EnsLib.MsgRouter.RoutingEngine
Business operation - EnsLib.HL7.Operation.FileOperation
I know there is the ability to enable Rule Sets within a Business Rule by dates, but what about sub-rules, or just rules in general? We have an upcoming go-live at midnight, and I was wondering if there is a way I could script enabling rules/sub-rules within a Business Rule to save my team some time and headaches. Or is the best practice just to create another rule for the go-live, then move it over once we are live?