A permanent job opportunity has arisen for a Caché/Ensemble/IRIS developer with at least 2 years' experience. My client is a specialist provider of developers for high-profile clients in the finance, healthcare, retail, distribution and credit businesses, mainly based in central London. My client is looking for a highly motivated individual who thrives in an environment where problems are open-ended.
I am working on tweaking our current patient load process. The roster input is in pipe-delimited text format; it is parsed, transformed into a standard HS.Message.PatientSearchRequest, and sent to a business process for patient matching. The result is transformed into a standard ADT HL7 message using the EnsLib.HL7.Message class. I would like to know if there is another class I can use that will transform my output into a pipe-delimited text flat file instead of HL7 format. Any advice on how to do this is greatly appreciated.
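For illustration, here is a minimal sketch of one possible approach: a custom business operation that writes each matched result out as a pipe-delimited line through EnsLib.File.OutboundAdapter. The message class, property names and method name are illustrative, and it assumes the adapter's PutLine method; this is only a sketch, not necessarily the recommended class for the job.

    Class Demo.PipeDelimitedFileOperation Extends Ens.BusinessOperation
    {

    Parameter ADAPTER = "EnsLib.File.OutboundAdapter";

    /// Write one pipe-delimited line per matched patient (illustrative message class)
    Method WritePatientLine(pRequest As Demo.PatientMatchResult, Output pResponse As Ens.Response) As %Status
    {
        // Build the flat-file line from the message properties
        Set tLine = pRequest.MRN_"|"_pRequest.LastName_"|"_pRequest.FirstName_"|"_pRequest.DOB
        // Append it to the output file (path configured in the adapter settings)
        Quit ..Adapter.PutLine("PatientRoster.txt", tLine)
    }

    XData MessageMap
    {
    <MapItems>
      <MapItem MessageType="Demo.PatientMatchResult">
        <Method>WritePatientLine</Method>
      </MapItem>
    </MapItems>
    }

    }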
I'm asking this best-practices question on behalf of a customer.
They have a Caché-based application, and an Ensemble production deployed in front as an ESB to provide web service API access to the back end application. They're looking for a best practice approach for the scenario where the Caché back end is calling a third-party web service. Should that go through Ensemble too? It's sort of a philosophical design question/debate.
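For context, the "through Ensemble" option usually means the Caché back end hands the outbound call to the production rather than calling the third-party web service directly. A rough sketch, with hypothetical names (the "FromBackEnd" service and the request class are assumptions, not part of their production):

    // Back-end Caché code asking the production to make the third-party call
    Set tSC = ##class(Ens.Director).CreateBusinessService("FromBackEnd", .tService)
    If $$$ISOK(tSC) {
        Set tRequest = ##class(Demo.ThirdPartyRequest).%New()   // hypothetical request message
        Set tRequest.Payload = "..."
        // The service forwards the request to a business operation that calls the external web service
        Set tSC = tService.ProcessInput(tRequest, .tResponse)
    }

The trade-off being debated is exactly this extra hop: message tracing and retry handling in the production versus a simpler direct call from the back end.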
I use an excellent service to track my programming time: WakaTime: https://wakatime.com/.
This service integrates with the main IDE tools, including VSCode, and collects time spent by project, language and IDE, plus general code stats. See my last 7 days:
This information helps me organize and balance my targets by project.
After all these years of doing basic Ensemble work, I am just beginning to venture into using Caché tables instead of either Data Lookup tables or what I know of external SQL tables accessed over JDBC. I have several Caché SQL tables that I am building for a project I am working on.
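For anyone following along, a minimal sketch of what such a Caché SQL table looks like when defined as a persistent class (class, property and index names are illustrative):

    Class Demo.SiteLookup Extends %Persistent
    {

    /// Short code used as the lookup key
    Property SiteCode As %String(MAXLEN = 10) [ Required ];

    /// Human-readable description
    Property SiteName As %String(MAXLEN = 100);

    /// Unique index so the code can be used like a lookup-table key
    Index SiteCodeIdx On SiteCode [ Unique ];

    }

Once compiled, the class is queryable as the SQL table Demo.SiteLookup, e.g. SELECT SiteName FROM Demo.SiteLookup WHERE SiteCode = 'A1'.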
Hi Community, we would like to share the news about the "Bell State" example (a compact quantum computing program) automated using our ML Toolkit with a Rigetti QVM running in the background:
In %Net.SSH.Session there is a method SetTraceMask that will create a Wireshark capture file to help with troubleshooting a connection. I don't see any such method in %Net.FtpSession that can be used for troubleshooting. Is there a different method I should use?
I'm trying to migrate some of my HealthInsight dashboards and pivot tables to another HS instance.
In some pivot tables, I defined a set of calculated dimensions in the Analyzer, e.g. as below:
I then exported the cubes and the pivot tables in use to my new environment. When I open my pivot tables again, the calculated dimensions are missing, and hence my pivot tables no longer work:
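For reference, a sketch of how pivot tables and dashboards can be exported as DeepSee folder items (DFI) together with the cube classes; the file paths and cube class name are illustrative, and whether the Analyzer-defined calculated dimensions travel with these items is exactly the open question here:

    // On the source instance: export all DeepSee folder items (pivots, dashboards) plus the cube class
    Do $system.OBJ.Export("*.DFI", "/tmp/deepsee-items.xml")
    Do $system.OBJ.Export("MyApp.Cube.Patients.CLS", "/tmp/cube.xml")   // hypothetical cube class

    // On the target instance: load and compile
    Do $system.OBJ.Load("/tmp/deepsee-items.xml", "ck")
    Do $system.OBJ.Load("/tmp/cube.xml", "ck")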
I only use Caché and CSP. I am making a simple request in a CSP page with the #call method, and I need to define a callback for this #call method. Can I do this?
This is my simple request in the CSP page (JavaScript):
I am migrating my web application from Caché 2013 to Caché 2016. In Caché 2013 I have an integration with a Java application using the Java Gateway, mapping proxy classes and consuming a method whose parameter is an object, and it works perfectly.
But in Caché 2016 this integration doesn't work: I send the parameter as an object, but Caché sends it as a String containing the object reference...
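For comparison, a sketch of the calling pattern in question (the Java class, method and argument names are illustrative, and the host/port are assumptions):

    // Connect to the Java Gateway and build a proxy object to pass as the parameter
    Set tGateway = ##class(%Net.Remote.Gateway).%New()
    Set tSC = tGateway.%Connect("127.0.0.1", 55555, $namespace)
    If $$$ISOK(tSC) {
        // Proxy object created from the imported Java class (illustrative names)
        Set tParam = ##class(com.example.OrderRequest).%New(tGateway, "ABC-1")
        // The Java method expects an object; in the failing case it reportedly receives a String reference
        Set tResult = ##class(com.example.OrderService).%New(tGateway).submit(tParam)
        Do tGateway.%Disconnect()
    }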
A calculated measure is a powerful feature in DeepSee and can help to enrich your analyses. In the case of complex or long-running computations, plugins can be useful. This article shows, with a simple example, how you can build and use a plugin in DeepSee.
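To give a flavor before the full example, here is a compressed sketch of what a DeepSee plugin class can look like (the cube name, listing field and property name are illustrative; the structure follows %DeepSee.KPIPlugIn):

    Class Demo.AverageAgePlugin Extends %DeepSee.KPIPlugIn
    {

    /// Cube this plugin works against (illustrative)
    Parameter BASECUBE = "Patients";

    /// Compute over the source-table rows behind the current cell
    Parameter LISTINGSOURCE = "SourceTable";

    /// Fields the plugin needs from the listing
    Parameter LISTINGFIELDS = "Age";

    XData KPI [ XMLNamespace = "http://www.intersystems.com/deepsee/kpi" ]
    {
    <kpi name="AverageAgePlugin" displayName="AverageAgePlugin">
    <property name="AverageAge" displayName="AverageAge"/>
    </kpi>
    }

    /// Called with a result set over the listing rows for the current slice
    Method %OnCompute(pSQLRS As %SQL.StatementResult, pFactCount As %Integer) As %Status
    {
        Set tTotal = 0, tCount = 0
        While pSQLRS.%Next() {
            Set tTotal = tTotal + pSQLRS.%Get("Age"), tCount = tCount + 1
        }
        Set ..%seriesCount = 1
        Set ..%seriesNames(1) = "Total"
        Set ..%data(1, "AverageAge") = $Select(tCount: tTotal / tCount, 1: "")
        Quit $$$OK
    }

    }

Such a plugin can then be referenced from a calculated measure with an expression like %KPI("AverageAgePlugin","AverageAge").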
Is there any method/property/way in Caché to obtain something like a unique identifier for an installed Caché system? The idea is to get an identifier that will differ on any other installation/machine/etc., but will remain the same forever for the current installation, even if $zv changes (in case of an upgrade) or any data is removed from the database.
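One candidate (an assumption on my part, so please verify it exists and behaves this way on your version) is the instance GUID, which is generated for the installation itself:

    // Assumed API: returns a GUID tied to this installation, independent of $zv
    Write ##class(%SYS.System).InstanceGUID()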
Presenter: Rich Taylor
Task: Use an LDAP schema that differs from the provided default
Approach: Give examples of customized LDAP schema development, using LDAP APIs and ZAUTHORIZE
In this session we explore the various options for working with LDAP as an authentication and authorization framework. We will look beyond simple LDAP schemas into working with more complex LDAP configurations that incorporate application-level security information.
Content related to this session, including slides, video and additional learning content can be found here.
In the previous announcement, we asked for your thoughts on the new rubric Water Cooler Talk: basically a moderated (possibly biweekly) discussion about some programming topic not directly related to InterSystems products. Since the majority reacted positively, we'd like to invite you to post the topics that interest you in the comments to this post and mention whether you would like to lead the discussion.
https://www.youtube.com/embed/lGnJS3VMFUA
I've written several custom classes to add additional search capabilities to the user / clinician search defined in HS.UI.Registry.User.Find. I've tested it out, and it looks and works how I'd like it to, but I've run into a snag when trying to implement it.
The documentation for registering custom user interface pages shows a table of about 50 configuration registry keys for UI pages, but it only lists:
I need to remove the content below from PID-3 of an HL7 message in IRIS. However, I need to keep the content in PID-3.1. For example, I'd need to keep only "5050532". Can this be done in Data Transformations? If yes, how?
Thank you!
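For illustration, one way this could be sketched inside a DTL "code" action, assuming the standard EnsLib.HL7.Message GetValueAt/SetValueAt virtual-property methods (the field path assumes repetition 1 of PID-3):

    // Grab just the ID component (e.g. "5050532") from the first PID-3 repetition
    Set tId = target.GetValueAt("PID:3(1).1")
    // Overwrite the whole first repetition with only that value
    Do target.SetValueAt(tId, "PID:3(1)")
    // If extra repetitions carry the unwanted content, they can be removed as well
    Do target.SetValueAt("", "PID:3(2)", "remove")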
I am sending an HL7 message from a business service called Servicios.InterconsultaPeticionInterconsultav01r00 to a business process called EnrutadorVisadoMedicamento using SoapUI.
InterSystems has corrected several critical defects that can result in data integrity issues. These defects were identified and corrected within a short time, so InterSystems has simplified the upgrade process by consolidating them into a single package. The effects of encountering these defects may not always be visible.