I want to manipulate all the objects whose type is %Dictionary.StorageSQLMapDefinition via %OpenId.
However, I don't know the exact IDs to feed into %OpenId. Is there a way to query all the existing IDs of %Dictionary.StorageSQLMapDefinition in a namespace?
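So far the closest I have come is something like this (just a sketch using dynamic SQL, relying on the %Dictionary classes being persistent and projected to SQL); is that the right approach?

    // Sketch: list every %Dictionary.StorageSQLMapDefinition ID in the
    // current namespace and open each object with %OpenId.
    Set stmt = ##class(%SQL.Statement).%New()
    Set sc = stmt.%Prepare("SELECT %ID AS ID FROM %Dictionary.StorageSQLMapDefinition")
    If $SYSTEM.Status.IsError(sc) { Do $SYSTEM.Status.DisplayError(sc) Quit }
    Set rs = stmt.%Execute()
    While rs.%Next() {
        Set id = rs.%Get("ID")
        Set map = ##class(%Dictionary.StorageSQLMapDefinition).%OpenId(id)
        If $IsObject(map) {
            // manipulate the map definition here; for now just print its name
            Write id, " -> ", map.Name, !
        }
    }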
How can I simply get the value of the system language from ^%z? I'm asking because I've run into a problem: the U2 context isn't set up when connecting through SQL Connect.
In the HealthShare clinician user specifications (as well as in the Clinical Viewer), there are places for specifying "proxy" user(s), but there doesn't seem to be anything in the documentation that explains what this actually means or enables.
Hello, we have a few hundred triggers to port from Oracle to Caché for a migration project, and many of them need to change the value being inserted (for example, normalize it, set it to null, etc.).
The documentation says "You cannot set {fieldname*N} in trigger code.", so we're out of luck.
Is there a good workaround for this?
SqlComputeOnChange doesn't seem like the best approach, but I'm not totally sure: normalization and validation, for example, might have a better home somewhere other than a trigger.
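For what it's worth, this is roughly what I had in mind with SqlComputeOnChange: a hypothetical Demo.Person class where Name normalizes itself on SQL insert/update, sketched from the documentation rather than tested against our real triggers.

    /// Hypothetical class: Name is recomputed (normalized) on every SQL INSERT/UPDATE,
    /// which is close to what our Oracle BEFORE triggers do today.
    Class Demo.Person Extends %Persistent
    {

    Property Name As %String [ SqlComputeCode = { set {*} = $ZSTRIP($ZCONVERT({Name}, "U"), "<>W") }, SqlComputed, SqlComputeOnChange = (%%INSERT, %%UPDATE) ];

    }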
I created an iKnow domain, where I supplied dictionaries, a blacklist, metadata and stemming. The data source is a table.
I would like to use the iFind semantic search feature. The documentation says that iFind uses iKnow semantic analysis, but I want iFind to use the iKnow domain configuration I created earlier. How can I do that?
When developing a new Interoperability Production, it is quite natural for settings to be added directly in the Production at first.
However, as soon as you want to move the Production from development to a test or staging environment, it becomes clear that some settings, such as HTTP servers, IP addresses and/or ports, need to change. To avoid these settings being overwritten during a later redeployment, it is essential to move them from the Production to the System Default Settings.
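For illustration, a setting can also be registered as a System Default Setting from code rather than through the portal. This is only a minimal sketch with made-up production and item names (wildcards such as "*" are allowed as well):

    // Hypothetical example: register "HTTPServer" for item "HTTP.Operation"
    // of production "Demo.Production" as a System Default Setting.
    Set sds = ##class(Ens.Config.DefaultSettings).%New()
    Set sds.ProductionName = "Demo.Production"
    Set sds.ItemName = "HTTP.Operation"
    Set sds.HostClassName = "*"              // apply to any host class
    Set sds.SettingName = "HTTPServer"
    Set sds.SettingValue = "staging.example.com"
    Set sc = sds.%Save()
    If $SYSTEM.Status.IsError(sc) Do $SYSTEM.Status.DisplayError(sc)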
We are trying to build a very large data store for HIPAA transactions, with data partitioned by an ID key of the form YYYYMM. Current live data is inserted into the current month's DB, HIPAA_202306. Ten years of older data (the HIPAA retention policy) will sit in 120 DBs (201606_HIPAA, 201607... 202305) for historical audit and legal-compliance purposes.
Currently, when we create a namespace, we get two databases: CODEDB for routines/classes and DATADB for data (globals).
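What we are considering for the monthly split is subscript-level global mapping, so each month's subscript lives in its own database. This is purely a hypothetical sketch (it assumes a namespace HIPAA, a data global ^HIPAA.DataD subscripted by YYYYMM, and a pre-created HIPAA_202306 database):

    // Sketch: map subscript 202306 of ^HIPAA.DataD to its own database.
    // Must be run in the %SYS namespace; the target DB must already exist.
    New $NAMESPACE
    Set $NAMESPACE = "%SYS"
    Set props("Database") = "HIPAA_202306"
    Set sc = ##class(Config.MapGlobals).Create("HIPAA", "HIPAA.DataD(202306)", .props)
    If $SYSTEM.Status.IsError(sc) Do $SYSTEM.Status.DisplayError(sc)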
I know what you are thinking... a new feature for %ZEN.proxyObject...? NOW...???
In Spain we say "better late than never" ;-)
Have you ever needed to send a numeric attribute of a JSON object in string format?
Have you gone crazy casting class objects with fixed-type properties?
You're in luck!!
With this new feature I propose a way to keep working with our beloved dynamic object %ZEN.proxyObject, while being able to choose whether or not we want to send numeric attributes in string format.
We'll discuss the aspects of Security Model implementation in InterSystems IRIS, the requirements, and what we expect from participants of the Security contest. Also, we'll answer all the questions related to the contest!
Date & Time: Monday, November 15 — 12:00 AM EDT
Speakers: 🗣 @Andreas Dieckow, Principal Product Manager at InterSystems Corporation 🗣 @Evgeny Shvarov, InterSystems Developer Ecosystem Manager
In the previous article, we combined ZPM with Config-API to load a configuration on module loading/installation.
It could be useful for small applications, but for a large application, it's not convenient.
I would like to grant access to the Event Log (shown below in French as "Journal des événements") to a user, and more generally to an existing group of users (this group is named "Helpdesk").
Helpdesk already has access to the following tables:
https://www.youtube.com/embed/L77K9oAgexQ
I have been looking into the cluster configurations supported by Caché and have a couple of questions:
1. In Caché version 2018.2, there is a sharding concept that splits the data of a master data server across multiple smaller data servers, each storing sharded data.