Just a short comment: why do you need to call the SQL DATE_ADD function at all?
Your SelectTermination takes a %DateTime parameter. I would just use $H as the current date/time and subtract DaysBack from it.
e.g.
  set tDate=+$H-..#DaysBack
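If the resulting day count is then needed in an SQL comparison, it can be converted to ODBC date format with $ZDATE. A minimal sketch, assuming #DaysBack is a class parameter:

```objectscript
 // current date in internal $H format, minus the lookback window
 set tDate = +$H - ..#DaysBack
 // convert to ODBC format (YYYY-MM-DD) for use in an SQL predicate
 set tSQLDate = $zdate(tDate, 3)
```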
 

Hi Norman,

Could you modify foo and do something like this at its beginning?

if $data(^SpecialcaseFlag) {
  // get the current last entry
  set idx=$order(ITEMS(""),-1)
  // append a suffix so the new entry sorts after it
  set idx=idx_"z"
  set ITEMS(idx)=......
}

I would be interested in the use case for having a special global subscript always at the end.
But essentially it comes down to convention and design. If you define your last global subscript to be e.g. "zzzz", then nobody should create entries like "zzzz1".
As I said, for us to advise better we would need more information on what you need to achieve.

Hi Pietro,

%d should not be modified, as doing so might impact triggers down the road, especially for TrakCare.

As I don't know what you are trying to achieve, it's difficult to advise directly.

For TC I would usually use different triggers, namely OnBeforeSave / OnAfterSave, which fire on user activity from the UI or on object saves. Using those you could modify the value before it is saved, or update it after it has already been persisted.
Maybe best to request assistance from support for your specific issue.

Best Regards
Timo

Timo Lindenschmid · Nov 26, 2025

I would not go with percent classes/globals, nor with %All namespace mappings, unless you want/need your code and data available in all namespaces.

I would add Global and Package mappings as needed to make the storage class definition as well as the storage global location available in the N1 namespace.

Timo Lindenschmid · Nov 23, 2025

Hi,

the call to GetProductionInfo() returns all productions that were executed in the namespace at some point in time, so you will see the history of productions previously running in the namespace.

The return values are afaik: Status, StartDateTime, StopDateTime, CurrentDefault

Best Regards

Timo

Timo Lindenschmid · Oct 8, 2025

ISC supported customers should look into using ISC CCR (ChangeControlRecord) as well as SystemDefaultSettings. CCR is powered by Perforce and handles the transport of config and code between environments. Its latest version features production decomposition, which transports only the actually changed code/config and no longer the complete production definition.

Timo Lindenschmid · Sep 8, 2025

Hi,

the question here is why you want to disable ChangeControl hooks.
For localised changes you should look into using SystemDefaultSettings. I can see the reasoning for disabling the hooks on the Production page, but even that should only be done for a short period, not in general.
 

Timo Lindenschmid · Aug 31, 2025

Just to mention there is another option for storing large JSON objects. You could use DocDB for unstructured JSON objects, i.e. if the JSON structure is unknown. Or, my preference if the structure is well known: use %JSON.Adaptor to map a persistent class to the same values in your JSON string and then just import the JSON, ending up with an IRIS persistent object.

JSON Adaptor
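A minimal sketch of that second approach, with a hypothetical Demo.Person class (property names are assumed to match the JSON keys; otherwise they can be mapped via the %JSONFIELDNAME parameter):

```objectscript
/// Hypothetical persistent class mapped to a known JSON structure
Class Demo.Person Extends (%Persistent, %JSON.Adaptor)
{
Property Name As %String;
Property Age As %Integer;
}
```

Importing then becomes, roughly:

```objectscript
 // import a JSON payload into a new object and persist it
 set person = ##class(Demo.Person).%New()
 set sc = person.%JSONImport({"Name":"Jane","Age":42})
 if $$$ISOK(sc) { set sc = person.%Save() }
```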

Timo Lindenschmid · Aug 19, 2025

In addition to Raj's comment: what usually happens is that deprecated libraries and calls are replaced with redirects to the new implementation.

Timo Lindenschmid · Aug 11, 2025

Hi Touggourt,

Try http and make sure the firewall allows http access. If you want https on that port, you need to configure your production accordingly.

Timo Lindenschmid · Aug 11, 2025

Hi David,

I would check the idle timeout and session timeout settings on the web application as well as in the CSP Gateway. Long timeouts mean licenses are not released until the session is dropped.

Timo Lindenschmid · Aug 7, 2025

If you don't know what exactly is being executed, you can use MONLBL (ref: MONLBL).
It's meant for performance diagnostics, but it will also give you statistics on which routine code is being called, etc.
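MONLBL is started from a terminal via its interactive utility, which prompts for the routines to watch and then reports per-line execution counts and timings:

```objectscript
 // line-by-line monitor; interactive prompts select routines and processes
 do ^%SYS.MONLBL
```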

Timo Lindenschmid · Jul 28, 2025

Hi Malcolm,

using a license server is recommended in multi-server deployments. But that will not alleviate the issue that users need to log in again after a failover. The issue is that the Web Gateway loses its login connection to the IRIS instance on failover and needs to reauthenticate and reestablish it. For a stateless application this has no user impact, but for a stateful application the user needs to reauthenticate and all data in flight is lost.
To resolve this you should consider moving to a multi-tier setup with app servers using ECP.
High-level design:
- web server (Web Gateway) connects to
- app server (authenticates and serves the application), which connects to
- the mirror via ECP

ECP is mirror-aware and will fail over transparently for the app servers, so users stay logged in and don't notice the failover.

Caveat: this only works if the failover is fast; if it takes too long, the app servers' ECP connections time out and the transparent recovery is lost.
refer to: Distributed Cluster

Timo Lindenschmid · Jul 15, 2025

Hi,

a 404 error usually comes from Apache. As we don't know your Apache setup, it's difficult to advise. It might be that you need additional config to make the new path accessible.
Also, seeing that you call this on a port number other than 80/443, I guess you are still using the private web server (PWS), which is not supported for production loads.

Timo Lindenschmid · Jul 15, 2025

It would be good to understand which versions you are talking about. You marked this as IRIS 2024.1, but you are talking about Caché ODBC drivers. It would also be good to know which licenses you are using, as you mention a paywall... Usually IRIS is not limited if you are using a full license. Community Edition is only limited in resources, connections, and access to some enterprise-level features (like ECP, sharding, API Manager).

Timo Lindenschmid · Jul 15, 2025

Hi Dimitrii,

There are various options here. You can use the JOB command to start a new background process and then continue on in your main process, or you can use the Work Queue Manager to create a work queue and feed it items to process.
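Both options can be sketched roughly like this, assuming a hypothetical classmethod Demo.Task:Process(item) that does the actual work:

```objectscript
 // option 1: spawn a detached background process with JOB
 job ##class(Demo.Task).Process("item1")

 // option 2: Work Queue Manager - queue items and wait for all of them
 set queue = $system.WorkMgr.%New()
 for i=1:1:10 {
     set sc = queue.Queue("##class(Demo.Task).Process", "item"_i)
 }
 set sc = queue.WaitForComplete()
```

The Work Queue Manager is generally preferable for batches, since it pools worker processes and reports a combined status.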

Best Regards

Timo

Timo Lindenschmid · Jul 11, 2025

From a performance perspective I would not use objects to retrieve the data; use SQL instead.

SQL will take care of the conversion for you.

e.g.

select PAADM_PAPMI_DR->PAPMI_PAPER_DR->PAPER_StName
from SQLUser.PA_Adm
where PAADM_Hospital_DR = 2
  and PAADM_AdmDate >= '19/03/2025'
  and PAADM_AdmDate <= '19/03/2025'
Timo Lindenschmid · Jun 24, 2025

I would safeguard the code execution in the daemon by checking %Dictionary.CompiledClass to see if the chunk classes have been compiled yet.
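Such a check could look like this (the class name is hypothetical; %Dictionary.CompiledClass uses the class name as its ID):

```objectscript
 // only proceed if the generated chunk class has actually been compiled
 if ##class(%Dictionary.CompiledClass).%ExistsId("My.Chunk.Class") {
     // safe to run code that depends on the class
 }
```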

Timo Lindenschmid · Jun 10, 2025

Hi Norman,

we need to separate two areas of fragmentation:
1. Filesystem/OS-level fragmentation: nothing we can do about this, except running your trusted defrag tool if the filesystem has one and is actually in need of defragging.
2. Database/global fragmentation: a very interesting topic. Usually nothing needs to be done for an IRIS database; IRIS is pretty good at managing global block density (refer to https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cl…).
You can use the output of an integrity check to see the block density per global in the database. Both defrag and compact operations are non-destructive and non-interruptive, so even if they don't finish they can simply be started again and will continue on.

Timo Lindenschmid · Jun 3, 2025

Just as an addendum here: the PWS is configured to be a very stable management platform, and that stability comes at the cost of performance. If you put any load on the PWS it will not cope very well. During my time using it I always experienced lags and CSP timeouts when working with more than 4 concurrent power users.

Timo Lindenschmid · Jun 3, 2025

Hi Scott,
Check the Mirror Monitor to see whether all databases are caught up on the backup, or whether one database is stuck on dejournaling because of that journal file.
This usually happens if the backup is out of sync for a long time and the file got corrupted/deleted and is no longer available on the primary or other mirror members.

Two options I know of here: 1. restore that file from backup and supply it in the folder the backup member complains about, or 2. rebuild the backup member from your primary.

Timo Lindenschmid · May 20, 2025

Hi Pietro,

this depends on your application.
In general, you cannot grant DB write access without also granting read access.
That said, you can define a user that only has SQL INSERT rights on specific tables, without SELECT rights.
I have not tested this, but the SMP allows this type of setup.
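In SQL terms such a setup would be a plain grant of INSERT without SELECT (table and user names hypothetical):

```sql
-- insert-only access: the user can add rows but not read them back
GRANT INSERT ON SQLUser.MyTable TO InsertOnlyUser
```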

Best Regards

Timo

Timo Lindenschmid · May 20, 2025

Hi,
A couple of things to check.

Is there any difference in server design? e.g. number of disks, SCSI controllers, volume/storage distribution, etc.
Is the VM definition the same? e.g. storage driver versions (generic SCSI controller vs Hyper-V SCSI controller)
Is the OS on the host and in Hyper-V the same?
Is the storage provider design the same?
Is the IRIS config the same (i.e. the CPF file)? Especially, are the settings below present?

[config]
wduseasyncio=1
asyncwij=8

I guess both IRIS versions are exactly the same build, although I have never heard of that affecting disk performance.

Timo Lindenschmid · May 6, 2025

%ExecDirectNoPrivs just omits the access check on prepare; access rights are still checked when the SQL is executed.

You can create a security role that grants SQL access to the required storage table via the System Management Portal, then assign this role to UnknownUser.
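For reference, a sketch of the call (the query and class names are illustrative):

```objectscript
 // prepare without the privilege check; privileges are still
 // enforced when the statement is executed
 set rs = ##class(%SQL.Statement).%ExecDirectNoPrivs(, "select Name from Demo.Person")
 while rs.%Next() { write rs.%Get("Name"), ! }
```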