Timo Lindenschmid · Jun 10
Hi Norman, we need to separate two areas of fragmentation.
1. Filesystem/OS-level fragmentation: nothing we can do anything about, except running your trusted defrag tool if the filesystem has one and is actually in need of defragging.
2. Database/global fragmentation: this is a very interesting topic. Usually nothing needs to be done for an IRIS database; IRIS is pretty good at managing global block density (refer to https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls...). You can use the output of an integrity check to see the block density per global in the database. Both defrag and compact operations are non-destructive and non-interruptive, so even if they don't finish they can simply be started again and will continue on.
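A minimal terminal sketch for inspecting density with the standard utilities (both are interactive and prompt for the databases/globals to examine; run the integrity check from the %SYS namespace):

    zn "%SYS"
    do ^Integrity   ; interactive integrity check; the report includes block counts and density per global
    do ^%GSIZE      ; interactive global size report; the detailed listing includes packing per global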
Timo Lindenschmid · Jun 10
Hi Alexey, without knowing any details, have you checked whether your routine cache is big enough to hold both routines? Refer to the guide on monitoring memory usage.
Best Regards
Timo
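For reference, the routine buffer allocation is the routines setting in the [config] section of the CPF; a sketch assuming a 512 MB allocation (size it to your own workload):

    [config]
    routines=512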
Timo Lindenschmid · Jun 3
Just as an addendum here: the PWS is configured to be a very stable management platform. This stability comes at the cost of performance. If you put any load on the PWS it will not cope very well. During my time using it I always experienced lags and CSP timeouts when trying to work with the PWS with more than 4 concurrent power users.
Timo Lindenschmid · Jun 3
Hi Scott, check the Mirror Monitor to see whether all databases are caught up on the backup, or whether there is one database stuck on dejournaling because of that journal file. This usually happens if the backup is out of sync for a long time and the file got corrupted/deleted and is no longer available on the primary or other mirror members. The two options that I know of here are: 1. restore that file from backup and supply it in the folder that the backup member complains about, or 2. rebuild the backup from your primary.
Timo Lindenschmid · Jun 3
Hi, not sure about Caché 2017, but in later versions there is a Task Manager task available to purge application error logs; afaik this is automatically configured to purge any application errors older than 30 days. See %SYS.Task.PurgeErrorsAndLogs in the class reference.
Timo Lindenschmid · May 20
Hi Pietro, this depends on your application. In general, you cannot define DB write access without having read access. That said, you can define a user that only has SQL INSERT rights on specific tables, without SELECT rights. I have not tested this, but the SMP allows this type of setup.
Best Regards
Timo
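A minimal SQL sketch of that setup (Schema.MyTable and AppInsertUser are placeholder names):

    GRANT INSERT ON Schema.MyTable TO AppInsertUser   -- insert-only: SELECT is simply never granted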
Timo Lindenschmid · May 20
Hi, a couple of things to check:
- Is there any difference in server design? e.g. number of disks, SCSI controllers, volume/storage distribution etc.
- Is the VM definition the same? e.g. storage driver versions (generic SCSI controller vs Hyper-V SCSI controller)
- Is the OS on the host and in Hyper-V the same?
- Is the storage provider design the same?
- Is the IRIS config the same (i.e. the cpf file), and especially are the settings below present?
[config]
wduseasyncio=1
asyncwij=8
I guess both IRIS versions are exactly the same build, although I have never heard of that affecting disk performance.
Timo Lindenschmid · May 14
Hi, you can use the CONCAT function:
select {fn CONCAT('HELLO',' world')}
Timo Lindenschmid · May 6
%ExecDirectNoPrivs just omits the access check on prepare; access rights are still checked on execute of the SQL. You can create a security role that grants SQL access to the required storage table via the System Management Portal, then assign this access role to UnknownUser.
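The same setup can also be scripted in SQL; a sketch with placeholder role and table names:

    CREATE ROLE SqlTableReader
    GRANT SELECT ON Schema.MyTable TO SqlTableReader
    GRANT SqlTableReader TO UnknownUser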
Timo Lindenschmid · Apr 15
Here is the implementation:

ClassMethod MaxCoverage(ByRef SourceLists, Output Solution As %List)
{
    /*
    1. Run through each source list and generate a value -> list index,
       e.g. List(3) would result in the index entries
           ^||idx(5,3)=""
           ^||idx(8,3)=""
           ^||idx(9,3)=""
       Also add the list to a still-valid list.
    2. Iterate over the index, find the first value with only one entry and add that
       list to the result list, then run through the list and remove all value index
       entries for values contained in the list. Remove the list from the still-valid list.
    3. If no value has just one list entry, pick the list with the most entries that is
       on the still-valid list. Iterate over the list and check each value against the
       value index; if the value is still in the index, remove the value index entry and
       add the list to the result list. Remove the list from the still-available list.
    4. Iterate the above until either the value index has no more entries, or the
       still-valid list has no more entries.
    5. The result list contains all lists required for maximum coverage.
    */
    kill Solution
    kill ^||lengthIdx
    kill ^||idx
    kill ^||covered
    set idx=""
    for {
        set idx=$order(SourceLists(idx)) quit:idx=""
        set listid=0
        set stillAvailable(idx)=""
        set ^||lengthIdx($listlength(SourceLists(idx)),idx)=idx
        while $listnext(SourceLists(idx),listid,listval) {
            set ^||idx(listval,idx)=""
        }
    }
    set listid=""
    // main loop - exit when either ^||idx has no more entries or the still-valid list has no more entries
    for {
        if $data(stillAvailable)=0 {
            // no more lists to process
            quit
        }
        if $data(^||idx)=0 {
            // no more values to process
            quit
        }
        // find the first value with only one entry
        set val=""
        set found=0
        for {
            quit:found=1
            set val=$order(^||idx(val)) quit:val=""
            set listid=""
            for {
                set listid=$order(^||idx(val,listid)) quit:listid=""
                // found a value, now check if there is more than one entry
                if $order(^||idx(val,listid))="" {
                    // found a value with only one entry
                    set found=1
                    quit
                }
            }
        }
        if found=0 {
            // haven't found one yet, so use the one with the most entries in ^||lengthIdx
            set res=$query(^||lengthIdx(""),-1,val)
            if res'="" {
                set listid=val
            } else {
                // no more entries - should never hit this
                quit
            }
        }
        if listid'="" {
            // got a list, now process it
            // first remove the list from the available lists
            kill stillAvailable(listid)
            kill ^||lengthIdx($listlength(SourceLists(listid)),listid)
            // iterate through the list, check each value against the value index
            set listval=0
            write !,"found listid:"_listid,!
            set ptr=0
            set added=0
            while $listnext(SourceLists(listid),ptr,listval) {
                // check if the value is still in the index
                write !," checking value:"_listval
                if $increment(^||covered(listval))
                if $data(^||idx(listval)) {
                    write " - found it!"
                    // remove the value from the index
                    kill ^||idx(listval)
                    // add the list to the result list
                    if added=0 {
                        set Solution=$select($get(Solution)="":$listbuild(listid),1:Solution_$listbuild(listid))
                        set added=1
                    }
                }
            }
        }
    }
}

And the execution result:

DEV>set List(1)=$lb(3,5,6,7,9),List(2)=$lb(1,2,6,9),List(3)=$lb(5,8,9),List(4)=$lb(2,4,6,8),List(5)=$lb(4,7,9)
DEV>d ##class(Custom.codegolf).MaxCoverage(.List,.res)
found listid:2
 checking value:1 - found it!
 checking value:2 - found it!
 checking value:6 - found it!
 checking value:9 - found it!
found listid:1
 checking value:3 - found it!
 checking value:5 - found it!
 checking value:6
 checking value:7 - found it!
 checking value:9
found listid:5
 checking value:4 - found it!
 checking value:7
 checking value:9
found listid:4
 checking value:2
 checking value:4
 checking value:6
 checking value:8 - found it!
DEV>zw res
res=$lb("2","1","5","4")
DEV>zw ^||lengthIdx
^||lengthIdx(3,3)=3
DEV>zw ^||covered
^||covered(1)=1
^||covered(2)=2
^||covered(3)=1
^||covered(4)=2
^||covered(5)=1
^||covered(6)=3
^||covered(7)=2
^||covered(8)=1
^||covered(9)=3
Timo Lindenschmid · Apr 14
My approach would be:
1. Run through each list and generate a value -> list index, e.g. List(3) would result in the index entries ^||idx(5,3)="", ^||idx(8,3)="", ^||idx(9,3)="". Also add the list to a still-valid list.
2. Iterate over the index, find the first value with only one entry and add that list to the result list, then run through the list and remove all value index entries for values contained in the list. Remove the list from the still-valid list.
3. If no value has just one list entry, pick the list with the most entries that is on the still-valid list. Iterate over the list and check each value against the value index; if the value is still in the index, remove the value index entry and add the list to the result list. Remove the list from the still-available list.
4. Iterate the above until either the value index has no more entries, or the still-valid list has no more entries.
5. The result list contains all lists required for maximum coverage.
Hope that makes sense.
Timo Lindenschmid · Apr 13
If you add a calculated field to a class definition you don't have to "update" your data for the field to be populated. It will get calculated on record access, i.e. when the record gets selected with the field included in the SELECT.
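A minimal class sketch of such a field (Demo.Person, FirstName, and LastName are assumed names):

Class Demo.Person Extends %Persistent
{
    Property FirstName As %String;
    Property LastName As %String;
    /// Computed on access, never stored - no data update needed
    Property FullName As %String [ Calculated, SqlComputed, SqlComputeCode = { set {*} = {FirstName}_" "_{LastName} } ];
}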
Timo Lindenschmid · Apr 9
This sounds like Tune Table messed up the table statistics. I would look at the table statistics for that boolean field. I would also open a support ticket with the WRC on this.
Timo Lindenschmid · Apr 7
Hi, what SSH client are you using? PuTTY perchance? If so, try to set the KeepAliveTimeout to something other than 0, say 60. This usually solves the issue of being disconnected for me.
Timo Lindenschmid · Apr 3
You might want to look into the Work Queue Manager. It can be configured to use multiple agents to process anything in a queue. This approach is best if the queue is fixed at the start and during the run of the processing, i.e. no items are added while processing is underway. If you are more after a spooling-type setup, you can use an interoperability production to monitor a spool global and start jobs based on pool size etc.
Ref: Using the Work Queue Manager | InterSystems IRIS Data Platform 2025.1
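A minimal Work Queue Manager sketch via the documented $system.WorkMgr entry point (My.Worker and its ProcessItem classmethod are assumed names):

    set queue = $system.WorkMgr.%New()
    // queue one work item per call; worker agents pick them up in parallel
    for i=1:1:100 { do queue.Queue("##class(My.Worker).ProcessItem",i) }
    // Sync() blocks until every queued item has been processed
    set sc = queue.Sync()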
Timo Lindenschmid · Mar 13
Hi Evan, I think the only way is using a process query, like this:
set currentUser = ##class(%SYS.ProcessQuery).%OpenId($job).OSUserName
Timo Lindenschmid · Mar 6
Hi Jude, the better option to get help here is to open an iService ticket for specialist help. Just at a high level:
1. Make sure the parameters you want to use are added via the URL expression on the menu item used to call the report.
2. The parameters can then be used in the report manager definition and assigned.
Also make sure the parameters are in the expected format. IRIS dates are usually in $Horolog format and not in yyyy-mm-dd as might be expected by LogiReports.
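For the date format point, a one-line conversion sketch ($zdate with format 3 produces ODBC-style yyyy-mm-dd):

    set odbcDate = $zdate(+$horolog, 3)   ; converts the $Horolog date part to yyyy-mm-dd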
Timo Lindenschmid · Mar 6
Hi, just wondering what you want to achieve. Is this for outputting a report? If so, there are better options available, e.g. InterSystems Reports or, although deprecated, Zen Reports.
Timo Lindenschmid · Mar 3
Just a note for embedded SQL: you can modify the compiler options, e.g. in VS Code, to include /compileembedded=1. This will then trigger compilation of your embedded SQL at compile time and highlight any errors you might have there, like missing tables etc.
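For illustration, a minimal embedded SQL method that would be checked at compile time with that flag set (Demo.Person is an assumed table):

ClassMethod GetName(id As %Integer) As %String
{
    // with /compileembedded=1, a typo in the table or column name fails the compile
    &sql(SELECT Name INTO :name FROM Demo.Person WHERE ID = :id)
    quit $select(SQLCODE=0:name, 1:"")
}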