Timo Lindenschmid · May 6, 2025

%ExecDirectNoPrivs only omits the access check at prepare time; access rights are still checked when the SQL is executed.

You can create a security role that grants SQL access to the required storage table via the System Management Portal, then assign this role to UnknownUser.
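For illustration, the same can be done in plain SQL (role, table and user names here are made up; substitute your actual storage table):

```sql
-- Hypothetical names: create a role, grant it table access, assign it to UnknownUser
CREATE ROLE AppTableAccess
GRANT SELECT, INSERT, UPDATE ON SQLUser.MyStorageTable TO AppTableAccess
GRANT AppTableAccess TO UnknownUser
```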

Timo Lindenschmid · Apr 15, 2025

Here is the implementation:
 

ClassMethod MaxCoverage(ByRef SourceLists, Output Solution As %List) {
    /*
    1. Run through each source list and generate a value -> list index,
       e.g. List(3) would result in the index entries ^||idx(5,3)="" ^||idx(8,3)="" ^||idx(9,3)="".
       Also add the list to a "still available" list.
    2. Iterate over the index, find the first value with only one entry and add that list to the
       result list, then run through the list and remove all value index entries for the values it
       contains. Remove the list from the "still available" list.
    3. If no value has just one list entry, pick the list with the most entries that is still
       available. Iterate over the list and check each value against the value index; if the value
       is still in the index, remove it and add the list to the result list. Remove the list from
       the "still available" list.
    4. Repeat until either the value index or the "still available" list has no more entries.
    5. The result list contains all lists required for maximum coverage.
    */
    kill Solution
    kill ^||lengthIdx
    kill ^||idx
    kill ^||covered
    set idx=""
    for {
        set idx=$order(SourceLists(idx))
        quit:idx=""
        set listid=0
        set stillAvailable(idx)=""
        set ^||lengthIdx($listlength(SourceLists(idx)),idx)=idx
        while $listnext(SourceLists(idx),listid,listval) {
            set ^||idx(listval,idx)=""
        }
    }


    set listid=""
    // for loop - exit when either ^||idx has no more entries or the still valid list has no more entries
    for {
        if $data(stillAvailable)=0 {
            // no more lists to process
            quit
        }
        if $data(^||idx)=0 {
            // no more values to process
            quit
        }
        // find the first value with only one entry
        set val=""
        set found=0
        for {
            quit:found=1
            set val=$order(^||idx(val))
            quit:val=""
            set listid=""
            for {
                set listid=$order(^||idx(val,listid))
                quit:listid=""
                // found a value, now check if there is more than one entry
                if $order(^||idx(val,listid))="" {
                    // found a value with only one entry
                    set found=1
                    quit
                }
            }
        }

        if found=0 {
            // no single-entry value found yet, so use the list with the most entries from ^||lengthIdx
            set res=$query(^||lengthIdx(""),-1,val)
            if res'="" {
                set listid=val
            } else {
                // no more entries
                // should never hit this
                quit
            }
        }

        if listid'="" {
            // got a list now process it
            // first remove the list from the available lists
            kill stillAvailable(listid)
            kill ^||lengthIdx($listlength(SourceLists(listid)),listid)
            // iterate through the list, checking each value against the value index
            set listval=0
            w !,"found listid:"_listid,!

            set ptr=0
            set added=0
            while $listnext(SourceLists(listid),ptr,listval) {
                // check if the value is still in the index
                w !,"   checking value:"_listval
                If $INCREMENT(^||covered(listval))
                if $data(^||idx(listval)) {
                    w " - found it!"
                    // remove the value from the index
                    kill ^||idx(listval)
                    // add the list to the result list
                    if added=0 {
                        set Solution=$select($get(Solution)="":$listbuild(listid),1:Solution_$listbuild(listid))
                        set added=1
                    }
                }
            }
        }
    }
}

And the execution result:

DEV>set List(1)=$lb(3,5,6,7,9),List(2)=$lb(1,2,6,9),List(3)=$lb(5,8,9),List(4)=$lb(2,4,6,8),List(5)=$lb(4,7,9)

DEV>d ##class(Custom.codegolf).MaxCoverage(.List,.res)
found listid:2

   checking value:1 - found it!
   checking value:2 - found it!
   checking value:6 - found it!
   checking value:9 - found it!
found listid:1

   checking value:3 - found it!
   checking value:5 - found it!
   checking value:6
   checking value:7 - found it!
   checking value:9
found listid:5

   checking value:4 - found it!
   checking value:7
   checking value:9
found listid:4

   checking value:2
   checking value:4
   checking value:6
   checking value:8 - found it!
DEV>zw res
res=$lb("2","1","5","4")
DEV>zw ^||lengthIdx
^||lengthIdx(3,3)=3
DEV>zw ^||covered
^||covered(1)=1
^||covered(2)=2
^||covered(3)=1
^||covered(4)=2
^||covered(5)=1
^||covered(6)=3
^||covered(7)=2
^||covered(8)=1
^||covered(9)=3

Timo Lindenschmid · Apr 14, 2025

My approach would be:
1. Run through each list and generate a value -> list index, e.g. List(3) would result in the index entries ^||idx(5,3)="" ^||idx(8,3)="" ^||idx(9,3)="". Also add the list to a "still available" list.
2. Iterate over the index, find the first value with only one entry and add that list to the result list, then run through the list and remove all value index entries for the values it contains. Remove the list from the "still available" list.
3. If no value has just one list entry, pick the list with the most entries that is still available. Iterate over the list and check each value against the value index; if the value is still in the index, remove it and add the list to the result list. Remove the list from the "still available" list.
4. Repeat until either the value index or the "still available" list has no more entries.
5. The result list contains all lists required for maximum coverage.
Hope that makes sense.
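The steps above can be sketched compactly in Python (a hypothetical re-implementation for illustration, not the author's ObjectScript code; tie-breaking may differ, so the lists it picks can differ while still covering all values):

```python
def max_coverage(source_lists):
    """Greedy cover: source_lists maps list id -> list of values."""
    # step 1: value -> set of list ids that still cover it
    index = {}
    for list_id, values in source_lists.items():
        for v in values:
            index.setdefault(v, set()).add(list_id)
    available = set(source_lists)           # the "still available" lists
    solution = []
    # step 4: loop until the value index or the available set is exhausted
    while index and available:
        # step 2: a value covered by exactly one remaining list forces that list
        forced = next((ids for _, ids in sorted(index.items()) if len(ids) == 1), None)
        if forced:
            chosen = next(iter(forced))
        else:
            # step 3: otherwise take the largest still-available list
            chosen = max(available, key=lambda lid: len(source_lists[lid]))
        available.discard(chosen)
        added = False
        for v in source_lists[chosen]:
            if v in index:
                del index[v]                # value is now covered
                added = True
        if added:
            solution.append(chosen)         # step 5: collect the chosen lists
    return solution
```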

Timo Lindenschmid · Apr 13, 2025

If you add a calculated field to a class definition, you don't have to "update" your data for the field to be populated. It gets calculated on record access, i.e. when the record is selected with the field included in the SELECT.
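A minimal sketch of such a property (class and property names here are made up for illustration):

```objectscript
Class Demo.Order Extends %Persistent
{
Property Price As %Numeric;
Property Qty As %Integer;
/// Computed on access; existing rows need no update step
Property Total As %Numeric [ Calculated, SqlComputed, SqlComputeCode = { set {*} = {Price} * {Qty} } ];
}
```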

Timo Lindenschmid · Apr 9, 2025

This sounds like Tune Table messed up the table statistics. I would look at the statistics for that boolean field. I would also open a support ticket with the WRC on this.

Timo Lindenschmid · Apr 7, 2025

Hi,

what SSH client are you using? PuTTY, perchance?
If so, try setting the keepalive timeout to something other than 0, say 60 seconds.

This usually solves disconnection issues for me.

Timo Lindenschmid · Apr 3, 2025

You might want to look into the Work Queue Manager. It can be configured to use multiple agents to process anything in a queue. This approach works best when the set of work items is fixed at the start and nothing is added while processing runs.
If you prefer more of a spooling-type setup, you can use an integration to monitor a spool global and start jobs based on pool size etc.


ref: Using the Work Queue Manager, InterSystems IRIS Data Platform 2025.1
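A minimal sketch of the Work Queue Manager pattern (the worker class/method names are hypothetical; the queue is filled up front, then processed in parallel by multiple agents):

```objectscript
    // Create a work queue and hand items to it
    set queue = $system.WorkMgr.%New()
    for i=1:1:100 {
        // MyApp.Worker.ProcessItem is a made-up class method taking one argument
        set sc = queue.Queue("##class(MyApp.Worker).ProcessItem", i)
    }
    // Blocks until every queued item has been processed
    set sc = queue.WaitForComplete()
```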
 

Timo Lindenschmid · Mar 13, 2025

Hi Evan,

I think the only way is using %SYS.ProcessQuery, like this:
set currentUser = ##class(%SYS.ProcessQuery).%OpenId($job).OSUserName
 

Timo Lindenschmid · Mar 6, 2025

Hi Jude, the better option to get help here is to open an iService ticket for specialist assistance.
Just at a high level:

1. Make sure the parameters you want to use are added via the URL expression on the menu item used to call the report.

2. The parameters can then be used and assigned in the Report Manager definition.

Also make sure each parameter is in the expected format. IRIS dates are usually in $Horolog format, not the yyyy-mm-dd that LogiReports might expect.

Timo Lindenschmid · Mar 6, 2025

Hi,
just wondering what you want to achieve.

Is this for outputting a report? If so, there are better options available, e.g. InterSystems Reports or, although deprecated, Zen Reports.
 

Timo Lindenschmid · Mar 3, 2025

Just a note for embedded SQL: you can modify the compiler options, e.g. in VS Code, to include /compileembedded=1. This will then compile your embedded SQL at compile time and highlight any errors you might have there, such as missing tables.
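In VS Code this goes into settings.json, e.g.:

```json
{
    "objectscript.compileFlags": "cbk/compileembedded=1"
}
```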

Timo Lindenschmid · Mar 2, 2025

Hi Harshita,

please get an iService ticket raised and someone from support will assist you.

Best Regards

Timo

Timo Lindenschmid · Feb 26, 2025

Hi,

IRIS comes with a PDF rendering engine based on Apache FOP. However, it is geared more toward creating PDF documents from scratch than converting existing documents to PDF.

PDF render config documentation

It is used by the now deprecated Zen Reports (%ZEN.Report.PrintServer).
The other option is to make use of InterSystems Reports, but again, this is for creating new PDFs from data contained in the database, not for converting existing documents to PDF.

Timo Lindenschmid · Feb 19, 2025

Option 2 is not entirely correct.
The parameter [Startup] MaxIRISTempSizeAtStart=5000 clears the IRISTemp database at startup and shrinks it to the specified size; it does not prevent IRISTemp from growing again afterwards.
So if you set the parameter and restart IRIS, the runaway IRISTemp database will be reset to 5000 MB (per the example).
To ensure IRISTemp does not take over all your available space, either relocate the database to a dedicated volume or set its maximum size in the SMP. But both options carry their own risks.

Timo Lindenschmid · Feb 19, 2025

Looking at the documentation, I found this: Cube dependencies building.
Essentially you need to define the dependencies between the cubes using the DependsOn keyword in Designer (not the same as the class keyword of that name), and you need to define the build order by creating either a utility class with a build method or by using the Cube Manager.

Timo Lindenschmid · Feb 19, 2025

I had the same error. Manually creating that folder did not help, but after I put SELinux into permissive mode (internal test container) the error went away, so I guess the SELinux tags are wrong as well.

Timo Lindenschmid · Jan 21, 2025

Hi, looking at those errors:

First, a process with an active transaction crashed, so the system tried to roll back its transactions.
Then the process conducting the rollback (pid 10800) ran out of process memory (STORE error).

I think the clean daemon was nevertheless able to roll back the open transaction in the end.

Timo Lindenschmid · Jan 21, 2025

Hi Phillip,

if IRIS is frozen/hung, the only way to see it is via messages.log.

The freeze and thaw are recorded there.

Timo Lindenschmid · Jan 17, 2025

Package names are part of the class names, so this is covered by the rule that class names are case-sensitive. In addition, they have to be unique case-insensitively.

e.g.
TestClass

and

TESTClass

would be valid class names, but they are not unique, as both read TESTCLASS when converted to upper case.

same is true for package names and routines.

Timo Lindenschmid · Dec 10, 2024

Hi Dmitrii,

The automated export of changed files can be handled by %Studio.SourceControl.File; this exports any changed item on save and imports the latest version from disk on checkout.

To automate the import on a target system, you can create a scheduled task that executes $system.OBJ.LoadDir regularly; by default this also compiles on load.
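A sketch of what such a task could run (the source path and qspec flags are examples, not prescriptions):

```objectscript
    // "ck" = compile on load, keep generated source; final 1 = recurse into subdirectories
    // /source/export is a made-up path; point it at your exported source tree
    do $system.OBJ.LoadDir("/source/export", "ck", .errors, 1)
```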

Timo Lindenschmid · Dec 5, 2024

There is a setting in VS Code for the ObjectScript extension that allows you to modify the compile flags used.

My current setting, which also triggers compilation of embedded SQL (the default behaviour is to not compile embedded SQL):

"objectscript.compileFlags": "cbk/compileembedded=1"

Timo Lindenschmid · Nov 21, 2024

Hi,

I am not sure about Veeam installing stuff etc.

But I know some customers only back up the backup mirror, never the primary, because even with a snapshot taking only a couple of seconds, QoS detection gets triggered by the slight performance degradation during the backup. If the backup runs against the primary mirror, this results in a forced failover.

Timo Lindenschmid · Nov 15, 2024

Where you install IRIS depends on your use case. E.g. if you are "only" developing, with no real user load, the disk layout you would use is markedly different from that of a production system with thousands of concurrent connections.

So, just a couple of thoughts on disk layout:

Install IRIS and all of its components in a dedicated folder, e.g. /iris. The reasoning is that a single folder is easier to exclude from e.g. an anti-malware scanner, which could otherwise wreak havoc on a database backend.

Within that folder, separate data, file store, journals and executables.

Reasoning: in a high-performance system, each of the above has a different I/O workload and a different performance profile. E.g. while journals are nearly 100% sequential writes, the executables see a mix of reads and writes. Database files are also a mix of reads and writes, but usually with much bigger block sizes. Different filesystems also have different performance profiles.

So, taking the above into account and following best practices for mounts and I/O workload separation:

root (/)
/{instancename}/iris -> instance installation folder
/{instancename}/journalPrimary
/{instancename}/journalSecondary
/{instancename}/database
/{instancename}/filestore
/{instancename}/webfiles

Based on the above, you can mount volumes on different physical disks/LVMs depending on workload; for a high-performance environment you might also want to spread the disks across multiple SCSI controllers, so that each controller serves only a specific I/O profile.

Also note that IRIS 2021 changed the default disk I/O behaviour from using the OS file cache to direct, unbuffered I/O, which again argues for separating IRIS workloads from any OS workload.