Mark Hanson · May 5, 2025

Normally you are required to call the Sync/WaitForComplete method on the work queue object so that the module can report errors, and so that you have a sync point where you know the work done by the worker tasks has completed.

If you do not call Sync/WaitForComplete, then what is likely happening is that your work queue oref is going out of scope at the end of your procedure. This closes the oref, which terminates all work units in this work queue group, so any units that have not completed at that point will never run, as the entire queue is removed.
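The intended pattern looks roughly like this (a minimal sketch using %SYSTEM.WorkMgr; MyApp.Worker and its DoChunk classmethod are hypothetical placeholders):

```objectscript
    ; create a work queue group; keep the oref in scope until you sync
    Set queue=$system.WorkMgr.%New()
    For i=1:1:10 {
        ; queue one work unit per chunk; the target must be a classmethod
        Set sc=queue.Queue("##class(MyApp.Worker).DoChunk",i)
        If $$$ISERR(sc) Quit
    }
    ; WaitForComplete blocks until every queued unit has finished and
    ; rolls any worker errors up into the returned %Status
    Set sc=queue.WaitForComplete()
    If $$$ISERR(sc) Write $system.Status.GetErrorText(sc),!
```

Because the queue oref stays in scope until WaitForComplete returns, the group cannot be torn down while work is outstanding.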

Mark Hanson · Sep 3, 2024

This is correct. Once we start a worker job we keep it around for the next time we need a worker job to avoid the overhead of starting a new process. The workers should not be using any significant memory or CPU and will time out automatically after a while so you can leave them alone.

Mark Hanson · Mar 1, 2023

That is correct; however, it is far simpler to use the debugger built into VS Code, as it uses our maps from class to INT code to automatically set the correct breakpoint based on the source class line.

Mark Hanson · Feb 28, 2023

If you have logic that is making an assumption about what label name we use for a specific method then this logic will need to be updated.

Mark Hanson · Sep 23, 2020

We ship the XSD file as part of the IRIS installation. You can find it under <install dir>/bin/irisexport.xsd. It varies from version to version as new keywords are added, but we use this to validate any XML before importing an XML export.

Mark Hanson · Jun 16, 2020

Cached queries are kept until they are explicitly purged or you upgrade your IRIS instance. So it is only useful to run this after an upgrade of IRIS or after you have explicitly purged your cached queries.

Mark Hanson · Jun 8, 2020

In 2020.2 we added the $system.OBJ.GenerateEmbedded function, which allows all universal cached queries to be compiled on a customer system even if all the routines/classes are deployed. This function can be run after installation to prepare the SQL queries before you bring the system up.
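For example, as part of a post-install script (a sketch; check the $SYSTEM.OBJ class reference on your version for the exact signature and arguments):

```objectscript
    ; pre-compile the universal cached queries for deployed routines/classes
    ; so the first real SQL requests do not pay the compile cost
    Do $system.OBJ.GenerateEmbedded()
```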

Mark Hanson · Aug 8, 2018

You are doing disk block reads in the one case, which is why it is slower. How big is your global buffer pool? Also, how big are your globals ^TestD and ^TEST2? Use 'd ^%GSIZE' to find their sizes on disk. The $lb version will be slightly bigger, as each element carries a type byte and a length byte; this shows up when the data is very small, like these single-character ASCII elements. However, $lb means you never need to worry about data that contains '#' characters, and it preserves types, whereas the "a#b#..." format needs to convert everything into a string before storing it, which adds runtime overhead too.
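To illustrate the trade-off (hypothetical data in globals mirroring the ones above):

```objectscript
    ; delimited storage: one flat string, so a '#' inside the data would
    ; corrupt the record, and every value is coerced to a string
    Set ^TEST2(1)="a#b#c"
    Write $piece(^TEST2(1),"#",2),!      ; -> b

    ; $lb storage: each element carries type and length bytes, so it is
    ; slightly larger but delimiter-safe and type-preserving
    Set ^TestD(1)=$listbuild("a",2,"c")
    Write $listget(^TestD(1),2),!        ; -> 2 (still stored as a number)
```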


Mark Hanson · Mar 23, 2018

The behavior you are seeing is because of how chained '.' references handle null objects. For example, if you:

Set person=##class(Sample.Person).%New()

Write person.Name.AnythingYouLike

It will succeed and return "", but if you:

Set tmp=person.Name

Write tmp.AnythingYouLike

It will fail with an INVALID OREF error, as would 'Write (person.Name).AnythingYouLike'.

This behavior is inconsistent, so I will not defend it, but it is how the product works.

Mark Hanson · Nov 9, 2017

Hi John,

Unfortunately the generated code is only held long enough to build the class routines, and then it is discarded. The problem is that keeping it takes up a lot of space, and we want to keep the size of ^oddCOM down to something fairly reasonable, so there is no official way to obtain this.

Mark Hanson · Jul 19, 2017

Either will work, but referencing the 'Name' property is a lot faster.

Mark Hanson · Jul 18, 2017

A few comments:

  • %ResultSet also has %Execute, %Next, %GetData, %Get, and %Prepare methods.
  • In %ResultSet, use the 'Data' multidimensional property to get columns by name, as this is more efficient than the 'Get' method.
  • In %SQL.StatementResult, do not use %Get to get columns by name; instead, reference the properties directly, e.g. 'Write resultOref.Name' instead of 'Write resultOref.%Get("Name")'.
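Putting the %SQL.Statement advice together (a sketch; Sample.Person is assumed to exist):

```objectscript
    Set stmt=##class(%SQL.Statement).%New()
    Set sc=stmt.%Prepare("SELECT Name FROM Sample.Person")
    If $$$ISERR(sc) Quit
    Set rs=stmt.%Execute()
    While rs.%Next() {
        ; a direct property reference is faster than rs.%Get("Name")
        Write rs.Name,!
    }
```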

Mark Hanson · Jul 3, 2017

$replace is your friend here so you can just do:

set escaped=$replace(str,"'","''")

You will need to parse out the single quote at the start of the statement so you do not double this quote too. 

Mark Hanson · May 27, 2017

It sounds like you mapped the data global but not the ID counter global, so the other namespace cannot see the current ID counter value and will therefore overwrite the data you already inserted.

Mark Hanson · May 12, 2017

It looks like this was already fixed in 2017.1 by MAK4670, which says:

Correct bug in %Collection.ListOfObj:FindOref where it could return "" when the oref is present

Mark Hanson · Apr 20, 2017

Streams support the idea of writing to them without changing the previous stream content, so you can either accept the newly changed stream value or discard it, depending on whether you call %Save. To support this, when you attach to an existing file and then append some data, you are actually making a copy of the original file and appending data to this copy. When you %Save, we remove the original file and rename the copy, so this is now the version you see. However, as you can see, making a copy of a file is a potentially expensive operation, especially when the file gets large, so using a stream here is probably not what you want.

As you just want to append the data and do not want file copies made, I would open the file directly in append mode (using either the 'Open' command directly or the %File class) and write the data you wish to append, so avoiding stream behavior.
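A minimal sketch using the Open command (the path is a placeholder; in the mode string, 'A' opens in append mode, 'W' allows writes, and 'S' selects stream format):

```objectscript
    Set file="/tmp/mydata.log"      ; placeholder path
    Open file:("AWS"):5             ; append mode, 5 second timeout
    If '$test { Write "Unable to open ",file,! Quit }
    Use file
    Write "data to append",!
    Close file
```

No copy of the file is ever made, so the cost of each append is independent of the file's size.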

Mark Hanson · Mar 13, 2017

Can you provide the code you are using currently so we have something definitive to base comments on? In the meantime, have you tried using $translate and reading the data in big chunks, e.g. 16k at a time?

While 'binarystream.AtEnd {

  Set sc=outputstream.Write($translate(binarystream.Read(16000),badchars,goodchars))

}

Here binarystream is your binary input stream, outputstream is your output stream with the converted characters, badchars is a list of the bad characters you need to convert, and goodchars is the list of the values you want the badchars converted into.

Mark Hanson · Mar 10, 2017

I think you have a typo and you meant to write:

And also you should remember that you cannot store objects in globals, even if it is a process-private one.

Mark Hanson · Mar 3, 2017

The biggest issue I saw is that when you call %Save() you return the status code into the variable 'Status', which is good, but then this variable is totally ignored. So if you save an object which does not, for example, pass datatype validation, %Save will return an error in the Status variable, but the caller will never know that the save failed or why.

In addition, %DeleteId does not return an oref; it returns a %Status code, so you need to check this too and report any error to the caller.
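The pattern to follow is roughly this (Sample.Person and id are placeholders):

```objectscript
    Set person=##class(Sample.Person).%New()
    Set person.Name="Smith,John"
    Set status=person.%Save()
    If $$$ISERR(status) {
        ; surface why the save failed instead of silently dropping it
        Do $system.Status.DisplayError(status)
        Quit status
    }
    ; %DeleteId returns a %Status, not an oref, so check it the same way
    Set status=##class(Sample.Person).%DeleteId(id)
    If $$$ISERR(status) Quit status
```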

Mark Hanson · Feb 10, 2017

Correct, the gateway will not cache files that are not served from 'always and cached'. Of course, if you served a static file while this was set and then unset it, the gateway will still have the file in its cache. Also, the timeout value applies to both the browser timeout period and the CSP gateway timeout period.

You can also remove items from the gateway cache using code (which we do automatically when you edit a static file in Studio):

Set registry = $System.CSP.GetGatewayRegistry()

Set sc=registry.RemoveFilesFromCaches(remove)

See the class %CSP.Mgr.GatewayRegistry for more details of this method.

Mark Hanson · Feb 7, 2017

You are basically correct. We do try to minimize branch points in the tree, so an array with one subscript, array("very long subscript"), has only a single element in it rather than one per byte of the subscript. So this reduces it to < k, depending on the distribution of the keys.

Mark Hanson · Jan 19, 2017

I think scrolling and showing the output as soon as possible is the most useful, as this shows the user what is happening, and it matches the behavior of all the other terminals I have used. Often you can make out patterns in the output even while it is scrolling, and this can be handy.

The other thing I noticed is that Ctrl+C does not appear to work; is it meant to?

Mark Hanson · Jan 19, 2017

Looks great, nice project!

I noticed it takes a while to update the screen when I run something like this, is there any possibility for optimization of this?

for i=1:1:1000 w $tr($j("",400)," ","@"),!

Mark Hanson · Jan 6, 2017

Routines and classes are just stored in globals like everything else. So if you copy all globals from one database to a brand-new database, it will copy any routines/classes that were present in the original database too.

I see another answer already addressed the mapping question.

Mark Hanson · Jan 6, 2017

The key bit of information here is the 'service unavailable' error being returned, rather than, say, a 'not authorized' or 'not found' error. By default, when out of licenses we return the service unavailable error, so anyone else who sees this should check license usage as a first step. If you get 'not authorized' errors it is probably a security issue, so check the audit log, as this often shows the exact problem.

Mark Hanson · Jan 5, 2017

It sounds like somewhere in your application you have a call that returns OID values to the client, then as a separate step you wish to return the stream associated with this OID. Is it possible instead of returning the OID to the client you just return the stream directly to the client? So what is the need for the client to store the OID when it is really the stream the client wants?

Assuming there is a good reason for returning the OID you can follow this pattern.

  • Server gets request where it would previously return the OID
  • Server generates a new random number using $system.Encryption.GenCryptRand
  • Server stores this random number in a table against the OID it wishes to associate it with, along with a timestamp
  • Server sends the random number to the client
  • Client at some point wishes to get the stream so it sends the random number to the server
  • Server looks up the random number in the table and finds the OID and serves up this stream if the request is within some time period of the random number being generated. Then it deletes the random number from the table.

You also need to write some code to clean up this table and remove expired random numbers periodically, or it could grow over time if you generate values the client never uses.
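The steps above could be sketched like this (^AppToken is a hypothetical global used as the lookup table; a real implementation would also check the stored timestamp against an expiry window before serving the stream):

```objectscript
/// Create a one-time token for an OID
ClassMethod CreateToken(oid As %String) As %String
{
    ; Base64-encode the random bytes so the token is printable
    Set token=$system.Encryption.Base64Encode($system.Encryption.GenCryptRand(32))
    Set ^AppToken(token)=$listbuild(oid,$horolog)   ; OID + timestamp
    Quit token
}

/// Resolve and consume a token; returns 1 on success
ClassMethod ResolveToken(token As %String, Output oid As %String) As %Boolean
{
    Quit:'$data(^AppToken(token)) 0
    Set oid=$list(^AppToken(token),1)
    Kill ^AppToken(token)   ; one-time use: delete after lookup
    Quit 1
}
```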