Hi,

After many years of development and support I've become wary of one-line requests. 🤔 I wonder why you need this (see "Five whys" on Wikipedia). For example, if you are trying to debug a mysterious state in a background job, then maybe you just need "D LOG^%ETN" to store the variables in the error log. Or, at the least, you could look in there for ways to use $ORDER and $QUERY to scan local variables without involving ^SPOOL (or, as we once did, opening a file, ZWRITE-ing to it, closing it, and then reading it back).
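As a minimal sketch of the $QUERY approach (assuming the variables you care about are gathered in one local array, here called DATA):

```
DUMP	; sketch: walk every node of the local array DATA with $QUERY
	N node S node="DATA"
	F  S node=$QUERY(@node) Q:node=""  W node,"=",@node,!
	Q
```

The same indirection trick works for a global if you start node at "^DATA" instead.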

Hi. It's been a long time since I dealt with this, but when I was trying to get a stats extract to run once a day I used ADAPTER = "Ens.InboundAdapter" (with some custom params for which operation to send to and an email address). In the production I set up a schedule, as described above, to define a run for a few minutes each day, but then defined CallInterval as 999999. Thus it ran once at the start of the interval and not again. I don't know if it was the best solution, but it seemed to work, and it kept the schedule visible in the production should we need to change it. The service just ran queries on the Caché database, but you could use a SQL adapter to query other databases.
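For reference, the skeleton looked something like this (the class name and method body are illustrative assumptions, not my original code):

```
Class Demo.Stats.Service Extends Ens.BusinessService
{

Parameter ADAPTER = "Ens.InboundAdapter";

/// Called by the adapter every CallInterval seconds while the schedule
/// has the service running; with CallInterval set to 999999 it
/// effectively fires once per scheduled window.
Method OnProcessInput(pInput As %RegisteredObject, Output pOutput As %RegisteredObject) As %Status
{
	// run the stats queries here, build a request message, then
	// send it on with ..SendRequestAsync(target, tRequest)
	Quit $$$OK
}

}
```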

Update: I just found that I actually used that same trick for middle-of-the-night SQL extractions (a mass update of medical instrument reference data).

And looking back at the original question: the answer is that in the ideal solution (not always the best one) things should not "magically" appear in the middle of a BPL; there should be a service starting from the left, sending stuff to the right in a visible path, even when the input is from "inside" the system. So here you might need that SQL adapter reading rows and sending a message per row. (Though the solution might get a lot more complex; I know mine did, with special message objects, streams and stored procedures!)

Thanks for the response. I'd love to move them to IRIS, but at the moment we are struggling to even get them from v2007 to v2018, so we are stuck on Caché for now.

It looks like you are confirming my thoughts. The FHIR data is complex and unpacking will be hard. Good point though about only needing classes for each data type.

I'll have to take it to the supplier of the other end of our link. This is a one-off custom feed that will only ever be used once, so we can define it in whatever way is easiest to both ends. My end is "legacy" but maybe the other end will want to use the FHIR standard as it could need it for other links.

Hi. I'm not entirely sure what you mean, but I usually write a noddy test routine to clear variables, set up objects and properties, run methods, display and compare results, etc. Have it open in Studio alongside the classes being edited, then call it from the terminal. Note that, to save typing, <ctrl>P can be used to get the previous entries at the command line (and <ctrl>N for the next) and allows editing. Alternatively, for repeated test cases I often just copy and paste code from Word or OneNote. / Mike
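For example, a noddy routine might be no more than this (My.Widget and its Name property are made-up examples):

```
TEST	; scratch test routine: clear, set up, run, display
	K  ; argumentless KILL clears all local variables for a clean run
	S obj=##class(My.Widget).%New()  ; My.Widget is a made-up example class
	S obj.Name="test"
	W obj.Name,!  ; display/compare results here
	Q
```

Then in the terminal just `D ^TEST`, tweak the routine in Studio, and run it again.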

Hi. You say the codebase is over 30 years old. Well, I have a solution from 1991...

The PAS system I (still) work on has some standard search-and-display ("code list") software that has to return all codes "starting with", but it's not pretty. As the starting point for the $O() it does this:

	; Return a string immediately preceding input in collating sequence (returns null if input is null)
	; ABC becomes ABB||, -1 becomes -1.0000000001, -.1 becomes -.1000000001, 0 becomes -.0000000001, .1 becomes .0999999999, 1 becomes 0.9999999999
SEED(A)	Q:A="" ""  ; null string
	I '$$NUMERIC(A) S LEN=$L(A),T=$E(A,1,LEN-1)_$C($A($E(A,LEN))-1)_"||" Q T
	Q A-(1E-10)

Since you know your target global only has non-numeric subscripts, you won't need to see the 5 lines of nastiness that is the NUMERIC call. :-) The termination test for your loop of SEED=$O(@CLGREF@(SEED)) is either the usual SEED="", or ZIN'=$E(SEED,1,$L(ZIN)) to stop once the codes no longer start with the input (again ignoring the horrible code dealing with numeric values).
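Put together, a "starting with" scan might look like this (a sketch assuming non-numeric subscripts, with CLGREF holding the target global name and ZIN the search prefix):

```
	; sketch: list every code in @CLGREF that starts with ZIN
	S SEED=$$SEED(ZIN)
	F  S SEED=$O(@CLGREF@(SEED)) Q:SEED=""  Q:ZIN'=$E(SEED,1,$L(ZIN))  W SEED,!
```

Seeding with the value just before the prefix means the very first $O lands on the first matching code, and the prefix test ends the loop as soon as the matches run out.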

Apologies for the ancient coding style (not mine, but I wrote similar back then).  / Mike

Hi. If you mean the ODBC driver, then it gets installed when you install Caché. So any Caché install file for that version has it. I don't know if you can select to install only the driver and nothing else, as I always want the full lot on my PC.

(... I just tried, and a "custom" setup allows you to remove everything but the ODBC driver, but it's fiddly.)

Hi. Interesting topic.

Why did you choose to become a software engineer / developer?

Always had an interest in science and tech growing up. Introduced to programming at school and enjoyed it. When looking for a job, there was a programming one, so I chose it. And never looked back.

How and when did you start to generate a "flow state of mind" during your career?

- Again, always had it, I think. Give me a good programming problem and I can lose hours without noticing.

What habits would you recommend, inside and outside work, during your own time and during your work time, to stay focused during your coding sessions and daily tasks?

- As I'm lucky and it "just happens", I'm not sure I'm a good source of ideas on this. But at work I find it helps to stop other things breaking my concentration. So I will "clear the decks": very little in my email in-tray (it all goes into "later", or is set up as a task, or done, or deleted), and all big things to do are recorded as tasks and scheduled, so that I don't have that "I must remember to do x" popping into my head.

The hard part is avoiding interruptions like new emails and chats from colleagues. Sometimes I just don't notice them, though, as I'm in "flow", so the problem is the other way around: missing important, urgent stuff! It's best to find or allocate time when I'm not expected to respond quickly. Often I split my day into two: first I do all the other stuff like emails, admin, training, meetings, etc., and as a reward, play with code later. :-)

Hi. Label printers are a pain. :-) In the application I worked on, the printing was originally to matrix/line printers via a "driver" that knew the escape codes to use. But then along came the "thermal" printers using things like ZPL, and squeezing that into a line-by-line driver was not easy; we had to keep track of a virtual print-head "position".

Using a tool to build a template and just providing the field data from IRIS sounds like a much easier route. I think you can save templates in the printer memory, if you don't want to store the text in IRIS.

If you do want to build the printer commands yourself, I recommend hiding them behind a class that stores a virtual label, and have methods to add the commands that take sensible inputs like x and y in mm (or inches), rotation in degrees, font in points, etc. Then internal functions can convert to dots, replace special characters, work out font to use, etc. And then have a final "print" method that writes the whole set of label commands out.
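A minimal sketch of that idea for ZPL (the class name and the points-to-dots conversions are illustrative assumptions, not production code):

```
Class Demo.Label Extends %RegisteredObject
{

/// the ZPL commands built up so far for this virtual label
Property Commands As %String;

/// printer resolution in dots per inch (203 is common for thermal printers)
Property DPI As %Integer [ InitialExpression = 203 ];

/// add a text field at (x,y) in mm, font size in points
Method AddText(xmm As %Numeric, ymm As %Numeric, text As %String, pt As %Numeric = 10)
{
	// convert mm to printer dots (25.4 mm per inch)
	Set x=xmm*..DPI\25.4, y=ymm*..DPI\25.4
	// a point is 1/72 inch, so font height in dots = pt*DPI/72 (rough)
	Set h=pt*..DPI\72
	Set ..Commands=..Commands_"^FO"_x_","_y_"^A0N,"_h_"^FD"_text_"^FS"
}

/// write the whole set of label commands out to the given device in one go
Method Print(device As %String) As %Status
{
	Open device:10 Use device Write "^XA",..Commands,"^XZ" Close device
	Quit $$$OK
}

}
```

The same shape works for escaping special characters or adding barcode methods: callers only ever see millimetres and points, and all the printer-specific grot stays inside the class.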

Have fun,

Mike

When we've had license problems in the past, the WRC supplied us with a routine that watches for the limit warnings and then does a one-off dump of which processes are running, for later investigation. We also wrote our own code to take a snapshot of license usage to a file; it's called by the Caché Task Manager at regular intervals so that we can analyze license usage over time. That helps when trying to see if the site needs more, or sometimes fewer, licenses. There's nothing like a good graph to prove your point. :-)

I'm no particular fan of this aspect of the language, but changing it as an "option" (by namespace? by routine?) would be a maintenance nightmare. :-) Only recently I fell foul of it with code like IF type="X"!type="Y", which can never be true: left-to-right evaluation parses it as ((type="X")!type)="Y", so it needs to be IF (type="X")!(type="Y").

You could argue that it is at least very simple to understand. Just one rule to remember - left to right, except for brackets - rather than a precedence order for every single operation you might use!
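A quick terminal session makes both the rule and the trap obvious:

```
USER>Set type="Y"

USER>Write type="X"!type="Y"      ; parsed left to right as ((type="X")!type)="Y"
0
USER>Write (type="X")!(type="Y")  ; bracketed as actually intended
1
```

In the first expression the comparison to "Y" is applied to a boolean, which can only ever be 0 or 1, so the test is always false.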

Hi. It's an awful long time since I looked at that tool, but Michel is probably right: it's to do with how the lines are defined. I have a vague memory that "multiline" was actually provided as an array, not one string with delimiters, and the simple top of the array had a line count, e.g.

	remarks=2
	remarks(1)="first line"
	remarks(2)="last line"

But I could be wrong.  :-)  Try the help on that "options" field.  / Mike

Hi. This was a few years ago, but we could not find any way to do this in the standard Ensemble setup at the time. The message viewer can be asked the right sort of questions, but it then times out as the query takes too long.

For monthly stats we wrote embedded SQL in a class method that we ran at the command prompt. This wrote the results out as CSV to the terminal, and once it finished we copied and pasted the various blocks into Excel. Very slow and manual, but it was only once a month, so not so bad. (Our system only purged messages after many months.)

Then we wanted daily numbers to feed into another spreadsheet. So we built an Ensemble service, running once a day, that ran similar embedded SQL but only for the previous day of messages. It put the CSV results into a stream in a custom message that went to an operation, which then sent emails out to a user-defined list. The essential bits are below. Hope it helps.

    Set today = $ZDATE($H,3)_" 00:00:00"
    Set yesterday = $SYSTEM.SQL.DATEADD("day",-1,today)
    Set tRequest = ##class(xxx.Mess.Email).%New()
    Set tRequest.To = ..SendTo
...etc
    &sql(DECLARE C1 CURSOR FOR
    SELECT
          SUBSTRING(DATEPART('sts',TimeCreated) FROM 1 FOR 10),
          SourceBusinessType,
          SourceConfigName,
          TargetBusinessType,
          TargetConfigName,
          COUNT(*)
        INTO :mDate,...
        FROM Ens.MessageHeader
        WHERE TimeCreated >= :yesterday AND TimeCreated < :today
        GROUP BY...
        ORDER BY...
    )
    &sql(OPEN C1)
    &sql(FETCH C1)
    While (SQLCODE = 0) {
        Set tSC = tRequest.Attachment.WriteLine(mDate_","_...
        &sql(FETCH C1)
    }
    &sql(CLOSE C1)
    Set tSC = ..SendRequestAsync($ZStrip(..TargetConfigName,"<>W"),tRequest) If $$$ISERR(tSC)...

Hi. The team I work in has been supporting old MUMPS and then Caché systems for many years, and we have two rules for fixing data on live systems:

1. Never write a kill statement directly at the command line.

It's quick and convenient, but we've seen many occasions where big chunks of globals, or (before the database parameter that stops it) whole globals, were removed by accident. Cue major panics, hours of restores and de-journaling, etc.

2. Don't write SQL updates directly (in the SMP, at the command line, etc).

Because if you get the WHERE clause wrong, then all the rows in a table suddenly vanish or end up with the same value. :-)

So we use a local command-line tool for global amendments that uses the old %G-style selection search to display one node at a time, so that you can then edit the data or kill it (with a warning first if there are sub-nodes you may have forgotten about).

We also have a local command line tool for SQL/objects. This takes in a table name and WHERE clause, and only allows editing and deleting when the query finds only one row.

Anything more than a few nodes/rows and we usually write a proper bit of script on our dev server, and get it checked and tested before running it in live. Better safe than sorry.

Hi. If you mean the "For" loop, then it's replaced by recursion: the method calls itself. Basically, it translates only the first character of the string and then appends the rest of the string as converted by the same method, which then does the same, and so on until it runs out of characters; it then unwinds the stack, putting it all together. Only good for small strings, of course, as it could run out of stack space, a common problem with recursion. (That, and getting your head around what it's doing!)
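An expanded, more readable version of the same recursive idea might look like this (a sketch, not the competition entry below):

```
ClassMethod ToNato(s As %String) As %String
{
	// base case: an empty string ends the recursion
	Quit:s="" ""
	Set c=$ZCVT($E(s,1),"U")
	// translate one alphabetic character to its NATO word, pass others through
	Set word=$S(c?1A:$P("Alfa,Bravo,Charlie,Delta,Echo,Foxtrot,Golf,Hotel,India,Juliett,Kilo,Lima,Mike,November,Oscar,Papa,Quebec,Romeo,Sierra,Tango,Uniform,Victor,Whiskey,Xray,Yankee,Zulu",",",$A(c)-64),1:c)
	// append the translation of the rest of the string
	Quit word_$S($L(s)>1:" ",1:"")_..ToNato($E(s,2,*))
}
```

Each call handles one character and recurses on the remainder, so the call depth equals the string length, which is why long strings risk a stack overflow.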

I'm not going to win, but I thought I'd try a slightly different approach. With thanks to others for the clever $P part.

 ClassMethod N(i As %String, p As %String = "") As %String
{
 Q $S(i="":"",1:$S(".,?!"[$E(i):p_$E(i),i?1A.E:p_$ZCVT($E(i),"U")_$P("lfa7ravo7harlie7elta7cho7oxtrot7olf7otel7ndia7uliett7ilo7ima7ike7ovember7scar7apa7uebec7omeo7ierra7ango7niform7ictor7hiskey7ray7ankee7ulu",7,$A($E(i))#32),1:"")_..N($E(i,2,*)," "))
}

Size: 252

I claim the least number of lines (1), the fewest commands (1), and probably the fewest variables (2).

I could do without the second input parameter if you allow a leading space in the output (not mentioned in the requirements, but it fails for the example output, etc.). I also noticed that the sizing does not include the ClassMethod... line, so you could reduce the size by moving the strings into default values for the input variables, but I think that's much like the other comments that involved moving the implementation outside the method completely: just cheating the counting mechanism rather than solving the problem. :-)

It's Friday afternoon, so I thought I'd play with a variation that avoids $Query. This is for no particular reason, as I think the $Q solution is fine, but I wondered if I could:

ClassMethod Flatten(ByRef array, ref = "array") As %List [ PublicList = array ]
{
	S sub="",list=""
	F {
		K data  ; $O leaves data unchanged for valueless nodes, so clear it each pass
		S sub=$O(@ref@(sub),1,data)
		I sub="" Q
		I $G(data)'="" {
			S line=""
			F i=1:1:$QLength(ref) S line=line_$LB($QSubscript(ref,i))
			S list=list_$LB($LB(data)_line_$LB(sub))
		}
		I $D(@ref@(sub))\10 { ; if there are deeper nodes, go down
			S nextRef=$S(ref["(":$E(ref,1,*-1)_",""",1:ref_"(""")_sub_""")"
			S list=list_..Flatten(.array,nextRef)
		}
	}
	Return list
}

It's recursive, which is dangerous but powerful stuff! :-)  Call it with something like:  Set out=..Flatten(.in)

It also avoids output variables in the parameter list, which I find confusing and against Uncle Bob's clean-coding principles (okay, he'd also be horrified by "S" for "set", but I'm old-school). Instead, it returns a list of lists, which is just as easy to step through. / Mike