Mike Wragg · Oct 7, 2021

Hi. Awful long time since I looked at that tool, but Michel is probably right: it's to do with how the lines are defined. I have a vague memory that "multiline" was actually provided as an array, not one string with delimiters, and that the top node of the array held a line count, e.g.

  remarks=2
  remarks(1)="first line"
  remarks(2)="last line"

But I could be wrong. :-) Try the help on that "options" field.

/ Mike
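If that memory is right, building such a structure from a delimited string is straightforward. A minimal ObjectScript sketch (the variable names and the "|" delimiter are assumptions, not anything from the tool itself):

  Set text = "first line|last line"
  Set remarks = $LENGTH(text,"|")           ; top node holds the line count
  For i = 1:1:remarks {
      Set remarks(i) = $PIECE(text,"|",i)   ; one line per subscript
  }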
Mike Wragg · Jul 29, 2021

Hi. This was a few years ago, but we could not find any way to do this in the standard Ensemble setup at the time. The Message Viewer can be asked the right sort of questions, but then times out because the query takes too long.

For monthly stats we wrote embedded SQL in a class method that we ran at the command prompt. This wrote out the results as CSV to the terminal, and once it finished we copied and pasted the various blocks into Excel. Very slow and manual, but only once a month, so not so bad. (Our system only purged messages after many months.)

Then we wanted daily numbers to feed into another spreadsheet. So we built an Ensemble service, running once a day, that ran similar embedded SQL but only over the previous day of messages. It put the CSV results into a stream in a custom message that went to an operation, which then sent emails out to a user-defined list. Essential bits below. Hope it helps.

  Set today = $ZDATE($H,3)_" 00:00:00"
  Set yesterday = $SYSTEM.SQL.DATEADD("day",-1,today)
  Set tRequest = ##class(xxx.Mess.Email).%New()
  Set tRequest.To = ..SendTo
  ...etc
  &sql(DECLARE C1 CURSOR FOR
      SELECT SUBSTRING(DATEPART('sts',TimeCreated) FROM 1 FOR 10),
             SourceBusinessType, SourceConfigName,
             TargetBusinessType, TargetConfigName, COUNT(*)
      INTO :mDate,...
      FROM Ens.MessageHeader
      WHERE TimeCreated >= :yesterday AND TimeCreated < :today
      GROUP BY... ORDER BY... )
  &sql(OPEN C1)
  &sql(FETCH C1)
  While (SQLCODE = 0) {
      Set tSC = tRequest.Attachment.WriteLine(mDate_","_...)
      &sql(FETCH C1)
  }
  &sql(CLOSE C1)
  Set tSC = ..SendRequestAsync($ZStrip(..TargetConfigName,"<>W"),tRequest)
  If $$$ISERR(tSC)...
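For anyone adapting this: the window is just two ODBC-format timestamps one day apart, so TimeCreated can be compared against them directly. A quick terminal check (the output values shown are illustrative):

  Set today = $ZDATE($H,3)_" 00:00:00"                 ; e.g. "2021-07-29 00:00:00"
  Set yesterday = $SYSTEM.SQL.DATEADD("day",-1,today)  ; e.g. "2021-07-28 00:00:00"
  Write yesterday," to ",today                         ; the 24-hour range used in the WHERE clause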
Mike Wragg · Oct 25, 2019

Sadly, in my team we've all been writing MUMPS for so long that the abbreviated style comes naturally and is a hard habit to break. Yes, expanded is better for new starters in the language. However... playing devil's advocate, you could say that abbreviated commands are:

1. Faster to type (as you said).
2. More compact, allowing the reader to "see" more of the structure in one go.

You see, you can expand things out too much, in my opinion. Also, it only takes a few minutes for a (reasonable) programmer to get that "S" means "set", "I" means "if", etc. Commands always appear in the same part of the code (unlike some languages), there are not that many to learn, and once you know them, you can read them! So why bother with extra letters? After all, "set" is itself only a token for "put the value on the right of the = into the variable on the left", or something like that. It could be "make" or "update" or "<-" (look up the programming language "APL" on Wikipedia if you want a real scare).

I think the main problem with "old fashioned" code is usually poor label/variable names, squeezing too much onto one line, and a lack of indenting. It's hard to read mainly because of those other parts of the code, not because the commands are single characters. Some things should be longer to better convey what they are for (though not as long as COBOL), and more lines can help convey program structure.

While I'm here, I'm not that keen on spurious spaces in "set x = 1", as opposed to "set x=1". It just spreads out the important stuff - spaces are there to split out the commands. :-)

Mike
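To make the two styles concrete, here is the same made-up fragment written both ways (neither is from any real system):

  ; abbreviated
  S cnt=0 F i=1:1:10 S:$G(list(i))>0 cnt=cnt+1
  ; expanded
  Set cnt = 0
  For i = 1:1:10 {
      If $Get(list(i)) > 0 {
          Set cnt = cnt + 1
      }
  }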
Mike Wragg · Oct 4, 2019

Hi. It depends on what you mean by "certain criteria". If it's a special file name, then you could amend the FileSpec property to skip the ones you don't want yet. If it's in the content, then maybe you should be reading in the file (creating a copy or allowing archive, so the original continues to exist) and sending it as a message into Ensemble, which can then be held up in a business process until it is ready to send out to an operation that creates an output file. That is the way Ensemble is supposed to work, so you get a full record of what happened, etc.

(Otherwise, I'm pretty certain that there are actions that reset the list of processed files - maybe resetting the file path or restarting the job - but I cannot find the documentation about it at the moment.)
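As a sketch of the name-based route (untested; the class name and the "READY_" prefix are assumptions): subclass the passthrough service and quietly skip files that do not qualify. Note that returning success tells the adapter the file has been handled, so it gets archived or deleted as configured - this suits a "skip forever" rule, whereas for "not ready yet" a FileSpec pattern is safer.

  /// Illustrative only: skip files whose names lack an agreed prefix.
  Class Demo.SkipFileService Extends EnsLib.File.PassthroughService
  {
  Method OnProcessInput(pInput As %Stream.Object, Output pOutput As %RegisteredObject) As %Status
  {
      Set tName = ##class(%File).GetFilename(pInput.Attributes("Filename"))
      If $EXTRACT(tName,1,6)'="READY_" Quit $$$OK  ; treated as processed, so archived
      Quit ##super(pInput,.pOutput)
  }
  }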
Mike Wragg · May 3, 2019

Hi. It sounds like a good idea! I can think of a number of interfaces I've seen where the target application - a small local system - struggled to keep up with the flow of updates from a large PAS.

The only thing I've done like it was complicated, and had to use a proper Business Process. In that case the "department" was neonatal, so we were only interested in patients admitted to a particular ward. The solution looked for HL7 admissions and transfers to that ward, and when found, used the data to create a local record in Caché. Then all other types of message could be checked against those records to see if they needed further processing and passing on (to "BadgerNet" eventually, once a full episode was built up). Of course, this only works if you can define a clear "starting point" that can be spotted in the message stream.

/ Mike
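A minimal sketch of that kind of filter in a custom business process (the ward code, global name, target name, and field paths are all assumptions; PV1:3.1 carries the assigned ward in many HL7 v2 configurations):

  /// Illustrative only: track patients seen on one ward, ignore everything else.
  Method OnRequest(pRequest As EnsLib.HL7.Message, Output pResponse As Ens.Response) As %Status
  {
      Set tEvent = pRequest.GetValueAt("MSH:9.2")       ; trigger event, e.g. A01/A02
      Set tWard  = pRequest.GetValueAt("PV1:3.1")       ; assigned ward
      Set tId    = pRequest.GetValueAt("PID:3.1")       ; patient identifier
      If (tEvent="A01")&&(tWard="NEO") {
          Set ^NeoPatient(tId) = $ZDATETIME($H,3)       ; start tracking this patient
      }
      ; Only pass other messages on if we are already tracking the patient
      If '$DATA(^NeoPatient(tId)) Quit $$$OK
      Quit ..SendRequestAsync("ToBadgerNet",pRequest)
  }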
Mike Wragg · Jul 27, 2018

Hi. I also help support an NHS trust using Ensemble, and it also has ever-growing PDF files in messages. We have our incoming PDFs as external file streams and it helps, though you have to bear in mind that the files are not going to be part of the Caché backup for Disaster Recovery, etc. (I'm not sure about mirroring. I'd assume they don't get mirrored either, as the contents are not in the journal.)

As yet, we don't have as big a problem as you - fewer messages, and we only keep 92 days - but that is just as well, as the PDF files are converted to base64 and encoded into HL7 v2 messages, so they then do take up space in the database, and the journal, and the backup, which has resulted in the need to expand the disk space recently. I can recommend keeping Ensemble on a virtual server with disk expansion on demand.

I tend to think the problem is not going to go away whatever you do. I assume, like us, the PDFs come from 3rd party applications, and they are always going to be producing ever more and prettier documents as time goes by. So I recommend looking at more disk. :-)

/ Mike
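For anyone wondering what "external file streams" looks like in practice, a minimal sketch (the class names and path are assumptions): giving a message a %Stream.FileBinary property means only the file reference lives in the database, while the bytes stay on disk, outside the journal and backup.

  /// Illustrative custom message carrying a PDF outside the database.
  Class Demo.Msg.Document Extends Ens.Request
  {
  Property Content As %Stream.FileBinary;
  }

  ; Building one (the path is illustrative):
  Set msg = ##class(Demo.Msg.Document).%New()
  Set tSC = msg.Content.LinkToFile("/data/pdfs/report123.pdf")  ; stores a reference, not the bytes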
Mike Wragg · Feb 21, 2018

An alternative solution that works for us is to use the "Schedule" setting to run it for 30 minutes (to allow some leeway, as the job takes a while), and then set the "Call Interval" setting to something very large, like "999999". This is for an inbound SQL adapter.

(If something goes wrong with this overnight run, then we manually remove the "Schedule" setting and restart the service. Once complete, we put back the setting ready for the next night.)
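For a concrete picture (the times shown are illustrative): the Schedule setting takes comma-separated START/STOP events with * wildcards in the date parts, so a 30-minute nightly window plus an oversized interval looks something like this in the item's settings:

  Schedule     = START:*-*-*T02:00:00,STOP:*-*-*T02:30:00
  CallInterval = 999999

With the CallInterval far longer than the window, the adapter polls once when the window opens and not again before the STOP event.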
Mike Wragg · Jan 4, 2018

Hi. We had a site upgrade from 2012.2.5 to 2017.1.0 last year, and it included a mirror. We needed very few code changes - just an issue with it failing to save objects inside a Business Process where we had used some "unusual" structures. The upgrade itself went smoothly. The only issue was afterwards, when the next backup was a "full" one instead of the scheduled "partial", using more space than expected. Our Production was much smaller than yours, with only about 120 items, and it is hard to say how much effort went into pre-release testing, as it was "fitted in" around other work by a team of people. Maybe a couple of man-months?

To be honest, it all depends on how much custom or unusual code you have, and how much testing the customer wants. We upgraded a development namespace and re-ran test messages through all the important paths, comparing the results before and after the upgrade, plus some connection testing to cover all the "types" we used: FTP, web services, HL7, etc. In our case the testers included people from the user side, so they could decide when they were happy with it.

InterSystems were very helpful. We raised a call a few months before, and they gave advice on testing and desk-checked our detailed plan of the upgrade itself, including how to handle the mirror.

Good luck.
Mike Wragg · Nov 1, 2017

Amir's answer, option 2, is what we did. The XML had to be converted to allow it to be sent, so our code looked a bit like this:

  Method ImportEpisode(pRequest As EnsLib.EDI.XML.Document, Output pResponse As Ens.Response) As %Status
  {
      ; Use format 8 bit regardless of cache default (else Base64Encode gives ILLEGAL VALUE error)
      Set sendingXML = pRequest.OutputToString("C(utf-8)",.tSC)
      If $$$ISERR(tSC) Quit tSC
      $$$TRACE("Sending: "_sendingXML)
      Set sending = $system.Encryption.Base64Encode(sendingXML)
      Set tSC = ..Adapter.InvokeMethod("ImportEpisode",.result,sending,{plus some other id parameters})
      If $$$ISERR(tSC) Quit tSC
      Set resultXML = $system.Encryption.Base64Decode(result)
      ...etc.

I hope this is useful to you.

Mike
Mike Wragg · Jul 26, 2017

I had a similar situation and ended up with an Ensemble service reading in the metadata file (like your XML), and composing an Ensemble message with that information, including a file reference for the data file (your PDF). This meant that the metadata file could be automatically archived by Ensemble, but now I had to archive the data file instead, using calls to the OS like you have done for your XML file above.

In my case this did make some sense, as I wanted to convert the data file using an OS call to an "exe", and at least the messages in Ensemble had all the meta information, file name, etc. But I also think it was a bit clumsy, so I would be interested in any better ideas.

Regards,
Mike
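For shape only - a sketch of that arrangement with made-up names (the Demo.* classes, the naming rule linking the two files, and the ".archived" suffix are all assumptions): the adapter consumes and archives the metadata file, the message carries only a path to the data file, and the service moves the data file aside itself.

  /// Illustrative message: carries a path, not the file.
  Class Demo.Msg.FileRef Extends Ens.Request
  {
  Property Filepath As %String(MAXLEN = 500);
  }

  /// Illustrative service: the adapter hands us the metadata (XML) file.
  Class Demo.MetaFileService Extends Ens.BusinessService
  {
  Parameter ADAPTER = "EnsLib.File.InboundAdapter";

  Method OnProcessInput(pInput As %Stream.Object, Output pOutput As %RegisteredObject) As %Status
  {
      Set tMeta = pInput.Attributes("Filename")           ; full path of the metadata file
      Set tData = $PIECE(tMeta,".",1,*-1)_".pdf"          ; sibling data file, by naming convention
      Set tMsg = ##class(Demo.Msg.FileRef).%New()
      Set tMsg.Filepath = tData                           ; a reference only, not the content
      ; ...parse the XML metadata into other message properties here...
      Set tSC = ..SendRequestAsync("Demo.Process",tMsg)
      If $$$ISERR(tSC) Quit tSC
      ; The metadata file is archived by the adapter; move the data file aside ourselves
      If '##class(%File).Rename(tData,tData_".archived") $$$LOGWARNING("Could not archive "_tData)
      Quit $$$OK
  }
  }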