Duncan Priest · Aug 12, 2019 The catch with the cloud options is of course that while the InterSystems side of things is free, the infrastructure on which it runs has a monthly cost. Hence the appeal of a downloadable Docker image.
Duncan Priest · Jun 24, 2019 I would expect the "catchall" to work in this situation - it's how I catch general exceptions not otherwise handled. Are you using "catchall", or attempting to catch a particular exception with "catch"? If using "catch", a gotcha for new players is that it only works in conjunction with a BPL "throw": it catches based only on the string in a corresponding throw, not on a %Status value, a %Exception, or anything else.
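For anyone unfamiliar with the BPL elements involved, here is a minimal sketch of the pairing described above (the fault string "MyFault" is invented for illustration; BPL attribute values are ObjectScript expressions, hence the nested quotes):

```xml
<scope>
  <!-- ...activities that might fail... -->
  <throw fault='"MyFault"'/>
  <faulthandlers>
    <!-- catch fires only when the thrown string matches -->
    <catch fault='"MyFault"'>
      <!-- handle the named fault -->
    </catch>
    <!-- catchall picks up everything else, including system errors -->
    <catchall>
      <!-- general error handling -->
    </catchall>
  </faulthandlers>
</scope>
```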
Duncan Priest · May 2, 2019 This article needs to link to prerequisite articles to be of much use. For a developer who is used to writing ObjectScript in Studio on a local Ensemble installation, the starting point for this video is so far removed from what is familiar as to make it pretty useless. Similarly, a newcomer to InterSystems development will find little here to latch onto to get things started. As part of a series this might well make a lot of sense, but as a standalone video it's like demonstrating how to build a space rocket by showing an astronaut strapping themselves in for takeoff - it felt like all of the hard work had already been done.
Duncan Priest · Mar 22, 2019 After much investigation deep into the bowels of Ensemble's SOAP handling with no solutions found, I made the decision to build a new namespace with fresh databases to see what would happen, and the problem didn't appear. So, cause unknown, but it seems likely to have been something screwy about the databases (which it turned out had a questionable origin, rather than being freshly created). The infamous question "Have you tried turning it off and then on again?" comes to mind, although in this case the solution was a little more drastic.
Duncan Priest · Feb 3, 2019 Serverless - yes. In a sense this development is the equivalent of the migration of data storage from hard disks installed in a given server to an array of disks in a NAS shared across multiple servers. Each of those servers might be said to be diskless, yet they still rely on storing data on disks. In serverless computing, the code does run on a server somewhere, but it is written as small independent chunks, and the cloud platform may assign different chunks to run on different physical servers. In the Ensemble approach we group our functionality into Productions, each of which clearly exists on a single server; the serverless model would instead see us deploying each Service, Business Process and Operation as an independent entity that could be picked up and run by any server, allowing the cloud platform to decide where best to execute each of them. So there are indeed servers, but the developer does not think in terms of servers and does not typically control the grouping of code to execute on particular servers.
Duncan Priest · Jan 15, 2019 Well, months later I stumbled across the answer myself after needing to recreate the web services from modified WSDL. It seems the problem was the options I selected when originally generating the classes from the WSDL using the SOAP Wizard. I had been using "Class Type: Serial" to avoid cluttering the database with lots of unnecessary tables, but changing this to Persistent (plus using indexed one-to-many relationships) produced classes that no longer exhibit the problem - I now get SOAP requests with empty elements included in the correct form. (At least I assume this was the change that fixed the issue; I also selected "Use unwrapped message format for document style web methods", but I imagine it was the class type that made the difference.)
Duncan Priest · Dec 4, 2018 Thanks Jolyon for that embarrassingly simple solution. It got my curiosity going, though, and I can now understand why I didn't know of this feature in the first place - the "Advanced" submenu doesn't appear on my Edit menu in Studio 2012.1.2 (where I did a lot of my development), though the option for customising Keyword Expansion does appear in the Options. All good in subsequent versions though.
Duncan Priest · Dec 2, 2018 Thanks, I had no idea about <ctrl>e. Now if someone would just change its behaviour so that IT DOESN'T SHOUT AT US, that would be really neat.
Duncan Priest · May 16, 2018 Here's my version of the same thing. (I realise the naming doesn't follow InterSystems' normal practice, but I constantly swap between languages so follow other conventions.) It's worth noting that both the original version and my own actually change the original message passed into the function as well as returning this changed message - if this isn't the required behaviour, clone the message first and work on the clone.

/// Strip trailing null components from fields and trailing null fields from HL7 segments,
/// and optionally strip empty segments
ClassMethod TrimNullTrailingHL7(hl7Msg As EnsLib.HL7.Message, stripEmptySegments As %Boolean = 0) As EnsLib.HL7.Message
{
    #Dim segCount As %Integer = hl7Msg.SegCount
    #Dim sepField As %String = $Extract(hl7Msg.Separators, 1)
    #Dim sepComponent As %String = $Extract(hl7Msg.Separators, 2)

    While segCount > 0 {
        // GetValueAt returns the segment as a string
        #Dim segStr As %String = hl7Msg.GetValueAt(segCount)
        // Remove trailing component separators
        Set segStr = $ZStrip(segStr, ">", sepComponent)
        // Remove trailing component separators within a field
        While $Find(segStr, sepComponent_sepField) > 0 {
            Set segStr = $Replace(segStr, sepComponent_sepField, sepField)
        }
        // Remove trailing field separators
        Set segStr = $ZStrip(segStr, ">", sepField)
        If stripEmptySegments && ($Find(segStr, sepField) = 0) {
            // Segment has no fields left - remove it entirely
            Do hl7Msg.SetValueAt("", segCount, "remove")
        }
        Else {
            // Store the resulting segment back in the message
            Do hl7Msg.SetValueAt(segStr, segCount)
        }
        Set segCount = segCount - 1
    }
    Quit hl7Msg
}
Duncan Priest · May 1, 2018 I could've sworn that I'd seen it in 2012 versions, but nope, my local archived 2012 didn't have it either. So thank you, that was the answer, i.e. it simply wasn't available until 2013.
Duncan Priest · Aug 2, 2017 Actually I just noticed the "Semaphore Specification" setting on the EnsLib.File.InboundAdapter, which looks to be possibly relevant to your needs. The help text for this setting reads: "The semaphore sepcification (sic) can be a wildcard filename pairing of target=semaphore filename e.g. ABC*.TXT=ABC*.SEM which means do not process any files found that match ABC*.TXT unless a corresponding ABC*.SEM exists for each one." May not be relevant, but thought I'd pass it on.
Duncan Priest · Aug 1, 2017 Hi Salma

My first thought is that you could write a custom service class that extends EnsLib.File.PassthroughService and override its OnProcessInput method. In there you would perform your own IO to read the second file into a stream, set the streams for both files as properties of a custom message class that you define (extending Ens.Request), then pass the custom message on to the target process or operation. You would also need matching processing in a custom operation to write both files to the target folder.

But all of this really assumes that you want to do something useful with the file streams along the way. If all you want to do is copy them both as soon as they arrive in an input folder, why not just set up two passthrough services, each detecting a different file extension and both targeting the same operation? On re-reading your question, it looks like timing for the two files may be the issue, though, in which case I'd stick with the custom service/request/operation classes.

Regards
Duncan
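A rough sketch of that custom-service idea. All names here are invented for illustration: Demo.DualFileRequest would be your Ens.Request subclass with two stream properties, and GetCompanionFilename a helper you would write to derive the second filename; it also assumes a single target in TargetConfigNames.

```objectscript
/// Hypothetical sketch only - class and helper names are invented for illustration
Class Demo.DualFilePassthroughService Extends EnsLib.File.PassthroughService
{

Method OnProcessInput(pInput As %Stream.Object, Output pOutput As %RegisteredObject) As %Status
{
    // Custom Ens.Request subclass with FirstFile/SecondFile stream properties (assumed)
    Set tRequest = ##class(Demo.DualFileRequest).%New()
    Set tRequest.FirstFile = pInput

    // Assumed helper: derive the companion filename from the first file's name,
    // then read that file into a stream of its own
    Set tSecondStream = ##class(%Stream.FileCharacter).%New()
    Set tSC = tSecondStream.LinkToFile(..GetCompanionFilename(pInput))
    If $$$ISERR(tSC) Quit tSC
    Set tRequest.SecondFile = tSecondStream

    // Forward both streams in one message to the configured target
    Quit ..SendRequestAsync(..TargetConfigNames, tRequest)
}

}
```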
Duncan Priest · Nov 1, 2016 Wouldn't it make sense to run these tasks in the opposite order? Run the quicker task (KeepIntegrity=false, DaysToKeep=90) first to purge all of the oldest messages unconditionally and then let the slower task run through the more recent, rather than have the slower task run through everything and then run the quicker task to pick off the small number of remaining older messages.
Duncan Priest · Aug 18, 2016 Hey InterSystems compiler developers, why not introduce a foreach statement that compiles to the "usual best performance" approach? That way developers don't need to know about these performance tricks for day-to-day work. If a developer wants to squeeze the last drop of performance out of their code they can still refer to articles like this and override the behaviour, but at least if they use foreach by default they won't choose the worst-performing case. Foreach would be used as:

foreach(",") piece in string {
    // Do something with piece
}

Further, if the relative performance of different approaches changes with version updates or differs across platforms, the use of foreach would also remove the need for any source code changes on the part of the developer.

Just an idea
Duncan
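For what it's worth, such a foreach over comma-separated pieces could plausibly compile down to the $ListFromString/$ListNext idiom that performance articles like this one tend to recommend - a hand-written sketch of the expansion:

```objectscript
// Hand-written equivalent of the proposed foreach(",") piece in string { ... }
Set list = $ListFromString(string, ",")
Set ptr = 0
While $ListNext(list, ptr, piece) {
    // Do something with piece
}
```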