go to post Sean Connelly · Oct 10, 2017 Hi Thembelani,

Assuming that you have got as far as serialising the objects to an XML stream, and you just want to prettify the XML, then you can use the following solution.

Add the following to a new / existing class...

ClassMethod Format(pXmlStream As %CharacterStream, Output pXmlStreamFormatted As %CharacterStream) As %Status
{
    set xslt=##class(%Dictionary.XDataDefinition).%OpenId(..%ClassName(1)_"||XSLT",-1,.sc)
    quit ##class(%XML.XSLT.Transformer).TransformStream(pXmlStream,xslt.Data,.pXmlStreamFormatted)
}

XData XSLT
{
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output omit-xml-declaration="yes" indent="yes"/>
  <xsl:strip-space elements="*"/>
  <xsl:template match="node()|@*">
    <xsl:copy>
      <xsl:apply-templates select="node()|@*"/>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>
}

To use it...

set sc=##class(Your.Class).Format(.xmlStream,.xmlStreamFormatted)

Pass in your XML stream as the first argument (by ref) and get back the formatted stream as the second argument (by ref).
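To see it end to end, a quick sketch (assuming the Format method above lives in a class called Your.Class and the XML is already in a character stream such as %Stream.GlobalCharacter)...

// build a small XML stream, format it, and dump the result to the terminal
set xmlStream=##class(%Stream.GlobalCharacter).%New()
do xmlStream.Write("<root><child>value</child></root>")
set sc=##class(Your.Class).Format(xmlStream,.xmlStreamFormatted)
if $system.Status.IsOK(sc) do xmlStreamFormatted.OutputToDevice()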
go to post Sean Connelly · Oct 10, 2017 https://community.intersystems.com/post/art-mapping-globals-classes-1-3
go to post Sean Connelly · Oct 6, 2017 Hi Mike,

Have you had a look at the FAQ posts? The link is on the Learn button above, and here...

https://community.intersystems.com/tags/developer-community-faq
go to post Sean Connelly · Oct 4, 2017 On your record map service there is a setting called HeaderCount, with the description:

"Number of prefix lines to ignore in incoming documents to permit parsing of reports and CSV exports with column headers."
go to post Sean Connelly · Sep 26, 2017 Looking at my dev version of Caché (2014), the Status value is quit with a $get wrapped around it...

Quit $g(Status,$$$OK)

which should stop that error, but I am wondering if your version does not have this.

You could try just setting Status=1 after you call the routines, to prevent this from erroring.
go to post Sean Connelly · Sep 15, 2017 Take a look at the section on invoking a business service directly...

http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=EGDV_adv#EGDV_busservice_invoke

Essentially, you need to create an instance of the business service using Ens.Director, then you can call the ProcessInput method of your adapterless service.
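A minimal sketch of that pattern, assuming the production is running; the config item name "My Adapterless Service" and the use of Ens.StringRequest are illustrative only, use whatever your service's OnProcessInput expects...

// create an instance of the service by its production config name (the production must be running)
set sc=##class(Ens.Director).CreateBusinessService("My Adapterless Service",.service)
if $system.Status.IsError(sc) do $system.Status.DisplayError(sc)
// hand it some input; ProcessInput will in turn call your OnProcessInput
set input=##class(Ens.StringRequest).%New()
set input.StringValue="hello"
set sc=service.ProcessInput(input,.output)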
go to post Sean Connelly · Sep 15, 2017 Hi Maks,

This is an interesting topic that I have spent a great deal of time thinking about and working on myself.

I developed a transcompiled object oriented Mumps language called MOLE a number of years ago...

http://www.memcog.com/mole/index.html

I've also had a number of attempts at writing a JavaScript to Mumps transcompiler and can share my thoughts on why this is not a good idea.

Firstly, COS is a perfectly good language that I am very happy to use on a day to day basis. There are times when it feels a little stale, particularly as I jump around a lot of languages, but on the whole it does a very good job. Any peccadilloes or missing features seem trivial when you have all the other benefits of the entire Caché platform. That said, there is always room for improvement, and anything that helps us write better, smaller and more maintainable code has to be a good thing.

A couple of things that I learned from MOLE were that when building a transcompiler you have to work with the underlying language, and that fundamental aspects such as coercion should be first class citizens of the transcompiled language. As an example, JavaScript and Mumps differ greatly on data types and coercion. Mumps has one single data type whilst JavaScript has six. Mumps has a fairly simple set of coercion rules whilst JavaScript's are far more complex. As an exploration, I wrote a bank of 1600 coercion tests that Mumps would have to replicate, which highlights some of the complexities...

https://gist.github.com/SeanConnelly/4c25afc61113ee113ec14211f59076ec

The other consideration is that whilst JavaScript can have just-in-time compilers, it is fundamentally a run time language with a complex run time environment for managing things like dynamically changing prototypal inheritance and async call stacks, all of which would have to be simulated in a Mumps compiled environment. On reflection, it's just a really complex idea that would be much less performant than its underlying language.

So any transcompiled language should essentially be a superset of its underlying language, not a wild shift in syntax or behaviour towards JavaScript, C# or any other language.

We can learn lessons from other transcompiled languages such as CoffeeScript and TypeScript. For me, CoffeeScript is marmite and I refused to even try and like it. Whilst there were some interesting features in CoffeeScript, it went too far by changing the syntax of the language. It's interesting, however, that features such as arrow functions influenced and made their way into the core JavaScript language (ES6). This shows that a transcompiled language can be a good test bed for ideas that can influence and improve an underlying language. The gold standard of transcompilers is TypeScript. TypeScript didn't try to change the language, but rather provided a backwards compatible compiler for future features of JavaScript, whilst adding a small amount of syntactic sugar for injecting compile time type checking.

In COS we already have syntax for declaring things like argument types, but it's just a shame that the compiler does not enforce them, something I touched on here...

https://community.intersystems.com/post/compilation-gotchas-and-request-...

TypeScript actually shows that we don't need stronger types, just better compile time validation of how types are used. For me, I would borrow this aspect of TypeScript as the first feature of any transcompiled language.
As an example...

set x = "Hello World"

can be marked as type string...

set x : string = "Hello World"

This would be removed at compile time and have no run time behaviour, but it would enable the compiler to stop this variable from being used incorrectly, e.g. being passed as an argument that should have been of type integer. This would allow code to still be written loosely, but where someone is trying to enforce better protection, it will warn when it is implemented and implemented wrongly. Generics would also be a good fit.

The other aspect I would borrow from TypeScript would be to shim backwards compatible features. An interesting example is the return command, which was introduced in 2013. It's nice to see a new language feature, but then, of course, we have the problem of backwards compatibility. I think back to using the asterisk in $piece, e.g. $piece(x,",",*), which is a great feature but tripped me up on one particular deployment, and still makes me a little nervous to use it in library code. This is where a transcompiler could be a big help. Being able to use these new features and know that they are transcompiled into backwards compatible code would be a really useful tool.

The next on my list would be the addition of the "in" command, to be combined with the "for" command. Whilst the $order function is mighty powerful, it's not the prettiest to look at. Instead of writing...

set name=$order(^names(""))
while name'="" {
    write !,name
    set name=$order(^names(name))
}

you end up with a cleaner and easier to read solution...

for name in ^names {
    write !,name
}

which obviously gets cleaner still when these $order loops become heavily nested.

Next on my list would be to promote simple types to fake objects. As in the above type declaration example, if a variable is declared as a string then it would automatically inherit a whole host of useful string functions, e.g.

set x : string = "Hello World"
write x.toUpper()

HELLO WORLD

This would compile down to...

set x = "Hello World"
write $zconvert(x,"U")

Next up would be lambda functions. Combined with an array type, this would allow for things like map, reduce and filter methods on variables declared as an array, or implicitly referenced as a global. Arrow function syntax would sit well with COS...

//filter out all numbers
set bar : array = foo.filter( value, index => {
    if 'value.IsNumber() return value
})

From here you can then introduce method chaining, which would make code much more readable, so instead of...

set x=$zconvert($extract($piece($piece(y,","),"~",2),1,10),"U")

you end up with...

set x=y.piece(",").piece("~",2).extract(1,10).toUpper()

As a starter these would all be great additions to an already great language. There are probably more suggestions, such as built in async callback support, but this is where I would start from.

That said, there is one important aspect that also needs to be addressed: all of these additions would break both Studio and Atelier. You could also argue that these types of additions would only be half of a solution, as improved tooling should and must go hand in hand with a transcompiler. As a minimum the IDE highlighting needs to support the new syntax, as well as adding value with improved real time detection of type issues.

I spent a great deal of time working on IDE solutions for this very reason. I felt there was no point releasing anything unless it had its own IDE as well. I went through lots of experiments, extending Eclipse and various other IDEs, but I always came back to one important requirement.
Any solution should and must work remotely (as Studio does), and inevitably it would have to be a browser based solution to work as a modern day, cloud supporting solution.

Here is a glimpse of what I have been up to; it's 100% JavaScript and built on top of its own in-house widget library...

http://www.memcog.com/images/Nebula.png

It's far from finished, as I am working on a better way to write API code with Caché, which is what I am currently working on with the Cogs library. If I eventually get there then the plan is to release all of this into the wild, along with the MOLE transcompiler, which could then be used to extend COS or experiment with other Mumps based ideas and innovations.

What I would say is that it's very easy to underestimate how much effort is required to make a transcompiler robust, fully unit tested, tooled up and fully supported to the degree that anyone would consider using it in production. When you're doing this without a sponsor and part time, it's a long hard road...

Sean
go to post Sean Connelly · Sep 13, 2017 Hi Murillo,

It is possible to return messages back to your service.

HL7 TCP services have a setting called "Ack Mode", found under "Additional Settings". By default this is set to Immediate, which will produce an automated HL7 ACK at the time the message is received. To return an ADR A19 message from a process/operation as an ACK to a QRY A19, you need to change this setting to "Application".

You don't mention how the ADR A19 is generated, so I will assume two possibilities.

1. The operation sends the QRY A19 to an upstream system, which returns the ADR A19 message as its ACK.

In this instance, wire up the service to the operation directly (using application ack mode) and it will just work.

Note that operations have a setting called "Reply Code Actions". If the upstream system returns an error ACK then by default Ensemble will fail the message and the ACK will not make it back downstream. Most likely these types of ADR A19 message do need to make it back with the error message, so change the reply code error action to "W", which will log a warning but treat the message as OK.

2. The operation is querying a database, for example SQL Server.

The operation's OnMessage (or message map) method has a request and a response argument. You will need to make the response argument of type EnsLib.HL7.Message. The operation should perform the SQL query and then use the resultset data to create the EnsLib.HL7.Message ADR A19 message. This will be returned by reference as the response argument (a rough sketch of this follows below).

Note that if you are using an HL7 router between the service and the operation, you will also need to configure the "Response From" setting so the router understands which operation it should expect an ACK from (as some messages will be routed to multiple targets).

Also, routers provide segment validation of messages. If the inbound QRY A19 message is malformed then the router will become responsible for producing an error ACK message. If you want to shift this responsibility upstream (pass-through solutions) then either wire up the service directly to the operation or change the "Validation" settings.

Hope that helps...

Sean
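Following on from possibility 2, here is a very rough sketch of what the operation's OnMessage method might look like. It assumes EnsLib.SQL.OutboundAdapter; the table, columns, QRD field path and the heavily cut-down ADR A19 layout are all made-up illustrations (a real reply would also carry MSA/QAK/QRD segments), so treat it as a shape to follow rather than a drop-in implementation...

Method OnMessage(pRequest As EnsLib.HL7.Message, Output pResponse As EnsLib.HL7.Message) As %Status
{
    // query the database via the SQL outbound adapter (table, columns and QRD field path are made up)
    set tSC=..Adapter.ExecuteQuery(.tRS,"SELECT Surname FROM Patients WHERE MRN = ?",pRequest.GetValueAt("QRD:8"))
    quit:$$$ISERR(tSC) tSC
    // build a heavily simplified ADR A19 reply from the resultset
    set tRaw="MSH|^~\&|MyApp|MyFac|||"_$translate($zdatetime($horolog,8)," :","")_"||ADR^A19|"_$system.Util.CreateGUID()_"|P|2.4"
    while tRS.Next() {
        set tRaw=tRaw_$char(13)_"PID|||"_pRequest.GetValueAt("QRD:8")_"||"_tRS.Get("Surname")
    }
    set pResponse=##class(EnsLib.HL7.Message).ImportFromString(tRaw,.tSC)
    quit:$$$ISERR(tSC) tSC
    // set the doctype so any downstream router/validation recognises the message
    set pResponse.DocType="2.4:ADR_A19"
    quit $$$OK
}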
go to post Sean Connelly · Sep 7, 2017 To programmatically access the raw BPL XML...

ENSDEMO>set className="Demo.Loan.FindRateDecisionProcessBPL"

ENSDEMO>set xdata=##class(%Dictionary.XDataDefinition).%OpenId(className_"||BPL",,.sc)

To then access the BPL data as objects...

ENSDEMO>set parser=##class(Ens.BPL.Parser).%New()

ENSDEMO>set sc=parser.ParseStream(xdata.Data,.bpl)

E.g.

ENSDEMO>zw bpl
bpl=<OBJECT REFERENCE>[7@Ens.BPL.Process]
+----------------- general information ---------------
|      oref value: 7
|      class name: Ens.BPL.Process
| reference count: 4
+----------------- attribute values ------------------
|          Component = ""
| ContextSuperClass = ""
|     DerivedVersion = ""
|             Height = 2000
|           Includes = ""
|           Language = "objectscript"
|             Layout = ""
|            Package = ""
|            Request = "Demo.Loan.Msg.Application"
|           Response = ""
|            Version = ""
|              Width = 2635
+----------------- swizzled references ---------------
|   i%Context = ""
|   r%Context = "8@Ens.BPL.Context"
|    i%Parent = ""
|    r%Parent = ""
|  i%Sequence = ""
|  r%Sequence = "17@Ens.BPL.Sequence"
+-----------------------------------------------------

ENSDEMO>set context=bpl.Context

ENSDEMO>zw context
context=<OBJECT REFERENCE>[8@Ens.BPL.Context]
+----------------- general information ---------------
|      oref value: 8
|      class name: Ens.BPL.Context
| reference count: 9
+----------------- attribute values ------------------
|              (none)
+----------------- swizzled references ---------------
|     i%Parent = ""
|     r%Parent = "7@Ens.BPL.Process"
| i%Properties = ""
| r%Properties = "9@Ens.BPL.PropertyList"
+-----------------------------------------------------
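From there you can walk the parsed tree. As a rough sketch (this assumes the top-level activities hang off bpl.Sequence.Activities, which is how the Ens.BPL object model is laid out in the versions I have checked)...

ENSDEMO>for i=1:1:bpl.Sequence.Activities.Count() { set act=bpl.Sequence.Activities.GetAt(i) write !,act.%ClassName(0),": ",act.Name }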
go to post Sean Connelly · Sep 1, 2017 Thanks for the help Ray, it's been a really interesting conversation.
go to post Sean Connelly · Sep 1, 2017 I want the float member to be a canonical number, not a string.

So a unit test would look like...

AssertNumberEquals(x.Amount,0.1)

which would fail. This would require a change in the method code to...

set test.Amount=+$p(data,",",2)

which means the unit test will now pass, and quirky things won't happen downstream.
go to post Sean Connelly · Sep 1, 2017 On reflection I agree, unit testing simple return types is pointless. It's only returned objects with %Float properties that would need to be unit tested for type as well as value...

Class Test.Types Extends %Persistent
{

Property Amount As %Float;

ClassMethod foo() As Test.Types
{
    set data="0.0,0.1,0.2"
    set test=..%New()
    set test.Amount=$p(data,",",2)
    quit test
}

}

>s x=##class(Test.Types).foo()

>w x.Amount
0.1
>w x.Amount=0.1
0
go to post Sean Connelly · Sep 1, 2017 Agreed, this is a highly specialised use case, specifically for unit testing against potential floating point equality errors. Using IsString() as a day to day function would in most cases be a bad thing.

Just to clarify, the "sorts after" suggestion does NOT work; whilst it can detect stringy fractions, it does not work for even the simplest floating point number...

>w "1.1"]]$c(0)
0

> The difference is that the method above will fail numbers in canonical form

Do you have a specific condition where it will fail? I tested 1.6e+8 fractional numbers without any problem, so I'm obviously concerned that there are conditions where it fails that I have not thought about yet.
go to post Sean Connelly · Aug 31, 2017 Thanks Ray.

Btw, I found a solution earlier; I've added an answer to the post.

I accept that implementations like $lb might change in the future, but I now have a backwards compatible solution that can work alongside the dynamic object solution on future releases. Comparing the two outputs will in itself make a good unit test of the unit tester on installation.
go to post Sean Connelly · Aug 31, 2017 Turns out that stringy numbers are treated as strings by $lb, so a simple string test can be created...

ClassMethod IsString(val) As %Boolean
{
    quit $lb(val)[val
}

ClassMethod AssertNumberEquals(val1 As %Float, val2 As %Float) As %Boolean
{
    if ..IsString(val1) quit 0
    if ..IsString(val2) quit 0
    if val1'=val2 quit 0
    quit 1
}
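To see the intended behaviour (assuming the methods live in a class called Your.UnitTest; the results shown follow from the $lb behaviour described above)...

>w ##class(Your.UnitTest).AssertNumberEquals("0.1",0.1)
0
>w ##class(Your.UnitTest).AssertNumberEquals(+"0.1",0.1)
1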
go to post Sean Connelly · Aug 31, 2017 Hi Ray, thanks for the long responses, these will be great for anyone new to Caché.

No imposing of coding conventions here, just 20 years at the rock face with Caché/COS and a good feel for the trip hazards inherent in the language, as all languages have (love COS, no bashing here).

I've been evolving a new version of a unit test framework I have been using for years and want to make sure that it handles some of these regular trip hazards.

In this instance, I have my own backwards compatible JSON library that failed a test because it was assigning a stringy number to a %Float property in its own normalisation method...

https://github.com/SeanConnelly/Cogs/blob/master/src/Cogs/Lib/Json/Cogs.Lib.Json.ClassDeserializer.cls

If I can add a new assert method as described earlier, I can catch this type of problem upstream and prevent potential bugs leaking out into live code.

So back to the simple question: it would be great if anyone at InterSystems knows a way to check the type of a variable. I can't think of anything from my legacy ANSI M days; perhaps there is a $zu function or similar?
go to post Sean Connelly · Aug 30, 2017 Thanks Alexander, but the AssertNumberEquals method should create a failed unit test when the values are of different types. I have updated my question to be a little clearer on this.
go to post Sean Connelly · Aug 30, 2017 Thanks Ray, but coercing the unit test to force a pass will cloak the underlying problem.

Let me expand on my original post. We know in COS that many variables start out as stringy values, no matter whether they contain a number or not...

set price=$piece(^stock(321),"~",2)
set stock.price=%request.Data("price",1)

COS coercion operators do a pretty good job of dealing with numbers inside strings, except that there is an inconsistency in the equality operator when comparing floating point numbers. If "1.5"=1.5 is true, then arguably "0.5"=0.5 should also be true, but it is not. This means that developers should be wary of automatic equality coercion on floating point numbers.

To compound this problem, the COS compiler will ignore the following two potential problems...

1. Assigning a stringy value to a %Float property
2. Returning a stringy value from a method with a return type of %Float

which can lead to a false understanding of what a developer is dealing with.

To make things a little more interesting, a persistent object will automatically coerce a %Float property to a true number value when saved. That's fine, but what if the developer is unaware that he/she is assigning a stringy float value and later performs a dirty check between another stringy float value and the now saved true float number? The code could potentially be tripped up into processing some dirty-object logic when nothing has changed.

As developers we need to code defensively around this problem. Probably the best thing that we can do is always manually coerce a variable/property at source when we know it's a floating point value...

set price=+$piece(^stock(321),"~",2)
set stock.price=+%request.Data("price",1)

But, since we are not perfect, and the compiler won't help us, it's easy to see that a few bugs might slip through the net.

This is where unit testing and good code coverage should highlight these exact types of problems. In this instance, a unit test should fail if the two values are not both the same type and value. So the implementation of AssertNumberEquals should check both type and value. Therefore, both of the following comparisons should fail the assertion...

"1"=1
"0.12345"=0.12345

This is why, as I originally posted, (+"0.12345"=0.12345) is not the right answer, as it will create a false positive.

So the question boils back to: how do you detect the runtime type of a variable or property?

One solution that I have come up with so far would roughly look like...

ClassMethod AssertNumberEquals(v1, v2) As %Boolean
{
    set array=[]
    set array."0"=v1
    set array."1"=v2
    if array.%GetTypeOf(0)'="number" quit 0
    if array.%GetTypeOf(1)'="number" quit 0
    if v1'=v2 quit 0
    quit 1
}

Except that it is dependent on recent versions of Caché.

What I need is a similar solution that would be backwards compatible.
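For reference, the "0.5"=0.5 inconsistency mentioned above is easy to reproduce in a terminal; the equality operator compares the string forms, and the canonical string form of 0.5 is ".5"...

USER>w "1.5"=1.5
1
USER>w "0.5"=0.5
0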
go to post Sean Connelly · Aug 29, 2017 I figured out that $length can detect a stringy number starting with zero, and it is not dependent on local collation...

Is a string type...

USER>s x="0.12345"

USER>w $l(x)'=$l(+x)
1

Is not a string type...

USER>s x=0.12345

USER>w $l(x)'=$l(+x)
0

BUT, this or "sorts after" will only work for values starting with a zero.

I could use this to fix the specific generic assertion test failure I have, but it would be nice to expand the unit test methods to have an AssertNumberEquals().

It might be that I have to settle on...

>w ["1"].%GetTypeOf(0)
string

and only enable this method in supported versions.
go to post Sean Connelly · Aug 29, 2017 Hi Ray,

The trouble is determining whether a number value is actually a string type or a true number type, as comparisons can give different answers for numbers starting with a zero...

USER>s x=0.12345

USER>w (x=+x)
1

USER>s x="0.12345"

USER>w (x=+x)
0

The obvious answer is to do (+x=+x), but this does not solve how to unit test the type and the value.

I agree that, on reflection, the dependency on local collation would not work for my unit test framework as it would restrict its scope of use, but it is still an interesting answer.

Any more suggestions...