Eduard Lebedyuk · Jun 11, 2020 Is your app in that Namespace an Interoperability production?
Eduard Lebedyuk · Jun 9, 2020

&sql(SELECT Name, DOB, Gender INTO :Name, :DOB, :Gender FROM osuwmc_RQGPatient.DataTable WHERE MRN=:MRN)
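For completeness, embedded SQL sets the SQLCODE special variable, so a caller would normally check it before trusting the host variables. A minimal sketch of such a wrapper, assuming a hypothetical GetPatient helper around the query above:

ClassMethod GetPatient(MRN As %String, Output Name, Output DOB, Output Gender) As %Status
{
    &sql(SELECT Name, DOB, Gender INTO :Name, :DOB, :Gender
         FROM osuwmc_RQGPatient.DataTable
         WHERE MRN = :MRN)
    // SQLCODE = 0 on success, 100 when no row matched, negative on error
    quit $case(SQLCODE,
        0: $$$OK,
        100: $$$ERROR($$$GeneralError, "No patient with MRN: " _ MRN),
        : $$$ERROR($$$GeneralError, "SQLCODE: " _ SQLCODE _ " " _ $get(%msg)))
}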
Eduard Lebedyuk · Jun 9, 2020 ...

quit $classmethod(class, index _ "Exists", value)

Since you presumably want to return the value.
Eduard Lebedyuk · Jun 9, 2020 %JSON.Adaptor does not construct an intermediate dynamic object. You can use %ObjectToAET to convert a normal object into a dynamic object:

set dynObj = ##class(%ZEN.Auxiliary.altJSONProvider).%ObjectToAET(obj)

There are two approaches:

1. Create a "result set" class to hold the results (interestingly, InterSystems provides %XML.DataSet and other tools for this specific use case with XML/SOAP. Docs):

Class Test.JSONRS Extends (%RegisteredObject, %JSON.Adaptor)
{
Property count As %Integer;
Property results As list Of Book;
}

2. Simple approach (see the sketch after this list):
- Output the header: {"results": 3, "items": [
- Call %JSONExport on each book (don't forget the comma between items)
- Output the footer: ]}

Despite being hacky, the second approach is better:
- If JSON export succeeds on each individual object, it works every time; and if some object fails, we won't get valid JSON anyway
- It can be easily generalized to any type of result
- It does not hold a large structure in memory, as each object is loaded and output individually

That said, I generally recommend against supplying a count in query results, for several reasons:
- Users do not care about thousands of results
- It's slow: we need to compute how many results there are
- Use pagination instead: if the current page holds the maximum number of results, display a link to the next page
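A minimal sketch of the second approach, assuming an illustrative Test.Book class that extends %JSON.Adaptor (class and query names are assumptions, not from the original post):

ClassMethod ExportBooks() As %Status
{
    set sc = $$$OK
    set rs = ##class(%SQL.Statement).%ExecDirect(, "SELECT ID FROM Test.Book")
    quit:rs.%SQLCODE'=0 $$$ERROR($$$GeneralError, "SQLCODE: " _ rs.%SQLCODE)
    write "{""items"": ["
    set first = $$$YES
    while rs.%Next() {
        // Comma between items, not after the last one
        write:'first ","
        set first = $$$NO
        set book = ##class(Test.Book).%OpenId(rs.%Get("ID"))
        // Each object is exported (and garbage-collected) individually
        set sc = book.%JSONExport()
        quit:$$$ISERR(sc)
    }
    write "]}"
    quit sc
}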
Eduard Lebedyuk · Jun 9, 2020

> So %Dictionary.CacheClassname would be my table name then?

Well, class name. If you want, you can convert a table name to a class name with:

set:table'["." table = $$$DefaultSchema _ "." _ table // support unqualified names
set class = $$$GetClassNameFromIQN(table)

> I don't have to use any :sql.... type code for it to do the lookup?

You can, but you don't have to.

> What about the existing EXISTS function that is used for Data Lookup Tables (lut)?

It's a separate function that works only with Data Lookup Tables (which are actually not tables at all).
Eduard Lebedyuk · Jun 9, 2020 You'll need a Custom Function. If the lookup field is indexed (Location in your case), you can use this function for all lookup checks:

ClassMethod Exists(class As %Dictionary.CacheClassname, index As %String, value As %String) As %Boolean [ CodeMode = expression ]
{
$classmethod(class, index _ "Exists", value)
}
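To make this callable from routing rules or DTLs, the method would live in a function set class, along these lines (class, package, and table names below are illustrative):

Class Test.Functions Extends Ens.Rule.FunctionSet
{

ClassMethod Exists(class As %Dictionary.CacheClassname, index As %String, value As %String) As %Boolean [ CodeMode = expression ]
{
$classmethod(class, index _ "Exists", value)
}

}

It could then be invoked from a rule expression by its method name, e.g. Exists("Osuwmc.DataTable", "Location", source.Location), assuming an index named Location for which the generated <IndexName>Exists checker method is available.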
Eduard Lebedyuk · Jun 8, 2020 Interesting. Can you please add source code for java classes, such as ParseXML.Parse?
Eduard Lebedyuk · Jun 8, 2020

> This is my first post Eduard, so I'm not sure what the etiquette is. Should I mark your post as correct, as it led me to realise where the problem was, or this one, as it contains the working code?

You can mark any number of comments as correct answers, so both, I guess.
Eduard Lebedyuk · Jun 6, 2020 You can use the %ZSTART routine, but I'd recommend writing idempotent code for OnProductionStart - this way all Interoperability functionality is contained inside the production.
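A minimal sketch of what "idempotent" means here, using the OnStart callback of an Ens.Production subclass (the global name and value are illustrative): the code checks whether its side effect already exists before creating it, so starting the production any number of times leaves the system in the same state.

Class Test.Production Extends Ens.Production
{

ClassMethod OnStart(pTimeStarted As %String) As %Status
{
    // Idempotent: initialize the application global only if it's not there yet
    set:'$data(^MyAppConfig) ^MyAppConfig = $lb("default", 42)
    quit $$$OK
}

}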
Eduard Lebedyuk · Jun 4, 2020 The Config package is a wrapper over the CPF file. You edit the objects of these classes, and the CPF file is updated automatically by Caché. You can check in the CPF file where the data server is defined and get the corresponding Config object. WRC may provide more info, I think.
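For example, if the data server in question is an ECP data server, it lives in the [ECPServers] section of the CPF and can be read through the corresponding Config class, roughly like this (the server name is illustrative; Config classes must be used from the %SYS namespace):

zn "%SYS"
set sc = ##class(Config.ECPServers).Get("MYDATASERVER", .props)
if $$$ISOK(sc) {
    zwrite props  // properties such as Address and Port
}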
Eduard Lebedyuk · Jun 4, 2020 If you're interested in monitoring, try the brand new System Alerting and Monitoring. There was also a Tech Talk about it recently.
Eduard Lebedyuk · Jun 4, 2020 Interesting article! I encountered a Business Process variation of this issue recently and would like to add that setting the SKIPMESSAGEHISTORY parameter is not always a complete solution. It disables sent/received reference history, but pending messages are still stored in a collection (no way around it). In cases where one BP instance waits on more than ~10 000 messages simultaneously, the same issue occurs (I got a journal file per minute on 50 000 pending messages). The recommended approach would be to change the architecture so that one process waits on a more reasonable number of messages.
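For reference, the parameter mentioned above is set in the Business Process class itself, along these lines (class name is illustrative):

Class Test.Process Extends Ens.BusinessProcess
{

/// Skip storing sent/received message reference history on the process instance
Parameter SKIPMESSAGEHISTORY = 1;

}

As noted above, this only disables the sent/received reference history; the collection tracking pending responses still grows with the number of simultaneously pending messages, which is why the architectural fix is still needed.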