Hi, after installing HealthShare 2015.2, all the tables that were correctly listed when working with HealthShare 2014.1 are NOT listed anymore. TrakCare tables are not listed in the catalogue, nor by WinSQL's IntelliSense.
Does anybody know a trick, or have a hint, for getting this useful functionality back?
The thing is, with rule 2 the operation succeeds and the message reaches BO HL7, but with rule 1 it does not go through. In rule 1, I have two different conditions, but they are linked by an OR, and one of the conditions is simply IF 1=1.
Do you guys have any clue why rule 1 does not reach BO HL7?
In the HL7 Annotations available in the Management Portal, at the message type and message structure levels, there are columns for 'Explicit Usage' and 'Implicit Usage'. In nearly all cases, the values in these two columns match, but at least for message types RAS and RGR, they don't.
What's the meaning of explicit and implicit in the annotations?
Earlier in this series, we've presented four different demo applications for iKnow, illustrating how its unique bottom-up approach allows users to explore the concepts and context of their unstructured data and then leverage these insights to implement real-world use cases. We started small and simple with core exploration through the Knowledge Portal, then organized our records according to content with the Set Analysis Demo, organized our domain knowledge using the Dictionary Builder Demo, and finally built complex rules to extract nontrivial patterns from text with the Rules Builder Demo.
This time, we'll dive into a different area of the iKnow feature set: iFind. Where iKnow's core APIs are all about exploration and leveraging those results programmatically in applications and analytics, iFind is focused specifically on search scenarios in a pure SQL context. We'll be presenting a simple search portal implemented in Zen that showcases iFind's main features.
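To give a flavour of what that looks like, here is a minimal sketch (the class, property and index names are made up for illustration, not taken from the demo): an iFind index is declared on a string property in the class definition.

    Class Demo.Article Extends %Persistent
    {
    Property Title As %String;
    Property Body As %String(MAXLEN = "");
    // iFind full-text index on the Body property
    Index BodyFT On (Body) As %iFind.Index.Basic;
    }

Once the index is built, searching happens in plain SQL through the %FIND predicate and the search_index() function, for example:

    SELECT Title
    FROM Demo.Article
    WHERE %ID %FIND search_index(BodyFT, 'diabetes AND insulin')

The search string supports AND, OR and NOT operators as well as wildcards, and the richer index classes (%iFind.Index.Semantic, %iFind.Index.Analytic) expose additional iKnow-derived information.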
I also have a Caché server with a "downloadedposts" table.
The two tables are connected, Caché to MySQL, via the SQL Gateway.
I want to keep the Caché table synced with the MySQL one (the MySQL "posts" table is the master copy), so Caché periodically queries the MySQL server and downloads the data. So far so good: if a record appears or changes in the MySQL table, Caché downloads the changes.
The problem I'm encountering is that sometimes rows get deleted from the MySQL "posts" table.
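The rough cleanup I have in mind is to delete, on the Caché side, any rows whose key no longer exists in the gateway-linked table. A sketch, where SQLUser.downloadedposts, mysqllink.posts and the id column are all placeholder names for my actual tables:

    // placeholder names: SQLUser.downloadedposts (local copy), mysqllink.posts (linked via SQL Gateway)
    Set sql = "DELETE FROM SQLUser.downloadedposts WHERE id NOT IN (SELECT id FROM mysqllink.posts)"
    Set result = ##class(%SQL.Statement).%ExecDirect(, sql)
    If result.%SQLCODE < 0 { Write "cleanup failed: ", result.%Message, ! }

This compares the full key sets on every sync, so I'm not sure it's the right pattern for a large table.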
Steve Glassman is on vacation today, so in his place I wanted to announce the availability of a new kit for the 2016.2 Field Test. The kit details are: 2016.2.0.665.0
There is a wide range of changes to the 2016.2 Field Test, 145 of them in total. You can find a complete listing here:
I am pleased to announce the next 2016.2 field test kit, 2016.2.0.677.0.
I haven’t sent an update to this thread in a while and it should come as no surprise that there has been quite a lot of development going on since I wrote about build 632. In fact, there have been almost 300 changes covering most areas of the product.
The accumulated list of fixes to problems found in the field since build 632 includes the following changes:
My manager wants to send a couple of people to one of InterSystems's courses about developing Ensemble productions. I work in a healthcare setting, but my group does not do much work with HL7 interfaces. We mainly use Ensemble to implement custom (non-HL7) interfaces and web services/clients.
Studio supports an option to display arguments using multiple lines. Is there an equivalent option for Atelier? The Studio setting is under Tools->Options, Environment->Class and is controlled by the "Multiline method arguments" flag.
Recently I have been posting some updates to our JSON capabilities and I am very glad that so many of you provided feedback. Today I would like to focus on another facet: Producing JSON with a SQL query.
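To make the goal concrete before we dig in, here is a hand-rolled sketch of the idea: run a query through %SQL.Statement and assemble the output with dynamic objects (Sample.Person and its columns are just an arbitrary example from the SAMPLES namespace):

    // hand-rolled JSON from a SQL query
    Set rset = ##class(%SQL.Statement).%ExecDirect(, "SELECT Name, DOB FROM Sample.Person")
    Set arr = []
    While rset.%Next() {
        Set row = {}
        Do row.%Set("name", rset.%Get("Name"))
        Do row.%Set("dob", rset.%Get("DOB"))
        Do arr.%Push(row)
    }
    Write arr.%ToJSON()

Writing this boilerplate by hand for every query gets old quickly, which is exactly where producing JSON directly from SQL comes in.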
I am trying to calculate a dimension percentage for WFRole on a child cube named MbMRouteHistoryInitiatorObj joined to a parent cube named MbmQaSObj by using a calculated member dimension within MbmQaSObj.
If I use the calculated member dimension value expression of [WFRole].[H1].[WFRole].CurrentMember/ [WFRole].[All WFRole].%ALL in the child cube MbMRouteHistoryInitiatorObj, the percentages work correctly.
I'm trying to rewind a cursor back to the first row after looping part of the way through the implied result set, but I'm not finding a way to make this happen. Is there some iterator variable or directive that I can leverage to accomplish this?
I could code around it by pulling identifiers and/or values into a local array and then hand-coding an iterator over my local copy of the results, but this feels like a "reinventing the wheel" approach, and I thought I would check before I start down this path.
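To make it concrete, here is roughly the shape of the code (hypothetical query); the only thing I've come up with so far is to CLOSE and re-OPEN the cursor to get back to the first row:

    // hypothetical query; loop part-way through, then start over from row 1
    &sql(DECLARE C1 CURSOR FOR SELECT Name INTO :name FROM Sample.Person)
    &sql(OPEN C1)
    For i=1:1:10 {
        &sql(FETCH C1)
        Quit:SQLCODE'=0
        Write name, !
    }
    // no rewind that I can find, so close and reopen to fetch from the top again
    &sql(CLOSE C1)
    &sql(OPEN C1)
    &sql(FETCH C1)
    If SQLCODE = 0 { Write "first row again: ", name, ! }
    &sql(CLOSE C1)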
If I wanted to build a web-based dashboard that monitored various HIE transactions where would I start?
Examples of measures would be Provide & Registers by facility, Patient Views by facility, or even PHR-related data.
I have a successful POC that uses nodejs, html and SQL, but I think it makes more sense to use InterSystems technology all the way around; I just don't know where to begin.
I can see mention of it online - http://www.intersystems.com/our-products/cache/tech-guide/chapter-3/ - but nothing in the docs on how to implement it. Does anyone have any experience with this? We've managed to implement Hibernate and JDBC connections to Caché DBs fine (these are documented!).
A customer is using an ancient MSM-based client workstation for their application, and they made a small change to their server code: they introduced a property of type %Double() and discovered an issue. The MSM workstation is not able to retrieve any property of an object instance that holds the value $double(0). See the images illustrating the issue.
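For reference, a minimal sketch of the kind of server-side change involved (class and property names made up for illustration):

    Class Demo.DoubleRepro Extends %Persistent
    {
    // the newly introduced property
    Property Amount As %Double;
    }

    // on the server, in a terminal session:
    Set obj = ##class(Demo.DoubleRepro).%New()
    Set obj.Amount = $double(0)
    Set sc = obj.%Save()
    // the MSM-based workstation fails when retrieving properties of this instance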
I'm posting this to DC intentionally before I eventually send it to WRC so others can comment.
I am looking for a mapping from SDA collections to the HS Analytics (HSAA) data model, specifically from HS.SDA3.Container.Observations to the tables (I couldn't find all the fields in HSAA.Observation). Can someone help? Thanks.
Let's assume you have an infinitely scaling algorithm implemented in your application, using replication, ECP, or any other means of horizontal scaling, and let's assume you know how to run your system under any volume of requests; the trick then is to deploy the required number of computing nodes in the cluster. If we are talking about a cluster of 2-4 nodes, your administrator (or, as they are called today, "DevOps engineer") will install everything manually. They will probably handle a 5-node configuration easily as well.