In terms of general throughput design and long-term support, I'm trying to decide on the best approach for creating multiple batch files in a few different layouts from the same data sets.
Like many others, we found ourselves stuck doing live data mapping in our Interface Engine that we really didn't want to do, but we had no good alternative. We want to keep mappings only for as long as they could possibly be needed and then purge expired rows based on a TTL value. We had four use cases for this ourselves before we built it.
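A minimal sketch of the TTL idea in ObjectScript, assuming a hypothetical lookup global ^MyApp.Mapping (the class and global names are illustrative, not our actual implementation):

    Class Demo.MappingTTL [ Abstract ]
    {

    /// Store a mapping; the expiry date (+$HOROLOG + TTL days) is kept with the value
    ClassMethod SetMapping(key As %String, value As %String, ttlDays As %Integer = 30)
    {
        set ^MyApp.Mapping(key) = $LISTBUILD(value, +$HOROLOG + ttlDays)
    }

    /// Return the mapped value, or "" if the row is missing or expired
    ClassMethod GetMapping(key As %String) As %String
    {
        set node = $GET(^MyApp.Mapping(key))
        quit:node="" ""
        quit:$LIST(node,2)<+$HOROLOG ""
        quit $LIST(node,1)
    }

    /// Purge every row whose expiry date has passed; returns rows removed
    ClassMethod Purge() As %Integer
    {
        set key = "", count = 0
        for {
            set key = $ORDER(^MyApp.Mapping(key))
            quit:key=""
            if $LIST(^MyApp.Mapping(key),2) < +$HOROLOG {
                kill ^MyApp.Mapping(key)
                set count = count + 1
            }
        }
        quit count
    }

    }

A Purge method like this could then be scheduled through Task Manager so expired rows never accumulate.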
I am configuring IIS to work with the CSP Gateway, but I'm running into this error:
"The Module DLL 'C:\Inetpub\CSPGateway\CSPms.dll' could not be loaded due to a configuration problem. The current configuration only supports loading images built for a x86 processor architecture."
When using standard SQL or the object layer in InterSystems IRIS, metadata consistency is usually maintained through built-in validation and type enforcement. However, legacy systems that bypass these layers—directly accessing globals—can introduce subtle and serious inconsistencies.
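To make that concrete, here is a small ObjectScript illustration, assuming a hypothetical persistent class Demo.Person whose data global is ^Demo.PersonD; a direct global write skips every check the object and SQL layers would apply:

    // Through the object layer: validation runs, types are enforced
    set p = ##class(Demo.Person).%New()
    set p.Name = "Smith,John"
    set p.DOB = $ZDATEH("1970-01-01", 3)
    set sc = p.%Save()

    // Direct global access: no validation, no type enforcement.
    // This stores a malformed row the class layer will choke on later.
    set ^Demo.PersonD(2) = $LISTBUILD("Smith,John", "not-a-date")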
Prompted by this post about accessing a global at its original location after you have changed a mapping, here's a tip about one specific dropdown in Portal that's sometimes useful.
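As a code-side counterpart to that Portal dropdown, an extended global reference reads a global through another namespace's mappings, or straight from a database directory; the namespace and path below are purely illustrative:

    // Read ^MyGlobal through the global mappings of another namespace
    set value = ^|"OTHERNS"|MyGlobal(1)

    // Read ^MyGlobal directly from a database directory (an implied
    // namespace), ignoring the current namespace's mappings entirely
    set value = ^|"^^C:\InterSystems\IRIS\mgr\olddb\"|MyGlobal(1)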
IRIS supports CCDA and FHIR transformations out-of-the-box, yet the ability to access and view those features requires considerable setup time and product knowledge. The IRIS Interop DevTools application was designed to bridge that gap, allowing implementers to immediately jump in and view the built-in transformation capabilities of the product.
In addition to the IRIS XML, XPath, and CCDA Transformation environment, the Interop DevTools package now provides:
Presenter: Jeff Semmens
Task: Model and access data as objects in .NET without designing the database first
Approach: Use InterSystems Entity Framework
Description: Come and experience how you can design your database model in .NET and evolve it over time by leveraging the Entity Framework.
Problem: Current bindings require database-first design. I cannot design my model in .NET.
Solution: The Entity Framework is an object-relational mapping framework that allows data-oriented applications to model and access their data as objects in .NET. Database-first is still supported but will fade away in future releases.
Content related to this session, including slides, video, and additional learning content, can be found here.
The following post is a guide to implementing a basic architecture for DeepSee. This implementation includes a database for the DeepSee cache and a database for the DeepSee implementation and settings.
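To sketch what such a split can look like, assume two databases named DEEPSEE-CACHE and DEEPSEE-SETTINGS have already been created (both names are illustrative). The DeepSee globals of the application namespace can then be mapped to them; DeepSee cache globals, for example, all begin with ^DeepSee.Cache:

    // Run from the %SYS namespace; "APP" and the database name are assumptions
    new $NAMESPACE
    set $NAMESPACE = "%SYS"
    kill props
    set props("Database") = "DEEPSEE-CACHE"
    set sc = ##class(Config.MapGlobals).Create("APP", "DeepSee.Cache*", .props)

A matching mapping would route the remaining DeepSee implementation globals to the settings database.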
Just released from Learning Services: Using the Complex Record Mapper video. See how the Complex Record Mapper lets you turn nested and serial records into production messages — without writing any code.
See how X12 SNIP validation can be used in the InterSystems IRIS data platform and how to create a fully functional X12 mapping in a single Data Transformation Language (DTL) transformation:
https://www.youtube.com/embed/ppKpMx_MAtg
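For readers new to DTL, the overall shape of such a transformation class is sketched below; the class name, schema category, and field path are assumptions for illustration, not taken from the video:

    Class Demo.X12.Remap Extends Ens.DataTransformDTL [ DependsOn = EnsLib.EDI.X12.Document ]
    {

    Parameter IGNOREMISSINGSOURCE = 1;

    XData DTL [ XMLNamespace = "http://www.intersystems.com/dtl" ]
    {
    <transform sourceClass='EnsLib.EDI.X12.Document' targetClass='EnsLib.EDI.X12.Document' sourceDocType='HIPAA_5010:837P' targetDocType='HIPAA_5010:837P' create='copy'>
      <!-- create='copy' clones the whole source; individual fields can then be overridden -->
      <assign property='target.{ISA:6}' value='"MYSENDERID"' action='set'/>
    </transform>
    }

    }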
How Tax Service, OpenStreetMap, and InterSystems IRIS could help developers get clean addresses
Pieter Brueghel the Younger, Paying the Tax (The Tax Collector), 1640
In my previous article, we just skimmed the surface of objects. Let's continue our reconnaissance. Today's topic is a tough one. It's not quite BIG DATA, but it's still data that isn't easy to work with: we're talking about fairly large amounts of it. It won't all fit into RAM at once, and some of it won't even fit on the drive (not for lack of space, but because there's a lot of junk). The name of our subject is the FIAS DB: the Federal Information Address System database, the database of addresses in Russia. The archive is 5.5 GB, and it's a compressed XML file. After extraction, it will be a full 53 GB (set aside 110 GB for extraction), and when you start to parse and convert it, even that 110 GB won't be enough. There won't be enough RAM either.
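Data of that size rules out building a DOM; a streaming pull parser is the natural fit. A minimal sketch in ObjectScript using %XML.TextReader, assuming a hypothetical extracted file AS_ADDROBJ.XML with Object elements (the file name and attribute names are illustrative, not the exact FIAS layout):

    // Stream-parse a huge XML file without loading it into memory
    set sc = ##class(%XML.TextReader).ParseFile("C:\fias\AS_ADDROBJ.XML", .reader)
    if $SYSTEM.Status.IsError(sc) { do $SYSTEM.Status.DisplayError(sc) quit }
    while reader.Read() {
        if (reader.NodeType = "element") && (reader.Name = "Object") {
            // Pull only the attributes we need and store them in a global
            if reader.MoveToAttributeName("AOGUID") {
                set guid = reader.Value, name = ""
                if reader.MoveToAttributeName("FORMALNAME") set name = reader.Value
                set ^FIAS.AddrObj(guid) = name
            }
        }
    }

Because the reader visits one node at a time, memory use stays flat no matter how large the file grows.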