
Anyone experience hurdles when using the Production Validator toolkit?

I've been testing out the Production Validator toolkit to see what we can and can't do with it. It seems really interesting, and there are use cases where it could really streamline upgrades (or at least parts of them), but I ran into quite a few hurdles with the documentation. I'm curious whether anyone else has used it.

Did you experience any issues getting it working? Is there any clarification you would have liked in the documentation? Any use cases you worked through that made it particularly valuable?

The hurdles I experienced included:

  1. The documentation states that the toolkit comes pre-installed in 2024.3+, but I got to a point where it kept throwing a <METHOD DOES NOT EXIST> error. On investigation, the toolkit either wasn't installed or was only partially installed, so I had to download and install the package from WRC.
  2. The documentation was frequently a little vague about things like the terminal command inputs. For example, it isn't clear what the arguments in the following command are for (I later figured out that it creates a new temporary namespace in which the copied production is loaded and run; see the example after this list):
    HSLIB>set sc = ##class(HS.InteropTools.HL7.Compare.ProductionExtract).ConfigureAndImport(<Testing_Namespace>,<Full_Path>)
  3. Since I was simulating an upgrade between 2024.1 and 2024.3, I wasn't able to get an output with any differences. The documentation doesn't have any kind of test or demo baked in, just screenshots, so I'm still not 100% sure what it can and can't handle (see Outstanding Questions below).
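
For reference on item 2, here is roughly how the call ended up looking for me. The namespace name and path below are placeholders of my own, and my reading of the second argument (the full path to the database copied from the source instance) comes from my own poking around rather than the documentation, so treat it as a best guess:

    HSLIB>set sc = ##class(HS.InteropTools.HL7.Compare.ProductionExtract).ConfigureAndImport("PVTEST","c:\temp\pv\IRIS.DAT")
    HSLIB>if $SYSTEM.Status.IsError(sc) do $SYSTEM.Status.DisplayError(sc)

The second line just surfaces any error in the returned %Status instead of letting it fail silently.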

 

Outstanding Questions

  1. I don't have any demo productions that use BPLs, so there is a question of how the PV handles them when they are present.
  2. Doesn't it seem that a good amount of this could be scripted, since these are commands that mostly don't take user input? (And where they do, wouldn't it make sense to script as much as possible anyway? A rough sketch of what I mean appears after this list.)
  3. I hadn't worked with the syntax that produces the JSON and initially thought it was invalid JSON (what with the doubled double quotes, e.g. ""example""). Would a link to that syntax documentation really be unnecessary, or would an example of the JSON output be out of place? (See the illustration after this list.)
  4. It isn't totally clear, until you have banged your head into every wall, what the workflow is. Is there a way to clarify that with a diagram/wireframe of the workflow, or would that be inappropriate?
  5. Does this allow for rerunning a test once it has been run? I tried, but kept bumping into errors like "DB already exists" and "namespace already exists", so I had to create a new namespace and delete the COMPARE file each time. Is that just how it works, or is there a way to rerun that isn't easily identifiable?
  6. Initially I tried to get this all running in a Docker container, but I ran into too many challenges to justify the time spent. I ended up running two parallel instances (e.g. 2024.1 and 2024.3) locally and that worked a treat. Has anyone been able to test this using a Docker container?
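
To illustrate question 2, here is the kind of wrapper I have in mind. Everything here except the ConfigureAndImport call (the PV.Runner class name, the method, its parameters) is my own invention and not something that ships with the toolkit; it is just a sketch of how the manual terminal step could be wrapped so it can be driven from a script or scheduled task:

    Class PV.Runner
    {

    /// Hypothetical wrapper: run the configure/import step for one testing
    /// namespace and extract path, and surface any error in the returned status.
    ClassMethod Run(pNamespace As %String, pPath As %String) As %Status
    {
        set sc = ##class(HS.InteropTools.HL7.Compare.ProductionExtract).ConfigureAndImport(pNamespace, pPath)
        if $SYSTEM.Status.IsError(sc) {
            do $SYSTEM.Status.DisplayError(sc)
        }
        quit sc
    }

    }

Something like that could then be chained with the compare/report steps once their inputs are pinned down.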
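
And on question 3, if what you are looking at is the ObjectScript source (or terminal output) that builds the JSON, the doubled double quotes are just string-literal escaping; the emitted JSON itself is valid. A quick illustration (the key name is made up):

    HSLIB>write "{""MessageCount"":3}"
    {"MessageCount":3}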

I'm interested in what you all have experienced, whether you have made any customizations, and whether you have figured out any workarounds.

Thanks

Product version: IRIS 2024.3
$ZV: IRIS for Windows (x86-64) 2024.3 (Build 217U)
Discussion

For the hurdles section:

  1. At the moment, I always load the most recent version of the Production Validator available for download from WRC. This ensures that you always have all the classes listed in the documentation. I think in the future this may not be needed, but currently it is good practice.
  2. I think this is a little vague, but maybe on purpose, because you can name that new TESTING_NAMESPACE anything you would like.
  3. If you are trying to simulate differences, probably the best path is to take the optional route of running the ConfigureNamespace and Import methods separately. That lets you run ConfigureNamespace to create the namespace, manually make changes to the code so that there will be differences in your output, and then run the Import.

For the outstanding questions:

  1. BPLs are handled just fine. I have a couple in our environment and haven't run into any issues using the PV. As long as the code exists in the environment, PV doesn't care if you are running standard Rules and DTLs, BPLs, or a custom class.
  2. Yes, more of this could be scripted. I believe in the previous iteration of the code more of the commands needed user input; it looks like they have made a lot of good progress with this latest version, so I can only imagine more will follow.

  5. In order to run the complete test again, with messages completely reprocessed, you would need to delete the TESTING_NAMESPACE (whatever you called it in the ConfigureNamespace method), copy the original IRIS.DAT from your source to your folder, and then start the process again. That cleanup could probably be added to the documentation.
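
Until that makes it into the documentation, here is a rough sketch of the cleanup as a snippet you could drop into a utility method and run in %SYS. The namespace/database name "PVTEST" and the paths are placeholders, and it uses the standard Config, SYS.Database, and %File APIs rather than anything shipped with the toolkit, so adapt it to however your ConfigureNamespace call was set up:

    // remove the testing namespace created by ConfigureNamespace
    set sc = ##class(Config.Namespaces).Delete("PVTEST")
    if $SYSTEM.Status.IsError(sc) do $SYSTEM.Status.DisplayError(sc)
    // remove the database definition and the physical database behind it
    set sc = ##class(Config.Databases).Delete("PVTEST")
    if $SYSTEM.Status.IsError(sc) do $SYSTEM.Status.DisplayError(sc)
    set sc = ##class(SYS.Database).DeleteDatabase("c:\temp\pv\")
    if $SYSTEM.Status.IsError(sc) do $SYSTEM.Status.DisplayError(sc)
    // re-seed the working folder with a fresh copy of the source IRIS.DAT
    if '##class(%File).CopyFile("c:\source\IRIS.DAT","c:\temp\pv\IRIS.DAT") write "copy of IRIS.DAT failed",!

Something along those lines should clear the "DB already exists" / "namespace already exists" errors before a rerun.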

Given the way we built our namespaces, I had to create a smaller namespace that includes all of the nuances we rely on: BPLs making JDBC MS SQL stored procedure calls, internal Caché calls, and linked table/stored procedure calls.

Larger namespaces will cause the comparison run to take longer.

So, my mini namespace is used to test new versions of HealthShare Health Connect.