Software testing is an investigation conducted to provide stakeholders with information about the quality of the software product or service under test.
Continuing on the journey of implementing %UnitTest, @Timothy.Leavitt's Test Coverage package, and automated testing with Jenkins.
My question today: why do we utilize a unit test root directory?
I've been defining packages and classes to write unit tests as I develop, and I run the tests on the command line or via a routine file that is set up as a debug target, which has been working great. I use DebugRunTestCase() to do this (so the classes aren't deleted).
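For reference, the call I mean looks roughly like this (the test class name is a placeholder); DebugRunTestCase() runs classes that are already loaded in the namespace and does not delete them afterwards:

// Run a test class that is already compiled in this namespace; nothing is
// loaded from or deleted under ^UnitTestRoot ("MyApp.Tests.Basics" is a placeholder name).
do ##class(%UnitTest.Manager).DebugRunTestCase("", "MyApp.Tests.Basics")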
Every major software development methodology dedicates a chapter to testing. It is a mandatory practice for achieving quality in deliveries on an ongoing basis.
IRIS Interoperability Productions, formerly known as Ensemble, are fun to work with. Yes, I really think my work is fun. I have seen File Passthrough Services and File Passthrough Operations come in handy. At one point we placed test messages in files, then used a File Passthrough Service with an Inbound File Adapter to send the contents of each file as a Stream to a File Passthrough Operation with an Outbound TCP Adapter.
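As a rough sketch of that pattern (all names here are illustrative, and the framed TCP passthrough operation is just one possible target class, not necessarily the one we used):

Class Demo.TestMsg.Production Extends Ens.Production
{

XData ProductionDefinition
{
<Production Name="Demo.TestMsg.Production">
  <Item Name="TestFileIn" ClassName="EnsLib.File.PassthroughService" PoolSize="1" Enabled="true">
    <Setting Target="Adapter" Name="FilePath">/data/testmessages/</Setting>
    <Setting Target="Host" Name="TargetConfigNames">TestTcpOut</Setting>
  </Item>
  <Item Name="TestTcpOut" ClassName="EnsLib.TCP.Framed.PassthroughOperation" PoolSize="1" Enabled="true">
    <Setting Target="Adapter" Name="IPAddress">127.0.0.1</Setting>
    <Setting Target="Adapter" Name="Port">9000</Setting>
  </Item>
</Production>
}

}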
Continuing our series of examples covering various storage technologies and their performance profiles, this time we look at the growing trend of leveraging internal commodity-based server storage, specifically the new HPE Cloudline 3150 Gen10 AMD processor-based single-socket servers with two 3.2 TB Samsung PM1725a NVMe drives.
I was working on a DTL but kept getting ERROR #5002... MAXSTRING errors. The problem was that most of the DTL GUI action steps only support the string data type when working with the segments. A %String has a limit of 3,641,144 characters, and my OBX5.1 was 5,242,952 characters long in the example provided. Of course, the PACS admin stated that ultra-high-quality files, up to and including 4K resolution, were needed, so we could not get the vendor to compress or reformat these files into compressed JPEGs or something similar.
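One way around the limit (a sketch with illustrative paths, not necessarily what we ended up shipping) is to move the oversized field as a stream from a DTL code action; EnsLib.HL7.Message exposes GetFieldStreamRaw()/StoreFieldStreamRaw() for this kind of copy:

// Inside a DTL <code> action: copy the huge OBX-5 value as a stream so the
// 3,641,144-character %String limit never applies (field path is illustrative).
set tStream = ##class(%Stream.GlobalCharacter).%New()
set tSC = source.GetFieldStreamRaw(.tStream, "OBX:5")
// Storing a stream should be the last change made to that segment of the target.
if $$$ISOK(tSC) { set tSC = target.StoreFieldStreamRaw(tStream, "OBX:5") }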
Are there any tools to check code coverage and to do a lint check for Caché ObjectScript? Developers will be working with HealthConnect (IRIS-based).
The InterSystems technology architect team is often asked about recommended storage arrays or storage technologies. To provide this information to a wider audience as a reference, we are starting a new series covering some of the results we have seen with various storage technologies. As a general recommendation, all-flash storage is highly recommended with all InterSystems products to provide the lowest latency and predictable IOPS capabilities.
The first in the series covers the most recently tested NetApp AFF A300 storage array. This is a middle-tier storage array with several higher models above it. This specific A300 model can support anything from a minimal configuration of only a few drives up to hundreds of drives per HA pair, and it can also be clustered with multiple controller pairs for tens of PBs of disk capacity and hundreds of thousands of IOPS or more.
Windows Subsystem for Linux (WSL) is a feature of Windows that allows you to run a Linux environment on your Windows machine, without the need for a separate virtual machine or dual booting.
WSL is designed to provide a seamless and productive experience for developers who want to use both Windows and Linux at the same time.
Mockable.io (https://www.mockable.io/) is an online service that lets you deploy mock REST or SOAP services in seconds. This is useful for testing the consumption of an API or SOAP service from your production or ObjectScript class without having to implement a real service, and it includes an HTTPS option.
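For example, a unit test or a quick terminal check can point %Net.HttpRequest at the mock endpoint instead of the real service (the hostname, path, and SSL configuration name below are placeholders):

// Call a mocked REST endpoint over HTTPS ("demo1234567.mockable.io", "/patient/123"
// and "MySSLConfig" are placeholders for your own mock URL and SSL/TLS configuration).
set req = ##class(%Net.HttpRequest).%New()
set req.Server = "demo1234567.mockable.io"
set req.Https = 1
set req.SSLConfiguration = "MySSLConfig"
set sc = req.Get("/patient/123")
if $$$ISOK(sc) {
    write req.HttpResponse.StatusCode, !
    write req.HttpResponse.Data.Read(), !
}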
Suppose you have a large set of cubes, pivots, and dashboards in your DeepSee solution.
Then you change a level, measure, or dimension in a cube. Is there any way to test that your change didn't break existing pivots, dashboards, etc.?
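One basic regression check, sketched below under the assumption that you keep the MDX behind your key pivots somewhere testable, is simply to re-execute that MDX after the cube change and assert that it still runs (the cube name and query are illustrative):

// Re-run an MDX statement against the changed cube and verify it still executes.
set rs = ##class(%DeepSee.ResultSet).%New()
set sc = rs.%PrepareMDX("SELECT [Measures].[%COUNT] ON 0 FROM [PATIENTS]")
if $$$ISOK(sc) { set sc = rs.%Execute() }
write $select($$$ISOK(sc):"query still runs", 1:$system.Status.GetErrorText(sc)), !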
When we write unit test cases for Caché ObjectScript code using %UnitTest.TestCase, what is the best way to determine code coverage?
So, let's say my unit test case hits all 10 lines of code of a method in a given class; unit test coverage should then be 100% for that method. But using line-by-line coverage (%Monitor.System.LineByLine) I get the wrong percentage, because it also counts comment/documentation lines as code. So, in practice, we can never achieve 100% code coverage using this API.
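For reference, this is roughly how I drive the line-by-line monitor (routine and class names are placeholders); it counts every line of the generated INT routine, which is why comment lines inflate the denominator:

// Profile one INT routine for the current process only, run the tests, then stop
// ("MyClass.1" and "MyApp.Tests.MyClassTest" are placeholders).
set sc = ##class(%Monitor.System.LineByLine).Start($listbuild("MyClass.1"),$listbuild("RtnLine"),$listbuild($job))
do ##class(%UnitTest.Manager).DebugRunTestCase("", "MyApp.Tests.MyClassTest")
do ##class(%Monitor.System.LineByLine).Stop()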
Those who actively use unit tests with ObjectScript know that test methods are instance methods, not classmethods.
Sometimes this is not very convenient. What I do now, if some test method fails, is COPY(!) the method somewhere else as a classmethod and run/debug it there.
Is there a handy way to call a particular unit test method in the terminal? And, more importantly, is there a handy way to debug the test method?
Why are unit test methods instance methods?
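One partial workaround I know of (a sketch; class and method names are placeholders) is to let the manager run just the one failing method via DebugRunTestCase, which neither loads nor deletes anything, and to set a ZBREAK breakpoint beforehand if you want to step through the code under test:

// Run a single test method of an already-loaded test class.
do ##class(%UnitTest.Manager).DebugRunTestCase("", "MyPkg.MyTests", "", "TestSomething")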
I found a package on OEX for a Sharding Demo. If Sharding is NOT included in the Community License, I cannot use the Community Distribution but require a different one, and I have to add ZPM manually.
We are upgrading from Health Connect 2018.1.3 to IRIS Health Connect 2022.1, and one thing we are particularly hesitant about is whether our Business Rules will work in the new version.
I am trying to come up with a process for bulk testing our rules, and I wanted to know if this could be done programmatically instead of having to modify all the Business Operations to make them write the HL7 data to a file. I caught Orlando Health's presentation at GS2022, but I am not sure that will work for my team.
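One programmatic angle that might work (a sketch only; the rule name and message id are assumptions, and the context class depends on the rule's assist class) is to call the rule engine directly for each stored HL7 message and compare the returned action/target against what the old version produced:

// Evaluate a routing rule against one stored HL7 message outside a running production.
set msg = ##class(EnsLib.HL7.Message).%OpenId(12345)
set context = ##class(EnsLib.HL7.MsgRouter.RoutingEngine).%New()
set context.Document = msg
set sc = ##class(Ens.Rule.Definition).EvaluateRules("MyPkg.ADTRoutingRule", "", context, "", .returnValue, .reason)
write "rule returned: ", returnValue, " (", reason, ")", !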
I am currently working on integrating unit tests into a project. I am also attempting to test productions with the TestProduction class. This works great, but I noticed that no code coverage information is gathered when I run the production tests.
Am I doing something wrong (did I forget to add something to coverage.list, for instance), or is TestProduction not intended for code coverage?
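For comparison, and in case it helps rule out the coverage.list angle, mine is just a plain text file in the unit test directory listing the code to track, roughly like this (package and class names are placeholders, using the .PKG/.CLS item style I understand the Test Coverage package expects):

MyApplication.PKG
MyApplication.Special.SomeClass.CLS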
Did anyone run into this error when stopping a Production from Ens.Director?
Ens.Director::StopProduction => ERROR <Ens>ErrProductionNotQuiescent: IRIS can not become quiescent
It happens sporadically when an automated unit test from a class that extends %UnitTest.TestProduction runs a test on a Business Process. I have already increased the MAXWAIT parameter to 30 seconds, but the error still happens.
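One mitigation that can be scripted around the test run (a sketch; the 120-second timeout and the forced second attempt are arbitrary choices, not a recommendation) is to give StopProduction a longer timeout and only then fall back to a forced stop:

// Try a clean stop first, then force the shutdown if the production is still not quiescent.
set sc = ##class(Ens.Director).StopProduction(120, 0)
if $$$ISERR(sc) {
    set sc = ##class(Ens.Director).StopProduction(10, 1)
}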
The OAuth server is to be deployed on the IRIS learning cloud platform. There are two clients: one on another instance of the learning IRIS server, and the other running locally on my computer in a Docker container.
Both clients get a seemingly correct link (through ##class(%SYS.OAuth2.Authorization).GetAuthorizationCodeEndpoint()) to the login request form:
I needed to know programmatically whether the last run failed or not.
After some exploring, here's the code:
ClassMethod isLastTestOk() As %Boolean
{
    // The top node of ^UnitTest.Result holds the id of the most recent test run
    set in = ##class(%UnitTest.Result.TestInstance).%OpenId(^UnitTest.Result)
    for i=1:1:in.TestSuites.Count() {
        #dim suite As %UnitTest.Result.TestSuite
        set suite = in.TestSuites.GetAt(i)
        // Status = 0 means the suite failed
        return:suite.Status=0 $$$NO
    }
    quit $$$YES
}
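Assuming the method lives in some utility class (the class name below is hypothetical), a CI step can then gate on it:

// Flag the build when the most recent unit test run had a failed suite
// ("MyApp.CI.Utils" is a hypothetical home for isLastTestOk()).
if '##class(MyApp.CI.Utils).isLastTestOk() {
    write "Last unit test run FAILED", !
    // signal the CI job here, e.g. by making the calling script return a non-zero exit code
}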
I want to automatically test that an HL7 business operation works correctly under error conditions. One case is testing the CE acknowledgement. I have planned to implement a test production that includes business services for the different situations (AA, AE, CA, CE, timeout, late response, etc.).
How should I implement an HL7 business service that always returns CE (commit error)? I have tried, but it keeps returning "AA".
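One approach (a sketch with assumed names, not a drop-in class): set the test service's AckMode to Application so that it relays whatever its target returns, and point it at a small business process that always answers with a hand-built CE acknowledgement:

/// Test helper: always replies to an HL7 request with a CE (commit error) ACK.
/// Class name and MSH field values are illustrative only.
Class Test.HL7.AlwaysCEProcess Extends Ens.BusinessProcess
{

Method OnRequest(pRequest As EnsLib.HL7.Message, Output pResponse As EnsLib.HL7.Message) As %Status
{
    // Echo the incoming control id (MSH-10, addressed by segment index) into the NACK
    set tControlId = pRequest.GetValueAt("1:10")
    set tAck = "MSH|^~\&|TESTAPP|TESTFAC|||"_$translate($zdatetime($horolog,8)," :","")_"||ACK|"_tControlId_"|P|2.5"
    set tAck = tAck_$char(13)_"MSA|CE|"_tControlId
    set pResponse = ##class(EnsLib.HL7.Message).ImportFromString(tAck, .tSC)
    quit tSC
}

}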
https://www.youtube.com/embed/Bn5VPKAUs0U