Hi Randy,

I've been around the block on this problem.

There is no simple native solution. I have tried to create a native COS to PDF generator, but it's a much bigger problem than time affords.

I know you are asking for a solution that does not involve creating a web page, BUT, with the exception of going via some other document type (e.g. DOC to PDF), you really have no other easy choice.

However, there is a better way than Zen reports to implement this (IMHO). I use wkhtmltopdf...

https://wkhtmltopdf.org/

It's an executable that takes the name of an HTML file and produces a PDF file. You can also control headers, page numbers, etc. with it. It's rock solid and really easy to use.

Essentially...

1. Create a simple HTML string with an img tag and save it to a file.

2. Call out to wkhtmltopdf using $ZF, passing in the names of the files

Done.

If you want to be clever and not save the image to a linked file then you can take its base64 directly from the HL7 message and embed it into the img tag, e.g.

<img src="data:image/png;base64,iVBORw0KGgo...">
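Putting the two steps together, a minimal sketch might look like this (the method name is mine, and it assumes wkhtmltopdf is installed and on the system path, and that $ZF(-1) call-out is permitted on your instance)...

ClassMethod ImageToPDF(pBase64 As %String, pPDFFile As %String) As %Status
{
    // write a one-line HTML wrapper with the image embedded as base64
    set htmlFile=##class(%File).TempFilename("html")
    set stream=##class(%Stream.FileCharacter).%New()
    do stream.LinkToFile(htmlFile)
    do stream.Write("<html><body><img src=""data:image/png;base64,"_pBase64_"""></body></html>")
    set sc=stream.%Save() quit:$$$ISERR(sc) sc
    // shell out to wkhtmltopdf, $zf(-1) returns the OS exit code (0 = success)
    set rc=$zf(-1,"wkhtmltopdf "_htmlFile_" "_pPDFFile)
    do ##class(%File).Delete(htmlFile)
    quit $select(rc=0:$$$OK,1:$$$ERROR($$$GeneralError,"wkhtmltopdf exited with "_rc))
}

The base64 itself would come straight out of the OBX (or wherever your feed carries the image).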

I have wkhtmltopdf running at numerous hospitals turning out upwards of 10,000 letters a day, rock solid, no problems.

Hope that helps.

Sean.

You can roll your own export function in a few lines of code and tailor it to your specific needs, something like...

ClassMethod Export(pGlobal, pFile)
{
    set file=##class(%File).%New(pFile)
    // "WN" = open in write mode, creating a new file
    do file.Open("WN")
    set key=$order(@pGlobal@(""))
    while key'="" {
        do file.WriteLine(@pGlobal@(key))
        set key=$order(@pGlobal@(key))
    }
    do file.Close()
}

and call it like so...

do ##class(Foo.CSVUtil).Export("^foo","C:\Temp\foo.csv")

(It's a 30-second hack so it might need some tweaking; I've also assumed no CSV escaping is needed since the data is already comma delimited.)

Hi Murali,

You're perfectly right.

You can have multiple namespaces on the same Caché instance for different purposes. These should have a naming convention to identify their purpose. That convention is really down to you. I normally postfix the name with -DEV and -TEST, e.g. FOO-DEV & FOO-TEST.

These namespaces will share commonly mapped code from the main library, but unless configured otherwise they are completely independent from each other. You can dev and test in them respectively without fear of polluting one another.

Tip

You can mark the mode of an instance via the Management Portal > System > Configuration > Memory and Startup. On that configuration page you will see a drop-down for "System Mode" with the options...

Live System
Test System
Development System
Failover System

The options are mostly inert, but what they will do is paint a box on the top of your management portal screens. If you mark it as live then you will get a red box that you can't miss.
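If you ever need to set this from code rather than clicking through the portal, the mode is (on the Caché versions I've used) just held in the ^%SYS global, so from a terminal prompt in %SYS something like this should do it...

// assumption: the portal setting is backed by ^%SYS("SystemMode")
set ^%SYS("SystemMode")="TEST"  // "LIVE", "TEST", "DEVELOPMENT" or "FAILOVER"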

Sean

Hi Richard,

It will be because you are trying to send %Net.MailMessage as an Ensemble message. It does not extend %Persistent, which is the minimum requirement for an Ensemble message.

You will need to create your own custom class that extends Ens.Request and give it matching properties for each of the MailMessage properties that you need at the other end. The operation can then build a new MailMessage from that message class.
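A minimal sketch of the shape of it (the class and property names are mine, trim to the fields you actually need)...

Class Foo.Msg.EmailRequest Extends Ens.Request
{

Property From As %String(MAXLEN = 255);

Property To As %String(MAXLEN = 255);

Property Subject As %String(MAXLEN = 255);

Property Body As %GlobalCharacterStream;

}

...and then in the operation (assuming it uses EnsLib.EMail.OutboundAdapter) something like...

Method OnMessage(pRequest As Foo.Msg.EmailRequest, Output pResponse As Ens.Response) As %Status
{
    // rebuild the MailMessage on this side of the queue
    set msg=##class(%Net.MailMessage).%New()
    set msg.From=pRequest.From
    do msg.To.Insert(pRequest.To)
    set msg.Subject=pRequest.Subject
    do msg.TextData.CopyFrom(pRequest.Body)
    quit ..Adapter.SendMail(msg)
}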

Sean.

In relation to stackoverflow, I think the intention was to stop people abusing the down vote.

I'm not sure that's relevant here, unless the main DC site started displaying some kind of reputation score in the same way as stackoverflow.

> Then downvotes will never be used, so why even have them?

I think dangerously incorrect is a poor description and perhaps needs down voting :)

It should just say "contains incorrect information".

Despite that, there are 1,000,000+ users on stackoverflow regularly using the down vote, so it must be working.

Hi Evgeny,

I couldn't help noticing that this post has a -1 rating.

I can't see anything wrong with the post, so I'm curious what the reason for a down vote would be.

On stack overflow they explain the use of down voting as...

When should I vote down?

Use your downvotes whenever you encounter an egregiously sloppy, no-effort-expended post, or an answer that is clearly and perhaps dangerously incorrect.

You have a limited number of votes per day, and answer down-votes cost you a tiny bit of reputation on top of that; use them wisely.


This post has an interesting conversation around down votes...

https://meta.stackexchange.com/questions/135/encouraging-people-to-expla...

As the original poster explains...

Where the down-vote has been explained I've found it useful & it has improved my answer, or forced me to delete the answer if it was totally wrong

This initiated a change that prompts down voters with the message, "Please consider adding a comment if you think the post can be improved".

We all make mistakes, so it's good to get this type of feedback. It also helps anyone landing on the answer months or years down the line who wouldn't otherwise realise the answer they are trying to implement has a mistake. Perhaps the DC site could do with something similar?

Sean.

Ahhh OK, the Page() method has been overridden, so we lose all of the %CSP.Page event methods.

Since there are no event methods, the only thing you can do is override either the Page() method or the DispatchRequest() method. At least the headers are not written to until after the method call.

I guess what you are doing will be as good as it gets. The only worry is if the implementation changes in a later release.

Ideally the class should have an OnBeforeRequest() and an OnAfterRequest() set of methods.
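In the meantime you could approximate those hooks yourself by overriding Page() and delegating to ##super() (a sketch only, and at the mercy of the same implementation changes)...

ClassMethod Page(skipheader As %Boolean = 1) As %Status
{
    // OnBeforeRequest() equivalent would go here
    set sc=##super(skipheader)
    // OnAfterRequest() equivalent would go here
    quit sc
}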

No problem. I know you said you didn't want to write your own custom code, so this is for anyone else landing on the question.

If you use the DTL editor (which I advise even for ardent COS developers), then you will most likely use the helper methods in Ens.Util.FunctionSet that your DTL extends from, e.g. ToUpper, ToLower etc.

Inevitably there will be other common functions that will be needed time and time again. A good example would be selecting a patient ID from a repeating field with no known order. This can be achieved with a small amount of DTL logic, but many developers will break out and write some custom code for it. The problem, however, is that I see each developer doing their own thing, and a system ends up with custom DTL logic all over the place, often repeating the same code over and over again.

The answer is to have one class for all of these functions and make that class extend Ens.Rule.FunctionSet. By extending this class, all of its ClassMethods will magically appear in the drop-down list of the DTL function wizard. This way all developers across the team, past and future, will be able to see exactly what methods are available to them.

To see this in action, create your own class, something like this...

Class Foo.FunctionSet Extends Ens.Rule.FunctionSet
{

  ClassMethod SegmentExists(pSegment) As %Boolean
  {
      // true if the segment path resolved to a non-empty value
      Quit pSegment'=""
  }

}


Then create a new HL7 DTL. Click on any segment on the source and add an "If" action to it. Now in the action tab, click on the condition wizard and select the function drop down box. The SegmentExists function will magically appear in the list. Select it and the wizard will inject the segment value into this function.
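To give a feel for it, the condition the wizard builds is just a plain function call, something along the lines of (the segment path here is purely for illustration)...

SegmentExists(source.{NTE(1)})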

Whilst some developers feel the need to type these things out by hand, they will never beat the precision of using these types of building blocks and tools. It also means that you can have data mappers with strong business knowledge, but less broad programming skills, bashing out these transforms.

I've found Veeam can cause problems in a mirrored Caché environment.

We invested a great deal of energy trying to get to the bottom of this (with ISC) but never truly fathomed it out.

As you can imagine, we tried all sorts of freeze/thaw scripts and QoS settings. Whilst we did reduce the frequency, we could not completely eliminate the problem.
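(For anyone going down the same road, the freeze/thaw scripts were wrapped around the standard external backup API, called from the Veeam pre- and post-snapshot hooks, roughly like this...)

// pre-snapshot script: run in the %SYS namespace to pause writes
set sc=##class(Backup.General).ExternalFreeze()

// post-snapshot script: resume writes
set sc=##class(Backup.General).ExternalThaw()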

In the end we came up with this strategy.

1. Run Veeam on the backup member nightly
2. Run Veeam on the primary member only once a week during the day
3. Only use Veeam as a way to restore the OS
4. This would exclude all DAT, Journal and WIJ files
5. Take a nightly DAT backup of both members and stash several days of journal files
6. Ensure the mirror pair are on completely different sets of hardware and ideally locations
7. Configure the DAT and Journal files to be on different LUNs on the VM setup
8. Understand how to physically recover these files should the server crash and not recover

Conclusion for me was that Veeam is a nice-to-have tool in a VM set-up, but it should not be a replacement for the battle-tested Caché backup solution.

Hi Andre,

1. Description might be empty whilst NTE might actually exist, so it would be better to do...

request.GetValueAt("NTE(1)")'=""


2. If you are looking to shave off a few characters then take a look at $zdh...

https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY...

$zdh(source.{G62:Date},8)


3. For time there is $zth, but it expects a : in the time (e.g. HH:MM).

https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY...

It doesn't look like you have a colon, so you could use $tr to reformat the time first. This will save you 10 or so characters...

$zth($tr("Hh:Mm","HhMm",source.{G62:Time}))

(The $tr maps each of the template characters H, h, M and m to the corresponding character of the source time, so e.g. "1230" becomes "12:30" before it hits $zth.)


4. You can have more than one PID group, but you should not see more than one PID segment in a PID group. Not all message types define (or allow) a repeating PID group. You might see multiple PID groups in an ADT merge message. You might also see bulk records in an ORU message, but in the real world probably not. If you know the implementation only ever sends one PID group then you will see many developers just hard coding its ordinal key to 1, e.g.

request.GetValueAt("PIDgrpgrp(1).ORCgrp(notHardcodedKey)")


Developing with HL7 can feel verbose to begin with. Tbh, what you are currently doing is all perfectly fine and acceptable.

Sean.

I can remember reading your previous post about the bottleneck. I just wonder if such a fix would make V8 less secure and as such would be unlikely to happen under the current architecture.

I did think about the problem for a while and managed to get a basic AST-based JavaScript to Mumps compiler working. It's just a proof of concept and would require much more effort. Coercion mismatch is the biggest barrier. I came up with 1640 unit tests alone as an exercise to figure out how big the task would be, and that's just 1/20th of the overall unit tests it would require.

Effectively you would write JavaScript stored procedures inline that would run on the Caché side. I'm still not sure of the exact syntax, but it might be something like...

db("namespace").execute( () => {
  // this block of code is pre-compiled into M
  // the code is cached with a guid and that's what execute will call
  // ^global references are allowed because it's not real JavaScript
  var data = [];
  for k1 in ^global {
    for k2 in ^global(k1) {
      data.push(^global(k1,k2));
    }
  }
  return data;
}).then( data => {
  // this runs in node; data is streamed efficiently in one go
  // and not in lots of costly small chunks
  console.log(data);
})


In terms of benchmarks, I would like to see Caché and its ilk on here...

https://www.techempower.com/benchmarks/#section=data-r13&hw=ph&test=update