Hi George / Gordon,

Feels like there are too many potential trip hazards in getting this right, plus the possibility of creating false positives.

I would recommend using an HTTP client instead.

Here is a quick and dirty example; it uses %Net.HttpRequest to make the REST calls. Take a look at the documentation here http://docs.intersystems.com/latest/csp/documatic/%25CSP.Documatic.cls as there are many other settings (e.g. credentials) that you might also want to set.

The example has both a GET and a POST...

Class Foo.Rest1 Extends %CSP.REST
{

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
    <Routes>
    <Route Url="/book" Method="POST" Call="PostBook" />
    <Route Url="/book/:ibn" Method="GET" Call="GetBook" />
    </Routes>
}

ClassMethod PostBook() As %Status
{
    set obj=##class(%DynamicObject).%FromJSON(%request.Content)
    set ^Foo.Books(obj.ibn,"name")=obj.name
    set ^Foo.Books(obj.ibn,"author")=obj.author
    quit 1
}

ClassMethod GetBook(pIBN) As %Status
{
    if '$Data(^Foo.Books(pIBN)) set %response.Status=..#HTTP404NOTFOUND return 1
    set book = {
        "ibn": (pIBN),
        "name": (^Foo.Books(pIBN,"name")),
        "author": (^Foo.Books(pIBN,"author"))
    }
    write book.%ToJSON()
    return 1
}

ClassMethod GetNewRestHelper() As %Net.HttpRequest
{
    set httpRequest=##class(%Net.HttpRequest).%New()
    set httpRequest.Server="localhost"
    set httpRequest.Port=57772
    set httpRequest.Timeout=2
    quit httpRequest
}

ClassMethod Tests() As %Boolean
{
    //basic assert, replace with actual unit tester implementation
    #define assert(%a,%b) write !,"Test ",$increment(count)," ",$select(%a=%b:"PASSED",1:"FAILED, expected """_%b_""" received """_%a_"""")

    //TEST 1 : should save ok
    set rest=..GetNewRestHelper()
    set book = {
        "ibn": 1234,
        "name": "Lord Of The Rings",
        "author": "J.R.R. Tolkien"
    }
    set json = book.%ToJSON()
    set rest.ContentType="application/json"
    do rest.EntityBody.Write(json)
    do rest.Post("/foo/book")
    $$$assert(rest.HttpResponse.StatusCode,200)

    //TEST 2 : should find book 1234
    set rest=..GetNewRestHelper()
    do rest.Get("/foo/book/1234")
    $$$assert(rest.HttpResponse.StatusCode,200)
    set json=rest.HttpResponse.Data.Read(32000)
    set book=##class(%DynamicObject).%FromJSON(json)
    $$$assert(book.ibn,1234)
    $$$assert(book.name,"Lord Of The Rings")
    $$$assert(book.author,"J.R.R. Tolkien")

    //TEST 3 : should NOT find book 5678, test should fail
    set rest=..GetNewRestHelper()
    do rest.Get("/foo/book/5678")
    $$$assert(rest.HttpResponse.StatusCode,200)
    quit 1
}

}

Hi Tom,

The HTTP error looks like a red herring. I suspect it's just a generic CSP broker issue, probably a secondary side effect of the SQL connection failing.

It's difficult to see anything specific in your post that might be causing the SQL connection to fail. I normally keep tweaking settings and trying different drivers at this stage until it works (sorry, I know it's obvious advice).

I assume from port 1433 that you are trying to connect to SQL Server. The only thing that I am suspicious of is whether you really need a Kerberos connection to SQL Server. If you have not specifically configured this on SQL Server then I think the 459 error is confusing matters, in which case I would raise a WRC issue.

As an alternative, you could try using a JDBC driver instead. I've had better luck in the past using JDBC on Linux.

Sean.

Hi Ponnumani,

I agree with Robert and Eduard; it's very difficult to answer such a broad question in any detail without duplicating, at length, what has already been written many times over.

However, tree traversal and its methods are very pertinent to mastering the fundamentals of Caché and worthy of a reference answer to help anyone else starting out on this path.

It's important to understand that Caché is essentially a modern implementation of the M programming language / database.

M was first developed back in the late 1960s. It was mainly implemented on PDP machines, which at that time were limited to a few hundred KB of memory. These limitations greatly influenced the design and efficiency of M's data globals, which would go on to be its core strength.

Caché was an evolutionary step in this technology time line which added a new high level object oriented programming language as well as a comprehensive set of libraries and tools.

Despite being an evolutionary step, Caché is still considered an implementation of M, adhering to much of the 1995 ANSI language standards.

Whilst some people distance themselves from this heritage, many of us embrace these roots and like to combine the modern aspects of Caché with much of what we mastered as M developers, a combination that makes for a very powerful platform that you won't find in any other single technology.

It's important to understand this history as much of the information and teachings to master the raw fundamentals of Caché can be found by using information and teachings that still exist for the M programming language.

To help on-ramp any new Caché developers, I always buy them "M Programming: A Comprehensive Guide" by Richard Walters as a staple learning tool. There are also many other good books out there, and a simple search on MUMPS books will bring up a good selection.

All of these books will dive into globals, the hierarchical tree data storage that is very relevant to your question. They will also explain the $order and $query functions, which are key to traversing globals.

Most of the time you will have shallow global data, and traversing it will be implemented with an equal number of nested $order loops. When the data is deeper, or the traversal needs to be more generic, you can either use the $query function for a brute force traversal, or implement a combination of $order and a recursive loop.
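As a rough illustration, assuming a hypothetical two-level ^Books global (e.g. set ^Books("Tolkien","The Hobbit")=""), a nested $order traversal and the equivalent generic $query walk would look like this...

//nested $order traversal of a two-level global
set author=$order(^Books(""))
while author'="" {
    set title=$order(^Books(author,""))
    while title'="" {
        write !,author," : ",title
        set title=$order(^Books(author,title))
    }
    set author=$order(^Books(author))
}

//generic depth-first traversal of the same global using $query
set node=$query(^Books(""))
while node'="" {
    write !,node," = ",$get(@node)
    set node=$query(@node)
}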

So the short answer to your question would be to seek out everything that you can find on Caché / M globals and the $order and $query functions. Take the time to read and practice using globals directly and you will master tree traversal. In addition, if you haven't already done so, I would also seek out generic books and teachings on programming algorithms that cover topics such as recursion and trees.

I hope this helps, and we are more than happy to help when you get stuck with more specific (and hopefully more detailed) questions.

Hi Motti,

Sounds like you could get in a tangle trying to do this.

My approach would be to throw more disk space at it and keep everything at 60 days; it's the lowest TCO solution with minimal effort.

If you were to move some of the interfaces, then you could use your web server's config to reverse proxy specific URL paths to different namespaces. If you're using the default Apache web server then you might look to implement a dedicated installation.

Alternative suggestions.

1. Write your own message purge utility.

2. Use an enterprise message bank (latest versions of Ensemble recommended)...

http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=EGDV_message_bank

The %Net.HttpRequest class provides an HTTP client for interacting with HTTP servers.

Under the hood it essentially opens a TCP connection to the given IP address (after a DNS lookup), sends the required HTTP headers and request body / MIME parts, and waits for the response, which is unpacked into a response object.

I'm not sure if you are asking whether it's safe to use or how to use it, so here is a reply to both.

The class is very robust and adheres to the HTTP/1.1 specification.

You will find that this class underpins many other classes in Caché / Ensemble and should be considered battle tested code that you can fully rely on.

To use it, the best way to start out would be to play around with the code in the Caché terminal. You can cut and paste the snippet you provided directly into the terminal (without the tabs), and this is what you will see...
 

USER>Set httprequest=##class(%Net.HttpRequest).%New()
 
USER>Set httprequest.Server="www.intersystems.com"
 
USER>Do httprequest.Get("/")
 
USER>Do httprequest.HttpResponse.OutputToDevice()
HTTP/1.1 301 Moved Permanently
CONNECTION: keep-alive
CONTENT-LENGTH: 178
CONTENT-TYPE: text/html
DATE: Thu, 12 Oct 2017 14:14:02 GMT
LOCATION: https://www.intersystems.com/
SERVER: nginx
X-TYPE: default
 
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx</center>
</body>
</html>

As you can see, it returns a 301 status code because the InterSystems website redirects plain HTTP requests to HTTPS. There are settings on the request object that support HTTPS.
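As a sketch, assuming you have first created an SSL/TLS configuration called "MySSL" in the Management Portal, enabling HTTPS would look something like this...

USER>set httprequest=##class(%Net.HttpRequest).%New()
 
USER>set httprequest.Server="www.intersystems.com"
 
USER>set httprequest.Https=1
 
USER>set httprequest.Port=443
 
USER>set httprequest.SSLConfiguration="MySSL"
 
USER>do httprequest.Get("/")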

Better to play around with a plain HTTP site to begin with, so here is an example with my website...

(A small trick: the command zw will dump the contents of an object to screen, but not if that object is a property of another object, so assign it to a variable and zw that variable. Here you can see the status code and other response properties that you can use.)
 

USER>set httprequest=##class(%Net.HttpRequest).%New()
 
USER>set httprequest.Server="www.memcog.com"
 
USER>set sc=httprequest.Get("/")
 
USER>set response = httprequest.HttpResponse
 
USER>zw response
response=<OBJECT REFERENCE>[4@%Net.HttpResponse]
+----------------- general information ---------------
|      oref value: 4
|      class name: %Net.HttpResponse
| reference count: 3
+----------------- attribute values ------------------
|    ContentBoundary = ""
|        ContentInfo = ""
|      ContentLength = 4660
|        ContentType = "text/html"
|               Data = "5@%Stream.FileCharacterGzip"
|Headers("ACCEPT-RANGES") = "bytes"
|Headers("CONTENT-ENCODING") = "gzip"
|Headers("CONTENT-LENGTH") = 4660
|Headers("CONTENT-TYPE") = "text/html"
|    Headers("DATE") = "Thu, 12 Oct 2017 14:29:58 GMT"
|    Headers("ETAG") = """66a36a6a-3f38-53cedab4fb6b1"""
|Headers("LAST-MODIFIED") = "Tue, 20 Sep 2016 10:12:42 GMT"
|  Headers("SERVER") = "Apache"
|    Headers("VARY") = "Accept-Encoding,User-Agent"
|        HttpVersion = "HTTP/1.1"
|       ReasonPhrase = "OK"
|         StatusCode = 200
|         StatusLine = "HTTP/1.1 200 OK"
+-----------------------------------------------------

USER>write response.Data.Read(1500)

<!--
                                                         DEVELOPED BY...
 

__/\\\\____________/\\\\__/\\\\\\\\\\\\\\\__/\\\\____________/\\\\________/\\\\\\\\\_______/\\\\\__________/\\\\\\\\\\\\_
 _\/\\\\\\________/\\\\\\_\/\\\///////////__\/\\\\\\________/\\\\\\_____/\\\////////______/\\\///\\\______/\\\//////////__
  _\/\\\\///\\\/\\\/_\/\\\_\/\\\\\\\\\\\_____\/\\\\///\\\/\\\/_\/\\\__/\\\______________/\\\______\//\\\_\/\\\____/\\\\\\\_
   _\/\\\__\///\\\/___\/\\\_\/\\\///////______\/\\\__\///\\\/___\/\\\_\/\\\_____________\/\\\_______\/\\\_\/\\\___\/////\\\_
    _\/\\\____\///_____\/\\\_\/\\\_____________\/\\\____\///_____\/\\\_\//\\\____________\//\\\______/\\\__\/\\\_______\/\\\_
     _\/\\\_____________\/\\\_\/\\\_____________\/\\\_____________\/\\\__\///\\\___________\///\\\__/\\\____\/\\\_______\/\\\_
      _\/\\\_____________\/\\\_\/\\\\\\\\\\\\\\\_\/\\\_____________\/\\\____\////\\\\\\\\\____\///\\\\\/_____\//\\\\\\\\\\\\/__
       _\///______________\///__\///////////////__\///______________\///________\/////////_______\/////________\////////////____

-->

<html>
    <head>
    <title>MEMCOG: Web Development, Integration, Ensemble, Healthshare, Mirth</title>
    <link href="main.css" rel="stylesheet">
    <link href="https://fonts.googleapis.com/css?family=PT+Sans" rel="stylesheet">
    <script src="https://use.fontawesome.com/

When you created your router, you selected VDocRoutingEngine, which expects a doc type in the request message.

For your requirements you should use the standard RoutingEngine.

I would remove the process you have created and start again: open the business process wizard, and select...

EnsLib.MsgRouter.RoutingEngine

from the business process class drop-down list (within the "all processes" tab), then add the rules as you have done previously.

Hi Tom,

There is no default rename option inside Studio.

The workaround is to do the following...

1. File > Open Project , then select the project

2. File > Save As , provide a name for the new project

As it's the default, you won't need to clean up, but if you want to delete a previously named project then click File > Open Project, right click on the project name, and select delete from the context menu.

Hi Thembelani,

Assuming that you have got as far as serialising the objects to an XML stream, and you just want to prettify the XML, you can use the following solution.

Add the following to a new / existing class...
 

ClassMethod Format(pXmlStream As %CharacterStream, Output pXmlStreamFormatted As %CharacterStream) As %Status
{
    set xslt=##class(%Dictionary.XDataDefinition).%OpenId(..%ClassName(1)_"||XSLT",-1,.sc)
    quit ##class(%XML.XSLT.Transformer).TransformStream(pXmlStream,xslt.Data,.pXmlStreamFormatted)
}

XData XSLT
{
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
 <xsl:output omit-xml-declaration="yes" indent="yes"/>
 <xsl:strip-space elements="*"/>
 <xsl:template match="node()|@*">
  <xsl:copy>
   <xsl:apply-templates select="node()|@*"/>
  </xsl:copy>
 </xsl:template>
</xsl:stylesheet>
}

To use it...

set sc=##class(Your.Class).Format(.xmlStream,.xmlStreamFormatted)


Pass in your XML stream as the first argument (by ref) and get back the formatted stream as the second argument (by ref).
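For example, from the terminal (a quick sketch using a temporary character stream)...

USER>set xml=##class(%Stream.TmpCharacter).%New()
 
USER>do xml.Write("<root><item>1</item><item>2</item></root>")
 
USER>set sc=##class(Your.Class).Format(.xml,.formatted)
 
USER>do formatted.OutputToDevice()

The last command should write the same XML back to the screen, pretty printed with one element per line.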

Hi Maks,

This is an interesting topic that I have spent a great deal of time thinking and working on myself.

I developed a transcompiled object oriented Mumps language called MOLE a number of years ago...

http://www.memcog.com/mole/index.html

I've also had a number of attempts at writing a JavaScript to Mumps transcompiler and can share my thoughts on why this is not a good idea.

Firstly, COS is a perfectly good language that I am very happy to use on a day to day basis. There are times when it feels a little stale, particularly as I jump around a lot of languages, but on the whole, it does a very good job. Any peccadilloes or missing features seem trivial when you have all the other benefits of the entire Caché platform. That said, there is always room for improvement, and anything that helps us write better, smaller, and more maintainable code has to be a good thing.

One thing that I learned from MOLE is that when building a transcompiler you have to work with the underlying language. Fundamental aspects such as coercion should be first class citizens of the transcompiled language.

As an example, JavaScript and Mumps differ greatly on data types and coercion. Mumps has a single data type whilst JavaScript has six. Mumps has a fairly simple set of coercion rules whilst JavaScript's are far more complex. As an exploration, I wrote a bank of 1600 coercion tests that Mumps would have to replicate, which highlights some of the complexities...

https://gist.github.com/SeanConnelly/4c25afc61113ee113ec14211f59076ec

The other consideration is that whilst JavaScript can have just-in-time compilers, it is fundamentally a run time language with a complex run time environment for managing things like dynamically changing prototypal inheritance and async call stacks, all of which would have to be simulated in a Mumps compiled environment. On reflection, it's just a really complex idea that would be much less performant than its underlying language.

So any transcompiled language should essentially be a superset of its underlying language, not a wild shift in syntax or behaviour towards JavaScript, C# or any other language.

We can learn lessons from other transcompiled languages such as CoffeeScript and TypeScript. For me CoffeeScript is marmite and I refuse to even try to like it. Whilst there were some interesting features in CoffeeScript, it went too far by changing the syntax of the language. It's interesting, however, that features such as arrow functions influenced and made their way into the core JavaScript language (ES6). This shows that a transcompiled language can be a good test bed for ideas that can influence and improve an underlying language.

The gold standard of transcompilers is TypeScript. TypeScript didn't try to change the language, but rather provided a backwards compatible compiler for future features of JavaScript, whilst adding a small amount of syntactic sugar for compile time type checking.

In COS we already have syntax for declaring things like argument types, but it's just a shame that the compiler does not enforce them, something I touched on here...

https://community.intersystems.com/post/compilation-gotchas-and-request-...

TypeScript actually shows that we don't need stronger types, just better compile time validation of how types are used. For me, I would borrow this aspect of TypeScript as the first feature of any transcompiled language. As an example...

set x = "Hello World"

can be marked as type string...

set x : string = "Hello World"

This would be removed at compile time and would have no run time behaviour, but it would enable the compiler to stop the variable from being used incorrectly, e.g. being passed as an argument that should have been of type integer.

This would allow code to still be written loosely, but where someone wants better protection, the compiler will warn when a declared type is used incorrectly. Generics would also be a good fit.

The other aspect I would borrow from TypeScript would be to shim backwards compatible features.

An interesting example is the return command, which was introduced in 2013. It's nice to see a new language feature, but then, of course, we have the problem of backwards compatibility. I think back to using the asterisk in $piece, e.g. $piece(x,",",*), which is a great feature but tripped me up on one particular deployment, and it still makes me a little nervous to use in library code.

This is where a transcompiler could be a big help. Being able to use these new features and know that they are transcompiled into backwards compatible code would be a really useful tool.

The next item on my list would be the addition of an "in" keyword, to be combined with the "for" command. Whilst the $order function is mighty powerful, it's not the prettiest to look at. Instead of writing...

set name=$order(^names(""))
while name'="" {
    write !,name
    set name=$order(^names(name))
}

you end up with a cleaner and easier to read solution...

for name in ^names {
    write !,name
}

which obviously gets cleaner when these $order loops become heavily nested.

Next on my list would be to promote simple types to fake objects. As in the above type declaration example, if a variable is declared as a string then it would automatically inherit a whole host of useful string functions, e.g.

set x : string = "Hello World"
write x.toUpper()

HELLO WORLD

This would compile down to...

set x = "Hello World"
write $zconvert(x,"U")

Next up would be lambda functions. Combined with an array type, they would allow for things like map, reduce and filter methods on variables declared as an array, or implicitly referenced as a global. Arrow function syntax would sit well with COS...

//filter out all numbers
set bar : array = foo.filter( (value, index) => {
    if 'value.IsNumber() return value
})

From here you can then introduce method chaining, which would make code much more readable. So instead of...

set x=$zconvert($extract($piece($piece(y,","),"~",2),1,10),"U")

you end up with...

set x=y.piece(",").piece("~",2).extract(1,10).toUpper()

As a starter, these would all be great additions to an already great language. There are probably more suggestions, such as built-in async callback support, but this is where I would start.

That said, there is one important aspect that also needs to be addressed: all of these additions would break both Studio and Atelier.

You could also argue that these types of additions would only be half of a solution; improved tooling should and must go hand in hand with a transcompiler. As a minimum, the IDE highlighting needs to support the new syntax, as well as adding value with improved real time type checking.

I spent a great deal of time working on IDE solutions for this very reason. I felt there was no point releasing anything unless it had its own IDE as well. I went through lots of experiments, extending Eclipse and various other IDEs, but I always came back to one important requirement: any solution should and must work remotely (as Studio does), and inevitably it would have to be a browser based solution to work as a modern day cloud supporting solution.

Here is a glimpse of what I have been up to; it's 100% JavaScript and built on top of its own in-house widget library...

http://www.memcog.com/images/Nebula.png

It's far from finished, as I am currently focused on a better way to write API code with Caché, which is what the Cogs library is about. If I eventually get there then the plan is to release all of this into the wild, along with the MOLE transcompiler, which could then be used to extend COS, or to experiment with other Mumps based ideas and innovations.

What I would say is that it's very easy to underestimate how much effort is required to make a transcompiler robust, fully unit tested, tooled up and supported to the degree that anyone would consider using it in production. When you're doing this part time and without a sponsor, it's a long hard road...

Sean

Hi Murillo,

It is possible to return messages back to your service.

HL7 TCP services have a setting called "Ack Mode", found under "Additional Settings".

By default, this is set to "Immediate", which will produce an automated HL7 ACK at the time the message is received.

To return an ADR A19 message from a process/operation as the ACK to a QRY A19, you need to change this setting to "Application".

You don't mention how the ADR A19 is generated, so I will assume two possibilities.

1. The operation sends the QRY A19 to an upstream system, which returns the ADR A19 message as its ACK.

In this instance wire up the service to the operation directly (using application mode) and it will just work.

Note, operations have a setting called "Reply Code Actions". If the upstream system returns an error ACK then by default Ensemble will fail the message and the ACK will not make it back downstream. Most likely these types of ADR A19 messages do need to make it back with the error message, so change the reply code error action to "W", which will log a warning but treat the message as OK.

2. The operation is querying a database, for example SQL Server.

The operation's OnMessage (or message map method) has a request and a response argument. You will need to make the response argument of type EnsLib.HL7.Message. The operation should perform the SQL query and then use the resultset data to create the ADR A19 EnsLib.HL7.Message, which will be returned by reference as the response argument.
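As a rough sketch of the shape this takes (the query, field paths and segment content are hypothetical placeholders; the real mapping depends on your schema and HL7 version)...

Method OnMessage(pRequest As EnsLib.HL7.Message, Output pResponse As EnsLib.HL7.Message) As %Status
{
    // look up the patient via the SQL outbound adapter
    set sc=..Adapter.ExecuteQuery(.rs,"SELECT Name,DOB FROM Patients WHERE Id = ?",pRequest.GetValueAt("QRD:8"))
    if $$$ISERR(sc) quit sc
    // seed the ADR A19 response from a raw string, then fill in the detail
    set raw="MSH|^~\&|..."_$char(13)_"MSA|AA|"_pRequest.GetValueAt("MSH:10")
    set pResponse=##class(EnsLib.HL7.Message).ImportFromString(raw,.sc)
    set pResponse.DocType="2.4:ADR_A19"
    // ...use rs to populate the QRD/PID segments via SetValueAt()...
    quit sc
}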

Note, if you are using an HL7 router between the service and the operation, you will also need to configure the "Response From" setting so the router understands which operation it should expect an ACK from (as some messages will be routed to multiple targets).

Also, routers provide segment validation of messages. If the inbound QRY A19 message is malformed then the router will become responsible for producing an error ACK message. If you want to shift this responsibility upstream (for pass-through solutions) then either wire the service directly to the operation or change the "Validation" settings.

Hope that helps...

Sean
 

To programmatically access the raw BPL XML...

ENSDEMO>set className="Demo.Loan.FindRateDecisionProcessBPL"
 
ENSDEMO>set xdata=##class(%Dictionary.XDataDefinition).%OpenId(className_"||BPL",-1,.sc)

To then access the BPL data as objects...

ENSDEMO>set parser=##class(Ens.BPL.Parser).%New()
 
ENSDEMO>set sc=parser.ParseStream(xdata.Data,.bpl)

E.g.


ENSDEMO>zw bpl
bpl=<OBJECT REFERENCE>[7@Ens.BPL.Process]
+----------------- general information ---------------
|      oref value: 7
|      class name: Ens.BPL.Process
| reference count: 4
+----------------- attribute values ------------------
|          Component = ""
|  ContextSuperClass = ""
|     DerivedVersion = ""
|             Height = 2000
|           Includes = ""
|           Language = "objectscript"
|             Layout = ""
|            Package = ""
|            Request = "Demo.Loan.Msg.Application"
|           Response = ""
|            Version = ""
|              Width = 2635
+----------------- swizzled references ---------------
|          i%Context = ""
|          r%Context = "8@Ens.BPL.Context"
|           i%Parent = ""
|           r%Parent = ""
|         i%Sequence = ""
|         r%Sequence = "17@Ens.BPL.Sequence"
+-----------------------------------------------------

ENSDEMO>set context=bpl.Context
 
ENSDEMO>zw context
context=<OBJECT REFERENCE>[8@Ens.BPL.Context]
+----------------- general information ---------------
|      oref value: 8
|      class name: Ens.BPL.Context
| reference count: 9
+----------------- attribute values ------------------
|           (none)
+----------------- swizzled references ---------------
|           i%Parent = ""
|           r%Parent = "7@Ens.BPL.Process"
|       i%Properties = ""
|       r%Properties = "9@Ens.BPL.PropertyList"
+-----------------------------------------------------

It's probably not a good idea to change the production configuration settings with every operation call; each change will trigger a production update request on the production view.

If the customer wants to see a counter since a defined point then it sounds like there is a point in time that can be used, in which case you can get a count using SQL...

select count(id)
from Ens.MessageHeader
where TargetConfigName='CosTutorial.testIncrementalParameter'
 and TimeCreated between '2017-08-07' and '2017-08-08'

Hi Sebastian,

> The rest service won't see a whole lot of usage still I wonder whether it is a good idea. Or let me rephrase this, it certainly isn't a good idea but is it a viable one due to lack of alternatives?

You don't have to use REST; you can use a standard CSP page (particularly for anyone on a pre-REST version of Caché).

Class Foo.FileServer Extends %CSP.Page
{

ClassMethod OnPreHTTP() As %Boolean [ ServerOnly = 1 ]
{
    set filename=%request.Get("filename",1)
    set %response.ContentType=..GetFileContentType($p(filename,".",*))
    do %response.SetHeader("Content-Disposition","attachment; filename="""_$p(filename,"\",*)_"""")
    quit 1
}

ClassMethod GetFileContentType(pFileType) As %String
{
    if pFileType="pdf" quit "application/pdf"
    if pFileType="docx" quit "application/msword"
    if pFileType="txt" quit "text/plain"
    //TODO, add more MIME types...
    quit ""
}

ClassMethod OnPage() As %Status [ ServerOnly = 1 ]
{
    set filename=%request.Get("filename",1)
    set file=##class(%File).%New(filename)
    do file.Open("R")
    do file.OutputToDevice()
    Quit $$$OK
}

}


There are three things to point out.

1. You need to set the ContentType on the %response object
2. If the user is going to want to use the URL directly and download the file to a local disk, then set the content disposition; otherwise, the file name will end in .CLS
3. Simply use the OutputToDevice() method on the %File class to stream the file contents to the client.

This can be easily applied to REST, but there is a caveat that you need to look out for.

The initial REST path might look like this...

/file/:fileref


However, if you allow a full path name in the file name (including folders) then you will hit two problems:

1. Security, file paths will need validating, otherwise, any file could be accessed
2. Caché REST just does not work well with \ characters or %5C characters in the URL

If you want to get around the second problem, use a different folder delimiter.

Personally, I would limit the solution to a few nick-named folders, such that the URL match would be

/file/:folder/:file


So

/file/Test/Hello.txt

would map to, say, C:\REST\Test\Hello.txt, or whatever alternative location you provide. Putting that together, the solution would look something like...

Class Foo.RestRouter Extends %CSP.REST
{

XData UrlMap
{
<Routes>
  <Route Url="/file/:folder/:file" Method="GET" Call="GetFile" />
</Routes>
}

ClassMethod GetFile(folder, file)
{
    set filename="C:\REST\"_folder_"\"_file
    set %response.ContentType=..GetFileContentType($p(filename,".",*))
    do %response.SetHeader("Content-Disposition","attachment; filename="""_$p(filename,"\",*)_"""")
    set fileObj=##class(%File).%New(filename)
    do fileObj.Open("R")
    do fileObj.OutputToDevice()
    Quit $$$OK
}

ClassMethod GetFileContentType(pFileType) As %String
{
    if pFileType="pdf" quit "application/pdf"
    if pFileType="docx" quit "application/msword"
    if pFileType="txt" quit "text/plain"
    //TODO, add more MIME types...
    quit ""
}

}

Note, if you look at Dmitry's solution, the file name is added to the CGI variables, which works around some of the issues I have mentioned, but of course the REST path would no longer keep its state if bookmarked etc.

In general, REST is fine for this type of use case.

Sean

There are a few approaches.

The schedule setting on a service can be hijacked to trigger some kind of start job message to an operation. It's not a real scheduler, and IMHO it's a bit of a fudge.

A slightly non-Ensemble solution is to use the Caché Task Manager to trigger an Ensemble service at specific times. The service would be adapterless and would only need to send a simple start message (Ens.StringContainer) to its job target. A custom task class (extending %SYS.Task.Definition) would use the CreateBusinessService() method on Ens.Director to create an instance of this service and call its ProcessInput() method.
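A minimal sketch of such a task (the class and service names are hypothetical; "Foo.TriggerService" would need to exist in the running production)...

Class Foo.TriggerTask Extends %SYS.Task.Definition
{

Property TargetService As %String [ InitialExpression = "Foo.TriggerService" ];

Method OnTask() As %Status
{
    // create an instance of the adapterless service and poke it
    set sc=##class(Ens.Director).CreateBusinessService(..TargetService,.service)
    if $$$ISERR(sc) quit sc
    quit service.ProcessInput(##class(Ens.StringContainer).%New("start"))
}

}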

The only downside to this is that the schedule configuration now lives outside of the production settings. If you can live with that then this is an OK approach.

Alternatively, you could write your own custom schedule adapter that uses custom settings for target names and start times. The adapter's OnTask() would get called every n seconds via its call interval setting and would check whether it's time to trigger a process input message for one of the targets. The service would then send a simple start message to that target.
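Something along these lines (a sketch, with a hypothetical comma-delimited StartTimes setting holding HH:MM values)...

Class Foo.ScheduleAdapter Extends Ens.InboundAdapter
{

Parameter SETTINGS = "StartTimes:Basic";

/// comma-delimited list of HH:MM start times, e.g. "06:00,18:30"
Property StartTimes As %String;

/// the last minute that fired, so we don't double trigger
Property LastFired As %String;

Method OnTask() As %Status
{
    // compare the current HH:MM against the configured start times
    set now=$extract($ztime($piece($horolog,",",2)),1,5)
    if ((","_..StartTimes_",")[(","_now_",")) && (..LastFired'=now) {
        set ..LastFired=now
        quit ..BusinessHost.ProcessInput(##class(Ens.StringContainer).%New(now))
    }
    quit $$$OK
}

}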

I prefer this last approach because it's more transparent to an Ensemble developer who is new to the production; also the settings stay with the production and are automatically mirrored to failover members.

Hi Scott,

It does sound like you have duplicate or previously processed records; the SQL inbound adapter will skip these.

One of the things to note about the inbound adapter is that it will call the OnProcessInput() of your service for every row in the resultset. The potential problem with this is that the service could be killed before it has finished off all of the rows in the resultset (a forced shut down etc). For this reason, the adapter has to keep track of the last record it processed, so that it can continue from where it left off.

In your instance, it sounds like this behavior does not fit your data.

If you want to implement your own ground-up service solution then you would have to build your own adapter and run the query execution from that adapter's OnTask() method. Your adapter should probably extend EnsLib.SQL.Common and implement ExecuteQuery* as a member method of the adapter.

If you are going to go down this route then be mindful of building in resilience to handle forced shut downs, so that it can continue from where it left off.

Also, it's good behavior for adapters not to block the production for long periods. Any adapter such as this will be looping around a set of data, calling out to the ProcessInput method of its business host. If there are many rows then this loop could run for minutes, and it's only when an adapter drops out of its OnTask() method that the Ensemble director can cleanly shut down a production. This is why you sometimes see a production struggling to shut down. To avoid blocking the production, the adapter will need to chunk its work down, for instance limiting the query size and continuing on from that point on the next OnTask().
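As a rough sketch of the chunking idea (assuming the adapter extends EnsLib.SQL.Common as above; the table, ID column, batch size and ^Foo.LastProcessedId global are all hypothetical)...

Method OnTask() As %Status
{
    // continue on from the last ID we processed (survives restarts)
    set last=$get(^Foo.LastProcessedId,0)
    set sc=..ExecuteQuery(.rs,"SELECT TOP 100 ID,Data FROM MyTable WHERE ID > ? ORDER BY ID",last)
    if $$$ISERR(sc) quit sc
    while rs.Next() {
        set sc=..BusinessHost.ProcessInput(##class(Ens.StringContainer).%New(rs.Get("Data")))
        if $$$ISERR(sc) quit
        set ^Foo.LastProcessedId=rs.Get("ID")
    }
    quit sc
}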

Alternatively, you could look to use the SQL outbound which avoids all of the high water mark functionality. Your query will always return everything you expect. I often do it this way and have never had any problems with skipped rows (I run separate audit jobs that also double check this).

Sean.

Jeffrey has the right answer.

Murillo, here are the comments for the %SYS.Task.PurgeErrorsAndLogs task that you are currently using; the ^ERRORS global is for Caché-wide errors, not Ensemble errors...

/// This Task will purge errors (in the ^ERRORS global) that are older than the configured value.<br>
/// It also renames the cconsole.log file if it is larger than the configured maximum size.<br>
/// On a MultiValue system it also renames the mv.log file if it grows too large.<br>
/// This Task is normally run nightly.<br>

Hi Simcha,

Your production class has a parent method called OnConfigChange() which you can override.

The method receives two objects, the updated production config object (Ens.Config.Production) and the production item config object that changed (Ens.Config.Item).

You will need to write your own diff solution around this.
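A minimal sketch of an override (the ^Foo.ConfigAudit global is just an illustration; your diff logic would go here)...

ClassMethod OnConfigChange(pProduction As Ens.Config.Production, pItem As Ens.Config.Item) As %Status
{
    // record when the change happened, who made it, and which item changed
    set user=$select($isobject($get(%session)):%session.Username,1:$username)
    set ^Foo.ConfigAudit($increment(^Foo.ConfigAudit))=$listbuild($zdatetime($horolog,3),user,pItem.Name)
    quit $$$OK
}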

Note, this method only gets called for changes made via the management portal; it will not capture changes made to the production class directly.

Alternatively, you could implement an abstract projection class to trigger a diff method on compilation. This will work for both cases. Take a look at this post... https://community.intersystems.com/post/class-projections-and-projection... on how to implement this alternative.

If you want to know who made the change via the management portal then you can get the user's login name using this value...

%session.Username

Sean.