Hi,

Can you also post a screenshot of your Web Application setup on the IRIS server that defines the REST endpoint?

Thanks-  Steve

Hi,

In reading this, I'm wondering: what is the real advantage of #2?

- Steve

Hi,

My guess is that you are calling the query with more arguments than it expects at the times it is failing.

Providing, or inspecting again, the code making the call from the client side would be a good start to fixing your issue.

Steve

Hi Evgeny,

Any news on when the Community Edition of IRIS will be available?

Steve

Hi Chris,

I agree - note the Scheduler basically STARTS or STOPS a business host automatically, on a pre-defined schedule (so it applies to Operations and Processes too, not just Services, which are the only hosts with a CallInterval feature).

For regular invocations of work on services, in almost all cases, absolutely - CallInterval is the way to go, and is what is used mostly. I certainly prefer looking at the production and the status of my business hosts and seeing all of them 'green' and running - even though 'running' might actually mean 'idle in between call intervals'. With the Scheduler, a stopped business host will appear 'gray' while it is not started (i.e. it is disabled).

There are valid use cases, though, where a schedule on, say, a Business Operation makes sense. For example, you may want to send messages to a business operation that interacts with a fee-per-transaction endpoint that is cheaper during certain times of the day. In this case, you can disable the operation, queue messages to it all day (they will accumulate in its queue), then, at the appropriate time, enable the business operation via the Scheduler, and disable it again after a period of your choice.

In this thread's case, the easiest approach is to use OnInit to prepare and send the data. OnProcessInput (called on the interval, which can be very long) would do nothing but quit. That would work. Of course there are other approaches.
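As a minimal sketch of that idea (class and target names here are hypothetical, not from the original thread), the service does its one-off work in OnInit and lets each call interval pass harmlessly:

```objectscript
/// Hypothetical sketch: do the work once at startup, idle thereafter.
Class Demo.StartupService Extends Ens.BusinessService
{

Parameter ADAPTER = "Ens.InboundAdapter";

Method OnInit() As %Status
{
    // Prepare and send the data once, when the service starts
    set request = ##class(Ens.StringRequest).%New()
    set request.StringValue = "startup payload"
    quit ..SendRequestAsync("Some.Target.Operation", request)
}

Method OnProcessInput(pInput As %RegisteredObject, Output pOutput As %RegisteredObject) As %Status
{
    // Nothing to do on each call interval
    quit $$$OK
}

}
```

With a very long CallInterval, OnProcessInput is effectively a no-op heartbeat and the real work happens only when the host (re)starts.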

I wanted to include the Scheduler information as it is often overlooked, and sometimes the full story and use case of the original poster might not be evident - the Scheduler might have been appropriate.

Thanks for your feedback.

Steve

That's great!

Does it have to be a GitHub repo, or can I use Bitbucket?

Also - if we find an error (e.g. WebTerminal on IRIS), can we leave a comment generally, or for the developer?

Steve

Hi

I believe this is a work in progress in the product - but I know of no ETA, so at the moment, everyone builds their own synchronisation techniques as other comments here explain.

* Disclaimer - this is not necessarily the 'Answer' you were looking for, but merely a comment you might find useful *

In the past I created a framework for doing this and more. I'll describe it here:

Using a pre-defined base class, you would create any number of subclasses, one for each type of data you wanted to synchronise (for example, a subclass for mirroring security settings), and in these subclasses implement only two methods:

- The first method, 'export', deals with collecting a set of data from wherever, and saves it as properties of the class (in this case the method would export all security settings and read the XML export back into a global character stream for persistence within the DB). These are persistent subclasses.

- The second method, 'import', is the opposite side: it unpacks the recently collected data and syncs it (for example, exporting the global character stream of the instance data to a temporary 'security.xml' file and running the system APIs to import those settings).

The persistent data created during the 'export' method call is saved to a mirrored database, so by default it becomes available on the other nodes during the 'import' invocation.

A frequently running scheduled task, executing on every mirror member (primary, secondary or async member), would iterate through the known subclasses and, based on that server's role, invoke either the 'export' or the 'import' method of each subclass. (The primary member calls the 'export' method only; the other roles call the 'import' method.)

There are various checks and balances - for example, to ensure only the latest instance data is imported on the import side in case some runs were skipped for some reason, and that no import executes mid-export, i.e. it waits until an export has been flagged as complete.
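The core of that scheduled task can be sketched roughly like this (class names such as Demo.SyncTask and Demo.SyncBase are hypothetical; the checks and balances mentioned above are omitted for brevity):

```objectscript
/// Hypothetical sketch of the mirror-sync task.
/// Every sync subclass extends Demo.SyncBase, which declares Export() and Import().
Class Demo.SyncTask Extends %SYS.Task.Definition
{

Method OnTask() As %Status
{
    set sc = $$$OK
    // The primary member exports; every other role imports
    set isPrimary = $system.Mirror.IsPrimary()
    // Walk all known subclasses of the common base class
    set rs = ##class(%Dictionary.ClassDefinitionQuery).SubclassOfFunc("Demo.SyncBase")
    while rs.%Next() {
        set class = rs.Name
        if isPrimary {
            set sc = $classmethod(class, "Export")
        } else {
            set sc = $classmethod(class, "Import")
        }
        quit:$$$ISERR(sc)
    }
    quit sc
}

}
```

Scheduling this task to run every few minutes on all members gives you the export-on-primary / import-elsewhere behaviour described above.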

I wrote this as a framework because I felt there is other data - not just the security data in CACHESYS - that would need replicating between members.

I did a fair amount of testing on the tools, and completed them around the time I heard InterSystems was working on a native solution, so I have not gone further in documenting/completing them. I wrote this for someone else, who ended up building a more hardcoded, straightforward approach, so it is not actually in production anywhere.

Steve

Don't see why not...

You've got to ask yourself - do you want to hit that website (which returns the full set) every 5 seconds? Probably not. I would hit it every hour and spend the time between hits going through and updating the documents in the document database.

It's your choice whether to pause operations, delete all documents, and upload all documents every n seconds as a whole - that would be an easy approach. I think, however, you can get clever: identify an element that can act as a key for you, use it to extract individual documents and update them with changes, then insert the new ones. Keeping track of rows inserted and updated with each cycle, via some 'last updated' property, will also allow you to purge any rows which have been deleted and should no longer appear in your collection.

The above seems like a good approach; your use case may dictate a slightly different one. I'm not sure if there is a technical question here. Technically, you will call the web site for the batch content in the same way, and, given the properties you already set up via %CreateProperty, you can run an SQL query to extract an individual document for updating/deleting.
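For instance, something along these lines (a sketch only - it assumes the document database was created as "artistdb" with a "name" property added via %CreateProperty, so it projects as an SQL table, assumed here to be ISC_DM.artistdb; the table and package names may differ on your system):

```objectscript
// Hypothetical: look up an individual artist document via the projected table
set sql = "SELECT %DocumentId, %Doc FROM ISC_DM.artistdb WHERE name = ?"
set rs = ##class(%SQL.Statement).%ExecDirect(, sql, "Tough Love")
while rs.%Next() {
    // %DocumentId identifies the document; %Doc holds the full JSON blob
    write rs.%Get("%DocumentId"), ": ", rs.%Get("%Doc"), !
}
```

With the document id in hand you can then update or delete that one document rather than reloading the whole set.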

Steve

Hi,

Sorry for not getting back to you sooner - I was on a flight.

You are on the right track. You just need to understand the makeup  of the data returned.

So - to recap - in IRIS you are preparing a Document database - a collection of JSON documents. Each document represents an 'artist', with 'name', 'playcount', 'listeners' and other  properties.

The JSON string for one such entry (artist), or document in the collection, would look something like this; it is actually embedded in the whole JSON your HTTP request returns:

{
        "name": "Tough Love",
        "playcount": "279426",
        "listeners": "58179",
        "mbid": "d07276bc-3874-4deb-8699-35c9948be0cc",
        "url": "https://www.last.fm/music/Tough+Love",
        "streamable": "0",
        "image": [
          {
            "#text": "https://lastfm-img2.akamaized.net/i/u/34s/3fa24f60a855fdade245138dead7ec...",
            "size": "small"
          },...

}

If you extract each artist document from the collection, you can insert it into the database individually like this:

do db.%SaveDocument({"name":"Tough love","playcount":"279426",... })

The %SaveDocument method takes a single JSON dynamic object and inserts it into the collection. The whole JSON blob goes into the %Doc 'column' of the projected table, and elements like 'name' that were specifically created as columns via %CreateProperty will be individually populated as column values.

But, as mentioned earlier, the output from your HTTP call returns JSON which, a few levels deep, contains the collection of 'artist' documents:

{
  "artists": {
    "artist": [           <-- This is where the collection of artists starts from
      {
        "name": "Tough Love",
        "playcount": "279426",

Here are two approaches:

1. Option #1 - iterate through the correct part of the returned data to extract each artist individually. (I prefer option #2, below.)

Using the whole returned JSON, access the element 'artists', and then its property 'artist'. This element (represented by the path "artists.artist" - poorly named, imho) is actually the collection. Use an iterator to step through each item in the collection.

set wholeJSON={}.%FromJSON(httprequest.HttpResponse.Data)
set iArtist=wholeJSON.artists.artist.%GetIterator()  // iArtist is the iterator for the collection of artist JSON's
while iArtist.%GetNext(.key,.value) {
    // key is the item number in the collection
    // value is a dynamic object of the item in this collection
    do db.%SaveDocument(value)  //  insert 1 artist at a time.
}

2. As you have discovered, you can use db.%FromJSON to import a whole collection of documents in one hit, but what you supply should be a string or stream in JSON format representing an array of documents - which the raw HttpResponse.Data is not, because of the leading elements ('artists', etc.). But you can dive in and get the array:

set wholeJSON={}.%FromJSON(httprequest.HttpResponse.Data)
set arrArtists=wholeJSON.artists.artist   // this is the collection of artists
do db.%FromJSON(arrArtists.%ToJSON())  // need to give %FromJSON a json string.

.. and in one gulp, ALL artist documents, i.e. all items in the collection, are added into the document database (I tried this - I can confirm 1000 rows were created).

Use option 1 if you want to filter the injection of data into your document database, or option 2 if you want to do a batch upload in one hit.

Let us know how you get on...

Steve


Hi - at this point I would check what the value of 'st' is, as it seems the GetURL call might be failing.