Replies

It seems "atelier" is spelt incorrectly in the curl example. Here's an example of correct use:

curl --user jamien:SYS http://127.0.0.1:57772/api/atelier/


{"status":{"errors":[],"summary":""},"console":[],"result":{"content":{"version":"Cache for UNIX (Apple Mac OS X for x86-64) 2017.3 (Build 529U) Wed Feb 15 2017 01:28:51 EST","id":"DD2AAF5C-F6C1-11E6-AC71-38C986213273","api":2,"features":[{"name":"DEEPSEE","enabled":true},{"name":"ENSEMBLE","enabled":true},{"name":"HEALTHSHARE","enabled":false}],"namespaces":["%SYS","DOCBOOK","ENSDEMO","ENSEMBLE","SAMPLES","USER"]}}}

I think you are laboring under a misapprehension. You can only use the XSLT2 Cache functionality through the Java Gateway, originating your request from the Caché server. It's a server-side technology. You can use XSLT extensions in Atelier (or any other tool) to test and debug your XSLT on filesystem files, but you won't be able to use the server callback isc:evaluate() within those stylesheets.

This will be supported in the next release, Atelier 1.1.

You can have multiple projects in your workspace, and that's the way to go.

Below is an overview of the synchronization strategy. You will see that if someone changes files on the server underneath you, then conflicts will occur. With Atelier you need to focus on the fact that the source on the client is the 'source of truth'; the server is where you run your code. Your workflow should be source control -> client -> server.

Synchronization Services for the New Development Paradigm 

This document describes the current approach to client/server synchronization.

Documents may be edited on the client independently of a connection to a Caché server. However, when a connection is established with a server for the purposes of saving, compiling and running code, the client and server must agree on the version of the documents being operated upon.

Each document that exists on a server has a hash value associated with it. Each time the document changes, the hash value also changes. The hash value represents the server's version of the document. It's important to keep in mind that there is only ever ONE current version of a document on a server. The hash value is an opaque value; as far as the user is concerned, it's just a 'cookie'.

All that is necessary when a client pushes a document to the server is for the client to specify (via the hash) the version of the document it is replacing on the server. If the current server hash and the hash passed from the client are equal, then there is a MATCH on version and the server can go ahead and SAVE the document. If the hashes don't MATCH, then there is a CONFLICT and the document will NOT be saved on the server.


The key idea here is that you cannot successfully push a document from a client to a server unless you identify by version the server document you are replacing. The hashes allow you to do that.


In the case of creating a new file, the client will not know the server hash. In this case it just passes the empty string to the server as the server hash value.

If the server does not have that document (that is, it is new to the server too) then the hashes will MATCH, the operation will succeed and the server will return a new document hash.

If the server already has the document, then there is a CONFLICT, the operation will not succeed.

Conflicts have to be resolved before a document can be modified on the server. How the conflict is actually resolved does not matter (one could pull the server version, merge the diffs, or do whatever). What is important is that once the resolution has been done on the client, the client must update its cached server hash to reflect the current server version. This means that when the client passes that hash value back to the server on the next save, the versions will match and the modification will go ahead.

In the case of deleting a file, the client must once again specify the server hash (if it has it). A MATCH on the server will result in the document being deleted, a MISMATCH will result in a CONFLICT.
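The save, create and delete rules above can be sketched as a toy model. This is an illustration only: the real server hash is opaque and its algorithm is not specified here, so SHA-1 of the content merely stands in for it, and the class and method names are invented for the sketch.

```python
import hashlib


class Conflict(Exception):
    """Raised when the client's cached hash does not MATCH the server's."""


class Server:
    """Toy model of the hash-based save/delete protocol described above."""

    def __init__(self):
        self.docs = {}  # document name -> current content

    def _hash(self, name):
        # A missing document hashes to the empty string, which is why a
        # new file created with an empty client hash MATCHes.
        if name not in self.docs:
            return ""
        return hashlib.sha1(self.docs[name].encode()).hexdigest()

    def save(self, name, content, client_hash):
        if client_hash != self._hash(name):
            raise Conflict(name)      # versions diverged: do NOT save
        self.docs[name] = content
        return self._hash(name)       # new hash for the client to cache

    def delete(self, name, client_hash):
        if client_hash != self._hash(name):
            raise Conflict(name)      # MISMATCH -> CONFLICT
        del self.docs[name]
```

For example, saving a new document with the empty-string hash succeeds and returns a fresh hash; a second save must pass that hash back, and a save with a stale hash raises Conflict.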

All this is predicated on the client's ability to cache the hashes according to client source file, server address, server namespace and server document. How the data is stored is not important; what is important is that the hashes must be cached and passed to the server when a document modification is requested.

It's also important that the hash cache is persisted in some sort of client database. If not, then the synchronization of sources would have to be redone each time an interaction with a server is initiated.
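As a sketch of such a persisted cache: the class below keys hashes by server address, namespace and document (the association back to the client source file is omitted for brevity), and uses a JSON file as the "client database" purely for illustration — the real client may use any storage. All names here are invented for the sketch.

```python
import json
import os


class HashCache:
    """Minimal sketch of a persisted client-side server-hash cache."""

    def __init__(self, path):
        self.path = path
        self.cache = {}
        if os.path.exists(path):
            # Reload persisted hashes so sync state survives restarts.
            with open(path) as f:
                self.cache = json.load(f)

    @staticmethod
    def _key(server, namespace, document):
        return "|".join((server, namespace, document))

    def get(self, server, namespace, document):
        # The empty string doubles as "no cached hash", i.e. a document
        # the server has never seen from this client.
        return self.cache.get(self._key(server, namespace, document), "")

    def put(self, server, namespace, document, h):
        self.cache[self._key(server, namespace, document)] = h
        with open(self.path, "w") as f:
            json.dump(self.cache, f)  # persist immediately
```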

The relationship between client and server sources is shown in this diagram:

[Diagram: PushPull.png]

Server sync does take some time. This is mitigated for InterSystems-distributed library databases by pre-populating metadata and placing it on the server.

See for example, {serverdir}/atelier/CACHELIB/Metadata.zip.

If you have libraries of code, then you can generate your own Metadata.zip and place it on the server under the appropriate directory. You can call %Atelier.v1.Utils.MetaData.Build(databasename) and then add the generated files to Metadata.zip. We don't do this for you because Caché doesn't have a portable means of creating zip files.
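Since Caché can't build the zip itself, the packaging step has to happen outside the server. Here is a minimal sketch, assuming you know the directory where MetaData.Build() wrote its output (that location is installation-specific and not specified above, so it is a parameter here); the function name is invented for the sketch.

```python
import zipfile
from pathlib import Path


def build_metadata_zip(generated_dir, zip_path):
    """Package metadata files produced by %Atelier.v1.Utils.MetaData.Build().

    zip_path should live OUTSIDE generated_dir, otherwise the archive
    would try to include itself.
    """
    generated_dir = Path(generated_dir)
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in sorted(generated_dir.rglob("*")):
            if f.is_file():
                # Store paths relative to the build directory so the zip
                # unpacks with the layout the server expects.
                zf.write(f, f.relative_to(generated_dir))
```

The resulting zip can then be placed on the server under the appropriate directory, alongside the shipped example at {serverdir}/atelier/CACHELIB/Metadata.zip.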

For occasions where you don't have this pre-populated metadata, the initial load can take some time. However, following the initial sync, synchronization should be quick, as we do sophisticated caching and reporting of server diffs. The delay can't be completely avoided because large amounts of metadata have to be transferred over the network.

This feature has not yet been implemented but is planned for the next release.

Yes, Atelier requires all files to have a header which defines the metadata for the file. The extension of the file is not enough (for example, in an INT the language mode is important). Normally you don't need to know the rules, as Atelier will handle this for you. For reference, here are the rules. I'll ensure that they are added to the Atelier reference documentation.

It is required that Caché sources be stored in the file system in a textual format, as opposed to the XML format that we have been using for some time now. The primary purpose of this requirement is to facilitate easy comprehension, editing and diffing.


The current XML format captures additional metadata (for example, 'language mode') that does not appear in the body of the document. Any textual format MUST be able to accommodate this metadata to ensure that no information is lost.
Therefore, when exporting .mac, .int, .inc, .bas, .mvb and .mvi items, a single line of header information appears as the first line of text in the following format :-


Routine NAME [Type = TYPE, LanguageMode = MODE, Generated]

"Routine" is a case-insensitive string which identifies this document as being a routine document container.

NAME is the name of the routine item (case sensitive).

Following the name there is an optional collection of keyword-value pairs that are enclosed in square brackets '[' ']'.

Currently three keywords are supported :-

Type = TYPE. Where TYPE is one of bas, inc, int, mac, mvb, mvi.


LanguageMode = MODE. Where MODE is an integer that is supported as an argument to $SYSTEM.Process.LanguageMode(MODE).


Generated. Its presence indicates that the routine is generated.


The keywords are optional and if none is specified then the square brackets containing them should not be present (though it is NOT a syntax error to specify an empty pair of square brackets ([])).


The LanguageMode keyword applies only for Type=mac or Type=int and will be ignored for other types. The default value for the LanguageMode keyword is 0.


Whitespace is freely permitted within the header.

Everything that comes after the single first line is source content and MUST be formatted according to the established rules of the particular document type. There is no trailer that indicates the end of the document.
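The header rules above are regular enough to parse mechanically. Below is a minimal sketch in Python; the function name and return shape are my own, and the defaults follow the rules above (LanguageMode defaults to 0, an empty [] is legal, Generated is a bare flag).

```python
import re

# "Routine" is case insensitive; the bracketed keyword list is optional.
HEADER = re.compile(
    r"^\s*ROUTINE\s+(?P<name>\S+)"
    r"(?:\s*\[(?P<kw>[^\]]*)\])?\s*$",
    re.IGNORECASE,
)


def parse_routine_header(line):
    """Parse the first-line routine header; returns (name, type, mode, generated)."""
    m = HEADER.match(line)
    if m is None:
        raise ValueError("not a routine header: %r" % line)
    name = m.group("name")          # case sensitive, kept as written
    rtype, mode, generated = None, 0, False
    for part in filter(None, (p.strip() for p in (m.group("kw") or "").split(","))):
        key, _, val = (s.strip() for s in part.partition("="))
        if key.lower() == "type":
            rtype = val.lower()     # one of bas, inc, int, mac, mvb, mvi
        elif key.lower() == "languagemode":
            mode = int(val)         # meaningful only for Type=mac / Type=int
        elif key.lower() == "generated":
            generated = True        # bare keyword, no value
    return name, rtype, mode, generated
```

For example, `parse_routine_header("Routine Demo [Type = mac, Generated]")` yields the name, type, default language mode 0, and the generated flag.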

The first line of the routine has to be in a particular format. Check that you didn't break the formatting.

Thanks for your feedback. To some extent we are constrained by the Eclipse framework in what we can do, but clearly the things that you point out can be improved. We will be sure to take this into account in our continued development.

Please contact support regarding this.