

  • Answer:

    While Atelier CSP support is still in development, I've been able to get existing CSP files over to Atelier as described below.  It takes advantage of the relationship between the .csp file and the generated .cls file described here:     http://docs ...

    Answer to the post "How do I get existing CSP files into Atelier?" by Robert Boonstra, 4 June 2016


  • Answer:

    In the IHE context I once faced a scenario in which no encoding was given in the XML declaration. The rule in such a case was to use the character encoding found in the HTTP header. For example:   Content-Type: text/html; charset=utf-8 ...

    Answer to the post "How to detect a character encoding?" by Markus Mechnich, 4 June 2016
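    The fallback rule described in this answer could be sketched as follows (in Python rather than ObjectScript; the `detect_charset` helper and its default are invented for illustration, not taken from the post):

    ```python
    # Hypothetical sketch: when the XML declaration omits an encoding,
    # fall back to the charset parameter of the HTTP Content-Type header.
    import re

    def detect_charset(content_type: str, default: str = "utf-8") -> str:
        """Extract the charset parameter from a Content-Type header value."""
        match = re.search(r"charset=([\w.-]+)", content_type, re.IGNORECASE)
        return match.group(1).lower() if match else default

    print(detect_charset("text/html; charset=utf-8"))  # utf-8
    print(detect_charset("application/xml"))           # utf-8 (fallback)
    ```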


  • Comment:

    Hi, Asaf!

    Yes, you are right - Compound Cubes!

    In any case, I believe that giving the option (in Architect and Studio), to mark a date dimension to include all dates (or a range of years), would be a good enhancement to...

    Comment to the post "Full series of dates in Time Dimensions in DeepSee" by Evgeny Shvarov, 12 May 2016


  • Question: How to remove a job PID from a production

    Caché, Business Service, System Administration, Ensemble

    Problem: A file-based business service uses a local path on a Linux machine that is actually a mounted CIFS share. The mount is "soft" and is designed to not cache data, etc. There are ...

    Post by Ryan Hulslander, 3 June 2016


  • Comment:

    Stefan,

    This is a great article and an excellent resource to help people come up to speed quickly on this new feature - thank you.

    I do have one question/comment, however.  You listed 4 benefits to using the Document approach, and I...

    Comment to the post "Introducing the Document Data Model in Caché 2016.2" by Benjamin Spead, 31 May 2016


  • Comment:

    Hello Evgeny,

    When you say combined cubes, do you mean Compound cubes?

    In any...

    Comment to the post "Full series of dates in Time Dimensions in DeepSee" by Asaf Sinay, 12 May 2016


  • Comment:

    You might have noticed my reference to these tests in the past tense, as a thing that happened.

    I ran these tests during a POC so I am not sure we have the storage controllers to test against. I already spoke to the client. They will be deciding...

    Comment to the post "Read I/O Performance On >50,000 IOPS" by Isaac Aaron, 3 June 2016


  • Comment:

    This sounds very interesting.

    I could not give any data-proven conclusion without looking into sar or mgstat data, but from your words it sounds like the bottleneck here is the ObjectScript VM or the engine's interprocessor lock implementation. This...

    Comment to the post "Read I/O Performance On >50,000 IOPS" by Timur Safin, 3 June 2016


  • Comment:

    Well, my test was parallelized, and I have to say I did mention it.

    In my test I took a 160GB global and wrote a simple process that starts at a random point in that global and scans forward with a $O run, advancing by about 200...

    Comment to the post "Read I/O Performance On >50,000 IOPS" by Isaac Aaron, 3 June 2016
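    The benchmark described in this comment could be sketched very loosely in Python. The original used an ObjectScript $ORDER scan over a global; this file-based analogue, and every name and parameter in it, is an invented illustration of the random-start sequential-read pattern, not the author's actual test code:

    ```python
    # Hypothetical sketch: seek to a random offset in a large file, then
    # read a short sequential run of blocks, repeating many times. This
    # mimics "start at a random point, scan forward" against the disk.
    import os
    import random
    import time

    def random_start_scan(path, block_size=8192, blocks_per_run=200, runs=100):
        """Return (bytes_read, elapsed_seconds) for a random-start read test."""
        size = os.path.getsize(path)
        bytes_read = 0
        start = time.perf_counter()
        with open(path, "rb") as f:
            for _ in range(runs):
                # Pick a random starting offset that leaves room for one run.
                f.seek(random.randrange(0, max(1, size - block_size * blocks_per_run)))
                for _ in range(blocks_per_run):
                    chunk = f.read(block_size)
                    if not chunk:
                        break
                    bytes_read += len(chunk)
        elapsed = time.perf_counter() - start
        return bytes_read, elapsed
    ```

    In a real test the file would need to be far larger than the filesystem cache (the comment mentions 160GB) so the reads actually hit the storage rather than memory.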


  • Comment:

    I did revert to doing only read tests after understanding the issues I was having with the write daemon.

    I'm not sure how global prefetching is going to help, because I'm trying to get as random as I can. I'm intentionally trying to...

    Comment to the post "Read I/O Performance On >50,000 IOPS" by Isaac Aaron, 3 June 2016