Hi Evgeny

Fascinating conversation.....

I am aware of projections but don't use them in my systems.
I think there is some confusion when I use the term "Test" server - in my usage this is for End User Acceptance Testing, and there is usually more than one project/release undergoing End User Acceptance Testing at any one time - copying the Cache.dat would carry across releases that are not ready to go.

= =

I guess it depends on the nature of the operation - I work for individual clients rather than having a monolithic product - and (as above) there will be several projects on the go at any one time for each client - so what I do works for me.

If there is a possible problem with the compile (your projections) then, I think, the solution is a staging server - individual releases are deployed to this and, once proven, that Cache.dat is copied to the Live server.

My method works in my situation 
I guess there is no single "correct" solution that will work for all cases.

Peter

Hi Evgeny

What situation do you have in mind that could cause the compilation to be unsuccessful?

With proper version control and release procedure this has rarely happened in my experience - when it does it's been due to unresolved dependencies - and in this case a re-compile fixes it.

There is one possibility where it *could* happen - that is if the VC system allows multiple reservations/branches for the same class - but we don't allow that.

= =
I can't see how deploying/copying the Cache.dat will avoid problems when you have multiple developers or multiple projects on the test server.

= =
I guess the only 100% way is to have a staging server where a deployment can be copied to and tested before deploying to the Live server - in this case it is tightly controlled and copying the Cache.dat is possible.

Peter

Just realised that copying the Cache.dat is only sensible for a single developer
If you have more than one developer working on different projects all deploying to the same Test server then copying won't work - you would get bits and pieces from different projects

Even for a single developer it's a bit dodgy - you could be working on two or more projects at the same time - waiting on user acceptance testing - if one passes then you want to deploy that to live but not the  others.

The more I think about it, the more I believe that my method of working is the only one that covers all the possibilities - if anyone has a better method, please tell!

Peter

Hi Robert and all

You can achieve the separation quite easily with routine and class package mapping.

I have a client with overseas affiliates: they all share the same code/class base but each has its own namespace, with routines and packages mapped to the master namespace.

Works just fine
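
For anyone who wants to script it, this is roughly how a package and routine mapping can be set up from the %SYS namespace - just a sketch, with the namespace and database names invented for the example:

    // run from (or switch to) the %SYS namespace
    new $namespace
    set $namespace = "%SYS"

    // map the MyApp package and MyApp* routines from the master database
    // ("AFFILIATE1" and "MASTER-DB" are made-up names)
    kill props
    set props("Database") = "MASTER-DB"
    set sc = ##class(Config.MapPackages).Create("AFFILIATE1", "MyApp", .props)
    set sc = ##class(Config.MapRoutines).Create("AFFILIATE1", "MyApp*", .props)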

The only issue is that developing the code is more complex, as the different affiliates have started to need different functionality starting from the same base screen.

Peter

Hi All

The way I work is: Personal Development Machines - Deploy to Test Server for User Acceptance Testing - Deploy to Live

The version control system that I use is TrakWarePro from Globalware - sadly no longer in existence - but it works for us. Not only to maintain versions but also to ship releases between the three environments.

When deploying, the classes need to be compiled (obviously), but I don't trust ISC to compile *all* the required classes, SQL statements etc. Neither does $System.OBJ.CompileAll() resolve the dependencies in the correct order 100% of the time.

Also a release will need to set SQL access on tables for any new data classes.

So I have developed a ##class(setup.rCompileAll).doit() method that does all the necessary work - compiles in the correct order, sets the SQL access, etc.
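
I won't paste the real thing, but the shape of it is roughly this - a sketch only, with the class list, compile order and GRANT purely illustrative rather than my actual code:

    ClassMethod doit() As %Status
    {
        set sc = $$$OK
        // compile in an explicit order rather than trusting CompileAll()
        // to resolve the dependencies (class names here are invented)
        for class="pfc.Lookups","pfc.Invoice","pfc.InvoiceLine" {
            set sc = $System.OBJ.Compile(class, "ck")
            quit:$$$ISERR(sc)
        }
        quit:$$$ISERR(sc) sc

        // set SQL access on any new tables
        set stmt = ##class(%SQL.Statement).%New()
        set sc = stmt.%Prepare("GRANT SELECT,INSERT,UPDATE,DELETE ON pfc.InvoiceLine TO AppRole")
        if $$$ISOK(sc) do stmt.%Execute()
        quit sc
    }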

Usually a deployment will require changing data/updating indices/adding pages to the access database etc etc - so there is usually a setup class that contains the code to do this.
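
Again just to show the shape of it (the class, table and index names below are invented for the example), a release setup class might be no more than:

    Class setup.rPFC123 Extends %RegisteredObject
    {

    /// Release-specific changes - data fixes, index rebuilds etc
    ClassMethod doit() As %Status
    {
        // one-off data change for this release
        &sql(UPDATE pfc.Invoice SET Status = 'OPEN' WHERE Status IS NULL)

        // rebuild any new or changed indices
        quit ##class(pfc.Invoice).%BuildIndices($ListBuild("StatusIdx"))
    }

    }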

So I have

  • a Studio project "pfcXXX"
  • a version control project "pfcXXX"
  • a setup class "setup.rPFCxxx.cls"

And all this has worked 99.9% of the time over 10-plus years - I can't actually remember when it last went wrong, but nothing in this world is 100% :)

The downtime can be as little as 5 minutes for a simple release or up to 1 hour or so if the data changes are complex.

The only downside is that the system is unavailable to the users whilst this process is happening - I know about the technique of using mirroring, updating one mirror and then swapping - but this is overkill for my users.

Peter

Hi Evgeny

Thanks - Only hope I can find the time and energy to keep it up :}

Some questions about the community HTML editor....
1. Is there an easy way to add hyperlinks in-line, or do you have to manually edit the source?
2. Also, are there any checking rules that are enforced re hyperlinks?
3. How do you upload images (screen shots)?
4. Is it possible to embed video (again, screen shots), or is it better to use YouTube and add a link?
5. Is there a better free HTML editor/method of working for creating articles rather than the built-in one?

Peter

Hi Evgeny

We met briefly at last autumn's Developer meet at the Belfry - keep up the great work

New Tag - absolutely (please)
Angular2, Angular4 and now Angular5 are enhancements of the same basic product; Angular1 (now AngularJS) is a different thing.

Have a look at https://dzone.com/articles/learn-different-about-angular-1-angular-2-amp...
(that's not a typo the URI is as spelt)

or google differences between angularjs and angular2

Peter

Hi Sabarinathan

I use pdfPrintCmd - see http://www.verypdf.com/app/pdf-print-cmd/index.html

it's not free but I have had no issues with it over many years of use

The idea is:
a. Write out the pdf to a directory
b. set xResult=$zf(-1, "print command exe")

The great advantage is that it has command line options that control margins, double sided etc etc

Also it can be run in the background - this is very useful when doing batches of PDFs, where creating each PDF using FOP can take several seconds - the idea is:
a. scan the data via SQL
b. create the pdf
c. use printCmd to send to a printer
To give you an idea of it working, I have a client that produces around 50 multi-page passports a day - the printing takes around 1 hour - it's set going as a Cache background job and the printer just chugs away.
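
In code terms the core of it is only a couple of lines - a sketch, with the paths, printer name and command-line switches made up (check the pdfPrintCmd documentation for the real options):

    // the PDF has already been written out (eg by FOP) to pdfFile
    set pdfFile = "c:\temp\out\invoice123.pdf"

    // build the command line and shell out to the print utility
    set cmd = "c:\tools\pdfprint.exe -printer ""Office LaserJet"" "_pdfFile
    set xResult = $ZF(-1, cmd)  // 0 normally means the command ran OK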

Peter

FAO Mike

Hi Mike

You ask some very good questions!

It's been a few (10+) years since I did some metrics on Cache performance - working on it now (when I have time)

I will publish the results here

But my initial findings are much as I said - given enough memory (8GB allocated to global buffers) it doesn't much matter - with a 200MB global (10,000,000 rows with links to 2 other tables) I can't see any significant difference in performance between parent/child and foreign key - or bitmap versus normal indices.

Going to try it with 50,000,000 rows and limit the amount of memory allocated to global buffers.

Watch this space

Peter

FAO Scott

Hi Scott

Sorry it's taken me so long to get back to you re real world parents and children

It's the example you gave of Cache relationships - and I was a bit quick-fire with my answer - sorry!

Please let me expand.....

Real-world parents and children is an interesting problem, and the simple solution you described is *not* the way I would model it!

This is regardless of the actual implementation - eg parent/child or foreign keys etc.

= =

If I wanted to model family trees I would have a class "Person" and another class "PersonRelationship" - the second of these would have links to two instances in the "Person" table.

Something like this (using relationships rather than foreign keys - but that's implementation rather than the OO design):

PS - I am typing on the fly - so there may be errors!!!!

= =

Class Person Extends %Persistent

Relationship rAsPerson1 As PersonRelationship [ Cardinality = many, Inverse = rPerson1 ];

Relationship rAsPerson2 As PersonRelationship [ Cardinality = many, Inverse = rPerson2 ];

Property Name As %String;

Property DoB As %Date;

....etc

And then PersonRelationship Extends %Persistent

Relationship rPerson1 As Person [ Cardinality = one, Inverse = rAsPerson1 ];

Relationship rPerson2 As Person [ Cardinality = one, Inverse = rAsPerson2 ];

Property RelationshipType As SomeLookupTable;

....etc

(Note that each relationship needs its own inverse, which is why Person ends up with two "many" sides - one for each end of PersonRelationship.)

The SomeLookupTable would describe (in words) the relationship, eg "Son Of" and "Father Of"

For me this has some beauty
You can...

  • Construct a family tree of arbitrary depth, both forwards and backwards
  • Use recursive SQL programming to construct the tree (see the sketch after this list)
  • A "child" can have multiple links - eg "gene father" and "step father"
  • The record is complete - eg a woman might have a "gene father", then a "step father", and then go back and be linked to the same "gene father" (life happens)
  • It can also model surrogate parents via AI fathers or surrogate mothers, or same-sex relationships
  • It can be extended to, say, pets
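
To give a flavour of the recursive traversal - a sketch only: it assumes the classes above compile into the default SQLUser schema, that RelationshipType describes how rPerson1 relates to rPerson2, and that SomeLookupTable has a Description column with values like 'Father Of' / 'Mother Of'. A class method along these lines (it could live on PersonRelationship) walks up the tree:

    ClassMethod ShowAncestors(personId As %String, depth As %Integer = 0)
    {
        // find the "parents" of personId, print them indented, then recurse
        set query = "SELECT rPerson1 AS Parent, rPerson1->Name AS ParentName "
        set query = query_"FROM SQLUser.PersonRelationship "
        set query = query_"WHERE rPerson2 = ? AND RelationshipType->Description IN ('Father Of','Mother Of')"
        set stmt = ##class(%SQL.Statement).%New()
        quit:$$$ISERR(stmt.%Prepare(query))
        set rs = stmt.%Execute(personId)
        while rs.%Next() {
            write !, $justify("", depth*2), rs.ParentName
            do ..ShowAncestors(rs.Parent, depth + 1)
        }
    }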

Some care has to be taken in the database insert/amend, eg:

  • avoid recursive relationships - eg a Person being her own grandmother is not physically possible
  • a person cannot have the same individual as both mother and father (well, with AI and embryo manipulation this *may* become a reality - but no problemo, the model will still work)

= =

Hope this is clear and of interest - if you need any more info please ask

= =

PS - Around 30 years ago I was involved in computing an "In-Breeding Coefficient" for rare breeds (think small populations of endangered species in zoos). The aim was to give a metric on how inbred an individual was - in such small populations it was common for both grandparents to be the same individual - and the logic needed to get a metric was intense. You could have the case where the same individual was both grandfathers and all four great-grandfathers - not so good for preserving the gene pool!

Peter

Wolf

Thinking about this, the first point is whether the children get added to or amended - if that happens a lot (eg with case notes) then it will lead to mega block splitting as the data size for the whole record grows.

OTOH - if it's an invoice that (essentially) never changes then it's fixed and p/c is cool.

= =
Apart from the bitmap indices only working for integer IDs - so if you really have squillions of products that you need to process then one-many must be the business solution, as it gives the performance of

   select sum(qtySold) from invoiceLines where ProductID=123
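
With one-many the lines class keeps its own integer IDs, so the product reference can carry a bitmap index - something like this inside the invoiceLines class (names invented):

    Index ProductBitmap On ProductID [ Type = bitmap ];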

Peter

FAO Mike Kiddow

All of us Cache gurus are getting carried away with the internals and very deep design concepts that rely on an in-depth knowledge of how Cache works.....

I guess you may not fully follow what we are discussing (sorry, not being dismissive at all) - it's just that Wolf, Dan, Otto, Kyle and I are being super-duper Cache egg heads.

Bottom line is...

That for any reasonably sized database (ie fewer than ~10,000,000 rows) on a modern server with adequate memory - it doesn't much matter!!! Cache will do the biz.

= =
But please note that the example from Scott is not correct...
With real children and real parents, the parent-child relationship is wrong on many levels - because, as I said before, "a parent can die" and you don't want the child records to disappear!!!!

In that case I would have (as a first pass) a class "People" and another class "PeopleRelationships" which links different instances of People to each other.

= =

OO methodology seems intuitively easy to understand - but it's not,
and on top of that are the pros and cons of the Cache performance considerations that we have been discussing.

But give it a go
As I said, it doesn't much matter unless you are in serious terabyte country.

Peter

Hi Wolf

Long time no meet !!!

This is such an interesting conversation...

And it all depends on the actuality - if you have a parent with a squillion children - then the answer to p/c performance is not so good.

Also if you have a business object where the many side is constantly added to - eg a patient's lab results - again it's different - it leads to block splitting and a bunch of pointer blocks having to be loaded into memory.

So...

my business case argument is that with an invoice header and invoice lines, parent/child (versus one/many) is the most efficient - they (really) never change, so no index block splitting

But I do take Dan's and Otto's comment about bitmaps only doing integer IDs.

= =

Tell you a story..

Around 12 years ago I was involved with a client with a 1.2TB database - I was criticised by the in-house IT staff for using naked global references in one module - blah blah - but it was 2% faster than full global references, and in a tight loop 2% meant 30 minutes off the processing time.

= =

Having said that - I *will* be trying out one-many with a cascade delete
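
For anyone following along, this is the sort of thing I mean - a minimal sketch with invented names; the OnDelete keyword on the "one" side is what gives the cascade:

    Class demo.Invoice Extends %Persistent
    {

    Property InvoiceNumber As %String;

    Relationship Lines As demo.InvoiceLine [ Cardinality = many, Inverse = Invoice ];

    }

    Class demo.InvoiceLine Extends %Persistent
    {

    Property ProductID As %Integer;

    Property QtySold As %Integer;

    /// Deleting the invoice deletes its lines, but each line keeps its own integer ID
    Relationship Invoice As demo.Invoice [ Cardinality = one, Inverse = Lines, OnDelete = cascade ];

    Index InvoiceIdx On Invoice;

    }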

Peter

Hi Kyle

Thanks for your excellent comment

I agree - sort of....

But it's a balance (as always) between loading buffers - it may be the case that there is an occasional need to just grab the dates - but if that is only a 10% (say) need, whereas the 90% need is to display/process the header and the lines together, then, for me, the 90% need should win out.

Also if the dates (or whatever) are indexed then a selection on a date range (say) will only select the required rows from the index.
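
For example (a sketch - names invented): an index on the header date, and the sort of range query where the index narrows the read down to just the matching rows:

    Index DateIdx On InvoiceDate;

    SELECT ID, InvoiceNumber
    FROM demo.Invoice
    WHERE InvoiceDate BETWEEN '2018-01-01' AND '2018-01-31'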

= =

But as I said before - it depends on the size of the system - my clients have modest needs (maybe 3 million rows max) and with a 64GB machine all/most of the blocks are in memory anyway :)

 

But thanks for the thoughts - I will certainly start looking at one-many with a cascade delete

Peter