I have one prepared but haven't found the time yet. My plan is to measure the electricity use of my house continuously, analyze when my solar panels produce a surplus, and then switch certain devices based on that. My dishwasher, pool pump and heat pump are all connected. Currently I can do net metering for electricity, but that will change in the near future. Better to use it myself if they don't want to pay a decent price for it ;-) The Pi is perfect for that kind of home automation.

I have a customer who uses cloud-based storage for all error/warning messages that occur in Ensemble. I don't remember the name of the service, but it is accessible through a REST service. They have written a simple REST client (an Ensemble Business Operation) which uses PUT/POST to send the messages/warnings they capture. The API description should help you connect to such a system. You can use the alerting framework (Ens.Alert) to capture alerts and forward them to your BO, which connects to Tivoli or another system. SNMP might also be an option.
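Roughly, such a Business Operation could look like the sketch below. This is not their actual code; the class and property names are illustrative, the JSON object syntax needs Caché 2016.1 or later, and I'm assuming the HTTP adapter's Post() sends the stream as the request body when no form variable names are given:

```
Class Demo.Alerting.RESTAlertOperation Extends Ens.BusinessOperation
{

Parameter ADAPTER = "EnsLib.HTTP.OutboundAdapter";

Parameter INVOCATION = "Queue";

/// Forward an Ensemble alert to the HTTP endpoint configured on the adapter
Method OnMessage(pRequest As Ens.AlertRequest, Output pResponse As Ens.Response) As %Status
{
    // Build a simple JSON payload from the alert
    Set tPayload = {}
    Set tPayload.source = pRequest.SourceConfigName
    Set tPayload.text = pRequest.AlertText

    Set tBody = ##class(%Stream.GlobalCharacter).%New()
    Do tBody.Write(tPayload.%ToJSON())

    // POST the body to the URL configured on the adapter;
    // with no form variable names, the stream is sent as the body
    Set tSC = ..Adapter.Post(.tHttpResponse, "", tBody)
    If $$$ISERR(tSC) Quit tSC

    Set pResponse = ##class(Ens.Response).%New()
    Quit $$$OK
}

}
```

If you name this component Ens.Alert in the production (or route alerts to it from your Ens.Alert process), every component with alerting enabled will feed it.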

With this you can override the global setting. It is useful to have a maximum to protect you from a "runaway" process eating all your memory. If you need more in a process (for example, I needed to instantiate a very large XML file as an object), you could override the global setting (in my case to 4 GB). But since only one specific process needs it, it is better to set it dynamically in that process. As Eduard said, it is only a maximum: the memory is not claimed when the process starts, but only when needed.
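For example, a minimal sketch ($ZSTORAGE is expressed in kilobytes; the 4 GB value mirrors my case above):

```
// Show the current per-process memory limit, in KB
Write $ZSTORAGE,!

// Raise the limit for this process only, e.g. to 4 GB,
// just before instantiating the very large XML object
Set $ZSTORAGE = 4194304

// ... build/import the large object here ...

// Afterwards you can lower it again if the process keeps running
Set $ZSTORAGE = 262144
```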

Hi Steve,

I have tested this in the past, when 2011.1 was released and the 10-second delay was introduced for anonymous SOAP calls. With SOAPSESSION=1 the calls utilize the same session on the back end. Multiple subsequent calls will only use one license, and the license will be held until the normal time-out expires. I could easily see this behaviour in the license statistics.

I'm not sure what happens when you do multiple calls in parallel; will they be queued?
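For reference, enabling this on the service side is just a class parameter; a minimal sketch (names are illustrative):

```
Class Demo.SOAP.SessionService Extends %SOAP.WebService
{

Parameter SERVICENAME = "SessionService";

Parameter NAMESPACE = "http://tempuri.org";

/// Keep one CSP session (and thus one license slot)
/// across subsequent calls from the same client
Parameter SOAPSESSION = 1;

Method Ping() As %String [ WebMethod ]
{
    Quit "pong"
}

}
```

The client has to return the session token it receives for subsequent calls to land in the same session; Caché SOAP clients handle that automatically.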

For one customer we use the following solution, which works fine for them:

DATA database, which contains the end-customer data and is the namespace's default globals database

CODE database, which contains the cubes (package mapping) and the pivots and dashboards (^DeepSee.Folder and ^DeepSee.FolderItem global mappings)

IDX database, which contains all the other DeepSee data (^DeepSee global mapping)

This works fine so far: we can replace the end-customer data easily and keep it separate, and we can also roll out a new batch of cubes and dashboards easily by replacing the CODE database with a new version. IDX can be rebuilt by compiling/building the cubes, so that’s easy too.
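If you script these mappings instead of clicking them together in the portal, it comes down to something like this (run in the %SYS namespace; the namespace and database names here are made up for the example):

```
// Pivots and dashboards go to CODE
Set tProps("Database") = "APP-CODE"
Set tSC = ##class(Config.MapGlobals).Create("APP", "DeepSee.Folder", .tProps)
Set tSC = ##class(Config.MapGlobals).Create("APP", "DeepSee.FolderItem", .tProps)

// The cube classes go to CODE as well, via a package mapping
Set tSC = ##class(Config.MapPackages).Create("APP", "Cubes", .tProps)

// All other DeepSee data (^DeepSee) goes to IDX
Set tProps("Database") = "APP-IDX"
Set tSC = ##class(Config.MapGlobals).Create("APP", "DeepSee", .tProps)
```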

This works fine because currently they create all dashboards for their customers.

We foresee that some end customers will need custom pivots/dashboards, which we would then like to map to a fourth database; we don't want them in CODE because we want to keep that database the same for all end customers. Maybe we should do a subscript-level mapping of ^DeepSee.Folder(Item), but is that sustainable across Caché versions? I have not looked into it further, as both their DeepSee customers are happy with the current solution and don't create any dashboards or pivots themselves.
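Syntactically, such a subscript-level mapping would look something like the snippet below. This is an untested sketch: "CustomerX" and "APP-CUST" are hypothetical names, and I have not verified how DeepSee subscripts its folder globals, which is exactly the open question.

```
// Untested: map a single subscript of ^DeepSee.Folder to a
// customer-specific database (hypothetical names, run in %SYS)
Set tProps("Database") = "APP-CUST"
Set tSC = ##class(Config.MapGlobals).Create("APP", "DeepSee.Folder(""CustomerX"")", .tProps)
```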

Marcel