There is a huge benefit to this separation from two perspectives:

- If you need to refresh data in your Dev or Test environment, you can simply grab the globals DB from Live and drop it in, without worrying about overwriting any code in Dev or Test

- If you choose to deploy your code via a DB drop, you can drop in a new routines DB to replace the existing one

NOTE - for either of these to work, you may need to map configuration globals into the routines DB (so you don't bring Live config back into Dev, for instance); see the sketch below.
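For example, here is a minimal sketch of adding such a global mapping programmatically (the global ^AppConfig, namespace APP and database APP-CODE are all hypothetical stand-ins for your own names; you can also set this up in the Management Portal under Global Mappings):

```objectscript
 // Minimal sketch - run from a privileged session; all names are placeholders
 ZNspace "%SYS"                            // mapping configuration lives in %SYS
 Set props("Database") = "APP-CODE"        // the routines (code) database
 // Map the hypothetical ^AppConfig global into the routines DB for namespace APP,
 // so refreshing or replacing the globals DB never touches environment config
 Set sc = ##class(Config.MapGlobals).Create("APP", "AppConfig", .props)
 If $system.Status.IsError(sc) Do $system.Status.DisplayError(sc)
```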

HTH!

Ben

FYI ... we will have several sessions covering this topic at the Global Summit - attend if you can, otherwise check out the material afterwards!

For internal application development within InterSystems we use a variety of approaches, but the most common is as follows:

1) We use an internally developed issue tracking system, but we plan to eventually migrate to JIRA

2) We use Perforce for all of our source control 

3) We have BASE, TEST and LIVE environments for every application, with BASE and TEST typically cloned from VM snapshots of LIVE.  In addition to the Shared BASE VM, for those applications undergoing the highest rate of change, developers will create a local copy of the application to do their development work.  Some apps have all changes developed on Shared BASE and progressed (via our Change Control tool) to TEST and then to LIVE.  For applications where developers use Private BASEs, they commit there, then push the changes to Shared BASE, and from there to TEST and LIVE.

Feel free to ask questions (here or at Global Summit)!

Thanks for asking.

Kyle,

The macros are intended to spare developers from having to refactor code at the same time as they perform an upgrade, as well as to make life easier for application providers who have code running on a number of versions.

I've learned from experience that it is always best to have the fewest possible moving parts during an upgrade, so you can quickly find the cause of any issues that pop up.  Therefore, I always try to write forward-compatible code, and only after all of my systems for a given codebase have been upgraded and are stable do I start to introduce backwards-incompatible changes.  These macros allow that very nicely.  In addition, using the macros gives you more flexibility to upgrade without having to schedule a concurrent refactoring project (even if it is just a find-and-replace refactoring project :) ).

All that being said, the macros are not intended for long-term use within an application.  Once the 2016.1 -> 2016.2 hurdle has been cleared, my recommendation would be to pull out the macros (find & replace) and stick with Caché's native JSON access going forward.  But that can then be a project that takes place post-upgrade, thus simplifying the upgrade and lowering risk.
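To make the mechanism concrete (the real macro names and definitions are in the Gist linked from the article referenced elsewhere in this thread; $$$JSONToString below is an illustrative stand-in), the macros boil down to compile-time version checks along these lines:

```objectscript
// Illustrative sketch only - see the linked Gist for the actual macros.
// 2016.1 exposed dynamic object methods with a $ prefix (obj.$toJSON());
// 2016.2+ renamed them to a % prefix (obj.%ToJSON()).
#if ($System.Version.GetMajor() > 2016) || (($System.Version.GetMajor() = 2016) && ($System.Version.GetMinor() >= 2))
#define JSONToString(%obj) %obj.%ToJSON()
#else
#define JSONToString(%obj) %obj.$toJSON()
#endif
```

Because the branch is resolved at compile time, the same source compiles cleanly both before and after the upgrade, and ripping the macros out later is a mechanical find-and-replace.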

Joe,

Better yet, follow the instructions and use the macros provided in the following article (the macros are in a linked Gist code snippet).  That way you can write your code so that it works on 2016.1 as well as on future versions of Caché, without having to rewrite your JSON logic:

https://community.intersystems.com/post/writing-forward-compatible-json-...
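As a rough sketch of what application code then looks like (again, $$$JSONToString is a hypothetical stand-in for the macros in the Gist, which you'd pull in with an #include at the top of your class or routine):

```objectscript
/// Assumes something like: #include JSONCompat  (a hypothetical include file built from the Gist)
ClassMethod BuildGreeting(name As %String) As %String
{
    // The {} dynamic-object literal works on 2016.1 and later
    Set obj = {"greeting": "hello"}
    Set obj.name = name
    // Expands to obj.$toJSON() on 2016.1 and obj.%ToJSON() on 2016.2+,
    // so this method compiles unchanged across the upgrade
    Quit $$$JSONToString(obj)
}
```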

Hope that helps!

Ben