Discussion
· Aug 8, 2017

Your Workflow: Issue Tracking, Version Control etc.

I understand this is a rather broad topic (and one that at times involves religious sentiments), yet I would like to look at it from the Caché perspective:

  • Do you use an issue tracking / collaboration system? If so, which one? Any you would recommend or immediately dismiss based on personal experience?
  • How do you keep track of large code bases? Thousands of folders named backup1, backup2, ..., SVN, Git?
  • Do you have a development server where you commit and test features, or do you rather run a local copy of Caché, implement features locally first, and then push to the server?
  • Bonus question: How do you handle legacy code (and I mean the kind that uses lots of $ZUs)? Leave it untouched and try to implement new features elsewhere? Rewrite the entire thing?

I hope this is the right place to post this question, since similar questions are all over Stack Exchange/Reddit/elsewhere, yet nothing in any way, shape, or form focused on Caché. Looking forward to your answers and ideas!

Cheers!


That's quite a topic for a complex discussion.

  • Do you use an issue tracking / collaboration system? If so, which one? Any you would recommend or immediately dismiss based on personal experience?

I use GitHub plus repository issues.

  • How do you keep track of large code bases? Thousands of folders named backup1, backup2, ..., SVN, Git?

Git.

  • Do you have a development server where you commit and test features, or do you rather run a local copy of Caché, implement features locally first, and then push to the server?

Locally implemented and tested.

  • Bonus question: How do you handle legacy code (and I mean the kind that uses lots of $ZUs)? Leave it untouched and try to implement new features elsewhere? Rewrite the entire thing?

It depends: the more complex the code is, the more I consider creating modern API wrappers instead of rewriting it.

  • Do you use an issue tracking / collaboration system? If so, which one? Any you would recommend or immediately dismiss based on personal experience?

I use GitHub and GitLab; issues are tracked there. They are fairly similar; use GitLab if you want an on-premises solution.

  • How do you keep track of large code bases? Thousands of folders named backup1, backup2, ..., SVN, Git?

Git.

  • Do you have a development server where you commit and test features, or do you rather run a local copy of Caché, implement features locally first, and then push to the server?

Everything is implemented and tested locally. Then I push to version control. Continuous integration does the rest.

I see. We have been using an in-house tool with features similar to yours as well; at that time, files were still being exported as XML instead of UDL. But now we're moving our development local and enforcing project usage with this tool.

 

We also had our share of pain with editing live production code, and I have to say, it's not the greatest feeling.

  • Do you use an issue tracking / collaboration system? If so, which one? Any you would recommend or immediately dismiss based on personal experience?

We have Outlook tasks.

  • How do you keep track of large code bases? Thousands of folders named backup1, backup2, ..., SVN, Git?

A homemade tool.

 

  • Do you have a development server where you commit and test features, or do you rather run a local copy of Caché, implement features locally first, and then push to the server?

We develop everywhere (dev/test/prod) and in mixtures of those.

I use GitHub Issues, which lets you create issues of all sorts (bugs, enhancements, tasks, etc.) and assign them to a person or a group of people. The issues are associated with the source code repository, and there is also a projects feature: you create a project, list the tasks needed to accomplish it, and make every task an issue assigned to people. You can then drag and drop the tasks from one stage to the next, such as Specification > Development > Testing > Production.

GitFlow is a good and flexible workflow, but I don't use Git to deploy to Pre-LIVE or LIVE. I normally have the following environments:

  • Development - your machine.
  • Development (integration) - where you integrate the develop branch from Git with the work from all developers. Downloading the code from GitHub can be done automatically (when a change is merged back into the develop branch) or manually.
  • QA - where you download code from GitHub's master branch when there is a new release. Your users can test the new release here without being bothered.
  • Pre-Production/Pre-LIVE - periodically overwritten with a copy of LIVE; this is where you try out and test applying your new release.
  • Production

GitFlow's hotfix may work for you, depending on your environment. Depending on the change and on the urgency, it can be a pain to actually test the fix on your development machine: your local globals may not match the storage definition of what is in production, because you may have been working on a new version of your classes with different global structures; you may need large amounts of data, or very specific data, to reproduce the problem on your developer machine; and so on. You can do it, but every hotfix will be a different workflow, and depending on the urgency you may simply not have the time to prepare your development environment with the data and conditions to reproduce the problem, fix it, and produce the hotfix.

On the other hand, as pre-production is a copy of LIVE, you can safely fix the problem there manually (forget GitHub), apply the change to LIVE, and then incorporate these changes into your next release. I think this is cleaner. Every time you have a problem in LIVE, you can investigate it on PRE-LIVE. If PRE-LIVE is outdated, you can ask Operations for an emergency unscheduled refresh of PRE-LIVE to work on it.

About patching

I recommend always creating your namespaces with two databases: one for CODE and another for DATA.
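For illustration, here is a minimal sketch of creating such a namespace programmatically; run it in the %SYS namespace, and note that all names and paths (APP, APP-CODE, APP-DATA, /db/...) are just examples:

    ZN "%SYS"
    // create the physical database files (paths are illustrative)
    Do ##class(SYS.Database).CreateDatabase("/db/app-code/")
    Do ##class(SYS.Database).CreateDatabase("/db/app-data/")
    // register them as databases in the instance configuration
    Set db("Directory") = "/db/app-code/"
    Do ##class(Config.Databases).Create("APP-CODE", .db)
    Set db("Directory") = "/db/app-data/"
    Do ##class(Config.Databases).Create("APP-DATA", .db)
    // create the namespace: routines/classes in CODE, globals in DATA
    Set ns("Routines") = "APP-CODE"
    Set ns("Globals") = "APP-DATA"
    Do ##class(Config.Namespaces).Create("APP", .ns)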

That allows you to implement patching with a simple copy of the CODE database: you stop the instance, copy the database, and start the instance. Simple as that. Every release may have an associated class with code to be run to rebuild indices, fix global structures that may have changed, do other kinds of maintenance, etc.
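As an example, a hypothetical release class of that kind (the class, index, and global names are made up):

    /// Hypothetical per-release maintenance class, run once after the
    /// new CODE database has been swapped in.
    Class App.Release.V12 Extends %RegisteredObject
    {

    ClassMethod Run() As %Status
    {
        // rebuild indices whose definitions changed in this release
        Set sc = ##class(App.Data.Patient).%BuildIndices()
        Quit:$$$ISERR(sc) sc
        // one-off fix-up of a global whose structure changed (illustrative)
        Set id = ""
        For {
            Set id = $Order(^App.SettingsD(id))
            Quit:id=""
            Set ^App.SettingsD(id, "timeout") = $Get(^App.SettingsD(id, "timeout"), 30)
        }
        Quit sc
    }

    }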

If you are using Ensemble and don't want to stop your instance, you can generate your patch as a normal XML package and a Word document explaining how to apply it. Test this on your Pre-LIVE environment. Fix the document and/or the XML package if necessary, and try again until patching works. Then run it on LIVE.
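Producing and applying the package itself can be done with the standard $system.OBJ tools; a minimal sketch (item and file names are illustrative):

    // on the build machine: export the changed items into one XML package
    Do $system.OBJ.Export("App.Service.cls,AppUtils.mac", "/patches/patch-117.xml")

    // on Pre-LIVE, and later on LIVE: load and compile the package
    Do $system.OBJ.Load("/patches/patch-117.xml", "ck")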

Before applying new releases or patches to PRE-LIVE or LIVE, take a full snapshot of your servers' virtual machines. If the patching procedure fails for some reason, you may need to roll back to that point in time. This is especially useful on PRE-LIVE, where you are still testing the patching procedure and will most likely break things until you get it right. Being able to quickly go back in time and try again, as many times as needed, gives you the freedom you need to produce a high-quality patching procedure.

If you can afford downtime, use it. Don't try to push a zero-downtime policy if you don't really need it; it will only make things unnecessarily complex and risky. You can patch Ensemble integrations without downtime with the right procedure, though. A microservices architecture may also help you eliminate downtime, but it is complex and requires a lot of engineering.

Using the External Service Registry

I recommend using the External Service Registry so that when you generate your XML project with the new production definition, no references to endpoints, folders, etc. are embedded in it. Even if you don't send your entire production class, this will help you with the periodic refreshing of the PRE-LIVE databases from LIVE. The External Service Registry stores the endpoint configurations outside your databases, so they can differ on LIVE, PRE-LIVE, QA, DEV, and on the developer's machine (which may be using local mock services, new versions of services elsewhere, etc.).
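To make the idea concrete, here is a hand-rolled sketch of the concept only (it is not the actual Service Registry API; take the real class names from the documentation): endpoints live in a store outside the application databases, and code looks them up by logical name at runtime.

    /// Concept sketch only; the registry global and this helper are
    /// made up, not the shipped External Service Registry API.
    ClassMethod GetEndpoint(pLogicalName As %String) As %String
    {
        // ^EndpointRegistry lives in a database that is NOT refreshed
        // from LIVE, so each environment keeps its own values, e.g.
        //   LIVE: ^EndpointRegistry("LabResults") = "https://lab.example.com/api"
        //   DEV:  ^EndpointRegistry("LabResults") = "http://mock-lab:8080/api"
        Quit $Get(^EndpointRegistry(pLogicalName))
    }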

FYI ... we will have several sessions covering this topic at the Global Summit - attend if you can, otherwise check out the material afterwards!

For internal application development within InterSystems we use a variety of approaches, but the most common is as follows:

1) We use an internally developed issue tracking system, but we plan to eventually migrate to JIRA.

2) We use Perforce for all of our source control.

3) We have BASE, TEST and LIVE environments for every application, with BASE and TEST typically cloned from VM snapshots of LIVE. In addition to the Shared BASE VM, for those applications undergoing the highest rate of change, developers will create a local copy of the application to do their development work. Some apps have all changes developed on Shared BASE, and the changes are progressed (via our Change Control tool) to TEST and then LIVE. For applications where developers use Private BASEs, they commit there, then push the changes to Shared BASE, and then to TEST and LIVE.

Feel free to ask questions (here or at Global Summit)!

Thanks for asking.

At J2 Interactive we have created a development and deployment system that we use internally and at most of our client locations. It ranges from teams of one on a local system to more than 100 developers across an enterprise, with most everything in between. The system has been around for a decade now and has evolved as Caché and Ensemble have.

It’s easier for me to take the questions out of order.

How do you keep track of large code bases? Thousands of folders named backup1, backup2, ..., SVN, Git?

The heart of the system is a Subversion (SVN) server that Caché Studio communicates with directly, using a custom hook library built on the %Studio.SourceControl framework provided natively in Caché. The developer gets additional context and top-level menu items that allow him or her to perform all the typical version control actions: update, check-in, revert, diff, etc. We based our system on SVN because it was important for us to have a cross-platform, license-free solution, and at the time of development SVN was the industry standard. It's still very robust, just not turning heads like it used to. ☺
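For the curious, the skeleton of such a hook looks roughly like this; the class name is a placeholder and the SVN plumbing is elided, but the callbacks come from the %Studio.SourceControl.Base framework itself:

    /// Stripped-down sketch of a Studio source-control hook built on
    /// %Studio.SourceControl.Base; only the shape is shown here.
    Class J2.Demo.SourceControl Extends %Studio.SourceControl.Base
    {

    /// Called before Studio loads an item: sync it from the SVN working copy
    Method OnBeforeLoad(InternalName As %String) As %Status
    {
        // svn update, then import the item's exported file (elided)
        Quit $$$OK
    }

    /// Called after Studio saves an item: export it so SVN sees the change
    Method OnAfterSave(InternalName As %String, Object As %RegisteredObject = {$$$NULLOREF}) As %Status
    {
        // export InternalName to its file, then svn commit (elided)
        Quit $$$OK
    }

    }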

It’s worth noting that many of our installations pre-date the rise of cloud repositories like GitHub and Beanstalk. But we still find today that it is usually a requirement to have an on-premises server owned and controlled by the organization doing the development.

Do you have a development server where you commit and test features, or do you rather run a local copy of Caché, implement features locally first, and then push to the server?

The second and more complex part of our solution is what we call our "Deployment" system. The deployment system drives code promotion through the local developer sandbox -> development -> test -> production systems, with room for customization. (Some organizations have multiple test and validation stages.) The system is Caché-project based: a developer collects one or more assets that make up a change, fix or feature and adds them to a Studio project. Those assets are then bundled and moved through the environments using a web-based tool running on the target Caché systems. A developer never touches the target system's code directly; instead, "agents" on the target use a combination of Subversion and COS commands to fetch and install.

These assets can be COS classes, routines, CSP files, schemas, Ensemble rules, DTLs, binary files and anything else one finds in the Caché ecosystem. In addition to the standard asset types we have also introduced a few specialty classes that we call "patch classes". These are COS classes that implement a patching interface, which allows the developer to do things like create/modify globals, manipulate Ensemble production settings, and perform arbitrary actions pre- and post-install. Each of these tasks can also be configured to behave differently depending on the target deployment environment. For example, the endpoint of a REST service may be the same for all the test environments but in prod need to go to a different location and use an SSL credential.
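Purely as an illustration of the patch-class idea (the method names and the environment parameter here are hypothetical, not a shipped API):

    /// Illustrative patch class run by a deployment agent around an
    /// install; the interface shown here is made up for this sketch.
    Class Deploy.Patch.RestEndpointFix Extends %RegisteredObject
    {

    ClassMethod OnBeforeInstall(pEnv As %String) As %Status
    {
        // e.g. quiesce a running Ensemble production before code swaps in
        Quit $$$OK
    }

    ClassMethod OnAfterInstall(pEnv As %String) As %Status
    {
        // environment-specific configuration, like the REST example above
        If pEnv = "PROD" {
            Set ^App.Config("RestEndpoint") = "https://partner.example.com/api"
            Set ^App.Config("SSLConfig") = "PartnerSSL"
        }
        Else {
            Set ^App.Config("RestEndpoint") = "http://test-host/api"
        }
        Quit $$$OK
    }

    }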

We have tailored the deployment system around the complexities introduced by updating live Ensemble and HealthShare systems. This includes taking into account things like running productions, default settings, and HealthShare-specific configuration items. The goal is that no external (or manual) configuration of downstream systems is needed; everything is handled through the deployment system.

Do you use an issue tracking / collaboration system? If so, which one? Any you would recommend or immediately dismiss based on personal experience?

This is a great question, as I find that we are still circling this issue ourselves. Between all of our developers and all of our clients, everyone has their own take on the SDLC, and even within the various disciplines of the Agile standards no one has the same idea of what is correct. We’ve used more tools than I can name or remember, ranging from simple open source ticket systems to full-blown Scrum on Jira with all the plugins you can throw at it. Most of the time the choice is best determined by what your team is familiar with and what resources they have, and different projects require different approaches. For example, I’m not afraid to say out loud that I believe building a traditional HL7 interface is more suited to waterfall-style development than Agile (gasp!).

Part of our deployment system is agnostic support for ticket numbers that link back out to whatever system a team is using, as well as a permissions system that controls how far a given developer can promote code. This lets you introduce gatekeepers and code reviewers. Right now we are working on integrating that with an online code review tool (we're exploring GitLab), as we are finding that the "let’s jump on a screen share for a review" process doesn’t scale the way we need it to. The ability to import/export COS code as UDL on newer installs is making this much easier than in the past.

We are at a very interesting point in the evolution of COS development right now, with the introduction of Atelier, all the side projects that have popped up, and an overall interest in bringing in the advances and culture of the new generation of DevOps tooling we see outside our community. We built the whole system described above out of necessity, and we joke about how much we look forward to throwing pieces of it away as the functionality becomes part of both the Caché toolset and the developer mindset! Thanks for kicking off this discussion; it's a good one.