Article
· Apr 16 · 5 min read

Thoughts on Coding with GenAI

Thirteen years ago, I earned dual undergraduate degrees in electrical engineering and math, then promptly started full-time at InterSystems using neither. One of my most memorable and stomach-churning academic experiences was in Stats II. On an exam, I was solving a moderately difficult confidence interval problem. I was running out of time, so (being an engineer) I wrote out the definite integral on the exam paper, punched it into my graphing calculator, wrote an arrow with “calculator” over it, then wrote the result. My professor, affectionately known as “Dean, Dean, the Failing Machine,” called me to his office a few days later. He did not take kindly to my graphing calculator use. I found this unreasonable – this was, after all, Stats II, not Calc II, and I did the Stats II part right… right? As it turned out, writing “calculator” over that arrow earned me zero credit for the question and a chuckle from Dean; if I had omitted it, I would have gotten zero credit for the whole test. Yikes.

I’ve thought back to this event quite a bit recently. Generative AI makes my TI-89 Titanium – plastered with star stickers from top marks in high school math classes and modded to play Tetris – look like an overpriced plastic brick. To be fair, it was an overpriced plastic brick 20 years ago, too.


My trusty old TI-89 Titanium

Risks and Rules

With the advent of actually-good AI-enabled integrated development environments (IDEs), there’s the potential for AI to do the average entry-level software developer’s job… right? If that entry-level software developer is using Python – or some other sufficiently mainstream language that the models have been trained on – then yes. And if you’re comfortable with the technical debt of mountains of code that might be utter garbage – or, even worse, mostly good with little snippets of utter garbage hidden away – then yes. And if you have some mystical means to advance entry-level software developers into principal/architect-level roles without requiring them to write any code, then yes. That’s a lot of caveats, and we need some rules to mitigate these risks.

Note: these risks are independent of the copyright/intellectual property concerns, which cut both ways: are we infringing on copyrighted code in the training dataset by using the output of GenAI? Are we risking our own intellectual property by sending it out to the cloud? For this article, we assume both of those are covered by our choice of model/service provider, but they are major driving concerns at the corporate level.

Rule #1: Scope of Work

Don’t use GenAI to do something you couldn’t do yourself – anything beyond, or even near, the limits of your current understanding and capacity. To revisit the original illustration: if you’re in Stats II, you can use GenAI to do Calc II – but not Stats II, probably not Stats I, and definitely not Measure Theory. This means that if you’re an intern or entry-level developer, you shouldn’t have it do any of your work. Using GenAI as a Google-search-on-steroids is totally fine, and using an intelligent autocomplete might be OK; just don’t have it write actual greenfield code for you. If you’re a senior developer, use it to do entry-level developer work in technologies where you have senior-level proficiency; think of it as a similar delegation, and review the code as if it were written by an entry-level developer. My AI-assisted software development experience has been with Windsurf, which I like in this regard: I’ve been able to coach it a bit, giving it rules and advice to remember, follow, and (like an entry-level developer) occasionally apply in the wrong context.
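This kind of coaching can live in a project-level rules file that the assistant reads on every request. Windsurf supports persistent rules; the file name and every rule below are an illustrative sketch, not verbatim from my setup:

```text
# Project rules file (e.g., .windsurfrules – name and contents illustrative)
- Prefer existing utilities in this repository over adding new dependencies.
- Follow the project's existing naming and error-handling conventions.
- Never modify generated or vendored files.
- If requirements are ambiguous, ask a clarifying question instead of guessing.
```

As with advice given to an entry-level developer, expect these rules to be followed most of the time and misapplied occasionally.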

Rule #2: Attribution

If GenAI writes a big chunk of code for you, make sure that the commit message, the code itself, and any associated human-readable documentation are very clear about that. Make it obvious: I didn’t write this, a computer did. This is a service to those reviewing your code: they should treat it as written by an entry-level developer, not you, and as part of the review should question whether AI is doing things beyond your technical depth (which could be wrong – and you wouldn’t know). It is a service to those who look at your code in the future and try to determine whether it’s garbage. And it is a service to those who try to train future AI models on your code, helping them avoid model collapse and “irreversible defects in the resulting models.”
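What might that look like in practice? Here is one sketch of a commit message using Git’s trailer convention – the `Co-authored-by:` trailer is an established Git/GitHub convention, but the specific wording and the assistant’s “email” below are hypothetical, not a standard:

```text
Add CSV export endpoint

The export loop was generated by an AI assistant and reviewed
line-by-line by a human before commit.

Co-authored-by: Windsurf <ai-assistant@example.invalid>
```

A matching comment at the top of the generated block serves the same purpose for readers who never look at the commit history.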

Rule #3: Learning Mode and Reinforcement

When you’re in college taking Stats II, there’s some value in reinforcing the skills learned in Calc II. I’ve forgotten most of both now, to be honest, due to disuse. Maybe “Dean, Dean, the Failing Machine” was right after all. In situations where your main objective is to learn new things or to reinforce your existing, new-ish (or years-old!) skills, doing all the work yourself is the best way to go. Rapid prototyping is an exception (though Rules #1 and #2 still apply!), but even in regular daily work it would be harmful to become overly reliant on GenAI performing “entry-level developer” tasks, personally or organizationally. We will always need entry-level developers, because we will always need principal developers and architects, and the path between the two is a continuous function. (There, I do remember some math things!)

This is something I’d love to see as an IDE feature: toggle “learning mode” where AI watches/mentors you rather than doing any of your work for you. Absent some tailored implementation in software, you can also choose to use AI in this way.

Rule #4: Reflection

Don’t get lost in the churn of work and deliverables and meetings. Take time to reflect. This is important in general, and important in use of GenAI specifically.

  • Is this technology making my life better, or worse?
  • Is it making me smarter, or dumber?
  • Am I producing more value, or just more output?
  • Does that output include technical debt incurred for the sake of expediency?
  • Am I learning more effectively, or forgetting how to learn?
  • How do AI’s solutions compare to the ones I have in mind? Am I overlooking things that AI catches? Does AI systematically overlook things that are important to me?
  • Am I becoming like ChatGPT, soullessly and randomly introducing bullet pointed lists and bolded text into documents I write? (Oh no!!)

A Final Thought: GenAI and InterSystems IRIS-Based Development

In a world where developers expect GenAI to be able to do their work for them – or at least make it much easier – one of two things can happen:

  • Dominant technologies and languages (see: Python) can become super-dominant, even if they aren’t the best for the task at hand. Since GenAI is so good at Python, why use anything else?
  • Niche technologies and languages (see: ObjectScript) can become more palatable to developers. Learning a new language isn’t as rough if GenAI can help you ramp up quickly and do things right.

My hope is that, as vendors and software development leaders realize the risks I’ve outlined, tools will tend toward supporting the latter outcome – which is an opportunity for InterSystems. Yes, people can just use Embedded Python for everything, but our technological legacy and core platform strengths may become more palatable as well, and ObjectScript can get the love it deserves.
