
iris-pgwire: Building Software Sanely with AI and Specifications

The Rut

Up until early this year, I hadn't been doing much coding at all -- I had gotten sick of it.

After many years as a hands-on software engineer and data scientist, I got burned out around 2015. I switched to business development roles focused on "external innovation," then joined InterSystems in 2019 as a product manager. I missed the creative aspects of coding, but not the tedium. The endless cycle of boilerplate, debugging, and context-switching had left me creatively depleted. Like Jim Carrey's character in Yes Man, I found myself saying "no" to new projects -- so much so that I had switched careers!

Then AI coding assistants arrived. And I became a "Yes Man" -- for bots, that is.


Act I: Exuberance ("Yes to Everything!")

When I first started using AI coding assistants (Windsurf, then Cline, then Roo Code, now Claude Code and flirting with opencode), it felt like magic. Natural language → working code. I said "yes" to every suggestion, every refactor, just about every wild idea.

My first major AI-assisted project was an internal one I started a few months ago - a collection of Python scripts and pipelines for IRIS. I was so excited I let the bot run wild:

"Add this feature!" Yes!
"Refactor that module!" Yes!
"Make it configurable!" Yes!
"Add more integrations!" Yes!

The creative energy was back. Code was flowing. I felt productive again.

Then my intern - a software engineering major - looked over the codebase.

He was NOT impressed.

Though I had implemented several complete modules and pipelines, only some of them "really worked" -- the tests were passing, but they leaned heavily on mocks instead of real database queries. In many cases, "fast iteration" was, admittedly, "AI slop": inconsistent patterns, duplicated logic, questionable architectural decisions. The bot had said "yes" to everything I asked, but nobody was saying "no" to bad ideas, or "wait, let's think about this first."


Act II: The Trough of Reality

That intern review was a wake-up call. Like Jim Carrey learning that saying "yes" to everything creates chaos, I had to face the downsides:

  • Hallucinations: The bot confidently generated code for APIs that didn't exist -- easy to spot, but annoyingly frequent and time-consuming to debug.
  • Context drift: Long sessions lost track of architectural decisions.
  • Quality variance: Some outputs were brilliant; others needed complete rewrites.
  • The "Yes, and don't..." dance: Every prompt became "Yes, add this feature... and don't break what we did yesterday... and don't forget that thing I mentioned three hours ago..." -- plus many exclamation points and ALL CAPS as I tried to communicate the severity of the issue ;)

I had to admit that I was spending more time managing the bot than it was worth. The exuberance phase had ended, and I wasn't alone in being disillusioned. I needed a different approach, a course correction.


Act III: Enter spec-kit ("Taming the Beast")

I came to realize: I needed a system, not just a bot.

That's when I discovered spec-kit - a code-assistant-agnostic workflow that transforms how you interact with AI assistants. (AWS's Kiro is another flavor of spec-driven AI-assisted development - the pattern is emerging across the industry.) Instead of freeform "yes," I now have structured specifications:

The Workflow

/specify → /clarify → /plan → /tasks → /implement

Each step produces artifacts that survive context windows:

Command      Output         Purpose
/specify     spec.md        User stories, requirements, acceptance criteria
/clarify     Updated spec   Resolve ambiguities before coding
/plan        plan.md        Implementation strategy, architecture decisions
/tasks       tasks.md       Ordered task list with dependencies
/implement   Code + tests   Actual implementation

What Changed

Before spec-kit:

"Add Open Exchange support... no wait, don't auto-start the server... actually, check if module.xml exists first... and make sure the tests pass..."

After spec-kit:

/specify make this an Open Exchange package

The system asks clarifying questions, documents decisions, and generates implementation plans. When I picked "manual start" vs "auto-start" during /clarify, that decision was encoded into the spec and carried through to implementation.
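For a sense of what that looks like, here's an illustrative excerpt of how such a decision gets captured in spec.md (hypothetical wording, not the actual file):

## Clarifications
- Q: Should the server auto-start after ZPM installation?
  A: No -- manual start. Users control when the port opens.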


The Proof: IRIS PGWire

IRIS PGWire is my (slightly late) Christmas gift to the InterSystems developer community. It's a PostgreSQL wire protocol server that lets you connect nearly any PostgreSQL client to IRIS.

Don't get me wrong - InterSystems has excellent, production-grade drivers: high-performance xDBC, native DB-API, and soon an officially supported SQLAlchemy adapter. These are the right choice for production systems where you are in control of the stack and can ensure that your application is secure, performant, and reliable.

But iris-pgwire isn't about replacing those. It's about possibilities. It's about that BI tool your team wants to try that only supports PostgreSQL connection strings. It's about experimenting with a new ORM without waiting for official support. It's about saying "yes" to tools that don't have IRIS drivers - and having it just work.

Plus, it's a lot of fun:

  • psql, DBeaver, Superset, Metabase, Grafana - zero configuration
  • psycopg3, asyncpg, node-postgres, Npgsql - 171 tests across 8 languages
  • pgvector syntax - use <=> for cosine similarity, <#> for dot product
# Quick Start (Option 1: Docker)
git clone https://github.com/intersystems-community/iris-pgwire.git
cd iris-pgwire
docker-compose up -d

# Quick Start (Option 2: PyPI)
pip install iris-pgwire
iris-pgwire  # Start the server

# Connect with any PostgreSQL client
psql -h localhost -p 5432 -U _SYSTEM -d USER
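The same connection works from driver code. Here's a minimal sketch using psycopg (v3), assuming the server is running locally with the default _SYSTEM/SYS credentials:

# Minimal psycopg (v3) sketch -- assumes iris-pgwire is running on localhost:5432
import psycopg

with psycopg.connect("host=localhost port=5432 user=_SYSTEM password=SYS dbname=USER") as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT 'Hello from IRIS!' AS message")
        print(cur.fetchone()[0])  # -> Hello from IRIS!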

Quick Demo: From Zero to Analytics

Once your container is up, you’re not just connected to a database—you’re connected to an ecosystem.

1. The Classic Handshake

psql -h localhost -p 5432 -U _SYSTEM -d USER -c "SELECT 'Hello from IRIS!' as message"

2. Create Sample Data

-- Create a table and insert some data
CREATE TABLE public.Patients (
    id INTEGER PRIMARY KEY,
    name VARCHAR(100),
    category VARCHAR(50)
);

INSERT INTO public.Patients VALUES (1, 'John Doe', 'Follow-up');
INSERT INTO public.Patients VALUES (2, 'Jane Smith', 'New Patient');
INSERT INTO public.Patients VALUES (3, 'Bob Johnson', 'Follow-up');

3. Standard SQL, IRIS Power

-- This runs on IRIS, but feels like PostgreSQL
SELECT COUNT(*) FROM public.Patients WHERE category = 'Follow-up';
-- Returns: 2

4. The "Killer Feature": Vector Search
IRIS PGWire translates PostgreSQL pgvector syntax into native IRIS vector functions:

-- Create a table with vector embeddings
CREATE TABLE documents (
    id INTEGER PRIMARY KEY,
    content VARCHAR(500),
    embedding VECTOR(DOUBLE, 3)
);

-- Query with pgvector operators (translated to IRIS automatically)
SELECT id, content
FROM documents
ORDER BY embedding <=> TO_VECTOR('[0.1, 0.2, 0.3]', DOUBLE)
LIMIT 5;
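From Python, the same pgvector-style query runs through an ordinary driver. A sketch with psycopg, assuming the documents table above exists and has rows:

# Sketch: running the pgvector-style similarity query via psycopg,
# with the query embedding bound as an ordinary parameter
import psycopg

query_vec = "[0.1, 0.2, 0.3]"  # embedding serialized as a pgvector-style literal
with psycopg.connect("host=localhost port=5432 user=_SYSTEM password=SYS dbname=USER") as conn:
    rows = conn.execute(
        "SELECT id, content FROM documents "
        "ORDER BY embedding <=> TO_VECTOR(%s, DOUBLE) LIMIT 5",
        (query_vec,),
    ).fetchall()
    for doc_id, content in rows:
        print(doc_id, content)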

The "Impossible" Connection: No IRIS Driver? No Problem.

This isn’t just about making things easier—it’s about making things possible.

Take Metabase Cloud or Prisma ORM.

  • Metabase Cloud is a beautiful, managed BI tool. You can’t upload an IRIS JDBC driver to their cloud servers. You are limited to their pre-installed list.
  • Prisma is the standard ORM for modern TypeScript developers. It uses a custom engine that doesn’t (yet) speak IRIS.

Without a wire protocol adapter, these tools are locked out of your IRIS data. With IRIS PGWire, they just see a high-performance PostgreSQL database.

Demo: Prisma with InterSystems IRIS

Just point your schema.prisma at the PGWire port:

datasource db {
  provider = "postgresql"
  url      = "postgresql://_SYSTEM:SYS@localhost:5432/USER"
}

Now you can use Prisma’s world-class CLI and type-safety:

npx prisma db pull
npx prisma generate

Built with Structured AI Collaboration

What makes this project interesting isn't just the code - it's how it was built. The specs/ directory contains 31 feature specifications documenting the entire development journey:

specs/
├── 001-postgresql-wire-protocol/    # Where it all began
├── 002-sql-query-processing/        # Query translation layer
├── 003-iris-integration-layer/      # IRIS backend connection
├── ...
├── 006-vector-operations-pgvector/  # AI/ML vector support
├── ...
├── 012-client-compatibility-testing/ # 8-language test matrix
├── ...
├── 019-async-sqlalchemy-based/      # FastAPI integration
├── ...
├── 027-open-exchange/               # Package publication
├── 030-pg-schema-mapping/           # PostgreSQL schema compatibility
└── 031-prisma-catalog-support/      # ORM introspection support
    ├── spec.md                      # Feature requirements
    ├── plan.md                      # Implementation strategy
    └── tasks.md                     # Task breakdown

Each feature started as a natural language description like:

"PostgreSQL Wire Protocol Foundation - SSL/TLS handshake, authentication, session management, and basic protocol compliance"

And became a structured specification with user stories, acceptance criteria, and [NEEDS CLARIFICATION] markers for decisions that required human judgment.

The Evolution:
- Spec 001: "Can we make PostgreSQL clients talk to IRIS?"
- Spec 006: "What about vector search and AI workloads?"
- Spec 019: "FastAPI developers need async support"
- Spec 027: "Let's share this with the world"
- Spec 031: "Can Prisma ORM introspect IRIS schemas?"

The result: Over 100 tests passing across 8 programming languages. Ready to use.

Nota Bene: On AI "Slop" and Hallucination

Can I guarantee you won't find "AI slop" in this repo (or even in this article, for that matter)? Absolutely not - the funny thing about LLMs is that hallucination and inaccuracy are pretty much baked into how they work!

Research shows that hallucinations in large language models arise from their fundamental architecture - the way transformers predict next tokens based on subsequence associations. As OpenAI's recent paper explains, language models hallucinate because training procedures reward guessing over acknowledging uncertainty.

So for now, no matter how much you try to "tame the beast," you will be fighting a natural tendency to stray from the facts. This generative, creative capacity is part of what makes modern AI powerful -- and somewhat ironically, the first "killer app" for LLMs, IMHO, is coding assistants.

Why? Because "agentic" loops that iteratively verify the correctness of what is generated can converge on useful, working code. As Eno Reyes from Factory AI emphasized at the AI Engineer Code Summit: "verification over specification" - stop telling agents exactly how to solve problems and instead tell them what correct looks like.
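To make "verification over specification" concrete, here's an illustrative Python sketch (not from any particular framework) of the shape of such a loop - generate, run the tests, feed failures back:

# Illustrative sketch of an agentic verify loop: the agent is any callable that
# (re)generates code from feedback, and the test suite is the oracle
import subprocess

def verify_loop(agent, max_iters=5):
    feedback = ""
    for _ in range(max_iters):
        agent(feedback)  # agent edits code, guided by prior test failures
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if result.returncode == 0:
            return True  # tests green: verified
        feedback = result.stdout + result.stderr  # failures drive the next attempt
    return False  # give up after max_iters and escalate to a human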

The Anthropic team's Agent Skills approach takes this further with progressive disclosure - building reusable skills with built-in verification rather than trying to specify every detail upfront.

The bottom line: Specifications help immensely, but they're not magic. Tests, verification, and iterative refinement remain essential when working with AI assistants.

AND if you're feeling a bit overwhelmed by all the changes in the world of software development, you're not alone - even the preeminent AI guru Andrej Karpathy, who ran Tesla's self-driving group, can't keep up!


What I Learned

1. Specifications are Force Multipliers

A 30-minute investment in /specify and /clarify saves hours of debugging and rework. The bot doesn't have to guess your intent when it's documented.

2. Clarification Questions Reveal Gaps

When spec-kit asked "Should the server auto-start after ZPM installation?", I realized I hadn't decided. That one question prevented a design mistake that would have affected every user.

3. The Spec is the Source of Truth

When context windows overflow or sessions restart, the spec survives. The bot can read spec.md and get back to work without re-explaining everything.

4. Test-First Still Matters

Every user story in the spec maps to acceptance criteria. Every acceptance criterion maps to a test. The bot doesn't "forget" to write tests because they're required by the spec.


Try It Yourself

IRIS PGWire

  • GitHub: https://github.com/intersystems-community/iris-pgwire

spec-kit

  • GitHub: https://github.com/github/spec-kit
  • Usage: Add it to your Claude Code project at the command line with specify init --here, then in Claude Code run /specify <what you want to build>
  • Alternative: Kiro by AWS - similar spec-driven approach in a full IDE

The Balance

I'm no longer a "Yes Man" for bots. I'm not saying "no" either.

I'm saying: "Yes, with structure."

The creative energy is back. The tedium is managed by the bot. But the specifications ensure we're building the right thing, the right way.

Happy Holidays from InterSystems. May your prompts be clear and your tests be green.

