
"IRIS-CoPilot" prototype - English (etc) as an IRIS language?

Keywords:  IRIS, Agents, Agentic AI, Smart Apps

Motive?

Transformer-based LLMs appear to be pretty good "universal logical-symbolic abstractors". They have started to bridge the long-standing abyss between human languages and machine languages, which in essence are all systems of logical symbols that can be mapped into the same vector space.

Objective?

For about three years I have been wondering whether, one day, we might be able to use English (or other human natural languages) to do IRIS implementations as well.

Possibly tomorrow all machines, software and apps will be "intelligent" enough to interact with users in any human language to achieve the desired outcomes. And that tomorrow is likely today's tomorrow, not tomorrow's tomorrow.

Research?

Research indicates that LLMs are still likely probabilistic sequence models that internalise statistical approximations of symbol patterns, rather than actually implementing formal symbolic logic. While CoT and similar techniques produce outcomes that statistically emulate structured reasoning and can act as an abstraction layer between human language and machine actions, they *may not* manifest logically grounded inference, and they remain limited by statistical mimicry, shallow heuristics and the absence of semantic grounding. That said, we don't yet have a theory of how to measure "intelligence", or even whether it is a single measurable thing, so we don't really understand LLMs' theoretical boundaries and limits that well anyway.

Evidences?

Today's "vibe coding" tools already use human languages to drive software lifecycle implementations. But what if people don't even want to use vibe-coding tools or Visual Studio - they just want to speak to IRIS directly and get things "done"? And how would clinical quality, enterprise governance and other BAU controls be automatically enforced? The IRIS-CoPilot app is just a prototype, an initial demo towards the vision above.

Prototype ideas?

https://github.com/zhongli1990/iris-copilot#iris-copilot

Human natural language-driven agentic AI platform for IRIS implementation lifecycles. This prototype is built for NHS Trust integration delivery: users describe clinical integration requirements in natural language; Copilot designs and generates IRIS artifacts, and deployment is executed only after explicit human approval.
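The "deploy only after explicit human approval" rule above can be sketched as a tiny state machine. This is a hypothetical illustration - the type and function names here are mine, not the repo's - showing that deployment can only ever run on an explicitly approved artifact:

```typescript
// Illustrative human-in-the-loop approval gate (names are assumptions).
type Stage = "proposed" | "approved" | "rejected" | "deployed";

interface ProposedArtifact {
  name: string;   // e.g. an IRIS class Copilot wants to create
  source: string; // generated ObjectScript source
  stage: Stage;
}

// A human decision moves a proposal to approved or rejected.
function review(a: ProposedArtifact, humanApproved: boolean): ProposedArtifact {
  if (a.stage !== "proposed") return a;
  return { ...a, stage: humanApproved ? "approved" : "rejected" };
}

// Deployment refuses anything that is not explicitly approved.
function deploy(a: ProposedArtifact): ProposedArtifact {
  if (a.stage !== "approved") {
    throw new Error(`refusing to deploy artifact in stage "${a.stage}"`);
  }
  return { ...a, stage: "deployed" };
}
```

The point of the sketch is the invariant, not the types: no generated artifact reaches the IRIS namespace without a human decision in between.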

Design?

https://github.com/zhongli1990/iris-copilot#architecture

  • CSP Chat UI: AIAgent.UI.Chat.cls
  • IRIS backend REST APIs: AIAgent.API.Dispatcher
  • IRIS backend: CoPilot Orchestrator/engine services in IRIS
  • Node.js bridge adapters for:
    • Claude Agent SDK
    • OpenAI Codex (standard API runner)
    • OpenAI Codex SDK runner
    • Azure OpenAI (to be added)
    • Google Gemini (to be added)
    • LiteLLM gateways (on the roadmap)
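One way to read the adapter list above: the bridge exposes one common interface, with one adapter per provider behind it. The interface and registry below are my own illustrative sketch (the stub adapters stand in for the real SDK-backed runners), not the repo's actual API:

```typescript
// Hypothetical common shape for the bridge's provider adapters.
interface AgentAdapter {
  provider: string;
  run(prompt: string): Promise<string>;
}

// Stubs standing in for the real Claude / Codex SDK runners.
const claudeAdapter: AgentAdapter = {
  provider: "claude",
  run: async (prompt) => `[claude] ${prompt}`,
};
const codexAdapter: AgentAdapter = {
  provider: "codex",
  run: async (prompt) => `[codex] ${prompt}`,
};

// Registry keyed by provider name, as named in an incoming request.
const registry = new Map<string, AgentAdapter>(
  [claudeAdapter, codexAdapter].map((a) => [a.provider, a])
);

// The bridge dispatches to the requested adapter; unknown providers fail fast.
async function dispatch(provider: string, prompt: string): Promise<string> {
  const adapter = registry.get(provider);
  if (!adapter) throw new Error(`unknown provider: ${provider}`);
  return adapter.run(prompt);
}
```

Keeping the interface this thin is what makes "to be added" providers (Azure OpenAI, Gemini, LiteLLM) cheap to bolt on later.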

Deployment?

https://github.com/zhongli1990/iris-copilot?tab=readme-ov-file#1-deploy-...

A few very simple deployment steps on any laptop: 

0. Git clone this repo into a local working path on the IRIS server:  git clone https://github.com/zhongli1990/iris-copilot

1. Identify an existing IRIS namespace that you want the agent to have access to.

2. Import this IRIS-CoPilot package via Studio/Terminal, for example:  https://github.com/zhongli1990/iris-copilot/blob/main/deploy/AIAgent-exp...

3. Create a REST web app in IRIS Management Portal:  `/ai` for REST APIs (dispatch class `AIAgent.API.Dispatcher`)

4. Start the external Node.js bridge, which acts as a REST adapter for the OpenAI Codex, Claude Code etc. intelligence agents.

I am running Node.js v24.13.0 on a Windows 10 laptop, so I didn't use Docker. I will dockerise it onto an Ubuntu demo server later.

cd <working path>/AIAgent/bridge
npm install
npm run build
npm start

5. Configure keys and runner settings in:

  • bridge/.env (local - add in your OpenAI and/or Claude API keys)
  • bridge/.env.example (template)
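A minimal sketch of how the bridge might validate these settings at startup. The variable names below (OPENAI_API_KEY, ANTHROPIC_API_KEY, BRIDGE_PORT) are common conventions, not confirmed from the repo - check bridge/.env.example for the actual names:

```typescript
// Hypothetical startup validation for the bridge's runner settings.
interface BridgeConfig {
  openaiKey?: string;
  anthropicKey?: string;
  port: number;
}

function loadConfig(env: Record<string, string | undefined>): BridgeConfig {
  const cfg: BridgeConfig = {
    openaiKey: env.OPENAI_API_KEY,
    anthropicKey: env.ANTHROPIC_API_KEY,
    port: Number(env.BRIDGE_PORT ?? 3100), // assumed default port
  };
  // At least one provider key is needed for any runner to work.
  if (!cfg.openaiKey && !cfg.anthropicKey) {
    throw new Error("set at least one API key in bridge/.env");
  }
  return cfg;
}
```

Failing fast here, rather than at the first chat request, makes a missing key obvious in the bridge console.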


Demo?

1. Health checks:

  • IRIS API: http://localhost:52773/ai/health
  • Bridge: http://localhost:3100/api/health
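The two health endpoints above can be probed from a small script before opening the chat UI. This helper is an illustrative sketch (it assumes Node.js 18+ for the global fetch, and that your ports match the defaults above):

```typescript
// Returns true only if the service answers with an HTTP 2xx/3xx-ok status.
async function isHealthy(url: string): Promise<boolean> {
  try {
    const res = await fetch(url); // global fetch ships with Node.js 18+
    return res.ok;
  } catch {
    return false; // service not reachable (wrong port, not started, etc.)
  }
}

// Usage: check IRIS first, then the bridge.
// await isHealthy("http://localhost:52773/ai/health");
// await isHealthy("http://localhost:3100/api/health");
```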

2. Open the CSP Chat UI page:  http://<iris-host>:<port>/csp/<namespace>/AIAgent.UI.Chat.cls

      For example: http://localhost:52773/csp/healthshare/demo2_ai2/AIAgent.UI.Chat.cls

3. Demo Chat UI when it's running:

4. Local CMD console for the bridge



Test report?

https://github.com/zhongli1990/iris-copilot/blob/main/docs/REALWORLD-EVA...

I created 34 demo queries in the test script, following a typical lifecycle of NHS TIE implementation tasks. The above is a quick run of the test report.

Below are the actual sample queries and the actual response to each, using LLM-as-a-Judge to determine Pass or Fail:

https://github.com/zhongli1990/iris-copilot/blob/main/docs/REALWORLD-LIF...

The failed test cases are also there for illustration purposes - they failed simply because I haven't yet given them sufficient tools and resource access.
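For readers unfamiliar with the LLM-as-a-Judge step: a second model is shown the query and the response and asked for a strict verdict, which is then parsed into Pass/Fail. The prompt wording and the parsing rule below are my own assumptions for illustration, not the repo's exact code:

```typescript
// Illustrative LLM-as-a-Judge plumbing (prompt text and rules are assumed).
interface JudgeCase {
  query: string;    // the demo query sent to IRIS-CoPilot
  response: string; // the agent's actual response
}

function buildJudgePrompt(c: JudgeCase): string {
  return [
    "You are grading an IRIS integration assistant.",
    `Query: ${c.query}`,
    `Response: ${c.response}`,
    "Reply with exactly PASS or FAIL.",
  ].join("\n");
}

// Tolerant parse: accept PASS only when FAIL is absent; default to FAIL.
function parseVerdict(reply: string): "PASS" | "FAIL" {
  return /\bPASS\b/i.test(reply) && !/\bFAIL\b/i.test(reply) ? "PASS" : "FAIL";
}
```

Defaulting ambiguous replies to FAIL keeps the judge conservative, which matters when the report is used as evidence of clinical-grade behaviour.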

Next Actions?

This demo app is more about conveying the ideas. It's a lightweight implementation of agent wrappers - one of the design principles, since LLMs and Agent SDKs are evolving rapidly - we hope to rise with the tide rather than get stuck in any hard-coded LangGraph-style workflows.

Next actions could be:

 1. Make the agents more generic, aiming at real human-engineer tasks along daily implementation lifecycles.

 2. Embed the CSP Chat UI page better within the IRIS Management Portal, which would be more convenient.

 3. An IRIS-native agent SDK instead of the current agent runners? (Again, it should stay lightweight and future-compatible.)

 4. Add Skills and Hooks placeholders to automatically enforce enterprise QA, governance and compliance per site-specific policies?

 5. OK, how about a self-evolving software/system: the user/clinician/engineer sets the targets, and the application just starts building and refining itself via RL etc. loops, just consuming tokens. The engineer would then manage the agents like managing a production line, rather than manually manufacturing each specific product on the line.

Disclaimer:

    Prototype in progress - initial versions for bouncing ideas around.

    Rushed through in some spare time, so pardon me if some thoughts are still being shaped.
