Jul 2, 2023 · 4m read

LangChain – Unleashing the full potential of LLMs


Hi Community

In this article, I will introduce my application irisChatGPT, which is built on the LangChain framework.

First of all, let us have a brief overview of the framework.

The entire world is talking about ChatGPT and how Large Language Models (LLMs) have become so powerful, performing beyond expectations and delivering human-like conversations. This is just the beginning of how this can be applied to every enterprise and every domain!

The most important question that remains is how to apply this power to domain-specific data and scenario-specific response behavior suitable to the needs of the enterprise. 

LangChain provides a structured and effective answer to this problem! LangChain can help realize the immense potential of LLMs by providing a layer of abstraction around them, making them easy and effective to use for building astounding applications. In short, LangChain is a framework that enables quick and easy development of applications that make use of Large Language Models, for example GPT-3.

The framework also introduces additional possibilities, for example easily using external data sources, such as Wikipedia, to amplify the capabilities of the model. You have probably tried ChatGPT and found that it fails to answer questions about events that occurred after a certain date. In such cases, a Wikipedia search can help the model answer more questions.
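The idea behind augmenting a model with external data can be sketched framework-free: fetch some external text and splice it into the prompt before sending it to the model. The sketch below is illustrative only; `wikipedia_lookup` is a stub standing in for a real Wikipedia API call, and all names are invented for this example.

```python
# Framework-free sketch of augmenting a prompt with external data.
# wikipedia_lookup is a stub standing in for a real Wikipedia API call.

def wikipedia_lookup(topic: str) -> str:
    # A real application would query the Wikipedia API here.
    stub_articles = {
        "LangChain": "LangChain is a framework for developing "
                     "applications powered by language models.",
    }
    return stub_articles.get(topic, "")

def build_augmented_prompt(question: str, topic: str) -> str:
    # Splice the retrieved context into the prompt so the LLM can
    # answer questions its training data does not cover.
    context = wikipedia_lookup(topic)
    return ("Use the context below to answer the question.\n"
            f"Context: {context}\n"
            f"Question: {question}")

prompt = build_augmented_prompt("What is LangChain?", "LangChain")
```

LangChain packages exactly this pattern (and much more) behind its tools and agents, so you rarely write the plumbing yourself.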

LangChain Structure

The framework is organized into six modules; each module lets you manage a different aspect of the interaction with the LLM. Let’s see what the modules are.

  • Models: Allows you to instantiate and use three different types of language models:
    • Large Language Models (LLMs): foundational machine learning models that are able to understand natural language. They accept a string as input and generate a string as output.
    • Chat Models: models backed by an LLM but specialized for chatting with the user.
    • Text Embedding Models: models used to project textual data into a geometric space. They take text as input and return a list of numbers, the embedding of the text.
  • Prompts: The prompt is how we interact with the model to obtain an output from it. Nowadays, knowing how to write an effective prompt is critically important. This module lets us manage prompts better, for example by creating templates that we can reuse.
  • Indexes: The best models are often those that are combined with some of your textual data, in order to add context or explain something to the model. This module helps us do just that.
  • Chains: Often a single API call to an LLM is not enough to solve a task. This module allows multiple tools to be concatenated in order to solve complex tasks. For example, one chain can first fetch information from Wikipedia and then give this information to the model as input.
  • Memory: This module allows us to create a persisting state between calls of a model. Being able to use a model that remembers what has been said in the past will surely improve our application.
  • Agents: An agent is an LLM that makes a decision, takes an action, makes an observation about what it has done, and continues in this manner until it can complete its task. This module provides a set of agents that can be used.
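To get a feel for how the Prompts and Chains modules fit together, here is a framework-free sketch of the two ideas: a reusable prompt template and a chain that pipes the formatted prompt into a model. This is illustrative only and does not use LangChain's actual API; `FakeLLM` stands in for a real model call, and all class names are invented for this example.

```python
# Illustrative sketch of two LangChain ideas: a reusable prompt
# template and a chain of steps. No LangChain API is used here;
# FakeLLM stands in for a real model call.

class PromptTemplate:
    """A parameterized string we can reuse across calls."""
    def __init__(self, template: str):
        self.template = template

    def format(self, **kwargs) -> str:
        return self.template.format(**kwargs)

class FakeLLM:
    """Stand-in for a real LLM call (e.g. to OpenAI)."""
    def __call__(self, prompt: str) -> str:
        return f"[model answer to: {prompt}]"

class SimpleChain:
    """Formats the prompt, then feeds it to the model."""
    def __init__(self, template: PromptTemplate, llm: FakeLLM):
        self.template, self.llm = template, llm

    def run(self, **kwargs) -> str:
        return self.llm(self.template.format(**kwargs))

chain = SimpleChain(
    PromptTemplate("Summarize this for a developer: {text}"),
    FakeLLM(),
)
result = chain.run(text="LangChain has six modules.")
```

In real LangChain code the template, model, and chain objects play these same roles, but the framework handles retries, streaming, memory, and tool use for you.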

Now let’s go into a little more detail and see how to implement code by taking advantage of the different modules.

How LangChain works

Step 1:
The user sends the question to LangChain.

Step 2:
LangChain sends this question to the embedding model.

Step 3:
The embedding model converts the question text to a vector (text is stored as vectors in the database) and returns it to LangChain.

Step 4:
LangChain sends this vector to the vector database (there are multiple vector databases; we are using Chroma in our application).

Step 5:
The vector database returns the top-K nearest-neighbor (KNN) vectors.

Step 6:
LangChain sends the question along with the KNN vectors to the Large Language Model (we are using OpenAI in our application).

Step 7:
The LLM returns the answer to LangChain.

Step 8:
LangChain returns the answer to the user.
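Steps 3 to 5 can be sketched in plain Python: embed the question, score it against the stored vectors, and keep the top-K matches. The toy three-dimensional embeddings and the `vector_db` dictionary below stand in for a real embedding model and a vector database such as Chroma; the document IDs are invented for this example.

```python
import math

# Sketch of steps 3-5: compare the question vector against stored
# vectors and return the top-K nearest neighbors by cosine similarity.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "vector database": document id -> embedding vector.
vector_db = {
    "doc_piece":  [0.9, 0.1, 0.0],
    "doc_kill":   [0.1, 0.8, 0.1],
    "doc_random": [0.0, 0.1, 0.9],
}

def top_k(question_vector, k=2):
    # Rank every stored vector by similarity to the question vector
    # and keep the k best matches.
    scored = sorted(
        vector_db.items(),
        key=lambda item: cosine_similarity(question_vector, item[1]),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

neighbors = top_k([1.0, 0.2, 0.0], k=2)
```

A production vector database uses approximate nearest-neighbor indexes instead of this exhaustive scan, which is what makes the lookup fast at scale.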

About Application

The irisChatGPT application leverages the functionality of LangChain, one of the hottest Python frameworks, built around Large Language Models (LLMs). The application is written in ObjectScript with the help of InterSystems Embedded Python. It also contains a Streamlit web application; Streamlit is an open-source Python framework for creating beautiful web apps for data science and machine learning.



Below is the list of application features:


Built-in InterSystems ObjectScript Reference ChatGPT


Built-in InterSystems Grand Prix Contest 2023 ChatGPT 


ChatGPT with FHIR server

Answer questions over a Cache database by using SQLDatabaseChain


Create your own chatGPT model and chat with it

OpenAI ChatGPT

Wikipedia Search

Search on the internet by using DuckDuckGo (DDG) general search engine

Generate Python code by using Python REPL LangChain functionality



Streamlit Web Application (online demo)

ObjectScript Reference
Grand Prix Contest 2023

Personal ChatGPT 

OpenAI ChatGPT




Hi Muhammad,

Your video is available on InterSystems Developers YouTube:

⏯️Introduction to irisChatGPT application

Please enjoy!

Hi Muhammad,

thanks for this very interesting app; it opens up a lot of possibilities 😀

I'm trying to run it locally, but I'm facing this error:

%SYS>zn "user"

USER>zw ^ChatGPTKey

USER>set chat = ##class(dc.irisChatGPT).%New()

USER>do chat.SetAPIKey("--- my key ---")

USER>zw ^ChatGPTKey
^ChatGPTKey(1)="--- my key ---"

USER>write chat.irisDocs("Give me details of %$PIECE function with syntax")
You exceeded your current quota, please check your plan and billing details.

WRITE chat.irisDocs("Give me details of %$PIECE function with syntax")
<THROW> *%Exception.PythonException  230 ^^0^ <class 'UnboundLocalError'>: local variable 'ret' referenced before assignment - 

Then after a re-run I get the result successfully; any idea?

USER>write chat.irisDocs("Give me details of %$PIECE function with syntax")

 The %$PIECE function is used to extract a substring from a string of text. The syntax for the %$PIECE function is %$PIECE(string, delimiter, piece_number). The string is the text from which you want to extract a substring. The delimiter is the character or characters that separate the pieces of the string. The piece_number is the number of the piece you want to extract.

The docs are not always linked to IRIS:

USER>write chat.irisDocs("Give me details of $zn function with syntax")

 The $zn function is a MongoDB operator that returns the index of a given value in an array. The syntax for the $zn function is: {$zn: [<array>, <value>]}.

USER>write chat.irisDocs("Give me details of zn function with syntax")

 The zn function is a mathematical function that takes two arguments, x and n, and returns the remainder of x divided by n. The syntax for the zn function is zn(x, n).

USER>write chat.irisDocs("Give me details of SET function with syntax")

 The SET function is a built-in function in Microsoft Excel that allows you to assign a value to a variable. The syntax for the SET function is: SET(variable, value).

USER>write chat.irisDocs("Give me details of %kill function with syntax")

 The %kill function is used to terminate a SAS session. The syntax for the %kill function is %kill;

USER>write chat.irisDocs("Give me details of kill function with syntax")

 The kill function is a command line utility used to terminate a process. The syntax for the kill function is: kill [signal] PID, where signal is an optional argument that specifies the signal to be sent to the process and PID is the process ID of the process to be terminated.

Hi @Muhammad Waseem 
Streamlit now runs better. But when I give a key, it raises the following error :

RuntimeError: Your system has an unsupported version of sqlite3. Chroma requires sqlite3 >= 3.35.0. Please visit to learn how to upgrade. Traceback:
File "/usr/local/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/", line 552, in _run_script exec(code, module.__dict__)
File "/opt/irisappS/streamlit/app/", line 208, in <module> main()
File "/opt/irisappS/streamlit/app/", line 153, in main init_doc()
File "/opt/irisappS/streamlit/app/", line 117, in init_doc vectorstore = irisChatGPT.docLoader(st.session_state["OPENAI_API_KEY"]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/irisappS/streamlit/app/", line 94, in docLoader vectordb = Chroma(persist_directory='/opt/irisappS/streamlit/app/vectors', embedding_function=embedding) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/vectorstores/", line 80, in __init__ import chromadb
File "/usr/local/lib/python3.11/site-packages/chromadb/", line 69, in <module> raise RuntimeError(