
Announcement
· Jul 31, 2024

InterSystems Learning Services - Learning at Your Fingertips!

Hello Community!

Whether you are an expert in InterSystems products or a beginner trying something new, you will find helpful videos on how to build solutions with InterSystems technology on our InterSystems Learning Services channel on YouTube.

We have 7.79K subscribers and 186 published videos.
Customer success drives everything we do. That is why we are putting learning at your fingertips.

✍️ Subscribe to keep up with every video on the channel, and continue exploring on the InterSystems Learning portal 📚


Stay connected with us on all InterSystems channels

InterSystems website

InterSystems Online Learning

InterSystems Channel

InterSystems LinkedIn

InterSystems Twitter

Question
· Jul 31, 2024

Visual Studio Code fails to connect to my local IRIS, error message: Not found

Hello,

I have installed the latest version of IRIS (without a web server) to replace my Community edition, which had an embedded web server.
I tried to connect Visual Studio Code to my namespace, but I am unable to do so. I keep receiving the message "Not found."
Here is my configuration:

"pc-david": {
    "webServer": {
        "scheme": "http",
        "host": "localhost",
        "port": 80
    },
    "description": "pc-david",
    "username": "_SYSTEM"
}

When I modify the host and port, I can successfully connect to other remote databases (Cache 2018 or IRIS). 
However, I cannot connect to the one on my machine. Is there something I need to enable on my local instance to allow the connection to go through?
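Since the instance was installed without a private web server, nothing is listening on the instance's own web port; port 80 only works if an external web server (for example Apache or IIS with the Web Gateway) is configured to serve that instance. For comparison, here is a minimal sketch of a server definition for an instance that still ships the private web server (the name "pc-david-private" and port 52773 are illustrative assumptions, not taken from the question):

```json
"pc-david-private": {
    "webServer": {
        "scheme": "http",
        "host": "localhost",
        "port": 52773
    },
    "description": "Local IRIS via the private web server",
    "username": "_SYSTEM"
}
```

A "Not found" response typically means the web server answered but the /api/atelier web application is not reachable at that host and port.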

Question
· Jul 31, 2024

Newbie issues with XEP

Hello,

I'm brand new to InterSystems and their products.

Doing some research for my job, I've found myself going through the learning path Connecting Java Applications to InterSystems (https://learning.intersystems.com/course/view.php?id=879).

Working through the proposed demos (https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls... and https://github.com/intersystems/quickstarts-java), I cannot get the deleteExtent, importSchemaFull, and getEvent methods of the EventPersister to work.

I keep getting this error:

[error screenshot not captured]

I've found that the tag <PROTECT> refers to this:

[screenshot not captured]

I'm creating a connection with SuperUser, I've also tried _SYSTEM. The installation of the IRIS instance is on the same machine as the java app. Interacting with the database via JDBC and NativeAPI works fine.

If anyone has any helpful debugging tips, they would be greatly appreciated. If I can provide any other info, please let me know.

Thanks,

Alex

Article
· Jul 31, 2024 5m read

IRIS-RAG-Gen: Personalizing ChatGPT RAG Application Powered by IRIS Vector Search


Hi Community,

In this article, I will introduce my application iris-RAG-Gen.

iris-RAG-Gen is a generative AI Retrieval-Augmented Generation (RAG) application that leverages IRIS Vector Search to personalize ChatGPT with the help of the Streamlit web framework, LangChain, and OpenAI. The application uses IRIS as a vector store.

Application Features

  • Ingest Documents (PDF or TXT) into IRIS
  • Chat with the selected Ingested document
  • Delete Ingested Documents
  • OpenAI ChatGPT

Ingest Documents (PDF or TXT) into IRIS

Follow the steps below to ingest a document:

  • Enter OpenAI Key
  • Select Document (PDF or TXT)
  • Enter Document Description
  • Click on the Ingest Document Button

 

The Ingest Document functionality inserts the document details into the rag_documents table and creates a 'rag_document' + id table (where id is the ID of the new rag_documents row) to store the vector data.


The Python code below splits the selected document into chunks and saves them as vectors:

from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders import PyPDFLoader, TextLoader
from langchain_iris import IRISVector
from langchain_openai import OpenAIEmbeddings
from sqlalchemy import create_engine, text

class RagOpr:
    #Ingest document. Parameters contain the file path, description and file type
    def ingestDoc(self, filePath, fileDesc, fileType):
        embeddings = OpenAIEmbeddings()
        #Load the document based on the file type
        if fileType == "text/plain":
            loader = TextLoader(filePath)
        elif fileType == "application/pdf":
            loader = PyPDFLoader(filePath)

        #Load data into documents
        documents = loader.load()

        #Split text into chunks
        text_splitter = RecursiveCharacterTextSplitter(chunk_size=400, chunk_overlap=0)
        texts = text_splitter.split_documents(documents)

        #Get the collection name from the rag_documents table
        COLLECTION_NAME = self.get_collection_name(fileDesc, fileType)

        #Create the collection_name table and store the vector data in it
        db = IRISVector.from_documents(
            embedding=embeddings,
            documents=texts,
            collection_name=COLLECTION_NAME,
            connection_string=self.CONNECTION_STRING,
        )

    #Get collection name
    def get_collection_name(self,fileDesc,fileType):
        # check if rag_documents table exists, if not then create it 
        with self.engine.connect() as conn:
            with conn.begin():     
                sql = text("""
                    SELECT *
                    FROM INFORMATION_SCHEMA.TABLES
                    WHERE TABLE_SCHEMA = 'SQLUser'
                    AND TABLE_NAME = 'rag_documents';
                    """)
                result = []
                try:
                    result = conn.execute(sql).fetchall()
                except Exception as err:
                    print("An exception occurred:", err)               
                    return ''
                #if table is not created, then create rag_documents table first
                if len(result) == 0:
                    sql = text("""
                        CREATE TABLE rag_documents (
                        description VARCHAR(255),
                        docType VARCHAR(50) )
                        """)
                    try:    
                        result = conn.execute(sql) 
                    except Exception as err:
                        print("An exception occurred:", err)                
                        return ''
        #Insert description value 
        with self.engine.connect() as conn:
            with conn.begin():     
                sql = text("""
                    INSERT INTO rag_documents 
                    (description,docType) 
                    VALUES (:desc,:ftype)
                    """)
                try:    
                    result = conn.execute(sql, {'desc':fileDesc,'ftype':fileType})
                except Exception as err:
                    print("An exception occurred:", err)                
                    return ''
                #select ID of last inserted record
                sql = text("""
                    SELECT LAST_IDENTITY()
                """)
                try:
                    result = conn.execute(sql).fetchall()
                except Exception as err:
                    print("An exception occurred:", err)
                    return ''
        return "rag_document"+str(result[0][0])
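The RecursiveCharacterTextSplitter used above produces chunks of at most 400 characters. As a rough illustration of the idea (ignoring LangChain's separator-aware splitting; split_text is a hypothetical stand-in, not the library API), a fixed-size chunker can be sketched in plain Python:

```python
def split_text(text: str, chunk_size: int = 400, chunk_overlap: int = 0) -> list[str]:
    """Naive fixed-size chunker: the real splitter additionally prefers to
    break on separators such as paragraph and sentence boundaries."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_text("x" * 1000)
print([len(c) for c in chunks])  # → [400, 400, 200]
```

Each chunk is embedded separately, which is what keeps the similarity search granular enough to pull out only the passages relevant to a question.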

 

Run the SQL command below in the Management Portal to retrieve the vector data (here for the table rag_document2):

SELECT TOP 5
id, embedding, document, metadata
FROM SQLUser.rag_document2


 

Chat with the selected Ingested document

Select the document from the select chat option section and type a question. The application will read the vector data and return the relevant answer.
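Under the hood, vector search ranks the stored embeddings by their similarity to the embedding of the question. As a minimal sketch of the concept in plain Python (the three-dimensional "embeddings" below are made up for illustration; real OpenAI embeddings have over a thousand dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical embeddings for two stored chunks
chunks = {
    "IRIS supports vector search": [0.9, 0.1, 0.0],
    "Streamlit builds web UIs":    [0.1, 0.8, 0.2],
}
query = [0.85, 0.15, 0.05]  # hypothetical embedding of the question
best = max(chunks, key=lambda c: cosine_similarity(query, chunks[c]))
print(best)  # → IRIS supports vector search
```

The chunk whose embedding points in nearly the same direction as the question's embedding wins; IRISVector's similarity_search_with_score performs this ranking inside the database.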

The Python code below retrieves the relevant vector data and answers the question:

from langchain_iris import IRISVector
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain.chains import ConversationChain
from langchain.chains.conversation.memory import ConversationSummaryMemory


class RagOpr:
    def ragSearch(self,prompt,id):
        #Concatenate the document id with rag_document to get the collection name
        COLLECTION_NAME = "rag_document"+str(id)
        embeddings = OpenAIEmbeddings()	
        #Get vector store reference
        db2 = IRISVector(
            embedding_function=embeddings,    
            collection_name=COLLECTION_NAME,
            connection_string=self.CONNECTION_STRING,
        )
        #Similarity search
        docs_with_score = db2.similarity_search_with_score(prompt)
        #Prepare the retrieved documents to pass to the LLM
        relevant_docs = ["".join(str(doc.page_content)) + " " for doc, _ in docs_with_score]
        #init LLM
        llm = ChatOpenAI(
            temperature=0,    
            model_name="gpt-3.5-turbo"
        )
        #manage and handle LangChain multi-turn conversations
        conversation_sum = ConversationChain(
            llm=llm,
            memory= ConversationSummaryMemory(llm=llm),
            verbose=False
        )
        #Create prompt
        template = f"""
        Prompt: {prompt}
        Relevant Documents: {relevant_docs}
        """
        #Return the answer
        resp = conversation_sum(template)
        return resp['response']

    


For more details, please visit the iris-RAG-Gen Open Exchange application page.

Thanks
