
Article
· 22 hr ago 5m read

Unveiling LangGraph

How to Build Applications with LangGraph: A Step-by-Step Guide

Tags: #LangGraph #LangChain #AI #Agents #Python #LLM #StateManagement #Workflows


Hi everyone! I want to share a bit about LangGraph, a tool I've been studying and building with.

Traditional AI applications often struggle with complex workflows and dynamic state. LangGraph offers a robust solution: stateful agents that can manage complex conversations, make context-based decisions, and execute sophisticated workflows.

This article is a step-by-step guide to building applications with LangGraph, a framework for creating multi-step agents using state graphs.


Implementation Steps:

  1. Set Up Environment and Install Dependencies
  2. Define Application State
  3. Create Graph Nodes
  4. Configure State Graph
  5. Run the Agent

1. Set Up Environment and Install Dependencies

The first step is to set up the Python environment and install the necessary libraries:

pip install langgraph langchain langchain-openai

Configure your API credentials:

import os
from langchain_openai import ChatOpenAI

# Configure your API key (in real projects, load it from the environment rather than hardcoding it)
os.environ["OPENAI_API_KEY"] = "your-api-key-here"

# Initialize the model
llm = ChatOpenAI(model="gpt-4", temperature=0)

2. Define Application State

LangGraph uses a TypedDict to define the state that will be shared between graph nodes:

from typing import TypedDict, Annotated
from operator import add

class AgentState(TypedDict):
    """State shared between graph nodes"""
    # `add` is a reducer: updates to `messages` are appended to the
    # existing list instead of overwriting it
    messages: Annotated[list, add]
    user_input: str
    response: str
    next_step: str

This state stores:

  • messages: History of exchanged messages
  • user_input: Current user input
  • response: Response generated by the agent
  • next_step: Next action to be executed
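The `add` reducer deserves a closer look: when a node returns a value for `messages`, LangGraph combines it with the existing value using `operator.add`, which for lists means concatenation. A quick standalone sketch of that behavior:

```python
from operator import add

# Existing channel value and a node's update for the "messages" key
history = [{"role": "user", "content": "Hi"}]
update = [{"role": "assistant", "content": "Hello!"}]

# The reducer is applied as add(old, new), which concatenates lists,
# so updates accumulate instead of replacing the history
merged = add(history, update)
print(len(merged))  # 2
```

Keys declared without a reducer (`user_input`, `response`, `next_step`) are simply overwritten by whatever a node returns.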

Graph State


3. Create Graph Nodes

3.1 - Input Processing Node

This node processes user input and prepares the context. Note that it returns a partial state update: because `messages` is declared with the `add` reducer, returning only the new entry appends it to the history, whereas mutating the list in place and returning the full state would feed the whole history back through the reducer and duplicate it:

def process_input(state: AgentState) -> AgentState:
    """Processes user input"""
    user_message = state["user_input"]

    # Partial update: the `add` reducer appends the new message to the history
    return {
        "messages": [{"role": "user", "content": user_message}],
        "next_step": "analyze",
    }

3.2 - Analysis and Decision Node

This node uses the LLM to analyze input and decide the next action:

from langchain_core.prompts import ChatPromptTemplate

def analyze_request(state: AgentState) -> AgentState:
    """Analyzes the request and decides the next action"""

    prompt = ChatPromptTemplate.from_messages([
        ("system", "You are an intelligent assistant. Analyze the user's request and determine the best way to respond."),
        ("user", "{input}")
    ])

    chain = prompt | llm

    result = chain.invoke({
        "input": state["user_input"]
    })

    # Partial update: `response` and `next_step` have no reducer, so they are overwritten
    return {
        "response": result.content,
        "next_step": "respond",
    }

3.3 - Response Node

This node formats and returns the final response:

def generate_response(state: AgentState) -> AgentState:
    """Generates the final response"""

    # Partial update: the `add` reducer appends this entry to the message history
    return {
        "messages": [{"role": "assistant", "content": state["response"]}],
        "next_step": "END",
    }
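To make the partial-update convention concrete, here is a toy illustration of how a returned update gets merged into the shared state. This is a sketch of the idea only, not LangGraph's actual implementation:

```python
from operator import add

# Keys with a reducer are combined; keys without one are overwritten
REDUCERS = {"messages": add}

def apply_update(state: dict, update: dict) -> dict:
    """Merge a node's partial update into the state, reducer-aware."""
    merged = dict(state)
    for key, value in update.items():
        reducer = REDUCERS.get(key)
        merged[key] = reducer(state[key], value) if reducer else value
    return merged

state = {"messages": [], "user_input": "hi", "response": "", "next_step": ""}
state = apply_update(state, {
    "messages": [{"role": "user", "content": "hi"}],
    "next_step": "analyze",
})
print(state["next_step"])      # analyze
print(len(state["messages"]))  # 1
```

This is why the nodes above return only the keys they change: the framework, not the node, is responsible for combining updates with existing state.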

4. Configure State Graph

4.1 - Create the Graph

Now let's connect all nodes in a state graph:

from langgraph.graph import StateGraph, END

# Create the graph
workflow = StateGraph(AgentState)

# Add nodes
workflow.add_node("process_input", process_input)
workflow.add_node("analyze", analyze_request)
workflow.add_node("respond", generate_response)

# Define entry point
workflow.set_entry_point("process_input")

# Add transitions (edges)
workflow.add_edge("process_input", "analyze")
workflow.add_edge("analyze", "respond")
workflow.add_edge("respond", END)

# Compile the graph
app = workflow.compile()

4.2 - Visualize the Graph

LangGraph allows you to visualize the graph structure:

from IPython.display import Image, display

try:
    display(Image(app.get_graph().draw_mermaid_png()))
except Exception:
    print("Graph visualization requires additional dependencies")

Graph Flow


5. Run the Agent

5.1 - Execute a Simple Query

def run_agent(user_input: str):
    """Runs the agent with user input"""
    
    # Initial state
    initial_state = {
        "messages": [],
        "user_input": user_input,
        "response": "",
        "next_step": ""
    }
    
    # Execute the graph
    result = app.invoke(initial_state)
    
    return result["response"]

# Test the agent
response = run_agent("What is the capital of France?")
print(f"Response: {response}")

Expected output:

Response: The capital of France is Paris.

5.2 - Execute with Streaming

For interactive applications, you can use streaming:

async def run_agent_stream(user_input: str):
    """Runs the agent with streaming"""
    
    initial_state = {
        "messages": [],
        "user_input": user_input,
        "response": "",
        "next_step": ""
    }
    
    async for event in app.astream(initial_state):
        for node_name, node_state in event.items():
            print(f"\n--- {node_name} ---")
            if "response" in node_state and node_state["response"]:
                print(f"Partial response: {node_state['response']}")

Advanced Features

Checkpoints and Persistence

LangGraph supports checkpoints to save state:

from langgraph.checkpoint.memory import MemorySaver

# Add memory to the graph
memory = MemorySaver()
app_with_memory = workflow.compile(checkpointer=memory)

# Execute with persistence
config = {"configurable": {"thread_id": "user-123"}}
result = app_with_memory.invoke(initial_state, config)

Conditions and Dynamic Routing

You can add conditional logic for routing (this assumes `priority_node` and `normal_node` have also been added to the graph):

def router(state: AgentState) -> str:
    """Determines the next node based on state"""
    
    if "urgent" in state["user_input"].lower():
        return "priority_handler"
    else:
        return "normal_handler"

# Add conditional routing
workflow.add_conditional_edges(
    "analyze",
    router,
    {
        "priority_handler": "priority_node",
        "normal_handler": "normal_node"
    }
)
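Because the routing function is plain Python, it can be unit-tested in isolation before being wired into the graph:

```python
def router(state: dict) -> str:
    """Same logic as above: choose a handler based on the input text."""
    if "urgent" in state["user_input"].lower():
        return "priority_handler"
    return "normal_handler"

print(router({"user_input": "URGENT: production is down"}))     # priority_handler
print(router({"user_input": "What is the capital of France?"}))  # normal_handler
```

Keeping routers side-effect free like this makes conditional graphs much easier to debug: the decision logic can be exercised without an LLM call or a compiled graph.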

Use Cases

LangGraph is ideal for:

  1. Complex Chatbots: Managing multi-turn conversations with context
  2. Autonomous Agents: Creating agents that make state-based decisions
  3. Processing Workflows: Orchestrating data processing pipelines
  4. Multi-Agent Systems: Coordinating multiple specialized agents

See It in Action

For more details and practical examples, visit:


Conclusion

LangGraph offers a powerful and flexible approach to building stateful AI applications. By combining state graphs with LLMs, you can create sophisticated systems that manage complex conversations, make contextual decisions, and execute dynamic workflows.

LangGraph's modular structure allows you to scale from simple chatbots to complex multi-agent systems while keeping your code organized and maintainable.

 

Thanks!


Question
· 23 hr ago

Populating Persistent Class from JSON

As I iterate through the FHIR JSON response from a Patient Search, I am running into an issue where the extracted values are not being populated into the Response object that I have created. If I do a $$$LOGINFO on them, they show up in the Object log; however, if I try the following, for example, I get an error.

set target.MRN = identifier.value

ERROR <Ens>ErrException: <INVALID OREF>Transform+15 ^osuwmc.Epic.FHIR.DTL.FHIRResponseToPatient.1 -- logged as '-'
number - @'
set target.MRN = identifier.value
'

Here is my Ens.DataTransform

Class osuwmc.Epic.FHIR.DTL.FHIRResponseToPatient Extends Ens.DataTransform
{

ClassMethod Transform(source As HS.FHIRServer.Interop.Response, target As osuwmc.Epic.FHIR.DataStructures.PatientSearch.Response) As %Status
{
  Set tSC=$$$OK
  set tQuickStream = ##Class(HS.SDA3.QuickStream).%OpenId(source.QuickStreamId)
  set tRawJSON = ##Class(%Library.DynamicObject).%FromJSON(tQuickStream)
  $$$TRACE(tRawJSON.%ToJSON())
  set tResource = tRawJSON.entry.%Get(0).resource
  if tResource.resourceType '= "Patient" {
    set tSC = $$$ERROR($$$GeneralError, "FHIRResponseToPatient: Resource type is not Patient")
    return tSC
  }
  else {
    $$$LOGINFO("Resource Type: "_tResource.resourceType)
    set mrnIter = tResource.identifier.%GetIterator()
    while mrnIter.%GetNext(,.identifier) {
      if identifier.system = "urn:oid:1.2.840.114350.1.13.172.2.7.5.737384.100" {
          set target.MRN = identifier.value
      }
    }
    set NameIter = tResource.name.%GetIterator()
    while NameIter.%GetNext(,.humanName) {
      if humanName.use = "official" {
        set target.LastName = humanName.family
        set target.FirstName = humanName.given.%Get(0)
      }
    }
    set target.DOB = tResource.birthDate
    set target.Gender = tResource.gender
    set addrIter = tResource.address.%GetIterator()
    while addrIter.%GetNext(,.address) {
      if address.use = "home" {
        set target.Address = address.line.%Get(0)
        set target.City = address.city
        set target.State = address.state
        set target.PostalCode = address.postalCode
      }
    }
  }
  return tSC
}

}

If I replace the set statements with $$$LOGINFO, the values will show up in the trace viewer.

Article
· Dec 26 1m read

Using IRIS as a Vector Database

InterSystems IRIS's built-in vector search capabilities let us search unstructured and semi-structured data. The data is converted into vectors (also called "embeddings") and then stored and indexed in InterSystems IRIS for semantic search, retrieval-augmented generation (RAG), text analysis, recommendation engines, and other use cases.

This is a simple demonstration of IRIS being used as a vector database, with similarity search run inside IRIS.

Prerequisites:

  • Python
  • InterSystems IRIS for Health – used as the vector database

Repository: https://github.com/piyushisc/vectorsearchusingiris

Steps:

  1. Clone the repository.
  2. Open VS Code, connect to the desired IRIS instance and namespace, and compile the classes.
  3. Open the IRIS Terminal and run do ##class(vectors.vectorstore).InsertEmbeddings(), which reads the text from the text.txt file, generates the embeddings, and stores them in IRIS.
  4. Run do ##class(vectors.vectorstore).VectorSearch("search_terms") with the desired words to perform the similarity search. IRIS will return the three closest matches.
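Conceptually, the similarity search ranks the stored embeddings by their distance to the query embedding and keeps the closest ones. IRIS does this natively with its VECTOR type and SQL functions; the toy document store and `cosine` helper below are purely illustrative, a pure-Python sketch of the idea:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "stored embeddings" (in IRIS these would live in a VECTOR column)
store = {
    "doc1": [0.9, 0.1, 0.0],
    "doc2": [0.1, 0.9, 0.0],
    "doc3": [0.8, 0.2, 0.1],
}
query = [1.0, 0.0, 0.0]

# Rank documents by similarity to the query and keep the top three
top3 = sorted(store, key=lambda k: cosine(store[k], query), reverse=True)[:3]
print(top3)  # ['doc1', 'doc3', 'doc2']
```

Real embeddings have hundreds or thousands of dimensions, and a database-side index avoids scanning every row, which is exactly what IRIS's vector search provides.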
Question
· Dec 26

Warning on Message Body

I am trying to centralize our FHIR queries into a single BP object that sends the FHIR query to the EMR and interprets the response into a %Persistent structure that can be sent back to the requestor. In theory it seemed like it would work, but I am running into an issue:

"Warning on Message body 5@osuwmc.Epic.FHIR.DataStructures.PatientSearch.Record'
/ 229 because Status 'ERROR <Ens>ErrException: <PROPERTY DOES NOT EXIST>Transform+3 ^osuwmc.Scott.FHIR.DemoOutboundHL7Message.1 *DocType,osuwmc.Epic.FHIR.DataStructures.PatientSearch.Record -- logged as '-'
number - @' Set:""=source.DocType tBlankSrc=1, source.DocType="ORMORUPDF:MDM_T02"''
matched ReplyCodeAction 1 : 'E=W'
resulting in Action code W"

When I interpret the FHIR Response into a %Persistent structure, I get the error above. Am I missing something? I do not reference the DocType anywhere in my Transformation...

Class osuwmc.Epic.FHIR.DTL.FHIRResponseToPatient Extends Ens.DataTransform
{

ClassMethod Transform(source As HS.FHIRServer.Interop.Response, target As osuwmc.Epic.FHIR.DataStructures.PatientSearch.Record) As %Status
{
  Set tSC=$$$OK
  set tQuickStream = ##Class(HS.SDA3.QuickStream).%OpenId(source.QuickStreamId)
  set tRawJSON = ##Class(%Library.DynamicObject).%FromJSON(tQuickStream)
  $$$TRACE(tRawJSON.%ToJSON())
  set tResource = tRawJSON.entry.%Get(0).resource
  $$$LOGINFO("Resource Type: "_tResource.resourceType)
  if tResource.resourceType '= "Patient" {
    set tSC = $$$ERROR($$$GeneralError, "FHIRResponseToPatient: Resource type is not Patient")
    return tSC
  }
  else {
    set target = ##class(osuwmc.Epic.FHIR.DataStructures.PatientSearch.Record).%New()
    set mrnIter = tResource.identifier.%GetIterator()
    while mrnIter.%GetNext(,.identifier) {
      if identifier.system = "urn:oid:1.2.840.114350.1.13.172.2.7.5.737384.100" {
        set target.MRN = identifier.value
      }
    }
    set NameIter = tResource.name.%GetIterator()
    while NameIter.%GetNext(,.humanName) {
      if humanName.use = "official" {
        set target.lastname = humanName.family
        set target.firstname = humanName.given.%Get(0)
      }
    set target.birthdate = tResource.birthDate
    set target.gender = tResource.gender
    }
    set addrIter = tResource.address.%GetIterator()
    while addrIter.%GetNext(,.address) {
      if address.use = "home" {
        set target.address = address.line.%Get(0)
        set target.city = address.city
        set target.state = address.state
        set target.postalcode = address.postalCode
      }
    }
  }
  $$$LOGINFO(target.MRN_" "_target.lastname_" "_target.firstname_" "_target.birthdate_" "_target.gender_" "_target.address_" "_target.city_" "_target.state_" "_target.postalcode)
  return tSC
}

}

Maybe it's because I use osuwmc.Epic.FHIR.DataStructures.PatientSearch.Record both for generating the request and for sending the response back to the requestor? Should I be using a different %Persistent class so the system does not get confused?
