
Problem with ollama container in langchain-iris-tool

I cloned @Yuri.Marx's langchain-iris-tool repo and modified the docker-compose.yml per this post:

https://community.intersystems.com/post/error-when-trying-langchain-iris...

Now I see this:

docker ps -a
CONTAINER ID   IMAGE                      COMMAND                  CREATED         STATUS                          PORTS                                                                                                                                                                      NAMES
c585beb367e6   ollama/ollama:latest       "/usr/bin/bash /mode…"   6 minutes ago   Restarting (1) 55 seconds ago                                                                                                                                                                              ollama
c59535140780   langchain-iris-tool-iris   "/tini -- /docker-en…"   6 minutes ago   Up 6 minutes (healthy)          2188/tcp, 8501/tcp, 54773/tcp, 0.0.0.0:51972->1972/tcp, :::51972->1972/tcp, 0.0.0.0:53795->52773/tcp, :::53795->52773/tcp, 0.0.0.0:32770->53773/tcp, :::32770->53773/tcp   langchain-iris-tool-iris-1
e898e27c7275   yourstreamlitapp:latest    "streamlit run /usr/…"   6 minutes ago   Up 6 minutes                    0.0.0.0:8501->8501/tcp, :::8501->8501/tcp                                                                                                                                  langchain-iris-tool-streamlit-1

The ollama container is stuck restarting. Why?
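
A container stuck in Restarting (1) means its entrypoint is exiting with status 1 on every start, and the actual failure will be in the container's own log. A quick way to check (assuming the container_name ollama from the compose file below):

docker logs --tail 50 ollama                         # last output before the crash
docker inspect ollama --format '{{.State.ExitCode}}' # confirm the exit status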

I get this error when I try the Iris Classes Chat feature:

ValueError: Error raised by inference endpoint: HTTPConnectionPool(host='ollama', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NameResolutionError("<urllib3.connection.HTTPConnection object at 0x7f4d75c88d90>: Failed to resolve 'ollama' ([Errno -5] No address associated with hostname)"))
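
The NameResolutionError is a downstream symptom rather than a second problem: Docker's embedded DNS only resolves a container's name while it is running, so with ollama crash-looping, the streamlit container cannot resolve the ollama hostname. This can be confirmed from inside the app container (service names taken from the compose file below; getent is available in most glibc-based images):

docker compose exec streamlit getent hosts ollama   # fails while ollama is down
docker compose exec streamlit getent hosts iris     # should resolve, for comparison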


Yes, it is. Here is the full docker-compose.yml:

services:
  iris:
    build: 
      context: .
      dockerfile: Dockerfile
    restart: always
    expose:
      - 8501
    ports: 
      - 51972:1972
      - 53795:52773
      - 53773
    networks:
      - iris-llm2-network
    volumes:
      - ./:/irisdev/app
      - ./init.sh:/docker-entrypoint-initdb.d/init.sh
  ollama:
    image: ollama/ollama:latest
    ports:
      - 11434:11434
    volumes:
      - ./model_files:/model_files
      - .:/code
      - ./ollama:/root/.ollama
    container_name: ollama
    pull_policy: always
    tty: true
    restart: always
    entrypoint: ["/usr/bin/bash", "/model_files/run_ollama.sh"]
    networks:
      - iris-llm2-network
#  ollama:
#    image: ollama/ollama:latest
#    deploy:
#      resources:
#        reservations:
#          devices:
#          - driver: nvidia
#            capabilities: ["gpu"]
#            count: all  # Adjust count for the number of GPUs you want to use
#    ports:
#      - 11434:11434
#    volumes:
#      - ./model_files:/model_files 
#      - .:/code
#      - ./ollama:/root/.ollama
#    container_name: ollama_iris
#    pull_policy: always
#    tty: true
#    entrypoint: ["/bin/sh", "/model_files/run_ollama.sh"] # Loading the finetuned Mistral with the GGUF file
#    restart: always
#    environment:
#      - OLLAMA_KEEP_ALIVE=24h
#      - OLLAMA_HOST=0.0.0.0
#    networks:
#      - iris-llm2-network

  
  streamlit:
    build:
      context: ./
      dockerfile: ./streamlit/Dockerfile
    #stdin_open: true # docker run -i
    #tty: true 
    #entrypoint: /bin/sh
    command: streamlit run /usr/src/app/Menu.py --server.port 8501
    volumes:
      - ./src/python/rag:/usr/src/app
    expose: [8501]
    ports:
      - 8501:8501
    image: yourstreamlitapp:latest 
    networks:
      - iris-llm2-network
        
networks:
  iris-llm2-network:
    driver: bridge
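
A restart loop with a shell-script entrypoint like this is usually a problem with the script rather than the image. One common culprit, if the repo was cloned or edited on Windows, is CRLF line endings in run_ollama.sh, which make bash exit with errors like "$'\r': command not found". A hedged fix, assuming the model_files path mapped above and GNU sed:

sed -i 's/\r$//' model_files/run_ollama.sh     # strip Windows line endings
docker compose up -d --force-recreate ollama   # recreate the container with the fixed script

If docker logs instead shows that bash itself cannot be found, switching the entrypoint to ["/bin/sh", "/model_files/run_ollama.sh"], as in the commented-out variant above, is worth a try.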