Question
Apr 21

My AI use case - need help with Ollama and / or Langchain

I am brand new to using AI. I downloaded some medical visit progress notes from my Patient Portal and extracted the text from the PDF files. I found a YouTube video that showed how to extract metadata using an OpenAI query / prompt such as this one:

https://github.com/oliverwilms/ollama-ai-iris/blob/main/data/prompts/medical_progress_notes_prompt.txt
 

I combined @Rodolfo Pscheidt Jr's https://github.com/RodolfoPscheidtJr/ollama-ai-iris with some files from @Guillaume Rongier's https://openexchange.intersystems.com/package/iris-rag-demo.

I attempted to run:

python3 query_data.py
Traceback (most recent call last):
  File "/irisdev/app/query_data.py", line 39, in <module>
    response = query_engine.query(prompt_data)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/irisowner/.local/lib/python3.12/site-packages/llama_index/core/instrumentation/dispatcher.py", line 322, in wrapper
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/home/irisowner/.local/lib/python3.12/site-packages/llama_index/core/base/base_query_engine.py", line 52, in query
    query_result = self._query(str_or_query_bundle)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/irisowner/.local/lib/python3.12/site-packages/llama_index/core/instrumentation/dispatcher.py", line 322, in wrapper
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/home/irisowner/.local/lib/python3.12/site-packages/llama_index/core/query_engine/retriever_query_engine.py", line 179, in _query
    response = self._response_synthesizer.synthesize(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/irisowner/.local/lib/python3.12/site-packages/llama_index/core/instrumentation/dispatcher.py", line 322, in wrapper
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/home/irisowner/.local/lib/python3.12/site-packages/llama_index/core/response_synthesizers/base.py", line 241, in synthesize
    response_str = self.get_response(
                   ^^^^^^^^^^^^^^^^^^
  File "/home/irisowner/.local/lib/python3.12/site-packages/llama_index/core/instrumentation/dispatcher.py", line 322, in wrapper
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/home/irisowner/.local/lib/python3.12/site-packages/llama_index/core/response_synthesizers/compact_and_refine.py", line 43, in get_response
    return super().get_response(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/irisowner/.local/lib/python3.12/site-packages/llama_index/core/instrumentation/dispatcher.py", line 322, in wrapper
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/home/irisowner/.local/lib/python3.12/site-packages/llama_index/core/response_synthesizers/refine.py", line 179, in get_response
    response = self._give_response_single(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/irisowner/.local/lib/python3.12/site-packages/llama_index/core/response_synthesizers/refine.py", line 241, in _give_response_single
    program(
  File "/home/irisowner/.local/lib/python3.12/site-packages/llama_index/core/instrumentation/dispatcher.py", line 322, in wrapper
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/home/irisowner/.local/lib/python3.12/site-packages/llama_index/core/response_synthesizers/refine.py", line 85, in __call__
    answer = self._llm.predict(
             ^^^^^^^^^^^^^^^^^^
  File "/home/irisowner/.local/lib/python3.12/site-packages/llama_index/core/instrumentation/dispatcher.py", line 322, in wrapper
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/home/irisowner/.local/lib/python3.12/site-packages/llama_index/core/llms/llm.py", line 605, in predict
    chat_response = self.chat(messages)
                    ^^^^^^^^^^^^^^^^^^^
  File "/home/irisowner/.local/lib/python3.12/site-packages/llama_index/core/instrumentation/dispatcher.py", line 322, in wrapper
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/home/irisowner/.local/lib/python3.12/site-packages/llama_index/core/llms/callbacks.py", line 173, in wrapped_llm_chat
    f_return_val = f(_self, messages, **kwargs)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/irisowner/.local/lib/python3.12/site-packages/llama_index/llms/ollama/base.py", line 322, in chat
    response = self.client.chat(
               ^^^^^^^^^^^^^^^^^
  File "/home/irisowner/.local/lib/python3.12/site-packages/ollama/_client.py", line 333, in chat
    return self._request(
           ^^^^^^^^^^^^^^
  File "/home/irisowner/.local/lib/python3.12/site-packages/ollama/_client.py", line 178, in _request
    return cls(**self._request_raw(*args, **kwargs).json())
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/irisowner/.local/lib/python3.12/site-packages/ollama/_client.py", line 124, in _request_raw
    raise ConnectionError(CONNECTION_ERROR_MESSAGE) from None
ConnectionError: Failed to connect to Ollama. Please check that Ollama is downloaded, running and accessible. https://ollama.com/download


 


Hi Oliver,

Based on the error message, the failure is in the connection to the Ollama server itself rather than in your Python code. Double-check that Ollama is installed and actually running, that all required packages are installed, and that you are running in the proper Python environment if you are using one. A quick way to test the connection is sketched below.

As a side note, I noticed you said that you are using medical visit progress notes from your Patient Portal as data. Just as a heads up, you probably should not send any private information to OpenAI -- unless you have some kind of business agreement with them, their LLMs are not HIPAA compliant, and they can also keep and use whatever data you send them to further train their products. If you are using anonymized or mock/fake data, then that won't be an issue though :)