Welcome back to the Learn Tech Tips blog! Today, we'll explore how to run a free model with Ollama in Langflow. If you're not familiar with Langflow, don't worry, we'll cover that too. By the end of this tutorial, you'll be able to build a complete chatbot using an Ollama model.
What is Langflow?
Langflow is a powerful tool designed to simplify the development of applications that utilize large language models (LLMs). It offers a user-friendly interface for integrating LLMs into your projects, enabling you to create chatbots, content generators, and more without extensive coding knowledge.
What is Ollama?
Ollama is an open-source tool for running large language models locally. It lets you pull free models (such as qwen) and integrate them into applications for tasks like text generation, conversation, and more. It's lightweight and free to use, making it an excellent choice for developers looking to incorporate AI capabilities into their projects.
Here is the Docker Compose file for Langflow. If you don't know how to get Docker running, you can check the reference on my blog at the link below.
Check out the Langflow source code on GitHub: https://github.com/langflow-ai/langflow
Docker Compose file for Langflow
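If you want to write the file yourself, here is a minimal sketch, assuming the official langflowai/langflow image and the default port 7860 (adjust volumes and environment variables to your own setup):

# docker-compose.yml (minimal sketch for Langflow)
services:
  langflow:
    image: langflowai/langflow:latest
    ports:
      - "7860:7860"
    restart: unless-stopped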
After starting it on Docker with the command docker compose up -d, you can access Langflow at the URL http://localhost:7860. This is where you build the process for your agent. Here I will make a simple chatbot flow.
Next, go to Ollama, grab an Ollama Docker Compose file, and run it on your localhost (a minimal sketch is shown below).
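Here is a minimal sketch of such a compose file, assuming the official ollama/ollama image and the default port 11434; the named volume keeps the models you pull between restarts:

# docker-compose.yml (minimal sketch for Ollama)
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama   # persist pulled models across restarts
    restart: unless-stopped

volumes:
  ollama_data: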
Then start it with the command docker compose up -d.
Access the URL localhost:11434; if you see the message "Ollama is running", it has started successfully.
Now go inside the Ollama container with the command docker exec -it <docker_id> bash and run ollama run qwen. (qwen is just one free model; see the Ollama library page at https://ollama.com/library for more models. In this tutorial I will use qwen for the demo.) The exact commands are shown below.
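For reference, the two commands look like this (replace <docker_id> with the ID of your Ollama container from docker ps):

# Open a shell inside the running Ollama container
docker exec -it <docker_id> bash

# Pull and start the qwen model (it is downloaded on the first run)
ollama run qwen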
Now access this URL: http://localhost:11434/api/tags
If you get a response like the one sketched below, the model was pulled successfully.
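As a rough guide, /api/tags returns a JSON object with a models list; the exact fields and values will differ on your machine:

# Quick check from the host
curl http://localhost:11434/api/tags
# Expected shape (values such as size and digest will vary):
# {"models":[{"name":"qwen:latest","modified_at":"...","size":...,"digest":"..."}]}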
First, drag an Ollama component onto the dashboard like this. You will see the Ollama component, but it cannot load the qwen model yet. To load it smoothly, switch to the component's Python code; I will show you an easy way to do it.
Click on the Ollama component and choose Code:
In the DropdownInput, update it like the snippet below; this gives your component qwen as its default model:
DropdownInput(
    name="model_name",
    display_name="Model Name",
    options=[
        "qwen"
    ],
    info="Refer to https://ollama.com/library for more models.",
    refresh_button=True,
    real_time_refresh=True,
),
Then, in the update_build_config function, comment out the block shown in my source code below. To explain: the hidden code validates the model list loaded from the local server, which we no longer need, since we only use the qwen model we just defined.
async def update_build_config(self, build_config: dict, field_value: Any, field_name: str | None = None):
    if field_name == "mirostat":
        if field_value == "Disabled":
            build_config["mirostat_eta"]["advanced"] = True
            build_config["mirostat_tau"]["advanced"] = True
            build_config["mirostat_eta"]["value"] = None
            build_config["mirostat_tau"]["value"] = None
        else:
            build_config["mirostat_eta"]["advanced"] = False
            build_config["mirostat_tau"]["advanced"] = False

            if field_value == "Mirostat 2.0":
                build_config["mirostat_eta"]["value"] = 0.2
                build_config["mirostat_tau"]["value"] = 10
            else:
                build_config["mirostat_eta"]["value"] = 0.1
                build_config["mirostat_tau"]["value"] = 5

    if field_name in {"base_url", "model_name"}:
        if build_config["base_url"].get("load_from_db", False):
            base_url_value = await self.get_variables(build_config["base_url"].get("value", ""), "base_url")
        else:
            base_url_value = build_config["base_url"].get("value", "")
        if not await self.is_valid_ollama_url(base_url_value):
            # Check if any URL in the list is valid
            valid_url = ""
            check_urls = URL_LIST
            if self.base_url:
                check_urls = [self.base_url, *URL_LIST]
            for url in check_urls:
                if await self.is_valid_ollama_url(url):
                    valid_url = url
                    break
            if valid_url != "":
                build_config["base_url"]["value"] = valid_url
            else:
                msg = "No valid Ollama URL found."
                raise ValueError(msg)

    '''
    if field_name in {"model_name", "base_url", "tool_model_enabled"}:
        if await self.is_valid_ollama_url(self.base_url):
            tool_model_enabled = build_config["tool_model_enabled"].get("value", False) or self.tool_model_enabled
            build_config["model_name"]["options"] = await self.get_models(self.base_url, tool_model_enabled)
        elif await self.is_valid_ollama_url(build_config["base_url"].get("value", "")):
            tool_model_enabled = build_config["tool_model_enabled"].get("value", False) or self.tool_model_enabled
            build_config["model_name"]["options"] = await self.get_models(
                build_config["base_url"].get("value", ""), tool_model_enabled
            )
        else:
            build_config["model_name"]["options"] = []

    if field_name == "keep_alive_flag":
        if field_value == "Keep":
            build_config["keep_alive"]["value"] = "-1"
            build_config["keep_alive"]["advanced"] = True
        elif field_value == "Immediately":
            build_config["keep_alive"]["value"] = "0"
            build_config["keep_alive"]["advanced"] = True
        else:
            build_config["keep_alive"]["advanced"] = False
    '''

    return build_config
After updating the code above, you can now see qwen in the Ollama component. Perfect, so you can build the flow.
Click on Share and choose Embed into site; you will get JavaScript code like this one:
<script src="https://cdn.jsdelivr.net/gh/logspace-ai/langflow-embedded-chat@v1.0.7/dist/build/static/js/bundle.min.js">
</script>
<langflow-chat flow_id="3784e580-ac42-4328-b9f9-fc9528eca508" host_url="http://localhost:7860" window_title="Basic Prompting">
</langflow-chat>
Ta-da! Here is your free chat box with Ollama and Langflow.