How to connect to Ollama AI using Apache Camel and the LangChain4j component.


This post explores the synergy between three tools: Ollama, Apache Camel, and LangChain4j.

Setting Up the Development Environment.

To begin this tutorial, you will need:

  • Ollama. Provides a way to run large language models (LLMs) locally. You can run models such as Llama 3, Mistral, and Code Llama on your machine, with full CPU and GPU support.
  • Visual Studio Code, with the Kaoto, Java, and Quarkus plugins installed.
  • OpenJDK 21
  • Maven
  • Quarkus 3.16
  • Quarkus Dev Services. A Quarkus feature that simplifies the development and testing of applications that rely on external services such as databases, messaging systems, and other resources.
  • OpenShift CLI
  • OpenShift Developer Sandbox account.

You can download the complete code from the following GitHub repo.

1. Creating the Quarkus project.

mvn io.quarkus:quarkus-maven-plugin:3.16.2:create \
-DprojectGroupId=dev.mikeintoch \
-DprojectArtifactId=camel-simple-ollama \
-Dextensions="camel-quarkus-core,quarkus-langchain4j-core,quarkus-langchain4j-ollama,camel-quarkus-langchain4j-chat,camel-quarkus-platform-http,camel-quarkus-yaml-dsl"

2. Configure Quarkus to run the Ollama LLM.

The following instructions are executed in Visual Studio Code.

Open the application.properties file and add the following lines:

#Configure Ollama local model
quarkus.langchain4j.ollama.chat-model.model-id=qwen2.5:0.5b
quarkus.langchain4j.ollama.chat-model.temperature=0.0
quarkus.langchain4j.ollama.log-requests=true
quarkus.langchain4j.ollama.log-responses=true
quarkus.langchain4j.ollama.timeout=180s

** Quarkus uses Ollama to run the LLM locally and also autowires the configuration for use in the Apache Camel components in the following steps. **
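The model must be available to your local Ollama instance. If it is not already present, you can pull it ahead of time with the Ollama CLI (the model tag matches the one configured above):

ollama pull qwen2.5:0.5b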

3. Creating the Apache Camel route using Kaoto.

Create a new folder named routes inside the src/main/resources folder.

Create a new file named ollama-route.camel.yaml inside the src/main/resources/routes folder; Visual Studio Code opens it in the Kaoto visual editor.


Click the +New button and a new route will be created.


Click the three-dots icon on the timer component and select Replace.


Search for and select the platform-http component in the catalog.


Configure required platform-http properties:

  • Set Path with value /camel/chat

By default, the platform-http component serves on port 8080.


Click the three-dots icon on the platform-http component and select Add Step.


Search for and select the langchain4j-chat component in the catalog.


Configure required langchain4j-chat properties:

  • Set Chat Id with value myllm.
  • Set Chat Operation with value CHAT_MULTIPLE_MESSAGES.


You must transform the user input into a message format the langchain4j-chat component can use, so click the three-dots icon on the langchain4j-chat component and select Prepend.


Search for and select the Process component in the catalog.


Configure required properties:

  • Set Ref with value createChatMessage.


** The process component will use the createChatMessage method you will create in the following step. **
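If you open ollama-route.camel.yaml as text instead of in Kaoto, the finished route should look roughly like the following sketch; the route id shown is only a placeholder (Kaoto auto-generates it), and the exact layout may vary with your Kaoto and Camel versions:

- route:
    id: route-1234
    from:
      uri: platform-http:/camel/chat
      steps:
        - process:
            ref: createChatMessage
        - to:
            uri: langchain4j-chat:myllm
            parameters:
              chatOperation: CHAT_MULTIPLE_MESSAGES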

4. Create a processor to send the user input to the LLM.

Create a new Java class named Bindings.java in the src/main/java folder:

import java.util.ArrayList;
import java.util.List;

import org.apache.camel.BindToRegistry;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;

import dev.langchain4j.data.message.ChatMessage;
import dev.langchain4j.data.message.UserMessage;

public class Bindings extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // Routes are loaded from the YAML files; nothing to define here.
    }

    // Registered in the Camel registry as "createChatMessage",
    // the ref used by the process step in the route.
    @BindToRegistry(lazy = true)
    public static Processor createChatMessage() {

        return new Processor() {
            public void process(Exchange exchange) throws Exception {

                // Read the plain-text request body sent to /camel/chat.
                String payload = exchange.getMessage().getBody(String.class);

                // Wrap it in the List<ChatMessage> expected by the
                // langchain4j-chat CHAT_MULTIPLE_MESSAGES operation.
                List<ChatMessage> messages = new ArrayList<>();
                messages.add(new UserMessage(payload));

                exchange.getMessage().setBody(messages);
            }
        };
    }
}

** This class creates a Camel Processor that transforms the user input into an object the langchain4j-chat component in the route can handle. **

5. Include our route to be loaded by the Quarkus project.

Camel Quarkus supports several domain-specific languages (DSLs) for defining Camel routes.

It is also possible to include YAML DSL routes by adding the following line to the application.properties file.

# routes to load
camel.main.routes-include-pattern = routes/*.yaml

** This will load all routes in the src/main/resources/routes folder. **

6. Test the app locally.

Run the application using Maven: open a terminal in Visual Studio Code and run the following command.

mvn quarkus:dev

Once it has started, Quarkus calls Ollama to run your LLM locally; open a terminal and verify with the following command.

ollama ps

NAME            ID              SIZE      PROCESSOR    UNTIL
qwen2.5:0.5b    a8b0c5157701    1.4 GB    100% GPU     4 minutes from now

To test the app, send a POST request to localhost:8080/camel/chat with a plain-text body.
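For example, with curl (the prompt text is only an illustration):

curl -X POST http://localhost:8080/camel/chat \
  -H "Content-Type: text/plain" \
  -d "What is Apache Camel?"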


If everything was successful, you will receive an answer from the LLM.

7. Connect to the Red Hat Developer Sandbox.

You need access to the Red Hat Developer Sandbox; if you don't have it, please follow this article to get one.

Log in to the Developer Sandbox, click on your username, and select "Copy login command".


Copy the text in the "Log in with this token" section and paste it into a terminal in Visual Studio Code:

oc login --token=YOUR_TOKEN --server=https://YOUR_SANDBOX_API:6443

Change to your personal project:

oc project yourusername-dev

8. Deploy your LLM on the Developer Sandbox.

To deploy your LLM in the sandbox, you can use the following file, which runs the same model you ran locally.
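For reference, a manifest like this typically pairs a Deployment running Ollama with a Service named llm exposing port 8000, which is the hostname and port the application uses later (http://llm:8000/). The sketch below is only illustrative; the actual llm-server.yaml in the repo is authoritative, including how the qwen2.5:0.5b model is pulled:

# Hypothetical sketch; see the repo's llm-server.yaml for the real manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: llm
  template:
    metadata:
      labels:
        app: llm
    spec:
      containers:
        - name: llm
          image: ollama/ollama:latest   # assumption: stock Ollama image
          ports:
            - containerPort: 11434      # Ollama's default API port
---
apiVersion: v1
kind: Service
metadata:
  name: llm                             # matches http://llm:8000/ used by the app
spec:
  selector:
    app: llm
  ports:
    - port: 8000                        # service port the app connects to
      targetPort: 11434                 # forwarded to the Ollama container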

Then run the following command in a terminal and wait a minute.

oc apply -f https://raw.githubusercontent.com/mikeintoch/camel-ollama-chat/refs/heads/main/llm-server.yaml

This creates a new pod running the LLM in the Developer Sandbox.


9. Deploy the app in the Developer Sandbox.

Add the Quarkus extension to deploy on OpenShift by running the following command in a terminal.

./mvnw quarkus:add-extension -Dextensions="io.quarkus:quarkus-openshift"

Open the application.properties file and add a property so the langchain4j component can connect to the LLM server on the Developer Sandbox.

%prod.quarkus.langchain4j.ollama.base-url=http://llm:8000/

Add the properties to deploy on the Red Hat Developer Sandbox:

%prod.quarkus.openshift.route.expose=true
%prod.quarkus.openshift.deploy=true

** If you want to test the application immediately, set the quarkus.openshift.route.expose config property to true to expose the service automatically. **

Now use Maven to deploy to the Developer Sandbox environment.

mvn install

After a few minutes, a new pod will be deployed in your project.


Once the deployment is done, get the route of the app:

oc get route camel-simple-ollama -o jsonpath='{.spec.host}{"\n"}'

Then use it to call the endpoint and verify the app works.
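For example, with curl (assuming the route name above and a plain-text prompt):

curl -X POST "http://$(oc get route camel-simple-ollama -o jsonpath='{.spec.host}')/camel/chat" \
  -H "Content-Type: text/plain" \
  -d "Tell me about Apache Camel."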


And that's it!
