# Ollama Integration

Open Genie uses Ollama for local LLM inference. The client wrapper (`lib/ollama.ts`) uses the OpenAI-compatible API that Ollama exposes, accessed through the `openai` SDK.
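Pointing the `openai` SDK at Ollama only requires overriding the base URL. A minimal sketch of how such a client could be constructed — the variable name and env fallback here are illustrative, not necessarily what `lib/ollama.ts` does:

```typescript
import OpenAI from "openai";

// Ollama serves an OpenAI-compatible API under /v1.
// The apiKey is required by the SDK but ignored by Ollama.
const client = new OpenAI({
  baseURL: `${process.env.OLLAMA_BASE_URL ?? "http://localhost:11434"}/v1`,
  apiKey: "ollama",
});
```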
## Configuration
| Variable | Default | Description |
|---|---|---|
| `OLLAMA_BASE_URL` | `http://localhost:11434` | Ollama server URL |
| `OLLAMA_DEFAULT_MODEL` | `granite4:350m` | Model for chat and text tasks |
| `OLLAMA_VISION_MODEL` | `llava:latest` | Model for image/vision analysis |
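The defaults above might be applied along these lines — a hypothetical sketch (`resolveOllamaConfig` is an illustrative name; the actual `lib/ollama.ts` may structure this differently):

```typescript
// Hypothetical config resolver mirroring the table above.
// Takes an env map so it can be exercised without touching process.env.
function resolveOllamaConfig(env: Record<string, string | undefined>) {
  return {
    baseUrl: env.OLLAMA_BASE_URL ?? "http://localhost:11434",
    defaultModel: env.OLLAMA_DEFAULT_MODEL ?? "granite4:350m",
    visionModel: env.OLLAMA_VISION_MODEL ?? "llava:latest",
  };
}
```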
## Client API
### `chat(messages, options?)`

Non-streaming chat completion.

```typescript
import { ollama } from "@/lib/ollama";

const response = await ollama.chat([
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Hello!" },
]);
// response.content = "Hi there! How can I help?"
```
### `chatStream(messages, options?)`

Streaming chat that yields tokens as they are generated.

```typescript
for await (const chunk of ollama.chatStream(messages)) {
  process.stdout.write(chunk.content);
}
```
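A common consumer of a token stream like this is a streaming HTTP response. A sketch of bridging an async iterator of chunks into a web `ReadableStream` — the helper name is hypothetical, and Open Genie's route handlers may do this differently:

```typescript
// Bridges an async iterable of { content } chunks into a byte stream,
// e.g. to back a streaming API route response body.
function toReadableStream(
  chunks: AsyncIterable<{ content: string }>,
): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  return new ReadableStream({
    async start(controller) {
      for await (const chunk of chunks) {
        controller.enqueue(encoder.encode(chunk.content));
      }
      controller.close();
    },
  });
}
```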
### `chatWithTools(messages, tools, options?)`

Chat with function-calling support. Returns tool calls if the model decides to use them.

```typescript
const tools = actionRegistry.toOllamaTools();
const response = await ollama.chatWithTools(messages, tools);

if (response.toolCalls) {
  for (const call of response.toolCalls) {
    console.log(`Tool: ${call.function.name}`);
    console.log(`Args: ${JSON.stringify(call.function.arguments)}`);
  }
}
```
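After receiving tool calls, something has to route each call to its action handler. A hypothetical dispatch helper, assuming (as the logging example above suggests) that `arguments` arrives already parsed into an object — the types and names here are illustrative, not Open Genie's actual action registry:

```typescript
// Minimal shapes for illustration; the real wrapper's types may differ.
type ToolCall = {
  function: { name: string; arguments: Record<string, unknown> };
};
type Handler = (args: Record<string, unknown>) => unknown;

// Looks up each tool call's handler by name and invokes it with the
// model-supplied arguments; throws on calls to unregistered tools.
function dispatchToolCalls(
  calls: ToolCall[],
  handlers: Record<string, Handler>,
): unknown[] {
  return calls.map(({ function: fn }) => {
    const handler = handlers[fn.name];
    if (!handler) throw new Error(`No handler registered for tool: ${fn.name}`);
    return handler(fn.arguments);
  });
}
```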
### `vision(imageBase64, prompt, options?)`

Analyze an image using the vision model.

```typescript
const analysis = await ollama.vision(
  base64ImageData,
  "Describe what you see. Identify any people, vehicles, or animals."
);
// analysis.content = "I can see a person walking a dog..."
```
### `listModels()`

List all available models on the Ollama server.

```typescript
const models = await ollama.listModels();
// [{ name: "granite4:350m", ... }, { name: "llava:latest", ... }]
```
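A practical use for this listing is verifying at startup that the configured models are actually installed. A sketch under the assumption that entries are shaped like `{ name: string }` — the helper name is illustrative:

```typescript
// Returns the names from `required` that are absent from the server's
// installed model list.
function missingModels(
  installed: { name: string }[],
  required: string[],
): string[] {
  const names = new Set(installed.map((model) => model.name));
  return required.filter((model) => !names.has(model));
}
```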
### `healthCheck()`

Test connectivity to the Ollama server.

```typescript
const healthy = await ollama.healthCheck();
// true or false
```
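Since Ollama may come up after the app does, it can be worth retrying the health check before giving up. A hypothetical startup helper built on top of `healthCheck()` — not part of the wrapper's actual API:

```typescript
// Retries a boolean health probe with a fixed delay between attempts.
// In real use, pass () => ollama.healthCheck() as the probe.
async function waitForOllama(
  check: () => Promise<boolean>,
  attempts = 5,
  delayMs = 1000,
): Promise<boolean> {
  for (let i = 0; i < attempts; i++) {
    if (await check()) return true;
    if (i < attempts - 1) {
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  return false;
}
```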
## Tool Format

Actions are converted to OpenAI-compatible tool definitions via `actionRegistry.toOllamaTools()`:

```json
{
  "type": "function",
  "function": {
    "name": "send_notification",
    "description": "Send a notification to connected devices",
    "parameters": {
      "type": "object",
      "properties": {
        "title": { "type": "string", "description": "Notification title" },
        "body": { "type": "string", "description": "Notification body" },
        "target": { "type": "string", "description": "Target: all, phones, tablets, or deviceId" }
      },
      "required": ["title", "body"]
    }
  }
}
```
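The conversion itself is mostly a reshaping step. A hypothetical sketch of what `toOllamaTools()` might do per action — the `Action` type here is illustrative, not Open Genie's actual registry entry:

```typescript
// Illustrative action shape; the real registry entry likely carries more
// (a handler, validation, etc.).
type Action = {
  name: string;
  description: string;
  parameters: object; // JSON Schema for the action's arguments
};

// Wraps one action in the OpenAI-compatible tool envelope shown above.
function toOllamaTool(action: Action) {
  return {
    type: "function" as const,
    function: {
      name: action.name,
      description: action.description,
      parameters: action.parameters,
    },
  };
}
```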
## Model Requirements
- Chat model should support function/tool calling for the action system to work
- Vision model must accept base64 images (most multimodal Ollama models do)
- Both models run entirely locally — no data leaves your machine