Ollama API Usage — Complete Guide

To use Ollama's API in a development environment, first install Ollama locally following the instructions on its website, then start the API server with `ollama serve` (it listens on port 11434 by default). Once a model has been pulled with `ollama pull`, you can send requests to the API with tools like `curl` or from a language like Python, specifying the model you want to interact with.
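
For a quick smoke test before reaching for a client library, a raw HTTP call against the local server is enough. This is a minimal sketch, assuming Ollama is running on the default port and that the `gemma3:4b` model has already been pulled:

import httpx

# POST /api/generate is Ollama's single-turn completion endpoint
resp = httpx.post(
    'http://localhost:11434/api/generate',
    json={'model': 'gemma3:4b', 'prompt': 'Say hello in one sentence.', 'stream': False},
    timeout=60.0,
)
resp.raise_for_status()
print(resp.json()['response'])  # the generated text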

Zero Boilerplate

Stop writing the same event handlers over and over. Comorando executes your logic automatically.

Smart Retries

Exponential backoff, dead-letter queues, and alert escalation — built in, no config needed (a rough sketch of the retry policy follows this list).

AI Decisions

Gemma 3 (4B) evaluates every event and suggests the optimal action based on your business rules.
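
The retry behaviour mentioned under Smart Retries is built into the platform; for intuition, a hand-rolled equivalent looks roughly like the sketch below. This is an illustration only: the actual backoff parameters, jitter, and dead-letter handling Comorando uses are not specified in this guide.

import random
import time

def call_with_backoff(fn, max_attempts=5, base_delay=1.0, dead_letter=None):
    # Retry fn with exponential backoff; hand the final failure to a dead-letter hook.
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception as exc:
            if attempt == max_attempts - 1:
                if dead_letter is not None:
                    dead_letter(exc)  # e.g. persist the failed event for later inspection
                raise
            # 1s, 2s, 4s, ... plus a little jitter to avoid thundering herds
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))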

Code Example

# Ollama API usage — integrated with Comorando automation
import os

import httpx
import ollama

# Connect to the local Ollama server (default port 11434)
client = ollama.Client(host='http://localhost:11434')

# List available models (recent ollama-python releases expose the name under the 'model' field)
models = client.list()
print([m['model'] for m in models['models']])

# Run inference
response = client.chat(
    model='gemma3:4b',
    messages=[{'role': 'user', 'content': 'Analyze this event data.'}],
    stream=False
)
print(response['message']['content'])

# Send the AI result to Comorando for downstream automation
# (recent ollama-python returns a pydantic model here, so convert it before posting)
httpx.post(
    'https://api.comorando.com/decisions',
    json={
        'event': 'ollama.api_usage',
        'data': dict(response['message']),
        'org_id': os.environ['COMORANDO_ORG_ID'],
    },
    headers={'Authorization': f"Bearer {os.environ['COMORANDO_API_KEY']}"},
)
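
The chat call above waits for the complete reply (stream=False). The same client also supports streaming, which is handy for long generations; a minimal streaming variant of the same call:

# Streaming variant: iterate over chunks as they arrive instead of waiting for the full reply
for chunk in client.chat(
    model='gemma3:4b',
    messages=[{'role': 'user', 'content': 'Analyze this event data.'}],
    stream=True,
):
    print(chunk['message']['content'], end='', flush=True)
print()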

Automate your backend events today

Free tier includes 10,000 events/month. No credit card required.
