To set up Ollama as your local model library, first download and install Ollama from the official website ([https://ollama.com/](https://ollama.com/)). There is no separate command to create a library — your library is simply the set of models you have pulled locally. Use `ollama pull <model_name>` to download a model from the Ollama registry, and `ollama list` to see which models are installed.
Stop writing the same event handlers over and over. Comorando executes your logic automatically.
Exponential backoff, dead-letter queues, and alert escalation — built in, no config needed.
Gemma 3 evaluates every event and suggests the optimal action based on your business rules.
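For comparison, here is the kind of hand-rolled retry logic described above — a minimal exponential-backoff sketch in plain Python. The `flaky_handler` function and its failure behavior are illustrative assumptions, not part of any Comorando API:

```python
import time

def retry_with_backoff(handler, event, max_attempts=5, base_delay=1.0):
    """Call handler(event), retrying failures with exponential backoff.

    Delays double each attempt: base_delay, 2x, 4x, 8x, ...
    After max_attempts failures, a real system would route the event
    to a dead-letter queue; here we simply re-raise.
    """
    for attempt in range(max_attempts):
        try:
            return handler(event)
        except Exception:
            if attempt == max_attempts - 1:
                raise  # exhausted: hand off to a dead-letter queue here
            time.sleep(base_delay * 2 ** attempt)

# Illustrative handler that fails twice before succeeding
calls = {'n': 0}
def flaky_handler(event):
    calls['n'] += 1
    if calls['n'] < 3:
        raise RuntimeError('transient failure')
    return f"processed {event}"

print(retry_with_backoff(flaky_handler, 'order.created', base_delay=0.01))
```

Multiply this by alerting, escalation, and queue management, and it is easy to see why teams end up rewriting the same handler scaffolding for every event type.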
# Ollama model library — integrated with Comorando automation
import os

import httpx
import ollama

# Connect to the local Ollama server (default port 11434)
client = ollama.Client(host='http://localhost:11434')

# List locally available models
# (recent ollama-python releases expose the name on the `model` field)
models = client.list()
print([m.model for m in models.models])

# Run inference
response = client.chat(
    model='gemma3:4b',
    messages=[{'role': 'user', 'content': 'Analyze this event data.'}],
    stream=False,
)
print(response['message']['content'])

# Send the model's reply text to Comorando for downstream automation
httpx.post(
    'https://api.comorando.com/decisions',
    json={
        'event': 'ollama.model_library',
        'data': response['message']['content'],
        'org_id': os.environ['COMORANDO_ORG_ID'],
    },
    headers={'Authorization': f"Bearer {os.environ['COMORANDO_API_KEY']}"},
)
Free tier includes 10,000 events/month. No credit card required.