To set up Ollama for custom models, first download the Ollama CLI from the official website and install it on your development machine (macOS, Linux, or Windows). Then run `ollama pull <model_name>` to download the model files locally; Ollama manages those files and lets you run the model on your own hardware.
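If you prefer to script that step rather than run it by hand, the pull can also be issued from Python. A minimal sketch, assuming the `ollama` package is installed (`pip install ollama`) and a local Ollama server is listening on the default port 11434:

# Sketch: pull a model programmatically with the ollama Python client.
# Assumes `pip install ollama` and a local Ollama server at the default
# http://localhost:11434 (the same host used in the example below).
import ollama

# Download the model weights; Ollama stores and manages them locally.
ollama.pull('gemma3:4b')

# Confirm the model now appears in the local library.
for m in ollama.list()['models']:
    print(m)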
Stop writing the same event handlers over and over. Comorando executes your logic automatically.
Exponential backoff, dead-letter queues, and alert escalation — built in, no config needed.
Gemma 3 4B evaluates every event and suggests the optimal action based on your business rules.
# Ollama custom-models — integrated with Comorando automation
import os

import httpx
import ollama

# custom-models setup: connect to the local Ollama server
client = ollama.Client(host='http://localhost:11434')

# List available models
models = client.list()
print([m['name'] for m in models['models']])

# Run inference
response = client.chat(
    model='gemma3:4b',
    messages=[{'role': 'user', 'content': 'Analyze this event data.'}],
    stream=False,
)
print(response['message']['content'])

# Send AI result to Comorando for downstream automation
httpx.post(
    'https://api.comorando.com/decisions',
    json={
        'event': 'ollama.custom_models',
        'data': response['message'],
        'org_id': os.environ['COMORANDO_ORG_ID'],
    },
    headers={'Authorization': f"Bearer {os.environ['COMORANDO_API_KEY']}"},
)
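To get a structured action suggestion of the kind described above rather than free text, you can ask Ollama to constrain the reply to JSON. A minimal sketch, assuming the `format='json'` option of the Python client; the rule text and the `action`/`reason` keys are illustrative placeholders, not a fixed Comorando schema:

import json

import ollama

client = ollama.Client(host='http://localhost:11434')

# Illustrative business rules embedded in the prompt; adapt to your own.
rules = "If the event is a failed payment over $500, escalate; otherwise retry."
event = {"type": "payment.failed", "amount": 740}

decision = client.chat(
    model='gemma3:4b',
    messages=[{
        'role': 'user',
        'content': f"Rules: {rules}\nEvent: {json.dumps(event)}\n"
                   "Reply as JSON with keys 'action' and 'reason'.",
    }],
    format='json',  # ask Ollama to constrain the output to valid JSON
    stream=False,
)

suggestion = json.loads(decision['message']['content'])
print(suggestion['action'], '-', suggestion['reason'])

The parsed suggestion can then be forwarded to the same https://api.comorando.com/decisions endpoint shown above.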
Free tier includes 10,000 events/month. No credit card required.