First, start Ollama in a container using the official image from Docker Hub: `docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama`. Then pull the model you want to serve, for example `docker exec -it ollama ollama pull gemma3:4b`, and the Ollama API will be available at `http://localhost:11434`.
Stop writing the same event handlers over and over. Comorando executes your logic automatically.
Exponential backoff, dead-letter queues, and alert escalation — built in, no config needed.
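To make the retry behavior concrete, here is a minimal sketch of an exponential backoff schedule of the kind described above. This is an illustration only, not Comorando's actual implementation; the function name and the `base`, `factor`, and `cap` parameters are assumed for the example:

```python
def backoff_delays(base=1.0, factor=2.0, retries=5, cap=30.0):
    """Return the wait times (in seconds) before each retry attempt.

    Each delay doubles the previous one (factor=2.0) and is capped so a
    long outage never produces an unbounded wait.
    """
    return [min(cap, base * factor ** i) for i in range(retries)]

print(backoff_delays())
# [1.0, 2.0, 4.0, 8.0, 16.0]
```

Once the schedule is exhausted, the event would move to a dead-letter queue rather than being retried forever.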
Gemma 3 evaluates every event and suggests the optimal action based on your business rules.
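One way to feed business rules to the model is to fold them into the prompt and constrain the reply to a small action vocabulary. The sketch below is hypothetical — `RULES`, `build_prompt`, and the action names are illustrative, not part of Comorando's API:

```python
import json

# Example business rules, expressed in plain language for the model
RULES = [
    "If the event is a payment failure, retry up to 3 times.",
    "If retries are exhausted, escalate to the on-call channel.",
]

def build_prompt(event: dict) -> str:
    """Assemble a triage prompt: rules, then the event, then a constrained ask."""
    rules = "\n".join(f"- {r}" for r in RULES)
    return (
        "You are an event triage assistant. Business rules:\n"
        f"{rules}\n\n"
        f"Event:\n{json.dumps(event)}\n\n"
        "Reply with exactly one action: retry, escalate, or ignore."
    )

prompt = build_prompt({"type": "payment.failed", "attempt": 1})
print(prompt)
```

The resulting string would be sent as the user message in the chat call shown below; constraining the answer to a fixed vocabulary makes the reply easy to parse downstream.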
# Ollama in Docker — integrated with Comorando automation
import os

import httpx
import ollama

# Connect to the Ollama API exposed by the container
client = ollama.Client(host='http://localhost:11434')

# List available models (the key is 'model' in recent ollama-python releases)
models = client.list()
print([m['model'] for m in models['models']])

# Run inference
response = client.chat(
    model='gemma3:4b',
    messages=[{'role': 'user', 'content': 'Analyze this event data.'}],
    stream=False,
)
print(response['message']['content'])

# Send the model's reply to Comorando for downstream automation
httpx.post(
    'https://api.comorando.com/decisions',
    json={
        'event': 'ollama.docker_setup',
        'data': response['message']['content'],  # plain text, JSON-serializable
        'org_id': os.environ['COMORANDO_ORG_ID'],
    },
    headers={'Authorization': f"Bearer {os.environ['COMORANDO_API_KEY']}"},
    timeout=10.0,
)
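For interactive use you may prefer `stream=True`, in which case `client.chat` yields chunks whose `message.content` holds a fragment of the reply. The consumption pattern can be sketched with simulated chunks (no running server is needed for this illustration; the chunk shapes mirror what the ollama client yields):

```python
def consume_stream(chunks):
    """Accumulate streamed message fragments into the full reply text."""
    parts = []
    for chunk in chunks:
        parts.append(chunk['message']['content'])
    return ''.join(parts)

# Simulated chunks standing in for: client.chat(model=..., messages=..., stream=True)
simulated = [
    {'message': {'content': 'Hel'}},
    {'message': {'content': 'lo'}},
]
print(consume_stream(simulated))
# Hello
```

Streaming lets you surface partial output immediately while the full reply is still being generated, then forward the assembled text to Comorando as in the example above.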
Free tier includes 10,000 events/month. No credit card required.