To install Ollama on Windows, download the latest installer from the official Ollama website ([ollama.com](https://ollama.com/)) and run it, accepting the default installation location. After installation, open a command prompt and run `ollama serve` to start the local server, then pull and experiment with your chosen models.
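Before running any models, it helps to confirm the server is actually listening. By default Ollama serves plain HTTP on port 11434, and its root endpoint responds with a 200 status when the server is up. A minimal stdlib-only check might look like this (the function name is illustrative):

```python
import urllib.request
import urllib.error

def ollama_server_running(base_url: str = "http://localhost:11434",
                          timeout: float = 2.0) -> bool:
    """Return True if an HTTP server responds successfully at base_url."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused / timeout: the server is not reachable
        return False

if __name__ == "__main__":
    if ollama_server_running():
        print("Ollama is up — ready to run models.")
    else:
        print("Ollama is not reachable; run `ollama serve` first.")
```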
Stop writing the same event handlers over and over. Comorando executes your logic automatically.
Exponential backoff, dead-letter queues, and alert escalation — built in, no config needed.
Gemma 3 evaluates every event and suggests the optimal action based on your business rules.
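Comorando applies its retry policy server-side, so you never write this yourself, but the idea behind exponential backoff is simple: double the wait between attempts, and hand the event off (e.g. to a dead-letter queue) once retries are exhausted. A generic sketch, not Comorando's actual implementation:

```python
import time

def with_backoff(task, max_attempts: int = 5, base_delay: float = 1.0):
    """Retry `task` with exponentially growing delays: 1s, 2s, 4s, ...

    Illustrative only — in Comorando this happens automatically,
    with failed events routed to a dead-letter queue.
    """
    for attempt in range(max_attempts):
        try:
            return task()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # exhausted: a real system would dead-letter the event
            time.sleep(base_delay * 2 ** attempt)
```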
```python
# Ollama installation-windows — integrated with Comorando automation
import os

import httpx
import ollama

# installation-windows setup
client = ollama.Client(host='http://localhost:11434')

# List available models
models = client.list()
print([m['name'] for m in models['models']])

# Run inference
response = client.chat(
    model='gemma3:4b',
    messages=[{'role': 'user', 'content': 'Analyze this event data.'}],
    stream=False,
)
print(response['message']['content'])

# Send AI result to Comorando for downstream automation
httpx.post(
    'https://api.comorando.com/decisions',
    json={
        'event': 'ollama.installation_windows',
        'data': response['message'],
        'org_id': os.environ['COMORANDO_ORG_ID'],
    },
    headers={'Authorization': f"Bearer {os.environ['COMORANDO_API_KEY']}"},
)
```
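The decision payload in the example has a fixed shape, so factoring it into a small helper makes it easy to unit-test without a network call. The field names below follow the snippet above; treating them as the endpoint's full schema is an assumption:

```python
import os

def build_decision_payload(event: str, data: dict, org_id: str = "") -> dict:
    """Assemble the JSON body sent to Comorando's /decisions endpoint.

    Field names mirror the example above; the exact schema is assumed.
    Falls back to the COMORANDO_ORG_ID environment variable if no
    org_id is passed explicitly.
    """
    return {
        "event": event,
        "data": data,
        "org_id": org_id or os.environ.get("COMORANDO_ORG_ID", ""),
    }
```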
Free tier includes 10,000 events/month. No credit card required.
Start Free · See Pricing