It is possible to run it with Ollama locally. I am setting it up myself right now, and it worked.
Ollama provider selected based on model name prefix
Ollama apiBase: http://localhost:11434/v1
2026/02/16 12:57:40 [2026-02-16T12:57:40Z] [INFO] agent: Agent initialized {skills_available=6, tools_count=13, skills_total=6}
2026/02/16 12:57:40 [2026-02-16T12:57:40Z] [INFO] agent: Processing message from cli:cron: What is 2+2? Reply with only the number. {chat_id=direct, sender_id=cron, session_key=cli:default, channel=cli}
2026/02/16 12:58:46 [2026-02-16T12:58:46Z] [INFO] agent: LLM response without tool calls (direct answer) {iteration=1, content_chars=1}
2026/02/16 12:58:46 [2026-02-16T12:58:46Z] [INFO] agent: Response: 4 {session_key=cli:default, iterations=1, final_length=1}
🦞 4
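
The log shows the agent talking to Ollama through its OpenAI-compatible endpoint at http://localhost:11434/v1. A minimal sketch of the same round trip, using only the Python standard library, might look like the following. The model name "llama3.2" and the `ask` helper are assumptions for illustration; substitute whatever `ollama list` reports on your machine.

```python
# Minimal sketch: send a chat-completion request to a locally running
# Ollama server via its OpenAI-compatible /v1 endpoint, the same apiBase
# the log above shows (http://localhost:11434/v1).
import json
import urllib.request

OLLAMA_BASE = "http://localhost:11434/v1"


def build_chat_request(model: str, prompt: str) -> dict:
    # Standard OpenAI-style chat-completions payload; Ollama accepts
    # the same shape on /v1/chat/completions.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def ask(model: str, prompt: str) -> str:
    # Requires a running Ollama server (`ollama serve`).
    body = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_BASE}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # "llama3.2" is a placeholder model name, not taken from the log.
    print(ask("llama3.2", "What is 2+2? Reply with only the number."))
```

The roughly one-minute gap between the "Processing message" and "Response" log lines (12:57:40 to 12:58:46) is typical for a first request, since Ollama loads the model into memory on demand; subsequent calls are much faster.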