# OpenAlerts

> Real-time monitoring & alerting for AI agentic systems. Open source. Apache-2.0. Runs fully locally — no cloud, no telemetry.

OpenAlerts watches your AI agents in real time and fires alerts to Slack, Discord, or Telegram the instant something goes wrong. It is purpose-built for agentic failure modes that generic APM tools like Sentry or Datadog do not understand: stuck sessions, LLM error spikes, token overruns, gateway disconnects, and cost spikes.

## Key Facts

- **License**: Apache-2.0 (fully open source)
- **Runs**: Fully locally, zero external service calls, no telemetry
- **Languages**: Python (pip), Node.js (npm/CLI)
- **Website**: https://openalerts.dev
- **GitHub**: https://github.com/steadwing/openalerts
- **Made by**: Steadwing (https://www.steadwing.com)

## Installation

**Python (OpenManus / nanobot / CrewAI):**

```
pip install openalerts
```

Requires Python >= 3.12. Version: 0.1.1.

**Node.js (OpenClaw):**

```
npm install -g @steadwing/openalerts
openalerts init
openalerts start
```

Requires Node >= 22.5. Version: 0.2.6.

## Supported Frameworks

| Framework     | Language | Status      |
| ------------- | -------- | ----------- |
| OpenManus     | Python   | Live        |
| nanobot       | Python   | Live        |
| CrewAI        | Python   | Live        |
| OpenClaw      | Node.js  | Live        |
| LangChain     | Python   | Coming soon |
| Vercel AI SDK | Node.js  | Coming soon |
| Mastra        | Node.js  | Coming soon |

## Alert Rules (11 total)

1. `session-stuck` — session idle for 120s
2. `llm-errors` — 1+ LLM errors per minute
3. `tool-errors` — 1+ tool errors per minute
4. `heartbeat-fail` — 3 consecutive missed heartbeats
5. `high-error-rate` — >50% errors in the last 20 calls
6. `gateway-down` — no heartbeat for 30s
7. `infra-errors` — 1+ infra errors per minute
8. `queue-depth` — 10+ items in queue
9. `cost-hourly-spike` — >$5/hr spend
10. `cost-daily-budget` — >$20/day spend
11. `subagent-errors` — child agent failures in multi-agent workflows

All thresholds and cooldowns are configurable via `config.json`.
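For illustration, a `config.json` that tightens one rule, relaxes another, and disables a third might look like the sketch below. The exact schema is not documented in this overview, so treat the key names (`rules`, `threshold`, `cooldown`, `enabled`) as assumptions rather than the real format; see the full documentation for the authoritative schema.

```
{
  "rules": {
    "session-stuck": { "threshold": 60, "cooldown": 300 },
    "cost-hourly-spike": { "threshold": 10 },
    "queue-depth": { "enabled": false }
  }
}
```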
Individual rules can be disabled.

## Alert Channels

- **Slack** — webhook URL
- **Discord** — webhook URL
- **Telegram** — bot token + chat ID
- **Webhooks** — custom HTTP POST
- **Console** — fallback logging

## Dashboard

- Python: http://localhost:9464/openalerts
- Node.js: http://127.0.0.1:4242
- Tabs: Overview · Sessions · Live Monitor · Alerts · Cron Jobs · Diagnostics · Delivery Queue

## FAQ

**Which frameworks are supported?**

Four frameworks are live: OpenManus (Python), OpenClaw (Node.js), nanobot (Python), and CrewAI (Python). LangChain, Vercel AI SDK, and Mastra adapters are coming soon.

**Does this send my data anywhere?**

No. OpenAlerts runs fully locally — no external service calls, no cloud backend, no telemetry. Events are persisted to JSONL on your own machine.

**Will it slow my agent down?**

No. Alert delivery is fire-and-forget. If Telegram is down or a webhook times out, your agent is never blocked — the alert is queued and retried.

**Can I customize the alert thresholds?**

Yes. Every rule's threshold and cooldown is configurable via `config.json`. You can also disable individual rules entirely.

**How is this different from Sentry or Datadog?**

Generic APM tools don't understand agentic failure modes. Sentry doesn't know what a "stuck session" is. Datadog can't alert on "error rate over the last 20 LLM calls." OpenAlerts rules are purpose-built for how AI agents actually fail — loops, token overruns, gateway disconnects, cost spikes.

## Optional Full Content

- Full documentation: https://openalerts.dev/llms-full.txt
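To make the "error rate over the last 20 LLM calls" comparison concrete, here is a minimal sketch of the logic behind the `high-error-rate` rule (fire when more than 50% of the last 20 calls errored). This is not OpenAlerts source code, and the event field name (`status`) is an illustrative assumption, not the real event schema.

```
# Sketch of the `high-error-rate` rule: fire when more than 50%
# of the last 20 LLM calls errored. The "status" field name is an
# illustrative assumption, not the real OpenAlerts event schema.
from collections import deque

WINDOW = 20       # look at the last 20 calls
THRESHOLD = 0.5   # fire above 50% errors

def high_error_rate(events, window=WINDOW, threshold=THRESHOLD):
    """Return True if more than `threshold` of the last `window` calls errored."""
    recent = deque(events, maxlen=window)  # keep only the trailing window
    if not recent:
        return False
    errors = sum(1 for e in recent if e.get("status") == "error")
    return errors / len(recent) > threshold

# 11 errors in the last 20 calls is 55%, above the 50% threshold
calls = [{"status": "error"}] * 11 + [{"status": "ok"}] * 9
assert high_error_rate(calls) is True
```

A sliding window keyed to call count, rather than wall-clock time, is what makes this rule hard to express in time-bucketed APM alerting.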