From silent agent failures to instant signals.

OpenAlerts watches your agentic stack in real time. LLM errors, stuck sessions, gateway outages - delivered to Slack, Discord, or Telegram the moment they happen.

Alert · Slack · Critical
just now · openalerts
Session Stuck · OpenManus
Agent sess_a3f2k1 unresponsive for 34s. Last tool: browser_use. Memory at 91%. Auto-kill threshold: 60s.
localhost:9464/openalerts
Apache-2.0
Open Source
Runs Locally
Quick Start

Two lines. You're covered.

Zero configuration. OpenAlerts instruments your agent automatically - no changes to your existing code.

Python · OpenManus v0.1.1 · Python ≥ 3.12
pip install openalerts
import asyncio

import openalerts
from app.agent.manus import Manus

async def main():
    await openalerts.init({})  # add this
    agent = Manus()
    await agent.run("Your task")  # unchanged

asyncio.run(main())
Dashboard at localhost:9464/openalerts
Node.js · OpenClaw v0.2.6 · Node ≥ 22.5
npm install -g @steadwing/openalerts
# 1. auto-detect your OpenClaw config
openalerts init

# 2. configure your alert channel
#    ~/.openalerts/config.json

# 3. start monitoring (separate process)
openalerts start
Dashboard at localhost:4242
Local Dashboard

Full visibility, fully local.

A live web dashboard launches automatically. Every event, every session, every alert - persisted in JSONL, survives agent restarts. Your data never leaves your machine.

localhost:9464/openalerts · OpenAlerts Dashboard
localhost:4242 · OpenAlerts Dashboard
Overview
Sessions
Live Monitor
Alerts
Cron Jobs
Diagnostics
3
Active Sessions
2
Alerts Today
1,248
Events Processed
$1.24
LLM Spend Today
Recent Alerts
Error · session-stuck · sess_a3f2k1 · 2m ago
Warn · high-error-rate · 3 LLM errors in 60s · 18m ago
Live Sessions
sess_a3f2k1 · active · 14 steps · $0.18
sess_b2e1m3 · thinking · 8 steps · $0.07
sess_c9k4p7 · idle · 31 steps · $0.42
Supported Frameworks

Already inside the agents you run.

OpenManus
Python adapter
Live
OpenClaw
Node.js adapter
Live
nanobot
Python adapter
Live
CrewAI
Python adapter
Live
LangChain
Callback handler
Coming Soon
Vercel AI SDK
Telemetry hook
Coming Soon
Mastra
Framework adapter
Coming Soon
The Problem

AI agents fail silently.

They don't throw exceptions you notice. They just keep running - burning tokens, consuming budget, blocking users - while you have no idea.

It's running. Or is it?

The session shows active. The agent is stuck in a loop, repeating the same tool call every 30 seconds, burning tokens with every attempt.

Generic APM doesn't speak agent

Sentry can't detect a stuck session. Datadog doesn't know what “50% LLM error rate over 20 calls” means. These tools weren't built for agentic failure modes.

You find out when the bill arrives

By the time a user complains or your LLM invoice spikes, the agent has been failing for hours. OpenAlerts catches it within seconds.

How It Works

Three steps, full coverage.

01 — Install

One command

Install the package alongside your existing agent setup. No native build steps. No heavy dependencies.

pip install openalerts
npm install -g @steadwing/openalerts
02 — Initialize

Two lines of code

Three CLI commands

Call openalerts.init({}) before your agent runs. OpenAlerts monkey-patches your framework automatically - zero changes to your agent logic.

Run openalerts init to auto-detect your OpenClaw config, then configure your alert channel and start the monitoring process.

await openalerts.init({})
openalerts init && openalerts start
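The monkey-patching in step 02 can be sketched generically. This is an illustrative pattern, not OpenAlerts' actual internals - the class, method, and event names below are assumptions:

```python
import functools

class Agent:
    """Stand-in for a framework agent class (e.g. Manus)."""
    async def run(self, task: str) -> str:
        return f"done: {task}"

def instrument(cls, on_event):
    """Wrap cls.run so every call emits start/end events,
    with no change to the agent's own logic."""
    original = cls.run

    @functools.wraps(original)
    async def wrapped(self, task, *args, **kwargs):
        on_event({"type": "session.start", "task": task})
        try:
            return await original(self, task, *args, **kwargs)
        finally:
            on_event({"type": "session.end", "task": task})

    cls.run = wrapped
    return cls
```

Because the wrapper replaces the method on the class, existing call sites like `await agent.run("Your task")` keep working unchanged.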
03 — Get Alerted

Instant notifications

The moment a rule fires - session stuck, LLM error spike, cost threshold breached - your team gets notified before users notice.

● Critical · Slack
manus: session stuck 34s · sess_a3f2k1
openclaw: llm-errors 3/min · sess_b2e1m3
11 Alert Rules

Purpose-built for
agent failure modes.

All thresholds configurable. All rules ship on day one. No setup required - they run against every event in real time.

Session Stuck

Agent idle with no progress for too long

Default: 120s

LLM Errors

API failures or model errors per minute

Default: 1 error

Tool Errors

Tool execution failures per minute

Default: 1 error

Heartbeat Fail

Consecutive missed heartbeats from gateway

Default: 3 in a row

High Error Rate

Failure percentage over last N calls

Default: >50% of 20

Gateway Down

No heartbeat received (watchdog timer)

Default: 30s silence

Infra Errors

Infrastructure-level errors per minute

Default: 1 error

Queue Depth

Pending items in alert delivery queue

Default: 10 items

Cost Spike

LLM spend rate per hour exceeded

Default: $5 / hr

Daily Budget

LLM spend exceeds daily budget cap

Default: $20 / day

Subagent Errors

Child agent failures in multi-agent workflows

Default: 1 error
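Overrides would live in ~/.openalerts/config.json. The rule keys and field names below are illustrative assumptions, not the documented schema - check your generated config for the real shape:

```json
{
  "rules": {
    "session_stuck": { "enabled": true, "threshold_seconds": 120 },
    "cost_spike": { "enabled": true, "usd_per_hour": 5 },
    "daily_budget": { "enabled": false, "usd_per_day": 20 }
  }
}
```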
Alert Channels

Wherever your team lives.

Configure multiple channels simultaneously. Alert delivery is fire-and-forget - if one channel is down, your agent never blocks.

Slack

Webhook URL integration. Alerts land in the channel of your choice with full alert context.

webhook_url in config

Discord

Webhook URL integration. Direct HTTP POST to your Discord server channel, no bot needed.

webhook_url in config

Telegram

Bot API integration. Real-time message delivery via Telegram bot to any chat or group.

token + chatId in config

Webhooks

HTTP POST to any URL. Pipe alerts into PagerDuty, custom dashboards, or any endpoint.

webhookUrl in config
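Putting the channel keys above together, a config.json might look like this. The per-channel key names (webhook_url, token, chatId, webhookUrl) are the ones listed above; the surrounding structure is an assumption for illustration:

```json
{
  "channels": {
    "slack": { "webhook_url": "https://hooks.slack.com/services/..." },
    "discord": { "webhook_url": "https://discord.com/api/webhooks/..." },
    "telegram": { "token": "123456:ABC...", "chatId": "-100123456789" },
    "webhook": { "webhookUrl": "https://example.com/alerts" }
  }
}
```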
Open Source

Built in the open.
No lock-in, ever.

Runs fully locally. No cloud dependency, no data leaving your machine, no pricing tiers. Just a tool that works.

FAQ

Common questions.

Which frameworks are supported?

Four frameworks are live: OpenManus (Python), OpenClaw (Node.js), nanobot (Python), and CrewAI (Python). The nanobot adapter includes subagent lifecycle tracking — subagent.spawn, subagent.end, and subagent.error events — with parent/child session correlation. Adapters for LangChain, Vercel AI SDK, and Mastra are in development. The event model is universal, so new adapters drop in with minimal code.

Does this send my data anywhere?

No. OpenAlerts runs fully locally — no external service calls, no cloud backend, no telemetry. Events are persisted to JSONL files in ~/.openalerts/ on your own machine. Nothing leaves.
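Because events are plain JSONL, they are easy to post-process with a few lines of Python. A minimal reader, assuming one JSON object per line (the file name and event shape are illustrative):

```python
import json
from pathlib import Path

def load_events(path: Path) -> list[dict]:
    """Parse a JSONL event log: one JSON object per non-empty line."""
    events = []
    for line in path.read_text().splitlines():
        if line.strip():
            events.append(json.loads(line))
    return events

# e.g. count LLM errors in today's log (path is hypothetical):
# errors = sum(1 for e in load_events(Path("~/.openalerts/events.jsonl").expanduser())
#              if e.get("type") == "llm.error")
```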

Will it slow my agent down?

No. Alert delivery is fire-and-forget. If Telegram is down or a webhook times out, your agent is never blocked — the alert is logged and the engine moves on. The monitoring overhead is negligible.
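The fire-and-forget pattern is straightforward to sketch: the agent thread only enqueues, and a background worker handles delivery and swallows channel failures. A generic illustration, not OpenAlerts' implementation:

```python
import queue
import threading

class FireAndForgetDispatcher:
    """Non-blocking alert delivery: send() returns immediately;
    a daemon worker delivers in the background."""

    def __init__(self, deliver):
        self._q = queue.Queue()
        self._deliver = deliver  # e.g. an HTTP POST to a webhook
        threading.Thread(target=self._worker, daemon=True).start()

    def send(self, alert: dict) -> None:
        self._q.put(alert)  # never blocks the agent

    def _worker(self) -> None:
        while True:
            alert = self._q.get()
            try:
                self._deliver(alert)
            except Exception:
                pass  # a failed channel is dropped, never raised into the agent
            finally:
                self._q.task_done()
```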

Can I customize the alert thresholds?

Yes. Every rule's threshold and cooldown is configurable via config.json. You can also disable individual rules entirely. Custom rule authoring is planned for a future release.

Python or Node — which should I use?

Use Python (pip install openalerts) for OpenManus or nanobot. Use the Node CLI (npm install -g @steadwing/openalerts) for OpenClaw. The Python adapter supports 6 core alert rules; the Node adapter covers 10, including cost tracking and queue depth. Both use the same config format and alert channels.

How is this different from Sentry or Datadog?

Generic APM tools don't understand agentic failure modes. Sentry doesn't know what a “stuck session” is. Datadog can't alert on “error rate over the last 20 LLM calls.” OpenAlerts rules are purpose-built for how AI agents actually fail — loops, token overruns, gateway disconnects, cost spikes.
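A rule like "error rate over the last 20 LLM calls" is a sliding window over call outcomes. A minimal sketch with illustrative names and defaults:

```python
from collections import deque

class ErrorRateRule:
    """Fire when more than `threshold` of the last `window`
    LLM calls failed. Names and defaults are illustrative."""

    def __init__(self, window: int = 20, threshold: float = 0.5):
        self.window = window
        self.threshold = threshold
        self.calls: deque[bool] = deque(maxlen=window)

    def record(self, ok: bool) -> bool:
        """Record one call outcome; return True if the rule fires."""
        self.calls.append(ok)
        if len(self.calls) < self.window:
            return False  # not enough data yet
        failures = self.calls.count(False)
        return failures / self.window > self.threshold
```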