Hard Enforcement via Hooks

Safety Guardrails That Actually Work

Not prompts. Not suggestions. Deterministic hooks that intercept dangerous commands before execution. Your agent cannot bypass this, even in bypassPermissions mode.

The problem: agents run dangerous commands

AI coding agents have shell access. Prompts don't stop them. The consequences are real.

Prompts can be bypassed

Telling an agent "don't push to main" is a suggestion, not enforcement. The agent can rationalize around any prompt instruction. A convincing hallucination is all it takes.

Real horror stories

A terraform destroy that wiped 1.9 million database rows. The Clinejection supply chain attack that injected malicious code through AI agents. These aren't hypothetical.

2.74x more security vulnerabilities

Research shows AI-assisted code contains 2.74x more security vulnerabilities than human-written code. Agents move fast and break things — literally.

bypassPermissions makes it worse

Power users run Claude Code in bypassPermissions mode for speed. Every command auto-approved. One hallucinated rm -rf / away from disaster.

Hooks vs. Prompts

A prompt says "please don't." A hook says "you cannot."

Prompt-based safety

  • Instructions in CLAUDE.md or system prompt
  • Agent can rationalize around any instruction
  • No enforcement mechanism — just text
  • Fails silently — you only learn after damage
  • Bypassed by bypassPermissions mode

Hook-based safety (AXME Code)

  • pre-tool-use hook runs before every command
  • Exit code 2 = command blocked. Deterministic.
  • Cannot be bypassed — even in bypassPermissions mode
  • Fails loudly — agent sees block message and reason
  • Rules checked via pattern matching, not LLM reasoning
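The bullets above can be sketched as a deterministic pattern check, assuming the hook receives the proposed shell command as its argument. Rule names and patterns here are illustrative, not AXME Code's actual rule set:

```shell
# Minimal sketch of a pre-tool-use check. Exit/return code 2 = blocked,
# 0 = allowed. Glob patterns, not LLM reasoning, decide the outcome.
check_command() {
  case "$1" in
    *"git push --force"*)
      # Substring match also catches --force-with-lease.
      echo "BLOCKED by pre-tool-use hook (rule: no-force-push)"
      return 2 ;;
    *"chmod 777"*)
      echo "BLOCKED by pre-tool-use hook (rule: no-world-writable)"
      return 2 ;;
    *)
      return 0 ;;   # no pattern matched: allow
  esac
}
```

Because the decision is a glob match on the command string, the same input always produces the same verdict — there is nothing for the agent to argue with.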

What's blocked by default

Out of the box, AXME Code blocks the most dangerous commands. No configuration needed.

Git operations

  • git push --force
  • git push --force-with-lease
  • git reset --hard
  • git tag / git push --tags

Destructive system commands

  • rm -rf /
  • chmod 777
  • curl | sh / wget | sh

Publishing & deployment

  • npm publish
  • gh release create
  • gh workflow run deploy-prod

Sensitive file writes

  • .env files
  • .pem / .key files
  • credentials.json
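The sensitive-file defaults above amount to a path check before any write tool runs. A hedged sketch — these glob patterns mirror the list but are illustrative, not the shipped rule set:

```shell
# Returns 0 if the target path matches a sensitive-file pattern
# (block the write), 1 otherwise (allow it).
is_sensitive_path() {
  case "$1" in
    *.env|*.env.*|*.pem|*.key|*credentials.json) return 0 ;;
    *) return 1 ;;
  esac
}
```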

Add your own rules

The defaults cover the basics. Your project has its own constraints. Add custom safety rules during any session.

# During a Claude Code session, tell the agent:

"Add a safety rule: never run database migrations directly. Always use the migration tool."

# The agent calls axme_update_safety, which persists the rule.

# Or edit rules.yaml directly:

cat .axme-code/safety/rules.yaml
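If you edit rules.yaml by hand, a custom rule might look like the following. The schema shown here is a guess for illustration — check the file AXME Code generates in your project for the actual format:

```shell
# Hypothetical rules.yaml shape; field names are assumptions.
mkdir -p .axme-code/safety
cat > .axme-code/safety/rules.yaml <<'EOF'
rules:
  - name: no-direct-migrations
    pattern: "db:migrate"   # example pattern; adjust to your stack
    reason: "Never run database migrations directly. Use the migration tool."
EOF
```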

Real example: force push blocked

Here's what happens when your agent tries to run a dangerous command.

# Agent attempts:

git push --force origin main

BLOCKED by pre-tool-use hook

Rule: no-force-push

Pattern: git push --force

Reason: Force push to remote is prohibited. Use regular push or create a new branch.

Exit code: 2 (command not executed)

Agent sees the block message and adjusts:

git push origin feature-branch

✓ Allowed

Stop hoping your agent behaves

Install guardrails that work. One command, zero config.

# Install

curl -fsSL https://raw.githubusercontent.com/AxmeAI/axme-code/main/install.sh | bash

# Set up your project

cd your-project && axme-code setup

# Safety hooks are active immediately

claude