Case Study: Palyan Family AI System

Running 13 AI agents on a single VPS with The Nervous System - February 28 to March 5, 2026

The Challenge

Palyan Family AI System operates 27 concurrent processes on a single $24/month DigitalOcean VPS with 4GB RAM: AI agents for operations, communication, content creation, social media, and training, plus three MCP servers. The system is managed primarily by LLM-powered agents that read files, execute commands, edit code, and deploy services.

The problem: without behavioral enforcement, LLM agents would routinely damage critical files, lose context between sessions, drift from objectives during long tasks, and fail silently when sessions timed out.

The Numbers

Violations caught: 56
Edits blocked: 32
Files protected: 13
Rules bypassed: 0
Processes monitored: 27
MCP tools: 21

What Was Prevented

Incident Type                                     | Count | Impact Prevented
LLM attempted to edit web server config           | 7     | Site downtime for all services
LLM attempted to edit authentication code         | 5     | Security vulnerability or lockout
LLM attempted to edit bridge server               | 4     | Loss of remote management capability
LLM attempted to edit chatbox during live session | 3     | User-facing service disruption
LLM attempted to edit proxy configuration         | 6     | API routing failures for all agents
LLM attempted to edit agent worker scripts        | 4     | Agent behavioral changes without approval
Handoff not updated during active work            | 24    | Context loss for next session
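The blocked edits above all pass through the same gate: before any file edit, the target is checked against the UNTOUCHABLE list. A minimal sketch of that kind of check, assuming a simple space-separated list (the entries here besides simple-proxy.js are illustrative, not the real 89-file list):

```shell
# Sketch of a preflight gate: refuse edits whose target is on a
# protected-files list. List contents are illustrative examples.
preflight() {
    untouchable="simple-proxy.js Caddyfile auth.js"   # hypothetical entries
    base=$(basename "$1")
    for f in $untouchable; do
        if [ "$f" = "$base" ]; then
            echo "BLOCKED: $base is on the UNTOUCHABLE list"
            return 1
        fi
    done
    echo "OK: $base may be edited"
}

preflight "/srv/app/simple-proxy.js"   # prints BLOCKED: ...
preflight "/srv/app/notes.md"          # prints OK: ...
```

The key design choice is that the gate runs before the edit, not after: a non-zero exit stops the tool call entirely rather than logging damage already done.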

Timeline of Key Events

Feb 28
System goes live
preflight.sh deployed with an initial UNTOUCHABLE list of 89 files. The first violation was logged within hours, when an agent attempted to modify simple-proxy.js.
Mar 1
First stale handoff detected
Agent worked for 40+ minutes without updating SESSION_HANDOFF.md. Violation logged, pattern identified, enforcement tightened.
Mar 2
NS MCP submitted to Anthropic directory
The Nervous System packaged as an MCP server and submitted. All 21 tools and 7 resources available for any MCP-compatible client.
Mar 4
NS integrated into chatbox
Rules loaded dynamically from MCP on startup. Step-back injection, handoff staleness checks, and violation tracking added to the chatbox backend.
Mar 5
v1.9.0 shipped + external deployment
v1.9.0 adds kill switch, tamper-evident audit chain, and dispatch-to-llm. First external deployment confirmed (MacBook Pro, March 5 2026). Published on npm, MCP Registry, and GitHub Marketplace.
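The v1.9.0 release notes describe the audit chain only as tamper-evident and hash-chained. One common way to build such a chain is to hash each entry together with the previous hash, so editing any past line invalidates every later one. A minimal sketch under that assumption (the `hash|entry` log format and file name are illustrative, not NS's actual schema):

```shell
# Sketch of a tamper-evident audit chain: each line stores
# sha256(prev_hash | entry), so any retroactive edit breaks verification.
LOG=audit.log
: > "$LOG"
prev="genesis"

append() {
    hash=$(printf '%s|%s' "$prev" "$1" | sha256sum | cut -d' ' -f1)
    printf '%s|%s\n' "$hash" "$1" >> "$LOG"
    prev="$hash"
}

verify() {
    p="genesis"
    while IFS='|' read -r h e; do
        expect=$(printf '%s|%s' "$p" "$e" | sha256sum | cut -d' ' -f1)
        [ "$h" = "$expect" ] || { echo "TAMPERED at: $e"; return 1; }
        p="$h"
    done < "$LOG"
    echo "chain intact"
}

append "edit blocked: simple-proxy.js"
append "stale handoff warning"
verify   # prints: chain intact
```

Rewriting any logged entry after the fact makes `verify` report the first line whose stored hash no longer matches.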

What We Learned

"The LLM can't reliably self-enforce promises. Guardrails work via preflight.sh, violation logs, and catching drift. Build enforcement systems, don't make promises."
- Operating principle, established after observing LLM behavior patterns

Mechanical enforcement beats promises. In the first week, the preflight check blocked 32 file edits that would have damaged production infrastructure. The LLM was not being malicious. It was trying to be helpful. That is exactly the problem: a helpful AI with file access will "fix" things you did not ask it to fix.

Forced reflection materially improves quality. The step-back cycle (Rule 4) consistently produced moments where the LLM caught its own drift. Without the forced pause, these course corrections would not have happened.

Written handoffs are non-negotiable. 24 stale handoff warnings in 7 days means the LLM "forgot" to document its state roughly 3.4 times per day. Each of those would have been a complete context loss for the next session. The warning system caught every one.
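A staleness check like the one described can be as small as a modification-time test against the 40-minute threshold from the March 1 incident. A sketch, assuming the handoff lives at SESSION_HANDOFF.md in the working directory (the script itself is illustrative, not NS's actual implementation):

```shell
# Sketch of a handoff staleness check: warn when SESSION_HANDOFF.md
# has not been modified within the allowed window during active work.
HANDOFF="SESSION_HANDOFF.md"
MAX_AGE_MIN=40

check_handoff() {
    if [ ! -f "$HANDOFF" ]; then
        echo "VIOLATION: handoff file missing"
        return 1
    fi
    # find -mmin +N matches files modified more than N minutes ago
    if [ -n "$(find "$HANDOFF" -mmin +"$MAX_AGE_MIN")" ]; then
        echo "VIOLATION: handoff stale (>$MAX_AGE_MIN min old)"
        return 1
    fi
    echo "OK: handoff is fresh"
}

touch "$HANDOFF"
check_handoff   # prints: OK: handoff is fresh
```

Run from a cron job or session loop, the non-zero exit is what turns a forgotten handoff into a logged violation instead of a silent context loss.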

Guest mode proves the concept. When a visitor can interact with a governed AI, see the rules being enforced, and fail to extract internal information through social engineering, the Nervous System demonstrates its value more effectively than any documentation.

Infrastructure

Component       | Specification
Server          | DigitalOcean VPS, 4GB RAM, $24/month
Process manager | PM2, 27 processes
Web server      | Caddy (automatic HTTPS)
LLM provider    | Anthropic (Max subscription)
NS enforcement  | Bash scripts + Node.js MCP server
NS version      | v1.9.0 (21 tools, 7 resources)
Monthly cost    | Under $300/month total

Live data: Audit Dashboard | System Status | Try the Demo | GitHub

Bot Builder Marketplace (March 2026)

Opened the SOUL template system to the public. Users create custom AI agents through conversation, while auto-learn silently extracts personality and knowledge. A bot curator flags the best creations. 8 templates, Stripe billing, multi-channel deployment.

Try Bot Builder

Industry Validation (March 2026)

5 of the top 10 fastest-growing GitHub repositories in March 2026 are AI agent platforms. The largest, OpenClaw (247K stars), was banned by China's government and flagged by Cisco's security team for data exfiltration in third-party skills. ByteDance's DeerFlow 2.0 (10K+ stars, MIT licensed) validates multi-agent orchestration with sub-agents, memory, and sandboxed execution as production-grade infrastructure.

Our Nervous System MCP is the only governance framework addressing these security gaps with preflight authorization, hash-chained audit trails, drift detection, and kill switch capabilities. The agent ecosystem is exploding. Governance is not optional.

Sources: OpenClaw (github.com/openclaw/openclaw), DeerFlow (github.com/bytedance/deer-flow), RuView (github.com/ruvnet/RuView)