SignalOS: The Neural Pathway of an AI Operating Kernel
Every biological organism perceives its environment through a nervous system — millions of signals flowing inward, being classified, filtered, and routed to the right response centers. SignalOS gives OctopusOS the same capability: a unified IO/Signal/Stream control plane that turns raw environmental changes into governed, actionable intelligence.
1. Why an AI OS Needs a Nervous System
Traditional AI agents are reactive: they wait for explicit prompts, process them, and return results. But a true AI Operating System must be continuously aware — sensing file changes on disk, HTTP traffic flowing through its server, log anomalies bubbling up from internal modules, and hardware events from connected devices.
Without a unified signal plane, each subsystem implements its own ad-hoc event handling: file watchers that miss changes, log parsers that can’t correlate with network events, and hardware monitors that operate in isolation. The result is an agent that is deaf to its own environment.
SignalOS solves this by introducing a single, typed, governed signal bus that every IO domain feeds into and every governance module can observe.
2. Four IO Domains
OctopusOS classifies all input/output into four fundamental domains: host_io (file system and local host events), network_io (HTTP and other network traffic), stream_io (logs and internal event streams), and hardware_io (GPIO, serial, and camera devices). Every signal that enters the bus carries its domain tag, enabling domain-specific governance policies and routing rules.
3. The SignalEnvelope: A Universal Event Language
At the heart of SignalOS is the SignalEnvelope — a frozen, immutable dataclass that serves as the universal event format. Every signal, regardless of its source domain, is normalized into this shape before entering the bus.
```python
@dataclass(frozen=True)
class SignalEnvelope:
    signal_id: str                 # Unique identifier
    origin: SignalOrigin           # Where it came from
    kind: SignalKind               # What type of event
    ts_ms: int                     # Millisecond timestamp
    object_ref: str                # Target object (path, URL, device)
    action: str                    # What happened (created, modified, GET, ERROR)
    payload: dict                  # Structured event data
    sensitivity: SensitivityLevel  # public | internal | confidential | restricted
    trust_level: TrustLevel        # verified | authenticated | anonymous | untrusted
    priority: SignalPriority       # critical | high | normal | low | bulk
    phase: SignalPhase             # observe | understand | govern | act
    correlation_id: str            # Causal chain link
    parent_signal_id: str | None   # Direct causal parent
    ttl_ms: int                    # Time-to-live (anti-storm)
```
Design principles:
- Frozen immutability — once created, a signal can never be mutated, ensuring audit trail integrity
- Domain-agnostic — the same envelope carries file changes and HTTP requests alike
- Self-describing — every field needed for governance decisions is embedded in the signal itself
- Causal linking — `correlation_id` and `parent_signal_id` enable full causal chain reconstruction
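To make the immutability guarantee concrete, here is a minimal runnable sketch, using a simplified subset of the envelope with plain strings standing in for the enum types (field values such as `"sig-001"` are illustrative, not from the real system):

```python
from dataclasses import dataclass, FrozenInstanceError

# Minimal illustrative subset of the envelope; enum types are
# simplified to plain strings for this sketch.
@dataclass(frozen=True)
class SignalEnvelope:
    signal_id: str
    origin: str       # e.g. "host_io"
    kind: str         # e.g. "file_change"
    ts_ms: int
    object_ref: str
    action: str
    phase: str = "observe"  # every signal enters the bus in the observe phase

sig = SignalEnvelope(
    signal_id="sig-001",
    origin="host_io",
    kind="file_change",
    ts_ms=1700000000000,
    object_ref="/workspace/app.py",
    action="modified",
)

# A frozen dataclass rejects mutation, preserving audit trail integrity.
try:
    sig.action = "deleted"
except FrozenInstanceError:
    print("immutable")
```

Any attempt to mutate a field raises `FrozenInstanceError`; derived signals must be new envelopes, which is what keeps the audit trail trustworthy.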
4. The Four Control Phases
Every signal traverses a strict four-phase lifecycle: observe -> understand -> govern -> act. No signal can trigger side effects without passing through governance first.
Phase Isolation Guarantee
This is a critical architectural invariant: only signals in the act phase can trigger side effects. The kernel enforces this by tracking the phase field on every envelope. Signals in observe and understand phases can only be read and analyzed; govern phase signals can only produce routing decisions. This prevents the most dangerous class of event-driven bugs: observe-execute confusion, where raw sensor data directly triggers actions without governance review.
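The guard described above can be sketched as a pure function over the `phase` field. The helper names (`advance`, `execute_side_effect`) are illustrative, not the kernel's actual API:

```python
from dataclasses import dataclass, replace

PHASES = ("observe", "understand", "govern", "act")

@dataclass(frozen=True)
class Signal:
    signal_id: str
    phase: str

def advance(sig: Signal) -> Signal:
    """Return a NEW envelope in the next phase (frozen dataclasses forbid mutation)."""
    return replace(sig, phase=PHASES[PHASES.index(sig.phase) + 1])

def execute_side_effect(sig: Signal) -> str:
    """The phase-isolation guard: refuse side effects outside the act phase."""
    if sig.phase != "act":
        raise PermissionError(f"phase {sig.phase!r} may not trigger side effects")
    return f"executed {sig.signal_id}"

s = Signal("sig-001", "observe")
s = advance(advance(advance(s)))  # observe -> understand -> govern -> act
print(execute_side_effect(s))     # only now is execution permitted
```

Because a raw `observe`-phase signal can never reach `execute_side_effect` without three explicit transitions, observe-execute confusion is structurally impossible rather than merely discouraged.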
5. Five-Layer Architecture
SignalOS is implemented across five layers, each with a clear responsibility and strict dependency direction.
Layer placement rules:
- L1 (Adapters) live in `server/shared/ports_impl/signal_adapters/` — these are the only modules that perform actual I/O
- L2-L3 (Normalization + Governance) live in `kernel/domains/signal_os/` — pure functions, zero I/O, fully testable
- L4 (Execution) lives in `kernel/runtime/_wl_signal_os.py` — a Worker Loop Mixin that orchestrates the pipeline per tick
- L5 (Intelligence) lives in `kernel/domains/signal_os/` — pure analysis modules for post-hoc pattern detection
6. Anti-Event-Storm Architecture
The most dangerous failure mode in any event-driven system is the event storm: a cascade of signals that overwhelms processing capacity. SignalOS implements four layers of defense.
Backpressure State Machine
When a priority lane reaches its queue depth limit, the bus enters a saturated state. Incoming signals to that lane are dropped and the drop count is tracked. The /api/signals/backpressure/{lane_id} endpoint exposes this state in real-time.
```python
@dataclass(frozen=True)
class BackpressureState:
    lane_id: str        # "hot" | "warm" | "cold"
    queue_depth: int    # Current queue size
    max_depth: int      # Configured maximum
    drop_count: int     # Signals dropped due to saturation
    is_saturated: bool  # queue_depth >= max_depth
```
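The drop-on-saturation behavior can be sketched with a bounded lane. This `Lane` class and its `enqueue` method are illustrative names, not the bus's real implementation:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Lane:
    """Hypothetical priority lane with a bounded queue."""
    lane_id: str
    max_depth: int
    queue: deque = field(default_factory=deque)
    drop_count: int = 0

    @property
    def is_saturated(self) -> bool:
        return len(self.queue) >= self.max_depth

    def enqueue(self, signal) -> bool:
        """Drop on saturation and count the drop, rather than block the producer."""
        if self.is_saturated:
            self.drop_count += 1
            return False
        self.queue.append(signal)
        return True

hot = Lane("hot", max_depth=2)
results = [hot.enqueue(f"sig-{i}") for i in range(4)]
print(results, hot.drop_count)  # [True, True, False, False] 2
```

The key design choice is that saturation sheds load at the edge instead of propagating backpressure to the adapter, which keeps the I/O layer from stalling during a storm.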
7. The Three Neural Pathways
In the current implementation, OctopusOS has three active “nerve fibers” that continuously feed signals into the bus:
HTTP Nerve (Network IO)
FastAPI middleware captures every HTTP request and response. The HttpInterceptAdapter extracts method, path, status code, duration, and client host — creating a network_io / http_request signal for each API call. Configurable sampling rate prevents self-referential storms (signal endpoints generating signals about themselves).
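A sketch of the request-to-signal mapping, with sampling and self-referential filtering, might look like the following. The function name and path-prefix check are assumptions for illustration; only the field values (method, path, status, duration, client) come from the description above:

```python
import random

def http_to_signal(method, path, status, duration_ms, client, sample_rate=1.0):
    """Hypothetical per-request mapping. Returns None when the request is
    sampled out or targets the signal API itself (anti-self-reference)."""
    if path.startswith("/api/signals/"):
        return None  # avoid signals about signals
    if random.random() > sample_rate:
        return None  # dropped by the configured sampling rate
    return {
        "origin": "network_io",
        "kind": "http_request",
        "object_ref": path,
        "action": method,
        "payload": {"status": status, "duration_ms": duration_ms, "client": client},
    }

sig = http_to_signal("GET", "/api/knowledge/query", 200, 12.5, "127.0.0.1")
print(sig["action"], sig["payload"]["status"])  # GET 200
```

In a real FastAPI app this mapping would run inside an HTTP middleware that times the call before emitting the envelope.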
Log Nerve (Stream IO)
A custom SignalLoggingHandler is attached to Python’s root logger at the WARNING level and above. Every warning, error, or critical log automatically becomes a stream_io / log_line signal — giving OctopusOS real-time awareness of its own internal health without any explicit instrumentation.
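A minimal version of such a handler can be built on `logging.Handler`. This sketch attaches to a named logger (rather than the root logger) so it stays self-contained; the signal dict shape is illustrative:

```python
import logging

captured = []  # stand-in for the signal bus

class SignalLoggingHandler(logging.Handler):
    """Sketch: turn WARNING+ log records into stream_io / log_line signals."""
    def emit(self, record):
        captured.append({
            "origin": "stream_io",
            "kind": "log_line",
            "action": record.levelname,
            "payload": {"logger": record.name, "message": record.getMessage()},
        })

logger = logging.getLogger("signalos.demo")
logger.addHandler(SignalLoggingHandler(level=logging.WARNING))
logger.setLevel(logging.DEBUG)
logger.propagate = False  # keep the demo isolated from the root logger

logger.info("ignored")             # below WARNING: no signal emitted
logger.warning("disk usage high")  # becomes a stream_io / log_line signal
print(len(captured), captured[0]["action"])  # 1 WARNING
```

Because the handler's level filter sits in the logging framework itself, no module needs explicit instrumentation to feed the bus.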
File Nerve (Host IO)
The FileWatcherAdapter polls the workspace directory tree each tick cycle. File creations and modifications are detected by comparing (mtime, size) tuples against the last known state. Each change becomes a host_io / file_change signal.
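The `(mtime, size)` comparison can be sketched as two pure functions over directory snapshots. The helper names `snapshot` and `diff` are illustrative, not the adapter's actual API:

```python
import tempfile
from pathlib import Path

def snapshot(root):
    """Map each file under root to its (mtime, size) tuple."""
    return {str(p): (p.stat().st_mtime, p.stat().st_size)
            for p in Path(root).rglob("*") if p.is_file()}

def diff(prev, curr):
    """Emit one host_io / file_change signal per created or modified file."""
    signals = []
    for path, state in curr.items():
        if path not in prev:
            signals.append({"origin": "host_io", "kind": "file_change",
                            "object_ref": path, "action": "created"})
        elif prev[path] != state:
            signals.append({"origin": "host_io", "kind": "file_change",
                            "object_ref": path, "action": "modified"})
    return signals

with tempfile.TemporaryDirectory() as ws:
    before = snapshot(ws)                      # tick N
    Path(ws, "notes.txt").write_text("hello")  # change between ticks
    after = snapshot(ws)                       # tick N+1
    changes = diff(before, after)
print(changes[0]["action"])  # created
```

Polling snapshots instead of OS-level notifications trades latency for portability, and the per-tick cadence naturally debounces rapid successive writes.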
8. Governance Pipeline Deep Dive
Every signal that enters the bus goes through a multi-stage governance pipeline — classification, risk assessment, throttling, sanitization, and routing — before it can reach any handler.
9. Replay and Causal Intelligence
SignalOS doesn’t just process signals — it remembers them. The replay layer enables post-hoc analysis of signal chains, pattern detection, and root cause tracing.
Causal Chains
Every signal can carry a correlation_id and parent_signal_id, forming a directed acyclic graph of causally related events. The build_chain() function assembles these into SignalChain objects with root identification, time span analysis, and ordered signal sequences.
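Chain assembly reduces to filtering by `correlation_id`, sorting by timestamp, and finding the parentless root. This sketch uses plain dicts in place of envelopes and an illustrative return shape:

```python
def build_chain(signals, correlation_id):
    """Sketch of build_chain(): order causally related signals, find the root."""
    chain = sorted(
        (s for s in signals if s["correlation_id"] == correlation_id),
        key=lambda s: s["ts_ms"],
    )
    root = next(s for s in chain if s["parent_signal_id"] is None)
    return {
        "root": root["signal_id"],
        "span_ms": chain[-1]["ts_ms"] - chain[0]["ts_ms"],
        "sequence": [s["signal_id"] for s in chain],
    }

signals = [
    {"signal_id": "a", "correlation_id": "c1", "parent_signal_id": None, "ts_ms": 100},
    {"signal_id": "b", "correlation_id": "c1", "parent_signal_id": "a",  "ts_ms": 150},
    {"signal_id": "c", "correlation_id": "c1", "parent_signal_id": "b",  "ts_ms": 180},
]
chain = build_chain(signals, "c1")
print(chain["root"], chain["span_ms"], chain["sequence"])  # a 80 ['a', 'b', 'c']
```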
Pattern Detection
The detect_patterns() module analyzes collections of signal chains to identify recurring sequences — for example, detecting that file_change -> log_line(ERROR) -> http_request(500) happens repeatedly within a time window. These patterns are stored as SignalPattern objects with occurrence counts and time bounds.
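At its simplest, recurring-sequence detection is a count over action sequences. The following sketch assumes chains have already been reduced to ordered action lists; the threshold parameter is illustrative:

```python
from collections import Counter

def detect_patterns(chains, min_occurrences=2):
    """Sketch: report action sequences that recur across signal chains."""
    counts = Counter(tuple(chain) for chain in chains)
    return [{"sequence": list(seq), "occurrences": n}
            for seq, n in counts.items() if n >= min_occurrences]

chains = [
    ["file_change", "log_line(ERROR)", "http_request(500)"],
    ["file_change", "log_line(ERROR)", "http_request(500)"],
    ["http_request(200)"],
]
patterns = detect_patterns(chains)
print(patterns[0]["occurrences"])  # 2
```

A production detector would also bound the match to a time window, as described above, rather than counting globally.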
Root Cause Tracing
Given a target signal (e.g., an error response), trace_root_cause() walks backward through the causal chain to identify the originating signal. This enables automated root cause analysis: “this 500 error was caused by a file change that triggered a module reload that failed.”
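The backward walk is a simple parent-pointer traversal. This sketch indexes signals by id and returns both the root and the forward-ordered path (the return shape is illustrative):

```python
def trace_root_cause(signals_by_id, target_id):
    """Sketch: follow parent_signal_id links back to the originating signal."""
    current = signals_by_id[target_id]
    path = [current["signal_id"]]
    while current["parent_signal_id"] is not None:
        current = signals_by_id[current["parent_signal_id"]]
        path.append(current["signal_id"])
    return path[-1], list(reversed(path))

signals_by_id = {
    "file-1": {"signal_id": "file-1", "parent_signal_id": None},
    "log-2":  {"signal_id": "log-2",  "parent_signal_id": "file-1"},
    "http-3": {"signal_id": "http-3", "parent_signal_id": "log-2"},
}
root, path = trace_root_cause(signals_by_id, "http-3")
print(root, path)  # file-1 ['file-1', 'log-2', 'http-3']
```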
10. Flywheel Bridge
SignalOS doesn’t exist in isolation — it connects bidirectionally to OctopusOS’s existing Flywheel intelligence system through the flywheel_link module.
| Direction | Function | Behavior |
|---|---|---|
| Signal -> Flywheel | signal_to_flywheel() | Converts high-value signals into FlywheelSignal observations. Bulk-priority signals are filtered out to prevent flywheel pollution. |
| Flywheel -> Signal | flywheel_to_signal() | Converts flywheel observations back into SignalEnvelope format, with adapter_name="flywheel" and domain="stream_io". |
This bridge means that patterns detected by SignalOS feed into the flywheel’s learning loop, and flywheel insights can be re-ingested as signals for further governance — creating a self-reinforcing intelligence cycle.
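A sketch of the two bridge directions, with the bulk-priority filter applied on the way in. The dict shapes and the `source` field are assumptions; only the filter rule, the `adapter_name="flywheel"` tag, and the `stream_io` domain come from the table above:

```python
def signal_to_flywheel(sig):
    """Sketch: convert a signal to a flywheel observation; drop bulk noise."""
    if sig["priority"] == "bulk":
        return None  # bulk signals never pollute the flywheel
    return {"source": "signal_os", "kind": sig["kind"], "payload": sig["payload"]}

def flywheel_to_signal(obs):
    """Sketch: re-ingest a flywheel observation as a stream_io signal."""
    return {"adapter_name": "flywheel", "origin": "stream_io",
            "kind": obs["kind"], "payload": obs["payload"]}

hot = {"kind": "pattern_detected", "priority": "high", "payload": {"count": 3}}
noise = {"kind": "heartbeat", "priority": "bulk", "payload": {}}

print(signal_to_flywheel(noise))  # None (filtered out)
round_trip = flywheel_to_signal(signal_to_flywheel(hot))
print(round_trip["adapter_name"], round_trip["origin"])  # flywheel stream_io
```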
11. API Surface
SignalOS exposes four REST endpoints for external integration and observability:
| Endpoint | Method | Purpose |
|---|---|---|
| `/api/signals/ingest` | POST | Manually inject a signal (testing / external sources) |
| `/api/signals/stream` | GET | Server-Sent Events stream of governed signals |
| `/api/signals/stats` | GET | Bus throughput statistics (ingested, dropped, per-lane) |
| `/api/signals/backpressure/{lane_id}` | GET | Real-time backpressure state for a priority lane |
12. Implementation Metrics
| Metric | Count |
|---|---|
| Frozen dataclasses in `contracts/signal_os.py` | 15 |
| Pure domain modules in `domains/signal_os/` | 12 |
| Public domain functions | 14 |
| Signal adapters (active) | 4 |
| Unit + integration tests | 82 |
| Lines of kernel code (contracts + domains) | ~800 |
| Gate violations introduced | 0 |
13. Design Philosophy
SignalOS embodies three core architectural beliefs:
1. Observe before you act. The phase isolation guarantee ensures that no signal can trigger side effects without passing through classification, risk assessment, and routing. This is the single most important safety property of the system.
2. Every signal is a first-class citizen. A file change and an HTTP request receive the same typed envelope, the same governance pipeline, and the same replay capabilities. There are no second-class events that bypass governance.
3. Defense in depth against storms. Four independent layers of protection — adapter debounce, bus backpressure, governance throttle, and TTL expiry — mean that no single failure can cause an event cascade. Each layer operates independently, so even if one is misconfigured, the others hold.
14. Application Scenarios
Self-Healing Infrastructure
When SignalOS detects a recurring pattern of file_change -> log_line(ERROR), it can automatically correlate the file modification with the error and suggest (or execute) a rollback — without human intervention.
Security Monitoring
SSH events with sensitive payloads are automatically detected, sanitized (passwords redacted), and routed to the hot priority lane. Blocked origins are rejected before they reach any handler.
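The sanitization step can be sketched as a pure payload transform. The key list and function name are assumptions for illustration; the only property taken from the text is that passwords are redacted before routing:

```python
SENSITIVE_KEYS = {"password", "passwd", "secret", "token"}

def sanitize_payload(payload):
    """Sketch: redact sensitive keys before a signal is routed to any handler."""
    return {k: ("[REDACTED]" if k.lower() in SENSITIVE_KEYS else v)
            for k, v in payload.items()}

raw = {"user": "admin", "password": "hunter2", "host": "10.0.0.5"}
print(sanitize_payload(raw))
```

Running sanitization in the govern phase, before the act phase, means no handler ever sees the unredacted payload.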
Performance Observability
Every HTTP request generates a signal with duration metrics. The replay layer can detect latency patterns — for example, identifying that requests to /api/knowledge/query consistently spike after a file_change event in the KB directory.
Agent Awareness
Agent lifecycle events (task start, step completion, failures) flow through SignalOS as stream_io / agent_stream_event signals. This gives the governance layer visibility into agent behavior, enabling quota enforcement and anomaly detection across agent populations.
Hardware Integration
GPIO state changes, serial data streams, and camera frames enter the bus as hardware_io signals. The governance pipeline ensures that camera frames (marked restricted sensitivity) are never persisted without explicit policy approval, while GPIO readings (marked internal) flow freely to monitoring dashboards.