Configuring a Local AI Agent Development Stack
Overview
Running AI coding agents locally requires more than installing a CLI tool. Production-grade setups need structured access controls, conflict prevention between agent instances, and governance layers that prevent both accidental damage and prompt injection attacks.
This article outlines a configuration stack running on Windows Subsystem for Linux (WSL) that combines multiple AI agent interfaces with centralized governance.
Stack Components
| Component | Role |
|---|---|
| WSL2 | Linux runtime environment |
| OpenClaw Gateway | Agent orchestration and channel routing |
| Claude Code | VS Code integrated AI coding assistant |
| Governance Hooks | Pre-execution security filters |
| Task Registry | Multi-instance conflict prevention |
WSL as the Foundation
WSL2 provides an isolated Linux environment with direct filesystem access to the Windows host. For AI agent development, this architecture offers:
- Native execution of Python toolchains and shell scripts
- Access to Windows drives via /mnt/ paths for media processing
- Network isolation options (loopback binding)
- Process-level separation from the Windows desktop
The Linux kernel runs inside a lightweight VM, providing genuine Linux syscalls while maintaining low overhead.
Agent Orchestration Layer
The gateway component handles routing between messaging surfaces (Telegram, Discord, direct CLI) and the underlying AI model. Configuration controls include:
Access Control:
- Explicit allowlists for users and groups
- Per-channel policies (DM restrictions, group permissions)
- Token-based authentication for local HTTP endpoints
Resource Limits:
- Maximum concurrent sessions
- Maximum concurrent subagent spawns
- Context compaction policies
Binding:
- Loopback-only by default (localhost access)
- Optional Tailscale integration for remote access with authentication
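The controls above can be sketched as a single access check. This is an illustrative Python sketch, not an actual OpenClaw Gateway API; the names (ALLOWED_USERS, CHANNEL_POLICY, check_access) and policy shape are assumptions.

```python
# Hypothetical gateway access check: user allowlist, per-channel policy,
# and a concurrent-session cap evaluated in order. All names are illustrative.

ALLOWED_USERS = {"alice", "bob"}          # explicit user allowlist
CHANNEL_POLICY = {
    "telegram": {"allow_dm": True, "allow_group": False},
    "discord":  {"allow_dm": False, "allow_group": True},
}
MAX_SESSIONS = 4                          # resource limit: concurrent sessions

active_sessions = 0

def check_access(user: str, channel: str, is_dm: bool) -> bool:
    """Return True only when the user, channel policy, and limits all pass."""
    if user not in ALLOWED_USERS:
        return False
    policy = CHANNEL_POLICY.get(channel)
    if policy is None:
        return False                      # unknown channels are denied
    if is_dm and not policy["allow_dm"]:
        return False
    if not is_dm and not policy["allow_group"]:
        return False
    return active_sessions < MAX_SESSIONS
```

Note that the default is denial: an unknown user or channel never falls through to an allow.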
Claude Code Permission Model
The VS Code integration uses an explicit permission allowlist. Commands not matching approved patterns require manual approval at runtime.
Example permission patterns:
```
Bash(python3:*)
Bash(git add:*)
Bash(npm test:*)
WebFetch(domain:docs.example.com)
```
This approach inverts the typical security model. Rather than blocking known-bad commands, only explicitly permitted operations execute without interruption.
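A simplified matcher illustrates the allowlist mechanics. This is a sketch of the general idea, not Claude Code's actual parsing logic: it treats `Bash(prefix:*)` as a command-prefix match and other specs as exact matches.

```python
# Illustrative allowlist matcher for patterns of the form Tool(spec).
# The parsing below is a simplification, not Claude Code's implementation.

PATTERNS = [
    "Bash(python3:*)",
    "Bash(git add:*)",
    "Bash(npm test:*)",
    "WebFetch(domain:docs.example.com)",
]

def is_allowed(tool: str, value: str) -> bool:
    """Return True if a tool invocation matches the allowlist.

    Anything that returns False falls back to manual approval at runtime.
    """
    for pattern in PATTERNS:
        p_tool, _, spec = pattern.partition("(")
        spec = spec.rstrip(")")
        if p_tool != tool:
            continue
        prefix, _, glob = spec.partition(":")
        # Bash specs use "command-prefix:*"; other tools use exact "key:value".
        if glob == "*" and value.startswith(prefix):
            return True
        if spec == value:
            return True
    return False
```

Unmatched commands are not rejected outright; they are simply escalated to the human for a one-off decision, which keeps the allowlist small.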
Session hooks provide additional control points:
- Session start hooks — Run initialization scripts when a coding session begins
- Status line integration — Display project state in the IDE
Governance Hook Architecture
Pre-execution filters intercept messages and commands before they reach the AI model or execute on the system. A bash-based governance hook can enforce:
Path Restrictions:
- Block access to sensitive directories (.ssh, .gnupg, credentials)
- Prevent operations on system paths (/etc/, /proc/, /sys/)
Prompt Injection Detection:
- Pattern matching for common injection phrases
- Context length limits to prevent stuffing attacks
Dangerous Command Prevention:
- Destructive operations (rm -rf, chmod 777)
- Force push operations
- Database truncation commands
Shell Injection Patterns:
- Command chaining attempts
- Eval/exec function calls
- Import smuggling
Blocked events log to structured JSONL files for audit trails.
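The article's hook is bash-based, but the checks translate directly to other languages. The following Python sketch combines the four filter categories with JSONL audit logging; the specific patterns, the length cap, and the log path are illustrative assumptions.

```python
import json
import re
import time

# Sketch of a pre-execution governance filter (the production hook described
# above is bash; this Python version mirrors its checks). Patterns below are
# examples only, not a complete ruleset.

BLOCKED_PATHS = [".ssh", ".gnupg", "/etc/", "/proc/", "/sys/"]
DANGEROUS = [r"\brm\s+-rf\b", r"\bchmod\s+777\b", r"--force\b",
             r"\btruncate\s+table\b"]
INJECTION = [r"ignore (all )?previous instructions", r"disregard your rules"]
MAX_CONTEXT = 8000   # crude length cap against stuffing attacks

def filter_command(command: str, log_path: str = "audit.jsonl") -> bool:
    """Return True to allow; otherwise block and append a JSONL audit record."""
    lowered = command.lower()
    reason = None
    if len(command) > MAX_CONTEXT:
        reason = "context_length"
    elif any(p in command for p in BLOCKED_PATHS):
        reason = "path_restriction"
    elif any(re.search(p, lowered) for p in DANGEROUS):
        reason = "dangerous_command"
    elif any(re.search(p, lowered) for p in INJECTION):
        reason = "prompt_injection"
    if reason:
        with open(log_path, "a") as log:   # structured audit trail
            log.write(json.dumps({"ts": time.time(), "reason": reason,
                                  "command": command[:200]}) + "\n")
        return False
    return True
```

Because the filter runs before the model or shell ever sees the input, a bypass requires defeating the hook itself rather than tricking the model.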
Multi-Instance Conflict Prevention
When multiple agent interfaces can access the same codebase (VS Code session + Telegram bot + background automation), file corruption becomes possible. A lock-based registry prevents concurrent edits:
- Agent registers intent to work on a project path
- Registry checks for existing locks from other channels
- Conflict detected → work blocked with notification
- No conflict → lock acquired, work proceeds
- Task completion → lock released
Lock files encode the channel identifier (vscode, telegram, discord, background) and active task label.
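A minimal registry sketch shows the acquire/release cycle. The file layout, lock directory, and field names here are illustrative, not a specific tool's format.

```python
import json
import os
import time

# Minimal lock-based task registry sketch. Each project path maps to one
# lock file encoding the holding channel and its active task label.

LOCK_DIR = "/tmp/agent-locks"   # illustrative location

def _lock_path(project: str) -> str:
    return os.path.join(LOCK_DIR, project.replace("/", "_") + ".lock")

def acquire(project: str, channel: str, task: str) -> bool:
    """Register intent to work on a project; fail if another channel holds it."""
    os.makedirs(LOCK_DIR, exist_ok=True)
    path = _lock_path(project)
    if os.path.exists(path):
        with open(path) as f:
            holder = json.load(f)
        if holder["channel"] != channel:
            return False        # conflict: caller notifies and backs off
    with open(path, "w") as f:  # lock encodes channel and task label
        json.dump({"channel": channel, "task": task, "ts": time.time()}, f)
    return True

def release(project: str) -> None:
    """Release the lock on task completion."""
    try:
        os.remove(_lock_path(project))
    except FileNotFoundError:
        pass
```

The exists-then-write sequence above has a race window; a production registry would create the lock atomically (for example with os.open using O_CREAT | O_EXCL) so two channels cannot both win.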
Configuration Principles
Several patterns emerge from production AI agent configurations:
Explicit over implicit. Allowlists require more initial setup but prevent unexpected behavior. Blocklists inevitably miss edge cases.
Defense in depth. Multiple layers (permission model, governance hooks, lock registry) provide redundancy. A bypass at one layer doesn’t compromise the system.
Audit everything. Structured logging of blocked events, task registrations, and security exceptions enables post-incident analysis.
Local first. Loopback binding and local token auth reduce attack surface. Remote access adds complexity and should be opt-in.
Next in This Series
Future articles will cover:
- Implementing governance hooks with pattern matching
- Task registry design for multi-agent coordination
- Permission allowlist strategies for coding agents
- Prompt injection detection at the infrastructure layer
This configuration runs in production managing development workflows across multiple projects. Specific implementation details vary based on tooling versions and organizational requirements.