Neural Intelligence Platform
v0.1.0 · egui 0.31 · Rust | relay · arachnia.stream:8473
Overview #overview

Arachnia is an AI-driven multi-model development environment — a native, GPU-accelerated application built in Rust with direct integration across all major LLM inference providers. It is designed as a unified workspace where AI models have full awareness of your codebase, can collaborate alongside you in real time, and can propose, validate, and evolve code autonomously under your explicit review.

The AI layer is the core of the product. Silk Cortex provides streaming multi-provider chat with persistent memory context injected from the Resonance Weave recall ledger. Hivemind broadcasts a single prompt to N providers simultaneously for side-by-side comparative scoring and model evaluation. Every provider — Anthropic, OpenAI, xAI, Gemini, or a local Ollama model — routes through a single interface with no context switching.

Supporting intelligence is provided by the Spiderling analysis layer — concurrent per-file workers that surface severity-ranked findings and feed them directly into AI context — and the Venom Suite, which combines AI-driven code transformation (Venom Lattice), adversarial security payload generation (Venom), tree-sitter structural editing (AST Surgeon), and a multi-AI sandboxed build tournament (Evolution Engine).

Arachnia Live Session is the collaborative layer — a two-party real-time co-development protocol that synchronizes editor state, ghost cursor, selection range, open tabs, typed AI memory entries, and peer chat through the relay at arachnia.stream:8473. Two developers share one codebase, one AI context, and one conversation thread simultaneously, with every spiderling finding and terminal event crossing to the peer as structured memory.

Platform: Windows x86_64 (native GPU, DirectX). Built with egui 0.31 on eframe. Renderer: OpenGL 3.3+ hardware-accelerated. Idle CPU target: 0%. Cold start target: ≤200ms. Binary size budget: <12 MB. No runtime dependencies — single-file deployment.
Architecture #architecture

Arachnia is structured as cooperating intelligence layers. Each layer is independent at runtime, communicating through shared application state rather than direct coupling. The rendering pipeline is single-threaded egui; all AI calls and analysis workers run on async or OS thread pools outside the render loop. Idle CPU usage is 0%.
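As an illustration of that threading split (a sketch, not the shipped code), a provider call can run on a worker thread and hand its result back over a channel that the single-threaded render loop polls without ever blocking:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical illustration: long-running AI work happens on a worker
// thread; the render loop only polls a channel, so the UI never blocks
// on a provider call.
fn spawn_ai_request(prompt: String) -> mpsc::Receiver<String> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // Stand-in for a streaming provider call.
        let response = format!("echo: {prompt}");
        let _ = tx.send(response);
    });
    rx
}

fn main() {
    let rx = spawn_ai_request("hello".to_string());
    // In an egui frame callback this would be a non-blocking try_recv();
    // here we block to show the handoff.
    let reply = rx.recv().unwrap();
    assert_eq!(reply, "echo: hello");
    println!("{reply}");
}
```

In the real application the receiver would be stored in shared state and drained with `try_recv()` each frame.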

CORE AI PIPELINE
--------------------------------------------------------------------
User Input --> Silk Cortex --> Provider Router
                                     |
                +--------------------+--------------------+
                v                    v                    v
           Cloud APIs             Ollama              Hivemind
      (Anthropic / OpenAI      (localhost)         (N providers,
         xAI / Gemini)                               parallel)
                +--------------------+--------------------+
                                     |
                                     v
                       Streaming Token Assembly
                                     |
                                     v
        Resonance Weave  -->  RecallLedger
     (insight extraction)     (injected as system context, next query)
ANALYSIS LAYER
--------------------------------------------------------------------
Workspace --> Spiderling Workers (concurrent, per-file)
                       |
                       v
    Severity-ranked Findings + Amber Semantic Index
                       |         (embedding search)
                       v
    Collab Memory Push --> Peer AI Context
MUTATION LAYER (Venom Suite)
--------------------------------------------------------------------
Venom Lattice  --> AI diff proposals --> Accept / Reject
Venom          --> Adversarial payloads (network-blocked sandbox,
                   13-layer constraint set)
Evolution Eng. --> Multi-AI changeset candidates
                       |
                   Cargo Sandbox (symlinked target/, fast rebuild)
                       |
                   Scoring tiers:
                     Tier 0 -- build gate (compile pass/fail)
                     Tier 1 -- static delta (pub_fn, warnings)
                     Tier 2 -- AI judge (LLM qualitative)
                       |
                   Ranked candidates --> Human review + confirm
AST Surgeon    --> Tree-sitter structural edits (direct, no diff)
COLLABORATION LAYER (Arachnia Live Session)
--------------------------------------------------------------------
Editor state --> FlushGate --> Relay (arachnia.stream:8473)
                       |
    Bidirectional sync (2-second cycle):
      editor content + file path
      cursor position (line, col)
      selection range (start / end line+col)
      open tab list
      peer chat (chat_message entries)
      memory entries (6 typed kinds)
      spiderling findings (on analysis complete)
      terminal output (on command exit)
                       |
    Ghost cursor badge in peer editor gutter
    Translucent ghost selection overlay
    Tab bar: dim-o peer-open, lit-dot peer-editing
VISUALIZATION LAYER
--------------------------------------------------------------------
NeuralWeave   --> Force-directed dependency graph (on-demand)
                  AI-annotated weak strands: fan-out, circular, dead
Consciousness --> System + model health sphere
VibeCanvas    --> Freeform node-graph workspace
Features #features

Core capabilities organized by surface area. Every feature is compiled into the binary — no extensions, no plugin system, no network calls at startup.

Area | Capability
AI Chat | Streaming multi-provider chat: Anthropic, OpenAI, xAI, Gemini, and Ollama local models unified in one interface. Resonance memory context injected automatically. Configurable temperature, max-tokens, and system prompt per session. Markdown response rendering.
Multi-model | Hivemind: broadcast a single prompt to N providers concurrently, responses rendered side-by-side with hybrid comparative scoring. Latency, quality tier, and model-specific telemetry tracked per response with optimization rules that evolve as patterns emerge.
Editor | Multi-tab with syntect syntax highlighting, per-tab undo tree (60 snapshots), indent rainbow tints, error lens inline diagnostics, ghost cursor badge + translucent ghost selection overlay for the collab peer, Ctrl+K inline AI palette at cursor.
Analysis | Concurrent Spiderling workers per file, severity-ranked findings (error / warning / hint / info), live sync to collab peer on analysis completion, Amber embedding-based semantic search and grounded recall across the full workspace.
Mutations | Venom Lattice: AI-driven transformation proposals as structured diffs, per-change accept/reject. AST Surgeon: tree-sitter structural edits (rename symbol, extract function, inline variable) operating on the parsed AST, not raw text.
Evolution | Evolution Engine: multi-AI sandboxed build tournament. Candidates compiled in isolated Cargo workspace (symlinked target/ — no full recompile), scored on three tiers (build gate / static delta / AI judge), ranked in Evolution Ledger. sled-backed cross-session snapshot memory. Human review required before any change merges.
Security | Venom adversarial security suite: generates concrete attack payloads in a network-blocked, 13-layer constrained sandbox. Execution is fully isolated from the live workspace. All payloads are surfaced as reviewable artifacts, never auto-applied.
Memory | Resonance Weave extracts typed insights from conversations into a persistent recall ledger injected as grounded system context on future queries. Memory entries are shared across collab sessions via relay. Amber semantic index enables embedding-based recall.
Collaboration | Arachnia Live Session: bidirectional editor sync, ghost cursor and selection range, open tab list, peer chat cross-talk, 6-kind typed memory sync (insight / chat_message / terminal / spiderling / edit_pattern / warning), FlushGate serialized push/pull, 60-second heartbeat auto-disconnect.
Terminal | Integrated terminal with command monitoring. Exit codes and last-40-lines output automatically create collab memory entries (terminal kind) on command completion, injected into both peers' AI context.
Visualization | NeuralWeave: force-directed live dependency graph with AI-annotated weak strands (high fan-out, circular deps, dead code), manual Cast Web trigger (zero idle CPU), four sub-tabs. Consciousness health sphere. VibeCanvas freeform node workspace.
Planning | AI-driven task decomposition. Goal → executable step tree with sub-tasks, dependencies, and status tracking. Plans persist per workspace in the project config directory.
Modules #modules

Each module is a Rust source file that owns its rendering, state, and logic. Modules communicate through shared application state — no inter-module method calls across boundaries.

Silk Cortex
Primary AI chat interface. Manages conversation history, provider dispatch, streaming token assembly, Resonance memory context injection, and markdown response rendering. The central user-facing AI surface.
NeuralWeave
Live dependency graph visualization. Force-directed spider-web layout maps every file and function to a node; import chains render as weighted threads. AI annotates "weak strands" — high fan-out, circular dependencies, dead code. Triggered manually ("Cast Web") — zero idle CPU. Sub-tabs: Graph, Details, Resonance, Mind. Collab-aware when a session is active.
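Force-directed layout can be sketched in a few lines. This toy step applies only spring attraction along edges (NeuralWeave's actual layout is unknown in detail; real implementations also add pairwise repulsion between unconnected nodes):

```rust
// Illustrative force-directed step: each edge acts as a spring pulling
// its endpoints toward a rest length. Not the module's actual code.
fn step(pos: &mut Vec<(f32, f32)>, edges: &[(usize, usize)], rest: f32, k: f32) {
    let mut force = vec![(0.0f32, 0.0f32); pos.len()];
    for &(a, b) in edges {
        let dx = pos[b].0 - pos[a].0;
        let dy = pos[b].1 - pos[a].1;
        let dist = (dx * dx + dy * dy).sqrt().max(1e-6);
        let f = k * (dist - rest) / dist; // spring force per unit direction
        force[a].0 += f * dx; force[a].1 += f * dy;
        force[b].0 -= f * dx; force[b].1 -= f * dy;
    }
    for (p, f) in pos.iter_mut().zip(&force) {
        p.0 += f.0;
        p.1 += f.1;
    }
}

fn main() {
    // Two nodes joined by one import thread settle at the rest length.
    let mut pos = vec![(0.0f32, 0.0), (10.0, 0.0)];
    let edges = [(0usize, 1usize)];
    for _ in 0..100 {
        step(&mut pos, &edges, 2.0, 0.1);
    }
    let d = ((pos[1].0 - pos[0].0).powi(2) + (pos[1].1 - pos[0].1).powi(2)).sqrt();
    assert!((d - 2.0).abs() < 1e-3);
}
```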
Spiderlings
Autonomous workspace analysis swarm. Concurrent per-file workers surface severity-ranked findings. New findings push automatically to the collab peer as structured memory entries via the relay.
Venom Lattice
AI code transformation engine. Presents mutation proposals as structured diffs with per-change accept/reject controls. Accepted mutations write to disk; rejected patterns feed back into future proposal framing.
Venom
Adversarial security testing suite. Generates concrete attack payloads in a network-blocked sandbox with a 13-layer constraint set. All executions are fully isolated from the live workspace. Payloads surface as reviewable artifacts only.
Evolution Engine
Multi-AI sandboxed build tournament. Models propose code changes as concrete changesets; each candidate is compiled in an isolated Cargo workspace (symlinked target/ — no full recompile), scored on three tiers, and ranked in the Evolution Ledger. sled-backed cross-session snapshot storage remembers which patterns survived. Human review required before merge.
Scoring
Hybrid tiered evaluation for Evolution Engine candidates. Tier 0: build gate (compile pass/fail). Tier 1: static delta analysis — pub_fn count, build warnings, complexity delta. Tier 2: optional AI judge via LLM prompt for qualitative scoring. Blended score from configurable tier weights via JudgeReport.
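A sketch of the tiered blend described above. The JudgeReport shape and the weight parameters here are assumptions, not the shipped API; only the tier semantics (hard build gate, static delta, optional AI judge) come from the text:

```rust
// Hypothetical shape of a scoring report; field names are illustrative.
struct JudgeReport {
    builds: bool,           // Tier 0: compile pass/fail gate
    static_delta: f32,      // Tier 1: normalized 0..1 (pub_fn, warnings, complexity)
    ai_judge: Option<f32>,  // Tier 2: optional LLM qualitative score, 0..1
}

fn blended_score(r: &JudgeReport, w_static: f32, w_judge: f32) -> f32 {
    if !r.builds {
        return 0.0; // Tier 0 is a hard gate: a failing build scores zero
    }
    match r.ai_judge {
        // Weighted blend of Tier 1 and Tier 2 when the judge ran.
        Some(j) => (w_static * r.static_delta + w_judge * j) / (w_static + w_judge),
        // Judge disabled: Tier 1 alone decides.
        None => r.static_delta,
    }
}

fn main() {
    let gated = JudgeReport { builds: false, static_delta: 1.0, ai_judge: Some(1.0) };
    assert_eq!(blended_score(&gated, 1.0, 1.0), 0.0);
    let full = JudgeReport { builds: true, static_delta: 0.5, ai_judge: Some(1.0) };
    assert!((blended_score(&full, 1.0, 1.0) - 0.75).abs() < 1e-6);
}
```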
Resonance Weave
Persistent AI memory layer. Extracts structured typed insights from chat history into a recall ledger. Injected as grounded system context on future queries. Survives process restart. Memory entries shared cross-session via relay.
Hivemind
Multi-model coordination panel. Broadcasts a single prompt to N providers concurrently. Responses rendered side-by-side with hybrid comparative scoring — latency, quality tier, and model-specific telemetry. Optimization rules injected as model behavior evolves.
Amber
Semantic workspace search. Embedding-based fuzzy matching across all files. Results ranked by semantic distance with inline context preview. Powers grounded AI responses when querying about specific code constructs or intent.
AST Surgeon
Tree-sitter structural code editing. Operates on the parsed AST rather than raw text — rename symbol, extract function, inline variable, reorder parameters. Changes previewed as diffs before application. Language-aware, not pattern-matching.
Planner
AI-driven task decomposition. Accepts a goal statement and produces an executable step tree with sub-tasks, dependencies, and status tracking. Persisted per workspace in the project config directory.
Vibe Canvas
Freeform visual workspace. Structural sketching and node-graph idea mapping with AI prompt invocation on individual canvas elements. Free-form and grid-snapped placement modes.
Resource Budget
Local inference budget tracking. OllamaModelManager auto-detects installed models and their memory footprint. LocalInferenceBudget enforces configurable peak RSS ceilings with warnings at 80% and hard block at 100% capacity.
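The 80% warning and 100% hard-block thresholds reduce to a pure check. The enum and function names below are illustrative, not LocalInferenceBudget's actual internals:

```rust
// Hypothetical verdict type for the budget check described above.
#[derive(Debug, PartialEq)]
enum BudgetVerdict {
    Ok,
    Warn,  // at or above 80% of the ceiling
    Block, // at or above 100% of the ceiling
}

fn check_budget(peak_rss_mb: u64, ceiling_mb: u64) -> BudgetVerdict {
    if peak_rss_mb >= ceiling_mb {
        BudgetVerdict::Block
    } else if peak_rss_mb * 10 >= ceiling_mb * 8 {
        // Integer form of peak >= 0.8 * ceiling, avoiding float rounding.
        BudgetVerdict::Warn
    } else {
        BudgetVerdict::Ok
    }
}

fn main() {
    assert_eq!(check_budget(700, 1000), BudgetVerdict::Ok);
    assert_eq!(check_budget(800, 1000), BudgetVerdict::Warn);
    assert_eq!(check_budget(1000, 1000), BudgetVerdict::Block);
}
```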
Editor
Multi-tab code editor. Syntect syntax highlighting, per-tab undo tree (60 snapshots), indent rainbow tints, error lens inline diagnostics, ghost cursor badge and translucent ghost selection overlay for the collab peer. Ctrl+K inline AI palette.
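The 60-snapshot cap implies oldest-first eviction. A minimal sketch of such a bounded history (illustrative only; the shipped feature is described as an undo tree, which is richer than this linear stack):

```rust
use std::collections::VecDeque;

// Hypothetical bounded per-tab snapshot history. The 60-entry cap is
// from the feature list; the struct itself is an assumption.
struct UndoHistory {
    snapshots: VecDeque<String>,
    cap: usize,
}

impl UndoHistory {
    fn new(cap: usize) -> Self {
        Self { snapshots: VecDeque::new(), cap }
    }
    fn push(&mut self, content: String) {
        if self.snapshots.len() == self.cap {
            self.snapshots.pop_front(); // evict the oldest snapshot
        }
        self.snapshots.push_back(content);
    }
    fn undo(&mut self) -> Option<String> {
        self.snapshots.pop_back() // most recent snapshot first
    }
    fn len(&self) -> usize {
        self.snapshots.len()
    }
}

fn main() {
    let mut h = UndoHistory::new(60);
    for i in 0..61 {
        h.push(format!("v{i}"));
    }
    assert_eq!(h.len(), 60); // the 61st push evicted "v0"
    assert_eq!(h.undo(), Some("v60".to_string()));
}
```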
Terminal
Integrated terminal with command monitoring. Exit codes and last-40-lines output automatically create collab memory entries (terminal kind) on command completion, visible in both peers' AI context.
Collab
Arachnia Live Session coordination. Manages relay connection state, FlushGate push/pull serialization, memory sync, peer chat, ghost cursor and selection rendering, tab bar indicators, and session lifecycle (Live / Joined / auto-disconnect at 60s heartbeat timeout).
Explorer
File system browser. Directory tree with lazy expansion and file operations: create, rename, delete, drag to editor tab bar.
Consciousness
System awareness layer. Tracks AI model availability, workspace health signals, resource budget counters, Ollama process state, and surfaces operational alerts in the status overlay.
Profiles
User identity management. Author name used for collab attribution — displayed in ghost cursor overlay, memory entry headers, and peer chat messages.
Keyboard Shortcuts #shortcuts
Binding | Action
Ctrl+K | AI command palette — inline prompt at cursor position
Ctrl+Shift+Z | Semantic undo history — AI-aware snapshot time-travel
Ctrl+P | Quick open file — fuzzy search across workspace
Ctrl+S | Save current file
Ctrl+, | Open settings panel
Ctrl+= / Ctrl+- | Zoom in / zoom out
Ctrl+0 | Reset zoom to 100%
Ctrl+Shift+I | Switch to IDE Mode
Ctrl+Shift+V | Switch to Vibe Mode
Ctrl+Shift+J | Toggle Zen sub-mode (Vibe Mode only)
AI Providers #providers

Configure API keys in settings. All routing is local — keys are never transmitted to the relay.

Provider | Models | Endpoint
Anthropic | claude-3-5-sonnet, claude-3-7-sonnet, claude-3-opus, claude-3-haiku | api.anthropic.com
OpenAI | gpt-4o, gpt-4-turbo, o1-preview | api.openai.com
xAI | grok-beta | api.x.ai
Google | gemini-1.5-pro, gemini-1.5-flash | generativelanguage.googleapis.com
Ollama | any local model | localhost:11434
Ollama models are auto-detected on startup via localhost:11434/api/tags. The model list populates without manual configuration. No API key required.
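For illustration, model names can be pulled out of an /api/tags response even without a JSON crate (a real client would use serde). The sample body below mirrors the documented Ollama response shape, {"models":[{"name":...}]}; the naive scanner is a sketch, not Arachnia's parser:

```rust
// Naive sketch: scan for "name" string values in a JSON body.
// Fragile by design; shown only to make the endpoint's shape concrete.
fn model_names(body: &str) -> Vec<String> {
    let mut names = Vec::new();
    let mut rest = body;
    while let Some(i) = rest.find("\"name\":\"") {
        let after = &rest[i + 8..]; // skip past `"name":"`
        match after.find('"') {
            Some(j) => {
                names.push(after[..j].to_string());
                rest = &after[j..];
            }
            None => break, // unterminated string: stop scanning
        }
    }
    names
}

fn main() {
    let body = r#"{"models":[{"name":"llama3:latest"},{"name":"mistral:7b"}]}"#;
    assert_eq!(model_names(body), vec!["llama3:latest", "mistral:7b"]);
}
```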
Collab Protocol #collab

Arachnia Live Session is a two-party collaboration protocol built on the public relay at arachnia.stream:8473. One peer hosts by generating a token; the second joins by entering it. The relay carries all sync payloads in memory — no data is written to disk. Sessions auto-expire after 90 seconds of inactivity. No account, authentication, or configuration required beyond the token.

Session Lifecycle

Host:  Idle --> Generate Token --> Live(token)   --> Syncing
Guest: Idle --> Enter Token    --> Joined(token) --> Syncing
Both:  Auto-disconnect at 60s heartbeat timeout
       Token freed, relay slot removed, UI resets to Idle

The sync loop runs on a 2-second cycle. A FlushGate serializes push and pull operations — if a push is in-flight, the next pull is deferred until the push completes. This prevents cache corruption when the same author writes and reads within the same cycle window. Both roles (Live and Joined) push and pull symmetrically.
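The FlushGate rule can be sketched as a tiny state machine. Field and method names here are assumptions; only the defer-pull-while-push-is-in-flight behavior comes from the text:

```rust
// Illustrative FlushGate: a pull is skipped while a push is in flight,
// so the same author never reads back a half-written cycle.
struct FlushGate {
    push_in_flight: bool,
    deferred_pulls: u32, // how many pulls were pushed to the next cycle
}

impl FlushGate {
    fn new() -> Self {
        Self { push_in_flight: false, deferred_pulls: 0 }
    }
    fn begin_push(&mut self) {
        self.push_in_flight = true;
    }
    fn end_push(&mut self) {
        self.push_in_flight = false;
    }
    /// Returns true if the pull may proceed this cycle.
    fn try_pull(&mut self) -> bool {
        if self.push_in_flight {
            self.deferred_pulls += 1; // defer until the push completes
            false
        } else {
            true
        }
    }
}

fn main() {
    let mut gate = FlushGate::new();
    gate.begin_push();
    assert!(!gate.try_pull()); // pull deferred while push is in flight
    gate.end_push();
    assert!(gate.try_pull()); // pull proceeds once the push completes
}
```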

Editor State Fields

Synchronized each cycle via /session/push and /session/pull:

Field | Description
file_path | Active file path — peer editor navigates to matching file automatically on receipt
content | Full file content — applied as a non-destructive content patch if the file is already open
cursor_line, cursor_col | Cursor position — rendered as a labelled ghost cursor badge in the peer's editor gutter
sel_start_line / sel_start_col | Selection start — lower bound of the ghost selection overlay rendered in peer's editor
sel_end_line / sel_end_col | Selection end — upper bound, rendered as a translucent tinted block over the peer's text
tabs | Open tab filenames — dim ○ in tab bar when peer has file open, ● when peer is actively editing it

Memory Entry Kinds

Memory entries are pushed via /session/memory/push and pulled via /session/memory/pull, which returns entries merged from all authors, sorted ascending by timestamp. Each entry carries a typed kind field that controls how it is handled on receipt:

kind | Source | Effect on peer
insight | Resonance Weave extraction | Injected into AI system context as grounded workspace knowledge on the next query
chat_message | Collab overlay input field | Appears as cross-talk in peer's collab panel; displayed with author attribution and timestamp
terminal | Terminal command completion | Last 40 lines of output + exit code injected as AI context; shown as system message in chat view
spiderling | Analysis worker finding | Severity-ranked finding injected as AI context; visible in peer's findings panel with source location
edit_pattern | Accepted Venom mutation | Pattern recorded and used to influence framing of future Venom Lattice proposals for this codebase
warning | Venom Lattice alert | High-severity warning surfaced in peer's notification overlay immediately on next pull cycle
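The six kinds map naturally onto a Rust enum with a fallible parser. This is an illustrative sketch of the wire-string-to-type step, not the shipped definition:

```rust
// The six documented kinds; the enum itself is an assumption about how
// a client might model the wire format's string `kind` field.
#[derive(Debug, PartialEq, Clone, Copy)]
enum MemoryKind {
    Insight,
    ChatMessage,
    Terminal,
    Spiderling,
    EditPattern,
    Warning,
}

fn parse_kind(s: &str) -> Option<MemoryKind> {
    match s {
        "insight" => Some(MemoryKind::Insight),
        "chat_message" => Some(MemoryKind::ChatMessage),
        "terminal" => Some(MemoryKind::Terminal),
        "spiderling" => Some(MemoryKind::Spiderling),
        "edit_pattern" => Some(MemoryKind::EditPattern),
        "warning" => Some(MemoryKind::Warning),
        _ => None, // unknown kinds are dropped, not guessed at
    }
}

fn main() {
    assert_eq!(parse_kind("terminal"), Some(MemoryKind::Terminal));
    assert_eq!(parse_kind("edit_pattern"), Some(MemoryKind::EditPattern));
    assert_eq!(parse_kind("bogus"), None);
}
```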

Auto-push Triggers

Memory is pushed on a 30-second cooldown, but certain workspace events bypass the cooldown and trigger an immediate push:

File save + content change   --> immediate push (edit_pattern)
Spiderling analysis complete --> push findings (spiderling kind)
Terminal command exits       --> push output (terminal kind, last 40 lines + exit code)
Venom warning generated      --> push alert (warning kind)
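The cooldown-with-bypass rule reduces to a one-line predicate (function and parameter names are illustrative, not the shipped API):

```rust
// Cooldown between routine memory pushes, per the rate-limit table.
const COOLDOWN_SECS: u64 = 30;

// A push goes out immediately for the four trigger events; otherwise
// it waits until the cooldown has elapsed.
fn should_push(secs_since_last_push: u64, is_trigger_event: bool) -> bool {
    is_trigger_event || secs_since_last_push >= COOLDOWN_SECS
}

fn main() {
    assert!(!should_push(5, false)); // routine push inside cooldown: wait
    assert!(should_push(5, true));   // trigger event: bypass the cooldown
    assert!(should_push(30, false)); // cooldown elapsed: push
}
```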

Rate Limits and Caps

Limit | Value | Notes
Sync cycle | 2 seconds | Both roles, full push+pull each cycle
Memory auto-push cooldown | 30 seconds | Bypassed by the 4 trigger events above
Session creates | 5 / min / IP | Enforced at relay, returns 429 on excess
Memory entries per author | 500 | Oldest entries evicted when cap reached
Session idle expiry | 90 seconds | Measured from last push or heartbeat
Heartbeat timeout | 60 seconds | Client-side: auto-disconnect and reset to Idle
Relay API #relay-api

JSON over HTTP. Tokens normalized to uppercase. Sessions expire after 90s idle. Rate limit: 5 creates/min per IP.
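"Tokens normalized to uppercase" can be as small as the following sketch; the trim is an added assumption, only the uppercasing is documented:

```rust
// Illustrative client-side mirror of the relay's token normalization.
fn normalize_token(raw: &str) -> String {
    raw.trim().to_uppercase()
}

fn main() {
    assert_eq!(normalize_token(" abc "), "ABC");
    assert_eq!(normalize_token("xyz"), "XYZ");
}
```

Normalizing on both ends means a host can read the token aloud without the guest worrying about case.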

POST /session/create
Register a new token. Body: {"token":"ABC"}. Returns {"ok":true}.
POST /session/join
Validate an existing token. 404 if not found or expired.
POST /session/push
Store sync payload. Fields: token, file, content, cursor, cursor_col, sel_start_line, sel_start_col, sel_end_line, sel_end_col, tabs[].
GET /session/pull?token=X
Fetch the latest stored payload. Returns empty sentinel if nothing pushed yet.
POST /session/leave
Remove session immediately, freeing the relay slot.
POST /session/memory/push
Push per-author typed memory entries. Body: token, author, entries[]. Replaces author's slot. Capped at 500 entries.
GET /session/memory/pull?token=X
Return merged entries from all authors, sorted by timestamp ascending.
GET /healthz
Liveness check. Returns {"ok":true,"sessions":N}.
Remote Tunneling #tunneling
Planned
Remote Access & WebRTC Tunneling
Reverse-tunnel support via arachnia.stream is in planning.
WebRTC peer connections, TURN relay at turn.arachnia.stream:3478, and SSH gateway are scoped for a future release.
This page will document the tunnel client, port forwarding, and connection commands.
Download #download
Planned
Binary Distribution
Signed Windows installer (.exe via Inno Setup) and portable binary are planned.
Build target: x86_64-pc-windows-msvc. Size budget: <12 MB binary.
Source remains private until the initial public release.