◆ TWIN QUALITY · PUBLIC REASONING DASHBOARD
Transparent Quality.
Public reasoning dashboard for the sovereign super-intelligence layer. The brain runs on owned hardware · 24/7 · zero cost · no Grok · no Opus. The primary tier is CyberPower LM Studio over LAN, with Mac Studio MLX as the secondary tier and a 14-entry deterministic knowledge base as the always-available floor. The same way Cloudflare publishes its outage page, the Twin publishes its accountability surface here.
◆ SOVEREIGN BACKEND LADDER · 4 TIERS · NO FRONTIER
| Tier | Backend | Model | Ctx | Cost | Status | Uptime |
|---|---|---|---|---|---|---|
| 1 | cyber-lms | phi-3-mini-4k-instruct | 4K | zero · sovereign · primary brain | up | 99.4% |
| 2 | mlx-mac | glm-4.6v-flash-abliterated | 1M | zero · sovereign · when up | sometimes | 70% |
| 3 | cyber-godfather | sovereign:dajai-twin (4-agent orchestrator) | orchestrated | zero · sovereign | up | 99.4% |
| 4 | static-kb | 14-entry deterministic FAQ | instant | zero · always available | always up | 100% |
Routing: requests cascade DOWN the ladder. cyber-lms over the LAN is the primary brain. When it times out, mlx-mac (sovereign + free) takes over, then the cyber-godfather orchestrator, and finally static-kb (deterministic FAQ · always answers). No paid frontier backend sits anywhere in the path.
◆ PERSONA ROTATION · WHO ANSWERS WHAT
| Persona | Topics | Share |
|---|---|---|
| DAJAI Twin | music · identity · mastering · film | 52% |
| Steve The Stock Guy | markets · earnings · macro | 24% |
| Hellcat (Blueprint) | creator strategy · OnlyFans ops · vertical integration | 15% |
| Godfather | orchestration · agent state · infra | 9% |

ROUTING: the auto-classifier in /api/twin/intent picks a persona based on question keywords plus a page-context boost.
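A keyword classifier with a page-context boost could look like the sketch below. This is an assumption about how /api/twin/intent might work, not its actual code; the keyword lists are taken from the persona topics above, and the scoring weights are invented for illustration.

```typescript
// Hypothetical persona classifier: keyword hits in the question count
// double, hits in the page context count once (the "page-context boost").
const PERSONA_KEYWORDS: Record<string, string[]> = {
  "DAJAI Twin":          ["music", "identity", "mastering", "film"],
  "Steve The Stock Guy": ["markets", "earnings", "macro"],
  "Hellcat (Blueprint)": ["creator", "onlyfans", "vertical"],
  "Godfather":           ["orchestration", "agent", "infra"],
};

function pickPersona(question: string, pageContext = ""): string {
  const q = question.toLowerCase();
  const ctx = pageContext.toLowerCase();
  let best = "DAJAI Twin"; // default persona carries the majority share
  let bestScore = 0;
  for (const [persona, keywords] of Object.entries(PERSONA_KEYWORDS)) {
    let score = 0;
    for (const kw of keywords) {
      if (q.includes(kw)) score += 2;   // direct keyword hit
      if (ctx.includes(kw)) score += 1; // page-context boost
    }
    if (score > bestScore) { bestScore = score; best = persona; }
  }
  return best;
}
```

When nothing matches, falling back to the majority persona keeps the classifier deterministic, which matters for the sub-100ms intent bucket in the histogram below.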
◆ FREQUENT TOPICS · TOP 10
| # | Share | Topic |
|---|---|---|
| 1 | 18% | Proud 2 Pay (Crenshaw seed) |
| 2 | 16% | Catalog size + mastering |
| 3 | 14% | Steve weekly briefing |
| 4 | 11% | AI agents · network architecture |
| 5 | 9% | Creator Blueprint methodology |
| 6 | 8% | Code Black ancestry tracing |
| 7 | 7% | DARK series + DARK Library |
| 8 | 6% | How to plant the seed / supporter tiers |
| 9 | 6% | Sovereign network · Mac Studio + CyberPower |
| 10 | 5% | Twin biography · who is DAJAI |
◆ RESPONSE TIME HISTOGRAM
| Bucket | Path | Share |
|---|---|---|
| < 100ms | static KB · intent classifier · tool calls | 32% |
| 100-500ms | cached LM Studio · phi-3-mini | 18% |
| 500ms-2s | MLX 27B · normal path | 28% |
| 2-5s | MLX 27B · long context | 14% |
| 5-15s | frontier · Claude / xAI | 6% |
| > 15s | fallback cascade · multiple backends timed out | 2% |
p50 ≈ 800ms · p95 ≈ 4s · p99 ≈ 12s · cap 30s before fallback to static KB
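The p50/p95/p99 figures can be computed with a standard nearest-rank percentile over a window of latency samples. A minimal sketch, assuming latencies are collected server-side in milliseconds:

```typescript
// Nearest-rank percentile: the smallest sample value such that at least
// p% of all samples are at or below it.
function percentile(samplesMs: number[], p: number): number {
  if (samplesMs.length === 0) throw new Error("no samples");
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}
```

For ten evenly spread samples from 100ms to 1000ms, `percentile(samples, 50)` is 500 and `percentile(samples, 95)` is 1000; the 30s cap above would clamp anything slower before falling back to the static KB.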
◆ WHAT'S MEASURED
- ▸ Backend uptime — pinged every 60s · 30-day rolling avg shown
- ▸ Persona share — % of /api/twin/ask requests routed to each persona by the auto-classifier
- ▸ Topic distribution — top trigger phrases hit · KB lookup matches · LLM-classified topics
- ▸ Response time — p50/p95/p99 server-side measurement (excludes network)
- ▸ Tool use rate — % of responses that called search_catalog / find_agent / lookup_knowledge inline
- ▸ Fallback ratio — % of responses served from the static KB when all LLM backends are offline
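With a ping every 60 seconds, the 30-day rolling uptime shown in the ladder table reduces to the fraction of successful pings inside the window. A minimal sketch, where the `Ping` shape and field names are assumptions:

```typescript
// Hypothetical ping record: `at` is the ping timestamp in epoch ms.
type Ping = { at: number; ok: boolean };

const WINDOW_MS = 30 * 24 * 60 * 60 * 1000; // 30-day rolling window

// Uptime = successful pings / total pings inside the window, as a percentage.
function rollingUptime(pings: Ping[], nowMs: number): number {
  const recent = pings.filter((p) => nowMs - p.at <= WINDOW_MS);
  if (recent.length === 0) return 0;
  const up = recent.filter((p) => p.ok).length;
  return (up / recent.length) * 100;
}
```

At one ping per minute, a 30-day window holds 43,200 samples, so the 99.4% uptime shown for cyber-lms corresponds to roughly 260 failed pings, or about 4.3 hours of downtime over the month.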