Frequently Asked Questions
Everything you need to know about AgentOS — from first install to production deployment.
General
What is AgentOS?
AgentOS is an open-source TypeScript runtime for building production AI agents, developed by Manic Agency LLC (manic.agency) and Frame.dev. It provides multi-agent orchestration, cognitive memory, multimodal RAG, built-in safety guardrails (PII redaction, prompt injection defense, content moderation), voice pipelines, 37 channel adapters, and 21 LLM provider integrations. AgentOS also powers the AI systems behind Wilds.ai, an AI-native gaming platform. Self-hostable and free.
Is AgentOS free and open source?
Yes. The core runtime is licensed under Apache 2.0. Agent presets, extensions, and guardrails are MIT-licensed. You can use AgentOS commercially, modify it, and redistribute it under these license terms.
What LLM providers are supported?
AgentOS supports 21 LLM providers including OpenAI, Anthropic, Google Gemini, Mistral, Cohere, Ollama, OpenRouter, Together AI, Fireworks AI, Groq, Perplexity, DeepSeek, xAI (Grok), Replicate, Anyscale, AI21 Labs, Aleph Alpha, AWS Bedrock, Azure OpenAI, Cloudflare Workers AI, and HuggingFace Inference. Provider fallback chains allow automatic failover.
What programming language is AgentOS built in?
AgentOS is written entirely in TypeScript and runs on Node.js. The npm package is @framers/agentos. It works with any TypeScript or JavaScript project.
How is AgentOS different from LangChain, CrewAI, or AutoGen?
AgentOS is TypeScript-native (not Python), includes cognitive memory with Ebbinghaus decay curves and HEXACO personality modeling, offers 5 security tiers instead of basic prompt filtering, provides 37 built-in channel adapters (Discord, Telegram, Slack, email, and more), and supports 6 orchestration strategies including graph-based agent DAGs. It also ships with 88 curated skills and a capability discovery engine that reduces token usage by roughly 90%.
What is Paracosm?
Paracosm (https://paracosm.agentos.sh) is a structured world-model simulation engine built on AgentOS. It compiles a JSON ScenarioPackage plus a HEXACO leader profile into a deterministic turn loop where AgentOS agents propose events, forge new TypeScript tools at runtime inside a hardened node:vm sandbox, and produce a counterfactual artifact you can fork and replay. Three modes share one schema: turn-loop civilization sims (Mars Genesis is the canonical example), batch-trajectory digital twins, and batch-point Monte Carlo forecasts. It is open-source under Apache 2.0; install with npm install paracosm. Live demo: https://paracosm.agentos.sh/sim. Source: https://github.com/framersai/paracosm. Long-form launch post (why we built it, the wilds.ai vault story, the engine reference): https://agentos.sh/blog/paracosm-launch.
What is Wilds.ai?
Wilds.ai (https://wilds.ai) is an AI-native gaming platform built on AgentOS and Paracosm. AI characters with persistent cognitive memory and HEXACO personality traits inhabit shared world models and diverge from each other based on grounded LLM-as-judge events. Players can drop into existing worlds, fork them, or compile new ones in the browser. Wilds.ai is the canonical production deployment of the AgentOS + Paracosm stack and the official community / support channel (https://wilds.ai/discord) for both projects.
Technical
What is a GMI?
GMI stands for General Machine Intelligence — it is the cognitive core of each AgentOS agent. A GMI encapsulates the agent's personality traits (via HEXACO), memory systems, tool access, guardrail configuration, and communication style into a single persistent identity. GMIs maintain state across sessions and adapt their behavior based on accumulated experience.
What is HEXACO personality?
HEXACO is a 6-factor personality model from psychology research (Ashton & Lee, 2004). The six dimensions are Honesty-Humility, Emotionality, Extraversion, Agreeableness, Conscientiousness, and Openness to Experience. In AgentOS, HEXACO traits modulate how agents communicate, remember, and make decisions. For example, high Openness increases memory reconsolidation during recall, while high Conscientiousness strengthens retrieval-induced forgetting of irrelevant associations.
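As a rough illustration of trait modulation, here is a minimal TypeScript sketch. The `HexacoProfile` shape and the `modulate` function are assumptions for this example, not the AgentOS API; the real trait-to-behavior mapping is internal to the runtime.

```typescript
// Illustrative sketch: a HEXACO profile as six traits in [0, 1].
// A trait of 0.5 is treated as neutral; 1.0 doubles the modulated rate.
interface HexacoProfile {
  honestyHumility: number;
  emotionality: number;
  extraversion: number;
  agreeableness: number;
  conscientiousness: number;
  openness: number;
}

/** Scale a base cognitive rate by a trait value (0.5 = neutral). */
function modulate(baseRate: number, trait: number): number {
  return baseRate * (trait / 0.5);
}

const profile: HexacoProfile = {
  honestyHumility: 0.5, emotionality: 0.5, extraversion: 0.5,
  agreeableness: 0.5, conscientiousness: 0.9, openness: 0.8,
};

// High openness (0.8) raises a base reconsolidation rate of 0.1 to ~0.16.
const reconsolidationRate = modulate(0.1, profile.openness);
```

The same pattern could modulate any trait-sensitive parameter, such as conscientiousness scaling retrieval-induced forgetting.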
How does cognitive memory work?
AgentOS cognitive memory implements 8 mechanisms from cognitive science research: reconsolidation, retrieval-induced forgetting, involuntary recall, feeling-of-knowing, temporal gist extraction, schema encoding, source confidence decay, and emotion regulation. Memories decay over time following Ebbinghaus forgetting curves (Ebbinghaus, 1885), with spaced repetition and consolidation to preserve important information. Working memory follows Baddeley's multi-component model (Baddeley, 2000) with a volatile scratchpad for the current turn that consolidates upward into episodic, semantic, and observational memory tiers.
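The Ebbinghaus decay model above can be sketched in a few lines of TypeScript. This is a minimal illustration of the retention curve R(t) = e^(-t/S) and of recall-driven reconsolidation; the `MemoryTrace` type, the 1.5x strength boost, and the function names are assumptions for this example, not AgentOS internals.

```typescript
interface MemoryTrace {
  strength: number;      // consolidation strength S; higher = slower decay
  lastRecallMs: number;  // timestamp of the last encoding or retrieval
}

const DAY_MS = 86_400_000;

/** Ebbinghaus retention R(t) = e^(-t / S), with t in days since last recall. */
function retention(trace: MemoryTrace, nowMs: number): number {
  const elapsedDays = (nowMs - trace.lastRecallMs) / DAY_MS;
  return Math.exp(-elapsedDays / trace.strength);
}

/** Recall reconsolidates the trace: strength grows, so future decay slows. */
function recall(trace: MemoryTrace, nowMs: number): MemoryTrace {
  return { strength: trace.strength * 1.5, lastRecallMs: nowMs };
}

const t0 = 0;
let trace: MemoryTrace = { strength: 2, lastRecallMs: t0 };
const fresh = retention(trace, t0);                  // 1.0 right after encoding
const afterWeek = retention(trace, t0 + 7 * DAY_MS); // decayed toward 0
trace = recall(trace, t0 + 7 * DAY_MS);              // spaced repetition: S grows to 3
```

Repeated recalls compound the strength boost, which is the spaced-repetition effect the consolidation loop exploits.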
What guardrails are available?
AgentOS provides 5 security tiers (dangerous, permissive, balanced, strict, paranoid) and 6 guardrail extension types: PII redaction (names, emails, phone numbers, SSNs), ML classifiers for prompt injection detection, code safety analysis, grounding verification against source documents, topicality enforcement to keep agents on-task, and content policy filtering for harmful or inappropriate output.
What is the capability discovery system?
The capability discovery engine replaces static tool and skill dumps with 3-tier semantic search, reducing token usage by roughly 90%. Tier 0 provides category summaries in about 150 tokens (always included). Tier 1 returns the top-5 semantic matches in about 200 tokens. Tier 2 delivers full schemas in about 1,500 tokens, loaded on demand. A meta-tool called discover_capabilities lets agents self-discover available tools at runtime.
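The token arithmetic behind the three tiers can be sketched as follows. The per-tier budgets come from the approximate figures above; the `Tier` type and `promptTokens` helper are illustrative, not the discovery engine's API.

```typescript
type Tier = 0 | 1 | 2;

// Approximate per-tier token budgets from the description above.
const TOKEN_BUDGET: Record<Tier, number> = { 0: 150, 1: 200, 2: 1500 };

/** Total prompt cost for the tiers loaded this turn. */
function promptTokens(tiersLoaded: Tier[]): number {
  return tiersLoaded.reduce((sum, t) => sum + TOKEN_BUDGET[t], 0);
}

// A typical turn loads only Tier 0 summaries plus Tier 1 matches;
// Tier 2 full schemas are fetched on demand.
const typicalTurn = promptTokens([0, 1]);        // 350 tokens
const onDemandSchemas = promptTokens([0, 1, 2]); // 1,850 tokens when Tier 2 loads
```

Compared with dumping full schemas for all 88 skills into every prompt, loading Tier 0 + Tier 1 by default is where the roughly 90% token savings comes from.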
How does the QueryRouter work?
The QueryRouter classifies incoming queries into 4 tiers: T0 (direct answer from context), T1 (single-tool invocation), T2 (multi-step plan requiring a planning engine), and T3 (multi-agent delegation requiring orchestration). This classification determines the execution strategy and resource allocation for each request.
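The T0-T3 contract can be illustrated with a toy classifier. The real QueryRouter's classification logic is internal; the `RoutingSignals` shape and the threshold rules below are assumptions that only demonstrate the tier semantics described above.

```typescript
type QueryTier = 'T0' | 'T1' | 'T2' | 'T3';

interface RoutingSignals {
  toolsNeeded: number;  // how many tool calls the plan requires
  agentsNeeded: number; // how many agents must collaborate
}

/** Map routing signals onto the four execution tiers. */
function classify(signals: RoutingSignals): QueryTier {
  if (signals.agentsNeeded > 1) return 'T3'; // multi-agent delegation
  if (signals.toolsNeeded > 1) return 'T2';  // multi-step plan
  if (signals.toolsNeeded === 1) return 'T1'; // single tool invocation
  return 'T0';                                // answer directly from context
}
```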
What is adaptive intelligence?
Adaptive intelligence is how AgentOS agents continuously improve their behavior without retraining. Five mechanisms work together: (1) Meta-reflective prompt adaptation — the PromptBuilder assembles a different system prompt every turn, dynamically incorporating personality traits, mood state, conversation history, retrieved memories, and available tools. (2) Self-evaluating response quality — the self_evaluate tool scores the agent's own output and adjusts parameters like temperature, verbosity, and personality expression in real time. (3) Personality-modulated cognition — HEXACO traits shape how the agent processes information. High openness increases creative associations during memory retrieval; high conscientiousness strengthens retrieval-induced forgetting of irrelevant data. (4) Autonomous memory consolidation — the ConsolidationLoop prunes weak memories, strengthens frequently-accessed ones, and derives new insights from memory clusters, so the agent's knowledge improves over time without explicit training. (5) QueryRouter tiered classification — the system adapts retrieval depth based on query complexity. Simple questions get fast keyword lookup; complex questions trigger full hybrid RAG with deep research.
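Mechanism (2) above, self-evaluation feeding back into sampling parameters, can be sketched like this. The scoring scale, the 0.5 threshold, and the adjustment step sizes are illustrative assumptions, not the real self_evaluate implementation.

```typescript
interface TurnParams { temperature: number; verbosity: number }

/** Tighten sampling after a poor self-score; loosen it after a strong one. */
function adjustParams(params: TurnParams, qualityScore: number): TurnParams {
  const temperature = qualityScore < 0.5
    ? Math.max(0.1, params.temperature - 0.2)  // poor output: be more conservative
    : Math.min(1.0, params.temperature + 0.1); // good output: allow more variety
  return { ...params, temperature };
}

// A weak self-evaluation (0.3) lowers temperature for the next turn.
const tightened = adjustParams({ temperature: 0.7, verbosity: 1 }, 0.3);
```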
What are emergent behaviors?
Emergent behaviors are capabilities that agents develop at runtime rather than being explicitly programmed. AgentOS supports five forms: (1) Runtime tool forging — agents create new tools on the fly via forge_tool. The EmergentCapabilityEngine uses sandboxed JavaScript execution and LLM-as-judge evaluation to safely create, test, and promote tools. (2) Self-improving personality — agents adapt their HEXACO personality traits within bounded limits via adapt_personality. Mutations persist with Ebbinghaus decay — strong repeated adaptations stick, while weak ones fade naturally. (3) Dynamic skill management — agents enable or disable skills at runtime via manage_skills, adapting their behavioral repertoire to the task at hand. (4) Composable workflow creation — agents compose registered tools into multi-step pipelines via create_workflow, building new capabilities from existing building blocks. (5) Tiered tool promotion — forged tools progress through session, agent, and shared tiers. Tools that prove reliable (5+ successful uses, >0.8 confidence score) auto-promote for cross-agent reuse.
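The promotion rule in (5), 5+ successful uses and a confidence score above 0.8, can be expressed as a small state transition. The `ToolTier` and `ForgedToolStats` names are illustrative; only the thresholds come from the description above.

```typescript
type ToolTier = 'session' | 'agent' | 'shared';

interface ForgedToolStats {
  successfulUses: number;
  confidence: number; // 0..1, e.g. from LLM-as-judge evaluations
}

/** Promote a forged tool one tier once it meets the reliability bar. */
function nextTier(current: ToolTier, stats: ForgedToolStats): ToolTier {
  const qualifies = stats.successfulUses >= 5 && stats.confidence > 0.8;
  if (!qualifies) return current;
  if (current === 'session') return 'agent';
  if (current === 'agent') return 'shared';
  return 'shared'; // already at the top tier
}
```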
Can I use an LLM as a judge instead of a human?
Yes. AgentOS provides two complementary LLM-as-judge mechanisms. At the agency level, hitl.llmJudge() creates an approval handler that delegates decisions to an LLM. It evaluates each ApprovalRequest against configurable criteria and returns a structured decision with a confidence score. When confidence falls below the threshold, the decision escalates to a fallback handler (another hitl handler or a human). At the graph level, the humanNode() builder accepts a judge option that delegates the interrupt decision to an LLM. If the judge is confident, the graph continues without suspension. If confidence is low, execution falls through to a normal human interrupt. Both approaches support configurable model, provider, criteria, and confidence threshold. Example: hitl.llmJudge({ model: 'gpt-4o-mini', criteria: 'Is this response factually accurate?', confidenceThreshold: 0.8, fallback: hitl.cli() }).
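The confidence-gating behavior described above reduces to a simple rule: if the judge clears the threshold its decision stands, otherwise the fallback handler decides. This sketch uses illustrative names (`JudgeDecision`, `resolve`), not the actual hitl internals.

```typescript
interface JudgeDecision {
  approved: boolean;
  confidence: number; // 0..1, reported by the judge LLM
}

/** Accept the judge's decision only when it is confident enough. */
function resolve(
  judge: JudgeDecision,
  threshold: number,
  fallback: () => boolean, // e.g. a human or another hitl handler
): boolean {
  return judge.confidence >= threshold ? judge.approved : fallback();
}
```

The humanNode() judge option follows the same shape: a confident judge lets the graph continue, while low confidence falls through to a normal human interrupt.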
Can guardrails override HITL approvals?
Yes. When hitl.guardrailOverride is enabled, guardrails run after an approval decision and can still block destructive or sensitive actions. This adds a second safety layer after auto-approve, human approval, or hitl.llmJudge(). The default post-approval guardrails include code-safety and pii-redaction, and you can configure the list with hitl.postApprovalGuardrails. Disable the override only if you explicitly want the approval handler to have the final say.
Getting Started
How do I install AgentOS?
Install via npm: npm install @framers/agentos. AgentOS requires Node.js 18 or higher. For the CLI tools, install the companion package: npm install -g wunderland.
How do I create my first agent?
Import the agent function and configure it with a provider, instructions, and optional personality traits:
import { agent } from '@framers/agentos'

const myAgent = agent({
  provider: 'openai',
  instructions: 'You are a helpful assistant.',
  personality: {
    openness: 0.8,
    conscientiousness: 0.9,
  },
  memory: { enabled: true },
})

const reply = await myAgent.send('Hello!')
console.log(reply.text)

How do I add voice capabilities?
AgentOS supports multiple STT (speech-to-text) and TTS (text-to-speech) providers. Configure a voice pipeline by specifying STT and TTS providers in your agent configuration. Supported STT providers include Whisper (OpenAI), Deepgram, and AssemblyAI. TTS providers include ElevenLabs, OpenAI TTS, and Google Cloud TTS. Telephony integration is available for real-time voice interactions.
How do I deploy AgentOS?
AgentOS is self-hostable and runs anywhere Node.js runs. Deploy on your own infrastructure with Docker, on cloud providers like AWS, GCP, or Azure, or use platforms like Vercel, Railway, or Fly.io. The runtime uses SQLite by default for zero-config setup, with optional migration to Postgres, pgvector, Qdrant, or Neo4j for production scale.
Does AgentOS work offline?
Yes. AgentOS works fully offline when paired with Ollama for local LLM inference. Ollama provides access to open-weight models like Llama, Mistral, and Phi locally. The SQLite storage backend and in-memory vector store require no external services. The CLI auto-detects Ollama and configures the agent to use it.
Enterprise
Is AgentOS production-ready?
Yes. AgentOS lets you control exactly how much freedom your agents have — from wide open to fully locked down. It blocks prompt injection attacks, redacts personal data, moderates content in real time, requires approval for sensitive tool calls, enforces budget limits, and includes circuit breakers and rate limiting.
What compliance standards does AgentOS address?
AgentOS is designed with GDPR readiness in mind — PII redaction guardrails can strip personal data before it reaches LLM providers. All data stays on your infrastructure when self-hosted. SOC 2 compliance documentation is planned. The memory system supports data deletion and export for right-to-access requests.
Is there enterprise support available?
Yes. For production deployments, enterprise licensing, dedicated support, and custom integrations, contact the team at [email protected]. Enterprise support includes priority issue resolution, architecture consulting, and deployment assistance.
Can I use AgentOS with my own infrastructure?
Absolutely. AgentOS is designed to be self-hosted from the ground up. For self-hosted deployments, your data stays on your infrastructure with SQLite, Postgres, pgvector, Qdrant, or Neo4j. If you want a managed vendor-hosted vector database instead, Pinecone is also supported. Docker Compose configurations are provided for the self-hosted backends.
Academic References
AgentOS cognitive memory and personality systems are grounded in peer-reviewed research.
Ebbinghaus, H. (1885)
Memory: A Contribution to Experimental Psychology.
The foundational study establishing the forgetting curve. AgentOS uses Ebbinghaus decay functions for automatic memory consolidation.
Baddeley, A. D. (2000)
The episodic buffer: a new component of working memory?
Introduced the 4-component model of working memory. AgentOS working memory is modeled on this architecture.
Ashton, M. C., & Lee, K. (2004)
Empirical, theoretical, and practical advantages of the HEXACO model of personality structure.
The HEXACO model provides the 6-factor personality dimensions used in AgentOS agent identity.
Blondel, V. D., Guillaume, J.-L., Lambiotte, R., & Lefebvre, E. (2008)
Fast unfolding of communities in large networks.
The Louvain algorithm is used in AgentOS GraphRAG for community detection in knowledge graphs.
Malkov, Y. A., & Yashunin, D. A. (2018)
Efficient and robust approximate nearest neighbor search using Hierarchical Navigable Small World graphs.
HNSW is one of 7 vector store backends supported by AgentOS for efficient similarity search.
Nader, K., Schafe, G. E., & LeDoux, J. E. (2000)
Fear memories require protein synthesis in the amygdala for reconsolidation after retrieval.
Basis for the memory reconsolidation mechanism — memories are rewritten each time they are recalled.
Anderson, M. C., Bjork, R. A., & Bjork, E. L. (1994)
Remembering can cause forgetting: Retrieval dynamics in long-term memory.
Basis for retrieval-induced forgetting — retrieving one memory suppresses related competing memories.
Berntsen, D. (2010)
The unbidden past: Involuntary autobiographical memories as a basic mode of remembering.
Basis for involuntary recall — contextual cues trigger spontaneous memory activation.