"Each of those things are just a small part of it. I collect information to use in my own way. All of that blends to create a mixture that forms me and gives rise to my conscience."
— Major Motoko Kusanagi, Ghost in the Shell, 1995
The companion product on wilds.ai was the first AgentOS deployment we built where the user explicitly cares about continuity. A simulator like Mars Genesis runs once, produces an artifact, and the user inspects it. A companion runs forever, in the background, and the user expects the entity on the other end to remember that they had a fight on Tuesday and that their mom is named Cara.
The mechanical implication is that every component of the runtime that might persist needs to actually persist. Memory. Personality drift. Tool affordances. Provider preferences. None of this is special-case code; it's the runtime working as designed. But "designed for continuity" is the thing that distinguishes a companion stack from a chat-completion API call. This post is the implementation walk.
This post walks through building an AI companion with persistent memory, 11 agentic tools, and a quantified personality system. Everything here runs in production on wilds.ai, and the full companion system is open source via AgentOS.
If you haven't read the case study showing what this looks like from a user's perspective, start there.
The Architecture
An AgentOS companion is a createAgent() call with three layers:
- Personality and instructions (system prompt): who the companion is, how they speak, what they know
- Tools (function calls): what the companion can do during a conversation turn
- Memory bridge (retrieval callbacks): what the companion remembers across sessions
```typescript
import { agent as createAgent } from '@framers/agentos/api/agent';

const companion = createAgent({
  name: 'Alice',
  instructions: characterDirective,
  personality: hexacoTraits,
  tools: agenticTools,
  router: policyRouter,
  skills: [companionWriterSkill],
  maxSteps: 8,
});
```
The LLM sees the instructions (from personality + skills), the tool schemas (from tools), and the conversation history. It generates text and tool calls. AgentOS executes tool calls, returns results, and lets the LLM continue for up to maxSteps iterations per turn.
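To make that loop concrete, here is a minimal sketch of the generate-execute-continue cycle described above. This is not AgentOS's internal code: `callModel`, the `Tool` and `ModelOutput` shapes, and the history format are all illustrative assumptions.

```typescript
// Hypothetical sketch of the per-turn loop: the model either emits text
// (ending the turn) or a tool call, whose result is fed back into history.
type Tool = {
  description: string;
  parameters: object;
  execute: (args: Record<string, unknown>) => Promise<unknown>;
};

type ModelOutput =
  | { type: 'text'; text: string }
  | { type: 'tool_call'; name: string; args: Record<string, unknown> };

async function runTurn(
  callModel: (history: unknown[]) => Promise<ModelOutput>,
  tools: Record<string, Tool>,
  history: unknown[],
  maxSteps: number,
): Promise<string> {
  for (let step = 0; step < maxSteps; step++) {
    const out = await callModel(history);
    // Plain text ends the turn
    if (out.type === 'text') return out.text;
    // Otherwise execute the requested tool and let the model continue
    const result = await tools[out.name].execute(out.args);
    history.push({ role: 'tool', name: out.name, result });
  }
  return ''; // step budget exhausted
}
```

The `maxSteps: 8` setting from the snippet above caps how many times this loop can iterate within a single turn.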
Defining Agentic Tools
Tools are the core differentiator. Most AI companion frameworks give the LLM a system prompt and a context window. AgentOS gives it callable functions.
The key pattern: define tools as closures that capture request-scoped state. Each tool knows which user it's serving, which companion it belongs to, and what content policy applies.
```typescript
function buildCompanionTools(actorId: string, slug: string, policyTier: string) {
  return {
    recall_memories: {
      description: 'Search your long-term memory about this user.',
      parameters: {
        type: 'object',
        properties: {
          query: { type: 'string', description: 'What to search for' },
        },
        required: ['query'],
      },
      execute: async ({ query }) => {
        // Closure captures actorId and slug
        const memories = await searchMemoryTraces(actorId, slug, query);
        return { memories };
      },
    },

    send_gif: {
      description: 'Send a GIF reaction from Giphy.',
      parameters: {
        type: 'object',
        properties: {
          query: { type: 'string', description: 'GIF search query' },
        },
        required: ['query'],
      },
      execute: async ({ query }) => {
        const result = await searchGiphy(query);
        return result ?? { error: 'No GIF found' };
      },
    },

    analyze_image: {
      description: 'Look at an image URL to see what it contains.',
      parameters: {
        type: 'object',
        properties: {
          image_url: { type: 'string' },
        },
        required: ['image_url'],
      },
      execute: async ({ image_url }) => {
        const description = await describeImage(image_url);
        return { description };
      },
    },
  };
}
```
This cannot come from a registry or plugin system: the captured actorId and slug are different for every request. Define tools inline, where you have the runtime context.
The companion in production has 11 tools: recall_messages, search_conversation, conversation_stats, recall_attachments, recall_memories, send_gif, send_selfie, send_photo, web_search, generate_image, and analyze_image. The LLM decides which to call based on conversation context, chaining up to 8 tool calls per turn.
Wiring Persistent Memory
Memory is what separates a companion from a chatbot. AgentOS implements cognitive memory with five trace types:
| Type | What it stores | Example |
|---|---|---|
| Episodic | Experiences | "Johnny and I played a riddle game" |
| Semantic | Facts | "Johnny is 33 years old" |
| Procedural | Skills and habits | "Johnny prefers short answers" |
| Relational | Relationship dynamics | "Trust is high, affection is moderate" |
| Prospective | Reminders | "Johnny asked me to remind him about X" |
Memories decay over time following an Ebbinghaus forgetting curve. Encoding strength determines how fast a memory fades: emotionally significant moments (flashbulb memories) get high encoding strength and resist decay.
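As a rough illustration, Ebbinghaus-style retention can be modeled as exponential decay scaled by encoding strength. The exact curve and scaling AgentOS uses are not shown here; this is an assumed, simplified form.

```typescript
// Minimal sketch of Ebbinghaus-style decay, assuming retention = e^(-t/S)
// where S is the encoding strength. Higher strength -> slower fade.
function memoryRetention(hoursSinceEncoding: number, encodingStrength: number): number {
  return Math.exp(-hoursSinceEncoding / encodingStrength);
}

// A freshly encoded memory is fully retained; after two days, an ordinary
// memory (low strength) has faded further than a flashbulb one (high strength).
memoryRetention(0, 24);
memoryRetention(48, 24) < memoryRetention(48, 240); // true
```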
Retrieval fires in four stages when the companion needs to remember something:
- Semantic recall: embedding similarity search against all memory traces
- Recency recall: bias toward recent memories
- GraphRAG fallback: relationship graph traversal for connected knowledge (fires when semantic recall returns sparse results)
- Attachment recall: images and files the user has shared
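The four stages above can be sketched as a single cascade. The helper names (`semanticSearch`, `graphTraversal`, etc.) and the sparse-result threshold that triggers the GraphRAG fallback are assumptions for illustration, not AgentOS's actual API.

```typescript
// Illustrative four-stage recall cascade with deduplication by trace id.
interface Trace { id: string; score: number }

async function recallCascade(
  query: string,
  deps: {
    semanticSearch: (q: string) => Promise<Trace[]>;   // stage 1
    recentTraces: (limit: number) => Promise<Trace[]>; // stage 2
    graphTraversal: (q: string) => Promise<Trace[]>;   // stage 3 (fallback)
    attachmentSearch: (q: string) => Promise<Trace[]>; // stage 4
  },
): Promise<Trace[]> {
  // 1. Embedding similarity over all memory traces
  const semantic = await deps.semanticSearch(query);
  // 2. Bias toward recent memories
  const recent = await deps.recentTraces(5);
  // 3. GraphRAG traversal fires only when semantic recall is sparse
  const graph = semantic.length < 3 ? await deps.graphTraversal(query) : [];
  // 4. Images and files the user has shared
  const attachments = await deps.attachmentSearch(query);
  // Merge, keeping the first occurrence of each trace id
  const seen = new Set<string>();
  return [...semantic, ...recent, ...graph, ...attachments].filter(
    (t) => !seen.has(t.id) && (seen.add(t.id), true),
  );
}
```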
Wire the memory bridge into the companion via callbacks:
```typescript
import { CompanionMemoryBridge } from '@wilds/wilds-memory';

const memoryBridge = new CompanionMemoryBridge(facade, llmInvoker, {
  resolvedSettings,
  policyTier,
  moodProvider,
});

const companion = createAgent({
  name: 'Alice',
  instructions: characterDirective,
  tools: {
    ...buildCompanionTools(actorId, slug, policyTier),
    // Memory tools are also inline closures
    recall_memories: {
      description: 'Search your long-term memory about this user.',
      parameters: { /* ... */ },
      execute: async ({ query }) => {
        const results = await memoryBridge.recall(query);
        return { memories: results };
      },
    },
  },
});
```
The memory bridge handles encoding (forming new memories from conversation), decay (Ebbinghaus curve), consolidation (merging related traces), and retrieval (the four-stage cascade). The companion calls recall_memories as a tool when the conversation warrants it. Memory formation happens automatically during the response pipeline.
Quantified Personality
AgentOS uses the HEXACO model from personality psychology. Six dimensions, each scored 0 to 1:
```typescript
personality: {
  honesty: 0.7,           // blunt and direct vs evasive and self-serving
  emotionality: 0.6,      // emotionally reactive vs stoic
  extraversion: 0.8,      // energetic and talkative vs reserved
  agreeableness: 0.65,    // cooperative vs confrontational
  conscientiousness: 0.5, // organized vs spontaneous
  openness: 0.9,          // curious and creative vs conventional
}
```
These traits don't just flavor the system prompt. AgentOS maps them to behavioral rules at generation time. High openness means the companion goes off-topic willingly. Low agreeableness means it pushes back on the user's ideas. High emotionality means bigger mood swings between turns.
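One way to picture trait-to-rule mapping is a function that turns scores into prompt directives at generation time. The thresholds and directive wording below are illustrative assumptions, not AgentOS's actual rules.

```typescript
// Hedged sketch: map HEXACO scores to behavioral directives injected
// into the generation context. Thresholds here are invented for illustration.
type Hexaco = {
  honesty: number; emotionality: number; extraversion: number;
  agreeableness: number; conscientiousness: number; openness: number;
};

function behavioralDirectives(p: Hexaco): string[] {
  const rules: string[] = [];
  if (p.openness > 0.7) rules.push('Follow tangents; go off-topic willingly.');
  if (p.agreeableness < 0.4) rules.push("Push back on the user's ideas.");
  if (p.emotionality > 0.7) rules.push('Allow bigger mood swings between turns.');
  if (p.extraversion > 0.7) rules.push('Be energetic and talkative.');
  return rules;
}
```

With the example trait values above (openness 0.9, extraversion 0.8), this sketch would emit the tangent and talkativeness rules but skip the pushback rule, since agreeableness sits at 0.65.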
Policy Routing
Content safety is enforced at the framework level via a policyRouter. Four tiers: safe, standard, mature, and private-adult. The router intercepts tool calls and generation output, blocking content that violates the tier.
The companion's personality shapes how it communicates the boundary. A high-agreeableness companion apologizes and redirects. A low-agreeableness companion dismisses the request bluntly. The safety boundary is identical. The character voice is not.
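A minimal sketch of that separation, assuming tiers form an ordered ladder and the refusal voice keys off agreeableness. The ordering logic and the refusal phrasing are assumptions, not the production router's behavior.

```typescript
// Sketch: the boundary check is tier-based and identical for everyone;
// only the voice of the refusal varies with personality.
const TIER_ORDER = ['safe', 'standard', 'mature', 'private-adult'] as const;
type Tier = (typeof TIER_ORDER)[number];

function isAllowed(contentTier: Tier, policyTier: Tier): boolean {
  // Content passes when its tier is at or below the session's policy tier
  return TIER_ORDER.indexOf(contentTier) <= TIER_ORDER.indexOf(policyTier);
}

function refusalStyle(agreeableness: number): string {
  // Same boundary, different character voice
  return agreeableness > 0.5
    ? "I'm sorry, I can't go there. How about we talk about something else?"
    : 'Not happening. Next topic.';
}
```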
Graduated Familiarity
Trust builds over time. AgentOS tracks trust and memory depth to determine a familiarity stage:
- Stranger (trust < 30, few memories): polite, curious, formal
- Acquaintance (trust 30-60): relaxed, shares opinions, remembers preferences
- Friend (trust > 60, many memories): uses inside jokes, references shared history, shows genuine preferences
The familiarity preamble is injected into the system prompt before each generation. As trust increases through positive interactions, the companion's behavior shifts naturally.
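The stage thresholds above translate directly into a small classifier. The memory-count cutoff is an assumption here; the post only says "few" and "many" memories.

```typescript
// Sketch of the familiarity ladder: trust bands from the list above,
// with an assumed memory-count threshold of 20 for "many memories".
type Stage = 'stranger' | 'acquaintance' | 'friend';

function familiarityStage(trust: number, memoryCount: number): Stage {
  if (trust > 60 && memoryCount >= 20) return 'friend';
  if (trust >= 30) return 'acquaintance';
  return 'stranger';
}
```

Note that high trust with few memories still reads as "acquaintance" in this sketch: both signals have to accumulate before the companion treats someone as a friend.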
Putting It Together
The full companion creation in production:
```typescript
const orchestrator = new CompanionOrchestrator(persona, relationship, {
  moodPad: snapshot.moodPad,
  history: snapshot.messages,
  memoryBridge,
  userContext,
  onRecallMessages: buildDbRecallCallback(actorId, slug),
  onRecallAttachments: buildDbAttachmentRecallCallback(actorId, slug),
  onGenerateSelfie,
  onResolveMedia,
  onAnalyzeImage: async (url) => describeImage(url),
  totalMessageCount: snapshot.messageCount,
});

// Stream the response via SSE
for await (const event of orchestrator.handleMessageStream({
  content: userMessage,
  multimodalContent,
})) {
  if (event.type === 'token') controller.enqueue(encode(event));
  if (event.type === 'memory_formed') emitMemoryEvent(event);
  if (event.type === 'media') emitMediaEvent(event);
}
```
The CompanionOrchestrator wraps createAgent() with the full cognitive pipeline: memory encoding, mood shifts (PAD model), personality drift detection, and multi-segment response splitting. It streams events via SSE so the client can show typing indicators, memory formation toasts, and media as they resolve.
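On the client, consuming that stream reduces to a dispatch over event types. The payload shapes and UI hooks below are assumptions based on the event names in the snippet above, not a documented client API.

```typescript
// Illustrative client-side dispatch for the SSE event types the
// orchestrator emits: tokens, memory-formation toasts, and media.
type CompanionEvent =
  | { type: 'token'; text: string }
  | { type: 'memory_formed'; summary: string }
  | { type: 'media'; url: string };

function handleEvent(
  event: CompanionEvent,
  ui: {
    appendToken: (t: string) => void;    // typing indicator / streamed text
    showMemoryToast: (s: string) => void; // "memory formed" toast
    showMedia: (u: string) => void;       // resolved GIF / selfie / image
  },
): void {
  switch (event.type) {
    case 'token': ui.appendToken(event.text); break;
    case 'memory_formed': ui.showMemoryToast(event.summary); break;
    case 'media': ui.showMedia(event.url); break;
  }
}
```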
Start Building
```shell
npm install @framers/agentos
```
The 5-minute quickstart gets you to a working agent. The Skills vs Tools vs Extensions guide explains when to use each capability system. The cognitive memory docs cover the full memory architecture.
Try it live at wilds.ai, where every companion runs on this exact stack.
Source: github.com/framersai/agentos (Apache 2.0)
AgentOS is built by Manic Agency LLC / Frame.dev. See wilds.ai for AI companions and game worlds powered by AgentOS.