
How to Build a TypeScript AI Agent in 5 Minutes

From npm install to a working agent with personality, memory, tools, and guardrails. Five steps, under 50 lines of TypeScript code. Complete tutorial with inline citations.

April 12, 2026 · AgentOS Team

"Begin at the beginning, the King said gravely, and go on till you come to the end: then stop."

Alice in Wonderland, 1865

Five-minute tutorials are usually a lie. They omit the API key setup, they assume your network and your node_modules cooperate, and they end at "hello world" instead of at something you can use. This one tries not to be that. By the end of these five minutes you'll have an agent with persistent memory, an opt-in HEXACO personality, web search, and a guardrail pack. The whole thing is under fifty lines of TypeScript. None of those lines are placeholder code.

If you are setting up your first AgentOS agent and run into trouble at any step, the Discord is the fastest way to unblock yourself; we monitor it. The full source for this tutorial is at github.com/framersai/agentos/tree/master/examples/quickstart.


Step 1: Install

npm install @framers/agentos

Set your API key:

export OPENAI_API_KEY=sk-your-key
# or ANTHROPIC_API_KEY, GEMINI_API_KEY, GROQ_API_KEY, etc.

AgentOS auto-detects which provider you have configured and supports 21 LLM providers out of the box:

| Provider | API Key Variable | Models |
| --- | --- | --- |
| OpenAI | OPENAI_API_KEY | GPT-4o, GPT-4o-mini, o1 |
| Anthropic | ANTHROPIC_API_KEY | Claude Opus 4, Sonnet 4, Haiku |
| Google Gemini | GEMINI_API_KEY | Gemini 2.5 Pro, Flash |
| Ollama | Local (no key) | Llama 3, Mistral, Qwen |
| Groq | GROQ_API_KEY | Llama 3, Mixtral |
| OpenRouter | OPENROUTER_API_KEY | 200+ models |
| Together | TOGETHER_API_KEY | Open-source models |
| Fireworks | FIREWORKS_API_KEY | Llama 3, Mixtral |
| DeepSeek | DEEPSEEK_API_KEY | DeepSeek V3, R1 |
| + 12 more | Various | Perplexity, Mistral, Cohere, xAI, Bedrock, Qwen, Moonshot, CLI bridges |
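In practice, auto-detection amounts to checking which API key is present in the environment. A minimal sketch of the idea (illustrative only: the priority order and the `detectProvider` helper are assumptions, not AgentOS's actual resolution logic):

```typescript
// Illustrative sketch: resolve a provider from environment variables.
// The priority order below is an assumption, not AgentOS's real behavior.
const PROVIDER_KEYS: Array<[provider: string, envVar: string]> = [
  ['openai', 'OPENAI_API_KEY'],
  ['anthropic', 'ANTHROPIC_API_KEY'],
  ['gemini', 'GEMINI_API_KEY'],
  ['groq', 'GROQ_API_KEY'],
];

function detectProvider(env: Record<string, string | undefined>): string | undefined {
  for (const [provider, envVar] of PROVIDER_KEYS) {
    if (env[envVar]) return provider;
  }
  return undefined; // fall back to a local provider such as Ollama
}
```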

Step 2: Generate Text

```typescript
import { generateText } from '@framers/agentos';

const result = await generateText({
  provider: 'openai',
  model: 'gpt-4o',
  prompt: 'Explain the Monty Hall problem.',
});

console.log(result.text);
```

One LLM call, one response. Now let's add personality and memory.

Step 3: Add Personality and Memory

```typescript
import { agent } from '@framers/agentos';

const assistant = agent({
  provider: 'anthropic',
  model: 'claude-sonnet-4-20250514',
  instructions: 'You are a helpful research assistant.',
  personality: {
    openness: 0.9,           // explores ideas broadly
    conscientiousness: 0.85, // thorough and organized
    agreeableness: 0.7,      // friendly but direct
  },
  memory: {
    enabled: true,
    decay: 'ebbinghaus',     // memories naturally fade over time
  },
});

// First conversation
const answer1 = await assistant.text('My name is Sarah and I study marine biology.');
// The agent remembers Sarah's name and field

// Later conversation: the agent still knows
const answer2 = await assistant.text('What topics would interest me?');
// The response references marine biology because the agent remembers
```

How Personality Works

HEXACO personality traits, based on the six-factor model from personality psychology (Ashton & Lee, 2004), shape how the agent communicates. Recent research shows that LLMs can reliably simulate HEXACO personality structures with coherent factor recovery.

| Trait | High Value Effect | Low Value Effect |
| --- | --- | --- |
| Openness | Explores tangential ideas, creative responses | Stays focused, conservative |
| Conscientiousness | Organized, thorough, detailed | Casual, brief |
| Agreeableness | Warm, accommodating | Direct, challenging |
| Extraversion | Enthusiastic, verbose | Reserved, concise |
| Emotionality | Empathetic, supportive | Analytical, detached |
| Honesty-Humility | Transparent, admits limitations | Confident, promotional |
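One simple way trait scores like these can shape output is to map each score to a system-prompt directive. The sketch below illustrates the idea only; the thresholds, wording, and `personalityPrompt` helper are assumptions, not how AgentOS actually renders personality:

```typescript
// Illustrative sketch: turn HEXACO trait scores into prompt directives.
// Thresholds and phrasing are assumptions for demonstration.
type Traits = Partial<Record<
  'openness' | 'conscientiousness' | 'agreeableness' |
  'extraversion' | 'emotionality' | 'honestyHumility', number>>;

const DIRECTIVES: Record<string, [high: string, low: string]> = {
  openness: ['Explore tangential ideas freely.', 'Stay focused and conservative.'],
  conscientiousness: ['Be organized, thorough, and detailed.', 'Keep it casual and brief.'],
  agreeableness: ['Be warm and accommodating.', 'Be direct and willing to challenge.'],
};

function personalityPrompt(traits: Traits): string {
  return Object.entries(traits)
    .filter(([name]) => name in DIRECTIVES)
    .map(([name, score]) => DIRECTIVES[name][score! >= 0.5 ? 0 : 1])
    .join(' ');
}
```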

How Memory Works

Cognitive memory goes beyond chat history. Eight neuroscience-grounded mechanisms model how human memory actually works.

This approach mirrors the ACT-R cognitive architecture used by recent AI memory systems like Memory Bear and CortexGraph, which also integrate Ebbinghaus decay with activation scheduling.
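The Ebbinghaus decay mentioned above follows the classic forgetting curve: retention falls off exponentially with time since last recall, R = exp(-t/S), where S is the memory's stability. The sketch below shows the shape of that mechanism; the units, stability parameter, and pruning cutoff are illustrative assumptions, not AgentOS's actual tuning:

```typescript
// Ebbinghaus forgetting curve: retention R = exp(-t / S), where t is time
// since last recall and S is the memory's stability (strength).
function retention(hoursSinceRecall: number, stabilityHours: number): number {
  return Math.exp(-hoursSinceRecall / stabilityHours);
}

interface Memory {
  text: string;
  lastRecallHoursAgo: number;
  stabilityHours: number;
}

// Keep only memories whose retention is still above a cutoff (assumed value).
function pruneMemories(memories: Memory[], cutoff = 0.2): Memory[] {
  return memories.filter(
    (m) => retention(m.lastRecallHoursAgo, m.stabilityHours) >= cutoff
  );
}
```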

Step 4: Add Tools

```typescript
import { agent } from '@framers/agentos';

const researcher = agent({
  provider: 'openai',
  model: 'gpt-4o',
  instructions: 'You are a research assistant with access to web search.',
  tools: ['web_search', 'deep_research', 'verify_citations'],
  memory: { enabled: true },
});

const result = await researcher.text(
  'What are the latest developments in room-temperature superconductors?'
);
// The agent searches the web, verifies claims against sources,
// and responds with cited information
```

AgentOS ships with 107+ curated extensions covering web search, news, image search, browser automation, deep research, and more. The verify_citations tool decomposes responses into atomic claims and checks each against sources using NLI-based entailment scoring.
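That decompose-and-check flow can be sketched as: split the answer into atomic claims, score each against the sources, and flag anything below a support threshold. The naive sentence splitter and injected scorer below are assumptions for illustration, not the `verify_citations` internals:

```typescript
// Illustrative sketch of NLI-style claim checking. A real system would use an
// entailment model; here the scorer is injected so the flow stays clear.
type EntailmentScorer = (claim: string, sources: string[]) => number; // 0..1

function splitIntoClaims(answer: string): string[] {
  // Naive sentence split; real claim decomposition is more sophisticated.
  return answer.split(/(?<=[.!?])\s+/).filter((s) => s.length > 0);
}

function verifyClaims(
  answer: string,
  sources: string[],
  score: EntailmentScorer,
  threshold = 0.5
): { claim: string; supported: boolean }[] {
  return splitIntoClaims(answer).map((claim) => ({
    claim,
    supported: score(claim, sources) >= threshold,
  }));
}
```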

Step 5: Add Guardrails

```typescript
import { agent } from '@framers/agentos';

const safeBot = agent({
  provider: 'anthropic',
  model: 'claude-sonnet-4-20250514',
  instructions: 'You are a customer support agent for a SaaS product.',
  security: { tier: 'strict' },
  guardrails: {
    input: ['pii-redaction', 'ml-classifiers'],
    output: ['grounding-guard', 'code-safety'],
  },
  memory: { enabled: true },
  tools: ['web_search'],
});
```

Six guardrail packs run on every request:

| Guardrail | Detection Method | What It Catches |
| --- | --- | --- |
| PII Redaction | 4-tier: regex + NLP + NER + LLM | Names, emails, SSNs, credit cards, addresses |
| ML Classifiers | ONNX BERT models | Toxicity, prompt injection, jailbreak attempts |
| Topicality | Embedding-based + drift detection | Off-topic messages, scope violations |
| Code Safety | OWASP pattern scanning | Command injection, XSS, SQL injection in generated code |
| Grounding Guard | NLI-based claim verification | Hallucinated facts not supported by RAG sources |
| Content Policy | LLM rewrite/block | Configurable category enforcement |

Five security tiers, ordered from least to most restrictive, control tool access and guardrail enforcement: dangerous, permissive, balanced, strict, paranoid. This layered approach addresses the runtime security concerns that IBM identifies as critical for agentic AI systems.
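Conceptually, each tier resolves to an enforcement profile. The mapping below is a sketch of that idea only; the specific settings per tier are assumptions, not AgentOS's actual defaults:

```typescript
// Illustrative sketch: map a security tier to an enforcement profile.
// The per-tier settings are assumptions for demonstration.
type Tier = 'dangerous' | 'permissive' | 'balanced' | 'strict' | 'paranoid';

interface Enforcement {
  blockOnGuardrailHit: boolean; // block vs. merely log a guardrail violation
  allowShellTools: boolean;     // whether high-risk tools are available
}

const TIERS: Record<Tier, Enforcement> = {
  dangerous:  { blockOnGuardrailHit: false, allowShellTools: true },
  permissive: { blockOnGuardrailHit: false, allowShellTools: true },
  balanced:   { blockOnGuardrailHit: true,  allowShellTools: true },
  strict:     { blockOnGuardrailHit: true,  allowShellTools: false },
  paranoid:   { blockOnGuardrailHit: true,  allowShellTools: false },
};

function enforcementFor(tier: Tier): Enforcement {
  return TIERS[tier];
}
```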

Full Example: Under 50 Lines

```typescript
import { agent } from '@framers/agentos';

const myAgent = agent({
  // LLM
  provider: 'anthropic',
  model: 'claude-sonnet-4-20250514',
  instructions: `You are a knowledgeable research assistant.
    You search the web for current information,
    verify your claims against sources, and cite everything.`,

  // Personality (HEXACO six-factor model)
  personality: {
    openness: 0.9,
    conscientiousness: 0.85,
    agreeableness: 0.7,
    extraversion: 0.6,
    emotionality: 0.3,
    honestyHumility: 0.95,
  },

  // Cognitive memory with Ebbinghaus decay
  memory: {
    enabled: true,
    decay: 'ebbinghaus',
  },

  // Tools (107+ available extensions)
  tools: [
    'web_search',
    'deep_research',
    'verify_citations',
    'news_search',
  ],

  // Safety (6 guardrail packs, 5 security tiers)
  security: { tier: 'balanced' },
  guardrails: {
    input: ['pii-redaction'],
    output: ['grounding-guard'],
  },
});

// Use it
const response = await myAgent.text('What happened in tech this week?');
console.log(response);
```

What's Next

npm install @framers/agentos

AgentOS is built by Manic Agency LLC / Frame.dev. See Wilds.ai for AI game worlds powered by AgentOS.
