AgentOS
Open-source TypeScript runtime · Apache 2.0
85.6% LongMemEval-S
−0.4 vs Emergence.ai (closed SaaS)
+1.4 vs Mastra (84.23%)

Real output from real scripts

Three examples, each captured by running node examples/….mjs against the OpenAI API. The full source is committed at framersai/agentos/examples. Pick one.

examples/emergent-hierarchical-spawning.mjs
import { agency } from '@framers/agentos';

const team = agency({
  provider: 'openai',
  model: 'gpt-4o',
  strategy: 'hierarchical',
  instructions:
    'Coordinate a research team. If the task needs a ' +
    'capability your roster does not cover, call ' +
    'spawn_specialist to mint one.',
  agents: {
    researcher: { instructions: 'Find sources.' },
    writer:     { instructions: 'Produce concise prose.' },
  },
  emergent: {
    enabled: true,
    judge:   true,                   // EmergentAgentJudge vets each spawn
    planner: { maxSpecialists: 1 },  // cap how many agents can be minted
  },
  on: {
    emergentForge: (e) => console.log(
      `[FORGE] spawned "${e.agentName}"`
    ),
  },
});

const result = await team.generate(
  'Write a 2-paragraph briefing on agentic-AI ' +
  'sandbox security risks. Include a security ' +
  'audit perspective on node:vm vs container ' +
  'isolation. The team has no security auditor; ' +
  'spawn one if needed.',
);

console.log(result.text);
$ node examples/emergent-hierarchical-spawning.mjs
● Live run
[FORGE] spawned "security_auditor"
approved=true
// EmergentAgentJudge passed; new agent joins the live roster on next turn
Final answer (gpt-4o)
Agentic AI systems operating in sandbox environments present unique security challenges, particularly when considering the isolation methods used to contain these systems. Node:vm (Virtual Machines) provide robust isolation as each VM operates with its own OS and resources, offering strong separation from other environments on the same host. This level of isolation helps to contain the potential impact of any security breach within the AI agent itself. However, VMs can be resource-intensive and potentially slower, as they require more overhead to emulate physical hardware, which might influence performance for high-demand AI tasks. Conversely, container-based isolation, such as Docker, offers a more lightweight and flexible approach as containers share the host kernel while isolating the application at the process level. This can be advantageous for deploying numerous, smaller agentic AI instances. However, from a security-audit perspective, containers can be more vulnerable to kernel-level attacks since they share the same underlying OS. The shared kernel can pose risks if one AI agent exploits a vulnerability to affect others.
Agent calls (2)
researcher: "Research security risks in agentic AI sandbox environments..."
writer: "Produce a two-paragraph CTO-audience briefing from the research..."
tokens: 4,272

The team starts with researcher + writer. The prompt asks for a security-audit perspective, which neither agent covers. The manager calls spawn_specialist, EmergentAgentJudge approves the spec, and security_auditor joins the live roster. The final answer above is what GPT-4o produced through that team. Captured from a real run, no edits.
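The spawn-then-judge loop that run demonstrates can be sketched in plain TypeScript. This is an illustrative stand-in, not the AgentOS internals: Roster, SpecialistSpec, and approveSpec are hypothetical names, and approveSpec is a trivial heuristic where the real EmergentAgentJudge is model-backed.

```typescript
type AgentDef = { instructions: string };

interface SpecialistSpec {
  name: string;
  instructions: string;
}

// Trivial stand-in for a judge: approve specs that are concrete enough
// (non-empty name, instructions of reasonable length).
function approveSpec(spec: SpecialistSpec): boolean {
  return spec.name.length > 0 && spec.instructions.length >= 10;
}

class Roster {
  private agents = new Map<string, AgentDef>();

  constructor(initial: Record<string, AgentDef>) {
    for (const [name, def] of Object.entries(initial)) {
      this.agents.set(name, def);
    }
  }

  has(name: string): boolean {
    return this.agents.has(name);
  }

  // Mirrors the spawn_specialist flow: judge first, join the roster
  // only on approval, and surface a forge event for observability.
  spawnSpecialist(
    spec: SpecialistSpec,
    onForge: (name: string) => void,
  ): boolean {
    if (this.has(spec.name) || !approveSpec(spec)) return false;
    this.agents.set(spec.name, { instructions: spec.instructions });
    onForge(spec.name);
    return true;
  }
}

const roster = new Roster({
  researcher: { instructions: 'Find sources.' },
  writer: { instructions: 'Produce concise prose.' },
});

const approved = roster.spawnSpecialist(
  { name: 'security_auditor', instructions: 'Audit sandbox isolation claims.' },
  (name) => console.log(`[FORGE] spawned "${name}"`),
);

console.log(`approved=${approved}`);
```

The design point the run illustrates: the judge sits between the manager's request and the roster mutation, so a rejected spec never receives a turn.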