The Same Thing, on the Same Day… What’s Going On Between OpenAI and Google?

[Illustration: AI agent sandboxes running in parallel within an orchestration harness]
KEY POINTS
  • OpenAI shipped a major Agents SDK update on April 15 introducing native sandboxing, a long-horizon orchestration harness, subagents, code mode, and provider-agnostic support for 100+ LLMs.
  • The new two-layer architecture separates the harness (instructions, tools, approvals, tracing) from the sandbox (filesystem, shell, memory), enabling isolated and durable agent execution.
  • Google shipped Gemini CLI subagents the same day, allowing specialized expert agents to run in parallel with dedicated context windows, tools, and MCP servers.
  • Both releases signal a clear industry convergence toward multi-agent, sandbox-first developer tooling — the era of single-prompt AI coding assistants is giving way to orchestrated agent fleets.

Within 24 hours of each other, OpenAI and Google shipped what may be the most consequential developer-tool updates of 2026 so far. OpenAI’s Agents SDK now lets enterprises spin up sandboxed, crash-recoverable agent sessions that mount cloud storage and respect Unix-level permissions, while Google’s Gemini CLI gained a parallel subagent architecture that keeps primary sessions lean. Together, the two releases crystallize a new consensus: production AI agents need isolation, orchestration, and multi-model flexibility — not just a bigger context window.

OpenAI Agents SDK: From Prototype to Production

The Two-Layer Architecture

The core design change is a clean separation between the harness and the sandbox. The harness handles instructions, tool definitions, approval workflows, tracing, handoffs, and resume bookkeeping. The sandbox provides filesystem tools, shell access, a skill and memory system, compaction, and workspace customization through a new Manifest file. These two layers can run on the same machine for local development or on separate infrastructure for enterprise-grade isolation.
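To make the separation of concerns concrete, here is a minimal sketch of the two layers in Python. This is an illustrative model, not the SDK's actual API: every class and field name below is our own invention, chosen to mirror the responsibilities the release describes (policy and supervision in the harness, side effects in the sandbox).

```python
from dataclasses import dataclass, field

# Illustrative model of the harness/sandbox split. These names are NOT
# the Agents SDK's API; they only mirror the division of responsibilities.

@dataclass
class Harness:
    """Policy layer: what the agent may do and how it is supervised."""
    instructions: str
    tools: list = field(default_factory=list)
    require_approval: bool = True          # approval-workflow gate
    trace: list = field(default_factory=list)

    def record(self, event: str) -> None:
        # Tracing lives in the harness, not the sandbox.
        self.trace.append(event)

@dataclass
class Sandbox:
    """Execution layer: where the agent's side effects actually land."""
    workspace: str                         # filesystem root the agent can touch
    memory: dict = field(default_factory=dict)

    def run_shell(self, cmd: str) -> str:
        # A real sandbox would execute `cmd` in isolation; we just log it.
        return f"[sandboxed] {cmd} (cwd={self.workspace})"

# The layers compose but stay independent: the same harness can drive a
# local sandbox during development or a remote one in production.
harness = Harness(instructions="Refactor the billing module", tools=["shell"])
sandbox = Sandbox(workspace="/tmp/agent-ws")
harness.record(sandbox.run_shell("ls"))
print(harness.trace[0])  # → [sandboxed] ls (cwd=/tmp/agent-ws)
```

Swapping the `Sandbox` instance for one backed by remote infrastructure would leave the harness untouched, which is exactly the isolation story the two-layer design is selling.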

Manifest and Cloud-Native Storage

The Manifest is a portable configuration layer that stages local files, clones Git repositories, creates output directories, and mounts external storage from S3, GCS, Azure Blob Storage, or Cloudflare R2. Combined with Unix-level permissions — for example, a read-only dataroom with a write-enabled output directory — the system gives compliance-heavy industries like finance and healthcare a credible path to deploying autonomous agents without surrendering access control.
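The release does not publish the Manifest's schema, but its described responsibilities can be sketched as data. The dict below is hypothetical: the field names (`stage`, `git`, `mounts`, `mode`) are assumptions, used only to show how staging, cloning, mounting, and the read-only-dataroom permission example from the text might fit together.

```python
# Hypothetical shape of a Manifest, expressed as a Python dict.
# All field names here are illustrative assumptions, not the real schema.
manifest = {
    "stage": ["./prompts/", "./configs/agent.toml"],      # local files copied in
    "git": [{"repo": "https://github.com/acme/billing", "ref": "main"}],
    "mounts": [
        # The permission example from the text: a read-only dataroom
        # alongside a write-enabled output directory.
        {"source": "s3://acme-dataroom", "path": "/data", "mode": "ro"},
        {"source": "s3://acme-results",  "path": "/out",  "mode": "rw"},
    ],
    "mkdir": ["/out/reports"],                            # output dirs to create
}

def writable_paths(m: dict) -> list[str]:
    """A deploy step could audit the permission model before launch."""
    return [mt["path"] for mt in m["mounts"] if mt["mode"] == "rw"]

assert writable_paths(manifest) == ["/out"]
```

The audit helper is the point for compliance-heavy teams: because permissions are declared in one portable file, they can be validated mechanically before any agent runs.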

Durability: Snapshots and Rehydration

Long-running agents inevitably crash. The SDK now supports built-in snapshotting and rehydration, allowing a session to be restored in a fresh sandbox and continue from its last saved state. For tasks that span hours — large-scale code migrations, dataset transformations, multi-repo refactors — this eliminates the fragility that plagued earlier agent frameworks and brings genuine durability to agentic workflows.
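The snapshot/rehydrate cycle is simple to reason about even without the SDK's internals. The sketch below simulates it with JSON serialization: checkpoint the session state, "crash," then restore into a fresh context. The function names and state shape are our own; only the semantics (persist, restore, resume) come from the announcement.

```python
import json
import os
import tempfile

# Minimal sketch of snapshot/rehydration semantics (not the SDK's API):
# serialize session state at a checkpoint, then restore it into a fresh
# "sandbox" after a simulated crash.

def snapshot(state: dict, path: str) -> None:
    with open(path, "w") as f:
        json.dump(state, f)

def rehydrate(path: str) -> dict:
    with open(path) as f:
        return json.load(f)

ckpt = os.path.join(tempfile.mkdtemp(), "agent.ckpt")
state = {"step": 412, "files_migrated": ["a.py", "b.py"]}
snapshot(state, ckpt)

del state                       # simulate the sandbox crashing mid-task
resumed = rehydrate(ckpt)       # a fresh sandbox picks up where it left off
assert resumed["step"] == 412
```

For an hours-long migration, the difference between restarting at step 0 and resuming at step 412 is the entire durability argument.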

Provider-Agnostic by Default

Perhaps the most strategically revealing detail: the SDK now supports 100+ LLMs out of the box. OpenAI is explicitly positioning the Agents SDK not as a lock-in vehicle for GPT models, but as a universal orchestration layer. Developers can swap models per agent, per task, or per cost tier — a direct response to the growing multi-model reality where teams use different providers for different workloads.
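Per-agent, per-task model swapping amounts to a routing table owned by the orchestration layer rather than any one vendor. The sketch below uses placeholder model identifiers (none are real product names) to show the shape of such a policy, including a cost-tier override.

```python
# Illustrative per-task model routing. The model identifiers are
# placeholders, not real products; the point is that the orchestration
# layer, not the model vendor, decides which provider handles each step.
ROUTES = {
    "plan":      "provider-a/large-model",   # high-stakes reasoning
    "codegen":   "provider-b/code-model",    # specialized coding model
    "summarize": "provider-c/small-model",   # cheap, high-volume step
}

def pick_model(task: str, budget: str = "standard") -> str:
    model = ROUTES.get(task, "provider-c/small-model")
    if budget == "economy":
        model = "provider-c/small-model"     # cost tier overrides the default
    return model

assert pick_model("plan") == "provider-a/large-model"
assert pick_model("plan", budget="economy") == "provider-c/small-model"
```

Once routing lives in one place like this, switching providers for a single workload is a one-line change, which is the multi-model flexibility the SDK is courting.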

AI Biz Insider Analysis — OpenAI is making a calculated bet: own the orchestration layer even if you don’t own every model in the pipeline. By open-sourcing the SDK with provider-agnostic support and enterprise-grade sandboxing, OpenAI is trying to become the Kubernetes of agentic AI — the control plane that everyone builds on, regardless of which inference engine runs underneath. If it works, revenue shifts from per-token charges to platform fees and enterprise support contracts.

Gemini CLI: Subagents Go Mainstream

Isolated Context, Parallel Execution

Released on April 15, Google’s Gemini CLI now supports subagents — specialized expert agents that operate within their own separate context windows, custom system instructions, and curated tool sets. Each subagent’s multi-step execution consolidates into a single summary returned to the main agent, preventing the context degradation that plagues long coding sessions. Users invoke them with @agent notation: @frontend-specialist, @codebase_investigator, or any custom agent defined in a Markdown file with YAML frontmatter.

Built-In and Custom Agents

Gemini CLI ships with three built-in subagents: a generalist with full tool access, a cli_help agent for answering Gemini CLI feature questions, and a codebase_investigator for mapping dependencies and exploring code structure. Developers can define custom subagents at the global level (~/.gemini/agents) or per-project (.gemini/agents), and extensions can bundle agents in an agents/ directory — creating a composable, shareable agent ecosystem.
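A custom subagent definition might look like the sketch below: a Markdown file in the project-level `.gemini/agents` directory, invocable as `@frontend-specialist`. The source confirms only that agents are Markdown files with YAML frontmatter; the specific frontmatter fields shown here (`name`, `description`, `tools`) are assumptions for illustration.

```markdown
<!-- .gemini/agents/frontend-specialist.md — project-level agent.
     Frontmatter field names below are illustrative assumptions. -->
---
name: frontend-specialist
description: React/CSS expert for UI tasks
tools: [read_file, write_file, shell]
---
You are a frontend specialist. Focus on component structure,
accessibility, and CSS. Summarize your changes when you finish.
```

Dropping the same file into `~/.gemini/agents` would make it available globally, and bundling it in an extension's `agents/` directory makes it shareable, per the layering described above.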

AI Biz Insider Analysis — Google’s approach is complementary but different. Where OpenAI focuses on infrastructure-level sandboxing and cloud-native durability, Google optimizes for developer experience at the CLI level. The @agent notation and Markdown-based configuration lower the barrier to multi-agent setups dramatically. Expect Anthropic’s Claude Code — which already supports sub-agents via the Agent tool — to respond with its own enhancements soon, intensifying the three-way platform race for developer mindshare.

What This Means for Enterprise Teams

The simultaneous releases mark an inflection point. Enterprise engineering teams evaluating agentic tooling now face a clearer taxonomy: OpenAI’s SDK targets backend infrastructure teams that need cloud-mounted, permission-controlled, crash-recoverable agent pipelines; Google’s Gemini CLI targets individual developers and small teams that want parallel expertise without context overflow. Both are betting that multi-agent orchestration — not monolithic chat — is the future of AI-assisted software development.

For CTOs and engineering leads, the practical takeaway is straightforward: if your agents need to touch production data, mount storage, and survive crashes, the OpenAI Agents SDK’s sandbox-plus-harness model deserves evaluation. If your priority is developer velocity on local codebases with specialized expertise, Gemini CLI subagents offer the fastest path to multi-agent workflows today. And with MCP crossing 97 million installs and the Agentic AI Foundation formalizing interoperability standards, the walls between these platforms are getting thinner by the quarter.

Sources

  1. OpenAI — The Next Evolution of the Agents SDK (April 15, 2026)
  2. Google Developers Blog — Subagents Have Arrived in Gemini CLI (April 15, 2026)

AI Biz Insider · AI Trends EN · aibizinsider.com

