OpenAI Launches Workspace Agents for Team Workflow Automation
OpenAI published a guide on building and scaling workspace agents within ChatGPT, showing teams how to automate repeatable workflows and connect their existing tools. The OpenAI Academy resource targets professionals looking to streamline operations with AI agents embedded directly in their team environments.
Why it matters
Workspace agents signal ChatGPT's shift from individual productivity tool to enterprise workflow platform, directly competing with Microsoft Copilot and Google Workspace AI.
Post-Training Fixes Won't Erase AI Copyright Liability, Paper Argues
A new arXiv paper argues that machine unlearning and inference-time guardrails cannot retroactively cure copyright infringement that occurs during AI training. The authors contend legal liability stems from how data was acquired, not from what models output. The position challenges a common industry defense strategy as generative AI faces mounting legal scrutiny.
Why it matters
If courts adopt this reasoning, AI companies cannot rely on post-deployment technical fixes to escape liability for training data violations—forcing compliance decisions upstream, before training begins.
Google Launches Two Specialized 8th-Gen TPUs for AI Agents
Google unveiled two 8th-generation TPUs at Cloud Next: the TPU v8t for training large AI models and TPU v8i for inference workloads. The chips are purpose-built for agentic AI applications. Both will be available via Google Cloud, giving enterprises dedicated hardware to run and deploy next-generation AI agents at scale.
Why it matters
Purpose-built inference and training chips signal Google is hardening its cloud infrastructure to compete directly with Nvidia and AWS for enterprise AI workloads.
OpenAI Launches GPT-5.5, Advancing Its Super App Ambitions
OpenAI released GPT-5.5 on April 23, 2026, touting improved capabilities across multiple categories. The model is part of the company's broader strategy to consolidate AI tools into a single platform, moving toward what OpenAI envisions as an AI super app. Specific benchmark details were not disclosed in the announcement.
Why it matters
A unified AI super app from OpenAI could reshape how professionals access and pay for AI tools, threatening specialized competitors.
OpenAI WebSockets Cut Latency in Codex Agent Loops
OpenAI detailed how its Codex agent uses WebSockets and connection-scoped caching in the Responses API to reduce overhead and improve model latency. The persistent connections eliminate repeated handshake costs across multi-step agent loops, making agentic workflows faster and more efficient for developers building on the platform.
Why it matters
Developers building multi-step AI agents can now achieve meaningfully lower latency and infrastructure costs using WebSocket connections in OpenAI's Responses API.
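The latency win comes from amortization: a fresh HTTPS request pays connection setup on every agent step, while a persistent WebSocket-style connection pays it once. A minimal back-of-the-envelope sketch of that tradeoff, with hypothetical handshake and round-trip costs (placeholder numbers, not measured OpenAI figures, and not the actual Responses API):

```python
# Illustrative cost model for multi-step agent loops.
# HANDSHAKE_MS and ROUND_TRIP_MS are hypothetical placeholders.

HANDSHAKE_MS = 150   # assumed TCP + TLS setup cost per new connection
ROUND_TRIP_MS = 40   # assumed request/response cost once connected

def fresh_connection_total(steps: int) -> int:
    """Each agent step opens a new connection: handshake paid every step."""
    return steps * (HANDSHAKE_MS + ROUND_TRIP_MS)

def persistent_connection_total(steps: int) -> int:
    """One persistent connection reused across the loop: handshake paid once."""
    return HANDSHAKE_MS + steps * ROUND_TRIP_MS

steps = 10
print(fresh_connection_total(steps))       # 1900
print(persistent_connection_total(steps))  # 550
```

With ten steps, the persistent connection eliminates nine handshakes, and the gap widens as agent loops grow longer—which is why the savings matter most for multi-step agentic workflows.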