AI Decoded Weekly
Your 3-minute briefing on what matters in AI. No hype. Just signal.
The line between "AI assistant" and "AI replacement" just moved — publicly, irreversibly, and at scale. This week, OpenAI handed its models to the Pentagon for classified operations, Jack Dorsey eliminated 40% of Block's workforce and dared other CEOs to follow, and Perplexity launched a digital worker that coordinates 19 AI models to complete full projects autonomously. The era of AI augmenting jobs is giving way to the era of AI eliminating them. Here's what you need to know.
💡 THE BIG STORY
OpenAI Signs Deal to Deploy AI on the Pentagon's Classified Network
Decoded: OpenAI reached an agreement with the U.S. Department of Defense on February 27 to deploy its AI models inside the Pentagon's classified network — hours after rival Anthropic was effectively blacklisted by the Trump administration over concerns about surveillance and autonomous weapons use cases.
Why it matters: AI has officially entered the classified tier of U.S. national security infrastructure. OpenAI's willingness to step into a space Anthropic refused — and got punished for refusing — signals that the frontier AI race is increasingly inseparable from geopolitical competition.
The deal makes OpenAI the first major commercial AI lab to gain access to the DoD's classified systems, a distinction that comes with enormous strategic upside and equally enormous risk. The company moved quickly to detail "layered protections" governing how its models can be used — but critics note those protections are self-reported and unverifiable from the outside.
The backdrop matters: earlier this week, 360+ employees across AI labs signed an open letter backing Anthropic's position that AI should not be used in autonomous weapons systems. OpenAI's deal — struck just hours after Anthropic's freeze-out — will be read by many as a direct rebuke of that position. It also raises hard questions about what "safety-first AI" means when commercial incentives and government contracts are in the room.
What happens next will define the norms for AI in defense for years. If OpenAI's classified deployment succeeds quietly, others will follow. If it generates a scandal, it could trigger the regulatory overhaul the industry has been dreading.
QUICK HITS
💰 Jack Dorsey Cuts 40% of Block's Workforce — and Says Your Company Is Next
Decoded: Block, the fintech company behind Square, Cash App, and Tidal, announced on February 26 that it is eliminating more than 4,000 of its roughly 10,000 employees. CEO Jack Dorsey said directly in a shareholder letter that the cuts are driven by AI productivity gains — and predicted most companies will do the same.
Why it matters: This is the clearest executive statement yet that AI isn't just changing workflows — it's eliminating headcount at scale and in public. Dorsey's willingness to say the quiet part out loud is either a warning shot or a playbook other CEOs will quietly follow.
🖥️ Perplexity Launches "Computer" — a Digital Worker That Coordinates 19 AI Models
Decoded: Perplexity AI launched Perplexity Computer on February 25, a platform that accepts high-level objectives from users, breaks them into subtasks, and delegates each subtask to whichever of 19 specialized AI models — including Claude Opus, Gemini, and Grok — is best suited for the job. Priced at $200/month.
Why it matters: This is the most concrete realization yet of the "AI employee" concept. Rather than a single chatbot, Computer functions as a project manager with a staff of AI specialists. The $200 price point puts it within reach of individual operators and small teams — not just enterprises.
🔬 Google Ships Gemini 3.1 Pro With 65,000-Token Output and ARC-AGI-2 Gains
Decoded: Google DeepMind released Gemini 3.1 Pro on February 19, rolling it out globally via the Gemini app and API. The release introduces a 65,000-token output limit — roughly 5x most competitors — along with measurable improvements on ARC-AGI-2 reasoning benchmarks and stronger performance on multi-document analytical tasks.
Why it matters: The 65k output limit is a practical unlock for developers building document-heavy applications, legal tools, and agentic pipelines. Benchmark improvements on ARC-AGI-2 suggest this isn't just a scaling bump — the reasoning architecture is maturing.
🛠 TOOL OF THE WEEK
Perplexity Computer: Your First AI Employee
What it is: A multi-model orchestration platform that coordinates 19 AI systems to complete complex, multi-step projects autonomously — no prompt engineering required.
Why it matters: Perplexity Computer doesn't just answer questions; it plans, delegates, and executes. Tell it to research competitors, draft a report, pull pricing data, and summarize findings — and it assigns subtasks to the right models in parallel. It's the closest thing to hiring a junior analyst who never sleeps.
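Perplexity hasn't published Computer's internals, but conceptually an orchestrator like this is a planner plus a router: split the objective into subtasks, assign each to a specialist model, run them in parallel. A minimal sketch of that pattern — the model names, specialties, and hard-coded planner below are purely illustrative stand-ins (a real system would use an LLM for both planning and routing):

```python
# Hypothetical sketch of a multi-model orchestrator: a planner splits an
# objective into tagged subtasks, and a router assigns each to the model
# best suited for it. Names and routing rules are illustrative only.

from concurrent.futures import ThreadPoolExecutor

# Registry mapping a specialty to a (hypothetical) model identifier.
SPECIALISTS = {
    "research": "claude-opus",
    "data": "gemini",
    "writing": "grok",
}

def plan(objective: str) -> list[dict]:
    """Break an objective into tagged subtasks (a real system would use an LLM)."""
    return [
        {"specialty": "research", "task": f"Find competitors for: {objective}"},
        {"specialty": "data", "task": f"Pull pricing data for: {objective}"},
        {"specialty": "writing", "task": f"Draft a summary report on: {objective}"},
    ]

def run_subtask(subtask: dict) -> str:
    """Route a subtask to its specialist model (stubbed: echo the assignment)."""
    model = SPECIALISTS[subtask["specialty"]]
    return f"[{model}] {subtask['task']}"

def orchestrate(objective: str) -> list[str]:
    """Plan, then execute subtasks in parallel across specialist models."""
    subtasks = plan(objective)
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_subtask, subtasks))

results = orchestrate("meal-kit delivery market")
for line in results:
    print(line)
```

The design choice that matters here is the separation of planning from routing: swapping in a better planner or a new specialist model doesn't touch the execution loop.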
Best for:
Research-heavy workflows (market analysis, due diligence, content research)
Automating multi-step tasks that currently require stitching together 3-5 tools
Small teams looking to punch above their headcount
The catch: At $200/month it's priced like a productivity tool, not a toy — and complex task quality still varies by domain.
Try it: perplexity.ai/computer
💡 ONE THING TO TRY THIS WEEK
The Job Threat Audit
Use this prompt to honestly assess your own exposure to AI displacement — before your employer does it for you:
"I work as a [your job title] at a [type of company]. My core daily tasks include [list 5-7 tasks]. For each task, tell me: (1) whether current AI tools can do this today, (2) how long before it's fully automatable, and (3) what adjacent skills I should develop to stay ahead of the curve. Be direct — don't soften the assessment."
This works because specificity forces the model past generic "AI will help humans" hedging. You get an honest triage of what's safe vs. what's at risk — and a starting point for a real upskilling plan.
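If you want to run the audit for several roles at once — say, across a whole team — the template can be filled programmatically and pasted into any chatbot or sent through an LLM API. A minimal sketch; the field names are just the bracketed placeholders from the prompt above:

```python
# Fill the Job Threat Audit template for a given role. The resulting string
# is the finished prompt, ready to paste into any chatbot.

TEMPLATE = (
    "I work as a {title} at a {company}. My core daily tasks include {tasks}. "
    "For each task, tell me: (1) whether current AI tools can do this today, "
    "(2) how long before it's fully automatable, and (3) what adjacent skills "
    "I should develop to stay ahead of the curve. "
    "Be direct - don't soften the assessment."
)

def build_audit_prompt(title: str, company: str, tasks: list[str]) -> str:
    """Return the audit prompt with the bracketed fields filled in."""
    return TEMPLATE.format(title=title, company=company, tasks="; ".join(tasks))

# Example role (illustrative values):
prompt = build_audit_prompt(
    title="marketing analyst",
    company="mid-size SaaS company",
    tasks=["campaign reporting", "A/B test analysis", "weekly exec summaries"],
)
print(prompt)
```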
📊 BY THE NUMBERS
| Metric | This Week |
|---|---|
| Big Tech AI capex in 2026 (Bridgewater est.) | $650 billion |
| Block employees laid off citing AI | 4,000 (40% of workforce) |
| Nvidia Q4 FY2026 revenue | $68 billion |
| Global jobs estimated displaced by AI by end of 2026 | 85 million |
🔭 WHAT I'M WATCHING
The Pentagon-as-Customer Problem
OpenAI's classified network deal is a milestone, but the harder question isn't whether AI belongs in defense — it's who sets the rules once it's there. Right now, the answer is: the vendors.
OpenAI described "layered protections" in its DoD agreement, but those protections are internal policies, not regulatory requirements. There's no independent body verifying how the models are used once they're inside a classified environment. That's not a knock on OpenAI specifically — it's a structural gap that applies to any commercial AI deployed in national security contexts.
What makes this week particularly significant is the contrast: Anthropic held a line (no autonomous weapons, no surveillance without consent), got punished for it, and watched a competitor step into the void within hours. If that's the incentive structure — safety constraints get you blacklisted, flexibility gets you the contract — then we're in a race to the bottom on norms, dressed up as a race to the top on capability.
Watch for whether Congress moves to create any oversight mechanism here. The window is narrow. Once classified AI infrastructure is established and operational, it becomes far harder to regulate after the fact.
THAT'S A WRAP
The week's signal is clear: AI is no longer a productivity layer — it's an organizational restructuring event. Block's layoffs, OpenAI's Pentagon deal, and Perplexity's digital worker all point in the same direction. The question is no longer whether AI will change your industry. It's whether you'll be positioned on the right side of that change when it arrives.
One ask: If this was useful, forward it to one person who'd benefit.
Hit reply with feedback. I read everything.
See you next week.
— The AI Decoded Team
P.S. — Know someone drowning in AI noise? They can subscribe at AI Decoded.