Friday, March 13 · ~45s read
A single day. One million AI doctor visits. The adoption question is officially over.
🔬 AI Cleared a Medical Milestone: 1 Million Doctor Consultations in One Day
Decoded: On March 10, OpenEvidence logged 1 million clinical consultations between NPI-verified physicians and its AI within a single 24-hour period — the first time any AI system has hit that volume in active clinical use. A new AMA survey released this week shows 81% of U.S. doctors now use AI regularly.
Why it matters: This isn't a pilot program anymore. At that scale, AI is functioning as a live clinical co-pilot for a significant share of U.S. physicians. The adoption question is settled; the unresolved one is liability — when the AI gets it wrong at this volume, who's responsible?
🖥️ Nvidia Goes From Chip Seller to Ecosystem Banker With $2B Nebius Bet
Decoded: Nvidia announced a $2 billion investment in Amsterdam-based AI cloud company Nebius, taking an 8.3% stake. Nebius plans to deploy over 5 gigawatts of data center capacity by 2030, targeting the "neocloud" tier of the AI stack that sits between hyperscalers and startups hungry for compute.
Why it matters: Nvidia isn't just selling GPUs anymore — it's financing the infrastructure that buys them. By anchoring itself in the neocloud layer, Nvidia gains structural leverage over how AI compute gets allocated as demand continues to outpace supply. For the rest of the market, the takeaway is sharp: infrastructure access is the new strategic moat, and Nvidia is positioning itself as the gatekeeper.
🏛️ Anthropic vs. the Pentagon: AI's Biggest Safety Fight Goes to Court
Decoded: Anthropic is seeking an emergency appeals court stay after the Pentagon labeled it a "supply-chain risk" — the fallout from Anthropic's refusal to let Claude power autonomous lethal weapons or mass surveillance of U.S. citizens. Pentagon CTO Emil Michael ruled out any resumed negotiations on Thursday, saying there's "no chance." Microsoft filed an amicus brief backing Anthropic.
Why it matters: This is the sharpest collision yet between AI safety principles and U.S. military demands. Anthropic's position — that some use cases are simply off-limits, regardless of the customer — sets a precedent every major AI lab will eventually face. With the NDAA potentially in play, this case could define what "responsible AI supply chain" means for national security for years.
🛠 Tool Spotlight — Cursor · The AI code editor built on VS Code that understands your entire codebase — used by engineers at OpenAI, Stripe, and Shopify. Download free
That's your Friday signal. See you tomorrow.
— The AI Decoded Team
Get the full weekly deep-dive every Monday → getaidecoded.com/subscribe