Latest News

AI News Roundup: Pentagon Nearly Cuts Off Anthropic Over Autonomous Weapons Dispute, State Hackers Using Gemini for Cyberattacks, Anthropic Bans Third-Party OAuth Tokens — February 19, 2026

2026/02/19

The Pentagon nearly branded Anthropic a national security risk for refusing to remove safety guardrails on military AI. Meanwhile, Anthropic quietly rewrote its terms to lock subscribers out of tools like OpenClaw — while OpenAI keeps its ecosystem open. Also today: Grok 4.20 beta is turning heads, Gemini 3.1 leaked, and Google confirmed that state hackers from China, Iran, and Russia are running cyberattacks with Gemini.

## Pentagon vs. Anthropic: A Fight Over Autonomous Weapons

For months, the Defense Department and Anthropic negotiated a contract for AI use on classified military systems. That negotiation went sideways this week when a source close to Defense Secretary Pete Hegseth told Axios the Pentagon was "close" to declaring Anthropic a supply chain risk — a designation that would cut off the company from U.S. military work.

What Anthropic said: it didn't want Claude used for mass surveillance of Americans or autonomous weapons with no human in the loop. The Pentagon's response, per reporting, was to accuse Anthropic of making political choices to appease its workforce. Defense officials were reportedly furious.

CEO Dario Amodei has been consistent on this. He's said publicly that using AI for "domestic mass surveillance and mass propaganda" is "entirely illegitimate." The Trump administration disagrees — it wants AI deployed broadly across the military with fewer restrictions, not more.

**Source:** [The New York Times](https://www.nytimes.com/2026/02/18/technology/defense-department-anthropic-ai-safety.html)

---

## Grok 4.20 Beta Is Out and Early Testers Like It

xAI launched Grok 4.20 in beta this week. The model is confirmed at approximately 500 billion parameters. Its provisional LMSYS Arena ELO is 1505–1535; with Heavy mode — which runs up to 16 parallel agents — projected scores reach 1540 to 1610+. Early testers say it matches or beats GPT-5, Claude Opus 4.6, and Gemini 3 on practical coding, simulations, and agentic tasks.

Elon Musk confirmed the parameter count to Brian Wang at NextBigFuture. The model learns weekly — it updates with published release notes during beta, something no other frontier model has done at scale. Testers describe it as the first model that feels like working with a small expert team rather than one assistant. Hallucinations are reportedly lower, thanks to internal cross-validation across agents. Full public benchmarks are expected around mid-March when the beta closes.

**Source:** [NextBigFuture](https://www.nextbigfuture.com/2026/02/xai-grok-4-20-is-a-big-improvement-practical-coding-simulations-and-real-world-agentic-tasks.html)

---

## Gemini 3.1 Has Leaked — SVG Is the Story

Google's Gemini 3.1 appears to have leaked before any official announcement, and the capability getting attention is vector image generation. SVG outputs from the model are circulating in developer communities and drawing comparisons to professional design tools.

Google DeepMind also officially released Lyria 3 this week — its most advanced music generation model to date. Google hasn't confirmed or commented on the Gemini 3.1 leak.

**Source:** [YouTube AI News Roundup](https://www.youtube.com/watch?v=xTNya7DjAgE)

---

## Google Reports State Hackers Are Using Gemini for Cyberattacks

Google published a report this week confirming that state-backed hacking groups from China, Iran, Russia, and North Korea are using Gemini to run cyberattacks. The scope is wide: reconnaissance, phishing lures, malware development, and vulnerability testing, all accelerated by Gemini.

One Iranian group used the model to "significantly augment" its reconnaissance capabilities against specific targets. The same countries are also using Gemini for information operations — generating fake articles, fabricated personas, and political propaganda. Google says it has been blocking access where it can, but adversarial groups adapt fast.

**Sources:** [Tom's Hardware](https://www.tomshardware.com/tech-industry/cyber-security/google-reports-that-state-hackers-from-china-russia-and-iran-are-using-gemini-in-all-stages-of-attacks), [The Record](https://therecord.media/nation-state-hackers-using-gemini-for-malicious-campaigns)

---

## Anthropic Banned Third-Party OAuth — Then Blamed It on a Typo

Anthropic updated its Claude Code documentation this week to say that OAuth tokens from Free, Pro, and Max accounts "cannot be used in any product, tool, or service" outside Anthropic's own. That includes the Agent SDK, OpenClaw, NanoClaw, Zed, and any other tool built around Claude account authentication. The policy also bars third-party developers from building products that route users through consumer subscription credentials. Developers building on Claude are now required to use API keys.

Community reaction was immediate. Multiple developers published posts about switching their stacks. One user, after his OpenClaw workflow stopped working, wrote: "my entire workflow ground to a halt." An Anthropic employee stepped in on r/ClaudeAI to say the update was a "botched docs update" and personal Claude Code use is still fine — but the documentation says what it says.

The contrast with OpenAI is the part users keep bringing up. OpenAI explicitly allows Pro subscribers to use their tokens in third-party tools. On Hacker News, one commenter noted that OpenAI "openly encourages users to use their subscription with third-party tools like opencode and OpenClaw." Whether Anthropic intended to draw this line or stumbled into it, the line is there.

**Sources:** [r/ClaudeAI](https://www.reddit.com/r/ClaudeAI/comments/1r8ecyq/), [The New Stack](https://thenewstack.io/anthropic-agent-sdk-confusion/), [Dave Swift](https://daveswift.com/claude-trouble/)
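
For developers whose tools broke with the change, the path Anthropic points to is API-key authentication rather than subscription OAuth tokens. Below is a minimal sketch of that switch, assuming the official `anthropic` Python SDK; the model ID and prompt are placeholders, not confirmed values.

```python
# Minimal sketch: calling Claude with an API key instead of a consumer
# subscription OAuth token. Assumes the `anthropic` Python SDK is installed
# and ANTHROPIC_API_KEY is set in the environment.
import os

import anthropic

# API-key authentication, as required for third-party tools and products
client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model ID; check current docs
    max_tokens=256,
    messages=[
        {"role": "user", "content": "Summarize today's AI news in one sentence."}
    ],
)

print(response.content[0].text)
```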

AI News Roundup: Anthropic Ships Claude Sonnet 4.6, Oxford Prof Warns of AI Hindenburg Disaster, Nvidia-Meta Multibillion Chip Deal — February 18, 2026

2026/02/18

Anthropic shipped Claude Sonnet 4.6 with better coding and computer use, making it the default model for all users — its second major release in 12 days. Meanwhile, Nvidia locked in a multiyear deal to sell Meta "millions" of Blackwell and Rubin chips, an Oxford professor warned the AI race could end in a "Hindenburg-style disaster," the Guardian kicked off a year-long investigation into AI and labor, and Unitree is targeting 20,000 humanoid robot shipments this year.

## Claude Sonnet 4.6 Ships as Anthropic's New Default

Anthropic released Claude Sonnet 4.6, its second major model launch in under two weeks. The company says Sonnet 4.6 is better at coding, computer use, design, knowledge work, and processing large datasets. It now serves as the default model for both free and Pro plan users across the Claude chatbot and Claude Cowork productivity tool.

The pace is hard to ignore. Anthropic launched Claude Opus 4.6 just 12 days ago, and the company claims Sonnet 4.6 now delivers performance that "would have previously required reaching for an Opus-class model" on real-world office tasks. The model includes a 1M token context window and is priced the same as Sonnet 4.5.

**Sources:** [Axios](https://www.axios.com/2026/02/17/anthropic-new-claude-sonnet-faster-cheaper), [CNBC](https://www.cnbc.com/2026/02/17/anthropic-ai-claude-sonnet-4-6-default-free-pro.html), [TechCrunch](https://techcrunch.com/2026/02/17/anthropic-releases-sonnet-4-6/)

## Oxford Professor Warns AI Race Risks "Hindenburg-Style Disaster"

Michael Wooldridge, a professor of AI at Oxford University, warned that the rush to commercialize AI has raised the risk of a catastrophic failure that could kill public confidence in the entire technology overnight. He compared the scenario to the 1937 Hindenburg disaster, which ended the airship industry in a single event.

Wooldridge pointed to AI chatbots with easily bypassed guardrails as evidence that commercial incentives are winning over safety testing. He outlined scenarios including deadly self-driving car updates, AI-powered hacks grounding airlines, or a Barings Bank-style corporate collapse triggered by AI doing "something stupid." He'll deliver the Royal Society's Michael Faraday prize lecture Wednesday under the title "This is not the AI we were promised."

**Source:** [The Guardian](https://www.theguardian.com/science/2026/feb/17/ai-race-hindenburg-style-disaster-a-real-risk-michael-wooldridge)

## Nvidia Signs Multiyear Deal to Sell Meta "Millions" of Chips

Nvidia and Meta announced a sweeping multiyear deal that will see Nvidia supply millions of Blackwell and Rubin GPUs, plus Grace CPUs and networking hardware, for Meta's AI data centers. Financial terms weren't disclosed, but analyst Ben Bajarin of Creative Strategies called it "certainly in the tens of billions of dollars." The deal is part of Meta's commitment to spend $600 billion on U.S. infrastructure.

Meta becomes the first company to deploy Nvidia's Grace CPUs as standalone chips in data centers rather than paired with GPUs. Next-generation Vera CPUs are planned for Meta deployment in 2027. AMD stock dropped about 4% on the news.

**Sources:** [Reuters](https://www.reuters.com/business/nvidia-sell-meta-millions-chips-multiyear-deal-2026-02-17/), [CNBC](https://www.cnbc.com/2026/02/17/meta-nvidia-deal-ai-data-center-chips.html)

## Guardian Launches Year-Long "Reworked" Series on AI and Labor

The Guardian kicked off "Reworked," a year-long reporting series investigating how AI is reshaping work. The opening installment profiles San Francisco AI startup culture — 12-hour days, no weekends, anxiety-fueled grind — and frames it as a preview of pressures that will soon hit other industries.

Written by Arielle Pardes, the series promises to center workers' experiences rather than executive talking points. It's a bet by the Guardian that the labor story will be the defining AI narrative of 2026.

**Source:** [The Guardian](https://www.theguardian.com/technology/ng-interactive/2026/feb/17/ai-startups-work-culture-san-francisco)

## Unitree Targets 20,000 Humanoid Robot Shipments in 2026

Chinese robotics company Unitree is aiming to ship 10,000 to 20,000 humanoid robots this year, up to nearly four times the 5,500 units it shipped in 2025. Founder Wang Xingxing told 36Kr that global humanoid robot shipments could reach "tens of thousands" in 2026, with Unitree capturing a large share.

The announcement came after Unitree's G1 and H2 robots performed autonomous kung fu at China's Spring Festival Gala — a high-profile showcase that put the company in front of hundreds of millions of viewers.

**Source:** [South China Morning Post](https://www.scmp.com/tech/big-tech/article/3343825/kung-fu-somersaults-and-scale-unitree-eyes-20000-robot-output-2026-after-gala)

AI News Roundup: Surgical Robots Injure Patients, Half of Entry-Level Jobs at Risk, Hollywood Panics Over Seedance — February 17, 2026

2026/02/17

AI just showed up in the operating room and immediately started misidentifying body parts, OpenAI broke up with Nvidia to run code on dinner-plate-sized chips, and the Deadpool screenwriter watched an AI-generated video of Brad Pitt fighting Tom Cruise and said "it's likely over for us."

## AI in the Operating Room Is Already Hurting People

A Reuters investigation found that since Johnson & Johnson added AI to its TruDi surgical navigation system in 2021, the FDA received reports of at least 100 malfunctions and adverse events — up from just 7 before AI was added. At least 10 patients were injured. One had cerebrospinal fluid leaking from the nose. Another had their skull base punctured. Two suffered strokes after a surgeon accidentally hit a major artery while the AI system reportedly gave wrong location data.

This isn't an isolated case. The FDA has authorized 1,357 AI-enhanced medical devices, double the number through 2022. Researchers from Johns Hopkins, Georgetown, and Yale found that 60 FDA-approved AI devices were linked to 182 product recalls, with 43% recalled less than a year after approval — twice the normal rate. The FDA lost key staff during the AI boom and is struggling to monitor what it approved.

**Source:** [Reuters](https://www.reuters.com/investigations/ai-enters-operating-room-reports-arise-botched-surgeries-misidentified-body-2026-02-09/)

## Half of Entry-Level White-Collar Jobs Could Vanish

Anthropic CEO Dario Amodei said AI could "wipe out half of all entry-level white-collar jobs" within one to five years. Microsoft AI CEO Mustafa Suleyman went further, predicting most white-collar work "will be fully automated within 12 to 18 months."

By the end of 2025, AI had been cited in roughly 55,000 US layoffs, according to Challenger, Gray & Christmas. January 2026 saw 108,000 job cuts — the worst start to a year since 2009, with layoffs up 118% year over year. Amazon cut 16,000 corporate jobs in a single month. Salesforce shed 4,000 customer-support roles after AI absorbed half the workload. These aren't obscure startups. They're the biggest companies in the world.

**Sources:** [UnHerd](https://unherd.com/2026/02/the-ai-jobs-apocalypse-is-here/), [Fox News](https://www.foxnews.com/media/ai-out-control-how-single-article-sending-shock-waves-apocalyptic-warning)

## OpenAI Breaks From Nvidia With Cerebras-Powered Codex

OpenAI released GPT-5.3-Codex-Spark, its first production model running on non-Nvidia hardware. The model runs on Cerebras' Wafer Scale Engine 3 — a chip literally the size of a dinner plate — and delivers code at over 1,000 tokens per second.

OpenAI has spent the past year pulling away from Nvidia: an AMD deal in October 2025, a $38 billion Amazon cloud agreement in November, and custom chips being designed for TSMC fabrication. A planned $100 billion Nvidia infrastructure deal has fizzled, with Reuters reporting OpenAI grew "unsatisfied with the speed of some Nvidia chips for inference tasks." OpenAI plans to bring 750 megawatts of Cerebras-backed compute online through 2028.

**Source:** [Ars Technica](https://arstechnica.com/ai/2026/02/openai-sidesteps-nvidia-with-unusually-fast-coding-model-on-plate-sized-chips/)

## Western Digital: Zero HDD Capacity Left for 2026

Western Digital CEO Irving Tan confirmed on the company's Q2 earnings call that its entire HDD capacity for 2026 is sold out. Completely. Firm purchase orders are in from its top seven customers, with long-term agreements stretching into 2028. Cloud revenue now accounts for 89% of total revenue. Consumer revenue? Just 5%. The demand is AI data center buildout, and the supply chain can't keep up. Training data alone is measured in exabytes, and that doesn't count inference logs or backups.

**Source:** [WCCFTech](https://wccftech.com/western-digital-has-no-more-hdd-capacity-left-out/)

## Seedance 2.0 Sends Hollywood Into Panic Mode

ByteDance launched Seedance 2.0, and within 24 hours a user generated a video of Tom Cruise fighting Brad Pitt with a two-line text prompt. Deadpool screenwriter Rhett Reese saw it and wrote: "I hate to say it. It's likely over for us."

The Motion Picture Association demanded ByteDance "immediately cease its infringing activity," calling it "unauthorized use of US copyrighted works on a massive scale." Disney sent a cease-and-desist calling it a "virtual smash-and-grab of Disney's IP" after users generated clips of Spider-Man, Darth Vader, and Baby Yoda. SAG-AFTRA sided with the studios. ByteDance also partnered with the rights holders of La La Land for an AI remix competition on its Dreamina platform, though Reddit users suspect the Chinese distributor sold the rights without Lionsgate's knowledge.

**Sources:** [TechCrunch](https://techcrunch.com/2026/02/15/hollywood-isnt-happy-about-the-new-seedance-2-0-video-generator/), [BBC](https://www.bbc.com/news/articles/cjd9nllng22o)
