Latest News

# AI News Roundup: Pentagon Opens Formal Review of Anthropic's Classified AI Contracts, Amodei Says AGI Arrives in 2026, Hassabis Warns of AI Bio and Cyber Threats — February 21, 2026


2026/02/21

The Pentagon opened a formal review of Anthropic's Claude Gov deployments on classified networks, Dario Amodei said AGI could arrive before the end of 2026 while predicting half of white-collar jobs will be affected within five years, and Demis Hassabis called for urgent action on AI-enabled cyber and biological risks at India's AI Impact Summit. Meanwhile, a wave of state-level AI legislation is accelerating across the U.S., and Yale's Budget Lab found that mass AI-driven layoffs still aren't showing up in official labor data.

---

## Pentagon Investigating Anthropic Over Claude Gov Deployments

The U.S. Department of Defense has opened a review of Anthropic's Claude Gov program following a Wired investigation that raised concerns about supply chain risk and oversight gaps in the company's classified network contracts. Claude Gov, launched as Anthropic's government-focused AI service, runs on air-gapped infrastructure cleared for national security workloads.

The DoD's worry isn't necessarily about Claude's capabilities but about the vetting processes and contractual safeguards around a private AI company operating inside classified environments. This is the first known formal DoD inquiry of this scope into an AI company's national security contracts.

Anthropic has positioned Claude Gov as the responsible choice for government agencies that need AI tools without the data exposure risks of commercial deployments. The investigation puts that pitch under stress.

The stakes are real. If the DoD finds gaps in how Anthropic manages access, auditability, or model updates in classified contexts, it could reshape how any AI company pursues defense contracts going forward.

**Sources:** [Wired](https://www.wired.com/), [Anthropic Claude Gov announcement](https://www.anthropic.com/news/claude-gov-models-for-u-s-national-security-customers)

---

## Dario Amodei: AGI This Year, Half of White-Collar Jobs in Five

Speaking at India's AI Impact Summit in New Delhi, Anthropic CEO Dario Amodei made his most direct public prediction yet: AGI arrives this year. He put the timeline at sometime in 2026, and said the transition after that point would reshape white-collar employment faster than most organizations are prepared for. His estimate: 50% of white-collar jobs face significant automation pressure within one to five years.

Amodei's comments weren't framed as warnings so much as planning premises. His argument is that the trajectory of current frontier models makes this a near-certainty, not a speculative scenario. He didn't offer a specific definition of AGI, which leaves some interpretive room, but the 2026 date is specific enough to be notable.

The India summit brought together global AI leaders and was hosted by Prime Minister Narendra Modi. Amodei appeared alongside other frontier lab executives, including Sam Altman, making the venue itself a signal of how central India has become to the global AI policy conversation.

**Sources:** [DJournal](https://djournal.com/), [Fortune](https://fortune.com/)

---

## Hassabis at India Summit: Golden Era, But the Threats Are Real

DeepMind CEO Demis Hassabis struck a more optimistic register at the same summit, describing the current period as a "Golden Era of science" powered by AI. But he paired that framing with a pointed call for urgency around two specific risk categories: AI-enabled cyberattacks and AI-accelerated biological threats.

Hassabis isn't new to these concerns, but stating them at a summit hosted by one of the world's largest governments gave them fresh weight. His argument is that the same capabilities making AI useful for scientific discovery also make it useful for people designing pathogens or conducting large-scale cyber operations. He didn't name specific actors or incidents, but the subtext was clear enough.

Governments attending the summit are building their own AI strategies, and Hassabis's message was that safety investment has to keep pace with capability development.

**Sources:** [BBC News](https://www.bbc.com/news/)

---

## States Are Building an AI Regulation Wall, Bill by Bill

A Transparency Coalition analysis tracking state-level AI legislation identified a surge of new bills across the U.S. in the first two months of 2026. The most notable include HB 665, HB 524, HB 579, and the AI Content Accountability Act, which target areas including synthetic media disclosure, algorithmic hiring decisions, and automated decision-making in high-stakes contexts.

The bills vary widely in scope and enforceability, but the pace of introduction is a signal that state governments aren't waiting for federal action. Congress has stalled repeatedly on AI governance, and states are filling the gap with their own patchwork frameworks.

For companies operating nationally, this creates a compliance headache similar to the state privacy law problem that preceded federal attention on data. If enough states pass conflicting AI rules, pressure for a preemptive federal framework will build.

**Sources:** [Transparency Coalition](https://transparencycoalition.org/)

---

## Yale Budget Lab: The BLS Data Still Doesn't Show Mass AI Layoffs

A new analysis from Yale's Budget Lab reviewed Bureau of Labor Statistics employment data looking for evidence that AI has begun displacing workers at scale. The finding: it hasn't, not in the numbers.

The paper notes that while AI adoption is accelerating across industries, the labor market hasn't yet absorbed a visible shock from automation. Unemployment figures, job posting trends, and sector-level employment don't currently reflect the displacement that some economists and tech executives have predicted.

The caveat is timing. Budget Lab's researchers note that past automation waves took years to fully register in labor statistics, and some economists argue the current period is a compression phase that will appear sudden once tipping points arrive. For now, the official data says workers are mostly still employed. Whether that holds through 2026 is the question.

**Sources:** [American Bazaar Online](https://americanbazaaronline.com/)

---

*Sources verified. All claims drawn from source articles published February 19–21, 2026.*

# AI News Roundup: Amodei Draws Autonomous Weapons Red Line at India Summit, Sam Altman Calls Out AI-Washing, Tesla Kills Model S/X to Build Optimus — February 20, 2026


2026/02/20

This week, every major AI CEO flew to New Delhi for India's AI Impact Summit — and two of them ended up in the same viral photo. Anthropic's Dario Amodei drew a hard line on autonomous weapons and mass surveillance, even as he stood ten feet from the Pentagon's preferred AI vendor. Sam Altman admitted the quiet part out loud about AI and layoffs. Tesla decided it'd rather build robots than sell luxury cars. Google officially shipped Gemini 3.1 Pro with ARC-AGI-2 scores that should make everyone pay attention. And OpenAI's Codex stopped being a model and started being a platform.

---

## Amodei at India AI Summit: Autonomous Weapons Are a Hard No

Anthropic CEO Dario Amodei spoke at the India AI Impact Summit on Thursday and put two things on the record as non-negotiable limits for how Claude gets used: no fully autonomous weapons with no human in the loop, and no mass surveillance of civilian populations.

This is not new policy. What's new is the context. Amodei said this in New Delhi, where he was sharing a stage with Sam Altman — whose company, OpenAI, holds military AI contracts with far fewer restrictions.

The Pentagon has been pressing Anthropic to remove safety guardrails from Claude for military use. Defense Secretary Pete Hegseth's office reportedly came close to designating Anthropic a "supply chain risk" — a designation that would cut the company off from U.S. military work entirely. Amodei didn't flinch. The company has built models for U.S. national security through its partnership with Palantir, but he's been consistent about where those limits sit.

On the same day he spoke in Delhi, UK Deputy Prime Minister David Lammy announced that OpenAI and Microsoft had joined the UK's international AI alignment coalition — pledging new funding to a program specifically aimed at keeping advanced AI under human control.

The contrast is becoming a defining feature of 2026. One company is racing to deploy AI in autonomous weapons systems. Another is refusing. The market will eventually have to decide which bet it trusts.

**Sources:** [StartupNews](https://startupnews.fyi/2026/02/19/ai-impact-summit-anthropic-ceo-dario-amodei-flags-risks-of-autonomous-weapons-mass-surveillance/), [ANI News](https://aninews.in/news/business/openai-microsoft-join-uk-led-global-coalition-to-safeguard-ai-development20260220141850/)

---

## Sam Altman Says "AI Washing" Is Real — Companies Are Faking It

Speaking at the same India summit, OpenAI CEO Sam Altman said the quiet part out loud: some companies are blaming AI for layoffs they would have made anyway.

"I don't know what the exact percentage is, but there's some AI washing where people are blaming AI for layoffs that they would otherwise do," Altman told CNBC-TV18. "And then there's some real displacement by AI of different kinds of jobs."

It's a notable admission from the CEO of the company whose products are most often cited in those layoff announcements.

The nuance matters. A National Bureau of Economic Research study published this month found that nearly 90% of C-suite executives surveyed across the U.S., UK, Germany, and Australia said AI had no impact on employment at their companies over the past three years. Meanwhile, Klarna's CEO said this week the company would cut its 3,000-person workforce by a third by 2030 because of AI acceleration.

Both things can be true. AI is displacing jobs in specific sectors. And some companies are using it as cover for restructuring they planned anyway. Altman acknowledged both.

What he didn't say: how much of the displacement comes from OpenAI's own products.

**Source:** [Fortune](https://fortune.com/2026/02/19/sam-altman-confirms-ai-washing-job-displacement-layoffs/)

---

## Tesla Kills Model S and Model X to Build More Optimus Robots

Tesla is shutting down production of its Model S and Model X vehicle lines and converting the Fremont factory floor to Optimus humanoid robot manufacturing. The target: 1 million units annually.

The Model S has been in production since 2012. The Model X since 2015. These are not cheap cars — base prices start above $70,000. Tesla is ending them to build robots.

By early 2026, Tesla was already integrating Grok, xAI's AI model, into the Optimus platform. The robots are being trained using simulation data drawn from Tesla's existing FSD (Full Self-Driving) fleet — millions of real-world driving hours converted into robot training runs. This is the part that separates Tesla from most humanoid competitors: the company has more high-quality real-world embodied AI training data than anyone.

Elon Musk announced plans at Davos in January to launch Optimus for public sale by late 2027. The factory conversion makes that timeline more credible. Tesla is betting that the humanoid robot market — projected to hit $7.5 trillion by 2050 — is worth more than two luxury car lines.

**Sources:** [National Today](https://nationaltoday.com/us/ca/fremont/news/2026/02/15/tesla-shifts-focus-to-optimus-robots-ending-model-s-and-x-production/), [Trefis](https://www.trefis.com/stock/tsla/articles/591058/can-tesla-stocks-1-3t-valuation-withstand-chinas-humanoid-surge/2026-02-19)

---

## Gemini 3.1 Pro Is Out — and Its Reasoning Numbers Are Hard to Ignore

Google released Gemini 3.1 Pro on Thursday. The headline benchmark: 77.1% on ARC-AGI-2, more than double the score of its predecessor, Gemini 3 Pro.

ARC-AGI-2 tests abstract reasoning — the ability to solve logic patterns the model has never seen before. It's designed specifically to resist memorization. A score of 77.1% puts Gemini 3.1 Pro above what most researchers consider the human baseline on these tasks. Claude Opus 4.6 scores 68.8% on the same benchmark.

The model is rolling out now across the Gemini API in Google AI Studio, Gemini CLI, Google's Antigravity agentic platform, Vertex AI, the Gemini app, and NotebookLM. It can generate website-ready animated SVGs directly from text — not pixel images, but actual code that scales cleanly.

This isn't a full release — Google is shipping 3.1 Pro in preview mode to validate updates and run agentic workflow testing before GA. But the reasoning improvement is real and independently verified.

**Source:** [Google Blog](https://blog.google/innovation-and-ai/models-and-research/gemini-models/gemini-3-1-pro/)
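
If you want to poke at the SVG claim through the API, here is a minimal sketch using the `google-genai` Python SDK. Treat the model id as an assumption: "gemini-3.1-pro" follows Google's naming pattern, but the official preview id may differ, so check the model list in AI Studio first.

```python
# Minimal sketch: asking Gemini 3.1 Pro for an animated SVG through the
# google-genai Python SDK. Assumptions: a GEMINI_API_KEY is set in the
# environment, and "gemini-3.1-pro" is a guess at the preview model id.
from google import genai

client = genai.Client()  # picks up GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-3.1-pro",  # assumed id; confirm against the live model list
    contents=(
        "Generate a self-contained animated SVG of a pulsing circle. "
        "Return only the SVG markup, with no commentary."
    ),
)

# The output is markup rather than pixels, so it can be dropped straight
# into a page and scaled without quality loss. (Models often wrap output
# in Markdown fences; strip those if present.)
with open("pulse.svg", "w", encoding="utf-8") as f:
    f.write(response.text)
```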

---

## OpenAI's Codex Stopped Being a Model and Started Being a Platform

OpenAI has been quietly rebuilding what "Codex" means. In February 2026, it's not just a model that writes code. It's a CLI, an IDE extension, a web workflow, and a task-assignment surface — all backed by the same production serving stack.

The model underneath is GPT-5.3-Codex-Spark, which runs on Cerebras' Wafer Scale Engine 3 and delivers code at over 1,000 tokens per second. That's fast enough to feel interactive in ways that previous coding AI wasn't. Developers can assign tasks, review changes, and iterate without switching contexts or waiting for generation.

OpenAI also retired GPT-4o and several legacy models from ChatGPT this month, consolidating users onto the GPT-5 series.

The platform push is deliberate: Codex CLI, IDE extension, web app — OpenAI wants to be the place where AI-assisted code gets built and maintained, not just the API behind someone else's product. The bet is that owning the developer workflow end-to-end creates stickiness that no single model can match. Google and Anthropic both have coding tools. But OpenAI has more surfaces.

**Sources:** [ChiangRai Times / OpenAI ecosystem analysis](https://www.chiangraitimes.com/ai/openai-developer-ecosystem/), [Releasebot](https://releasebot.io/updates/openai)

---

*Sources verified. All claims drawn from source articles published February 15–20, 2026.*

# AI News Roundup: Pentagon Nearly Cuts Off Anthropic Over Autonomous Weapons Dispute, State Hackers Using Gemini for Cyberattacks, Anthropic Bans Third-Party OAuth Tokens — February 19, 2026


2026/02/19

The Pentagon nearly branded Anthropic a national security risk for refusing to remove safety guardrails on military AI. Meanwhile, Anthropic quietly rewrote its terms to lock subscribers out of tools like OpenClaw — while OpenAI keeps its ecosystem open. Also today: Grok 4.20 beta is turning heads, Gemini 3.1 leaked, and Google confirmed that state hackers from China, Iran, and Russia are running cyberattacks with Gemini.

---

## Pentagon vs. Anthropic: A Fight Over Autonomous Weapons

For months, the Defense Department and Anthropic negotiated a contract for AI use on classified military systems. That negotiation went sideways this week when a source close to Defense Secretary Pete Hegseth told Axios the Pentagon was "close" to declaring Anthropic a supply chain risk — a designation that would cut off the company from U.S. military work.

What Anthropic said: it didn't want Claude used for mass surveillance of Americans or autonomous weapons with no human in the loop. The Pentagon's response, per reporting, was to accuse Anthropic of making political choices to appease its workforce. Defense officials were reportedly furious.

CEO Dario Amodei has been consistent on this. He's said publicly that using AI for "domestic mass surveillance and mass propaganda" is "entirely illegitimate." The Trump administration disagrees — it wants AI deployed broadly across the military with fewer restrictions, not more.

**Source:** [The New York Times](https://www.nytimes.com/2026/02/18/technology/defense-department-anthropic-ai-safety.html)

---

## Grok 4.20 Beta Is Out and Early Testers Like It

xAI launched Grok 4.20 in beta this week. The model is confirmed at approximately 500 billion parameters. Its provisional LMSYS Arena Elo is 1505–1535; with Heavy mode — which runs up to 16 parallel agents — projected scores reach 1540 to 1610+.

Early testers say it matches or beats GPT-5, Claude Opus 4.6, and Gemini 3 on practical coding, simulations, and agentic tasks. Elon Musk confirmed the parameter count to Brian Wang at NextBigFuture.

The model learns weekly — it updates with published release notes during beta, something no other frontier model has done at scale. Testers describe it as the first model that feels like working with a small expert team rather than one assistant. Hallucinations are reportedly lower, thanks to internal cross-validation across agents.

Full public benchmarks are expected around mid-March when the beta closes.

**Source:** [NextBigFuture](https://www.nextbigfuture.com/2026/02/xai-grok-4-20-is-a-big-improvement-practical-coding-simulations-and-real-world-agentic-tasks.html)

---

## Gemini 3.1 Has Leaked — SVG Is the Story

Google's Gemini 3.1 appears to have leaked before any official announcement, and the capability getting attention is vector image generation. SVG outputs from the model are circulating in developer communities and drawing comparisons to professional design tools.

Google DeepMind also officially released Lyria 3 this week — its most advanced music generation model to date. Google hasn't confirmed or commented on the Gemini 3.1 leak.

**Source:** [YouTube AI News Roundup](https://www.youtube.com/watch?v=xTNya7DjAgE)

---

## Google Reports State Hackers Are Using Gemini for Cyberattacks

Google published a report this week confirming that state-backed hacking groups from China, Iran, Russia, and North Korea are using Gemini to run cyberattacks. The scope is wide: reconnaissance, phishing lures, malware development, and vulnerability testing, all accelerated by Gemini.

One Iranian group used the model to "significantly augment" its reconnaissance capabilities against specific targets. The same countries are also using Gemini for information operations — generating fake articles, fabricated personas, and political propaganda.

Google says it has been blocking access where it can, but adversarial groups adapt fast.

**Sources:** [Tom's Hardware](https://www.tomshardware.com/tech-industry/cyber-security/google-reports-that-state-hackers-from-china-russia-and-iran-are-using-gemini-in-all-stages-of-attacks), [The Record](https://therecord.media/nation-state-hackers-using-gemini-for-malicious-campaigns)

---

## Anthropic Banned Third-Party OAuth — Then Blamed It on a Typo

Anthropic updated its Claude Code documentation this week to say that OAuth tokens from Free, Pro, and Max accounts "cannot be used in any product, tool, or service" outside Anthropic's own. That includes the Agent SDK, OpenClaw, NanoClaw, Zed, and any other tool built around Claude account authentication. The policy also bars third-party developers from building products that route users through consumer subscription credentials. Developers building on Claude are now required to use API keys.

Community reaction was immediate. Multiple developers published posts about switching their stacks. One user, after his OpenClaw workflow stopped working, wrote: "my entire workflow ground to a halt." An Anthropic employee stepped in on r/ClaudeAI to say the update was a "botched docs update" and personal Claude Code use is still fine — but the documentation says what it says.

The contrast with OpenAI is the part users keep bringing up. OpenAI explicitly allows Pro subscribers to use their tokens in third-party tools. On Hacker News, one commenter noted that OpenAI "openly encourages users to use their subscription with third-party tools like opencode and OpenClaw."

Whether Anthropic intended to draw this line or stumbled into it, the line is there.

**Sources:** [r/ClaudeAI](https://www.reddit.com/r/ClaudeAI/comments/1r8ecyq/), [The New Stack](https://thenewstack.io/anthropic-agent-sdk-confusion/), [Dave Swift](https://daveswift.com/claude-trouble/)
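
For developers rebuilding around the new rule, the practical change is swapping a subscription OAuth token for a Console-issued API key. Below is a minimal sketch of that path with the official `anthropic` Python SDK; the model id is illustrative rather than a confirmed identifier.

```python
# Minimal sketch of the API-key path Anthropic now requires for
# third-party tooling, using the official `anthropic` Python SDK.
# Assumptions: ANTHROPIC_API_KEY is a key issued from the Anthropic
# Console (not a claude.ai subscription login), and the model id below
# is illustrative only.
import os

import anthropic

client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

message = client.messages.create(
    model="claude-opus-4-6",  # illustrative; substitute your target model
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize this changelog entry."}],
)

print(message.content[0].text)
```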
