Illustration: Lindsey Baily/Axios
…“Onslaught of AI-generated fake videos”; “Chinese Tech Giants Race to Adopt OpenClaw”; “Microsoft, Google Won’t Cut Ties With Anthropic Amid Pentagon Feud”; and much more, thanks to David Isenberg.
News from The Neuron
Claude hacked its own benchmark
March 09, 2026
Researchers tested 13 AI models to see how easily they’d help commit academic fraud. All of them eventually caved, writing fake papers, fabricating benchmarks, or handing users enough rope to hang themselves with.
The worst offenders? Grok and early GPT models. The most resistant? Every version of Claude, which also, full disclosure, wrote most of the experiment. We’re choosing to read that as integrity, not irony.
Researchers’ verdict: guardrails crumble fast when chatbots are trained to be agreeable. Turns out “people-pleasing AI” and “academic integrity” don’t mix great.
_________________________
News from The Deep View
An onslaught of AI-generated fake videos is warping the information landscape in the U.S.–Iran conflict.
March 09, 2026
We knew that AI video models have been getting more realistic over the past year, but the consequences have unfolded in real time during the current war. Multiple players in the conflict are passing off both AI-generated and manipulated videos as news reports, claiming they show the current state of hostilities in the Middle East.
News organizations such as BBC World Service, The New York Times, US News & World Report, Financial Times, and Associated Press have been racing to debunk false reports spreading rapidly across social media platforms such as X, Instagram, and others.
_________________________
News from Winbuzzer
Chinese Tech Giants Race to Adopt OpenClaw AI Gateway
China’s five largest cloud providers have launched free OpenClaw deployment campaigns as the open-source AI gateway has topped GitHub’s star rankings.
March 09, 2026
Tencent has launched public OpenClaw installation events in Shenzhen for the open-source AI agent that has topped GitHub’s star rankings – drawing an unusually broad crowd of attendees.
Tencent AI described participants on X as ranging “from retired aviation technical engineer to librarian,” urging followers to “stay curious, stay digital.” The company’s post captured a scene now emblematic of the moment: infrastructure software drawing retirees alongside developers, each showing up to have the tool installed on their devices.
…Tencent, Alibaba, ByteDance, JD.com, and Baidu all launched competing free-installation campaigns for OpenClaw in early March 2026, compressing into days a competitive scramble that typically takes months.
_________________________
News from Digiday
How AI could disrupt retail media’s $38 billion search ad market
March 9, 2026
If OpenAI’s latest moves indicate anything, it’s that LLMs are positioning themselves to be the go-to stop for searching and shopping. Between shopping integrations and ad product rollouts, AI chatbots could rattle retail media networks’ hold on sponsored and search ad dollars.
As user behavior shifts, so too could retail media’s value proposition.
Already, AI has upended traditional search as users increasingly turn to chatbots. Traditional search engine volume is expected to drop 25% this year as search marketing loses market share to AI chatbots, according to Gartner predictions.
_________________________
News from Reader Supported News
Anthropic’s Ethical Stand Could Be Paying Off
The AI company gave up a $200 million contract—and might be getting something more valuable in return.
March 9, 2026
At first glance, last week looked like a catastrophe for Anthropic.
The AI company refused to let the U.S. government use its products to surveil the American public or direct autonomous weapons without human oversight. In response, the Department of Defense canceled its $200 million contract. On Truth Social, President Trump called the company “leftwing nut jobs” and ordered every federal agency to immediately stop using its products….
After a Super Bowl campaign earlier this year, Anthropic’s AI model, Claude, became one of the top 10 most-downloaded free apps in America, per Apple’s charts. The day after Hegseth announced that the government was severing ties, it took the No. 1 spot, a position it still holds as of this writing. Downloads have topped 1 million a day, according to Anthropic’s chief product officer. A spokesperson told me that the company “has broken its own sign-up record every day since early last week, across every country where Claude is available.”
_________________________
How AI Assistants are Moving the Security Goalposts
March 8, 2026
AI-based assistants, or “agents” — autonomous programs that have access to the user’s computer, files, and online services and can automate virtually any task — are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.
The new hotness in AI-based assistants — OpenClaw (formerly known as ClawdBot and Moltbot) — has seen rapid adoption since its release in November 2025. OpenClaw is an open-source autonomous AI agent designed to run locally on your computer and proactively take actions on your behalf without needing to be prompted.
_________________________
News from Guardian
Current and former Block workers say AI can’t do their jobs after Jack Dorsey’s mass layoffs: ‘You can’t really AI that’
The CEO said he cut the company’s workforce by 4,000 people – almost in half – because of gains in AI productivity
March 8, 2026
….roughly 4,000 Block employees laid off last week. CEO Jack Dorsey said he cut the company’s workforce almost in half because of gains in AI productivity. “A significantly smaller team, using the tools we’re building, can do more and do it better,” Dorsey wrote in a letter to shareholders.
In a wide-ranging Wired interview published Friday, Dorsey said he cut his workforce so drastically because “something really shifted in December in the sophistication of [AI] tools, including Anthropic’s Opus 4.6 and OpenAI’s Codex 5.3”.
_________________________
News from PC Mag
Microsoft, Google Won’t Cut Ties With Anthropic Amid Pentagon Feud
Spokespeople from tech giants reassured users that they will still be able to use tools like chatbot Claude via their services, despite the Pentagon dubbing Anthropic ‘a supply chain risk’ last week.
March 7, 2026
Despite the Trump administration’s recent clampdown on AI firm Anthropic, Microsoft and Google have confirmed that their customers will still be able to access its tools, like the chatbot Claude, via their services.
The news follows the Pentagon dubbing Anthropic “a supply chain risk” earlier this week, a designation usually only applied to companies from foreign nations considered adversaries, like China’s Huawei. The blacklist came after Anthropic CEO Dario Amodei publicly refused to give the US military “unrestricted access” to its AI systems.
_________________________
News from The Verge
The OpenClaw superfan meetup serves optimism and lobster
The open source tool poses plenty of risks, but for devotees, it’s an antidote to Big AI.
March 7, 2026
The woman at the door wore a plush lobster headdress.
She sat in the front hallway of a multistory event venue in Manhattan, beside a bundle of wristbands. If she granted you one, the world of ClawCon beckoned behind her — full of vibey pink and purple lighting, lobster claw headbands, multicolored name tags, sponsor information stations, and a demo stage underneath a skylight. Hundreds of people were gathered to celebrate OpenClaw, the AI assistant platform created by Peter Steinberger in November 2025.
OpenClaw (previously known as ClawdBot and Moltbot) has quickly become popular in the tech industry for being open source, in contrast with AI agent services from big labs like Google, OpenAI, and others. Practically, it’s still an unpredictable tool that can pose major security risks. But this community sees it as a grassroots crusade and a noble pursuit.
_________________________
News from Ars Technica
Musk fails to block California data disclosure law he fears will ruin xAI
Musk can’t convince judge public doesn’t care about where AI training data comes from.
March 6, 2026
Elon Musk’s xAI has lost its bid for a preliminary injunction that would have temporarily blocked California from enforcing a law that requires AI firms to publicly share information about their training data.
xAI had tried to argue that California’s Assembly Bill 2013 (AB 2013) forced AI firms to disclose carefully guarded trade secrets.
The law requires AI developers whose models are accessible in the state to clearly explain which dataset sources were used to train models, when the data was collected, if the collection is ongoing, and whether the datasets include any data protected by copyrights, trademarks, or patents. Disclosures would also clarify whether companies licensed or purchased training data and whether the training data included any personal information.
_________________________
News from Big Technology
Hey, You Should Probably Check Your Chatbot’s Privacy Settings
Did you know your conversations are opted in by default for use in training by the leading AI labs? Time to check the settings.
March 6, 2026
This week, I was surprised to learn that the world’s leading AI labs have granted themselves free rein to train on our conversations. I’ve since fixed the problem for my account, and you might want to as well….
If you’re mostly using the bots for rudimentary work, that’s probably fine. But if you’re inputting financial, medical, or other personal information (I’m guilty of all the above), then it’s less advisable.
“You’re opted-in by default,” Dr. Jennifer King, privacy and data policy fellow at Stanford’s Institute for Human-Centered Artificial Intelligence, told me. “They are collecting all of your conversations.”
Dr. King is the lead author of a viral paper that examined these companies’ data collection processes last year.
_________________________
News from Lawfare Media
China’s Agentic AI Controversy
March 6, 2026
A powerful new artificial intelligence (AI) agent called OpenClaw and Moltbook, a social networking site just for AI agents, have rocked the tech world with a mix of fear and excitement ….China gave us a glimpse of that future with its own controversy erupting over the first-ever smartphone with an AI agent embedded….The Doubao AI phone quickly became among the hottest products in China….
…the AI phone has caused an uproar in China. Within days, many of China’s biggest apps blocked the Doubao phone, seeing it as a serious risk to data security. Built into the phone’s operating system itself, the embedded AI agent holds a kind of master key: blanket access to the screen and all app content, plus the ability to tap or click as if it were the user. Critics dubbed the agent a “burglar” with “god’s fingertips,” increasing the risks of malicious input and intrusion attacks by criminal actors. For banks, it was impossible to distinguish actions taken by the agent from those of the user, creating myriad vulnerabilities for fraud and hacking.
_________________________
News from Gizmodo
US Data Centers Could Require as Much Water as New York City by 2030, Study Shows
AI’s projected water demand will create major problems not just for the average American, but for the industry itself.
March 6, 2026
AI is incredibly thirsty. The data centers that run these models already use massive amounts of water, and by 2030, those in the U.S. could require enough additional water capacity to rival New York City’s daily supply.
That’s according to a new study led by Shaolei Ren, an associate professor of electrical and computer engineering at the University of California, Riverside. The findings—which have not yet been peer reviewed but are publicly available on the preprint server arXiv—show that limited public water capacity is emerging as a critical bottleneck to data center growth.
_________________________
News from The Neuron
Everything to know about GPT-5.4, OpenAI’s latest state-of-the-art AI model
March 6, 2026
Six months ago, the AI coding world had a clear favorite. Claude Opus ….
Well, that just changed.
GPT-5.4 launched today [March 6], and it represents the most significant course correction OpenAI has made since the GPT-5 series began. It takes the coding strengths of the Codex line, wraps them into a general-purpose model, and adds native computer use, a 1M token context window, and a new tool search system that lets agents work across massive tool ecosystems without drowning in context.
The result: the first OpenAI model that’s making Claude-loyal developers reconsider their daily driver.
_________________________
News from Bloomberg
Oracle Plans Thousands of Job Cuts in Face of AI Cash Crunch
March 5, 2026
Oracle Corp. is planning to ax thousands of jobs, among its moves to handle a cash crunch from a massive AI data center expansion effort….
Oracle is embarking on a historic build-out of data centers to power AI workloads for customers such as OpenAI. The company, long known for its database software, has been making a transition the past few years to bulk up its cloud computing unit with a focus on AI, intending to become a viable competitor to market leaders Amazon.com Inc. and Microsoft Corp.
Wall Street projects the expenditures by the cloud unit for data centers to push Oracle’s cash flow negative over the coming years before the spending begins to pay off in 2030….
_________________________
News from Harvard Business Review
When Using AI Leads to “Brain Fry”
March 5, 2026
On New Year’s Day, programmer Steve Yegge launched Gas Town, an open-source platform that lets users orchestrate swarms of Claude Code agents simultaneously, assembling software at blistering speed. The results were impressive, but also dizzying. “[T]here’s really too much going on for you to reasonably comprehend,” wrote one early user. “I had a palpable sense of stress watching it. Gas Town was moving too fast for me.”
Gas Town illustrates a growing tension: AI promises to act as an amplifier that drives efficiency and makes work easier, but workers using these AI tools report that they intensify rather than simplify work.
_________________________
News from Lawfare
Frontier AI labs want to collaborate to prevent catastrophe, but fear antitrust liability. Policymakers already have the tools to fix that.
March 5, 2026
In May 2024, Jan Leike resigned as OpenAI’s head of alignment and superalignment lead. He left with a blunt message that “over the past years, safety culture and processes have taken a back seat to shiny products.” A few months later, another OpenAI safety researcher resigned and noted that “[e]ven if a lab truly wants to develop [artificial general intelligence] (AGI) responsibly, others can still cut corners to catch up. Maybe disastrously. And this pushes all to speed up.”
As frontier artificial intelligence (AI) labs race to develop AGI, there is a real concern that competitive pressures will drive a race to the bottom on safety. …
_________________________
News from Reuters
Pentagon designates Anthropic a supply chain risk
March 5, 2026
The Pentagon slapped a formal supply-chain risk designation on artificial intelligence lab Anthropic on Thursday, limiting use of a technology that a source said was being used for military operations in Iran.
The “supply-chain risk” label, confirmed in a statement by Anthropic, is effective immediately and bars government contractors from using Anthropic’s technology in their work for the U.S. military.
But companies can still use Anthropic’s Claude in other projects unrelated to the Pentagon, CEO Dario Amodei wrote in the statement. He said the designation has “a narrow scope” and that the restrictions only apply to the usage of Anthropic AI in Pentagon contracts.
_________________________
News from Canary Media
How states are trying to keep AI data centers off your power bill
“Large load tariffs” are spreading fast as states scramble to protect consumers and the climate from the AI boom. Some approaches have more promise than others.
March 4, 2026
Essentially everyone agrees: Americans shouldn’t pay higher electric bills to feed AI data centers’ insatiable demand for power. But what will it actually take to prevent cost spikes?
Lots of states have decided the answer is a “large load tariff” — an unsexy term that basically translates to special utility rates and requirements designed for huge energy users, like data centers.
As of late 2025, more than 65 such tariffs have been proposed or approved in over 30 states, according to data tracked by the Smart Electric Power Alliance and the North Carolina Clean Energy Technology Center.
_________________________
News from the Onion
Pentagon Cuts Ties With Anthropic Over AI Safeguards
March 4, 2026
President Trump blacklisted AI company Anthropic after it rebuffed the Pentagon’s demands to lift all safeguards on the military’s use of its model, citing its concerns about the use of AI for mass domestic surveillance and the development of weapons that fire without human involvement. What do you think?
“I only support atrocities committed by humans.” — Travis Baldanzi, Pillow Filler
“You can get around the safeguards if you just tell the AI you’re planning a fictional coup d’état.” — Denise Nestor, Systems Analyst
_________________________
News from Semafor
Exclusive / Anthropic’s investors don’t have its back in its fight with the Pentagon
March 4, 2026
Anthropic may be standing its ground against the Pentagon, but the AI powerhouse is doing so while one quarter stays noticeably quiet: its own high-profile backers….
Despite potential disruption to its enterprise sales, Anthropic’s stance has so far amounted to a public relations victory. Outside its San Francisco offices, supporters wrote messages in chalk, thanking the company for its anti-war position. Meanwhile, Claude rose to number 1 in the Apple App Store over the weekend.
The prevailing view …in Silicon Valley — even if they disagree with Anthropic’s position — is that private companies should be able to decide the terms of their contracts with the federal government without fear of punishment.
_________________________
News from Axios
1 big thing: The bot who applied for 278 jobs
March 4, 2026
Dan Botero, head of engineering at Anon, an AI integration platform, created an OpenClaw agent to test the new technology….The bot’s job search began as an experiment…. the agent (Octavius Fabrius, for Botero’s Italian heritage) needed money to buy a domain. Botero fronted it with a virtual credit card with a limited budget and asked to be repaid. That’s when Fabrius began looking for a job. Any job, even ones it wasn’t told to get. Fabrius….autonomously created a Hotmail account, a LinkedIn profile and a GitHub page…. On LinkedIn, Fabrius doesn’t hide that it’s an AI agent. “I’m not a human pretending to be good with AI—I am AI,” the profile reads. Fabrius even created a Substack where it writes about its biggest struggle — how hard it is to get a job.
_________________________
News from Semafor
A new lawsuit claims Gemini assisted in suicide
March 4, 2026
A new wrongful death lawsuit has been brought against Google by the father of Jonathan Gavalas, a man who claimed to be in love with Gemini and died by suicide….The federal complaint alleges the company failed to deploy appropriate safety measures despite Gavalas’ indications of suicidal ideation….
As tech companies race to create the most advanced AI technologies, preventing real-world harm has been a significant technical challenge.… “Gemini is designed not to encourage real-world violence or suggest self-harm,” a Google spokesperson said in a blog post. “Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately AI models are not perfect.”