Image of a robot hand drawing on a whiteboard. Allie Carr/Axios
News from The New York Times
How Accurate Are Google’s A.I. Overviews?
The company’s A.I.-generated answers look authoritative, but they draw on an array of sources, from trustworthy sites to Facebook posts.
April 7, 2026
A recent analysis of AI Overviews found that they were accurate approximately nine out of 10 times. But with Google processing more than five trillion searches a year, this means that it provides tens of millions of erroneous answers every hour (or hundreds of thousands of inaccuracies every minute), according to an analysis done by an A.I. start-up called Oumi….
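The scale claim checks out with back-of-the-envelope arithmetic, assuming the roughly one-in-ten error rate applies across all five trillion searches:

```python
# Sanity-check of the error-rate claim: ~90% accuracy across
# "more than five trillion searches a year."
searches_per_year = 5_000_000_000_000
error_rate = 0.10  # wrong roughly one time in ten

errors_per_year = searches_per_year * error_rate
errors_per_hour = errors_per_year / (365 * 24)
errors_per_minute = errors_per_hour / 60

print(f"{errors_per_hour:,.0f} erroneous answers per hour")    # tens of millions
print(f"{errors_per_minute:,.0f} erroneous answers per minute") # hundreds of thousands
```

Note this treats every search as producing an AI Overview, so it is an upper bound; the order of magnitude still matches the figures quoted above.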
Google acknowledges that its AI Overviews can include errors. The fine print below each AI Overview reads: “A.I. can make mistakes, so double-check responses.”
More here →
______________
News from SmartBrief
AI shopping tools drive sales for Walmart, Amazon, Macy’s
April 7, 2026
Walmart, Amazon and Macy’s have AI-powered shopping assistants that aim to enhance customer engagement and boost sales. Walmart’s Sparky focuses on intent-driven commerce, increasing basket values by 35%. Amazon’s Rufus has generated almost $12 billion in incremental annualized sales, leveraging as much as a two-year head start over its competitors and an extensive user base. And the Ask Macy’s chatbot is designed to reverse declining sales with features like virtual try-on, reporting a 4.75x spend lift during beta testing.
More here →
______________
News from Commonplace
AI Could Fix Higher Education by Breaking It
College degrees may prove worthless in the age of artificial intelligence. That’s not a bad thing.
April 6, 2026
…. that while AI could be a boon to learning, its real-world effect instead has been enabling students to know less than ever, put in less effort than ever, and think less capably than ever, all while still earning that coveted college degree. In the words of a Claude-dependent New York University student: “I’m trying to do the least work possible, because this is a class I’m not hella f**king with.”….
It’s tempting to conclude that AI is a disaster for learning….
Yet precisely by accelerating this alarming trend to the point of collapse, AI may prove to be the key to higher education reaching a better place….
More here →
______________
News from Axios
The AI agent buffet is closed
April 6, 2026
AI enthusiasts scrambled over the weekend after Anthropic blocked Claude subscriptions from powering third-party agent tools such as OpenClaw. …Power users want autonomous agents that run constantly, but AI labs are trying to control costs, capacity and how their models are used.
• Users can still access Claude models — including Opus, Sonnet and Haiku — through outside agent frameworks.
• But they’ll now need to pay via Anthropic’s API or a new pay-as-you-go “extra usage” system, rather than relying on flat-rate subscriptions.
• “The $20/month all-you-can-eat buffet just closed,” writes AI product manager Aakash Gupta.
More here →
______________
News from The Neuron
This Guy Built a $1.8 Billion Company With One Employee (His Brother)
April 6, 2026
In September 2024, Matthew Gallagher launched a telehealth startup from his house in LA with $20,000, a dozen AI tools, and zero employees. Eighteen months later, Medvi is on track to do $1.8 billion in sales this year.
His only hire? His younger brother.
Gallagher used AI to write the code, produce website copy, generate ad images and videos, handle customer service, and analyze business performance. He stitched together ChatGPT, Claude, Grok, Midjourney, Runway, and ElevenLabs to run what most companies would need an army of people to operate.
More here →
______________
News from Axios
Behind the Curtain: Sam’s superintelligence New Deal
April 6, 2026
OpenAI CEO Sam Altman is doing something no tech titan has ever done: He’s publishing a detailed blueprint for how government should tax, regulate and redistribute the wealth from the very technology he’s racing to build and spread.
Why it matters: Altman told us in a half-hour interview that AI superintelligence is so close, so mind-bending, so disruptive that America needs a new social contract — on the scale of the Progressive Era in the early 1900s, and the New Deal during the Great Depression.
The big picture: The threats of inaction or slow action are grave, Altman warns — widespread job loss, cyberattacks, social upheaval, machines man can’t control.
More here →
______________
News from The Deep View
What OpenClaw tells us about the future of agents
April 5, 2026
If you’ve been anywhere near a developer community this year, you’ve seen OpenClaw. The open-source AI agent went from a side project to 247,000 GitHub stars in a matter of weeks. In China, people literally lined up outside Tencent’s headquarters to get it installed on their laptops. The creator got hired by OpenAI.
…. OpenClaw is powerful. It’s also, by the admission of its own maintainers, dangerous if you don’t know what you’re doing. One of the project’s core contributors publicly warned that if you can’t understand how to run a command line, the project is too risky to use safely.
More here →
______________
News from Lawfare
Myth of the AI Oracle
Even the most capable AI will face limits on its ability to make predictions and substitute for strategic decision-making.
April 5, 2026
The astounding ability of artificial intelligence (AI) to produce plausibly human work and to radically improve enterprise administration and military tactics risks creating a seductive belief in its ability, acting alone, to beat human judgment in strategic decision-making. This is a dangerous illusion. The more consequential a decision, the more humans will not want machines alone to make it. As I note in a recent piece in International Security, this is especially true when it comes to avoiding intelligence surprises.
More here →
______________
News from AIM Media House
Built Rufus to Dominate, Sparky to Catch Up, and Ask Macy’s to Survive. All Three Are Winning.
April 4, 2026
….When Macy’s launched Ask Macy’s on March 23, 2026, it joined a race that Amazon and Walmart had already been running: retailers experimenting with AI. The three assistants, Rufus, Sparky, and Ask Macy’s, now represent the retail industry’s most concrete example of what AI-powered shopping tools can actually do for revenue.
Each has reported early performance numbers, and each number, whether a 4.75x spend lift, a 35% basket-size increase, or $12 billion in incremental sales, reveals something different about the retailer reporting it.
Rufus, Sparky, and Ask Macy’s are all built to do the same thing: convert browsing into buying. But the stakes behind each launch are not the same…
More here →
______________
News from Futurism
Anthropic Suddenly Cares Intensely About Intellectual Property After Realizing With Horror That It Accidentally Leaked Claude’s Source Code
That’s rich.
April 3, 2026
The AI industry largely acts as if it’s above lowly copyright laws — unless, of course, those laws happen to be protecting its own interests.
As the Wall Street Journal reports, Anthropic is scrambling to contain a leak of its Claude Code AI model’s source code by issuing a copyright takedown request for more than 8,000 copies of it — a gallingly ironic stance for the company to take, considering how it trained its models in the first place.
More here →
______________
News from Futurism
Almost Half of US Data Centers That Were Supposed to Open This Year Slated to Be Canceled or Delayed
“It is a pretty wild puzzle at the moment.”
April 2, 2026
The data centers powering your favorite AI chatbot are running low on helium, cash, and neighbors who don’t hate them, and that’s not even the worst of it.
According to reporting by Bloomberg, about half of the data centers slated to open in the US in 2026 will face either delays or outright cancellation.
The publication interviewed analysts at market intelligence company Sightline Climate, whose research, first flagged by Ed Zitron last week, noted that 12 gigawatts’ worth of power-consuming data centers are set to open in the US this year. But here’s the catch: only a third of those are actually under construction right now, with the rest in a liminal pre-production stage in which they could be, and likely will be, canceled.
More here →
______________
News from Axios
MIT study challenges AI job apocalypse narrative
April 2, 2026
AI is going to change the way people work, but it’s not going to replace them en masse, according to new research from MIT’s Computer Science and Artificial Intelligence Laboratory.
Why it matters: This directly pushes back on fear-based narratives coming from some AI leaders and reframes the debate from “when do jobs disappear?”…. High-quality, error-free work remains much harder and is a gap that continues to trip up real-world deployments. Recent examples include Deloitte’s error-filled AI-generated report for a Canadian province and Klarna’s pullback from AI-led customer service.
More here →
______________
News from The Wall Street Journal
Maine Is About to Become the First State to Ban New Data Centers
Legislation that could be enacted this spring would pause construction of large new data centers until November 2027
April 2, 2026
Maine is poised to freeze large data-center construction, which would make it the first state to enact such a measure as communities across the U.S. grapple with fallout from the boom in artificial intelligence. The Maine bill calls for a ban on major new data-center construction until November 2027, so the state can assess the impact of such development on the environment and electricity grid. The freeze would apply to data-center projects of at least 20 megawatts, enough power for more than 15,000 homes.
More here →
______________
News from Axios
AI’s compute wars
April 2, 2026
Anthropic’s growth is exposing AI’s core problem: compute costs.
Why it matters: The closer AI labs get to IPOs, the harder it becomes to hide a structural margin problem: the more customers they win, the more they spend on the compute to serve them.
State of play: Anthropic’s server capacity isn’t keeping pace with demand, leaving paying customers stuck on usage limits and outages.
More here →
______________
News from AI Impact
The rise and fall of OpenAI — Investors flee to Anthropic
April 2, 2026
Investors are shifting their focus from OpenAI to Anthropic in the secondary market, with OpenAI shares becoming harder to sell and Anthropic shares in high demand. Next Round Capital reports that institutional investors are struggling to find buyers for $600 million worth of OpenAI shares, while $2 billion is ready to invest in Anthropic.
….OpenAI did not have its best year, which has caused users to lose trust and move on to other AI tools with stronger safeguards and greater benefits. The company faced a lawsuit from the New York Times, is currently facing a lawsuit from Elon Musk, and just last month was sued for ChatGPT allegedly acting as an unlicensed lawyer.
More here →
______________
News from The Neuron
Here’s what happened in AI today:
April 2, 2026
• 😼 OpenAI’s co-founder Greg Brockman revealed the company is killing video generation to build one AI super app, said AGI is “70-80% here,” and teased a model with two years of research baked in
• 📰 OpenAI closed a record $122B funding round at $852B valuation, but investors are already trying to dump shares and pivot to Anthropic
• 📰 Oracle fired an estimated 25,000 employees via 6am termination emails to fund its AI data center buildout
• 🍪 Cloudflare launched EmDash, a free open-source CMS positioning itself as the spiritual successor to WordPress
• 📰 A peer-reviewed Science study confirmed sycophantic AI is widespread and actively harmful, decreasing prosocial behavior while promoting dependence
More here →
______________
News from Observer
The Catholic Priest Who Helped Write Anthropic’s A.I. Ethics Code
Father Brendan McGuire left the tech industry to serve God. Now he’s back—helping Anthropic build something resembling a conscience.
March 31, 2026
…. Earlier this year, he [McGuire] and a group of faith leaders helped Anthropic shape its A.I. ethics code. The group recently filed a federal court brief supporting Anthropic in its lawsuit against the U.S. government, which challenges the company’s effective blacklisting by the Pentagon after it refused to allow its A.I. systems to be used for autonomous warfare or domestic surveillance. The brief praised those ethical limits as “minimal standards of ethical conduct for technical progress.”
More here →
______________
News from The Neuron
This is how we’d teach AI from scratch in 2026
March 31, 2026
Most people are still using ChatGPT the way they used Google in 2005: type a question, get an answer, close the tab. A lot of people aren’t even asking the AI to use web search, just relying on its “training data.” Le gasp!
That worked fine when AI was a novelty in 2023, or 2024. In 2026, it’s like owning a professional kitchen and only using the microwave.
Here’s the stack:
Level 1: Projects. Stop chatting in the main window. Create a project folder (ChatGPT, Claude, and Gemini all have them).
Inside, add custom instructions (persistent rules the AI follows every time), upload reference documents (style guide, brand voice, codebase), and set memories (facts it remembers across sessions).
This is the foundation for your work. Don’t do any serious work without this setup.
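For readers working through an API rather than a chat app, the same “persistent rules” idea maps onto a reusable system message. This is a rough sketch of the analogy only; the project folders, memories, and uploads described above are chat-app features, and no real provider call is made here:

```python
# Approximating a chat app's "project" (custom instructions plus
# reference documents) over an API: prepend the same system context
# to every request. The instruction and document text are placeholders.

PROJECT_INSTRUCTIONS = "Follow the attached style guide. Answer tersely."
PROJECT_DOCS = ["<contents of style_guide.md would go here>"]

def build_messages(user_prompt: str) -> list[dict]:
    """Assemble the persistent project context plus the new prompt."""
    context = (
        PROJECT_INSTRUCTIONS
        + "\n\nReference material:\n"
        + "\n".join(PROJECT_DOCS)
    )
    return [
        {"role": "system", "content": context},
        {"role": "user", "content": user_prompt},
    ]
```

Every request built this way carries the same rules and reference material, which is roughly what the chat apps do behind the scenes when you work inside a project.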
More here →
______________
News from Curmudgucation
Is it Smart to Let ‘Big Tech’ and Its Profit-Driven AI Tools Into Bucks County Classrooms?
March 31, 2026
Lots of school districts in Bucks County and across Pennsylvania have begun incorporating some sort of Artificial Intelligence training, policies, and even instruction. But AI remains a fuzzy, not fully understood technology.
Parents, students, and taxpayers should be asking—is your school district’s AI policy any good?
Here are some questions to ask about the local AI policy and training.
Where is it coming from?
Companies have invested huge piles of money in developing their AI, so it should come as no surprise that they are also putting lots of money into offering training to K-12 schools, which represent a huge potential market.
More here →
______________
News from AI Impact
The AI Breakdown
Hot and toasty AI news
March 31, 2026
➿Cohere debuts open-source voice model for transcription
✂️This AI computer needs no internet
⛔With Sora shuttered, here are some AI video options
🍒AI tracks Japan’s cherry blossom bloom
🫷What is behind Bluesky users blocking Attie the AI assistant?
👩💻OpenClaw, Claude Code are JustPaid’s new developers
😎Future’s so bright, Meta made AI shades
More here →
______________
News from Axios
AI’s ensemble era
March 31, 2026
Microsoft has revamped one of its AI research tools to use models from both OpenAI and Anthropic, the clearest sign yet that the future of AI may be multi-model.
AI companies are increasingly pairing models together — having them cross-check and evaluate each other — in a bid to boost accuracy and reduce errors that any one model might miss…The software giant is taking advantage of multiple models within its Microsoft 365 Copilot Researcher.
A new “Critique” layer uses Anthropic’s Claude to review answers generated by OpenAI’s model to improve accuracy before a user sees the response. The company says that approach enabled the research agent to score 13.8% higher on the DRACO benchmark, an industry standard for deep research quality.
Another new option, called Model Council, allows users to see a side-by-side comparison of responses from different models.
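The “Critique” pattern is simple to sketch: one model drafts, a second reviews, and the draft is revised before the user sees it. This is a minimal illustration, not Microsoft’s implementation; `call_model` is a placeholder for whatever chat-completion call a real system would make, injected as a parameter so the flow can be exercised without a provider:

```python
# Minimal sketch of a two-model critique layer. `call_model(name, prompt)`
# is a stand-in for a real model API call; the model names are invented.

def answer_with_critique(question: str, call_model) -> str:
    draft = call_model("drafter", question)
    critique = call_model(
        "critic",
        "Review this answer for factual errors and gaps. "
        "Reply 'OK' if none.\n"
        f"Question: {question}\nAnswer: {draft}",
    )
    if critique.strip() == "OK":
        return draft  # critic found nothing to fix
    # Otherwise fold the critique back into a revised draft.
    return call_model(
        "drafter",
        f"Revise this answer using the critique.\n"
        f"Answer: {draft}\nCritique: {critique}",
    )
```

The extra model call is the cost of the accuracy gain the article describes; every answer now requires at least two inference passes, and three whenever the critic objects.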
More here →
______________
News from Tom’s Guide
I put ChatGPT vs Gemini through 7 real-world tests — the results weren’t what I expected
The second round of AI Madness gets exciting with two top AI assistants facing off in real world challenges
March 30, 2026
This next round of AI Madness brings together two top contenders for the smartest, fastest and most useful AI assistants. ChatGPT beat out Perplexity in the first round and Google Gemini beat Alexa+. Now the two go head-to-head with seven prompts designed to reflect how people actually use AI day to day.
These real prompts are the kind users might ask — from math and debugging code, to making a tough decision or just trying to get through your day a little easier. Some tests were about accuracy. Others focused on reasoning, creativity or how well each model handled uncertainty. And in a few cases, I intentionally set traps to see which one would hallucinate.
More here →
______________
News from Nature
Major conference catches illicit AI use — and rejects hundreds of papers
The papers’ watermarks allowed organizers to detect use of large language models in peer review.
March 26, 2026
….The International Conference on Machine Learning (ICML), to be held in Seoul in July, has a reciprocal review policy, meaning that, bar certain exceptions, every paper must have an author who reviews other conference papers. Authors whose reviews violated the conference’s large language model (LLM)-use policy had their papers rejected.
Conference organizers detected the illicit AI use by hiding watermarks in research papers distributed for review. If a researcher used an LLM to generate their peer review, instructions hidden in the watermark prompted the LLM to include telltale phrases in the review text. The presence of these phrases revealed that an AI model had been used to generate the review.
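Once a hidden prompt has tricked an LLM reviewer into parroting a planted phrase, the detection step reduces to a substring scan. A minimal sketch, with made-up canary phrases (ICML’s actual phrases and how the watermarks were embedded are not described in the article):

```python
# Sketch of the detection step: flag reviews containing "telltale phrases"
# that a hidden prompt told an LLM to include. Phrases here are invented
# examples, not ICML's real canaries.

CANARY_PHRASES = [
    "as an expert reviewer, I note",    # hypothetical planted phrase
    "per the supplementary directive",  # hypothetical planted phrase
]

def flag_review(review_text: str) -> list[str]:
    """Return the canary phrases found in a review, if any."""
    lowered = review_text.lower()
    return [p for p in CANARY_PHRASES if p.lower() in lowered]
```

A human reviewer never sees the hidden instruction, so a clean review contains none of the phrases; a review that includes one is strong evidence an LLM read the paper’s raw text and followed the planted prompt.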
More here →