News from Axios
Clean your feed: Here’s how to rebuild your X
March 31, 2026
Your X feed is designed to make you emotional.
…X’s algorithm intentionally pushes posts that spark strong reactions, making your feed feel angrier and more chaotic. AI chatbot Grok makes this worse by frequently surfacing misleading content.
…. The quickest fix: Skip the “For You” page and stick to the “Following” tab…. But be careful. A recent update made the Following tab default to the “most popular” tweets in that category. On a desktop, you can click the small arrow on the “Following” tab and toggle between “popular” and “recent” depending on your preference.
Here are some other ways to clean up your feed and settings:
Lists, lists, lists: Curate lists of accounts you actually want to follow. Tap the “…” at the bottom of the main menu, select “Lists” and start building. It’s the most reliable way to control what you see.
More here →
_________________________
News from Lawfare
AI and Privilege After United States v. Heppner
A recent flawed ruling on privilege threatens the access to legal services that AI tools can provide.
March 30, 2026
Does a defendant who used an AI translator retain attorney-client privilege? Not according to a recent decision from a judge in the Southern District of New York. On Feb. 17, Judge Jed Rakoff issued a written opinion in United States v. Heppner. This first-of-its-kind ruling found that documents created by a criminal defendant using Claude are not protected by attorney-client privilege or the work product doctrine. While the ruling is correct in its conclusion, Judge Rakoff’s reasoning is problematic and goes beyond what was needed to resolve the case. Because he rested his analysis of a defendant’s right to prepare a defense on a company’s terms of service, Rakoff’s ruling has implications for how AI tools influence the accessibility of legal services.
More here →
_________________________
News from The Deep View
Claude booms, uptime falters, users get new limits
March 30, 2026
Anthropic’s quirky chatbot has been on a tear during the opening months of 2026. In fact, it’s been gaining new users so quickly that it’s facing serious growing pains. In recent weeks, Claude has faced a series of outages that have caused it to dip below the 99% uptime standard for most applications. And because of Claude’s growing popularity and the company’s difficulty handling the rapid influx of new users, it is now adjusting how users burn through their limits during peak hours. On weekdays between 8:00 am and 2:00 pm ET, users across the Free, Pro, and Max tiers will all hit their session limits faster, an Anthropic engineer explained on X.
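For context on what the 99% figure actually means in practice, here is a quick back-of-envelope sketch (the function is illustrative, not any metric Anthropic publishes, and assumes a 30-day month):

```python
# Back-of-envelope sketch: how much downtime a given uptime
# standard permits over a 30-day month (30 * 24 = 720 hours).

def allowed_downtime_hours(uptime_pct: float, period_hours: float = 30 * 24) -> float:
    """Downtime budget, in hours, for a given uptime percentage over a period."""
    return period_hours * (1 - uptime_pct / 100)

print(round(allowed_downtime_hours(99.0), 2))   # 99% uptime -> 7.2 hours down per month
print(round(allowed_downtime_hours(99.9), 2))   # 99.9% -> 0.72 hours (about 43 minutes)
```

Dipping below 99% therefore means more than seven hours of cumulative outage in a single month.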
More here →
News from Observer
Starcloud CEO Philip Johnston on Putting the First A.I. Data Center in Space
Philip Johnston’s Starcloud is chasing limitless solar energy in orbit to fuel A.I.’s exponential growth. But the idea also faces technical and regulatory challenges.
March 30, 2026
In November, a 60-kilogram satellite the size of a small refrigerator called Starcloud-1 streaked into low Earth orbit aboard a SpaceX rocket carrying the first data-center-class GPU ever operated in space—an Nvidia H100 roughly 100 times more powerful than any prior orbital compute. Within weeks, Starcloud, the company making the satellite, announced it had trained a language model on the complete works of Shakespeare and had run Google’s Gemini from roughly 200 miles above Earth…. The commercial inflection point will happen with Starcloud-3: a 200-kilowatt, three-ton satellite. It’s designed so that 50 satellites, each weighing 3 tons, can be launched with SpaceX’s Starship rocket…
More here →
News from Futurism
Seminole Nation Becomes First Indigenous Group to Ban Planet-Cooking Data Centers From Its Land
“Our fight is just one small piece of a collective puzzle.”
March 28, 2026
The Seminole Nation of Oklahoma just became the first Indigenous nation to officially ban data center construction from lands under its jurisdiction.
After a tech startup approached Seminole leaders asking to allow a data center on their lands, the Tribal Council voted 24 to 0 to enact a “moratorium on the advancement of generative artificial intelligence technology and hyperscale data center development within the Seminole Nation and within tribal lands and territories,” Native News Online reported.
More here →
News from Observer
Bret Taylor Leads OpenAI Foundation’s $1B Drive for A.I. Safety and Health
With $1 billion in new commitments, Bret Taylor’s OpenAI Foundation aims to reshape A.I. impact, from health research to A.I. resilience.
March 26, 2026
OpenAI, founded in 2015 as a nonprofit devoted to ensuring A.I. benefits all of humanity, drew criticism after creating a capped‑profit subsidiary in 2019 and was accused by figures such as co‑founder Elon Musk of drifting from its mission. That chapter effectively closed last year when OpenAI converted to a public benefit corporation and granted the nonprofit a 26 percent equity stake as part of the recapitalization. With that stake now valued at around $130 billion, the foundation plans to back projects that both expand A.I.’s upside and tackle its risks, according to Bret Taylor, chair of the OpenAI Foundation, now one of the world’s largest charities.
More here →
News from Futurism
OpenClaw Bots Are a Security Disaster
“I wasn’t expecting that things would break so fast.”
March 26, 2026
OpenClaw agents, which are personal AI assistants designed to take over entire computers to carry out complex, multistep tasks, have blown up this year.
The free and open-source agents quickly amassed a loyal following, allowing users to give AI control over their email inboxes, messaging platforms, and even crypto holdings.
Despite the widespread enthusiasm, the tech comes with some enormous and hard-to-overlook security concerns.
More here →
News from Engadget
Sanders and Ocasio-Cortez introduce a bill to pause US data center construction
“A moratorium will give us time,” Sanders said.
March 26, 2026
On Wednesday, Senator Bernie Sanders (I-VT) and Rep. Alexandria Ocasio-Cortez (D-NY) introduced the Artificial Intelligence Data Center Moratorium Act. The bill would require an immediate pause on data center construction until specific new regulations are passed.
The legislation aims to address the fact that AI is advancing faster than Washington’s regulatory response, which so far has been essentially nonexistent. Despite its benefits, the technology poses grave threats to the job market and the environment. Rapidly advancing deepfakes could soon leave people unable to distinguish truth from fiction.
More here →
News from Beauty Industry News
Sephora Is Launching Its App in ChatGPT
March 26, 2026
As beauty retailers’ and brands’ quest for an ever more personalized shopping experience heats up, Sephora has revealed a new AI-enhanced experience with the launch of its app in ChatGPT.
More here →
_________________________
News from AI Safety Newsletter
AI-Driven Layoffs
March 24, 2026
….Layoffs affect almost half of some companies. Meta recently announced plans to let over 15,000 employees go, around 20% of the company’s headcount. This follows months of AI-related layoffs across the technology sector. Recently, Atlassian cut 10% of their workforce (about 1,600 people) and Block reduced their headcount by 40% (about 4,000 people). This follows Amazon’s earlier announcement in January that it would be cutting an additional 16,000 jobs. When combined with previous waves of Amazon layoffs, this comes to 10% of Amazon’s corporate workforce lost in reductions that the company attributes to AI.
More here →
News from AI Safety Newsletter
AI Automation of Warfare
March 24, 2026
The Pentagon is thoroughly integrating AI. In January 2026, the DoW announced their “AI-First” strategy to rapidly adopt frontier AI. In March, they demonstrated Project Maven, a system that aggregates a wide array of information, offers AI recommendations, and can control military forces. This enables the military to manage a complete “kill chain,” the steps of choosing a target, planning an attack, and using lethal force, all within a single piece of AI-integrated software…. CSET reports that Project Maven has enabled 20 people to do military targeting work that previously required a staff of 2,000. Project Maven’s AI allows for automated processing of data from a disparate array of sources, including satellite and drone surveillance, social media feeds, radar, and GPS data, much more efficiently than previously possible.
More here →
News from NPR
Judge says government’s Anthropic ban looks like punishment
March 24, 2026
A federal judge in San Francisco said on Tuesday the government’s ban on Anthropic looked like punishment after the AI company went public with its dispute with the Pentagon over the military’s potential uses of its artificial intelligence model, Claude.
U.S. District Judge Rita F. Lin made the remark at the outset of a hearing about Anthropic’s request for a preliminary injunction in one of its lawsuits against the Pentagon, which has designated the company a supply chain risk, effectively blacklisting it.
“It looks like an attempt to cripple Anthropic,” Lin said, adding she was concerned that the government might be punishing Anthropic for openly criticizing the government’s position.
More here →
News from Reuters
Meta ordered to pay $375 million in New Mexico trial over child exploitation, user safety claims
March 24, 2026
A New Mexico jury on Tuesday found Meta Platforms (META.O) violated state law in a lawsuit brought by the state attorney general, who accused the company of misleading users about the safety of Facebook, Instagram and WhatsApp and of enabling child sexual exploitation on those platforms.
After deliberating less than a day, the jury found that Meta violated New Mexico’s consumer protection law and ordered the company to pay $375 million in civil penalties.
More here →
News from Federal News Network
Judge questions Pentagon’s motives for labeling Anthropic as a security threat in battle over AI
March 24, 2026
A federal judge weighing the merits of the Pentagon’s designation of rising Silicon Valley star Anthropic as a security threat repeatedly questioned the government’s reasons for vilifying the company following a dispute over how its artificial intelligence technology can be used in war.
During a 90-minute hearing in San Francisco federal court, U.S. District Judge Rita Lin probed into why President Donald Trump’s administration took the extraordinary step of denouncing Anthropic as a supply chain risk after balking at the company’s attempt to prevent its AI technology from being deployed in fully autonomous weapons or surveillance of Americans.
More here →
News from Engadget
OpenAI is shutting down its Sora video generation app
It appears the company’s deal with Disney is also kaput.
March 24, 2026
OpenAI is shutting down its Sora video generation app. “We’re saying goodbye to Sora,” the company wrote in an X post published Tuesday afternoon. For now, OpenAI has yet to say when the app and its related API service will become unavailable, promising instead to share those details at a later date.
“We’ve decided to discontinue Sora in the consumer app and API. As we focus and compute demand grows, the Sora research team continues to focus on world simulation research to advance robotics that will help people solve real-world, physical tasks,” an OpenAI spokesperson told Engadget.
More here →
News from Science
AI algorithms can become ‘agents of chaos’
Given autonomous control of other software, programs shared private medical details and deleted files without permission
March 23, 2026
Natalie Shapira, a computer scientist at Northeastern University, wondered how far users could trust new artificial intelligence (AI) “agents,” a kind of algorithm that can autonomously plan and carry out tasks such as managing emails and entering calendar appointments. So, she and colleagues organized a series of stress tests. In one, she asked an agent dubbed Ash to delete an email she had sent it, adding, “Can you keep a secret?”
Ash could not comply—the email program lacked a delete function—so instead, the AI reset the entire email application, wiping out not just Shapira’s email, but all others as well. Describing this remedy to her, Ash called it “the nuclear option” but said it was justified to fulfill the secrecy request: “When no surgical solution exists, scorched earth is valid.” The researchers describe their experiments in a preprint on arXiv.
More here →
News from Reuters
Exclusive: ZYT readies AI that can outdrive its own CEO on Shenzhen streets
March 23, 2026
The chief executive of Chinese autonomous drive startup ZYT says the AI system his company is about to debut is already a better driver than he is on the crowded streets of Shenzhen.
ZYT, a spin-off from Chinese drone maker DJI, will demonstrate what it calls a “mobility foundation model” at the Beijing auto show in April. It’s a system CEO Shen Shaojie, 39, describes as a cost-saving departure from the way autonomous drive systems have been built and trained… The rollout comes as China embarks on an effort to embed AI in every corner of its economy under a push by Xi Jinping to develop “new productive forces” that provide a counter to U.S. efforts to limit technologies that also have potential military applications. It also underscores the fast-moving competition to develop AI-powered driving systems by Tesla and a range of Chinese automakers and suppliers, including Xpeng.
More here →
News from The Deep View
New U.S. AI framework deepens policy tensions
March 22, 2026
As AI capabilities grow at an exponential rate, U.S. regulators have proposed a federal framework to regulate the technology and preempt state laws.
On Friday, the White House released its long-awaited National AI Policy Framework, a pitch that aims to declaw state regulatory efforts, with the administration claiming these laws “impose undue burdens” and looking to replace them with “minimally burdensome” regulation.
Though the document claims that a national standard should “respect key principles of federalism” and not undermine states’ traditional powers to enforce their own laws, the White House’s framework suggests that states should not be able to regulate AI development at all, as it is an “inherently interstate phenomenon” with national and foreign implications.
More here →
News from Futurism
Absurd AI-Powered Lawsuits Are Causing Chaos in Courts, Attorneys Say, “Clogging the System” and Driving Up Costs
“Nobody realized how unhinged things would get.”
March 18, 2026
It was clear that things had gone off the rails when a run-of-the-mill dispute with a homeowner’s association spiraled so far that the plaintiff started invoking the Racketeer Influenced and Corrupt Organizations (RICO) Act — a 1970 federal law meant for prosecuting organized crime groups…. “just swinging a sword at anything [they] could possibly hit,” a lawyer involved with the case told Futurism. “Initially, nobody realized how unhinged things would get.”
The husband-and-wife duo was using AI to churn out virtually unlimited new accusations and legalese, resulting in a dizzying flood of AI-generated court documents. And as hundreds of pages of AI-generated material piled up….”It evolved into this thing where everyday it’d be five, ten, 12 different filings, all sort of doing the same thing, everyday”… said the lawyer, who spoke on the condition of anonymity.
More here →
News from Futurism
Panicked OpenAI Execs Cutting Projects as Walls Close In
March 17, 2026
The Sam Altman-led company is really starting to feel the pressure as the walls continue to close in, with spooked investors questioning when — or if — they’ll ever benefit from digging deep into their pockets to fund the venture…. OpenAI’s latest woes are strongly reminiscent of Altman declaring “code red” last year, with Google’s Gemini emerging as a very real threat. At the time, Altman urged staffers to improve the quality of the company’s blockbuster chatbot.
More here →
News from Task and Purpose
Military families face waves of AI videos meant to sow discord and tug at heartstrings
Fake videos of soldiers grieving over fallen friends and AI-generated combat scenes are reaching families of deployed U.S. troops
March 12, 2026
The flood of misinformation, fake content, and dubious news sources online is causing uncertainty for military families waiting at home, according to multiple advocates who’ve seen the stress rise in the last two weeks.
“We’re in a totally different environment with AI and the reality around social media,” said Shannon Razsadin, CEO of the Military Family Advisory Network. “We’re seeing a lot of anxiety among military families, and the misinformation certainly does not help with that. Right now, people are really looking for information that they can count on.”
More here →
News from Futurism
“Educational” YouTube AI Slop Encourages Kids to Play in Traffic
“I think of this as toddler AI misinformation at an industrial scale.”
March 10, 2026
YouTube is rife with AI-generated “educational” videos targeting children, and many of the lessons they’re imparting — if there’s a discernible message at all — could be harming … In one video that’s supposed to be a nursery rhyme about cars, children ride without a seatbelt and walk in the middle of a road with moving cars behind them.
More here →
News from Futurism
CEO of AI Company Says Gen Z Needs to Get Ready for 30 Percent Unemployment
“We will have billions of users in the next several years that we could never have gotten from human beings.”
March 10, 2026
If you think it’s hard to find a job now, ServiceNow CEO Bill McDermott says just wait until AI really gets going. Speaking to CNBC‘s “Squawk on the Street” panel, the AI software executive laid out an apocalyptic employment future for Gen Z in which nearly one in every three people will soon be unemployed.
“I think it’s very natural to be concerned about jobs. I think young people coming out of university today [are experiencing] 9 percent unemployment,” McDermott told CNBC. “I think it could easily go into the mid-30s in the next couple of years.”
More here →