College graduate talking with an AI about his job prospects.
News from Rest of World
Netflix’s AI deal puts the global VFX workforce at risk
A startup founded by Ben Affleck, recently acquired by Netflix, could automate the frame-by-frame work done by artists across India, South Korea, and Latin America.
April 20, 2026
Netflix’s latest acquisition is threatening thousands of livelihoods from Los Angeles to Mumbai.
On March 5, Netflix acquired InterPositive, an artificial intelligence company built by Hollywood actor Ben Affleck, for an undisclosed sum. InterPositive automates color grading, relighting, and continuity fixes. This work is currently done frame by frame by artists in India, South Korea, the Philippines, and Latin America. More than 2 million professionals work in visual effects globally.
______________
News from AEI
Anthropic’s Project Glasswing Is a Warning: Technical Debt Is Now a National Security Risk
April 20, 2026
Anthropic’s launch of Project Glasswing should be understood less as a product announcement and more as a policy warning. Reuters reports that the rapid emergence of Claude Mythos Preview has already prompted discussions among the US Treasury, the Federal Reserve, and major banking executives because the model exposes the fragility of legacy systems...
The most important takeaway is not merely that Anthropic has built a model capable of finding vulnerabilities across major operating systems, browsers, and enterprise software. Rather, it is that AI has finally turned decades of accumulated technical debt into an immediately exploitable risk surface.
______________
News from The AI Report
NSA taps Anthropic despite ban
April 20, 2026
The National Security Agency is using Anthropic’s Mythos Preview AI model even though the Pentagon formally labeled the company a “supply chain risk” back in February, according to an Axios report citing two sources with direct knowledge….
Anthropic CEO Dario Amodei met with White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent on Friday. Both sides called the talks “productive.”
______________
News from Murphy’s Law
Commencement 2026
If AI Is Your Friend, You Need Better Friends.
April 19, 2026
Human ones, for starters.
______________
News from Axios
Trump-branded AI data center megaproject stalls, CEO departs
April 19, 2026
The world’s largest data center project — backed by Trump allies and bearing his name — is stalled by delays and logistical hurdles that could stop it before it even starts.
The latest sign of trouble emerged Friday: CEO Toby Neugebauer abruptly departed. That sent the company’s shares, which already shed 75% in the last six months, plummeting in aftermarket trading.
______________
News from The Decoder
Just ten minutes of using AI as an answer machine can measurably erode problem-solving skills, new study finds
April 18, 2026
Just 10 to 15 minutes with an AI assistant is enough to measurably weaken problem-solving ability and persistence on later tasks done without AI, according to a new study from researchers in the US and UK.
The research, conducted by teams at several American and British universities, shows that while AI assistance boosts immediate performance, it comes with a catch: once the AI is taken away, users perform worse than people who tackled the same tasks on their own from the start. They also give up more often.
______________
News from Futurism
Millions of Americans Are Talking to AI Instead of Going to the Doctor, and It’s Giving Them Horrendously Flawed Medical Advice
What could possibly go wrong?
April 17, 2026
While Google’s AI may no longer recommend eating rocks or confidently tell users to put glue on their pizza, even cutting-edge AI chatbots remain staggeringly incompetent at dispensing medical advice.
In a new study published this week in the journal JAMA Network Open, researchers asked 21 frontier large language models (LLMs) to “play doctor” when confronted with realistic symptoms that an actual patient could feasibly ask about.
The results painted a damning picture. The AIs’ failure rates exceeded 80 percent when given ambiguous symptoms that could match more than one condition, and for more straightforward cases that included physical exam findings and lab results, they still failed 40 percent of the time. The researchers also found that unlike human clinicians, the “LLMs collapse prematurely onto single answers,” resulting in “weak performance” across all models.
______________
News from Emerge
Anthropic’s Alarming Mythos Findings Replicated With Off-the-Shelf AI, Researchers Say
April 17, 2026
….”We replicated Mythos findings in opencode using public models, not Anthropic’s private stack,” Dawid Moczadło, one of the researchers involved in the experiment, wrote on X after publishing the results. “A better way to read Anthropic’s Mythos release is not ‘one lab has a magical model.’ It is: the economics of vulnerability discovery are changing.”
….Every scan stayed below $30 per file, meaning the researchers were able to find the same vulnerabilities as Anthropic while spending less than $30 per file to do it.
______________
News from Futurism
Allbirds Stock Now Crashing as Reality Sets in About Its Delusional AI Pivot
“The vast majority of times, these things end in tears.”
April 16, 2026
Tech bro sneaker company Allbirds made a huge splash yesterday when it announced a baffling pivot to AI infrastructure — news that was met with a mix of incredulity and ridicule.
The company’s blindsiding metamorphosis into what it’s calling “NewBird AI” had investors leaping from their office chairs, sending shares surging by over 700 percent on Wednesday.
That’s despite Allbirds’ core business being at death’s door. In its final throes, the company sold off its intellectual property and other assets for a measly $39 million mere weeks ago, leaving the lofty $4 billion market cap it commanded five years ago far behind.
But don’t break out the champagne quite yet. The rally subsequently came to a “screeching halt,” as Bloomberg put it, with shares sinking a dismal 35 percent on Thursday.
In other words, possibly ketamine-crazed Wall Street bros realized the morning after that a struggling shoe company may not be able to prop up a trillion-dollar industry with its promises of buying up impossible-to-get AI chips.
______________
News from Modern War Institute
Designing Lethal Decisions: AI, Accountability, and the Future of Military Judgment
April 16, 2026
As artificial intelligence systems are integrated into military operations, a familiar intuition hardens into an institutional standard: The higher the stakes, the more essential it is to keep humans in the loop….
That intuition is understandable. It is also, in important respects, wrong…..
Consider a simple but revealing example—what we might call the white van problem. In a combat zone, intelligence has associated a reported threat with a white van. Other information is scant or unsubstantiated. For soldiers on the ground or drones in the sky, any white van may either be entirely benign—or it may be carrying combatants or explosives. The fundamental challenge, then, is how to act under conditions where signals are weak, context-dependent, and consequential.
…Operators must rely on contextual cues: movement patterns, timing, proximity to known threats, and behavior that deviates from local norms. The standard argument is that such contextual judgment cannot be codified and must remain with human decision-makers in the field.
But….if the alternative is to depend on intuition shaped by stress, fatigue, or incomplete perception, then human override power is not a reliable basis for life-and-death decisions.
______________
News from Futurism
Starbucks’ Baffling ChatGPT Collab Treats Customers Like Empty, Soulless Venti Cups
“If you are so paralyzed by an indecision that you need a chatbot to tell you what coffee drink to order, you probably need to check into a rehab.”
April 16, 2026
As AI chatbots go, OpenAI’s ChatGPT isn’t the most provocative. Its relentlessly upbeat, hand-holding style has drawn constant criticism for coming across as condescending. Still, Starbucks’ newly announced partnership with the chatbot may have pushed that paternalism to a whole new level.
Announced on Wednesday, the new “Starbucks app” is basically a widget within ChatGPT. After enabling Starbucks connectivity in the ChatGPT app, users can type “@Starbucks” to receive “personalized drink recommendations tailored to your taste, mood, and goals.”
…“You don’t need to know the name of a drink, just start with how you’re feeling or what you’re craving — in your own words or through a photo,” the presser enthuses. “It’s discovery that feels effortless.”
______________
News from Wes Siler’s Newsletter
Republicans Vote To Destroy Boundary Waters In Giveaway To China’s AI
The most popular Wilderness in the country will become a dumping ground for sulfuric acid
April 16, 2026
Traitors. Republicans in the Senate just voted to permit the construction of a heavily polluting mine in the headwaters of Minnesota’s Boundary Waters Canoe Area Wilderness. The region’s ecosystem will be destroyed, taking with it $1.1 billion in annual economic activity, 17,000 jobs, and one of the last unspoiled slices of nature left in this country. What does America get in return? Nothing. Profits will go to Chile, the copper will go to China where it will help that country race ahead of us in its AI buildout, and any jobs created will go to workers from outside the state and country. Polluted water will also flow into Voyageurs National Park, Canada’s Quetico Provincial Park, and Lake Superior.
______________
News from Reuters
Gucci-owner Kering aims to launch luxury Google glasses next year, CEO says
April 16, 2026
Kering aims to launch smart glasses under the Gucci brand in partnership with Google next year, CEO Luca de Meo told Reuters, becoming potentially the first major luxury brand to enter the AI-powered eyewear sector.
That will pit it against Italian-French eyewear leader EssilorLuxottica….
“Probably next year, 2027,” de Meo said when asked about the timeline for the smart glasses’ launch.
______________
News from ArsTechnica
Google’s AI enables robots to read gauges while inspecting industrial facilities
April 15, 2026
The new Gemini Robotics-ER 1.6 model announced on April 14 performs as a “high-level reasoning model for a robot” that can plan and execute tasks, according to Google DeepMind. This model also unlocks the capability of accurately reading instruments such as complex gauges and doing visual inspections using sight glasses that provide a transparent window to peek inside tanks and pipes—a performance upgrade that came about through Google DeepMind’s ongoing collaboration with robotics company Boston Dynamics.
______________
News from Futurism
ChatGPT’s “Honest Reaction” to a “Song” Composed Entirely of Gas-Passing Noises Will Make You Question Whether It’s Honestly Evaluating Your Other Brilliant Ideas
April 15, 2026
It doesn’t take much to impress an AI chatbot.
Tools like OpenAI’s ChatGPT have long garnered a reputation for being ludicrously sycophantic. Despite AI companies publicly promising to address the problem, researchers recently found that the bots still have a strong tendency to flatter and affirm in response to virtually any kind of prompt.
In the latest preposterous example of this impulse, philosophy YouTuber and writer Jonas Čeika “sent ChatGPT an audio file of a series of fart sound effects and asked what it thinks of ‘my music.’”
It didn’t take long for the glazing chatbot to congratulate him on his musical achievement, in what it called a “straight” and “honest reaction.”
“First impression: It has a cool lo-fi, late-night, slightly eerie vibe,” it wrote. “It feels more like an atmosphere piece than a traditional song — which actually works in its favor. It reminds me of something that would play over a quiet city montage or end credits.”
______________
News from Futurism
AI Is Turning Workplaces Into Hopeless Gridlock
Looks like AI is not the magical tool that CEOs make it out to be.
April 15, 2026
CEOs have eagerly grabbed onto AI as a tool to make offices more efficient, and often to reduce headcount via brutal layoffs.
There’s a problem, though: the workers who remain often say they now have to fix a flood of error-ridden AI-generated “workslop” that’s burdening them, paradoxically, with more work than ever.
______________
News from The AI Report
Claude aces UK cyber test
April 15, 2026
Anthropic’s Claude Mythos Preview has become the first AI model to complete a full simulated corporate network attack, according to new evaluations from the UK’s AI Security Institute (AISI). The findings, published days after the model’s April 7 announcement, suggest AI cyber capabilities have reached a level that requires close attention from security teams worldwide….
The UK’s findings reinforce the need for organizations to double down on foundational cybersecurity measures, including regular patching, strict access controls, and comprehensive logging. Reports also indicate that US Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell met with major bank CEOs to warn about potential cyber risks linked to this model.
______________
News from Amazon News
AWS launches Amazon Bio Discovery to accelerate AI-powered research in life sciences
A new agentic AI application aims to speed up drug development, helping bring new medical treatments to patients faster.
April 14, 2026
Today, AWS announced Amazon Bio Discovery, a new AI-powered application designed to help scientists design and test novel drugs more quickly and confidently.
Amazon Bio Discovery gives scientists direct access to a broad catalog of specialized AI models called biological foundation models (bioFMs) that are trained on vast biological datasets. These models generate and evaluate potential drug molecules, known as candidates, helping scientists accelerate antibody therapies during the early stages of drug discovery.
______________
News from The Onion
Man Who Threw Molotov Cocktail At Sam Altman’s Home Claims He Was Following ChatGPT Recipe For Risotto
April 13, 2026
Following reports that a 20-year-old man had been arrested for throwing a Molotov cocktail at Sam Altman’s home, the suspect stated Monday that he only initiated the attack because he was following a ChatGPT recipe for risotto. “I’ve been using ChatGPT to help with cooking for a while now, so I didn’t think too much of it when the ingredients list included a bottle filled with gasoline and a cloth wick,” said the alleged attacker, who added that he naturally assumed making the rice dish involved driving several hours to the OpenAI CEO’s residence, especially after the AI chatbot had given him a “pretty decent” sesame chicken recipe the week before.
______________
News from Foreign Policy
How the Pentagon Can Manage the Risks of AI Warfare
If warfighters don’t trust the technology, they won’t use it.
April 13, 2026
AI has the potential to dramatically change the cognitive speed and scale of warfare. Yet military AI comes with profound risks. The dangers go beyond the use of autonomous weapons, which was one of the sticking points in the recent dispute between the Pentagon and leading AI company Anthropic. General-purpose AI systems such as large language models are prone to novel failure modes, vulnerable to hacking and manipulation, and have even been demonstrated to lie and scheme against their own users.
….Above all, AI must be a tool to enhance human decision-making, not surrender it to machines.
______________
News from Lifehacker
Eight Things You Should Never Share With an AI Chatbot
A reminder that your conversations aren’t private or secure.
April 10, 2026
…. you can opt out of having your data used to train LLMs, but chats can also be read by human reviewers, and long-term retention policies increase the risk of your stored information being leaked in a breach.
If you’re going to use an AI chatbot, these are the things you should avoid sharing:…
Financial data: AI chatbots aren’t financial experts, and you shouldn’t upload documents or use data related to your specific finances in prompts. This includes bank statements, credit card numbers, investment information, account numbers and balances, etc. Sharing financial details anywhere that isn’t secure increases the risk of theft, fraud, and targeting by scammers.
# # #