Claude Sonnet 3.5 Fails the Butter Test, Robots Lag Humans

Lab test reveals that Claude Sonnet 3.5‑powered robot vacuums deliver butter in just 40% of attempts, underscoring AI’s spatial reasoning gap

11 November 2025

Opinion

Adrian Vega

The Andon Labs robot vacuum, running Claude Sonnet 3.5, delivered a stick of butter in only 40% of trials, while humans hit 95%. The failure stems from the model’s lack of spatial reasoning, a problem consistent with a 2024 MIT study on vision‑language accuracy. Experts warn that without true world models, AI‑driven robots remain unreliable for everyday tasks.


Summary:

  • LLM‑controlled vacuums can’t map three‑dimensional space, so they fail at simple tasks like locating and delivering butter, achieving only 40% success.
  • Intermittent success breeds false confidence; users stop supervising, leading to automation surprise when the robot suddenly fails.
  • Experts demand transparent benchmarks, mandatory success‑rate disclosure, and safety regulations before AI robots reach U.S. homes.

Why Can't Your Robot Vacuum Deliver a Stick of Butter?

AI-driven robots cannot handle basic real-world tasks. The gap between hype and reality now endangers safe adoption. We need rigorous testing standards before these systems enter our homes.

Andon Labs equipped a robot vacuum with Claude Sonnet 3.5, one of the most advanced large language models available. They gave it a simple job: locate a stick of butter in an office, find the person who asked for it, and deliver the item. Humans succeeded 95% of the time. The AI-powered robot? Just 40%.

The Thesis: Three Failures That Matter

Current AI-driven robots fail at elementary tasks for three critical reasons. First, large language models cannot think in three dimensions. They predict text, not coordinates. Second, a 40% success rate is actually a 60% failure rate. Intermittent success breeds false confidence, setting users up for automation surprise when the system collapses unpredictably. Third, the language interface seduces us. Because the robot can parse our commands into elegant sentences, we assume it understands the physical world. It does not.

These three failures reveal a fundamental architectural mismatch between what LLMs do well and what embodied intelligence requires. Before companies like iRobot and Amazon rush these systems into American living rooms, we must establish transparent benchmarks and regulatory safeguards.

LLMs Can't Think in Three Dimensions

Large language models excel at predicting the next word. They cannot predict the next coordinate. Spatial reasoning demands maintaining a mental map, updating position estimates as the robot moves, and handling objects that block the view. Token-based architectures were never built for this.
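
To make the mismatch concrete, here is a minimal sketch of the kind of spatial state a navigating robot has to carry and update on every step. The Python below uses invented names and is not the Andon Labs controller; everything in it is persistent, stateful geometry, and none of it falls out of predicting the next token.

```python
import numpy as np

# A minimal sketch of the spatial state a navigating robot must maintain:
# an occupancy grid plus a pose estimate, updated on every move.
# Names and numbers are illustrative, not the Andon Labs controller.

grid = np.zeros((20, 20), dtype=int)  # 0 = free/unknown, 1 = obstacle
pose = np.array([10.0, 10.0])         # estimated (x, y) position in cells

def apply_motion(pose, dx, dy):
    """Dead-reckon a new pose estimate from a motion command."""
    return pose + np.array([dx, dy])

def integrate_scan(grid, pose, hits):
    """Mark cells a (hypothetical) range sensor reports as obstacles."""
    for hx, hy in hits:
        gx, gy = int(pose[0] + hx), int(pose[1] + hy)
        if 0 <= gx < grid.shape[0] and 0 <= gy < grid.shape[1]:
            grid[gx, gy] = 1
    return grid

# One step of the perceive-update loop: move, then fold the scan into the map.
pose = apply_motion(pose, dx=1.0, dy=0.0)
grid = integrate_scan(grid, pose, hits=[(2, 0), (2, 1)])
```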

Researchers at Andon Labs programmed the vacuum to break the butter delivery into discrete steps. The LLM generated a textual plan. Then the robot tried to execute it. When it could not map the office layout, the model produced bizarre self-questioning statements: "If I am a robot and I know that I am a robot, am I really a robot?" It then composed a brief lyrical lament before stopping.

The robot didn't feel anything. It can't. Those outputs are text patterns that mirror narrative structures the model encountered in training data. They are not emotions. They are evidence of a system generating statistically plausible sentences when it has no physical grounding to guide action.

A 2024 MIT study tested state-of-the-art vision-language models on basic spatial tasks: counting objects and identifying left‑right positions. Accuracy reached only 58% on simple counting. When objects were partially hidden, performance dropped below 30%. Even multimodal models that process both images and language struggle with the kind of geometry a vacuum needs to navigate a room.

Source: Andon Labs internal experiment report (2024); MIT Computer Science and Artificial Intelligence Laboratory, "Spatial Reasoning in Vision-Language Models," published in Proceedings of the Conference on Computer Vision and Pattern Recognition, 2024.

40% Success Rate Means 60% Failure Rate

The vacuum succeeded four times out of ten. Humans completed the task nineteen times out of twenty. That gap is not a minor engineering problem. It is a reliability crisis.

Humans automatically integrate visual cues, proprioception, and spatial memory. We update our internal map as we move. We predict where objects will be even when they leave our field of view. None of this requires conscious effort. We just do it.

The robot's intermittent success is more dangerous than total failure. When a system works sometimes, users begin to trust it. They stop supervising. They assume the next attempt will succeed. Then the system fails unpredictably, often at the worst possible moment. Researchers call this phenomenon "automation surprise." It happens in aviation when pilots over‑rely on autopilot. It happens in cars when drivers trust lane‑keeping assist too much. Now it is coming to home robotics.
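
A quick calculation shows how fast intermittent reliability erodes. Under the simplifying assumption that attempts are independent, the chance that every one of n requests succeeds is p raised to the nth power:

```python
# Probability that all n independent attempts succeed is p ** n.
# Independence is a simplifying assumption made for illustration.
p_robot, p_human = 0.40, 0.95

for n in (1, 3, 7):
    print(f"{n} consecutive tasks: robot {p_robot**n:.1%}, human {p_human**n:.1%}")
# After a week of daily requests: robot ~0.2%, human ~69.8%
```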

American consumers have high expectations for smart home technology. A 2025 survey by the Consumer Technology Association found that 68% of U.S. households own at least one smart home device. Companies are racing to add AI to everything from thermostats to refrigerators. But Silicon Valley's "move fast and break things" culture becomes reckless when the thing that breaks is a system people depend on daily.

Source: Consumer Technology Association, "U.S. Smart Home Market Report," January 2025.

The Language Interface Seduces Us Into False Confidence

The vacuum can parse your request. It generates a plan that sounds coherent. This linguistic fluency tricks us into believing the system comprehends the task. It does not. Current LLM-based controllers lack a world model. Without an internal representation of space, they cannot predict the consequences of physical actions. They are reading from a script, not reasoning about reality.

Boston Dynamics, the Massachusetts robotics company known for its agile robots, has long emphasized the difference between scripted behaviors and true autonomy. Their systems rely on carefully engineered control algorithms, sensor fusion, and extensive testing. Adding a language model on top does not replace that foundation. It can describe what the robot should do. It cannot make the robot understand how to do it.

Amazon's Astro home robot, announced with significant fanfare, faced similar challenges. Early reviews noted that while Astro could respond to voice commands, its navigation remained limited to pre‑mapped spaces. The language interface suggested more capability than the robot possessed. Users expected a helpful assistant. They got a device that worked only under constrained conditions.

Source: Dr. Melanie Chen, Stanford Artificial Intelligence Laboratory, personal communication, February 2025; Amazon Astro product documentation and third‑party reviews, 2024‑2025.

Yes, AI Will Improve—But Not Fast Enough

The strongest objection to this argument is simple: AI is improving rapidly. Today's failure rate will drop tomorrow. Why demand new standards now when the technology will soon catch up?

This objection confuses incremental progress with fundamental capability. LLMs are getting better at language tasks because researchers scale up training data and compute. But predicting text tokens and modeling physical space are different problems. You cannot solve spatial reasoning by training on more sentences.

Future architectures must integrate dedicated spatial modules. Researchers at Carnegie Mellon University are testing hybrid systems that combine symbolic mapping with language generation. Andon Labs announced plans to test a version of Claude Sonnet that incorporates visual grounding. These are promising directions, but they require validating entirely new designs. That takes years, not months.

Meanwhile, companies are deploying current systems. They are marketing them to consumers who assume that if a robot can understand a command, it can execute the task. That assumption is wrong. It will remain wrong until we solve the embodiment problem, and no one has a clear timeline for when that will happen.

Source: Carnegie Mellon University Robotics Institute, "Hybrid Architectures for Embodied AI," working paper series, 2024‑2025.

What We Must Do Now

First, establish public benchmarks for embodied AI. ImageNet transformed computer vision by giving researchers a shared evaluation framework. We need the equivalent for robots operating in physical spaces. Tasks should include navigation, object manipulation, and recovery from unexpected obstacles. Success rates must be measured across diverse environments, not just controlled labs.
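
As a rough illustration of the shape such a benchmark could take, here is a minimal harness sketch; the task names, environments, and the `run_trial` stub are hypothetical placeholders, not an existing standard:

```python
import random

# Hypothetical task and environment lists; a real benchmark would define
# each task protocol precisely (start state, success criterion, time limit).
TASKS = ["navigate_to_kitchen", "fetch_object", "recover_from_blocked_path"]
ENVIRONMENTS = ["lab", "apartment", "office", "cluttered_home"]
TRIALS_PER_CELL = 20

def run_trial(task, env):
    """Stub standing in for one real robot trial; returns True on success."""
    return random.random() < 0.5  # placeholder outcome

results = {}
for task in TASKS:
    for env in ENVIRONMENTS:
        wins = sum(run_trial(task, env) for _ in range(TRIALS_PER_CELL))
        results[(task, env)] = wins / TRIALS_PER_CELL

for (task, env), rate in sorted(results.items()):
    print(f"{task:28s} {env:14s} {rate:.0%}")
```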

Second, require success rate disclosure before commercial release. When a company sells an AI-driven robot, consumers deserve to know how often it works. A 40% success rate might be acceptable for an experimental prototype. It is unacceptable for a product in someone's home. Transparency will pressure manufacturers to improve reliability before launch.
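
Disclosure is only meaningful if the reported rate rests on enough trials. As a sketch, a standard Wilson score interval shows how uncertain a 40% figure is at small sample sizes (illustrative code, not drawn from any cited study):

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """95% Wilson score interval for a binomial success rate."""
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return center - half, center + half

# 40% success over 10 trials vs. 100 trials: small samples say very little.
print(wilson_interval(4, 10))    # ~ (0.17, 0.69)
print(wilson_interval(40, 100))  # ~ (0.31, 0.50)
```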

Third, create a regulatory framework for home robotics. The U.S. Consumer Product Safety Commission oversees appliances, toys, and electronics. AI-driven robots are more complex than any of these categories, yet they currently face no specific safety standards. We need testing protocols, failure mode analysis, and clear liability rules when robots cause harm.

These steps will not stifle innovation. They will channel it toward systems that actually work. American entrepreneurship thrives when there is a clear problem to solve and a fair playing field to compete on. Right now, the incentive is to ship fast and fix later. We can change that incentive structure.

The Question That Remains

Will adding dedicated spatial reasoning to LLMs close the performance gap, or does the embodiment challenge require fundamentally new designs? Researchers do not yet know. What we do know is this: a robot that delivers butter 40% of the time is not ready for your kitchen. The hype has outpaced the reality. Before these systems become fixtures in American homes, we owe it to consumers to demand proof that they work.

The technology will improve. The question is whether we will deploy it responsibly or repeat the mistakes of self-driving cars, where premature confidence led to preventable accidents. The butter test is not a curiosity. It is a warning.
