Science/Mind

AI can't read the room — and that's a problem

New research reveals why even advanced AI fails at understanding human social dynamics

By Adrian Vega · November 7, 2025

A groundbreaking 2025 study from Johns Hopkins University tested over 350 AI models against human perception of social interaction. The result: no AI could match how people instantly interpret collaboration, competition, or social cues. This limitation affects autonomous vehicles, delivery robots, and any technology navigating human spaces — revealing a fundamental gap between seeing and understanding.


Summary

  • Johns Hopkins study reveals AI struggles to interpret human social interactions in 3-second video tests
  • Current AI models cannot match human ability to read subtle social dynamics and collaborative behaviors
  • Research highlights critical limitations for autonomous technologies like self-driving cars and service robots

Two people glance at each other across a crowded room. In milliseconds, you know they're collaborating—not competing, not strangers, not waiting. An AI watching the same scene? It's still guessing.

That gap—between human intuition and machine interpretation—is wider than we thought. A study published at the International Conference on Learning Representations (ICLR) in April 2025 reveals that even the most advanced AI models struggle to interpret the social dynamics humans read effortlessly.

The research, led by scientists at Johns Hopkins University, tested over 350 large language models and generative AI systems against human perception. The result: no AI model could adequately match how people understand and respond to social behavior in real time.

This isn't just an academic curiosity. It's a fundamental limitation with real-world stakes—for autonomous vehicles navigating pedestrian crossings, delivery robots interpreting when someone holds a door open, and any technology that must move safely through human spaces.

What Social Interaction Actually Involves

To understand what AI can't do, we first need to clarify what humans do without thinking.

Social interaction isn't just seeing people move. It's reading body language, interpreting context, predicting intentions, and sensing collaboration or conflict in a glance. When two people assemble furniture together, you instantly recognize coordination. When they work on separate tasks in the same room, you know they're coexisting, not cooperating.

These judgments happen in fractions of a second. They rely on pattern recognition, contextual memory, and emotional inference—cognitive processes woven so deeply into perception that we barely notice them.

AI, by contrast, sees pixels and patterns. It lacks the lived experience that teaches humans what collaboration looks like versus competition, what hesitation means versus confidence.

How Scientists Tested AI Against Human Perception

The Johns Hopkins team, led by researchers Kathy Garcia, Emalie McMahon, Colin Conwell, Michael F. Bonner, and Leyla Isik, designed an experiment to measure this gap precisely.

The Three-Second Video Experiment

Participants watched 250 short video clips—each just three seconds long—drawn from the Moments in Time dataset. In these clips, people performed tasks together or independently, demonstrating different aspects of social interaction.

After watching, participants rated characteristics important for understanding social dynamics on a scale from 1 to 5. Questions included: Are these people working together? Is this interaction cooperative or independent? What is the social relationship here?
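In a setup like this, the human "ground truth" is simply the average rating each clip receives across participants. A minimal sketch of that aggregation step (the clip IDs and numbers below are invented for illustration, not the study's data):

```python
import statistics

# Hypothetical per-clip ratings for one question, e.g. "Are these people
# working together?", on the study's 1-5 scale (clip IDs invented).
ratings = {
    "clip_001": [5, 4, 5, 4],  # two people assembling furniture together
    "clip_002": [1, 2, 1, 1],  # two people on separate tasks in one room
}

# Averaging across participants gives the human consensus score per clip,
# which is the target an AI model would then be asked to predict.
human_scores = {clip: statistics.mean(vals) for clip, vals in ratings.items()}
print(human_scores)  # clip_001 scores high, clip_002 low
```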

What AI Models Were Asked to Do

Researchers fed the same videos to over 350 AI systems—including large language models (AI systems trained on vast text to predict and generate human-like responses) and generative AI models (systems that create new content based on patterns).

The models were asked to predict how humans would rate the videos. Additionally, language models evaluated short captions written by humans describing the social interactions.

To deepen the comparison, the team also collected fMRI brain response data from four participants, measuring neural activity in regions associated with social cognition—specifically, lateral-stream brain responses, which process social information.

Why AI Struggles With Social Dynamics

The results were clear: AI models could not reliably predict human judgments about social behavior.

Language models performed relatively well at predicting human ratings when given text captions. Video models showed some ability to predict brain responses in certain regions. But no single model excelled at both behavioral judgments and social brain activity.

Think of it like reading sheet music versus feeling rhythm. AI sees the notes but misses the beat that makes humans move together.

The researchers concluded that current AI architecture lacks a fundamental aspect that allows the human brain to interpret dynamic social interaction quickly and accurately. That missing piece isn't just more data or better algorithms—it's something closer to lived understanding, the kind that comes from being a social creature navigating a social world.

What This Means for Autonomous Technology

This limitation isn't abstract. It has immediate implications for technologies already entering public spaces.

Self-Driving Cars and Social Navigation

Autonomous vehicles rely on AI to interpret pedestrian behavior. A person making eye contact at a crosswalk signals intent to cross. A group hesitating on the curb suggests uncertainty. These cues—invisible to current AI—are critical for safe navigation.

If an AI can't distinguish collaboration from coexistence in a three-second video, how reliably can it interpret the social choreography of a busy intersection?

Assistant Robots in Human Spaces

Delivery robots, warehouse assistants, and service machines must navigate environments filled with people. They need to recognize when someone is blocking a path intentionally versus accidentally, when a gesture means "go ahead" versus "wait."

Without the ability to read social dynamics, these systems risk awkward interactions at best—and safety failures at worst.

The Missing Piece in AI Architecture

What exactly do humans possess that AI lacks?

The Johns Hopkins researchers point to something deeper than pattern recognition. Humans don't just process visual information—they interpret it through layers of social experience, emotional context, and predictive modeling built over a lifetime of interaction.

AI models, even those trained on billions of images and videos, lack this embodied knowledge. They can identify objects, track motion, and classify actions. But they can't feel the difference between a tense silence and a comfortable one, between cooperation and competition, between invitation and dismissal.

That gap—between seeing and understanding—is where current AI architecture falls short.

What Comes Next for AI Development

The research team made their findings publicly available, inviting other researchers to build on their work.

They shared code, captions, behavioral data, and fMRI data through the Open Science Framework.

In a follow-up study posted in October 2025, Garcia and Isik introduced a human-similarity benchmark with approximately 49,000 odd-one-out judgments. They also developed a method to fine-tune video models to better align with human social judgments.
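An odd-one-out judgment asks which of three items doesn't belong; a model agrees with humans when its representations pick the same item. The sketch below illustrates the paradigm with invented toy vectors, not the benchmark's actual representations:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def odd_one_out(embeddings):
    """Return the index of the item least similar to the other two."""
    totals = [
        sum(dot(e, other) for j, other in enumerate(embeddings) if j != i)
        for i, e in enumerate(embeddings)
    ]
    return totals.index(min(totals))

# Two "cooperative" clips and one "independent" clip in a toy feature space.
triplet = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0]]
print(odd_one_out(triplet))  # index 2: the independent clip stands out
```

Agreement with the roughly 49,000 human judgments would then be the fraction of triplets where a model's choice matches the human one.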

These steps suggest a path forward: not just training AI on more data, but training it to recognize the patterns that matter most to human social cognition.

The question isn't whether AI will learn to read social cues—it's how researchers will teach machines something the human brain does without thinking. Until then, the room remains unreadable to the algorithm watching from the corner.
