Science/Tech

Google's DiscoRL writes its own learning rules

A meta-learning system that generates custom algorithms, adapts in real time, and optimizes itself

15 December 2025

—

Explainer

Rhea Kline

Google DeepMind built DiscoRL, a reinforcement learning system that observes how AI agents perform and generates tailored update rules on the fly. It doesn't follow fixed formulas. It authors new ones mid-training. Tested on Atari57 and navigation tasks, it matched state-of-the-art algorithms and transferred learning principles across domains.

Summary:

  • Google researchers developed DiscoRL, a meta-learning AI system that writes its own learning instructions, potentially reducing AI development costs by automating optimization.
  • The meta-learning system observes how AI learns, generates custom instructions, and repeatedly improves its own learning process across different domains like robotics and drug discovery.
  • DiscoRL challenges traditional AI development by showing that self-generated algorithms can match or outperform human-designed systems, with potential to accelerate innovation across American industries.

Google researchers built an AI that writes its own learning instructions. Most people think humans code all AI algorithms. Not anymore. This article explains how meta-learning systems teach themselves to learn.

What Is Meta-Learning?

Meta-learning means learning how to learn. Imagine teaching someone to study. You don't just give them facts. You teach them how to organize notes. How to test themselves. How to recognize when they're stuck. Meta-learning does this for AI.

Traditional AI follows fixed learning rules written by humans. Data scientists at companies like Google or Tesla code these rules. The AI applies them. The process is slow. It's expensive. It requires expert intuition at every step.

DiscoRL reverses that process. It's a meta-learning system built by Google DeepMind. The system observes how an AI learns. Then it generates custom instructions for that specific AI. The AI applies those instructions. The system observes again. The cycle repeats.
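That observe-generate-apply cycle fits in a few lines. Below is a minimal sketch with a toy learner and a made-up meta-rule that rewrites the learning rate based on recent progress. It is an illustration of the loop's shape only, not DeepMind's actual system.

```python
import random

def meta_learning_loop(steps=200, seed=0):
    """Toy observe -> generate -> apply cycle.

    A learner estimates a hidden target; a 'meta' step watches its
    recent errors and rewrites the learning rate the learner uses next.
    Illustration only -- not DeepMind's actual update rule.
    """
    rng = random.Random(seed)
    target, estimate = 10.0, 0.0
    lr = 0.5                      # the "rule" the meta step keeps rewriting
    recent = []

    for _ in range(steps):
        # Inner learner: apply the current rule.
        error = target + rng.gauss(0, 0.1) - estimate
        estimate += lr * error

        # Meta step: observe performance, generate a new rule.
        recent.append(abs(error))
        if len(recent) >= 10:
            early, late = recent[:5], recent[-5:]
            # Progress stalled? Refine with smaller steps. Else keep pace.
            lr = lr * 0.5 if sum(late) >= sum(early) else min(lr * 1.1, 1.0)
            recent = recent[-5:]

    return estimate, lr

final_estimate, final_lr = meta_learning_loop()
```

The point of the sketch is the two roles: the inner update uses the rule, and the outer step rewrites it from observed performance.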

According to research published in Nature, DiscoRL stands for Discovery-based Reinforcement Learning. Lead author Junhyuk Oh and senior author David Silver, along with collaborators at Google DeepMind, demonstrated that self-generated algorithms could match or outperform human-designed systems.

Why This Matters Now

Human experts currently design AI learning algorithms. Tech companies spend millions on AI researchers to tune them. A single machine learning engineer can cost $300,000 annually at major firms, and these engineers spend months optimizing how AI systems learn.

DiscoRL automates this process. It could reduce costs. It could speed up AI development across American robotics firms, biotech startups, and autonomous vehicle companies. Instead of waiting for human experts to debug learning problems, the system fixes itself.

Think of it like smartphone updates that improve performance. Your phone learns how to optimize battery life. It adjusts based on your usage patterns. Meta-learning does this for AI training itself.

How Meta-Learning Works

The Observation Layer

Think of the meta-network as a basketball coach. The coach watches you practice. They notice you miss free throws. They see you hesitate on three-pointers. They track every move.

A meta-network is an AI that teaches other AIs. It's the teacher, not the student. The meta-network monitors another AI during training. It tracks reward signals. It watches action patterns. It sees how the AI adjusts after each attempt.

Reinforcement learning is how AI learns by trial and error. The AI tries actions. It gets rewards for good actions. Penalties for bad ones. It learns which actions work best.

The meta-network observes this entire process in real time.
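Trial-and-error learning itself is simple to sketch. Here is a minimal multi-armed bandit with epsilon-greedy exploration, a standard textbook setup rather than anything from DiscoRL: the agent tries actions, gets rewards, and keeps a running value estimate per action.

```python
import random

def train_bandit(steps=2000, eps=0.1, seed=1):
    """Trial-and-error learning on a 3-armed bandit.

    The agent tries actions, receives rewards, and maintains a running
    value estimate per action -- the bare bones of reinforcement learning.
    """
    rng = random.Random(seed)
    true_payouts = [0.2, 0.5, 0.8]        # hidden reward probabilities
    values = [0.0, 0.0, 0.0]              # agent's estimates
    counts = [0, 0, 0]

    for _ in range(steps):
        # Explore occasionally; otherwise exploit the best-known action.
        if rng.random() < eps:
            a = rng.randrange(3)
        else:
            a = values.index(max(values))
        reward = 1.0 if rng.random() < true_payouts[a] else 0.0
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]  # incremental mean

    return values

vals = train_bandit()
```

After enough trials, the agent's estimates identify the highest-paying action. The meta-network in DiscoRL watches exactly this kind of process unfold.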

Rule Generation

The coach writes you a custom practice plan. Not a generic plan. One tailored to your weaknesses. If you struggle with defense, you get defensive drills. If your stamina is low, you get cardio work.

The meta-network does the same thing. It analyzes the AI's performance patterns. Then it generates update rules. Update rules are instructions for learning. They tell the AI how to adjust after each attempt. Like rules for improving your basketball shot after each practice.

These aren't fixed formulas. They're context-dependent. Early in training, rules emphasize exploration. The AI tries many different approaches. Later, rules shift toward refinement. The AI perfects what already works.
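A hedged sketch of what context-dependence can look like: a hypothetical hand-written schedule that returns exploration-heavy parameters early in training and refinement-heavy ones late. DiscoRL's real generated rules are learned, not scripted like this.

```python
def rule_schedule(step, total_steps):
    """Hypothetical context-dependent rule: exploration early, refinement late.

    Returns (epsilon, learning_rate) for a given point in training.
    A toy schedule, not DiscoRL's generated rules.
    """
    progress = step / total_steps
    epsilon = max(0.01, 0.5 * (1.0 - progress))   # chance of trying something new
    lr = 0.01 + 0.5 * (1.0 - progress)            # step size shrinks toward refinement
    return epsilon, lr

early_eps, early_lr = rule_schedule(0, 1000)      # exploration-heavy
late_eps, late_lr = rule_schedule(1000, 1000)     # refinement-heavy
```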

The Feedback Loop

The AI applies the generated rules. Performance data flows back to the meta-network. The meta-network updates its own understanding. It generates new rules based on new observations.

The system optimizes its optimization strategy in real time.

This recursive structure means the meta-network itself improves. It gets better at generating rules. It learns which types of instructions work best for different learning challenges. The longer it runs, the smarter it becomes at teaching.
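The two-level structure can be sketched with simple random search standing in for the meta-network. The inner loop trains a toy learner under one rule; the outer loop keeps whichever rule taught better. Everything here is illustrative, assuming a made-up task and rule space.

```python
import random

def inner_train(lr, steps=50, seed=0):
    """Inner loop: a toy learner trained under one fixed rule (its
    learning rate). Returns final absolute error -- lower means the
    rule taught better."""
    rng = random.Random(seed)
    target, est = 5.0, 0.0
    for _ in range(steps):
        est += lr * (target + rng.gauss(0, 0.05) - est)
    return abs(target - est)

def meta_optimize(generations=20, seed=42):
    """Outer loop: improve the rule handed to the learner.

    The 'meta' step here is plain random search over learning rates --
    a crude stand-in for DiscoRL's meta-network, for illustration only.
    """
    rng = random.Random(seed)
    best_lr = 0.001
    best_err = inner_train(best_lr)
    for _ in range(generations):
        candidate = min(1.0, best_lr * rng.uniform(0.5, 2.0))
        err = inner_train(candidate)
        if err < best_err:            # feedback: keep rules that teach better
            best_lr, best_err = candidate, err
    return best_lr, best_err

best_lr, best_err = meta_optimize()
```

The real system replaces random search with a trained meta-network, but the feedback structure is the same: the outer level is judged only by how well its rules make the inner level learn.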

Real-World Performance

DiscoRL outperformed or matched state-of-the-art hand-designed algorithms. The team tested it on Atari57, a benchmark of 57 classic arcade games. They tested it on ProcGen, a benchmark of procedurally generated game levels. They tested it on DMLab-30, a suite of complex 3D navigation tasks.

The system also succeeded on held-out test environments. These were environments the system had never seen during training. This suggests the meta-network discovered general principles about learning. Not task-specific tricks.

Meta-training required substantial compute. The most capable version used 2,048 TPUv3 cores for approximately 60 hours. That's significant resources. But once trained, the meta-network generates rules without repeating that cost. It applies learned knowledge to new AIs and new tasks.

Code and meta-parameters are available on GitHub under an open-source license. American AI labs, university researchers, and startups can now experiment with the system.

Real-World Examples

Example 1: Self-Driving Cars Adapting to New Cities

A self-driving car trained in Phoenix drives to Seattle. Rain and hills are new conditions. Traditional AI struggles. The learning rules optimized for sunny, flat roads don't work.

DiscoRL generates new learning rules when it detects the change. The meta-network observes the car's performance drop. It analyzes which decisions fail in wet conditions. Then it writes new optimization instructions. The car adapts in hours, not months.
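The trigger in this scenario, noticing that performance has dropped, is easy to sketch. Below is a hypothetical detector that compares recent rewards against a long-run baseline; no real autonomy stack is this simple, and the names and thresholds are invented.

```python
def performance_dropped(rewards, window=20, threshold=0.7):
    """Flag a likely distribution shift: the recent average reward has
    fallen well below the long-run baseline. A hypothetical trigger for
    regenerating learning rules -- not any vendor's actual logic."""
    if len(rewards) < 2 * window:
        return False          # not enough history to compare
    baseline = sum(rewards[:-window]) / (len(rewards) - window)
    recent = sum(rewards[-window:]) / window
    return recent < threshold * baseline

sunny = [1.0] * 120                 # stable performance in familiar conditions
rainy = [1.0] * 100 + [0.3] * 20    # performance drops in new conditions
```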

Result: Faster deployment of autonomous vehicles across diverse U.S. cities. Companies like Waymo and Cruise, operating in San Francisco and Phoenix, could expand to Portland or Boston more quickly.

Example 2: Drug Discovery at Boston Biotech Labs

Researchers use AI to predict protein structures. The AI gets stuck on complex proteins. Traditional optimization algorithms hit a wall. The AI stops improving after weeks of training.

DiscoRL observes the problem. It detects that the AI's exploration strategy became too narrow. The meta-network generates new update rules that encourage broader search. The AI breaks through in days instead of months.
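One hypothetical way to detect "too narrow" exploration is to measure the entropy of the agent's action distribution. This is an illustrative diagnostic, not a detail from the paper.

```python
import math

def action_entropy(action_counts):
    """Shannon entropy (bits) of an agent's action distribution.

    Low entropy means the search has narrowed to a few actions; a meta
    rule might respond by boosting exploration. Illustrative only.
    """
    total = sum(action_counts)
    probs = [c / total for c in action_counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

narrow = action_entropy([98, 1, 1])    # stuck on one action
broad = action_entropy([34, 33, 33])   # still exploring widely
```

A near-uniform distribution over three actions scores close to the maximum of log2(3) ≈ 1.58 bits, while a collapsed one scores near zero, giving a meta rule a concrete signal to act on.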

Result: Faster drug development for American patients. Boston's biotech corridor houses dozens of startups using AI for molecular design. Meta-learning could accelerate their research timelines significantly.

Example 3: Transfer Across Domains

The team tested whether generated algorithms transfer between tasks. A meta-network trained on arcade games produced rules that worked on navigation tasks. The rules weren't game-specific. They encoded general principles about exploration and exploitation.

This suggests meta-learning discovers universal learning dynamics. Rules that work for Pac-Man also help robots navigate warehouses. Rules that optimize chess play also improve logistics scheduling.

What People Get Wrong

Myth: This means AI can now improve itself without limits.

Reality: The meta-network operates within constraints. It generates update rules for specific learning tasks. It's not general intelligence recursively upgrading itself. It's targeted algorithmic optimization within defined boundaries. The system still requires human-designed architectures, curated datasets, and clear objectives.

Myth: Human researchers become obsolete.

Reality: Designing meta-networks requires deep expertise. Machine learning architecture. Loss functions. Training dynamics. The system automates one layer of optimization. Humans still define the problem. They curate data. They interpret results. Stanford AI Lab, MIT CSAIL, and UC Berkeley AI Research continue hiring experts specifically for meta-learning research.

Myth: Self-generated algorithms will be inscrutable black boxes.

Reality: Generated update rules are mathematical operations. They can be analyzed. Tested. Understood. Researchers can inspect what rules the meta-network proposes and why. The challenge is interpretability at scale. Not fundamental opacity.

Current Limitations

Meta-learning systems face real constraints. Training DiscoRL required massive computational resources—2,048 specialized processors running for 60 hours. Not every research lab or company can afford this infrastructure.

The system works best on well-defined learning tasks with clear objectives. It struggles with ambiguous problems where success is hard to measure. It also requires substantial training data across diverse environments to learn generalizable principles.

Transfer learning across very different domains remains challenging. While arcade game rules helped with navigation, the gap between, say, language processing and robotic manipulation may be too large for current meta-learning approaches to bridge effectively.

And critically, the meta-network itself is still a human-designed architecture. Someone had to decide its structure, training objectives, and evaluation criteria. The self-generation happens within guardrails that humans establish.

What This Reveals

Meta-learning means learning how to learn.

For decades, progress meant better architectures. Bigger datasets. More compute. This approach asks a different question. What if the algorithms themselves could evolve?

Meta-learning pushes AI from following instructions to authoring them. Human experts still design the meta-networks. They still choose which problems to solve. But the day-to-day optimization decisions shift to machines.

This could accelerate AI development across American industries. Robotics companies in Pittsburgh. Healthcare AI startups in Cambridge. Agricultural tech firms in Iowa. Self-optimizing systems reduce the bottleneck of human expertise.

The work continues. Google DeepMind is testing whether meta-networks work across more diverse tasks. How they handle distribution shifts. Whether generated algorithms transfer between domains. Early results suggest broad applicability.

The Takeaway

Meta-learning represents a shift from human-coded algorithms to self-generated ones. AI systems can now optimize their own learning processes. This could accelerate AI development across American industries, from robotics to healthcare.

The next frontier: systems that architect their own improvement. The calculation continues. Now the calculation is writing its own next steps.
