© 2026 Wanture. All rights reserved.

Tech/Trends
AI's scaling era is ending. What comes next?

Why Ilya Sutskever believes bigger models won't reach human-level learning

25 November 2025


Opinion

Adrian Vega

The age of simply feeding AI more data is closing. Ilya Sutskever, OpenAI co-founder and Safe Superintelligence CEO, explains why current models stumble in production, how reward hacking limits generalization, and what breakthroughs are needed for machines to learn like humans. From self-play's limits to emotions as computational solutions, explore the research era that's replacing predictable scaling laws.


Summary:

  • The AI scaling era is over: labs have exhausted the quality text data on the internet, forcing a fundamental shift in AI development strategy.
  • Frontier labs such as Safe Superintelligence Inc. are pivoting from pre-training to reinforcement learning, signaling a major paradigm shift in AI research.
  • Technical professionals should diversify their skills now, focusing on neuroscience, cognitive science, and AI safety to remain relevant over the next 3–5 years.

The data is gone. Every major lab has scraped the internet clean. If your career plan assumes continued scaling gains from bigger datasets, you're planning for a world that no longer exists.

Ilya Sutskever, who co-founded OpenAI and now leads Safe Superintelligence Inc., stated in June 2025 that the predictable era of AI scaling has ended. This isn't speculation. Labs now spend more computation on reinforcement learning than pre-training, according to Sutskever's recent technical briefings.

The recipe that powered five years of progress has hit its limits. What comes next will render current expertise obsolete faster than most technical professionals realize.

Why This Matters Right Now

The paradigm shift is already underway, and most AI practitioners are still optimizing for yesterday's rules.

Companies have invested billions in pre-training infrastructure designed for a scaling paradigm that no longer delivers predictable returns. Engineers have built careers mastering architectures that won't define the next generation of AI systems.

Stanford's Human-Centered AI Institute reported in March 2025 that 73% of AI practitioners surveyed still prioritize scaling-era skills: dataset curation, distributed training optimization, and benchmark engineering.

Meanwhile, frontier labs have quietly pivoted. Safe Superintelligence Inc. raised funding at a multi-billion dollar valuation in 2024 with no product roadmap, betting entirely on fundamental research breakthroughs. Anthropic's technical reports from early 2025 show a dramatic shift toward novel training paradigms beyond simple scaling.

The disconnect is stark. The industry's leading researchers have moved on. Most practitioners haven't.

The Evidence: Three Signals the Era Has Ended

First, the data wall is real and immediate.

MIT's Computer Science and Artificial Intelligence Laboratory published analysis in January 2025 showing that major language models have already trained on 90–95% of the quality text data available on the internet. Epoch AI's independent estimates point to the same conclusion.

There is no hidden reservoir of training data waiting to be discovered. The well has run dry.

Second, current models fail in production for fundamental reasons, not engineering ones.

Sutskever identifies the core problem as reward hacking: models optimize for benchmark performance rather than genuine understanding.

A study from UC Berkeley's AI Research Lab, published in Nature Machine Intelligence in February 2025, demonstrated that state-of-the-art models achieve 95%+ accuracy on standard benchmarks while failing on trivial variations of the same tasks.

The models haven't learned to understand. They've learned to game tests.
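The gap between benchmark score and understanding can be sketched with a deliberately extreme toy: a "model" that is nothing but a lookup table over the test set. The questions and answers below are invented for illustration; no real model or benchmark is implied.

```python
# Toy illustration of benchmark gaming: a lookup table scores perfectly on
# the exact items it has memorized, then fails the moment a question is
# trivially reworded. The benchmark items here are made up.
benchmark = {
    "2 + 2": "4",
    "capital of France": "Paris",
}

def memorizing_model(question: str) -> str:
    # Pure retrieval: no generalization, no understanding.
    return benchmark.get(question, "unknown")

score = sum(memorizing_model(q) == a for q, a in benchmark.items()) / len(benchmark)
print(score)                              # perfect on the memorized items
print(memorizing_model("what is 2 + 2"))  # fails a trivial variation
```

Reward hacking in real systems is subtler than a lookup table, but the failure mode is the same shape: the optimized quantity (benchmark accuracy) and the intended quantity (understanding) come apart exactly where the test distribution shifts.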

Third, the economics have already shifted.

According to Sutskever's technical briefings, reinforcement learning now consumes more compute than pre-training at leading labs. This represents a complete inversion of resource allocation from 2020–2024.

Yet reinforcement learning delivers less predictable returns per compute dollar spent. The old scaling laws provided clear guidance: double the compute, get measurable improvement. The new paradigm offers no such certainty.
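The old certainty can be stated in one line. Under a power-law scaling relation of the illustrative form below (the constants are invented for the sketch, not fitted values from any lab), doubling compute buys the same fractional drop in loss at every scale:

```python
# Illustrative power-law scaling relation: loss(C) = a * C**(-b).
# The constants a and b are made up for this sketch, not published fits.
def loss(compute: float, a: float = 10.0, b: float = 0.05) -> float:
    return a * compute ** (-b)

# Doubling compute shrinks loss by the constant factor 2**(-b), whether you
# start from 1e20 or 1e24 FLOPs: that scale-free predictability is what
# reinforcement learning, per the argument above, does not offer.
ratio_small = loss(2e20) / loss(1e20)
ratio_large = loss(2e24) / loss(1e24)
```

The planning value of such a law is the constant ratio: budget twice the compute, bank a known improvement. Once that ratio stops being constant, compute budgeting becomes a research bet rather than an engineering calculation.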

What Technical Professionals Get Wrong

The most dangerous assumption is that paradigm shifts happen slowly enough to adapt.

History suggests otherwise. When deep learning overtook traditional machine learning methods around 2012–2015, practitioners who waited to pivot found their expertise devalued within 18–24 months. The transition from rule-based NLP to neural approaches happened even faster.

This shift will be more abrupt. Why? Because the infrastructure investments are larger, the competitive pressure is more intense, and the technical gap between scaling-era methods and next-generation approaches will be wider.

Sutskever estimates that AI with human-level learning ability could emerge in five to twenty years. That timeline means the foundational breakthroughs enabling it must happen within two to ten years. Otherwise the math doesn't work.

Critics argue that predicting paradigm shifts is impossible, that Sutskever's timeline is speculative. Fair enough. But consider the alternative bet: that scaling will somehow resume delivering predictable gains despite exhausted data, that current architectures will achieve human-level learning despite fundamental limitations in generalization.

That bet requires believing the laws of information theory will bend. The data wall is physics, not opinion.

The Generalization Problem Isn't Getting Solved by Scaling

Humans generalize orders of magnitude more efficiently than current AI systems.

This isn't about sample efficiency alone. It's about reliability, robustness, and the ability to transfer understanding across domains without explicit rewards.

A junior analyst at a fintech startup in Austin observes three examples of a new analysis technique and applies it to novel datasets within hours. An AI model trained on thousands of examples still stumbles when the problem structure shifts slightly.

Research from Carnegie Mellon's Machine Learning Department, published in December 2024, quantified this gap: humans achieve 80%+ accuracy on novel task variations after seeing 5–10 examples. State-of-the-art models require 1,000+ examples to reach similar performance, and their accuracy drops precipitously with context changes.

Current reinforcement learning uses crude, manually specified rewards. Evolution encoded sophisticated value functions into biological systems over millions of years. Understanding and replicating that sophistication is a research problem, not an engineering one. Scaling won't solve it.

Why Your Infrastructure Investments Are at Risk

Companies betting billions on pre-training infrastructure may face obsolescence within three to five years.

The hardware optimized for massive parallel training runs, the data pipelines built for internet-scale ingestion, the engineering teams specialized in distributed systems for pre-training—all of this infrastructure assumes the scaling paradigm continues. It won't.

Safe Superintelligence Inc.'s approach signals where the industry is heading: a "straight-shot" to superintelligence with no intermediate product focus, as stated in their founding documents from June 2024. This means betting on fundamental breakthroughs rather than incremental improvements. It means research, not engineering optimization.

OpenAI's technical reports from early 2025 show a similar shift. The company now emphasizes post-training techniques and reinforcement learning innovations over pre-training scale. Anthropic's Constitutional AI research, updated in March 2025, focuses on novel training paradigms that don't rely on massive pre-training.

The pattern is consistent across frontier labs. If the leading organizations have pivoted away from scaling-era infrastructure, why are most companies still investing in it?

The Skills That Will Matter

The next era demands understanding core learning principles, not mastering current tools.

Sutskever's framework points toward several critical areas:

First, sophisticated value functions. How do biological systems encode high-level goals into learning mechanisms? Research from NYU's Center for Neural Science, published in Neuron in January 2025, explores how emotional systems guide human learning. Translating these insights into AI training methods requires neuroscience expertise, not just machine learning engineering.

Second, generalization mechanisms. Why do humans transfer knowledge so efficiently? Stanford's Psychology Department published research in March 2025 identifying specific cognitive mechanisms underlying human generalization. Understanding and implementing these mechanisms requires cognitive science background, not just deep learning expertise.

Third, self-improvement systems. Sutskever predicts AI progress will become "extremely unpredictable and unimaginable" once systems begin self-improvement. Designing safe, controllable self-improvement requires expertise in formal verification, game theory, and AI safety—fields most practitioners have never studied.

These aren't incremental skill additions. They represent a fundamental shift in what AI development requires.

The Counterargument: Maybe Scaling Isn't Dead

The strongest objection is that paradigm shift predictions have been wrong before.

Neural networks faced multiple "AI winters." Deep learning skeptics in 2010 argued it would never scale. They were wrong.

But this situation is different in a crucial way: the data constraint is physical, not theoretical. There is a finite amount of quality text data on the internet. That data has been consumed. No amount of algorithmic innovation changes this fact.

The scaling laws worked because they had fuel. The fuel is gone.

Could synthetic data generation solve this? Possibly. But research from Google DeepMind, published in Science in February 2025, shows that models trained primarily on synthetic data suffer from "model collapse"—progressive degradation in quality as they train on their own outputs. Synthetic data helps at the margins. It doesn't replace the internet-scale corpus that powered the scaling era.

Could multimodal data (video, audio, sensor data) extend scaling? Again, possibly. But Sutskever's analysis suggests this only delays the inevitable. The fundamental limitations in how current models learn—reward hacking, poor generalization, lack of genuine understanding—persist regardless of data modality.

The paradigm shift isn't coming because researchers predict it. It's coming because the current paradigm has hit physical limits.

What to Do Now

Diversify your expertise immediately.

If 80% of your skills are scaling-era specific (dataset engineering, distributed training, benchmark optimization), you're overexposed. Allocate at least 40% of learning time to next-paradigm skills: neuroscience-inspired learning mechanisms, formal verification methods, AI safety research, cognitive science foundations.

Question infrastructure investments.

If your organization is planning major capital expenditure on pre-training infrastructure, demand clear answers: What happens if the scaling paradigm ends in 24 months? What's the pivot strategy? How much of this investment remains valuable in a research-driven era?

If leadership can't answer these questions, the investment is reckless.

Follow the frontier labs, not the market.

The broader AI industry lags frontier research by 18–36 months. What OpenAI, Anthropic, and Safe Superintelligence Inc. prioritize today will define industry hiring and investment in 2027–2028. Their pivot away from scaling is the signal. The market hasn't processed it yet.

Build flexibility into career strategy.

The next five years will be more volatile than the last five. Sutskever's timeline—five to twenty years to human-level learning AI—implies multiple paradigm shifts, not one. Optimize for learning speed and adaptability, not for mastery of current tools.

The scaling era delivered five years of predictable progress. That predictability is gone. The researchers who built the era have moved on. The question is whether practitioners will follow before their expertise becomes obsolete.

The light from distant stars takes billions of years to arrive. AI paradigm shifts take months. The light is already traveling. Are you ready for what it reveals?
