Daily Edition

November 11, 2025

1. “Introducing Nested Learning: A new ML paradigm for continual learning”

A deep dive from Google Research into “Nested Learning,” a framework that treats large models as collections of interconnected optimization loops—each with its own context flow and update rate—to tackle catastrophic forgetting and enable true continual learning.
“By treating architecture and optimization as a single, coherent system of nested optimization problems, we unlock a new dimension for design, stacking multiple levels.” (research.google)
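The core idea, nested optimization loops that update at different rates, can be sketched in a toy form. This is an illustrative sketch only, not Google's actual Nested Learning implementation: a fast inner parameter is updated every step, while a slow outer "meta" parameter (here, the learning rate) adapts only every K steps.

```python
# Toy sketch of two nested optimization loops with different update rates
# (illustrative only; not the Nested Learning paper's method).
import numpy as np

rng = np.random.default_rng(0)

# Simple 1-D regression target: y = 3x plus noise.
X = rng.normal(size=200)
y = 3.0 * X + 0.1 * rng.normal(size=200)

w = 0.0            # fast (inner) parameter, updated every step
lr = 0.01          # slow (outer) parameter, updated every K steps
K = 10
prev_loss = None

for t, (x_t, y_t) in enumerate(zip(X, y)):
    grad = 2 * (w * x_t - y_t) * x_t      # inner loop: one gradient step
    w -= lr * grad
    if (t + 1) % K == 0:                  # outer loop: slower update rate
        loss = float(np.mean((w * X - y) ** 2))
        if prev_loss is not None:
            # crude meta-rule: grow lr while loss falls, shrink otherwise
            lr *= 1.1 if loss < prev_loss else 0.5
        prev_loss = loss

print(w)  # should settle near the true slope of 3.0
```

Each loop has its own "context flow" (per-sample gradients for the inner loop, batch loss history for the outer one) and its own update frequency, which is the structural pattern the article generalizes to full architectures.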

2. “Art by algorithm”

Ed Finn traces the arc from Deep Blue to Auto-Tune, showing how computation has fundamentally reshaped aesthetics—from centaur chess to algorithmically enhanced photography—and argues that human surprise remains the last bastion of genuine creativity.
“We will continue responding most powerfully to those creative stimuli that somehow reconfigure our brains, literally allowing us to see in a new way.” (aeon.co)

3. “Artificial intelligence won’t replace designers; it will supercharge them”

At Bengaluru’s DesignUp conference, leading practitioners agree that AI augments human judgment, freeing designers from rote work and bringing the craft of curation, taste, and problem framing to the forefront.
“Design becomes much more about curation, taste, and the ability to meet user needs. AI will make you faster, but it isn't going to decide what problem to solve or what idea to evolve.” (timesofindia.indiatimes.com)

4. “Can computers think? No. They can’t actually do anything”

Alva Noë argues that genuine thinking requires resistance and self-critique—qualities machines, as mere symbol manipulators, inherently lack—inviting readers to reclaim a richer, embodied vision of intelligence beyond hype.
“For all the promise and dangers of AI, computers plainly can’t think. To think is to resist—something no machine does.” (aeon.co)

5. “What Gödel’s incompleteness theorems say about AI morality”

Brandon Boesch explains why any AI system built on formal logic will inevitably harbor moral blind spots, showing that Gödel’s work places an insurmountable boundary on machine ethics and underscores the necessity of human judgment.
“No AI, no matter how sophisticated, could prove all moral truths it can express… Gödel’s theorems place a logical boundary on what AI, if built on formal systems, can ever fully prove or validate about morality from within those systems.” (aeon.co)