Daily Edition

September 24, 2025
  1. This AI-powered lab runs itself—and discovers new materials 10x faster. Researchers at North Carolina State University have developed a “self-driving” chemistry lab that collects data every half-second, yielding at least ten times more information per experiment than traditional batch methods. By flowing samples continuously through its analytical instruments instead of pausing between trials, the system produces a full “movie” of reaction dynamics rather than isolated snapshots. Early applications show accelerated discovery of new materials for clean energy and electronics, cutting timelines from months to days. The platform also uses real-time machine learning to choose each next experiment, steering its exploration of vast chemical spaces without human intervention; a minimal sketch of that measure-model-select loop appears after the quote below.

    “The sample is moving continuously through the system and, because the system never stops characterizing the sample, we can capture data on what is taking place in the sample every half second.” (sciencedaily.com)
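
    For readers who want the flavor of that closed loop, here is a minimal measure-model-select sketch in Python. Everything in it is an illustrative stand-in rather than the NC State platform itself: the measure() function plays the half-second instrument reading, a Gaussian-process surrogate plays the lab’s model, and a one-dimensional grid plays the chemical space.

    ```python
    # Minimal active-learning loop in the spirit of a self-driving lab.
    # All names here are illustrative stand-ins, not the real platform:
    # measure() mimics an instrument reading; the 1-D grid mimics a
    # chemical space of candidate synthesis conditions.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)

    def measure(x):
        # Pretend instrument: a hidden response curve plus sensor noise.
        return np.sin(3 * x) + 0.1 * rng.normal()

    candidates = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
    X, y = [[0.5]], [measure(0.5)]  # one seed experiment

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-2)
    for _ in range(20):
        gp.fit(np.array(X), np.array(y))      # update model with all data so far
        mu, sigma = gp.predict(candidates, return_std=True)
        pick = candidates[np.argmax(mu + 2.0 * sigma)]  # UCB: explore + exploit
        X.append(pick.tolist())
        y.append(measure(pick[0]))            # run the chosen "experiment"

    best = candidates[np.argmax(gp.predict(candidates))]
    print(f"predicted optimum near x = {best[0]:.3f}")
    ```

    The point of the sketch is the loop itself: each measurement immediately updates the model, and the model immediately picks the next condition to try, which is what turns isolated snapshots into the continuous “movie” the researchers describe.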

  2. When AI takes over the practice of science we will likely find the results strange and incomprehensible. Brandon Boesch argues that fully automated scientific discovery—where robots design, execute and interpret experiments—will produce insights that elude human understanding. Drawing on Paul Humphreys’s “automated scenario,” the essay examines how motivations driving human science (curiosity, explanation, control) vanish when machines pursue optimization alone. Instead of replacing human inquiry, super-intelligent lab systems will create a parallel “species” of science, driven by metrics and patterns beyond our epistemic reach. Boesch concludes that this automated science must coexist alongside human-led research, each reflecting different values and aims, lest we surrender our desire for understanding to inscrutable algorithms.

    “Arguably, the most likely reason we will end up with the automated scenario is because we simply cannot escape the forces of capital and competition. The future may simply happen to us, whether we reflectively want it or not.” (aeon.co)

  3. An individual cannot be ‘captured’ in a photograph. Daniel Star dismantles the notion that portrait photographs “capture the essence” of their subjects. He shows how technical choices—exposure time, focus, framing—and post-production editing make photographs constructed images rather than transparent windows. Even candid street shots offer only interpretations, and potentially misleading ones, since we lack independent evidence of the sitter’s state or character at the moment of capture. Star warns against treating portraits as factual records and invites viewers instead to appreciate the aesthetic, ethical and narrative layers that make photography a creative art.

    “Photographs ‘capture’ scenes only in a highly attenuated sense. We do not see the world in the way we see scenes in photographs. Skilled photographers exercise a significant degree of control over the content and production of photographs in a large number of ways, both before the shutter closes and afterwards.” (aeon.co)

  4. Let’s ditch the dangerous idea that life is a story. Philosopher Galen Strawson challenges the popular view that we naturally experience our lives as coherent narratives. He distinguishes between “Narrative” selves—who organize memories and identity into plot-like structures—and “non-Narrative” selves, who perceive life as a series of discrete events without overarching story arcs. Strawson argues that insisting on narrative self-construction can distort memory, fuel anxiety, and impose Western, linear frameworks on diverse individual experiences. Embracing non-narrative modes allows for authenticity and acceptance of life’s “great shambles,” freeing us from the pressure to cast every moment as part of a heroic quest.

    “Many of us aren’t Narrative in this sense. We’re naturally—deeply—non-Narrative. We’re anti-Narrative by fundamental constitution. It’s not just that the deliverances of memory are, for us, hopelessly piecemeal and disordered, even when we’re trying to remember a temporally extended sequence of events.” (aeon.co)

  5. The Strange Physics That Gave Birth to AI. Elise Cutts traces the unlikely lineage of modern neural networks to the mid-20th-century physics of spin glasses—metal alloys whose disordered magnetic interactions defied expectations. John Hopfield, inspired by spin-glass mathematics, repurposed these ideas in 1982 to build simple memory-recalling networks, reigniting interest in artificial neural models. Cutts shows how insights from complex-systems physics—long dismissed as academic curiosities—provided the theoretical underpinnings for today’s deep-learning revolution. This convergence suggests that breakthroughs often emerge at the intersection of seemingly unrelated fields, and that future AI may draw on yet more “useless” branches of fundamental science; a toy Hopfield memory sketched after the quote below makes the spin-glass connection concrete.

    “Spin glasses might turn out to be the most useful useless things ever discovered. These materials … exhibit puzzling behaviors that captivated a small community of physicists in the mid-20th century. Spin glasses themselves turned out to have no imaginable material application, but the theories devised to explain their strangeness would ultimately spark today’s revolution in artificial intelligence.” (quantamagazine.org)
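
    To make that connection concrete, here is a toy Hopfield-style memory in Python. The six-spin patterns and Hebbian couplings are invented for illustration; the relevant point is that the update rule walks downhill on an energy built from pairwise couplings between spins, the same form as a spin-glass Hamiltonian.

    ```python
    # Toy Hopfield-style memory. The quantity being minimized,
    # E = -1/2 * sum_ij J[i, j] * s[i] * s[j], has the form of a
    # spin-glass Hamiltonian; the stored patterns are arbitrary examples.
    import numpy as np

    def train(patterns):
        # Hebbian rule: couplings J built from the stored patterns.
        n = patterns.shape[1]
        J = sum(np.outer(p, p) for p in patterns) / n
        np.fill_diagonal(J, 0)  # no self-coupling
        return J

    def recall(J, state, sweeps=10):
        # Asynchronous spin flips move the state downhill in energy.
        s = state.copy()
        for _ in range(sweeps):
            for i in np.random.permutation(len(s)):
                s[i] = 1 if J[i] @ s >= 0 else -1
        return s

    patterns = np.array([[1, -1, 1, -1, 1, -1],
                         [1, 1, 1, -1, -1, -1]])
    J = train(patterns)
    noisy = np.array([-1, -1, 1, -1, 1, -1])  # pattern 0, first spin flipped
    print(recall(J, noisy))  # -> [ 1 -1  1 -1  1 -1], the stored memory
    ```

    Corrupt a stored pattern, let the spins relax, and the network falls back into the nearest memory: content-addressable recall emerging from the same frustrated-coupling mathematics that once seemed to have no application.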