New project makes Wikipedia data more accessible to AI models. On October 1, 2025, Wikimedia Deutschland unveiled the Wikidata Embedding Project, a vector-based semantic search layer atop Wikidata's nearly 120 million entries drawn from Wikipedia and its sister platforms, enabling natural-language queries by AI models (techcrunch.com). Integrated with the Model Context Protocol, the open database supports retrieval-augmented generation, grounding LLM outputs in community-verified facts (a minimal retrieval sketch follows the quote below). The initiative, built in partnership with Jina.AI and DataStax, promises a collaborative alternative to proprietary data sources.
“This Embedding Project launch shows that powerful AI doesn’t have to be controlled by a handful of companies,” said project manager Philippe Saadé (techcrunch.com).
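To make the retrieval-augmented pattern concrete, here is a minimal sketch of semantic search over a hand-made fact list using the open sentence-transformers library. The fact list, model name, and prompt wording are illustrative assumptions, not the project's actual API, endpoint, or data.

```python
# Minimal RAG sketch: embed a question, retrieve the closest community-verified
# facts, and build a grounded prompt for an LLM. Facts, model choice, and prompt
# format are illustrative; the Wikidata Embedding Project's real endpoint and
# schema are not used here.
from sentence_transformers import SentenceTransformer, util

facts = [
    "Douglas Adams (1952-2001) was an English author best known for The Hitchhiker's Guide to the Galaxy.",
    "Marie Curie won Nobel Prizes in both physics (1903) and chemistry (1911).",
    "The Eiffel Tower was completed in 1889 for the Exposition Universelle in Paris.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence-embedding model works
fact_vectors = model.encode(facts, convert_to_tensor=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k facts most semantically similar to the natural-language query."""
    query_vector = model.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_vector, fact_vectors)[0]
    return [facts[i] for i in scores.topk(k).indices.tolist()]

def grounded_prompt(question: str) -> str:
    """Assemble an LLM prompt that leans on retrieved facts rather than model memory."""
    context = "\n".join(f"- {fact}" for fact in retrieve(question))
    return f"Answer using only these verified facts:\n{context}\n\nQuestion: {question}"

print(grounded_prompt("Which scientist won Nobel Prizes in two different fields?"))
```

In the project itself, the local fact list and encoder would presumably be replaced by queries against the hosted Wikidata embedding index, with the Model Context Protocol handling the handoff between the database and the LLM.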
Why big data is actually small, personal and very human. This 3,900-word essay reframes the “petabyte era” by arguing that behind every stream of bytes lies human subjectivity and lived experience (aeon.co). Historian Rebecca Lemov traces big data’s lineage from early 20th-century social-science data collection to today’s clickstreams, showing that these traces are “faint images of me” revealing our hopes, desires, and vulnerabilities. She warns that the prevailing view of big data as an impersonal force stifles debate over privacy and power, and insists that recognising its inherent humanness is the only path to meaningful regulation.
“‘Big data is people!’” Lemov declares, challenging readers to connect public-sphere discourse with personal intimacy (aeon.co).
The secret formula for Apple’s rounded corners. In this 4-minute visual essay, Arun Venkatesan unveils a precise “roundness score” for Apple’s product lines, from 0 percent on desktop hardware to 100 percent on in-ear earbuds (arun.is). By measuring the ratio of corner-curve diameter to device side length (a toy calculation follows the quote below), he reveals a pattern: objects closer to the body are rounder, enhancing ergonomics and emotional appeal. Venkatesan connects design principles to human perception, noting that designers leverage “squircles” to avoid the amygdala’s fear response to sharp edges, and to evoke trust and delight.
“When presented with objects that possess sharp angles or pointed features, a region of the human brain involved in fear processing, the amygdala, is activated,” he writes (arun.is).
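Taken literally, the score described above reduces to a one-line ratio. The sketch below uses invented dimensions, not Apple's actual measurements, to show how a boxy desktop lands near 0 percent while a circular earbud reaches 100.

```python
# Toy computation of the "roundness score" described above: corner-curve
# diameter as a fraction of the device's side length, expressed as a percentage.
def roundness_score(corner_radius_mm: float, side_length_mm: float) -> float:
    """Return roundness as a percentage: 0 = sharp corner, 100 = fully round."""
    corner_diameter = 2 * corner_radius_mm
    return 100 * corner_diameter / side_length_mm

# A boxy desktop with near-square corners scores close to 0 percent...
print(round(roundness_score(corner_radius_mm=1, side_length_mm=200), 1))  # 1.0
# ...while a circular earbud, whose corner diameter equals its width, scores 100.
print(round(roundness_score(corner_radius_mm=9, side_length_mm=18), 1))   # 100.0
```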
Why Tool AIs Want to Be Agent AIs. This in-depth analysis argues that confining AI to a passive “Tool” mode is unstable: agent-style autonomy not only yields economic advantages but also enhances inferential power via self-optimization (gwern.net). The author dissects the conceptual distinction (tools calculate actions, agents execute them) and shows that the same reinforcement learning that fuels agency also improves learning, hyperparameter tuning, and resource allocation; a minimal Tool/Agent contrast is sketched after the quote below. As markets and research push for efficiency, Agent AIs will outcompete Tool AIs, making agency the de facto standard for future AI development.
“An ‘agent,’ by contrast, has an underlying instruction set that conceptually looks like: ‘(1) Calculate which action… (2) Execute Action A.’ In any AI where (1) is separable, (2) can be set to the ‘tool’ version—but users will choose agents unless there’s a compelling cost,” the essay explains (gwern.net).
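The quoted instruction sets translate almost directly into code. The sketch below is an illustrative contrast, not the essay's own example: both modes share step (1), and only step (2), whether the recommendation is executed, differs. The observation, policy, and action names are invented.

```python
# Illustrative Tool/Agent split: both modes run the same "calculate which action"
# step; only the agent goes on to execute the action it chose.
from typing import Callable

def recommend_action(observation: str) -> str:
    """Step (1): calculate which action best fits the observation (shared by both modes)."""
    return "restart_service" if "error" in observation else "do_nothing"

def tool_ai(observation: str) -> str:
    """Tool mode: report the recommended action and stop; a human decides whether to act."""
    return f"Suggested action: {recommend_action(observation)}"

def agent_ai(observation: str, execute: Callable[[str], None]) -> None:
    """Agent mode: identical step (1), but step (2) executes the action directly."""
    execute(recommend_action(observation))

# Usage: the agent closes the loop itself, while the tool only advises.
agent_ai("log: error in worker 3", execute=lambda action: print(f"Executing: {action}"))
print(tool_ai("log: error in worker 3"))
```

The economic pull the essay describes follows from that last line of difference: once step (2) is automated, keeping a human in the loop becomes a cost that users must actively choose to pay.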
A Visual Essay: Discovering the phenomenology and philosophy of light. Photographer Mik Efford guides readers through an 8-minute exploration of light’s dual nature: its measurable physics and its subjective perception (medium.com). Organised into natural and artificial light sources, the essay pairs personal images with philosophical questions: Do skiers under harsh sunlight perceive the same phenomena as distant viewers? Can scientists reduce a sunset’s glow to mere wavelengths? Efford bridges science and phenomenology, invoking Paul Churchland’s critique and Arthur C. Clarke’s maxim.
“Any sufficiently advanced technology is indistinguishable from magic,” he reminds us, as each photograph prompts reflection on the unseen qualities of light (medium.com).