The Quiet Power of Defaults

If you have never changed the settings on your phone, congratulations. You have already experienced the power of defaults.

Author: Shreya Kudumala

Note: This article began with an AI-generated first draft, which is probably the most honest demonstration of the argument to follow.

We have a tendency to accept preselected options because they reduce cognitive effort. In a now-famous analysis, behavioral economists Eric Johnson and Daniel Goldstein compared opt-in and opt-out policies for organ donation across Europe. Countries like Germany, which required citizens to actively consent, had donation rates below 15 percent. Neighboring Austria, with an opt-out default, saw donation rates above 90 percent. In this case, the default selection was not merely a suggestion, but a behavioral anchor.

Computing history is full of these anchors. The QWERTY keyboard, designed in the 1870s to slow typists and prevent typewriter jams, became the de facto global standard despite more efficient layouts. When Microsoft set Internet Explorer as the default browser in Windows, its market share climbed above 90 percent. And when an iPhone rings in a crowded room, half the room checks their pockets, because almost no one bothers to change the default ringtone from Marimba.

This brings us to today's AI era.

When we use AI to generate the first draft of a product spec or a marketing plan, that draft immediately becomes our anchoring frame. A large body of psychology research, beginning with Tversky and Kahneman in the 1970s, shows that humans consistently anchor their reasoning on initial information, even when they know it is arbitrary. With AI, that initial information is not arbitrary but systematically average: LLMs are trained to predict the most probable continuation of text based on vast datasets, so their outputs converge toward the center of the distribution. In other words, the model's default answer is usually the most statistically average option.
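To make that convergence concrete, here is a toy sketch. The probabilities are invented for illustration, not taken from any real model. It contrasts greedy decoding, which always returns the single most probable option, with temperature sampling, which flattens the distribution and lets the margins through:

```python
import math
import random
from collections import Counter

# Hypothetical next-"idea" distribution: training mass piles up
# on the conventional option, mirroring the "statistical center."
ideas = {"conventional": 0.70, "novel_a": 0.15, "novel_b": 0.10, "wild": 0.05}

def greedy(dist):
    """Greedy decoding: always return the single most probable option."""
    return max(dist, key=dist.get)

def sample(dist, temperature=1.0, rng=random):
    """Temperature sampling: rescale log-probabilities, then draw.
    Higher temperature flattens the distribution, so less probable
    options surface more often."""
    logits = {k: math.log(p) / temperature for k, p in dist.items()}
    z = sum(math.exp(v) for v in logits.values())
    r, acc = rng.random(), 0.0
    for k, v in logits.items():
        acc += math.exp(v) / z
        if r <= acc:
            return k
    return k  # float-rounding fallback: return the last option

rng = random.Random(0)
print(greedy(ideas))  # the statistical center, every time
draws = Counter(sample(ideas, temperature=2.0, rng=rng) for _ in range(1000))
print(draws.most_common())  # a spread across the alternatives
```

The design point is that the default (greedy) behavior erases the tails of the distribution entirely; any exploration has to be engineered in.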

This is extremely useful for tasks that benefit from standardization, like drafting documentation or summarizing meeting notes. But human creativity lives at the margins. A 2024 Stanford study found that participants who began brainstorming sessions with AI-generated ideas showed reduced ideational diversity compared to those who began independently. Not necessarily weaker ideas, but less varied ones.

The challenge for AI-native UI/UX is designing defaults that promote exploration rather than convergence. Alternative approaches are already emerging. Some systems generate multiple contrasting first drafts, intentionally creating a spread rather than a center. Others reveal uncertainty or expose the internal ranking of possible answers. These small shifts change how users anchor their reasoning. In HCI research, this is called "choice visibility": when the system surfaces its alternatives, the user regains some agency over their starting point.
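As a rough illustration of the pattern, here is a sketch of a choice-visible interface that returns a scored spread of contrasting drafts instead of a single anchor. The draft texts, their scores, and the `generate_candidates` stub are all hypothetical, standing in for real model calls:

```python
import random

def generate_candidates(prompt, k=3, rng=None):
    """Return k contrasting drafts instead of one default.

    This stub stands in for a real model sampled at different
    temperatures or seeds; the point is the shape of the interface:
    a spread of options with visible scores, not a single anchor."""
    rng = rng or random.Random(0)
    pool = [  # (draft, model score) -- invented values for illustration
        ("safe, conventional framing", 0.62),
        ("contrarian framing", 0.21),
        ("speculative framing", 0.11),
        ("absurdist framing", 0.06),
    ]
    # Weighted sampling without replacement, so the spread varies by seed.
    chosen, remaining = [], list(pool)
    for _ in range(min(k, len(remaining))):
        total = sum(score for _, score in remaining)
        r, acc = rng.random() * total, 0.0
        for item in remaining:
            acc += item[1]
            if r <= acc:
                chosen.append(item)
                remaining.remove(item)
                break
    return chosen

def show_choices(prompt):
    """Surface alternatives with their relative scores ('choice visibility')."""
    for text, score in generate_candidates(prompt):
        print(f"[{score:.0%}] {text}")

show_choices("draft a product spec intro")
```

Showing the scores alongside the drafts is the key move: the user sees that the top-ranked option is a statistical preference, not the answer.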

Douglas Engelbart wrote in 1962 that computers should augment human intellect by increasing our capability to approach complex problems. His vision was not automation but expansion. AI can still serve that purpose if its outputs become starting points rather than conclusions. Then these systems become tools for imagining futures we couldn't have reached alone.