A decade+ ago, there was this post, A Brief Rant on the Future of Interaction Design, which noted that we seem to be designing all our devices to have smooth glass omni-interfaces. It opens with these vignettes of how people seem to expect the future to be:
And when you look at, like, Marvel Movies that depict the Near Future, it's basically the same thing except holograms:
Which is 3D, which is nice, but there's something fundamentally... sad/impoverished about it.
The essay notes:
Before we think about how we should interact with our Tools Of The Future, let's consider what a tool is in the first place.
I like this definition: A tool addresses human needs by amplifying human capabilities.
That is, a tool converts what we can do into what we want to do. A great tool is designed to fit both sides.
In this rant, I'm not going to talk about human needs. Everyone talks about that; it's the single most popular conversation topic in history.
And I'm not going to talk about technology. That's the easy part, in a sense, because we control it. Technology can be invented; human nature is something we're stuck with.
I'm going to talk about that neglected third factor, human capabilities. What people can do. Because if a tool isn't designed to be used by a person, it can't be a very good tool, right?
Take another look at what our Future People are using to interact with their Future Technology:
Do you see what everyone is interacting with? The central component of this Interactive Future? It's there in every photo!
And that's great! I think hands are fantastic!
Hands do two things. They are two utterly amazing things, and you rely on them every moment of the day, and most Future Interaction Concepts completely ignore both of them.
Hands feel things, and hands manipulate things:
Go ahead and pick up a book. Open it up to some page.
Notice how you know where you are in the book by the distribution of weight in each hand, and the thickness of the page stacks between your fingers. Turn a page, and notice how you would know if you grabbed two pages together, by how they would slip apart when you rub them against each other.
Go ahead and pick up a glass of water. Take a sip.
Notice how you know how much water is left, by how the weight shifts in response to you tipping it.
Almost every object in the world offers this sort of feedback. It's so taken for granted that we're usually not even aware of it. Take a moment to pick up the objects around you. Use them as you normally would, and sense their tactile response — their texture, pliability, temperature; their distribution of weight; their edges, curves, and ridges; how they respond in your hand as you use them.
There's a reason that our fingertips have some of the densest areas of nerve endings on the body. This is how we experience the world close-up. This is how our tools talk to us. The sense of touch is essential to everything that humans have called "work" for millions of years.
Now, take out your favorite Magical And Revolutionary Technology Device. Use it for a bit.
What did you feel? Did it feel glassy? Did it have no connection whatsoever with the task you were performing?
I call this technology Pictures Under Glass. Pictures Under Glass sacrifice all the tactile richness of working with our hands, offering instead a hokey visual facade.
Is that so bad, to dump the tactile for the visual? Try this: close your eyes and tie your shoelaces. No problem at all, right? Now, how well do you think you could tie your shoes if your arm was asleep? Or even if your fingers were numb? When working with our hands, touch does the driving, and vision helps out from the back seat.
Pictures Under Glass is an interaction paradigm of permanent numbness. It's a Novocaine drip to the wrist. It denies our hands what they do best. And yet, it's the star player in every Vision Of The Future.
To me, claiming that Pictures Under Glass is the future of interaction is like claiming that black-and-white is the future of photography. It's obviously a transitional technology. And the sooner we transition, the better.
What can you do with a Picture Under Glass? You can slide it.
That's the fundamental gesture in this technology. Sliding a finger along a flat surface.
There is almost nothing in the natural world that we manipulate in this way.
That's pretty much all I can think of.
Okay then, how do we manipulate things? As it turns out, our fingers have an incredibly rich and expressive repertoire, and we improvise from it constantly without the slightest thought. In each of these pictures, pay attention to the positions of all the fingers, what's applying pressure against what, and how the weight of the object is balanced:
Many of these are variations on the four fundamental grips. (And if you like this sort of thing, you should read John Napier's wonderful book.)
Suppose I give you a jar to open. You actually will switch between two different grips:
You've made this switch with every jar you've ever opened. Not only without being taught, but probably without ever realizing you were doing it. How's that for an intuitive interface?
I read that several years ago, and... sorta assumed someone would be on the ball about making "UI that is not hands sliding over glass" happen. Since then I've watched cars replace their knobs and such with More Glass, and been sad. And it's become more clear to me that I and many people are addicted to shiny screens.
There's, notably, a good reason to make more UI devices into screens: screens are much more flexible than physical, fixed-purpose buttons and widgets. You can make apps that do all kinds of stuff, not just one thing.
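To make that concrete: on a screen, controls are just data, so the same slab of glass can be a music player one moment and a climate panel the next, while a molded hardware panel is frozen at the factory. Here's a tiny TypeScript sketch of that idea (the apps and names are all made up for illustration, not from the essay):

```typescript
// On a screen, a control is just data; the same surface can render any set.
type Control =
  | { kind: "button"; label: string; onPress: () => void }
  | { kind: "slider"; label: string; min: number; max: number };

// Each "app" declares its own control layout.
const musicApp: Control[] = [
  { kind: "button", label: "Play", onPress: () => console.log("play") },
  { kind: "slider", label: "Volume", min: 0, max: 100 },
];

const climateApp: Control[] = [
  { kind: "slider", label: "Temperature", min: 16, max: 30 },
  { kind: "button", label: "Defrost", onPress: () => console.log("defrost") },
];

// The glass re-renders whatever the active app declares.
function render(controls: Control[]): void {
  for (const c of controls) {
    console.log(c.kind === "button" ? `[ ${c.label} ]` : `${c.label}: ${c.min}-${c.max}`);
  }
}

render(musicApp);   // the glass is a music player now...
render(climateApp); // ...and a climate panel a moment later.
```

A hardware panel can't do that swap; whatever knobs it shipped with are the knobs it has forever.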
There is the idea that we could go back to single-use devices, where you don't need all that flexibility. This is appealing to me, but I don't really see how it can be an equilibrium point for something society adopts en masse. Laptops are too useful.
But, it seems like there could be some kind of... idk, "Smart Putty"-based device that can actually reshape itself into various little knobs and buttons?
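As far as I know, no such hardware exists, so everything in this sketch is invented for illustration. But if it did, the programming model might look something like this: a surface that extrudes real knobs and ridges on demand, keeping the screen's reconfigurability without giving up touch.

```typescript
// Hypothetical API for a shape-shifting "Smart Putty" surface.
// None of these names correspond to a real device or library.

interface TactileControl {
  id: string;
  shape: "knob" | "button" | "ridge";
  x: number; // position on the surface, in mm
  y: number;
}

class SmartPuttySurface {
  private controls: TactileControl[] = [];

  // Physically raise a control out of the surface.
  extrude(control: TactileControl): void {
    this.controls.push(control);
    console.log(`extruding ${control.shape} "${control.id}" at (${control.x}, ${control.y})`);
  }

  // Melt everything flat so the next app can reshape the surface.
  flatten(): void {
    this.controls = [];
    console.log("surface flattened");
  }
}

// A car dashboard could grow a real volume knob...
const dash = new SmartPuttySurface();
dash.extrude({ id: "volume", shape: "knob", x: 40, y: 20 });

// ...then melt it away and grow a scrubbing ridge for navigation.
dash.flatten();
dash.extrude({ id: "route-scrubber", shape: "ridge", x: 10, y: 50 });
```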
Yesterday I was thinking "man, some LessWrong guy who for whatever reason isn't worried about AI x-risk but is otherwise ambitious should make this their life mission."
Then I immediately remembered "oh, right, the future of UI interaction is here, and it's LLM agents." And the actual next Big UI Thing is going to be an audio-primary device that lets you ask AIs for things, gives you exactly what you asked for, and then anticipates what you're going to ask for next. It doesn't leverage your human racial bonus for having hands, but it does leverage your human racial bonus for having ears, a mouth, and social interaction, which is pretty good.
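Here's a minimal sketch of what that audio-first loop might look like. The three helpers are stand-ins I made up for whatever speech and LLM stack you'd actually use; the point is just the shape of it: listen, answer, then speculatively draft a reply to the predicted follow-up.

```typescript
// Stand-ins for a real mic/LLM/TTS stack (assumptions, not real APIs).
async function listen(): Promise<string> {
  return "what's on my calendar today?"; // pretend the mic heard this
}
async function askModel(prompt: string): Promise<string> {
  return `(model's answer to: ${prompt})`; // pretend LLM output
}
async function speak(text: string): Promise<void> {
  console.log(`speaker: ${text}`);
}

async function agentLoop(turns: number): Promise<void> {
  // Pre-drafted answer to the predicted next request, carried across turns.
  let anticipated: Promise<string> | null = null;

  for (let t = 0; t < turns; t++) {
    const request = await listen();

    // Fold any pre-drafted material into the live answer.
    const draft = anticipated ? await anticipated : "none";
    const answer = await askModel(`User said: "${request}". Pre-draft: ${draft}`);
    await speak(answer);

    // Anticipate: start drafting the likely follow-up before it's asked.
    anticipated = askModel(`Predict and pre-answer the follow-up to: "${request}"`);
  }
}

void agentLoop(2);
```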
But the Smart Putty stuff still sounds cool, and Audio AI UI still leaves me a bit sad about missing out on more tactile experiences.
So, someone get on that.