These kinds of haptic feedback devices exist and got talked about a decent amount 10-15 years ago, but mostly failed to take off for a variety of reasons (I don't remember all the reasons, but cost, durability, and transparency were common ones). The first that comes to mind for me is Tactus Technology, which put a film over touchscreens that could dynamically form buttons as needed. I forget if that one was fluid based or electroactive polymer based, but I remember both existing. (EAPs are also used for vibration feedback and actuators, but in this case the idea is to deform them into a fixed shape for as long as needed.)
IIRC there was also a haptic feedback device company that talked about integrations with AR/VR and physics engines and physical modeling tools, so you could e.g. literally feel yourself moving around a digital workshop or other setting and move stuff around and interact with any materials present. Can't remember the name.
I also wish someone would pick these kinds of ideas back up.
I wonder how far you could get with a neodymium Apple Pencil + big honkin' electromagnets. It would require a user interface paradigm capable of kilohertz feedback loops instead of ~100 ms, but also, the point would be kilohertz feedback.
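To make the kilohertz point concrete, here's a toy sketch of what that loop might compute: a 1 kHz proportional force-feedback loop that makes the pen "feel" a virtual wall. Everything here is a made-up stand-in (the stiffness constant, the wall, the simulated pen physics); a real device would read pen position from a sensor and drive coil current, neither of which is modeled.

```python
# Toy sketch of a 1 kHz force-feedback loop for a magnetic stylus.
# All names and numbers here are hypothetical stand-ins, not real hardware APIs.

DT = 0.001         # 1 ms tick -> 1 kHz loop, vs ~100 ms for typical UI events
STIFFNESS = 50.0   # virtual "wall" stiffness (arbitrary units)
WALL_X = 1.0       # position of a virtual surface the pen should feel

def magnet_command(pen_x: float) -> float:
    """Proportional restoring force: push back only while the pen
    penetrates the virtual wall, zero force otherwise."""
    penetration = pen_x - WALL_X
    return -STIFFNESS * penetration if penetration > 0 else 0.0

def simulate(steps: int) -> float:
    """Semi-implicit Euler integration of a unit-mass pen tip drifting
    into the wall; returns the deepest point it reaches."""
    x, v = 0.0, 2.0  # start left of the wall, moving toward it
    max_x = x
    for _ in range(steps):
        v += magnet_command(x) * DT  # force update every millisecond
        x += v * DT
        max_x = max(max_x, x)
    return max_x

deepest = simulate(2000)  # two seconds of simulated loop time
```

The reason the loop rate matters: at 100 ms per update the pen would punch ~0.2 units past the wall between corrections and the "surface" would feel like mush; at 1 kHz the restoring force kicks in within a millisecond and the wall reads as stiff.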
The guy who wrote this essay went on to make an "interactive building-as-computer" thing called Dynamicland.
A decade+ ago, there was this post A Brief Rant on the Future of Interaction Design, which noted that we seem to be designing all our devices to have smooth glass omni-interfaces.
It opens with these vignettes of how people seem to expect the future to be:
And when you look at like Marvel Movies that depict the Near Future, it's basically the same thing except holograms:
Which is 3D, which is nice, but there's something fundamentally... sad/impoverished about it.
The essay notes:
I read that several years ago, and... sorta assumed someone would be on the ball of making "UI that is not hands sliding over glass" happen. Since then I've watched cars replace their knobs and such with More Glass, and been sad. And it's become more clear to me that I and many people are addicted to shiny screens.
There's, notably, a good reason to make more UI devices into screens: screens are much more flexible than hard-modeled buttons and widgets. You can make apps that do all kinds of stuff, not just one thing.
There is the idea that we could go back to single-use devices, where you don't need all that flexibility. This is appealing to me, but I don't really see how it can be an equilibrium point for a thing society adopts en masse. Laptops are too useful.
But, it seems like there could be some kind of... idk, "Smart Putty based device" that can actually reshape itself into various little knobs and buttons?
Yesterday I was thinking "man, some LessWrong guy who for whatever reason isn't worried about AI x-risk but is otherwise ambitious should make this their life mission."
Then, I immediately remembered "oh, right, the future of UI interaction is here, and it's LLM agents." And, the actual next Big UI Thing is going to be an audio-primary device that lets you ask AIs for things, and then they give you exactly what you ask for and anticipate what you're going to ask for next. It doesn't leverage your human racial bonus for having hands, but does leverage your human racial bonus for having ears and a mouth and social interaction, which is pretty good.
But, the Smart Putty stuff still sounds cool, and Audio AI UI still leaves me a bit sad to miss out on more tactile experiences.
So, someone get on that.
I'm not expecting this to be fast, and the oncoming AGI situation also makes it a bit wonky how to prioritize (probably AI will help somehow, but this does require engaging with the World of Atoms with novel materials science, which I expect to be one of the slower things to get accelerated).
The original article ends by noting that in some sense, the iPad was "invented" in 1968, it just took 40 years: