[ Question ]

What (feasible) augmented senses would be useful or interesting?

by AprilSR · 1 min read · 6th Mar 2021 · 9 comments

The BrainPort is a device that partially restores sight to blind people by sending signals through electrodes that rest on the tongue.

The signals are sent to the tongue via a "lollipop," an electrode array about nine square centimeters that sits directly on the tongue. Each electrode corresponds to a set of pixels. White pixels yield a strong electrical pulse, whereas black pixels translate into no signal. Densely packed nerves at the tongue surface receive the incoming electrical signals, which feel a little like Pop Rocks or champagne bubbles to the user.

Seiple works with four patients who train with the BrainPort once a week and notes that his patients have learned how to quickly find doorways and elevator buttons, read letters and numbers, and pick out cups and forks at the dinner table without having to fumble around.

- Scientific American
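
For concreteness, here is a minimal sketch of the pixel-to-pulse mapping the quote describes, assuming a hypothetical 20x20 electrode grid and using Pillow for the image handling; drive_electrodes() is a placeholder, not a real BrainPort API.

```python
# Minimal sketch of the pixel-to-pulse mapping described in the quote.
# Assumes a hypothetical 20x20 electrode grid; uses Pillow for image handling.
from PIL import Image

GRID = (20, 20)  # assumed electrode-array resolution

def frame_to_pulses(path):
    """Downsample a camera frame to the electrode grid.

    Bright pixels map to strong pulses (1.0), dark pixels to none (0.0),
    mirroring the white-strong / black-off scheme described above.
    """
    img = Image.open(path).convert("L").resize(GRID)  # grayscale, 20x20
    px = list(img.getdata())
    w, h = GRID
    return [[px[r * w + c] / 255.0 for c in range(w)] for r in range(h)]

pulses = frame_to_pulses("frame.jpg")
# drive_electrodes(pulses)  # placeholder for whatever the real hardware exposes
```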

The brain seems to be remarkably good at reinterpreting sensory signals. What possible applications are there of this, especially ones that an average person could practically implement today without needing to develop specialized hardware?

2 Answers

Both feeling magnetic fields and sensing cardinal direction are senses that can be acquired with relatively cheap hardware. The first is common enough that I have personally met two people who have it.
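
The cardinal-direction sense in particular is a well-trodden hobbyist project: a ring of vibration motors worn around the waist, where the motor facing north buzzes. A toy sketch of the control logic, with read_heading() and buzz() as hypothetical stand-ins for real magnetometer and motor drivers:

```python
# Toy sketch of a haptic compass: N vibration motors worn in a ring, numbered
# clockwise starting from the wearer's front; the motor facing north buzzes.
# read_heading() and buzz() are hypothetical hardware stand-ins.
N_MOTORS = 8  # assumed motor count

def motor_for_heading(heading_deg):
    """Index of the motor pointing toward magnetic north.

    heading_deg is the compass direction the wearer faces (0 = north),
    so north sits at (-heading_deg) degrees clockwise in the body frame.
    """
    return round(((-heading_deg) % 360) / (360 / N_MOTORS)) % N_MOTORS

# while True:
#     buzz(motor_for_heading(read_heading()))
#     time.sleep(0.1)
```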

First things first: the design of the BrainPort sucks. I've seen retainer versions of these kinds of devices, and given the nature of modern electronics (particularly wireless and battery tech), the resolution of any such device could be significantly enhanced. If you use a retainer as the chassis, it is easy to expand the sensing surface to include the teeth and gums, and likely some of the cheeks too.

It wouldn't be difficult to incorporate haptic, temperature, orientation, and tap sensors. Given that the thing is in your mouth, speech-to-text is a no-brainer, as is sound conduction for output. It would be nice to add buttons and a touchpad; the hardware is doable, but the UX is a difficult problem. An external switch for moving between states would be a quick fix (for example, a magnet ring that you hold to the side of your face for a magnetometer to pick up).
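
A rough sketch of how that magnet-ring mode switch might work, assuming a driver call (read_field_uT(), invented here for illustration) that returns the magnetometer's field magnitude in microtesla:

```python
# Sketch of the magnet-ring mode switch: advance to the next state whenever
# the magnetometer sees a field spike from a ring held near the face.
THRESHOLD_UT = 200.0  # assumed; well above Earth's ~25-65 uT background field

mode = 0
ring_was_present = False

def update_mode(field_uT, n_modes=4):
    """Advance the mode once per ring 'tap' (rising edge above threshold)."""
    global mode, ring_was_present
    ring_present = field_uT > THRESHOLD_UT
    if ring_present and not ring_was_present:
        mode = (mode + 1) % n_modes
    ring_was_present = ring_present
    return mode

# while True:
#     current_mode = update_mode(read_field_uT())  # read_field_uT() is hypothetical
```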

It is also worth mentioning that anything that is in the mouth can be a sensor for collecting data about the wearer. These sensors will know your temperature, heart rate, O2 saturation, possibly your blood glucose, and so on.

Since the BrainPort functions as a display for whatever data is fed to it, any sensor or combination of sensors would work. Fancy sensors can get quite expensive, but anything you can do with a camera and filters is cheap. This is an instance where the low resolution of the device is an asset, because you can use equally low-resolution sensors (at least if you aren't doing any fancy processing and are just dumping pixels as is). Everyone already has a phone for doing off-board processing if it won't fit in the sensor or retainer packages.
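
To illustrate the "display for whatever data" point, here is a sketch of the phone-side step that takes any low-resolution 2-D sensor frame (thermal, UV, sonar depth map, and so on) and normalizes it into the same 0-1 pulse grid as above. The 20x20 grid size is an assumption carried over from the earlier sketch.

```python
# Sensor-agnostic phone-side processing: rescale any 2-D sensor frame to the
# electrode grid and normalize it to the 0..1 pulse range.
import numpy as np

def to_pulse_grid(frame, grid=(20, 20)):
    """Nearest-neighbour resample to `grid` and normalize to 0..1."""
    frame = np.asarray(frame, dtype=float)
    lo, hi = frame.min(), frame.max()
    norm = (frame - lo) / (hi - lo) if hi > lo else np.zeros_like(frame)
    rows = np.linspace(0, frame.shape[0] - 1, grid[1]).astype(int)
    cols = np.linspace(0, frame.shape[1] - 1, grid[0]).astype(int)
    return norm[np.ix_(rows, cols)]  # crude, but "dump pixels as is" in spirit

# thermal = 10 + 40 * np.random.rand(60, 80)  # e.g. a 60x80 thermal frame in deg C
# pulses = to_pulse_grid(thermal)
```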

For me, where this idea gets interesting is in expanding it beyond the mouth. If the interface is primarily touch-based, then you have an entire body's worth of skin that can do the same job. You only need to pick up a probe and start poking yourself to gauge the tactile resolution of various parts of your body. I'd think the ear and the scalp would be good targets for this kind of sensing.

The quick list of stuff to see:

  • Visible light, IR, ultraviolet, the polarisations of light. Camera sensors and filters exist for all of that.
  • Temperature. FLIR sensors are a good fit for this low-resolution application.
  • Lidar, sonar, and radar - this is a solved problem domain, but accuracy costs money.
  • Assisted GPS and other positioning systems.
  • Hybrid processed experiences. If your location and viewing vector can be identified, then computers, especially ones with online access, can fill in the gaps of whatever sensors you're using (especially if your own 'eyes' contribute back to a shared database; now is a good time to mention that you could literally have eyes in the back of your head for this application). You could even be completely 'blind' and without sensors, provided external sensors could track you and send the data back to your interface. A rough sketch of this idea follows below.
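
As promised in the last item, here is a rough sketch of the geometry behind a hybrid experience: given a position and viewing vector, pull objects from a shared database and keep only those inside the viewing cone. The database entries and parameters are invented purely for illustration.

```python
# Sketch of the hybrid-experience item: filter a shared object database down
# to what falls inside the wearer's viewing cone. All data here is made up.
import numpy as np

def visible(objects, position, view_dir, fov_deg=60, max_range=50.0):
    """Return (name, distance) for objects within range and inside the cone."""
    view = np.asarray(view_dir, dtype=float)
    view = view / np.linalg.norm(view)
    cos_half_fov = np.cos(np.radians(fov_deg / 2))
    hits = []
    for name, pos in objects:
        offset = np.asarray(pos, dtype=float) - position
        dist = np.linalg.norm(offset)
        if 0 < dist <= max_range and (offset / dist) @ view >= cos_half_fov:
            hits.append((name, float(dist)))
    return sorted(hits, key=lambda item: item[1])

shared_db = [("door", (3, 0)), ("elevator button", (10, 4)), ("exit sign", (-5, 2))]
print(visible(shared_db, position=np.array([0.0, 0.0]), view_dir=(1, 0)))
# -> door (~3 m) and elevator button (~10.8 m); the exit sign is behind you
```
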
7 comments

Good question! This feeds into another important question: whether Neuralink (and brain-computer interfaces more generally) will be strategically relevant.

Currently I am skeptical that there is anything super powerful that can be done without decades of training and experimentation. That said, here are some ideas:

--Connect people to their computers so they can type, dictate, use a mouse, etc. at the speed of thought, without needing to use their hands.

--People whose job involves monitoring a complex system (e.g. for suspicious activity) can have the variables of that system plugged directly into their brain rather than displayed on a screen. Perhaps with training this could allow them to monitor more complex systems more effectively.

--Connect people to the internet + GPT-4-powered personal assistants so that they can think questions and have the answers immediately beamed back into their minds. Might be powerful in situations where you don't have your laptop or phone handy, e.g. in face-to-face conversation with another human.

--Connect people to each other, so that they can share concepts directly rather than through the medium of language. (This is the only idea so far that seems likely to be powerful/relevant, but it's the least likely to actually happen.)

New colors.

I don't mean discerning ever finer gradations in existing colors, but entirely new color qualia.

What would it be like? Would our brains be able to integrate this new phenomenal experience?

And most importantly, if color is a property of brains rather than something in the external world, does that imply the number of "possible colors" is infinite? I.e., seeing as biological brains "choose" how to internally represent a particular wavelength of EM radiation, is the seemingly Platonic realm from which these colors are plucked inexhaustible?

We currently have aesthetic preferences over existing colors. I would want to know whether these possible colors can be enumerated, searched through, and have a utility function placed over them so that each human can find what is to them "the best possible color."

Experiments with squirrel monkeys suggest that the monkeys can effectively learn new color qualia. It requires gene therapy that's a bit riskier than brewing your own vaccine, but it's technology that works.

And most importantly, if color is a property of brains rather than something in the external world, does that imply the number of "possible colors" is infinite? 

I don't see how that changes anything. If you use real numbers to model color, you already have infinitely many shades of color. Adding in the number of possible cone configurations in the eye gives you a lot of additional ones, but there's a physical limit on the ways those can be configured.

I don't mean discerning ever finer gradations in existing colors, but entirely new color qualia.

Finer gradations are boring, but there are (1) infrared and ultraviolet colors, and (2) colors that our current system of three types of cone cells perceives as the same, but could be perceived as different by having more types of cones, or even different types of cones. We could use the new qualia for these.
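
A toy numeric illustration of the second point: two physically different spectra that produce identical responses in three cone types (a metamer pair), but different responses once a fourth cone type is added. The four-bin "spectra" and sensitivity rows are made up purely for illustration; real cone sensitivities are continuous curves.

```python
# Toy metamer example: two different spectra look identical to three cone
# types but are distinguishable with a fourth. All numbers are synthetic.
import numpy as np

# Rows = cone sensitivities over 4 coarse wavelength bins (synthetic).
trichromat = np.array([
    [1, 0, 0, 0],   # "S"
    [0, 1, 1, 0],   # "M"
    [0, 0, 1, 1],   # "L"
], dtype=float)
fourth_cone = np.array([0, 1, 0, 0], dtype=float)  # hypothetical extra cone

spectrum_a = np.array([0.5, 0.3, 0.2, 0.4])
spectrum_b = np.array([0.5, 0.4, 0.1, 0.5])  # differs from a, chosen in the null space

print(trichromat @ spectrum_a)   # [0.5 0.5 0.6]
print(trichromat @ spectrum_b)   # [0.5 0.5 0.6]  -> identical to three cones
print(fourth_cone @ spectrum_a)  # 0.3
print(fourth_cone @ spectrum_b)  # 0.4             -> distinguishable with four
```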

Imagine an ultra-intelligent tribe of congenitally blind extraterrestrials. Their ignorance of vision and visual concepts is not explicitly represented in their conceptual scheme. To members of this hypothetical species, visual experiences wouldn’t be information-bearing any more than a chaotic drug-induced eruption of bat-like echolocatory experiences would be information-bearing to us. Such modes of experience have never been recruited to play a sensory or signaling function. At any rate, some time during the history of this imaginary species, one of the tribe discovers a drug that alters his neurochemistry. The drug doesn’t just distort his normal senses and sense of self. It triggers what we would call visual experiences: vivid, chaotic in texture and weirder than anything the drug-taker had ever imagined. What can the drug-intoxicated subject do to communicate his disturbing new categories of experiences to his tribe’s scientific elite? If he simply says that the experiences are “ineffable”, then the sceptics will scorn such mysticism and obscurantism. If he speaks metaphorically, and expresses himself using words from the conceptual scheme grounded in the dominant sensory modality of his species, then he’ll probably babble delirious nonsense. Perhaps he’ll start talking about messages from the gods or whatever. Critically, the drug user lacks the necessary primitive terms to communicate his experiences, let alone a theoretical understanding of what’s happening. Perhaps he can attempt to construct a rudimentary private language. Yet its terms lack public “criteria of use”, so his tribe’s quasi-Wittgensteinian philosophers will invoke the (Anti-)Private Language Argument to explain why it’s meaningless. Understandably, the knowledge elite are unimpressed by the drug-disturbed user’s claims of making a profound discovery. They can exhaustively model the behaviour of the stuff of the physical world with the equations of their scientific theories, and their formal models of mind are computationally adequate. The drug taker sounds psychotic. Yet from our perspective, we can say the alien psychonaut has indeed stumbled on a profound discovery, even though he has scarcely glimpsed its implications: the raw materials of what we would call the visual world in all its glory.

- Interview with David Pearce in H+ Magazine, 2009

Or, in other words: I hear your "new colors" and raise you new qualia varieties that are as different from sight and taste as sight and taste are from each other.

From the point of view of the alien scientists, I'd argue that it's rational to ignore claims of unprecedented insight into reality unless the claimant can demonstrate impressive and unexplained feats using said insight.

I would think that gaining an entirely new sensory modality would lead to unparalleled advantages over fellow members of your species. At the very least, it would let you do things that would confound them.

I feel like this would require brain surgery beyond what is realistic without major technological development - possibly post-AGI. Although the brain is able to reinterpret sensory information well, it seems to only do this using pre-existing neural structures. (I'm unsure to what degree this applies to new colors.) I'm no neuroscience expert though.