First things first: the design of the BrainPort sucks. I've seen retainer versions of these kinds of devices, and given the state of modern electronics (particularly wireless and battery tech), the resolution of any such device could be significantly enhanced. If you use a retainer as the chassis, it's easy to expand the sensing surface to cover the teeth and gums, and likely some of the cheeks too.
It wouldn't be difficult to incorporate haptics, temperature, orientation, and tap sensors. Given that the thing is in your mouth, speech-to-text is a no-brainer, as is bone-conduction audio for output. Buttons and a touchpad would be nice too; the hardware is doable, but the UX is a hard problem. A quicker fix would be an external switch for moving between states, for example a magnet ring that you hold to the side of your face for a magnetometer to pick up.
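The magnet-ring switch could be as simple as thresholding the magnetometer's field magnitude and latching until the magnet is taken away again. A minimal sketch, with made-up mode names, threshold, and field units (no real device's API is assumed here):

```python
import math

# Hypothetical mode switcher: cycle display modes when a magnet ring is
# held near the face. "Field strength" is the vector magnitude of a
# 3-axis magnetometer sample; the threshold is an invented value set
# well above the ambient (Earth) field.
MODES = ["haptics", "temperature", "orientation"]
THRESHOLD = 80.0

class ModeSwitch:
    def __init__(self):
        self.index = 0
        self.armed = True  # prevents repeat triggers while the magnet is held

    def update(self, bx, by, bz):
        """Feed one magnetometer sample; returns the current mode name."""
        strength = math.sqrt(bx * bx + by * by + bz * bz)
        if strength > THRESHOLD and self.armed:
            self.index = (self.index + 1) % len(MODES)
            self.armed = False          # latch until the magnet is removed
        elif strength <= THRESHOLD:
            self.armed = True           # re-arm once the field drops again
        return MODES[self.index]

switch = ModeSwitch()
print(switch.update(20, 10, 40))   # ambient field: stays on "haptics"
print(switch.update(90, 30, 10))   # magnet held: advances to "temperature"
print(switch.update(95, 28, 12))   # still held: latched, no repeat trigger
print(switch.update(20, 10, 40))   # magnet removed: re-arms, stays put
print(switch.update(0, 0, 100))    # next hold: advances to "orientation"
```

The latch is the important bit: without it, holding the ring in place for half a second would race through every mode.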
It is also worth mentioning that anything sitting in the mouth can double as a sensor platform for collecting data about the wearer. Such sensors could track your temperature, heart rate, O2 saturation, possibly your blood glucose, and so on.
As the BrainPort functions as a display for whatever data is fed to it, any sensor or combination of sensors would work. Fancy sensors can get quite expensive, but anything you can do with a camera and filters is cheap. This is an instance where the low resolution of the device is an asset, because you can use equally low-resolution sensors (at least if you aren't doing any fancy processing and are just dumping pixels as is). And everyone already carries a phone for offboard processing if it won't fit in the sensor or retainer packages.
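"Dumping pixels as is" onto a low-resolution display amounts to block-averaging each camera frame down to the electrode grid. A minimal sketch (the 8x8 input and 4x4 output sizes are just for illustration; the real tongue array is on the order of a few hundred dots):

```python
# Block-average a grayscale frame down to the display's resolution.
# frame is a list of rows of 0-255 intensities; dimensions are assumed
# to divide evenly for simplicity.
def downsample(frame, out_w, out_h):
    in_h, in_w = len(frame), len(frame[0])
    bh, bw = in_h // out_h, in_w // out_w   # input block per output pixel
    out = []
    for oy in range(out_h):
        row = []
        for ox in range(out_w):
            total = 0
            for y in range(oy * bh, (oy + 1) * bh):
                for x in range(ox * bw, (ox + 1) * bw):
                    total += frame[y][x]
            row.append(total // (bh * bw))  # mean intensity of the block
        out.append(row)
    return out

# An 8x8 frame: bright left half, dark right half.
frame = [[255] * 4 + [0] * 4 for _ in range(8)]
print(downsample(frame, 4, 4)[0])  # → [255, 255, 0, 0]
```

The same loop works unchanged whether the source is a visible-light camera, an IR sensor, or anything else that hands you a grid of intensities, which is exactly why the sensor choice is so flexible.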
For me, where this idea gets interesting is in expanding it beyond the mouth. If it is primarily touch based, then you have an entire body of skin to work with. You only need to pick up a probe and start poking yourself to get a sense of the resolution of various parts of your body. I'd think the ear and the scalp would be good targets for this kind of sensing.
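That poke-yourself experiment is the classic two-point discrimination test: find the smallest probe separation a site reliably feels as two points. A rough sketch of logging and summarising such trials (all the numbers below are made up for illustration):

```python
# Each trial records (separation_mm, felt_two_points) for one body site.
def threshold(trials):
    """Smallest separation felt as two points, or None if never felt."""
    felt = sorted(sep for sep, two in trials if two)
    return felt[0] if felt else None

sites = {
    "fingertip": [(2, False), (3, True), (5, True)],
    "scalp":     [(5, False), (10, True), (15, True)],
    "back":      [(20, False), (40, True), (60, True)],
}
for site, trials in sites.items():
    print(site, threshold(trials), "mm")
```

The threshold tells you the usable dot pitch for a display on that site: a patch with finer spacing than the local threshold wastes electrodes.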
The quick list of stuff to see:
- Visible light, IR, ultraviolet, and polarised light. Camera sensors and filters exist for all of these.
- Temperature. FLIR-style thermal sensors are a good fit for this low-resolution application.
- Lidar, sonar, and radar - this is a solved problem domain, but accuracy costs money.
- Assisted GPS and other positioning systems.
- Hybrid processed experiences. If your location and viewing vector can be identified, then computers, especially with online access, can fill in the gaps of whatever sensors you're using, especially if your own 'eyes' contribute back to the shared database. (Now's a good time to mention that you could literally have eyes in the back of your head for this application.) You could even be completely 'blind' and without sensors, provided external sensors could track you and send the data back to your interface.
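The hybrid idea boils down to a query: given a position and viewing vector, ask the shared database what lies inside your field of view and render that instead of (or on top of) raw sensor data. A minimal sketch, where the landmark map and the 60-degree field of view are invented for illustration:

```python
import math

# Toy shared database: landmark name -> (x, y) in some common frame.
LANDMARKS = {"door": (5.0, 0.0), "stairs": (0.0, 8.0), "desk": (-4.0, -1.0)}

def visible(pos, view, fov_deg=60.0):
    """Landmarks within the field of view, nearest first."""
    half = math.radians(fov_deg / 2)
    vx, vy = view
    vlen = math.hypot(vx, vy)
    hits = []
    for name, (lx, ly) in LANDMARKS.items():
        dx, dy = lx - pos[0], ly - pos[1]
        dist = math.hypot(dx, dy)
        if dist == 0:
            continue  # standing on it; nothing to point at
        # angle between the viewing vector and the direction to the landmark
        cos_a = (dx * vx + dy * vy) / (dist * vlen)
        if math.acos(max(-1.0, min(1.0, cos_a))) <= half:
            hits.append((dist, name))
    return [name for _, name in sorted(hits)]

print(visible(pos=(0.0, 0.0), view=(1.0, 0.0)))  # facing +x: ['door']
```

The "blind but tracked" extreme is the same function with the wearer contributing nothing: external sensors supply `pos` and `view`, the database supplies everything else, and the interface just renders the result.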