Meta (formerly Facebook, and the company behind the Oculus headsets) just had their yearly conference. I think right now is a good time to take a brief break from worrying about AI timelines, and take a few minutes to notice the shortness of VR timelines. VR isn't going to determine whether humanity lives or dies, but I think it's close enough to have immediate-term impact on decisions about where to live, and how to structure organizations.

Concretely: I think we're 6 months from the crossover point where VR is better than in-person for a narrow subset of meetings, specifically, 1:1 meetings without laptops, where both sides are knowledgeable enthusiasts using the latest headset in combination with a high-end desktop PC. I estimate about 2.5 years until the crossover point where VR is better than in-person for most meetings.

I think most people are wildly underestimating how impactful, and how sudden, this is going to be. I think they'll continue underestimating how impactful it's going to be, right up until the second crossover happens.

Basically, everyone tried out Zoom meetings and discovered that they sucked compared to in-person meetings, for reasons that are slightly hard to put your finger on. Some people tried VR meetings, and discovered that they were worse. People then updated towards believing that remote work (including both videochat and VR chat) would not be transformative, but did so without a model of what concretely was going wrong and when/whether it would be fixed. But, zooming in on the technology more closely, I see a fairly short list of known problems that seems to fully explain why people prefer in-person meetings in practice, and I'm watching that list slowly shrink. I currently think every item on the VR-meetings-suck list has line-of-sight to a full solution, and that at that point, VR meetings will be straightforwardly better than in-person ones.

For context: Oculus announced a new headset, the Quest Pro, at their yearly conference yesterday. It ships in two weeks and costs $1500. Consumer sentiment about it is probably going to be negative; I expect a lot of grumbling about the price and the battery life, in particular. But this is going to be misleading. The important thing is that Quest Pro contains a first-pass version of all but one item on my why-remote-meetings-suck list. These solutions are going to have problems at first, but the problems are not fundamental.

In particular:

  • Color passthrough cameras solve the problem where entering VR requires giving up the ability to see the real world, which makes it very costly compared to joining a call. There are some minor visual-quality caveats, and some major software-not-ready caveats about how the worlds mix.
  • Inward-facing cameras with face and eye tracking. Under research-prototype-demo conditions, it looks like this solves the problem of not being able to see people's facial expressions. Under realistic conditions, what I've read suggests that they've got framerate problems, which is why I specified "1:1 with gaming PCs" as a setup that will work before other setups do.
  • Audio latency. I believe Oculus is currently pretty far ahead of Zoom on this one, due to a combination of console-like known hardware, expertise crossover from video-latency being a core requirement of VR, and engineering prioritization.

The one problem this doesn't attempt to address is that the panel resolution isn't high enough for small text, which in turn means you can't bring a normal-sized laptop with you into VR. As far as I can tell this is not a fundamental problem; denser panels do exist elsewhere on the market.

46 comments

I disagree with pretty much everything you've said here.  

First, zoom meetings (or google meet) are not necessarily worse than in-person.  They're great!  I've been working from home since the pandemic started, and I actually have more meetings and interactions with colleagues than I did before.  Before the pandemic, having a meeting not only meant setting a time, but finding a spare conference room, which were in short supply at my office.  With WFH, any time I want to talk to someone, I just send them a brief chat, and boom, instant videoconference.  I love it.  It's great.  

Second, what problem, exactly, is VR supposed to solve? Facial expressions are much more accurate over videoconference than VR.  Looking at poorly rendered and animated avatars is not going to fix anything. Gestures and hand signals are more accurate over VC.  Slide presentations are easy over VC.  Shared documents are easy over VC.  I really can't think of anything that would actually be better in VR.

Third, I'm an early adopter and VR enthusiast, and the owner of a high-end ($4k) VR gaming rig, and I can tell you that the tech is really only suitable for niche applications. VR headsets are heavy, sweaty and uncomfortable. They're a pain to use with glasses. Screen resolution is low, unless you spend lots of $$$. You don't have good peripheral vision. There are lensing artifacts. Lots of people still get nauseous. It's hard to use a keyboard, or to move without bumping into things. I've got a strong stomach, but an hour or two is pretty much my max before I want to rip the damn thing off. No way in hell am I going to wear a VR headset for meetings; I'd quit my job first.

VR is really great for certain things, like flight simulators, where the head tracking and immersion makes it vastly superior to any other option.  But if Meta thinks that ordinary people are going to want to use VR headsets for daily work, then they're smoking some pretty strong stuff.

I agree wholeheartedly. Dissemination of a technology doesn't seem to primarily be limited by the capacity of the hardware, but rather by willingness of people to adopt it. Saying the tech is getting better doesn't prove the tech is solving a salient problem. And I don't see managers and employees being thrilled about strapping on a headset for every meeting. As a simple heuristic, I (somebody who's logged many hours on my oculus) do not want to meet in the metaverse, for the reasons you state here.

I'll go even further though; videoconferencing is overrated compared with phone calls. For socializing, sure it's nice to see peoples' faces. But when you're just trying to exchange information, a phone call suffices. As a teacher I never see the faces of people who I'm working with on fundraising... sometimes never even hear their voice if we just exchange emails. If you ran a double-blind experiment with a thousand participants to compare how much information was being conveyed with a video call vs a phone call, sure you might find a smidgen of difference. But I'd wager the effect would be an order of magnitude smaller than if you just gave everyone a cup of coffee before the meeting. 

What about eyestrain?

Eyestrain is much stronger in VR than with traditional computers - and it's easy to just look away from a computer or phone when you want to versus having to remove a headset altogether.

I very strongly believe that VR as opaque goggles with screens will never be a transformative product*; AR will be. AR is real world first, virtual world second.

*Barring full Matrix/Ready Player One types of experiences where it's effectively a full substitute for reality.

How's eyestrain with AR?

There isn't any mainstream AR product to judge against, because it's a much more challenging technology. Proper AR keeps the real world unobstructed and overlays virtual objects; Hololens and Magic Leap would be the closest to that available so far. I do not consider piped-in cameras like the ones on the Quest Pro to be the same. Eyestrain will likely be lower in better AR for two reasons. First, for most experiences it would simply be the real world seen through regular vision, so no adjustment is required. Second, unlike VR, which is effectively two close-up screens to focus on, current AR innovation involves clear, layered reflective lenses that orient the individual light rays to match the path they would take to your eye if the object were actually in that 3D space. So instead of a close image that your brain can be convinced is distant, the light itself hits the retina at the proper angle to be registered as actually at that distance. Presumably this would be less strenuous on the eyes and on image processing, but it's still experimental.
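The "light at the proper angle" point can be made concrete with a toy diopter calculation of the vergence-accommodation mismatch that conventional VR displays create (the fixed focal distance below is an illustrative number I chose, not a spec from any particular headset):

```python
# Toy illustration of the vergence-accommodation mismatch behind VR eyestrain:
# a VR screen sits at one fixed optical focal distance, while rendered objects
# imply other distances. Accommodation demand is measured in diopters (1/meters).

HEADSET_FOCAL_DISTANCE_M = 1.3  # assumed fixed optical focus of the display

def accommodation_conflict(rendered_distance_m):
    """Diopter gap between where the eye must focus (the screen) and where
    vergence cues say the object is (its rendered distance)."""
    return abs(1.0 / rendered_distance_m - 1.0 / HEADSET_FOCAL_DISTANCE_M)

# A virtual object at arm's length is a much bigger conflict than a distant wall:
near = accommodation_conflict(0.5)   # ~1.23 diopters
far = accommodation_conflict(10.0)   # ~0.67 diopters
```

AR systems that steer light rays to arrive at the retina at the geometrically correct angle would drive this gap toward zero, which is (presumably) part of why they might be easier on the eyes.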

Depends on the tech.  A lot of AR involves putting a camera on VR goggles, and piping the digital image onto VR screens.  So while you may be looking at the real world, you're looking at a low-res, pixelated, fixed-focal-distance, no-peripheral-vision, sweaty, god-rays version of it.

There are versions of AR that function more like a heads up display.  I cannot speak from personal experience, but my understanding is that they still have issues:

https://arstechnica.com/gadgets/2022/10/microsoft-mixed-reality-headsets-nauseate-soldiers-in-us-army-testing/

Are you long Meta?

cata (2y)

I don't think it's going to be transformative until you are happy to wear a headset for hours on end. In and of themselves, VR meetings are better than Zoom meetings, but having a headset on sucks compared to sitting at your computer with nothing on your face.

I think the bottleneck is going to be "cheap and widely owned". Zoom benefits from ubiquitous built-in or very cheap cameras, which are a huge marginal upgrade over having no videoconferencing capability at all, and which seem to me (no background in the tech) to be built on the accumulated tech stack of a mature camera/video industry.

Well-functioning headsets have higher tech hurdles, will be more expensive, physically unfamiliar, and the benefits of VR vs. videoconferencing are hard to put into words.

I could see the killer app for VR being the ability to remotely control a general-purpose robot that is physically dexterous, is controlled via the same body movements you'd do yourself if you were in its place, and can be mass-produced.

I don't really think that cost is an important bottleneck anymore. I and many others have a Rift collecting dust because I don't really care to use it regularly. Many people have spent more money on cameras, lighting, microphones, and other tinkering for Zoom than it would cost them to buy a Quest.

Any technology is more useful if everyone owns it, but to get there, it has to be useful at reasonable levels of adoption (e.g. a quarter of your friends own it), or it's not going to happen.

To me, the plausible route towards getting lots of people into VR for meetings is to have those people incidentally using a headset for all kinds of everyday computing stuff -- watching movies, playing games, doing office work -- and then, they are already wearing it and using it, and it's easy to have meetings with everyone else who is also already wearing it and using it. That's clearly achievable but also clearly not ready yet.

jmh (2y)

I agree on this; I think network effects are going to be a big part of VR success in terms of transforming society and the economy.

I don't think meetings are going to be the driver here, but they do reflect one problem area that VR can ultimately resolve. As email and messaging solved a temporal aspect of communications, VR will solve a locational aspect of interaction. But outside niche areas like the operating room, or similar settings where the operator and the equipment need to be well integrated in a way that makes sense to human perceptions, I think it won't get much traction.

Gaming (and perhaps porn) will likely drive the major growth of the network in the initial stages (which does seem to be occurring now with gaming).

I'm generally not happy if I have meetings for hours on end.

Meta seems to have trouble motivating the developers who actually develop Horizon worlds to use it. I would expect at least some of them to do the alpha/beta testing of the new device. 

I would expect that it takes more than 6 months to get to the point where the product is good enough to be superior to in-person 1-on-1 meetings.

vrchat is already everything horizon worlds wishes it was. including popular. it's not going to be used for meetings, but if you want to see what it can really be like, do come try vrchat, we've already basically got it; it's just a bit uncomfy for practical use for many people still, jim is betting that will change because it's in the process of doing so. transhumanists in vr meets friday at 6pm pacific for game night and saturday 6pm pacific for discussion night; it's one of several tech nerds communities, there's also eavr (https://www.eavr.org/diving-into), stemvr (https://vreventhub.com/), improv vr (https://gobsimprovemporium.com/), and a hell of a lot of dance clubs (find some on vreventhub). and oh man, those are some dance clubs. anyway, here's a documentary video if you're into that sort of thing. https://www.youtube.com/watch?v=4PHT-zBxKQQ

vrchat is free and works on pc-desktop, or pc-vr, or oculus quest. runs on windows or steam's builtin wine on linux-desktop. it's a better meeting environment than gather.town or discord calls even just in desktop

something I think particularly cool about vrchat is there's lots of room for many events from the same community, and in my view, communities can have smoother friendship knitting than is easy irl or online because of the emotional communication effect of distance and head turn to indicate vocal connection and yet having teleportation available and unlimited amounts of space. it allows a smoother mix of spreading out into 3d spatialized volumes, and hopping worlds if you need to be with a different group for a while. if you make any effort at all to connect to multiple groups, you'll find yourself connected to a weird mix of people; if you retain interest in the link they provide to the variety of activities on vrchat, you can connect with all sorts of folks. there's still a filter in what sorts of people you'll encounter on vrc, but it's a pretty wide filter.

I don't think that's a good indicator, the same way that beverage executives don't buy their kids soda and tobacco lobbyists don't smoke.

Soda/tobacco don't provide clearly better experiences. They don't exist to provide a better meeting experience.

Aight, I have a Meta Quest Pro to test now, so here are my first-day impressions. I haven't updated away from the bottom line of this post, though a few things were better and a few things were worse than I expected.

  • They really knocked it out of the park with the pass-through cameras. While wearing the headset under typical indoor lighting, I can see the regular world about as well as I see it without my glasses (I'm nearsighted -2.75).
  • The underside is open, rather than covered, which makes it much easier to peek out, e.g. to check your phone, compared to Quest 2. A side-effect of this is that you can see the same object in both normal and pass-through vision at the same time, and verify that it lines up perfectly.
  • Text clarity improved more than I expected (they improved the optics, but didn't significantly increase the panel pixel density). The minimum acceptable font size is now a font size that most people (without my unusual preference for small fonts) would find to be normal.
  • I haven't found someone else with a Quest Pro yet to try out eye contact and face tracking in a call, but I was able to try it out on an avatar in a mirror. Eye direction, blinking, winking work, and most mouth movements work. Eyebrows don't move, sticking out your tongue doesn't come through, and I don't think it can distinguish real smiles from fake smiles. Translated onto a slightly cartoonish avatar, this is enough to get me over the uncanny valley.
  • The weight distribution is good, with the thinner front and the battery moved to the back, but I don't like the head strap (compared to the Quest 2 Elite Strap I was using). It puts too much of the headset's weight onto too small an area of the forehead, and as a shear rather than a compression. I think it can be remedied with an aftermarket modification to add a fabric strap across the top, but the strap is much less customizable/replaceable than it was on Quest 2. This seems like a major weak point and some people will probably find it to be a dealbreaker for extended use.
  • I paired it with a bluetooth mouse and keyboard, opened a web browser and tried it as a laptop replacement, and it Just Worked. (Choice of a virtual environment with hands, keyboard, and browser window visible, or pass-through camera with a browser window floating in virtualized physical space. Caveat that transferring a keyboard into a virtual environment with computer vision only works for a whitelist of supported keyboard models, and I was lucky in that the first keyboard I tried happened to be a supported one). It's not at a level where I'd choose it over my Macbook Pro under normal circumstances, but it's within striking distance, and I *would* use it that way if I was stuck in an airplane seat without room to properly angle a laptop screen, or looking at something highly confidential in a public place.
  • The controller tracking is solid. It's the same experience as Quest 2, except the dead zones are gone, as are the bulky tracking rings.

Latency, regardless of the cause, is one of the biggest hurdles. No matter how perfect the VR tech is, if the connection between participants has significant latency, then the experience will be inferior to in-person communication.

This is plausible to me and pretty interesting. (I think there are more obstacles than the ones you list here and it'll take longer to work out the kinks, but the overall point that a VR paradigm shift isn't that far away still seems pretty plausible.)

My question, though, is: "is there anything you can do to profit off this now if you're not personally investing in building the VR future?". It seems like it'll be pretty easy to start hiring people remotely and using VR once the tech becomes workable. It seems useful to anticipate this, and be "ahead of the curve" when the tech gets good enough, but that gets you maybe 3-12 months of getting a bit more value.

I wrote more about this here: https://forum.effectivealtruism.org/posts/K3JZCspQMJA34za3J/most-social-activity-will-reside-in-vr-by-2036

It's not going to be very transformative until it has widespread adoption, but it will, very soon.

Why better than in-person? Because of commute times, because of people being in spaces adapted to their own preferences, something else?

jp (2y)

I think the problem with zoom meetings is not the meeting itself, but instead the bounds of the meeting. It's easier to have better coordination if you can freely wander in and out of a casual conversation. It's hard to get super-in-sync over, say, 60 minutes a day of facetime. To put it another way, zoom does fine for "full meeting" mode, but much worse for casual, semi-meeting mode. VR does nothing to solve the second category, so I'm skeptical.

Inward-facing cameras with face and eye tracking

Before I read this phrase, I was about to comment something along the lines of "Facebook, and other tech companies, are no strangers to generating galaxy-brained arguments to persuade investors/analyst firms that their company's future is more valuable than it actually is". After I read that phrase I completely flipped.

Anyone else with sufficient real-life experience analyzing this industry would also flip when they see a phrase like this one.

And Facebook knows this.

Can you say more? Are you saying you're flipped out (presumably an expression of irritation) because it gives perfect eye fixation data for tracking emotions? Or perhaps, I'm personally excited for eye-tracked mouse input, maybe that's the kind of thing you meant too? I'm not sure which parse to use heh

I strongly doubt eye-tracked mouse input is going to be good enough for anyone to want (if they don't have a disability that makes them unable to use their hands). The three reasons for eye tracking are for augmenting the inverse optics (chromatic aberration correction in particular works better if you have millimeter-precise information about where on the user's face the headset is sitting), for foveated rendering (uncertain value, but in the best case might effectively quadruple your GPU speed), and for letting other people in a conversation see your eye movements (very important for getting past the uncanny valley).

Lots of people look at Oculus and tell stories about how this is supposed to support Facebook's advertising business in some creepy way or another. I think you can refute this just by looking at revenue numbers: the Oculus platform gets a 30% cut of game revenue (similar to Steam), and this is quite a lot of money per user.

chromatic aberration correction in particular works better if you have millimeter-precise information about where on the user's face the headset is sitting

You have officially blown my mind. I seriously cannot believe that AI can subtly mess with video color in real time based on known effect on eye movement, that is absolutely nuts and the applications are limitless in the short- and long-term. No wonder Apple stock keeps going up, there's probably all sorts of things like that which I'm not aware of.

Before I read this, I thought I was cool for knowing that oscillating adaptive refresh rate could yield known measurable effects e.g. while shopping online. That's nothing compared to what chromatic aberration can do.  Thank you very much for sharing, my career has benefited profoundly by me learning this.

foveated rendering (uncertain value, but in the best case might effectively quadruple your GPU speed)

I'm definitely not an expert in this area, but I can't imagine this being possible unless the headset was hardwired to a data center or something. Have we really gotten to the point where that much ML can fit on a gaming PC?

Lots of people look at Oculus and tell stories about how this is supposed to support Facebook's advertising business in some creepy way or another.

Do tell! Although I definitely agree with you that it's extremely cheap to generate large numbers of totally-false rumors on this specific topic, and extremely expensive to verify most of them.

You have officially blown my mind. I seriously cannot believe that AI can subtly mess with video color in real time based on known effect on eye movement, that is absolutely nuts and the applications are limitless in the short- and long-term. No wonder Apple stock keeps going up, there's probably all sorts of things like that which I'm not aware of.

This isn't an AI thing at all, it's an optics thing.

One of the core problems of VR optics is that the panels emit three different wavelengths of light (R, G and B), and these bend differently when they pass through the lenses. If you naively display an image without correcting this, you wind up with red, green, and blue partial images that have separated from each other. In order to fix this problem, you predict the lens effect, apply the opposite effect to the drawn image, and have the distortions cancel. The problem is that the headset's position on your face is imprecise, and if you shift the headset a millimeter in any direction, the R, G and B images (as perceived by the eye) move in different directions. If you're trying to display black-on-white or white-on-black text, moving the color channels a pixel apart has a major effect on readability.
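The pre-distortion idea can be sketched in a few lines. This is a toy radial model with made-up per-channel coefficients, not any real headset's calibration pipeline:

```python
import math

# Toy sketch of per-channel chromatic-aberration pre-correction. Each output
# pixel samples the source image at a radius scaled per color channel, so that
# the lens's wavelength-dependent bending cancels out when viewed through the
# optics. Red bends least and blue most through a simple lens, so each channel
# gets its own correction coefficient (illustrative values, not real specs).
K = {"r": 0.10, "g": 0.12, "b": 0.15}

def sample_positions(x, y):
    """Where to sample each channel of the source image for output pixel
    (x, y), with (0, 0) at the lens center."""
    r = math.hypot(x, y)
    if r == 0.0:
        return {c: (x, y) for c in K}
    positions = {}
    for c, k in K.items():
        # Simple radial model: push the sample outward by a factor (1 + k*r^2).
        scale = 1.0 + k * r * r
        positions[c] = (x * scale, y * scale)
    return positions
```

Because each channel's scale factor differs, shifting the headset (and hence every pixel's position relative to the eye) by a millimeter moves the three channels by different amounts, which is why millimeter-precise tracking of where the headset sits on the face helps keep the correction aligned.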

Before I read this, I thought I was cool for knowing that oscillating adaptive refresh rate could yield known measurable effects e.g. while shopping online. That's nothing compared to what chromatic aberration can do.  Thank you very much for sharing, my career has benefited profoundly by me learning this.

This paragraph is profoundly confused in a way that I can't fathom.

foveated rendering (uncertain value, but in the best case might effectively quadruple your GPU speed)

I'm definitely not an expert in this area, but I can't imagine this being possible unless the headset was hardwired to a data center or something. Have we really gotten to the point where that much ML can fit on a gaming PC?

Again, nothing to do with ML. Foveated rendering is a fancy way of saying "don't spend GPU cycles drawing parts of the screen that the user isn't looking at". It only works if you have an eye-tracking camera that tells you which part of the screen the user is looking at.
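A minimal sketch of that idea, with made-up eccentricity thresholds and shading rates (real implementations vary considerably):

```python
import math

# Toy foveated-rendering budget: shade at full resolution near the gaze point
# and at progressively coarser rates with eccentricity. The thresholds and
# rates below are illustrative, not from any shipping headset.

def shading_rate(tile_center, gaze, degrees_per_unit=1.0):
    """Return the fraction of full resolution at which to shade a screen tile,
    based on its angular distance from the tracked gaze point."""
    eccentricity = math.dist(tile_center, gaze) * degrees_per_unit
    if eccentricity < 5:      # fovea: full detail
        return 1.0
    elif eccentricity < 15:   # parafovea: a quarter of the pixels
        return 0.25
    else:                     # periphery: a sixteenth
        return 0.0625

def frame_cost(tiles, gaze):
    """Relative GPU shading cost of a frame vs. shading every tile at full rate."""
    return sum(shading_rate(t, gaze) for t in tiles) / len(tiles)
```

With these made-up numbers, most of a 10x10 tile grid falls in the periphery and the frame shades at well under a quarter of the full-resolution cost, which is where best-case claims like "effectively several times the GPU speed" come from; none of it works without an eye tracker supplying the gaze point.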

Concretely: I think we're 6 months from the crossover point


Now that it's been 6 months since you got your Meta Quest Pro, how has it held up? Also, what are your predictions for Apple's VR headset, which is rumored to release next month?

Update: now that Vision Pro is out, would you consider that to meet your definition of "Transformative VR"?

I think it probably meets the definition, but, caveat, it isn't actually out in the relevant sense, so there's some risk that it has a caveat that wasn't on my radar.

Judging by the hammering that Meta's stock has taken over the last 5 years, the market really disagrees with you.

Here's an argument against radical VR transformation in the near term: some significant proportion of people have a strong anti-VR aversion. But the benefit of VR for meetings has strong network effects: if you have 6 friends you want to meet with, but 2 of the 6 hate VR, that's going to derail the benefit of VR for the whole group.

It's happened before though. Despite being one of those 2 friends, I've already been forced to change my habits and regard videocalls as a valid form of communication.

I think I agree with your technological argument, but I'd take your 6 months and 2.5 years and multiply them by a factor of 2-4x.

Part of it is likely that we are conceiving of the scenarios a bit differently. I might be including some additional practical considerations.

I agree. I think the argument in the OP fails to account for how dramatically uncool VR is, especially when it tries to come into the workplace. Everyone thinks VR is cool when it's a fun toy doing neat stuff. If you had to wear a VR headset all day for work, though, most people who aren't already a certain kind of nerd will balk. This means that, like bringing computers and other tech into the workplace, it'll likely take a decade or more for the transition to happen.

Basically, everyone tried out Zoom meetings and discovered that they sucked compared to in-person meetings, for reasons that are slightly hard to put your finger on.

If the problems are unclear why will good VR solve them?

Or put differently: What makes Zoom meetings worse than in-person-meetings and what will make VR-meetings better? 

'Meetings' are torture and making them better doesn't make me want VR, but reframed, making virtual 'hanging out with friends' better is quite appealing. So if it's way better for that  - particularly for low intensity socializing while watching shared media (virtual couch) - then I may be interested.

The thing I miss the most from shared living spaces, or college, is doing the TV-watching and video-gaming you normally do, but always with friends in a social setting.

The resolution is such a bottleneck though. It feels like it's not that far off but I keep trying to squint to read things on a display in VR. Just one more half generation maybe.

As someone who has been organizing meetups in VR for about a year now (check out ea vr and h+vr), I think VR makes virtual hangouts with friends better! I am also actually quite excited about productivity in VR; however, the tech isn't quite there (give it a couple years). But social VR is already lots of fun (despite still being in a fairly early state itself)!

Hanging in VR with friends feels a lot like being in the same space together. We like to chill and vibe while doing our own things too. 

My personal feeling towards social VR is something like: despite how early all this tech is, it's already this cool, and I enjoy it a lot. Imagine what it'll be like once it's more mature! I'm excited :)

Are they going to solve the audio quality problem? I would say that is the number one thing that ruins video calls for me. Number two is eye contact.

They already have. I have no idea why zoom failed to solve this problem and vrchat succeeded, but it is so. (might be something to do with gamedevs having a better understanding of how to trick content-neutral networks into prioritizing realtime applications, I've seen some of them writing about that.)

You already pretty much have eye contact, or like, head contact, it's good enough for conveying who you're currently listening to, which covers the practical dimension, and your avatar will guess who you're looking at and make eye contact with them so the emotional dimension is covered too.

I'm finding it to be surprisingly troublesome that I can't not make eye contact with people yet. There's no way to send the signal "I don't really want to / can't connect with you", so I've ended up just becoming shy and anxious.

@jimrandomh I'm curious how your views on this have changed in the last 7 months?

I think it’s plausible to say this generation of headset will be better than a group video conference. At a stretch possibly better than a 1:1 video call. But better than in-person seems extremely unlikely to me.

Perhaps you are intending something broad like “overall higher utility for business and employees” rather than strictly better such that people will prefer to leave offices they were happy in to do VR instead? Taking into account the flexibility to hire people remotely, avoid paying tech hub wages, etc.?

Personally I think 1:1 video is much better than team/group video conferences (where interruptions and body language matter much more), and those group sessions are likely to be the meeting that VR wins first. But I think the comparison is VC versus VR, given many employees are going remote now. I strongly doubt that VR will persuade in-person companies to go remote/VR in the next few years.

I do think that enterprise is the right segment to target; even for one group meeting a day, most companies would get good ROI on these headsets if it is good enough to make people feel an interpersonal connection. I don’t think that quality bar is certain for this iteration though.

One big item I see missing is haptic feedback. Like, if I ask myself in what ways is VR still different from regular reality, I feel like there is still a lot missing.[1]

I think working with physical objects is a big component of activities that can't be done remotely currently. But even if we just focus on interpersonal communication, being able to touch other people is an important component. Even if we are just talking about a strictly formal business context, handshakes at least still almost always occur.

And I just don't see high fidelity haptic feedback getting sufficiently advanced in the near future. Interaction even with just macro scale objects is still a major challenge. And the human touch is remarkably sensitive at the micro scale as well. "The lowest amplitude of the wrinkles so distinguished was approximately 10 nm, demonstrating that human tactile discrimination extends to the nanoscale."[2] There is even research suggesting that we can sometimes detect single atom differences.[3]


  1. Admittedly though, a VR that is indistinguishable from reality would be very creepy. ↩︎

  2. https://doi.org/10.1038/srep02617 ↩︎

  3. https://doi.org/10.1039/D1SM00451D ↩︎

It is unclear from my reading of the paper that the result is anything beyond "fine waves cause friction, which humans can feel" 

I haven't looked too much into that paper, but yeah, it could be that at smaller scales you are just perceiving the friction.

But just from personal experience, I am pretty confident that at least in the 10μm-100μm range humans have high fidelity tactile perception, and are able to distinguish various patterns and not just friction.