I'm very concerned about comfort. It looks much heavier than a Quest 2, even with the external battery, due to the glass and metal construction. If it's also drawing double the watts of a Quest Pro, I worry about thermals. I can't see anyone wearing this for more than an hour given the weight and heat.

The good news is this could be fixed by moving all the compute off the head and into the battery pack. The bad news is Apple hates wires, so it may never happen.

Some good analysis here

Thanks for the post! You mention that it's unlikely PHF is as sample-efficient as RLHF; do you have plans to explore that direction? Most attributes we'd like to condition on are not trivially inferred, so labels are scarce or expensive to acquire. I'm interested in how alignment scales with the amount of labeled data. Perhaps this work could synergize well with TracIn or influence functions to identify examples that help or hurt performance on a small test set.
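For what it's worth, the TracIn idea reduces to a gradient dot product per (training example, test example) pair. A toy single-checkpoint sketch (real TracIn sums this over several checkpoints, weighted by the learning rate; the gradient values here are invented for illustration):

```python
import numpy as np

def tracin_score(grad_train, grad_test):
    """Toy single-checkpoint TracIn score: the inner product of a
    training example's loss gradient with a test example's gradient.
    Positive -> a 'proponent' (training on it reduced the test loss);
    negative -> an 'opponent' (it increased the test loss)."""
    return float(np.dot(grad_train, grad_test))

# Hypothetical 2-parameter gradients, purely illustrative.
print(tracin_score(np.array([1.0, -2.0]), np.array([0.5, 1.0])))  # -1.5
```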

These are very impressive! It looks like it gets the concepts but lacks global coherence.

Could anyone comment on how far we are from results of similar quality as the training set? Can we expect better results just by scaling up the generator or CLIP?

There is a paper describing the architecture

It looks like the system comprises many independent skills plus an algorithm that picks which skill to use at each state of the conversation. Some of the skills use neural nets, such as a CNN for parsing images and an RNN for completing sentences, but the models look relatively small.
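The dispatch pattern described above can be sketched in a few lines. The skill names and scoring rule here are invented for illustration, not taken from the paper:

```python
# Minimal sketch of a skill-dispatch loop: each skill pairs a scoring
# function ("how applicable am I to this state?") with a response function,
# and the dispatcher runs whichever skill scores highest.

def echo_skill(state):
    return f"You said: {state['utterance']}"

def fallback_skill(state):
    return "Could you rephrase that?"

SKILLS = {
    "echo": (lambda s: 1.0 if s.get("utterance") else 0.0, echo_skill),
    "fallback": (lambda s: 0.1, fallback_skill),
}

def respond(state):
    # Pick the highest-scoring skill for the current conversation state.
    name = max(SKILLS, key=lambda n: SKILLS[n][0](state))
    return SKILLS[name][1](state)

print(respond({"utterance": "hello"}))  # You said: hello
```

A real system would presumably learn the scoring functions rather than hard-code them, but the control flow is the same.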

Could you elaborate on step 4? How can you buy 10 shares of each bucket if you only have $10? Isn't the total cost $14.80 × 10 = $148?
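Spelling out the arithmetic I have in mind (reading $14.80 as the combined price of one share of every bucket):

```python
combined_price = 14.80  # price of one share of each bucket, combined
shares_each = 10        # shares bought of each bucket
total_cost = combined_price * shares_each
print(total_cost)       # 148.0
```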

There is a very extensive discussion of a UPRO/TMF strategy here. One thing to note: taxes severely reduce the returns of strategies that require frequent rebalancing.

Have you rechecked the data recently?

I see, thanks for clarifying!
