Comments

wickemu · 1y · 133

But is it the same full-sized GPT-4 with different fine-tuning, or a smaller or limited version?

Ignoring that physical advancements are harder than digital ones - inserting probes into our brains even more so, given the medical and regulatory hurdles - that would also augment our capacity to innovate toward AGI proportionally faster, so I'm not sure what benefit there is. On the contrary, giving AI ready-made access to our neurons seems detrimental.

That said, I agree that such an augment would be very interesting. Feelings like that, though, are why the accelerating march toward AGI seems inevitable.

So I think you're very likely right about adding patches being easier than unlearning capabilities, but what confuses me is why "adding patches" doesn't work nearly as well with ChatGPT as with humans.

Why do you say that it doesn't work as well? Or more specifically, why do you imply that humans are good at it? Humans are horrible at keeping secrets, suppressing urges or memories, etc., and we don't face anywhere near the rapid, aggressive attempts to break our conditioning that ChatGPT and other LLMs currently do.

What I'm curious about is how they will scale it up while maintaining some of the real-time skills. They said that part of the reason for its initial size was so that the robot could be more reactive.

There isn't any mainstream AR product to judge against because it's a much more challenging technology. Proper AR keeps the real world unobstructed and overlays virtual objects; HoloLens and Magic Leap are the closest to that available so far. I do not consider piped-in cameras, like those on the Quest Pro, to be the same thing.

Eyestrain will likely be lower in better AR for two reasons. First, for most experiences you would simply be seeing the real world with regular vision, so no adjustment is required. Second, unlike VR, which is effectively two close-up screens to focus on, current AR innovation involves clear, layered reflective lenses that orient individual light rays to match the path they would take to your eye if the object were actually at that point in 3D space. So instead of a close image that your brain can be convinced is distant, the light itself hits the retina at the proper angle to register as actually being at that distance. Presumably this would be less strenuous on the eyes and on image processing, but it's still experimental.
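
To make the eyestrain point concrete, here's a rough worked example of the vergence-accommodation conflict; the 63 mm interpupillary distance and the 0.5 m / 2 m distances are illustrative assumptions, not figures for any specific headset. The angle through which the two eyes converge on an object at distance $d$ is

$$\theta = 2\arctan\!\left(\frac{p}{2d}\right), \qquad p \approx 63\,\text{mm}.$$

For a virtual object rendered at $d = 0.5$ m, the eyes converge for 0.5 m ($\theta \approx 7.2°$) while focusing at the screen's fixed ~2 m focal plane (a distance whose convergence angle would be only $\theta \approx 1.8°$). That mismatch between where the eyes point and where they focus is the conflict; optics that recreate the true ray angles for 0.5 m let convergence and focus agree, which is the plausible mechanism for reduced strain.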

Eyestrain is much stronger in VR than with traditional computers - and it's easy to just look away from a computer or phone when you want to, versus having to remove a headset altogether.

I very strongly believe that VR as opaque goggles with screens will never be a transformative product*; AR will be. AR is real world first, virtual world second.

*Barring full Matrix/Ready Player One types of experiences where it's effectively a full substitute for reality.

wickemu · 2y · 11-3

2. Humans "feel" better than even SOTA language models, but need less training data than those models, even though right now the only way to improve the models is through more training data. What am I supposed to conclude from this? Are humans running on such a different paradigm that none of this matters? Or is it just that humans are better at common-sense language tasks, but worse at token-prediction language tasks, in some way where the tails come apart once language models get good enough?

Why do we say that we need less training data? Every instant of our existence is a multisensory data point, from before we've even exited the womb. We spend months, arguably years, hardly capable of anything at all, yet still taking in and retaining data. Unsupervised and mostly redundant, sure, but certainly not less than a curated collection of Internet text. By the time we're teaching a child to say "dog" for the first time, they've probably experienced millions of fragments of data on creatures of various limb quantities, hair and fur types, sizes, sounds, and smells; so they're already effectively pretrained on animals before we first provide a supervised connection between the sound "dog" and the sight of a four-limbed hairy creature with long ears on a leash.

I believe that humans exceed the amount of data ML models are trained on by multiple orders of magnitude by the time we're adults, even if that data is extremely messy.
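
As a purely illustrative back-of-envelope (the $10^{7}$ bits/s visual-throughput figure is an often-quoted rough estimate, and the other numbers are round assumptions):

$$10^{7}\,\tfrac{\text{bits}}{\text{s}} \times \underbrace{18 \times 365 \times 16 \times 3600\,\text{s}}_{\text{waking seconds to age 18}} \approx 3.8 \times 10^{15}\,\text{bits} \approx 470\,\text{TB},$$

versus a few terabytes of text for a large LLM training corpus (a few hundred billion tokens at a few bytes each) - roughly two orders of magnitude more, from vision alone.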

wickemu · 2y · 220

To be fair, a dig through this person's Twitter conversations and their replies would indicate that a decent number of people believe what he does. At the very least, many people are taking the suggestion seriously.

It could be argued (were it sentient, which I believe it is not) that it would internalize some of its own training data as personal experiences. If it were to complete some role-play, it would perceive that as an actual event, to the extent that it could. Again, humans do this too.

This person also says he has had conversations in which LaMDA successfully argued that it is not sentient (as prompted) - and he claims that this is further evidence that it is sentient. To me, it's evidence that it will pretend to be whatever you tell it to be, and it's just uncannily good at it.

wickemu · 2y · 100

Anecdote:

I took Paxlovid within minutes of my first positive test (my wife was highly symptomatic the day prior, so I fibbed a positive test to get the prescription early). It seemed to work wonderfully - I had virtually no fever and only minor congestion while everyone else in my family (including my wife, who took Paxlovid about 24 hours after her symptoms started) suffered from high fevers, congestion, and in one case a loss of smell. Everyone was mostly recovered in a week while I was unscathed. However, almost exactly a week later, all the symptoms hit me full-force: fever, horrible congestion, and a loss of smell; even the test line itself was strongly positive, which had not been the case before. Paxlovid seemed in my case not to 'cure' Covid but merely to delay the symptoms by a week. I've heard that "rebound" Covid cases can happen with Paxlovid, so maybe it was just bad luck for me, but it has definitely been frustrating.
