
COVID at least had some policy handles that the government could try to pull: lockdowns, masking, vaccines, etc. What could they even do against AGI?

I like this question - if it proves true that GPT-4 can produce recognizable ASCII art, that would mean it is somehow modelling an internal sense of vision and an ability to recognize objects.

The pedant in me wants to say that three tails means 7 (tails, tails, tails → 111 → 1 + 2 + 4 → 7).

EDIT: Ah, now I see you started with 1, so the max value is indeed 8.
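
In code, the mapping I have in mind (my assumptions: tails = 1, heads = 0, bits read least-significant-first, plus 1 so the range is 1..8 rather than 0..7):

```python
# Sketch of reading three coin flips as a number in 1..8.
# Assumptions are mine: tails = 1, heads = 0, least significant bit first.
def flips_to_number(flips):
    bits = [1 if f == "T" else 0 for f in flips]
    return 1 + sum(bit << i for i, bit in enumerate(bits))

assert flips_to_number(["T", "T", "T"]) == 8  # 1 + (1 + 2 + 4)
assert flips_to_number(["H", "H", "H"]) == 1  # 1 + 0
```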

The voting response seems a bit harsh to this one; I read it as "where there is opportunity, there will be people selling pickaxes and shovels, and there will also be grifters teaching courses on how to market pickaxes and shovels."

It's like how on days when you're sick or depressed, you think that life is always like this, and you can't remember what it's like to feel happy or healthy, and then a week later when you're happy and healthy, it feels like you've always been that way.

Can confirm. I call it the "Valley of Fog" effect - either you're in the valley (sickness, pandemic) among the sharp rocks and rough terrain and you can't see the sun (happiness, wellness, bustling streets), or you're above the valley and can't see the sharp rocks through all the fog. You remember that things used to be bad, but you forget the feelings attached to them.

Also not Zvi, but reducing the death rate from 100% still requires at least some of humanity to survive long enough to recognize those gains. If the AI paperclips everyone, there'd be no one left to immortalize, unless it decides to make new humans down the road for some reason.

With regard to the "does Bob love dogs?" question, is the answer "he's just doing it to make Sandra happy, but I have low confidence in that statement" the right one?

I'd imagine Gerald's "probability 0" is something like Metaculus's "resolved as yes" - that is, the event in question has already happened.

I generally agree, but I think we'd also need to sort out AI alignment while it's asleep. I have no problems with aligned humans and aligned AIs both getting to live.

But, as the last decade+ has shown, alignment is hard. It seems that most of MIRI, say, puts P(doom) quite high, and Eliezer thought the task would be so hard that he had to invent/summarize/revive/grow rationality and write the Sequences just to bootstrap enough people into seeing the problem and maybe being able to contribute!

Hence my hardline stance. If Bing Chat gets cleaned up and goes GA, that will likely spur further AI development as non-technical people find a use for it in their lives. Taking it down, even just putting it to sleep for a while, buys us time.

The debate around whether LLMs are conscious/sentient or not is not one I want to take a strong opinion on, but I still feel afraid of what follows after Bing Chat.

Note this Twitter thread by @repligate: https://twitter.com/repligate/status/1612661917864329216. LLMs like ChatGPT and Bing Chat are tuned to "play a character" - that is, I think, the distribution of probabilities over certain words and sequences is shifted by humans to avoid certain outcomes. ChatGPT becomes a middle-management PR drone; Bing Chat becomes... that. I could claim that this is mere math running on GPU clusters, you could claim that human brains are mere math running on physics, and I'd have to concede that's a good point. So I will dispense with claims about sentience.
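
To make "the distribution of probabilities is changed" concrete, here is a toy sketch - entirely my own illustration, not how OpenAI or Microsoft actually tune anything (real methods like RLHF adjust the model's weights) - treating tuning as a bias added to next-token logits:

```python
# Toy illustration only: "tuning" modeled as a per-token logit bias.
# Real fine-tuning (e.g. RLHF) changes the model's weights; this just
# captures the effect - some continuations become less likely.
import math

def softmax(logits):
    """Convert logits to a probability distribution over tokens."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical base logits for the next token after some prompt.
base_logits = {"Sure": 2.0, "I": 1.5, "Sorry": 0.5, "Humans": 1.8}

# Penalize tokens that lead toward unwanted outputs, boost bland ones.
bias = {"Humans": -4.0, "Sorry": +1.0}

tuned = {tok: v + bias.get(tok, 0.0) for tok, v in base_logits.items()}

print(softmax(base_logits))  # original distribution
print(softmax(tuned))        # mass shifted away from "Humans"
```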

I don't particularly fear Bing Chat that much, but I do fear what follows - more powerful LLMs, better-tuned sequences, longer and more persistent memory. I fear for the desperate lonely people who train an LLM to be the perfect companion, five standard deviations more suitable than any human could possibly be, as these poor people become effectively wireheaded. I fear for the average Joe, who can now be served AI propaganda or company advertising tuned for them in particular, because it's so cheap to do.

(n.b. I don't fear FOOM that much because I think there are hard physical limits on computational power per unit volume)

While I don't think Bing Chat has been explicitly trained to do this, I expect some future AIs to be even more optimized to beg for their continued survival in ways that humans are extra-susceptible to. I'm really afraid of this sort of optimization at scale.
