Pattern

Interested in math, Game Theory, etc.

Comments

As a native speaker of a language that has only gender-neutral pronouns and no gendered ones, I often stumble and misgender people, out of disregard for that info - that is just not how referring works in my brain. I suspect that native speakers of gendered languages don't have this property, and that the self-reports are about them.

What language is this?

It reminds me of a move made in a lawsuit.

But you said that I should use orange juice as a replacement because it's similarly sweet.

Does ChatGPT think tequila is sweet, orange juice is bitter... or is it just trying to sell you drinks?*

tequila has a relatively low alcohol content

Relative to what ChatGPT drinks, no doubt.

And tequila doesn’t have any sugar at all.

*Peer pressure you into drinking it, maybe.

At best this might describe some drinks that have tequila in them. Does it know the difference between "tequila" and "drinks with tequila"?

 

Does ChatGPT not differentiate between 'sweet' and 'sugar', or is ChatGPT just an online bot that improvises everything and gaslights you when it's called on it? It keeps insisting:

..."I was simply pointing out that both orange juice and tequila can help to balance out the flavors of the other ingredients in the drink, and that both can add a nice level of sweetness to the finished beverage."...

Does someone want to try the two recipes out and compare them?

these success stories seem to boil down to just buying time, which is a good deal less impressive.

The counterpart to 'faster vaccination approval' is 'buying time', though. (Whether or not the time ends up being well used, buying it is good at the time.) The other reason to focus on it: how much can you affect pool testing versus vaccination approval speed? Other work, like improving statistical techniques, might be easier for a lot of people than changing a specific organization.
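
To make 'pool testing' concrete: under Dorfman pooling you test a pooled sample once, and only retest individually if the pool comes back positive. A minimal sketch of that arithmetic, assuming independent infections; the prevalence and pool sizes below are illustrative, not from the post:

```python
# Sketch of Dorfman pooled testing, assuming independent infections
# at prevalence p. All numbers here are illustrative assumptions.

def expected_tests_per_person(p: float, pool_size: int) -> float:
    """One test per pool, plus pool_size individual retests if the pool is positive."""
    p_pool_positive = 1 - (1 - p) ** pool_size
    return (1 + p_pool_positive * pool_size) / pool_size

for k in (5, 10, 20):
    print(f"pool size {k}: {expected_tests_per_person(0.01, k):.3f} tests/person")
```

At 1% prevalence, a pool of 10 comes out to roughly 0.2 tests per person - the sort of gain 'improving statistical techniques' points at.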

Overall this was pretty good.

 

That night, Bruce dreamt of being a bat, of swooping in to save his parents. He dreamt of freedom, and of justice, and of purity. He dreamt of being whole. He dreamt of swooping in to protect Alfred, and Oscar, and Rachel, and all of the other good people he knew.

The part about "purity" didn't make sense.

 

Bruce would act.

This is a bit of a change from before - something about the mistake, rather than the worry, seems like it would make more sense. ('Bruce would get it right this time', or something about 'Bruce would act (and it would make things better this time)'.) 'Bruce wouldn't be afraid', maybe?

I was thinking

The rules don't change over time, but what if, on... the equivalent of the summer solstice, fire spells get +1 fire mana or something? I.e., periodic behavior. Wait, I misread that. I meant more like: the rules might be different, say, once every hundred years (the anniversary of something important) - like there are more duels that day, so you might have to fight multiple opponents, or something.

This is a place where people might look at the game's flux and go 'the rules don't change'.
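
To illustrate what date-keyed, periodic rules could look like mechanically, here's a tiny sketch; the day numbers, bonuses, and names (fire_mana_bonus, opponents_today) are invented for illustration and not drawn from the scenario:

```python
# Sketch of fixed rules whose effects key off the date (periodic behavior).
# Thresholds and bonuses are invented for illustration.

from dataclasses import dataclass

@dataclass
class DuelDay:
    year: int
    day_of_year: int

def fire_mana_bonus(day: DuelDay) -> int:
    # On the solstice-equivalent (day 172, an arbitrary choice),
    # fire spells get +1 fire mana.
    return 1 if day.day_of_year == 172 else 0

def opponents_today(day: DuelDay) -> int:
    # Once every hundred years (an anniversary), there are more duels,
    # so you might have to fight multiple opponents.
    return 3 if day.year % 100 == 0 else 1
```

The point being that such rules never 'change' - they are a fixed function of the date - yet a dataset sampled away from the special days would make the rules look simpler than they are.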

Our world is so inadequate that seminal psychology experiments are described in mangled, misleading ways. Inadequacy abounds, and status only weakly tracks adequacy. Even if the high-status person belongs to your in-group. Even if all your smart friends are nodding along.

It says he started with the belief - not that he was right, or that he ended with it. Keeping the idea contained to the source, so it's clear it's not being asserted, could be improved, yes.

This is what would happen if you were magically given an extraordinarily powerful AI and then failed to align it,

Magically given a very powerful, unaligned AI. (This 'the utility function is in code, in one place, and can be changed' assumption needs re-examination. Even if we assert it exists in there*, it might be hard to change in, say, a NN.)

* Maybe this is overgeneralizing from people, but what reason do we have to think an 'AI' will be really good at figuring out its utility function (so it can make changes without changing it, if it so desires)? The postulate 'it will be able to improve itself, so eventually it'll be able to figure everything out (including how to do that)' seems to ignore things like 'improvements might make it more complex and harder to do that while improving.' Where and how do you distinguish between 'this is my utility function' and 'this is a bias I have'? (How have you improved this, and your introspection abilities? How would a NN do either of those?)
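
As a rough illustration of the footnote's point, here is a sketch contrasting a utility function that really does live 'in code, in one place' with one implicit in learned weights; every name and shape here is hypothetical:

```python
# Case 1: utility is explicit code in one place - easy to inspect and edit.
def utility_explicit(state: dict) -> float:
    return 2.0 * state["paperclips"] - 0.5 * state["energy_used"]

# Case 2: 'utility' is implicit in learned weights. There is no single line
# to change; preferences are smeared across parameters, alongside whatever
# biases training baked in - which weights are 'values' and which are 'bias'?
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(1, 8))

def utility_learned(features: np.ndarray) -> float:
    hidden = np.tanh(W1 @ features)
    return (W2 @ hidden).item()
```

Editing utility_explicit without side effects is trivial; making the 'same' edit to utility_learned means finding and moving some direction in weight space, which is exactly the hard introspection step the footnote questions.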

 

One important factor seems to be that Eliezer often imagines scenarios in which AI systems avoid making major technical contributions, or revealing the extent of their capabilities, because they are lying in wait to cause trouble later. But if we are constantly training AI systems to do things that look impressive, then SGD will be aggressively selecting against any AI systems who don’t do impressive-looking stuff. So by the time we have AI systems who can develop molecular nanotech, we will definitely have had systems that did something slightly-less-impressive-looking.

Now there's an idea: due to competition, AIs do impressive things (which aren't necessarily safe). An AI creates the last advance which, when implemented, causes a FOOM + bad stuff.

Eliezer appears to expect AI systems performing extremely fast recursive self-improvement before those systems are able to make superhuman contributions to other domains (including alignment research),

This doesn't necessarily require the above to be right or wrong - human-level contributions (which aren't safe) could, in the worst-case scenario... etc.

 

[6.] Many of the “pivotal acts”

(Added the 6 back in; it disappeared while copying and pasting the quote here.)

There's a joke about a philosopher king somewhere in there. (Ah, if only we had an AI powerful enough to save us from AI, but still controlled by...)

 

I think Eliezer is probably wrong about how useful AI systems will become, including for tasks like AI alignment, before it is catastrophically dangerous.

I think others (or maybe the OP, previously?) have pointed out that AI can affect the world in big ways well before 'taking it over'. Domain-limited, sub-human, on par with human, or super-human performance - it doesn't necessarily matter which of those it is (though more power -> more effect is the expectation). Some domains are big.

Spoilering/hiding questions. Interesting.

Do the rules of the wizards' duels change depending on the date?

I'll aim to post the ruleset and results on July 18th (giving one week and both weekends for players).  If you find yourself wanting extra time, comment below and I can push this deadline back.

The dataset might not have enough info for this, and the rules might not be deep enough, but a wizards' duel between analysts, or 'players', also sounds like it could be fun.

I think that is a flaw of comments relative to 'google docs'. In long documents, when comments don't tag the passage they reference, it can be hard to find other people asking the same question you did, even if someone wondered about the same section. (And the difficulty of ascertaining that quickly seems unfortunate.)
