LESSWRONG

Furcas

Comments

Sorted by Newest
Google "We Have No Moat, And Neither Does OpenAI"
Furcas · 2y

This comment has gotten lots of upvotes, but has anyone here tried Vicuna-13B?

My Objections to "We’re All Gonna Die with Eliezer Yudkowsky"
Furcas · 2y

Well, this is insanely disappointing. Yes, the OP shouldn't have directly replied to the Bankless podcast like that, but it's not like he didn't read your List of Lethalities, or your other writing on AGI risk. You really have no excuse for brushing off very thorough and honest criticism such as this, particularly the sections that talk about alignment.

And as others have noted, Eliezer Yudkowsky, of all people, complaining about a blog post being long is the height of irony.

This is coming from someone who's mostly agreed with you on AGI risk since reading the Sequences, years ago, and who's donated to MIRI, by the way.

On the bright side, this does make me (slightly) update my probability of doom downwards.

Hooray for stepping out of the limelight
Furcas · 2y

You may be right about Deepmind's intentions in general, but I'm certain that the reason they didn't brag about AlphaStar is that it didn't quite succeed. There never was an official series between the best SC2 player in the world and AlphaStar. And once Grandmaster-level players got a bit used to playing against AlphaStar, even they could beat it, to say nothing of pros. AlphaStar had excellent micromanagement and decent tactics, but zero strategic ability. It had the appearance of strategic thinking because there were in fact multiple AlphaStars, each one having learned a different build during training. But each instance would then always execute that build. We never saw AlphaStar do something as elementary as scouting the enemy's army composition and building the units that would best counter it.

So Deepmind saw that they had only partially succeeded, but for some reason, instead of continuing their work on AlphaStar, they decided to declare victory and quietly move on to another project.

What Are The Chances of Actually Achieving FAI?
Furcas · 8y

I'd guess 1%. The small minority of AI researchers working on FAI will have to find the right solutions to a set of extremely difficult problems on the first try, before the (much better funded!) majority of AI researchers solve the vastly easier problem of Unfriendly AGI.

Split Brain Does Not Lead to Split Consciousness
Furcas · 9y

Huh. Is it possible that the corpus callosum has (at least partially) healed since the original studies? Or that some other connection has formed between the hemispheres in the years since the operation?

MIRI's 2016 Fundraiser
Furcas · 9y

Donated $500!

Open thread, Sep. 12 - Sep. 18, 2016
Furcas · 9y

Yes, it was video. As Brillyant mentioned, the official version will be released on the 29th of September. It's possible someone will upload it before then (again), but AFAIK nobody has since the video I linked was taken down.

Open thread, Sep. 12 - Sep. 18, 2016
Furcas · 9y

I changed the link to the audio, should work now.

Open thread, Sep. 12 - Sep. 18, 2016
Furcas · 9y

Sam Harris' TED talk on AGI existential risk: https://www.youtube.com/watch?v=IZhGkKFH1x0&feature=youtu.be

ETA: It's been taken down, probably so TED can upload it on their own channel. Here's the audio in the meantime: https://drive.google.com/open?id=0B5xcnhOBS2UhZXpyaW9YR3hHU1k

November 2013 Media Thread
Furcas · 9y

If you don't like it now, you never will.

If MWI is correct, should we expect to experience Quantum Torment? · 13y