Algon


Comments

I bet RatsWrongAboutUAP $200 at 50:1 odds against us both assigning >50% odds to a non-prosaic explanation for UFOs within five years of today. He agreed, and I have received the money. We'll try to adjudicate the bet ourselves, or failing that, ask the LW community, or whoever is suitable, to adjudicate matters. 

I'd take a bet at 1:50 odds for $200. I'm happy to let the LW community adjudicate, or for us to talk it over. I'm currently at something like 5e-5 for UFOs having a non-prosaic explanation, so I don't think I'd be that hard to convince.

So now that you've had almost a year to play with GPT-3 as a research assistant, what are your impressions? How did you update?

I skimmed this, but I get the sense that you're interpreting Hanson's predictions in ways that he would not have agreed with. My cached thoughts suggest that Hanson's model predicts deep learning couldn't possibly work, because creating "intelligence" will require lots of custom engineering for different skills instead of "GPU go brr". Hence his admiration of Cyc: it is focused on implementing a whole host of skills with lots of integrated knowledge.

See his post "I heart CYC". Here's a quote from it, which I think highlights Hanson's own interpretation of "architecture is overrated":

The lesson Lenat took from EURISKO is that architecture is overrated; AIs learn slowly now mainly because they know so little. So we need to explicitly code knowledge by hand until we have enough to build systems effective at asking questions, reading, and learning for themselves. Prior AI researchers were too comfortable starting every project over from scratch; they needed to join to create larger integrated knowledge bases. This still seems to me a reasonable view, and anyone who thinks Lenat created the best AI system ever should consider seriously the lesson he thinks he learned.

That sure doesn't look like "just do next token prediction with a simple architecture and lots of compute". 
 

EDIT: I didn't realize this, but "I heart CYC" was written in 2008, around the time of the FOOM debate. So it is directly relevant to how we should interpret, and score, the incidental predictions you mention. As you can tell, I advocate a contextual reading of old predictions. Yes, that leaves some degrees of freedom to twist words any way you wish, but not that much if we're looking at simple interpretations alongside other relevant statements by the predictor. Use all the data you have available. Plus, we can just ask Hanson.

EDIT^2: I asked Hanson and Yud on Twitter. Let's see if they reply. https://twitter.com/Algon_33/status/1664383482884239365 

I'd like to see a debate between you, or someone who shares your views, and Hanson on this topic. Partly because I think revealing your cruxes w/ each other will clarify your models to us. And partly because I'm unsure if Hanson is right on the topic. He's probably wrong, but this is important to me. Even if I and those I care for die, will there be something left in this world that I value? 

My summary of Hanson's views on this topic:

Hanson seems to think that any of our "descendants", if they spread to the stars, will be doing complex, valuable things. Because, I think, he thinks that a singleton is unlikely, and we'll get many AIs competing against each other. Natural selection is pretty likely to rule. But many of the things we care for were selected by natural selection because they're useful. So we should expect some analogues of what we care about to show up in some AIs in the future. Yes, they may not be exact analogues, but he's OK with that, as he thinks the best option for extrapolating our values is to look for fitter analogues.

Disclaimer: I've never been to an academic conference
EDIT: Also, I'm just thinking out loud here. Not stating my desire to start a conference, just thinking about what can make academics feel like researching alignment is normal.

Those are some big names. I wonder if arranging a big AI safety conference w/ these people would make worrying about alignment feel more socially acceptable to a lot of researchers. It feels to me like a big part of making thinking about alignment socially acceptable is to visibly think about alignment in socially acceptable ways. In my imagination, you have conferences on important problems in academia. 

You talk about the topic there with your colleagues and impressive people. You also go there to catch up with friends, and have a good time. You network. You listen to big names talking about X, and you wonder which of your other colleagues will also talk about X in the open. Dismissing it no longer feels like it will go uncontested. Maybe you should take care when talking about X? Maybe even wonder if it could be true. 

Or on the flip side, you wonder if you can talk about X without your colleagues laughing at you. Maybe other people will back you up when you say X is important. At least, you can imply the big names will. Oh look, a big name X-thinker is coming round the corner. Maybe you can start up a conversation with them in the open. 

I'm not sure what you're pointing at. I think you're referring to awareness of, or engagement with, life. Focusing on reality is what lets you stand up. This sounds pretty peculiar to me. I enjoy heightened awareness of reality, but I certainly don't find any beauty in the "white hot intensity of pain". You, on the other hand... Your description reminds me of Carissa Sevar, a fictional character from a variety of fictions on glowfic, most memorably from Project Lawful. She takes comfort in the fact that she can withstand pain, so finds it strange that someone would prefer non-existence. She'd prefer existing in Hell over non-existence. Her entire character is about being in love with life, and becoming stronger.

I don't have anything that is better than BWT. I've just read, and heard, people who claim to have done substantial mental modifications talking about their experience. The guy I was talking about claimed that 1) this stuff is dangerous, so he won't go into details, and 2) you really have to develop your own techniques. He seems quite sharp, so I'm inclined to trust his word, but that's not much evidence for you. And I haven't done much myself other than messing up my brain a few times and practicing BWT-related focusing enough times that I started getting something out of it.

Oh, I'm quite wary of mental modifications. I've both had some poor experiences myself, and listened to enough stories by people who've done far more substantial changes to their cognition, to know that this is dangerous territory. 

Incidentally, I showed that skill from BWT to someone who claims to have done great amounts of mental surgery. They stated that the skill isn't a bad solution to the problem, but that the author of the page didn't know what the problem even is. Namely, that people didn't spend enough time thinking alone as kids due to repeated distractions, which caused firmware damage. N.B. they only read the "Tuning Your Cognitive Strategies" page. I think they also claimed that this damage was related to school, or perhaps social media? 

I'm not sure what to make of that claim, but the fact is that I have many instinctive flinches away from entering the state of mind which that skill-page describes. These flinches are, I think, caused by a fear of failure to solve problems or produce valuable thoughts. Which, you know, does sound like the kind of damage that school or social media could do. 
