
I always thought Hall's point about nanotech was trivially false. Nanotech research of the kind he wanted died out in the whole world, yet he explains it by US-specific factors. Why didn't research continue elsewhere? Plus, other fields that got large funding in Europe or Japan are alive and thriving. How come?

That doesn't mean that a government program which sets up bad incentives cannot be worse than useless. It can be quite damaging, but it cannot kill a technologically promising research field worldwide for twenty years.

The point about encouraging safe over innovative research is spot on, though. But the main culprits are not granting agencies; it's tying researchers' careers to the number of peer-reviewed papers, imo. The main problem with the granting system is the amount of time wasted writing grant applications.

That was quite different, though (spoiler alert):

A benevolent conspiracy to hide a dangerous scientific discovery by lying about the state of the art and denying resources to anyone whose research might uncover the lie, ultimately failing because apparently unrelated advances made rediscovering the true result too easy.

I always saw it as a reply to the idea that physicists could have hidden the possibility of an atomic bomb for more than a few years.

The example at the beginning is a perfect retelling of my interaction with transformers too :D

However, a word of caution: sometimes the efficient thing is actually to skim and move on. If you spend the effort to truly understand a topic that is difficult but limited in scope, and then don't interact with it for a year or two, what you remember is just the high-level verbal summary (the same as if you had stopped at the first step). For example, I have understood and forgotten MOSFET transistors at least three times in my life, and each time it took more or less the same effort. If I had to explain them now, I would retreat to a single shallow-level sentence.

They commented without reading the post I guess...

I think having an opinion on this requires much more technical knowledge than forming one on GPT-4 or DALL-E 3 did. I for one don't know what to expect. But I upvoted the post, because it's an interesting question.

I agree with you, actually. My point is that in fact you are implicitly discounting EY's pessimism - for example, he didn't release a timeline, but he often said "my timeline is way shorter than that" with respect to 30-year ones and, I think, 20-year ones as well. The way I read him, he thinks we personally are going to die from AGI, and our grandkids will never be born, with 90+% probability, and that the only chances of avoiding it are that either someone already had a plan three years ago which has been implemented in secret and will come to fruition next year, or some large out-of-context event happens (say, a nuclear or biological war brings us back to the stone age).

My no-more-informed-than-yours opinion is that he's wrong on several points, but correct on others. From this I deduce that the risk of very bad outcomes is real and not negligible, but the situation is not as desperate as he says, and there are probably actions that will improve our outlook significantly. Note that in the framework "either EY is right, or he's wrong and there's nothing to worry about" there's no useful action, only hope that he's wrong, because if he's right we're screwed anyway.

Implicitly, this is your world model as well, from what you say. Discussing this may then look like nitpicking, but whether Yudkowsky or Ngo or Christiano is correct about possible scenarios changes a lot about which actions are plausibly helpful. Should we look for something that has a good chance of helping in an "easier" scenario, rather than concentrate our efforts on solutions that work in the hardest scenario, given that the chance of finding one is negligible? Or would that be like looking for the keys under the streetlight?

I like the idea! Just a minor issue with the premise:

"Either I’d find out he’s wrong, and there is no problem. Or he’s right, and I need to reevaluate my life priorities."

There is a wide range of opinions, and EY's is one of the most pessimistic. It may be that he's wrong on several points and we are way less doomed than he thinks, but that the problem is still there, and a big one at that.

(In fact, if EY is correct we might as well ignore the problem, as we are doomed anyway. I know this is not what he thinks, but it's the conclusion I would draw from his predictions.)

I think that you need to distinguish two different goals:

  • the very ambitious goal of eliminating any risk of a misaligned AI doing any significant damage. If that is even possible, it would require an aligned AI with much stronger capabilities than the misaligned one (or many aligned AIs whose combined capabilities are not easily matched)
  • the more limited goal of reducing extinction risk from AGI to a low enough level (say, comparable to asteroid risk or natural-pathogen risk). This might be manageable with the help of lesser AIs, depending on the time available to prepare

Addendum: if you want to bring legislation more in line with voters' preferences issue by issue, avoiding the distortion from coalition building, Swiss-style referenda seem to work to an acceptable degree: http://www.lesswrong.com/posts/x6hpkYyzMG6Bf8T3W/swiss-political-system-more-than-you-ever-wanted-to-know-i
