kubanetics
  • business data analyst / ex-insurance broker
  • upskilling in ML Safety following a plan based on Gabe Mukobi's guide
  • volunteer at EA Poland
  • based in Poznań, Poland

Comments

This is another reply in this vein. I'm quite new to this, so don't feel obliged to read through; I just told myself I would publish this.

I agree (90–99% agreement) with almost all of the points Eliezer made. The rest are points I probably didn't understand well enough, or where there's no need for a comment, e.g.:

1. - 8. - agree

9. Not sure if I understand it right: if the AGI has been successfully designed not to kill everyone, then why the need for oversight? And if it is capable of killing everyone and the design fails, what would our oversight do? I don't think this is like the nuclear cores. It feels like a bomb you are pretty sure won't go off at random, but if it does, your oversight won't stop it.

10. - 14. - agree

15. - I feel like I need to think about it more to honestly agree.

16. - 18. - agree

19. - to my knowledge, yes

20. - 23. - agree

24. - initially I put "80% agree" on the first part of the argument here, that

The complexity of what needs to be aligned or meta-aligned for our Real Actual Values is far out of reach for our FIRST TRY at AGI

but then, discussing it with my reading group, I went over this a few times and began to agree even more as I grasped the complexity of something like CEV.

25. - 29. - agree

30. - agree, although I wasn't sure about

an AI whose action sequence you can fully understand all the effects of, before it executes, is much weaker than humans in that domain

I think the key part of this claim is "all the effects of", and I wasn't sure whether we really have to understand all of them. But we do have to be sure that one of the effects is not human extinction, so yes; and for "solving alignment", also yes.

31. - 34. - agree

35. - no comment, I have to come back to this once I grasp LDT better

36. - agree

37. - no comment, seems like a rant 😅

38. - agree

39. - ok, I guess

40. - agree, and I'm glad some people want to experiment with the financing of research here.

41. - agree, although I also agree with some of the top comments on this, e.g. evhub's

42. - agree

43. - agree, at least this is what it feels like
 

is there any society where people are ostracised for not possessing difficult skills?

That depends on what you call "difficult". I think you will try to fit in if 80%+ of your peers have the skill, but OTOH, if 80%+ of people have the skill, is it still "difficult"?

In the general population I feel this way about driving cars: some people hate it and have to deal with a lot of stress/anxiety to learn the skill, even though they could live without it. But living without it means they can't do some things; they have less power.

I'd say the power motivation (increasing the space of your actions) could then be lumped together with the economic one, or form a separate one.

A similar skill: learning a language. In the general population this is seldom the case, unless you live in a country where 3/4 of people know a second language (sadly not the case in mine). But AFAIK it worked in the past in some elite societies, where people learned French (the lingua franca), Greek, or Latin to fit in. Languages also increase the space of your actions (you can understand more by yourself, talk to more people, etc.), so this again could be "power"-motivated.

I came here for this tip specifically, to read Kaj's sequence as well! I've used this plugin for now, but I hope it becomes a feature.