Sometimes I have an internal desire to do something different from what I think should be done (for example, I might desire to play a game while also thinking the better choice is to read). I've been experimenting with using randomness to mediate this. I keep a D20 with me, give each side of the dispute odds proportional to the strength of its resolve, and then roll the die.
In theory, this means neither side will overpower the other, and even a small resolve still has a chance. I'm not sure how useful this is, but it's fun, and it can give me a bit of motivation (I've tried to internalize this kind of roll as a rule not to break without good reason).
Also, when I'm merely deciding between some options, sometimes I'll roll more casually with equal odds, and it'll help me realize that I already knew which one I really wanted (if I don't like the roll's outcome).
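For anyone curious, the die procedure above is just a weighted random draw, and could be sketched in a few lines of Python (the function name and weights here are made up for illustration; `random.choices` is the standard-library call that does the proportional-odds part):

```python
import random

def roll_decision(options):
    """Pick one option, with odds proportional to each side's resolve.

    options: dict mapping a choice label to its weight, e.g. how many
    sides of the D20 that desire gets. A physical D20 roll with
    proportional odds is equivalent to this weighted draw.
    """
    labels = list(options)
    weights = [options[label] for label in labels]
    # random.choices returns a list of k picks; we want just one.
    return random.choices(labels, weights=weights, k=1)[0]

# Example: reading feels like the better choice (14 sides of the die),
# but the urge to play a game still gets its 6 sides.
choice = roll_decision({"read": 14, "play a game": 6})
```

The equal-odds casual roll is the same call with all weights set to 1.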
(status: i'm newer here, this is a random thought i had, could be obvious to others, might also help when talking to outsiders about ai risk)
humans seem like a good example of an intelligence takeoff. for most of prehistory, species were following the same basic patterns repetitively (eating each other, trying to survive, etc.)
then at some arbitrary point, one species either passed some threshold in intelligence, or maybe it just gained a pivotal intelligence-unrelated ability (such as opposable thumbs), or maybe it just found itself in the right situation (e.g. the agricultural revolution is commonly explained by humans ending up in an environment better suited for plant growth).
and then it spiraled out of control to where we are now.
and in the future, this species is gonna create an even more powerful intelligence. this mirrors our own worries about AI creating a more powerful AI.
sometimes people say that there's no evidence for AI doom because it hasn't been tested. framed this way, humans might be moving evidence for such people.
this might also have implications for how AI takeoff might go. it might be that there won't be some surprising increase in intelligence compared to earlier AIs - it could be more like the biointelligence takeoff, where it happens after some arbitrary-seeming conditions are met.
Welcome! And yes, this is a thing people have talked about a lot, particularly in the context of outer versus inner alignment (the outer optimizer, evolution, designed an inner optimizer, humans, who optimize for different things than evolution does, like pleasure, but ended up effectively becoming a "singularity" from its point of view). It's cool that you noticed this on your own!
thanks for the reply btw, i'd upvote you but the site won't let me yet :p
eta: now i can :3