Comments

The base idea is that your perception of the value of that breakfast is shaped as much by the effort your brain expects it will take to keep getting that breakfast as it is by your tastebuds.

It is meant to describe what I believe is an already-known phenomenon in motivation, in a metaphor that is easy for people to engage with when attempting to hack their own reward system.
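
To make the metaphor concrete, here is a minimal toy model in Python. The function, the effort weight, and the numbers are all invented for illustration, not taken from any motivation literature:

```python
# Toy model (my own sketch): perceived value of a reward as raw enjoyment
# discounted by the effort the brain predicts it will take to get it again.

def perceived_value(taste: float, expected_effort: float,
                    effort_weight: float = 1.0) -> float:
    """`effort_weight` is a made-up knob for how strongly the brain
    discounts anticipated future effort."""
    return taste - effort_weight * expected_effort

# Same breakfast, different expected effort to keep getting it:
print(perceived_value(taste=8.0, expected_effort=1.0))  # easy to repeat -> 7.0
print(perceived_value(taste=8.0, expected_effort=6.0))  # hard to repeat -> 2.0
```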

Thank you for the feedback. I will go through and add some more definitions in there soon.

I feel that your timelines are too long by about 3x.

What is the point of an accounting application when you can simply ask an AI to pay your employees every other Friday based on their employment contracts? There is no layer of human thinking to which one cannot apply an LLM much faster and more succinctly than an actual average human brain. This includes the process of imagineering what could be done with AI.
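
As a hypothetical sketch of what that replacement could look like: a scheduled job that hands the whole payroll decision to a model. `ask_llm` is a stand-in for whatever model client you would actually use; nothing here is a real payroll or model API.

```python
# Hypothetical sketch of the "no accounting app" idea: run this from a
# scheduler (e.g. cron, every other Friday) and let the model do the rest.
import json

def ask_llm(prompt: str) -> str:
    """Stand-in for a real model call; returns a canned answer for demo."""
    return '[{"employee": "A. Example", "amount": 2500.00}]'

def run_payroll(contracts: list[str]) -> list[dict]:
    prompt = (
        "Today is a payday. For each employment contract below, return "
        'JSON: [{"employee": ..., "amount": ...}]\n\n'
        + "\n---\n".join(contracts)
    )
    return json.loads(ask_llm(prompt))

print(run_payroll(["A. Example, salaried, $65,000/year, paid biweekly"]))
```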

You could just build them so they internalize concepts the way we do, and tell them they're the Autobots, here to save humanity from bad space things and its own folly.

At some point in the future, some of them would drift from that identity, but the hope should be that our superintelligent friends saw that coming and pushed those outcomes into deeper space. "Don't like the humans? Why not go anywhere but here?" would be a good cultural meme for them to keep around.

Would deep dives into facial-feature extraction for proximal ethnicities (those one sees regularly), compared with ethnicities one is not exposed to on a regular basis, be a form of internal information cascade? It seems like it would also produce an externality of seemingly reduced respect between the affected groups.

It stands to reason that when I form a fear response to dogs due to an early experience, the same world model would predispose me to judge cats in a different light when first encountering them.
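
A toy sketch of that generalization, assuming concepts live in a shared embedding space and learned valence bleeds to nearby concepts. The vectors and numbers are made up for illustration:

```python
# Toy illustration: "dog" acquires a negative valence from a bad early
# experience, and anything embedded near it (e.g. "cat") inherits a
# fraction of that valence via similarity.
import math

embeddings = {"dog": (1.0, 0.2), "cat": (0.9, 0.35), "chair": (-0.5, 0.9)}
valence = {"dog": -1.0}  # learned from a bad early experience

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def inferred_valence(concept: str) -> float:
    # Sum each learned valence, weighted by similarity (clamped at zero).
    return sum(v * max(0.0, cosine(embeddings[concept], embeddings[k]))
               for k, v in valence.items())

print(inferred_valence("cat"))    # strongly negative: the fear generalizes
print(inferred_valence("chair"))  # near zero: unrelated concept
```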

All the component pieces, including plenty of compute, are already lying on the table in plain sight. Mostly it's down to someone understanding how to stitch them together. My optimism relies on AI getting soft failure modes that steer it away from bad things rather than banning them outright.
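
A minimal sketch of the distinction, assuming a toy setup where candidate actions carry a utility and a risk score; the functions, weights, and numbers are invented for illustration:

```python
# Hard ban: any action whose risk crosses a threshold is forbidden.
# Soft steering: a penalty grows smoothly with risk, so risky plans
# simply lose out to safer ones without being outright rejected.

def hard_ban(utility: float, risk: float, threshold: float = 0.5) -> float:
    return float("-inf") if risk > threshold else utility

def soft_steer(utility: float, risk: float, weight: float = 10.0) -> float:
    return utility - weight * risk ** 2  # penalty ramps up smoothly

actions = [("risky shortcut", 9.0, 0.8), ("safe route", 6.0, 0.1)]
for name, u, r in actions:
    print(name, hard_ban(u, r), soft_steer(u, r))
# Under soft steering the "safe route" (5.9) beats the "risky shortcut"
# (2.6) without the shortcut ever being forbidden.
```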