FiftyTwo

Saying that people would be better off taking more risks under a particular model elides the question of why they don't take those risks to begin with, and how we could change that, if it's desirable to do so.

The psychological impact of a loss of x is generally greater than that of a gain of x. So if I know I will feel worse about losing $10 than I will feel good about gaining $100, then it's entirely rational under my utility function not to take a 50/50 bet between those two outcomes. Maybe I would be better off overall if I didn't overweight losses, but humans can't easily rewrite their own utility functions. The closest you could come is some kind of exposure therapy for losses.
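
To make the arithmetic concrete, here is a minimal sketch of that reasoning, assuming a linear prospect-theory-style value function and a loss-aversion coefficient LAMBDA (both are illustrative assumptions, not measured values):

```python
# Illustrative sketch: a loss-averse agent evaluating a 50/50 bet.
# The linear value function and the coefficient LAMBDA are assumptions
# for illustration; prospect theory typically also adds diminishing
# sensitivity (a concave/convex value curve).

LAMBDA = 12  # losses assumed to be felt ~12x as strongly as equal gains

def subjective_value(outcome):
    """Linear value function with losses weighted by LAMBDA."""
    return outcome if outcome >= 0 else LAMBDA * outcome

def bet_value(gain, loss, p_win=0.5):
    """Expected subjective value: win `gain` with p_win, else lose `loss`."""
    return p_win * subjective_value(gain) + (1 - p_win) * subjective_value(-loss)

# The bet from the comment: 50/50 to gain $100 or lose $10.
# Monetary expected value is clearly positive...
print(0.5 * 100 - 0.5 * 10)   # 45.0
# ...but the loss-averse agent declines it: 0.5*100 - 0.5*12*10 < 0.
print(bet_value(100, 10))     # -10.0
```

For this particular bet, any loss weighting above 10 makes the subjective expected value negative, which is exactly what "losing $10 feels worse than gaining $100 feels good" implies.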

Also, we have a huge amount of mental architecture devoted to understanding and remembering spatial relationships of objects (for obvious evolutionary reasons). Using that as a metaphor for purely abstract things allows us to take advantage of that mental architecture to make other tasks easier.

A very structured version of this would be something like a memory palace, where you assign ideas to specific locations in a familiar place. But I think we often do the same thing informally when we talk about ideas in spatial terms and build loose mental models of them as existing in spatial relationships to one another (or at least I do).
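
For what it's worth, the structured version is mechanical enough to sketch in code. A toy example, where the route, locations, and stored ideas are all invented:

```python
# Toy memory palace: ideas pinned to locations along a fixed walking route.
# All locations and ideas here are invented examples.

palace_route = ["front door", "hallway", "kitchen", "staircase", "bedroom"]

memory_palace = {
    "front door": "opening line of the talk",
    "hallway":    "first argument",
    "kitchen":    "key statistic",
    "staircase":  "counterargument and reply",
    "bedroom":    "closing summary",
}

# Recall works by mentally walking the route in order,
# letting each location cue the idea stored there.
for location in palace_route:
    print(f"{location}: {memory_palace[location]}")
```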

I think the core thing here is same-sidedness.

The converse of this is that the maximally charitable approach can be harmful when your interlocutor is fundamentally not on the same side as you: that of honestly discussing a topic and trying to arrive at the truth. I've seen people tie themselves in knots trying to apply the principle of charity when the most parsimonious explanation is that the other side is not engaging in good faith and shouldn't be treated as if it were.

It's taken me a long time to internalise this, because my instinct is to take what people say at face value. But it's important to remember that sometimes there isn't anything complex or nuanced going on; people can just lie.

Thanks. This is the kind of content I originally came to LW for a decade ago, but it seems to have become less popular.

You might find The Origins of Political Order interesting. It emphasises how the principal-agent problem is one of the central issues of governance, and how, without strong oversight mechanisms, systems tend to descend into corruption.

Is there any way to reverse-engineer from these pictures which existing images were used to generate them? It would be interesting to see how much similarity there is.
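
One partial approach, short of actually recovering training data, might be nearest-neighbour search over image embeddings: embed the generated picture and a candidate set of existing images, then rank the candidates by similarity. A minimal sketch, where the embeddings are random placeholders standing in for real ones (e.g. from a model like CLIP):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_images(generated, candidates, top_k=5):
    """Rank candidate embeddings by similarity to the generated image.

    `generated` is a 1-D embedding vector; `candidates` maps image ids
    to embedding vectors. Both are placeholders here.
    """
    scores = {img_id: cosine_similarity(generated, emb)
              for img_id, emb in candidates.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

# Placeholder data: random 512-d vectors standing in for real embeddings.
rng = np.random.default_rng(0)
generated = rng.normal(size=512)
candidates = {f"image_{i}": rng.normal(size=512) for i in range(100)}
print(nearest_images(generated, candidates, top_k=3))
```

High similarity wouldn't prove an image was used in training, but it would at least show how close the outputs sit to particular existing pictures.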

So we just need to get two superpowers, who currently feel they are in zero-sum competition with each other, to stop trying to advance in an area that gives them a potentially infinite advantage? It seems a very classic case of the kind of coordination problem that is difficult to solve, with high rewards for defecting.
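
The payoff structure here is the classic prisoner's dilemma. A minimal sketch with invented payoffs, just to show why defection dominates without enforcement:

```python
# Two superpowers choosing whether to COOPERATE (pause development)
# or DEFECT (race ahead). Payoffs are invented for illustration:
# (row player's payoff, column player's payoff).

payoffs = {
    ("cooperate", "cooperate"): (3, 3),  # both pause: shared safety
    ("cooperate", "defect"):    (0, 5),  # the defector gains a decisive edge
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # both race: risky for everyone
}

for their_move in ("cooperate", "defect"):
    coop = payoffs[("cooperate", their_move)][0]
    defect = payoffs[("defect", their_move)][0]
    best = "defect" if defect > coop else "cooperate"
    print(f"If they {their_move}: cooperating pays {coop}, "
          f"defecting pays {defect} -> best response is to {best}")
# Defecting is the best response either way, so both defect,
# even though mutual cooperation would leave both better off.
```

Arms-control regimes work by changing these payoffs through verification and sanctions, which is the oversight infrastructure noted below as not yet existing for AI.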

We have partially managed to do this for nuclear and biological weapons, but only with a massive oversight infrastructure that doesn't exist for AI, relying on physical evidence and materials controls that have no AI equivalent. It's not impossible, but it would require a level of concerted international effort similar to what was needed for nuclear weapons, which took a long time, so it possibly doesn't fit with your short timeline.

A more charitable interpretation of the same evidence would be that, as a public health professional, Dr Fauci has a lot of experience with the difficulties of communicating complex messages and with the political tradeoffs that are necessary for effective action, and has judged based on that experience what is most effective to say. Do you have data he doesn't? Or a reason to think his experience in his speciality is inapplicable?

"Earth does have the global infrastructure"

It does? What do you mean? The only thing I can think of is the UN, and recent events don't make it very likely they'd engage in coordinated action on anything.

"It should not take long, given these pieces and a moderate amount of iteration, to create an agentic system capable of long-term decision-making."

That is, to put it mildly, a pretty strong claim, and one I don't think the rest of your post really justifies. Without that justification, the post is still just listing a theoretical thing to worry about.
