You're not taking your own advice. Since your message, Ilya has publicly backed down, and Polymarket has Sam coming back as CEO at coinflip odds: Polymarket | Sam back as CEO of OpenAI?

How does that address Hotz's claim? Eliezer's post doesn't consider any worlds with a God outside the scope of our Game of Life, and it doesn't address how well the initial conditions and rules were chosen. The only counter I see in that post is that terrible things have happened in the past, which provides a lower bound on how bad things can get in the future. But Hotz didn't claim that things won't go badly, just that they won't be boring.

How about slavery? Should that be legal? Stealing food, medication? Age limits?

There are all sorts of things that are illegal which, in rare cases, would be better off being legal. But the legal system is a somewhat crude tool. Proponents of these laws would argue that, in most cases, these options do more harm than good. Whether that's true is an open question as far as I can tell. Obviously, if the scenarios you provide are representative, then the answer is clear. But I'm not sure why we should assume that to be the case. Addiction and mental illness immediately come to mind as the most probable reasons people would decide to take these options.

I don't want to live in a world where women have to prostitute themselves to afford rent. Better that they be homeless until the homelessness situation becomes severe enough that we resolve it. Otherwise, IMO, we are just boiling the frog: there will be no protests and no riots, because selling our kidneys and having sex for rent is just enough for us to get by.

Maybe these options should be legal in developing countries. But in the developed world we can truly afford to avoid these scenarios for all mentally healthy and non-addicted individuals. So, let's do that, rather than lowering our standards. If some portion of the population must suffer before we realize what we're doing wrong, then that's a price that needs to be paid.

It is explained in the first section of the referenced post: AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years - EA Forum (effectivealtruism.org)

Unaligned: If you're going to die soon, you probably want to spend your money soon.

Aligned: If you're going to be rich soon, you probably don't want to save your money.

Both scenarios depend on the time-discounted value of money being lower after AGI. I guess the underlying assumptions are that the value derived from aligned AGI will be distributed without respect to capital, and that capital is relatively worthless in the face of unaligned AGI.
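To make the discounting logic concrete (these numbers are my own illustration, not from the post): the present value of a payoff $F$ received in $t$ years at discount rate $r$ is

$$PV = \frac{F}{(1+r)^t}$$

If you expect AGI within, say, ten years, either scenario raises your effective $r$ over that horizon, and $100 promised in ten years is worth about $82 today at $r = 2\%$ but only about $39 at $r = 10\%$. As I read it, the post's point is that long-dated real interest rates show no such jump, so markets aren't pricing in AGI on that timescale.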

You are not talking about hours per person; you are talking about hours per worker. Total working hours per person increased ~20% from 1950 to 2000 for ages 25-55.

The problem with this explanation is that there is a very clear delineation here between not-fraud and fraud. It is the difference between not touching customer deposits and touching them. Your explanation doesn't dispute that they were knowingly and intentionally touching customer deposits. In that case, it is indisputably intentional, outright fraud. The only thing left to discuss is whether they knew the extent of the fraud or how risky it was.

I don't think it was ill-intentioned by SBF's own moral compass. He just had the belief: "I will pass a small amount of risk onto our customers, tell some small lies, and this will allow us to make more money for charity. This is net positive for the world." Then the risks mounted, the web of lies became more complicated to navigate, and it snowballed from there.

I find https://youglish.com/ to be very helpful with this.