Idan Arye


SSC Journal Club: AI Timelines

The headline result: the researchers asked experts for their probabilities that we would get AI that was “able to accomplish every task better and more cheaply than human workers”. The experts thought on average there was a 50% chance of this happening by 2062 – and a 10% chance of it happening by 2026!

But on its own this is a bit misleading. They also asked by what year “for any occupation, machines could be built to carry out the task better and more cheaply than human workers”. The experts thought on average that there was a 50% chance of this happening by 2139, and a 20% chance of it happening by 2037.

As the authors point out, these two questions are basically the same – they were put in just to test if there was any framing effect. The framing effect was apparently strong enough to shift the median date of strong human-level AI from 2062 to 2139. This makes it hard to argue AI experts actually have a strong opinion on this.

These are not the same.

The first question sounds like an AGI - a single AI that can do anything we tell it to do (or anything it decides to do?) without any further development effort by humans. We'll just need to provide a reasonably specified description of the task, and the AI will learn on its own how to do it - by deducing it from the laws of physics, by consuming existing learning resources made for humans, by trial and error, or whatever.

The second question does not require AGI - it's about regular AIs. It requires that for whatever task done by humans, it would be possible to build an AI that does it better and more cheaply. No research into the unknown would need to be done - just utilization of established theory, techniques, and tools - but you would still need humans to develop and build that specific AI.

So, the questions are very different, and different answers to them are expected, but... shouldn't one expect the latter to happen sooner than the former?

Self-Integrity and the Drowning Child

I see. So essentially demandingness is not about how strong the demand is but about how much is being demanded?

Self-Integrity and the Drowning Child

I think the key to the drowning child parable is the ability of others to judge you. I can't judge you for not donating a huge portion of your income to charity, because then you'll bring up the fact that I don't donate a huge portion of my own income to charity. Sure, there are people who do donate that much, but they are few enough that it is still socially safe to not donate. But I can judge you for not saving the child, because you can't challenge me for not saving them - I was not there. This means that not saving the child poses a risk to your social status, which can greatly tilt the utility balance in favor of saving them.

Self-Integrity and the Drowning Child

Could you clarify what you mean by "demandingness"? According to my understanding, the drowning child should be more demanding than donating to AMF, because the situation demands that you personally sacrifice to rescue the child, while AMF does not place any specific demands on you. So I assume you mean something else?

A Modern Myth

If Heracles was staring at Hermes' back, shouldn't he have noticed the Eagle eating his liver?

The Lizard People Of Alpha Draconis 1 Decided To Build An Ansible

Wait - but if you can use population control to manipulate the global utility just by changing the statistical weights, isn't it plain average utilitarianism instead of the fancier negative preference kind?


This also relates to your thrive/survive theory. A society in extreme survive mode cannot tolerate "burdens" - it needs 100% of the populace to contribute. Infants may be a special exception for the few years until they can start contributing, but other than that, if you can't work for whatever reason, you die - because if society has to allocate more utility to you than you can give back, it loses utility and dies. This is extreme survive mode; there is no utility to spare.

As we move thriveward, we get more and more room for "burdens". We don't want to leave our disabled and elderly to die once they are no longer useful - we only had to do that in extreme survive mode, but now that we have some surplus we want to use it to avoid casting away people who can't work.

This presents us with a problem - if we can support a small number of people who can't work, it means we can also support a small number of people who don't want to work. Whether or not it's true, the ruling assumption to this very day is that if left unchecked, enough lazy people will take up that opportunity that the few still willing to work will crumble under their weight.

So we need to create mechanisms for selecting the people who will get more support than they contribute. At first it's easy - we don't have that much slack anyway, so we just pick the obvious cases, like the elderly and the visibly disabled. These things are very hard to fake. But eventually we run out of those, and can afford to give slack to less and less obvious disabilities, and even to people who just ran out of luck - e.g. lost their job and are having trouble getting a new one, or need to stay home to take care of family members.

And these things are much easier to fake.

So we do still try to identify these lazy people and make them work, but we also employ deterrents to make faking less desirable. Lower living conditions are a naturally occurring deterrent, and on top of that society adds shame and lower social status. If you legitimately can't work, there is not much you can do about it, so you suffer through these deterrents. If you are just lazy, it might be better to work anyway - because while not working won't get you killed, it'll still get you disapproving looks, disrespect, and that shameful feeling of being a burden on society.

This has false negatives and false positives, of course, but overall it was a good enough filter to let society live and prosper without throwing out too many unfortunate members.

But... thanks to this mechanism, working became a virtue.

This was useful for quite a while, but it makes it harder to move on. If it's shameful not to work, and everyone who doesn't have a special condition has to work, then society needs to guarantee enough work for everyone - or we'll have a problem. Instead of having to conserve the little slack we have and carefully distribute it, we now need to find ways to get rid of all that slack, because people need to feel useful.

(Note that this is a first world problem. Humanity is spread out on the thrive/survive axis, and there are many places where you still need to work to survive, not just to feel good about yourself.)

Some of the methods we use to achieve that are beneficial (as long as they don't screw up, as they sometimes do) - letting kids study until somewhere in their twenties, letting people retire while they still have some vitality left, letting people have days off and vacations, etc. But there are also wastes for the sake of waste, like workfare or overproducing, which we only do because work is a virtue and we need to be virtuous.

At some point technology will get so far that we'll be able to allow a majority of the populace not to work. Some say we are already there. So we need to get out of this mentality fast - because we can't let too many people feel like they are a burden on society.

I'm... not really sure how that "virtue" can be rooted out...


I came to a similar conclusion from a different angle. Instead of the past, I considered the future - specifically the future of automation. There is a popular pessimistic scenario of machines taking up human jobs, making everyone - save for the tycoons who own the machines - unable to provide for themselves. This prediction is criticized by pointing out that automation in the past created better jobs to replace the ones it took away. Which is countered by claiming that past automation mainly replaced our muscles, but now we are working on automation that replaces our brains, which will make humans completely obsolete. And now that I read this post, I realize that the better jobs created by past automation left many people behind - so wouldn't better automation leave even more people behind?

So developing automation has ethical problems - even if it benefits society as a whole, is it really okay to sacrifice all these people to attain it?

My ethical framework is based on Pareto efficiency - solutions are only morally acceptable if they are Pareto improvements. I wouldn't call it "fully consistent", because it raises the question of "Pareto improvement compared to what?" and by cleverly picking the baseline you can make anything moral or immoral as you wish. But if you can hand-wave that fundamental issue away it forms this vague basic principle:

A solution where everyone benefits is preferable to a solution where some are harmed, even if the total utility of the latter is higher than that of the former.

Sometimes the difference in total utility is very big, and it seems like a waste to throw away all that utility. Luckily real life is not a simple game theory scenario with a fixed and very small number of strategies and outcomes. We have many tools to create new strategies or just modify existing ones. And if we have one outcome that generates a huge surplus at the expense of some people, we can just take some of that surplus and give it to them, to create a new outcome where we have it all - every individual is better off and the total utility is greatly increased.

Even if a solution without the surplus division can result in more utility overall, I'd still prefer to divide the surplus just so no one will have to get hurt.
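A minimal sketch of that surplus-division argument (the utility numbers and the `is_pareto_improvement` helper are purely illustrative, not from the post):

```python
# Hypothetical two-party example: automation creates a large surplus for the
# owners but harms the displaced workers. All numbers are made up.
baseline = {"owners": 10, "workers": 10}   # status quo utilities
automate = {"owners": 50, "workers": 4}    # higher total utility, but workers lose

def is_pareto_improvement(new, old):
    """True if no one is worse off and at least someone is better off."""
    return all(new[p] >= old[p] for p in old) and any(new[p] > old[p] for p in old)

print(is_pareto_improvement(automate, baseline))  # False: workers dropped 10 -> 4

# Transfer part of the surplus to the losers (the UBI-style move):
transfer = 8
divided = {"owners": automate["owners"] - transfer,
           "workers": automate["workers"] + transfer}

print(is_pareto_improvement(divided, baseline))   # True: everyone gains
```

The automation outcome alone has the higher total utility (54 vs 20) but fails the Pareto test; moving a slice of the surplus to the losers yields an outcome with the same total where every individual is better off.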

And this is where UBI comes in - use a small portion of that great utility surplus we get from automation to make sure even the people who lose their jobs end up at a net benefit.

But if we apply this to the future, why not apply it to the present as well? Use the same principle for the people who already got hurt due to automation?

EA Hangout Prisoners' Dilemma

Why ? The participants may have a preference for one nonprofit over the other, but surely - all else being equal - they should prefer their less favorite nonprofit to get money over it getting nothing.

I'd go even further - this is charity, so instead of a social outcome which is the sum of the players' utilities, the individual utilities here are applications of the players' value functions to the social outcome. Even if you prefer one nonprofit over the other - do you prefer it enough to relinquish that extra $100? Do you think your favorite charity can do more with $100 than your second favorite can do with $200?

I don't think so. We have  here - and overall .

For most games it's clear what counts as cooperation and what counts as defecting. For BoS - not so much. Your classification relies on that labeling (otherwise you could switch W with Z and X with Y), and since we can't use it here I'll just fix W > Z - that is, cooperation is always the strategy that, when chosen by both players, is better than the other strategy when chosen by both.

So - in BoS, cooperation is doing what you already wanted to do, and you hope for your spouse to defect. The order is , which is not exactly our case, but closer than any other game I can think of.
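The point that $200 to the second-favorite charity can beat $100 to the favorite can be sketched with hypothetical value weights (the `utility` function and the 1.0/0.8 weights are made up for illustration, not from the game's actual rules):

```python
# A player's utility is their value function applied to the social outcome
# (total donations), not "money to my side". Weights are hypothetical.
def utility(donations, weights):
    """Weighted value a player assigns to a donation outcome."""
    return sum(weights[charity] * amount for charity, amount in donations.items())

weights = {"favorite": 1.0, "second": 0.8}         # mild preference, made up

only_favorite = {"favorite": 100, "second": 0}     # favorite gets $100
only_second   = {"favorite": 0, "second": 200}     # second favorite gets $200

print(utility(only_favorite, weights))  # 100.0
print(utility(only_second, weights))    # 160.0 - the "worse" charity outcome wins
```

With any preference weaker than a 2:1 weighting, the outcome where the less-favored charity receives the larger sum still comes out ahead - which is why the situation doesn't reduce to an ordinary Prisoners' Dilemma.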

EA Hangout Prisoners' Dilemma

You also need to only permit people who took part in the negotiations to launch nukes. Otherwise newcomers could just nuke without anyone having had a chance to establish a precommitment to retaliate against them.
