Doesn't society already consider it immoral to go to crowded places untested when you suspect you have COVID? This is not just about a specific detail of this specific story - one important feature of morality is preventing humans from convincing themselves that the thing they want to do is the utilitarian choice. We decided that going untested is immoral precisely because of people like Alice, who avoid testing themselves for such reasons.

Instead of morality, I think what Alice seeks here is deniability. If Alice does not take the test, she can convince herself (and possibly others?) that the probability of her sore throat indicating COVID is as low as she wants. No one else can really tell how bad it was - certainly not days later, when the thing is discovered - she may even claim that she felt nothing unusual. She is still immoral, but she can at least convince herself that she has done nothing wrong.

I interpret the engagement with conservative ideas Scott's describing a little more straightforwardly. Lots of people are inundated with Mrs. Grundy leftist takes on social media. They're smart enough to try to figure out what they really think. So they say things like "Oh, I heard about that guy in South Carolina. Instead of knee-jerk condemnation, let's try to form some general principles out of it and see what it teaches us about civil society."

This isn't countersignaling. It's just signaling. This isn't making fun of anybody, and it's calling for straightforward civil discourse in terms nobody could possibly mistake for anything else.

I notice that a key difference between signaling and countersignaling is that for regular signaling you pay with resources, while for countersignaling you pay with risk. That is, the credibility of regular signals derives from false signals being more costly than true signals, making them harder to justify sending. The credibility of countersignals comes from the risk of the signal backfiring, which should have a greater probability when the signal is false, deterring false signalers.
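A toy expected-value model can make this asymmetry concrete. All the numbers and function names below are purely illustrative assumptions, not anything from the original discussion:

```python
# Toy model of the deterrence mechanics described above.
# Regular signals deter fakers via an up-front cost; countersignals
# deter fakers via the expected penalty from backfiring.

def regular_signal_payoff(benefit: float, cost: float) -> float:
    """A regular signal always pays its cost up front; fakers are
    deterred when the cost outweighs the benefit for them."""
    return benefit - cost

def countersignal_payoff(benefit: float, backfire_penalty: float,
                         p_backfire: float) -> float:
    """A countersignal is nearly free to send, but risks backfiring.
    The deterrent is the expected penalty, which is larger for false
    senders because their probability of backfiring is higher."""
    return benefit * (1 - p_backfire) - backfire_penalty * p_backfire

# An honest countersignaler rarely backfires; a faker often does.
honest = countersignal_payoff(benefit=10, backfire_penalty=30, p_backfire=0.05)
faker = countersignal_payoff(benefit=10, backfire_penalty=30, p_backfire=0.60)
print(honest)  # 8.0   -> sending the countersignal is worthwhile
print(faker)   # -14.0 -> sending it is a bad bet
```

With identical benefits and penalties, only the backfire probability differs, and that alone flips the sign of the expected payoff - which is exactly the deterrence mechanism described above.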

Calling for civil discourse is not a signal that requires much cost to send. So if it were a regular signal, it would be at most a virtue signal. But even though it comes "in terms nobody could possibly mistake for anything else", it can still backfire. If you are perceived as a person who often gets emotional in discussions and engages in personal attacks and general demagogy, calling for civil discourse can paint you as a hypocrite who only wants it when it fits their agenda. If you consistently discuss things civilly, the risk of that happening is much lower.

So, this may not be a strict by-definition countersignal - after all, the naive interpretation of the signal is exactly what you are trying to signal - but I still find its mechanics much closer to countersignals than to traditional signals.

Rarity, by its very nature, cannot be too abundant. The more plentiful it becomes, the more it loses its defining property. There is only one original Mona Lisa, but every NFT project spits out a combinatorial number of images all built from a small number of assets and pretends they are all rare.
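To illustrate the combinatorial point - the slot names and asset counts below are made up for illustration, not taken from any actual NFT project:

```python
from math import prod

# Hypothetical generative NFT project: each image is assembled by
# picking one asset per slot from a small asset library.
assets_per_slot = {
    "background": 8,
    "fur": 10,
    "eyes": 12,
    "mouth": 9,
    "hat": 15,
}

# A tiny asset library (54 assets total) yields a huge number of
# technically "unique" combinations.
total_images = prod(assets_per_slot.values())
print(total_images)  # 129600
```

Each of those ~130,000 images is distinct, but distinctness produced at that scale is exactly what the comment argues cannot count as rarity.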

Each NFT is indeed unique, but since there are tens of thousands of similarly unique NFTs, most of them are not really rare. One could claim that rare paintings are the same - that if NFTs are not rare because there are other NFTs, then by the same logic the Mona Lisa should also not be rare because there are other paintings. But the Mona Lisa's rarity is real, because no other painting is valued like it. The second most valuable painting, The Last Supper, is also very rare and valuable - but it's not the Mona Lisa.

If you try to make the same claim about NFTs - e.g. "Bored Ape #4444 is rare, but it is not Bored Ape #5555" - I'd reply that this claim can be reversed: "Bored Ape #5555 is rare, but it is not Bored Ape #4444". This doesn't work for the Mona Lisa and The Last Supper - "The Mona Lisa is rare, but it's not The Last Supper" doesn't make sense, because the Mona Lisa is the most famous painting in the world. The Last Supper is more famous than any painting other than these two, which makes it still very rare and valuable. And while the ordering is not always precise and can vary from evaluator to evaluator, there can usually be a rough estimate of "about <ballpark> other paintings are at least as famous as this one". The lower that estimate, the rarer and more valuable the painting.

Are there NFTs that are more rare than any other NFT? Probably. But there can't be that many of them - certainly not as many as the NFT advocates pretend there are.

And this is enough to explain why they are looked down upon, but not enough to explain why they are ridiculed and even hated.

My hypothesis is that the hatred they have been receiving recently is related to Facebook's big announcement about the metaverse!

Rarity is a finite resource, but that does not mean its total amount is some physical constant. The thing that determines how much rarity there is to split between the rare things is the number of people interested in them, and the amount of resources and attention they are willing to put into that market (that's a simplification, of course, because there are many intersecting markets - but it's good enough for this discussion). In order to get more people into NFTs and increase the value of existing NFTs (assuming the demand increases faster than supply), the NFT investors need to get non-investors into that market.

Facebook popularized the term "metaverse", and changed their name to show how serious they are about it, but there is still very little consensus about what exactly the metaverse is going to be. The NFT people see this as an opportunity - they want to push for metaverse assets to be managed by NFTs. They are already working on making profile pictures NFT-verified (how are profile pictures related to the metaverse? Well, we can't agree on what the metaverse is, so can you really say they aren't?). They want the VR aspects of the metaverse to also be NFTs - so if you get a sword in a game, you get the NFT for it, registering it on the blockchain as yours. Same for virtual land, same for everything else.

And these are all things that people just assumed would be free, or at least nearly free - and there is no reason for them not to be. But the NFT guys want you to spend more computing power verifying your ownership of virtual things than is spent on their graphics and physics, just so some other avatar won't have the same instance of a virtual shirt as yours.

This is an obvious scam that people can see from miles away - and they are pushing back by attacking NFTs and crypto on social networks.

The headline result: the researchers asked experts for their probabilities that we would get AI that was “able to accomplish every task better and more cheaply than human workers”. The experts thought on average there was a 50% chance of this happening by 2062 – and a 10% chance of it happening by 2026!

But on its own this is a bit misleading. They also asked by what year “for any occupation, machines could be built to carry out the task better and more cheaply than human workers”. The experts thought on average that there was a 50% chance of this happening by 2139, and a 20% chance of it happening by 2037.

As the authors point out, these two questions are basically the same – they were put in just to test if there was any framing effect. The framing effect was apparently strong enough to shift the median date of strong human-level AI from 2062 to 2139. This makes it hard to argue AI experts actually have a strong opinion on this.

These are not the same.

The first question sounds like AGI - a single AI that can just do anything we tell it to do (or anything it decides to do?) without any further development effort by humans. We'd just need to provide a reasonably specified description of the task, and the AI would learn how to do it on its own - by deducing it from the laws of physics, by consuming existing learning resources made for humans, by trial and error, or whatever.

The second question does not require AGI - it's about regular AIs. It requires that for any task done by humans, it be possible to build an AI that does it better and more cheaply. No research into the unknown would be needed - just application of established theory, techniques, and tools - but you would still need humans to develop and build each specific AI.

So the questions are very different, and different answers to them are expected. But... shouldn't one expect the latter to happen sooner than the former?

I see. So essentially demandingness is not about how strong the demand is but about how much is being demanded?

I think the key to the drowning child parable is the ability of others to judge you. I can't judge you for not donating a huge portion of your income to charity, because then you'll bring up the fact that I don't donate a huge portion of my own income to charity. Sure, there are people who do donate that much, but they are few enough that it is still socially safe to not donate. But I can judge you for not saving the child, because you can't challenge me for not saving them - I was not there. This means that not saving the child poses a risk to your social status, which can greatly tilt the utility balance in favor of saving them.

Could you clarify what you mean by "demandingness"? Because according to my understanding the drowning child should be more demanding than donating to AMF because the situation demands that you sacrifice to rescue them, unlike AMF that does not place any specific demands on you personally. So I assume you mean something else?

If Heracles was staring at Hermes' back, shouldn't he have noticed the Eagle eating his liver?

Wait - but if you can use population control to manipulate the global utility just by changing the statistical weights, isn't it plain average utilitarianism instead of the fancier negative preference kind?

This also relates to your thrive/survive theory. A society in extreme survive mode cannot tolerate "burdens" - it needs 100% of the populace to contribute. Infants may be a special exception for the few years until they can start contributing, but other than that, if you can't work, for whatever reason, you die - because if society has to allocate more utility to you than you can give back, it loses utility and dies. This is extreme survive mode: there is no utility to spare.

As we move thriveward, we get more and more room for "burdens". We don't want to leave our disabled and elderly to die once they are no longer useful - we only had to do that in extreme survive mode. Now that we have some surplus, we want to use it to avoid casting away people who can't work.

This presents us with a problem - if we can support a small number of people who can't work, it means we can also support a small number of people who don't want to work. Whether or not it's true, the ruling assumption to this very day is that, if left unchecked, so many lazy people will take up that opportunity that the few still willing to work will crumble under their weight.

So we need to create mechanisms for selecting the people who will get more support than they contribute. At first it's easy - we don't have much slack anyway, so we just pick the obvious people, like the elderly and the visibly disabled. These things are very hard to fake. But eventually we run out of those, and can afford to give slack to less and less obvious disabilities, and even to people who just ran out of luck - e.g. who lost their job and are having trouble finding a new one, or need to stay home to take care of family members.

And these things are much easier to fake.

So we do still try to identify the lazy people and make them work, but we also employ deterrents to make faking less desirable. Lower living conditions are a naturally occurring deterrent, and on top of that society adds shame and lower social status. If you legitimately can't work, there is not much you can do about it, so you suffer through these deterrents. If you are just lazy, it might be better to work anyway, because while not working won't get you killed, it will still get you disapproving looks, disrespect, and the shameful feeling of being a burden on society.

This has false negatives and false positives, of course, but overall it was a good enough filter to let society live and prosper without throwing out too many unfortunate members.

But... thanks to this mechanism, working became a virtue.

This was useful for quite a while, but it makes it harder to move on. If it's shameful not to work, and everyone who doesn't have a special condition has to work, then society needs to guarantee enough work for everyone, or we'll have a problem. Instead of having to conserve the little slack we have and carefully distribute it, we now need to find ways to get rid of all that slack, because people need to feel useful.

(Note that this is a first-world problem. Humanity is spread out on the thrive/survive axis, and there are many places where you still need to work to survive, not just to feel good about yourself.)

Some of the methods we use to achieve that are beneficial (as long as they don't screw up, as they sometimes do) - letting kids study until somewhere in their twenties, letting people retire while they still have some vitality left, letting people have days off and vacations, etc. But there is also waste for the sake of waste, like workfare or overproduction, which we only do because work is a virtue and we need to be virtuous.

At some point technology will advance far enough that we'll be able to allow a majority of the populace not to work. Some say we are already there. So we need to get out of this mentality fast - because we can't let too many people feel like they are a burden on society.

I'm... not really sure how that "virtue" can be rooted out...
