
I think you could, but then it would be unintelligible to most people who don't know wtf Solomonoff Induction is.

The Ponzi Pyramid scheme is IMO an excellent framework, but the post still suffers from a certain, eh, lack of conciseness. I think you could make the point a lot more simply with just a few exchanges from the first section, and anyone worth their salt will absolutely get the spirit of it.

I think this is an added layer, though - I don't think the responses listed here come from people deep enough into the transhumanism/AI rabbit hole to even consider those options. Rather, they sound like the more general kind of answers you'd also hear in response to a theoretical offer of immortality that means 100% what you expect it to, no catches.

> If immortality becomes widely available, we will lose the current guarantee that "awful people will eventually die", which greatly increases the upper bounds of the awfulness they can spread

I mean... amazingly good people die too. Sure, a society of immortals would obviously be very weird, and possibly quite static, but I don't see how eventual random death is some kind of saving grace here. Awful people die and new ones are born anyway.

I think another big issue with codes of conduct is that they just shift the burden around. You're still left with the problem of interpreting the spirit of the norm, deciding whether everyone at least made a good-faith attempt to stick to it, whether good faith is enough, and so on. I don't have much experience with them, but I honestly don't know if they help that much. Seems to me like there are two types of "troublemakers" in communities no matter what:

  1. people who are purposefully deceptive and manipulative;
  2. people who simply lack the social grace and ability to "read the room" required to meet others' expectations of adherence to social norms, rather than just sticking to their own interpretation of them.

Type 1 you want to kick out. Type 2 you ideally want to treat with a lot more grace and forgiveness, though in some extreme cases you might still need to kick them out, if their problems are unfixable and they make no effort whatsoever to at least mitigate them.

Writing the rules down doesn't help as long as they're flexible, because the problem these people have is that they lack the sort of intuition others possess for grokking flexible rules altogether. And if you make the rules inflexible, you just have a chilling effect on every interaction and throw away a lot of good with the bad. After all, why shouldn't someone ask a woman out at their first meeting if they're both clearly into each other and sparks are flying? These things happen! And people should be able to give it a try; I think it's important to make it clear that there's nothing sinful or bad about courtship or flirting per se, and too many rigid rules about such personal interactions inevitably carry a sort of puritanical vibe with them, regardless of intention.

But as usual, "use your best judgement" has very uneven effects, because some people's best judgement is just not that great to begin with, often through no fault of their own.

I would. An election in which a third-party candidate has a serious chance might be possible, but it wouldn't look like this one at this point. The only way the boat could even be rocked is if the charges go through and Trump is out of the race by force majeure, at which point there's quite a bit of chaos.

I mean, even so... it's ten minutes. I'd be bored on a two-hour trip on which I'm unable to read. For ten minutes, I can manage.

Shared biological needs aren't a guarantee of friendliness, but they do restrict the space of possibilities significantly - enough, IMO, to make hopes of peaceful contact not entirely moot. And here it comes with more constraints. Again, if we ever meet aliens, they will probably have to be social organisms like us, able to coordinate and cooperate like us, and thus can probably be reasoned with somehow. Note that we can coexist with bears and chimpanzees; we just need to not be really fucking stupid about it. Bears aren't going to be all friendly with us, but that doesn't mean they kill for kicks or have no sense of self-preservation. The communication barrier is a huge issue too: if you could tell the bear "don't eat me and I'll bring you tastier food", I bet things might smooth out.

AI is not subject to those constraints. "Being optimised to produce human-like text" is a property of LLMs specifically, not of all AI, and even then the mapping to "being human-like" is mostly superficial; they can still fail in weird, alien ways. But I also don't expect AGI to just be a souped-up LLM. I expect it to contain some core long-term reasoning/strategizing RL model, more akin to AlphaGo than to GPT-4, and that can be far more alien.

This is a double-edged sword to me. Biological entities might be very different in the details but shaped by similar needs at their core - nutrition, fear of death, the need for sociality and reproduction (I don't expect any non-social aliens to ever become space-faring in a meaningful way). AIs can ape the details but lack all those pressures at their core - especially the prosocial ones. That's why they might end up more hostile than any alien.

The notion that planetary spread will necessarily cause war is IMO hugely flawed, because it entirely ignores the issue of logistics. People don't make war just because they piss each other off - I mean, sometimes they do, but war also has to be at least practical. The logistics of interplanetary or, heaven forbid, interstellar war are beyond nightmarish, which is why space operas always come up with jump drives or wormholes or gates or some other technoblabbery doohickey to make wars across galactic empires work much like the wars on this little mudball we're used to. Otherwise, the universe has a very, very strong "live and let live" bias: plenty of real estate, plenty of buffer zones in between, and it's almost always cheaper to go somewhere empty than to wrestle somewhere full from someone else's hands, especially if you want the planet to stay intact and livable.

There are precedents on Earth too. The Roman Empire and early Qin were both very powerful, very large, and very expansionist, separated by thousands of years of cultural and technological divergence. According to this theory, they should have been natural enemies who went to war almost immediately. And yet they didn't, first and foremost because between them lay a lot of inhospitable land that neither side could economically cross without arriving at the other substantially weakened. And also because they were probably different enough that they didn't really concern themselves with mutual annihilation on ideological grounds - that's more the province of the devil you know, the heretic, the guy who's similar enough that you care but different enough that he pisses you off. You don't fight a Thirty Years' War with some far-off culture that believes completely different things; you fight it with your brothers and sisters who dared believe a slightly different version of what you believe (and then, hopefully, you learn not to fight it at all, because it's really self-destructive and stupid).

Obviously there are risks - it's true that space colonies would diverge from Earth for sure, and it's true that having humanity spread across multiple planets would make the use of even potentially planet-ending weapons like nukes or relativistic kinetic bombardment a bit less taboo. That's a problem, but it does not mean that History in such a future would be more predetermined than it ever has been.

That is an interesting counterpoint, but there's the fact that things like PRISM can exist in at least something like a pseudo-legal space: if government spooks come to you and ask you to do X and Y because terrorism, and it sounds legit, that's probably a strong coordination mechanism. And even then, it still came out eventually.

To compare with COVID-19: there probably are forms of more or less convergent behaviour that produce a conspiracy-like appearance, but no room for real, large conspiracies of that sort that I can think of. My most nigh-conspiratorial C19 opinions are that the early "masks are useless" recommendations were more a ploy to protect PPE stocks than genuine advice, and that, regardless of its truth, the lab-leak hypothesis was discounted way too quickly and too thoroughly for political reasons. Neither of these requires a large active conspiracy, though - just convergent interests and biases across specific groups of people.
