xpym

Comments
The Relationship Between Social Punishment and Shared Maps
xpym · 8d

publishing the evidence is prosocial, because it helps people make higher-quality decisions regarding friendship and trade opportunities with Mallory

And by the same token, the subsequent punishment would be prosocial too. Why, then, would Alice want to disclaim it? Because, of course, in reality the facts of whether somebody deserves punishment are rarely unambiguous, so it makes sense for people to hedge. But that's basically wanting to have one's cake and eat it too.

The honorable thing for Alice to do would be to weigh the reliability of the evidence she possesses, and disclose it only if she thinks it sufficient to justify the likely punishment that would follow. No amount of nuance in wording and tone can replace this essential consideration.
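To make that disclosure rule concrete, here is a minimal expected-value sketch. All probabilities and utility numbers are illustrative assumptions of mine, not anything from the post or the comment above.

```python
# Toy model of Alice's disclosure decision: publish the evidence only if
# the punishment it would likely trigger has positive expected value.
# All numbers are illustrative assumptions.

def should_disclose(p_guilty: float,
                    benefit_if_guilty: float,
                    harm_if_innocent: float) -> bool:
    """Disclose iff the expected value of the likely punishment is positive.

    p_guilty          -- Alice's credence that Mallory actually did it
    benefit_if_guilty -- social value of punishing a genuine offender
    harm_if_innocent  -- social cost of punishing an innocent person
    """
    expected_value = (p_guilty * benefit_if_guilty
                      - (1 - p_guilty) * harm_if_innocent)
    return expected_value > 0

# Weak evidence shouldn't clear the bar when wrongful punishment is costly:
print(should_disclose(0.6, 1.0, 5.0))   # False
print(should_disclose(0.95, 1.0, 5.0))  # True
```

The asymmetry between the two utility parameters is exactly the "hedging" issue: the costlier wrongful punishment is, the more reliable the evidence has to be before disclosure is justified.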

Ethical Design Patterns
xpym · 12d

Feels true to me, but what’s the distinction between theoretical and non-theoretical arguments?

Having decent grounding for the theory at hand would be a start. To take the ignition-of-the-atmosphere example, they had a solid enough grasp of the underlying physics, with validated equations to plug numbers into. Another example would be global warming: even though nobody has great equations there, the big picture is pretty clear, and there were periods in the past when the Earth was much hotter (yet still supported rich ecosystems, which is why most people don't take the "existential risk" part seriously).

Whereas with AI, even the notion of "intelligence" remains very vague, straight out of philosophy's domain, let alone concepts like "ASI", so pretty much all argumentation relies on analogies and intuitions: also prime philosophy stuff.

Policy has also ever been guided by arguments with little related maths, for example, the MAKING FEDERAL ARCHITECTURE BEAUTIFUL AGAIN executive order.

I mean, sure, all sorts of random nonsense can sway national policy from time to time, but strictly-ish enforced global bans are in an entirely different league.

Maybe the problem with AI existential risk arguments is that they’re not very convincing.

Indeed, and I'm proposing an explanation why.

Ethical Design Patterns
xpym · 16d

I think that the primary heuristic that prevents drastic anti-AI measures is the following: "A purely theoretical argument about a fundamentally novel threat couldn't seriously guide policy."

There are, of course, very good reasons for it. For one, philosophy's track record is extremely unimpressive, with profound, foundational disagreements between groups of purported subject matter experts continuing literally for millennia, and philosophy being the paradigmatic domain of purely theoretical arguments. For another, plenty of groups throughout history predicted an imminent catastrophic end of the world, yet the world stubbornly persists even so.

Certainly, it's not impossible that "this time it's different", but I'm highly skeptical that humanity will just up and significantly alter the way it does things. For the nuclear non-proliferation playbook to become applicable, I expect that truly spectacular warning shots will be necessary.

Transgender Sticker Fallacy
xpym · 16d

There are tons of groups with significant motivation to publish just about anything detrimental to transgender people

In academia? Come on now. If those people posted their stuff on Substack, or even in some bottom-tier journal, nobody else would notice or care.

Well, there does seem to be no shortage of trans girls at any rate

Transgender people, total, between both transmasc and transfem individuals, make up around 0.5% of the population of the US.

Among youth aged 13 to 17 in the U.S., 3.3% (about 724,000) identify as transgender, according to the first Google link: https://williamsinstitute.law.ucla.edu/publications/trans-adults-united-states/. In any case, when we're talking about at least hundreds of thousands, "no shortage" seems like a reasonable description.
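As a quick sanity check that the percentage and the headcount are mutually consistent, assuming roughly 21.9 million US residents aged 13 to 17 (my assumption; the thread only gives the percentage and the headcount):

```python
# Sanity-check the quoted Williams Institute numbers.
# ASSUMPTION: ~21.9 million US residents aged 13-17 (not stated in the thread).
us_youth_13_17 = 21.9e6
trans_share = 0.033  # 3.3% per the quoted report

print(f"{us_youth_13_17 * trans_share:,.0f}")  # ~722,700, close to the cited 724,000
```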

And again, the number of trans people in high level sports is in the double digit numbers.

So far.

Transgender Sticker Fallacy
xpym · 18d

Based on https://pmc.ncbi.nlm.nih.gov/articles/PMC10641525/ trans women get well within the expected ranges for cis women within around 3-4 years.

Yes Requires the Possibility of No. Do you think that such a study would be published if it happened to come to the opposite conclusion?

And, given how few trans women there are

Well, there does seem to be no shortage of trans girls at any rate, so these issues are only going to become more salient.

Why you should eat meat - even if you hate factory farming
xpym · 19d

I agree, and yet it does seem to me that self-identified EAs are better people, on average. If only there were a way to harness that goodness without skirting Wolf-Insanity quite this close...

Why you should eat meat - even if you hate factory farming
xpym · 19d

Offsetting makes no sense in terms of utility maximisation.

Donating less than 100% of your non-essential income also makes no sense in terms of utility maximization, and yet pretty much everybody is guilty of it. What's up with that?

As it happens, people just aren't particularly good at this utility maximization thing, so they need various crutches (like the GWWC pledge) to do at least somewhat better than most, and offsetting seems like a not-obviously-terrible crutch.

EU and Monopoly on Violence
xpym · 22d

Yeah, but this doesn't have much to do with conscription. Getting the moribund industrial capacity up to speed, on the other hand, does make sense.

I enjoyed most of IABIED
xpym · 1mo

a remotely realistic-seeming story for how things will be OK, without something that looks like coordination to not build ASI for quite a while

My mainline scenario is something like:

LLM scaling and tinkering peters out in the next few years without reaching capacity for autonomous R&D. LLMs end up being good enough to displace some entry-level jobs, but the hype bubble bursts and we enter a new AI winter for at least a couple of decades.

The "intelligence" thingie turns out to be actually hard and not amenable to a bag of simple tricks with a mountain of compute, for reasons gestured at in Realism about rationality. Never mind ASI, we're likely very far from being able to instantiate an AGI worthy of the name, which won't happen while we remain essentially clueless about this stuff.

I also expect that each subsequent metaphorical AI "IQ point" will be harder to achieve, not easier, so no foom or swift takeover. Of course, even assuming all that, none of it guarantees that "things will be OK", but I'm genuinely uncertain either way.
