LESSWRONG
artifex0
Comments, sorted by newest
Transgender Sticker Fallacy
artifex0 · 1mo · 242

An important bit of context that often gets missed when discussing this question is that actual trans athletes competing in women's sports are very rare.  Of the millions competing in organized sports in the US, the total number who are trans might be under 20 (see this statement from the NCAA president estimating "fewer than ten" in college sports, this article reporting that an anti-trans activist group was able to identify only five in K-12 sports, and this Wikipedia article, which identifies only a handful of trans athletes in professional US sports).

Because this phenomenon is so rare relative to how often it's discussed, I'm a lot more interested in the sociology of the question than the question itself. There was a recent post from Hanson arguing that the Left and Right in the US have become like children on a road trip annoying each other in deniable ways to provoke responses that they hope their parents will punish.  I think the discrepancy between the scale of the issue and how often it comes up is mostly due to it being used in this way.

A high school coach who has to choose whether to allow a trans student to compete in female sports is faced with a difficult social dilemma. If they deny the request, then the student- who wants badly to be seen as female- will be disappointed and might face additional bullying; if they allow it, that will be unfair to the other female players. In some cases, other players may be willing to accept a bit of unfairness as an act of probably supererogatory kindness, but in cases where they aren't, explaining to the student that they shouldn't compete without hurting their feelings will take a lot of tact on the part of the coach.

Elevating this to a national conversation isn't very tactful. People on the right can plausibly claim to only be concerned with fairness in sports, but presented so publicly, this looks to liberals like an attempt to bully trans people. They're annoyed, and may be provoked into responding in hard-to-defend ways, like demanding unconditional trans participation in women's sports- which I think is often the point. It's a child in a car poking the air next to his sister and saying "I'm not touching you", hoping that she'll slap him and be punished.

I'm certain the OP didn't intend anything like that- LessWrong is, of course, a very high-decoupling place. But I'd argue that this is an issue best resolved by letting the very few people directly affected sort out the messy emotions involved among themselves, rather than through public analysis of the question on the object level.

Global Call for AI Red Lines - Signed by Nobel Laureates, Former Heads of State, and 200+ Prominent Figures
artifex0 · 1mo · 10

So, in practice, what might that look like?

Of course, AI labs use quite a bit of AI in their capabilities research already- writing code, helping with hardware design, doing evaluations and RLAIF; even distillation and training itself could sort of be thought of as a kind of self-improvement. So, would the red line need to target just fully autonomous self-improvement?  But just having a human in the loop to rubber-stamp AI decisions might not actually slow down an intelligence explosion by all that much, especially at very aggressive labs. So, would we need some kind of measure for how autonomous the capabilities research at a lab is, and then draw the line at "only somewhat autonomous"? And if we were able to define a robust threshold, could we really be confident that it would prevent ASI development altogether, rather than just slowing it down?

Suppose instead we had a benchmark that measured something like the capabilities of AI agents in long-term real-world tasks like running small businesses and managing software development projects. Do you think it might make sense to draw a red line somewhere on that graph- targeting a dangerous level of capabilities directly, rather than trying to prevent that level of capabilities from being developed by targeting research methods?

Global Call for AI Red Lines - Signed by Nobel Laureates, Former Heads of State, and 200+ Prominent Figures
artifex0 · 1mo · 10

The most important red line would have to be strong superintelligence, don't you think? I mean, if we have systems that are agentic in the way humans are, but surpass us in capabilities in the way we surpass animals, it seems like specific bans on the use of weapons, self-replication, and so on might not be very effective at keeping them in check.

Was it necessary to avoid mentioning ASI in the "concrete examples" section of the website to get these signatories on board? Are you concerned that avoiding that subject might contribute to the sense that discussion of ASI is non-serious or outside of the Overton window?

Notes on Consciousness
artifex0 · 5mo · 20

I think this is related to what Chalmers calls the "meta-problem of consciousness"- the problem of why it seems subjectively undeniable that a hard problem of consciousness exists, even though it only seems possible to objectively describe "easy problems" like the question of whether a system has an internal representation of itself.  Illusionism- the idea that the hard problem is illusory- is an answer to that problem, but I don't think it fully explains things.

Consider the question "why am I me, rather than someone else". Objectively, the question is meaningless- it's a tautology like "why is Paris Paris". Subjectively, however, it makes sense, because your identity in objective reality and your consciousness are different things- you can imagine "yourself" seeing the world through different eyes, with different memories and so on, even though that "yourself" doesn't map to anything in objective reality. The statement "I am me" also seems to add predictive power to a subjective model of reality- you can reason inductively that since "you" were you in the past, you will continue to be in the future. But if someone else tells you "I am me", that doesn't improve your model's predictive power at all.

I think there's a real epistemological paradox there, possibly related somehow to the whole liar's/Gödel's/Russell's paradox thing. I don't think it's as simple as consciousness being equivalent to a system with a representation of itself.

Eliezer and I wrote a book: If Anyone Builds It, Everyone Dies
artifex0 · 5mo · 588

I used to do graphic design professionally, and I definitely agree the cover needs some work.

I put together a few quick concepts, just to explore some possible alternate directions they could take it:
https://i.imgur.com/zhnVELh.png
https://i.imgur.com/OqouN9V.png
https://i.imgur.com/Shyezh1.png

These aren't really finished quality either, but the authors should feel free to borrow and expand on any ideas they like if they decide to do a redesign.

Adele Lopez's Shortform
artifex0 · 7mo · Ω130

> This suggests that in order to ensure a sincere author-concept remains in control, the training data should carefully exclude any text written directly by a malicious agent (e.g. propaganda).

I don't think that would help much, unfortunately.  Any accurate model of the world will also model malicious agents, even if the modeller only ever learns about them second-hand. So the concepts would still be there for the agent to use if it was motivated to do so.

Censoring anything written by malicious people would probably make it harder to learn about some specific manipulation techniques that aren't discussed much by non-malicious people and don't appear much in fiction- but I doubt that would be more than a brief speed bump for a real misaligned ASI, and it would probably come at the expense of useful capabilities in earlier models, like the ability to identify maliciousness- which would give an advantage to competitors.

Consider showering
artifex0 · 7mo · 226

A counterpoint: when I skip showers, my cat appears strongly in favor of the smell of my armpits- occasionally going so far as to burrow into my shirt sleeves and bite my armpit hair (which, to both my and my cat's distress, is extremely ticklish). Since studies suggest that cats have a much more sensitive olfactory sense than humans (see https://www.mdpi.com/2076-2615/14/24/3590), it stands to reason that their judgement regarding whether smelling nice is good or bad should hold more weight than our own.  And while my own cat's preference for me smelling bad is only anecdotal evidence, it does seem to suggest at least that more studies are required to fully resolve the question.

The News is Never Neglected
artifex0 · 8mo · 20

I think it's a very bad idea to dismiss the entirety of news as a "propaganda machine".  Certainly some sources are almost entirely propaganda. More reputable sources like the AP and Reuters will combine some predictable bias with largely trustworthy independent journalism. Identifying those more reliable sources and compensating for their bias takes effort and media literacy, but I think that effort is quite valuable- both individually and collectively for society.

  • Accurate information about large, important events informs our world model and improves our predictions. Sure, a war in the Middle East might not noticeably affect your life directly, but it's rare that a person lives an entire life completely unaffected by any war, and having a solid understanding of how wars start and progress based on many detailed examples will help us prepare and react sensibly when that happens. Accurate models of important things will also end up informing our understanding of tons of things that might have originally seemed unrelated. That's all true, of course, of more neglected sources of information- but it seems like the best strategy for maximizing the usefulness of your models is to focus on information which seems important or surprising, regardless of neglectedness.
  • Independent journalism also checks the power of leaders. Even in very authoritarian states, the public can collectively exert some pressure against corruption and incompetence by threatening instability- but only if they're able to broadly coordinate on a common understanding of those things. The reason so many authoritarians deny the existence of reliable independent journalism- often putting little to no effort into hiding the propagandistic nature of their state media- is that by promoting that maximally cynical view of journalism, they immunize their populations against information not under their control. Neglected information can allow for a lot of personal impact, but it's not something societies can coordinate around- so focusing on it to the exclusion of everything else may represent a kind of defection in the coordination problem of civic duty.


Of course, we have to be very careful with our news consumption- even the most sober, reliable sources will drive engagement by cherry-picking stories, which can skew our understanding of the frequency of all kinds of problems. But availability bias is a problem we have to learn to compensate for in all sorts of different domains- it would be amazing if we were able to build a rich model of important global events by consuming only purely unbiased information, but that isn't the world we live in.  The news is the best we've got, and we ought to use it.

artifex0's Shortform
artifex0 · 9mo · 31

So, the current annual death rate for an American in their 30s is about 0.2%. That probably increases by another 0.5% or so when you consider black swan events like nuclear war and bioterrorism. Let's call "unsafe" a ~3x increase in that expected death rate, to about 2%.

An increase that large would take something a lot more dramatic than the kind of politics we're used to in the US, but while political changes that dramatic are rare historically, I think we're at a moment where the risk is elevated enough that we ought to think about the odds.

I might, for example, give odds for a collapse of democracy in the US over the next couple of years at ~2-5%- if the US were to elect 20 presidents similar to the current one over a century, I'd expect better than even odds of one of them making themselves into a Putinesque dictator. A collapse like that would substantially increase the risk of war, I'd argue, including raising a real possibility of nuclear civil war. That might increase the expected death rate for young and middle-aged adults in that scenario by a point or two on its own. It might also introduce a small risk of extremely large atrocities against minorities or political opponents, which could increase the expected death rate by a few tenths of a percent.
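The rough arithmetic behind these estimates can be sketched out. All figures below are the illustrative guesses from this comment, not authoritative statistics:

```python
# Sketch of the back-of-the-envelope arithmetic above. Every number here
# is the comment's own illustrative estimate, not real actuarial data.

baseline = 0.002        # ~0.2% annual death rate for an American in their 30s
black_swan = 0.005      # rough allowance for tail events (nuclear war, etc.)
expected = baseline + black_swan
unsafe = 3 * expected   # "unsafe" defined here as ~3x the expected rate
print(f"expected: {expected:.1%}, unsafe threshold: {unsafe:.1%}")

# Cumulative odds of at least one collapse of democracy, assuming a 2-5%
# chance per presidency and 20 similar presidencies over a century:
for per_term in (0.02, 0.035, 0.05):
    cumulative = 1 - (1 - per_term) ** 20
    print(f"per-term {per_term:.1%} -> cumulative {cumulative:.0%} over 20 terms")
```

At the midpoint of the ~2-5% range, the cumulative probability over 20 terms comes out just above 50%, consistent with the "better than even odds" framing.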

There's also a small risk of economic collapse. Something like a political takeover of the Fed combined with expensive, poorly considered populist policies might trigger hyperinflation of the dollar.  When that sort of thing happens overseas, you'll often see reduced health outcomes and breakdown in civil order increasing the death rate by up to a percent- and, of course, it would introduce new tail risks, increasing the expected death rate further.

I should note that I don't think the odds of any of this are high enough to worry about my safety now- but needing to emigrate is a much more likely outcome than actually being threatened, and that's a headache I am mildly worried about.

artifex0's Shortform
artifex0 · 9mo · 10

> That's a crazy low probability.

Honestly, my odds of this have been swinging anywhere from 2% to 15% recently. Note that this would be the odds of our democratic institutions deteriorating enough that fleeing the country would seem like the only reasonable option- p(fascism) more in the sense of a government that most future historians would assign that or a similar label to, rather than just a disturbingly cruel and authoritarian administration still held somewhat in check by democracy.
