TsviBT

Comments (sorted by newest)
Plan 1 and Plan 2
TsviBT · 2d

Part of what I'm saying is that it's not respectable for someone to both

  1. claim to have substantial reason to think that making non-lethal AGI is tractable, and also
  2. not defend this position in public from strong technical critiques.

It sounds like you're talking about non-experts. Fine, of course a non-expert will be generally less confident about conclusions in the field. I'm saying that there is a camp which is treated as expert in terms of funding, social cachet, regulatory influence, etc., but which is not expert in my sense of having a respectable position.

Which side of the AI safety community are you in?
TsviBT · 3d

Am I in group C? Am I a fake member of C?

(IDK and I wouldn't be the one to judge, and there doesn't necessarily have to be one to judge.) I guess I'd be a bit more inclined to believe it of you? But it would take more evidence. For example, it would depend on how your stances express themselves specifically in "political" contexts, i.e. in contexts where power is at stake (company governance, internal decision-making by an academic lab about allocating attentional resources, public opinion / discussion, funding decisions, hiring advice). And if you don't have a voice in such contexts then you don't count as much of a member of C. (Reminder that I'm talking about "camps", not sets of individual people with propositional beliefs.)

Plan 1 and Plan 2
TsviBT · 4d

Nice, thank you! In a much more gray area of [has some non-trivial identity as "AI safety person"], I assume there'd be lots of people, some with relevant power. This would include some heads of research at big companies. Maybe you meant that by "People at labs who are basically AI researchers", but I mean, they would be less coded as "AI safety" but still would e.g.

  • Pay lip service to safety;
  • Maybe even bring it up sometimes internally;
  • Happily manufacture what boils down to lies for funders regarding technical safety;
  • Internally think of themselves as doing something good and safe for humanity;
  • Internally think of themselves as being reasonable and responsible regarding AI.

Further, this would include many AI researchers in academia. People around Silicon Valley / LW / etc. tend to discount academia, but I don't.

Which side of the AI safety community are you in?
TsviBT · 4d

So it seems important to not help it move from being a discussion to a fight.

It seems like part of the practical implication of whatever you mean by this is to say:

Calling people kind of stupid for holding the position they do (which Tegmark's framing definitely does)

Like, Tegmark's post is pretty neutral, unless I'm missing something. So it sounds like you're saying to not describe there being two camps at all. Is that roughly what you're saying? I'm saying that in your abstract analysis of the situation, you should stop preventing yourself from understanding that there are two camps.

Which side of the AI safety community are you in?
TsviBT · 4d

Yeah, safetywashing, or I guess mistake-theory-washing.

Which side of the AI safety community are you in?
TsviBT · 4d

many have more nuanced views

Fine, and also I'm not saying what to do about it (shame or polarize or whatever), but PRIOR to that, we have to STOP PRETENDING IT'S JUST A VIEW. It's a conflictual stance that they are taking. It's like saying that the statisticians arguing against "smoking causes cancer" "have a nuanced view".

Plan 1 and Plan 2
TsviBT · 4d

who to be clear are filtered for "are willing to talk to me"

In particular, they're filtered for not being cruxy for whether AGI capabilities research continues. (ETA: ... which is partly but very much not entirely due to anti-correlation with claiming to be "AI safety".)

Maybe I'm not sure what you mean by "have a respectable position."

I'm not sure either, but for example if a scientist publishes an experiment, and then another scientist with a known track record of understanding things publishes a critique, the first scientist can't respectably dismiss the critique without engaging its substance.

you need like a good model of who/what the enemy is

Ok good, yes, we agree on what the question/problem is.

(in a way that seems more tribally biased than you usually seem to me)

Not more tribally biased, if anything less. I'm more upset though, because why can't we discuss the conflict landscape? I mean why can't the people in these LW conversations say something like "yeah the lion's share of the relevant power is held by people who don't sincerely hold A or B / Type 1/2"?

Plan 1 and Plan 2
TsviBT · 4d

I think the attitude of "don't share core intuitions isn't a respectable position" is, well, idk you have that attitude if you want but I don't think it's going to help you understand or persuade people.

Of course it won't help me persuade people. I absolutely think it will help me understand people, relative to your apparent position. Yes, you have to understand that they are not doing the "have a respectable position" thing. This is important.

There is no clear line between Type 2 and Type 3 people, it can be true that people both have earnest intellectual positions you find frustrating but it's fundamentally an intellectual disagreement and also they can have biases that you-and-they would both agree would be bad, and the percent of causal impact of the intellectual-positions and biases can range from like 99% to 1% in either direction.

(Almost always true and usually not worth saying.)

Maybe I should just check, are you consciously trying to deny a conflict-type stance, and consciously trying to (conflictually) assert the mistake-type stance, as a strategy?

Plan 1 and Plan 2
TsviBT · 4d

You're describing Type 3 people; "Wanting to feel traction / in-control" is absolutely insincere as "belief in Type 2 plan". I don't claim to understand the Type 3 mindset; I call for people to think about it. "They just don't share some core intuitions re: Alignment Is Hard" is not a respectable position. A real scientist would be open to debate.

Posts

  • Do confident short timelines make sense? (3mo)
  • A regime-change power-vacuum conjecture about group belief (4mo)
  • Genomic emancipation (4mo)
  • Some reprogenetics-related projects you could help with (4mo)
  • Policy recommendations regarding reproductive technology (5mo)
  • Attend the 2025 Reproductive Frontiers Summit, June 10-12 (6mo)
  • [Linkpost] Visual roadmap to strong human germline engineering (7mo)
  • The vision of Bill Thurston (7mo)
  • The principle of genomic liberty (7mo)
  • Methods for strong human germline engineering (8mo)
Wikitag Contributions

  • Tracking (8mo)
  • Joint probability distribution (9y)
  • Square visualization of probabilities on two events (9y)