TsviBT

TsviBT20

I didn't take you to be doing so--it's a reminder for the future.

TsviBT20

"You are [slipping sideways out of reality], and this is bad! Stop it!"

who is 'slipping sideways out of reality' to caveat their communications with an explicit disclaimer that admits that they are doing so

Excuse me, none of that is in my comment.

TsviBT1414

Reminder that you have a moral obligation, every single time you're communicating an overall justification of alignment work premised on slow takeoff, in a context where you can spare two sentences without unreasonable cost, to say out loud something to the effect of "Oh and by the way, just so you know, the causal reason I'm talking about this work is that it seems tractable, and the causal reason is not that this work matters." If you don't, you're spraying your [slipping sideways out of reality] on everyone else.

TsviBT50

Right, your "obliqueness thesis" seems like a reasonable summary slogan. I'm lamenting that there are juicy problems here, but it's hard to discuss them theoretically because theoretical discussions are attracted to the two poles.

E.g. when discussing ontic crises, some people's first instinct is to get started on translating/reducing the new worldspace into the old worldspace--this is the pole that takes intelligence as purely instrumental. Or on the other pole, you have the nihilism -> Landian pipeline--confronted with ontic crises, you give up and say "well, whatever works". Both ways shrug off the problem/opportunity of designing/choosing/learning what to be. (I would hope that Heidegger would discuss this explicitly somewhere, but I'm not aware of it.)

In terms of government, you have communists/fascists on the one hand, and minarchists on the other. The founders of the US were neither and thought a lot about what to be. You don't just pretend that you aren't, shouldn't be, don't want to be part of a collective; but that collective should be deeply good; and to be deeply good it has to think; so it can't be totalitarian.

TsviBT70

It's pretty annoying that the only positions with common currency are

  1. we have to preserve our values the way they are, and
  2. actually that's confused, so we should just do whatever increases intelligence / effectiveness.

To have goals you have to point to reality, and to point to reality you have to unfold values through novelty. True, true. And you have to make free choices at each ontic crisis, including free choices about what to be. Also true.

TsviBT165

(I have a lot of disagreements with everyone lol, but I appreciate Ryan putting some money where his mouth is re/ blue sky alignment research as a broad category, and the acknowledgement of "rather than the ideal 12-24 months" re/ "connectors".)

TsviBT30

Sure, though if you're just going to say "I know how to do it! Also I won't tell you!" then it doesn't seem very pointful?

TsviBT90

@Nate Showell @P. @Tetraspace @Joseph Miller @Lorxus 

I genuinely don't know what you want elaboration of. Reacts are nice for what they are, but saying something out loud about what you want to hear more about / what's confusing / what you did and didn't understand/agree with, is more helpful.

Re/ "to whom not...", I'm asking Wei: what groups of people would not be described by the list of 6 "underestimating the difficulty of philosophy" things? It seems to me that broadly, EAs and "AI alignment" people tend to favor somewhat too concrete touchpoints like "well, suppressing revolts in the past has gone like such and such, so we should try to do similar for AGI". And broadly they don't credit an abstract argument about why something won't work, or would only work given substantial further philosophical insight.

Re/ "don't think thinking ...", well, if I say "LLMs basically don't think", they're like "sure they do, I can keep prompting one and it says more things, and I can even put that in a scaffold" or "what concrete behavior can you point to that it can't do". Like, bro, I'm saying it can't think. That's the tweet. What thinking is, isn't clear, but that thinking is should be presumed, pending a forceful philosophical conceptual replacement!
