All of zulupineapple's Comments + Replies

"Stuck In The Middle With Bruce"

The link is broken. I was only able to find the article here, with the Wayback Machine.

Noticing Frame Differences

In the examples, sometimes the problem is people having different goals for the discussion, sometimes it is having different beliefs about what kinds of discussions work, and sometimes it might be about almost object-level beliefs. If "frame" refers to all of that, then it's way too broad and not a useful concept. If your goal is to enumerate and classify the different goals and different beliefs people can have regarding discussions, that's great, but possibly too broad to make any progress.

My own frustration with this topic is lack of ... (read more)

ozziegooen's Shortform

Making long-term predictions is hard. That's a fundamental problem. Having proxies can be convenient, but it's not going to tell you anything you don't already know.

Book Review: Secular Cycles

That's what I think every time I hear "history repeats itself". I wish Scott had considered the idea.

The biggest claim Turchin is making seems to be about the variance of the time intervals between "bad" periods. A random walk would imply that it is high, and "cycles" would imply that it is low.
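To make that concrete, here is a toy simulation (my own sketch, not anything from the book or the review) of the statistic I have in mind: the coefficient of variation of the gaps between "bad" periods, under a noisy cycle versus a random walk.

```python
# Toy comparison: spread of the gaps between "bad" periods under a noisy
# cycle vs. a random walk. All parameters here are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(20_000)

cycle = np.sin(2 * np.pi * t / 200) + 0.2 * rng.normal(size=t.size)  # noisy ~200-step cycle
walk = np.cumsum(rng.normal(size=t.size))
walk = (walk - walk.mean()) / walk.std()  # rescale so the same threshold applies

def crisis_gaps(series, threshold=0.8, window=25, min_gap=20):
    """Gaps between upward threshold crossings, after light smoothing/debouncing."""
    s = np.convolve(series, np.ones(window) / window, mode="same")
    above = s > threshold
    onsets = np.where(above[1:] & ~above[:-1])[0]
    gaps = np.diff(onsets)
    return gaps[gaps > min_gap]  # drop rapid re-crossings caused by residual noise

for name, series in [("noisy cycle", cycle), ("random walk", walk)]:
    gaps = crisis_gaps(series)
    if len(gaps) < 2:
        print(f"{name}: too few crises with this seed/threshold")
        continue
    cv = gaps.std() / gaps.mean()  # low CV looks like "cycles", high CV like a random walk
    print(f"{name}: {len(gaps)} gaps, mean {gaps.mean():.0f}, coefficient of variation {cv:.2f}")
```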

ozziegooen's Shortform
For example, say I wanted to know how good/enjoyable a specific movie would be.

My point is that "goodness" is not a thing in the territory. At best it is a label for a set of specific measures (ratings, revenue, awards, etc). In that case, why not just work with those specific measures? Vague questions have the benefit of being short and easy to remember, but beyond that I see only problems. Motivated agents will do their best to interpret the vagueness in a way that suits them.

Is your goal to find a method to generate specific interpretations an... (read more)

1ozziegooen2yHm... At this point I don't feel like I have a good intuition for what you find intuitive. I could give more examples, but don't expect they would convince you much right now if the others haven't helped. I plan to eventually write more about this, and eventually hopefully we should have working examples up (where people are predicting things). Hopefully things should make more sense to you then. Short comments back<>forth are a pretty messy communication medium for such work.
ozziegooen's Shortform
"What is the relative effectiveness of AI safety research vs. bio risk research?"

If you had a precise definition of "effectiveness", this shouldn't be a problem. E.g. if you had predictions for "will humans go extinct in the next 100 years?", "will we go extinct in the next 100 years if we invest 1M into AI risk research?", and "will we go extinct if we invest 1M in bio risk research?", then you should be able to make decisions with that. And these questions should work fine in existing forecasting ... (read more)
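For concreteness, a minimal sketch of the decision rule I mean (all the numbers below are made-up placeholders, not real estimates):

```python
# Toy decision rule: fund whichever grant buys the larger reduction in
# extinction probability per unit of money spent. Numbers are placeholders.
p_baseline = 0.1000      # P(extinct in 100 years | no extra grant)
p_ai_grant = 0.099990    # P(extinct | extra 1M to AI risk research)
p_bio_grant = 0.099995   # P(extinct | extra 1M to bio risk research)

grant = 1_000_000  # the "1M" from the comment; units left unspecified

ai_value = (p_baseline - p_ai_grant) / grant
bio_value = (p_baseline - p_bio_grant) / grant

print(f"AI risk:  {ai_value:.2e} probability reduction per unit spent")
print(f"Bio risk: {bio_value:.2e} probability reduction per unit spent")
print("Fund:", "AI risk research" if ai_value > bio_value else "bio risk research")
```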

1Tetraspace Grouping2yThere's something of a problem with sensitivity; if the x-risk from AI is ~0.1, and the difference in x-risk from some grant is ~10^-6, then any difference in the forecasts is going to be completely swamped by noise. (while people in the market could fix any inconsistency between the predictions, they would only be able to look forward to 0.001% returns over the next century)
3ozziegooen2yComing up with a precise definition is difficult, especially if you want multiple groups to agree. Those specific questions are relatively low-level; I think we should ask a bunch of questions like that, but think we may also want some more vague things as well. For example, say I wanted to know how good/enjoyable a specific movie would be. Predicting the ratings according to movie reviewers (evaluators) is an approach I'd regard as reasonable. I'm not sure what a precise definition for movie quality would look like (though I would be interested in proposals), but am generally happy enough with movie reviews for what I'm looking for. Agreed that that itself isn't a forecast, I meant in the more general case, for questions like, "How much value will this organization create next year" (as you pointed out). I probably should have used that more specific example, apologies. Can you be more explicit about your definition of "clearly"? I'd imagine that almost any proposal at a value function would have some vagueness. Certificates of Impact get around this by just leaving that for the review of some eventual judges, kind of similar to what I'm proposing. The goal for this research isn't fixing something with prediction markets, but just finding more useful things for them to predict. If we had expert panels that agreed to evaluate things in the future (for instance, they are responsible for deciding on the "value organization X has created" in 2025), then prediction markets and similar could predict what they would say.
Why are the people who could be doing safety research, but aren’t, doing something else?

While it's true that preferences are not immutable, the things that change them are not usually debate. Sure, some people can be made to believe that their preferences are inconsistent, but then they will only make the smallest correction needed to fix the problem. Also, sometimes debate will make someone claim to have changed their preferences, just so that they can avoid social pressures (e.g. "how dare you not care about starving children!"), but this may not be reflected in their actions.

Regardless, my claim is that many (or most) people discount a lot, and that this would be stable under reflection. Otherwise we'd see more charity, more investment and more work on e.g. climate change.

A Personal Rationality Wishlist

Ok, that makes the real incentives quite different. Then, I suspect that these people are navigating Facebook using the intuitions and strategies from the real world, without much consideration for the new digital environment.

A Personal Rationality Wishlist

Yes, and you answered that question well. But the reason I asked for alternative responses was so that I could compare them to unsolicited recommendations from the anime-fan's point of view (and find that unsolicited recommendations have lower effort or higher reward).

Also, I'm not asking "How did your friend want the world to be different", I'm asking "What action could your friend have taken to avoid that particular response?". The friend is a rational agent; he is able to consider alternative strategies, but he shouldn't expect that other people will change their behavior when they have no personal incentive to do so.

Research Agenda v0.9: Synthesising a human's preferences into a utility function

What is the domain of U? What inputs does it take? In your papers you take a generic Markov Decision Process, but which one will you use here? How exactly do you model the real world? What is the set of states and the set of actions? Does the set of states include the internal state of the AI?

You may have been referring to this as "4. Issues of ontology", but I don't think the problem can be separated from your agenda. I don't see how any progress can be made without answering these questions. Maybe you can start with naive answers, an... (read more)

Why are the people who could be doing safety research, but aren’t, doing something else?

Discounting. There is no law of nature that can force me to care about preventing human extinction years from now more than about eating a tasty sandwich tomorrow. There is also no law that can force me to care about human extinction much more than about my own death.

There are, of course, more technical disagreements to be had. Reasonable people could question how bad unaligned AI will be or how much progress is possible in this research. But unlike those questions, the reasons for discounting are not debatable.

2Adam Scholl2y"Not debatable" seems a little strong. For example, one might suspect both that it's plausible some rational humans might disprefer persisting, and also that most humans who think they have this preference would change their minds with more reflection.
Gratification: a useful concept, maybe new

I do things my way because I want to display my independence (not doing what others tell me) and intelligence (ability to come up with novel solutions), and because I would feel bored otherwise (this is a feature of how my brain works, I can't help it).

"I feel independent and intelligent", "other people see me as independent and intelligent", "I feel bored" are all perfectly regular outcomes. They can be either terminal or instrumental goals. Either way, I disagree that these cases somehow don't fit in the usual preference model. You're only having this problem because you're interpreting "outcome" in a very narrow way.

A Personal Rationality Wishlist

Yes. The latter seems to be what OP is asking about: "If one wanted it to not happen, how would one go about that?". I assume OP is taking the perspective of his friends, who are annoyed by this behavior, rather than the perspective of the anime-fans, who don't necessarily see anything wrong with the situation.

2DanielFilan2yIn the literal world, I'm an anime fan, but the situation seems basically futile: the people recommending anime seem like they're accomplishing nothing but generating frustration. More metaphorically, I'm mostly interested in how to prevent the behaviour either as somebody complaining about anime or as a third party, and secondarily interested in how to restrain myself from recommending anime.
2Matt Goldenberg2yNote that my response was responding to this original question: It wasn't obvious to me that this was asking "How did your friend want the world to be different such that the incentives were to respond differently?"
A Personal Rationality Wishlist

That sounds reasonable, but the proper thing is not usually the easy thing, and you're not going to make people do the proper thing just by saying that it is proper.

If we want to talk about this as a problem in rationality, we should probably talk about social incentives, and possible alternative strategies for the anime-hater (you're now talking about a better strategy for the anime-fan, but it's not good to ask other people to solve your problems). Although I'm not sure to what extent this is a problem that needs solving.

2Raemon2yIt sounds like you two are currently talking about two different problems: mr-hire is asking "how do I avoid being That Guy Who Pressures People about Anime" and you're asking the question "If I want to avoid people pestering me with anime questions, or people in general to stop this behavior, what would have to change?"
A Personal Rationality Wishlist

And then the other person says "no thanks", and you both stand in awkward silence? My point is that offering recommendations is a natural thing to say, even if not perfect, and it's nice to have something to say. If you want to discourage unsolicited recommendations, then you need to propose a different trajectory for the conversation. Changing topic is hard, and simply going away is rude. People give unsolicited recommendations because it seems to be the best option available.

3DanielFilan2yAt this juncture, it seems important to note that all examples I can think of took place on Facebook, where you can just end interactions like this without it being awkward.
9Matt Goldenberg2yI think I would probably change the subject in a case like this. Good "vibing" conversation skill here is to "fractionate" the conversation, frequently cut topics before they reach their natural conclusion so that when you reach a conversation dead end like this, you have somewhere to go back to. Ditto with being able to make situational observations to restart a conversation, and having in your back pocket a list of topics and questions to go to. I don't think the proper thing to do here is to make someone else feel awkward or annoyed so that you feel less awkward, the proper thing to do is to learn the conversational skills to make people not feel awkward.
A Personal Rationality Wishlist

Sure, but it remains unclear what response the friend wanted from the other person. What better options are there? Should they just go away? Change topic? I'm looking for specific answers here.

2Matt Goldenberg2yMy response in this case would be to say something like "Well, I've got some shows that might change your mind if you're ever interested." Then leave it to them to continue that thread if interested. This goes with my general policy to try to avoid giving unsolicited advice.
A Personal Rationality Wishlist
a friend of mine observed that he couldn’t talk about how he didn’t like anime without a bunch of people rushing in to tell him that anime was actually good and recommending anime for him to watch

What response did your friend want? The reaction seems very natural to me (especially from anime fans). Note that your friend has at some point tried watching anime, and he has now chosen to talk about anime, which could easily mean that on some level he wants to like anime, or at least understand why others like it.

2Matt Goldenberg2yPossible scenario where this comes up: Your friends are talking about anime, they ask you if you watch anime, you say "I don't like anime," they say "well you just haven't watched the right shows, have you tried..."
Humans can be assigned any values whatsoever…
I got this big impossibility result

That's a part of the disagreement. In the past you clearly thought that Occam's razor was an "obvious" constraint that might work. Possibly you thought it was a unique such constraint. Then you found this result, and made a large update in the other direction. That's why you say the result is big - rejecting a constraint that you already didn't expect to work wouldn't feel very significant.

On the other hand, I don't think that Occam's razor is the only such constraint. So when I ... (read more)

Is LW making progress?

So it seems that there was progress in applied rationality and in AI. But that's far from everything LW has talked about. What about more theoretical topics, general problems in philosophy, morality, etc.? Do you feel that discussing some topics resulted in no progress and was a waste of time?

There's some debate about which things are "improvements" as opposed to changes.

Important question. Does the debate actually exist, or is this a figure of speech?

Humans can be assigned any values whatsoever…

1 is trivial, so yes. But I don't agree with 2. Maybe the disagreement comes from "few" and "obvious"? To be clear, I count evaluating some simple statistic on a large data set as one constraint. I'm not so sure about "obvious". It's not yet clear to me that my simple constraints aren't good enough. But if you say that more complex constraints would give us a lot more confidence, that's reasonable.

From OP I understood that you want to throw out IRL entirely. e.g.

If we give up the assumption of human ra
... (read more)
4Stuart_Armstrong2yOk, we strongly disagree on your simple constraints being enough. I'd need to see these constraints explicitly formulated before I had any confidence in them. I suspect (though I'm not certain) that the more explicit you make them, the more tricky you'll see that it is. And no, I don't want to throw IRL out (this is an old post), I want to make it work. I got this big impossibility result, and now I want to get around it. This is my current plan: https://www.lesswrong.com/posts/CSEdLLEkap2pubjof/research-agenda-v0-9-synthesising-a-human-s-preferences-into
Humans can be assigned any values whatsoever…
But it's not like there are just these five preferences and once we have four of them out of the way, we're done.

My example test is not nearly as specific as you imply. It discards large swaths of harmful and useless reward functions. Additional test cases would restrict the space further. There are still harmful Rs in the remaining space, but their proportion must be much lower than in the beginning. Is that not good enough?

What you're seeing as "adding enough clear examples" is actually "hand-crafting R(0) in totality".
... (read more)
2Stuart_Armstrong2yWe may not be disagreeing any more. Just to check, do you agree with both these statements: 1. Adding a few obvious constraints rule out many different R, including the ones in the OP. 2. Adding a few obvious constraints is not enough to get a safe or reasonable R.
How Can People Evaluate Complex Questions Consistently?

This is true, but it doesn't fit well with the given example of "When will [country] develop the nuclear bomb?". The problem isn't that people can't agree what "nuclear bomb" means or who already has them. The problem is that people are working from different priors and extrapolating them in different ways.

Integrity and accountability are core parts of rationality

Are you going to state your beliefs? I'm asking because I'm not sure what that looks like. My concern is that the statement will be very vague or very long and complex. Either way, you will have a lot of freedom to argue that actually your actions do match your statements, regardless of what those actions are. Then the statement would not be useful.

Instead I suggest that you should be accountable to people who share your beliefs. Having someone who disagrees with you try to model your beliefs and check your actions against that model seems like a source of conflict. Of course, stating your beliefs can be helpful in recognizing these people (but it is not the only method).

How Can People Evaluate Complex Questions Consistently?

What's the motivation? In what case is lower accuracy for higher consistency a reasonable trade-off? Consistency over time, especially, sounds like something that would discourage updating on new evidence.

3ozziegooen2yI attempted to summarize some of the motivation for this here: https://www.lesswrong.com/posts/Df2uFGKtLWR7jDr5w/?commentId=tdbfBQ6xFRc7j8nBE
3Elizabeth2ySome examples are where people care more about fairness, such as criminal sentencing and enterprise software pricing. However you're right that implicit in the question was "without new information appearing", although you'd want the answer to update the same way every time the same new information appeared.
3ChristianKl2yIf every study on depression used its own metric for depression that's optimal for the specific study, it would be hard to learn from the studies and aggregate information from them. It's much better when you have a metric that has consistency. Consistent measurements allow reacting to how a metric changes over time, which is often very useful for evaluating interventions.
Humans can be assigned any values whatsoever…

Evaluating R on a single example of human behavior is good enough to reject R(2), R(4) and possibly R(3).

Example: this morning I went to the kitchen and picked up a knife. Among possible further actions, I had A - "make a sandwich" and B - "stab myself in the gut". I chose A. R(2) and R(4) say I wanted B and R(3) is indifferent. I think that's enough reason to discard them.

Why not do this? Do you not agree that this test discards dangerous R more often than useful R? My guess is that you're asking for very strong formal guarantees from the assumptions that you consider and use a narrow interpretation of what it means to "make IRL work".
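For concreteness, a toy version of the filter I'm describing (my own illustration; the R(i) labels loosely follow the post's examples rather than any formal definition):

```python
# Candidate reward functions over two options; the labels loosely mirror the
# post's R(i) examples (plausible, anti-rational, indifferent, "reversed").
observed_choice = ("make_sandwich", "stab_self")  # I picked the first over the second

candidates = {
    "R0_plausible":    {"make_sandwich": 1.0,  "stab_self": -10.0},
    "R2_antirational": {"make_sandwich": -1.0, "stab_self": 10.0},
    "R3_indifferent":  {"make_sandwich": 0.0,  "stab_self": 0.0},
    "R4_reversed":     {"make_sandwich": -1.0, "stab_self": 1.0},
}

def consistent(reward, choice):
    chosen, rejected = choice
    return reward[chosen] > reward[rejected]  # strict, so indifference is discarded too

survivors = [name for name, r in candidates.items() if consistent(r, observed_choice)]
print("Still on the table after one observation:", survivors)
# -> only R0_plausible survives; each further test case shrinks the space again.
```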

2Stuart_Armstrong2yRejecting any specific R is easy - one bit of information (at most) per specific R. So saying "humans have preferences, and they are not always rational or always anti-rational" rules out R(1), R(2), and R(3). Saying "this apparent preference is genuine" rules out R(4). But it's not like there are just these five preferences and once we have four of them out of the way, we're done. There are many, many different preferences in the space of preferences, and many, many of them will be simpler than R(0). So to converge to R(0), we need to add huge amounts of information, ruling out more and more examples. Basically, we need to include enough information to define R(0) - which is what my research project is trying to do. What you're seeing as "adding enough clear examples" is actually "hand-crafting R(0) in totality". For more details see here: https://arxiv.org/abs/1712.05812
Humans can be assigned any values whatsoever…

The point isn't that there is nothing wrong or dangerous about learning biases and rewards. The point is that the OP is not very relevant to those concerns. The OP says that learning can't be done without extra assumptions, but we have plenty of natural assumptions to choose from. The fact that assumptions are needed is interesting, but it is by no means a strong argument against IRL.

What if in reality due to effects currently beyond our understanding, our actions are making the future more likely to be dystopian in some way than if we took rando
... (read more)
6Stuart_Armstrong2yYou'd think so, but nobody has defined these assumptions in anything like sufficient detail to make IRL work. My whole research agenda [https://www.lesswrong.com/posts/CSEdLLEkap2pubjof/research-agenda-v0-9-synthesising-a-human-s-preferences-into] is essentially a way of defining these assumptions, and it seems to be a long and complicated process.
Schelling Categories, and Simple Membership Tests

I feel like there are several concerns mixed together that should be separated:

1. Lack of communication, which is the central condition of the usual Schelling points.

2. Coordination (with some communication), where we agree to observe x41 because we don't trust the rest of the group to follow a more complex procedure.

3. Limited number of observations (or costly observations). In that case you may choose to only observe x41, even if you are working alone, just to lower your costs.

I don't think 2 and 3 have much to do with Schelling. These considera... (read more)

1Pattern2yI think a "Theory" heading and an "Example" heading would make for a nice compromise.
Musings on Double Crux (and "Productive Disagreement")

Is this ad hominem? Reasonable people could say that clone of saturn values ~1000 self-reports way too little. However, it is not reasonable to claim that he is not at all skeptical of himself, and not aware of his biases and blind spots, and is just a contrarian.

"If I, clone of saturn, were wrong about Double Crux, how would I know? Where would I look to find the data that would disconfirm my impressions?"

Personally, I would go to a post about Double Crux, and ask for examples of it actually working (as Said Achmiz did). Alternatively, I would li... (read more)

Humans can be assigned any values whatsoever…

The problem is that with these additional and obvious constraints, humans cannot be assigned arbitrary values, contrary to what the title of the post suggests. Sure, there will be multiple R that pass any number of assumptions, and we will be uncertain about which to use. However, because we don't perfectly know π(h), we had that problem to begin with. So it's not clear why this new problem matters. Maybe our confidence in picking the right R will be a little lower than expected, but I don't see why this reduction must be large.

4rohinmshah2yIf we add assumptions like this, they will inevitably be misspecified, which can lead to other problems. For example, how would you operationalize that π is good at optimizing R? What if in reality due to effects currently beyond our understanding, our actions are making the future more likely to be dystopian in some way than if we took random actions? Should our AI infer that we prefer that dystopia, since otherwise we wouldn't be better than random? (See also the next three posts in this sequence.)
Why so much variance in human intelligence?
I learned a semester worth of calculus in three weeks

I'm assuming this is a response to my "takes years of work" claim. I have a few natural questions:

1. Why start counting time from the start of that summer program? Maybe you had never heard of calculus before that, but you had been learning math for many years already. If you learned calculus in 3 weeks, that simply means that you already had most of the necessary math skills, and you only had to learn a few definitions and do a little practice in applying them. Many people don't alre... (read more)

7Jay Molstad2y1) True, but by the time that roommate took the class he had had comparable math foundations to what I had had when I took the class. Considering the extra years, arguably rather more. (Upon further thought I realized that I had taken the class in 1988 at the age of 15) 2) That was first-semester calc, Purdue's Math 161 class (for me and the roommate). Intro calc. Over the next two years I took two more semesters of calc, one of differential equations, and one of matrix algebra. By the time I met my freshman roommate (he was a bit older than me) and he started the calc class, I'd had five semesters of college math (which was all I ever took b/c I don't enjoy math). Also, that roommate was a below-average college student, but there are people in the world with far less talent than he had. 3) Because time is the only thing you can't buy. Time in college can be bought, but not cheaply even then. I got through school with good grades and went on to grad school as planned; his plans didn't work out. Of course time marched on and I had failures of my own. I agree that there's more to success than one particular kind of intelligence. Persistence, looks, money, luck, and other factors matter. But my roommate's calculus aptitude was a showstopper for his engineering ambitions, and I don't think his situation was terribly uncommon.
Is LW making progress?

The worst case scenario is if two people both decide that a question is settled, but settle it in opposite ways. Then we're only moving from a state of "disagreement and debate" to a state of "disagreement without debate", which is not progress.

Is LW making progress?

I appreciate the concrete example. I was expecting more abstract topics, but applied rationality is also important. Double Cruxes pass the criterion of being novel and the criterion of being well known. I can only question whether they actually work or made an impact (I don't think I see many examples of them on LW), and whether LW actually contributed to their discovery (apart from promoting CFAR).

Why so much variance in human intelligence?

The fact that someone does not understand calculus does not imply that they are incapable of understanding calculus. They could simply be unwilling. There are many good reasons not to learn calculus. For one, it takes years of work. Some people may have better things to do. So I suggest that your entire premise is dubious - the variance may not be as large as you imagine.

2Jay Molstad2yPersonally, I learned a semester worth of calculus in three weeks for college credit at a summer program (the Purdue College Credit Program circa 1989, specifically) when I was 16. Out of 20ish students (pre-selected for academic achievement), about 15% (see note 1) aced it while still goofing around, roughly 60% got college credit but found the experience difficult, and some failed. Two years later, my freshman roommate (note 2) took the same Purdue course over 16 weeks and failed it. The question isn't "why don't some people understand calculus", but "why do some people learn it easily while others struggle, often failing". Note 1: This wasn't a statistically robust sample. "About 15%" means "Chris, Bill, and I". Note 2: That roommate wanted to be an engineer and was well aware that he could only achieve that goal by passing calculus. He was often working on his homework at 1:30 am, much to my annoyance. He worked harder on that course than I had, despite being 18 years old and having a (presumably) more mature brain.
Intransitive Preferences You Can't Pump

That's a measly one in a billion. Why would you believe that this is enough? Enough for what? I'm talking about the preferences of a foreign agent. We don't get to make our own rules about what the agent prefers; only the agent can decide that.

Regarding practical purposes, sure, you could treat the agent as if it were indifferent between A, B, and C. However, given the binary choice, it will choose A over B every time. And if you offered to trade C for B, B for A, and A for C, at no cost, then the agent would gladly walk the cycle any number of times (if we can ignore the inherent costs of trading).
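A minimal sketch of that cycling behaviour, assuming strict pairwise preferences A > B, B > C, C > A and free trades (my own toy model, not from the post):

```python
# Toy model: strict cyclic preferences A > B, B > C, C > A, trades at no cost.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means x is chosen over y

def accepts(current, offered):
    """The agent trades away `current` for `offered` iff it prefers `offered`."""
    return (offered, current) in prefers

holding = "C"
for offered in ["B", "A", "C", "B", "A", "C"]:  # keep offering the preferred item
    if accepts(holding, offered):
        print(f"trades {holding} -> {offered}")
        holding = offered
# The agent goes C -> B -> A -> C -> ... indefinitely at zero cost, while
# still picking A over B in any one-off binary choice.
```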

The Schelling Choice is "Rabbit", not "Stag"

Defecting in Prisoner's dilemma sounds morally bad, while defecting in Stag hunt sounds more reasonable. This seems to be the core difference between the two, rather than the way their payoff matrices actually differ. However, I don't think that viewing things in moral terms is useful here. Defecting in Prisoner's dilemma can also be reasonable.

Also, I disagree with the idea of using "resource" instead of "utility". The only difference the change makes is that now I have to think, "how much utility is Alexis getting from 10 resources?" and come up with my own value. And if his utility function happens not to be monotone increasing, then the whole problem may change drastically.

Prediction Markets: When Do They Work?

This is all good, but I think the greatest problem with prediction markets is low status and low accessibility. To be fair though, improved status and accessibility are mostly useful in that they bring in more "suckers".

There is also a problem of motivation - the ideal of futarchy is appealing, but it's not clear to me how we go from betting on football to impacting important decisions.

Logarithms and Total Utilitarianism

Note that the key feature of the log function used here is not its slow growth, but the fact that it takes negative values on small inputs. For example, if we take the function u(r) = log(r+1), so that u(0) = 0, then RC holds.

There are also solutions that prevent RC without taking negative values, though, e.g. u(r) = exp(-1/r).
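A quick numerical check of both claims, splitting a fixed resource pool equally (my own sketch; the pool size is an arbitrary choice):

```python
# Fixed resource pool R split equally among N people; total utility is N * u(R/N).
import numpy as np

R = 1000.0
N = np.arange(1, 100_001, dtype=float)
r = R / N  # per-capita resources

total_log = N * np.log(r + 1)     # u(r) = log(r + 1), so u(0) = 0
total_exp = N * np.exp(-1.0 / r)  # u(r) = exp(-1/r), positive for all r > 0

print("log(r+1):  total utility strictly rises with N:", bool(np.all(np.diff(total_log) > 0)))
print("           it approaches R from below:", round(float(total_log[-1]), 2))
print("exp(-1/r): total utility peaks near N =", int(N[np.argmax(total_exp)]),
      "and then falls toward 0")
# The first function rewards spreading resources ever thinner (RC holds);
# the second is maximized at a finite population, blocking RC without going negative.
```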

When is unaligned AI morally valuable?
a longer time horizon

Now that I think of it, a truly long-term view would not bother with such mundane things as making actual paperclips with actual iron. That iron isn't going anywhere; it doesn't matter whether you convert it now or later.

If you care about maximizing the number of paperclips at the heat death of the universe, your greatest enemies are black holes, as once some matter has fallen into them, you will never make paperclips from that matter again. You may perhaps extract some energy from the black hole, and convert that into matter... (read more)

When is unaligned AI morally valuable?

The fact that P(humans will make another AI) > 0 does not justify paying arbitrary costs up front, no matter how long our view is. If humans did create this second AI (presumably built out of twigs), would that even be a problem for our maximizer?

It's still more efficient to kill all humans than to think about which ones need killing

That is not a trivial claim and it depends on many things. And that's all assuming that some people do actually need to be killed.

If destroying all (macroscopic) life on earth is easy, e.g. maybe pumping some gas i... (read more)

When is unaligned AI morally valuable?

Killing all humans is hardly necessary. For example, the tribes living in the Amazon aren't going to develop a superintelligence any time soon, so killing them is pointless. And, once the paperclip maximizer is done extracting iron from our infrastructure, it is very likely that we wouldn't have the capacity to create any superintelligences either.

Note, I did not mean to imply that the maximizer would kill nobody. Only that it wouldn't kill everybody, and quite likely not even half of all people. Perhaps AI researchers really would be on the maximizer's short list of people to kill, for the reason you suggested.

5Aiyen3yHumans are made of atoms that are not paperclips. That's enough reason for extinction right there.

A thing to keep in mind here is that an AI would have a longer time horizon. The fact that humans *exist* means eventually they might create another AI (this could be in hundreds of years). It's still more efficient to kill all humans than to think about which ones need killing and carefully monitor the others for millennia.

*Another* Double Crux Framework
The structure here was "write an initial braindump on google docs, then invite people hash out disagreements in the comments

Is it possible that you did 90% of the work on those docs, at least of the kind that collects and cleans up existing arguments? This is sort of what I meant by "resistance". E.g. if I wanted to have a formalized debate with my hypothetical grandma, she'd be confused about why I would need that, or why we can't just talk like normal people, but this doesn't mean that she wouldn't play along, or that ... (read more)

*Another* Double Crux Framework
There's been periodic attempts to create formal Double Crux frameworks

Do you have any links about those, or specifically about how they fail?

To be honest, I think it's likely that the whole idea of formalizing that sort of thing is naive, and only appeals to a certain kind of person (such as myself), due to various biases. Still, I have some hope that it could work, at least for such people.

This framework shares that issue, but something that made me a bit more optimistic than usual about it is that I've had a lot of good experiences using g
... (read more)
2Raemon3yI looked for the DoubleCrux website but couldn't find it (I think Lifelonglearner made it). [fake edit: tried very slightly harder and then found it: http://double-crux.appspot.com/] I think most attempts have something of this quality. My own motivation came specifically because of some debates that went for several months and didn't seem to have resolved anything. 1. yes – I regularly hash out disagreements on google docs. I haven't had to do deep worldview disagreements, but standard disagreements within a shared frame. Some of the ideas were "controversial", but not frame-breakingly controversial among the people discussing them. 2. yes, basically entirely rationalists that I trust reasonably 3. Not sure I parse the third question – I didn't feel much resistance one way or another. (The structure here was "write an initial braindump on google docs, then invite people hash out disagreements in the comments). (I'd only suggest the DoubleCrux framework in the OP for people who trust or at least meta-trust each other)
*Another* Double Crux Framework

It looks very appealing, but, as was already pointed out, it's not a lightweight approach.

Maybe it could be, though? One improvement would be to be able to stick with LW comment format, or any text message format. I think that could still work. We could agree on a set of tags/prefixes, instead of static sections. E.g. [I think we both believe] that ..., [I would bet] that ..., [Let me try to pass your ITT] ..., etc. The amorphous discussion probably does not need to be tagged. And the point of having tags is that you can then ctrl-F the whole discussi... (read more)

4Raemon3yThanks, I appreciate the problem solving approach here. I definitely think clearer norms encouraging the various sorts of thing you suggest tagging here is good. Literally tagging them might also help for the reasons you've noted. I suspect that getting the bulk of the value I was pointing at requires some kind of additional infrastructure. Most of the value I saw of the framework here was as a working-memory aid, where I think it matters that you can quickly scan the thing at a glance. Ctrl-Fing the tags sounds like you'd get some of the value, but I suspect the overhead/benefit ratio would end up being similar to the OP. (i.e. less cost, but also less benefit) There's been periodic attempts to create formal Double Crux frameworks, which I think usually suffer from being a) unwieldy, b) not really suited for how people actually have conversations. This framework shares that issue, but something that made me a bit more optimistic than usual about it is that I've had a lot of good experiences using google docs as a way to hash out ideas, with the ability to blend between formal bullet points, freeforming paragraphs, and back-and-forth conversation in the comments as needed. As I said to Chris_Leong, I'd still only recommend this as a supplement for when you've already tried once or twice to just talk through the thing seriously.
Duncan Sabien on Moderating LessWrong

I think you're confusing "aspiring to find truth" with "finding truth". Your crackpot uncle who writes Facebook posts about how Trump eats babies isn't doing it because he loves lies and hates truth; he does it because he has poor epistemic hygiene.

So in this view almost every discussion forum and almost every newspaper is doing their best to find the truth, even if they have some other goals as well.

Also, of course, I'm only counting places that deal with anything like propositions at all, and excluding things like jokes, memes, porn, shopping, etc, which is a large fraction of the internet.

Duncan Sabien on Moderating LessWrong
I also think it is important here to have the someone who does the noticing be someone who actually has the relevant skills, <...> who won't feel licensed to point out such problems unless handed a literal license to do so).

Yes, but giving people licenses is pretty easy. I'd be fine with you having one, for example, though I guess I don't have the power to give it to you myself.

It is generally wise to solve social problems with tech, when possible.

The problem is that tech takes time and effort to write, so writing tech to solve problem... (read more)

When is unaligned AI morally valuable?
It sounds like your comment probably isn't relevant to the point of my post, except insofar as I describe a view which isn't your view.

Yes, you describe a view that isn't my view, and then use that view to criticize intuitions that are similar to my intuitions. The view you describe is making simple errors that should be easy to correct, and my view isn't. I don't really know how the group of "people who aren't too worried about paperclipping" breaks down between "people who underestimate P(paperclipping)" ... (read more)

9habryka3y[Moderator note: I wrote a warning to you on another post a few days ago, so this is your second warning. The next warning will result in a temporary ban.] Basically everything I said in my last comment still holds: Since then, it does not seem like you significantly reduced the volume of comments you've been writing, and I have not perceived a significant increase in the amount of thought and effort that goes into every single one of your comments. I continue to think that you could be a great contributor to LessWrong, but also think that for that to happen, it seems necessary that you take on significantly more interpretative labor in your comments, and put more effort into being clear. It still appears that most comment exchanges that involve you cause most readers and co-commenters to feel attacked by you or misunderstand you, and quickly get frustrated. I think it might be the correct call (though I obviously don't know your constraints and thought-habits around commenting here) to aim to write one comment per day, instead of an average of three, with that one comment having three times as much thought and care put into it, and with particular attention towards trying to be more collaborative, instead of adversarial.
Confusions Concerning Pre-Rationality
Winning bets is not literally the same thing as believing true things, nor is it the same thing as having accurate beliefs, or being rational.

They are not the same, but that's ok. You asked about constraints on, not definitions of, rationality. This may not be an exhaustive list, but if someone has an idea about rationality that translates neither into winning some hypothetical bets nor into having even slightly more accurate beliefs about anything, then I can confidently say that I'm not interested.

(Of course this is not to say that an idea that... (read more)

When is unaligned AI morally valuable?
Sexual desire is (more or less) universal in sexually reproducing species

Uploads are not sexually reproducing. This is only one of many, many ways in which an upload is more different from you than you are from a dinosaur.

Whether regular evolution would drift away from our values is more dubious. If we lived in caves for all that time, then probably not. But if we stayed at current levels of technology, even without making progress, I think a lot could change. The pressures of living in a civilization are not the same as the pressures of living i... (read more)

Expressive Vocabulary
Seems reasonable, does it work well?

What do you mean by "works well"? Getting positive responses from real people? I doubt it, but I don't think I've ever explained it like this to anyone. I don't do the "everything is chemicals" reply that often in the first place.

When is unaligned AI morally valuable?

I don't like the caveman analogy. The differences between you and a caveman are tiny and superficial compared to the differences between you and the kind of mind that will exist after genetic engineering, mind uploads, etc., or even after a million years of regular evolution.

Would a human mind raised as (for example) an upload in a vastly different environment from our own still have our values? It's not obvious. You say "yes", I say "no", and we're unlikely to find strong arguments either way. I'm only hoping that ... (read more)

4rhollerith_dot_com3y>the maximizer may choose to go to space, looking for more accessible iron. The benefits of killing people are relatively small The main reason the maximizer would have for killing all the humans is the knowledge that since humans succeeded in creating the maximizer, humans might succeed in creating another superintelligence that would compete with the maximizer. It is more likely than not that the maximizer will consider killing all the humans to be the most effective way to prevent that outcome.
3Aiyen3yThe strongest argument that an upload would share our values is that our terminal values are hardwired by evolution. Self-preservation is common to all non-eusocial creatures, curiosity to all creatures with enough intelligence to benefit from it. Sexual desire is (more or less) universal in sexually reproducing species, desire for social relationships is universal in social species. I find it hard to believe that a million years of evolution would change our values that much when we share many of our core values with the dinosaurs. If maiasaura can have recognizable relationships 76 million years ago, are those going out the window in the next million? It's not impossible, of course, but shouldn't it seem pretty unlikely? I think the difference between us is that you are looking at instrumental values, noting correctly that those are likely to change unrecognizably, and fearing that that means that all values will change and be lost. Are you troubled by instrumental values shifts, even if the terminal values stay the same? Alternatively, is there a reason you think that terminal values will be affected? I think an example here is important to avoid confusion. Consider Western Secular sexual morals vs Islamic ones. At first glance, they couldn't seem more different. One side is having casual sex without a second thought, the other is suppressing desire with full-body burqas and genital mutilation. Different terminal values, right? And if there can be that much of a difference between two cultures in today's world, with the Islamic model seeming so evil, surely values drift will make the future beyond monstrous! Except that the underlying thoughts behind the two models aren't as different as you might think. A Westerner having casual sex knows that effective birth control and STD countermeasures means that the act is fairly safe. A sixth century Arab doesn't have birth control and knows little of STDs beyond that they preferentially strike the promiscuou