Recent Discussion

2 RHollerith 7h
My probability that quantum Bayesianism is onto something is .05. It went down a lot when I read Sean Carroll's book Something Deeply Hidden. .05 is about as extreme as my probabilities get for the parts of quantum physics that are not settled science, since I'm not an expert.
10 Nate Showell 8h
I've come to believe (~65%) that Twitter is anti-informative: that it makes its users' predictive calibration worse on average. On Manifold, I frequently adopt a strategy of betting against Twitter hype (e.g., on the LK-99 market), and this strategy has been profitable for me.
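To make "anti-informative" concrete: one standard way to score this sort of claim is the Brier score (mean squared error between stated probabilities and 0/1 outcomes, lower is better), where a constant coin-flip forecast of 0.5 scores 0.25. A minimal sketch, with entirely made-up forecasts and outcomes:

```python
# Minimal sketch: comparing predictive calibration via Brier scores.
# All forecasts and outcomes below are made up for illustration.

def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

hype = [0.9, 0.8, 0.85, 0.7]  # probabilities implied by (hypothetical) Twitter hype
fade = [0.2, 0.3, 0.25, 0.4]  # the bet-against-the-hype strategy
outcomes = [0, 0, 1, 0]       # what actually happened

print(brier_score(hype, outcomes))       # ~0.49 -- worse than a coin flip
print(brier_score(fade, outcomes))       # ~0.21 -- better than a coin flip
print(brier_score([0.5] * 4, outcomes))  # 0.25  -- the coin-flip baseline
```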

Is Twitter literally worse than flipping a coin, or just worse than... someone following a non-Twitter crowd?

There was a conversation on Facebook about an argument that any sufficiently complex system, whether a human, a society, or an AGI, will be unable to pursue a unified goal due to internal conflict among its parts, and that this should make us less worried about "one paperclipper"-style AI FOOM scenarios. Here's a somewhat edited and expanded version of my response:

1) Yes, this is a very real issue.

2) Yet, as others pointed out, humans and organizations are still able to largely act as if they had unified goals, even if they often also act contrary to those goals.

3) There's a lot of variance in how unified any given human is. Trauma makes you less unified, while practices such as therapy and certain flavors of meditation can make a...

1 rorygreig 4h
I agree that initially a powerful AGI would likely be composed of many sub-agents. However, it seems plausible to me that these sub-agents may "cohere" under sufficient optimisation or training. This could result in the sub-agent with the most stable goals winning out. It's possible that strong evolutionary pressure makes this more likely. You could also imagine powerful agents that aren't composed of sub-agents, for example a simpler agent with very computationally expensive search over actions. Overall this topic seems under-discussed, in my opinion. It would be great to have a better understanding of whether we expect sub-agents to turn into a single coherent agent.

However, it seems plausible to me that these sub-agents may "cohere" under sufficient optimisation or training.

I think it's possible to unify them somewhat, in terms of ensuring that they don't have outright contradictory models or goals, but I don't really see a path where a realistically feasible mind would stop being made up of different subagents. The subsystem that thinks about how to build nanotechnology may have overlap with the subsystem that thinks about how to do social reasoning, but it's still going to be more efficient to have them specialized ... (read more)

When there is a train, plane, or bus crash, it's newsworthy: it doesn't happen very often, lots of lives at stake, lots of people are interested in it. Multiple news outlets will send out reporters, and we will hear a lot of details. On the other hand, a car crash does not get this treatment unless there is something unusual about it like a driverless car or an already newsworthy person involved.

The effects are not great: while driving is relatively dangerous, both to the occupants and to people outside, our sense of danger and impact is poorly calibrated by the news we read. My guess is that most people's intuitive sense of the danger of cars versus trains, planes, and buses has been distorted by this coverage: most people, say, do not expect buses to be >16x safer than cars. This also...
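For a sense of how a ">16x" figure gets computed, here's a back-of-envelope sketch. The rates below are illustrative placeholders, only roughly in the ballpark of published US fatalities-per-passenger-mile statistics; check primary sources for real numbers:

```python
# Back-of-envelope relative-safety calculation.
# Both rates are illustrative placeholders, not actual published figures.

car_fatalities_per_billion_pax_miles = 7.0  # illustrative
bus_fatalities_per_billion_pax_miles = 0.4  # illustrative

ratio = car_fatalities_per_billion_pax_miles / bus_fatalities_per_billion_pax_miles
print(f"Buses ~{ratio:.0f}x safer per passenger-mile than cars")  # ~18x
```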

1 mad 1h
"Cautious driver" is not a real category. It's not something my crash database can filter on.  You make mistakes when you drive. We all do. It is human nature, and driving is a complex chain of tasks. If you never speed, never drive after even one drink, never break a single road rule, know every single road rule (in my jurisdiction the road traffic code is some 400 pages long!), never take gaps in traffic that are too close, never go through an orange light too late, never jaywalk, always ensure your car is mechanically up to date, etc etc etc, then you are either pathological about your rule following or a liar.  I do crash analysis as part of my job, almost every day. I can tell you there are PLENTY of bus crashes - buses going before the passengers were sat down resulting in minor injuries, buses hitting pedestrians resulting in hospitalisation, heck I was in a bus about a year ago that rear-ended a car in front of it. I only have access to data in one jurisdiction and I don't believe that data includes taxis, uber drivers, etc. Anecdotally my uber drivers often adjust the GPS when they're driving and tend to speed so I wouldn't call them particularly cautious. For the record, as far as I know I don't have the right to pull out my jurisdiction's crash data so I won't be able to respond to specific requests. I do know that "bus" is a category of vehicle we have. I don't know whether taxi is. EDIT:  https://www.9news.com.au/national/liverpool-crash-pedestrian-dies-hit-by-bus-sydney-south-west/6eba1c4a-0825-4828-87c1-b530e5e4e2b5 - a man in Sydney died after being hit by a bus a month ago https://thewest.com.au/news/traffic/perth-crash-man-hit-by-bus-on-wellington-street-as-police-close-road-c-12809596 (paywall i can't bypass) - a man in Perth was hit by a bus last week seriously I just searched for "pedestrian hit by a bus" and there are SO MANY in Australia. With a cursory search I see three in Perth (2 million in the greater metro area) in the last six mo
1 Bezzi 34m
Yes, obviously it is not a well-defined category; I mostly hoped that you could filter for taxis or similar. Anyway, I am not claiming to be the best driver in the world (although I'm 100% safe at least w.r.t. drinking, since I don't drink at all); I'm just claiming to be at least as good as a taxi driver, and I would be really, really surprised if it turned out that taxi drivers crash their vehicles with the same frequency as the general population.
1 mad 27m
https://acrs.org.au/files/arsrpe/RS050099.pdf <- there's a paper that covers your exact question, comparing crashes in taxis and passenger cars. (In case you don't know the terminology, "fleet vehicle" refers to cars that are registered as work cars for an organisation, so the drivers are more likely to be on their "best behaviour" as far as drinking/speeding/etc.)

Table 5 in particular: per 100 million vehicle-kms travelled, taxis have about half as many fatal crashes as cars, but about 50% more injury crashes and maybe 10% more towaway crashes (eyeballing it).

Table 10 also shows that some 30% of taxi drivers involved in crashes weren't wearing seat belts (they're apparently not legally required to in NSW! news to me), which is a pretty big clue that taxi drivers aren't the paragon of careful driving one might assume.
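To keep those eyeballed ratios straight, a quick sketch with placeholder rates, chosen only to reproduce the ratios described above (not the paper's actual figures; see Table 5 in the link for those):

```python
# Placeholder crash rates per 100 million vehicle-kms, illustrative only.
car  = {"fatal": 1.0, "injury": 20.0, "towaway": 60.0}
taxi = {"fatal": 0.5, "injury": 30.0, "towaway": 66.0}

for severity in car:
    print(f"{severity}: taxis at {taxi[severity] / car[severity]:.1f}x the car rate")
# fatal: 0.5x, injury: 1.5x, towaway: 1.1x
```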

Table 10 also shows that some 30% of taxi drivers involved in crashes weren't wearing seat belts (they're apparently not legally required to in NSW! news to me), which is a pretty big clue that taxi drivers aren't the paragon of careful driving one might assume.

WTF!?

Ok, I suppose I have to update my priors on taxi drivers (man, they even write "There is considerable anecdotal evidence that taxi drivers around the world drive in a manner the rest of the public considers to be unsafe").

Do you have suggestions about other proxies for careful driving?

If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.

The Open Thread tag is here. The Open Thread sequence is here.

It would save me a fair amount of time if all LessWrong posts had an "export BibTeX citation" button, exactly like the feature on arXiv. This would be particularly useful for Alignment Forum posts!
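For concreteness, a sketch of roughly what such a button might emit; the author, title, year, and URL below are all hypothetical placeholders:

```python
# Sketch: formatting post metadata as a BibTeX @misc entry.
# All metadata here is hypothetical.

def bibtex_entry(author: str, title: str, year: int, url: str) -> str:
    key = author.split()[-1].lower() + str(year)
    return (
        f"@misc{{{key},\n"
        f"  author = {{{author}}},\n"
        f"  title = {{{title}}},\n"
        f"  year = {{{year}}},\n"
        f"  howpublished = {{\\url{{{url}}}}},\n"
        f"  note = {{LessWrong}},\n"
        f"}}"
    )

print(bibtex_entry("Jane Doe", "An Example Post", 2023,
                   "https://www.lesswrong.com/posts/abc123/an-example-post"))
```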

Each person is special. Amanda is in a class of her own; Fred is in a class of his own. Amanda has many different properties: where she came from, what she looks like, what behaviors she habitually does, what she knows and doesn't know, what her plans are, how she speaks, how she would behave if given power, etc. Fred has his own versions of those properties. They overlap partially but not completely with Amanda's properties.

When conflict arises, sides must be chosen [citation needed]. One side has people with one set of properties, the other side has people with the negation of those properties. Sides can be chosen out loud and explicitly, or in silence and implicitly.

Sometimes one side is more coordinated with itself than the other...

2 TekhneMakre 17h
I totally agree with what you say! ... And that's why I'm on the side of those against the system of conflict between groups of people with common interests amongst themselves, against the side of those in favor of that system. That taking sides in this way is paradoxical (cf. the paradox of tolerance) is why I asked: A key aspect of that is to not look away from the fact that there is a class struggle between those in favor of class struggle and those against it. I think the key premise that you didn't say you agree with is this: that there are people who are opposed to sharing information, pointing out norm violations, justice in general; perspective synthesizing, pulling the rope sideways. Cf. http://benjaminrosshoffman.com/notes-on-the-autobiography-of-malcolm-x-2/

there are people who are opposed to sharing information, pointing out norm violations, justice in general; perspective synthesizing, pulling the rope sideways

Generally, I agree that these are bad people and should be opposed.

There are also situations where I might locally do a similar thing; for example, I sometimes oppose doxing (which is a special case of "sharing information"), I might disapprove of reporting violations of specific norms that I consider bad (such as copyright), etc.

This is a linkpost for https://outsidetheasylum.blog/understanding-subjective-probabilities/. It's intended as an introduction to practical Bayesian probability for those who are skeptical of the notion. I plan to keep the primary link up to date with improvements and corrections, but won't do the same with this LessWrong post, so see there for the most recent version.

Any time a controversial prediction about the future comes up, there's a type of interaction that's pretty much guaranteed to happen.

Alice: "I think this thing is 20% likely to occur."

Bob: "Huh? How could you know that? You just made that number up!"

Or in Twitterese: [embedded tweet]

That is, any time someone attempts to provide a specific numerical probability for a future event, they'll be inundated with claims that that number is meaningless. Is this true? Does it make...

Presumably you are not claiming that saying

...I don't know the exact initial conditions of the coin well enough to have any meaningful knowledge of how it's going to land, and I can't distinguish between the two options...

is actually necessarily what it means whenever someone says something has a 50% probability? Because there are obviously myriad ways something can have a 50% probability, and this kind of 'exact symmetry between two outcomes' + no other information is only one very special way that it can happen.
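(For instance, here's a toy sketch, with all three setups invented for illustration, of distinct mechanisms that each warrant saying "50%":)

```python
import random

# Three different setups that each make "50%" the right thing to say;
# the number alone doesn't tell you which situation you're in.

def symmetric_coin():
    # Physical symmetry: nothing distinguishes the two outcomes.
    return random.random() < 0.5

def unknown_biased_coin():
    # The coin is badly biased, but I have no idea which way;
    # averaging over my uncertainty about the bias gives 0.5.
    bias = random.choice([0.9, 0.1])
    return random.random() < bias

def observed_frequency():
    # Past data: the event has simply occurred about half the time.
    return random.random() < 0.5

for mechanism in (symmetric_coin, unknown_biased_coin, observed_frequency):
    n = 100_000
    freq = sum(mechanism() for _ in range(n)) / n
    print(mechanism.__name__, round(freq, 3))  # each prints ~0.5
```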

So what does it mean exactly when you say something is 50% likely?

1 Spencer Becker-Kahn 1h
Is this right? I would have said that what you describe is more like the classical, logical view of probability, which isn't the same as the frequentist view. Even the wiki page you've linked seems to disagree with what you've written, i.e. it describes the frequentist view in the standard way of being about relative frequencies in the long run. So it isn't a coin having intrinsic "50%-ness"; you actually need the construction of the repeated experiment in order to define the probability.
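(A minimal simulation of that construction, assuming a fair coin: the probability is defined by the limiting relative frequency over repeated trials, not by any property of a single flip.)

```python
import random

# Frequentist construction: probability as long-run relative frequency
# over a repeated experiment (simulated fair flips here).
flips = [random.random() < 0.5 for _ in range(10_000)]
for n in (10, 100, 1_000, 10_000):
    print(n, sum(flips[:n]) / n)  # relative frequency approaches 0.5 as n grows
```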
2 TAG 11h
You are assuming determinism. Determinism is not known to be true. Yes, a theory of subjective probability would be useful in any world except one that allows complete omniscience -- Knightian uncertainty is always with us. But it doesn't follow from that that probability "is" subjective, because it doesn't follow that probability isn't objective as well. Subjective and objective probability are not exclusive. No, frequentist probability just says that events fall into sets of a comparable type which have relative frequencies. You don't need to assume indeterminism for frequentism. It's also obvious that historically observed frequencies, where available, are a good basis for a guess -- better than nothing, anyway. You were using them yourself, in the example about the presidents. One of the things that tells us is that frequentism and Bayesianism aren't mutually exclusive, either.

Confidence level: I am a physicist, not a biologist, so don't take this as the account of a domain-level expert. But this is really basic stuff, and is very easy to verify.

Edit: I have added a few revisions and included a fact check of this post by an organic chemist. You can also read the comments on the EA forum to see Yudkowsky's response. 

Recently I encountered a scientific claim about biology, made by Eliezer Yudkowsky. I searched around for the source of the claim, and found that he has been repeating versions of the claim for over a decade and a half, including in “the sequences” and his TED talk. In recent years, this claim has primarily been used as an argument for why an AGI attack...

2 Eliezer Yudkowsky 18h
I rather expect that existing robotic machinery could be controlled by an ASI, rather than a "moderately smart intelligence", into picking up the pieces of a world economy after it collapses; or that if for some weird reason it was trying to play around with static-cling spaghetti, it could pick up the pieces of the economy that way too.

It seems to me as if we expect the same thing, then: If humanity was largely gone (e.g. by several engineered pandemics) and as a consequence the world economy came to a halt, an ASI would probably be able to sustain itself long enough by controlling existing robotic machinery, i.e. without having to make dramatic leaps in nanotech or other technology first. What I wanted to express with "a moderate increase of intelligence" is that it won't take an ASI at the level of GPT-142 to do that, but GPT-7 together with current projects in robotics might suffice to... (read more)

There were several responses to What I Would Do If I Were Working On AI Governance which focused on the liability section and had similar criticisms. In particular, I'll focus on this snippet as a good representative:

Making cars (or ladders or knives or printing presses or...) "robust to misuse", as you put it, is not the manufacturer's job.

The commenter calls manufacturer liability for misuse “an absurd overreach which ignores people's agency in using the products they purchase”. Years ago I would have agreed with that; it’s an intuitive and natural view, especially for those of us with libertarian tendencies. But today I disagree, and claim that that’s basically not the right way to think about product liability, in general.

With that motivation in mind: this post lays out some...

We can certainly debate whether liability ought to work this way. Personally I disagree, for reasons others have laid out here, but it's fun to think through.

Still, it's worth saying explicitly that as regards the motivating problem of AI governance, this is not currently how liability works. Any liability-based strategy for AI regulation must either work within the existing liability framework, or (much less practically) overhaul the liability framework as its first step.

2 tailcalled 4h
Couldn't you make both of them liable? Not as a split, but essentially duplicating the liability, so that facing $X damage means one can sue the user for $X and the manufacturer for $X, for a total of $2X?
2 faul_sname 3h
If you don't put a restriction that you can recover a maximum of $X in total, this creates really bad incentives (specifically, you've just built a being-harmed bounty).
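A toy calculation of why: with duplicated liability and no cap on total recovery, suffering $X of harm becomes profitable (the numbers below are of course just illustrative):

```python
# Toy illustration of the "being harmed bounty".
X = 1000  # dollars of harm suffered (illustrative)

uncapped = X + X          # recover $X from the user and $X from the manufacturer
capped   = min(X + X, X)  # total recovery capped at actual damages

print(uncapped - X)  # 1000: net profit from arranging to be harmed
print(capped - X)    # 0: made whole, no bounty
```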
14 Dweomite 7h
Who should be second in line for liability (when the actual culprit isn't caught or can't pay) is a more debatable question, I think, but I still do not see any clear reason for a default of assigning it to the product manufacturer.

Your principle 3 says we should assign liability to whoever can most cheaply prevent the problem. My model says that will sometimes be the manufacturer, but will more often be the victim, because they're much closer to the actual harm. For instance, it's cheaper to put your valuable heirloom into a vault than it is to manufacture a backpack that is incapable of transporting stolen heirlooms. Also consider what happens if more than one product was involved; perhaps the thief also wore shoes!

My model also predicts that in many cases both the manufacturer and the victim will have economically-worthwhile mitigations that we'd ideally like them to perform. I think the standard accepted way of handling situations like that is to attempt to create a list of mitigations that we believe are reasonable for the manufacturer to perform, then presume the manufacturer is blameless if they did those, but give them liability if they failed to do one that appears relevant. Yes, this is pretty much what you complained about in your malpractice example. Our "list of reasonable mitigations" will probably not actually be economically optimal, which adds inefficiency, but plausibly less inefficiency than if we applied strict liability to any single party (and thereby removed all incentive for the other parties to perform mitigations).