This is a special post for short-form writing by Paul Crowley. Only they can create top-level comments. Comments here also appear on the Shortform Page and All Posts page.

For the foreseeable future, it seems that anything I might try to say to my UK friends about anything to do with LW-style thinking is going to be met with "but Dominic Cummings". Three separate instances of this in just the last few days.

Can you give some examples of "LW-style thinking" that they now associate with Cummings?

On Twitter I linked to this, saying:

Basic skills of decision making under uncertainty have been sorely lacking in this crisis. Oxford University's Future of Humanity Institute is building up its Epidemic Forecasting project, and needs a project manager.

One reply I got: "I'm honestly struggling with a polite response to this. Here in the UK, Dominic Cummings has tried a Less Wrong approach to policy making, and our death rate is terrible. This idea that a solution will somehow spring from left-field maverick thinking is actually lethal."

Did Dominic Cummings in fact try a "Less Wrong approach" to policy making? If so, how did it fail, and how can we learn from it? (if not, ignore this)

Seems like a good discussion could be had about long-term predictions and how much evidence short-term political fluctuations actually provide. The Cummings silliness vs. unprecedented immigration restrictions - which is more likely to have an impact 5 years from now?

You mean the swine are judging ideas by how they work in practice?

Extracted from a Facebook comment:

I don't think the experts are expert on this question at all. Eliezer's train of thought essentially started with "Supposing you had a really effective AI, what would follow from that?" His thinking wasn't at all predicated on any particular way you might build a really effective AI, and knowing a lot about how to build AI isn't expertise on what the results are when it's as effective as Eliezer posits. It's like thinking you shouldn't have an opinion on whether there will be a nuclear conflict over Kashmir unless you're a nuclear physicist.

(Replying without the context I imagine to be present here)

I agree with a version of this which goes "just knowing how to make SGD go brrr does not at all mean you have expertise for predicting what happens with effective AI." 

I disagree with a version of this comment which goes, "Having a lot of ML expertise doesn't mean you have expertise for thinking about effective AIs." Eliezer could have started off his train of thought by imagining systems which are not the kind of system that gets trained by SGD. There's no guarantee that thought experiments nominally about "effective AIs" are at all relevant to real-world effective AIs. (Example specific critique A of claims about minds-in-general, example specific critique B of attempts to use AIXI as a model of effective intelligence.)

Perhaps the response from experts is something like: "the only kind of AI we have is LLMs, and people who work with LLMs know that they cannot be really effective, therefore Eliezer's premises are not realistic"?

Okay, it sounds stupid when I write it like this, so likely a strawman. But maybe it points in the right direction...
