Being told to ‘show your work’ and graded on the steps helps you learn the steps and by default murders your creativity, execution-style.
I acutely empathize with this, for I underwent similar traumas.
But to put a charitable interpretation on it: what if we compare this to writing proofs? It seems to me that we approach proofs in roughly this way: if the steps are wrong, contradictory, or incomplete, the proof is wrong; if they are all correct, we say the proof is correct; the fewer steps there are, the more elegant the proof; and so on.
It seems like proofs are just a higher-dimensional case of what is happening here, and it doesn't seem like a big step to go from here to something that could at least generate angles of attack on a problem in the Hamming sense.
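A machine-checked proof makes the analogy concrete: the checker grades exactly the steps, and a single wrong or missing step fails the whole proof. A minimal sketch in Lean 4 (using only the core `Nat.add_comm` lemma):

```lean
-- Every step must be a valid inference, or the checker rejects the proof.
example (a b c : Nat) : a + (b + c) = (c + b) + a := by
  rw [Nat.add_comm b c]        -- a wrong or missing rewrite fails right here
  exact Nat.add_comm a (c + b)

-- The "more elegant" version: the same kind of fact in a single step.
example (a b : Nat) : a + b = b + a := Nat.add_comm a b
```

The grading criteria from the comment above map directly: step validity is checked mechanically, and step count is a rough proxy for elegance.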
I'm not at all sure this would actually be relevant to the rhetorical outcome, but I feel like the AI-can't-go-wrong camp wouldn't really accept the "Denier" label in the same way people in the AI-goes-wrong-by-default camp accept "Doomer." Climate change deniers agree they are deniers, even if they prefer terms like skeptic among themselves.
In the case of climate change deniers, the question is whether or not climate change is real, and the thing they are denying is the mountain of measurements showing that it is. I think what is different about the can't-go-wrong/wrong-by-default dichotomy is that the question we're arguing about is instead the direction of the change; it would be as if we transmuted the climate change denier camp into a bunch of people whose response wasn't "no it isn't" but was instead "yes, and that is great news and we need more of it."
Naturally it is weird to imagine people tacitly accepting the Mary Sue label in the same way we accept Doomer, so cut by my own knife I suppose!
ideally something similarly short and catchy with exactly the same level of implied respect
I nominate Mary Sues, after the writing trope of the same name. I say it is a good fit because these people are not thinking about a problem; they are engaging in narrative wish fulfillment instead.
With AI chatbots behaving badly around the world
Welp, I guess it is time to take a look at how to make a good-faith actual-mind-changing chatbot now.
Won't the goal of getting humans to reason better necessarily turn political at a certain point?
Trivially, yes. Among other things, we would like politicians to reason better, and for everyone to profit thereby.
I'm not here very frequently, I just really like political theory and have seen around the site that you guys try to not discuss it too much.
As it happens, this significantly predates the current political environment. Minimizing talk about politics, in the American political-party horse-race sense, is one of our foundational taboos. It is not so strong anymore - once, even a relevant keyword without appropriate caveats would draw piled-on downvotes and excoriation in the comments - but for your historical interest the relevant essay is Politics Is The Mind-Killer. You can search that phrase, or similar ones like "mind-killed" or "arguments are soldiers," to get a sense of how it went. The basic idea was that while we are all new at this rationality business, we should try to avoid talking about the topics that make people especially irrational.
Of course, at the same time the website was big on atheism, which is an irony we eventually recognized and corrected. The anti-politics taboo softened enough to allow talking about theory, and mechanisms, and even non-flashpoint policy (see the AI regulation posts). We also added things like arguing about whether or not god exists to the taboo list. There were a bunch of other developments too, but that's the directional gist.
Happily for you and me both, political theory tackled well as theory finds a good reception here. As an example I submit A voting theory primer for rationalists and the follow-up posts by Jameson Quinn. All of these are on the subject of theories of voting, including discussing some real life examples of orgs and campaigns on the subject, and the whole thing is one of my favorite chunks of writing on the site.
So if you reject the Orthogonality Thesis, what map between capability and goals are you using instead?
In the pushback on Allen's arguments, I feel like the focus on wages is misleading. The question of whether new machines are a good investment is determined by the producer's costs, not by the worker's revenue. This seems to mean:
On reflection, I suppose it might be that they are all correctly refuting a generic claim of high wages. My model is not the same, and differs in two key dimensions from a generic high-wages model: one, I model on producer costs rather than worker wages; two, I model for specific producers rather than generic workers.
Reflecting further, these might be mutually reinforcing: if labor costs are high enough for mechanization to be profitable only for some producers, those producers are also surrounded by people with much lower labor costs, and I feel like there has to be some effect of "how can I get my costs in the spinning business as low as my friend's in the dyeing business," or whichever other comparison obtains.
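The producer-costs framing above can be sketched as a toy calculation. All numbers and names here are hypothetical, purely for illustration: the point is that the same machine clears the bar for one producer and not another, depending on the labor cost it would replace.

```python
# Toy sketch (made-up numbers): a machine is worth adopting when the labor
# cost it replaces exceeds its annualized cost. This is a comparison of a
# specific producer's costs, not a function of generic worker wages.

def mechanization_profitable(annual_labor_cost, machine_annual_cost):
    """True when replacing labor with the machine lowers this producer's costs."""
    return annual_labor_cost > machine_annual_cost

machine_cost = 100        # hypothetical annualized machine cost (capital + upkeep)
high_cost_producer = 150  # labor cost the machine would replace
low_cost_producer = 60    # same machine, a producer facing cheaper labor

print(mechanization_profitable(high_cost_producer, machine_cost))  # True
print(mechanization_profitable(low_cost_producer, machine_cost))   # False
```

Under these assumptions, mechanization spreads producer by producer rather than economy-wide, which is what leaves the early adopters surrounded by lower-cost neighbors in the first place.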
I'm excited for the new set of filters that do nothing to change a video or picture except add deepfake artifacts to it. In this way, when damning video gets out, people can just quickly run it through one of these things, and then repost it themselves pointing out the artifacts and claiming the whole thing is fake.
I feel like I have the same implied confusion, but it seems like a case where we don't need it to record the same kind of steps a mathematician would use, so much as the kind of steps a mathematician could evaluate.
Although if every book, paper, or letter a mathematician ever wrote on the subject of "the steps I went through to find the proof" were scanned in, we could probably get it to tell a story of approaching the problem from a mathematician's perspective, using one of those "You are Terry Tao..."-style prompts.