Mo Putera

Long-time lurker (c. 2013), recent poster. I also write on the EA Forum.

Scott is often considered a digressive or even “astoundingly verbose” writer.

This made me realise that as a reader I care not so much about "information & ideas per word" (roughly speaking) as about "information & ideas per unit of reading effort". I'm reminded of Jason Crawford on why he finds Scott's writing good:

Most writing on topics as abstract and technical as his struggles just not to be dry; it takes effort to focus, and I need energy to read them. Scott’s writing flows so well that it somehow generates its own energy, like some sort of perpetual motion machine.

My favorite Scott essays take no effort to read due to that perpetual motion effect, so the denominator vanishes and the ratio skyrockets; the word count becomes unnoticeable. I'd guess that Scott's avid readers would mostly say the same.

I've been working with a professional editor on a report and it's amazing how much clearer and punchier the writing is after they've done a pass on my rough drafts. But the perpetual motion effect of Scott's writing is on another level: there's almost a motive force to it. 

Here's an old comment he wrote in response to Luke's "What are your favorite pieces of writing advice?", where he explains how he writes. The whole thing is worth reading; I'll quote only part of it:

There's that quote about how "the most important thing is sincerity, and if you can fake that, you've got it made." So there are two equal and opposite commandments for popular writing. First, you've got to sound like you're chatting with your reader, like you're giving them an unfiltered stream-of-consciousness access to your ideas as you think them. Second, on no account should you actually do that.

Eliezer is one of the masters at this; his essays are littered with phrases like "y'know" and "pretty much", but they're way too tight to be hastily published first drafts (or maybe I'm wrong and Eliezer is one of the few people in the world who can do this; chances are you're not). You've got to put a lot of work into making something look that spontaneous. I'm a fan of words like "sorta" and "kinda" myself, but I have literally gone through paragraphs and replaced all of the "to some degrees" with "sortas" to get the tone how I wanted it. ... 

The real meat of writing comes from an intuitive flow of words and ideas that surprises even yourself. Editing can only enhance and purify writing so far; it needs to have some natural potential to begin with. My own process here is to mentally rehearse an idea very many times without even thinking about writing. Once I'm an expert at explaining it to myself or an imaginary partner, then I transcribe the explanation I settle upon (some people say they don't think in words; I predict writing will not come naturally to these people). Then I edit the heck out of it. ...

Some people say to write down everything and only edit later. I take the opposite tack. I used to believe that I rarely edited at all because I usually publish something as soon as it's done. Then a friend watching me write said that she was getting seasick from my tendency to go back and forth deleting and rewriting the same sentence fragment or paragraph before moving on. Most likely the best writers combine both editing methods.

Chinchilla scaling finally seems to be slowing

Interesting, any pointers to further reading?
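
For context, since "Chinchilla scaling" gets used loosely (this gloss is mine, not the linked post's): the usual referent is the parametric loss fit from Hoffmann et al. (2022),

$$L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}$$

where $N$ is parameter count, $D$ is training tokens, and the fitted exponents were roughly $\alpha \approx 0.34$ and $\beta \approx 0.28$. "Slowing" would presumably mean observed loss curves bending away from this power-law fit at large $N$ and $D$, which is why I'd love pointers to the underlying data.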

Balioc's A taxonomy of bullshit jobs has a category called Worthy Work Made Bullshit, which resonated with me most of all:

Worthy Work Made Bullshit is perhaps the trickiest and most controversial category, but as far as I’m concerned it’s one of the most important.  This is meant to cover jobs where you’re doing something that is obviously and directly worthwhile…at least in theory…but the structure of the job, and the institutional demands that are imposed on you, turn your work into bullshit.   

The conceptual archetype here is the Soviet tire factory that produces millions of tiny useless toy-sized tires instead of a somewhat-smaller number of actually-valuable tires that could be put on actual vehicles, because the quota scheme is badly designed.  Everyone in that factory has a Worthy Work Made Bullshit job.  Making tires is something you can be proud of, at least hypothetically.  Making tiny useless tires to game a quota system is…not. 

Nowadays we don’t have Soviet central planners producing insane demands, but we do have a marketplace that produces comparably-insane demands, especially in certain fields. 

This is especially poignant, and especially relevant, in certain elite/creative fields where you don’t need market discipline in order to get people to produce.  All those writers who are churning out garbage clickbait?  They don’t want to be writing clickbait, any more than you want them to be writing clickbait.  If you just handed them checks and told them “go do whatever”…well, some of them would take the money and do nothing, some of them would produce worthless product that appealed to no one, but a lot of them would generate work considerably more worthwhile than clickbait.  Almost certainly not as easily monetizable, but – better, by the standards of anyone who actually cared.  Their writing has been made bullshit by the demands of an advertisement-driven system.

Academia is the ground-zero locus of this.  Academia is a world that is designed around a model of “here’s enough money to live on, go do some abstractly worthwhile thing.”  It selects for people who have the talent, and the temperament, to thrive under that kind of system.  But nowadays it mostly can’t be that, because of competitive pressures and drastic funding cuts, so it demands an ever-increasing share of bullshit from the inmates.  Thus we get the grant application circus, the publishing treadmill, etc. etc. 

It's the exponential map that's more fundamental than either e or 1/e. Alon Amit's essay is a nice pedagogical piece on this.
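
To make that concrete with textbook definitions (my summary, not specific to Amit's essay): the exponential map is characterized by its power series and its functional equation, and $e$ and $1/e$ are just two of its values.

$$\exp(x) = \sum_{n=0}^{\infty} \frac{x^n}{n!}, \qquad \exp(x+y) = \exp(x)\exp(y), \qquad \frac{d}{dx}\exp(x) = \exp(x)$$

Here $e = \exp(1)$ and $1/e = \exp(-1)$ are particular outputs; the structure doing the work in limits like $(1 - 1/n)^n \to 1/e$ belongs to $\exp$ itself, not to either constant.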

Thank you, sounds somewhat plausible to me too. For others' benefit, here's the chart from davidad's linked tweet:

[Chart from davidad's linked tweet]

What is the current best understanding of why o3 and o4-mini hallucinate more than o1? I just got round to checking out the OpenAI o3 and o4-mini System Card and in section 3.3 (on hallucinations) OA noted that 

o3 tends to make more claims overall, leading to more accurate claims as well as more inaccurate/hallucinated claims. While this effect appears minor in the SimpleQA results (0.51 for o3 vs 0.44 for o1), it is more pronounced in the PersonQA evaluation (0.33 vs 0.16). More research is needed to understand the cause of these results. 

That was as of the system card's publication on April 16, so it's only been a few weeks, but I'm wondering anyhow whether people have figured this out.
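
One toy decomposition (hypothetical figures, not OA's) shows how "more claims overall" can push both numbers up at once. If a model attempts a fraction $a$ of questions and has precision $p$ on its attempts, then over all questions

$$\text{accuracy} = a\,p, \qquad \text{hallucination rate} = a\,(1 - p)$$

Moving from $(a, p) = (0.6, 0.8)$ to $(0.9, 0.75)$ takes accuracy from $0.48$ to $0.675$ and the hallucination rate from $0.12$ to $0.225$: both rise together, which is the qualitative pattern the system card describes.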

Importantly, I value every intermediate organism in this chain

An interesting and personally relevant variant of this is when the approval goes only one direction in time. This happened to me: 2025!Mo is vastly different from 2010!Mo, in large part due to step-changes in my "coming of age" story that would've left 2010!Mo horrified (indeed, he fought the step-changes for months) but that 2025!Mo, having reached reflective equilibrium, retrospectively fully endorses.

So when I read something like Anders Sandberg's description here

There is a kind of standard argument you sometimes hear if you’re a transhumanist — like I am — that talks about life extension, where somebody cleverly points out that you would change across your lifetime. If it’s long enough, you will change into a different person. So actually you don’t get an indefinitely extended life; you just get a very long life thread. I think this is actually an interesting objection, but I’m fine with turning into a different future person. Anders Prime might have developed from Anders in an appropriate way — we all endorse every step along the way — and the fact that Anders Prime now is a very different person is fine. And then Anders Prime turns into Anders Biss and so on — a long sequence along a long thread.

I think: it's not all that likely that I'm done with the whole "coming of age" reflective equilibrium thing, so I find it very likely that there are more step-changes I'll experience that 2025!Mo would find horrifying but Future!Mo would fully endorse, contra Anders' "we all endorse every step along the way". It's not just the outcomes that Past!Mos disendorse: reflection changes what changes are endorsed too. 

This is the sort of retrospection that makes me sympathetic to what Scott said in his review of Hanson's Age of Em:

A short digression: there’s a certain strain of thought I find infuriating, which is “My traditionalist ancestors would have disapproved of the changes typical of my era, like racial equality, more open sexuality, and secularism. But I am smarter than them, and so totally okay with how the future will likely have values even more progressive and shocking than my own. Therefore I pre-approve of any value changes that might happen in the future as definitely good and better than our stupid hidebound present.” 

I once read a science-fiction story that depicted a pretty average sci-fi future – mighty starships, weird aliens, confederations of planets, post-scarcity economy – with the sole unusual feature that rape was considered totally legal, and opposition to such as bigoted and ignorant as opposition to homosexuality is today. Everybody got really angry at the author and said it was offensive for him to even speculate about that. Well, that’s the method by which our cheerful acceptance of any possible future values is maintained: restricting the set of “any possible future values” to “values slightly more progressive than ours” and then angrily shouting down anyone who discusses future values that actually sound bad. But of course the whole question of how worried to be about future value drift only makes sense in the context of future values that genuinely violate our current values. Approving of all future values except ones that would be offensive to even speculate about is the same faux-open-mindedness as tolerating anything except the outgroup.

Hanson deserves credit for positing a future whose values are likely to upset even the sort of people who say they don’t get upset over future value drift. I’m not sure whether or not he deserves credit for not being upset by it. Yes, it’s got low-crime, ample food for everybody, and full employment. But so does Brave New World. The whole point of dystopian fiction is pointing out that we have complicated values beyond material security. Hanson is absolutely right that our traditionalist ancestors would view our own era with as much horror as some of us would view an em era. He’s even right that on utilitarian grounds, it’s hard to argue with an em era where everyone is really happy working eighteen hours a day for their entire lives because we selected for people who feel that way. But at some point, can we make the Lovecraftian argument of “I know my values are provincial and arbitrary, but they’re my provincial arbitrary values and I will make any sacrifice of blood or tears necessary to defend them, even unto the gates of Hell?”

Except that I'm pretty sure Future!Mos won't be defending Past!Mos' "provincial and arbitrary values", the way 2025!Mo doesn't defend, and in fact flatly rejects, a lot of 2010!Mo's core values. I'm not sure how to think about all this.

Predictive coding research shows our brains use both bottom-up signals (intuition) and top-down predictions (systematization) in a dynamic interplay. These are integrated parts of how our brains process information. One person can excel at both.

Link is broken, can you reshare?
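
In the meantime, here's a minimal sketch of the bottom-up/top-down interplay the quote gestures at, under standard predictive-coding assumptions (my own illustration, not the broken link's content): a latent estimate generates a top-down prediction, the bottom-up signal produces a prediction error, and the estimate is nudged to reduce that error.

```python
import numpy as np

# Toy predictive coding loop: infer a latent cause from noisy observations
# by repeatedly comparing top-down predictions against bottom-up signals.
rng = np.random.default_rng(0)
weight = 2.0        # generative model: observation ~ weight * latent
true_latent = 1.5   # the hidden cause we want to infer
latent = 0.0        # initial top-down belief

for _ in range(50):
    observation = weight * true_latent + rng.normal(scale=0.1)  # noisy bottom-up signal
    prediction = weight * latent                                # top-down prediction
    error = observation - prediction                            # prediction error
    latent += 0.05 * weight * error                             # gradient step on 0.5 * error**2

print(f"inferred latent ~= {latent:.2f} (true: {true_latent})")
```

The point of the sketch: neither direction is privileged; the bottom-up error and the top-down prediction jointly drive the update, matching the "dynamic interplay" framing.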
