Mo Putera

Long-time lurker (c. 2013), recent poster. I also write on the EA Forum.

Comments

Consider donating to Alex Bores, author of the RAISE Act
Mo Putera · 10h

Tangential, but I really appreciate your explicit cost-effectiveness estimates ($85-105k per +1% increment in win probability, and 2 basis points of x-risk reduction if he wins → $4-5M per basis point, which looks fantastic against the $100M-per-basis-point bar I've seen for a 'good bet', or the $3.5B-per-basis-point ballpark willingness to pay), just because public x-risk cost-effectiveness calculations of this level of thoroughness are vanishingly rare (nothing Open Phil publishes approaches this, for instance). So thanks a million, and bookmarked for future reference on how to do this sort of calculation well for politics-related x-risk interventions.
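
To spell out the conversion for my own future reference, here's a minimal sketch of the arithmetic as I understand it, using only the figures quoted above (the variable names are mine):

```python
# Hedged reconstruction of the cost-effectiveness conversion quoted above.
# Inputs (from the post): $85-105k buys +1 percentage point of win
# probability, and a win is estimated to reduce x-risk by 2 basis points.
cost_low, cost_high = 85_000, 105_000  # USD per +1% win probability
win_prob_gain = 0.01                   # +1 percentage point, as a fraction
bp_if_win = 2                          # basis points of x-risk reduced given a win

# Expected basis points bought per donation increment: 0.01 * 2 = 0.02 bp
expected_bp = win_prob_gain * bp_if_win

cost_per_bp_low = cost_low / expected_bp    # $4,250,000 per basis point
cost_per_bp_high = cost_high / expected_bp  # $5,250,000 per basis point
print(f"${cost_per_bp_low:,.0f} to ${cost_per_bp_high:,.0f} per basis point")
# -> roughly the $4-5M/bp figure, far under the ~$100M/bp 'good bet' bar.
```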

Consider donating to Alex Bores, author of the RAISE Act
Mo Putera · 10h

See also the pushback to this same comment here, reproduced below:

I think (1) is just very false for people who might seriously consider entering government, and irresponsible advice. I've spoken to people who currently work in government, who concur that the Trump administration is illegally checking on people's track record of support for Democrats. And it seems plausible to me that that kind of thing will intensify. I think that there's quite a lot of evidence that Trump is very interested in loyalty and rooting out figures who are not loyal to him, and doing background checks, of certain kinds at least, is literally the legal responsibility of people doing hiring in various parts of government (though checking donations to political candidates is not supposed to be part of that).  

I'll also say that I am personally a person who has looked up where individuals have donated (not in a hiring context), and so am existence proof of that kind of behavior. It's a matter of public record, and I think it is often interesting to know what political candidates different powerful figures in the spaces I care about are supporting. 

If you haven't already, you might want to take a look at this post: https://forum.effectivealtruism.org/posts/6o7B3Fxj55gbcmNQN/considerations-around-career-costs-of-political-donations

Is 90% of code at Anthropic being written by AIs?
Mo Putera · 16h

In case you haven't seen it, this post is by a professional developer with experience comparable to yours (30+ years) who gets a lot of mileage out of pair programming with Claude Code in building "a pretty typical B2B SaaS product", and who credits their productivity boost to the intuitions built up over their extensive experience, which enable effective steering. I'd be curious to hear your guesses as to why your experience differs.

Postrationality: An Oral History
Mo Putera · 17h

Getting to the history of it, it really starts in my mind in Berkeley, around 2014-2015. ... At one of these parties there was this extended conversation that started between myself, Malcolm Ocean, and Ethan Ashkii.... From there we formed something like a philosophical circle. We had nominally a book club—that was the official structure of it—but it was mostly just an excuse to get together every two to three weeks and talk about whatever we had been reading in this space of how do we be rationalist but actually win. 

I think this is the seed of it. ...

I don't predict we fundamentally disagree or anything; I just thought to register that my knee-jerk reaction to this part of your oral history was "what about Scott's 2014 map?", which had already featured David Chapman, the Ribbonfarm scene (which I used to be a fan of), Kevin Simler (who unfortunately hasn't updated Melting Asphalt in years), and A Wizard's Word (which I'd honestly forgotten about).

I also vaguely recalled Darcey Riley's 2014 post Postrationality, Table of Contents, in which they claimed:

But anyway, as a result of this map, a lot of people have been asking: what is postrationality? I think Will Newsome or Steve Rayhawk invented the term, but I sort of redefined it, and it’s probably my fault that it’s come to refer to this cluster in blogspace. So I figured I would do a series of posts explaining my definition.


You say

So maybe one day we will get the postrationalist version of Eliezer. Someone will do this. You could maybe argue that David Chapman is this, but I don’t think it’s quite there yet. I don’t think it’s 100% working. The machine isn’t working quite that way.

While I do think of Chapman as being the most Eliezer-like-but-not-quite postrat solo figure with what he's doing at Meaningness, Venkat Rao seems like by far the more successful intellectual scene-creator to me, although he's definitely not postrat-Eliezer-esque at all.

How Stuart Buck funded the replication crisis
Mo Putera · 2d

A favorite essay of mine in the "personal anecdotes" department. (Stuart is also here on LW.)

I'll pull out some quotes I liked to entice folks to read the whole thing:

I.

From this point forward, I won’t narrate all of the grants and activities chronologically, but according to broader themes that are admittedly a bit retrofitted. Specifically, I’m now a fan of the pyramid of social change that Brian Nosek has written and talked about for a few years:

[image: Nosek's pyramid of social change: "make it possible" at the base, then "make it easy", "make it normative", and "make it rewarding", with "make it required" at the top]

In other words, if you want scientists to change their behavior by sharing more data, you need to start at the bottom by making it possible to share data (i.e., by building data repositories). Then try to make it easier and more streamlined, so that sharing data isn’t a huge burden. And so on, up the pyramid.

You can’t start at the top of the pyramid (“make it required”) if the other components aren’t there first. For one thing, no one is going to vote for a journal or funder policy to mandate data sharing if it isn’t even possible. Getting buy-in for such a policy would require work to make data sharing not just possible, but more normative and rewarding within a field.

That said, I might add another layer at the bottom of the pyramid: “Raise awareness of the problem.” For example, doing meta-research on the extent of publication bias or the rate of replication can make entire fields aware that they have a problem in the first place—before that, they aren’t as interested in potential remedies for improving research behaviors.

The rest of this piece will be organized accordingly:

  • Raise Awareness: fundamental research on the extent of irreproducibility;
  • Make It Possible and Make It Easy: the development of software, databases, and other tools to help improve scientific practices;
  • Make It Normative: journalists and websites that called out problematic research, and better standards/guidelines/ratings related to research quality and/or transparency;
  • Make It Rewarding: community-building efforts and new journal formats;
  • Make It Required: organizations that worked on policy and advocacy.

II.

On p-values and science communication done well:

In 2015, METRICS (the Meta-Research Innovation Center at Stanford) hosted an international conference on meta-research that was well-attended by many disciplinary leaders. The journalist Christie Aschwanden was there, and she went around ambushing the attendees (including me) by asking politely, “I’m a journalist, would you mind answering a few questions on video?,” and then following that with, “In layman’s terms, can you explain what is a p-value?” The result was a hilarious “educational” video and article, still available here. I was singled out as the one person with the “most straightforward explanation” of a p-value, but I did have an advantage — thanks to a job where I had to explain research issues on a daily basis to other foundation employees with little research background, I was already in the habit of boiling down complicated concepts.

(There's a longer passage further down on Stuart's experience consulting with the John Oliver Show where he rewrote the script on how to talk about p-values properly.)

III. 

On how Stuart thinks his success as a "metascience venture capitalist" would've been far less if he'd been forced to project high EVs for each grant:

One grantee wrote to me:

“That grant was a real accelerator. The flexibility (which flows from trust, and confidence, in a funder) was critical in being able to grow, and do good work. It also helped set my expectations high around funders being facilitative rather than obstructive (possibly too high…). I think clueful funding is critical, I have seen funders hold projects and people back, not by whether they gave money, but how they gave it, and monitored work afterwards.”

To me, that captures the best of what philanthropy can do. Find great people, empower them with additional capital, and get out of their way.

By contrast, government funders make grants according to official criteria and procedures. Private philanthropy often acts the same way. As a result, there aren’t enough opportunities for innovative scientists or metascientists to get funding for their best ideas.

My own success as a metascience VC would have been far less if I had been forced to project a high expected-value for each grant. Indeed, such a requirement would have literally ruled out many of the highest-impact grants that I made (or else I would have been forced to produce bullshit projections).

The paradox is that the highest-impact work often cannot be predicted reliably in advance. Which isn’t that surprising. As in finance, the predictable activities that might lead to high impact are essentially priced into the market, because people and often entire organizations will already be working on those activities (often too much so!).

If you want to make additional impact beyond that, you’re left with activities that can’t be perfectly predicted and planned several years in advance, and that require some insight beyond what most peer reviewers would endorse.

What’s the solution? You have to rely on someone’s instinct or “nose” for smelling out ideas where the only articulable rationale is, “These people seem great and they’ll probably think of something good to do,” or “Not sure why, but this idea seems like it could be really promising.” In a way, it’s like riding a bicycle: it depends heavily on tacit and unarticulable knowledge, and if you tried to put everything in writing in advance, you would just make things worse.

Both public and private funders should look for more ways for talented program officers to hand out money (with few or no strings attached) to people and areas that they feel are promising. That sort of grantmaking might never be 100% when it comes to public funds at NIH or NSF, but maybe it could be 20%, just as a start. I suspect the results would be better than today, if only by increasing variance.

Mo Putera's Shortform
Mo Putera · 2d

I can't tell from their main text whether the human authors of this math paper that solved the $1,000 Erdős problem 707 used ChatGPT-5 Pro or Thinking or what. Supposing they didn't use Pro, I wonder how their experience would've been if they had; they said that vibe-coding the 6,000+ line Lean proof with ChatGPT took about a week and was "extremely annoying".

(Technically, one of the authors said Marshall Hall Jr. had already solved it in 1947 via counterexample.)


I dislike hype-flavored summaries by the likes of Sébastien Bubeck et al., so I appreciated these screenshots of the paper and accompanying commentary by @life2030com on how the authors felt about using ChatGPT to assist them in all this:

[screenshots of the paper and @life2030com's commentary omitted]

I found that "curious inversion" remark at the end interesting too.

Humanity Learned Almost Nothing From COVID-19
Mo Putera · 3d

To be honest I was ready to believe it (especially since your writings are usually analytically thorough), and was just curious about the derivation! Thanks for the post.

Humanity Learned Almost Nothing From COVID-19
Mo Putera · 3d

The loss of gross world product is around $82 trio. over five years

This isn't a retrospective assessment; it's the worst-case projection out of four scenario forecasts done in May 2020, ranging from $3.3 to $82 trillion over 5 years, using an undefined, reasoning-nontransparent metric called "GDP@Risk" that I couldn't find anything on after a quick search.

Mo Putera's Shortform
Mo Putera · 4d

The most vivid passage I've read recently on trying hard, which reminded me of Eliezer's Challenging the Difficult sequence, is the opener of John Psmith's review of Reentry by Eric Berger:

My favorite ever piece of business advice comes from a review by Charles Haywood of a book by Daymond John, the founder of FUBU. Loosely paraphrased, the advice is: “Each day, you need to do all of the things that are necessary for you to succeed.” Yes, this is tautological. That’s part of its beauty. Yes, actually figuring out what it is you need to do is left as an exercise for the reader. How could it be otherwise? But the point of this advice, the stinger if you will, is that most people don’t even attempt to follow it.

Most people will make a to-do list, do as many of the items as they can until they get tired, and then go home and go to bed. These people will never build successful companies. If you want to succeed, you need to do all of the items on your list. Some days, the list is short. Some days, the list is long. It doesn’t matter, in either case you just need to do it all, however long that takes. Then on the next day, you need to make a new list of all the things you need to do, and you need to complete every item on that list too. Repeat this process every single day of your life, or until you find a successor who is also capable of doing every item on their list, every day. If you slip up, your company will probably die. Good luck.

A concept related to doing every item on your to-do list is “not giving up.” I want you to imagine that it is a Friday afternoon, and a supplier informs you that they are not going to be able to deliver a key part that your factory needs on Monday. Most people, in most jobs, will shrug and figure they’ll sort it out after the weekend, accepting the resulting small productivity hit. But now I want you to imagine that for some reason, if the part is not received on Monday, your family will die.

Are you suddenly discovering new reserves of determination and creativity? You could call up the supplier and browbeat/scream/cajole/threaten them. You could LinkedIn stalk them, find out who their boss is, discover that their boss is acquaintances with an old college friend, and beg said friend for the boss’s contact info so you can apply leverage (I recently did this). You could spend all night calling alternative suppliers in China and seeing if any of them can send the part by airmail. You could spend all weekend redesigning your processes so the part is unnecessary. And I haven’t even gotten to all the illegal things you could do! See? If you really, really cared about your job, you could be a lot more effective at it.

Most people care an in-between amount about their job. They want to do right by their employer and they have pride in their work, but they will not do dangerous or illegal or personally risky things to be 5% better at it, and they will not stay up all night finishing their to-do list every single day. They will instead, very reasonably, take the remaining items on their to-do list and start working on them the next day. Part of what makes “founder mode” so effective is that startup founders have both a compensation structure and social permission that lets them treat every single issue that comes up at work as if their family is about to die.

The rest of the review is about Elon and SpaceX, who are well beyond "founder mode" in trying hard; the anecdotes are both fascinating and a bit horrifying in the aggregate, but also useful in recalibrating my internal threshold for what actually trying hard looks like and whether that's desirable (short answer: no, but a part of me finds it strangely compelling). It also makes me somewhat confused as to why I get the sense that some folks with both high p(doom)s and a bias towards action aren't trying as hard, in a missing mood sort of way. (It's possible I'm simply wrong; I'm not working on anything alignment-related and am simply going off vibes across LW/AF/TPOT/EAGs/Slack/Discord etc.)

This reminded me of another passage by Some Guy armchair psychologizing Elon (so take this with a truckload of salt):

Imagine you’re in the cockpit of an airplane. There’s a war going on outside and the plane has taken damage. The airport where you were going to land has been destroyed. There’s another one, farther away, but all the dials and gauges are spitting out one ugly fact. You don’t have the fuel to get there.

The worst part of your situation is that it’s not hopeless. If you are willing to do the unthinkable you might survive.

You go through the plane with a wrench and you start stripping out everything you possibly can. Out the door it goes. The luggage first. The seats. The overhead storage bins. Some of this stuff you can afford to lose, but it’s not enough to get where you’re going. All the easy, trivial decisions are made early.

Out goes the floor paneling and back-up systems. Wires and conduits and casing. Gauges for everything you don’t need, like all the gauges blaring at you about all the things you threw out the door. You have to stand up in the cockpit because your pilot chair is gone. Even most of the life support systems are out the door because if you can’t get to the other airport you’re going to die anyway. The windows were critical to keep the plane aerodynamic but as long as you can shiver you don’t think you’ll freeze to death so your coat went out the window as well. Same with all the systems keeping the air comfortable in the cabin, so now you’re gasping just to stay standing.

Everything you’re doing is life or death. Every decision.

This is the relationship that Elon has with his own psyche. Oh, it’s not a perfect analogy but this seems close enough to me. There’s some chicken and the egg questions here for me, but consider the missions he’s chosen. All of them involve the long-term survival of humanity. Every last one. ... If he didn’t choose those missions because he has a life-or-death way of looking at the world, he certainly seems to have acquired that outlook after the decades leading those companies.

This makes sense when you consider the extreme lengths he’s willing to push himself to in order to succeed. In his own mind, he’s the only thing that stands between mankind and oblivion. He’s repurposed every part of his mind that doesn’t serve the missions he’s selected. Except, of course, no human mind could bear that kind of weight. You can try, and Elon has tried, but you will inevitably fail. ... 

Put yourself back in the cockpit of the plane.

You tell yourself that none of it matters even if part of you knows that some of your behavior is despicable, because you have to land the plane. All of humanity is on the plane and they’re counting on you to make it to the next airport. You can justify it all away because humanity needs you, and just you, to save it.

Maybe you’ve gone crazy, but everyone else is worse off.

People come into the cockpit to tell you how much better they would do at flying the plane than you. Except none of them take the wheel. None of them even dream of taking the wheel.

You try to reason with them, explain your actions, tell them about the dangers, but all they do is say it doesn’t seem so bad. The plane has always flown. They don’t even look at the gauges. The plane has always flown! Just leave the cockpit and come back into the cabin. It’s nice back there. You won’t have to look at all those troubling gauges!

Eliezer gives me this "I'm the only person willing to try piloting this doomed plane" vibe too.

Mo Putera's Shortform
Mo Putera · 4d

Interesting take on language evolution in humans by Max Bennett, from his book A Brief History of Intelligence, via Sarabet Chang Yuye's review, which I found through Byrne Hobart's newsletter. Hobart caught my eye when he wrote (emphasis mine):

There's also a great bit towards the end that helps to explain two confusing stylized facts: humans don't seem to have much speech-specific hardware that other primates lack, but we're better at language, and the theory of language evolving to support group coordination requires a lot of activation energy. But if language actually started out one-on-one, between mothers and infants, that neatly solves both problems.

The bit towards the end by Yuye (emphasis mine):

The hardest thing to explain about humans, given that their brains underwent no structural innovation, is language.

(Our plausible range for language is 100-500K years ago. Modern humans exhibit about the same language proficiencies and diverged ~100K years ago, which is also when symbology like cave art show up. Before 500K the larynx and vocal cords weren’t adapted to vocal language.)

Apes can be taught sign language (since they’re physically not able to speak as we do), and there are multiple anecdotes of apes recombining signs to say new things. But they never surpass a young human child. How are we doing that? What’s going on in the brain?

Okay, sure, we’ve heard of Broca’s area and Wernicke’s area. They’re in the middle of the primate mentalizing regions. But chimps have those same areas, wired in the same ways. Plus, children with their entire left hemisphere (where those regions usually live) removed can still learn language fine.

If not a specific region, then what? The human ability to do this probably comes not from a cognitive advancement (although it can’t hurt that our brains are three times bigger than chimps’) but rather tweaks to developmental behavior and instincts.

Here are two things about human children that are not true of chimp children:

  • At 4 months, they engage in proto-conversation, taking turns with their parents in back-and-forth vocalizations.
  • At 9 months, they start doing “joint attention to objects”: pointing at things and wanting the parent to look at the object, or looking at what their mom is pointing at and interacting with it.

(You can see that if language arose as a mother-child activity that improved the child’s tool use, there’s no need to lean on group selection to explain its evolutionary advantage.)

Chimps don’t do either. They do gaze following, yes, but they don’t crave joint attention like human children. And what does a human parent do when they achieve joint attention? They assign labels to the object.

To get a chimp to speak language, it would help to beef up their brain, but this wouldn’t be enough – you’d have to change their instincts to engage in childhood play that is ‘designed’ for language acquisition. The author’s conclusion:

There is no language organ in the human brain, just as there is no flight organ in the bird brain. Asking where language lives in the brain may be as silly as asking where playing baseball or playing guitar lives in the brain. Such complex skills are not localized to a specific area; they emerge from a complex interplay of many areas. What makes these skills possible is not a single region that executes them but a curriculum that forces a complex network of regions to work together to learn them.

So this is why your brain and a chimp brain are practically identical and yet only humans have language. What is unique in the human brain is not in the neocortex; what is unique is hidden and subtle, tucked deep in older structures like the amygdala and brain stem. It is an adjustment to hardwired instincts that makes us take turns, makes children and parents stare back and forth, and that makes us ask questions.

This is also why apes can learn the basics of language. The ape neocortex is eminently capable of it. Apes struggle to become sophisticated at it merely because they don’t have the required instincts to learn it. It is hard to get chimps to engage in joint attention; it is hard to get them to take turns; and they have no instinct to share their thoughts or ask questions. And without these instincts, language is largely out of reach, just as a bird without the instinct to jump would never learn to fly.

As weak indirect evidence that the major difference is about language acquisition instinct, not language capability: Homo floresiensis underwent a decrease in brain and body size in their island environment (until their brains were comparable in size to chimpanzees’), but they kept manufacturing stone tools that may have required language to pass on.

Posts:

  • Mo Putera's Shortform (9mo)
  • Non-loss of control AGI-related catastrophes are out of control too (2y)
  • How should we think about the decision relevance of models estimating p(doom)? [Question] (2y)