Mo Putera

I've been lurking on LW since 2013, but only started posting recently. My day job was "analytics broadly construed" although I'm currently exploring applied prio-like roles; my degree is in physics; I used to write on Quora and Substack but stopped, although I'm still on the EA Forum. I'm based in Kuala Lumpur, Malaysia.


One frustrating conversation was about persuasion. Somehow there continue to be some people who can at least somewhat feel the AGI, but also genuinely think humans are at or close to the persuasion possibilities frontier – that there is no room to greatly expand one’s ability to convince people of things, or at least of things against their interests.

This is sufficiently absurd to me that I don’t really know where to start, which is one way humans are bad at persuasion. Obviously, to me, if you started with imitations of the best human persuaders (since we have an existence proof for that), and on top of that could correctly observe and interpret all the detailed signals, have limitless time to think, a repository of knowledge, the chance to do Monte Carlo tree search of the conversation against simulated humans, never make a stupid or emotional tactical decision, and so on, you’d be a persuasion monster. It’s a valid question ‘where on the tech tree’ that shows up and how much versus other capabilities, but it has to be there. But my attempts to argue this proved, ironically, highly unpersuasive.

Scott tried out an intuition pump in responding to nostalgebraist's skepticism:

Nostalgebraist: ... it’s not at all clear that it is possible to be any better at cult-creation than the best historical cult leaders — to create, for instance, a sort of “super-cult” that would be attractive even to people who are normally very disinclined to join cults.  (Insert your preferred Less Wrong joke here.)  I could imagine an AI becoming L. Ron Hubbard, but I’m skeptical that an AI could become a super-Hubbard who would convince us all to become its devotees, even if it wanted to.

Scott: A couple of disagreements. First of all, I feel like the burden of proof should be heavily upon somebody who thinks that something stops at the most extreme level observed. Socrates might have theorized that it’s impossible for it to get colder than about 40 F, since that’s probably as low as it ever gets outside in Athens. But when we found the real absolute zero, it was with careful experimentation and theoretical grounding that gave us a good reason to place it at that point. While I agree it’s possible that the best manipulator we know is also the hard upper limit for manipulation ability, I haven’t seen any evidence for that so I default to thinking it’s false.

(lots of fantasy and science fiction does a good job intuition-pumping what a super-manipulator might look like; I especially recommend R. Scott Bakker’s Prince Of Nothing)

But more important, I disagree that L. Ron Hubbard is our upper limit for how successful a cult leader can get. L. Ron Hubbard might be the upper limit for how successful a cult leader can get before we stop calling them a cult leader.

The level above L. Ron Hubbard is Hitler. It’s difficult to overestimate how sudden and surprising Hitler’s rise was. Here was a working-class guy, not especially rich or smart or attractive, rejected from art school, and he went from nothing to dictator of one of the greatest countries in the world in about ten years. If you look into the stories, they’re really creepy. When Hitler joined, the party that would later become the Nazis had a grand total of fifty-five members, and was taken about as seriously as modern Americans take Stormfront. There are records of conversations from Nazi leaders when Hitler joined the party, saying things like “Oh my God, we need to promote this new guy, everybody he talks to starts agreeing with whatever he says, it’s the creepiest thing.” There are stories of people who hated Hitler going to a speech or two just to see what all the fuss was about and ending up pledging their lives to the Nazi cause.  Even while he was killing millions and trapping the country in a difficult two-front war, he had what historians estimate as a 90% approval rating among his own people and rampant speculation that he was the Messiah. Yeah, sure, there was lots of preexisting racism and discontent he took advantage of, but there’s been lots of racism and discontent everywhere forever, and there’s only been one Hitler. If he’d been a little bit smarter or more willing to listen to generals who were, he would have had a pretty good shot at conquering the world. 100% with social skills.

The level above Hitler is Mohammed. I’m not saying he was evil or manipulative, just that he was a genius’ genius at creating movements. Again, he wasn’t born rich or powerful, and he wasn’t particularly scholarly. He was a random merchant. He didn’t even get the luxury of joining a group of fifty-five people. He started by converting his own family to Islam, then his friends, got kicked out of his city, converted another city and then came back at the head of an army. By the time of his death at age 62, he had conquered Arabia and was its unquestioned, God-chosen leader. By what would have been his eightieth birthday his followers were in control of the entire Middle East and good chunks of Africa. Fifteen hundred years later, one fifth of the world population still thinks of him as the most perfect human being ever to exist and makes a decent stab at trying to conform to his desires and opinions in all things.

The level above Mohammed is the one we should be worried about. 


I like it too, although there are 500+ fiction posts on LW (not including the subreddit), so you probably meant something else.

What about just not pursuing a PhD and instead doing what OP did? With the PhD you potentially lose #1 in 

I actually think that you can get great results doing research as a hobby because

  • it gives you loads of slack, which is freedom to do things without constraints. In this context, I think slack is valuable because it allows you to research things outside of the publishing mainstream.
  • and less pressure.

I think these two things are crucial for success. The slack allows you to look at risky and niche ideas that are more likely to yield better research rewards if they are true, since surprising results will trigger further questions.

Also, since you are more likely to do better at topics you enjoy, getting money from a day job allows you to actually pursue your interests or deviate from your supervisor’s wishes. Conversely, it also allows you to give up when you’re not enjoying something.

which is where much of the impact comes from, especially if you subscribe to a multiplicative view of impact.

Wikipedia says it's a SaaS company "specializing in AI-powered document processing and automation, data capture, process mining and OCR": https://en.wikipedia.org/wiki/ABBYY 

To be clear, GiveWell won’t be shocked by anything I’ve said so far. They’ve commissioned work and published reports on this. But as you might expect, these quality of life adjustments wouldn’t feature in GiveWell’s calculations anyway, since the pitch to donors is about the price paid for a life, or a DALY.

Can you clarify what you mean by these quality of life adjustments not featuring in GiveWell's calculations? 

To be more concrete, let's take their CEA of HKI's vitamin A supplementation (VAS) program in Burkina Faso. They estimate that a $1M grant would avert 553 under-5 deaths (~80% of total program benefit) and incrementally increase future income for the ~560,000 additional children receiving VAS (~20% of benefit); these figures vary considerably by location, by the way, from 60 deaths averted in Anambra, Nigeria to 1,475 in Niger. They then convert all of this to 81,811 income-doubling equivalents (their altruistic common denominator — they don't use DALYs in any of their CEAs, so I'm always befuddled when people claim they do), apply a lot of leverage- and funging-related adjustments which reduce this to 75,272 income doublings, and finally compare that with the 3,355 income doublings they estimate would be generated by donating the same $1M to GiveDirectly, which yields their 22.4x cash multiplier for HKI VAS in Burkina Faso.
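Just to make the arithmetic of that last step explicit, here's a minimal sketch in Python using only the figures quoted above; the upstream conversions from deaths averted and income gains into income-doubling equivalents live in GiveWell's published CEA and aren't reproduced here.

```python
# Sketch of the final step in GiveWell's cash-multiplier arithmetic for
# HKI VAS in Burkina Faso, using the figures quoted above (not GiveWell's actual model).
hki_income_doublings_raw = 81_811       # program benefits converted to income-doubling equivalents
hki_income_doublings_adjusted = 75_272  # after leverage- and funging-related adjustments
givedirectly_income_doublings = 3_355   # the same $1M given as direct cash transfers

cash_multiplier = hki_income_doublings_adjusted / givedirectly_income_doublings
print(f"{cash_multiplier:.1f}x")  # ~22.4x
```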

So: are you saying that GiveWell should add a "QoL discount" when converting lives saved and income increases into income-doubling equivalents, like what the Happier Lives Institute suggests for non-Epicurean accounts of the badness of death?

the obvious thing to happen is that nvidia realizes it can just build AI itself. if Taiwan is Dune, GPUs are the spice, then nvidia is house Atreides

They've already started... 

You mention in another comment that your kid reads the encyclopaedia for fun, in which case I don't think The Martian would be too complex, no? 

I'm also reminded of how I started perusing the encyclopaedia for fun at age 7. At first I understood basically nothing (English isn't my native language), but I really liked certain pictures and diagrams and kept going back to them wanting to learn more, realising that I'd comprehend, say, 20% more each time, which taught me to chase exponential growth in comprehension. Might be worth teaching that habit.

Society seems to think pretty highly of arithmetic. It’s one of the first things we learn as children. So I think it’s weird that only a tiny percentage of people seem to know how to actually use arithmetic. Or maybe even understand what arithmetic is for.

I was a bit thrown off by the seeming mismatch between the title ("underrated") and this introduction ("rated highly, but not used or understood as well as dynomight prefers").

The explanation seems straightforward: arithmetic at the fluency you display in the post is not easy, even with training. If you only spend time with STEM-y folks you might not notice, because they're a very numerate bunch. I'd guess I'm about average w.r.t. STEM-y folks and worse than you are, but I do quite a bit of spreadsheet-modeling for work, and I have plenty of bright hardworking colleagues who can't quite do the same at my level even though they want to, which suggests not underratedness but difficulty.

(To be clear I enjoy the post, and am a fan of your blog. :) )
