Raemon

LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.

Sequences

Feedbackloop-First Rationality
The Coordination Frontier
Privacy Practices
Keep your beliefs cruxy and your frames explicit
LW Open Source Guide
Tensions in Truthseeking
Project Hufflepuff
Rational Ritual
Drawing Less Wrong

Comments

Raemon

I had interpreted your initial comment to mean "this post doesn't accurately characterize Paul's views" (as opposed to "John is confused/wrong about the object level of 'is verification easier than generation' in a way that is relevant for modeling AI outcomes").

I think your comment elsethread was mostly commenting on the object level. I'm currently unsure if your line "I don't think this characterization is accurate at all" was about the object level, or about whether this post successfully articulates a difference in Paul's views vs John's.

Raemon

Curated. This post gave me a lot of concrete gears for understanding and predicting how AI will affect the economy in the near future. This wasn't quite "virtue of scholarship" (my impression is Sarah is more reporting firsthand experience than doing research), but I appreciated the rich details.

I'm generally interested in curating posts where someone with a lot of industry experience writes up details about that industry.

Some particular notes:

  • I'm not surprised by "companies store their data really badly and siloed", but I appreciated the gears of several incentives that make this not trivial to fix by just saying "c'mon guys" (e.g. legitimate fear of losing trade secrets), as well as dumb screwups.
  • Correspondingly, understanding how human-social-labor-intensive Palantir's business model is, and why that's hard to replicate. (This fits into John Wentworth's Coordination as Scarce Resource)
  • Generally appreciating how long it takes technological changes to propagate.

Raemon

Thanks. I sent in a letter.

I'm confused about the "it's gotta be a PDF" bit, but I guess arbitrary bureaucracies gotta arbitrary bureaucracy.

Raemon

Did you mean Zach Stein-Perlman or Zac Hatfield-Dodds?

Raemon

The original text has a couple of sentences noting bits that "aren't actually important to the story, but are just necessary to make it a coherent world." (e.g. it's important that the civilization needs to wait millions of years for the message to play out, and somehow you need to explain how society remains reasonably static while doing so)

Raemon

This seems right, though I'd interpreted the context of Sarah's post to be more about what we expect in a pre-superintelligence economy.

Raemon

I think AGI Safety From First Principles by Richard Ngo is probably good.

I think AGI Ruin: A List of Lethalities is comprehensive but also sort of advanced and skips over the two basic bits.

Raemon

I don't see it at all. Is it supposed to be generally available (to paid users)?

Raemon

"Should orangutans have felt save inventing humans" is an unnecessarily abstract question, why not just ask whether orangutans have benefited from the existence of humans or not.

 

I'm not sure I can model what a layperson would think, but fwiw I think the "should orangutans have invented humans?" framing is much more direct as an intuition pump here. Yes, it's a bit abstract, but it more directly prompts me to think "we may be inventing AI that is powerful relative to us the way we're powerful relative to chimps."

Raemon

An update I wanted to come back to make was "art is a scalar, not a boolean." Art that involves more interesting choices, technique, and deliberate psychological effects on viewers is "more arty." Clicking a filter in Photoshop on a photo someone else took is, maybe, a 0.5 on a 1-10 scale. I honestly do rank much photography as lower on the "is it art?" scale than equivalent paintings.

A lot of AI art will be "slop" that is very low-but-nonzero on the art scale. 

Art is somewhat anti-inductive or "zero sum"[1], where if it turns out that everyone makes identical beautiful things with a click that would previously have required tons of technique and choicefulness to create, that stuff ends up lower on the artiness scale than previously, and the people who are somehow innovating with the new tools count as more arty. 

The first person to make the Balenciaga Harry Potter AI clip was making art. Subsequent Balenciaga meme clips are much less arty. I like to think that my WarCraft Balenciaga video was "less arty than the original but moreso than most of the dross."

  1. ^

    This is somewhat an abuse of what 'zero sum' means; I think the sum of art can change, but it's sort of... resistant to change.
