(caveat: I'm still reading the book)
The book takes a risk by ignoring some of the more nuanced arguments (intentionally, I assume), especially your Tricky hypothesis 2. I think the authors are trying to shock the Overton window with the very real risk of death by alignment failure if society continues with business as usual. The risk management seems to be:
A) Yet another carefully hedged warning call (like this one). Result:
B) If Anyone Builds It, Everyone Dies. Result:
If these numbers are halfway right, B seems advisable? And you can still do A if it fails!
Related to that: you have far fewer variables under consideration that you can even have standard names for. A remnant of this effect can be seen in typical Fortran programs.
The spoilered map is no longer rendering, but a direct link is here:
As to the "why are there two mechanisms that do about the same thing?", I guess this is part of the answer:
So if you’re an animal at constant risk of having your behavior hijacked by parasites, what do you do?
First, you make your biological signaling cascades more complicated. You have multiple redundant systems controlling every part of behavior, and have them interact in ways too complicated for any attacker to figure out.
and with actual revenue.
“It’s not just the number one or two companies -- the whole batch is growing 10% week on week,”
YC makes all startups in the batch report KPIs, even from before being accepted into the batch. If you participate in their Startup School, you are asked to track and report weekly numbers, such as the number of users.
Paul Graham posts unlabeled charts from YC startups every now and then, so I assume the aggregate of all of these is what Garry Tan is referring to. Unfortunately, it is not possible to reproduce his analysis. But we should see the effect with the next round of exits: they should happen faster, or at higher valuations, compared to previous batches.
Relevant study:
Cindy Meston and Penny Frohlich (2003) investigated how residual physiological arousal from a roller‑coaster ride affects perceptions of attractiveness. Participants at an amusement park either just finished or were about to begin a ride. They then rated the attractiveness and dating desirability of an opposite‑gender target photograph.
Those exiting the ride rated the photographed person as significantly more attractive and more desirable for dating than those entering, but only when riding with a non‑romantic partner. The fear‑induced arousal from the ride could get misattributed to attractiveness when the actual source (the ride) isn’t consciously linked to the arousal.
https://labs.la.utexas.edu/mestonlab/files/2016/05/excitation-transfer.pdf
I don’t like that the written transcripts of the videos don’t read as well as written posts would. Or at least that’s what I think. They contain a lot more fluff, which is more tolerable when speaking, but less so in writing.
Paul Graham discusses that good thinking requires good writing and vice versa.
someone who never writes has no fully formed ideas about anything nontrivial.
I'm saying thinking well is a necessary condition for writing really well, not a sufficient one.
You have written a lot, but maybe what you notice in your video transcripts is that one of the effects of writing is missing. I don't think it has to be literal writing. Generalizing Paul Graham: for clear thinking, you need to put ideas in a form that forces precision and makes them grow into more. The "grow into more" part seems clearly to be the case, given the engagement and the posts here. But video often lacks the precision; maybe that's what shows up as fluff?
I can relate to the feeling. Whenever something I posted got downvoted without comment, I wondered about the reasons. Without comments, what can the poster learn from the downvotes? It feels like being sent away. Which it might be. But that's how a community maintains its standards, for better or worse. You point out the "...or worse." I think it is a risk worth taking. The alternative is Well-Kept Gardens Die By Pacifism.
I don't think you are arguing only about the title. Titles naturally have to simplify, but the book content has to support it. The "with techniques like those available today" in "If anyone builds it (with techniques like those available today), everyone dies" sure is an important caveat, but arguably it is the default. And, as Buck agrees, the authors do qualify it that way in the book. You don't have to repeat the qualification each time you mention it.
The core disagreement doesn't seem to be about that but about leaving out Tricky hypothesis 2. I'm less sure that is an intentional omission by the authors. Yudkowsky sure has argued many times that alignment is tricky and hard and may feel that the burden of proof is on the other side now.