yagudin

I think it might be good to normalize "just try stuff until something fixes your condition" as one of the treatment strategies. I guess it's a bit ironic that Dr. Spray-n-pray's indifference toward which pill worked and why seems so epistemically careless, while it may actually be the right way to orient toward success when you are optimizing for luck and have little reliable information.

  1. Russian military doctrine allows the usage of nuclear weapons to defend Russian territory.


This is ~false. See: https://forum.effectivealtruism.org/posts/TkLk2xoeE9Hrx5Ziw/nuclear-attack-risk-implications-for-personal-decision?commentId=ukEznwTnD78wFdZip#ukEznwTnD78wFdZip

Trust
Rule Thinkers In, Not Out (Scott Alexander)
Gears vs Behavior (John S. Wentworth)
Book Review: The Secret Of Our Success (Scott Alexander)
Reason isn't magic (Ben Hoffman)
"Other people are wrong" vs "I am right" (Buck Shlegeris)
In My Culture (Duncan Sabien)
Chris Olah's views on AGI safety (Evan Hubinger)
Understanding "Deep Double Descent" (Evan Hubinger)
How to Ignore Your Emotions (while also thinking you're awesome at emotions) (Hazard)
Paper-Reading for Gears (John S. Wentworth)
Book summary: Unlocking the Emotional Brain (Kaj Sotala)
Noticing Frame Differences (Raymond Arnold)
Propagating Facts into Aesthetics (Raymond Arnold)
Do you fear the rock or the hard place? (Ruben Bloom)
Mental Mountains (Scott Alexander)
Steelmanning Divination (Vaniver)

Modularity
Book Review: Design Principles of Biological Circuits (John S. Wentworth)
Reframing Superintelligence: Comprehensive AI Services as General Intelligence (Rohin M. Shah)
Building up to an Internal Family Systems model (Kaj Sotala)
Being the (Pareto) Best in the World (John S. Wentworth)
The Schelling Choice is "Rabbit", not "Stag" (Raymond Arnold)
Literature Review: Distributed Teams (Elizabeth Van Nostrand)
Gears-Level Models are Capital Investments (John S. Wentworth)
Evolution of Modularity (John S. Wentworth)
You Have About Five Words (Raymond Arnold)
Coherent decisions imply consistent utilities (Eliezer Yudkowsky)
Alignment Research Field Guide (Abram Demski)
Forum participation as a research strategy (Wei Dai)
The Credit Assignment Problem (Abram Demski)
Selection vs Control (Abram Demski)

Incentives
Asymmetric Justice (Zvi Mowshowitz)
The Copenhagen Interpretation of Ethics (Jai Dhyani)
Unconscious Economics (Jacob Lagerros)
Power Buys You Distance From The Crime (Elizabeth Van Nostrand)
Seeking Power is Often Convergently Instrumental in MDPs (Alexander Turner & Logan Smith)
Yes Requires the Possibility of No (Scott Garrabrant)
Mistakes with Conservation of Expected Evidence (Abram Demski)
Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists (Zack M. Davis)
Excerpts from a larger discussion about simulacra (Ben Hoffman)
Moloch Hasn’t Won (Zvi Mowshowitz)
Integrity and accountability are core parts of rationality (Oliver Habryka)
The Real Rules Have No Exceptions (Said Achmiz)
Simple Rules of Law (Zvi Mowshowitz)
The Amish, and Strategic Norms around Technology (Raymond Arnold)
Risks from Learned Optimization: Introduction (Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, & Scott Garrabrant)
Gradient hacking (Evan Hubinger)

Failure
The Parable of Predict-O-Matic (Abram Demski)
Blackmail (Zvi Mowshowitz)
Bioinfohazards (Megan Crawford, Finan Adamson, & Jeffrey Ladish)
What failure looks like (Paul Christiano)
Seeking Power is Often Convergently Instrumental in MDPs (Alexander Turner & Logan Smith)
AI Safety “Success Stories” (Wei Dai)
Reframing Impact (Alexander Turner)
The strategy-stealing assumption (Paul Christiano)
Is Rationalist Self-Improvement Real? (Jacob Falkovich)
The Curse Of The Counterfactual (P.J. Eby)
human psycholinguists: a critical appraisal (Nostalgebraist)
Why wasn't science invented in China? (Ruben Bloom)
Make more land (Jeff Kaufman)
Rest Days vs Zombie Days (Lauren Lee)


Here is a Google sheet.

I want to mention that Tsvi Benson-Tilsen is a mentor at this summer's PIBBSS. So some readers might consider applying (the deadline is Jan 23rd).

I myself was once mentored by Abram Demski through the FHI SRF, which AFAIK matched fellows with a large pool of researchers based on mutual interests.

I am looking for text-to-speech tools for various contexts.  As of now, I am using

I would appreciate it if the ToC linked to the web versions of the essays.

A follow-up (h/t LW review). I got quite a bit out of the workshop; most importantly:

  • I found a close friend and collaborator, whom I don't think I would have met otherwise.
  • I found a close friend and co-founder, whom I likely would have met anyway, but it's unlikely we would have formed a strong enough bond by Covid times.

There was much more, but it was much less legible and "evaluatable." I think ESE was excellent, and I would have done it even if I had known that I wouldn't get two close friendships out of it.

Or, to change tack: the operating budget of the LessWrong website has historically been ~$600k, and this budget is artificially low because the site has paid extremely below-market salaries. Adjusting for the market value of the labor, the cost is more like $1M/year, or $2,700/day. If I assume LessWrong generates more value than the cost required to run it, I estimate that the site provides at least $2,700/day in value, probably a good deal more.


I think this estimate is mistaken because it ignores marginalism: basically, the cost of disabling LW for a year is much larger than 365 × the cost of disabling it for only a day. The same goes for disabling the whole website vs. disabling only the frontpage.
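As a quick sanity check of the quoted back-of-the-envelope division (the ~$1M/year market-adjusted figure is from the quote above; a 365-day year is assumed):

```python
# Sanity-check the quoted division: a ~$1M/year operating cost
# (market-adjusted, per the quote) spread evenly over 365 days.
annual_cost_usd = 1_000_000
per_day_usd = annual_cost_usd / 365
print(f"${per_day_usd:,.0f}/day")  # prints "$2,740/day", close to the quoted ~$2,700
```

Note this even split is exactly what the marginalism objection targets: the value of any particular day need not equal the annual average.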

(Sorry for rubbing salt into hurt feelings; I'm posting because impact evaluation of longtermist projects is important.)

Maybe reading Gelman's self-contained comments on SSC's More Confounders would make you more confused in a good way.
