All Posts

Sorted by Top

Week Of Sunday, July 16th 2023

No posts for this week
Shortform
Elizabeth · 23 points · 3d
ooooooh actual Hamming spent 10s of minutes asking people about the most important questions in their field and helping them clarify their own judgment, before asking why they weren't working on this thing they clearly valued and spent time thinking about. That is pretty different from demanding strangers at parties justify why they're not working on your pet cause. 
lc · 8 points · 1d
It is hard for me to tell whether my not using GPT-4 as a programmer is because I'm some kind of boomer, or because it's actually not that useful outside of filling Google's gaps.
lc · 6 points · 1d
There is a kind of decadence that has seeped into first world countries ever since they stopped seriously fearing conventional war. I would not bring war back in order to end the decadence, but I do lament that governments lack an obvious existential problem of a similar caliber, one that might coerce their leaders and their citizenry into taking foreign and domestic policy seriously, and keep them from devolving into mindless populism and infighting.
Thomas Kwa · 6 points · 2d
I looked at Tetlock's Existential Risk Persuasion Tournament results [https://static1.squarespace.com/static/635693acf15a3e2a14a56a4a/t/64abffe3f024747dd0e38d71/1688993798938/XPT.pdf#%5B%7B%22num%22%3A2876%2C%22gen%22%3A0%7D%2C%7B%22name%22%3A%22XYZ%22%7D%2C70%2C542%2C0%5D], and noticed some oddities. The headline result is of course "median superforecaster gave a 0.38% risk of extinction due to AI by 2100, while the median AI domain expert gave a 3.9% risk of extinction." But all the forecasters seem to have huge disagreements with my worldview on a few questions:

* They divided forecasters into "AI-Concerned" and "AI-Skeptic" clusters. The latter gave 0.0001% for AI catastrophic risk before 2030, and even lower than this (shows as 0%) for AI extinction risk. This is incredibly low, and I don't think you can have probabilities this low without a really good reference class.
* Both the AI-Concerned and AI-Skeptic clusters gave low probabilities for a space colony before 2030: 0.01% and "0%" medians respectively.
* Both groups gave numbers I would disagree with for the estimated year of extinction: year 3500 for AI-Concerned, and 28000 for AI-Skeptic. Page 339 suggests that none of the 585 survey participants gave a number above 5 million years, whereas it seems plausible to me and probably many EA/LW people on the "finite time of perils" thesis that humanity survives for 10^12 years or more, likely giving an expected value well over 10^10. The justification given for the low forecasts even among people who believed the "time of perils" arguments seems to be that conditional on surviving for millions of years, humanity will probably become digital, but even a 1% chance of the biological human population remaining above the "extinction" threshold of 5,000 still gives an expected value in the billions. (A worked version of this arithmetic appears in the sketch below.)

I am not a forecaster and would probably be soundly beaten in any real forecasting tournament, but perhaps there is a bias
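A minimal sketch (not part of the original comment) of the expected-value point above: even a small probability of an extremely long survival horizon dominates the expectation. The specific probabilities and year counts here are illustrative assumptions, not XPT data.

```python
# Illustrative expected-value arithmetic for long-run survival estimates.
# All numbers are assumptions for the sake of the sketch, not tournament results.

p_long = 0.01          # assumed 1% chance of a "finite time of perils" outcome
years_long = 1e12      # the 10^12-year survival horizon mentioned in the comment
p_short = 1 - p_long
years_short = 1.5e3    # assumed ~1,500 further years in the "short" branch

expected_years = p_long * years_long + p_short * years_short
print(f"expected further survival: ~{expected_years:.1e} years")  # ~1.0e+10

# Even a 1% chance of biological humans staying above the 5,000-person
# "extinction" threshold for ~a trillion years puts the expectation around
# 10^10 years, which is hard to square with median extinction-year answers
# of 3500 or 28000.
```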
Nisan · 4 points · 3d
Conception [https://conception.bio/] is a startup trying to do in vitro gametogenesis for humans!

Week Of Sunday, July 9th 2023

No posts for this week
Shortform
LoganStrohl · 20 points · 10d
I had a baby on June 20th. I wrote a whole bunch of stuff about what it was like for me to give birth at home without pain medication. I've just published it all to my website, [https://www.loganstrohl.com/birth] along with photos and videos.  CN: If you click on "words", you won't see anybody naked. If you click on "photos" or "videos", you will see me very extra naked.  The "words" section includes a birth story, followed by a Q&A section with things like "What do contractions feel like?", "How did you handle the pain?", and "How did you think about labor, going into it?". There's also a bit at the very bottom of the page where you can submit more questions, though of course you're also welcome to ask me stuff here.
Elizabeth · 15 points · 7d
Much has been written about how groups tend to get more extreme over time. This is often based on evaporative cooling, but I think there's another factor: it's the only way to avoid the geeks->mops->sociopaths death spiral. An EA group of 10 people would really benefit from one of those people being deeply committed to helping people but hostile to the EA approach, and another person who loves spreadsheets but is indifferent to what they're applied to. But you can only maintain the ratio that finely when you're very small. Eventually you need to decide if you're going to ban scope-insensitive people or allow infinitely many, and lose what makes your group different. "Decide" may mean consciously choose an explicit policy, but it might also mean gradually cohere around some norms. The latter is more fine-tuned in some ways but less in others. 
Elizabeth · 15 points · 8d
Are impact certificates/retroactive grants the solution to grantmaking corrupting epistemics? They're not viable for everyone, but for people like me who:

1. do a lot of small projects (which barely make sense to apply for grants for individually)
2. benefit from doing what draws their curiosity at the moment (so the delay between grant application and decision is costly)
3. take commitments extremely seriously (so listing a plan on a grant application is very constraining)
4. have enough runway that payment delays and uncertainty for any one project aren't a big deal

They seem pretty ideal. So why haven't I put more effort into getting retroactive funding? The retroactive sources tend to be crowdsourced. Crowdfunding is miserable in general, and leaves you open to getting very small amounts of money, which feels worse than none at all. Right now I can always preserve the illusion I would get more money, which seems stupid. In particular, even if I could get more money for a past project by selling it better and doing some follow up, that time is almost certainly better spent elsewhere.
Lucie Philippon · 12 points · 11d
Yesterday, I was searching for posts by alignment researchers describing how they got into the field. I was searching specifically for personal stories rather than guides on how other people can get into the field. I was trying to perform Intuition flooding [https://www.lesswrong.com/posts/F3vNoqA7xN4TFQJQg/14-techniques-to-accelerate-your-learning-1], by reading lots of accounts and getting intuitions on which techniques work to enter the field. I only managed to find a few which somewhat fit my target:

* Neel Nanda: How I Formed My Own Views About AI Safety [https://www.lesswrong.com/posts/JZrN4ckaCfd6J37cG/how-i-formed-my-own-views-about-ai-safety]
* Kevin RoWang: Lessons After a Couple Months of Trying to Do ML Research [https://www.lesswrong.com/posts/9rn3HmprA9poujTN5/lessons-after-a-couple-months-of-trying-to-do-ml-research]
* TurnTrout: Lessons I've Learned from Self-Teaching [https://www.lesswrong.com/posts/cumc876woKaZLmQs5/lessons-i-ve-learned-from-self-teaching]
* Nate Soares: The mechanics of my recent productivity [https://www.lesswrong.com/posts/uX3HjXo6BWos3Zgy5/the-mechanics-of-my-recent-productivity]

Neel Nanda's post was the central example of what I was looking for, and I was surprised not to find more. Does anyone know where I can find more posts like this?
Elizabeth · 8 points · 7d
People talk about sharpening the axe vs. cutting down the tree, but chopping wood and sharpening axes are things we know how to do and know how to measure. When working with more abstract problems there's often a lot of uncertainty in:

1. what do you want to accomplish, exactly?
2. what tool will help you achieve that?
3. what's the ideal form of that tool?
4. how do you move the tool to that ideal form?
5. when do you hit diminishing returns on improving the tool?
6. how do you measure the tool's [sharpness]?

Actual axe-sharpening rarely turns into intellectual masturbation because sharpness and sharpening are well understood. There are tools for thinking that are equally well understood, like learning arithmetic and reading, but we all have a sense that more is out there and we want it. It's really easy to end up masturbating (or epiphany addiction-ing) in the search for the upper level tools, because we are almost blind. This suggests massive gains from something that's the equivalent of a sharpness meter.

Week Of Sunday, July 2nd 2023

No posts for this week
Shortform
gwern · 33 points · 16d
I have some long comments I can't refind now (weirdly) about the difficulty of investing based on AI beliefs (or forecasting in general): similar to catching falling knives, timing is all-important and yet usually impossible to nail down accurately; specific investments are usually impossible if you aren't literally founding the company, and indexing 'the entire sector' definitely impossible. Even if you had an absurd amount of money, you could try to index and just plain fail - there is no index which covers, say, OpenAI.

Apropos, Matt Levine [https://www.bloomberg.com/opinion/articles/2023-07-03/insider-trading-is-better-from-home] comments on one attempt to do just that:

This is especially funny because it also illustrates timing problems: Oops. Oops.

Also, people are quick to tell you how it's easy to make money, just follow $PROVERB, after all, markets aren't efficient, amirite? So, in the AI bubble, surely the right thing is to ignore the AI companies who 'have no moat' and focus on the downstream & incumbent users and invest in companies like Nvidia ('sell pickaxes in a gold rush, it's guaranteed!'):

Oops.

tldr: Investing is hard; in the future, even more so.
Elizabeth · 30 points · 15d
EA/rationality has this tension between valuing independent thought, and the fact that most original ideas are stupid. But the point of independent thinking isn't necessarily coming up with original conclusions. It's that no one else can convey their models fully so if you want to have a model with fully fleshed-out gears you have to develop it yourself. 
DragonGod · 8 points · 11d
I find noticing surprise more valuable than noticing confusion. Hindsight bias and post hoc rationalisations make it easy for us to gloss over events that were a priori unexpected.
lc · 8 points · 16d
You don't hear much about the economic calculation problem anymore, because "we lack a big computer for performing economic calculations" was always an extremely absurd reason to dislike communism. The real problem with central planning is that most of the time the central planner is a dictator who has no incentive to run anything well in the first place, and gets selected by ruthlessness from a pool of existing apparatchiks, and gets paranoid about stability and goes on political purges. What are some other, modern, "autistic" explanations for social dysfunction? Cases where there's an abstract economic or sociological argument about why certain policy/command structures are bad, which are mostly rationalizations designed to fit obviously correct conclusions into an existing field that wouldn't accept them in their normal format?
Ruby · 7 points · 12d
The LessWrong admins are often evaluating whether users (particularly new users) are going to be productive members of the site, or are just really bad and need strong action taken. A question we're currently disagreeing on is which pieces of evidence it's okay to look at in forming judgments. Obviously anything posted publicly. But what about:

- Drafts (admins often have good reason to look at drafts, so they're there)
- Content the user deleted
- The referring site that sent someone to LessWrong

I'm curious how people feel about moderators looking at those. Alternatively, we're not in complete agreement about:

* Should deleted stuff even be that private? It was already public and could already have been copied, archived, etc., so there isn't that much expectation of privacy, and admins should look at it.
* Is it the case that we basically shouldn't extend the same rights, e.g. privacy, to new users because they haven't earned them as much, and we need to look at more activity/behavior to assess the new user?
* There's something quantitative here, where we might sometimes do this depending on our degree of suspicion: generally respecting privacy, but looking at more things, e.g. drafts, if we're on the edge about banning someone.
* We are generally very hesitant to look at votes, but start to do this if we suspect bad voting behavior (e.g. someone possibly indiscriminately downvoting another person). Rate limiting being tied to downvotes perhaps makes this more likely and more of an issue.

Just how ready to investigate (including deanonymization) should we be if we suspect abuse?

Week Of Sunday, June 25th 2023

No posts for this week
Shortform
angelinahli · 19 points · 23d
June 2023 cheap-ish lumenator DIY instructions (USA)

I set up a lumenator! I liked the products I used, and did ~3 hours of research, so am sharing the set-up here. Here are some other posts about lumenators [https://www.lesswrong.com/posts/HJNtrNHf688FoHsHM/guide-to-rationalist-interior-decorating#Lumenators].

* Here's my shopping list [https://share-a-cart.com/get/XZNJJ].
  * $212 total as of writing:
    * $142 for bare essentials (incl $87 for the bulbs)
    * $55 for the lantern covers
    * $17 for command hooks (if you get a different brand, check that your hooks can hold the cumulative weight of your bulbs + string)
  * 62,400 lumens total (!!)
* Here are the bulbs [https://www.amazon.com/gp/product/B07Z1QSC83] I like. The 26 listed come out to $3.35 / bulb, 2600 lumens, 85 CRI, 5000K, non-dimmable. This comes out at 776 lumens / $ (!!!), which is kind of ridiculous (see the quick arithmetic check below).
  * The only criteria I cared about were: (1) CRI >85, (2) color temperature of 5000K, and then I just tried to max out lumens / $.
  * These are super cheap. @mingyuan [https://www.lesswrong.com/users/mingyuan?mention=user] seemed to have spent $300 [https://www.lesswrong.com/posts/HJNtrNHf688FoHsHM/guide-to-rationalist-interior-decorating#Lumenators] on bulbs for their last lumenator. They had a stricter CRI cutoff (>90), but the price difference here means it might be worth considering going cheaper.
  * I don't understand if I am missing something / why this is such a good deal. But IRL, these are extremely bright, they don't 'feel' cheap (e.g. are somewhat heavy), and don't flicker (as of day 1).
  * They aren't dimmable. I wasn't willing to pay the premium for the dimmability — TBH I would just get another set of less bright lights for when you don't want it to be so bright!
* To set up (1-2 hours):
  * Set up the command hooks, ideally somewhere kind of high up in your room, wait an hour for the backing
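A quick arithmetic check of the bulb figures quoted in the list above, as a minimal sketch; the bulb count, price, and lumen rating are copied from the shopping list, and the script only recomputes the cost and lumens-per-dollar numbers.

```python
# Recompute the bulb cost and lumens-per-dollar figures from the shopping list above.
bulb_count = 26            # bulbs in the linked shopping list
price_per_bulb = 3.35      # USD per bulb
lumens_per_bulb = 2600     # rated lumens per bulb (85 CRI, 5000K)

total_bulb_cost = bulb_count * price_per_bulb           # ~$87, matching "incl $87 for the bulbs"
lumens_per_dollar = lumens_per_bulb / price_per_bulb    # ~776 lm/$, the figure quoted above

print(f"bulbs: ${total_bulb_cost:.0f} total, {lumens_per_dollar:.0f} lumens per dollar")
```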
jimrandomh · 12 points · 21d
Deep commitment to truth requires investing in the skill of nondisruptive pedantry. Most communication contains minor errors: slightly wrong word choices, unstated assumptions, unacknowledged exceptions. By default, people interpret things in a way that smooths these out. When someone points out one of these issues in a way that's disruptive to the flow of conversation, it's called pedantry.

Often, someone will say something that's incorrect, but close enough to a true thing for you to repair it. One way you can handle this is to focus on the error. Smash the conversational context, complain about the question without answering it, that sort of thing.

A different thing you can do is to hear someone say something that's incorrect, mentally flag it, repair it to a similar statement that matches the other person's intent but is actually true, and act as though the other person had said something ambiguous (even if it was actually unambiguously wrong). Then you insert a few words of clarification, correcting the error without forcing the conversation to be about the error, and providing something to latch on to if the difference turns out to be a real disagreement rather than a pedantic thing.

And a third thing you can do is a thing where you sort of... try to do the second thing, but compressed all into one motion, where you substitute a corrected version of the sentence without noticing that you've repaired it, or verbally acknowledging the ambiguity.

I don't think I've ever seen someone point at it explicitly, but I think this mental motion, noticing an error and fixing it without overreacting and damaging the surrounding context, may be one of the most important foundational rationality skills there is. And, it seems... actually pretty easy to practice, when you look squarely at it?

(Crossposted with FB [https://www.facebook.com/jimrandomh8471/posts/pfbid0QwM3eWrGcgtZ5F8dvQnnJEKdxsWSGSuKsZXR834iHQryVyozKN5s4Q8FGNvNqsRtl])
SirTruffleberry · 10 points · 19d
There is a distinction people often fail to make, which is commonly seen in analyses of fictional characters' actions but also those of real people. It is the distinction between behaving irrationally and having extreme preferences. If we look at actions and preferences the way decision theorists do, it is clear that preferences cannot be irrational. Indeed, rationality is defined as tendency toward preference-satisfaction. To say preferences are irrational is to say that someone's tastes can be objectively wrong. Example: Voldemort is very stubborn in JKR's Harry Potter. He could have easily arranged for a minion to kill Harry, but he didn't, and this is decried as irrational. Or even more to the point, he could have been immortal if only he hid in a cave somewhere and didn't bother anyone. But that is ignoring Voldemort's revealed preference relation and just treating survival as his chief end. What is the point of selling your soul to become the most powerful lich of all time so you can live as a hermit? That would be irrational, as it would neglect Voldemort's preferences.
Daniel Kokotajlo · 7 points · 23d
A nice thing about being a fan of Metaculus for years is that I now have hard evidence of what I thought about various topics back in the day. It's interesting to look back on it years later. Case in point: [forecast chart; small circles are my forecasts]. The change was, I imagine, almost entirely driven by the update in my timelines.
frontier64 · 7 points · 24d
You woo a girl into falling in love with you, sleep with her, then abandon her? A hundred years ago you would have been run out of town as a rotten fool. Nowadays that communal protection is gone.
