If it’s worth saying, but not worth its own post, here's a place to put it. 

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.

The Open Thread tag is here. The Open Thread sequence is here.

Open Thread Spring 2024
Linch

Do we know if @paulfchristiano or other ex-lab people working on AI policy have non-disparagement agreements with OpenAI or other AI companies? I know Cullen doesn't, but I don't know about anybody else.

I know NIST isn't a regulatory body, but it still seems like standards-setting should be done by people who have no unusual legal obligations. And of course, some other people are or will be working at regulatory bodies, which may have more teeth in the future.

To be clear, I want to differentiate between Non-Disclosure Agreements, which are perfectly sane and reasonable in at least a limited form as a way to prevent leaking trade secrets, and non-disparagement agreements, which prevent you from saying bad things about past employers. The latter seems clearly bad for anybody in a position to affect policy to have. Doubly so if the existence of the non-disparagement agreement is itself secret.

gwern

Sam Altman appears to have been using non-disparagement agreements at least as far back as 2017-2018, even for things that really don't seem to have needed them at all, like a research nonprofit arm of YC.* It's unclear if that example is also a lifetime non-disparagement (I've asked), but nevertheless, given that track record, you should assume the OA research non-profit also tried to impose one, followed by the OA LLC (obviously), and so Paul F. Christiano (and all of the Anthropic founders) would presumably be bound.

This would explain why Anthropic executives never say anything bad about OA, and refuse to explain exactly what broken promises by Altman triggered their failed attempt to remove Altman and subsequent exodus.

(I have also asked Sam Altman on Twitter, since he follows me, apropos of his vested equity statement, how far back these NDAs go, if the Anthropic founders are still bound, and if they are, whether they will be unbound.)

* Note that Elon Musk's SpaceX does the same thing and is even worse because they will cancel your shares after you leave for half a year, and if they get mad at you after that expires, they may simply lock you out of tender offers indefinitely - whi... (read more)

habryka

It seems really quite bad for Paul to work in the U.S. government on AI legislation without having disclosed that he is under a non-disparagement clause with the biggest entity in the space that his regulator is regulating. And if he signed an NDA that prevents him from disclosing it, then it was IMO his job not to accept a position in which such a disclosure would obviously be required.

I am currently at something like 50% that Paul has indeed signed such a lifetime non-disparagement agreement, so I don't buy that a "presumably" is appropriate here (though I am not that far away from it).

gwern

It would be bad, I agree. (An NDA about what he worked on at OA, sure; but being required to never say anything bad about OA forever, as a regulator who will be running evaluations etc.?) Fortunately, this is one of those rare situations where it is probably enough for Paul to simply say his OA NDA does not cover that: either it doesn't, and it can't be a problem; or he has violated the NDA's gag order by talking about it, and when OA then fails to sue him to enforce it, the NDA becomes moot.

Linch

At the very least I hope he disclosed it to the gov't (and that it was then voided at least in internal government communications; I don't know how the law works here), though I'd personally want it to be voided completely, or at least widely communicated to the public as well.

Tao Lin
lol, Paul is a very non-disparaging person. He always makes his criticism constructive; I don't know if there's any public evidence of him disparaging anyone, regardless of NDAs.
Linch
Wow, good point. I've never considered that aspect. 
Akash

+1. Also curious about Jade Leung (formerly OpenAI) – she's currently the CTO for the UK AI Safety Institute. Also Geoffrey Irving (formerly DeepMind), who is a research director at UKAISI.

Linch
Geoffrey Irving was one of the first people to publicly say some very aggressive + hard-to-verify things about Sam Altman during the November board fiasco, so hopefully this means he's not bound (or doesn't feel bound) by a very restrictive non-disparagement agreement.  
Akash

Great point! (Also, oops – I forgot that Irving was formerly at OpenAI as well. He worked for DeepMind in recent years, but before that he worked at OpenAI and Google Brain.)

Do we have any evidence that DeepMind or Anthropic definitely do not do non-disparagement agreements? (If so then we can just focus on former OpenAI employees.)

Garrett Baker
Relevant market

Hi! I have been lurking here for over a year but I've been too shy to participate until now. I'm 14, and I've been homeschooled all my life. I like math and physics and psychology, and I've learned lots of interesting things here. I really enjoyed reading the sequences last year. I've also been to some meetups in my city and the people there (despite – or maybe because of – being twice my age) are very cool. Thank you all for existing!

nim
hey, welcome! Congrats on de-lurking, I think? I fondly remember my own teenage years of lurking online -- one certainly learns a lot about the human condition.

If I were sending my 14-year-old self a time capsule of LW, it'd start with the Sequences, and beyond that I'd emphasize the writings of adults examining how their own cognition works. Two reasons. First, being aware that one is living in a brain as it finishes wiring itself together is super entertaining if you're into that kind of thing, and even more fun when you have better data to guess how it's going to end up. (I got the gist of that from having well-educated and open-minded parents, who explained that it's prudent to hold off on recreational drug use until one's brain is entirely done with being a kid, because most recreational substances make one's brain temporarily more childlike in some way, and the real thing is better. Now I'm in my 30s and can confirm that's how such things, including alcohol, have worked for me.) Second, my 20s would have been much better if someone had taken kid-me aside and explained some neurodiversity stuff to her: "here's the range of normal, here's the degree of suffering that's not expected nor normal and is worth consulting a professional for even if you're managing through great effort to keep it together", etc.

If you'd like to capitalize on your age for some free internet karma, I would personally enjoy reading your thoughts on what your peers think of technology, how they get their information, and how you're all updating the language at the moment. I also wish that my 14-year-old self had paid more attention to musical trends and attempted to guess which music that was popular while I was of high-school age would stand the test of time and remain on the radio over the subsequent decades. In retrospect, I'm pretty sure I could have taken some decent guesses, but I didn't, so now I'll never know whether I would have guessed right :)
atergitna
I really don't know much about popular music, but I'm guessing that music from video games is getting more popular now, because when I ask people what they are humming they usually say it's a song from a game. But maybe those songs are just more hummable, and the songs that I hear people humming are not a representative sample of all the songs that they listen to.

About updating the language: if you mean the abbreviations and phrases that people use in texts, I think people do it so that they don't sound overly formal. Sometimes writing a complete sentence in a text would be like speaking to a friend in rhyming verses.

I think the kids I know get most of their information (and opinions) from their parents, and a few things from other places to make them feel grown up (I do this too). I think that because I often hear a friend saying something and then find out later that that is what their parents think.

(sorry it took me so long to respond)
kave
Hello and welcome to the site! I'm glad you're saying hello despite the shyness :-) Do let us know in this thread, or in the Intercom in the bottom right, if you run into any problems.

H5N1 has spread to cows. Should I be worried?

As of May 16, 2024, an easily findable USDA/CDC report says that widely dispersed cow herds are being detectably infected.

A map of the US shows 9 states with infected herds, including Texas, Idaho, Michigan, and North Carolina, but not other states in between (suggesting either long-distance infections mediated by travel without testing the travelers, or else failure of detection in many intermediate states).

So far as I can find reports, only one human dairy worker has been detected as having an eye infection.

I saw a link on Twitter to a report from an enterprising journalist who claimed to have gotten some milk directly from small local farms in Texas; the first lab she tried refused to test it. They asked the farms. The farms said no. The labs were happy to go along with this!

So, the data I've been able to get so far is consistent with many possibly real worlds.

The worst plausible world would involve a jump to humans, undetected for quite a while, allowing time for adaptive evolution, an "influenza normal" attack rate of 5%–10% for adults and ~30% for kids, and an "avian flu plausible" mortality rate of 56% (??) (but maybe not until this winter, when cold weather causes lots of enclosed air sharing?), which implies that by June of 2025 maybe half a billion people (~= 7B × 0.12 × 0.56) would be dead???
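Spelling out that back-of-the-envelope arithmetic (the 0.12 is a rough population-blended attack rate sitting between the 5%–10% adult and ~30% kid figures; every input here is a guess, not a measurement):

$$7 \times 10^{9}\ \text{people} \times 0.12\ \text{(attack rate)} \times 0.56\ \text{(mortality)} \approx 4.7 \times 10^{8}\ \text{deaths}$$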

But probably not, for a variety of reasons.

However, I sure hope that the (half imaginary?) Administrators who would hypothetically exist in some bureaucracy somewhere ... (read more)

Also, there's now a second detected human case, this one in Michigan instead of Texas.

Both had a surprising-to-me "pinkeye" symptom profile. Weird!

The dairy worker in Michigan had various "compartments" tested, and their nasal compartment (and the people they lived with) all came back negative. Hopeful?

Apparently, and also hopefully, this virus is NOT freakishly good at infecting humans and weirdly many other animals (like covid was with human ACE2, in precisely the ways people talked about when discussing gain-of-function in the years prior to covid).

If we're being foolishly mechanical in our inferences, "n=2 with 2 survivors" could get rule-of-succession treatment. In that case we pseudocount 1 for each category of interest (hence if n=0 we say 50% survival chance, based on nothing but pseudocounts), and now we have 3 survivors (2 real) versus 1 death (0 real), and guess that the worst the mortality rate here could be is maybe 1/4 == 25% (?? (as an ass number)), which is pleasantly lower than the overall observed base rates for avian flu mortality in humans! :-)
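In symbols, this is just Laplace's rule of succession, adding one pseudocount to each outcome:

$$P(\text{death}) = \frac{\text{deaths} + 1}{n + 2} = \frac{0 + 1}{2 + 2} = \frac{1}{4}$$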

Naive impressions: a natural virus, with pretty clear reservoirs (first birds and now dairy cows), on the maybe slightly less bad side of... (read more)

wilkox

Katelyn Jetelina has been providing some useful information on this. Her conclusion at this point seems to be 'more data needed'.

CronoDAS
Thank you!

Does anybody know what happened to Julia Galef?

The only thing I can conclude from looking around for her is that she's out of the public eye. I hope she's OK, but I'd guess she's doing fine and just didn't feel like being a public figure anymore. I'm interested if anyone can confirm that, but if it's true I want to make sure not to pry.

Hello! I'm building an open source communication tool with a one-of-a-kind UI for LessWrong-style deep, rational discussions. The tool is called CQ2 (https://cq2.co). It has a sliding-panes design with quote-level threads. There's a concept of "posts" for more serious discussions with many people, and there's "chat" for less serious ones; both have a UI crafted for deep discussions.

I simulated some LessWrong discussions there – they turned out to be a lot more organised and easy to follow. You can check them out in the chat channel and the direct-message part of the demo on the site. However, it is a bit inconvenient – there's horizontal scrolling, and one needs to click to open new threads. Since forums need to prioritize convenience, I think CQ2's design isn't good for LessWrong. But I think the inconvenience is worth it for such discussions at writing-first teams, since it helps them hyper-focus on one thing at a time and avoid losing context, in order to come to a conclusion and make decisions.

If you have such discussions at work, I would love to learn about your team and your frustrations with existing communication tools, and to better understand how CQ2 can help! I ... (read more)

papetoast
I just stumbled on this website: https://notes.andymatuschak.org/About_these_notes
It has a similar UI, but for Obsidian-like linked notes. The UI seems pretty good.
Anand Baburajan
I like his UI. In fact, I shared CQ2 with Andy in February, since his notes site was the only other place where I had seen the sliding-pane design. He said CQ2 is neat!
papetoast
https://delve.a9.io/
Anand Baburajan
Update: now you can create discussions on CQ2! And, here's a demo with an actual LessWrong discussion between Vanessa and Rob: https://cq2.co/demo.
habryka
This is cool! Two pieces of feedback:

1. I think it's quite important that I can at least see the number of responses to a comment before I have to click on the comment icon. Currently it only shows me a generic comment icon if there are any replies.
2. I think one of the core use-cases of a comment UI is reading back and forth between two users. This UI currently makes that a quite disjointed operation. I think it's fine to prioritize a different UI experience, but it does feel like a big loss to me.
Anand Baburajan
Thanks for the feedback! Can you share why you think it's quite important (for a work communication tool)? For a forum, I think it would make sense -- many people prefer reading the most active threads. For a work communication tool, I can't think of any reason why it would matter how many comments a thread has. I thought about this for quite a while and have started to realise that the "posts" UI could be too complicated. I'm going to try out the "chat" and "DMs" UI for posts and see how it goes.

Thanks! Although the "chat" and "DMs" UI allows easily followable back-and-forth between people, I would like to point out that CQ2 advocates for topic-wise discussions, not person-wise. Here's an example comment from LessWrong. In that comment, it's almost impossible to figure out where the quotes are from -- i.e., what the context is. And what happened next is that another person replied to that comment with more quotes. This example was a bit extreme, with many quotes, but I think my point applies to every comment with quotes. One needs to scroll person-wise through so many topics, instead of topic-wise. I (and CQ2) prefer exploring people's thoughts topic-by-topic, not the thoughts on all topics simultaneously, person-by-person.

Again, I'm not saying my design is good for LessWrong; I understand forums have their own place. But I think for a tool for work, people would prefer topic-wise over person-wise.
jmh
My sense, regarding the desire to read the most active thread, is that the most active thread might well be among either the team working on some project under discussion, or across teams that are involved in or impacted by some project. In such a case I would think knowing where the real discussion is taking place regarding some "corporate discussions" might be helpful and wanted. I suppose the big question there is what about all the other high-volume exchanges -- are they more personality-driven rather than subject/substance-driven? Would the comment count just be a really noisy signal to try keying off?
Anand Baburajan
P.S. I'm open to ideas on building this in collaboration with LessWrong!
Celarix
Ooh, nice. I've been wanting this kind of discussion software for a while. I do have a suggestion: maybe, when hovering over a highlighted passage, you could get some kind of indicator of how many child comments are under that section, and/or change the highlight contrast for threads that have more children, so we can tell which branches of the discussion got the most attention.
Anand Baburajan
Thanks @Celarix! I've gotten the same feedback from three people now, so it seems like a good idea. However, I haven't understood why it's necessary. For a forum, I think it would make sense -- many people prefer reading the most active threads. For a discussion tool, I can't think of any reason why it would matter how many comments a thread has. Maybe the point is to let a user know if there's any progress in a thread over time, which makes sense.
Celarix
My thinking is that the more-discussed threads would have more value to the user. Small threads with 1 or 2 replies are more likely to be people pointing out typos or just saying +1 to a particular passage. Of course, there is a spectrum - deeply discussed threads are more likely to be angry back-and-forths that aren't very valuable.
Anand Baburajan
This feels self- and learning-focused, as opposed to problem- and helping-focused, and I'm building CQ2 for the latter. There could also be important and/or interesting points in a thread with only 1 or 2 replies, and implementing this idea would prevent many people from finding those points, right? Will add upvote/downvote.

Hello everyone! My name is Roman Maksimovich. I am an immigrant from Russia, currently finishing high school in Serbia. My primary specialization is mathematics, and back in middle school I already had enough education in abstract mathematics (from calculus to category theory and topology) to call myself a mathematician.

My other strong interests include computer science and programming (specifically functional programming, theoretical CS, AI, and systems programming such as Linux), as well as languages (specifically Asian languages like Japanese).

I ended up here after reading HP:MOR, which I consider an all-time masterpiece. The Sequences are very good too, although not as gripping. Rationality is a very important principle in my life, and so far I have found the forum to be very well organized and the posts to be very informative and well written, so I will definitely stick around and try to engage with the forum to the best of my ability.

I thought I might do a bit of self-advertising as well. Here's my GitHub: https://github.com/thornoar

If any of you use this very niche mathematical graphics tool called Asymptote, you might be interested to know that I have been developing a cool 6000-... (read more)

nim
Congratulations! I'm in today's lucky 10,000 for learning that Asymptote exists. Perhaps due to my not being much of a mathematician, I didn't understand it very clearly from the README... but the examples comparing code to its output make sense! Comparing your examples to the kind of things Asymptote likes to show off (https://asymptote.sourceforge.io/gallery/), I see why you might have needed to build the additional tooling. I don't think you necessarily have to compare smoothmanifold to a JavaScript framework to get the point across -- it seems to be an abstraction layer that allows one to describe a drawn image in slightly more general terms than Asymptote supports. I admire how you're investing so much effort to use your talents to help others.
thornoar
Thank you for your kind words! Unfortunately, Asymptote doesn't really have much of a community development platform, but I'll be trying to make smoothmanifold part of the official project in some way or another. Right now the development is so fast that the README is actually out of date... gotta fix that. So far, though, my talents seem less to help others and more to serve as a pleasurable pastime :) I'm also glad that another person discovered Asymptote and liked it --- it's a language that I cannot stop admiring for its graphical functionality, its ease of image creation (PDFs, JPEGs, SVGs, etc., all with the same interface), and at the same time its amazing programming potential (you can redefine any builtin function, for example, and Asymptote will carry on with your definition).
Nick M

Hey all
I found out about LessWrong through a confluence of factors over the past 6 years or so, starting with Rob Miles' Computerphile videos and then his personal videos, seeing Aella make the rounds on the internet, and hearing about Manifold, all of which just sorta pointed me towards Eliezer and this website. I started reading the Rationality: A-Z posts about a year ago and have gotten up to the value theory portion, but over the past few months I've started realizing just how much engaging content there is to read on here. I just graduated with my bachelor's, and I hope to get involved with AI alignment (but Eliezer paints a pretty bleak picture for a newcomer like myself (and I know not to take any one person's word as gospel, but I'd be lying if I said it wasn't a little disheartening)).

I'm not really sure how to break into the field of AI safety/alignment, given that college has left me without a lot of money, and I don't exactly have a portfolio or degree that screams machine learning. I fear that I would have to go back and get an even higher education to even attempt to make a difference. Maybe, however, this is where my lack of familiarity with the field shows, because I don't actually know what qualifications are required for the positions I'd be interested in, or whether there's even a formal path for helping with alignment work. Any direction would be appreciated.

Nick M
Additional context that I realized might be useful for anyone who wants to offer advice: I'm in my early 20s, so when I say "portfolio" there's nothing really there outside of hobby projects that aren't that presentable to employers, and my degree is like a mix of engineering and physics simulation. Additionally, I live in Austin, so that might help with opportunities, but I'm not entirely sure where to look for those.
Screwtape
I'm not an AI safety specialist, but I get the sense that a lot of extra skillsets became useful over the last few years. What kind of positions would be interesting to you? MIRI was looking for technical writers recently. Robert Miles makes YouTube videos. Someone made the P(Doom) question well known enough to be mentioned in the Senate. I hope there's a few good contract lawyers looking over OpenAI right now. AISafety.Info is a collection of on-ramps, but it also takes ongoing web development and content writing work. Most organizations need operations teams and accountants no matter what they do.

You might also be surprised how much engineering and physics is a passable starting point. Again, this isn't my field, but if you haven't already done so, it might be worth reading a couple of recent ML papers and seeing if they make sense to you -- or better yet, if it looks like you see an idea for improvement or a next step, you could jump in and try.

Put your own oxygen mask on first, though. Especially if you don't have a cunning idea and can't find a way to get started, grab a regular job and get good at that. Sorry I don't have a better answer.

Hi LessWrong Community!

I'm new here, though I've been an LW reader for a while. I'm representing the complicated.world website, where we strive to use a similar rationality approach to the one here, and we also explore philosophical problems. The difference is that, instead of being a community-driven portal like you, we are a small team which works internally to achieve consensus and only then publishes our articles. This means that we are not nearly as pluralistic, diverse, or democratic as you are, but on the other hand we try to present a single coherent view on all discussed problems, each rooted in basic axioms. I really value the LW community (our entire team does) and would like to start contributing here. I would also like to present a linkpost from our website from time to time - I hope this is ok. We are also a not-for-profit website.

habryka
Hey!  It seems like an interesting philosophy. Feel free to crosspost. You've definitely chosen some ambitious topics to try to cover, which I am generally a fan of.
complicated.world
Thanks! The key to topic selection is where we find ourselves most disagreeing with popular opinions. For example, the number of times I can cope with hearing someone say "I don't care about privacy, I have nothing to hide" is limited. We're trying to have this article out before that limit is reached. But in order to reason about privacy's utility and to ground it in root axioms, we first have to dive into why we need freedom. That, in turn, requires thinking about the mechanisms of a happy society. And that depends on our understanding of happiness, hence that's where we're starting.
P.

Does anyone have advice on how I could work full-time on an alignment research agenda I have? It looks like trying to get an LTFF grant is the best option for this kind of thing, but if, after working more time alone on it, it keeps looking like it could succeed, it would likely become too big for me alone; I would need help from other people, and that looks hard to get. So, any advice from anyone who's been in a similar situation? Also, how does this compare with getting a job at an alignment org? Is there any org where I would have a comparable amount of freedom if my ideas are good enough?

Edit: It took way longer than I thought it would, but I've finally sent my first LTFF grant application! Now let's just hope they understand it and think it is good.

Garrett Baker
My recommendation would be to get an LTFF, Manifund, or Survival and Flourishing Fund grant to work on the research; then, if it seems to be going well, try getting into MATS, or move to Berkeley and work in an office with other independent researchers (like FAR) for a while, and use either of those situations to find co-founders for an org that you can scale to a greater number of people. Alternatively, you can call up your smart & trustworthy college friends to help start your org.

I do think there's just not that much experience or skill around these parts with setting up highly effective & scalable organizations, so what help can be provided won't be that helpful. In terms of resources for how to do that, I'd recommend Y Combinator's How to Start a Startup lecture recordings, and I've been recommended the book Traction: Get a Grip on Your Business.

It should also be noted that if you do want to build a large org in this space, once you get to the large-org phase, OpenPhil has historically been less happy to fund you (unless you're also making AGI[1]).

1. This is not me being salty; the obvious response to "OpenPhil has historically not been happy to fund orgs trying to grow to larger numbers of employees" is "but what about OpenAI or Anthropic?", which I think are qualitatively different from, say, Apollo.

Hello! I'm dipping my toes into this forum, coming primarily from the Scott Alexander side of rationalism. I wanted to introduce myself, and share that I'm working on a post about ethics/ethical frameworks that I hope to share here eventually!

habryka
Hey metalcrow! Great to have you here! Hope you have a good time and looking forward to seeing your post!

Feature request: I'd like to be able to play the LW playlist (and future playlists!) from LW. I found it a better UI than Spotify and Youtube, partly because it didn't stop me from browsing around LW and partly because it had the lyrics on the bottom of the screen. So... maybe there could be a toggle in the settings to re-enable it?

habryka
I was unsure whether people would prefer that, and decided yesterday to instead cut it, but IDK, I do like it. I might clean up the code and find some way to re-activate it on the site.
Dagon
I liked it, but probably don't want it there all the time.  I wonder if it's feasible (WRT your priority list) to repeat some of the site feature options from account settings on a "quick feature menu", to make it easy to turn on and off.
whestler
In terms of my usage of the site, I think you made the right call. I liked the feature when listening but I wanted to get rid of it afterwards and found it frustrating that it was stuck there. Perhaps something hidden on a settings page would be appropriate, but I don't think it's needed as a default part of the site right now.
habryka
This probably should be made more transparent, but the reason these aren't in the Library is that they don't have images for the sequence item. We display all sequences that people create that have proper images in the Library (otherwise we just show them on users' profiles).
nim
Can random people donate images for the sequence-items that are missing them, or can images only be provided by the authors? I notice that I am surprised that some sequences are missing out on being listed just because images weren't uploaded, considering that I don't recall having experienced other sequences' art as particularly transformative or essential.
habryka
Only the authors (and admins) can do it.  If you paste some images here that seem good to you, I can edit them unilaterally, and will message the authors to tell them I did that. 
Lorxus
I'm neither of these users, but for temporarily secret reasons I care a lot about having the Geometric Rationality and Maximal Lottery-Lotteries sequences be slightly higher-quality. Warning: these are AI-generated, if that's a problem. It's that, an abstract pattern, or programmer art from me.

Two options for Maximal Lottery-Lotteries: [images]

Two options for Geometric Rationality: [images]
gilch
How did you manage to prompt these? My attempts with Stable Diffusion so far have usually not produced anything suitable.
nim
I am delighted that you chimed in here; these are pleasingly composed and increase my desire to read the relevant sequences. Your post makes me feel like I meaningfully contributed to the improvement of these sequences by merely asking a potentially dumb question in public, which is the internet at its very best. Artistically, I think the top (fox face) image for lotteries, cropped to its bottom 2/3, would be slightly preferable to the other, and the bottom (monochrome white/blue) for geometric makes a nicer banner in the aspect ratio they're shown at.
habryka
Uploaded them both!
Lorxus
Excellent, thanks!
Lorxus
IMO you did! Like I said in my comment, for reasons that are temporarily secret I care about those two sequences a lot, but I might not have thought to just ask whether they could be added to the Library, nor did I know that the blocker was suitable imagery.
nim
I notice that I am confused: an image of lily pads appears on https://www.lesswrong.com/s/XJBaPPEYAPeDzuAsy when I load it, but when I expand all community sequences on https://www.lesswrong.com/library (a show-all button might be nice...) and search the string "physical" or "necessity" on that page, I do not see the post appearing. This seems odd, because I'd expect that having a non-default image display when the sequence's homepage is loaded and having a good enough image to appear in the list would be the same condition, but it seems they aren't identical for that one.
habryka
There are two images provided for a sequence: the banner image and the card image. The card image is required for it to show up in the Library.

Post upvotes are at the bottom but user comment upvotes are at the top of each comment. Sometimes I'll read a very long comment and then have to scroll aaaaall the way back up to upvote it. Is there some reason for this that I'm missing or is it just an oversight?

habryka
Post upvotes are at both the bottom and the top, but repeating them for comments at the bottom would look a lot too cluttered. Having them at the top is IMO more important, since you want to be able to tell how good something is before you read it.

Obscure request:

Short story by Yudkowsky, on a reddit short fiction subreddit, about a time traveler coming back to the 19th century from the 21st. The time traveler is incredibly distraught about the red tape in the future, screaming about molasses and how it's illegal to sell food on the street.

Nevermind, found it.

Hi everyone!
I found LessWrong at the end of 2022, as a result of ChatGPT's release. What struck me fairly quickly about LessWrong was how much it resonated with me. Many of the ways of thinking discussed on LessWrong were things I was already doing, but without knowing the name for them. For example, I thought of the strength of my beliefs in terms of probabilities long before I had ever heard the word "Bayesian".

Since discovering LessWrong, I have mostly just been vaguely browsing it, with some periods of more intense study. But I'm aware that I haven't be... (read more)

habryka
Welcome! I hope you have a good time here!

Hey everyone! I work on quantifying and demonstrating AI cybersecurity impacts at Palisade Research with @Jeffrey Ladish.

We have a bunch of exciting work in the pipeline, including:

  • demos of well-known safety issues like agent jailbreaks or voice cloning 
  • replications of prior work on self-replication and hacking capabilities
  • modelling of the above capabilities' economic impact
  • novel evaluations and tools

Most of my posts here will probably detail technical research or announce new evaluation benchmarks and tools. I also think a lot about responsible release, ... (read more)

Hi! I have lurked for quite a while and wonder if I can/should participate more. I'm interested in science in general, speculative fiction, and simulation/sandbox games, among other things. I like reading speculations about the impact of AI and other technologies, but find many of the alignment-related discussions too focused on what the author wants/values rather than on what future technologies can really cause. Also, any game recommendations with a hard science/AI/transhumanist theme that are truly simulation-like and not narratively railroading?

nim
Welcome! If you have the emotional capacity to happily tolerate being disagreed with or ignored, you should absolutely participate in discussions. In the best case, you teach others something they didn't know before, or get a misconception of your own corrected. In the worst case, your remarks are downvoted or ignored.

Your question on games would do well fleshed out into at least a quick take, if not a whole post, answering:

* What games you've ruled out for this and why
* What games in other genres you've found to capture the "truly simulation-like" aspect that you're seeking
* Examples of game experiences that you experience as narrative railroading
* Examples of ways that games that get mostly there do a "hard science/AI/transhumanist theme" in the way that you're looking for
* Perhaps what you get from it being a game that you'd miss if it were a book, movie, or show?

If you've tried a lot of things and disliked most, then good clear descriptions of what you dislike about them can actually function as helpful positive recommendations for people with different preferences.

Is it really desirable to have the new "review bot" in all the 100+ karma comment sections? To me it feels like unnecessary clutter, similar to injecting ads.

habryka
Where else would it go? We need a minimum level of saliency to get accurate markets, and I care about the signal from the markets a good amount.
Dagon
I haven't noticed it (literally at all - I don't think I've seen it, though I'm perhaps wrong). Based on this comment, I just looked at https://www.lesswrong.com/users/review-bot?from=search_autocomplete and it seems a good idea (and it points me to posts I may have missed - I tend to not look at the homepage, just focusing on recent posts and new comments on posts on https://www.lesswrong.com/allPosts).

I think putting a comment there is a good mechanism to track, and probably easier and less intrusive than a built-in site feature. I have no clue if you're actually getting enough participation in the markets to be useful - it doesn't look like it at first glance, but perhaps I'm wrong.

It does seem a little weird (and cool, but mostly in the "experiment that may fail, or may work so well we use it elsewhere" way) to have yet another voting mechanism for posts. I kind of like the explicitness of "make a prediction about the future value of this post" compared to "loosely-defined up or down".
Neel Nanda
I only ever notice it on my own posts when I get a notification about it

Hi everyone! I'm new to LW and wanted to introduce myself. I'm from the SF Bay Area and working on my PhD in anthropology. I study AI safety, and I'm mainly interested in research efforts that draw methods from the human sciences to better understand present and future models. I'm also interested in AI safety's sociocultural dynamics, including how ideas circulate in the research community and how uncertainty figures into our interactions with models. All thoughts and leads are welcome.

This work led me to LW. Originally all the content was overwhelming bu... (read more)

habryka
Welcome! I hope you have a good time here!

Hey, I'm new to LessWrong and working on a post - however, at some point the guidelines which pop up at the top of a fresh account's "new post" screen went away, and I cannot find the same language in the New Users Guide or elsewhere on the site.

Does anyone have a link to this? I recall a list of suggestions like "make the post object-level," "treat it as a submission for a university," "do not write a poetic/literary post until you've already gotten a couple object-level posts on your record."

It seems like a minor oversight if it's impossible to find certa... (read more)

RobertM
EDIT: looks like habryka got there earlier and I didn't see it: https://www.lesswrong.com/posts/zXJfH7oZ62Xojnrqs/#sLay9Tv65zeXaQzR4

Intercom is indeed hidden on mobile (since it'd be pretty intrusive at that screen size).
Nevin Wetherill
Thanks anyway :) Also, yeah, makes sense. Hopefully this isn't a horribly misplaced thread taking up people's daily scrolling bandwidth with no commensurate payoff.

Maybe I'll just say something here to cash out my impression of the "first post" intro-message in question: its language has seemed valuable to my mentality in writing a post so far. Although, I think I got a mildly misleading first impression of how serious the filter was. The first draft of a post I half-finished was a fictional explanatory dialogue involving a lot of extended metaphors... After reading the message, I had the mental image of getting banned immediately with a message like "oh, c'mon, did you even read the prompt?"

Still, that partially mistaken mental frame made me go read more documentation on the editor and take a more serious approach to planning a post. A bit like a very mild temperature-drop shock to read "this is like a university application." I grok the intent, and I'm glad the community has these sorts of norms. It seems likely to help my personal growth agenda on some dimensions.
habryka
It's not the most obvious place, but the content lives here: https://www.lesswrong.com/posts/zXJfH7oZ62Xojnrqs/lesswrong-moderation-messaging-container?commentId=sLay9Tv65zeXaQzR4 
Nevin Wetherill
Thanks! :) Yeah, I don't know if it's worth it to make it more accessible. I may have just failed a Google "keyword in quotation marks" search, or failed to notice a link when searching via LessWrong's search feature. Actually, an easy fix would just be for Google to improve their search tools, so that I can locate any link, regardless of how specific, for any public webpage just by ranting at my phone. Anyway, thanks as well to Ben for tagging those mod-staff people.
Ben Pace
@Ruby @RobertM 

Some features I'd like:

a 'mark read' button next to posts so I could easily mark as read posts that I've read elsewhere (e.g. ones cross-posted from a blog I follow)

a 'not interested' button which would stop a given post from appearing in my latest or recommended lists. Ideally, this would also update my recommended posts so as to recommend fewer posts like that to me. (Note: the hide-from-front-page button could be this if A. It worked even on promoted/starred posts, and B. it wasn't hidden in a three-dot menu where it's frustrating to access)

a 'read late... (read more)

Alex_Altair
You can "bookmark" a post; is that equivalent to your desired "read later"?
Nathan Helm-Burger
Yeah, I should use that. I'd need to remember to unbookmark after reading, I suppose.

Someone strong-downvoted a post/question of mine with a downvote strength of 10, if I remember correctly.

I had initially just planned to keep silent about this, because it's their right to do that if they think the post is bad or harmful.

But since the downvote, I can't shake off the curiosity of why that person disliked my post so strongly—I'm willing to pay $20 for two or three paragraphs of explanation from that person of why they downvoted it.

ektimo
Maybe because somebody didn't think your post qualified as a "question"? I don't see any guidelines on what qualifies as a "question" versus a "post" -- and personally I wouldn't have downvoted because of this -- but your question seems a little long/opinionated.
niplav
Thanks, that makes sense.
Linch
"This is more of a comment than a question" as they say
ektimo
Btw, I really appreciate it when people explain downvotes, and it would be great if there were some way to still allow unexplained downvotes while incentivizing adding explanations. Maybe a way (attached to the post) for people to guess why other people downvoted?
habryka
Yeah, I feel kind of excited about having some strong-downvote and strong-upvote UI which gives you one of a standard set of options for explaining your vote, or allows you to leave it unexplained, all anonymous.

PSA: Tooth decay might be reversible! The recent discussion around the Lumina anti-cavity prophylaxis reminded me of a certain dentist's YouTube channel I'd stumbled upon recently, claiming that tooth decay can be arrested and reversed using widely available over-the-counter dental care products. I remember my dentist from years back telling me that if regular brushing and flossing doesn't work, and the decay is progressing, then the only treatment option is a filling. I wish I'd known about alternatives back then, because I definitely would have tried tha... (read more)

Nathan Helm-Burger
I've been using a remineralization toothpaste imported from Japan for several years now, ever since I mentioned reading about remineralization to a dentist from Japan. She recommended the brand to me. The FDA is apparently bogging down its release in the US, but it's available on Amazon anyway. It seems to have slowed, but not stopped, the formation of cavities. It does seem to result in faster plaque build-up around my gumline, like the bacterial colonies are accumulating some of the minerals not absorbed by the teeth. The brand I use is Apagard.

[Edit: I'm now trying the recommended mouthwash CloSys as the link above recommends, using it before brushing, and using Listerine after. The CloSys seems quite gentle and pleasant as a mouthwash. Listerine is harsh, but does leave my teeth feeling cleaner for much longer. I'll try this for a few years and see if it changes my rate of cavity formation.]
gilch
That dentist on YouTube also recommended a sodium fluoride rinse (ACT) after the Listerine, and mentioned somewhere that if you could get your teenager to use only one of the three rinses, it should be the fluoride. (I've heard others suggest waiting 20 minutes after brushing before rinsing out the toothpaste, to allow more time for the fluoride in the toothpaste to work.) She also mentioned that the brands involved sell multiple formulations with different concentrations and even different active ingredients (some of which may even be counterproductive), and she can't speak to the efficacy of the treatment if you don't use the exact products that she has experience with.
dirk
I apologize for my lack of time to find the sources for this belief, so I could well be wrong, but my recollection from looking up a similar idea is that I found it to be reversible only in the very earliest stages, when the tooth has weakened but not yet developed a cavity proper.
gilch
I didn't say "cavity"; I said "tooth decay". No one is saying remineralization can repair a chipped, cracked, or caved-in tooth. But this dentist does claim that the decay (caries) can be reversed even after it has penetrated the enamel and reached the dentin, although it takes longer (a year instead of months), by treating the underlying bacterial infection and promoting mineralization. It's not clear to me if the claim is that a small hole can fill in on its own; a larger one probably won't, although the necessary dental treatment (filling) in that case will be less invasive if the surrounding decay has been arrested. I am not claiming to have tested this myself. This is hearsay. But the protocol is cheap to try, and the mechanism of action seems scientifically plausible given my background knowledge.
jmh

How efficient are equity markets? No, not in the EMH sense. 

My take is that market efficiency, viewed from economics/finance, is about total surplus maximization -- the area between the supply and demand curves. Clearly, when S and D are order schedules and P and Q correspond to the S&D intersection, one maximizes the area of the triangle defined in the graph.

But existing equity markets don't work off an ordered schedule; they largely match trades in a somewhat random order -- people place orders (bids and offers) throughout the day, and as they come in ... (read more)
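To make the surplus comparison concrete, here's a minimal sketch in Python with made-up per-share valuations (the helper functions and numbers are hypothetical, purely illustrative of batch-auction versus random-order matching, not of how any real exchange works):

```python
import random

# Hypothetical per-share valuations: buyers' maximum willingness to pay,
# sellers' minimum willingness to accept.
buyers = [105, 103, 101, 99, 97]
sellers = [96, 98, 100, 102, 104]

def call_auction_surplus(buyers, sellers):
    """Uniform-price batch auction: sort both sides into schedules and match
    the highest bids against the lowest offers while they still cross.
    This is the surplus-maximizing matching."""
    bids = sorted(buyers, reverse=True)
    asks = sorted(sellers)
    return sum(b - a for b, a in zip(bids, asks) if b > a)

def random_match_surplus(buyers, sellers, seed=0):
    """Crude stand-in for continuous trading: orders arrive in random order,
    and any crossing pair trades immediately."""
    rng = random.Random(seed)
    bids, asks = buyers[:], sellers[:]
    rng.shuffle(bids)
    rng.shuffle(asks)
    return sum(b - a for b, a in zip(bids, asks) if b > a)

print(call_auction_surplus(buyers, sellers))   # 15 = (105-96) + (103-98) + (101-100)
print(random_match_surplus(buyers, sellers))   # typically less, depending on arrival order
```

The batch auction pairs the highest-value buyer with the lowest-cost seller, so no crossing pair is wasted; with random arrival, a low-cost seller can get used up by a low-value buyer, which is one way of reading the worry above.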

Thomas Kwa
In practice it is not as bad as uniform volume throughout the day would be, for two reasons:

* Market-makers narrow spreads to prevent any low-value-exchange pairings that would be predictable price fluctuations. They do extract some profits in the process.
* Volume is much higher near the open and close.

I would guess that any improvements of this scheme would manifest as tighter effective spreads, and a reduction in the profits of HFT firms (which seem to provide less value to society than other financial firms).
jmh
I had perhaps a bit unjustly tossed the market-maker role into that "not real bid/offer" bucket. I also agree they do serve to limit the worst-case matches. But such a role would simply be unnecessary, so I still wonder about the cost in terms of the profits captured by the market makers. Is that a necessary cost in today's world? Not sure. And I do say that as someone who is fairly active in the markets and has taken advantage of thin markets in the off-market-hours sessions, where spreads can widen up a lot.

Feature Suggestion: add a number to the hidden author names.

I enjoy keeping the author names hidden when reading the site, but find it difficult to follow comment threads when there isn't a persistent id for each poster. I think a number would suffice while keeping the hiddenness.

Any thoughts on Symbolica? (or "categorical deep learning" more broadly?)

All current state of the art large language models such as ChatGPT, Claude, and Gemini, are based on the same core architecture. As a result, they all suffer from the same limitations.

Extant models are expensive to train, complex to deploy, difficult to validate, and infamously prone to hallucination. Symbolica is redesigning how machines learn from the ground up. 

We use the powerfully expressive language of category theory to develop models capable of learning algebraic structur... (read more)
Garrett Baker
A new update
faul_sname
I'd bet against anything particularly commercially successful. Manifold could give better and more precise predictions if you operationalize "commercially viable".
Garrett Baker
Is this coming from deep knowledge of Symbolica's method, or just from outside-view considerations like "usually people trying to think too big-brained end up failing when it comes to AI"?
faul_sname
Outside view (bitter lesson). Or at least that's approximately true. I'll have a post on why I expect the bitter lesson to hold eventually, but it's likely to be a while. If you read this blog post you can probably predict my reasoning for why I expect "learn only clean composable abstractions where the boundaries cut reality at the joints" to break down as an approach.
Garrett Baker
I don't think the bitter lesson strictly applies here. Since they're doing learning, and the bitter lesson says "learning and search are all that is good", I think they're in the clear, as long as what they do is compute-scalable. (This is different from saying there aren't other reasons an ignorant person (a word I like more than "outside view" in this context, since it doesn't hide the lack of knowledge) may use to conclude they won't succeed.)
faul_sname
There are commercially valuable uses for tools for code synthesis and theorem proving. But structured approaches of that flavor don't have a great track record of, e.g., doing classification tasks where the boundary conditions are messy and chaotic, and similarly for a bunch of other tasks where gradient-descent-lol-stack-more-layers ML shines.

I’m in the market for a new productivity coach / accountability buddy, to chat with periodically (I’ve been doing one ≈20-minute meeting every 2 weeks) about work habits, and set goals, and so on. I’m open to either paying fair market rate, or to a reciprocal arrangement where we trade advice and promises etc. I slightly prefer someone not directly involved in AGI safety/alignment—since that’s my field and I don’t want us to get nerd-sniped into object-level discussions—but whatever, that’s not a hard requirement. You can reply here, or DM or email me. :) update: I’m all set now

Hi everyone, my name is Mickey Beurskens. I've been reading posts for about two years now, and I would like to participate more actively in the community, which is why I'll take the opportunity to introduce myself here.

In my daily life I do independent AI engineering work (contracting, mostly). About three years ago a then-colleague introduced me to HPMOR, which was a wonderful introduction to what would later become some quite serious deliberations on AI alignment and LessWrong! After testing out rationality principles in my life I was convinced th... (read more)

habryka
Hello and welcome! I also found all of this stuff via HPMoR many years ago. Hope you have a good time commenting more!
Mickey Beurskens
It took some time to go from reading to commenting, so I appreciate the kind words!

Hello, I'm Marius, an embedded SW developer looking to pivot into AI and machine learning.

I've read the Sequences and am reading ACX somewhat regularly.

Looking forward to fruitful discussions.

Best wishes,
Marius Nicoară

habryka
Welcome! Looking forward to having you around!

I'm neither of these users, but for temporarily secret reasons I care a lot about having the Geometric Rationality and Maximal Lottery-Lottery sequences be slightly higher-quality.

The reason is not secret anymore! I have finished and published a two-post sequence on maximal lottery-lotteries.

jenn

I'm trying to remember the name of a blog. The only things I remember about it are that it's at least a tiny bit linked to this community, and that there is some sort of automatic decaying-endorsement feature. Like, there was a subheading indicating the likely percentage of claims the author no longer endorses, based on the age of the post. Does anyone know what I'm talking about?

Raemon
The Ferrett.
jenn
That's it! Thank you.

Is there a post in the Sequences about when it is justifiable not to pursue going down a rabbit hole? It's a fairly general question, but the specific context is a tale as old as time. My brother, who had been an atheist for decades, moved to Utah. After 10 years, he now asserts that he was wrong, and that his "rigorous pursuit" of verifying with logic and his own eyes leads him to believe the Bible is literally true. I worry about his mental health, so I don't want to debate him, but I felt like I should give some kind of justification for why I'm not personally ... (read more)

nim
More concrete than your actual question, but there are a couple of options you can take:

* Acknowledge that there's a form of social truth whereby the things people insist upon believing are functionally true. For instance, there may be no absolute moral value to criticism of a particular leader, but in certain countries the social system creates a very unambiguous negative value to it. Stick to the observable -- if he does an experiment, replicate that experiment for yourself and share the results. If you get different results, examine why. IMO, attempting in good faith to replicate whatever experiments have convinced him that the world works differently from how he previously thought would be the best steelman for someone framing religion as rationalism.
* There is of course the "which Bible?" question. Irrefutable proof of the veracity of the Old Testament, if someone had it, wouldn't answer the question of which modern religion incorporating it is "most correct".
* It's entirely valid and consistent with rationalism to have the personal preference to not accept any document as fully and literally true. If you can gently find out how he handles the internal contradictions (https://en.wikipedia.org/wiki/Internal_consistency_of_the_Bible), you've got a ready-made argument for taking some things figuratively.

And as unsolicited social advice, distinct from the questions of rationalism: don't strawman him into someone who criticizes your atheism until he, as an actual human, tells you what actual critiques (if any) he has. That's not nice. What is nice is to frame it as a harm-reduction option, because organized religion can be great for some people with mental health struggles, and to tell him the truth about what you see in his current behavior that you like and support. For instance, if his church gets him more involved with the community, or encourages him to do more healthy behaviors or fewer unhealthy ones, maintain common ground by endorsing the outcomes of his bel
gilch
If the Utah mention means the Mormons in particular, their standard answer is that the Bible is only correct "as far as it is translated correctly" (that phrasing appears in their extended canon), which is a motte they can always retreat to if one presses them too hard on Biblical correctness generally. However, that doesn't apply to the rest of their canon, so pressure may be more fruitful there. (If it's not the Mormons, the rest of my comment probably isn't relevant either.) The Book of Mormon would at least narrow it down to the LDS movement, although there have been a few small schisms in their relatively short history.

I disagree with the "replicate his experiments" suggestion, though. The experiment the Mormon missionaries will insist on is Moroni's Promise: read the Book of Mormon and then pray to God for a spiritual confirmation. The main problem with this experiment should be obvious to any good scientist: no controls. To be fair, one should try the experiment on many other books (holy or otherwise) to see if there are any other hits. Also, a null result is invariably interpreted as failing to do the experiment correctly, because it's guaranteed by God, see, it's right there in the book. The inability to accept a negative outcome is also rather unscientific. And finally, a "spiritual confirmation" will be interpreted for you as coming from (their particular version of) God, rather than some other explanation for a human emotional response, which, as we all know, can be achieved in numerous other ways that don't particularly rely on God as an explanation. Make the experiment fair before you agree to play with a stacked deck!
3kromem
If your brother has a history of being rational and evidence-driven, you might encourage him to spend some time lurking on /r/AcademicBiblical on Reddit. They require citations for each post or comment, so he may be frustrated if he tries to participate, especially in the midst of a mental health crisis, but lurking would be very informative very quickly. I was a long-time participant there before leaving Reddit, and it's a great place for evidence-driven discussion of the texts. It's a mix of atheists, Christians, Jews, Muslims, Norse pagans, etc. (I'm an agnostic myself who strongly believes we're in a simulation, so it really was all sorts there.) It might be a healthy reality check to apologist literalism, even if not necessarily disrupting a newfound theological inclination. The nice thing about a rabbit hole is that, while not always, it's often the case that someone else has already traveled down whichever one you aren't up for descending into. (Though I will say in its defense, that particular field is way more interesting than you'd ever think if you never engaged with the material through an academic lens. There are a lot of very helpful lessons in critical analysis wrapped up in the field, given the strong anchoring and survivorship biases and how those are handled both responsibly and irresponsibly by different camps.)
2gilch
Theism is a symptom of epistemic deficiency. Atheism follows from epistemic sufficiency, but not all atheists are rational or sane. The epistemically virtuous do not believe on insufficient evidence, nor ignore or groundlessly dismiss evidence relevant to beliefs they hold. That goes for both of you. The Litany of Tarski is the correct attitude for a rationalist, and it's about not thumbing the scales. If your brother were sane (by rationalist standards), he would not hold such a belief, given the state of readily available evidence. If he hasn't figured this out, it's either because he's put his thumb on the scales or refuses to look. Organized religions (that have survived) teach their adherents not to look (ironically), and that it is virtuous to thumb the scales (faith), and that is something they have in common with cults, although not always to the same degree. These tactics are dark arts: symmetric weapons that can promote false beliefs just as easily as true ones.

If you feel like talking to him about it, but don't want it to devolve into a debate, Street Epistemology is a pretty good approach. It can help dislodge irrational beliefs without attacking them directly, by instead promoting better epistemics (by Socratically poking holes in bad epistemics). To answer your direct question, I think Privileging the Hypothesis is pretty relevant. Einstein's Arrogance goes into more detail about the same key rationality concept of locating the hypothesis.

Hi, to whom it may concern,

You could say I have a technical moat in a certain area, and I came across an idea (or cluster of ideas) that seemed unusually connected and potentially alignment-significant, but whose publication seems potentially capabilities-enhancing. (I consulted with one other person, and they also found it difficult to ascertain or summarize.)

I was considering writing to EY on here, as an obvious person who would both be more likely to be able to assess plausibility/risk across a less familiar domain and have an idea of what to do next. Is t... (read more)

3Nathan Helm-Burger
EY may be too busy to respond, but you can probably feel pretty safe consulting with MIRI employees in general. Perhaps also Conjecture and Redwood Research employees, if you read and agree with their views on safety. That at least gives you a wider net of people who could potentially give you feedback.

I have the mild impression that Jacqueline Carey's Kushiel trilogy is somewhat popular in the community.[1] Is that true, and if so, why?

  1. ^

    E.g. Scott Alexander references Elua in Meditations on Moloch, and I know of at least one prominent LWer who was a big enough fan of it to reference Elua in their Discord handle.

4habryka
My model is that it's mostly popular because Scott Alexander referenced it in Meditations on Moloch, and the rest is just kind of background popularity, but I'm not sure.

Unsure if there is normally a thread for posting only semi-interesting news articles, but here is a recently published Wired article that seems rather inflammatory toward Effective Altruism. I have not read the article in full yet, but a quick skim confirms that the title is not just bait for anger clicks; the rest of the article also seems extremely critical of EA, transhumanism, and Rationality.

I am going to post it here, though I am not entirely sure if getting this article more clicks is a good thing, so if you have no interest in read... (read more)

4HiddenPrior
I did a non-in-depth reading of the article during my lunch break and found it to be of lower quality than I would have predicted. I am open to an alternative interpretation of the article, but most of it seems very critical of the Effective Altruism movement on the basis that "calculating expected values for the impact on people's lives is a bad method to gauge the effectiveness of aid, or how you are impacting people's lives."

The article begins by establishing that many medicines have side effects. Since some of these side effects are undesirable, the author suggests, though they do not state explicitly, that the medicine may also be undesirable if the side effect is bad enough. They go on to suggest that GiveWell and other EA aid efforts are not very aware of the side effects of their efforts, and that the efforts may therefore do more harm than good. The author does not stoop so low as to actually provide evidence of this, or even make any explicit claims that could be checked or contradicted, but merely suggests that GiveWell does not do a good job of this.

This is the less charitable part of my interpretation (no pun intended), but I feel the author spends a lot of the article suggesting that trying to be altruistic, especially in an organized or systematic way, is ineffective, maybe harmful, and generally not worth the effort. Mostly the author does this by relating anecdotal stories of their investigations into charity, and how they feel much wiser now.

The author then moves on to their association of SBF with Effective Altruism, going so far as to say: "Sam Bankman-Fried is the perfect prophet of EA, the epitome of its moral bankruptcy." In general, the author goes on to give a case for how SBF is the classic utilitarian villain, justifying his immoral acts through oh-so-esoteric calculations of improving good around the world on net. The author goes on to lay out a general criticism of Effective Altruism as relying on arbitrary utilit

I came across a poll about exchanging probability estimates with another rationalist: https://manifold.markets/1941159478/you-think-something-is-30-likely-bu?r=QW5U.

You think something is 30% likely but a friend thinks 70%. To what does that change your opinion?

I feel like there could be specially constructed problems where the resulting probability is 0, but I haven't been able to construct an example. Are there any?

There is a box which contains money iff the front and back are painted the same color. Each side is independently blue with probability 30% and red with probability 70%. You observe that the front is blue, and your friend observes that the back is red.
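To spell out the arithmetic: given only the blue front, your probability that the box contains money is P(back blue) = 30%; given only the red back, your friend's is P(front red) = 70%; but pooling both observations, the two sides definitely differ, so the combined probability is 0. A minimal Monte Carlo sketch in Python, using the numbers from the example above:

```python
import random

N = 1_000_000
mine, friends, pooled = [], [], []

for _ in range(N):
    front_blue = random.random() < 0.3  # each side is blue with prob 0.3
    back_blue = random.random() < 0.3
    money = front_blue == back_blue     # money iff both sides match

    if front_blue:                      # my observation: front is blue
        mine.append(money)
    if not back_blue:                   # friend's observation: back is red
        friends.append(money)
    if front_blue and not back_blue:    # both observations pooled
        pooled.append(money)

print(sum(mine) / len(mine))        # ~0.30 = P(money | front blue)
print(sum(friends) / len(friends))  # ~0.70 = P(money | back red)
print(sum(pooled) / len(pooled))    #  0.0  = P(money | front blue, back red)
```

The point is that the two estimates are conditioned on different evidence, so the right way to combine them is to pool the evidence, not to average the numbers.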

4Throwaway2367
"No one assigns 70% to this statement." (Yes, your friend is an idiot, but that can be remedied, if needed, with a slight modification in the statement)
[-]jmh30

I don't think this crosses the line regarding politics on the board, but note that as a warning header.

I was just struck by a thought related to the upcoming elections in the USA. The ages of both parties' candidates have been noted and create both some concern and even risk for the country.

No upper age limits exist, and I suspect trying to get legislative action on that would be slow to impossible, as it would undoubtedly require a new amendment to the Constitution.

BUT, I don't think there is any law or other restriction on any political party imposing their own age limit... (read more)

3MondSemmel
From what I understand, because the US Electoral College is structured such that state laws determine who the electors will vote for as president, you wouldn't need any constitutional amendment or federal legislative action to impose an age limit for the US presidential election in particular. In contrast, I think the lower age limit of 35 for US presidents is a constitutional requirement, and as such would not be nearly as easy to change.

On a somewhat related note, there's an interesting attempt by US states to assign electoral votes based on the national popular vote. Based on this Wikipedia quote, I imagine states could impose arbitrary requirements for who can or cannot receive the electoral votes, including an age limit. Basically, add a clause to the state laws that "Electors must abstain if the winner of the plurality does not fulfill the following requirements...".

EDIT: Note, however, that if no candidate gets a majority of the electoral vote (270+ votes), then the US House of Representatives elects the US President instead. So while such a state law would disincentivize particular candidates, if such a candidate ran for president anyway and won the plurality of the state vote, then the abstention of the electors might well result in the Electoral College disempowering itself. And furthermore, the House of Representatives could still elect an arbitrary candidate.

EDIT2: Okay, I think I've come up with a better state law design: if the winner of the plurality of state votes exceeds the age limit, then assign the electoral votes either to the second place instead (regardless of their age), or alternatively to whichever of the top two candidates is younger. Either version ensures that the Electoral College will not abstain, which makes the House of Representatives route less likely. And either version disincentivizes a scenario where the presidential candidates of both parties exceed the age limit, since in this case, both parties are incentivized t
1James Camacho
Age limits do exist: you have to be at least 35 to run for President, at least 30 for Senator, and at least 25 for Representative. These minimums automatically add a decade or two to your candidates' ages.
[-]pom20

Hi, I am new to the site, having just registered. After reading through a couple of the posts referenced in the suggested reading list, I felt comfortable enough to try to participate on this site. I feel I could possibly add something to some of the discussions here, though time will tell. I did land on this site "through AI", so we'll see if that means this isn't a good place for me to land and/or pass through. Though I am slightly bending the definition of that quote and its context here (maybe). Or does finding this site by questioning an AI about possib... (read more)

I didn't get any replies on my question post re: the EU parliamentary election and AI x-risk, but does anyone have a suggestion for a party I could vote for (in Germany) when it comes to x-risk?

1Amalthea
I think the best bet is to vote for a generally reasonable party. Despite their many flaws, it seems like the Green Party or SPD are the best choices right now. (The CDU seems too influenced by business interests; the current FDP is even worse.) The alternative would be to vote for a small party with a good agenda to help signal-boost them, but I don't know who's around these days.