If it’s worth saying, but not worth its own post, here's a place to put it.

If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.

If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.

If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.

The Open Thread tag is here. The Open Thread sequence is here.


Hello! I'm running the Unofficial LessWrong Community Survey this year, and we're down to the last week it's open. If you're reading this open thread, I think you're in the target audience. 

I'd appreciate it if you took the survey.

If you're wondering what happens with the answers: they get used for big analysis posts like this one, the data gets published so you can use it to answer questions about the community, and it sometimes guides the decision-making of people who work on or run things in the community. (Including me!)

What is the current status of CFAR? Its website seems inactive, which I find surprising given that there were four weekend workshops in 2022 that CFAR wanted to use to improve its workshops.


Is anyone else on this website making YouTube videos? LessWrong is great, but if you want to broadcast to a larger audience, video seems like the place to be. I know Rational Animations makes videos. Is there anyone else? Are you, personally, making any?

Robert Miles has a channel popularizing AI safety concepts.

[Edit] Also, Manifold Markets has recordings of the talks at the Manifest conferences.

Hi! I'm Embee but you can call me Max.

I'm a graduate student in mathematics for quantum physics, considering redirecting my focus toward AI alignment research. My background includes:
- Graduate-level mathematics
- Focus on quantum physics
- Programming experience with Python
- Interest in type theory and formal systems

I'm particularly drawn to MIRI-style approaches and interested in:
- Formal verification methods
- Decision theory implementation
- Logical induction
- Mathematical bounds on AI systems

My current program feels too theoretical and disconnected from urgent needs. I'm looking to:
- Connect with alignment researchers
- Find concrete projects to contribute to
- Apply mathematical rigor to safety problems
- Work on practical implementations

Regarding timelines: I have significant concerns about rapid capability advances, particularly given recent developments (o3). I'm prioritizing work that could contribute meaningfully in a compressed timeframe.

Looking for guidance on:
- Most neglected mathematical approaches to alignment
- Collaboration opportunities
- Where to start contributing effectively
- Balance between theory and implementation

If you're interested in mathematical bounds on AI systems and you haven't seen it already, check out "Ultimate Physical Limits to Computation" by Seth Lloyd (https://arxiv.org/pdf/quant-ph/9908043) and related works. Online I've been jokingly saying "intelligence has a speed of light." We know intelligence involves computation, so there has to be some upper bound at some point. But until we define some notion of an Atomic Standard Reasoning Unit of Inferential Distance, we don't have a good way of talking about how much more efficient a computer like you or me is compared to Claude at, for example, natural language generation.
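Lloyd's paper gives one concrete version of that upper bound: a system of total energy E can perform at most 2E/(πℏ) elementary operations per second (the Margolus-Levitin theorem). A quick back-of-the-envelope check, reproducing Lloyd's figure for a 1 kg "ultimate laptop":

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def lloyd_ops_per_second(mass_kg: float) -> float:
    """Margolus-Levitin / Lloyd bound: at most 2E / (pi * hbar)
    operations per second for a system of total energy E = m c^2."""
    energy = mass_kg * C ** 2
    return 2 * energy / (math.pi * HBAR)

# Lloyd's 1 kg "ultimate laptop": roughly 5.4e50 ops/s
print(f"{lloyd_ops_per_second(1.0):.2e}")  # → 5.43e+50
```

This is a ceiling from physics alone, of course; it says nothing about how efficiently any given architecture (biological or artificial) uses its energy budget.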

Is there an aphorism for the mistake of confusing P(E|H) with P(H|E)?

Suggestions: 

  1. Thou shalt not reverse thy probabilities!
  2. Thou shalt not mix up thy probabilities!

Arbital conditional probability

Example 2

Suppose you're Sherlock Holmes investigating a case in which a red hair was left at the scene of the crime.

The Scotland Yard detective says, "Aha! Then it's Miss Scarlet. She has red hair, so if she was the murderer she almost certainly would have left a red hair there. P(red hair ∣ Scarlet) = 99%, let's say, which is a near-certain conviction, so we're done."

"But no," replies Sherlock Holmes. "You see, but you do not correctly track the meaning of the conditional probabilities, detective. The knowledge we require for a conviction is not P(red hair ∣ Scarlet), the chance that Miss Scarlet would leave a red hair, but rather P(Scarlet ∣ red hair), the chance that this red hair was left by Scarlet. There are other people in this city who have red hair."

"So you're saying..." the detective said slowly, "that P(red hair ∣ Scarlet) is actually much lower than 1?"

"No, detective. I am saying that just because P(red hair ∣ Scarlet) is high does not imply that P(Scarlet ∣ red hair) is high. It is the latter probability in which we are interested - the degree to which, knowing that a red hair was left at the scene, we infer that Miss Scarlet was the murderer. This is not the same quantity as the degree to which, assuming Miss Scarlet was the murderer, we would guess that she might leave a red hair."

"But surely," said the detective, "these two probabilities cannot be entirely unrelated?"

"Ah, well, for that, you must read up on Bayes' rule."
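Holmes's point is easy to check numerically with Bayes' rule. A minimal sketch, with made-up numbers for the prior and the base rate of red-hair evidence:

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
# Hypothetical numbers: P(E|H) is high, yet P(H|E) comes out low.

p_scarlet = 0.01              # prior: Miss Scarlet is the murderer
p_hair_given_scarlet = 0.99   # she would almost certainly leave a red hair
p_hair_given_other = 0.10     # chance some other murderer leaves a red hair

# Total probability of the evidence, P(red hair), by the law of total probability
p_hair = (p_hair_given_scarlet * p_scarlet
          + p_hair_given_other * (1 - p_scarlet))

# Posterior: P(Scarlet | red hair)
p_scarlet_given_hair = p_hair_given_scarlet * p_scarlet / p_hair

print(round(p_scarlet_given_hair, 3))  # → 0.091
```

So even with P(red hair ∣ Scarlet) = 99%, the posterior P(Scarlet ∣ red hair) is about 9%: far from a near-certain conviction. The two probabilities are related only through the prior and the base rate.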

Hey, can anyone help me find this LW (likely but could be diaspora) article, especially if you might have read it too?

My vague memory: it was talking about (among other things?) some potential ways of extending point-estimate probability predictions and calibration curves to situations where making a prediction in one way affects the outcome, e.g. when a mind-reader/accurate-simulator bases its actions on your prediction. In that case a two-dimensional probability estimate might be more appropriate: if 40% is predicted for event A, event B will have a probability of 60%; if 70% for event A, then 80% for event B; and so on, a mapping potentially defined continuously over the whole range. (Event A and event B might be the same.) IIRC the article contained 2D charts where curves and rectangles were drawn for illustration.

IIRC it didn't have too many upvotes, more like around low-dozen, or at most low-hundred.

Searches I've tried so far: Google, Exa, Gemini 1.5 with Deep Research, Perplexity, OpenAI GPT-4o with Search.

P.S. If you are also unable to put enough time into finding it, do you have any ideas for how it could be found?

I have found it! This was the one:

The winning search strategy was quite interesting as well I think:

I took the history of all the LW articles I have roughly ever read; I had easy access to the titles and URLs, but not the article contents. I fed them one by one into a 7B LLM, asking it to rate, based on the title alone, how likely the unseen article content was to match what I described above, as vague as that memory may be. Then I looked at the highest-ranking candidates, and they were duds. Did the same thing with a 70B model, et voilà, the solution was indeed near the top.
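The strategy above fits in a few lines. Here `score_title` is a hypothetical stand-in for whatever LLM call does the rating; anything that maps a title to a relevance score works:

```python
from typing import Callable

def rank_articles(titles: list[str],
                  score_title: Callable[[str], float],
                  top_k: int = 20) -> list[tuple[float, str]]:
    """Score each title with an LLM (or any scorer) and return
    the top_k candidates, highest score first."""
    scored = [(score_title(t), t) for t in titles]
    scored.sort(reverse=True)
    return scored[:top_k]

# Toy scorer standing in for the LLM call: favors calibration-related titles.
def toy_score(title: str) -> float:
    keywords = ("calibration", "prediction", "probability")
    return sum(kw in title.lower() for kw in keywords)

titles = ["Notes on Bayes", "Two-dimensional calibration curves", "Meetup recap"]
print(rank_articles(titles, toy_score, top_k=1))
```

In practice the scorer is the expensive part, so the main knobs are batching the titles per prompt and how large a model you need before the ranking becomes useful (7B vs 70B, per the anecdote).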

Now I just need to re-read it to see if it was worth dredging up. I guess once a problem starts to itch, it's hard to resist solving it.

Now seems like a good time to have a New Year's Predictions post/thread?