Comments

My not-very-deep understanding is that phytosterols (plant sterols) are a bit iffy: most people don't absorb much from dietary phytosterols, so they don't end up doing much, but the few people with genetic mutations that cause phytosterol hyperabsorption usually suffer worse health outcomes as a result. Is my understanding wrong, and is there some other benefit to seeking out supplemental phytosterols?

Edit: To be clear, there is research showing a measured reduction in cholesterol from phytosterol supplementation, but I'm a bit confused about how that's supposed to work, and I don't know enough about the field to know if this is one of those results I should side-eye.

I'm not familiar with how these things usually work, and I suspect other lurkers might be in the same boat, so:

  1. What kind of lodging is included? Would attendees just have their own hotel rooms near the venue, or is this more of an 'immersion' thing where everyone's under one roof for a weekend?
  2. How are expenses handled? Are there prepaid services, or would attendees submit expenses after the fact for reimbursement?
  3. About how many people are expected (rough order of magnitude)?

It seems that we have independently converged on many of the same ideas. Writing is very hard for me and one of my greatest desires is to be scooped, which you've done with impressive coverage here, so thank you.

Thanks for writing the simulators post! That crystallized a lot of things I had been bouncing around.

A decision transformer conditioned on an outcome should still predict a probability distribution, and generate trajectories that are typical of the training distribution given that the outcome occurs, which is not necessarily the sequence of actions most likely to result in the outcome.

That's a good framing.
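To make that concrete, here's a toy Bayes-rule sketch (made-up numbers, not from any real model): conditioning on the outcome reweights trajectories by how likely they make the outcome, but the training prior still pulls samples toward typical trajectories rather than outcome-maximizing ones.

```python
import numpy as np

# Toy illustration with made-up numbers: two candidate trajectories.
# tau_2 is more likely to produce the outcome, but tau_1 is far more
# typical under the training distribution.
p_tau = np.array([0.95, 0.05])              # prior P(tau) from training data
p_outcome_given_tau = np.array([0.2, 0.9])  # P(outcome | tau)

# Conditioning on the outcome: P(tau | outcome) is proportional to
# P(outcome | tau) * P(tau).
p_tau_given_outcome = p_outcome_given_tau * p_tau
p_tau_given_outcome /= p_tau_given_outcome.sum()

print(p_tau_given_outcome)  # ~[0.81, 0.19]: the typical trajectory still dominates,
                            # whereas argmax over P(outcome | tau) would pick tau_2.
```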

RL with KL penalties may also aim at a sort of calibration/conservatism, having, technically, a nonzero-entropy distribution as its optimal policy

I apparently missed this relationship before. That's interesting, and is directly relevant to one of the neuralese collapses I was thinking about.
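For reference, a minimal sketch of the standard result I think is being pointed at (made-up numbers): the optimal policy under a KL penalty to a reference policy is a reward-tilted version of that reference policy, so it keeps nonzero entropy rather than collapsing to an argmax.

```python
import numpy as np

# Sketch of the standard KL-regularized RL result: for the objective
#   E_pi[r] - beta * KL(pi || pi_ref),
# the optimal policy is pi*(a) proportional to pi_ref(a) * exp(r(a) / beta).
pi_ref = np.array([0.5, 0.3, 0.2])  # reference policy (e.g. the base model); made up
r = np.array([1.0, 2.0, 0.0])       # per-action reward; made up
beta = 1.0                          # KL penalty coefficient

pi_star = pi_ref * np.exp(r / beta)
pi_star /= pi_star.sum()

print(pi_star)  # every action keeps nonzero probability, so entropy stays > 0
```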

Sometimes it's clear how GPT leaks evidence that it's GPT, e.g. by getting into a loop.

Good point! That sort of thing does seem sufficient.

I have many thoughts about what an interpretable and controllable interface would look like, particularly for cyborgism (a rabbit hole I'm not going to go down in this comment), but I'm really glad you've come to the same question.

I look forward to reading it, should you end up publishing! It does seem like a load-bearing piece that I remain pretty uncertain about.

I do wonder if some of this could be pulled into the iterable engineering regime (in a way that's conceivably relevant at scale). Ideally, there could be a dedicated experiment to judge human ability to catch and control models across different interfaces and problems. That mutual information paper seems like a good step here, and InstructGPT is sorta-kinda a datapoint. On the upside, most possible experiments of this shape seem pretty solidly on the 'safety' side of the balance.

If by intelligence spectrum you mean variations in capability across different generally intelligent minds, such that there can be minds that are dramatically more capable (and thus more dangerous): yes, it's pretty important.

If it were impossible to make an AI more capable than the most capable human no matter what software or hardware architectures we used, and no matter how much hardware we threw at it, AI risk would be far less concerning.

But it really seems like AI can be smarter than humans. Narrow AIs (like MuZero) already outperform all humans at some tasks, and more general AIs like large language models are making remarkable and somewhat spooky progress.

Focusing on a very simple case, note that using bigger, faster computers tends to let you do more. Video games are a lot more realistic than they used to be. Physics simulators can simulate more. Machine learning involves larger networks. Likewise, you can run the same software faster. Imagine you had an AGI that demonstrated performance nearly identical to that of a reasonably clever human. What happens when you use enough hardware that it runs 1000 times faster than real time? Even if there are no differences in the quality of individual insights or the generality of its thought, thinking 1000 times faster than a human is, by itself, going to make it far, far more capable.
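Some rough arithmetic (illustrative only) to put numbers on what a 1000x speedup means:

```python
# Back-of-the-envelope arithmetic for a 1000x speedup (illustrative only).
speedup = 1000
seconds_per_subjective_day = 24 * 3600 / speedup
hours_per_subjective_year = 365 * seconds_per_subjective_day / 3600

print(seconds_per_subjective_day)  # ~86 wall-clock seconds per subjective day
print(hours_per_subjective_year)   # ~8.8 wall-clock hours per subjective year
```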

Seconded. I don't have a great solution for this, but this remains a coordination hole that I'd really like to see filled.

Yup. I'd liken it to the surreality of a bad dream where something irrevocable happens, except there's no waking up.

If you're reading this, porby, do you really want to be wrong?

hello this is porby, yes

This made me pace back and forth for about 30 minutes, trying to put words on exactly why I felt an adrenaline spike reading that bit.

I don't think your interpretation of my words (or words similar to mine) is unique, so I decided to write something a bit longer in response.

I went back and forth on whether I should include that bit for exactly that reason. Knowing something is possible is half the battle and such. I ended up settling on a rough rule for whether I could include something:

  1. It is trivial, or
  2. it is already covered elsewhere, that coverage goes into more detail, and the audience of that coverage is vastly larger than my own post's reach.

The more potentially dangerous an idea is, the stronger those requirements become.

Something like "single token prediction runs in constant time" falls under 1, while this fell under 2. There is technically nonzero added risk, but given the context and the lack of details, the risk seemed small enough that alluding to it as a discussion point was okay.

Hmm. Apparently you meant something a little more extreme than I first thought. It kind of sounds like you think the content of my post is hazardous.

I see this particular kind of prediction as a kind of ethical posturing and can't in good conscience let people make them without some kind of accountability.

Not sure what you mean by ethical posturing here. It's generally useful for people to put their reasoning and thoughts out in public so that other people can take from the reasoning what they find valuable, and making a bunch of predictions ahead of time makes the reasoning testable.

For example, I'd really, really like it if a bunch of people who think long timelines are more likely wrote up detailed descriptions of their models and made lots of predictions. Who knows, they might know things I don't, and I might change my mind! I'd like to!

People have been paid millions to work on predictions similar to these.

I, um, haven't. Maybe the FTX Future Fund will decide to throw money at me later if they think the information was worth it to them, but that's their decision to make.

If they are wrong, they should be held accountable in proportion to whatever cost they have incurred on society, big or small, financial or behavioural.

If I am to owe a debt to Society if I am wrong, will Society pay me if I am right? Have I established a bet with Society? No. I just spent some time writing up why I changed my mind.

Going through the effort to provide testable reasoning is a service. That's what FTX would be giving me money for, if they give me any money at all.

You may make the valid argument that I should consider possible downstream uses of the information I post, which I do! Not providing the information also has consequences. I weighed them to the best of my ability, but I just don't see much predictable harm from providing testable reasoning to an audience of people who understand reasoning under uncertainty. (Incidentally, I don't plan to go on cable news to be a talking head about ~impending doom~.)

I'm perfectly fine with taking a reputational hit for being wrong about something I should have known, or paying up in a bet when I lose. I worry what you're proposing here is something closer to "stop talking about things in public because they might be wrong and being wrong might have costs." That line of reasoning, taken to the limit, yields arresting seismologists.

As a reasonably active tall person, allow me to try to mitigate some of your sadness!

I suspect some people like me who eat time-optimized food do so because they have to eat a lot of food. I can eat 2000 calories worth of time efficient, nutrient dense food, and still go eat a big meal of conventionally tasty food with other people without blowing my calorie budget. Or I can eat breakfast, and then immediately leave to go eat re-breakfast because by the time I get there I'll be hungry again.

Trying to eat my entire calorie budget in more traditional ways would effectively mean I'm never doing anything but eating. I did that for a while, but it becomes a real chore.
