jessicata

Jessica Taylor. CS undergrad and Master's at Stanford; former research fellow at MIRI.

I work on decision theory, social epistemology, strategy, naturalized agency, mathematical foundations, decentralized networking systems and applications, theory of mind, and functional programming languages.

Blog: unstableontology.com

Twitter: https://twitter.com/jessi_cata

Dunno; gym membership also feels like a form of blackmail (although preferable to the alternative forms of blackmail), while home gym reduces the inconvenience of exercising.

I'm not sure what differentiates these in your mind. They both reduce the inconvenience of exercising, presumably? Also, in my post I'm pretty clear that it's not meant as a punishment type incentive:

And it’s prudent to take into account the chance of not exercising in the future, making the investment useless: my advised decision process counts this as a negative, not a useful self-motivating punishment.

...

Generally, it seems like the problem is signaling. You buy the gym membership to signal your strong commitment to yourself. Then you feel good about sending a strong signal. And then the next day you feel just as lazy as previously, and the fact that you already paid for the membership probably feels bad.

That's part of why I'm thinking an important step is checking whether one expects the action to happen if the initial steps are taken. If not, then it's less likely to be a good idea.

There is some positive function of the signaling / hyperstition, but it can lead people to be unnecessarily miscalibrated.

  1. I was already paying attention to Ziz prior to this.
  2. Ziz's ideology is already influential. I've been having discussions about which parts are relatively correct or not correct. This is a part that seems relatively correct and I wanted to acknowledge that.
  3. If engagement with Zizian philosophy is outlawed, then only outlaws have access to Zizian philosophy. Antimemes are a form of camouflage. If people refuse to see what is in front of them, people can coordinate crimes in plain sight. (Doesn't apply so much to this post, more of a general statement)
  4. The effect you're pointing to seems very small, if it exists at all, in terms of causing negative effects.

Okay, I don't think I was disagreeing except in cases of very light satisficer-type self-commitments. Maybe you didn't intend to express disagreement with the post, idk.

So far I don't see evidence that any LessWrong commenter has read the post or understood the main point.

Not disagreeing, but, I'm not sure what you are responding to? Is it something in the post?

We might disagree about the value of thinking about "we are all dead" timelines. To my mind, forecasting should be primarily descriptive, not normative; reality keeps going after we are all dead, and having realistic models of that is probably a useful input regarding what our degrees of freedom are. (I think people readily accept this in e.g. biology, where people can think about what happens to life after human extinction, or physics, where "all humans are dead" isn't really a relevant category that changes how physics works.)

Of course, I'm not implying it's useful for alignment to "see that the AI has already eaten the sun"; it's about forecasting future timelines by defining thresholds and thinking about when they're likely to happen and how they relate to other things.

(See this post, section "Models of ASI should start with realism")

I was trying to say things related to this:

In a more standard inference amortization setup one would e.g. train directly on question/answer pairs without the explicit reasoning path between the question and answer. In that way we pay an up-front cost during training to learn a "shortcut" between question and answers, and then we can use that pre-paid shortcut during inference. And we call that amortized inference.

Which sounds like supervised learning. Adam seemed to want to know how that relates to scaling up inference-time compute, so I described some ways they are related.

I don't know much about amortized inference in general. The Goodman paper seems to be about saving compute by caching results between different queries. This could be applied to LLMs, but I don't know of it being applied. It seems like you and Adam like this "amortized inference" concept, and since I'm new to it I don't have any relevant comments. (Yes, I realize my name is on a paper talking about this, but I actually didn't remember the concept.)
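As a toy illustration of the caching sense of "amortized inference" described above (my own sketch, not from the Goodman paper): memoize an expensive query function so that repeated queries pay the computation cost only once, and later calls reuse the stored result.

```python
from functools import lru_cache

# Counts how many times the expensive path actually runs.
call_count = 0

@lru_cache(maxsize=None)
def answer(query: str) -> str:
    """Stand-in for an expensive inference step (e.g. an LLM call)."""
    global call_count
    call_count += 1
    return query.upper()  # placeholder "answer"

answer("what is amortized inference")
answer("what is amortized inference")  # cache hit: no recomputation
```

Here the up-front cost of the first call is amortized across all later identical queries; `answer` and `call_count` are illustrative names, not anything from the paper.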

I don't think I implied anything about o3 relating to parallel heuristics.

I would totally agree they were directionally correct; I underestimated AI progress. I think Paul Christiano got it about right.

I'm not sure I agree about the use of hyperbolic words being "correct" here; surely, "hyperbolic" contradicts the straightforward meaning of "correct".

Partially, the state I was in around 2017 was: there were lots of people around me saying "AGI in 20 years", by which they meant a thing that shortly after FOOMs and eats the sun or something. I thought this was wrong and a strange set of belief updates (which were not adequately justified, and where some discussions were suppressed because "maybe it shortens timelines"). And I stand by "no FOOM by 2037".

The people I know these days who seem most thoughtful about the AI that's around and where it might go ("LLM whisperer" / cyborgism cluster) tend to think "AGI already, or soon" plus "no FOOM, at least for a long time". I think there is a bunch of semantic confusion around "AGI" that makes people's beliefs less clear, with "AGI is what makes us $100 billion" as a hilarious example of "obviously economically/politically motivated narratives about what AGI is".

So, I don't see these people as validating "FOOM soon" even if they're validating "AGI soon", and the local rat-community thing I was objecting to was something that would imply "FOOM soon". (Although, to be clear, I was still underestimating AI progress.)

I think this shades into dark forest theory. Broadly my theory about aliens in general is that they're not effectively hiding themselves, and we don't see them because any that exist are too far away.

Partially it's a matter of: if aliens wanted to hide, could they? Sure, eating a star would show up in terms of light patterns, but so would a civilization at the scale of 2025-Earth. And my argument is that these aren't that far off in cosmological terms (<10K years).

So, I really think alien encounters are in no way an urgent problem: we won't encounter them for a long time, and if they get light from 2025-Earth, they'll already have some idea that something big is likely to happen soon on Earth.

  1. Doesn't have to expend the energy. It's about reshaping the matter to machines. Computers take lots of mass-energy to constitute them, not to power them.
  2. Things can go 6 orders of magnitude faster due to intelligence/agency, it's not highly unlikely in general.
  3. I agree that in theory the arguments here could be better. It might require knowing more physics than I do, and has the "how does Kasparov beat you at chess" problem.