Jotto999

It’s Probably Not Lithium

Why "ecologically realistic food"? And which types of realism are you going to pick?

Overfeeding and obesity are common problems in pets, which are mostly not bred to gain weight the way cows are.

My family has kept many kinds of animals.  If you give bunny rabbits as many veggies as they want, a large fraction becomes obese.  Guinea pigs too.  And given their own favorite foods, tropical fish do too.  Cats too.

In fact, I have never noticed a species that doesn't end up with a substantial fraction obese, if you go out of your way to prepare the most compelling food for them and then give it in limitless amounts.  Even lower-quality, less-compelling foods, free-fed, can cause some obesity.  Do you even know of any animal species like this?!

If there is large variation in susceptibility to the ostensible environmental contaminant (and there would be), there should be species that you can free-feed without them becoming obese.

It’s Probably Not Lithium

Do you have any empirical evidence for either of the following?

  1. Farmers were historically wrong to think that free-feeding their animals would tend to fatten them up, OR they didn't believe it had that effect.
  2. Prior to the more recent novel contaminants, humans were an exception among animals to this general trend that free-feeding tends to fatten animals up.

It’s Probably Not Lithium

I'm going to bury this a bit deeper in the comment chain because it's no more indicative than Eliezer's anecdote.  But FWIW,

I am in the (very fortunate) minority who struggle to gain much weight, and have always been skinny.  But when I have more tasty food around, especially if it's prepared for me and just sitting there, I absolutely eat more, and manage to climb from ~146 up to ~148 or ~150 pounds.  It's unimaginable that this effect isn't true for me.

Yudkowsky and Christiano discuss "Takeoff Speeds"

I see what you're saying, but it looks like you're strawmanning me yet again with a more extreme version of my position.  You've done that several times, and you need to stop.

What you've argued here would prevent me from questioning the forecasting performance of any pundit I can't formally score, which is ~all of them.

Yes, it's not a real forecasting track record unless it meets the sort of criteria that are fairly well understood in Tetlockian research.  And Ben Garfinkel's post doesn't give us a forecasting track record either, the way something like Metaculus would.

But if a non-track-recorded person suggests they've been doing a good job anticipating things, it's quite reasonable to point out non-scorable things they said that seem incorrect, even with no way to score them.

In an earlier draft of my essay, I considered getting into bets he's made (several of which he's lost), but I ended up not including them.  Partly my focus was waning and it was more attainable to stick to the meta-level point, and partly I thought the essay might be better if it was more focused.  I don't think there is literally zero information about his forecasting performance (that's not plausible), but it seemed like it would be more of a distraction from my epistemic point.  Bets are not as informative as Metaculus-style forecasts, but they are better than nothing.  This stuff is a spectrum; even Metaculus doesn't retain some kinds of information about the forecaster.  Still, I didn't get into it, though I could have.

But later I edited in a link to one of Paul's comments, where he describes some reasons Robin looks pretty bad in hindsight, but also several things Eliezer said that seem quite off.  None of those are scorable.  I added that link because Eliezer explicitly claimed he came across better in that debate, which overall he may have, but it's actually more mixed than that, and that's relevant to my meta-point that one can obfuscate these things without a proper track record.  Ben Garfinkel's post is similarly relevant.

If the community felt more ambivalent about Eliezer's forecasts, or even if Eliezer were more ambivalent about his own forecasts, and then some guy was trying to convince people he had made bad forecasts?  Then your objection of one-sidedness would make much more sense to me.  That's not what this is.

Eliezer actively tells people he's anticipating things well, but he deliberately prevents his forecasts from being scorable.  Pundits do that too, and you bet I would eagerly criticize vague non-scorable stuff they said that seems wrong.  And yes, I would retweet someone criticizing those things too.  Does that also bother you?
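
To make "scorable" concrete, here is a minimal sketch of Brier scoring, the kind of proper scoring rule used in Tetlockian research and on platforms like Metaculus.  The forecasts and outcomes below are invented numbers, purely for illustration:

    # Brier score: mean squared error between stated probabilities and
    # binary outcomes.  Lower is better; always answering 50% scores 0.25.
    def brier_score(forecasts):
        """forecasts: list of (probability, outcome) pairs, outcome 0 or 1."""
        return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

    # A hypothetical record, made up for this example:
    record = [
        (0.9, 1),  # said 90%, it happened
        (0.7, 0),  # said 70%, it didn't
        (0.2, 0),  # said 20%, it didn't
    ]
    print(brier_score(record))  # 0.18 for these invented forecasts

A vague claim like "I anticipated this well" can't be run through a rule like this, which is the whole problem.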

Where I agree and disagree with Eliezer

  1. I disagree with the community on that.  Knocking out the silver Turing test, Montezuma's Revenge (in the way described), 90% equivalent on Winogrande, and 75th percentile on the math SAT will either take longer to actually be demonstrated in a unified ML system, OR it will happen way sooner than 39 months before "an AI which can perform any task humans can perform in 2021, as well or superior to the best humans in their domain", which is incredibly broad.  If the questions mean what they are written to mean, as I read them, it's a hell of a lot more than 39 months (the median community estimate).
  2. The thing I said is about some important scenarios described by people who give significant probability to a hostile hard takeoff scenario.  I put the comment here in this subthread because I don't think it contributes much to the discussion.

Where I agree and disagree with Eliezer

Very broadly,

in 2030 it will still be fairly weird and undersubstantiated to say that a dev's project might accidentally turn everyone's atoms into ML hardware, or might accidentally cause a Dyson sphere to be built.

On A List of Lethalities

I haven't read most of the post.  But in the first few paragraphs, you mention how he was ranting, and you interpret that as an upward update on the risk of AI extinction:

The fact that this is the post we got, as opposed to a different (in many ways better) post, is a reflection of the fact that our Earth is failing to understand what we are facing. It is failing to look the problem in the eye, let alone make real attempts at solutions.

But that's extremely weak evidence.  People rant all the time, including while being incorrect.  His formatting a message as a rant isn't evidence that the risk of doom is any higher than it was yesterday, unless you already agree with him.

Biology-Inspired AGI Timelines: The Trick That Never Works

Posting this comment goes against the moderation disclaimer advising not to talk about tone.  But FWIW, I react similarly, and I skip reading things written in this way, interpreting them as manipulating me into believing the writer is hypercompetent.
