Comments


When you exhaust all the language data from text, you can start extracting language from audio and video.

As far as I know, the largest public repository of audio and video is YouTube. We can do a rough back-of-the-envelope computation of how much data is in there:

  • According to a 2019 article I found, about 500 hours of video are uploaded to YouTube every minute. If we assume this was the average rate for the last 15 years, that gets us roughly 200 billion minutes of video.
  • An average conversation runs at about 150 words per minute, according to a Google search. That gets us ~30T words, or ~30T tokens if we assume 1 token per word (is this right?).
  • Let's say 1% of that is actually useful, so that gets us ~300B tokens, which is... a lot less than I expected. (The arithmetic is sketched below.)
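
Here is the arithmetic as a minimal sketch; every input is one of the rough guesses from the bullets above, not a measured value:

```python
# Back-of-the-envelope estimate of usable language tokens on YouTube.
# All inputs are rough assumptions from the bullets above, not measured values.

MINUTES_PER_YEAR = 365 * 24 * 60

years_of_uploads = 15            # assume today's upload rate held for 15 years
upload_hours_per_minute = 500    # ~500 hours of video uploaded per wall-clock minute (2019 figure)
words_per_minute = 150           # typical conversational speaking rate
tokens_per_word = 1.0            # crude assumption; real tokenizers give a bit more than 1
useful_fraction = 0.01           # guess: 1% of video contains useful speech

total_wallclock_minutes = years_of_uploads * MINUTES_PER_YEAR
total_video_minutes = total_wallclock_minutes * upload_hours_per_minute * 60
total_tokens = total_video_minutes * words_per_minute * tokens_per_word
useful_tokens = total_tokens * useful_fraction

print(f"video minutes: {total_video_minutes:.1e}")   # ~2.4e11, i.e. ~200B minutes
print(f"raw tokens:    {total_tokens:.1e}")          # ~3.5e13, i.e. ~30T tokens
print(f"useful tokens: {useful_tokens:.1e}")         # ~3.5e11, i.e. ~300B tokens
```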

So it seems like video doesn't save us if we just use it for the language data. We could do self-supervised learning on the video data itself, but for that we need to know the scaling laws for video (has anyone done that?).
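
For concreteness, by "scaling laws" I mean something like the parametric loss fit used for text models; the Chinchilla-style form below (Hoffmann et al. 2022) is just an example of the shape such a law takes, with N the parameter count and D the number of training tokens, and whether video follows a similar form is exactly the open question:

```latex
% Chinchilla-style parametric loss fit for text models (Hoffmann et al. 2022).
% E, A, B, \alpha, \beta are constants fit empirically; the analogous fit for
% video data is what we would need to know.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```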


The previous SOTA for MATH (https://arxiv.org/pdf/2009.03300.pdf) was a fine-tuned GPT-2 (1.5B parameters), whereas the previous SOTA for GSM8K (https://arxiv.org/pdf/2203.11171.pdf) was PaLM (540B parameters), using a "majority voting" method similar to Minerva's: query each question ~40 times and take the most common answer.
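
As a concrete sketch of what that majority-voting procedure looks like (the `sample_answer` callable standing in for a single temperature-sampled model query is hypothetical):

```python
import random
from collections import Counter

def majority_vote(question, sample_answer, k=40):
    """Self-consistency / majority voting: sample k independent answers to the
    same question and return the most common one. `sample_answer` stands in
    for one temperature-sampled model query."""
    answers = [sample_answer(question) for _ in range(k)]
    return Counter(answers).most_common(1)[0][0]

# Toy usage: a noisy solver that is right only 60% of the time per sample
# still returns the correct answer almost every time after voting.
noisy_solver = lambda q: "42" if random.random() < 0.6 else str(random.randint(0, 9))
print(majority_vote("What is 6 * 7?", noisy_solver))
```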


Here's a thought experiment: Suppose that a market is perfectly efficient, except that every 50 years or so there's a crash, which sufficiently smart people can predict a month in advance. Would you say that this market is efficient? Technically it isn't, because smart people have a systematic advantage over the market. But practically, no trader systematically beats the market, because no trader lives long enough!

I suppose you could create a long-lived institution, a "black swan fund", that very rarely makes bets (only on the predictable crashes) and over a few centuries could prove it earns higher returns. But I guess not enough people care about returns over those timescales.
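
A toy simulation of that intuition, purely for illustration: the 5% normal return, 30% crash size, and 1-in-50 yearly crash probability below are made-up numbers, and "predicting the crash" is modeled as simply sitting out crash years.

```python
import random

# Toy model of the thought experiment: ~5%/yr normal returns, but roughly once
# every 50 years a 30% crash that a sufficiently smart trader foresees and
# sidesteps by sitting in cash. All numbers are invented for illustration.

def compare(years, p_crash=1 / 50, normal=0.05, crash=-0.30):
    """Return (smart_wealth, market_wealth) over one shared market path."""
    smart_w = market_w = 1.0
    for _ in range(years):
        if random.random() < p_crash:
            market_w *= 1 + crash     # the smart trader saw it coming and stayed out
        else:
            market_w *= 1 + normal
            smart_w *= 1 + normal
    return smart_w, market_w

def beat_rate(years, trials=2000):
    return sum(s > m for s, m in (compare(years) for _ in range(trials))) / trials

random.seed(0)
print(beat_rate(40))    # one 40-year career: a large fraction of careers see no crash at all
print(beat_rate(300))   # a centuries-old "black swan fund": almost always ahead of the market
```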


What's the best way to convince skeptics of the severity of COVID? I keep seeing people saying it's just a slightly worse flu, or that car accidents kill a lot more people, and so on. I want some short text or image that illustrates just how serious this is.

I found this heartbreaking testimony from an Italian ICU doctor: https://twitter.com/silviast9/status/1236933818654896129

But I guess skeptics will want a more authoritative source.


"Or the first replicator to catch on, if there were failed alternatives lost to history - but this seems unlikely, given the Fermi Paradox; a replicator should be more improbable than that, or the stars would teem with life already."

So do you think that the vast majority of the Great Filter is concentrated in the creation of the first replicator? What's the justification for that?


-You can't prove I'm wrong!

-Well, I'm an optimist.

-Millions of people believe it, how can they all be wrong?

-You're relying too much on cold rationality.

-How can you possibly reduce all the beauty in the world to a bunch of equations?


Eliezer, I remember an earlier post of yours where you said something like: "If I would never do impossible things, how could I ever become stronger?" That was a very inspirational message for me, much more than any other similar saying I've heard, and this post is full of such insights.

Anyway, on the subject of human augmentation: what about it? If you are talking about a timescale of decades, then intelligence augmentation does seem like a worthy avenue of investment (it doesn't have to be full-scale neural rewiring; it could just be smarter nootropics).


...Can someone explain why?

Many people believe in an afterlife... why sign up for cryonics when you're going to go to Heaven when you die?

That's probably not the explanation, since there are many millions of atheists who have heard about cryonics and/or extinction risks. I figure the actual explanation is a combination of conformity, the bystander effect, the tendency to focus on short-term problems, and the Silliness Factor.


Eliezer, I have an objection to your metaethics and I don't think it's because I mixed levels:

If I understood your metaethics correctly, you claim that human morality consists of two parts: a list of things that we value (like love, friendship, fairness, etc.), and what we can call "intuitions" that govern how our terminal values change when we face moral arguments. So we have a kind of strange loop (in the Hofstadterian sense): our values judge whether a moral argument is valid or not, and the valid moral arguments change our terminal values. I think I accept this. It explains quite nicely a lot of questions, like where moral progress comes from. What I am skeptical about is the claim that if a person hears enough moral arguments, their values will always converge to a single set of values, so that you could say their morality approximates some ideal morality that could be found if you looked deep enough into their brain. I think it's plausible that the initial set of moral arguments the person hears will change their list of values considerably, so that their morality will diverge rather than converge, and there won't be any "ideal morality" that they are approximating.

Note that I am talking about a single human who hears different sets of moral arguments, not about the convergence of moralities across all humans (which is a different matter altogether).

Also note that this is a purely empirical objection; I am asking for empirical evidence that supports your metaethics.

"Why isn't the moral of this fable that pursuing subjective intuitions about correctness is a wild goose chase?"

Because those subjective intuitions are all we have. Sure, in an absolute sense, human intuitions about correctness are just as arbitrary as the Pebblesorters' intuitions (though vastly more complex), but we don't judge intuitions in an absolute way; we judge them with our own intuitions. You can't unwind past your own intuitions. That was the point of Eliezer's series of posts.
