alexei's Comments

The Unreasonable Effectiveness of Deep Learning

This is really good and I found it very useful for what I'm currently working on.

One note: it felt a bit disconnected. And I didn't get the impression that RL is "unreasonably effective."

A possible solution to the Fermi Paradox

Yeah, makes sense. Also note that if Many Worlds is true and quantum immortality exists, you will never (from your own point of view) die.

Ms. Blue, meet Mr. Green

Yeah, splitting it into two parts would have been better.

What exactly do you mean by "things that try very hard to break down the map/territory distinction"?

Ms. Blue, meet Mr. Green

Good points. I agree that there are many ways to slice these practices. This goes along with habryka's point that I should have split this post into two parts, which I agree with as well.

Ms. Blue, meet Mr. Green

I'd predict that most people teach mindfulness horribly wrong. I'd also predict that the way it's usually taught doesn't resonate with most people, and they end up not doing the thing. (This was true for me the first few times I encountered it.) (Also, I know people who've done meditation for years and are not much further along than when they started, because they're still not doing the thing.) I'd also predict that they didn't do it for long enough. (Conservatively, I'd say you need six months to see some results, but it depends on how many minutes a day you meditate.) And, yes, it's hard to measure internal clarity.

One experiment that might pick up on it, though: when my brother was in college, he participated in an experiment run by a PhD student. The experiment was: they'd flash letters very rapidly on the screen, changing about every 20 ms or so (I don't remember the exact number, but it was very fast, too fast to keep up with consciously). You were supposed to count how many As and Bs appeared. Their hypothesis was that when you saw one of those letters, your mind would become occupied with counting it and your vision would temporarily turn off, so you'd miss another A or B that came right after. I think they did end up finding that effect. But what's interesting is that my brother scored three standard deviations above the mean. (At that time I think he had been meditating for at least a year.) This is something I'd predict other people who have practiced insight meditation would also perform well at.
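The setup as I remember it can be sketched as a toy simulation. Everything here is my own illustration, not from the actual study: the function names, the stream length, and the blink-window length (how many items after a detected target get missed) are all made-up parameters for the sketch:

```python
import random

def generate_stream(length, rng):
    """A rapid serial stream of random uppercase letters."""
    return [rng.choice("ABCDEFGHIJKLMNOPQRSTUVWXYZ") for _ in range(length)]

def observer_count(stream, targets="AB", blink_frames=10):
    """Count targets the way the hypothesized participant would:
    after detecting a target, the next `blink_frames` items are
    effectively invisible (the 'vision turned off' window), so any
    target falling inside that window goes uncounted."""
    count, blocked_until = 0, -1
    for i, ch in enumerate(stream):
        if i <= blocked_until:
            continue  # still inside the blink window; item is missed
        if ch in targets:
            count += 1
            blocked_until = i + blink_frames
    return count

rng = random.Random(1)
stream = generate_stream(200, rng)
true_count = sum(ch in "AB" for ch in stream)
reported = observer_count(stream)
# The blink can only cause misses, so reported <= true_count;
# a larger gap suggests a longer or more frequent blink.
print(true_count, reported)
```

With `blink_frames=0` the simulated observer counts perfectly, which is one way to see that the gap between the true and reported counts is entirely the blink effect in this toy model.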

Arbital postmortem

I sympathize. It's a giant and weird project the likes of which the world has not seen in a while. If I wrote down how to implement just what we built so far, in enough detail that someone could read it and unambiguously translate it into the current product, I think the document would be around 200 pages. And what we implemented was maybe ~15% of the full vision Eliezer was describing in his document.

By the way, we followed Eliezer's direct vision for only 1.5 years. Then we took matters into our own hands and the design went elsewhere.

Turns out it's hard to get the broad details right too. It's basically hard on every level.

If it's not according to Eliezer's specification, then it doesn't have Eliezer's "magic touch". I think if you asked Eliezer, he'd tell you that the feature you built (or the whole product) won't work as well, or at all.

Arbital postmortem

What habryka said. Basically, you're totally underestimating the complexity of the project, and how granular and specific things get if you're to build them in a way Eliezer would approve of.

Arbital postmortem

It's too easy for people to just recommend their best lawyer friends. I suppose if you really trust your friends not to recommend their lawyer friends purely because of the relationship (a big if!), then you could take their advice.

Arbital postmortem

Oh no, totally the same feelings. You get it. :)

However, since then I've gotten over that "should universe" and gone back to the "is universe", where this is just how people are. Won't be making that mistake twice. Sounds like we learned the same lesson. :)
