Moving Data Around is Slow

Game programmers love this trick too, and for the same reasons: you're typically not doing elaborate computations on small amounts of data, you're doing simple computations (position updates, distance comparisons, &c) on large amounts of data.  Paring away unnecessary memory transfers is a big part of how to make that kind of computation go fast.

Mentorship, Management, and Mysterious Old Wizards

FWIW I also think your summary of the 2015 article is inaccurate.  For example, "EA needs very specific talents that are missing" isn't consistent with the section titled "Less Earning to Give", which states very clearly that more than 20% of EAs, total, should be doing direct work.  "EA needs lots of generally talented people" is a much better fit.  My own experiences are consistent with that: the people I know who got career advice from 80k or other EA thought leaders in that era were all told to do direct work, typically operations at EA orgs.

Normally this wouldn't be worth talking about; who really cares whether an article from 2015 was unclear, or clearly communicated something its authors now disagree with?  Here I think the distinction matters, because it's a load-bearing part of the argument that mentorship is a bottleneck for EA specifically.  People who got top-tier mentorship in 2015 were told things we now agree aren't true, but that were consistent with the articles available at the time.  People who got top-tier mentorship in 2020 got different advice (I assume; I haven't kept up since covid started), but how much better was it, in terms of knowledge, than the articles available?

I could definitely buy that EA has a shortage of mysterious old wizards, though.

Promoting Prediction Markets With Meaningless Internet-Point Badges

People are tired of shitty media. There's an enormous groundswell of media distrust from many angles, as far as I can tell. A measure like this is easy to understand, at least in the basics, and provides clear evidence of credibility for those who use it, entirely independent of trust.

If this were true, we would expect to see declining media consumption -- reduced viewership at Fox and CNN, for example.  Instead the opposite is true: both reported record viewership this year.  I take that to mean that the problem with journalism, insofar as there is one, is on the demand side rather than the supply side.

So, in general I think this claim is false.  I would focus on finding a small subgroup for which it's true, and dedicate your efforts to them.

What could one do with truly unlimited computational power?

Would BlooP allow for chained arrow notation, or would it be too restrictive for that?

Sadly, you need more than that: chained arrows can express the Ackermann function, which isn't primitive recursive.  But I guess you don't really need them.  Even if you just have Knuth's up-arrows, and no way to abstract over the number of up-arrows, you can just generate a file containing two threes with a few million up-arrows sandwiched between them, and have enough compute for most purposes (and if your program still times out, append the file to itself and retry).
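For concreteness, here's the up-arrow recursion as a few lines of Python (the function name is mine; it overflows long before reaching anything like three-with-millions-of-arrows-three, but it pins down what that generated file would denote):

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ^(n) b: one arrow is exponentiation,
    and each extra arrow iterates the operation below it."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# 2 with two up-arrows and 3 is 2^(2^2) = 16
print(up_arrow(2, 2, 3))
```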

What could one do with truly unlimited computational power?

Maybe the input box uses its own notation, something weak enough that infinite loops are impossible but powerful enough to express Conway's arrows?  That seems like it would be enough to be interesting without accidentally adding a halting oracle.

What could one do with truly unlimited computational power?

If you're using the oracle to generate moves directly then you don't need an agent, yeah.  But that won't always work: you can generate the complete Starcraft state space and find the optimal reply for each state, but you can't run that program in our universe (it's too big) and you can't use the oracle to generate SC moves in real time (it's too slow).

What could one do with truly unlimited computational power?
Answer by Taran, Nov 12, 2020

Pretty interesting.  You're still constrained by your ability to specify solutions, so you can't immediately solve cold fusion or FTL (you'd need to manually write and debug an accurate-enough physics simulator first).  Truly, no computing system can free you from the burden of clarifying your ideas.  But this constraint does leave some scope for miracles, and I want to talk about one technique in particular: program search.

Program Search

Program search is a very powerful, but dangerous and ethically dubious, way to exploit unbounded compute.  Start with a set of test cases, then generate all programs of length less than 100 megabytes (or whatever) and return the shortest, fastest one that passes all the test cases.  Both constraints are important: "shortest" prevents the optimizer from returning a hash table that memorizes all possible inputs, and "fastest" prevents it from relying on the unusual nature of the oracle universe (note that you will need a perfect emulator in order to find out which program is fastest, since wall-clock time measurements in the oracle's universe might be ineffective or misleading).  In a narrow sense, this is the perfect compiler: you tell it what kind of program you want, and it gives you exactly what you asked for.
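As a toy illustration of the shape (not the scale), here's the same search in Python over a made-up seven-token expression language; the tokens and test cases are all invented for the example.  Note that it evals untrusted strings, which is exactly the kind of permissive harness that gets you in trouble once the candidate programs stop being arithmetic:

```python
from itertools import count, product

TOKENS = ["x", "1", "2", "+", "*", "(", ")"]  # tiny stand-in language

def passes(src, tests):
    """Does the candidate expression satisfy every (input, output) pair?"""
    try:
        f = eval("lambda x: " + src)  # unsafe outside a toy setting
        return all(f(i) == o for i, o in tests)
    except Exception:
        return False

def program_search(tests, max_len=8):
    """Return the shortest token string consistent with all test cases."""
    for n in count(1):
        if n > max_len:
            return None
        for prog in product(TOKENS, repeat=n):
            src = "".join(prog)
            if passes(src, tests):
                return src

# find some f with f(x) = 2x + 1
print(program_search([(0, 1), (1, 3), (5, 11)]))
```

With real compute behind it you'd also rank the surviving candidates by speed inside the emulator; here "shortest" alone is enough to keep the answer from being a lookup table of the three test cases.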


There are some practical dangers.  In Python or C, for example, the space of all programs includes programs which can corrupt or mislead your test harness.  The ideal language for this task has no runtime flexibility or ambiguity whatsoever; Haskell might work.  But that still leaves you at the mercy of God's Haskell implementation: we can assume that He introduced no new bugs, but He might have faithfully replicated an existing bug in the reference Haskell compiler, which your enumeration will surely find.  This is unlikely to cause serious problems (at least at first), but it means you have to cross-check the output of whatever program the oracle finds for you.

More insidiously, some of the programs that we run during the search might instantiate conscious minds, or otherwise be morally relevant.  If that seems unlikely, ask yourself: are you totally sure it's impossible to simulate a suffering human brain in 100 megs of Haskell?  This risk can be limited somewhat, for example by running the programs in order from smallest to largest, but is hard to rule out entirely.


If you're willing to put up with all that, the benefits are enormous.  All ML applications can be optimized this way: just find the program that scores above some threshold on your metric, given your other constraints.  (If you have a lot of data you might be able to use the best-scoring program, but in small-data regimes the smallest, fastest program might still just be a hash table; maybe score your programs by how much simpler than the training data they are?)

With a little more work, it should be possible to -- almost -- solve all of mathematics: to create an oracle which, given a formal system, can tell you whether any given statement can be proved within that system and, if so, whether it can be proved true or false (or both)...that is, for proofs up to some ridiculous but finite length.  I think you will have to invent your own proof language for this; the existing ones are all designed around complexity limitations that don't apply to you.  Make sure your language isn't Turing complete, to limit the risk of moral catastrophe.  Once you have that, you can just generate all possible proofs and then check whether the one you want is present or not.
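Here's a miniature of the generate-everything-and-check move, using Hofstadter's MIU system (from the same book as BlooP) as the formal system.  It enumerates derivable strings rather than proofs of statements, but the shape is identical, and so is the reliance on a brute step bound:

```python
def miu_successors(s):
    """All strings reachable from s in one step of the MIU system's rules."""
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                       # rule 1: xI -> xIU
    if s.startswith("M"):
        out.add(s + s[1:])                     # rule 2: Mx -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])   # rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])         # rule 4: UU -> nothing
    return out

def derivable(target, max_steps):
    """Is target reachable from the axiom MI within max_steps rule applications?"""
    frontier, seen = {"MI"}, {"MI"}
    for _ in range(max_steps):
        frontier = {t for s in frontier for t in miu_successors(s)} - seen
        seen |= frontier
    return target in seen

print(derivable("MIU", 2))   # True: MI -> MIU by rule 1
```

The famous catch is that "MU" is never derivable at any depth (the number of I's is never a multiple of 3), and no finite step bound will tell you that directly -- which is why the "almost" above is doing real work.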


Up until now we've been limited by our ability to specify the solution we want.  We can write test cases and generate a program which fulfills them, but it won't do anything we didn't explicitly ask for.  We can find the ideal classifier for a set of images, but we first have to find those images out in the real world somewhere, and the power of our classifier is bounded by the number of images we can find.

If we can specify precise rules for a simulation, and a goal within that simulation, most of that constraint disappears.  For example, to find the strongest Go-playing program, we can instantiate all possible Go-playing programs and have them compete until there's an unambiguous winner; we don't need any game records from human players.  The same trick works for everything simulatable: Starcraft, Magic: the Gathering, piloting fighter jets, you name it.  If you don't want to use the oracle to directly generate a strong AI, you can instead develop accurate-enough simulations of the real world, and then use the oracle to develop effective agents within those simulations.
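In miniature, with a game small enough to actually enumerate -- single-heap Nim with moves of 1 or 2, standing in for Go or Starcraft (the heap size, move set, and all names here are invented for the example):

```python
from itertools import product

TAKE = (1, 2)   # legal moves: take 1 or 2 objects
HEAP = 7        # starting heap size; whoever takes the last object wins

def play(p1, p2):
    """One deterministic game; a policy maps heap size h to p[h - 1] objects taken."""
    policies, turn, heap = (p1, p2), 0, HEAP
    while True:
        heap -= min(policies[turn][heap - 1], heap)
        if heap == 0:
            return turn          # this player took the last object and wins
        turn = 1 - turn

def tournament():
    """Round-robin over every deterministic policy; return a top scorer."""
    field = list(product(TAKE, repeat=HEAP))
    wins = [0] * len(field)
    for i, a in enumerate(field):
        for j, b in enumerate(field):
            if i != j:
                wins[(i, j)[play(a, b)]] += 1
    return field[max(range(len(field)), key=wins.__getitem__)]
```

Conveniently, the optimal line for this game is known independently (always leave the opponent a multiple of 3), which makes it easy to check whether the tournament's winner is genuinely strong or just exploited quirks of the field.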


Ultimately the idea would be to develop a computer model of the laws of physics that's as correct and complete as our computer model of the rules of Go, so that you can finally develop nanofactories, anti-aging drugs, and things like that.  I don't see how to do it, but it's the only prize worth playing for.  At this point it becomes very important to be able to prove the Friendliness of every candidate program; use the math oracle you built earlier to develop a framework for that before moving forward.

Trick-or-treating in Covid Times

But I don't understand how any town that allows indoor dining can categorize trick-or-treating as impermissibly high risk?

I think it's about risk versus reward, rather than risk per se.  If you allow indoor dining, the restaurant owners make money and won't fail or need bailouts (as often).  Trick or treating doesn't offer as much benefit economically.

(Not endorsing this reasoning, just trying to empathize).

Covid Covid Covid Covid Covid 10/29: All We Ever Talk About

Yes, within its limits:

  1. They don't do very much investigative journalism; mostly they just report on things that happen publicly.
  2. Their articles tend to be pretty short, without a lot of storytelling or background detail.

If you want to efficiently survey what German people are hearing about it seems like a good choice.

If you want something more like a normal American newspaper, consider Der Spiegel: https://www.spiegel.de.  I rarely visit them as their website does not run well for me, but they still have an independent fact-checking department.
