reup · 12y · 90

I think part of the issue is that while Eliezer's conception of these issues has continued to evolve, we continue both to point and to be pointed back to posts that he only partially agrees with. We might chart a more accurate position by winding through a thousand comments, but that's a difficult thing to do.

To pick one example from a recent thread, here he adjusts (or flags for adjustment) his thinking on Oracle AI, but someone who missed that would have no idea from reading older articles.

It seems like our local SI representatives recognize the need for an up-to-date summary document to point people to. Until then, our current refrain of "read the sequences" will grow increasingly misleading as more and more updates and revisions are spread across years of comments (that said, I still think people should read the sequences :) ).

reup · 12y · 120

I agree, but as I've understood it, they're explicitly saying they won't release any AGI advances they make. What will it do to their credibility to be funding a "secret" AI project?

I honestly worry that this could kill funding for the organization, which doesn't seem optimal in any scenario.

Potential Donor: I've been impressed with your work on AI risk. Now, I hear you're also trying to build an AI yourselves. Who do you have working on your team?

SI: Well, we decided to train high schoolers since we couldn't find any researchers we could trust.

PD: Hm, so what about the project lead?

SI: Well, he's done brilliant work on rationality training and wrote a really fantastic Harry Potter fanfic that helped us recruit the high schoolers.

PD: Huh. So, how has the work gone so far?

SI: That's the best part, we're keeping it all secret so that our advances don't fall into the wrong hands. You wouldn't want that, would you?

PD: [backing away slowly] No, of course not... Well, I need to do a little more reading about your organization, but this sounds, um, good...

reup · 12y · 10

I remember reading and enjoying that article (this one, I think).

I would think that the same argument would apply regardless of the scale of the donations (assuming there aren't fixed transaction costs, which might not be valid). My read would be that it comes down to the question of risk versus uncertainty. If there is actual uncertainty, investing widely might make sense if you believe that those investments will provide useful information to clarify the actual problem structure, so that you can accurately target future giving.

reup · 12y · 10

And, if they're relying on perfect secrecy/commitment over a group of even a half-dozen researchers as the key to their safety strategy, then by their own standards they should not be trying to build an FAI.

reup · 12y · 20

Remember, he's playing an iterated game. So, if we assume that right now he has very little information about which area is the most important to invest in or which areas are most likely to produce the best return, playing a wider distribution in order to gain information and thereby maximize the utility of later rounds of donations/investments seems rational.
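A minimal sketch (mine, not from the original comment) of that explore/exploit intuition, assuming a toy epsilon-greedy rule and made-up per-dollar returns for three hypothetical causes:

```python
# Illustrative only: unknown payoffs, repeated rounds of giving.
# Spending some rounds exploring builds up estimates that make the
# later, exploitative rounds much more valuable.
import random

true_returns = [0.3, 0.5, 0.8]   # hypothetical per-dollar utility of three causes
estimates = [0.0, 0.0, 0.0]      # running estimates built from observed payoffs
counts = [0, 0, 0]
epsilon = 0.2                    # fraction of rounds spent exploring

total_utility = 0.0
for _ in range(100):
    if random.random() < epsilon:
        choice = random.randrange(len(true_returns))                     # explore
    else:
        choice = max(range(len(estimates)), key=lambda i: estimates[i])  # exploit
    payoff = true_returns[choice] + random.gauss(0, 0.1)                 # noisy observed return
    counts[choice] += 1
    estimates[choice] += (payoff - estimates[choice]) / counts[choice]   # update running mean
    total_utility += payoff

print(f"estimated returns: {[round(e, 2) for e in estimates]}")
print(f"total utility over 100 rounds: {total_utility:.1f}")
```

With zero exploration the donor is stuck with whichever cause he happened to guess first; with a modest exploration fraction the estimates converge on the best option after a handful of rounds.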

reup · 12y · 10

Is there a post on the relative strengths/weaknesses of UDT and TDT? I've searched but haven't found one.

reup · 12y · 10

On the HTML side, grab a free template (quite a few sites out there offer nice ones). I find that it's easier to keep working when my project at least looks decent. Also, at least for me, I feel more comfortable showing it to friends for advice when there's some superficial polish.

Also, when you see something (a button, control, or effect) on a site, open the source. A decent percentage of the time you'll find it's actually open source already (lots of JS frameworks out there) and you can just copy directly. If not, you'll still learn how it's done.

Good luck!

reup · 12y · 00

Maybe solving them will require new math, but it seems possible that existing math already provides the necessary tools.

There seems to be far more commitment to a particular approach than is justified by the evidence (at least what they've publicly revealed).

reup · 12y · -10

I think we can safely stipulate that there is no universal route to contest success or to Luke's other example, an 800 math SAT.

But I can answer your question: yes, I'm sure that at least some of the students are receiving supplemental tutoring. Not necessarily contest-focused, but still.

Anecdotally: the two friends I had from undergrad who were IMO medalists (about 10 years ago) had both gone through early math tutoring programs (and both had a parent who was a math professor). All of my undergrad friends who scored an 800 math SAT had either received tutoring or had their parents buy them study materials (most of them did not look back fondly on the experience).

Remember, for any of these tests, there's a point where even a small amount of training to the test overwhelms a good deal of talent. Familiarity with problem types, patterns, etc. can vastly improve performance.

I have no way to evaluate the scope of your restrictions on doing "super-well" or the particular requirement that the tutoring start at an "early age" (although at least one of the anecdotal IMO cases did a Kumon-type program that started at pre-school).

Are there some people who don't follow that route? Certainly. However, I do think that it's important to be aware of other factors that may be present.

reup · 12y · 40

I think it could be consistent if you treat his efforts as designed to gather information.
