NaiveTortoise

ricraz's Shortform

Yeah, good point - given a generous enough interpretation of the notebook, my rejection doesn't hold. It's still hard for me to imagine that response feeling meaningful in context, but maybe I'm just failing to model others well here.

ricraz's Shortform

I've seen this quote before and always find it funny, because when I read Greg Egan I constantly find myself thinking there's no way I could have come up with the ideas he has, even if you gave me months or years of thinking time.

Progress: Fluke or trend?

Sorry, I was unclear. I was actually imagining two possible scenarios.

In the first, deeper investigation reveals that recent progress mostly resulted from serendipity - from lucky historical factors that were more contingent than we expected. For example, maybe it turns out that the creation of industrial labs all hinged on some random quirk of the Delaware C-Corp code (to be clear, I'm just making this up). Even though these factors were a fluke in the past and seem sort of arbitrary, we could still be systematic about bringing them about going forward.

The second scenario is even more pessimistic. Suppose we fail to find any factors that influenced recent progress - it's all just noise. It's hard to give an example of what this would look like, because it would look like an absence of examples: every rigorous investigation of a potential cause of historical progress would find a null result. Even in this pessimistic world, we could still say, "OK, nothing in the past seemed to make a difference, but we're going to experiment to figure out things that do."

That said, writing out this maximally pessimistic case made me realize how unlikely I think it is. It seems like we already know of certain factors that at least marginally increased the rate of progress, so I want to emphasize that I'm providing a line of retreat, not arguing that this is how the world actually is.

Progress: Fluke or trend?

Isn't it possible both that it was a fluke and that, going forward, we can figure out mechanisms to promote progress systematically?

To be clear, I think it's more likely than not that a nontrivial fraction of recent progress has non-fluke causes. I'm just also noting that the goal of enhancing progress seems at least partly disjoint from whether recent progress was a fluke.

[AN #115]: AI safety research problems in the AI-GA framework

Yep, clicking "View this email in browser" allowed me to read it, but it would obviously be better to have it fixed here as well.

ricraz's Shortform

Thanks for your reply! I largely agree with drossbucket's reply.

I also wonder how much this is an incentives problem. As you mentioned, and in my experience, those fields strongly incentivize an almost fanatical level of thoroughness that I suspect is very hard for individuals to maintain without outside incentives pushing them that way. Personally, at least, I definitely struggle with, and frankly mostly fail to live up to, the sorts of standards you mention when writing blog posts, in part because the incentive gradient feels like it pushes toward hitting the publish button.

Given this, I wonder if there's a way to shift the incentives on the margin. One minor thing I've been thinking of trying for my personal writing is a Knuth- or Nintil-style "pay for mistakes" policy. Do you have thoughts on other incentive structures for rewarding rigor or punishing the lack thereof?

ricraz's Shortform

I'd be curious what, if any, communities you think set good examples in this regard. In particular, are there specific academic subfields or non-academic scenes that exemplify the virtues you'd like to see more of?

Becoming Unusually Truth-Oriented

Makes sense - would some of the early posts about Focusing and other "lower level" concepts you reference here qualify? If you create a tag, people (maybe including me) could probably help curate!

Becoming Unusually Truth-Oriented

I can make a longer comment there if you'd like, but personally I wasn't that bothered by the dreams example, since I agreed with you that confabulation in the moments immediately after waking didn't seem like a huge issue. As a result, I was definitely interested in seeing more posts from the meditative/introspective angle, even if they just expanded on some of these moment-to-moment habits with more examples and detail. Unfortunately, that would at least partly require writing new posts rather than pure curation.

The Future of Science

Great post (or talk I guess)!

Two "yes, and..." add-ons I'd suggest:

  1. Faster tool development as the result of goal-driven search through the space of possibilities. Think something like Ed Boyden's Tiling Tree Method, semi-automated and combined with powerful search. As an intuition pump, imagine doing search in the latent space (embeddings) of a GPT-N fine-tuned on all the papers in an area.
  2. Contrary to some of the comments from the talk, I weakly suspect NP-hardness will be less of a constraint for narrow AI scientists than it is for humans. My intuition comes from what we've seen with protein folding and learned algorithms, where my understanding is that hardness results limit how quickly we can solve problems in general, but not necessarily on the distributions we encounter in practice. I think this is especially likely if we assume that AI scientists will be better than humans at searching for complex but fast approximations. (I'm very uncertain about this one, since I'm by no means an expert in these areas.)