RyanCarey

RyanCarey's Shortform

Transformer models (like GPT-3) are generators of human-like text, so they can be modeled as quantilizers. However, any quantilizer guarantees are very weak, because they quantilize with very low q, equal to the likelihood that a human would generate that prompt.
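For concreteness, here's a minimal sketch of quantilization in the abstract (not anything GPT-3 actually implements): draw many samples from a base distribution, then pick uniformly from the top q fraction as ranked by some utility. The usual guarantee is roughly that expected harm is bounded by 1/q times that of the base distribution, which is why a very small q makes the guarantee weak. The function names and toy distribution below are purely illustrative.

```python
import random

def quantilize(base_sample, utility, q=0.01, n=10_000):
    """Draw n samples from the base distribution, then return a uniform
    draw from the top q fraction as ranked by the utility function."""
    samples = [base_sample() for _ in range(n)]
    samples.sort(key=utility, reverse=True)
    top = samples[: max(1, int(q * n))]
    return random.choice(top)

# Toy usage: the base distribution stands in for "human-like text generation",
# and the utility stands in for whatever objective is being optimized.
action = quantilize(base_sample=lambda: random.gauss(0, 1),
                    utility=lambda x: x,
                    q=0.01)
print(action)
```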

Sci-Hub sued in India

The most plausible way out seems to be for grantmakers to grant money conditional on the work being published open-access. Some grantmakers may benefit from doing this, despite losing some publication prestige, because the funded work will be read more widely, and the grantmaker will look like they are improving the scientific process. Researchers lose some prestige, but gain some funding. I'm not sure how well this has worked so far, but perhaps we could get to a world where it works, if we're not already there.

Common knowledge about Leverage Research 1.0

It would be useful to have clarification on these points, to know how different an org you actually encountered, compared to the one I did when I (briefly) visited in 2014.

It is not true that people were expected to undergo training by their manager.

OK, but did you have any assurance that the information from charting was kept confidential from other Leveragers? I got the impression that Geoff charted people he raised money from, for example, so it at least raises the question of whether information gleaned from debugging might be discussed with that person's manager.

“being experimented on” was not my primary purpose in joining nor would I now describe it as a main focus of my time at Leverage. 

OK, but would you agree that a primary activity of Leverage was to do psych/sociology research, and that a major (>=50%) methodology for that was self-experimentation?

I did not find the group to be overly focused on “its own sociology.”

OK, but would you agree that at least ~half of the group spent at least ~half of their time studying psychology and/or sociology, using the group as subjects?

The stated purpose of Leverage 1.0 was not to literally take over the US and/or global governance or “take over the world,”...OPs claim is false.

OK, but you agree that it was to ensure "global coordination" and "the impossibility of bad governments", per the plan, right? Do you agree that "the vibe was 'take over the world'", per the OP?

I did not believe or feel pressured to believe that Leverage was “the only organization with a plan that could possibly work.”

OK, but would you agree that many staff said this, even if you personally didn't feel pressured to take the belief on?

I did not find “Geoff’s power and prowess as a leader [to be] a central theme.”

OK, but did you notice staff saying that he was one of the great theorists of our time? Or that a significant part of the hope for the organisation was to adapt and deploy certain ideas of his, like connection theory, which "solved psychology", to deal with cases with multiple individuals, in order to design larger orgs, memes, etc?

Hopefully, the answers to these questions can be mostly separated from our subjective impressions. That might sound harsh, or resemble a cross-examination, but it seems necessary in order to figure out to what extent we can reach a shared understanding of "common knowledge facts", at least about different moments in LR's history (potentially also differing in our interpretations), versus the facts themselves actually being contested.

Zoe Curzi's Experience with Leverage Research

Thanks for your courage, Zoe!

Personally, I've tried to maintain anonymity in online discussion of this topic for years. I dipped my toe into openly commenting last week, and immediately received an email that made it more difficult to maintain anonymity - I was told "Geoff has previously speculated to me that you are 'throwaway', the author of the 2018 basic facts post". Firstly, I very much don't appreciate my ability to maintain anonymity being narrowed like this; anonymity is a helpful defense in any sensitive online discussion, not least this one. But secondly, yes, I am throwaway/anonymoose - I posted anonymously because I didn't want to suffer adverse consequences from friends who got more involved than me. But I'm not throwaway2, anonymous, or BayAreaHuman - those three are bringing evidence that is independent of me, at least.

I only visited Leverage for a couple of months, back in 2014. One thing that resonated strongly with me about your post is that the discussion is badly confused by a lack of public knowledge and by strong narratives about whether people are too harsh on Leverage, what biases one might have, and so on. This is why I think we often retreat to just stating "basic" or "common knowledge" facts; the facts cut through the spin.

Continuing in that spirit, I can personally attest that much of what you have said is true, and the rest is congruent with the picture I built up while there. They dogmatically viewed human nature as nearly arbitrarily changeable. Their plan was to study how to change their psychology, turn themselves into Elon Musk-type figures, and take over the world. This was going to work because Geoff was a legendary theoriser, Connection Theory had "solved psychology", and the resulting debugging tools were exceptionally powerful. People "worked" for ~80 hours a week - which demonstrated the power of their productivity coaching.

Power asymmetries and insularity were present to at least some degree. I personally didn't encounter an NDA, or talk of "demons" etc. Nor did I get a solid impression of the psychological effects on people from that short stay, though of course there must have been some.

What's frustrating about still hearing noisy debate on this topic, so many years later, is that Leverage being a really bad org seems overdetermined at this point. On the one hand, if I ranked MIRI, CFAR, CEA, FHI, and several startups I've visited in terms of how reality-distorting they can be, Leverage would score ~9, while no other would surpass ~7 (it manages to be nontransparent and cultlike in other ways too!). On the other hand, their productive output was... also like a 2/10? It's indefensible. But still, only a fraction of the relevant information is in the open.

As you say, it'll take time for people to build common understanding, and to come to terms with what went down. I hope the cover you've offered will lead some others to feel comfortable sharing their experiences, to help advance that process.

Common knowledge about Leverage Research 1.0

As in, 5+ years ago, around when I'd first visited the Bay, I remember meeting up 1:1 with Geoff in a cafe. One of the things I asked, in order to understand how he thought about EA strategy, was what he would do if he wasn't busy starting Leverage. He said he'd probably start a cult, and I don't remember any indication that he was joking whatsoever. I'd initially drafted my comment as "he told me, unjokingly", except that it's a long time ago, so I don't want to give the impression that I'm quite that certain.

Common knowledge about Leverage Research 1.0

He's also told me, deadpan, that he would like to be starting a cult if he wasn't running Leverage.

Is GPT-3 already sample-efficient?

Your comparison does a disservice to the human's sample efficiency in two ways: 

  1. You're counting diverse data in the human's environment, but you're not comparing their performance on diverse tasks. Humans are obviously better than GPT-3 at interactive tasks, walking around, etc. For either kind of fair comparison (text data & task, or diverse data & task), the human has far superior sample efficiency.
  2. "Fancy learning techniques" don't count as data. If the human can get mileage out of them, all the better for the human's sample efficiency.

So you seem to have it backwards when you say that the comparison that everyone is making is the "bad" one.

Is GPT-3 already sample-efficient?

I think this becomes a lot clearer if we distinguish between total and marginal thinking. GPT-3's total sample efficiency for predicting text is poor:

  • To learn to predict text, GPT-3 has to read >1000x as much text as a human can read in their lifetime (rough arithmetic below).
  • To learn to win at Go, AlphaGo has to play >100x as many games as a human could play in their lifetime.
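A rough back-of-envelope for the ">1000x" figure: the GPT-3 training-token count (~300B) is from the GPT-3 paper, while the human reading numbers below are illustrative assumptions rather than measurements.

```python
# Back-of-envelope: GPT-3 training tokens vs. a rough guess at lifetime human reading.
gpt3_training_tokens = 300e9        # ~300B tokens seen during GPT-3 training

words_per_minute = 200              # assumed adult reading speed
minutes_per_day = 60                # assumed daily reading time
years = 60                          # assumed reading lifetime
human_lifetime_words = words_per_minute * minutes_per_day * 365 * years

print(f"human lifetime words ~ {human_lifetime_words:.1e}")   # ~2.6e8
ratio = gpt3_training_tokens / human_lifetime_words
print(f"GPT-3 tokens / human words ~ {ratio:.0f}x")           # ~1100x, consistent with >1000x
```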

But on the margin, it's very sample-efficient at learning to perform new text-related tasks:

  • GPT-3 can learn to perform a new text-related task as easily as a human can.

Essentially, GPT-3 is a kind of mega-analytical-engine that was very sample-inefficient to train up to its current level, but that can now be trained to do additional tasks at relatively little extra cost.
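As a sketch of what "little extra cost" means in practice: with few-shot prompting, the "training data" for a new task is just a handful of examples placed in the prompt, with no gradient updates. The helper below and the `complete` call it feeds are hypothetical stand-ins for whatever completion backend you have, not a real API.

```python
def make_few_shot_prompt(examples, query):
    """Build a prompt that 'teaches' a new task from a handful of examples.
    No gradient updates: any learning happens in-context at inference time."""
    lines = ["Translate English to French:"]
    for english, french in examples:
        lines.append(f"English: {english}\nFrench: {french}")
    lines.append(f"English: {query}\nFrench:")
    return "\n\n".join(lines)

examples = [("cheese", "fromage"), ("dog", "chien"), ("good morning", "bonjour")]
prompt = make_few_shot_prompt(examples, "thank you")
print(prompt)

# `complete` is a hypothetical stand-in for any text-completion backend:
# answer = complete(prompt)
```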

Does that resolve the sense of confusion/mystery, or is there more to it that I'm missing?

The LessWrong Team is now Lightcone Infrastructure, come work with us!

Can you clarify whether you're talking about "30% of X", i.e. 0.3*X, or "30% off X", i.e. 0.7*X?
