
We have Wildeford's Third Law: "Most >10 year forecasts are technically also AI forecasts".

We need a law like "Most statements about the value of EA are technically also AI forecasts".

Yep, that's fair; there is some subjectivity here. I was hoping the charges from SDNY would specify an amount that Sam allegedly defrauded, but they don't seem to.

Regarding the missing $4B: adding in Anthropic puts another $4B on the EA side of the ledger, and Founders Pledge another $1B. The value produced by Anthropic is questionable, and maybe even negative, but by the strict definition of "donated or built in terms of successful companies" I think EA comes out ahead.

(And OpenAI adds another $80B, so if you count that, I think even the most aggressive estimate of how much FTX defrauded is smaller. But obviously OAI's EA credentials are dubious.)

> EA has defrauded much more money than we've ever donated or built in terms of successful companies

FTX is missing $1.8B. OpenPhil has donated $2.8B.

I do think it ranks near the top of frauds in the last decade, though that's a narrower category.

Nikola went from a peak market cap of $66B to ~$1B today, vs. FTX, which went from ~$32B to [some unknown but non-negative number].

I also think the Forex scandal counts as bigger (as one reference point, banks paid >$10B in fines), although I'm not exactly sure how one should define the "size" of fraud.[1] 

I wouldn't be surprised if there's some category in which FTX is the largest, but my guess is that you have to define that category fairly narrowly.

  1. ^

    Wikipedia says "the monetary losses caused by manipulation of the forex market were estimated to represent $11.5 billion per year for Britain’s 20.7 million pension holders alone" which, if anywhere close to true, would make this way bigger than FTX. But I think the methodology behind that number is just guessing that market manipulation made foreign exchange x% less efficient and then multiplying through by x%; that isn't a terrible methodology, but it also isn't super rigorous.

Oh yeah, just because it's a reference point doesn't mean we should copy them.

I think almost any large organization or company would have gone through a much more comprehensive fault analysis and made many measurable improvements.

I claim Y Combinator is a counterexample.

(The existence of one counterexample obviously doesn't contradict the "almost any" claim.)

> IMO the EA community has had a reckoning, a post-mortem, an update, etc. far more than most social or political movements would (and do) in response to similar misbehavior from a prominent member

As a reference point: fraud seems fairly common in Y Combinator-backed companies, but I can't find any sort of post-mortem, even about major cases like uBiome, where the founders are literally fugitives from the FBI.

It seems like you could tell a fairly compelling story that YC pushing founders to pursue risky strategies and flout rules is upstream of this level of fraudulent behavior, though I haven't investigated closely.

My guess is that they simply accept that their advice to founders will backfire 1-2% of the time.

Thanks for the questions!

  1. I feel a little confused about this myself; it's possible I'm doing something wrong. (The code I'm using is the `get_prob` function in the linked notebook; someone with LLM experience could probably say whether it's broken without understanding the context. A sketch of the kind of check I mean follows this list.) My best guess is that human intuition has a hard time conceptualizing just how many possibilities exist: e.g. "Female", "female", "F", "f", etc. are all separate tokens which might realistically be continuations.
  2. I haven't noticed anything; my guess is that there probably is some effect but it would be hard to predict ex ante. The weights used to look up information about "Ben" are also the weights used to look up information about "the Eiffel Tower", so messing with the former will also mess with the latter, though I don't really understand how.
    1. A thing I would really like to do here is better understand "superposition". A really cool finding would be something like: messing with the "gender" dimension of "Ben" is the same as messing with the "architected by" dimension of "the Eiffel Tower" because the model "repurposes" the gender dimension when talking about landmarks since landmarks don't have genders. But much more research would be required here to find something like that.
  3. My guess is that this is just randomness. It would be interesting to force the random seed to be the same before and after modification and see how much the output actually changes (see the second sketch below).
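
On (1), here's a minimal sketch of the kind of check I mean, assuming a Hugging Face causal LM; GPT-2 and the prompt are stand-ins I picked for illustration, and the notebook's actual `get_prob` may work differently:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 as a stand-in model; the notebook uses something else.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Ben's gender is"  # hypothetical prompt, not from the notebook
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits[0, -1]  # logits for the next token
probs = torch.softmax(logits, dim=-1)

# Each surface form is its own token, so the probability mass a human
# would lump together as "female" is split across many variants.
for variant in [" Female", " female", " F", " f", "Female", "female"]:
    ids = tokenizer.encode(variant)
    if len(ids) == 1:  # only single-token variants have a direct next-token probability
        print(repr(variant), float(probs[ids[0]]))
```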
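
And on (3), reusing the `model`, `tokenizer`, and `input_ids` from the sketch above, the seed experiment would look something like this (assuming the notebook samples via `model.generate`):

```python
# Fix the seed so sampling noise is identical before and after the edit.
torch.manual_seed(0)
out_before = model.generate(input_ids, do_sample=True, max_new_tokens=20)

# ... apply the weight modification here ...

torch.manual_seed(0)  # same seed, so the sampling randomness is replayed
out_after = model.generate(input_ids, do_sample=True, max_new_tokens=20)

# With identical seeds, any divergence is due to the edit, not randomness.
print(tokenizer.batch_decode(out_before))
print(tokenizer.batch_decode(out_after))
```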

Thanks! I mentioned Anthropic in the post, but I'd similarly find it interesting if someone did a write-up about Cohere. It could be that OAI is unrepresentative for reasons I don't understand.

  1. Yep, revenue multiples are a heuristic for expectations of future growth, which is what I care about (see the toy arithmetic after this list)
  2. This is true, but I'm not aware of any investments at the $10B scale made on $0 revenue. Would love to hear of counterexamples if you know of any![1]
  1. ^

    Instagram is the closest I can think of, but that was ~20x smaller and an acquisition, not an investment
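
To make (1) concrete, here's the toy arithmetic I have in mind; all numbers are made up for illustration:

```python
valuation = 10e9   # hypothetical $10B valuation
revenue = 250e6    # hypothetical $250M annual revenue
multiple = valuation / revenue
print(f"{multiple:.0f}x revenue")  # 40x

# If you stipulate that a mature company deserves ~10x (an assumption,
# not a market fact), the price only makes sense if revenue roughly 4x's:
print(f"${valuation / 10 / 1e9:.1f}B revenue needed to justify $10B at 10x")
```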
