Veedrac

Veedrac

Consider, in support: Netflix has a $418B market cap. It is inconsistent to think that a $300B valuation for OpenAI, or whatever figure is in the news, requires replacing tens of trillions of dollars of capital before the end of the decade.

Similarly, for people arguing from the other direction, who might think a low current valuation is case-closed evidence against their chances of success: consider that just a year ago the same argument would have discredited today's valuation, and a year before that it would have discredited the prior year's, and so forth. The same holds for historic busts at other companies. Investor sentiment is informative, but it clearly isn't definitive, else stocks would never change rapidly.

Veedrac

That's how I interpreted it originally: you were arguing their product org vibed fake; I was arguing your vibes were miscalibrated. I'm not sure what to say to this that I didn't say originally.

Veedrac

But most of the criticisms in the point you gave have ~no bearing on that. If you want to make a point about how effectively OpenAI's research moves toward AGI, you should be saying things relevant to that, not airing general malaise about their business model.

Or, I might understand ‘their business model is fake, which implies a lack of competence about them broadly,’ but then I go back to the whole ‘10% of people in the entire world’ and ‘expects ~$12B revenue’ thing.

Veedrac

Your very first point is, to be a little uncharitable, ‘maybe OpenAI's whole product org is fake.’ I know you have a disclaimer here, but you're talking about a product category that didn't exist 30 months ago, whose one website is now reportedly used by 10% of people in the entire world and is reported to expect ~$12B in revenue this year.

If your vibes point towards that class of thing being fake or ‘mostly a hype machine’, then your vibes are simply not well calibrated in this domain.

Veedrac

I failed to find an example easily when checking twitter this way.

Veedrac

Blue Origin was started two years earlier (2000 v 2002), had much better funding for most of its history,

This claim is untrue: SpaceX has never had less money than Blue Origin. It may be true that Blue Origin's money came with fewer obligations attached, since it came exclusively from Bezos rather than from a mix of investment, development contracts, and income as at SpaceX, but the baseline claim that SpaceX was “money-poor” is false.

Veedrac

I need to remake the graph with more recent data, but here is a graphic of US energy additions.

https://live.staticflickr.com/65535/53977597462_2095add298_k.jpg

Nonrenewables are dead men walking at this point. I wouldn't personally tell the story through Musk (I think cost curves and China are the more relevant framing), but the endpoint is much the same.

Veedrac

LeelaKnightOdds has convincingly beaten both Awonder Liang and Anish Giri at 3+2 by large margins, and has an extremely strong record at 5+3 against people who have challenged it.

I think 15+0, and probably also 10+0, would be a relatively easy win for Magnus, based on Awonder, a ~150-Elo-weaker player, taking two draws at 8+3 and a win and a draw at 10+5. At 5+3 I'm not sure, because we have so little data at winnable time controls, but I wouldn't expect an easy win for either player.

It's also certainly not the case that these few-months-old networks, running a somewhat improper algorithm, are the best we could build: it's known at minimum that this Leela is tactically weaker than normal and can drop endgame wins, even if humans rarely capitalize on that.

Veedrac

Fundamentally, the story was about the failure cases of trying to make capable systems that don't share your values safe by blocking the specific channels through which their problem-solving capabilities express themselves in scary ways. This is different from what you are getting at here, which is having those systems actually, operationally share your values. A well-aligned system, in the traditional ‘Friendly AI’ sense of alignment, simply won't make the choices that the one in the story did.

Veedrac

I was finding it a bit challenging to unpack what you're saying here. I think, after a reread, that you're using ‘slow’ and ‘fast’ the way I would use ‘soon’ and ‘far away’ (i.e., referring to how far in the future it will occur). Is this reading about correct?
