pioneer self-driving
Tesla didn't get into the space early, hasn't delivered yet (unlike others), and doesn't seem to have made a significant contribution to the field. So I wouldn't describe Musk as pioneering self-driving cars.
Waymo's predecessor was already working on it in 2009[1] and discussed the project publicly in 2010[1]. In 2012 they reported 300k self-driven miles[2]. That's all before Musk's first public discussion of self-driving in 2013[3].
And now Waymo is doing actual self-driving, while what Teslas are doing is not full self-driving but level 2 autonomy, available in many other brands (usually under names like adaptive cruise control + lane assist)[4]; basically not yet where Waymo's predecessor was in 2015[1].
[1] https://en.wikipedia.org/wiki/Waymo
[2] https://canvasbusinessmodel.com/blogs/brief-history/waymo-brief-history
[3] https://en.wikipedia.org/wiki/Tesla_Autopilot
[4] https://en.wikipedia.org/wiki/Advanced_driver-assistance_system
The main reason the ‘AGI types’ are not calling for tax cuts is, quite frankly, that we don’t much care. The world is about to be transformed beyond recognition and we might all die, and you’re talking about tax cuts and short term consumption levels?
I would also say that if you expect the transformation by AGI to result in human extinction, it seems bad to make the worlds where we don't do the transformation worse (it makes it less likely we'll forgo the deadly transformation, since forgoing it would mean facing that unpleasantness). And how much good can you buy with the extra debt?
This seems similar to the personal-finance case of getting into as much debt as possible and spending all the money now.
The Secretary Problem thus suggests that if you are maximizing, you should be deeply stingy about accepting a match until you’ve done a lot of calibration, and then take your sweet time after that.
Also, you have much richer information than in the secretary problem, so you can do way better! https://putanumonit.com/2019/03/03/exponential-secretary/ has a nice write-up on this.
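For the classic rank-only version, the "calibrate first, then commit" rule is easy to sanity-check numerically. Here's a minimal Python sketch (my own toy simulation, not from the linked post; the 1/e cutoff and candidate count are just the standard textbook assumptions):

```python
# Simulate the classic secretary problem: reject the first n/e candidates to
# calibrate, then accept the first candidate who beats everyone seen so far.
import math
import random

def run_once(n: int, cutoff: int) -> bool:
    """Return True iff the stopping rule ends up picking the single best candidate."""
    candidates = list(range(n))        # higher number = better candidate
    random.shuffle(candidates)
    best_seen = max(candidates[:cutoff], default=-1)
    for c in candidates[cutoff:]:
        if c > best_seen:
            return c == n - 1          # we accept this candidate; was it the best overall?
    return False                       # reached the end without accepting anyone

def success_rate(n: int = 100, trials: int = 100_000) -> float:
    cutoff = round(n / math.e)         # calibration phase: reject the first ~37%
    return sum(run_once(n, cutoff) for _ in range(trials)) / trials

print(f"P(pick the best) ~ {success_rate():.3f}")   # converges to ~0.368 for large n
```

With richer information (actual scores rather than just ranks, as in the linked post), you can beat this baseline considerably.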
This is fine if you’re shooting a bunch of shots, but not if this is an especially valuable shot to shoot.
So taking a poorly executed shot is evidence that you're taking many shots, which affects the chances of getting a no (especially if you don't know each other well; if you do, this is likely not much evidence).
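A toy Bayes update of what I mean (all the probabilities below are made-up assumptions, purely for illustration):

```python
# Toy Bayes update: how much does one sloppy, low-effort "shot" shift your belief
# that the person is taking many such shots? All numbers are made-up assumptions.
prior_many_shots = 0.3        # assumed prior that they're shotgunning
p_sloppy_if_many = 0.8        # assumed: high-volume shots tend to be low-effort
p_sloppy_if_few = 0.2         # assumed: a rare, deliberate shot is usually polished

p_sloppy = (prior_many_shots * p_sloppy_if_many
            + (1 - prior_many_shots) * p_sloppy_if_few)
posterior = prior_many_shots * p_sloppy_if_many / p_sloppy
print(f"P(many shots | sloppy shot) = {posterior:.2f}")   # ~0.63 with these numbers
```

If you know each other well, the two likelihoods get much closer together and the update mostly disappears.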
so they become afraid to speak out or otherwise try to help the village win, and that the villager side is in various ways the harder one to play well.
In my experience playing games like this, being too good at being a villager has a few disadvantages:
I think I can see how this might scale.
The way this looks to me is that if you're applying this consistently in an organization, you don't need to actually fully do all the tasks that need doing. You need to be able to recurse one level down (and if you actually do, that might in turn involve going down another level... but you mostly don't need to go down even one level, going down two levels is much rarer, etc.).
To use your example: low-level tasks should not be bubbling up to CEO level. If a controversy about naming a variable bubbles up from a code review to the CEO of a company with 100k people, clearly there has been a failure at multiple levels in the middle (even if the CEO is not up to date on the style guide for the language). The CEO might make the call, but more importantly they need to do something about that suborganization before it blows up.
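A back-of-the-envelope sketch of why this can scale (the per-task escalation probability p below is a made-up assumption, not a real estimate):

```python
# If handling any given task only requires dropping down a level with probability p,
# the chance of needing depth k falls off geometrically.
def p_depth_at_least(k: int, p: float = 0.1) -> float:
    """Probability that a task forces you at least k levels below your own,
    assuming each level escalates downward independently with probability p."""
    return p ** k

for k in range(4):
    print(f"need to go down >= {k} level(s): {p_depth_at_least(k):.4f}")
# With p = 0.1 this prints 1.0, 0.1, 0.01, 0.001: one-level dives are occasional,
# two-level dives are already rare, which is the sense in which this can scale.
```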
But I'd like to know if this is how Lightcone sees this principle scaling.
Is it intentional that this isn't being posted to Substack?
EDIT: it was published there in April.
Except at Stanford and some other colleges you can’t, because of this thing called the ‘honor code.’ As in, you’re not allowed to proctor exams, so everyone can still whip out their phones and ask good old ChatGPT or Claude, and Noam Brown says it will take years to change this. Time for oral exams? Or is there not enough time for oral exams?
I'm very confused by this. How are LLMs the problem here? Sounds like you could have been googling answers or calling human helpers on your phone for the past couple of decades?
And brought books / notes with you before that?
- Almost all feedback is noisy because almost all outcomes are probabilistic.
Yes, but signal-to-noise ratios matter a lot.
Language is somewhat optimized to pick up signal and skip noise. For example, "red" makes it easier to pick ripe fruit, "grue" doesn't really exist because it's useless, and "expired" is a real concept because it's useful.
It also has some noise added, for example putting murderers and jaywalkers into one category, "criminal", in order to politically oppose something.
Also, not being exposed to the kind of noise that's present IRL might be an issue when you start to deal with IRL (sometimes people say that something like "just do the max-EV action" is a good enough plan).
I'm pretty sure this is some obstacle for LLMs; I'm pretty sure it's something that can be overcome; I'm very unsure how much this matters.
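A rough illustration of the signal-to-noise point (the signal size, noise levels, sample budget, and trial count below are arbitrary assumptions):

```python
# With a fixed feedback budget, the chance of correctly identifying the better of
# two options drops as noise grows relative to the true difference.
import random
import statistics

def p_correct(signal: float, noise: float, n: int = 20, trials: int = 2000) -> float:
    """P(option A's sample mean beats B's) when A is truly better by `signal`
    and each observation carries Gaussian noise with stdev `noise`."""
    wins = 0
    for _ in range(trials):
        a = statistics.mean(random.gauss(signal, noise) for _ in range(n))
        b = statistics.mean(random.gauss(0.0, noise) for _ in range(n))
        wins += a > b
    return wins / trials

for noise in (0.1, 0.5, 1.0, 2.0):
    print(f"noise stdev {noise}: P(correct pick) ~ {p_correct(signal=0.2, noise=noise):.2f}")
# The chance of learning the right lesson falls steadily as noise grows
# relative to the true difference.
```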
Larry Page and Sergey Brin hold majority voting control of Alphabet. Sergey has been actively involved in Gemini development. In 2014 Google created non-voting Class C shares and switched new employee equity grants to that class to prevent dilution of their control (SEC filing). So Pichai may not be the relevant decision-maker.
Page called Musk a 'speciesist' for prioritizing human interests over future digital minds (Business Standard).