TeaTieAndHat


Comments

"Have you met non-serious people who long to be serious? People like that seem very rare to me."
… Hmmm… kinda? Like, you’re probably right that it’s only a few people, and in specific circumstances. But I know some people who are doing something they don’t like, or who like what they do but struggle with motivation or whatever for other reasons, and who certainly seem to wish they were more serious (or people who did in fact change careers or whatever and are now basically as serious as Mastroianni wants them to be, when they weren’t at all before). Then again, those are basically people who were always inclined to be serious but were prevented from being so by their circumstances, so you have a point, of course.

Yes, he’s definitely a polemicist, and not a researcher or an expert. By training, he’s a urologist with an MBA or two, and most of what he writes sounds very oversimplified/simplistic.

Well, I did the thing where I actually went and found this guy’s main book (2017, so not his latest) on archive.org and read it. The style is weird, with a lot of "she says this, Google says AGI will be fine, some other guy says it won’t", and I’m not 100% confident what Alexandre himself believes as far as the details are concerned.
But it seems really obvious that his view is at least something like "AI will be super-duper powerful, the idea that perhaps we might not build it does not cross my mind, so we will have AGI eventually, then we’d better have it before the other guys, and make ourselves really smart through eugenics so we’re not left too far behind when the AI comes". "Enter the Matrix to avoid being swallowed by it", as he puts it (this is a quote).
Judging by his tone, he seems to simply not consider that perhaps we could deliberately avoid building AGI, and to be unaware of most of the finer details of discussions about AI and safety (he also says that telling AI to obey us will result in the AI seeing us as colonizers and revolting against us, so we should pre-emptively avoid such "anti-silicon racism", which is an oversimplification of, like, so many different things). But some sentences are more like "humanity will have to determine the maximum speed of AI deployment [and it’ll be super hard/impossible because people will want to get the benefits of more AI]". So, at least he’s aware of the problem. He doesn’t seem to have anything to say beyond that on AI safety issues, however.
Oh, and he quotes (and possibly endorses?) the idea that "duh, AI can’t be smarter than us, we have multiple intelligences, Gardner said so".

Overall, it’s much clearer to me why Lucie calls him an accelerationist, and it seems like a good characterization.

I don’t know Alexandre’s ideas very well, but here’s what I understand: you know how people who don’t like rationalists say they’re just using a veneer of rationality to hide right-wing libertarian beliefs? Well, that’s exactly what Alexandre in fact very openly does, complete with some very embarrassing opinions on the differences in IQ between different parts of the world, which cement his reputation as quite an unsavoury character. (The potential reputational harms of having a caricature of a rationalist as a prominent political actor are left as an exercise to the reader...)


Wikipedia tells me that he likes Bostrom, which probably makes him genuinely more aware of AI-related issues than the vast majority of French politicians. However, he also doesn’t expect AGI before 2100, so until then he’s clearly focused on making sure we can work with AI as much as possible, that we learn to use those superintelligence thingies before they’re strong enough to take our jobs and destroy our democracies, etc., and he’s very insistent that this is an important thing to be doing. If you have shorter timelines than he does (and, like, you do!), then he’s definitely something of an accelerationist.

I agree with those who are surprised that you are offended by this relatively innocuous part of the social script. However, it is also a useful lesson for me personally: my social skills aren’t great, so, even more than others, I usually drift through social situations by saying, more or less, "ow, I’d hate that if I were you" or "whoa, I find that thing you just said really interesting!", and then the conversation stalls because I don’t say anything else, or I add in my own anecdote and then it stalls, or the other person acknowledges that I said I was there for them and then the conversation stalls awkwardly, as in the specific case you described. And so, once more, I see someone (you, in this case) telling me that the way to make interesting conversations is to ask the other person to speak, in some form or another ("and how does that feel?", "tell me more", "nice, and you?", etc.). It should be obvious advice, but, as you show, I’m not the only one for whom it doesn’t always seem obvious or easy. Anyway, my point is, I should do that more often, thanks for the reminder!

Yeah, I know that; it’s just that you decided to approach the problem from that angle. And, on the one hand, it was more interesting that way, but on the other hand I was a bit surprised, basically, by what that framing ended up bringing forward vs leaving in the background. Re-reading my comment, I still agree with the facts of what I said, but my tone was a bit harsher than I’d wanted.

In fact it’s very interesting: I’m still not surprised that governments don’t do it the way you suggest they should, because people in the bottom 99% want to be treated as well as people in the 1%, or because they prefer to be helped rather than left behind and then given money, etc. But I agree that it would in principle work better the way you describe, and that we often neglect that!

In many ways, that’s an odd framing of the question(s) at hand: governments don’t just blindly try to maximise their tax revenue or the state’s productive capacity (although maybe they should do more of that?), and to some extent there are good reasons why they don’t: the very many citizens who are never going to make it into the top 1% (because that’s what one percent means) certainly prefer it if the tradeoff is a little more in their favour, and mostly for good reasons.
Yours is a political opinion I agree with (it means that governments should help people I like, and fund stuff I find cool and important to have!), but if someone comes up and says to you that they care much more about other things than being maximally productive as a country, I don’t see arguments in your post to reply to that with.
In that respect, the way you framed that as "productive people give the government more revenue" rather than something like "productive people build cool stuff everyone gets to enjoy" is interesting, but also makes it easier for someone to say that they just care about other things. All that means that, to me, this post sounds a lot more like a political opinion than the average LW post.

I wholeheartedly agree with the general idea, though: especially in my corner of Europe, people don’t seem to be very encouraged to try things and maximise the amount of interesting/important things they do, at least not as much as in the Bay Area, and I’d love to live in a world where people improve themselves more and do more cool stuff.

"As there were no showers, on the last day of the project you could literally smell all the hard work I had put in.": that’s the point where I’d consider dragging out the history nerds. This, for instance, could have been useful :-)

I’m probably typical-minding a bit here, but: you say you have had mental health issues in the past (which, based on how you describe them, sound at least superficially similar to my own), and that you feel like you’ve outlived yourself. Which, although it is a feeling I recognise, is still a surprising thing to say: even a high P(doom) only tells you that your life might soon have to stop, not that it already has! My wild-ass guess would be that, in addition to maybe having something to prove intellectually and psychologically, you feel lost, with the ability to do things (btw, I didn’t know your blog before, and it’s pretty neat) but nothing in particular to do. Maybe you’re considering finishing your degree because it gives you a medium-term goal with some structure in the tasks associated with it?

"They obviously wouldn’t do what I’m about to say, but this system is equivalent to one where they set a very affordable base tuition, and then add a “wealth-based surcharge” to charge their rich students extra money. And if you don’t fill out the form and tell them how much your parents make, you get the maximum possible surcharge.": uh, my uni does just that, actually? They’re government-funded, so tuition used to be a few hundreds of euros per year, but a decade or so ago they decided that now it’s going to be tiered by income, with tuition ranging from €0 to €15k.

I mean, that’s just them adopting the usual model you described after previously having done something different, but the equivalence between the two framings is a bit more blatant in that context, right?
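To make that equivalence concrete, here’s a minimal sketch with entirely made-up brackets and amounts (nothing here is my uni’s actual fee schedule): an income-tiered fee and a "cheap base plus wealth-based surcharge" scheme are just two descriptions of the same function, and skipping the income form collapses to paying the maximum.

```python
# Toy numbers only: brackets and amounts are invented for illustration.

MAX_TUITION = 15_000  # top of the range
BASE = 0              # the "very affordable base tuition"

def tiered_tuition(parental_income):
    """Framing 1: tuition read straight off an income-based scale."""
    if parental_income < 30_000:
        return 0
    if parental_income < 60_000:
        return 3_000
    if parental_income < 120_000:
        return 8_000
    return MAX_TUITION

def wealth_surcharge(parental_income=None):
    """Framing 2's surcharge: not filing the income form means paying the maximum."""
    if parental_income is None:
        return MAX_TUITION - BASE
    return tiered_tuition(parental_income) - BASE

def base_plus_surcharge(parental_income=None):
    """Framing 2: cheap base tuition plus a wealth-based surcharge."""
    return BASE + wealth_surcharge(parental_income)

# The two framings charge every student the same amount:
for income in (10_000, 45_000, 90_000, 250_000):
    assert tiered_tuition(income) == base_plus_surcharge(income)

print(base_plus_surcharge())  # 15000: the "didn't fill out the form" case
```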
