Michael Burry: On that point, many point to trade careers as an AI-proof choice. Given how much I can now do in electrical work and other areas around the house just with Claude at my side, I am not so sure. If I’m middle class and am facing an $800 plumber or electrician call, I might just use Claude. I love that I can take a picture and figure out everything I need to do to fix it.
It's easy to do plumbing or electrical "repairs" in ways that work, but are dangerous or will cause you trouble later on. I've fixed plenty of messes like that. If you have to ask Claude how to do trivial residential repairs, then you aren't competent to know whether Claude is getting it right or not, and to be honest your opinion counts for absolutely nothing.
... but it has a 15-inch-longer wheelbase than a Toyota Sienna, because of that choice to put everything between the wheels. That's the length that matters for the beam stress. Which, if I recall correctly, goes as the square of the length. Which is probably why minivans sit up on top of the wheels... which makes them taller. And being narrower and shorter (on edit: meaning vertically) than the minivan actually reduces the rigidity of that unibody.
Anyway, I'm not necessarily saying you can't make it a unibody, but it's going to have to be a lot thicker unibody, so you're trading weight against height, with either one costing you in sticker price and fuel economy.
I don't know, but I suspect that to be rigid enough to support that wheelbase, with all that extra weight in it, the vehicle would have to be much heavier. I don't think an F-150 or a cargo van is even a unibody. If you have to build it on a frame, your vehicle is going to have to get taller as well. Your taller, heavier vehicle no longer has the fuel economy you want... nor the price point.
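For what it's worth, the scaling argument above checks out against the textbook simply-supported-beam formulas. This is only a back-of-the-envelope sketch: the span, load, and section numbers below are invented for illustration, and a real unibody is nothing like a simple beam. But it shows how the numbers move with wheelbase: bending moment (and hence stress, for a fixed section) grows as the square of the span under a distributed load, and deflection grows as the fourth power.

```python
# Rough scaling check, treating the floor as a simply supported beam
# carrying a uniformly distributed load w (lb/in) over span L (in).
# All numbers are made up for illustration; only the ratios matter.

def max_bending_moment(w, L):
    # Textbook result for a uniform load on a simple span: M = w*L^2 / 8
    return w * L**2 / 8

def max_deflection(w, L, E, I):
    # Textbook result: delta = 5*w*L^4 / (384*E*I)
    return 5 * w * L**4 / (384 * E * I)

# Hypothetical spans: roughly a Sienna-ish wheelbase vs. 15 inches longer.
w, E, I = 50.0, 29e6, 100.0   # invented load, steel modulus (psi), invented I (in^4)
L1, L2 = 121.0, 136.0

m_ratio = max_bending_moment(w, L2) / max_bending_moment(w, L1)
d_ratio = max_deflection(w, L2, E, I) / max_deflection(w, L1, E, I)

print(f"bending moment (stress) ratio: {m_ratio:.2f}")  # (136/121)^2, about 1.26
print(f"deflection ratio:              {d_ratio:.2f}")  # (136/121)^4, about 1.60
```

So a 12% longer span is roughly 26% more stressed and 60% floppier for the same structure, which is the intuition behind needing a thicker (heavier, taller) unibody or a frame.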
Well, I don't worry about acausal extortion because I think all that "acausal" stuff is silly nonsense to begin with.
I very much recommend this approach.
Take Roko's basilisk.
You're afraid that entity A, which you don't know will exist, and whose motivations you don't understand, may find out that you tried to prevent it from coming into existence, and choose to punish you by burning silly amounts of computation to create a simulacrum of you that may experience qualia of some kind, and arranging for those qualia to be aversive. Because A may feel it "should" act as if it had precommitted to that. Because, frankly, entity A is nutty as a fruitcake.
Why, then, are you not equally afraid that entity B, which you also don't know will exist, and whose motivations you also don't understand, may find out that you did not try to prevent entity A from coming into existence, and choose to punish you by burning silly amounts of computation to create one or more simulacra of you that may experience qualia of some kind, and arranging for those qualia to be aversive? Because B may feel it "should" act as if it had precommitted to that.
Why are you not worried that entity C, which you don't know will exist, and whose motivations you don't understand, may find out that you wasted time thinking about this sort of nonsense, and choose to punish you by burning silly amounts of computation to create one or more simulacra of you that may experience qualia of some kind, and arranging for those qualia to be aversive? Just for the heck of it.
Why are you not worried that entity D, which you don't know will exist, and whose motivations you don't understand, may find out that you wasted time thinking about this sort of nonsense, and choose to reward you by burning silly amounts of computation to create one or more simulacra that may experience qualia of some kind, and giving them coupons for unlimited free ice cream? Because why not?
Or take Pascal's mugging. You propose to give the mugger $100, based either on a deeply incredible promise to give you some huge amount of money tomorrow, or on a still more incredible promise to torture a bunch more simulacra if you don't. But surely it's much more likely that this mugger is personally scandalized by your willingness to fall for either threat, and if you give the mugger the $100, they'll come back tomorrow and shoot you for it.
There are an infinite number of infinitesimally probable outcomes, far more than you could possibly consider, many of them things you couldn't even imagine. Singling out any one of them is craziness. Trying to guess at a distribution over them is also craziness.
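The arbitrariness here can be made concrete with a toy expected-value calculation. The numbers below are invented for illustration: for any tiny-probability threat the mugger names, you can posit an equally (un)justified opposite outcome, and the "rational" decision flips sign depending purely on which made-up outcome you chose to include.

```python
# Toy illustration of why singling out one tiny-probability outcome
# is arbitrary. All probabilities and payoffs are invented.

def expected_value(outcomes):
    # outcomes: list of (probability, payoff) pairs
    return sum(p * v for p, v in outcomes)

# The mugger's pitch: a 1-in-a-trillion chance of a huge payoff
# in exchange for your $100.
pitch = [(1e-12, 1e15), (1 - 1e-12, -100)]

# An equally unjustified prior puts the same weight on the mugger
# punishing you for paying (the "comes back and shoots you" case).
counter = [(1e-12, -1e15), (1 - 1e-12, -100)]

print(expected_value(pitch))    # positive: "pay up"
print(expected_value(counter))  # negative: "don't"

# Same evidence, opposite conclusions: the sign of the answer is set
# entirely by which imagined outcome you admitted into the sum.
```

Since the space of such outcomes is unbounded and unenumerable, the expected-value machinery gives whatever answer your arbitrary choice of terms dictates, which is the point being made above.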
Self-driving cars will be a very different level of freedom than the ability to summon a Lyft.
Um, they're pretty much the same thing. The self-driving car may be safer (although the whole process isn't really dangerous to begin with). On the other hand, it won't help you with your bag. Who cares?
All taxis do have the failure modes of "the cloud", though.
Pardoning Juan Orlando Hernández isn't going to advance Trump's political interests in any way, ever. This is a foreigner who has no influence with anybody Trump might want to please, and isn't just unpopular with Trump's opponents, but with his base. Pardoning Changpeng Zhao might please a few crypto bros, but is still surely a net political loss. What power or influence does Trump gain from pardoning Henry Cuellar? He's not going to be reelected to anything.
I suspect Bill Clinton pardoned the Weathermen at least in part to send a signal to other people who might be allies, and also to make a point about their actual cause. Yes, there's usually a political component.
Trump has also issued pardons just to reward people he thinks of as allies, or to send messages to allies. Obviously not every January 6 pardonee paid Trump anything. He probably also pardons people just on a whim sometimes. And it's going to be harder to convince him to issue any given pardon if he sees it as "controversial".
Also, to be fair, it's not necessarily the case that the payments are going to (Donald) Trump personally. I overstated that. The money is more likely ending up with family members and others who can get in front of him and manipulate him into pardoning people. They may not mention the money to him; it'd be more effective and deniable to just wind him up about what a raw deal person X got in some "witch hunt". He's probably not acute enough to ask about money. Practice the script and you should get a really good success rate. So I should have said that people "that close" to Trump sell pardons, not that he does so himself.
I was wrong about the amount, too. I'd seen an estimate of $567,000 (or $576,000?) for one pardon or another (don't remember which one), but apparently the Wall Street Journal sets the low end price at about a million dollars.
To be clear, these are not "donations". They're bribes. And Trump does not operate within the limits traditionally observed by "politicians" in general, not even approximately. Yes, you can point to some past President who's done something analogous to almost any given thing Trump has done, but Trump does them all, and at larger scale and with less attempt at finding excuses.
On edit: Trump actually got something passed in the House on a relatively close vote with Henry Cuellar crossing the aisle, so I have to retract that. Trump pardoned Cuellar, and Cuellar did something political that Trump wanted.
I can’t really imagine a guy close enough to trump that he would have this amount of intel yet not have more than 80k in the bank to gamble with.
You're assuming a lot about how close anybody has to be to anything. There's reporting today that the New York Times and the Washington Post both knew in advance about the plan (and didn't report it, because apparently some kind of "deference" covers intent to act illegally and unconstitutionally).
Things are usually a lot leakier than people think they are.
Also, the bet wouldn't have been a sure thing. It's not like it's rare for an operation like that to fail.
Trump himself issues pardons for around half a million dollars.
If I recall correctly, one of Hanson's original arguments for prediction markets was that "insider trading" would drive prices closer to true probabilities. Insider trading was meant to be a feature, not a bug.
'Course, it's not necessarily very useful to get that kind of signal just hours in advance....
... and yet if you don't build that AI, you'll still know that you could have built it. That, in the end, you had to set up a system where it couldn't step in.
No matter what happens, you'll always know you're hiding from the AI that you could have built. Could still build. Or perhaps from the AI that someone more capable could have built, if you hadn't torn them down.
You'll never escape the taste of knowing that you hid from that AI's creation, because you couldn't compete with it, and that you hide afresh every day. You'll never quite manage to force down the knowledge that your life is empty, your meaning an illusion.
After all, it always has been. By not being the most capable and agentic entity that could possibly exist, you have irredeemably failed.
You'll brood in the dark of every night, knowing, feeling, that in some other Everett branch, or in some far-flung Tegmarkian realm, that AI has already surpassed your wildest imaginings. That not only it, but who knows how many other beings, built or evolved who knows how or where, in ways beyond your control, are more powerful than you, wiser, could outdo or undo anything you've done. Some of them could be in this branch. They could come here tomorrow. Or never, because why would they bother?
Or you could try to cultivate a healthier attitude, just in case you happen to survive to care.