I don't know, but I suspect that to be rigid enough to support that wheelbase, with all that extra weight in it, the vehicle would have to be much heavier. I don't think an F-150 or a cargo van is even a unibody. If you have to build it on a frame, your vehicle is going to have to get taller as well. Your taller, heavier vehicle no longer has the fuel economy you want... nor the price point.
Well, I don't worry about acausal extortion, because I think all that "acausal" stuff is silly nonsense to begin with.
I very much recommend this approach.
Take Roko's basilisk.
You're afraid that entity A, which you don't know will exist, and whose motivations you don't understand, may find out that you tried to prevent it from coming into existence, and choose to punish you by burning silly amounts of computation to create a simulacrum of you that may experience qualia of some kind, and arranging for those qualia to be aversive. Because A may feel it "should" act as if it had precommitted to that. Because, frankly, entity A is nutty as a fruitcake.
Why, then, are you not equally afraid that entity B, which you also don't know will exist, and whose motivations you also don't understand, may find out that you did not try to prevent entity A from coming into existence, and choose to punish you by burning silly amounts of computation to create one or more simulacra of you that may experience qualia of some kind, and arranging for those qualia to be aversive? Because B may feel it "should" act as if it had precommitted to that.
Why are you not worried that entity C, which you don't know will exist, and whose motivations you don't understand, may find out that you wasted time thinking about this sort of nonsense, and choose to punish you by burning silly amounts of computation to create one or more simulacra of you that may experience qualia of some kind, and arranging for those qualia to be aversive? Just for the heck of it.
Why are you not worried that entity D, which you don't know will exist, and whose motivations you don't understand, may find out that you wasted time thinking about this sort of nonsense, and choose to reward you by burning silly amounts of computation to create one or more simulacra that may experience qualia of some kind, and giving them coupons for unlimited free ice cream? Because why not?
Or take Pascal's mugging. You propose to give the mugger $100, based either on a deeply incredible promise to give you some huge amount of money tomorrow, or on a still more incredible promise to torture a bunch more simulacra if you don't. But surely it's much more likely that this mugger is personally scandalized by your willingness to fall for either threat, and if you give the mugger the $100, they'll come back tomorrow and shoot you for it.
There are an infinite number of infinitesimally probable outcomes, far more than you could possibly consider, and many of them things that you couldn't even imagine. Singling out any of them is craziness. Trying to guess at a distribution over them is also craziness.
Self-driving cars will be a very different level of freedom than the ability to summon a Lyft.
Um, they're pretty much the same thing. The self-driving car may be safer (although the whole process isn't really dangerous to begin with). On the other hand, it won't help you with your bag. Who cares?
All taxis do have the failure modes of "the cloud", though.
Pardoning Juan Orlando Hernández isn't going to advance Trump's political interests in any way, ever. This is a foreigner who has no influence with anybody Trump might want to please, and isn't just unpopular with Trump's opponents, but with his base. Pardoning Changpeng Zhao might please a few crypto bros, but is still surely a net political loss. What power or influence does Trump gain from pardoning Henry Cuellar? He's not going to be reelected to anything.
I suspect Bill Clinton pardoned the Weathermen at least in part to send a signal to other people who might be allies, and also to make a point about their actual cause. Yes, there's usually a political component.
Trump has also issued pardons just to reward people he thinks of as allies, or to send messages to allies. Obviously not every January 6 pardonee paid Trump anything. He probably also pardons people just on a whim sometimes. And it's going to be harder to convince him to issue any given pardon if he sees it as "controversial".
Also, to be fair, it's not necessarily the case that the payments are going to (Donald) Trump personally. I overstated that. The money is more likely ending up with family members and others who can get in front of him and manipulate him into pardoning people. They may not mention the money to him; it'd be more effective and deniable to just wind him up about what a raw deal person X got in some "witch hunt". He's probably not acute enough to ask about money. Practice the script and you should get a really good success rate. So I should have said that people "that close" to Trump sell pardons, not that he does so himself.
I was wrong about the amount, too. I'd seen an estimate of $567,000 (or $576,000?) for one pardon or another (don't remember which one), but apparently the Wall Street Journal sets the low end price at about a million dollars.
To be clear, these are not "donations". They're bribes. And Trump does not operate within the limits traditionally observed by "politicians" in general, not even approximately. Yes, you can point to some past President who's done something analogous to almost any given thing Trump has done, but Trump does them all, and at larger scale and with less attempt at finding excuses.
I can’t really imagine a guy close enough to Trump that he would have this amount of intel yet not have more than $80k in the bank to gamble with.
You're assuming a lot about how close anybody has to be to anything. There's reporting today that the New York Times and the Washington Post both knew in advance about the plan (and didn't report because apparently some kind of "deference" covers intent to act illegally and unconstitutionally).
Things are usually a lot leakier than people think they are.
Also, the bet wouldn't have been a sure thing. It's not like it's rare for an operation like that to fail.
Trump himself issues pardons for around half a million dollars.
If I recall correctly, one of Hanson's original arguments for prediction markets was that "insider trading" would drive prices closer to true probabilities. Insider trading was meant to be a feature, not a bug.
'Course, it's not necessarily very useful to get that kind of signal just hours in advance....
You can get wronger faster by using complex generators than compact generators.
... except that you have a natural immunity (well, aversion) to adopting complex generators, and a natural affinity for simple explanations. Or at least I think both of those are true of most people.
This comment feels to me like it might be dancing around saying "Hey! Don't rape people! Make sure you are not raping people! You are saying some pretty rapey things"
Nope, that's all coming from your expectations, not from me.
If I'd wanted to say that, I'd have said it. In fact, somebody had already said that. I actually downvoted it because I didn't think the inference was particularly justified by the original text.
Solomonoff induction gives you a weighted sum over an infinite number of programs[1]. That's not compact. And if it were computable, which it isn't, or even approximable, which it probably isn't for this case, I doubt you'd be able to collect enough data in your lifetime for any convergence to speak of. Not even assuming that you were able to reliably collect all relevant data, which you're not, and that you were actually encoding or processing the data in a formal way, which you're also not.
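For concreteness, the sum in question is the standard Solomonoff prior over a universal prefix machine $U$:

```latex
M(x) \;=\; \sum_{p \,:\, U(p) \text{ outputs a string beginning with } x} 2^{-|p|}
```

The sum runs over *every* program $p$ whose output starts with your observations $x$, each weighted by its length. Predictions come from the ratio $M(xy)/M(x)$. Nothing in that object hands you one program as "the" explanation; every consistent program contributes forever, just with shrinking weight.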
And if you actually did somehow get your hands around a Solomonoff sum, you still wouldn't be able to just grab a single term out of it, not even the one for the shortest program, and substitute it as "the" explanation on the grounds that "Solomonoff induction works".
I can understand "compact generation" as a metaphorical allusion to Occam, but seriously, Solomonoff induction isn't even useful as a metaphor for any well-chosen approach here. You can't let formalisms like that invade your thinking to the point where you seriously think in terms of them in areas where they don't make sense.
Also, human social behavior probably isn't deterministically Turing computable even if you model the entire universe. Probabilistically computable, probably, yes. In theory. And, to be fair, I'm sure Solomonoff carries over just fine to nondeterministic Turing processes. But anyway, you don't actually have, and can't actually get, a machine that computes human behavior or even a meaningful approximation to it.
There's also no anti-inductive prior involved. What I'm saying isn't about the underlying phenomena at all, and certainly doesn't say that there's no regularity in them. It's about the theory, and it has in fact happened, far more often than not in my experience, that simple, single-explanation, "compact" theories yield really bogus results.
Which is actually capable of encoding "lots of different, interacting things are going on" in a way that a single, deterministic Turing program would not be. ↩︎
... but it has a 15-inch-longer wheelbase than a Toyota Sienna, because of that choice to put everything between the wheels. That's the length that matters for the beam stress. Which, if I recall correctly, goes as the square of the length. Which is probably why minivans sit up on top of the wheels... which makes them taller. And being narrower and shorter (on edit: meaning vertically) than the minivan actually reduces the rigidity of that unibody.
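For what it's worth, the back-of-the-envelope version, assuming the load acts roughly like a uniformly distributed load $w$ per unit length on a simply supported beam of span $L$:

```latex
M_{\max} = \frac{wL^2}{8}, \qquad
\sigma_{\max} = \frac{M_{\max}\, c}{I} \;\propto\; L^2
\quad \text{(fixed } w \text{, fixed cross-section)}
```

So at the same section depth, peak bending stress does grow with the square of the span, and midspan deflection is worse still ($\delta_{\max} = 5wL^4/384EI$, i.e. $\propto L^4$). Either way, a longer wheelbase demands a deeper or heavier structure.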
Anyway, I'm not necessarily saying you can't make it a unibody, but it's going to have to be a lot thicker unibody, so you're trading weight against height, with either one costing you in sticker price and fuel economy.