I was born in 1962 (so I’m in my 60s). I was raised rationalist, more or less, before we had a name for it. I went to MIT, and have a bachelor’s degree in philosophy and linguistics, and a master’s degree in electrical engineering and computer science. I got married in 1991, and have two kids. I live in the Boston area. I’ve worked as various kinds of engineer: electronics, computer architecture, optics, robotics, software.
Around 1992, I was delighted to discover the Extropians. I’ve enjoyed being in those kinds of circles ever since. My experience with the Less Wrong community has been “I was just standing here, and a bunch of people gathered, and now I’m in the middle of a crowd.” A very delightful and wonderful crowd, just to be clear.
I’m signed up for cryonics. I think it has a 5% chance of working, which is either very small or very large, depending on how you think about it.
I may or may not have qualia, depending on your definition. I think that philosophical zombies are possible, and I am one. This is a very unimportant fact about me, but seems to incite a lot of conversation with people who care.
I am reflectively consistent, in the sense that I can examine my behavior and desires, and understand what gives rise to them, and there are no contradictions I’m aware of. I’ve been that way since about 2015. It took decades of work and I’m not sure if that work was worth it.
Tractor pulling is a midwestern sport where people compete to build the tractor with the greatest pulling power, and then use it to pull against an ever-increasing drag. Whoever gets furthest wins. There are no restrictions on the types of engine or tractor, aside from overall weight. So is it basically a contest of who can buy the biggest jet engine? No, because the rules require that it be piloted by hand. No gyroscopes, electronics, or mechanical steering. That keeps the sport human and engaging.
The automatic guidance required for superhuman tractor pull driving is pretty simple (I looked into it in about 1991, when I was going to tractor pulls). You could build it out of 1950s missile guidance technology. By 1991, it would have been well within my one-person engineering capabilities. And by 2026, it's trivial. But even getting close to it is illegal. So here's a sport that decided to rule out AI decades ago, because it was decades ago when AI began to impinge on its sorrow, glory and beauty.
Another tip on how not to die of carbon monoxide: don't fall asleep in a car below decks on a ferry that has trucks idling. I was poisoned into unconsciousness, but revived with oxygen.
This could have been much shorter and more straightforwardly written, while being even more engaging.
Because then we can't trust that that's what the moons of Neptune really look like. The information has come from a source with goals and motivations and long-term plans, and the ability to lie. If a space probe tells us that the largest moon of Neptune has black geysers and terrain shaped like cantaloupe skin, we can trust it because it's subhuman and incapable of fooling us. With an AI we have to think "what if it's wrong? What if it has an ulterior motive?"
It occurs to me that both of my examples are similar, in that the moons of Neptune are remote in space, while historical facts are remote in time. We can imagine facts that are both. A few years ago, the interstellar object ʻOumuamua passed through the solar system. We took lots of observations of its weird properties as it passed the Sun, and then it vanished back into the interstellar darkness. The longer we wait, the harder it would be to send a space probe after it.
We have pictures of the moons of Neptune. Verifying them would require sending another space probe, which would be no easier than sending the first one.
Lots of historical facts were easy to determine at the time they were written down, and are now quite impossible to check.
All of your points are good ones. I now think my original position was an overreaction. I'm still worried in general, but this was a carefully composed post, and my objection went too far.
I agree that it should be possible to make political posts here that are consistent with rational thought. But I think that we, as fallible humans, will have a hard time doing it. And an outbreak of political disagreement here would be very unfortunate. We've done so well thus far keeping out the toxoplasma of rage.
I just want to register my protest at this explicitly political post appearing on Less Wrong. We've got a marvelously politics-free space here. Let's keep it that way. If you want to talk politics, you have essentially every other website on the Internet to do it.
When our kids were toddlers and youngsters, we had a rule that a whiny kid would not get what they asked for. We eventually developed a custom where if a kid asked for something in a whiny manner, we would dramatically intone “If you whine, you get…” and the kid would reply “NOTHING!” Then they would ask for whatever-it-was in a pleasant manner, and often get it. Kids often don’t realize that they’re whining, so this was a way of helping them to notice. And by often giving them the thing when they asked nicely, instead of holding the whining against them, we encouraged self-correction.
I have some experience in the design of systems intended for high reliability and resistance to adversaries. I feel like I’ve seen this kind of thinking before.
Your current line of thinking is at a stage I would call “pretheoretical noodling around.” I don’t mean any disrespect; all design has to go through this stage. But you’re not going to find any good references, or come to any conclusions, if you stay at this stage. A next step is to settle on a model of what you want to get done, and what capabilities the adversaries have. You need some bounds on the adversaries; otherwise nothing can work. And of course you need some bounds on what the system does, and how reliably. Once you’ve got this, you can either figure out how to do it, or prove that it can’t be done.
For example, there are ways of designing hardware that remains reliable on the assumption that at most N transistors are corrupt.
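To make that kind of bounded-fault assumption concrete, here is a minimal sketch (in Python, with made-up names; not the actual hardware techniques I have in mind) of the simplest version of the idea: triple modular redundancy, where a majority vote over three replicated modules masks any single faulty one.

```python
from collections import Counter

def majority_vote(outputs):
    """Return the value produced by a strict majority of the replicated modules."""
    value, count = Counter(outputs).most_common(1)[0]
    if count * 2 <= len(outputs):
        raise RuntimeError("no majority -- the fault bound was violated")
    return value

def replicated_adder(a, b, faulty_module=None):
    """Run the same computation on three modules; at most one may be corrupt."""
    outputs = [a + b, a + b, a + b]
    if faulty_module is not None:
        outputs[faulty_module] = 12345  # an arbitrary wrong answer from the bad module
    return majority_vote(outputs)

assert replicated_adder(2, 3) == 5
assert replicated_adder(2, 3, faulty_module=1) == 5  # a single fault is masked
```

The point is the shape of the guarantee: the design works *given* an explicit bound on how much can go wrong, and promises nothing beyond that bound.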
The problem of coming to agreement between a number of actors, some of whom are corrupt, is known as the Byzantine generals problem. It is well studied, and you may find it interesting.
I’m also interested in this topic, and I look forward to seeing where this line of thinking takes you.
My understanding of brain vasculature suggests that you should cool the carotid arteries in the neck, not the scalp. The scalp is fed by the external branch of the carotid, while the brain is fed by the internal branch. So cooling the scalp won’t cool the blood going to the brain.
There has been research on using localized microwave heating to cook brain tumors. Forty-five years ago, it was located in the basement of Building 26 at MIT. My father looked into it and said that it was hard to get good localized heating because heat was carried away so fast by the blood.
Epistemic status: all from memory, don’t have time for research, got to go now. 30% chance I’ve gotten something importantly wrong.
Epistemic status: I didn't read the paper but I read the blog post.
In 1976, the essay "Artificial Intelligence Meets Natural Stupidity" pointed out a failure mode into which AI researchers can fall. I fear this is another example, 50 years later. It goes as follows: you find some new, measurable phenomenon, and you give it a name, "X", borrowed from ordinary language because it sounds like the familiar thing.
Of course there is no necessary connection between the new phenomenon "X" and the existing X in ordinary language. For this to be good research, you need to show that the two Xes are similar in all important respects.
In this case, X is "incoherence". They define incoherence to be the fraction of error explained by variance. This has little or no connection to the property of being an actually incoherent reasoner, or to the effectiveness of superhuman AI.
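For concreteness, here's a minimal sketch of what I take "fraction of error explained by variance" to mean, using the standard bias-variance decomposition for squared error. This is my reading, not necessarily the paper's exact formula, and the numbers are invented.

```python
import numpy as np

def incoherence(predictions, target):
    """Share of squared error that comes from the model disagreeing with itself
    (variance) rather than from being wrong on average (bias)."""
    predictions = np.asarray(predictions, dtype=float)
    mse = np.mean((predictions - target) ** 2)  # total squared error
    variance = np.var(predictions)              # scatter around the model's own mean answer
    return variance / mse if mse > 0 else 0.0

# Consistently wrong: tiny "incoherence", but still a bad reasoner.
print(incoherence([7.0, 7.1, 6.9], target=10.0))               # ~0.0007
# Right on average but scattered: "incoherence" of 1.0.
print(incoherence([8.0, 12.0, 10.0, 6.0, 14.0], target=10.0))  # 1.0
```

On this reading, a model that confidently gives the same wrong answer every time scores as maximally "coherent", which is why I don't think the metric tracks what we ordinarily mean by an incoherent reasoner.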
I hope this doesn't result in redefining the meaning of "incoherence" in the wider field.