Carl Feynman

I was born in 1962 (so I’m in my 60s).  I was raised rationalist, more or less, before we had a name for it.  I went to MIT, and have a bachelor’s degree in philosophy and linguistics, and a master’s degree in electrical engineering and computer science.  I got married in 1991, and have two kids.  I live in the Boston area.  I’ve worked as various kinds of engineer: electronics, computer architecture, optics, robotics, software.

Around 1992, I was delighted to discover the Extropians.  I’ve enjoyed being in those kinds of circles ever since.  My experience with the Less Wrong community has been “I was just standing here, and a bunch of people gathered, and now I’m in the middle of a crowd.”  A very delightful and wonderful crowd, just to be clear.

I’m signed up for cryonics.  I think it has a 5% chance of working, which is either very small or very large, depending on how you think about it.

I may or may not have qualia, depending on your definition.  I think that philosophical zombies are possible, and I am one.  This is a very unimportant fact about me, but seems to incite a lot of conversation with people who care.

I am reflectively consistent, in the sense that I can examine my behavior and desires, and understand what gives rise to them, and there are no contradictions I’m aware of.  I’ve been that way since about 2015.  It took decades of work and I’m not sure if that work was worth it.

Comments

I assumed that being struck by lightning was fairly common, but today I learned I was wrong.  Apparently it kills only about 30 Americans per year; I had assumed it was more like 3,000, or even 30,000.

As a child, I was in an indoor swimming pool in a house that was struck by lightning, and as a young man, I was in a car that was about 40 feet from a telephone pole that was struck by lightning.  In both cases I was fine because of the Faraday cage effect, but the repeated near-misses and spectacular electrical violence made me think that lightning was a non-negligible hazard.  I suppose that’s a rationalist lesson: don’t generalize from your experience of rare events if you can actually look up the probabilities.

I guess I wasted all that time training my kids in lightning safety techniques.  

Why should we accept as evidence something that you perceived while you were dreaming?  Last night I dreamed that I was walking barefoot through the snow, but it wasn’t cold because it was summer snow.  I assume you don’t take that as evidence that warm snow is an actual summer phenomenon, so why should we take as evidence your memory of having two consciousnesses?

It seems to me that a correctly organized consciousness would occur once per body.  Consciousness is (at least in part) a system for controlling our actions in the medium and long term.  If we had two consciousnesses, and they disagreed as to what to do next, the result would be paralysis.  And if they agreed, then one of them would be superfluous, and we’d expend less brain energy if we only had one.

It’s curious that you ask for personal experience or personal research.  Did you mean to discount the decades of published research on using xenon for anesthesia and MRI contrast?  Anyway, if you’ll accept the opinion of someone who has merely read some books on anesthesia and gas physiology: my opinion is that this guy is full of it.  The physiology of small-molecule anesthetics and serotonergic hallucinogens is completely different.  And he doesn’t seem like a serious person.

Electrical engineer here.  I read the publicity statement, and from my point of view it is both (a) a major advance, if true, and (b) entirely plausible.  When you switch from a programmable device (e.g. a GPU) to a similarly sized special-purpose ASIC, it is not unreasonable to pick up a factor of 10 to 50 in performance.  The tradeoff is that the GPU can do many more things than the ASIC, and the ASIC takes years to design.  They claim they started design in 2022, on a transformer-only device, on the theory that transformers were going to be popular.  And boy, did they luck out.  I don’t know if other people can tell, but to me, that statement oozes with engineering glee.  They’re so happy!
I would love to see a technical paper on how they did it.

Of course they may be lying.

In a great deal of detail, apparently, since it has a recommended reading time of 131 minutes.

I read along in your explanation, and I’m nodding, and saying “yup, okay”, and then get to a sentence that makes me say “wait, what?”  And the whole argument depends on this.  I’ve tried to understand this before, and this has happened before, with “the universal prior is malign”.  Fortunately in this case, I have the person who wrote the sentence here to help me understand.

So, if you don’t mind, please explain “make them maximally probable”.  How does something in another timeline or in the future change the probability of an answer by writing the wrong answer 10^100 times?
 

Side point, which I’m checking in case I didn’t understand the setup: we’re using the prior where the probability of a bit string (before all observations) is proportional to 2^-(length of the shortest program emitting that bit string).  Right?
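To make sure I have the setup straight, here is a toy numeric sketch of that prior.  The strings and their "shortest program lengths" are invented purely for illustration (true shortest-program lengths are uncomputable), and I'm normalizing over just these three strings to keep the arithmetic visible:

```python
# Toy illustration of the length-based prior described above:
# a bit string's prior probability is proportional to
# 2^-(length of the shortest program emitting that bit string).
# The lengths below are made up for illustration; real
# shortest-program lengths are uncomputable.
shortest_program_length = {
    "0000000000": 3,   # very compressible string, short program
    "0101010101": 5,   # also regular, slightly longer program
    "0110100110": 12,  # irregular string, long program
}

# Unnormalized weights 2^-L, then normalize over this toy set.
weights = {s: 2.0 ** -n for s, n in shortest_program_length.items()}
total = sum(weights.values())
prior = {s: w / total for s, w in weights.items()}

# Most of the prior mass goes to the most compressible string.
for s, p in prior.items():
    print(s, round(p, 4))
```

The point of the sketch is just the shape of the distribution: each extra bit of program length halves a string's weight, so the shortest-program string dominates the prior.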

I will speak to the question of “what are some situations in real life, other than ‘AI takeoff’, where the early/mid/late game metaphor seems useful?”.  It seems to me that such a metaphor is useful in any situation with

—two or more competitors,

—who start small and expand,

—in a fixed-size field of contention,

—and such that bigger competitors tend to beat smaller ones.

The phases in such a competition can be described as

—Early: competitors are building up power and resources more or less independently, because they’re not big enough to run into each other significantly.  Important to strategize correctly.  You can plan longer term because other players can’t upset your plans yet.

—Mid: what other players do matters very much to what you do.  Maximum coupling leads to maximum confusion.

—End: time for the leading player to grind down the smaller players.  Becomes easier to understand as hope disappears.

Chess is an example, where there are two competitors, and the resource is “pieces that have been moved and not yet taken”.  This also applies to multiplayer tabletop games (which is where I thought of it).  It also applies to companies moving into a new product area, like desktop operating systems in the ‘80s.  It applied to European colonial powers moving into new continents.

Could you please either provide a reference or more explanation of the concept of an acausal attack between timelines?  I understand the concept of acausal cooperation between copies of yourself, or acausal extortion by something that has a copy of you running in simulation.  But separate timelines can’t exchange information in any way.  How is an attack possible?  What could possibly be the motive for an attack?

Funny thing: your message seemed to be phrased as disagreeing, so I was all set to post a devastating reply.  But after I tried to find points of actual disagreement, I couldn’t.  So I will write a reply of violent agreement.

Your points about the dissimilarity between aerospace in 1972 and AI in 2024 are good ones.  Note that my original message was about how close current technology is to AGI.  The part about aerospace was just because my rationalist virtue required me to point out a case where an analogous argument would have failed.  I don’t think it’s likely.  

Was Concorde “inherently a bad idea”?  No, but “inherently” is doing the work here.  It lost money and didn’t lead anywhere, which are the criteria on which such an engineering project must be judged.  It didn’t matter how glorious, beautiful or innovative it was.  It’s a pyramid that was built even though it wasn’t efficient.

The impossibility of traveling faster than the speed of light was a lot less obvious in 1961. 

If the increase in speed had continued at the rate it did from 1820 to 1961, we’d have been faster than the speed of light by 1982.  This extrapolation is from an article by G. Harry Stine in Analog, in 1961.  It was a pretty sloppy analysis by modern standards, but it gives an idea of how people were thinking at the time.
 

These all happened in 1972 or close to it:

—Setting the air speed record, which stands to this day.

—End of flights to the Moon.

—Cancellation of the American SST project.

—Cancellation of the NERVA nuclear rocket program.

—The Boeing 747 enters service, and remains the largest passenger plane until 2003.

—Concorde enters service, turns out to be a bad idea.

In the ‘80s, I found an old National Geographic from 1971 or 1972 about the “future of flight”.  Essentially none of their predictions had come true.  That’s why I think it was a surprise.
