Carl Feynman

I was born in 1962 (so I’m in my 60s).  I was raised rationalist, more or less, before we had a name for it.  I went to MIT, and have a bachelors degree in philosophy and linguistics, and a masters degree in electrical engineering and computer science.  I got married in 1991, and have two kids.  I live in the Boston area.  I’ve worked as various kinds of engineer: electronics, computer architecture, optics, robotics, software.

Around 1992, I was delighted to discover the Extropians.  I’ve enjoyed being in those kinds of circles ever since.  My experience with the Less Wrong community has been “I was just standing here, and a bunch of people gathered, and now I’m in the middle of a crowd.”  A very delightful and wonderful crowd, just to be clear.

I’m signed up for cryonics.  I think it has a 5% chance of working, which is either very small or very large, depending on how you think about it.

I may or may not have qualia, depending on your definition.  I think that philosophical zombies are possible, and I am one.  This is a very unimportant fact about me, but seems to incite a lot of conversation with people who care.

I am reflectively consistent, in the sense that I can examine my behavior and desires, and understand what gives rise to them, and there are no contradictions I’m aware of.  I’ve been that way since about 2015.  It took decades of work and I’m not sure if that work was worth it.

Comments
Checking in on AI-2027
Carl Feynman · 9d

I am amused that we are, with perfect seriousness, discussing the dates for the singularity with a resolution of two weeks.  I’m an old guy; I remember when the date for the singularity was “in the twenty first century sometime.”  For 50 years, predictions have been getting sharper and sharper.  The first time I saw a prediction that discussed time in terms of quarters instead of years, it took my breath away.  And that was a couple of years ago now.  

Of course it was clear decades ago that as the singularity approached, we would have a better and better idea of its timing and contours.  It’s neat to see it happen in real life.

(I know “the singularity” is disfavored, vaguely mystical, twentieth century terminology.  But I’m using it to express solidarity with my 1992 self, who thought with that word.)

Checking in on AI-2027
Carl Feynman · 9d

Here’s a try at phrasing it with less probability jargon:

The forecast contains a number of steps, all of which are assumed to take our best estimate of their most likely time.  But in reality, unless we’re very lucky, some of those steps will be faster than predicted, and some will be slower.  The ones that are faster can only be so much faster (because they can’t take no time at all).  On the other hand, the ones that are slower can be much slower.  So the net effect of this uncertainty probably adds up to a slowdown relative to the prediction. 
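The asymmetry above can be checked with a quick Monte Carlo sketch (all parameters here are invented for illustration): give each step a right-skewed duration whose most likely value (mode) is one year, and compare the naive forecast (sum of modes) against the simulated total.

```python
import random
import statistics

# Hypothetical setup: 5 forecast steps, each with a lognormal duration
# whose mode is 1 year.  Mode of lognormal(mu, sigma) = exp(mu - sigma^2),
# so choosing mu = sigma^2 makes the mode exactly 1.
sigma = 0.5
mu = sigma**2
steps = 5

random.seed(0)
totals = [
    sum(random.lognormvariate(mu, sigma) for _ in range(steps))
    for _ in range(100_000)
]

naive = steps * 1.0  # sum of per-step most-likely times: 5 years
print(f"naive forecast (sum of modes): {naive:.1f} years")
print(f"median simulated total:        {statistics.median(totals):.2f} years")
print(f"mean simulated total:          {statistics.fmean(totals):.2f} years")
```

Because each step can run long without bound but can only run short down to zero, both the median and the mean of the simulated total come out well above the naive sum-of-modes forecast.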

Does that seem like a fair summary?

Accelerando as a "Slow, Reasonably Nice Takeoff" Story
Carl Feynman · 9d

Some may wonder at the mention of “empire time” in the second excerpt from chapter 5.  It refers to a kind of artificially constructed simultaneity available to civilizations which have mastered both traversable wormholes and near-light-speed travel.  It doesn’t really do much for a civilization bounded within the orbit of Jupiter, which is only about a light-hour across.  I think Stross included it as a flavor phrase.  It’s marvelously evocative even if you don’t know what it means.

Back in the early ‘90s, when all this singularity stuff was much more theoretical, I remember empire time making a big impression on me.  It was neat how we could discern some of the contours of future possible civilizations before we got there.

You can read more about it here: http://www.aleph.se/Trans/Tech/Space-Time/wormholes.html#6 

Four ways learning Econ makes people dumber re: future AI
Carl Feynman · 2mo

Increasing inequality has been a thing here in the US for a few decades now, but it’s not universal, and it’s not an inevitable consequence of economic growth.  Moreover, it does not (in the US) consist of poor people getting poorer and rich people getting richer.  It consists of poor people staying poor, or only getting a bit richer, while rich people get a whole lot richer.  Thus, it is not demand destroying.

One could imagine this continuing with the advent of AI, or of everyone ending up equally dead, or many other outcomes.

Neuroscience of human sexual attraction triggers (3 hypotheses)
Carl Feynman · 2mo

This suggests the perfect date would be to meet at an amusement park, go on a roller coaster together, walk separately to the next roller coaster, and so on.

How quickly could robots scale up?
Carl Feynman · 5mo

I wrote a LessWrong article that tries to estimate doubling time for a self-reproducing robot.  A critical step is that smaller robots are faster.  Most manufacturing processes scale such that they get N times faster as they get N times smaller.  I picked N=4, for reasons explained in the article.  I concluded the doubling time is five weeks.   So the time to a billion robots is on the order of five years.
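The doubling arithmetic is easy to sketch.  This is a back-of-envelope lower bound only: it assumes growth starts from a single robot, with no ramp-up delay or resource limits, which is presumably why the full estimate comes out longer.

```python
import math

doubling_weeks = 5           # doubling time from the linked article
target = 1_000_000_000       # a billion robots
start = 1                    # assumed: growth begins with a single robot

doublings = math.log2(target / start)   # about 30 doublings to a billion
weeks = doublings * doubling_weeks
print(f"{doublings:.1f} doublings -> {weeks:.0f} weeks "
      f"(~{weeks / 52:.1f} years of pure doubling)")
```

Pure exponential doubling gives roughly three years; with start-up and ramp-up overhead, “on the order of five years” is the right reading.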

Even if your goal is a human-size robot, you’re better off building small robots to build it, since they work faster.  I assumed fairly clumsy hardware, but software comparable to a human machinist in cleverness.

Most Questionable Details in 'AI 2027'
Carl Feynman · 6mo

Nitpick: No single organism can destroy the biosphere; at most it can fill its niche & severely disrupt all ecosystems.

Have you read the report on mirror life that came out a few months ago?  A mirror bacterium has a niche of “inside any organism that uses carbon-based biochemistry”.  At least, it would parasitize all animals, plants, fungi, and the larger Protozoa, and probably kill them.  I guess bacteria and viruses would be left.  I bet that a reasonably smart superintelligence could figure out a way to get them too.

Daniel Tan's Shortform
Carl Feynman · 7mo

Quite right.  AI safety is moving very quickly and doesn’t have any methods that are well-understood enough to merit a survey article.  Those are for things that have a large but scattered literature, with maybe a couple of dozen to a hundred papers that need surveying.  That takes a few years to accumulate.

Daniel Tan's Shortform
Carl Feynman · 7mo

Could you give an example of the sort of distinction you’re pointing at?  Because I come to completely the opposite conclusion.  

Part of my job is applied mathematics.  I’d rather read a paper applying one technique to a variety of problems, than a paper applying a variety of techniques to one problem.  Seeing the technique used on several problems lets me understand how and when to apply it.  Seeing several techniques on the same problem tells me the best way to solve that particular problem, but I’ll probably never run into that particular problem in my work.

But that’s just me; presumably you want something else out of reading the literature.  I would be interested to know what exactly.

So how well is Claude playing Pokémon?
Carl Feynman · 7mo

When I say Pokémon-type games, I don’t mean games recounting the adventures of Ash Ketchum and Pikachu.  I mean games with a series of obstacles set in a large semi-open world, with things you can carry, a small set of available actions at each point, and a goal of progressing past the obstacles.  Such games can be manufactured in unlimited quantities by a program.  They can also be “peopled” by simple LLMs, for increased complexity.  They don’t actually have to be fun to play or look at, so the design requirements are loose.

There have been attempts at reinforcement learning using unlimited computer-generated games.  They haven’t worked that well.  I think the key feature that favors Pokémon-like games is that when the player dies or gets stuck, they can go back to the beginning and try again.  This rewards trial-and-error learning to get past obstacles, keeping a long-term memory, and re-planning your approach when something doesn’t work.  These are capabilities in which current LLMs are notably lacking.

Another way to put Claude’s missing skill: managing long-term memory.  You need to remember important stuff, forget minor stuff, summarize things, and realize when a conclusion in your memory is wrong and needs correction.
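That loop can be sketched as a toy memory store (everything here is invented for illustration, not how Claude actually works): entries carry an importance score, a new observation about a known topic overwrites the stale conclusion, and the least important entries are forgotten when the store fills up.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Toy long-term memory: keep the important, drop the minor."""
    capacity: int = 5
    facts: dict = field(default_factory=dict)  # key -> (importance, text)

    def remember(self, key, text, importance):
        # A new observation on an existing key replaces the old
        # conclusion -- realizing a stored belief is wrong and fixing it.
        self.facts[key] = (importance, text)
        # Forget the least important entries once over capacity.
        while len(self.facts) > self.capacity:
            minor = min(self.facts, key=lambda k: self.facts[k][0])
            del self.facts[minor]

    def recall(self):
        return {k: text for k, (_, text) in self.facts.items()}

m = MemoryStore(capacity=2)
m.remember("boulder", "blocks the east exit", importance=0.9)
m.remember("npc", "said hello", importance=0.1)
m.remember("boulder", "can be pushed with Strength", importance=0.9)  # correction
m.remember("badge", "needed for Strength", importance=0.8)
print(m.recall())  # the low-importance "npc" note has been forgotten
```

A real agent would also need summarization, which this sketch omits, but the remember/overwrite/forget cycle is the core of it.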

Posts

It's time for a self-reproducing machine (1y)