# All of Gerald Monroe's Comments + Replies

We need a theory of anthropic measure binding

Biological meat doesn't have the needed properties, but this is how SpaceX's and others' avionics control works. Inputs are processed in discrete frames: all computers receive a frame of [last_output | sensor_inputs] and implement a function where output = f(frame). The output depends only on the frame input, and all internal state is identical across the 3 computers.

Then, after processing, last_output = majority(output1, output2, output3).

So even when one of the computers suffers a transient fault, it can still contribute to the next frame.
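A minimal sketch of this frame-voting scheme, assuming a simple 2-of-3 bitwise-identical vote (real flight software is of course far more involved):

```python
# Frame-synchronous triple redundancy: each computer runs the same pure
# function on the same frame, and the outputs are majority-voted. A
# transient fault on one computer is voted out, and that computer rejoins
# cleanly on the next frame because all state lives in the voted frame.

def majority(a, b, c):
    """Return the value at least two of the three computers agree on."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("three-way disagreement: no majority")

def step(f, last_output, sensor_inputs):
    """One frame: build [last_output | sensor_inputs], run f on each of
    the 3 redundant computers, and vote on the result."""
    frame = (last_output, tuple(sensor_inputs))
    outputs = [f(frame) for _ in range(3)]  # one call per redundant computer
    return majority(*outputs)

# Example: a trivial (hypothetical) control law summing sensors with last output.
f = lambda frame: frame[0] + sum(frame[1])
out = step(f, 0, [1, 2, 3])  # -> 6
```

Because f depends only on the frame, resynchronizing a faulted computer is just handing it the next voted frame.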

The Machine that Broke My Heart

My other comment is that you probably didn't succeed as well as you thought you did.  I am taking your story at face value - that your model was spookily accurate, that it worked way better than you could reasonably expect, etc.  But scale matters.  Many tech prototypes work perfectly at small scales but would fail if you had built a few thousand or million hardware instances and tried them with the user wearing them across the full range of human activity and cultures.

But say you did, and your model that's cheap enough to train on a ... (read more)

The Genetics of Space Amazons

"Sci fi plot alert": what happens if due to random chance/genetic drift the "A" version of the X chromosome becomes more common and the last male dies?  This would be more probable the smaller the population is.  And, I dunno, a space rock snipes the sperm storage freezer.  (similar to what happened in Seveneves)

1Jan Christian Refsgaard22dWhen the spaceship lands there is a 1% chance that no males are among the first 16 births ((3/4)^16). Luckily males are fertile for longer, so if the second generation had no men, the first generation still works. If the A had a mutation such that AX did not have a 50% chance of passing on an A, then the gender ratio would be even more extreme. If the last man dies, an AY female could probably artificially inseminate a female. You can update the matrix and take the dot product to see how those different rules pan out; if you have a specific ratio you want to try, then I can calculate it for you. Calculating a target gender ratio will require a mathematician, as this is a Markov process and their transition matrices are hard to calculate from a target steady state, if you are a mere mortal like me.
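The figures in this thread can be sanity-checked directly; the 3:1 female:male birth ratio and the two-state transition matrix below are my reading of the setup, not anything stated exactly in the post:

```python
# Chance of no males among the first 16 births, assuming each birth is
# independently female with probability 3/4:
p_no_males_in_16 = 0.75 ** 16  # about 0.0100, i.e. roughly a 1% chance

# The "update the matrix and take the dot product" step is just iterating
# a Markov chain: dist_next = dist @ T. Hypothetical 2-state example
# (offspring sex: female or male), with the same 3:1 rule in both rows:
T = [[0.75, 0.25],   # from "female": 75% female, 25% male offspring
     [0.75, 0.25]]   # from "male": same, under simple inheritance
dist = [1.0, 0.0]
dist = [sum(dist[i] * T[i][j] for i in range(2)) for j in range(2)]
# dist is now [0.75, 0.25]: the steady-state 3:1 female:male ratio
```

Going the other way (choosing matrix entries to hit a target steady state) is the harder inverse problem the reply alludes to.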
Universal counterargument against “badness of death” is wrong

Just to add to the above: even without (massive) cognitive decline in the aged, just knowing you only have a few years left likely has an effect on someone's decisions.  Most changes and improvements, in technology and institutional processes, cause initial short-term problems.  They only pay off long term.  If you're in the last 5 years of your career, or your life, there's no expected payoff for learning most new things, or for seeing a major change in how the institution you work in operates.

If you can reasonably expect to live for ma... (read more)

The Machine that Broke My Heart

Did you really throw away the software and not keep it in a VCS or on an extra storage device?  I feel a sort of pain thinking about it - that if it really worked as well as described, it may not at all be easy to rebuild.

4lsusr23dWe kept the software. It's somewhere on the cloud in a VCS. Not sure about the data. This was an embedded system. The hardest, most frustrating part to maintain was hardware integration. The prototype was a specific wearable device attached to a specific laptop. I erased that laptop and gave it away to a relative. In a perfect world, I'd redo this project as a contractor for an established wearables company. I'd do the machine learning, they'd do the hardware and we'd outsource the data annotation to Mechanical Turk. (Data collection is easy. The data bottleneck is annotation.) But that takes industry connections I don't have.
Gerald Monroe's Shortform

R1 through R4 are arbitrary positive floating point numbers.  Units are currency units.  So "human harm" is in terms of estimated actual costs + estimated reputation damage paid for injuries/wrongful deaths, "outside boundary" is an estimate of the fines for trespassing and settlements paid in lawsuits, "paperclips made" is the economic value of the paperclips, and operating cost is obvious.
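Since all four coefficients weight currency-denominated estimates, the heuristic can be sketched directly; the term order, signs, and example numbers here are my assumptions, since part of the original formula is truncated:

```python
# Hedged sketch of the shortform's reward heuristic: every term is an
# estimated cost or value in currency units, weighted by an arbitrary
# positive coefficient R1..R4. All inputs are hypothetical estimates.
def reward(paperclip_value, human_harm_cost, trespass_cost, operating_cost,
           r1=1.0, r2=1.0, r3=1.0, r4=1.0):
    return (paperclip_value * r1      # economic value of paperclips made
            - human_harm_cost * r2    # injuries/wrongful-death costs
            - trespass_cost * r3      # fines and settlements for boundary violations
            - operating_cost * r4)    # cost of running the factory

h = reward(paperclip_value=1000.0, human_harm_cost=0.0,
           trespass_cost=0.0, operating_cost=200.0)  # -> 800.0
```

Note this makes concrete what the replies object to: every harm is fungible with paperclip revenue at whatever exchange rate the R coefficients set.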

2Dagon1moHmm. Either I'm misunderstanding, or you just described a completely amoral optimizer, which will kill billions as long as it can't be held financially liable. Maybe just take over the governments (or at least currency control), so it can't be financially penalized for anything, ever. Also, you're adding paperclip-differential to money, so the result won't be pure money. That's probably good, because otherwise this beast stops making paperclips and optimizes for negative realized costs on one of the other dimensions.
Gerald Monroe's Shortform

This is, arguably, AGI.  The reason it's AGI is that you can solve most real world problems by licensing a collection of common subcomponents (I would predict some stuff will be open source, but the need for data and cloud compute resources to build and maintain a component means nothing can be free), where you only need to define your problem.

In this specific toy example, the only thing that is written by human devs for their paperclip factory might be a small number of json files that reference paperclip specs, define the system topology, and refe... (read more)

1JBlack25dThis is definitively not AGI. If it lacks the cognitive ability to consider things that humans can consider, then it's not AGI.
Gerald Monroe's Shortform

Paperclip quality control is an agent that was trained on simulated sensor inputs (camera images and whatever else) of variations of paperclips. Paperclips that are not within a narrow range of dimensions and other measurements for correctness are rejected.

It doesn't have any learning ability. It is literally an overgrown digital filter that takes in some dimensions from the input image and outputs true or false to accept or reject (and probably another vector specifying which checks failed).

We can build every subagent for everything the factory needs as such a limited, narrow-domain machine that alignment issues are not possible. (Especially as most will have no memory and all will have learning disabled.)
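Such a memoryless accept/reject filter might look like this in outline; the specific checks and tolerance values are invented for illustration:

```python
# Sketch of a memoryless accept/reject filter for paperclip QC.
# The measured dimensions would come from some vision pipeline; the
# check names and tolerances here are hypothetical.
CHECKS = {
    "length_mm":      (30.0, 36.0),
    "wire_gauge_mm":  (0.8, 1.1),
    "bend_radius_mm": (1.5, 3.0),
}

def inspect(measurements):
    """Return (accept, failed_checks). No memory, no learning:
    the output depends only on this one input."""
    failed = [name for name, (lo, hi) in CHECKS.items()
              if not (lo <= measurements.get(name, float("nan")) <= hi)]
    return (len(failed) == 0, failed)

ok, failed = inspect({"length_mm": 33.0, "wire_gauge_mm": 1.0,
                      "bend_radius_mm": 2.0})  # -> (True, [])
```

A missing or out-of-range measurement simply lands in `failed_checks`, which is the second output vector the comment mentions.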

Gerald Monroe's Shortform

The trivial solution to AI alignment.

Preface: I'm not expecting this solution to work, I just want to understand why the 'ez' solution doesn't work.

A paperclip maximizer:

The paperclip maximizer is a paperclip factory designer/manager.  Its role is to select, from a finite set of machines available on the market, machines to manufacture, inspect, and package paperclips.

Its reward heuristic is H = ( (quota - paperclips_made) x R1 - <human harm> x R2 - <machi... (read more)

2JBlack1moYes, you can avoid AGI misalignment if you choose to not employ AGI. What do you do about all the other people who will deploy AGI as soon as it is possible?
2Dagon1moPut units on your equation. I don't think H will end up being what you think it is. Or, the coefficients R1-R4 are exactly as complex as the problem you started with, and you've accomplished no simplification with this. Heck, even the first term, (quota - paperclips made) hand-waves where the quota comes from, and any non-linearity in making slightly more for next year being better than slightly fewer than needed this year.
1Ansel1moWithout even getting into whether your specific reward heuristic is misaligned, it seems to me that you've just shifted the problem slightly out of the focus of your description of the system, by specifying that all of the work will be done by subsystems that you're just assuming will be safe. "Paperclip quality control" has just as much potential for misalignment in the limit as does paperclip maximization, depending on what kind of agent you use to accomplish it. So, even if we grant the assumption that your heuristic is aligned, we are merely left with the task of designing a bunch of aligned agents to do subtasks.
Why Bedroom Closets?

Yes.  I also sorta daydream about similar things.  With real estate we've got this archaic model mired in all these artificial rules and barriers and intermediaries who are legally allowed to consume a chunk from every transaction.

A proper setup would be, well, once we get robotics to be more flexible and reliable (obviously using the latest breakthroughs in reinforcement learning/neural networks), we'd make everything out of cubical modules like you say.

So you assign the robots to clear a site and the first ones come and demolish anything... (read more)

Why Bedroom Closets?

Modular housing would let you do that, but it hasn't taken off in the USA in favor of finance-scam, built-on-site places that are extremely expensive to modify.  So you need a design that meets most needs no matter what they are, which is part of the reason US houses are so big, with extra rooms.

Another factor is that it's expensive to switch houses.  In theory you should be able to just trade up and down with minimal hassle.  In practice you pay approximately 5% of the value of the property to various people as fees!  Not to men... (read more)

Why Bedroom Closets?

Dining rooms.  Foyers with double-height ceilings.  Sun rooms.  Upstairs kitchens.  Commercial-grade kitchens in houses meant for 4-5 total occupants.

There are a lot of ways to waste space in housing, and the other factor, as you figured out, is that there's not a whole lot of engineering effort put in.  A methodical way to design a house would be to sample the movement and activities of the occupants, over a decent sample size, over a period of years.  Find out from the data where people go and what they do, how long they spend on ... (read more)
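The occupant-sampling idea could start as something as simple as aggregating time-per-room from activity logs; the rooms and durations below are entirely made up:

```python
# Toy aggregation of occupant-movement samples: what fraction of the
# tracked time is spent in each room. Log entries are (room, minutes)
# pairs and are purely hypothetical.
from collections import defaultdict

log = [("kitchen", 90), ("bedroom", 480), ("dining_room", 5),
       ("living_room", 180), ("kitchen", 30)]

totals = defaultdict(int)
for room, minutes in log:
    totals[room] += minutes

total_time = sum(totals.values())
share = {room: t / total_time for room, t in totals.items()}
# A room that captures well under 1% of tracked time (here the dining
# room) is a candidate for removal or shrinking in the next design.
```

With years of real data per household, the same aggregation would directly flag the wasted-space rooms listed above.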

2Viliam2moPerhaps MIRI should take "designing a perfect house" as a subproblem of "extracting human preferences". :D I am not even sure I could design a perfect house for myself. Seems like my preferences change over time, depending on my situation. It is different being childless, having toddlers, having teenagers. Optimal kitchen and dining room depend on your social life (how often do you invite people for dinner? how many?), how much you cook, and even what you cook. For example, a kitchen connected to another room is good, because the person who cooks is not socially isolated at the moment. But if you cook something that smells, then your whole house smells. Unless you have a powerful cooker hood, which suggests that the answer may depend on the available technology, which can be different ten years later. Maybe the perfect house would be one that is easiest to redesign. Like, whenever you change your mind, you just move things into new positions and try it for a week; then either keep it or revert the change. How far could you go in this direction, to make the house as flexible as possible? Rearranging furniture could be easy, but what about walls? A simple board that can be easily moved to another position is probably bad at isolating sounds. Moving a kitchen or a bathroom requires connecting the water and gas supplies; getting complete freedom would probably cost too much. (In theory, perhaps you could put the water and gas pipes in walls around the whole house, so that you would have outlets everywhere. Not sure how often the pipes would leak then, impacted by changing temperature in the outer walls.) So maybe you could only have multiple outlets along one wall? Of course, electric power and ethernet everywhere. Has anyone already tried something like this? How well does it work? Is it significantly more expensive than a normal house?
Postmodern Warfare

So TLW, at the end of the day, all your objections are of the form "this method isn't perfect" or "this method will have issues that follow from fundamental theorems".

And you're right.  Having built smaller-scale versions of networked control systems, using a slightly lossy interface and an atomic state-update mechanism, I'm taking the perspective of "we can make this work".

I guess that's the delta here.  Everything you say as an objection is correct.  It's just not sufficient.

At the end of the day, we're talking about a coll... (read more)

Is Molecular Nanotechnology "Scientific"?

(Unfortunately, I'd put AI down with flight in 1200 - intelligence exists, but we don't understand it to any real extent, and current technologies are not approaching proper intelligence; they need new conceptual ideas)

What's your measuring stick here?  "Artificial general intelligence" arguably doesn't require the intelligent system to have emotions, or even organism-level goals.  Arguably, a software stack where you can define what a robotic work system must accomplish in some heuristic language, and then autonomously generate the n... (read more)

Is Molecular Nanotechnology "Scientific"?

What about "microtechnology"?  These would be self-assembling machines made of bigger parts, similar to the actuators in a DLP chip.  Or "hybrid microtechnology", where some other process using a catalyst has made subunits large enough to be manipulated by these "micro-scale" robotics.

Wouldn't "hybrid microtechnology" have precisely the same real-world consequences (self-replication) as the nanoscale machinery you object to?

Postmodern Warfare

It might be interesting to discuss this in a more interactive format, such as on https://discord.gg/GVkQF2Wn .  You do know some stuff, I know some stuff, and we seem to be talking past each other.  Fundamentally I think these problems are solvable.

(1) Merger of conflicting world spaces is possible.  Or, if this turns out to be too complex to implement, you deterministically pick one network to be the primary one and have it load the current observations from the subordinate network.

(2) If commanders need more memory than the communic... (read more)

2TLW2moI'm not the Mailman [https://en.wikipedia.org/wiki/True_Names], but I'm up there. I tend to write out a sketch, then go back and ponder it a while, massaging it into some semblance of order, deleting/modifying arguments that in retrospect don't work, and inflating it out into a quasi-coherent post in the process. This takes a fair bit of time. It works well in an asynchronous context. It does not work well in a synchronous context. In my experience, when I attempt to discuss in a synchronous context I end up with one of the following two things (or both!): 1. I state arguments or views that are insufficiently thought-out and that are obviously incorrect/inconsistent in retrospect, or are misleading/confusing/weaker than they should be. 2. I end up with essentially just a forum discussion that happens to be on Discord. Walls of text and all. The 2nd would be fine, but this then runs into another issue: Much of the reason why I am on a website like this is so that people can follow arguments / point out issues with my views / etc. Partly for the later benefit of others following my chains of logic. Partly for the later benefit of others when they can refer back to my chains of logic. Partly for the later benefit of myself, when someone down the line sees an old comment of mine and replies with something I hadn't thought of. And partly for the later benefit of myself, when I can refer back to my chains of logic. Discord does not achieve these. Someone searching this site does not see a Discord conversation. If you (or whoever owns the room, rather) close the Discord room, then the information is lost. (Or if e.g. Discord decides 6m down the line to start dropping old conversation history, etc, etc.) You can, somewhat awkwardly, archive a Discord conversation. And, say, post it on this site. 
But that's now just a forum conversation with extra steps (not to mention that it's now associated with the person who posted the transcript, not the people in the transcript).
Secure homes for digital people

Only the writes are blocked.

1Sune3moIf unrestricted read is allowed, that would allow someone to copy the em (the person being emulated on the chip) and run it without any safety mechanism on some other hardware. You could perhaps set it up such that the em would have to give consent before being read, but it is not clear to me how the em could verify that it was only being copied to other secure hardware.
Secure homes for digital people

I mean a simpler version would just be hardware that has limits baked into the substrate so certain operations are not possible.

The most trivial version: the core personality has copies in secure enclaves. These are mind-emulator chips that are loaded with the target personality, and then a fuse is blown. This fuse prevents any outside source from writing to the memory of the target personality.

It can still grow and learn, but only by internal update rules applied to internal memory.

1Sune3moThis will prevent you from being copied even if you wish to be copied.
Postmodern Warfare

So note I do work on embedded systems IRL, and have implemented many, many variations of messaging pipelines.  It is true I have not implemented one this complex, but I don't see any showstoppers.

1. This is how SpaceX does it right now.  In summary, it's fine to have some of the "commanders" miss entire frames, as "commanders" are stateless.  Their algorithm is f([observations_this_frame|consensus_calculated_values_last_frame]).  Resynchronizing when entire subnets get cut off for multiple frames and then reconnected is tricky, but straightforward.
1TLW3mo> In summary, it's fine to have some of the "commanders" miss entire frames as "commanders" are stateless. Having a single commander miss an update? Sure. That's not really the problem. The problem is cases like "half of the commanders got update A and half didn't, which then results in a two-way split of the commanders, which then results in agents splitting into two halves semi-randomly based on which way the majority fell of the subset of commanders that they can see". You really should look up testing of distributed databases, because these sort of split-brain scenarios are analogous there. You're also currently falling afoul of the CAP theorem I believe ( https://en.wikipedia.org/wiki/CAP_theorem [https://en.wikipedia.org/wiki/CAP_theorem] ). Note that "commanders receiving the same set of observations_this_frame" is equivalent to a distributed database with all observers adding observations and all commanders seeing a consistent view of this database... > Resynchronizing when entire subnets get cut off for multiple frames and then reconnected is tricky, but straightforward Again, you really should look up testing of distributed databases. One particularly interesting scenario is asymmetric failures. That is, A can send to B but not vice versa. > This is how SpaceX does it right now. Yep. Consensus among multiple redundant computations is also how the space shuttle operated (although the details are somewhat different for the space shuttle of course). It's not perfect, but it's a fairly decent approach so long as failures are rare enough that multiple simultaneous failures are rare, and you are not in an adversarial environment. > "commanders" are stateless. Commanders cannot be stateless unless either a) they do not retain memories of previously-observed world state or b) they are included in the world data every frame. The former results in demonstrably suboptimal behavior (there's a reason why humans have object permanence :-) ), and the latter req
Dating profiles from first principles: heterosexual male profile design

Upper right: 0.03 messages from high->low male receivers.

One row down: 0.02 messages from medium-high->low male receivers.

I mentally read this as 0.01 to 0.03 of these messages being motivated by something other than attractiveness, i.e., financial.  It could be just noise.

1Xodarap3moOh yeah, I agree that's a bit weird but I would guess it's just noise.
1Ericf3moAlso, if some fraction of males are presenting an extreme profile (a-la Jacob of putanumonit) they could be rated low attractiveness "on average" while still getting messages from the tiny fractional percent of females of each attractiveness band who are interested in that unique profile.
Postmodern Warfare

It's easier to visualize if you try to work out the hierarchy of software agents you might use for this.

First, most of the bigger drones will probably be some kind of land vehicle, whether legged infantry or a robot on tracks.  This is for obvious range and power reasons - a walking or rolling robot can carry far more weapons and armor than anything in the air.  And in a battlespace where everyone on the enemy side has computer-controlled aim, flying drones without armor will likely only survive for mere seconds of exposure.

3TLW3moI think you're grossly underestimating the following effects/issues: 1. How do multiple redundant commanders ensure that they reliably have the same information, much less in a battlefield environment? Our best efforts still ended up with Byzantine faults on the space shuttle, and that was carefully designed wired connections... (see also Murphy Was an Optimist, which describes a 4-way split due to a failed diode). 2. How do commanders broadcast information in a manner that isn't also broadcasting their location to enemies? (Honestly, the least important of these issues, and I was tempted not to include this lest you respond to this point and only this point.) 3. If many vehicles are constantly receiving enough information to make higher level decisions, how do you prevent a compromised vehicle from also leaking said state to the enemy? Note the number of known attacks against TPMs, and note that homomorphic encryption is many orders of magnitude away from being feasible here. (And worse, requires a serial speedup in many cases to be feasible.) 4. If many vehicles have the deterministic agent algorithm, how do you prevent a compromised vehicle from leaking said algorithm in a manner the enemy can use for adversarial attacks of various sorts? Same notes as 3. 5. "Each agent must query the layer below it to function, exporting these subtasks to an agent specialized in performing them." What you're describing runs into exponential blowup in the number of queries in some cases. (For a simple example, note that sliding-block puzzles are PSPACE-complete, and consider what happens when each bottom agent is a single block that has to be feasibility-queried as to if it can move.) Normally, I'd just say "sure, but you're unlikely to run into those cases", however combat is rather necessarily adversarial. The OpenAI 5 DOTA2 bot beating professionals received a lot of press. A random team who got ten wins against said bot, not so much. Beware glass jaws.
> in a battlespace w
Dating profiles from first principles: heterosexual male profile design

Note that a certain percentage of 'female senders' on dating apps have a financial motive.  Some are offering various forms of sex work (from nude photos to forms of prostitution), and some are part of an organized scam (pretending to be an attractive female sender who is just a little short of money and needs a gift card number in order to 'meet' the recipient).

A quick eyeball analysis of the data you have shows 1-2% of the senders are likely doing this.  Look at how the percentages go down for medium-high and medium.  This is because a scammer is not going to copy a profile photo that isn't top-quintile.

1Xodarap3moI'm not sure I understand what you're pointing at here. Can you explain more? Every category of female sender is monotonically less likely to send messages to less attractive males, as you would expect, without any consideration of spam.
Whole Brain Emulation: No Progress on C. elegans After 10 Years

I'll try to propose one.

• Is the technology feasible with demonstrated techniques at a laboratory level?
• Will there likely be gain to the organization that sells or deploys this technology in excess of its estimated cost?
• Does the technology run afoul of existing government regulation that will slow research into it?
• Does the technology have a global market that will result in a sigmoidal adoption curve?

Electric cars should have been predictable this way:

They were feasible since 1996, or 1990.  (LFP battery is the ... (read more)

Whole Brain Emulation: No Progress on C. elegans After 10 Years

Ok. I have thought about it further and here is the reason I think you're wrong. You have implicitly made an assumption that the tools available to neuroscientists today are good, and that we have a civilization with the excess resources to support such an endeavor.

This is false. Today the available resources for such endeavors are only enough to fund small teams. Research that is profitable, like silicon chip improvement, gets hundreds of billions invested into it.

So any extrapolation is kinda meaningless. It would be like asking in 1860 how many subway t... (read more)

2jefftk3moDo you have a better way of estimating the timing of new technologies that require many breakthroughs to reach?
Whole Brain Emulation: No Progress on C. elegans After 10 Years

Ok. Hypothetical experiment: in 2042 someone demonstrates a convincing dirt-dynamics simulation and a flock of emulated nematodes. The emulated firing patterns correspond well with experimentally observed nematodes.

With that information, would you still feel safe concluding the solution is 58 years away for human scale?

2jefftk3moI'm not sure what you mean by "convincing dirt dynamics simulation and a flock of emulated nematodes"? I'm going to assume you mean the task I described in my post: teach one something, upload it, verify it still has the learned behavior. Yes, I would still expect it to be at least 58 years away for human scale. The challenges are far larger for humans, and it taking over 40 years from people starting on simulating nematodes to full uploads would be a negative timeline update to me. Note that in 2011 I expected this for around 2021, which is nowhere near on track to do: https://www.jefftk.com/p/whole-brain-emulation-and-nematodes [https://www.jefftk.com/p/whole-brain-emulation-and-nematodes]
Why Not a Natural Gas Generator?

Awesome, thank you.  Ah, I see: at 220 watts just for your furnace, your own plot says you should have bought a generator, because a power outage longer than 4 hours is quite possible.  And in a shorter power outage you won't miss the heat anyway.

And a proper convenient setup is that you install a generator receptacle, like this one, somewhere outside near the main electrical panel.  This goes to a breaker on your main electrical panel, which is made mutually exclusive with an interlock kit like this one.

As for generators, which I... (read more)

2jefftk3moI think you're overlooking the solar? Which can both run the furnace directly and recharge the battery. I don't need to have the same level of heat in an emergency as not; only being able to run the furnace for part of the time is still very useful. I'm not interested in using my normal household wiring in an emergency like this, because I think I'm going to want almost everything to be off. With regular wiring it's too easy to use more power than you intend. Instead, I think I would be just plugging in my furnace, sump pump, basement freezer, etc as needed.
Whole Brain Emulation: No Progress on C. elegans After 10 Years

You have to, or you haven't really solved the problem.  It's very much bounded: you do not need to simulate "reality", you need to approximate the things the nematode can experience to slightly higher resolution than it can perceive.

So basically you need some kind of dirt-dynamics model with sufficient fidelity to the nematode's very crude senses to be equivalent.  It might be easier with an even smaller organism.

1FCCC3moMaybe someone should ask the people who were working on it what their main issues were.
Why Not a Natural Gas Generator?

Ok.  So specifically it's one of those probability things: the natural gas supply is more likely to work than the electric supply, so most widespread long-term electrical outages are going to leave you with gas service.  But similarly most such outages will leave you with some gasoline somewhere, even if you have to drive 100 miles to get it.  And depending on the climate zone you might still have solar panels.

Anyways I would like to know what you have in terms of "a bit for electronics, refrigeration, fans, sump pump, boiler, etc, depen... (read more)

2jefftk3mohttps://www.jefftk.com/p/backup-power [https://www.jefftk.com/p/backup-power] Yes https://www.jefftk.com/news/battery [https://www.jefftk.com/news/battery]
Why Not a Natural Gas Generator?

A system that uses batteries adds a lot more cost, though. One Powerwall adds $10k-plus to the tab and doesn't pay for itself in most areas. To do backup power like you want, you basically need batteries, and you need the electrical wiring changes to make this work, which adds several thousand more in electrician costs.

So for the marginal cost increase for "backup" you can get cheaper backup via a generator. This is essentially objective fact in almost all areas, I can link many sources if you want to drill into the details.

4jefftk3moThat's a much more comprehensive backup than I'm looking at. I'm not worried about a short power outage, and I don't need to have something that will get me through an extended outage at anything close to normal levels of consumption. Instead, if power is out for a long time, I want to have a bit for electronics, refrigeration, fans, sump pump, boiler, etc, depending on the situation. Much of that I can directly run off of the solar, but I have a battery backup system I spent a few hundred dollars on to fill in the gaps. I'm not interested in wiring it in: in an emergency, I'm okay running an extension cable to whatever it is I've decided to spend the power on. To use a generator in an extended outage, you either need to store very large amounts of fuel on site, or you need to be able to get more fuel. This post is specifically about whether powering a generator from your utility's natural gas line is worth it.
Why Not a Natural Gas Generator?

Residential solar like that is extremely expensive, even if you DIY the installation. My source for the costs is a few YouTube channels and the site "Signature Solar", which seems to be the cheapest source of the parts you will need. A 4kW array, even at recent dirt-cheap prices of 40 cents a watt, is $1600. A single 48V 5kWh battery is $1500, and this is a huge decrease from previous costs. The inverter, charge controller, and transfer switch are $1200 minimum. Not to mention rewiring your electrical panel, as you need several subpanels to make this work... (read more)
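Tallying just the parts costs quoted above (a sketch; labor, rewiring, and subpanel costs are extra and not included):

```python
# Parts-only tally of the quoted DIY solar numbers.
array_w = 4000            # 4kW array
price_per_watt = 0.40     # recent dirt-cheap panel pricing
panels = array_w * price_per_watt   # $1600

battery = 1500            # one 48V 5kWh battery
electronics = 1200        # inverter + charge controller + transfer switch, minimum

total = panels + battery + electronics  # $4300 before any wiring work
```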

8jefftk3moResidential solar, especially with incentives, will pay for itself in a large range of places. Our system here in MA would definitely not have been worth installing just for being prepared for power outages, that's only a side benefit.
Candy Innovation

Here's a candy not available 25 years ago: Justin's peanut butter cups. It's a Reese's with better-quality peanut butter, better texture, and better-quality chocolate.

But yes it isn't "innovative" it's just an improved version of the same thing. And presumably 25 years ago there were gourmet peanut butter cups somewhere, even if you had to go in person to a shop that made them by hand.

And technically there are many more flavors of m&m and Skittles than previously. Though all the "new" flavors were probably inspired by things that existed 25 years ago.... (read more)

Whole Brain Emulation: No Progress on C. elegans After 10 Years

Let's look at a proxy task: "rockets landing on their tail". The first automated landing of an airliner was in 1964. Using a similar system of guidance signals from antennas on the ground, surely a rocket could have landed after boosting a payload in around the same time period. Yet SpaceX first pulled it off in 2015.

In 1970, if a poorly funded research lab said they would get a rocket to land on its tail by 1980, and in 1980 they had not succeeded, would you update your estimated date of success to "centuries"? C. elegans has 302 neurons and it takes, I think I ... (read more)

3jefftk3moThat isn't the right interpretation of the proxy task. In 2011, I was using progress on nematodes to estimate the timing of whole brain emulation for humans. That's more similar to using progress in rockets landing on their tail to estimate the timing of a self-sustaining Mars colony. (I also walked back from "probably hundreds of years" to "I don't think we'll be uploading anyone in this century" after the comments on my 2011 post, writing the more detailed https://www.jefftk.com/p/whole-brain-emulation-and-nematodes)
1FCCC3moWere they trying to simulate a body and an environment? Seems to me that would make the problem much harder, as you’d be trying to simulate reality. (E.g. How does an organic body move through physical space based on neural activity? How does the environment’s effects on the body stimulate neural changes?)
8CraigMichael4moThe DC-X did this first in 1993, although this video is from 1995: https://youtube.com/watch?v=wv9n9Casp1o (And their budget was 60 million 1991 dollars; Wolfram Alpha says that's 117 million in 2021 dollars.) https://en.m.wikipedia.org/wiki/McDonnell_Douglas_DC-X
9niconiconi4moGood points, I did more digging and found some relevant information I initially missed, see "Update". He didn't, and funding was indeed a major factor.
Cryosleep

I think the current tech curves suggest it will not be developed before it is no longer needed. The human brain is extremely fragile, complex, and not designed to tolerate freezing. There may simply be no way to freeze it without installing so much supporting nanotechnology that the brain is essentially artificial.

Making AI better than humans at task n, meanwhile, is mostly an engineering problem, assuming a good definition of what it means to do well at task n is available.

Cryosleep

Neat ideas, though fundamentally they require the assumption that organic human minds will continue to offer value into the future. This assumption is almost certainly false: artificial circuitry already vastly outperforms brain tissue in almost every meaningful dimension but scale and power efficiency. (The brain is still significantly larger in scale than the biggest ANNs and needs much less power.)

At discrete tasks, of course, AI software can trivially outperform humans, albeit only on a limited subset of tasks so far. But the trend seems pretty clear.

1harsimony4moI generally agree. It seems unlikely to me that Cryosleep will be developed or in use for very long before brain emulations or AI become dominant. But like you point out, a lot of the benefits listed here would apply to brain emulations too. Even if it won't be useful for long, cryonics research seems like an important precursor to Em's. Developing tools to preserve/image the brain, determining which brain structures are important to preserve, and finding ways to upload organic minds will all be important.
How much should you be willing to pay for an AGI?

Even if the AGI were 10 times as expensive and only about as capable on average as a median human (with bursts of superhuman ability, like how GPT-3 is way faster and makes few spelling or punctuation errors), there is value in just knowing it is possible. I would expect enormous investments in all of the support infrastructure. After all, if you know it is possible and will scale, it's a matter of national survival. You cannot afford to be a skeptic when you see the actual flash and mushroom cloud. (Referring to how the Los Alamos test obviously converte... (read more)

I wanted to interview Eliezer Yudkowsky but he's busy so I simulated him instead

Thanks for disclosing that.  The cogent-seeming nature of the replies made me think that GPT-3 was much more advanced than my toying with their "CYOA" playground had suggested.  The bot can babble, but appears to have nonexistent context memory and no way of validating that a statement is not negated by a previous one.  For example: "The earth exploded.  Steven landed on the earth."

If you're curious what happens if I don't curate answers, here are five responses to each of two prompts, uncurated.

# Prompt 1

Lsusr: I wanted to talk to the real Eliezer Yudkowsky but he's busy. So I used GPT-3 to simulate him. You're the simulated Eliezer Yudkowsky. Thank you for coming. (Not like you had a choice.) I'm a big fan of your work and it's a pleasure to finally meet a simulation of you.

Eliezer Yudkowsky:

## Possible Responses

Eliezer Yudkowsky: You're not the first person to say that, but it's good to hear.

Eliezer Yudkowsky: Let's get this over w... (read more)

All Possible Views About Humanity's Future Are Wild

That's the percentage of global electricity provided by hydroelectric power.

With 1783 technology you obviously don't need to build the things you mentioned. Your needs are much smaller: textiles and driving machinery. You have a vastly smaller population and smaller cities, so wood is sufficient for heating and metal forging, as it was in the real 1783.

You cannot grow as fast, but in 1783 you have developed and are using the critical technologies that changed everything: the scientific method and the printing press. The printing press means that as people tinker an... (read more)

All Possible Views About Humanity's Future Are Wild

17 percent of total electricity is still a lot of energy. You aren't taking the question seriously when you assume someone would make a pencil the same way in a world without fossil fuels (and, implicitly, with the same problems with nuclear that we have now).

Focusing on the technology lets you develop a gears-level model and predict how industrial and supply chains could adapt to scarcer energy, with little of it in portable forms.

2CraigMichael4moI’m not sure what the 17 percent of total electricity figure is related to. I’m assuming that building a wind turbine would be a lot more difficult than building a pencil. Imagine it’s 1783, but all the coal, oil, natural gas, and rare-earth metals on Earth exist only in the places where they’re found now in 2021. How do you build something like the Deepwater Horizon using 1783 technology? How do you build the Smoky Hills Wind Farm using 1783 technology? How do you build a lithium-ion battery using 1783 technology? How do you build Chicago Pile-1 using 1783 technology? And, yes, you have to think about the whole supply chain. We use fossil-fuel-burning machines to move parts around, to log, etc. You can log a bit and move the trees down rivers, but then those trees are gone and what do you do? The problem is there’s only so much energy concentrated in wood, and it would be the most energy-dense material available. You’d burn it all and then you’d be done. The population would ultimately be limited by the amount of energy available to us, and there would be nothing we could do about it.
All Possible Views About Humanity's Future Are Wild

That's not even the correct staircase, though.  It was heating fires -> wind/water mills -> steam engines -> internal combustion engines.  But we still use hydroelectric to produce 17% of all electricity used on Earth.

In a hypothetical world with zero fossil fuels in concentrated, easily combusted form the tech tree would have been:

wind/water powering factories near rivers -> electricity -> well-positioned factories powered by remote wind/water.  Cities would need to be denser and to use electric buses and trolleys and ... (read more)

2CraigMichael4moYou’re focused more on technology and less on fuel sources. Given what goes into constructing a modern windmill, I don’t see it being viably done with a wood-burning steam engine. Consider all of the materials that go into making a pencil and what parts of the world they come from, then multiply it by at least 1000.
The Duplicator: Instant Cloning Would Make the World Economy Explode

Ok fair enough.  I just cannot think of a physical realization of this duplication technology that wouldn't also give you the ability to sync copies and/or freeze policy updates to a copy.

"Freezing policy updates" means the neural network is unable to learn; there would still be storage of local context data, which gets saved to a database and flushed once the individual switches tasks.

Doing it this way means that all clones of Sundar Pichai remain immutable and semi-deterministic, such that you can treat a decision made by any one of them... (read more)
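A minimal sketch of what an immutable clone with a flushable context buffer could look like (all names and the toy policy here are illustrative assumptions, not anything from the original comment):

```python
class FrozenClone:
    """A clone whose policy weights never update; the only mutable
    state is a local context buffer that is flushed on task switch."""

    def __init__(self, weights):
        self._weights = dict(weights)  # fixed at duplication time
        self._context = []             # within-task working memory

    def act(self, observation):
        # Toy deterministic policy: weighted sum of observation features.
        score = sum(self._weights.get(k, 0.0) * v
                    for k, v in observation.items())
        self._context.append((observation, score))
        return score

    def switch_task(self, archive):
        # Save local context to a shared database (here, just a list) and flush.
        archive.extend(self._context)
        self._context = []

# Any two clones built from the same weights make identical decisions:
weights = {"price": 2.0, "risk": -1.0}
a, b = FrozenClone(weights), FrozenClone(weights)
obs = {"price": 3.0, "risk": 1.0}
assert a.act(obs) == b.act(obs) == 5.0

archive = []
a.switch_task(archive)
assert a._context == [] and len(archive) == 1
```

Because `act` depends only on the frozen weights and the current input, a decision made by any clone can be audited as if the original had made it.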

The Duplicator: Instant Cloning Would Make the World Economy Explode

One benefit of meeting with the clone is that you will get any advice or information that is the same as the original's. In fact, assuming truly identical duplicates, a clone can be delegated the same credentials as the original. There's no reason not to, especially if the duplicator technology is perfect.

For an example today: you send a message to the Netflix account page wanting to update your credit card, using a web browser. Your computer is connecting to a "clone" of the server instance that does these updates. The cloned server has all the sam... (read more)
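The server analogy can be sketched as a pure handler: the response depends only on the request and the shared account state, so any replica returns identical results. (Function and field names here are illustrative, not Netflix's actual API.)

```python
def handle_update_card(request, account_db):
    # A pure handler: output is a function of (request, shared state) only,
    # never of which replica happens to run it.
    account = dict(account_db[request["user"]])
    account["card"] = request["card"]
    return {"user": request["user"], "status": "ok", "account": account}

db = {"alice": {"card": "1111", "plan": "basic"}}
req = {"user": "alice", "card": "2222"}

# Two "clone" servers holding the same state give identical answers:
assert handle_update_card(req, db) == handle_update_card(req, db)
```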

4ESRogs4moSticking with the hypothetical where what we have is a Calvin-and-Hobbes-style duplicator, I don't think this would work. You can't run a company with 100 different CEOs, even if at one point those people all had exactly the same memories. Sure, at the time of duplication, any one of the copies could be made the CEO. But from that point on their memories and the information they have access to will diverge. And you don't want Sundar #42 randomly overruling a decision Sundar #35 made because he didn't know about it. So no, I don't think they could all be given CEO-level decision making power (unless you also stipulate some super-coordination technology besides just the C&H-style duplicator).
Is LessWrong dead without Cox’s theorem?

Yes. Doesn't matter though.

Could a human exist who should rationally say no to cryo? In theory yes, but probably none has ever existed. As long as someone extracts any positive utility at all from a future day of existing, continuing to exist is better than death. And while, yes, certain humans live in chronic pain, any technology able to rebuild a cryo patient can almost certainly fix the problem causing it.

You would need to say that, out of 100 billion humans, someone lived who had a problem that can't be fixed and who suffers more by existing than not. This is a paradox, and I say none exist, as all such problems are brain or body faults that can be fixed.

1TAG4moYou are assuming selfishness. A person has to trade off the cost of cryo against the benefits of leaving money to their family, or charity. Now assuming benevolent motivations.
Is LessWrong dead without Cox’s theorem?

Wrong about the actions they should take to maximize their values.

It's inconceivable because it's a failure of imagination. Someone who has many social connections now will potentially be able to make many new ones then, were they to survive cryo. Moreover, reflecting on past successes requires one to still exist to remember them.


-1TAG4moWaking from cryo is equivalent to exile. Exile is a punishment.
Assigning probabilities to metaphysical ideas

Yes, that's what I meant: if you only compare hypotheses A and B when there is a very large number of hypotheses that fit all known data, you may become unreasonably confident in B if A is false.
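A toy Bayesian illustration of this point, with hypothetical numbers: if the data falsifies A, comparing only A against B drives confidence in B to 100%, but once the many other hypotheses that also fit the data are included, B's posterior collapses.

```python
def posterior(priors, likelihoods):
    """Normalized Bayesian posterior over an explicit hypothesis set."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# Data falsifies A (likelihood 0); B fits it perfectly (likelihood 1).
# Considering only {A, B}, B looks certain:
assert posterior([0.5, 0.5], [0.0, 1.0])[1] == 1.0

# But if 99 rival hypotheses fit the data just as well as B:
priors = [0.5] + [0.5 / 100] * 100   # A, then B and its 99 rivals
likes = [0.0] + [1.0] * 100
full = posterior(priors, likes)
assert abs(full[1] - 0.01) < 1e-9    # B's posterior is only 1%
```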

All Possible Views About Humanity's Future Are Wild

Well, the fossil fuel scenario has the issue that as the earth gets hotter, it would be more and more expensive, and obviously a bad idea, to extract and burn more fossil fuels. Moreover, more and more of the earth would be uninhabitable and also difficult to drill or mine for hydrocarbons.

As for the other scenarios: we are very close, I think far closer than most realize, to self-replicating machinery. All tasks involved in manufacturing machinery are vulnerable to already-demonstrated machine learning algorithms; it is just a matter of scale and iterative improvement. (B... (read more)

Assigning probabilities to metaphysical ideas

So maybe the error here is that humans can't really hold thousands of hypotheses in their head. For example, if you contrast the simulation argument vs. "known physics is all there is," you can falsify the "known physics" hypothesis because certain elements of the universe are impossible under known physics, or don't have an apparent underlying reason, both of which the simulation argument can explain. (The speed of light is explainable if the universe is made of discrete simulation cells that must finish by a deadline, and certain quantum entanglement effects could... (read more)

1Jotto9994moEDIT: I think I misunderstood. Just to confirm, did you mean this removes the point of bothering with a base rate, or did you mean it helps explain why people are ending up at preposterously far distances from even a relatively generous base-rate estimate? I have placed many forecasts on things where I am incapable of holding all the possible outcomes in my head. In fact, that is extremely common in a variety of domains. In replication markets, for example, I have little comprehension of the indefinite number of theories that could in principle be made about what is being tested in the paper. That doesn't stop me from having opinions about some ostensible result shown to me in a paper, and I'll still do better than a random dart-throwing chimp at it.
All Possible Views About Humanity's Future Are Wild

Regarding Kessler syndrome: I understand that's mostly science-press sensationalism. One method of dealing with it that the math checks out on is ground-based laser brooms. High-powered lasers would use photon pressure to deorbit each piece of debris, or at least enough debris to make spaceflight feasible. There's a paper study on it if you are interested. Note also that over a 100k-year period, most Kessler debris will not be in a stable orbit. Small pieces of debris have a high surface-area-to-volume ratio and deorbit quickly. Large pieces are by definition rare, because hu... (read more)

2CraigMichael4moYou’re missing the crux here: say a substantial part of humanity dies and we lose most knowledge of, and access to, the technologies that we currently use to extract fossil fuels. This creates a “missing stair” for the next group of humans populating the Earth. Our progress: burning wood, plants, and poo -> burning fossil fuels -> nuclear and renewables and whatever. If fossil fuels cannot be extracted by a society powered by wood (lol): burning wood, plants, and poo -> (how to use wood-burning machines to extract oil from beneath the ocean floor???) -> still burning wood, plants, and poo forever. They would have no way to climb the “energy staircase.” (Edits: clarity)
1RedMan4moI think you're making a great case for optimism. Based on your last line, I don't think our positions are too far apart. Laser brooms on the ground are a heavier infrastructure investment than just the rocket, and they haven't been built yet. Rockets with no brooms are cheaper and easier. So needing the broom raises the threshold, perhaps the raised threshold is still in reach, but at some theoretical point, it will not be. The fossil fuel comment was more in the direction of 'if we insist on burning everything currently in the ground, the runaway greenhouse effect is lethal to the species at 500-1000 year timelines'. I assert that we could screw ourselves permanently, in this century, by digging into a hole (through inadequate investment of non renewable resources like helium or failure to solve engineering challenges) which we cannot exit before we wreck our habitat (plenty of non co2 scenarios for this). I'm not sure how much pessimism is warranted, I certainly don't think that failure is inevitable, but I absolutely do think it's on the table.
All Possible Views About Humanity's Future Are Wild

We made computers with billions of times as much compute and memory as in the 1960s. Previously intractable problems, like machine perception and machine planning to resolve arbitrary failures, only really began to be solved with neural networks around 2014.

Previously they were theoretical. Now it's just a matter of money and iterations.

As noted previously, a subtask for a von Neumann machine like "mine rocks from the asteroid you landed on in other tasks and haul them to the smelter" could have a near-infinite number of failure modes. And with pre... (read more)

1Self-Embedded Agent4moFair enough.
All Possible Views About Humanity's Future Are Wild

My point was that during von Neumann's time there was plenty of reason to think such probes might never be possible, or might be far in the future. The exponential nature of certain types of improvement wasn't yet known.

1Self-Embedded Agent4moWe can't build Von Neumann probes in the real world - though we can in the digital world. What kind of significant (!) new information have we obtained about the feasibility of galaxywide colonization through Von Neumann probes?
Is LessWrong dead without Cox’s theorem?

Or, succinctly: to be the "least wrong," you need to be using the measurably best available assessment of projected outcomes.  All tools available are approximations anyway, and the best tools right now are "black box" deep learning methods, where we do not know exactly how they arrive at their answers.

This isn't a religion, and this is what a brain or any other known form of intelligence, artificial or natural, does.