Gerald Monroe

Comments

Technological stagnation: Why I came around

Can you clarify the second point?  On the first point: 'corrupt' is a relative term.  But for the overall society, inexpensive, large-scale indoor space allows for progress.  It makes a city more productive and a country more efficient, and it makes the overall pace of technological development slightly faster.  San Francisco blocking construction, when it is already arguably America's most productive city, likely harms the city, its residents, its state, its country, and to a small extent, the world.

However, the benefits of blocking construction do accrue to current landowners in expensive cities, who get more certain ROIs on their investments and get to keep their views.  And because the US lets cities block new construction themselves (rather than handling it at a higher level of government), the only votes come from current residents, many of whom are landowners and thus invested in the current system...

Are you referring to Broad Sustainable Building?  https://en.wikipedia.org/wiki/Broad_Sustainable_Building

AllAmericanBreakfast's Shortform

I think the current era is a novel phenomenon.

Consider that 234 years ago, long-dead individuals wrote a statement forbidding any law "abridging the freedom of speech, or of the press".

Emotionally this sounds good, but consider: in our real universe, information is not always a net gain.  It can be hostile propaganda, or a virus designed to spread rapidly while causing harm to its hosts.

Yet in a case of 'the bug is the feature', until recently most individuals didn't really have freedom of speech.  They could say whatever they wanted, but had no practical way for extreme ideas to reach large audiences.  There was a finite network of newspapers and TV news outlets - fewer than about 10 per city, and in many cases far fewer than that.

Newspapers and television could be held liable for making certain classes of false statements, and did routinely have to pay fines.  Many of the current QAnon conspiracy theories are straight libel, and if the authors and publishers of the statements were not anonymous they would be facing civil lawsuits.

The practical reason to allow freedom of speech today is that current technology has no working method to objectively decide whether a piece of information is true, partially true, false, or hostile information intended to cause harm.  (We rely on easily biased humans to make such judgements, and this is error-prone and subject to the bias of whoever pays the humans - see Russia Today.)

I don't know what to do about this problem.  Just that it's part of the reason for the current extremism.

Technological stagnation: Why I came around

If you examine your first 5, limited AI agents, similar to the kind demonstrated for autonomous cars, are capable of lifting the limits.

Manufacturing - self-replicating robotics would drive prices through the floor

MNT - build tool-designing AI agents to crack this problem, once you make the equipment for working at this scale cheap by producing it autonomously.  Tool-designing agents seem feasible per some of OpenAI's recent results.

Construction - the same robots can build the buildings at hyperspeed; pre-fabrication with human workers is already vastly faster

Agriculture - falls to the same robotics case

Energy - presently it's governed by the need for mass solar/battery production

Transportation - building a new type of car/engine/overhead transit pod is a case of the manufacturing/robotics problem, since it's too expensive right now to try anything but what we already have.

Medicine is a harder problem.  I have a vague idea of using very advanced robotics and AI agents to build a "bottom up" understanding of biology, so that it is possible to make new interventions in living humans and know beforehand that they are going to work.

Alas, there are government/institutional throttling issues with some of these advances.  For construction, corrupt local jurisdictions can block construction of modular buildings, forcing expensive custom designs.  For energy, solar/battery systems need government support for demand management/grid backfeeding/permits.  For transportation, even if a new modality can be found (overhead maglev tracks, underground tunnels), a government has to permit the installation.

And of course medicine is the big one.  We can posit an AI agent that could design a custom edit to a single patient's genome, or even invent a new treatment in real time, while a single individual is in the process of dying.  The old model of "RCT on enough people for statistical significance and pay 1 billion dollars in fees and salaries" does not allow such rapid iteration.

Technological stagnation: Why I came around

So for the 2 hours, realize that the reason it's not zero is that there is a 'residual' human labor input where currently-shipping control systems are not robust enough to replace the human.  To summarize the problem (it's the same problem repeated everywhere): there is a near-infinite number of rare 'edge cases' that a tractor can experience.  It is not feasible to engineer current computer software for all the edge cases, so the tractor has an autopilot that handles the 90-99% or so 'main happy case' of driving the tractor, and the person onboard watching Netflix has to be ready to take over when it hits an edge case.

This is pretty much the same problem repeated for packing boxes at Amazon and all the rest - too many varied items on the shelves (the 'picking' problem).  Or for manufacturing the goods that are being shipped: robots can make the injection-molded main pieces, and can be set up by hand for commonly made goods, but there are all these little 'edge' cases where a factory worker has to do some of the steps by hand, making the human labor input more than zero.
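To make the point concrete, here is a minimal toy model (my own sketch, with made-up coverage numbers, not anything from the post) of why the labor input stays pinned at the full supervision time as long as a human has to ride along, no matter how high the autopilot's coverage gets:

```python
# Toy model: residual human labor when an autopilot covers only the "happy path".
# All numbers are illustrative assumptions, not measurements.

def residual_labor_hours(total_task_hours: float,
                         autopilot_coverage: float,
                         onboard_supervision: bool) -> float:
    """Human labor left over after automation.

    autopilot_coverage: fraction of task time the control system handles (e.g. 0.90-0.99).
    onboard_supervision: if True, a human must stay present and ready to take over,
    so supervision time equals the full task time regardless of coverage.
    """
    if onboard_supervision:
        # The person watching Netflix in the cab still counts as labor input.
        return total_task_hours
    # If takeovers could be handled remotely/asynchronously, only the edge cases remain.
    return total_task_hours * (1.0 - autopilot_coverage)

print(residual_labor_hours(2.0, 0.95, onboard_supervision=True))   # 2.0 hours
print(residual_labor_hours(2.0, 0.95, onboard_supervision=False))  # ~0.1 hours
```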

Technological stagnation: Why I came around

Note that this can be modeled pretty simply.

Evolution is slow, so for the sake of argument assume human intelligence has not changed over the time period you have mapped.  If humans haven't gotten any smarter, and each incremental step in the technology areas you mention requires increasingly sophisticated solutions and thought, then you would expect progress to slow.  A fairly obvious case of diminishing returns.

Second, computers do act to augment human intelligence, and more educated humans alive increases the available brainpower, but this too has to contend with probably exponentially increasing difficulty in certain fields.

For example, in the field I presently work in, I see huge teams of people and very sophisticated equipment used to support further improvements in microprocessors.  I think in the past the teams were much smaller, the equipment was simpler, and the gains were easier.
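A minimal numeric sketch of this argument, with purely illustrative growth rates (none of these numbers come from the post): hold research capacity to modest growth while each incremental advance gets harder, and the rate of progress falls every decade even though nobody got dumber.

```python
# Toy model of the diminishing-returns argument. Assumed, illustrative numbers only.

brainpower = 1.0          # effective research capacity, arbitrary units
difficulty = 1.0          # effort needed for the next increment of progress

for decade in range(1, 11):
    brainpower *= 1.3     # modest growth: more researchers, better tools
    difficulty *= 2.0     # assume each step is twice as hard as the last
    progress_this_decade = brainpower / difficulty
    print(f"decade {decade}: progress rate {progress_this_decade:.3f}")
# The printed rate shrinks every decade: apparent stagnation from diminishing returns.
```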

For the fields you mention, there are specific bottlenecks we could discuss in detail, but I feel that would make this post too long.  The TLDR: nuclear turns out to have long-tail risks that current human-run organizations can't manage economically; transportation is gated by energy and human reflexes; medicine is limited by a number of inefficiencies; and manufacturing has seen enormous improvements, just not in the way you think.  The rise of China has made manufactured goods of varied quality levels far more abundant than in the past, and has made them accessible to billions more people.  Your "typical home, subtract the screens" model implicitly assumes a nice home in Los Angeles in 1955 or so.  But over in China most people did not yet have running water and had minimal access to electricity.

The singularity model is simply this: for each of the fields mentioned, we can construct a toy "minimum limit case".  We don't know the limits of physics, but we do know what we could achieve if we had limitless intelligence to engineer to those limits rapidly.

Below I have listed the 'minimum limit cases'.  We don't know how far the fields can be pushed past the points I mention, but everything listed is pretty conclusively supported by evidence as being feasible.

Note that everything listed, including the medicine advances, requires supporting advances in AI to make it feasible.  We don't have that yet.  We have a lot of toy models made at a small scale, but not the integrated systems just yet.  By AI I mean limited-function agents able to choose good (but not perfect) control outputs for a robotic system, not self-amplifying superintelligences.

Medicine - we know that aging is governed by processes that accumulate negative changes over time.  Stopping/reversing these changes would be possible if we knew exactly which genes to rewrite.  At the limit case, medicine would be able to produce artificial replacements for any organ, stabilize any dying patient with high-speed AI/robotics to identify the correct intervention and apply it, and turn off the root cause of most diseases.

Transportation - a packet switched network of vacuum trains/point to point flying cars/overhead transit pods/autonomous cars

Energy - solar panels over all buildings and deserts, batteries at every electrical panel, a network of demand regulation

Agriculture - robotic farms in a sealed pod

Construction - robotics that assemble a building from pre-fabricated building blocks that come on a truck

Manufacturing - robotics that are sophisticated enough to manufacture themselves and to self-clear nearly all faults

What is the currency of the future? 5 suggestions.

Note that the reason a specific cryptocurrency, from a set of competing cryptocurrencies, gets used is an example of the network effect.  The more people use a specific cryptocurrency (bitcoin or ethereum or dogecoin or whatever), the more and better support there will be for transactions in that currency.  This means better and more reliable software (less likely to lose or corrupt your money), and more importantly, it means less volatility and more stability for carrying value from sender to receiver.  The more mega- or giga-dollars being moved using that currency, the better off you will be for your transaction.  Especially if your transaction is large or you are trying to be anonymous - either way, more traffic is better for you.

This is the network effect - using the network with more users has more utility to you.  In the cryptocurrency world this means the currency that both (1) offers all the features needed in practice and (2) has the most users is ultimately going to become dominant.
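A toy simulation of that dynamic (illustrative only; the coin names, user counts, growth rates, and the super-linear utility function are all made-up assumptions): if new adoption keeps flowing to whichever currency currently offers the most utility, the early leader ends up dominant.

```python
# Toy winner-take-most simulation of the network effect described above.

users = {"coin_a": 1_000_000, "coin_b": 900_000, "coin_c": 500_000}

def utility(n_users: int) -> float:
    # Assumption: utility scales super-linearly with users (liquidity, tooling, stability).
    return n_users ** 1.5

for year in range(10):
    best = max(users, key=lambda c: utility(users[c]))
    for coin in users:
        growth = 0.20 if coin == best else 0.02   # most new adoption flows to the leader
        users[coin] = int(users[coin] * (1 + growth))

print(users)  # the initial leader ends up with a dominant share
```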

An Exploratory Toy AI Takeoff Model

There are 2 significant issues here.  Consider a robot in physical reality with limited manipulation capability.  Even with infinite intelligence, the robot has a maximum lifetime number of manipulations it can perform to the outside world.  With a human, that's 2 arms.  What if an animal without opposable thumbs had infinite intelligence?  Then it would be capable of less.

What does infinite intelligence mean?  It means an algorithm that, given a set of inputs and a heuristic, can always find the optimal solution - the global maximum - for any problem the agent faces.

Actual intelligent agents have to settle for a compromise - a local maximum.  But a "good approximation" may often be pretty close to the global maximum if the agent is intelligent enough.  This means that if the approximation an agent uses is 80% as good as the global maximum, infinite intelligence only gains the last 20 percent.

This is the first problem here.  You have discovered a way to build a self-improving algorithm that theoretically has the capability of finding the global maximum every time for a given regression problem (it won't, but it might get close).  So what?  You still can't do any better than the best solution the information allows.  (And it may or may not make any progress on problems thought to be NP-hard, like breaking encryption.)
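A tiny optimization example (my own illustration, not from the post; the objective function is arbitrary) of the local-versus-global point: greedy search on a bumpy objective gets stuck on local maxima, a few random restarts recover most of the achievable value, and a brute-force grid search stands in for "infinite intelligence". Whatever gap remains between the two is all that unlimited optimization power could add.

```python
import math
import random

def f(x: float) -> float:
    # Bumpy objective: a broad hill plus ripples that create many local maxima.
    return math.exp(-(x - 3.0) ** 2 / 8.0) + 0.1 * math.sin(5.0 * x)

def hill_climb(x: float, step: float = 0.01, iters: int = 5_000) -> float:
    # Greedy local search: accept a small move only if it improves f.
    for _ in range(iters):
        candidate = x + random.choice([-step, step])
        if f(candidate) > f(x):
            x = candidate
    return f(x)

random.seed(0)
# A realistically bounded agent: greedy search from a handful of random starts.
approx = max(hill_climb(random.uniform(-5.0, 10.0)) for _ in range(20))
# Stand-in for "infinite intelligence": brute-force the global maximum on a fine grid.
best = max(f(i / 1000.0) for i in range(-5000, 10001))

print(f"approximate optimum: {approx:.3f}")
print(f"global optimum:      {best:.3f}")
print(f"fraction captured:   {approx / best:.1%}")
```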

Consider a real problem like camera-based facial recognition.  The reason for the remaining residual error - positive and negative misclassifications - may simply be that the real-world signal does not contain sufficient information to identify the right human out of 7 billion every time.

The second problem is your heuristic.  Today we can easily build agents that optimize for the wrong, 'sorcerer's apprentice' heuristic and go awry.  Building a heuristic that even gives an interesting agent - one with self-awareness and planning and everything else we expect - may take more than simply building a perfect algorithm to solve a single subproblem.

A concrete example of this is ImageNet.  The best-in-class algorithms solve the problem of "get the right answer on ImageNet" but not the problem we actually meant, which is "identify the real-world object in this picture".  So the best algorithms tend to overfit and cheat.

SpaceX will have massive impact in the next decade

Well, the other way to check whether I am right or wrong is to back-calculate from the rocket equation.  Instead of relying on what I say, what's the payload mass to propellant mass of the BFR?  Take the Saturn V (the rocket equation is the same for the BFR, and the BFR uses recoverable boosters and a compromise fuel (liquid CH4), so I expect it to perform slightly worse): it's 6.5 million pounds total rocket mass, roughly 85% of it propellant, delivering 261,000 lbs to LEO.  So 4% of the mass is payload, and 85/4 = 21.25 kg of propellant for every kilogram of payload.

Ok, CH4 + 2O2 = CO2 + 2H2O

1/3 of the mass is CH4, while 2/3 is O2.  That helps a lot, as liquid oxygen is cheap - only 16 cents per kilogram.  2/3 of 21.25 kg is about 14.17 kg of O2, so $2.26 for the liquid oxygen.

Well, how much does the remaining 7.08 kg of liquid methane cost?  (Note that the BFR needs purified methane and cannot use straight natural gas.)

Well, 1.14 therm = 1 gge = 5.660 lb.  So 7.08 kg = 15.61 pounds, and 15.61 pounds / 5.660 = 2.757 gge.

2.757 gge * 1.14 = 3.14 therms.  Average prices presently are about $0.92 per therm, so $2.89 for the unpurified fuel.  Then you need to purify it to pure methane (obviously with some loss of energy/gas/filter media) and liquefy it.  I am going to assume this raises the cost 50%, so $4.33 for the natural gas.  Total fuel cost per kg of payload is $4.33 + $2.26 = $6.59.

$10 a kg for payload to LEO, including the rocket, seems rather optimistic.  Remember the rocket needs repair and will occasionally blow up.  For helicopters and other much lower-energy terrestrial machines, maintenance + repairs often cost as much as or more than the fuel.  I would expect the real minimum cost per kg to be at least 3 times the cost of fuel: 2 units of repair/replacing exploded rocket parts for every 1 unit spent on fuel.  That's $19.78 per kg, which would be a phenomenal result compared to today's $2720 a kg (using SpaceX now), and just half as good as Elon Musk's promise.
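For anyone who wants to check the arithmetic, here is the same back-of-the-envelope calculation written out.  Every input (the 21.25:1 propellant ratio, the 1/3-2/3 mass split, the prices, the 50% purification markup, and the 3x maintenance multiplier) is an assumption from this comment, not a measured SpaceX figure.

```python
# Back-of-the-envelope fuel and floor cost per kg of payload, per the comment above.

propellant_per_kg_payload = 21.25          # kg propellant per kg payload (Saturn V-like ratio)
lox_fraction, ch4_fraction = 2 / 3, 1 / 3  # mass split assumed in the comment

lox_kg = propellant_per_kg_payload * lox_fraction
ch4_kg = propellant_per_kg_payload * ch4_fraction

lox_cost = lox_kg * 0.16                   # ~$0.16 per kg liquid oxygen

ch4_lb = ch4_kg * 2.20462
gge = ch4_lb / 5.660                       # gasoline-gallon equivalents
therms = gge * 1.14
ch4_cost = therms * 0.92 * 1.5             # $0.92/therm, +50% for purification/liquefaction

fuel_cost = lox_cost + ch4_cost            # ~ $6.6 per kg of payload
total_cost = fuel_cost * 3                 # fuel + 2 parts maintenance/attrition, ~ $19.8/kg

print(f"fuel cost  ~ ${fuel_cost:.2f} per kg payload")
print(f"floor cost ~ ${total_cost:.2f} per kg payload")
```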

Hard laws of nature here.  I want to go to space as well, but it takes a literal swimming pool of fuel under you to do it, and while SpaceX has made some impressive advances, they don't change the basic parameters of the problem.

SpaceX will have massive impact in the next decade

In the rocket industry, the 'payload' is the piece that reaches orbit - that is how it is defined.  You technically can occupy the entire upper portion of a Dragon spacecraft (the entire section above the second stage, inside the fairing) with your mega-satellite.  That entire satellite is 'payload', and it is the source of the 'payload to LEO/geostationary orbit' figure that gets quoted as the capability of the spacecraft.

You have to assume that the "$10" figure is the lowest number possible, which means Musk is accounting for the entire payload.

Anti-Aging: State of the Art

Regarding cryonics not working: this depends on your definition of 'working'.  Let me describe the problem succinctly.

Assume at some future date you can build a 'brain box'.  This is a machine, using some combination of hardware and dedicated circuitry, that is capable of modeling any human brain that nature could build.  It likely does this by simulating each synapse as a floating voltage, modulated by various coefficients (floating point weights) when an incoming pulse arrives.  
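A cartoon of the kind of update loop described here (my own sketch, not anything a real connectome simulator would use; the weights, leak factor, and threshold are arbitrary): each incoming pulse nudges a neuron's floating voltage by that synapse's weight, and the neuron fires when the voltage crosses a threshold.

```python
# Toy "brain box" inner loop: one neuron driven by three weighted synapses.

weights = [0.3, -0.1, 0.45]    # hypothetical synaptic weights onto one neuron
voltage = 0.0
THRESHOLD, LEAK = 1.0, 0.9

incoming_pulses = [            # which input synapses fire at each time step
    [1, 0, 1],
    [0, 0, 1],
    [1, 1, 1],
]

for t, pulses in enumerate(incoming_pulses):
    voltage = voltage * LEAK + sum(w * p for w, p in zip(weights, pulses))
    fired = voltage >= THRESHOLD
    if fired:
        voltage = 0.0          # reset after a spike
    print(f"t={t} voltage={voltage:.2f} fired={fired}")
```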

Well, you can choose the weights randomly, and assuming you also attach a simulated or robotic human body (a body with sufficient fidelity) and train the robot or simulated body in an appropriate environment, the 'being' inside the box will eventually achieve sentience and develop the skills humans are capable of developing.

But you don't have to choose the weights at random.  If you obtain just 1 bit of information from a frozen brain sample, you can use that bit to bias your random rolls, reducing the possibility space from "any brain possible within the laws of nature" to "a subset of that space".

If you have an entire frozen brain, with whatever damage cryonics has done to it, and you slice and scan it with electron microscopes, you get a lot more bits than just 1.  You will be able to instantiate a brain that has at least some of the characteristics of the original.  Will it have clear and coherent memories (as coherent as human memories ever are...)?  That depends on the quality of the sample, obviously.
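A toy sketch of the "every recovered bit shrinks the possibility space" idea (my own illustration; the 16-weight binary "brain" is an absurdly small stand-in for a connectome): reject random candidate weight settings that contradict the recovered bits, and each additional bit roughly halves how many candidates survive.

```python
import random

N_WEIGHTS = 16                       # tiny stand-in for a connectome's weights

def random_brain() -> list[int]:
    # Candidate "brain": here just a vector of binary weights for simplicity.
    return [random.randint(0, 1) for _ in range(N_WEIGHTS)]

def consistent(candidate: list[int], recovered_bits: dict[int, int]) -> bool:
    # A candidate is kept only if it agrees with every bit recovered from the scan.
    return all(candidate[i] == v for i, v in recovered_bits.items())

random.seed(0)
for n_recovered in (0, 1, 4, 8):
    recovered = {i: random.randint(0, 1) for i in range(n_recovered)}
    hits = sum(consistent(random_brain(), recovered) for _ in range(100_000))
    print(f"{n_recovered:2d} recovered bits -> {hits}/100000 random candidates remain consistent")
# Each additional recovered bit roughly halves the surviving fraction: more recovered
# information means the instantiated brain is closer to the original.
```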

But regardless of damage, you can bring each cryonics patient 'back', limited by the remaining information.  This is actually no different from caring for a patient with a neurodegenerative disease, except that the brain box will not have any flaws in its circuitry, and once instantiated, the being occupying it will be able to redevelop any skills and abilities they are missing.

Now, yes, trying to 'repair' a once-living brain to live again as a meat-system is probably unrealistic without technology whose boundaries we cannot really describe.  (As in, we can posit that the laws of physics let you do this if you could make nanoscale waldos and put all the pieces back together again, but we can't really say with any confidence how feasible this is.)
