Gerald Monroe

Comments

A Brief Review of Current and Near-Future Methods of Genetic Engineering

My support for the last paragraph is that many of the things we credit "exceptionally smart" people with doing, like solving equations, can be automated. So can exploring function spaces for a better solution. So can, really, any problem that has a checkable answer, which is exactly the kind of thing IQ tests measure.

It's not on an IQ test to imagine a better aircraft, a task that is both creative and must meet design specs. IQ tests are always problems for which a clear answer exists.
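A toy sketch of what I mean (the objective and all names here are my own invention): when the answer is checkable, even dumb random search grinds toward it with no "genius" involved.

```python
import random

def checkable_score(candidate):
    # Stand-in for any problem with a checkable answer: how well do
    # coefficients (a, b, c) make a*x^2 + b*x + c fit y = x^2?
    return -sum((candidate[0] * x * x + candidate[1] * x + candidate[2] - x * x) ** 2
                for x in range(-10, 11))

def random_search(iterations=10_000):
    # Dumb, fully automatable exploration of a function space:
    # propose a candidate, check it, keep the best so far.
    best, best_score = [0.0, 0.0, 0.0], float("-inf")
    for _ in range(iterations):
        candidate = [random.uniform(-2, 2) for _ in range(3)]
        score = checkable_score(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

print(random_search())  # drifts toward (1, 0, 0) with no insight required
```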

Anyways, in my personal experience I have met a lot of "brittle" people. They have no internal visualization of how a machine actually works, and they get stuck the moment they hit a problem that wasn't in a training exercise at school. Basic ideas just don't occur to them.

But yeah, if you put me up against them on rigidly defined problems taught in a book, I might be slightly slower.

Note that I personally test at around the 80th-97th percentile depending on the test (the MCAT was 97th). This tells me that whatever intelligence I have lucked into having is substantially above average but not the best.

I am saying an army of people only as good as me - top quintile - can and will create TAI decades before genetic engineering will matter.

A Brief Review of Current and Near-Future Methods of Genetic Engineering

There's a hole in the assumptions of your last paragraph. Implicitly you are saying that TAI will require, or at least benefit from, the actions of a few 'super-genius' human beings to become possible.

There are some flaws in your statements to unpack:

a.  The existence of human 'super geniuses'. Nature can only do so much to improve our intelligence, being stuck with living cells as computational circuits, in a finite brain volume, with a finite energy supply. It isn't clear how meaningful the intelligence differences really are in terms of utility on actual tasks.

b.  The assumption that the kinds of tasks intelligence testing can measure are relevant to the task of designing a TAI. Thing is, the road to get there isn't going to involve a whole lot of someone solving math problems in their head as they pound a keyboard through the night writing reams of custom code. A whole lot of it will be careful, methodical organization of your problem into clear layers, with carefully checked assumptions to prevent math leaks. (A math leak is where the heuristic being optimized is slightly incorrect, leading the system to build a suboptimal solution. I think of it as 'leaking' the delta between the incorrect approximation and the correct one; see the sketch after this list.) A lot of the "keyboard pounding" can be automated by building early bootstrap agents that find for us a near-optimal algorithm for a given piece of the AI problem. Moreover, most code should be reused so we don't have humans re-solving the same problems over and over.

c.  A lot of the pieces needed to get there from here are probably organizational. You need thousands of people, some way to standardize everyone's efforts, and APIs and frameworks and other mechanisms to gain benefit from all these separate workers. A single person is not going to meaningfully solve this problem by themselves. You'll very likely need an immense framework of support software, and some method of iteratively improving it over time without significant regression (the failure mode of most large software projects).
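To illustrate the 'math leak' in (b) with a toy (all names and numbers here are mine, not anything from your post): optimize a heuristic that is slightly wrong relative to the true objective, and you quietly pay the delta between them.

```python
import random

def true_objective(x):
    # What we actually want maximized; optimum at x = 3.0.
    return -(x - 3.0) ** 2

def leaky_heuristic(x):
    # Slightly incorrect proxy: its optimum has drifted to x = 3.5.
    return -(x - 3.5) ** 2

def optimize(objective, iterations=100_000):
    # Crude random-search optimizer standing in for a bootstrap agent.
    best_x = 0.0
    for _ in range(iterations):
        candidate = random.uniform(-10.0, 10.0)
        if objective(candidate) > objective(best_x):
            best_x = candidate
    return best_x

x_leaky = optimize(leaky_heuristic)
x_true = optimize(true_objective)
# The leak: performance lost on the objective we actually cared about,
# roughly -(3.5 - 3.0)^2 = -0.25.
print(true_objective(x_leaky) - true_objective(x_true))
```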

If a-c have a 90% chance of being correct, then the actual probability would be 0.1 * 0.25, or 2.5%, and probably not worth the hassle. Note that there is a cost: the medical procedures to create genetically modified embryos carry risks of screwing something up, giving you humans who are doomed to die in some horrific way.

Just as a general policy: anything current flesh-and-blood humans are having trouble with, that smarter humans have less trouble with, current humans can probably write a piece of software for that is better than the efforts of any human. With today's techniques.

Specializing in Problems We Don't Understand

So intentional problems would be markets, where noise is being injected and any clear pattern is drained dry by automated systems, preventing you from converging to a model. Or public/private key encryption, where you aren't supposed to be able to solve it? (But possibly you can.)

Specializing in Problems We Don't Understand

building fusion power plants, treating and preventing cancer, high-temperature superconductors, programmable contracts, genetic engineering, fluctuations in the value of money, biological and artificial neural networks.

vs

building bridges and skyscrapers, treating and preventing infections, satellites and GPS, cars and ships, oil wells and gas pipelines and power plants, cell networks and databases and websites.


Note that there is a way to split these sets into "problems where we can easily perform experiments, both real and simulated" and "problems where experimentation is extremely expensive and sometimes unethical".

Perhaps the element making these problems less tractable is that we cannot easily obtain a lot of good-quality information about the problem itself.

Fusion: you need giga-dollars to actually tinker with the plasmas at the scale where you would get net power.

Cancer: you can easily find a way to kill cancer in a lab or a lab rat, but there are no functioning mockups of human bodies (yet) to try your approach on. There are also government barriers that create shortages of workers and slow down any trial of new ideas.

HTSC: the physical models predict these poorly, and it is not certain a solution even exists at STP.

Programmable contracts: easy to write, but difficult to prove impervious to assault.

Genetic engineering: easy to do on small scales, difficult to do on complex creatures like humans, due to the same barriers behind cancer treatment.

Money fluctuations: there are hostile and irrational agents blocking you from learning clean information about how it works, so your model will be confused by the noise they are injecting [in real economies].

Biological NNs have the information barrier; artificial NNs seem tractable, they are just new.


How is this relevant? Well, to me it sounds like even if we invent a high-end AGI, it will still be throttled on solving these problems until the right robotics/mockups are made for the AGI to get the information it needs to solve them.

The AGI will not be able to formulate a solution merely by reading human writings and journals on these subjects; we will need to authorize it to build thousands of robotic research systems, where it then generates its own experiments to fill in the gaps in our knowledge and to learn enough to solve them.

Solving the whole AGI control problem, version 0.0001

I think you are missing something critical.

What do we need AGI for that mere 2021 narrow agents can't do?

The top item we need is a system that can keep us biologically and mentally alive as long as possible.

Such an AGI is constrained by time and will constantly be in situations where all choices cause some harm to a person.

Solving the whole AGI control problem, version 0.0001

One comment: for a realtime control system, the trolley problem isn't even an ethical dilemma.

At design time, you made your system select min[expected_harm(option)] over the possible options.

In the real world, harm done is never zero. For a system calculating the risks of each path taken, every possible path has a nonzero amount of possible harm.

And every timestep [generally 30-1000 times a second] the system must output a decision. "Leaving the lever alone" is also a decision, and there is no reason to privilege it over "flipping it".

So a properly engineered system will, the instant it is able to observe the facts of the trolley problem (and maybe several frames later for filtering reasons), switch to the path with a single person tied to the tracks.
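A minimal sketch of that decision rule (the harm numbers and names are hypothetical, not any real control stack):

```python
def expected_harm(option, world_state):
    # Hypothetical harm model; in a real system this would come from
    # the planner's risk estimates for each candidate path.
    return world_state[option]

def control_step(world_state, options):
    # Runs every timestep. "Do nothing" is just another option,
    # with no privileged status over any other.
    return min(options, key=lambda option: expected_harm(option, world_state))

# Trolley frame: staying on the current track harms 5, switching harms 1.
world_state = {"leave_lever_alone": 5, "flip_lever": 1}
print(control_step(world_state, list(world_state)))  # "flip_lever"
```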

It has no sense of empathy or guilt, and for the programmers looking at the decision later, well, it worked as intended.

Stopping the system when this happens has the consequence of killing everyone on the other track; it is incorrect behavior and a bug you need to fix.

Air Quality and Cognition

Do you see a single study listed where the experiment design was to put the subject in a room full of visible pollutant particles and have them take an exam?  I don't.  

I'm kind of disappointed in how un-robust human bodies are, assuming the above general trends are true, but it is what it is.

Get yourself an air purifier, then, one with measurably good performance: https://www.nytimes.com/wirecutter/reviews/best-air-purifier/

Evidence appears to be clearly in favor of doing it.  

Is there any plausible mechanisms for why taking an mRNA vaccine might be undesirable for a young healthy adult?

The converse of that is that 225 million doses have been given and the serious negative effect rate is extremely low.  It's improbable that merely another doubling of time and doses will reveal any new information.  

If there is some new way this method causes the human body to fail, it won't be found for years.

Conversely, there's still the risk of Covid, and isolation has holes. The biggest one being that you might get sick and have to seek medical treatment, and hospital-acquired infections are estimated to happen 1.7 million times a year. And while being young your odds are good, there are illness 'stacks' where Covid would kill you (some respiratory or autoimmune illness at the same time as Covid, etc.).

Another (outer) alignment failure story

I like this story.  Here's what I think is incorrect:

I don't think, from the perspective of humans monitoring a single ML system running a concrete, quantifiable process (industry or mining or machine design), that it will be unexplainable. Just like today: tech stacks are already enormously complex, but at each layer someone does know how they work, and we know what they do at the layers that matter. Ever more complex designs for, say, a mining robot might start to resemble some mix of living creatures and fractal artwork, but we'll still have reports that measure how much performance the design gives per cost.

And systems that "lie to us" are a risk, but not an inevitability, in that careful engineering, auditing systems whose goal is finding true discrepancies, etc., might become a thing.

  Here's the part that's correct:

I was personally a little late to the smartphone party, so it felt like overnight everyone had QR codes plastered everywhere and was playing on their phone in bed. Most products' adoption is a lot slower, for reasons of cost (especially up-front cost) and the speed of making whatever the new idea is.

Self-replicating robots that, in vast swarms, can make any product whose build process is sufficiently well defined would change all that. New cities could be built in a matter of months by enormous swarms of robotics installing prefabricated components from elsewhere. Newer designs of cars, clothes, furniture: far fewer limits.

ML systems that can find a predicted optimal design and send it for physical prototyping, so its design parameters can be checked, are another way to get rid of some of the bottlenecks behind a new technology. Another is that the 'early access' version might still have problems, but the financial model will probably be rental, not purchase.

This sounds worse, but the upside is that rental takes away a barrier to adoption. You don't need to come up with $XXX for the latest gadget; just make the first payment and you have it. The manufacturer doesn't need to force you into a contract either, because their cost to recycle the gadget if you don't want it is low.

Anyways, the combination of all these factors would create a world of, well, future shock. But it's not "the machines" doing this to humans; it would be a horde of separate groups of mainly humans doing this to each other. It's also quite possible this kind of technology will, for some areas, negate some of the advantages of large corporations, in that many types of products will be creatable without needing the support of a large institution.

Which counterfactuals should an AI follow?

Why not define a subagent and deliver to that subagent a list of "whitelisted" observations? These would be all the nodes the judge allowed, plus your "life experiences and observations" set, excluding anything from during the trial or any personal experience with the case.

As an AI you can actually do this and solve the problem as instructed. Humans cannot.
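A sketch of the subagent idea (toy names, not any real architecture; the verdict logic is a placeholder):

```python
def decide(observations):
    # Placeholder verdict logic that only ever sees admissible evidence.
    return "guilty" if observations.get("fingerprint_match") else "not guilty"

def make_subagent(whitelist):
    # Returns a judge-facing subagent that can only condition on
    # whitelisted observation nodes; everything else is filtered out
    # before the reasoning step can see it.
    def subagent(all_observations):
        admissible = {k: v for k, v in all_observations.items() if k in whitelist}
        return decide(admissible)
    return subagent

juror = make_subagent(whitelist={"fingerprint_match", "alibi_testimony"})
# The prejudicial observation exists, but it can never reach the verdict logic.
print(juror({"fingerprint_match": False, "defendant_resembles_gang_member": True}))
```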

As a human, well. Yes a major problem is that your very perception of other portions of the proceedings is going to be affected by this observation you have been told to ignore. You may now "perceive" many little things that convince you RM is a gang member.

The only way to solve this problem as a human is to be explicit. "Reasonable doubt" means constructing a series of nodes that each have a probability above some threshold (maybe 10 percent? the law doesn't say) and that together result in the defendant being innocent.

There only needs to exist one causal chain that explains all the evidence ('there was noise in the sample' is fine if you can't explain a few low-magnitude observations); it doesn't need to be the most probable explanation.

So a fair jury would write down these nodes on something. For example, if an eyewitness says they saw the defendant do it, the node has to be (p of lying or mistaken). If that probability is so small as to be "unreasonable", you are done: no reasonable doubt exists and you can issue a verdict.
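Formally, something like this (the threshold and node probabilities are invented for illustration):

```python
REASONABLE = 0.10  # per-node threshold; the law doesn't actually define one

def has_reasonable_doubt(node_probabilities):
    # A single innocent-explaining causal chain supports reasonable doubt
    # only if every node in it clears the threshold.
    return all(p >= REASONABLE for p in node_probabilities)

# Candidate innocent chain: the eyewitness lied or was mistaken (p = 0.30),
# and the forensic sample was contaminated (p = 0.05).
print(has_reasonable_doubt([0.30, 0.05]))  # False: the 0.05 node is "unreasonable"
```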

This kind of explicit reasoning isn't explained to jurors, the average person will not be able to do it, "unreasonable" isn't defined, and arguably the above standard fails to actually give "justice". But as far as I can tell, this is a formal way to represent what the courts expect.
