One thing I have never understood about the internet sphere labelled "rationalists" (LW, OB, SSC, etc.) is its set of seemingly strong beliefs about the future and/or about reality, the main one being around "AI".

Even more so, I have never understood why people believe that thinking about certain problems (e.g. AI Alignment) is a more efficient way to solve them than chance, given no evidence that it is (and no potential evidence, since the problems lie in the future).

I've come to believe that I (and I'm sure many other people) differ from the mainstream (around these parts, that is) in a belief I can best outline as:

"Reason" may not be a determining factor in achiving agency over the material world, but rather, the limiting factor might be resources (inlcuding e.g. the resources needed to faciliatate physical labour, or to faciliate the power supply of a super-computer). What is interpreted as "reason causing an expoential jump in technology", could and should be interpreted as random luck of experimenting in the right direction, but in hindsight we rationalize it by saying the people exploring that direction "were smarter". More importantly, science and theoretical models are linked to technological inovation less than people think in the first place (see most of post 19th century physics/chemistry, including things like general relativity, not being required for most technological applications, including those credited to physics science)

I've considered writing an article aimed solely at the LW/SSC crowd trying to defend something like the above proposition with historical evidence, but the few times I tried, it was rather tedious. I still want to do so at some point, but I'm curious whether anyone has written this sort of article before: essentially something that boils down to "a defence of a mostly-sceptical take on the world which can easily be digested by someone from the rationalist-blogosphere demographic".

I understand this probably sounds insane to the point of trolling to people here, but please keep an open mind, or at least grant me that I'm not trolling. The position outlined above would be fairly close to what an empiricist or skeptic would hold; a lightweight version, even, since a skeptic might be skeptical of our being able to gain more knowledge/agency over the outside world in the first place, at least in a non-random way.


7 Answers

Razied

Jan 28, 2021

100

You can send DNA sequences to businesses right now that will manufacture the proteins that those sequences encode. Literally the only thing standing between you and nanotechnology is a good enough theory of proteins and their functions. Developing a good theory of proteins seems pretty much a pure-Reason problem. 

You can make money by simply choosing a good product on Alibaba, making a website that appeals to people, using good marketing tactics and drop-shipping; no need for any physical interaction. The only thing you need is a good theory of consumer psychology. That seems like an almost-pure-Reason problem. 

It seems completely obvious to me that reason is by far the dominant bottleneck in obtaining control over the material world.

You can send DNA sequences to businesses right now that will manufacture the proteins that those sequences encode

Have you ever tried this? I have; it comes with loads of *s.

Developing a good theory of proteins seems pretty much a pure-Reason problem

Under the assumption that we know all there is to know about proteins, a claim I've seen nobody make. Current knowledge is limited and largely in vitro, and doesn't generalize to "weird" families of proteins.

"Protein-based nanotechnology" requires:

  • weird proteins with properties not encountered yet
  • complex in-vivo
... (read more)
4Razied3y
The situation where Reason stops being useful is when you already make optimal Bayesian use of sensory information; in that situation, yeah, additional experiments are required to make progress. However, that is a monstrously high bar to pass. We already know that quantum mechanics governs everything about protein behavior in principle; if you gave a million motivated super-Einsteins 1000 years to think, do you seriously believe that they could not produce a theory of weird proteins never encountered before? I also think we mean slightly different things by "bottlenecked by reason". What I mean is something like "given a problem and your current resources, there exists an amount N of Reasoning ability that will make you able to solve the problem, for most problems we have today in the developed world". The amount required for specific problems might be very large, and small increases below that might not overwhelm the noise and randomness of the world, so I don't find it surprising that intelligent people have not completely taken over the world.

DTX

Jan 27, 2021

50

I can't point to any single good canonical example, but this definitely comes up from time to time in comment threads. There's the whole issue that computers can't act in the world at all unless they're physically connected to hardware controllers that can interface with some physical system we actually care about being broken or misused. Usually, the workaround there is that the AI will be so persuasive that it can just get people with bodies to do the dirty work that requires being able to actually touch stuff in order to repurpose manufacturing plants or whatever it is we're worried it might do.

That does seem to leave a missing step in there somewhere. I don't think the bottleneck right now to building out a terrorist organization is that the recruiters aren't smart enough, but AI-threat arguments tend to just use "intelligence" as a shorthand for being good at literally anything.

Strangely enough, actual AI doomsday fiction doesn't seem to do this. Usually, the rogue AI directly controls military hardware to begin with, or, in a case like Ex Machina, Ava is able to manipulate people at least in part because she is able to convincingly take the form of an attractive embodied woman. A sufficiently advanced AI could presumably figure out that being an attractive woman helps, but if the technology to create convincing artificial bodies doesn't exist, you can't use it. This tends to get handwaved away by assuming sufficiently advanced AI can invent whatever nonexistent technology it needs from scratch.

You don't need to be very persuasive to get people to take action in the real world. 

Especially right now, a lot of people work from home, take their orders from a computer, and trust it to give them good orders.

1DTX3y
Although this is probably true in general, it degrades when trying to get people to do something extremely high-cost like destroying all of humanity. You either need to be very persuasive or trick them about the cost. It's hard to get people to join ISIS knowing they're joining ISIS. It's a lot easier to get them to click on ransomware that can be used to fund ISIS.
2ChristianKl3y
You don't need to tell people "destroy all of humanity" to establish a dictatorship where the AGI is in control of everything and it becomes effectively impossible for individual humans to challenge AGI power.
1DTX3y
Helping someone establish a dictatorship is still a high cost action that I think requires being more persuasive than convincing someone to do their job without decisively proving you're actually their boss. 
4Dustin3y
I think the idea is that the AI doesn't say "help me establish a dictatorship".  The AI says "I did this one weird trick and made a million dollars, you should try it too!" but surprise, the weird trick is step 1 of 100 to establish The AI World Order.
6ChristianKl3y
Or it says: "Policing is very biased against Black people. There should be an impartial AI judge that's unbiased, so that there aren't biased court judgements against Black People" Or it says: "There's child porn on the computer of person X" [person X being a person that challenges the power of the AI and the AI puts it there]" Or it says: "We give pay you a good $1,000,000 salary to vote in the board the way we want to convert the top levels of the hierachy of the company into being AGI directed" And it does 100,000s of those things in parallel. 
2ChristianKl3y
There's no reason why the AGI can't decisively prove it is the boss. For big corporations, being in control of the stock means being the boss who makes the decisions at the top. A police bureau that switches to using software that tells them where to patrol to be better at catching crime doesn't think it is establishing a dictatorship either. The idea that an AGI wants to establish a dictatorship can easily be labeled an irrational conspiracy theory. 

There’s the whole issue that computers can’t act in the world at all unless they’re physically connected to hardware controllers that can interface with some physical system we actually care about being broken or misused. Usually, the workaround there is that the AI will be so persuasive that it can just get people with bodies to do the dirty work that requires being able to actually touch stuff in order to repurpose manufacturing plants or whatever it is we’re worried it might do.

In those cases it probably wouldn't be very hard to get people to act in the w... (read more)

1DTX3y
The distinction in this specific case here is between intelligence and persuasiveness. To the extent that some elements of persuasiveness are inherently embodied, as in people are more likely to trust you if you're also a person, that is at best orthogonal to intelligence. More generally, "effectiveness" as some general-purpose quality of agents that can do things is limited by the ability to acquire and process information, but also by the ability to act on it. You may know that being tall makes you more likely to be elected to office, but if you can't make yourself any taller, you can't use the information to make your campaign more likely to succeed. As a more fantastical but maybe more relevant example, people often mention something like turning the moon into computronium. Part of doing that is knowing how to do it. But we already know how to do it. We understand, at the level of fusion and fission, how to transmute elements into different elements, and we understand, given some elements that act as semiconductors, how to produce general-purpose computational processors. The actual reason we can't do it, aside from not wanting to disrupt the earth's orbit and potentially end human civilization, is (1) there is inherent propagation delay in moving material from wherever it is created to wherever it needs to be used, and this delay is much greater when the distances to move are greater than planet-scale, (2) machines that can actually transmute rocks to silicon don't presently exist and there is non-zero manufacturing delay in creating them, and (3) we have no means of harnessing sufficient energy to actually transmute matter at the necessary scale. Can gaining more information solve these problems? Maybe. There might exist unknown physics that enables easier or faster methods than we presently know of, but there is non-zero propagation delay in the creation of new knowledge of physics as well. You have to conduct experiments. At high-energy, sub-particle scale, thes

I don't necessarily think you have to take the "AI" example for the point to make sense though.

I think "reasoning your way to a distant inference", as a human, is probably a far less controversial example that could be used here. In that most people here seem to assume there are ways to make distant inferences (e.g. about the capabilities of computers in the far off future), which historically seems fairly far fetched, it almost never happens when it does it is celebrated, but the success rate seems fairly small and there doesn't seem to be a clear formula for it that works.

Dirichlet-to-Neumann

Jan 27, 2021

50

I've always thought the same thing regarding a couple of claims that are well accepted around here, like galactic-scale space travel and never-ending growth. I'm not sure enough of my knowledge of physics to try to write a big post about it, but I'd be interested if someone did (or I may want to work with someone on it).

 

[EDITED to replace "time" by "space" in "galactic-scale space travel". I guess there is a Freudian explanation for this kind of lapse, which is certainly either funny or true.]

I don't see what you mean when you say galactic-scale time travel is a well-accepted claim here. I've never heard people talking about it as if it were something that obviously works (since, if I understand what you mean, it doesn't, unless it's just referring to simple relativistic effects, in which case it's trivial).

While something approximating never-ending growth may be a common assumption, I'm not sure what percentage of people here believe in genuinely unlimited growth (that never, at any point stops), and growth that goes on for a very long ex... (read more)

1Dirichlet-to-Neumann3y
I don't know why "time" somehow entered my comment; I was thinking about galactic-scale SPACE travel. The second part of your comment illustrates this corrected point: "consider the power output and abundance of all the stars in the reachable universe". You assume here that the reachable universe is more than just the Solar System. I think this claim is debatable at best in its weakest version (i.e. we will establish colonies in some other stellar systems), and very unlikely in the stronger version that you seem to accept (we will establish a lot of colonies in many different systems that will have significant economic interactions in both directions between stellar systems). Concerning the second part of your comment, I tend to think our resource and energy consumption has good chances of dooming us before we get a chance to "escape" beyond the Solar System. I am also sceptical of anything that sounds like a Dyson sphere...
3[anonymous]3y
Concerning the second part of your comment, I tend to think our resource and energy consumption has good chances of dooming us before we get a chance to "escape" beyond the Solar System. I am also sceptical of anything that sounds like a Dyson sphere... The "has good chances of dooming us" unfortunately isn't a good sign that you have thought a lot about the problem. What resource and energy consumption are you thinking of, and why specifically do you believe it means 'doom'? Just taking a top-level view: a. Most of the earth's surface and underwater have not yet been exploited for minerals (underwater isn't cost-effective, deep enough mines are not cost-effective, entire continents are too cold, Siberia has vast wealth but is too cold, and so on). "Not cost-effective" doesn't mean it's impractical or that mining companies wouldn't develop the technology to do it once it's needed; it means that there are easier, competing sources for minerals that have to be exhausted first, however long that takes. b. Energy is abundant; the squabble right now is that fossil fuels are cheaper if their externalities are ignored. If fossil fuels had their externalities priced in, we would already be using solar/wind/nuclear in whatever combination is most efficient. c. On the timescales that matter, resources are inexhaustible*. There are hundreds of millions of billions of years of sunlight remaining, and every item "consumed" by a human is heaped mainly into landfills, where all of the elements remain; it is simply a matter of energy (and better robotics) to recover them. d. We do have a major problem with greenhouse gases. But this problem isn't an "extinction of humanity" level problem; it is a "major real estate markdown and possibly mass destruction and death in equatorial regions" problem. There are colder areas of the planet that would become habitable in the worst warming scenario, or even more extreme measures could be taken to ke
1MikkW3y
Why are you sceptical of "anything that sounds like a Dyson sphere"? It's not particularly unrealistic given modern technology (i.e. rockets and solar panels); the only pain points are a) making use of the energy collected, b) getting the materials to make it, and c) getting the panels in place (which will require an upfront investment of energy). Regarding using the energy produced, it would be inefficient to try to transport the energy back to Earth (though if costs went down significantly, it could still be justified), but using solar satellites for either computation or a permanent off-Earth colony would be justified; particularly with computation, this could allow us to redirect on-Earth sources of energy to other uses, or reduce overall Earthside consumption of energy. Regarding materials, there's a lot of material on Earth and in other places in the solar system; at worst we can mine asteroids, but I'm not sure that'd even be necessary. A Dyson sphere doesn't need to be built all at once. Once it becomes feasible to launch solar computers into space and make a profit selling computing time, the sector will naturally grow exponentially. Now, it may or may not be bounded by some ceiling of demand, but even if only 1/100th or 1/1,000th of the sun's output gets captured, that would represent a huge change in how things work.
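
To put very rough numbers on that last fraction (a back-of-envelope sketch of my own; the solar-luminosity and world-power figures are standard order-of-magnitude estimates, not taken from the thread):

```python
# Back-of-envelope: a small captured fraction of solar output vs. humanity's power use.
SOLAR_LUMINOSITY_W = 3.8e26   # total power output of the Sun, in watts (rough estimate)
WORLD_POWER_USE_W = 2e13      # rough current human primary power use, ~20 TW

for fraction in (1e-2, 1e-3):
    captured = fraction * SOLAR_LUMINOSITY_W
    ratio = captured / WORLD_POWER_USE_W
    print(f"capturing {fraction:.1%} of solar output ≈ {captured:.1e} W "
          f"(≈ {ratio:.0e} × current human power use)")
```

Even the 1/1,000th case comes out around ten billion times current human power use, which is the sense in which "a huge change in how things work" seems fair.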
2Matt Goldenberg3y
Do we know of materials that could make a good dyson sphere?
3MikkW3y
A Dyson sphere wouldn't be much different from a big cloud of modern satellites, perhaps with bigger solar panels, but the materials would be the same.
3habryka3y
You don't need strong materials for a Dyson sphere. You basically just put solar panels into low orbit until you've captured all of the outgoing light (or any appreciable fraction of it; you just do it until you have the energy you need).
2gilch3y
You might be confusing "Dyson sphere" with the Dyson shells from science fiction, which are a more specific type of Dyson sphere. You don't need "scrith" or "neutronium" to make a Dyson sphere out of satellites (a Dyson swarm), which is the more realistic type that Dyson originally proposed, or out of statites (a Dyson bubble).

This, I assume, you'd base on a "hasn't happened before, no other animal or thing similar to us is doing it as far as we know, so it's improbable we will be able to do it" type assumption? Or something different?

claims that are well accepted around here, like galactic-scale space travel and never-ending growth.

I don't think anyone is claiming that never-ending growth is possible, even if measured in Utility rather than Mass/Energy. Well, technically you have "never-ending growth" if you asymptotically approach the Limit.
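
A minimal illustration of that last sentence (my own toy example, not from the comment): take utility

$$U(t) = L\left(1 - e^{-kt}\right), \qquad k > 0.$$

This is strictly increasing for every $t$, so the growth literally never ends, yet $U(t)$ always stays below the limit $L$; "never-ending growth" and "bounded by a limit" are compatible.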

As for galactic-scale space travel, that is perfectly possible.

ChristianKl

Jan 27, 2021

40

Even more so, I have never understood why people believe that thinking about certain problems (e.g. AI Alignment) is a more efficient way to solve them than chance, given no evidence that it is (and no potential evidence, since the problems lie in the future).

The point of focusing on AI Alignment isn't that it's an efficient way to discover new technology but that it's a way that makes it less likely that humanity will develop technology that destroys humanity. 

A trade that makes us develop technology slower but increases the chances that humanity survives is worth it. 

The point of focusing on AI Alignment isn't that it's an efficient way to discover new technology but that it's a way that makes it less likely that humanity will develop technology that destroys humanity. 

 

Is "proper alignment" not a feature of an AI system, i.e. something that has to be /invented/discovered/built/?

This sounds like semantics vis-à-vis the potential stance I was referring to above. 

2ChristianKl3y
It is a feature of the AI system but it's very important to first discover proper alignment before discovering AGI. If you randomly go about making discoveries it's more likely that you end up discovering AGI and ending humanity before discovering proper alignment.

Jan 27, 2021

40

Not only do I agree with you, but I think a pretty compelling argument can be made.

The insight came to me when I was observing my pet's behaviors.  I realized that you might be able to make them 'smarter', but because the animal still has finite I/O - it has no thumbs, it cannot speak, it doesn't have the right kind of eyes for tool manipulation - it wouldn't get much benefit.

This led to a general realization. The animal has a finite set of actions it can make each timestep (finite control channel outputs). It needs to choose, from the set of all the actions it can take, one that will result in meeting the animal's goals.

Like any real control system, the actual actions taken are suboptimal.  When the animal jumps when startled, the direction it bounds may not always be the perfect one.  It may not use the best food-gathering behavior.

But if you could cram a bigger brain in and search more deeply for a better action, the gain might be very small.  An action that is 95% as good as the best action means that the better brain only gains you 5%.
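
A toy numerical illustration of that point (my own sketch; the bounded uniform distribution of action values is an arbitrary assumption, chosen only to make "the gain might be very small" concrete):

```python
import random

random.seed(0)

def best_found(n_candidates):
    """Best value found when evaluating n random candidate actions,
    with action values drawn from a bounded 0..1 scale."""
    return max(random.random() for _ in range(n_candidates))

# Average over many trials: a "smaller brain" searches 100 candidate actions,
# a "bigger brain" searches 10,000.
trials = 500
small = sum(best_found(100) for _ in range(trials)) / trials
big = sum(best_found(10_000) for _ in range(trials)) / trials

print(f"best action found searching 100 candidates:    {small:.3f}")
print(f"best action found searching 10,000 candidates: {big:.3f}")
print(f"gain from searching 100x harder: {big - small:.3f}")  # roughly 0.01 on this scale
```

On this bounded scale, a hundred-fold deeper search buys about one percentage point of action quality, which is the sense in which the gain from the bigger brain "might be very small".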

This applies to "intelligence" in general.  A smarter cave man may only be able to do slightly better than his competitors, not hugely better.  Ultimate outcomes may be heavily governed by luck or factors intelligence cannot affect, such as susceptibility to disease.  

This is true even if the intelligence is "infinite". An infinitely intelligent cave person is one whose every action is calculated to be the most optimal one he/she can make with the knowledge he/she has.

Another realization that comes out of this is that our modern world may only be possible because of stupid people. Why is that? Well, the most optimal action you can take as a human being is the one that gives you descendants who survive to mate. Agriculture, machinery, the printing press, the scientific method: the individual steps to reach these things were probably often taken by tinkerers who would have been better served individually by finding a way to murder their rivals for mates, or by spending their time on food gathering in the immediate term, etc. For example, agriculture may not have paid off in the lifespan of the first cave person to discover it.

Anyway, a millions-of-times-smarter AI is like a machine that, given a task, can pick the 99th-percentile action instead of the 95th-percentile action (humans). This isn't all that effective alone. The real power of AI would be that they don't need to sleep, can be used in vast arrays that coordinate better with each other, and always pick that 99th-percentile action; they don't get tired or bored or distracted. And they can be made to coordinate with each other rationally, sharing data and not arguing with each other. And you can clone them over and over.

This should allow for concrete, near term goals we have as humans to be accomplished.  

But I don't think, for the most part, the scary possibilities could be done invisibly. For example, in order for the AI to develop a bioweapon that can kill everyone, it would need to do it the way humans would do it, just more efficiently. As in, by building a series of mockups of human bodies (at least the lungs, compared to what modern-day researchers do) and trying out small incremental changes to known-to-work viruses, or trying out custom proteins on models of cell biology.

It needs the information to do it, and the only way to get that information requires a series of controlled experiments done by physical systems, controlled by the AI, in the real world.

Same with developing MNT or any of the other technologies we are pretty sure physics allows but we just don't have the ability to exploit. I think these things are all possible, but the way to make them real would take a large amount of physical resources to methodically work your way up the complexity chain.

I realized that you might be able to make them 'smarter', but because the animal still has finite I/O - it has no thumbs, it cannot speak, it doesn't have the right kind of eyes for tool manipulation - it wouldn't get much benefit.

I'm fairly skeptical of this claim. It seems to me that even moderate differences in animal intelligence in, e.g., dogs lead to things like tool use and a better ability to communicate things to humans.

1DTX3y
To expand, I actually think it applies much more to AI than to animals. Part of the advantage of being an animal is that our interface to the rest of the world is extremely flexible regarding the kinds of inputs it can accept and outputs it can produce. Software systems often crash because XML doesn't specify whether you can include whitespace in a message or not. Part of why AlphaGo isn't really "intelligent" isn't anything about the intrinsic limitations of what types of functions its network architecture can potentially learn and represent. It isn't intelligent because it can't even accept an input that isn't a very specific encoding of a Go board and can't produce any outputs except moves in a game of Go. It isn't like a dog; it's more like a dog that can only eat one specific flavor of one specific brand of dog food. Much of the practical difficulty in creating general-purpose software systems is just that there is no general-purpose communication protocol. It's why we have succeeded so far in producing things that can accept and produce images and text: they analogize well to how animals communicate with the rest of the world, so we understand them and can create digital encodings of them. But even those still rely upon character set encodings, pixel metadata specifications, and video codecs that themselves have no ability to learn or adapt. 

I believe this echoes my thoughts perfectly; I might quote it in full if I ever do get around to reviving that draft.

The bit about "perfect" as not giving slack for development, I think, could be used even in the single individual scenario if you assume any given "ideal" action as lower chance of discovering something potential useful than a "mistake". I.e. adding:

  • Actions have unintended and unknown consequences that reveal an unknown landscape of possibilities
  • Actions have some % chance of being "optimal", but one can never be 100% certain they are so,
... (read more)

This led to a general realization. The animal has a finite set of actions it can make each timestep (finite control channel outputs). It needs to choose, from the set of all the actions it can take, one that will result in meeting the animal's goals.

It seems that by having access to things like language, a computer, and programming languages, the problems of a finite problem space quickly get resolved and no longer pose an issue. Theoretically I could write a program to make me billions of dollars on the stock market tomorrow. So the sp... (read more)

1[anonymous]3y
Please note that the set of actions you can choose from is constrained to those, out of all the high-value actions you know about, that have the highest value. While yes, such a program probably exists (a character sequence that could be typed in at a human timescale to earn 1 billion), you don't have the information to even consider it as a valid action. Therefore you (probably) cannot do it. You would need to become a quant, and it would take both luck and years of your life. And as a perfect example, this isn't the optimal action per nature. The optimal action was probably to socially defeat your rivals back in high school and to immediately start a large family, then cheat on your wife later for additional children. If your brain were less buggy - aka 'smarter' in an evolutionary sense - this and similar "high value" moves would be the only actions you could consider, and humans would still be in the dark ages.
2habryka3y
Well, sure, because I am a fleshy meat human, but it sure seems that you could build a hypothetical mind that is much better at being a quant than humans, who wouldn't need years of their life to learn it (the same way that we build AIs that are much much better at Go than humans, and don't need years of existence to train to a level that vastly outperforms human players). 
2[anonymous]3y
That's the part I am saying isn't true, or it wasn't until recently. The mind, if it is limited to a human body, has finite I/O. It may simply not be possible to read enough in a human's working lifespan to devise a reliable way to get a billion dollars. (Getting lucky in a series of risky bets is a different story - in that case you didn't really solve the problem, you just won the lottery.) And even if I posit it is true now, imagine you had this kind of mind but were a peasant in Russia in 1900? What meaningful thing can you do? You might devise a marginally better way to plow the fields - but again, with limited I/O and lifespan your revised way may not be as overall robust and effective as the way the village elders show you to do it. This is because your intelligence cannot increase the observations you need, and you might need decades of data to devise an optimal strategy. So this relates to the original topic: to make a hyper-intelligent AI it needs to have access to data, clean data with cause and effect, and the best way to do that is to give it access to robotics and the ability to build things. This limiting factor of physicality might end up making AIs controllable even if they are in theory exponential, the same way a negative void coefficient makes a nuclear reactor stable.
5habryka3y
I really don't buy the "you need to run lots of experiments to understand how the world works" hypothesis. It really seems like we could have figured out relativity, and definitely Newtonian physics, without any fancy experiments. The experiments were necessary to create broad consensus among the scientific community, but basically any video stream a few minutes long would have been sufficient to derive Newtonian physics, and probably even sufficient to derive relativity and quantum physics. Definitely if you include anything like observations about objects in the night sky. And indeed, Einstein didn't really run any experiments, just thought experiments, with a few physical facts about the constant nature of the speed of light which can easily be rederived from visual artifacts that occur all the time. For some theoretical coverage of the Bayesian ideal here (which I am definitely not saying is achievable), see Eliezer's posts on Occam's razor and Solomonoff induction. If I had this kind of mind as a Russian peasant in 1900? I would have easily developed artificial fertilizer, which is easily producible given common household items in 1900, and become rich, then probably used my superior ability to model other people to become extremely socially influential, and then developed some pivotal technology like nanotechnology or nukes to take over the world. I don't see why I would be blocked on I/O in any meaningful way? Modern scientists don't magically have more I/O than historical people, and a good fraction of our modern inventions don't require access to particularly specialized resources. What they have is access to theoretical knowledge and other people's observations, but that's exactly what a superintelligent AI would be able to independently generate much better. 
1[anonymous]3y
Well, for relativity you absolutely required observations that couldn't be seen in a simple video stream. And unfortunately I think you are wrong: I think there are a very large number of incorrect physical models that would also fit the evidence in a short video. (Also, there is probably a simpler model than relativity that is still just as correct; it is improbable that we have found the simplest possible model over the space of all of mathematics.) My evidence for this is that pretty much any old machine learning model will overfit to an incorrect/non-general model unless the data set is very, very large and you are very careful with the training rules. I think you could not have invented fertilizer for the same reason. Remember, you are infinitely smart but you have no more knowledge than a Russian peasant. So you will know nothing of chemistry, and you have no knowledge of how to perform chemistry with household ingredients. Also, you have desires - to eat, to mate, shelter - and your motivations are the same as the Russian peasant's; just now, with your infinite brainpower, you will be aware of the optimal path possible from the strategies you are able to consider given the knowledge that you have. Learning chemistry does not accomplish your goals directly, you may be aware of a shorter-term mechanism to do this, and you do not know you will discover anything if you study chemistry.
2habryka3y
What observations do I need that are not available in a video stream? I would indeed bet that within the next 15 years, we will derive relativity-like behavior from nothing but video streams using AI models. Any picture of the night sky will include some kind of gravitational lensing behavior, which was one of the primary pieces of evidence we used to derive relativity. Before we discovered general relativity we just didn't have a good hypothesis for why that lensing was present (and the effects were small, so we kind of ignored them). The space of mathematical models that are as simple as relativity strikes me as quite small, probably less than 10000 bits. Like, encoding a simulation in Python with infinite computing power to simulate relativistic bodies is really quite a short program, probably less than 500 lines. There aren't that many programs of that length that fit the observations of a video stream. Indeed, I think it is very likely that no other models that are even remotely as simple fit the data in a video stream. Of course it depends on how exactly you encode things, but I could probably code you up a Python program that simulates general relativity in an afternoon, assuming infinite compute, under most definitions of objects.
1[anonymous]3y
Again, what you are missing is that there are other explanations that will also fit the data. As an analogy, if someone draws from a deck of cards and presents the cards as random numbers, you will not be able to deduce what they are doing if you have no prior knowledge of cards and only a short sequence of draws. There will be many possible explanations, and some are simpler than 'is drawing from a set of 52 elements'.
2habryka3y
Yeah, that's why I used the simplicity argument. Of course there are other explanations that fit the data, but are there other explanations that are remotely as simple? I would argue no, because relativity is just already really simple, and there aren't that many other theories at the same level of simplicity.   
1[anonymous]3y
I see that we need to actually do this experiment in order for you to be convinced. But I don't have infinite compute. Maybe you can at least vaguely understand my point: given the space of all functions in all of mathematics, are you certain nothing fits a short sequence of observed events better than relativity? What if there is a little bit of noise in the video? I would assume other functions also match. Heck, ReLU with the right coefficients matches just about anything, so...
6habryka3y
ReLU with the right coefficients in a standard neural net architecture is much, much more complicated than general relativity. General relativity is a few thousand bits long when written in Python. Normal neural nets almost never have less than a megabyte of parameters, and state-of-the-art models have gigabytes and terabytes worth of parameters. Of course there are other things in the space of all mathematical functions that will fit it as well. The video itself is in that space of functions, and that one will have perfect predictive accuracy. But relativity is not a randomly drawn element from the space of all mathematical functions. The equations are exceedingly simple. "Most" mathematical functions have an infinite number of differing terms. Relativity has just a few, so few indeed that translating it into a language like Python is pretty easy and won't result in a very long program. Indeed, one thing about modern machine learning is that it is producing models with an incredibly long description length compared to what mathematicians and physicists are producing, and this is causing a number of problems for those models. I expect future, more AGI-complete systems to produce much shorter description-length models. 
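
To make the description-length comparison concrete, here is a minimal sketch of my own (it uses Newtonian gravity as a stand-in, since a faithful general-relativistic integrator would be longer, though still tiny next to a neural network's parameter count):

```python
import numpy as np

def gravity_step(pos, vel, mass, dt, G=6.674e-11):
    """One semi-implicit Euler step of Newtonian N-body gravity.
    pos, vel: (N, 3) arrays of positions and velocities; mass: (N,) array."""
    diff = pos[None, :, :] - pos[:, None, :]                 # vectors from body i to body j
    dist = np.linalg.norm(diff, axis=-1) + np.eye(len(pos))  # pad the diagonal to avoid 0/0
    acc = G * (mass[None, :, None] * diff / dist[:, :, None] ** 3).sum(axis=1)
    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel
```

The point is only about size: a law-like model of the dynamics fits in a few hundred bytes of source, whereas "ReLU with the right coefficients" means shipping all of those coefficients as part of the description.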

habryka

Jan 28, 2021

20

The most basic argument is that it really doesn't take a lot of material resources to be very smart. Human brains run on roughly 20 watts, and we have more than enough easily available material resources in our environment to build much, much, much bigger brains. 

Then, it doesn't seem like "access to material resources" is what distinguishes humanity's success from other animals' success. Sure seems like we pretty straightforwardly won by being smarter and better at coordinating. 

Also, between groups of humans, it seems that development of better technologies has vastly outperformed access to more resources (i.e. having a machine gun doesn't take much in the way of materials, but easily allows you to win wars against less technologically advanced civilizations). Daniel Kokotajlo's work has studied in depth the effect that better technology seems to have had on conquerors when trying to conquer the Americas. 

Now, you might doubt the connection between intelligence and developing new technologies. To me, it seems really obvious that there are some properties of a mind that determine how good it is at developing new technologies, holding environmental factors constant. We've seen drastic differences between different societies and different species in this respect, so there clearly is some kind of property here. I don't see how the environmental effects would dominate, given that most technologies we are developing just involve the use of existing components we already have (like, writing a new computer program that is better at doing something doesn't require special new resources). 

Now the risk is that you get an AI that is much better at solving problems and developing new technologies than humans. It seems that humans are really not great at it, and that the upper bound for competence is far above where we are. This makes sense both on priors (why would the first species to make use of extensive tool-making already be at the maximum?) and from an inside view (human minds sure don't seem very optimized for actually developing new technologies, given that we have a brain that runs on only about 20 watts and has been mostly optimized for other constraints). I don't care whether you call it intelligence, and it definitely shouldn't be conflated with the concept of human intelligence. Like, humans are sometimes smarter in a very specific and narrow way, and the variation between individual humans is overall pretty minimal. When I talk about machine intelligence I mean a much broader set of potential ways to be better at thinking.

We've seen drastic differences between different societies and different species in this respect, so there clearly is some kind of property here

Is there?

Writing, agriculture, animal husbandry, similar styles of architecture and most modern inventions from flight to nuclear energy to antibiotics seem to have been developed in a convergent way given some environmental factors.

But I guess it boils down to a question of studying history, which ultimately has no good data and is only good for overfitting bias. So I guess it may be that there's no way to actuall... (read more)

4habryka3y
What a weird statement. Of course history rules out 99.9% of hypotheses about how the world came to be. We can quibble over the remaining hypotheses, but obvious ones like "the world is 10000 years old" and "human population levels reached 10 billion at some point in the past" are easily falsified. Yes, there is some subjectivity in history, but overall it still reduces the hypothesis space by many, many orders of magnitude. We know that many thousands of years of history never had anything like the speed of technological development that we had in the 20th century. There was clearly something that changed during that time. And population is not sufficient, since we had relatively stable population levels for many thousands of years before the beginning of the industrial revolution, and again before the beginning of agriculture.
3George3d63y
I will note that the 10,000-years-old thing is hardly ruled out by "history", more so by geology or physics, but point taken: even very little data and bad models of reality can lead to ruling out a lot of things with very high certainty. This is, however, the kind of area where I always find history doesn't provide enough evidence, which is not to say this would help my point or harm yours. Just to say that I don't have enough certainty that statements like the above have any meaning, and in order to claim what I'd have wanted (what I was asking the question about) I would have to make a similar claim regarding history. In brief, I'd want to argue with the above statement by pointing out: 1. An ongoing process since the ancient Greeks, with some interruptions, where most of the "important stuff" was figured out a long time ago (I'm fine living with Greek architecture, crop selection, heating, medicine and even logic and mathematics). 2. "Progress" bringing about issues that we then solve and call solving them "progress", e.g. smallpox and the bubonic plague only becoming problematic once we "progressed" to cities. On the whole there's no indication lifespan or happiness has greatly increased; the increases in lifespan exist, but once you take away "locked up in a nursing home" as "life" and exclude "death of kids <1 year" (or, alternatively, if you want to claim kids <1 year are as precious as a fully developed conscious human, once you include abortions in our own death statistics)... we haven't made a lot of "progress" really. 3. A "cause" being attributed to the burst of technology in some niches in the 20th century, instead of it just being viewed as "random chance", i.e. the random chance of making the correct 2 or 3 breakthroughs at the same time. And those 3 points are completely different threads that would dismantle the idea you present, but I'm just bringing them up as potential threads. Overall I hold very little faith in them be

ChristianKl

Jan 28, 2021

20

Your paragraph that outlines your position mixes multiple different things into the concept of reason.

There's the intelligence of individual scientists or engineers, there are conceptual issues, and there's the quality of institutions.

An organization that's a heavily dysfunctional immoral maze is going to innovate less new technology than an organization with access to the same resources but a better organizational setup. 

When it comes to raw intelligence, a lot of the productive engineers have an IQ that far exceeds that of the average population.

Conceptual insights like the idea of running controlled trials heavily influence the medical technology that can be developed in our society. We might have had concepts that would have allowed us to produce a lot more vaccines against COVID-19 much earlier.