Comments

Many complex physical systems are still largely modelled empirically (ad-hoc models validated using experiments) rather than being derived from first principles. While physicists sometimes claim to derive things from first principles, in practice these derivations often ignore a lot of details which still have to be justified using experiments.

The argument here seems to be "humans have not yet discovered true first-principles justifications of the practical models, therefore a superintelligence won't be able to either".

I agree that not being able to experiment makes things much harder, such that an AI only slightly smarter than humans won't one-shot engineer things humans can't iteratively engineer. And I agree that we can't be certain it is possible to one-shot engineer nanobots with remotely feasible compute resources. But I don't see how we can be sure what isn't possible for a superintelligence.

Had it turned out that the brain was big because blind-idiot-god left gains on the table, I'd have considered it evidence of more gains lying on other tables and updated towards faster takeoff.

I agree the blackbody formula doesn't seem that relevant, but it's also not clear what relevance Jacob is claiming it has. He does discuss that the brain is actively cooled. So let's look at the conclusion of the section:

Conclusion: The brain is perhaps 1 to 2 OOM larger than the physical limits for a computer of equivalent power, but is constrained to its somewhat larger than minimal size due in part to thermodynamic cooling considerations.

If the temperature-gradient-scaling works and scaling down is free, this is definitely wrong. But you explicitly flag your low confidence in that scaling, and I'm pretty sure it wouldn't work.* In which case, if the brain were smaller, you'd need either a hotter brain or a colder environment.
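To spell out that last step, here is a rough sketch of my own, assuming simple surface-limited (Newton-style) cooling; the symbols below are mine, not Jacob's.

$$P \;\approx\; h\,A\,\Delta T,$$

where $P$ is the dissipated power, $A$ the brain's surface area, $\Delta T$ the brain-environment temperature difference, and $h$ a fixed heat-transfer coefficient. Holding $P$ and $h$ fixed while shrinking the brain shrinks $A$, so $\Delta T$ has to rise to compensate: a hotter brain, a colder environment, or both.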

I think that makes the conclusion true (with the caveat that 'considerations' are not 'fundamental limits').

(My gloss of the section is 'you could potentially make the brain smaller, but it's the size it is because cooling is expensive in a biological context, not necessarily because blind-idiot-god evolution left gains on the table').

* I can provide some hand-wavy arguments about this if anyone wants.

The capabilities of ancestral humans increased smoothly as their brains increased in scale and/or algorithmic efficiency. Until culture allowed for the brain’s within-lifetime learning to accumulate information across generations, this steady improvement in brain capabilities didn’t matter much. Once culture allowed such accumulation, the brain’s vastly superior within-lifetime learning capacity allowed cultural accumulation of information to vastly exceed the rate at which evolution had been accumulating information. This caused the human sharp left turn.

This is basically true if you're talking about the agricultural or industrial revolutions, but I don't think anybody claims evolution improved human brains that fast. Homo sapiens, however, has only been around for 300,000 years, which is still quite short on the evolutionary timescale, and it's much less clear that the quoted paragraph applies there.

I think a relevant thought experiment would be to consider the level of capability a species would eventually attain if magically given perfect parent-to-child knowledge transfer—call this the 'knowledge ceiling'. I expect most species to have a fairly low knowledge ceiling—e.g. meerkats with all the knowledge of their ancestors would basically live like normal meerkats but be 30% better at it or something.

The big question, then, is what the knowledge ceiling progression looks like over the course of hominid evolution. It is not at all obvious to me that it's smooth!

Upvoted mainly for the 'width of mindspace' section. The general shard theory worldview makes a lot more sense to me after reading that.

Consider a standalone post on that topic if there isn't one already.

I feel that there's something true and very important here, and (as the post acknowledges) it is described very imperfectly.

One analogy came to mind that seems so obvious I wonder if you omitted it deliberately: a snare trap. These very literally work by removing any slack the victim manages to create.

There's definitely something here.

I think it's a mistake to conflate rank with size. The point of the whole spherical-terrarium thing is that something like 'the presidency' is still just a human-sized nook. What makes it special is the nature of its connections to other nooks.

Size is something else. Big things like 'the global economy' do exist, but you can't really inhabit them—at best, you can inhabit a human-sized nook with unusually high leverage over them.

That said, there's a sense in which you can inhabit something like 'competitive Tae Kwon Do' or 'effective altruism' despite not directly experiencing most of the specific people/places/things involved. I guess it's a mix of meeting random-ish samples of other people engaged the same way you are, sharing a common base of knowledge... Probably a lot more. Fleshing out the exact nature of this is probably valuable, but I'm not going to do it right now.

I might model this as a Ptolemaic set of concentric spheres around you. Different sizes of nook go on different spheres. So your Tae Kwon Do club goes on your innermost sphere—you know every person in it, you know the whole physical space, etc. 'Competitive Tae Kwon Do' is a bigger nook and thus goes on an outer sphere.  

Or maybe you can choose which sphere to put things in—if you're immersed in competitive Tae Kwon Do, it's in your second sphere. If you're into competitive martial arts in general, TKD has to go on the third sphere. And if you just know roughly what it is and that it exists, it's a point of light on your seventh sphere. But the size of a thing puts a minimum on what sphere can fit the whole thing. You can't actually have every star in a galaxy be a Sun to you; most of them have to be distant stars.

(Model limitations: I don't think the spheres are really discrete. I'm also not sure if the tradeoff between how much stuff you can have in each sphere works the way the model suggests.)

Maybe it's an apple of discord thing? You claim to devote resources to a good cause, and all the other causes take it as an insult?

If you really want to create widespread awareness of the broad definition, the thing to do would be to use the term in all the ways you currently wouldn't.

E.g. "The murderer realized his phone's GPS history posed a significant infohazard, as it could be used to connect him to the crime."

If Bostrom's paper is our Schelling point, 'infohazard' encompasses much more than just the collectively-destructive smallpox-y sense.

Here's the definition from the paper.

Information hazard: A risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.

'Harm' here does not mean 'net harm'. There's a whole section on 'Adversarial Risks', cases where information can harm one party by benefitting another party:

In competitive situations, one person’s information can cause harm to another even if no intention to cause harm is present. Example:  The rival job applicant knew more and got the job.

ETA: localdeity's comment below points out that it's a pretty bad idea to have a term that colloquially means 'information we should all want suppressed' but technically also means 'information I want suppressed'. This isn't just pointless pedantry.
