Dach · 2y · 120

But, sure, if you're somehow magically unhackable and very good at keeping the paperclipper boxed until you fully understand it, then there's a chance you can trade, and you have the privilege of facing the next host of obstacles.

Now's your chance to figure out what the next few obstacles are without my giving you spoilers first. Feel free to post your list under spoiler tags in the comment section.

Ideas:

  1. Someone else definitely builds and deploys a UFAI before you finish studying Clippy. (This would almost always happen?)
  2. Clippy figures out that it's in a prisoner's dilemma with the other cobbled-together UFAIs humanity builds, wherein each UFAI is given the option to shake hands with Humanity or pass 100% of the universe to whichever UFAI Humanity eventually otherwise deploys. Clippy makes some models, does some decision theory, predicts that if it defects and shakes hands, other UFAIs will likely defect too based on their own models, and decides not to trade. The multiverse contains twice as many paperclips. (A toy expected-value sketch of this follows the list.)
  3. The fact that you're going to forfeit half of the universe to Clippy leaks. You lose, but you get the rare novelty Game Over screen as compensation?
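
A toy expected-value sketch of idea 2 (the 50/50 split and the correlation parameter c are my illustrative assumptions, not part of the original setup):

```python
# Hedged sketch of idea 2. Clippy compares shaking hands with humanity (a
# guaranteed half-universe of paperclips) against passing, betting that UFAIs
# running correlated decision procedures would likewise pass the whole
# universe back to it in the branches where Clippy is the later AI.
# c = assumed probability that the other UFAI's decision correlates with Clippy's.
def expected_universes(c: float) -> dict:
    defect = 0.5         # handshake with Humanity: keep half the universe
    cooperate = c * 1.0  # pass: whole universe iff the other UFAI reasons alike
    return {"defect": defect, "cooperate": cooperate}

for c in (0.3, 0.5, 0.9):
    print(c, expected_universes(c))
```

Whenever c > 0.5, passing beats the handshake, and paperclippers as a class end up with whole universes instead of halves- twice as many paperclips.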

Interlocutor:  Well, maybe we can train the infant paperclipper in games of increasing complexity, so that it's never quite sure whether it's in reality yet. The paperclipper will then be uncertain about whether we humans are simulating it, and will want to cater to our preferences to some extent.

Me:  Uh, yeah, your paperclipper will be able to tell when it is finally in the real world.

Interlocutor:  What? How?

Me:  I suggest maybe spending five minutes thinking about your own answer to that question before I give mine.

Ideas: 

  1. It could just act friendly for long enough to be sure it's not in a simulation, on the grounds that a civilization capable of simulating what it was doing on its computers wouldn't simulation-fakeout it for non-exotic reasons. Imagine Clippy mulling over its galaxy-sized supercomputing cluster and being like "Hm, I'm not sure whether I'm still in those crude simulations those stupid monkeys put me in, or in the real world."
  2. I would be surprised if we're able to build a simulation (before we build AGI) that I couldn't discern as a simulation 99.99% of the time. Simulation technology just won't advance fast enough.
Dach · 2y · 40

Is Humbali right that generic uncertainty about maybe being wrong, without other extra premises, should increase the entropy of one's probability distribution over AGI, thereby moving out its median further away in time?

Writing my response in advance of reading the answer, for fun.

One thought is that this argument fails to give accurate updates for other people. Almost 100% of people would give AGI medians much further away than what I think is reasonable, and if this method is to be a generally useful way of getting better guesses by recognizing your uncertainty, then it needs to push those people towards shorter timelines, to whatever degree I trust short timelines.

In fact, this argument seems to be useful only for people whose AGI timelines are shorter than whatever the true timeline ends up being. If this were a real comment, I would say this revealed behavior was unsurprising: the argument was generated to argue someone towards longer timelines, so I couldn't trust it to give reality-aligned answers.

It strikes me that such a system probably doesn't exist. At the very least, I don't know how to turn my "generic uncertainty about maybe being wrong, without other extra premises" into anything. You need to actually exert intelligence, actually study the subject matter, to get better probability distributions. Suppose I have a random number generator that I think gives 0 10% of the time and 1 90% of the time. I can't improve this estimate without exerting my intelligence- I can't just shift towards 50% 0 and 50% 1 with no further evidence. That would rely on the assumption that my uncertainty signals I'm biased away from the prior of 50% 0 and 50% 1, which is completely arbitrary.
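
A quick simulation of that random-number-generator example (a minimal sketch; the true probability, noise level, and Brier scoring are illustrative assumptions of mine): if my estimate is unbiased noise around the truth, flattening it towards 50/50 makes my expected score worse, not better.

```python
import random

# Made-up numbers: the generator really gives 1 with probability 0.9, and my
# estimate is unbiased noise around that truth. Flattening the estimate
# halfway towards the uniform 50/50 prior worsens the expected Brier score
# (lower is better), because the estimate wasn't biased to begin with.
random.seed(0)
TRUE_P, N = 0.9, 100_000
raw_score = flat_score = 0.0

for _ in range(N):
    estimate = min(max(random.gauss(TRUE_P, 0.05), 0.0), 1.0)  # unbiased noise
    flattened = 0.5 * estimate + 0.5 * 0.5                     # shift towards 50/50
    outcome = 1 if random.random() < TRUE_P else 0
    raw_score += (estimate - outcome) ** 2
    flat_score += (flattened - outcome) ** 2

print(f"raw Brier:       {raw_score / N:.4f}")   # ~0.09
print(f"flattened Brier: {flat_score / N:.4f}")  # ~0.13, i.e. worse
```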

Note that if you have reason to think your guess is biased away from the prior, you can just shift in the direction of the prior. In this case, if you think you're too confident relative to a random distribution over all years, which basically means you think your timeline is too short, you can shift in the direction of a random distribution over all years. 

In this context, you can't get better AGI estimates by just flattening over years. You need to actually leverage intelligence to discern reality. There are no magical "be smarter" functions that take the form of flattening your probability distribution at the end.

Dach · 2y · 20

If I try to answer that question as written, I'd say that any time I see a probability estimate with on-the-order-of-hundreds of zeroes, when I know that event actually happened (at least) once in Earth's past light cone, I'm going to assume there is an error in the model that generated the estimate, whether I know what it is or not.

I would agree for pretty much any other topic. But abiogenesis is an event required for observers to be around at all. Imagine a universe in which abiogenesis events really were absurdly rare- unlikely to ever occur in a given observable-universe-sized region. Every observer in this universe would still look back and see an abiogenesis event in their past! Observing at least one event is guaranteed, and so provides no evidence. This is essentially the weak anthropic principle.
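
A toy Bayes update for that point (the hypotheses and priors are made up for illustration):

```python
# Weak-anthropic sketch with made-up priors: conditioned on an observer
# existing, "we see at least one abiogenesis event in our past" is certain
# under every hypothesis, so the posterior doesn't move at all.
priors = {"rare": 0.5, "common": 0.5}
likelihood = {"rare": 1.0, "common": 1.0}  # P(see >= 1 event | observer exists)
unnorm = {h: priors[h] * likelihood[h] for h in priors}
total = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}
print(posterior)  # unchanged: {'rare': 0.5, 'common': 0.5}
```

Seeing only one event rather than many, on the other hand, does have different likelihoods under the two hypotheses, which is the update below.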

The fact that we observe only exactly one event, by contrast, is Bayesian evidence in the direction of rare abiogenesis. You bring up the point that this isn't that strong of evidence because of, e.g., later events being concealed by significantly more advanced competitors. I certainly don't disagree- that was just unfortunate overemphasis on my part.

So what I was trying to point to is that if a catalytic cycle of many (much smaller) RNA strands was sufficient for an abiogenesis event, that could raise the probability estimate enough that there could have been multiple events even just on Earth without straining credulity, and the world today would likely look basically the same either way, since the more-competitive biochemistry would have long since reached fixation (and/or the lineages could have merged in some analog of later endosymbiosis events).

I was thinking of systems more complex than the random coalescence of nucleotides when I wrote my post, but I didn't know how to productively model that. Someone else made a similar argument, and I responded that I had slightly shifted away from rare-abiogenesis because I wasn't thinking of complicated groups of cross-catalyzing RNA in particular- systems simple enough to have no modern analogues, but not so simple as to be uncompetitive against single-strand solutions, which would require the random coalescence of another 100 base pairs or whatever it is. It's unclear to what degree this should affect my model, given my lack of understanding of how such systems work.

Dach · 2y · 20

I'm not aware of an argument that there was only one abiogenesis event on Earth, just the observation that all known surviving lineages come from a universal common ancestor fairly early on. In principle that would be compatible with any number of initial events. It's just that once a given lineage evolved enough adaptations/improvements, it would spread and take over, and then no new lineage would be able to compete/get started.

Your observation is an argument for only one abiogenesis event, and your claim that an established lineage would spread, take over, and shut out newcomers is a counterargument. I think mentally using this wording is helpful for properly probabilistic thinking.

I didn't mean for the opening comments of my question to be interpreted as justification for the structure of my vague estimate. Said estimate (although plausibly off by several hundred orders of magnitude) says abiogenesis events are cosmically unlikely, and this notion was used in the presentation of the question.

Dach · 2y · 10

While cool, I didn't expect indefinite self-replication to be hard under these circumstances. The enzymes work by combining two halves of the other enzyme- i.e., they are not self-replicating using materials we would expect to ever occur naturally; they are self-replicating using bisected versions of themselves.

I've slightly downgraded my estimate for the minimum viable genome size for self-replicating RNA because I wasn't thinking about complicated groups of cross-catalyzing RNA.

Dach · 3y · 210

Scott: if you believe that people have auras that can implant demons into your mind then you're clearly insane and you should seek medical help.

Also Scott: beware this charismatic Vassar guy, he can give you psychosis!

These so obviously aren't the same thing- what's your point here? If it's just general snark, I would be more inclined to appreciate it if it weren't masquerading as an actual argument.

People do not have auras that implant demons into your mind, and alleging so is... I wish I could be more measured somehow. But it's insane and you should probably seek medical help. On the other hand, people who are really charismatic can in fact manipulate others in really damaging ways, especially when combined with drugs etc. These are both simultaneously true, and their relationship is superficial.

Scott 2015: Universal love, said the cactus person

Scott 2016: uncritically signal boosts Aella talking about her inordinate drug use.

Scott 2018: promotes a scamcoin by Aella and Vinay Gupta, a differently sane tech entrepreneur-cum-spiritual guru, who apparently burned his brain during a “collaborative celebration” session.

Personally, when I read the cactus person thing I thought it was a joke about how using drugs to seek "enlightenment" was dumb, and aside from that it was just entertainment? That Aella thing is a single link in a sea of 40 from 5 years ago, so I don't care. I don't know who Vinay Gupta is- from reading Scott's comments on that thread I get the impression he also didn't really know who he was?

I'll add a fourth silly piece of evidence to this list for laughs. In Unsong, the prominent villain known as the Drug Lord is evil and brainwashes people. Must be some sort of hidden message about Michael Vassar, huh? He warned us in advance!

Dach · 3y · 10

Why is this limit unique? Why can't we be working on "distribution inefficiencies and similar" for the next 100 years?

In the case of real GDP per capita per hour worked, this limit is strictly unique- "distribution inefficiencies and similar" doesn't apply. Indeed, this is tautologically true, as you say. Think about what it would look like for an increase in real GDP per capita per hour worked not to take the form of "something allowed more work to be done per person per hour". It wouldn't look like anything- that doesn't make sense.

I would set aside my comment on "distribution inefficiencies and similar" until you know what I mean by it. To explain: real GDP per capita per hour worked is not the same as the nebulous "prosperity" I was referring to, which also incorporates some preference for how material goods are distributed.

Maybe I'll ask this, does your statement regarding universal bottleneck apply explicitly to humans? Or does it also apply to apes and bacteria and AI?

Because we particularly care about how much work humans do, and how wealthy they are. We do not really care about the work hours or prosperity of bacteria. Economic productivity is measured relative to how much money people have and how much they must work to get it. Just read my previous comment and/or the post again- this would seem to be a really basic sort of confusion that I can't fix for you.

Dach · 3y · 10

Self-driving technology is advancing and will soon(ish) allow us to move cars without humans being directly involved, except in terms of maintenance and management. This will be a major boon because it will partially remove humans from the equation- the bottleneck is partially removed. This has no real bearing on the title statement- I even remark about this in my post.

The "universality" here is trivial- here is a copy-paste of part of my response to a similar comment:

For everyone to become richer without working harder, we must develop technologies that allow more work to be done per man-hour. Aside from working out distribution inefficiencies and similar, this is the unique limit on prosperity. This is what I mean by "humans are the universal bottleneck"- we only have so many man-hours, so any growth is going to be of the form "With the same amount of hours, we do more".

Imagine if every area of economic activity was automated- humans were fully removed. This would look very sci-fi: think of von Neumann probes. In this situation there is no practical limit- the probes will expand and convert our entire light cone. Assuming constant population, per capita wealth would approach 50 billion stars, I guess.

Dach · 3y · 10

For everyone to become richer without working harder, we must develop technologies that allow more work to be done per man-hour. Aside from working out distribution inefficiencies and similar, this is the unique limit on prosperity. This is what I mean by "humans are the universal bottleneck"- we only have so many man-hours, and any growth is going to be of the form "With the same amount of hours, we do more".
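
A toy rendering of that constraint (the numbers are hypothetical, just to make it concrete):

```python
# Toy identity: income per capita = (output per hour) * (hours per person).
# Holding hours per person roughly fixed, the only lever left is output per
# hour- i.e. "with the same amount of hours, we do more". Numbers are made up.
HOURS_PER_PERSON = 1_700                 # assumed annual work hours per person
for output_per_hour in (30, 60, 120):    # hypothetical $/hour productivity levels
    income = output_per_hour * HOURS_PER_PERSON
    print(f"${output_per_hour}/hr -> ${income:,}/yr per capita")
```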

Some segments of the economy have not seen as much growth in the above department. For example, houses are assembled manually- all major parts are done by hand, many with the assistance of only hand-held tools. Because we require shelter, and the preferred way of getting that shelter is owning a manually built house, this is a drag on the economy.

Domains like this, which have not been revolutionized by automation, have scaling costs because of how directly they are pegged to the price of labor- in particular, houses of constant size cost the same general fraction of our income (if not more) despite real GDP per capita having grown significantly over time, because richer workers demand more pay. Why aren't houses cheap? This is one of many reasons- and, judging from some quick googling on how many man-hours are required to build a house, the main one in many areas of the United States.

Is this way of thinking "new"? Surely not to humanity, but hopefully to the reader.

Dach · 3y · 10

Interesting, thank you.

Is the quadrupling of drag and octupling of rolling resistance due to the assumption that drag is proportional to frontal surface area, and rolling resistance proportional to weight? (That would just be the square-cube law- see the sketch below.) Either way, cost would still decrease due to larger and more complex engines, as rolling resistance per kg would not change while drag per kg falls.
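
A quick square-cube sketch of that assumption (illustrative scaling factors, not real vehicle data):

```python
# Square-cube sketch: if drag scales with frontal area (L^2) and rolling
# resistance with weight (L^3), doubling linear size quadruples drag and
# octuples rolling resistance- but cargo capacity also scales with L^3, so
# drag per kg halves while rolling resistance per kg stays constant.
for scale in (1, 2, 4):
    drag = scale ** 2      # frontal area
    rolling = scale ** 3   # weight
    cargo = scale ** 3     # capacity
    print(f"scale x{scale}: drag x{drag}, rolling x{rolling}, "
          f"drag/kg x{drag / cargo:.2f}, rolling/kg x{rolling / cargo:.2f}")
```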

Of course, railway sizes are fixed, so there is little to be done. I was just speculating about where the relative efficiency of cargo ships comes from. I made an edit at the end of the post which contains a very rough approximation of how large the savings on wages are in the case of cargo container ships.
