Comments

lmnop · 11y · 0 · 0

Is the pay strictly by hours or by work produced? Is it possible to make more than $10-$12/hr by e.g. reading the essays faster?

[This comment is no longer endorsed by its author]
lmnop · 11y · 5 · 0

No, but I read it just now, thank you for linking it. The example takeover strategy offered there was bribing a lab tech to assemble nanomachines (which I am guessing would then be used to facilitate some grey goo scenario, although that wasn't explicitly stated). That particular strategy seems a bit far-fetched, since nanomachines don't exist yet and we thus don't know their capabilities. However, I can see how something similar with an engineered pandemic would be relatively easy to carry out, assuming the ability to fake access to digital currency (likely) and the existence of sufficiently avaricious and gullible lab techs to bribe (possible).

I was thinking in terms of "how could an AI rule humanity indefinitely" rather than "how could an AI wipe out most of humanity quickly." Oops. The second does seem like an easier task.

lmnop · 11y · 0 · 0

Government orders the major internet service providers to shut down their services, presumably :) Not saying that it would necessarily be easy to coordinate, nor that the loss of the internet wouldn't cripple the global economy. Just that it seems to be a different order of risk than an extinction event.

My intuition on the matter was that an AI would be limited in its scope of influence to digital networks, and that its access to physical resources (labs, factories, and the like) would be contingent on persuading people to do things for it. But everyone here is so confident that UFAI --> doom that I was wondering if there was some obvious and likely successful method of seizing control of physical resources that everyone else already knew about and I had missed.

lmnop · 11y · 0 · 0

Worst case scenario, can't humans just abandon the internet altogether once they realize this is happening? Declare that only physical currency is valid, cut off all internet communications and only communicate by means that the AI can't access?

Of course it should be easy for the AI to avoid notice for a long while, but once we get to "turn the universe into computronium to make paperclips" (or any other scheme that diverges drastically from business as usual), people will eventually catch on. There is an upper bound on the level of havoc the AI can wreak without people eventually noticing and resisting in the manner described above.

lmnop · 11y · 0 · 0

What are concrete ways that an unboxed AI could take over the world? People seem to skip from "UFAI created" to "UFAI rules the world" without explaining how the one must cause the other. It's not obvious to me that superhuman intelligence necessarily leads to superhuman power when constrained in material resources and allies.

Could someone sketch out a few example timelines of events for how a UFAI could take over the world?

lmnop · 11y · 0 · 0

Could you elaborate on the difference between continual and ongoing growth? Dweck-style growth mindset seems similar to LW-style life optimization on a practical level to me.

lmnop · 13y · 4 · 0

I'm guessing it's that Albus's own father was sent to Azkaban and died there.

lmnop · 13y · 0 · 0

> whereas for an uncoached entrant it's almost purely wealth --> ability --> score.

And coaching can't make up a large part of the score difference, either. There's a discrepancy of more than 100 points on Critical Reading or Math alone between the lowest and highest income groups, whereas coaching only produces improvements of 30 points in Reading and Math combined.

lmnop · 13y · 4 · 0

In book 7, Voldemort visits Grindelwald at Nurmengard in order to interrogate him about the location of the Elder Wand, and then kills him. So Grindelwald was definitely alive in book 1.

lmnop · 14y · 0 · 0

Short term or long term? If long, how long?
