Thanks for the links! My intuition was that space is big enough that global coordination isn’t always needed to avoid basic failures like collisions, but I definitely need to do more reading/thinking/modeling to figure out how valid that intuition is.

Does anyone have a link handy related to complexity of coordinating Earth’s satellites?

Coordination is only exponential if most units have to coordinate with most other units rather than relying on more localized coordination. Huge flocks of birds and swarms of drones coordinate just fine; each member only needs to be aware of local variation and broad trends.
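
As a rough sketch of what I mean by localized coordination (toy code with assumed parameters, not any real drone or satellite system), each unit reacts only to neighbors within a small radius, so the work per step scales with n times the local density rather than with every pair of units:

```python
import random
from collections import defaultdict

# Toy sketch of flock-style local coordination. Each unit is a point in 2D
# and only coordinates with neighbors within CELL distance, found via a
# uniform grid so lookups stay local.

CELL = 5.0  # grid cell size, equal to the interaction radius

def build_grid(units):
    """Bucket unit indices by grid cell for cheap local-neighbor lookups."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(units):
        grid[(int(x // CELL), int(y // CELL))].append(i)
    return grid

def local_neighbors(units, grid, i):
    """Indices of units within CELL distance of unit i, checking only nearby cells."""
    xi, yi = units[i]
    cx, cy = int(xi // CELL), int(yi // CELL)
    nbrs = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in grid.get((cx + dx, cy + dy), []):
                xj, yj = units[j]
                if j != i and (xj - xi) ** 2 + (yj - yi) ** 2 <= CELL ** 2:
                    nbrs.append(j)
    return nbrs

def step(units, speed=0.1):
    """Nudge each unit toward the average position of its local neighbors only."""
    grid = build_grid(units)
    out = []
    for i, (x, y) in enumerate(units):
        nbrs = local_neighbors(units, grid, i)
        if nbrs:
            ax = sum(units[j][0] for j in nbrs) / len(nbrs)
            ay = sum(units[j][1] for j in nbrs) / len(nbrs)
            x, y = x + speed * (ax - x), y + speed * (ay - y)
        out.append((x, y))
    return out

units = [(random.uniform(0, 100.0), random.uniform(0, 100.0)) for _ in range(1000)]
units = step(units)  # each unit coordinated with only a handful of others
```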

There is some association between vengeance and the just-world hypothesis. How does this resonate with you?

To me, this framing seems like it might be taking advantage of some human cognitive biases like loss aversion and attribution biases like the fundamental attribution error. It feels like a hack that exploits known bugs.

One might argue that there are defeating reasons that corporations do not destroy the world: they are made of humans so can be somewhat reined in; they are not smart enough; they are not coherent enough. But in that case, the original argument needs to make reference to these things, so that they apply to one and not the other.

I don't think this is quite fair. You created an argument outline that doesn't directly reference these things, so you can only blame yourself for excluding them unless you are claiming that such things have not been discussed extensively.

One extremely important difference between corporations and potential AGIs is the level of high-speed, high-bandwidth coordination (which has been discussed extensively) that may be possible for AGIs. If a massive corporation could be as internally coordinated and self-aligned as might be possible for an AGI, it would be absolutely terrifying. Imagine Elon Musk as a Borg Queen with everyone related to Tesla as part of the "collective" under his control...

Latency, regardless of the cause, is one of the biggest hurdles. No matter how perfect the VR tech is, if the connection between participants has significant latency, then the experience will be inferior to in-person communication.
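
A back-of-the-envelope sketch (my own illustrative numbers) of why distance alone is a floor on latency: signals in optical fiber travel at roughly two-thirds the speed of light, about 200,000 km/s, so a long-haul round trip costs on the order of 100 ms before any encoding, rendering, or network overhead is added.

```python
# Minimum possible round-trip time over an ideal fiber link, ignoring routing,
# encoding, and rendering delays. Numbers are approximate.

FIBER_SPEED_KM_PER_S = 200_000  # ~2/3 of the vacuum speed of light

def round_trip_ms(distance_km):
    """Best-case round-trip latency in milliseconds for a given one-way distance."""
    return 2 * distance_km / FIBER_SPEED_KM_PER_S * 1000

print(round_trip_ms(100))     # ~1 ms for participants in the same region
print(round_trip_ms(10_000))  # ~100 ms between distant continents
```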

This video breaks it down nicely along the lines of what you describe as the "common theme".

https://www.youtube.com/watch?v=SxGYPqCgJWM

I don't know that intelligence will be "easy", but it doesn't seem intuitive to me that evolution has optimized intelligence close to any sort of global maximum. Evolution is effective over deep time, but it is highly inefficient compared to more intelligent optimization processes like SGD (stochastic gradient descent), and it is incapable of planning ahead.
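
As a toy illustration of that difference (my own example on a made-up one-dimensional objective, not a claim about how brains or models are actually trained), gradient-guided search uses direction information on every step, while blind mutate-and-select search has to stumble into improvements:

```python
import random

def f(x):
    """Objective to minimize: f(x) = (x - 3)^2."""
    return (x - 3.0) ** 2

def grad_f(x):
    """Exact gradient of f."""
    return 2.0 * (x - 3.0)

# Gradient descent (the deterministic core of SGD): every step knows which
# direction is downhill.
x = 10.0
for _ in range(100):
    x -= 0.1 * grad_f(x)
print("gradient descent:", x)  # converges close to 3.0

# Evolution-style search: propose a random mutation, keep it only if it helps.
# Most proposals are wasted, and nothing plans ahead.
y = 10.0
for _ in range(100):
    candidate = y + random.gauss(0.0, 0.5)
    if f(candidate) < f(y):
        y = candidate
print("mutate-and-select:", y)
```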

Even if we assume that evolution has provided the best possible solution within its constraints, what if we are no longer bounded by those constraints? A computer doesn't have to adhere to the same limitations as an organic human (and some human limitations are really severe).

Most actors in society - businesses, governments, corporations, even families - aren't monolithic entities with a single hierarchy of goals. They're composed of many individuals, each with their own diverse goals.

The diversity of goals among the component entities is good protection to have. In the case of an AI, do we still have the same diversity? Is there a reason why a monolithic AI with a single hierarchy of goals cannot operate on the level of a many-human collective actor?

I'm not sure how the solutions our society has evolved apply to an AI, because an AI isn't necessarily a diverse collective of individually motivated actors.

Sure. I’m not saying it won’t happen, just that an AI will already be transformative before it does happen.

An AI solving a Millennium Prize problem within a decade would be truly shocking, IMO. That’s the kind of thing I wouldn’t expect to see before AGI is the world superpower. My best guess, coming from a mathematics background, is that dominating humanity is an easier problem for an AI.
