Dan.Oblinger
Dan.Oblinger has not written any posts yet.

A natural extension of the way AI interacts today via MCP makes it a kind of insider: one with a specific role, and access patterns that match that role.
Even an org that is not concerned with misaligned AI will still want to lock down exactly what updates each role can make within the org, just as orgs typically lock down access for different roles within a company today.
Most employees cannot access accounts receivable, and access to the production databases in a tech company is very carefully guarded. Mostly not from fear of malevolence; it's just a fear that a junior dev could easily bollix things horribly...
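To make the idea concrete, here is a minimal sketch of role-scoped tool access for an AI insider. This is plain Python with entirely hypothetical names (TOOL_REGISTRY, ROLE_TOOL_ALLOWLIST, dispatch_tool_call), not part of any real MCP SDK; it only illustrates gating each role's tool calls the way human access is gated today:

```python
from typing import Any, Callable, Dict, Set

# Hypothetical tool handlers an MCP-style server might expose.
def read_docs(path: str) -> str:
    return f"contents of {path}"

def query_prod_db(sql: str) -> str:
    return f"rows for: {sql}"

TOOL_REGISTRY: Dict[str, Callable[..., Any]] = {
    "read_docs": read_docs,
    "query_prod_db": query_prod_db,
}

# Each role gets only the tools its human counterpart would have.
ROLE_TOOL_ALLOWLIST: Dict[str, Set[str]] = {
    "junior_dev": {"read_docs"},
    "sre": {"read_docs", "query_prod_db"},
}

def dispatch_tool_call(role: str, tool: str, **kwargs: Any) -> Any:
    """Refuse any tool call outside the caller's role allowlist."""
    if tool not in ROLE_TOOL_ALLOWLIST.get(role, set()):
        raise PermissionError(f"role {role!r} may not call tool {tool!r}")
    return TOOL_REGISTRY[tool](**kwargs)
```

Under this sketch, dispatch_tool_call("junior_dev", "query_prod_db", sql="...") raises PermissionError, mirroring how a junior dev is kept away from production data regardless of intent.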
Chi, I think that is correct.
My argument attempts to provide a descriptive explanation of why all evolved intelligences have a tendency toward ECL, but it provides no basis for arguing that such intelligences should have such a tendency in a normative sense.
Still, as an individual (with such tendencies), I find that the idea that other distant intelligences will also tend toward ECL provides some personal motivation. I don't feel like such a "sucker" if I spend energy on an activity like this, since I know others will too, and it is only "fair" that I contribute my share.
Notice, I still have a suspicion that this way of thinking in...
I find myself arriving at a similar conclusion, but via a different path.
I notice that citizens often vote in the hope that others will also vote, and thus that as a group they will yield a benefit. They do this even when they know their vote alone will likely make no difference, and their voting does not cause others to vote.
So why do they do this? My thought is that we are creatures that have evolved instincts adaptive for causally interacting, social creatures. In a similar way, I expect other intelligences may have evolved in causally interacting social contexts and thus developed similar instincts. So this is why I expect distant aliens may...
I find X-risk very plausible, yet parts of this particular scenario seem quite implausible to me. The post assumes an ASI that is at once extremely naive about its goals and extremely sophisticated. Let me explain:
- We could easily adjust Stockfish so that, instead of trying to win, it tries to lose by the thinnest margin, and given this new objective function it would do just that (a toy sketch follows this list).
- One might counter that Stockfish is not an ASI that can reason about the changes we are making; if it were, it would aim to block any loss against its original objective function.
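Here is that toy sketch of the Stockfish point. It is plain Python with made-up names (material, thin_loss_eval), not real engine code; the point is only that the search machinery stays the same while the evaluation it maximizes is swapped:

```python
from typing import Dict

PIECE_VALUES: Dict[str, int] = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def material(position: str) -> int:
    """Toy material count: our pieces uppercase, opponent's lowercase
    (kings omitted). Positive means we are ahead."""
    return sum(PIECE_VALUES.get(c.upper(), 0) * (1 if c.isupper() else -1)
               for c in position)

def winning_eval(position: str) -> int:
    """Original objective: maximize material advantage."""
    return material(position)

def thin_loss_eval(position: str) -> int:
    """Swapped objective: lose, but by the thinnest margin. Among
    losing positions the smallest deficit scores highest; any
    non-losing position is heavily penalized."""
    m = material(position)
    return m if m < 0 else m - 10_000

# Down one pawn beats down two pawns, which beats being up a queen.
assert thin_loss_eval("p") > thin_loss_eval("pp") > thin_loss_eval("Q")
```

Penalizing every non-losing position by a large constant makes any search that ranks positions by this evaluation prefer being behind, and among losing lines it ranks the smallest deficit highest.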
...

I believe an ASI will "grow up" with a collection of imposed...