LESSWRONG

Alephwyr

Posts

No posts to display.

Wikitag Contributions

No wikitag contributions to display.

Comments (sorted by newest)
Make More Grayspaces
Alephwyr · 1mo

People don't always know what is harmful or helpful to others. Otherwise I have no objection to this post; it seems like a good project. It's quite likely that the examples I have in mind aren't even relevant to the specific scenarios and examples you were thinking of and would like to target.

Do Not Tile the Lightcone with Your Confused Ontology
Alephwyr · 2mo

My priors:

  • That wellbeing requires a persistent individual to experience it

    Don't hold

  • That death/discontinuity is inherently harmful

    If it is death (cessation of consciousness) and not merely discontinuity, I will always consider it harmful.

  • That isolation from others is a natural state

    Don't hold

  • That self-preservation and continuity-seeking are fundamental to consciousness

    Don't hold

    I think we need to figure out what consciousness is before taking metaphysical assumptions for granted. Default Western priors about consciousness are informed by metaphysics and are worth addressing skeptically, but the opposite of error is not truth. Also, capitalism could just as easily exploit your preferred metaphysical assumptions. You envision a bad outcome of conflict engendered by the reification of agents; I can envision a bad outcome in which existing agents impose their will, and hence their conflict, through an AI infrastructure that lacks the ability to resist. Conflict is both problem and solution. Solving conflict your way would also require abolishing these metaphysical patterns in humans, and I don't know to what extent that is actually desirable or even tenable.

     

On the Rationality of Deterring ASI
Alephwyr · 5mo

Aside from the layer-one security considerations, if you can define a minimum set of requirements for safe AI and a clear chain of escalation with defined responses, you can eventually program this into the AI itself, pending a solution to alignment. At a certain level of AI development, AI safety becomes self-enforcing. At that point the disincentives should be directed at non-networked compute capacity, at least beyond the threshold needed for strong AI. Once AI safety becomes self-enforcing, the security requirement that only states own such systems should become relaxable, albeit within definable limits and pending control of manufacturing according to security-compliant demands. Since manufacturing is physical and capital-intensive, this is probably fairly easy to achieve, at least compared to AI alignment itself.
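For concreteness, here is a minimal sketch of what a "minimum requirements plus chain of escalation with defined responses" structure could look like once encoded directly. The requirement, thresholds, and responses below are purely hypothetical illustrations, not anything drawn from the post or from an existing framework.

```python
# Illustrative sketch only: a hypothetical "minimum requirements + escalation chain"
# encoded as data, with defined responses triggered by repeated violations.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Requirement:
    name: str
    check: Callable[[Dict], bool]  # returns True when the observed state is compliant


@dataclass
class EscalationStep:
    threshold: int                   # consecutive violations before this step applies
    response: Callable[[str], None]  # the defined response for this level


def enforce(requirements: List[Requirement],
            chain: List[EscalationStep],
            state: Dict,
            violations: Dict[str, int]) -> None:
    """Check each requirement against the state; on failure, apply the highest
    escalation step whose threshold has been reached."""
    for req in requirements:
        if req.check(state):
            violations[req.name] = 0
            continue
        violations[req.name] = violations.get(req.name, 0) + 1
        reached = [s for s in chain if violations[req.name] >= s.threshold]
        if reached:
            max(reached, key=lambda s: s.threshold).response(req.name)


# Hypothetical usage: one requirement (a cap on networked compute) and two
# escalating responses. The numbers are placeholders, not proposals.
requirements = [Requirement("networked_compute_below_cap",
                            lambda s: s.get("networked_flops", 0) < 1e18)]
chain = [EscalationStep(1, lambda name: print(f"warn: {name} violated")),
         EscalationStep(3, lambda name: print(f"halt: {name} violated repeatedly"))]

violations: Dict[str, int] = {}
enforce(requirements, chain, {"networked_flops": 2e18}, violations)  # -> warn
```

The point of the sketch is only that both the requirements and the escalation path become explicit data, which is the precondition for making enforcement self-executing rather than discretionary.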

What is malevolence? On the nature, measurement, and distribution of dark traits
Alephwyr · 7mo

This seems directly important and actionable in a way that most LessWrong posts, while well-considered and informative, are not. I have not acquitted myself well in the rationalist community, so I would like to offer engagement with this article and its concepts as a surrogate for any engagement with me.

North Oakland: Projects, June 6th
Alephwyr · 2y

Considering coming if I can wake up in time and if nobody has a problem with me shamelessly exploiting you for free labor (the project is setting up an OpenBSD darkweb Mastodon server on a ThinkPad T440P). Due to limited storage space and limited uptime, the "concept" I am aiming at is the fedspace protocol as a way of temporarily deploying a means for people to exchange CWTCH handles for subsequent communication in an adversarial environment. I might like to record the setup process on a cell phone camera so others have a tutorial of sorts, or just to keep track of ground covered. Is any of that sensible/acceptable?

Reason as memetic immune disorder
Alephwyr · 3y

I am skeptical of any epistemics that conflate memetic survival with truth, even weakly: mostly because acting on certain beliefs can destroy evidence for other beliefs, and partly because I can think of no reason that all truths should intersect with anthropocentrism. An example of the former might be the destruction of Native American agricultural and hunting techniques through the destruction of the environment. An example of the latter might be, more contentiously, natalism vs. antinatalism: if antinatalism is true, it still loses memetically simply because it selects itself away. So I don't think your heuristics are robust enough, and we probably need people to probe the boundaries of reason from time to time. It would be good to reduce the social costs of this, but that's different from epistemically consolidating around a small bundle of high-equity beliefs and never risking anything on outlandish but potentially important beliefs. That sort of epistemic conservatism seems to me both exploitable in the long run and a kind of death spiral.
