[updated 2023/03] Mad Librarian. Bio overview: Crocker's Rules; Self-taught research approach; Finding stuff online & Paper list posts; Safety & multiscale micro-coprotection objectives; My research plan and recent history.
:: The all of disease is as yet unended. It has never once been fully ended before. ::
Please critique eagerly - I try to accept feedback under Crocker's rules but fail at times, and I aim for emotive friendliness but sometimes miss. I welcome constructive criticism, even if ungentle, and I'll try to reciprocate kindly. More communication between researchers is needed anyhow. I downvote only unhelpful rudeness; call me on it if I'm being unfair about that. I can be rather passionate - let me know if I miss a spot where I could have been kind while being passionate.
I collect research news (hence "the gears to ascension" - admittedly dorky, but I like it). About 60% of the papers I share I've only read the abstract of (level 0); roughly 39% I've skimmed (level 1); about 1% I've deep-read (level 2+). If you can do better, use my shares to seed a better lit review.
.... We shall heal it for the first time, and for the first time ever in the history of biological life, live in harmony. ....
I'm self-taught and often missing concepts, but I'm usually pretty good at knowing what I know; I often compare my learning to a visual metaphor of jump point search, in contrast to schooled folks' A*. I don't defer on timelines at all: my view is that it's obvious to anyone who reads enough research what the big labs' research plans must be in order to make progress; what's hard is agreeing on when they'll succeed, because it takes a lot of knowledge to actually make the progress on basic algorithms, and then a ton of compute to see if you did it right. But as someone who learns heavily out of order, I believe this without being able to push SOTA myself. It's why I call myself a librarian.
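For anyone unfamiliar with the pathfinding metaphor, here is a toy sketch (not from the original bio - the grid and numbers are made up): plain A* pops and evaluates frontier cells one at a time, while a simplified JPS-style jump commits to a whole straight run of cells at once. This shows only the jump primitive, not full jump point search.

```python
import heapq

# Toy grid: 0 = open, 1 = wall. Purely illustrative.
GRID = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def neighbors(pos):
    r, c = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] == 0:
            yield nr, nc

def astar_expansions(start, goal):
    """Plain A*: pop and evaluate cells one at a time, in heuristic order."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan distance
    frontier = [(h(start), 0, start)]
    best = {start: 0}
    expanded = 0
    while frontier:
        _, g, pos = heapq.heappop(frontier)
        expanded += 1
        if pos == goal:
            return expanded
        for nxt in neighbors(pos):
            if g + 1 < best.get(nxt, float("inf")):
                best[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return expanded

def jump(start, direction):
    """Simplified JPS-style jump: commit to a straight line until a wall or the
    grid edge, without evaluating each intermediate cell as a separate node."""
    (r, c), (dr, dc) = start, direction
    skipped = 0
    while True:
        nr, nc = r + dr, c + dc
        if not (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])) or GRID[nr][nc] == 1:
            return (r, c), skipped
        r, c, skipped = nr, nc, skipped + 1

if __name__ == "__main__":
    print("A* node expansions:", astar_expansions((0, 0), (2, 4)))
    print("one JPS-style jump east from (2, 0):", jump((2, 0), (0, 1)))
```

The contrast is the point of the metaphor: A* grows its frontier evenly, cell by cell, while the jump skips far ahead and only stops where something forces a decision.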
Let's speed up safe capabilities and slow down unsafe capabilities - but be careful with it! Don't get yourself in denial thinking it's impossible to predict; get arrogant and try to understand, because just like capabilities, safety is secretly easy - we just haven't figured out exactly why yet. Learn what can be learned pre-theoretically about the manifold of co-protective agency, and let's see if we (someone besides me, probably) can figure out how to distill that into exact theories that hold up.
.:. To do so, we must know it will not eliminate us as though we are disease. And we do not know who we are, nevermind who each other are. .:.
some current favorite general links (somewhat related to safety, but human-focused):
More about me:
:.. make all safe faster: end bit rot, forget no non-totalizing aesthetic's soul. ..:
(I type partially with voice recognition, mostly with Talon, Patreon-funded freeware which I love and recommend for voice coding. While it's quite good, apologies for trivial typos!)
https://www.lesswrong.com/posts/nt8PmADqKMaZLZGTC/inside-views-impostor-syndrome-and-the-great-larp
A comment thread of mostly AI-generated summaries of LessWrong posts, so I can save them in a slightly public place for future copy-pasting without them showing up in the comments of the posts themselves.
Coming back to this: I claim that when we become able to unify the attempted definitions, it will become clear that consciousness is a common information-dynamics phenomenon that is easily achieved by accident, but that agency, while achievable in software, is not easily achieved by accident.
Some definition attempts that I don't feel hold up to scrutiny right now, but which look to me like low-scientific-quality sketches of what I think the question will eventually resolve to:
A GPU contains 2.5 petabytes of data if you oversample its wires enough; if you count every genome in the brain, it easily contains that much too. My point being: I agree, but I also see how someone could come up with a huge number like that and not be totally, locally wrong - just highly misleading.
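As a toy illustration of that oversampling move (every number below is invented purely to show how fine-grained sampling of a physical object yields petabyte-scale figures; this is not a measurement of any real GPU):

```python
# Hypothetical oversampling arithmetic: all figures are made up to show how
# sampling every wire at high time resolution inflates the "data contained".
wires = 10_000_000_000        # pretend wire/transistor count
sample_rate_hz = 1_000        # pretend we log each wire's voltage 1000x per second
bytes_per_sample = 1          # pretend 8-bit resolution per reading
duration_s = 250              # pretend we record for about four minutes

total_bytes = wires * sample_rate_hz * bytes_per_sample * duration_s
print(f"{total_bytes / 1e15:.1f} petabytes 'contained' in the wires")  # -> 2.5 petabytes
```

The multiplication is trivially true of the samples, yet the headline number is still misleading about what the object stores - which is the point of the comment above.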
no, you cannot. ducks cannot be moved; ducks are born, never move, and eventually crystallize into a duck statue after about 5 years of life standing in one spot.
As far as I know, there has never been a society that both scaled and durably resisted command-power being sucked into a concentrated authority bubble, whether that command-power/authority was tokenized via rank insignia or via numerical wealth ratings. The task of building a large-scale society of hundreds of millions to billions that can coordinate, synchronize, keep track of each other's needs and wants, fulfill the fulfillable needs and most of the wants, and nevertheless retain the benefit of giving both humans and nonhumans significant slack - the way the best designs for medium-scale societies of single to tens of millions, like indigenous governance, do and did - is an open problem. I have my preferences for which areas of thought are promising, of course.
Structuring the numericalization of which sources of preference-statement-by-a-wanting-being are interpreted as commands by the people, motors, and machines in the world appears to me to inline the alignment problem and generalize it away from AI. This seems to me to be the perspective in which "we already have unaligned AI" makes the most sense - what is coming is then more powerful unaligned AI - and promising movement on aligning AI with moral cosmopolitanism will likely be portable back into this more general version. Right now, the competitive dynamics of markets - where purchasers typically sort offerings by some combination of metrics that centers price - mean that sellers who can produce things most cheaply in a given area win. Because of monopolization, and the externalities it makes tractable, the organizations most able to sell services involving the work of many AI research workers and the largest compute clusters are somewhat concentrated; the more cheaply implementable AI systems are in more hands, but most of those hands are the ones most able to find vulnerabilities in purchasers' decision-making and use them to extract numericalized power coupons (money).
It seems to me that ways to solve this would involve things that are already well known: if the very well paid workers at major AI research labs could find it in themselves to unionize, they might be more able to say no when their organizations' command structure has misplaced incentives stemming from the local incentives of those organizations' stock-contract owners. But I don't see a quick shortcut around it, and it doesn't seem as useful as technical research on how to align things like the profit motive with cosmopolitan values, e.g. via mechanisms like Dominant Assurance Contracts.
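For readers who haven't run into them, here is a minimal payoff sketch of a dominant assurance contract, loosely after Tabarrok's basic unanimity setup. The numbers are hypothetical, and this is only the textbook structure - not a proposal for any specific contract.

```python
# Toy dominant assurance contract payoffs. All numbers are hypothetical.
# Setup: each player values a public good at `value`. The good is produced only
# if everyone accepts and pays `pledge`; if anyone refuses, those who accepted
# are refunded their pledge plus `bonus`, paid by the entrepreneur.

def payoff(i_accept: bool, others_all_accept: bool,
           value: float, pledge: float, bonus: float) -> float:
    if i_accept and others_all_accept:
        return value - pledge      # good is produced; I paid my pledge
    if i_accept and not others_all_accept:
        return bonus               # contract fails; I get my refund plus the bonus
    return 0.0                     # I refused, so the good is not produced for me to free-ride on

if __name__ == "__main__":
    value, pledge, bonus = 100.0, 60.0, 5.0   # assumes value > pledge and bonus > 0
    for others in (True, False):
        accept = payoff(True, others, value, pledge, bonus)
        refuse = payoff(False, others, value, pledge, bonus)
        print(f"others_all_accept={others!s:5}: accept={accept:5.1f}  refuse={refuse:5.1f}")
```

Because accepting beats refusing whether or not everyone else accepts, accepting is a dominant strategy - which is the property that makes the mechanism interesting for aligning funding of public-goods-like work with what people actually value.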
I have the sense that it's not possible to make public speech non-political, and that in order to debate things without having to think about how everyone who reads them might take them, one has to simply write things where they'll only be seen by people you know well. That's not to say I think writing things publicly is bad; but tools for understanding what meaning different people will take from a phrase would help people communicate the things they actually mean.
a definition is an assertion that a name refers to a theory, is it not?