the gears to ascension

[updated 2023/03] Mad Librarian. Bio overview: Crocker's Rules; Self-taught research approach; Finding stuff online & Paper list posts; Safety & multiscale micro-coprotection objectives; My research plan and recent history.

:: The all of disease is as yet unended. It has never once been fully ended before. ::

Please critique eagerly - I try to accept feedback (Crocker's rules) but fail at times; I aim for emotive friendliness but sometimes miss. I welcome constructive criticism, even if ungentle, and I'll try to reciprocate kindly. More communication between researchers is needed, anyhow. I downvote only unhelpful rudeness; call me on it if I'm unfair. I can be rather passionate; let me know if I miss a spot being kind while passionate.

I collect research news (hence "the gears to ascension", admittedly dorky but I like it). About 60% of the papers I share I've only read the abstract of, i.e. level 0; 39%ish I've level-1 skimmed; 1%ish I've level-2+ deep-read. If you can do better, use my shares to seed a better lit review.

.... We shall heal it for the first time, and for the first time ever in the history of biological life, live in harmony. ....

I'm self-taught and often missing concepts, but usually pretty good at knowing what I know; I often compare my learning to a visual metaphor of jump point search, in contrast to schooled folks' A*. I don't defer on timelines at all: my view is that anyone who reads enough research can see what the big labs' research plans must be in order to make progress; it's just not easy to agree on when they'll succeed, and it takes a lot of knowledge to actually make progress on the basic algorithms, and then a ton of compute to see if you did it right. But as someone who learns heavily out of order, I believe this without being able to push SOTA myself. It's why I call myself a librarian.

Let's speed up safe capabilities and slow down unsafe capabilities. Just be careful with it! Don't get yourself in denial thinking it's impossible to predict; get arrogant and try to understand, because just like capabilities, safety is secretly easy; we just haven't figured out exactly why yet. Learn what can be learned pre-theoretically about the manifold of co-protective agency, and let's see if we (someone besides me, probably) can figure out how to distill that into exact theories that hold up.

.:. To do so, we must know it will not eliminate us as though we are disease. And we do not know who we are, nevermind who each other are. .:.

Some current favorite general links (somewhat related to safety, but human-focused):

  • https://www.microsolidarity.cc/ - an incredible basic guide on how to do human micro-coprotection. It's not the last guide humanity will need, but it's a wonderful one.
  • https://activisthandbook.org/ - a solid intro to being a more traditional activist. If you care about bodily autonomy, freedom of form, trans rights, etc., I'd suggest at least getting a sense of this.
  • https://metaphor.systems/ - an absolutely kickass search engine.

More about me:

  • Ex startup founder. It went okay, not a unicorn; I burned out in 2019. A couple of jobs since; I quit the last one in early 2022. Independent mad librarian living off savings until I run out, possibly joining a research group soon.
  • Lots of links in my shortform to YouTube channels I like.

:.. make all safe faster: end bit rot, forget no non-totalizing aesthetic's soul. ..:

(I type partially with voice recognition, mostly with Talon, Patreon-funded freeware which I love and recommend for voice coding; while it's quite good, apologies for trivial typos!)

Sequences

Stuff I found online

Wiki Contributions

Comments

a definition is an assertion that a name refers to a theory, is it not?

https://www.lesswrong.com/posts/uA4Dmm4cWxcGyANAa/x-distracts-from-y-as-a-thinly-disguised-fight-over-group

  • The argument that concerns about future AI risks distract from current AI problems does not make logical sense when analyzed directly, as concerns can complement each other rather than compete for attention.
  • The real motivation behind this argument may be an implicit competition over group status and political influence, with endorsements of certain advocates seen as wins or losses.
  • Advocates for AI safety and those for addressing current harms are not necessarily opposed and could find areas of agreement like interpretability issues.
  • AI safety advocates should avoid framing their work as more important than current problems, or arguing that resources should shift away from them, as this can antagonize allies.
  • Both future risks and current harms deserve consideration and efforts to address them can occur simultaneously rather than as a false choice.
  • Concerns over future AI risks come from a diverse range of political ideologies, not just tech elites, showing it is not a partisan issue.
  • Cause prioritization aiming to quantify and compare issues can seem offensive but is intended to help efforts have the greatest positive impact.
  • Rationalists concerned with AI safety also care about other, less consequential issues, showing an ability to support multiple related causes.
  • Framing debates as zero-sum competitions undermines potential for cooperation between groups with aligned interests.
  • Building understanding and alliances across different advocacy communities could help maximize progress on AI and its challenges.

https://www.lesswrong.com/posts/nt8PmADqKMaZLZGTC/inside-views-impostor-syndrome-and-the-great-larp

  • Experts like Yoshua Bengio have deep mental models of their field that allow them to systematically evaluate new ideas and understand barriers, while most others lack such models and rely more on trial and error.
  • Impostor syndrome may be correct in that most people genuinely don't have deep understanding of their work in the way experts do, even if they are still skilled compared to others in their field.
  • Progress can still be made through random experimentation if a field has abundant opportunities and good feedback loops, even without deep understanding.
  • Claiming nobody understands anything provides emotional comfort but isn't true - understanding varies significantly between experts and novices.
  • The real problem with impostor syndrome is the pressure to pretend to understand more than one does.
  • People should be transparent about what they don't know and actively work to develop deeper mental models through experience.
  • The goal should be learning, not just obtaining credentials, by paying attention to what works and debugging failures.
  • Have long-term goals and evaluate work in terms of progress towards those goals.
  • Over time, actively working to understand one's field leads to developing expertise rather than feeling like an impostor.
  • Widespread pretending of understanding enables a "civilizational LARP" that discourages truly learning one's profession.

A comment thread of mostly AI-generated summaries of LessWrong posts, so I can save them in a slightly public place for future copy-pasting without showing up in the comments of the posts themselves.

Coming back to this: I claim that when we become able to unify the attempted definitions, it will become clear that consciousness is a common, easily-achieved-by-accident information-dynamics phenomenon, but that agency, while achievable in software, is not easily achieved by accident.

Some definition attempts that I don't feel hold up to scrutiny right now, but which appear to me to be low-scientific-quality sketches resembling what I think the question will resolve to later:

A GPU contains 2.5 petabytes of data if you oversample its wires enough. If you count every genome in the brain, it easily contains that much. My point being: I agree, but I also see how someone could come up with a huge number like that and not be totally locally wrong, just highly misleading.
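As a back-of-envelope check of the "every genome in the brain" count - the cell count, genome length, and 2-bits-per-base encoding below are my own rough assumptions, not figures from the original comment:

```python
# Back-of-envelope: how much raw data is "every genome in the brain"?
# All three inputs are rough assumptions, not the comment's figures.

CELLS_IN_BRAIN = 1.7e11        # ~86e9 neurons plus roughly as many glia
BASE_PAIRS_PER_GENOME = 3.1e9  # approximate human genome length
BITS_PER_BASE = 2              # 4 possible bases -> 2 bits, ignoring diploidy and compression

bytes_per_genome_copy = BASE_PAIRS_PER_GENOME * BITS_PER_BASE / 8
total_bytes = CELLS_IN_BRAIN * bytes_per_genome_copy

print(f"one genome copy: {bytes_per_genome_copy / 1e9:.2f} GB")  # ~0.78 GB
print(f"all brain cells: {total_bytes / 1e18:.0f} EB")           # ~130 exabytes, far beyond 2.5 PB
```

Even with these crude inputs the total lands in the exabyte range, which is how a headline figure like "petabytes in the brain" can be locally defensible while still being highly misleading about what the brain usefully stores.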

no, you cannot. ducks cannot be moved; ducks are born, never move, and eventually crystallize into a duck statue after about 5 years of life standing in one spot.

52221

@Jim Fisher what's your reasoning for removing the archive.org links?

As far as I know, there has never been a society that both scaled and durably resisted command-power being sucked into a concentrated authority bubble, whether that command-power/authority was tokenized via rank insignia or via numerical wealth ratings. Building a large-scale society of hundreds of millions to billions that can coordinate, synchronize, keep track of each other's needs and wants, fulfill the fulfillable needs and most of the wants, and nevertheless retain the benefits of giving both humans and nonhumans significant slack, as the best designs for medium-scale societies of single to tens of millions (like indigenous governance) do and did, is an open problem. I have my preferences for which areas of thought are promising, of course.

Structuring the numericalization of which sources of preference-statement-by-a-wanting-being get interpreted as commands by the people, motors, and machines in the world appears to me to inline the alignment problem and generalize it beyond AI. It seems to me right now that this is the perspective in which "we already have unaligned AI" makes the most sense - what is coming is then more powerful unaligned AI - and it seems to me that promising movement on aligning AI with moral cosmopolitanism will likely be portable back into this more general version. Right now, the competitive dynamics of markets - where purchasers typically sort offerings by some combination of metrics centered on price - create dynamics where the sellers who can produce things most cheaply in a given area win. Because of monopolization and the externalities it makes tractable, the organizations most able to sell services involving the work of many AI research workers and the largest compute clusters are somewhat concentrated; the more cheaply implementable AI systems are in more hands, but most of those hands are the ones most able to find vulnerabilities in purchasers' decision-making and use them to extract numericalized power coupons (money).

It seems to me that ways to solve this would involve things that are already well known: if the very well paid workers at major AI research labs could find it in themselves to unionize, they might be more able to say no when their organizations' command structure has misplaced incentives stemming from those organizations' stock contract owners' local incentives. But I don't see a quick shortcut around it, and it doesn't seem as useful as technical research on how to align things like the profit motive with cosmopolitan values, e.g. via mechanisms like Dominant Assurance Contracts.
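Since Dominant Assurance Contracts come up here, a minimal toy sketch of Tabarrok's refund-bonus mechanism; the class name, numbers, and pledgers below are hypothetical, chosen purely for illustration, not taken from any real contract:

```python
from dataclasses import dataclass

@dataclass
class DominantAssuranceContract:
    """Toy model of a dominant assurance contract.

    Pledgers either get the public good (threshold met) or their money
    back plus a refund bonus (threshold missed), so pledging is meant
    to dominate abstaining. Illustrative numbers only.
    """
    threshold: float      # total pledges needed to fund the project
    refund_bonus: float   # paid to each pledger if the threshold is missed

    def settle(self, pledges: dict[str, float]) -> dict[str, float]:
        """Return each pledger's net cash flow (negative = money spent)."""
        if sum(pledges.values()) >= self.threshold:
            # Project funded: pledges are collected, the good gets produced.
            return {name: -amount for name, amount in pledges.items()}
        # Threshold missed: full refund plus the bonus for having pledged.
        return {name: self.refund_bonus for name in pledges}

# Example: threshold missed, so every pledger walks away with the bonus.
dac = DominantAssuranceContract(threshold=1000.0, refund_bonus=5.0)
print(dac.settle({"alice": 300.0, "bob": 200.0}))  # {'alice': 5.0, 'bob': 5.0}
```

The design point is the failure branch: because a missed threshold still pays out the bonus, pledging is intended to be at least as good as abstaining regardless of what other contributors do, which is what makes it "dominant".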

I have the sense that it's not possible to make public speech non-political, and that in order to debate things without having to think about how everyone who reads them might take them, one has to simply write in places where they'll only be read by people you know well. That's not to say I think writing things publicly is bad; but I think tools for understanding what meaning different people will take from a phrase would help people communicate the things they actually mean.
