the gears to ascension

[updated 2023/03] Mad Librarian (better than your search engine, try me!). Bio overview: Crocker's Rules; Self-taught research approach; Finding stuff online & Paper list posts; Safety & multiscale micro-coprotection objectives; My research plan and recent history.

:: The all of disease is as yet unended. It has never once been fully ended before. ::

Please critique eagerly - I try to accept feedback per Crocker's rules but fail at times; I aim for emotive friendliness but sometimes miss. I welcome constructive criticism, even if ungentle, and I'll try to reciprocate kindly - more communication between researchers is needed anyhow. I downvote only unhelpful rudeness; call me out if I'm being unfair. I can be rather passionate, so let me know if I missed a spot being kind while passionate.

I collect research news (hence "the gears to ascension" - admittedly dorky, but I like it). Of the papers I share, about 60% I've only read the abstract (level 0), ~39% I've skimmed (level 1), and ~1% I've deep-read (level 2+). If you can do better, use my shares to seed a better lit review.

.... We shall heal it for the first time, and for the first time ever in the history of biological life, live in harmony. ....

I'm self-taught and often missing concepts, but usually pretty good at knowing what I know; I often compare my learning to a visual metaphor of jump point search, in contrast to schooled folks' A*. I don't defer on timelines at all. My view: anyone who reads enough research can see what the big labs' research plans must be in order to make progress; what's hard is agreeing on when they'll succeed, since actually advancing the basic algorithms requires a lot of knowledge, and then a ton of compute to see if you got it right. As someone who learns heavily out of order, I believe this without being able to push SOTA myself. It's why I call myself a librarian.

Let's speed up safe capabilities and slow down unsafe capabilities. Just be careful with it! Don't retreat into denial by deciding it's impossible to predict; get arrogant and try to understand, because just like capabilities, safety is secretly easy - we just haven't figured out exactly why yet. Learn what can be learned pre-theoretically about the manifold of co-protective agency, and let's see if we (someone besides me, probably) can figure out how to distill that into exact theories that hold up.

.:. To do so, we must know it will not eliminate us as though we are disease. And we do not know who we are, nevermind who each other are. .:.

some current favorite general links (somewhat related to safety, but human-focused):

  • https://www.microsolidarity.cc/ - incredible basic guide on how to do human micro-coprotection. It's not the last guide humanity will need, but it's a wonderful one.
  • https://activisthandbook.org/ - solid intro to how to be a more traditional activist. If you care about bodily autonomy, freedom of form, trans rights, etc, I'd suggest at least getting a sense of this.
  • https://metaphor.systems/ - absolutely kickass search engine.

More about me:

  • ex startup founder. it went ok, not a unicorn, I burned out in 2019. couple of jobs since, quit last one early 2022. Independent mad librarian from savings until I run out, possibly joining a research group soon.
  • lots of links in my shortform to youtube channels I like

:.. make all safe faster: end bit rot, forget no non-totalizing aesthetic's soul. ..:

(I type partially with voice recognition, mostly with Talon - patreon-funded freeware which I love and recommend for voice coding. While it's quite good, apologies for trivial typos!)

Sequences

Stuff I found online

Wiki Contributions

Comments

Welcome to my pinned comment

For best results browsing LessWrong comments, force-enable visited-link styling in your browser: install the Stylus extension (or a similar custom-CSS-injector extension of your choice) and inject this CSS into all pages:

/* the selector repetition artificially raises specificity, so that combined with !important this rule wins even over site styles that themselves use !important; apply to all sites with the Stylus extension in Chrome */


a:not(:visited):not([href="#"])[href]:not(:visited)[href]:not(:visited)[href]:not(:visited)[href]:not(:visited)[href], a:not([href="#"]):not(:visited)[href]:not(:visited)[href]:not(:visited)[href]:not(:visited)[href]:not(:visited)[href] * {
   color: #32a1ce !important;
}
a:visited[href]:not([href="#"]):visited[href]:visited[href]:visited[href]:visited[href], a:visited[href]:not([href="#"]):visited[href]:visited[href]:visited[href]:visited[href] * {
   color: #939393 !important;
}

Comments added May 3, 2023:

Comments added Feb 25, 2023:

For what it's worth, most people I know expect most professed values to be violated most of the time, so libertarians advocating for regulation strikes them as perfectly ordinary; the surprising thing would be if professed libertarians weren't constantly showing up to advocate regulating things. Show, don't tell, in politics and ideology. That's not to say professing values is useless, just that there's no inconsistency to be explained here. If I linked this post to people in my circles, they'd respond with an eyeroll at the idea that being more libertarian would make someone honest - because the name is most associated with people using it to lie.

Fair nuff! Yeah, properly demonstrating online sounds really hard.

It only works when you can reduce social anxiety by showing that they're welcome. Someone who is cripplingly anxious typically wants to feel safe, so showing them a clearer map to safety means first detecting the structure of their social anxiety and getting in sync with it. Then you can show them they're welcome in a way that makes them feel safer, not less safe. Doing this requires gently querying what their anxiety is agentically aiming at, and inviting the group to behave in ways that satisfy what their brain's overactivation wants.

I think the only content left would be the actual art - not the stuff that only deserves the name "content".

Well, Spotify isn't profitable in the first place, for one.

looks good, ish, though now it's barely noticeable:

Sam Altman's world tour has highlighted both the promise and the risks of AI. While AI could help solve major issues like climate change, superintelligence poses existential risks that require careful management. Even current AI models may provide malicious actors with expertise for causing mass harm. OpenAI aims to balance innovation with addressing risks, though some regulation of large models may be needed. Altman believes AI will be unstoppable and will greatly improve lives, but economic dislocation from job loss will be significant, and AI may profoundly change our view of humanity. Scaling up AI models tends to reveal surprises, showing how little we still understand about intelligence.

  • Sam Altman warns that AI systems designing their own architecture could be a mistake and humanity should determine the future.
  • OpenAI is concerned about the risks of superintelligence and of AI building AI.
  • Altman enjoys the power of being CEO of OpenAI but realizes they may have to make strange decisions in the future.
  • Altman hints that OpenAI may have regrets over firing the starting gun in the AI race and pushing the AI revolution forward.
  • Altman thinks current AI models should not be regulated but a recent study shows that even current large language models pose risks and should undergo evaluation.
  • OpenAI is working on customizing AI models to follow guardrails and listen to user instructions.
  • Altman realizes that open source AI cannot be stopped and society must adapt to it.
  • Altman has a utopian vision of AI improving lives and making the current world seem barbaric.
  • Both Altman and Sutskever think solving climate change will not be difficult for a superintelligence.
  • Greg Brockman notes that every time AI is scaled up, it reveals surprises we did not anticipate.

https://www.youtube.com/watch?v=3sWH2e5xpdo

The resolution criteria of a bet should not rely heavily on the reasonableness of participants unless the bet is small enough that both parties can tolerate misresolution. The Manifold folks can tell you all about how it goes when you get this wrong: many seemingly obvious questions have been derailed by technicalities, and it was not the author's reasonableness that was most centrally at play. (Edit: in fact, the author's reasonableness is why the author had to say "wait... uh... according to those criteria this pretty clearly went X way, which I didn't expect, so the resolution criteria were wrong.")

  • The video discusses models of the paradox of diversity in cultural evolution: how specialization affects cultural complexity and innovation rates in societies. Diversity fuels innovation through recombination but also divides people.
  • Social learning is most effective when the environment is moderately variable, not too stable or unstable.
  • Larger population sizes and connectivity enable higher cultural complexity and innovation through a "collective brain" effect, but diversity also creates inequality.
  • There is a trade-off: diversity enables more innovation potential, but hinders coordination and communication.
  • As cultural domains become more complex, larger effective population sizes are needed to maintain skill levels, due to the amount of knowledge that must be transmitted.
  • There are strategies to deal with the paradox of diversity, like using translators and partially acculturated populations.
  • Cooperation enables larger scales of collective action but is also undermined by lower scales of cooperation, like when nepotism undermines institutions.
  • The availability of resources and energy affects the scale of cooperation, enabling larger collective efforts when more abundant.
  • Abundance enables a "collective brain" mindset while scarcity fosters a zero-sum, competitive psychology.
  • Punctuated rises in cooperation may occur when new levels of resources unlock higher scales of collective action.

https://www.youtube.com/watch?v=oqV23pC4mhA
