[updated 2023/03] Mad Librarian (better than your search engine, try me!). Bio overview: Crocker's Rules; Self-taught research approach; Finding stuff online & Paper list posts; Safety & multiscale micro-coprotection objectives; My research plan and recent history.
:: The all of disease is as yet unended. It has never once been fully ended before. ::
Please critique eagerly - I try to follow Crocker's rules and accept feedback, but fail at times; I aim for emotive friendliness but sometimes miss. I welcome constructive criticism, even ungentle, and I'll try to reciprocate kindly; more communication between researchers is needed anyhow. I downvote only unhelpful rudeness; call me out if I'm being unfair. I can be rather passionate, so let me know if I miss a spot of kindness while being passionate.
I collect research news (hence "the gears to ascension" - admittedly dorky, but I like it). For about 60% of the papers I share, I've only read the abstract (level 0); about 39% I've skimmed (level 1); about 1% I've deep-read (level 2+). If you can do better, use my shares to seed a better lit review.
.... We shall heal it for the first time, and for the first time ever in the history of biological life, live in harmony. ....
I'm self-taught and often missing concepts, but usually pretty good at knowing what I know; I compare my learning to a visual metaphor of jump point search, in contrast to schooled folks' A*. I don't defer on timelines at all - my view is that anyone who reads enough research can see what the big labs' research plans must be to make progress; what's hard is agreeing on when they'll succeed. Actually making progress on basic algorithms takes a lot of knowledge, and then a ton of compute to see if you got it right. As someone who learns heavily out of order, I believe this without being able to push SOTA myself. It's why I call myself a librarian.
Let's speed up safe capabilities and slow down unsafe capabilities - just be careful with it! Don't retreat into denial by thinking it's impossible to predict; get arrogant and try to understand, because just like capabilities, safety is secretly easy - we just haven't figured out exactly why yet. Learn what can be learned pre-theoretically about the manifold of co-protective agency, and let's see if we (someone besides me, probably) can figure out how to distill that into exact theories that hold up.
.:. To do so, we must know it will not eliminate us as though we are disease. And we do not know who we are, nevermind who each other are. .:.
Some current favorite general links (somewhat related to safety, but human-focused):
More about me:
:.. make all safe faster: end bit rot, forget no non-totalizing aesthetic's soul. ..:
(I type partly with voice recognition, mostly with Talon, Patreon-funded freeware which I love and recommend for voice coding. While it's quite good, apologies for the occasional trivial typo!)
For what it's worth, I think most people I know expect most professed values to be violated most of the time, so libertarians advocating for regulation strikes them as perfectly ordinary; the surprising thing would be if professed libertarians weren't constantly showing up advocating for regulating things. Show, don't tell, in politics and ideology. That's not to say professing values is useless, just that there's no inconsistency to be explained here. If I linked people in my circles to this post, they'd respond with an eyeroll at the idea that if only they were more libertarian they'd be honest - because the name is most associated with people who use it to lie.
It only works when you can reduce social anxiety by showing that they're welcome. Someone who is cripplingly anxious typically wants to feel safe, so showing them a clearer map to safety means first detecting the structure of their social anxiety and getting in sync with it. Then you can show them they're welcome in a way that makes them feel safer, not less safe. Doing this requires gently querying what their anxiety is agentically aiming at, and inviting the group to behave in ways that satisfy what their brain's overactivation wants.
I think the only content left would be the actual art - not the stuff that only deserves the name "content".
Sam Altman's world tour has highlighted both the promise and risks of AI. While AI could solve major issues like climate change, superintelligence poses existential risks that require careful management. Current AI models may still provide malicious actors with expertise for causing mass harm. OpenAI aims to balance innovation with addressing risks, though some regulation of large models may be needed. Altman believes AI will be unstoppable and will greatly improve lives, but economic dislocation from job loss will be significant, and AI may profoundly change our view of humanity. Scaling up AI models tends to reveal surprises, showing how little we still understand about intelligence.
The resolution criteria of a bet should not rely heavily on the reasonableness of the participants unless the bet is small enough that both parties can tolerate a misresolution. The Manifold folks can tell you all about how it goes when you get this wrong: many seemingly obvious questions have been derailed by technicalities, and it was not the author's reasonableness most centrally at play. (Edit: in fact, the author's reasonableness is why the author had to say "wait... uh... according to those criteria this pretty clearly went x way, which I didn't expect, so the resolution criteria were wrong.")
Welcome to my pinned comment
For best results browsing LessWrong comments, force-enable visited-link styling in your browser: install the Stylus extension (or a similar custom-CSS-injector extension of your choice) and inject this CSS into all pages:
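(The original snippet isn't reproduced here, so as a minimal sketch of what such a rule might look like - the selector scope and the particular color are my assumptions, not the author's actual CSS:)

```css
/* Hypothetical example - not the author's original snippet.
   Forces visited links to render in a distinct color so
   already-read comment threads stand out at a glance.
   Note: browsers only allow color-type properties on :visited. */
a:visited {
  color: #551a8b !important; /* classic "visited link" purple */
}
```

With Stylus, save this as a style applied to all sites (or scoped to lesswrong.com) and visited links will keep their distinct color even where the site's own stylesheet suppresses it.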
Comments added May 3, 2023:
Comments added Feb 25, 2023: