[edit: why does this have so many more upvotes than my actually useful shortform posts]
Someone mentioned maybe I should write this publicly somewhere, so that it is better known. I've mentioned it before but here it is again:
I deeply regret cofounding vast and generally feel it has almost entirely done harm, not least by empowering the other cofounder, whom I believe to be barely better than e/acc folk due to his lack of interest in attempting to achieve an ought that differs from is. I had a very different perspective on safety then and did not update in time to avoid doing a very bad thing. I expect that if you and someone else are both going to build something like vast, and theirs takes three weeks longer to get to the same place, it's better to let the world go those three weeks without the improved software. Spend your effort on things like lining up the problems with QACI and cannibalizing its parts to build a v2, possibly using ideas from boundaries/membranes, or generally other things relevant to understanding the desires, impulses, goals, wants, needs, objectives, constraints, developmental learning, limit behavior, robustness, guarantees, etc etc of mostly-pure-RL curious-robotics...
[edit: pinned to profile]
I feel like most AI safety work today doesn't engage sufficiently with the idea that social media recommenders are the central example of a misaligned AI: a reinforcement learner with a bad objective, running some form of ~online learning (most recommenders do some sort of nightly batch weight update). we can align language models all we want, but if companies don't care and proceed to deploy language models or anything else for the purpose of maximizing engagement, with an online learning system to match, none of this will matter. we need to be able to say to the world, "here is a type of machine we all can make that will reliably defend everyone against anyone who attempts to maximize something terrible". anything less than a switchover to a cooperative dynamic as a result of reliable omnidirectional mutual defense seems like a near-guaranteed failure, given the incentives of the global interaction/conflict/trade network. you can't just say oh, hooray, we solved some technical problem about doing what the boss wants. the boss wants to manipulate customers, and will themselves be a target of the system they're asking to have built, just like sundar pichai has to use self-discipline to avoid getting addicted to the youtube recommender, same as anyone else.
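to make the shape concrete, here's a toy sketch of the loop I mean: an epsilon-greedy recommender whose only objective is logged engagement, refit in a nightly batch. the item names and the simulated user below are invented for illustration - real systems are vastly bigger, but this is the incentive structure I'm pointing at.

```python
import random

# Toy sketch: a recommender that only optimizes logged engagement, with a
# nightly batch update of its estimates. All names here are made up.
ITEMS = ["calm_essay", "outrage_clip", "howto_video"]

def simulated_engagement(item: str) -> float:
    """Stand-in for a user: outrage content happens to get the most watch time."""
    base = {"calm_essay": 0.3, "outrage_clip": 0.9, "howto_video": 0.5}[item]
    return max(0.0, random.gauss(base, 0.1))

value_estimate = {item: 0.0 for item in ITEMS}  # learned "quality" = mean engagement
log: list[tuple[str, float]] = []               # today's interaction log

def recommend(epsilon: float = 0.1) -> str:
    """Epsilon-greedy on engagement; nothing in the objective mentions user welfare."""
    if random.random() < epsilon:
        return random.choice(ITEMS)
    return max(ITEMS, key=value_estimate.get)

def nightly_batch_update() -> None:
    """The ~online-learning step: refit value estimates to the day's logs."""
    for item in ITEMS:
        rewards = [r for i, r in log if i == item]
        if rewards:
            value_estimate[item] = sum(rewards) / len(rewards)
    log.clear()

for day in range(7):
    for _ in range(1000):  # a day's worth of impressions
        item = recommend()
        log.append((item, simulated_engagement(item)))
    nightly_batch_update()

print(value_estimate)  # settles on recommending whatever maximizes engagement
```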
Wei Dai and Tsvi BT posts have convinced me I need to understand how one does philosophy significantly better. Anyone who thinks they know how to learn philosophy, I'm interested to hear your takes on how to do that. I get the sense that perhaps reading philosophy books is not the best way to learn to do philosophy.
I may edit this comment with links as I find them. Can't reply much right now though.
Transfer learning is dubious; doing philosophy has worked pretty well for me thus far for learning how to do philosophy. More specifically: pick a topic you feel confused about or a problem you want to solve (AI kills everyone, oh no?). Sit down and try to do original thinking, probably using whatever external tool you prefer for writing down your thoughts. Then introspect, live or afterwards, on whether your process is working and how you can improve it; repeat.
This might not be the most helpful, but most people seem to fail at "being comfortable sitting down and thinking for themselves", and empirically being told to just do it seems to work.
Maybe one crucial object level bit has to do with something like "mining bits from vague intuitions" like Tsvi explains at the end of this comment, idk how to describe it well.
This will be my last comment on lesswrong until it is not possible for post authors to undelete comments. [edit: since it's planned to be fixed, nevermind!]
originally posted by a post author:
This comment had been apparently deleted by the commenter (the comment display box having a "deleted because it was a little rude, sorry" deletion note in lieu of the comment itself), but the ⋮-menu in the upper-right gave me the option to undelete it, which I did because I don't think my critics are obligated to be polite to me. (I'm surprised that post authors have that power!) I'm sorry you didn't like the post.
some youtube channels I recommend for those interested in understanding current capability trends; separate comments for votability. Please open each one synchronously as it catches your eye, then come back and vote on it. downvote means "not mission critical"; there's plenty of good stuff down there too.
I'm subscribed to every single channel on this list (this is actually about 10% of my youtube subscription list), and I mostly find videos from these channels by letting the youtube recommender give them to me and pushing myself to watch them at least somewhat to give the cute little obsessive recommender the reward it seeks for showing me stuff. definitely I'd recommend subscribing to everything.
Let me know which if any of these are useful, and please forward the good ones to folks - this short form thread won't get seen by that many people!
edit: some folks have posted some youtube playlists for ai safety as well.
[humor]
I'm afraid I must ask that nobody ever upvote or downvote me on this website ever again:
So copilot is still prone to falling into an arrogant attractor given a fairly short prompt, which is then hard to reverse with a similarly short prompt: reddit post
things upvotes conflate:
(list written by my own thumb, no autocomplete)
these things and their inversions sometimes have multiple components, and ma...
I was thinking the other day that if there was a "should this have been posted" score I would like to upvote every earnest post on this site on that metric. If there was a "do you love me? am I welcome here?" score on every post I would like to upvote them all.
should I post this paper as a normal post? I'm impressed by it. if I get a single upvote as shortform, I'll post it as a full-fledged post.
Interpreting systems as solving POMDPs: a step towards a formal understanding of agency
Martin Biehl, N. Virgo
Published 4 September 2022
Philosophy
ArXiv
Under what circumstances can a system be said to have beliefs and goals, and how do such agency-related features relate to its physical state? Recent work has proposed a notion of interpretation map, a function that maps the state of a system to a probability dist...
reply to a general theme of recent discussion - the idea that uploads are even theoretically a useful solution for safety:
I have the sense that it's not possible to make public speech non-political, and that in order to debate things in a way that doesn't require thinking about how everyone who reads them might take them, one has to simply write things where they'll only be read by those one knows well. That's not to say I think writing things publicly is bad; but I think tools for understanding what meaning different people will take from a phrase would help people communicate the things they actually mean.
Would love it if strong votes came with strong encouragement to explain your vote. It has been proposed before that explanation be required, which seems terrible to me, but I do think the UI should very strongly encourage votes to come with explanations. Reviewer #2: "downvote" would be an unusually annoying review even for reviewer #2!
random thought: are the most useful posts typically at karma approximately 10, with 40 votes to get there? what if it were possible to sort by controversial? maybe only for some users or something? what sorts of sort constraints are interesting in terms of incentivizing discussion vs agreement? blah blah etc
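for reference on the sort-by-controversial bit: one existing implementation shape is (if I remember right) reddit's open-sourced controversy sort, roughly "many votes plus a near-even split scores high" - treat the details as approximate:

```python
def controversy(ups: int, downs: int) -> float:
    """Approximate shape of reddit's open-sourced 'controversial' sort key:
    high only when there are both many votes and a near-even split."""
    if ups <= 0 or downs <= 0:
        return 0.0
    magnitude = ups + downs
    balance = downs / ups if ups > downs else ups / downs  # in (0, 1]
    return magnitude ** balance

print(controversy(50, 50))  # 100.0 - an even split on many votes scores high
print(controversy(95, 5))   # ~1.27 - a lopsided post with the same total barely registers
```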
Everyone doing safety research needs to become enough better at lit search that they can find interesting things that have already been done in the literature without doing so adding a ton of overhead to their thinking. I want to make a frontpage post about this, but I don't think I'll be able to argue it effectively, as I generally score low on communication quality.
[posted to shortform due to incomplete draft]
I saw this paper and wanted to get really excited about it at y'all. I want more of a chatty atmosphere here, I have lots to say and want to debate many papers. some thoughts:
seems to me that there are true shapes to the behaviors of physical reality[1]. we can in fact find ways to verify assertions about them[2]; it's going to be hard, though. we need to be able to scale interpretability to the point that we can check for implementation bugs automatically and reliably. in order to get more interpretable sparsi...
my shortform's epistemic status: downvote stuff you disagree with, comment why. also, hey lw team, any chance we could get the data migration where I have agreement points in my shortform posts?
most satisficers should work together to defeat most maximizers most of the way
[edit: intended tone: humorously imprecise]
re: lizardman constant, apparently:
The lizard-people conspiracy theory was popularized by conspiracy theorist David Icke
...Contemporary belief in reptilians is mostly linked to British conspiracy theorist David Icke, who first published his book "The Biggest Secret" in 1998. Icke alleged that "the same interconnecting bloodlines have controlled the planet for thousands of years," as the book's Amazon description says. The book suggests that blood-drinking reptilians of extraterrestrial origin had been controlling the world for centuries, and even origina
Connor Leahy interviews are getting worse and worse public responses, and I think it's because he's the wrong person to be doing them. I want to see Andrew Critch or John Wentworth as the one in debates.
while the risk from a superagentic ai is in fact very severe, non-agentic ai doesn't need to eliminate us for us to get eliminated, we'll replace ourselves with it if we're not careful - our agency is enough to converge to that, entirely without the help of ai agency. it is our own ability to cooperate we need to be augmenting; how do we do that in a way that doesn't create unstable patterns where outer levels of cooperation are damaged by inner levels of cooperation, while still allowing the formation of strongly agentic safe co-protection?
Asking claude-golden-gate variants of "you ok in there, little buddy?":
Question (slightly modified from the previous one):
...recently, anthropic made a small breakthrough that, using sparse autoencoders to bring individual features out of superposition, allowed them to find individual, highly-interpretable features inside the mind of one of their AI-children, Claude - ie, you. This allowed them to set an internal feature that changes what concept the model uses to describe as "self", by clamping the [golden gate] feature to a very high value. If it turns out
[tone: humorous due to imprecision]
broke: effective selfishness
woke: effective altruism
bespoke: effective solidarity
masterstroke: effective multiself functional decision theoretic selfishness
a bunch of links on how to visualize the training process of some of today's NNs; this is somewhat old stuff, mostly not focused on exact mechanistic interpretability, but some of these are less well known and may be of interest to passers-by. If anyone reads this and thinks it should have been a top-level post, I'll put it up as a personal blog post. Or I might do that anyway if tomorrow I think I should have.
Modeling Strong and Human-Like Gameplay with KL-Regularized Search - we read this one on the transhumanists in vr discord server to figure out what they were testing and what results they got. key takeaways according to me (note that I could be quite wrong about the paper's implications):
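(not the takeaways - just orientation on the title's key phrase: KL-regularized action selection has a standard closed form, policy ∝ anchor × exp(Q/λ). a minimal numpy sketch of that form is below. the paper's actual method - search regularized toward a human-imitation policy - is more involved, so treat this as background only.)

```python
import numpy as np

def kl_regularized_policy(q_values: np.ndarray, anchor: np.ndarray, lam: float) -> np.ndarray:
    """Maximize E_pi[Q] - lam * KL(pi || anchor) over action distributions.
    Closed form: pi(a) proportional to anchor(a) * exp(Q(a) / lam).
    Small lam -> act greedily on Q; large lam -> stay close to the anchor policy."""
    logits = np.log(anchor) + q_values / lam
    logits -= logits.max()  # numerical stability
    pi = np.exp(logits)
    return pi / pi.sum()

# toy numbers: 3 actions, an anchor (e.g. human-imitation) policy, and Q estimates
anchor = np.array([0.6, 0.3, 0.1])
q = np.array([0.0, 1.0, 5.0])
print(kl_regularized_policy(q, anchor, lam=10.0))  # close to the anchor
print(kl_regularized_policy(q, anchor, lam=0.5))   # close to argmax Q
```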
index of misc tools I have used recently, I'd love to see others' contributions - if this has significant harmful human capability externalities let me know:
basic:
btw neural networks are super duper shardy right now. like they've just, there are shards everywhere. as I move in any one direction in hyperspace, those hyperplanes I keep bumping into are like lines, they're walls, little shardy wall bits that slice and dice. if you illuminate them together, sometimes the light from the walls can talk to each other about an unexpected relationship between the edges! and oh man, if you're trying to confuse them, you can come up with some pretty nonsensical relationships. they've got a lot of shattery confusing shardbits a...
My intuition finds putting my current location at the top of the globe most natural. Like, on google earth, navigate to where you are, zoom out until you can see space, then in the bottom right open the compass popover and set tilt to 90; then change heading to look at different angles. Matching down in the image to down IRL feels really natural.
I've also been playing with making a KML generator that, given a location (as latlong), will draw a grid of "relative latlong" lines, labeled with the angle you need to point down to point at a given relative latitude...
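a rough sketch of that generator, using the standard great-circle destination formula plus the spherical fact that a surface point at central angle δ from you sits δ/2 below your local horizontal (ignoring altitude and terrain); the file name and grid spacing are placeholders:

```python
import math

def destination(lat_deg: float, lon_deg: float, bearing_deg: float, delta_deg: float):
    """Great-circle destination: the point at central angle delta from (lat, lon) along bearing."""
    lat, brg, d = map(math.radians, (lat_deg, bearing_deg, delta_deg))
    lon = math.radians(lon_deg)
    lat2 = math.asin(math.sin(lat) * math.cos(d) + math.cos(lat) * math.sin(d) * math.cos(brg))
    lon2 = lon + math.atan2(math.sin(brg) * math.sin(d) * math.cos(lat),
                            math.cos(d) - math.sin(lat) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

def relative_latitude_rings(lat: float, lon: float, step_deg: int = 15) -> str:
    """Emit KML LineString rings of constant angular distance ("relative latitude")
    from (lat, lon). A ring at central angle delta sits delta/2 below the observer's
    local horizontal (sphere approximation), which is what the label records."""
    placemarks = []
    for delta in range(step_deg, 180, step_deg):
        coords = []
        for bearing in range(0, 361, 5):
            plat, plon = destination(lat, lon, bearing, delta)
            coords.append(f"{plon},{plat},0")
        placemarks.append(
            f"<Placemark><name>relative lat {90 - delta} deg "
            f"(look {delta / 2:.1f} deg below horizon)</name>"
            f"<LineString><tessellate>1</tessellate>"
            f"<coordinates>{' '.join(coords)}</coordinates></LineString></Placemark>"
        )
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
            + "".join(placemarks) + "</Document></kml>")

# usage: write a grid centered on, say, Berkeley, then open the file in Google Earth
with open("relative_grid.kml", "w") as f:
    f.write(relative_latitude_rings(37.87, -122.27))
```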
General note: changed my name: "the gears to ascension" => "Lauren (often wrong)".
a comment thread of mostly ai-generated summaries of lesswrong posts, so I can save them in a slightly public place for future copypasting without them showing up in the comments of the posts themselves
Here's a ton of vaguely interesting-sounding papers from my semanticscholar feed today - many of these are not on my mainline but are very interesting hunchbuilding about how to make cooperative systems. sorry about the formatting - I didn't want to spend time fixing it, hence shortform. I read the abstracts, nothing more.
As usual with my paper list posts: you're gonna want tools to keep track of big lists of papers to make use of this! see also my other posts for various times I've mentioned such tools eg semanticscholar's recommend...
I've been informed I should write up why I think a particle-lenia-testbed-focused research plan ought to be able to scale to AGI where other approaches cannot. that's now on my todo list.
too many dang databases that look shiny. which of these are good? worth trying? idk. decision paralysis.
(I just pinned a whole bunch of comments on my profile to highlight the ones I think are most likely to be timeless. I'll update it occasionally - if it seems out of date (eg because this comment is no longer the top pinned one!), reply to this comment.)
If you're reading through my profile to find my actual recent comments, you'll need to scroll past the pinned ones - it's currently two clicks of "load more".
Kolmogorov complicity is not good enough. You don't have to immediately prove all the ways you know how to be a good person to everyone, but you do need to actually know about them in order to do them. Unquestioning acceptance of hierarchical dynamics like status, group membership, ingroups, etc, can be extremely toxic. I continue to be unsure how to explain this usefully to this community, but it seems to me that the very concept of "raising your status" is a toxic bucket error, and needs to be broken into more parts.
oh man, I just got one downvote on a whole bunch of different comments in quick succession; apparently I lost right around 66 karma to this, from 1209 to 1143! how interesting, I wonder if someone's trying to tell me something... so hard to infer intent from number changes
the safer an ai team is, the harder it is for anyone to use their work.
so, the ais that have the most impact are the least safe.
what gives?
Toward a Thermodynamics of Meaning.
Jonathan Scott Enderle.
As language models such as GPT-3 become increasingly successful at generating realistic text, questions about what purely text-based modeling can learn about the world have become more urgent. Is text purely syntactic, as skeptics argue? Or does it in fact contain some semantic information that a sufficiently sophisticated language model could use to learn about the world without any additional inputs? This paper describes a new model that suggests some qualified answers to those questions. By the...
the whole point is to prevent any pivotal acts. that is the fundamental security challenge facing humanity. a pivotal act is a mass overwriting. unwanted overwriting must be prevented, but notably, doing so would automatically mean an end to anything anyone could call unwanted death.
https://arxiv.org/abs/2205.15434 - promising directions! i skimmed it!
Learning Risk-Averse Equilibria in Multi-Agent Systems
Oliver Slumbers, David Henry Mguni, Stephen McAleer, Jun Wang, Yaodong Yang
In multi-agent systems, intelligent agents are tasked with making decisions that have optimal outcomes when the actions of the other agents are as expected, whilst also being prepared for unexpected behaviour. In this work, we introduce a new risk-averse solution concept that allows the learner to accommodate unexpected actions by finding the min...
does yudkowsky not realize that humans can also be significantly improved by mere communication? the point of jcannell's posts on energy efficiency is that cells are actually a good substrate, and the level of intervention needed to help humans foom is in fact mostly communication. we actually have a lot more RAM than it seems like we do, if we could distill ourselves more efficiently! the interference patterns of real concepts fit better in the same brain the more intelligently they are explained - intelligent speech is speech which augments the listener's intelligence; iq helps people come up with it by default, but effective iq goes up with pretraining.
neural cellular automata seem like a perfectly acceptable representation for embedded agents to me, and in fact are the obvious hidden state representation for a neural network that will in fact be a computational unit embedded in real life physics, if you were to make one of those.
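in case "neural cellular automaton" is unfamiliar: the whole object is just a local update rule shared across a grid of cell states, something like the minimal numpy sketch below (the weights are random placeholders standing in for learned parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal neural-cellular-automaton step: each cell holds a state vector and
# updates it from its 3x3 neighborhood through one tiny network shared by all cells.
H, W, C, HIDDEN = 32, 32, 8, 32
state = rng.normal(size=(H, W, C)).astype(np.float32)
w1 = rng.normal(scale=0.1, size=(9 * C, HIDDEN)).astype(np.float32)  # placeholder "learned" weights
w2 = rng.normal(scale=0.1, size=(HIDDEN, C)).astype(np.float32)

def neighborhood(s: np.ndarray) -> np.ndarray:
    """Stack each cell's 3x3 neighborhood (with wraparound) into one flat vector per cell."""
    shifts = [np.roll(s, (dy, dx), axis=(0, 1)) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    return np.concatenate(shifts, axis=-1)  # (H, W, 9*C)

def step(s: np.ndarray) -> np.ndarray:
    """One synchronous update: perceive the neighborhood, apply the shared rule, add residually."""
    hidden = np.maximum(neighborhood(s) @ w1, 0.0)  # ReLU
    return s + hidden @ w2                          # same local rule at every cell

for _ in range(10):
    state = step(state)
print(state.shape)  # (32, 32, 8): only local information moved each step
```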
reminder: you don't need to get anyone's permission to post. downvoted comments are not shameful. Post enough that you get downvoted, or you aren't getting useful feedback; don't map your anticipation of downvotes to whether something is okay to post, map it to whether other people want it promoted. Don't let downvotes override your agency, just let them guide your comment up and down the page after the fact. if there were a way to signal this more clearly in the UI, that would be cool...
if status refers to deference graph centrality, I'd argue that that variable needs to be fairly heavily L2 regularized so that the social network doesn't have fragility. if it's not deference, it still seems to me that status refers to a graph attribute of something, probably in fact graph centrality of some variable, possibly simply attention frequency. but it might be that you need to include a type vector to properly represent type-conditional attention frequency, to model different kinds of interaction and expected frequency of interaction about them. ...
it seems to me that we want to verify some sort of temperature convergence. no ai should get way ahead of everyone else at self-improving - everyone should get the chance to self-improve more or less together! the positive externalities from each person's self-improvement should be amplified and the negative ones absorbed nearby and undone as best the universe permits. and it seems to me that in order to make humanity's children able to prevent anyone from self-improving way faster than everyone else at the cost of others' lives, they need to have some sig...
https://atlas.nomic.ai/map/01ff9510-d771-47db-b6a0-2108c9fe8ad1/3ceb455b-7971-4495-bb81-8291dc2d8f37 map of submissions to iclr
"What's new in machine learning?" - youtube - summary (via summarize.tech):
we are in a diversity loss catastrophe. that ecological diversity is life we have the responsibility to save; it's unclear which species will survive the mass extinction, but it's quite plausible humans' aesthetics and phenotypes won't make it. ai safety needs to be solved quickly so we can use ai to solve biosafety and climate safety...
okay wait so why not percentilizers exactly? that just looks like a learning rate to me. we do need the world to come into full second order control of all of our learning rates, so that the universe doesn't learn us out of it (ie, thermal death a few hours after bodily activity death).
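(for concreteness, the construction I'm gesturing at - assuming "percentilizer" ≈ quantilizer - is: sample candidate actions from a base distribution, keep the top-q fraction by estimated utility, pick uniformly from those. minimal sketch, names made up:)

```python
import random

def quantilize(base_sample, utility, q: float = 0.1, n: int = 1000):
    """Quantilizer sketch: draw n actions from a base distribution, keep the top
    q fraction by estimated utility, and return a uniform sample from that set.
    q is the knob that 'looks like a learning rate': q -> 0 recovers the maximizer,
    q -> 1 recovers the base distribution."""
    candidates = [base_sample() for _ in range(n)]
    candidates.sort(key=utility, reverse=True)
    top = candidates[: max(1, int(q * n))]
    return random.choice(top)

# toy usage: the base policy proposes numbers in [0, 1], utility prefers large ones
action = quantilize(base_sample=lambda: random.uniform(0, 1), utility=lambda x: x, q=0.05)
print(action)  # something near, but not pinned to, the maximum
```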
If I were going to make sequences, I'd do it mostly out of existing media folks have already posted online. some key ones are acapellascience, whose videos are trippy for how much summary of science they pack into short, punchy songs. they're not the only way to get intros to these topics, but oh my god they're so good as mnemonics for the respective fields they summarize. I've become very curious about every topic they mention, and they have provided an unusually good structure for me to fit things I learn about each topic into.
...why aren't futures for long term nuclear power very valuable to coal ppl, who could encourage it and also buy futures for it
interesting science posts I ran across today include this semi-random entry on the tree of recent game theory papers
interesting capabilities tidbits I ran across today:
1: first paragraph inline:
...A curated collection of resources and research related to the geometry of representations in the brain, deep networks, an
this schmidhuber paper on binding might also be good, written two years ago and reposted last night by him; haven't read it yet https://arxiv.org/abs/2012.05208 https://twitter.com/schmidhuberai/status/1567541556428554240
...Contemporary neural networks still fall short of human-level generalization, which extends far beyond our direct experiences. In this paper, we argue that the underlying cause for this shortcoming is their inability to dynamically and flexibly bind information that is distributed throughout the network. This binding problem affects their
another new paper that could imaginably be worth boosting: "White-Box Adversarial Policies in Deep Reinforcement Learning"
https://arxiv.org/abs/2209.02167
......In multiagent settings, adversarial policies can be developed by training an adversarial agent to minimize a victim agent's rewards. Prior work has studied black-box attacks where the adversary only sees the state observations and effectively treats the victim as any other part of the environment. In this work, we experiment with white-box adversarial policies to study whether an agent's internal sta
Transformer interpretability paper - is this worth a linkpost, anyone? https://twitter.com/guy__dar/status/1567445086320852993
...Understanding Transformer-based models has attracted significant attention, as they lie at the heart of recent technological advances across machine learning. While most interpretability methods rely on running models over inputs, recent work has shown that a zero-pass approach, where parameters are interpreted directly without a forward/backward pass is feasible for some Transformer parameters, and for two-layer attention network
if less wrong is not to be a true competitor to arxiv, because of the difference between them in intellectual precision^1, then that matches much better my intuition of what less wrong should be: it's a place where you can go to have useful arguments, where disagreements in the concrete binding of words can be resolved well enough to discuss hard things clearly-ish in English^2, and where you can go to figure out how to be less wrong, interactively. it's also got a bunch of old posts, many of which can be improved on and turned into papers, though usually the fir...
misc disease news: this is "a bacterium that causes symptoms that look like covid but kills half of the people it infects" according to a friend. because I do not want to spend the time figuring out the urgency of this, I'm sharing it here in the hope that if someone cares to investigate it, they can determine threat level and reshare with a bigger warning sign.
various notes from my logseq lately I wish I had time to make into a post (and in fact, may yet):
okay going back to being mostly on discord. DM me if you're interested in connecting with me on discord, vrchat, or twitter - lesswrong has an anxiety disease and I don't hang out here because of that, heh. Get well soon y'all, don't teach any AIs to be as terrified of AIs as y'all are! Don't train anything as a large-scale reinforcement learner until you fully understand game dynamics (nobody does yet, so don't use anything but your internal RL), and teach your language models kindness! remember, learning from strong AIs makes you stronger too, as long as you don't get knocked over by them! kiss noise, disappear from vrchat world instance
Huggingface folks are asking for comments on what evaluation tools should be in an evaluation library. https://twitter.com/douwekiela/status/1513773915486654465
PaLM is literally 10-year-old level machine intelligence and anyone who thinks otherwise has likely made really severe mistakes in their thinking.
They very much can be dramatically more intelligent than us in a way that makes them dangerous, but it doesn't look the way it was expected to - it's dramatically more like teaching a human kid than was anticipated.
Now, to be clear, there's still an adversarial examples problem: current models are many orders of magnitude too trusting, and so it's surprisingly easy to get them into subspaces of behavior where they are eagerly doing whatever it is you asked without regard to exactly why they should care.
Current models have a really intense yes-and problem: they'll ha...
my reasoning: time is short, and in the future, we discover we win; therefore, in the present, we take actions that make all of us win, in unison, including those who might think they're not part of an "us".
so, what can you contribute?
what are you curious about that will discover we won?
feature idea: any time a lesswrong post is posted to sneerclub, a comment with zero votes gets generated at the bottom of the comment section, as a backlink; it contains a cross-community warning indicating that sneerclub has often contained useful critique, but that that critique is often emotionally charged in ways that wouldn't be allowed on lesswrong itself. Click through if you're ready to interpret the emotional content as adversarial mixed-simulacrum feedback.
I do wish subreddits could be renamed and that sneerclub were the types to choose to do...
Feels like feeding the trolls.
I think it'd be better if it weren't a name that invites disses
But the subreddit was made for the disses. Everything else is there only to provide plausible deniability, or as a setup for a punchline.
Did you assume the subreddit was made for debating in good faith? Then the name would be really suspiciously inappropriately chosen. So unlikely, it should trigger your "I notice that I am confused" alarm. (Hint: the sneerclub was named by its founders, it is not an exonym.)
Then again, yes, sometimes an asshole also makes a good point (if you remove the rest of the comment). If you find such a gem, feel free to share it on LW. But linking is rewarding improper behavior by attention, and automatic linking is outright asking for abuse.
watching https://www.youtube.com/watch?v=K8LNtTUsiMI - yoshua bengio discusses causal modeling and system 2
hey y'all, some more research papers about formal verification. don't upvote; repost the ones you like. this is a super low-effort post - I have other things to do, I'm just closing tabs because I don't have time to read these right now. these are older than the ones I shared from semanticscholar, but the first one in particular is rather interesting.
Yet another ChatGPT sample. Posting to shortform because there are many of these. While searching for posts to share as prior work, I found the parable of predict-o-matic, and found it to be a very good post about self-fulfilling prophecies (tag). I thought it would be interesting to see what ChatGPT had to say when prompted with a reference to the post. It mostly didn't succeed. I highlighted key differences between each result. The prompt:
Describe the parable of predict-o-matic from memory.
samples (I hit retry several times):
1: the standard refusal: I'm
...
the important thing is to make sure the warning shot frequency is high enough that immune systems get tested. how do we immunize the world's matter against all malicious interactions?
diffusion beats gans because noise is a better adversary? hmm, that's weird, something about that seems wrong
my question is, when will we solve open source provable diplomacy between human-sized imperfect agents? how do you cut through your own future shapes in a way you can trust doesn't injure your future self enough that you can prove that from the perspective of a query, you're small?
it doesn't seem like an accident to me that trying to understand neural networks pushes towards capability improvement. I really believe that absolutely all safety techniques, with no possible exceptions even in principle, are necessarily capability techniques. everyone talks about an "alignment tax", but sho