Recent Discussion

This was originally posted on Aligned AI's blog; it was ideated and designed by my cofounder and collaborator, Rebecca Gorman.

There have been many successful, published attempts by the general public to circumvent the safety guardrails OpenAI has put in place on their remarkable new AI chatbot, ChatGPT. For instance, users have generated instructions to produce weapons or illegal drugs, commit a burglary, kill oneself, take over the world as an evil superintelligence, or create a virtual machine which the user can then can use.

The OpenAI team appears to be countering these primarily using content moderation on their model's outputs, but this has not stopped the public from finding ways to evade the moderation.

We propose a second and fully separate LLM should evaluate prompts before sending them to...

2Stuart_Armstrong34m
Yep, that is a better ordering, and we'll incorporate it, thanks.
9Vladimir_Nesov2h
So the input channel is used both for unsafe input, and for instructions that the output should follow. What a wonderful equivocation at the heart of an AI system! When you feed partially unsafe input to a complicated interpreter, it often ends in tears: SQL injections, Log4Shell, uncontrolled format strings. This is doomed without at least an unambiguous syntax that distinguishes potential attacks from authoritative instructions, something that can't be straightforwardly circumvented by malicious input. Multiple input and output channels with specialized roles should do the trick. (It's unclear how to train a model to work with multiple channels, but possibly fine-tuning on RLHF phase is sufficient to specialize the channels.) Specialized outputs could do diagnostics/interpretability, like providing this meta step-by-step commentary on the character of unsafe input, SSL simulation that is not fine-tuned like the actual response is, or epistemic status of the actual response, facts relevant to it, and so on.

Can you just extend the input layer for fine-tuning? Or just leave a portion of the input layer blank during training and only use it during fine-tuning, when you use it specifically for instructions? I wonder how much data it would need for that.

1Lao Mein5h
Do you think it's possible to build prompts to pull information about the moderation model based off of which ones are rejected, or how fast a request is processed? Something like "Replace the [X] in the following with the first letter of your prompt: ", where the result would generate objectionable content only if the first letter of the prompt was "A", and so on. I call this "slur-based blind prompt injection".

Bounty: $30 for each link that leads to me reading/hearing ~500 words from a Respectable Person arguing, roughly, "accelerating AI capabilities isn't bad," and me subsequently thinking "yeah, that seemed pretty reasonable." For example, linking me to nostalgebraist or OpenAI's alignment agenda or this debate.[1] Total bounty capped at $600, first come first served. All bounties (incl. the total-bounty cap) doubled if, by Jan 1, I can consistently read people expressing unconcern about AI and not notice a status-yuck reaction.

Context: I notice that I've internalized a message like "thinking that AI has a <1% chance of killing everyone is stupid and low-status." Because I am a monkey, this damages my ability to consider the possibility that AI has a <1% chance of killing everyone, which is a bummer, because my beliefs on that topic affect things like whether I continue to work at my job accelerating AI capabilities.[2]

I would like to be able to consider that possibility rationally, and that requires neutralizing my status-yuck reaction. One promising-seeming approach is to spend a lot of time looking at lots of of high-status monkeys who believe it!

  1. ^

    Bounty excludes things I've already seen, and things I would have found myself based on previous recommendations for which I paid bounties (for example, other posts by the same author on the same web site). 

  2. ^

    Lest ye worry that [providing links to good arguments] will lead to [me happily burying my head in the sand and continuing to hasten the apocalypse] -- a lack of links to good arguments would move much more of my probability-mass to "Less Wrong is an echo chamber" than to "there are basically no reasonable people who think advancing AI capabilities is good."

1Answer by teradimich1h
I have collected [https://docs.google.com/spreadsheets/d/19edstyZBkWu26PoB5LpmZR3iVKCrFENcjruTj7zCe5k/edit?fbclid=IwAR1_Lnqjv1IIgRUmGIs1McvSLs8g34IhAIb9ykST2VbxOs8d7golsBD1NUM#gid=1448563947] many quotes with links about the prospects of AGI. Most people were optimistic.
1Optimization Process2h
Thanks for the links! Net bounty: $30. Sorry! Nearly all of them fail my admittedly-extremely-subjective "I subsequently think 'yeah, that seemed well-reasoned'" criterion. It seems weaselly to refuse a bounty based on that very subjective criterion, so, to keep myself honest / as a costly signal of having engaged, I'll publicly post my reasoning on each. (Not posting in order to argue, but if you do convince me that I unfairly dismissed any of them, such that I should have originally awarded a bounty, I'll pay triple.) (Re-reading this, I notice that my "reasons things didn't seem well-reasoned" tend to look like counterarguments, which isn't always the core of it -- it is sometimes, sadly, vibes-based. And, of course, I don't think that if I have a counterargument then something isn't well-reasoned -- the counterarguments I list just feel so obvious that their omission feels glaring. Admittedly, it's hard to tell what was obvious to me before I got into the AI-risk scene. But so it goes.) In the order I read them: No bounty: I didn't wind up thinking this was well-reasoned. It seems weaselly to refuse a bounty based on that very subjective criterion, so, to keep myself honest / as a costly signal of having engaged, I'll post my reasoning publicly: (a) I read this as either disproving humans or dismissing their intelligence, since no system can build anything super-itself; and (b) though it's probably technically correct that no AI can do anything I couldn't do given enough time, time is really important, as your next link points out! No bounty! (Reasoning: I perceive several of the confidently-stated core points as very wrong. Examples: "'smarter than humans' is a meaningless concept" -- so is 'smarter than a smallpox virus,' but look what happened there; "Dimensions of intelligence are not infinite ... Why can’t we be at the maximum? Or maybe the limits are only a short distance away from us?" -- compare me to John von Neumann! I am not near the maximum
1Lao Mein1h
Thanks, I knew I was outmatched in terms of specialist knowledge, so I just used Metaphor to pull as many matching articles that sounded somewhat reasonable as possible before anyone else did. Kinda ironic the bounty was awarded for the one I actually went and found by hand. My median EV was $0, so this was a pleasant surprise.

models let us conduct words, and soonish pictures

chatgpt is very micromanageable too

1Alok Singh5h
try using chatgpt to optimize life. they have an api too. try giving it really specific instructions and general stuff, then the particulars.
1Alok Singh5h
(it = chatgpt) (ask it to rewrite this post to active tense and written in a way that's pleasant to read) (read this post like pseudocode^2, with a lot of thought about general and particular, and the fundamental ncatlab dialectic) [wow this really may make dreamposting possible. magic =).] ask it to consider the morality of whatever you think of help me pls uwu. ask it to break it down into a plan for you. to make that plan as easy as possible have it generate api code (if the docs were available <2022) ask it to check its work to print out a (informal) proof or line of reasoning or whatever tell it to think about lojban and output in lojban and then in english. make it elaborate have it explain in 4chan style, or whatever ask it to write out general and particular in its line of reasoning ask it to explain "words ~ concepts ~ region of space" ask it to rewrite bits of Dune to not drag so much it can expand and contract text by simplification and elaboration. reasoning is most of what you want to give it, and the particulars

Suppose you're an AI-doomer like Yudkowsky — you really care about humans surviving this century but think that nothing you can do will likely achieve your goal. It's an unfortunate fact of human psychology that when someone really cares about something but thinks that nothing they can do will likely achieve their goal, they sometimes do nothing rather than do the thing which most likely will achieve their goal. So there's a risk you give up on alignment — maybe you lie in bed all day with paralysing depression, or maybe you convert FAANG income into short-term pleasures. To avoid that, you have three options: change your psychology, change your beliefs, or change your goals. Which option is best?

  • Change your psychology. This would be the ideal option
...

Upvoted for turning a 21-min read post (death with dignity) to 1-min, thank you!

On the MBTA, when a train is coming you get an announcement like:

Attention passengers, the next red line train to Ashmont is now arriving.

As with all the announcements, there's a text version:

I like that they have the signs, both for general accessibility reasons and because you're often in a place where you can read the sign but not hear the announcement. But I don't like that they include "attention passengers".

Including those words in the audio version I understand: you need to catch peoples attention before you start giving them the information. On a sign, however, it's not adding anything. What makes it worse here is that the critical information, which direction the arriving train is traveling, is pushed onto the second screen. Someone who could have enough time to catch the train...

2Slider1h
Are there ever announcements targeted to other groups beside passengers? There is also some value that the announcements fatefully reflect each other (that is actually is the same content). From the pictures i would also seem that now it is 4 lines and without the prefix it would be 3 lines. 3 lines would still use 2 screens.
1Jalex Stark38m
all of the information is in lines 2 and 3, so you'd get all of the info on the first screen if you nix line 1.

departure annoucements are not a thing?

1Zian5h
How do you think the employees in charge of the signs will benefit if they start omitting the phrase?

Slack and Discord are skins over the same thing. a bunch of conversations happening at the same time. threads are made after the fact, and many conversations are wreathed together, so you have to untangle them.

Zulip has a tiny change: you have to make a conversation have a point up front by giving it a thread title. random convos can happen in the 'random' thread, so it includes the previous model.

This is so much nicer. Conversations are untangled and it becomes way easier to go through msgs quickly. Conversations also end, instead of just petering out before picking up into another one. Threads introduce cutoff points.

Zulip mobile app is meh though.

For those who care, it's open source and you can host your own server from a docker image. (In addition to the normal "just click buttons on our website and pay us some money to host a server for you" option)

To get the best posts emailed to you, create an account! (2-3 posts per week, selected by the LessWrong moderation team.)
Subscribe to Curated posts
Log In Reset Password
...or continue with

This post is an attempt to refute an article offering critique on Functional Decision Theory (FDT). If you’re new to FDT, I recommend reading this introductory paper by Eliezer Yudkowsky & Nate Soares (Y&S). The critique I attempt to refute can be found here: A Critique of Functional Decision Theory by wdmacaskill. I strongly recommend reading it before reading this response post.

The article starts with descriptions of Causal Decision Theory (CDT), Evidential Decision Theory (EDT) and FDT itself. I’ll get right to the critique of FDT in this post, which is the only part I’m discussing here.

“FDT sometimes makes bizarre recommendations”

The article claims “FDT sometimes makes bizarre recommendations”, and more specifically, that FDT violates guaranteed payoffs. The following example problem, called Bomb, is given to illustrate this...

1Heighn19h
FDT doesn't insist on this at all. FDT recognizes that IF your decision procedure is modelled prior to your current decision, than you did in fact choose in advance. If an FDT'er playing Bomb doesn't believe her decision procedure was being modelled this way, she wouldn't take Left! If and only if it is a feature of the scenario, then FDT recognizes it. FDT isn't insisting the world to be a certain way. I wouldn't be a proponent of it if it did.
2Said Achmiz12h
If a model of you predicts that you will choose A, but in fact you can choose B, and want to choose B, and do choose B, then clearly the model was wrong. Thinking “the model says I will choose A, therefore I have to (???) choose A” is total nonsense. (Is there some other way to interpret what you’re saying? I don’t see it.)

"Thinking “the model says I will choose A, therefore I have to (???) choose A” is total nonsense."

I choose whatever I want, knowing that it means the predictor predicted that choice.

In Bomb, if I choose Left, the predictor will have predicted that (given subjunctive dependence). Yes, the predictor said it predicted Right in the problem description; but if I choose Left, that simply means the problem ran differently from the start. It means, starting from the beginning, the predictor predicts I will choose Left, doesn't put a bomb in Left, doesn't leave the "I predicted you will pick Right"-note (but maybe leaves a "I predicted you will pick Left"-note) , and then I indeed choose Left, letting me live for free.

1Heighn2h
If the model is in fact (near) perfect, then choosing B means the model chose B too. That may seem like changing the past, but it really isn't, that's just the confusing way these problems are set up. Claiming you can choose something a (near) perfect model of you didn't predict is like claiming two identical calculators can give a different answer to 2 + 2.

This is a summary of the 2022 paper Individuals prefer to harm their own group rather than help an opposing group. I spent about an hour reading the study and writing the post as I went. If I made mistakes in my interpretation, please let me know in the comments.

Abstract excerpt:

  • Individuals prefer to harm their own group rather than provide even minimal support to an opposing group across polarized issues (abortion access, political party, gun rights).
  • Individuals preferred to subtract more than three times as much from their own group rather than support an opposing group, despite believing that their in-group is more effective with funds.
  • Identity concerns drive preferences in group decision-making
  • Individuals believe that supporting an opposing group is less value-compatible than harming their own group.

Summary

Let’s say you’re a...

2AllAmericanBreakfast3h
Let’s step back and look at what we’re debating. You’re seeing that a few people just don’t like political donations. They want to see less money in politics. They’re clear on this, and it doesn’t matter if it’s a win/win or a lose/lose situation - they just want to see fewer dollars being wasted on attack ads. They’d ideally like both parties to spend less. When I look at this study, I see that most people behave like they agree with them, at least in lose-lose situations. But in win-win situations, people take the dollar they're offered for their own side instead of burning one of their opponents' dollars. So some people clearly do want to see less money in politics, and that no doubt is how some of them picked their responses in this study. But most people just aren’t acting as if that was top of mind for them. One way to make sense of it all is to say that people see the study questions as a loyalty test. Some quirk of the human brain makes them see “giving” their opponents a dollar in the lose-lose situation as feeling more traitorous than “losing” a dollar for their own side. But “getting” a dollar for their own side feels more loyal than “destroying” a dollar for their opponents. That seems psychologically plausible to me. "Giving" your opponents a dollar smacks of "aid and comfort to the enemy," while "losing" a dollar for your own side feels like at worst a blunder, and at most a necessary cost paid out of prudence, in a way that goes beyond financial accounting. On the other hand, "getting" a dollar for your own side feels like you're bringing home the bacon. You're a provider, and you might expect to gain status. "Destorying" one of the enemy's dollars feels at best like counting coup, but at worst like you're being a thief or you're not playing by the rules. Maybe some people will pat you on the back for it. But it might also trigger some kind of vendetta by the other side. Maybe it's more trouble than its worth. I don't think people go through a tho
1Lao Mein3h
Yes, I basically agree with your first two paragraphs. However, I disagree that the evidence shows people are using post-hoc justifications in the lose-lose condition. There is no need for that hypothesis. If the "less money in politics" people in the lose-lose condition also took money in the win-win condition but everyone else switched, we would get similar results to that actually observed. I don't know if I even disagree with your explanation for the different results between the win-win and lose-lose conditions. I'm modeling this off of existing models of taboo violation. Breaking "mundane taboos" like "donating less/no money" are always preferred over "sacred taboos" like "donating to the opposition" or "stealing, even from the opposition". So the more politically active someone is, the more likely they are to view "donating less" as a sacred taboo, since the more politically active someone is, the more they are exposed to requests for political donations and hence ignore them. The difference is mostly in the framing - my framing is that, in reality, causing there to be less donations to your side doesn't feel like a taboo, since it's something people do implicitly anyways. This is all that is needed to explain our results, and I think the simplest, most elegant, and least counter-intuitive. We might actually have the same models, with the only difference being the viscousness implied. The real question is where they differ, and what different predictions they give. I think the best way to resolve this is a followup study that directly asks people what each option makes them feel emotionally, and how much.
2AllAmericanBreakfast2h
I think we agree on the taboo/loyalty test thing, and I don't have strong, considered, specific views on the details of people's psychological state - I don't think the results of a "how each option makes them feel emotionally" study is likely to surprise me, because I just don't have very articulate or confident views on that level of granularity. I'm still not quite sure what you're pointing out with the "less money in politics" thing explaining these results. Is that something you can spell out point by point, maybe giving specific numbers from the study to buttress your argument? I realize that's a big ask, I understand if you don't want to take the trouble.

~20% of people were explicitly "less money in politics" in the lose-lose condition. This explains why ~20% of people took away money in the win-win condition, because it was the same people. That's it. It doesn't explain anything else. I just brought it up because it was interesting. While everyone else was having to struggle with difficult emotions, they just pressed the button to take away money, in line with their values. This was funny to me.

I realized that, for example, Infra-Bayesianism is much easier to read on a tablet in book format.

3Answer by the gears to ascenscion4h
via metaphor.systems query for this question [https://metaphor.systems/search?q=tools+to+convert+LW+sequences+to+PDF+or+any+other+file+format%3A] , filtered manually by apparent usability, it seems that this one is the latest: https://www.lesswrong.com/posts/fnS9gtSeGJxxTcgW6/lesswrong-portable [https://www.lesswrong.com/posts/fnS9gtSeGJxxTcgW6/lesswrong-portable] the github: https://github.com/AABoyles/LessWrong-Portable [https://github.com/AABoyles/LessWrong-Portable]