All of Matt Goldenberg's Comments + Replies

Viliam's Shortform

I think that simple might actually be transitive in this case.

Samuel Shadrach's Shortform

It seems like something like "An AI that acts and reasons in a way that most people who are broadly considered moral consider moral" would be a pretty good outcome.

1acylhalide13dFair. Then one more intuition: Assume AI is sufficiently capable it can establish a new world order all by itself. I wouldn't trust most people I otherwise consider moral with such power.
Samuel Shadrach's Shortform

But if your definition of alignment is "an AI that does things in a way such that all humans agree on its ethical choices," I think you're doomed from the start, so this counterintuition proves too much.  I don't think there is an action an AI could take or a recommendation it could make that would satisfy that criterion (in fact, many people would say that the AI by its nature shouldn't be taking actions or making recommendations).

1acylhalide13dOkay. I'd be keen on your definition of alignment. P.S. This discussion which we're having right now is exactly what I'd be keen on, in a compressed fashion, written by an alignment researcher who has anticipated lots of intuitions and counterintuitions.
1acylhalide14dHere's a countering intuition (which is also weak to me, but to show why stronger intuitions are needed): Humans have disagreements on ethics, and have done so for millenia, so they're not 100% aligned.

Hey HS2021,

I want to acknowledge that I have multiple competing goals in engaging here.  I want to engage with you with compassion and understanding.  I also want to do my best to answer honestly and clarify what I see as happening, for both you and others. And finally, I want to personally understand what did happen, clarify whether I'm seeing the organization clearly, and decide whether we need to make changes (or, in the extreme case, whether I need to distance myself from the organization).

So  I likely won't make anyone completely happy with this response, incl... (read more)

Samuel Shadrach's Shortform

Logical uncertainty is hard.  But the intuition that I have is that humans exist, so there's at least a proof of concept for a sort of aligned AGI (although admittedly not a proof of concept for an ASI)

1acylhalide14dThat's weak though, I'm hoping alignment researchers have stronger intuitions than that.
An Observation of Vavilov Day

This was my experience as well. Fasting started out pretty hard for me, but I eventually moved to regular 84-hour fasts for a while.

lc's Shortform

It varies but usually not long. My uninformed guess is that your recent post was deliberately not frontpaged because it's a political topic that could attract non-rationalists to comment and flame in an unproductive manner.

The Machine that Broke My Heart

Is the implication that this story is a rescuer -> victim arc?

2Valentine20dI didn't mean to imply that per se. But yes, I do see that playing a strong role here, and that's why I thought to bring Forrest's article forward here.

I don't think I can provide context for that, but can certainly provide context for people who are considering training here or have concerns about the organization.

From personal experience I can speak to

  • The training at the Vermont branch, MAPLE, and how it compares with your experience at OAK.

  • The things the organization is doing to investigate and improve based on your post and other feedback.

  • The new container and leadership at OAK, and how it's similar to and different from what you describe.

3HS202119d* Does the program structure significantly differ from OAK? Is there separation between staff and participant roles? Is MAPLE practicing informed consent? What about oversight and accountability? What is the onboarding process? * I would like to know what the organization is doing to "investigate" and improve based on my post and other feedback. The last "investigation" conducted by this organization consisted of Soryu sending his girlfriend to sort things out during which she never spoke to me about the events in question. The organization's recent public statement was incredibly disappointing to myself and other former members. When I read this statement these were the things that stood out to me from my pov and the information available to me: distortion of information about my interactions with leadership and interactions with other former members, shifting the blame onto the past trauma of participants rather than acknowledging they have created a VERY high-risk environment that exceeds others intensive training practices, denial of and justification of other serious risk factors, mischaracterization of my prior relationship, denial of knowledge of allegations which I have email records of, lack of transparency, a continued pattern of appointing persons with conflicts of interests to handle grievances, lack of a 3rd party investigation which would generally be expected of any other spiritual community or organization in similar circumstances. Based on this public statement alone I would conclude that my concerns are not being taken seriously as they indicate, there are still serious issues that are actively compromising accountability and growth, protecting Soryu from being accountable to harm caused to past students is still a primary concern for some leaders, and it is uncertain whether or not significant changes will actually happen that ensure current and future participants will be

I'm a current resident at the Vermont branch of Monastic Academy and also happy to talk to people about what's happening and do my best to provide context.

1HS202121dI am curious as to why you feel you can "provide context" for my experience and for events that you were not present for?
Risks from AI persuasion

Apparently I don't understand what you're asking. Do you want some books you can read that take you through the history of direct response marketing?

Risks from AI persuasion

…position, because it had both the biggest sample size & was the most positive result

And it was the only study (unless the other ones that you didn't explicate did as well) that focused on the type of marketing (direct response marketing) that I was referring to.  You certainly could have strawmanned my position by picking a study that was referring to brand advertising, but I would hardly call it a steelman to select the one study that was relevant.

So, where's this literature on how effective and scalable advertising is at manipulating people?

As I said, it's in the robust history of A/B-tested campaigns in this space, with measurable effects on revenue.

2gwern23dWhich history is...? (I now ask for the third time.)
Risks from AI persuasion

It's significant that you had to point to the one positive result you posted about. The rest (Johnson & Johnson, P&G, political campaigns), which had negative results, would all fall much more under brand advertising (of course, brand advertising to direct response marketing is a spectrum, but if you want more clarity on the distinction this article is decent: https://yourbusiness.azcentral.com/importance-promotional-marketing-strategies-13197.html)

5gwern24dYes. Specifically, the significance is that I picked it to steelman your position, because it had both the biggest sample size & was the most positive result I knew of (in a body of negative results), and still didn't support your claims. So, where's this literature on how effective and scalable advertising is at manipulating people?
Risks from AI persuasion

As far as I can tell the studies you mention are all brand advertising, which I agree is not super well supported. I'm referring here to direct response marketing (spam mail, online direct sales ads, etc.), in which the effects of e.g. A/B testing are immediately clear and apparent. The history of that testing is what I'm referring to.
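
To make "immediately clear and apparent" concrete, here's a minimal sketch with made-up numbers (not data from any study discussed in this thread): two variants of a direct response email, each sent to 50,000 addresses, compared on conversions with a standard two-proportion z-test.

```python
from math import sqrt

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test on conversion counts for ad variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b            # per-variant conversion rates
    p_pool = (conv_a + conv_b) / (n_a + n_b)         # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return p_a, p_b, (p_b - p_a) / se

# Hypothetical campaign: 50,000 emails per variant.
p_a, p_b, z = ab_test(conv_a=500, n_a=50_000, conv_b=600, n_b=50_000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}")   # A: 1.00%  B: 1.20%  z = 3.03
```

At direct-mail volumes, even a 0.2 percentage point lift clears conventional significance thresholds, which is the sense in which the effect of a copy change shows up directly in the revenue numbers.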

5gwern24dI'm not sure what you mean by 'brand advertising'. In for example Johnson et al 2017, they are measuring a conversion funnel from 2.2 billion Google ads to final 'conversion' (often but not always a purchase, "Conversions may include purchases, sign-ups, or store location lookup"). That sounds like 'online direct sales ads' to me. You have an online ad, which is directly selling. It's definitely not some vague brand-building ad in a magazine. And the effects are small, and far from 'immediately clear and apparent'.
An Open Letter to the Monastic Academy and community members

I think it's worth noting that the writer here has never been to MAPLE and never talked to Soryu Forall. I think the author is really only qualified to give first-hand opinions on the training and leadership at OAK.

5ChristianKl24dAs the OP points out Soryu asserts that he has an organizational structure that makes him responsible for what happens in his organization. Are you arguing that Soryu is not a guru in the sense he describes in his talk and thus does not carry the responsibilities he describes?
An Open Letter to the Monastic Academy and community members

Note: While I'm a resident at the Vermont branch of Monastic Academy, I'm not representing them here, nor am I stating a final position, as there's obviously ongoing sensemaking and investigation going on.

My understanding of this was that after they found out that a relationship had started between two participants in the training (which is explicitly against the agreement one of the participants signed, although not the OP), they wanted to make sure that the relationship was indeed ethical and consensual (which involved two people getting into a relationsh... (read more)

1HS202122dI can tell you how it played out from my perspective. The man I was in love with came to me and said Soryu asked me to do write this letter stating that this was loving and consensual and we are abiding by the rules of the Monastic container ( all of which was true except for the consent piece) because the board of directors is worried you might sue the organization or speak up publicly (something I had no intention of at the time); he then repeatedly brought this up to me despite my hesitancy and tried to get me to sign this letter. Finally I was told not asked told we would sign this in front of the whole community. I felt extremely pressured both by him and by the community and other leaders to do so. It seems pretty messed up to me that Soryu personally asked the man I was in love with and whom had sexually assaulted me to write this letter and get me to sign it - followed by immediately instructing OAKs leadership to send me away with 24 hrs notice while this man resumed leadership. All of these to my knowledge were decisions made by Soryu and MAPLE leadership NOT OAK's leadership though they are certainly responsible for their participation. Sending this person was problematic for many reasons not only was I more vulnerable to this person because we had fallen in love; already feeling confused about my experience because nobody was talking to me about what happened or available to walk through the incident with me; but this person had more power in the community as the recently removed ED, had been in the community far longer, and as a donor who had pledged 200,000 to the organization which still hadn't been received and whose personal and professional ties were key in the organization receiving a 300,000 grant from BERI that they were being considered for - all of these are power dynamics; and ultimately he stood the most to gain from securing a letter that stated consent. If there is a question about whether an interaction was consensual or not you don't sen
1HS202124dThat maybe your understanding. But that's actually bullshit on so many levels.
Risks from AI persuasion

There aren’t clear examples of easy and scalable ways to influence people.

 

Direct response marketing has a quite robust evidence base that certain types of arguments can scalably persuade people to buy things.  

7gwern24dWhat evidence base did you have in mind? Because there's a pretty robust evidence base that the effects of advertising are tiny [https://www.gwern.net/Ads#rossi-1987-2] and greatly overestimated by less rigorous methods.
A Cautionary Note on Unlocking the Emotional Brain

If some conscious activations the process of consolidating is itself causing "one idea to win... sometimes the wrong one", then trying consolidation on "the feelings about the management of consolidation and its results" seems like it could "meta-consolidate" into a coherently "bad" result.

 

Can you give an example of how this would happen? Do you have examples of it? I think the only way that the process of consolidating can cause one idea to win in the way described is through suppression of a higher level concern.  At some point as you keep going meta there's nowhere left to suppress it.

Frame Control

I've never heard frame control used that way despite being fairly familiar with the modern NLP literature. The first page of Google results also seems to mostly talk about controlling other people's frames.

6ChristianKl1moThere's some sense in the PUA literature (and what comes up at SEO optimized blog posts) that they are written for an audience who's insecure and seeks to learn techniques to gain power over other people. In reality, dealing with one's own issues is often more important for the outcomes that are sought. Frame control in the NLP sense is about things like not letting anything that the other person says trigger you. That's useful in a coaching context for not letting the emotional problems of the coach interfere with the coaching intervention. I have a few times heard stories of therapists getting angry at their patients for something that the patient said. That's behavior I wouldn't expect from anyone I know that's skilled in NLP. Those people are generally in control of their own emotionals well enough to not switch into a state of anger because something triggers them. For using principles such as pacing&leading it's also necessary to have control over the state that you want to apply this towards.
6pjeby1moThat's odd. When I googled "frame control" (prior to my comment) the first result was about programming, the second was this post, and the third was a 14-point article in which most of the illustrative examples were about ways of responding to social bullying, dominance displays, or manipulation of various sorts. That is, frame control as reaction to social maneuvering by others. That's also fairly consistent with things I've previously read, that establish the very first rule of frame control as not letting others trick, trap, or threaten you out of your intended frame for an interaction. And while some works do treat frame control as a zero sum game, the core message of most things I've read have been about internal frame defense and non-zero sum games. For example, one book (literally entitled "Frame Control") notes many times that "basing the strength of your frame on the weakness of others is not a good strategy" and provides quite a lot of exercises that are aimed at changing one's internal beliefs and interpretation of situations, with frequent examples roughly of the form, "don't try to argue, fight, trick, persuade, etc. people - instead just accept what people say and hold to your opinion, instead of being emotionally dependent on others agreeing with you". The type of "frame control" described in this post seems rather the opposite of that!
Frame Control

Maybe the main difference is that the neutral leaders you talk about try to set up frames that their subjects find positively exciting, whereas frame controllers set up frames that are disempowering and make the person smaller? 

Yeah this makes sense.

. I worry that whatever stated mission an organization has cannot be easily compressed into slogans or rituals, and if people have to do these things in order for the organization to work, then maybe it's lacking in authentically mission-driven individuals, and that spells trouble. 

I don't think the p... (read more)

4Lukas_Gloor2moRight, I was strawmanning with that phrasing, sorry. I guess the whole point of the strategy you're describing is that it scales well, and my criticism of it is that it's scaling too quickly, so is at risk of losing nuance. This seems like a spectrum and I happen to be at the extreme end of "if your mission is more complicated than 'make money', you're likely doomed unless you prioritize hiring people with a strong ability to stay on the path/mission." (And for those latter people, activities like the ones you describe wouldn't be necessary.)
Frame Control

In my experience, very good organizations are cult-like in their very strong cultural practices. For instance, I was part of City Year in Boston, which has people wear bright red jackets everywhere, do physical training in the middle of Copley Square every Wednesday, and answer "Fired Up!" when someone asks how they're doing.  You are expected to memorize their values as you do your job.

In my experience the heads of City Year, people like Charlie Rose,  are incredibly good at the thing I'm calling frame control in this post. They make y... (read more)

6HS20212moPITW #159 "This is hard, Be strong" Fellow former City Year Member here who served in Columbia, SC. Reading your comment definitely brought up memories and makes me feel like I need to go back over that experience with a new lense now. City Year was definitely challenging to ones sense of individuality and had a very rigid structure. Yes they have very specific ways of building culture (red jackets, morning chants, PITWs, ect.) That could described as culty and definitely focus on instilling a particular view/set of values - hadn't quite thought about it that way at the time. There is definitely a clear hierarchy in structure and a bit of a glorified image put forward that is umm.. different then the experience. The work and the year also yielded a lot of important lessons. I can totally see how "frame control" showed up with certain leaders. That being said to my knowledge there were also clear agreements being made with consent, organizational and financial transparency, clear codes of conduct, people feel comfortable complaining and giving feedback, and at least within the branch I served at the overall cohort lacked many of the defining features of a cult (i.e the cult personality and many group dynamics). People still maintained a level of individuality and agency even within that and nobody was ever pressured to stay beyond their original commitment of one year. Even then people did not meet resistance if they chose to leave mid contract. Though given a particular leader with narcissistic and charismatic authoritarian qualities (like Soryu) I could totally see how a dynamic could easily become more cult like. There were things City Year was really good at - and then there were things that they really weren't. You mentioned above you think frame control is probably necessary for good leadership - but what if that's based on a cultural script and model of leadership that doesn't actually serve to create a better or more equitable world. What if that model of l
7Lukas_Gloor2moThanks for explaining! You're definitely pointing out a real phenomenon and "skill," but I feel like it's different somehow than the thing aella was gesturing at. Maybe the main difference is that the neutral leaders you talk about try to set up frames that their subjects find positively exciting, whereas frame controllers set up frames that are disempowering and make the person smaller? For instance, I don't necessarily think it's "frame control" when Lucius Malfoy rallies his fellow death eaters around hating Dumbledore. He's just being a good leader. It becomes frame control when he gaslights his underlings and underhandedly blames them for everything that when wrong with his latest plan. But we might just be interpreting the OP differently. I can see why you want to use "frame control" for both the good thing and the neutral thing. Maybe it would be appropriate to coin a different term for the thing aella means. Maybe something like "frame erosion" or "frame distortion" that emphasizes the potential adverse effect on victims when someone uses frame control (a more neutral behavioral strategy under this meaning) in an exploitative and uncaring way. Or maybe another dimension here has to do with consent. If you sign up for an organization that makes you learn special greetings or mantras, you give consent to let yourself be shaped in some kind of cult-like direction. By contrast, in the examples aella talks about, the frame controller starts to get more and more influence over aspects of the person's thinking that seem like they shouldn't be under someone else's influence. On the merits of the type of leadership you describe: I'm skeptical. I worry that whatever stated mission an organization has cannot be easily compressed into slogans or rituals, and if people have to do these things in order for the organization to work, then maybe it's lacking in authentically mission-driven individuals, and that spells trouble. Of course, the counterpoint is "authenticall
Frame Control

But it does not follow from this that you would therefore be right to take this view.

Unless you've solved the is/ought distinction, it doesn't follow from any fact that it's right to take a certain view (at best, you can state that, given a certain set of goals, virtues, etc., different behaviors are more coherent or useful). That's why it's important to state your ethical assumptions/goals up front.

Like what, do you think?

I don't know. From previous comments I think you value truth a lot, but it'd really be better for you to state your values than for me to guess at them.

Frame Control

"Normal" and "sane" contain a bunch of hidden normative claims about your goals. Fwiw I agree that the suggestions on Aella's post go overboard, but if I had endured the abuse she had, maybe I wouldn't.

My point is that without saying something like "I think it's better to have a bit higher chance of being abused and a smaller chance of ignoring good advice," you can't make normative claims: they imply criteria that others may not agree with. It's worth trying to tease out what you're optimizing for with your normative suggestions.

8ChristianKl2moIt seems to me that the key difference between Said and Aella is that Aella basically says: "If you go into a group and interact in an emotional vulnerable way, you should expect receprocity in emotional vulnerability." On the other hand Said says "Don't go into groups and be emotionally vulnerable". Aella is pro-Circling, Said is anti-Circling.
6Said Achmiz2moLike what, do you think? But it does not follow from this that you would therefore be right to take this view. I agree that if your view includes goals like the quoted one, you should make this explicit.
Frame Control

I think both of those are probably good guidelines if your primary goal is to avoid abuse at all costs. They're effective trauma responses. However, they're not actually the best if you have more nuanced goals.

More nuanced goals like what?

I do not have “avoid abuse at all costs” in mind when I suggest such things. Rather, I am recommending general norms of discussion and interaction.

It seems to me that a lot of people, among “rationalists” and so on, do things and behave in ways that (a) make themselves much more vulnerable to abuse and abusers, for no really good reason at all, and (b) themselves constitute questionable behavior (if not “abuse” per se).

My not-so-radical belief is that doing such things is a bad idea.

In any case, the suggestions I lay out have n... (read more)

Frame Control

I think both of those are underselling competent frame control. Good frame controllers are actually competent, can switch between styles of communication depending on the person, and offer genuine value along with the frame they're offering.

Frame Control

Giving a highly memetic name to something, a really compelling object-level mental framework, and putting a personal narrative behind it is a really big deal and actually significantly alters people's thought processes in a way they don't easily detect. I don't think any of you realize how powerful this is, and I'm not actually sure that anyone should do this in any situation.

This is frame control. It's interesting that several commentors have expressed unease about this post because in some sense it's doing the thing it's trying to point out.

1blueiris22moRight -- in my opinion it's better if it's obvious!
Frame Control

Fwiw I think it's entirely possible to just get frame controlled by them using all the "right conversational moves" to push their frames. I don't think there's a set of communication norms that are fully protective against frame control.

Agreed but it seems to me that agreeableness/conflict-avoidance makes you far more susceptible to frame-control. Not that it's the only factor which matters or that a disagreeable person is immune.

Frame Control

Here are a few things I believe:

 

  1. Frame control is definitely real. I think if I were to try to operationalize it, it's something like the ability to influence the ontologies people use and the valence they assign to objects within those ontologies.  This cashes out as influencing how important and virtuous people find certain ideas and actions.
  2. Frame control is probably necessary for good leadership.  A good leader is a Kegan 5 individual who can find the ontology that they can use to educate and motivate Kegan 4 and Kegan 3 underlings in an orga
... (read more)
2Lukas_Gloor2moHm.. The idea that positive leadership also involves frame control is interesting. I never thought of it that way. I suspect that you only get a cult-like group/organization if the leader uses frame control, rather than something with independent-thinking, healthy group members. Maybe good leaders are skilled at something frame-related, but it's not frame control; rather, it's about listening to what people's motivations actually are and then crafting a frame for the group as a whole where people will be motivated to pursue the mission, based on their needs and so on. Maybe this is the same thing you also mean. I guess I just assumed that in the "frame control as bad" connotation, there's something coercive about it where the frame that is imposed over you is actually bad for you and your goals.
4Duncan_Sabien2moI agree with all of this, and just wanted to note that there are Kegan 4 frame-controllers and Kegan 3 frame-controllers, too.
Matt Goldenberg's Short Form Feed

Yes, but people also constantly exchange increased reproductive capacity for love, truth, and beauty (the world would look very different if reproductive capacity was the only terminal value people were optimizing for).  It's not that reproductive capacity isn't a terminal value of humans, it's that it's not the only one, and people make tradeoffs for other terminal values all the time.

Matt Goldenberg's Short Form Feed

I just realized that humans are misaligned mesaoptimizers. Evolution "wanted" us to be pure reproduction maximizers but because of our training distribution we ended up valuing things like love, truth and beauty as terminal values. We're simply misaligned AIs run amok.

Secure homes for digital people

Woah 😳. Didn't even think of this use case for a Blockchain.

Tell the Truth

US government has never taken down an illicit drug ring by breaking Tor itself.

The policy of the US government, when they use exploits like this, is to reconstruct a plausible narrative of how they caught the offender, so that they can continue to use the exploits in the future.

If you look at the number of dark markets that have used Tor and been shutdown by the feds, I don't think it's implausible that Tor is already compromised.

True Stories of Algorithmic Improvement

Moore's law simply means that the 44x-smaller amount of compute is also 11x cheaper, right? Moore's law doesn't make algorithms need less compute; it just lowers the cost of that compute.
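
On my reading of the comment (taking the 44x as the algorithmic reduction in required compute and the 11x as the drop in cost per unit of compute from Moore's law over the same period; the second mapping is my assumption), the two factors multiply in cost terms, but only one changes how much compute is actually needed:

```python
# Hypothetical decomposition for a fixed benchmark task.
compute_then = 1.0        # normalized compute needed with the old algorithms
algorithmic_gain = 44     # better algorithms: ~44x less compute required
hardware_cheapening = 11  # Moore's law over the same period: compute ~11x cheaper per unit

compute_now = compute_then / algorithmic_gain      # Moore's law does not change this line
cost_now = compute_now / hardware_cheapening       # it only changes the price tag

print(f"compute needed: {compute_now:.4f}x")       # 0.0227x
print(f"total cost: {cost_now:.5f}x  (~{algorithmic_gain * hardware_cheapening}x cheaper overall)")
```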

1Measure3moMakes sense.
3Gunnar_Zarncke3moTo some degree, sure.
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I guess it depends on whether you care about evolution's goals or your own.  If the way that evolution did it was to massively change what you care about and what's meaningful after you have children, then it seems it did it in a way that's mind warping.

3James_Miller3mo"warping" means shifting away from the intended shape so since evolution "programed" us to have kids the effect of having kids on the brain should not be considered "mind warping".
Prioritization Research for Advancing Wisdom and Intelligence

But I'd be up for more research to decide if things like that are the best way forward :)

 

And I'd be up for more experiments to see if this is a better way forward.

My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

I suppose one hypothesis here is that having a kid is dangerously mind warping on the same level as psychedelics.

7algekalipso3moThis is substantiated by data in "Logarithmic Scales of Pleasure and Pain [https://forum.effectivealtruism.org/posts/gtGe8WkeFvqucYLAF/logarithmic-scales-of-pleasure-and-pain-rating-ranking-and] " (quote): Birth of children I have heard a number of mothers and father say that having kids was the best thing that ever happened to them. The survey showed this was a very strong pattern, especially among women. In particular, a lot of the reports deal with the very moment in which they held their first baby in their arms for the first time. Some quotes to illustrate this pattern: No luck for anti-natalists [https://qualiacomputing.com/2018/07/23/open-individualism-and-antinatalism-if-god-could-be-killed-itd-be-dead-already/] … the super-strong drug-like effects of having children will presumably continue to motivate most humans to reproduce no matter how strong the ethical case against doing so may be. Coming soon: a drug that makes you feel like “you just had 10,000 children”.
2James_Miller3moYes or "Nothing in Biology Makes Sense Except in the Light of Evolution" and brains of many adults without kids generate the "your life is meaningless" feeling.
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

a willingness to violate drug laws is likely a negative signal about someone.

 

I'm curious where you're getting this from. What's your evidence?

1James_Miller3moThe illegal drug trade inflicts massive misery on the world, just look at what the drug gangs in Mexico do. A person's willingness to add to this misery to increase his short-term pleasure in a manner that also likely harms his health is, for me a least, a huge negative signal about him.
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

He straightforwardly agreed, and said he provides the environment for long term dedication to meditation because there is a market demand for that product. 🤷

 

FWIW as a resident of MAPLE, my sense is Soryu believes something like:

"Smaller periods of meditation will help you relax/focus and probably have only a very small risk of harm. Larger/longer periods of meditation come with deeper risks of harm,  but are also probably necessary to achieve awakening, which is important for the good of the world." 

 

But I am a newer resident and could easily be misunderstanding here.

Note that anyone considering this (as I am) also has to consider, beyond the fact that this person might simply be wrong, the possibility that they might be deliberately running a pump and dump scam (although LW does seem a weird place to target for this).

3acylhalide3moTotally agree, a random internet tip shouldn't subsitute for your own research. That being said I'm happy to share my research and further discuss why I believe what to do. And ofcourse prove that I am not, in fact, running a pump and dump scam :p I mean yes I do own the tokens I'm mentioning here but I can reasonably prove I'm not on their core teams, and I can prove that the tokens are in fact linked to legitimate products (or atleast as legitimate as things get in the crypto space).
Prioritization Research for Advancing Wisdom and Intelligence

In general I think this is a promising area of research, not just for prioritization, but also for recognition that it is indeed an EA cause area. In fact, because in most respects a lot of this research is quite nascent, it's not clear to me that cause prioritization in the classic sense makes a ton of sense over simply running small experiments in these different areas and seeing what we learn. I expect that the value of information is high enough for most of the things you suggested that running, say, 15 grant experiments each costing $5,000 - $15,000 is... (read more)

2ozziegooen3moThat's an interesting perspective. It does already assume some prioritization though. Such experimentation can only really be done in a very few of the intervention areas. I like the idea, but am not convinced of the benefit of this path forward, compared to other approaches. We already have had a lot of experiments in this area, many of which cost a lot more than $15,000; marginal exciting ones aren't obvious to me. But I'd be up for more research to decide if things like that are the best way forward :)
My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage)

This may be slightly overconfident. My guess is that the effects can vary wildly depending on the individual.

It's definitely overconfident.  Source: twenty years of listening to a wide range of stories from my mother's experiences as a mental health nurse in a psychiatric emergency room.  Some of those psychedelic-related cases involved all sorts of confounding factors, and some of them just didn't.

There's quite a bit of discussion of this (or there used to be) in discussions of various proof-of-stake algorithms and their strengths and weaknesses.

Whole Brain Emulation: No Progress on C. elgans After 10 Years

There is a popular idea that some very large amount of suffering is worse than death. I don't subscribe to it. If I'm tortured for X billions of years, and then my mind is repaired, then this fate is still much better than permanent death. There is simply nothing worse than permanent death - because (by definition) it cannot be repaired.

 

This sweeps a large number of philosophical issues under the rug by assuming the conclusion (that death is the worst thing) and then using it to justify itself (death is the worst thing, and if you die, you're stuck dead, so that's the worst thing).

0RomanS3moI predict that most (all?) ethical theories that assume that some amount of suffering is worse than death - have internal inconsistencies. My prediction is based on the following assumption: * permanent death is the only brain state that can't be reversed, given sufficient tech and time The non-reversibility is the key. For example, if your goal is to maximize happiness of every human, you can achieve more happiness if none of the humans ever die, even if some humans will have periods of intense and prolonged suffering. Because you can increase happiness of the humans who suffered, but you can't increase happiness of the humans who are non-reversibly dead. If your goal is to minimize suffering (without killing people), then you should avoid killing people. Killing people includes withholding life extension technologies (like mind uploading), even if radical life extension will cause some people to suffer for millions of years. You can decrease suffering of the humans who are suffering, but you can't do that for the humans who are non-reversibly dead. The mere existence of the option of voluntary immortality necessitates some quite interesting changes in ethical theories. Personally, I simply don't want to die, regardless of the circumstances. The circumstances might include any arbitrary large amount of suffering. If a future-me ever begs for death, consider him in the need of some brain repair, not in the need of death.
Whole Brain Emulation: No Progress on C. elgans After 10 Years

It's not just an AI safety risk, it's also an S-risk in its own right.

1RomanS3moWhile discussing a new powerful tech, people often focus on what could go horribly wrong, forgetting to consider what could go gloriously right. What could could go gloriously right with mind uploading? It could eliminate involuntary death, saving trillions of future lives. This consequence alone massively outweighs the corresponding X- and S-risks.
Chris Dixon's Crypto Claims are Logically Flimsy

I think judging individual entrants' moves, especially 8 years out, is not very realistic. Rather, I'd say something like "I expect at least 3 of the major tech companies to have a Dapp with at least 10 million monthly active users within 10 years."
