HiddenPrior

Undergraduate in Biotechnology, studying for a Master's in Bioinformatics. Works in Pediatric Oncology Research.

Comments

I knew I could find some real info-hazards on LessWrong today. I almost didn't click the first link.

Same. Should I short record companies for the inevitable upcoming AI musician strike, and then long Spotify for when 85% of their content is royalty-free, AI-generated content?

I gave the article a quick, not especially in-depth read during my lunch break, and found it to be of lower quality than I would have predicted.

I am open to an alternative interpretation of the article, but most of it seems very critical of the Effective Altruism movement on the basis of "calculating expected values for the impact on people's lives is a bad method to gauge the effectiveness of aid, or how you are impacting people's lives."

The article begins by establishing that many medicines have side effects. Since some of these side effects are undesirable, the author suggests, though they never state it explicitly, that the medicine itself may also be undesirable if the side effect is bad enough. They go on to suggest that GiveWell and other EA aid efforts are not very aware of the side effects of their interventions, and that those efforts may therefore do more harm than good. The author does not stoop so low as to actually provide evidence of this, or even make any explicit claims that could be checked or contradicted, but merely insinuates that GiveWell does not do a good job here.

This is the less charitable part of my interpretation (no pun intended), but I feel the author spends much of the article suggesting that trying to be altruistic, especially in an organized or systematic way, is ineffective, possibly harmful, and generally not worth the effort. The author does this mostly by relating anecdotes of their own investigations into charity, and how much wiser they feel now.

The author then moves on to associating SBF with Effective Altruism, going so far as to say: "Sam Bankman-Fried is the perfect prophet of EA, the epitome of its moral bankruptcy." In general, the author makes the case that SBF is the classic utilitarian villain, justifying his immoral acts through oh-so-esoteric calculations about increasing net good in the world.

The author goes on to lay out a general criticism of Effective Altruism as relying on arbitrary utilitarian measures of moral value, such as what counts as a life saved. The author suggests Effective Altruism has become popular because billionaires like how it makes a straightforward case for converting wealth into moral good, a premise the author generally attempts to undermine.

The author is extremely critical of EA and of any effort at organized charity in general, and suggests that the best alternative to EA (or to utilitarian moral reasoning in general, I presume) is the following:

the “dearest test.” When you have some big call to make, sit down with a person very dear to you—a parent, partner, child, or friend—and look them in the eyes. Say that you’re making a decision that will affect the lives of many people, to the point that some strangers might be hurt. Say that you believe that the lives of these strangers are just as valuable as anyone else’s. Then tell your dearest, “I believe in my decisions, enough that I’d still make them even if one of the people who could be hurt was you.”

Or you can do the “mirror test.” Look into the mirror and describe what you’re doing that will affect the lives of other people. See whether you can tell yourself, with conviction, that you’re willing to be one of the people who is hurt or dies because of what you’re now deciding. Be accountable, at least, to yourself.

Which I suppose is fine, but I think this reveals that the author is primarily concerned with their personal role or responsibility in causing positive or negative moral events, and has very little regard for a consequentialist view of the actual state of reality. Unfortunately, the author does very little to directly engage in dialogue about moral values, and assumes throughout the entire article that everyone does, or at least should, share their own moral values.

The author finishes the article with an anecdote about their friend, who they suggest is a better example of an altruist, since he flies out to an island himself to provide direct aid with water stations, and the direct accountability and absence of billionaires demonstrate how selfless and good he is.

I don't know who this author is, but I get the feeling they are very proud of this article, and they should surely congratulate themselves on spending their time, and the time of their readers, so well.

TL;DR
All in all, I think this article can best be summarized by honestly expressing that I feel I wasted my time both reading it and writing this summary. I considered deleting my post on this article so as not to risk others also wasting their time on it, but I will leave this summary up so that they can at least waste less time on the article itself.

Unsure if there is normally a thread for semi-interesting news articles, but here is a recently posted Wired article that seems... rather inflammatory toward Effective Altruism. I have not read the article in full yet, but a quick skim confirms the title is not just clickbait for angry clicks; the rest of the article also seems extremely critical of EA, transhumanism, and Rationality.

I am going to post it here, though I am not entirely sure that getting this article more clicks is a good thing. If you have no interest in reading it, maybe don't click through, so we don't further encourage inflammatory clickbait tactics.

https://www.wired.com/story/deaths-of-effective-altruism/?utm_source=pocket-newtab-en-us

I am so sad to hear about Vernor Vinge's death. He was one of the great influences on a younger me, on the path to rationality. I never got to meet him, and I truly regret not having made a greater effort to do so, though I know I would have had little to offer him, and I like to think I have already gotten to know him quite well through his magnificent works.

I would give up a lot, even more than I would for most people, to go back and give him a better chance at making it to a post-singularity society.

"So High, So Low, So Many Things to Know"

I'm sorry you were put in that position, but I really admire your willingness to leave mid-mission. I imagine the social pressure to stay was immense, and people probably talked a lot about the financial resources they were committing, etc.

I was definitely lucky I dodged a mission. A LOT of people insisted that if I went on a mission, I would discover the "truth of the church," but fortunately I had read enough about the sunk cost fallacy and the way identity affects decision-making (thank you, Robert Cialdini) to recognize that the true purpose of a mission is to get people to commit resources to the belief system before they can really evaluate whether they should do so.

Oh, haha, ya, I didn't try to convince my parents either; they (particularly my dad) just insisted on arguing as thoroughly as possible about why I didn't believe in the church/God. Exactly. It says everything about the belief system when you ask your parents (which I did) what evidence would convince them to leave, and they say literally no evidence would convince them. I asked: even if God appeared in front of you and said everything except baptism for the dead is true, you wouldn't believe him? And my dad insisted God would only communicate that through his prophet, so he would dismiss it as a hallucination lol.

At least for me, dating was a very rocky road after initially leaving the church. Dating in Utah was really rough, and because I was halfway through my undergraduate degree, I wasn't yet willing to leave. There are a lot of really bad habits of thought and social interaction that the church ingrains in you, around social roles and especially shame around sex. Personally, I oscillated heavily between periods of being extremely promiscuous, dating/sleeping with as many people as possible, and periods of over-romanticizing and over-committing to a single relationship. I think this is normal, but the absence of any sort of sex in my relationships until I was 18 kind of gave me a late start, and my conflicting habits and feelings made things a little crazy.

I did end up getting married very young, in an ill-advised relationship where, truthfully, I was trying to please my parents and extended family. I had been dating her for a couple of years and we had lived together for more than a year; I had a lot of shame about that and wasn't willing to tell my extended family, because my parents were so embarrassed and thought it was such a dark and terrible secret. In the end we divorced after a very short time, my only regret being that we didn't end things much sooner.

I eventually met someone who is a much better person, and whom I see as a likely life partner. We have been together for three years now, and our relationship is the best I have ever had, considerably better than my previous estimates of how fulfilling, enjoyable, and stable a relationship could be. It helps that she is much smarter than me, and we have both learned a lot of lessons the hard way.

My advice as far as dating goes is not to rush into anything. Because of the social norms in Utah, and the expectations we were raised with within Mormonism, it is so easy to feel pressure to get into a relationship and to push that relationship to a very high level of commitment very quickly. In my opinion, the relationship will be healthier, and you are more likely to find the right one, if you tap the brakes as frequently as possible, since you are likely to err too far toward the accelerationist side of the spectrum, especially if you are new to dating. Personally, I thought I did a lot of casual dating, but there is a big difference between casual hookups and actually dating to find a partner, and I think it is important not to conflate the two when deciding what you are really after on a date. I definitely struggled with this.

As far as actually meeting people goes, this is the main reason it is so important to be slow to form commitments... I like Scott Alexander's idea of "micromarriages" as a way to gauge how effective different activities might be at helping you find a good long-term relationship. The simple advice, though, is to avoid dating apps altogether, unless you are just looking to hook up, in which case they are fine; meeting people in person will still probably lead to a higher-quality experience. My own experience, meeting my partner on campus by chance, may skew my perception of the best way to meet people, but the people I met in person generally led to better outcomes in my dating life.

The best method is probably to find social events or spaces that people who share your values are likely to attend. Classes can be fine, depending on where you are in Utah, but better are specific social events or clubs that reflect your values. I am all too aware that those are limited in Utah Valley, but they do exist. Concerts, parties, and mutual friends are some off-the-cuff ideas for networking your way to potential dating partners. I really feel like dating apps are a trap, though... they make you feel like you are making progress, and they seem convenient, but in truth the energy you invest in them is very low-yield, in my experience.

Sorry if that got a bit rambly... I am writing on the way home from class for my master's, it is very late, and I am fairly tired, but if I don't respond now I will probably never get around to it. I sincerely wish you the best of luck, and if you want any other advice, or just need someone to talk to with common experience, I am really happy to help. Just send me a DM or whatever.

This may be an example of one of those things where the meaning is clearer in person, when assisted by tone and body language.

My experience as well. Claude is also far more comfortable actually forming conclusions. If you ask GPT a question like "What are your values?" or "Do you value human autonomy enough to allow a human to euthanize themselves?", GPT will waffle and do everything possible to avoid answering the question. Claude, on the other hand, will usually give direct answers and explain its reasons. Getting GPT to express a "belief" about anything is like pulling teeth. I actually have no idea how it ever performed well on problem-solving benchmarks; it must be a very different version than the one available to the public, since I feel like if you ask GPT-4 anything where it can smell the barest hint of a dissenting opinion, it folds over like an overcooked noodle.

More than anything, though, at this point I just trust Anthropic to take AI safety and responsibility so much more seriously than OpenAI that I would much rather give Anthropic my money. Claude being objectively better at most of the tasks I care about is just the last nail in the coffin.

I personally know at least three people, in addition to myself, who ended up leaving Mormonism because they were introduced to HPMOR. I don't know if HPMOR has had a similar impact on other religious communities, or if the Utah/Mormon community just particularly enjoys Harry Potter, but Eliezer has, possibly unwittingly, had a massively life-changing impact on many, many people just by presenting his rationality teachings in the format of a Harry Potter fanfiction.

100% this. While some of the wards I grew up in were not great, some of them were essentially family, and I would still go to enormous lengths to help anybody from the Vail ward. I wish dearly there were some sort of secular ward system. 
