Emiya


SSC Journal Club: AI Timelines

Since I wrote my comment, I've had lots of chances to prod at people's apathy about acting against imminent, horrible doom.

I do believe that a large obstacle is that going "well, maybe I should do something about it, then. Let's actually do that" requires a sudden level of mental effort and responsibility that's... well, not quite as unlikely as oxygen turning into gold, but you shouldn't just expect people to do it (it took me a ridiculous amount of time before I started).

People are going to require a lot of prodding, or an environment where taking personal responsibility for a collective crisis is the social norm, to get moving. 10 million would count as a lot of prodding, yeah. 100k... eh, I'd guess lots of people would still jump at that, but not many of those who are already paid the same amount or more.

So a calculation like "I can enjoy my life more by doing nothing, lots of other people can try to save the world in my place" might be involved, even if not explicitly. It's a mixture of the Tragedy of the Commons and of Bystander Apathy, two psychological mechanisms with plenty of literature.

Habitual Productivity

She gave me the answer of someone who had recently stopped liking fritos through an act of will. Her answer went something like this: "Just start noticing how greasy they are, and how the grease gets all over your fingers and coats the inside of the bag. Notice that you don't want to eat things soaked in that much grease. Become repulsed by it, and then you won't like them either."

 

This woman's technique stuck with me. She picked out a very specific property of a thing she wanted to stop enjoying and convinced herself that it repulsed her.

I completely stopped smoking four years ago with the exact same method. It's pretty powerful; I'm definitely making a technique out of this.

Dark Arts of Rationality

I think I was able to make outstanding progress last year in improving my rationality and starting to work on real problems, mostly because of megalomaniac beliefs that were somewhat compartmentalised but that I could feel at a gut level each time I had to start working.

Lately, as a result of that progress, I've started slowing down: I came to terms with those megalomaniac beliefs and realised at a gut level that they weren't accurate, so a huge chunk of my drive faded, and my predictions about my goals updated on what I felt I could achieve with the drive I was actually feeling. This even though I knew that destroying those beliefs was a sign I had really improved and was learning how hard it actually is to do world-changing stuff...

I'll definitely give this a trial run, trying to chain down those beliefs and pull them out as fuel when I need to.

"If You're Not a Holy Madman, You're Not Trying"

Mh... I guess "holy madman" is too vague a definition to have a rational debate about? I had interpreted it as "sacrifice everything that won't negatively affect your utility function later on". So the interpretation I imagined was someone who won't leave himself an inch of comfort beyond what's needed to keep the quality of his work constant.

I see slack as leaving yourself enough comfort that you're ready to use your free energy in ways you can't foresee at the moment, so I guess I was automatically assuming a "holy madman" would optimise for outputting the best effort he can currently sustain over the long term, rather than sacrificing some current effort to bet on future chances to improve his future output.

I'd define someone who leaves himself this level of slack as someone making a serious or full effort, but not a holy madman, though I guess this doesn't mean much.

If I were to summarise my thoughts on what would happen in reality if someone actually tried these options... I think the slack one would work better in general, both by avoiding pitfalls and by better exploiting your potential for growth.

 

I still feel there's a lot of danger to oneself in trying to take ideas seriously, though. If you start trying to act like it's your responsibility to solve a problem that's killing people, the moment you lose your grip on your thoughts is the moment you cut yourself badly, at least in my experience.

Lately I've managed to reduce the harm that some recurrent thoughts were doing by focusing on distinguishing between 1) me legitimately wanting A and planning/acting to achieve A, and 2) my worries about not being able to get A, or my distress that things currently aren't A. I tell myself that 2) doesn't help me get what I want in the least, and that I can still make a full effort on 1), likely a better one, without paying much attention to 2).

(I'm afraid I've started to rant slightly from this point on. I'm leaving it in because I still feel it might be useful.)


This strategy worked for my gender transition. 
I'm not sure how I'd react if I tried telling myself I shouldn't care/feel bad/worry when people die because I'm not managing to fix the problem. Even if I KNOW that worrying about people dying hinders my effort to fix the problem, because feeling sick and worried and tired doesn't in any way help me actually work on it, I still don't trust my corrupted hardware not to start running some guilt trip against me for being callous, in a sense that's not utilitarian at all, for trying not to care/feel bad/worry about something like that.


Also, as a personal anecdote of possible pitfalls: trying to take personal responsibility for a global problem drained my resources in ways I couldn't easily have foreseen. When I got jumped by an unrelated problem about my gender, I found myself without the emotional resources to deal with both stresses at once, so some recurrent thoughts started blaming me for letting a personal problem, one that was in no way as bad as being dead and didn't register at all next to a large number of deaths, screw up my attempt to work on something that was actually relevant. I realised immediately that this was a stupid and unhealthy thing to think, but that didn't do much to stop it, and climbing out of that pit of stress and guilt took a while.

In short, my emotional hardware is stupid and buggy, and it irritates me to no end how it can just go ahead and ignore my attempts to think sanely about stuff.

I'm not sure if I'm just particularly bad at this, or if my expectations are too high. An external view would likely tell me that it's ridiculous to expect to go from "lazy and detached" to "saving the world (read: reducing x-risk) while effortlessly holding at bay emotional problems that would trip up most people". I'd surely tell anyone else that. On the other hand, it just feels like a stupid thing not to manage.

(end of the rant)

 

 (in contrast to me; I'm closer to the standard 40 hours)

Can I ask if you have some sort of external force that makes you do these hours? If not, any advice on how to do that?

I'm coming from a really long tradition of not doing any work whatsoever, and so far I'm struggling to meet my current goal of 24 hours (partly because the only deadlines are the ones I manage to set for myself... and for the reasons I guess I explained above).

Getting to this was a massive improvement, but again, I feel like I'm exceptionally bad at working hard.

"If You're Not a Holy Madman, You're Not Trying"

I think that approaches based on being a holy madman greatly underestimate the difficulty of being a value maximiser running on corrupted, basic human hardware.

I'd be extremely skeptical of anyone who claims to have found a way to truly maximise their utility function, even if they claim to have avoided all the obvious pitfalls such as burning out and so on.

It would be extremely hard to reconcile "put forth your full effort" with staying rational enough to notice that you're burning out, or that you're stuck on some suboptimal route because you're not leaving yourself enough slack to notice better opportunities.

 

The detached academic seems to me an odd way to describe Scott Alexander, who seems to make a really effective effort to spread his values and live his life rationally. Most of the issues he talks about seem pretty practical and relevant to him, even if he often pursues whatever makes him curious and isn't dropping everything to work on AI - that is, to maximise the number of competent people who would work on AI.

 

I'm currently nine months into an attempt to move from detached-lazy-academic to making an extraordinary effort.

So far, every attempt to accurately predict how much of a full effort I can make without suffering backlash that makes me worse in the following period has failed.

Lots of my plans have failed, so going along with plans that required me to make sacrifices, as taking an idea Seriously would require, would have left me at a serious loss.

What worked best and produced the most results was keeping a curious attitude toward plans and subjects related to my goal, studying to increase my competence in related areas even when I saw no immediate way they could help, and monitoring how much "weight" I was putting on the activities that produce the results I need.

I feel I started out unbelievably bad at working seriously at something, but in nine months I got more results than in the rest of my lifetime (in a broad sense, not just related to my goal), and I feel like I went up a couple of levels.

I try to avoid approaching any state that resembles a "holy madman", for fear of crashing hard, and I notice that what I'm doing already makes me pass as one even to my most informed friends on related subjects, when I don't censor myself to look normally modest and uninterested.

 

I might be just at such a low level in the skill of "actually working" that anything that would work great for a functional adult with a good work ethic is deadly to me.

But I'd strongly advise anyone trying the holy madman path to actively pump for as much "anti-holy-madman-ness" as they can, since making a full effort to maximise for something seems to me the best way to ensure your ambition burns through whatever defences your naive, optimistic plans thought were in place to protect your rationality and your mental health.

 

Cults are bad; becoming a one-man cult is entirely possible and slightly worse.

Sex, Lies, and Dexamethasone

The review seems pretty balanced and interesting; however, the bit about Bailey struck me as really misguided.

I'll try to explain why. I apologise if at times I come off as angry, but the whole issue of autogynephilia annoys me both on a personal level, as a trans person, and on a professional level, as a psychology graduate and scientist. Alice Dreger seems to have massively botched this part of her work.

In 2006, Dreger decided to investigate the controversy around J. Michael Bailey's book The Man Who Would be Queen. The book is a popularized account of research on transgenderism, including a typology of transsexualism developed by Ray Blanchard. This typology differentiates between homosexual transsexuals, who are very feminine boys who grow up into gay men or straight trans women, and autogynephiles, men who are sexually aroused by imagining themselves as women and become transvestites or lesbian trans women.

Bailey's position is that all transgender people deserve love and respect, and that sexual desire is as good a reason as any to transition. This position is so progressive that it could only cause outrage from self-proclaimed progressives. 

Bailey's position caused outrage in nearly every trans woman who read the book or heard about the theory, and in a lot of other trans people who felt delegitimised and misrepresented by its implications.

If you are transgender, you are suffering from gender dysphoria, and you aren't transitioning for sexual reasons at all, though your sexual health will often improve. You are doing what science shows to be the one thing that resolves the symptoms that are ruining your life and making you miserable.

But then, someone who's not trans comes along and says "no, it's really a sex thing" based on a single paper that presented no evidence whatsoever. 

Rather than very rigorously trying to test the theory with careful research, which is what everyone should do, especially someone who isn't feeling what trans women are feeling and is thus extremely clueless about the subject (it's really easy to misunderstand a sensation your own brain isn't capable of feeling), this person bases one of the book's two clusters mostly on a single case study of a trans woman whose sex life isn't representative of the average trans woman at all, but who makes for a very vivid, very peculiar account of sexual practices. The rest of the "evidence" is just unstructured observations and interviews.

The book doesn't talk at all about how most trans people, men and women and non-binary, discover they are trans, and it doesn't describe their internal experience accurately at all. It instead presents all trans women as being motivated by sex, and half of them by sexual tendencies that psychology depicts as pathological.

And then, somehow, this completely unfounded theory becomes one of the best-known theories about trans women.

So, if you are a trans woman, the best case is that your extremely progressive friends and family come to you and say "oh, we didn't know it was just a sex thing, you could have told us you had these very weird sexual tendencies rather than make up all that stuff about how your body, and society treating you like a man, make you feel horrible; it's fine, we understand and love you anyway".

The worse and more common case: your friends, family, work associates and so on aren't extremely progressive. They still believe Blanchard's and Bailey's theory about you, though.

And then, when the trans community starts yelling more or less in unison "what the hell?!" at what Bailey wrote in his book, the best response he can come up with is that the trans women attacking him are in a narcissistic rage, narcissists whose pride has been wounded by the truths he wrote, and autogynephiles in denial.

 

Bailey attracted the ire of three prominent transgender activists who proceeded to falsely accuse him of a whole slew of crimes and research ethics violations. The three also threatened and harassed anyone who would defend Bailey; this group included mostly a lot of trans women who were grateful for Bailey's work, and Alice Dreger.

I don't know whether some transgender women tried to defend the book, but "a lot of transgender women" seems a more accurate description of the book's detractors than of its supporters.

I'm aware that the three activists mentioned went far beyond what could be justified in any way. But presenting them as the only critics he received is completely wrong, because there was a huge number of wounded people who saw their lives get worse because of the book.

 

Autogynephilia was popularised as a theory mostly by Bailey's book, and trans-exclusionary radical feminist groups, which are currently doing huge damage to trans rights and healthcare, use it as one of their main arguments to delegitimise trans women and routinely attack trans women with it. Even if Bailey's intentions were good, he failed miserably and produced far more harm than anything else.

I'll try my best to express it, even if I feel it makes me look stupid:

Short version:

Trying to improve how activism is done: figuring out ways, ones that can reasonably be taught, for activists and activist organisations to maximise the positive impact they can have in advancing their cause.

 

Reasoning

Activist organisations that are composed of volunteers and don't hire professionals are limited in what they can learn about their craft. Typically, activists can figure out what seems to work by trial and error and by watching others, but only where there is feedback one can correctly eyeball.

So there is no reason to believe that the efficiency of these activists and organisations can't be improved.

An individual studying communication and organisation likely wouldn't be able to push the efficiency frontier of marketing firms or other professional organisations that deal in communication, but even bringing the efficiency of volunteers closer to the current efficiency of professionals would be a huge improvement, able to produce a lot of positive value for the world, if one chooses the right organisations to boost.

Currently I'm focusing on the communication of mainstream causes that deal with x-risk-related issues; the second step would be to use the strategies learned to boost non-mainstream causes that address things even more relevant to x-risks (if anyone is already involved in a similar attempt or cause, they are welcome to contact me; I'd love a chance to talk about this and see if cooperation is possible).

 

First steps I should currently be doing

Essentially, I think I should be developing a system in Excel that would let one classify social media posts according to their characteristics and investigate at a statistical level what works and what doesn't.
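
To make the idea concrete, here's a minimal sketch of the kind of analysis I have in mind, written in Python/pandas rather than Excel purely for illustration. All the column names and categories (post_type, has_image, engagement) are hypothetical placeholders for whatever characteristics one would actually code:

```python
import pandas as pd

# Each row is one social media post, hand-classified by its characteristics.
# The categories and numbers here are made up for illustration.
posts = pd.DataFrame({
    "post_type": ["news", "call_to_action", "news", "meme", "call_to_action"],
    "has_image": [True, False, True, True, False],
    "engagement": [120, 45, 200, 310, 60],  # e.g. likes + shares + comments
})

# Average engagement per characteristic: a first pass at "what works".
print(posts.groupby("post_type")["engagement"].agg(["mean", "count"]))
print(posts.groupby("has_image")["engagement"].mean())
```

The same grouping-and-averaging can be done with an Excel pivot table; the point is just to code each post's characteristics as columns so that "what works" becomes a statistical question instead of an eyeballed one.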

I've started, but I'm going slowly, because it's hard, it's something I'm really not familiar with, and I have an unhealthy habit of flinching away from anything that's hard in a way that makes me feel stupid and out of my league.

The second thing I should be doing is an "inadequacy analysis" of the current processes in the organisation I'm in, to find all the low-hanging fruit one could pick to improve performance.

So far I've failed to identify any fruit but two (the statistical analysis is one; the second, which seems an easier fix, is how work is distributed to volunteers), likely because I'm overly worried about "shooting my foot off and falling flat on my face in a way that makes me look stupid", so I'm flinching away again.

 

I did manage to correct some other major procrastination problems, and I'm now able to reliably get hours of work done for this project, but so far I've spread that work in too many directions (like trying to study negotiation tactics for the future, rationality, persuasion strategy, and communication strategies on social media, all at once), so I couldn't focus a significant enough effort on actually making progress with any one thing.

I'm trying to fix this by creating habits and incentives that orient me toward the most important things I should be doing, rather than the most "interesting" things I could be doing that are somehow related to the project.

I might also need to learn more efficient ways to study and practice things; so far I'm still studying as if I had to pass a written exam on it.

Status Regulation and Anxious Underconfidence

I'm not 100% sure I understood the first paragraph; could you clarify it for me if I got it wrong?

Essentially, the "efficient-markets-as-high-status-authorities" mindset I was trying to describe seems to me to work like this:

Given a problem A, say providing life-saving medicine to the maximum number of people, it assumes that letting agents motivated by profit act freely, unrestricted by regulations or policies even when those aim to fix problem A, will provide said medicine to more people than an intentional policy of a government trying to provide said medicine to the maximum number of people.

The market doesn't seem to have a utility function in this model, but every agent in this market (at least every agent able to survive in it) is motivated by a utility function that just wants to maximise profit.

Part of the reason for the assumption that a "free market of agents motivated by profit" should be so good at producing solutions to problem A (saving lives with medicine) is that the "free market" is awesomely good at pricing actions and at finding ways to make profits, because a lot of agents are each trying their best at different things to make a profit, and everything that works gets copied. (If anyone holds a roughly related theory and feels I butchered the reasoning involved, you are welcome to state it correctly; I'm genuinely interested.)

 

My main objection is that I fail to see how this is different from asking an unaligned AI, one that's not superintelligent but still a lot smarter than you, to get your mother out of a burning building so that you'll press the reward button the AI wants you to press.

 

If I understood your first paragraph correctly, we are both generally skeptical that a market of agents set on maximising profit would be good, on average across many different possible cases, at generating value other than maximised profit.

 

Thank you for the clarification between unregulated and free. 

I was aware that the one doesn't lead to the other, but I'm now unsure how many of the people I've talked to about this had the distinction in mind.
I've seen a lot of arguments for deregulation in the political press that appeal to the idea of the "free market", so I think I usually assumed that someone arguing for one of these positions would take a free market to be an unregulated one and not foresee this obvious problem.

Status Regulation and Anxious Underconfidence

I actually can’t recall seeing anyone make the mistake of treating efficient markets like high-status authorities in a social pecking order.

I've seen often enough, or at least I think I've seen often enough, people treating efficient markets, or just the "free, deregulated market", as some kind of benevolent godly being able to fix just about any problem.

I admit that I came from the opposite corner and that I flinched at the first paragraphs of the explanation of efficient markets, but I still feel that a lot of bright people aren't asking questions like:

"Is it more profit-efficient to fix the problem or to just cheat?"

"Can actors get more profit by causing damages worse than the benefits they provide?" 

"Is the share of actors that, seeing that the cheaters niche of the market is already filled when they get there, would go on to do okayish profits by trying to genuinely fix the problem able to produce more public value than  the damage cheaters produce?"

Asking an unregulated free market to fix a problem in exchange for rewards is like asking an unaligned intelligence running on thousands of human brains to do it.

 

I have seen more blatant examples of this with the concept of the free market, but a lot of people still seem to interpret the notion of "efficient market" as "and given the wisdom of the efficient market, the economy will improve and produce more value for everyone". I feel the two views are related, though I might be wrong about how many people keep a clear distinction between the two concepts in their heads.

"If these investments really are bogus and will horribly crush the economy when they collapse, surely someone in the efficient market would have seen it coming" is the mindset I'm trying to describe, though this mindset seem to have a blurry idea of what an efficient market is about.

Moloch's Toolbox (2/2)

A journalist thinks that a candidate who talks about ending the War on Drugs isn’t a “serious candidate.” And the newspaper won’t cover that candidate because the newspaper itself wants to look serious… or they think voters won’t be interested because everyone knows that candidate can’t win, or something? Maybe in a US-style system, only contrarians and other people who lack the social skill of getting along with the System are voting for Carol, so Carol is uncool the same way Velcro is uncool and so are all her policies and ideas? I’m not sure exactly what the journalists are thinking subjectively, since I’m not a journalist. But if an existing politician talks about a policy outside of what journalists think is appealing to voters, the journalists think the politician has committed a gaffe, and they write about this sports blunder by the politician, and the actual voters take their cues from that. So no politician talks about things that a journalist believes it would be a blunder for a politician to talk about. The space of what it isn’t a “blunder” for a politician to talk about is conventionally termed the “Overton window.”

I'd agree with Simplicio that voters' "stupidity", in the sense of "ignorance and inability to judge correctly even issues where a scientific consensus has been reached and where it really feels like a good, intuitive idea to spend ten minutes searching the internet to check what the most accredited institutions are saying on the matter", would interact a lot with the borders of the Overton window.

If 90% of voters were able to mock any "stupid" idea suggested, moving out of the Overton window by lowering the quality of the ideas discussed would be plain suicide, while moving it upward would sometimes be rewarded. Attempts to shift the Overton window downward, such as "hey, let's completely go against what (insert science field) says about (insert important issue), and let's (choose between: prohibiting a particular subgroup's therapies even though science says it's really a good idea to provide them / arguing against preventing a key crisis that will produce unbelievable damage in the short-term future / proposing a completely unfounded model of how social issue X works, along with a solution unrelated to any actual finding on the matter and with a track history of failures)", would be harshly punished by the voters, whereas right now such attempts seem to make up roughly 30% of the politics being discussed.

Still, I guess Cecie's theory can explain the source of this "stupidity" through systemic failures in other parts of society, such as information and education, while if we just ascribe it to widespread individual "stupidity" and "sheepishness" we are no less confused, and perhaps more so.
