Mh... I guess "holy madman" is too vague a definition to have a rational debate about? I had interpreted it as "sacrifice everything that won't negatively affect your utility function later on". So the interpretation I imagined was someone who won't leave himself an inch of comfort more than what's needed to keep the quality of his work constant.
I see slack as leaving yourself enough comfort to be ready to use your free energy in ways you can't see at the moment, so I guess I was automatically assuming a "holy madman" would optimise for outputting his current best effort over the long term, rather than sacrificing some current effort to bet on future chances to improve his future output.
I'd describe someone who leaves himself this level of slack as making a serious or full effort, but not as a holy madman; but I guess this doesn't mean much.
If I were to try to summarise my thoughts on what would happen in reality if someone tried these options... I think the slack option would work better in general, both by avoiding pitfalls and by better exploiting your potential for growth.
I still feel there's a lot of danger to oneself in trying to take ideas seriously, though. If you start trying to act like it's your responsibility to solve a problem that's killing people, the moment you lose your grip on your thoughts is the moment you cut yourself badly, at least in my experience.
Lately I've managed to reduce the harm some recurrent thoughts were doing by focusing on distinguishing between 1) me legitimately wanting A and planning/acting to achieve A, and 2) my worries about not being able to get A, or my distress at things currently not being A. I tell myself that 2) doesn't help me get what I want in the least, and that I can still make a full effort toward 1), likely a better one, without paying much attention to 2).
(I'm afraid I start to rant a bit from this point on. I'm leaving it in because I still feel it might be useful.)
This strategy worked for my gender transition. I'm not sure how I'd react if I tried telling myself I shouldn't care/feel bad/worry about people dying because I'm not managing to fix the problem. Even though I KNOW that worrying about people dying hinders my effort to fix the problem, since feeling sick and worried and tired doesn't in any way help with actually working on it, I still don't trust my corrupted hardware not to start running some guilt trip against me for trying to be callous, in a sense that's not utilitarian at all, by trying not to care/feel bad/worry about something like that.
Also, as a personal anecdote about possible pitfalls: trying to take personal responsibility for a global problem drained my resources in ways I couldn't easily have foreseen. When I got jumped by an unrelated problem about my gender, I found myself without the emotional resources to deal with both stresses at once, so some recurrent thoughts started blaming me for letting a personal problem, one that was in no way as bad as being dead and didn't register at all next to a large number of deaths, screw up my attempt to work on something that was actually relevant. I realised immediately that this was a stupid and unhealthy thing to think, but that didn't do much to stop it, and climbing out of that pit of stress and guilt took a while.
In short, my emotional hardware is stupid and bugged, and it irritates me to no end how it can just go ahead and ignore my attempts to think sanely about stuff.
I'm not sure if I'm just particularly bad at this, or if I just have expectations that are too high. An outside view would likely tell me that it's ridiculous to expect to go from "lazy and detached" to "saving the world (read: reducing X-risk) while effortlessly holding at bay emotional problems that would trip up most people". I'd surely tell that to anyone else. On the other hand, it just feels like a stupid thing to fail at.
(end of the rant)
(in contrast to me; I'm closer to the standard 40 hours)
Can I ask if you have some sort of external force that makes you do these hours? If not, any advice on how to do that?
I'm coming from a really long tradition of not doing any work whatsoever, and so far I'm struggling to meet my current goal of 24 hours a week (also because the only deadlines are the ones I manage to give myself... and for reasons I guess I have explained above).
Getting to this was a massive improvement, but again, I feel like I'm exceptionally bad at working hard.
I think the approaches based on being a holy madman greatly underestimate the difficulty of being a value maximiser running on corrupted, basic human hardware.
I'd be extremely skeptical of anyone who claims to have found a way to truly maximise their utility function, even if they claim to have avoided all the obvious pitfalls of burning out and so on.
It would be extremely hard to reconcile "put forth your full effort" with staying rational enough to notice that you're burning yourself out, or that you're getting stuck on some suboptimal route because you're not leaving yourself enough slack to notice better opportunities.
The detached academic seems to me an odd way to describe Scott Alexander, who seems to make a really effective effort to spread his values and live his life rationally. Most of the issues he talks about seem pretty practical and relevant to him, even if he often pursues whatever makes him curious and isn't dropping everything to work on AI, or to maximise the number of competent people who would work on AI.
I'm currently in a now-nine-months-long attempt to move from detached-lazy-academic to making an extraordinary effort.
So far, every attempt to accurately predict how much of a full effort I can make without suffering a backlash that makes me worse at it in the following period has failed.
Lots of my plans have failed, so going along with plans that required me to make sacrifices, as taking an idea Seriously would require you to do, would have left me at a serious loss.
What worked best and got the most results was keeping a curious attitude toward plans and subjects related to my goal, studying to increase my competence in related areas even when I don't see any immediate way they could help, and monitoring how much "weight" I'm putting on the activities that produce the results I need.
I feel I started out unbelievably bad at working seriously on something, but in nine months I got more results than in my whole life before (in a broad sense, not just in relation to my goal), and I feel like I went up a couple of levels.
I try to avoid moving toward any state that resembles a "holy madman" for fear of crashing hard, and I notice that what I'm doing already makes me pass as one even to my friends who are most informed on related subjects, when I don't self-censor to look normally modest and uninterested.
I might just be at such a low level in the skill of "actually working" that anything that would work great for a functional adult with a good work ethic is deadly to me.
But I'd strongly advise anyone trying the holy madman path to actively pump for as much "anti-holy-madmanness" as they can, since making a full effort to maximise for something seems to me the surest way to let your ambition burn through any defence your naive, optimistic plans think you have put in place to protect your rationality and your mental health.
Cults are bad; becoming a one-man cult is entirely possible and slightly worse.
The review seems pretty balanced and interesting; however, the bit about Bailey struck me as really misguided.
I'll try to explain why. I apologise if at times I come off as angry, but the whole autogynephilia issue annoys me both on a personal level, as a trans person, and on a professional level, as a psychology graduate and a scientist. Alice Dreger seems to have massively botched this part of her work.
In 2006, Dreger decided to investigate the controversy around J. Michael Bailey's book The Man Who Would be Queen. The book is a popularized account of research on transgenderism, including a typology of transsexualism developed by Ray Blanchard. This typology differentiates between homosexual transsexuals, who are very feminine boys who grow up into gay men or straight trans women, and autogynephiles, men who are sexually aroused by imagining themselves as women and become transvestites or lesbian trans women.
Bailey's position is that all transgender people deserve love and respect, and that sexual desire is as good a reason as any to transition. This position is so progressive that it could only cause outrage from self-proclaimed progressives.
Bailey's position caused outrage in nearly every trans woman who read the book or heard of the theory, and in a lot of other trans people who felt delegitimised and misrepresented by its implications.
If you are transgender, you are suffering from gender dysphoria, and you aren't transitioning for sexual reasons at all, though your sexual health would often improve. You are doing what science shows to be the one thing that relieves the symptoms that are ruining your life and making you miserable.
But then, someone who's not trans comes along and says "no, it's really a sex thing" based on a single paper that presented no evidence whatsoever.
Rather than rigorously testing the theory with careful research, which is what everyone should do, and especially someone who doesn't feel what trans women feel and is thus extremely clueless about the subject, since it's really easy to misunderstand a sensation your brain isn't capable of feeling, this person bases one of the book's two clusters mostly on a single case study of a trans woman whose sex life isn't at all representative of the average trans woman, but who makes for a very vivid, very peculiar account of sexual practices. The rest of the "evidence" is just unstructured observations and interviews.
The book doesn't talk at all about how most trans people, men, women and non-binary, discover they are trans, and doesn't accurately describe their internal experience at all. It instead presents all trans women as motivated by sex, and half of them by sexual tendencies that psychology depicts as pathological.
And then, somehow, this completely unfounded theory becomes one of the best-known theories about trans women.
So, if you are a trans woman, the best case is that your extremely progressive friends and family come to you and say "oh, we didn't know it was just a sex thing; you could have told us you had these very weird sexual tendencies rather than making up all that stuff about how your body and society's way of treating you like a man make you feel horrible; it's fine, we understand and love you anyway".
Worst and more common case: your friends, family, work associates and so on aren't extremely progressive. They still believe Blanchard's and Bailey's theory about you, though.

And then, when the trans community starts yelling more or less in unison "what the hell?!" at what Bailey wrote in his book, the best response he can come up with is that the trans women attacking him are in a narcissistic rage, narcissists whose pride has been wounded by the truths he wrote, and autogynephiles in denial.
Bailey attracted the ire of three prominent transgender activists who proceeded to falsely accuse him of a whole slew of crimes and research ethics violations. The three also threatened and harassed anyone who would defend Bailey; this group included mostly a lot of trans women who were grateful for Bailey's work, and Alice Dreger.
I'm not aware whether some transgender women tried to defend the book, but "a lot of trans women" seems a more accurate description of the book's detractors than of its supporters.
I'm aware that the three activists mentioned went way too far to be justified in any way. But presenting them as the only criticism he received is completely wrong, because there were a huge number of wounded people who saw their lives get worse because of the book.
Autogynephilia was popularised as a theory mostly by Bailey's book, and trans-exclusionary radical feminist groups, which are currently doing huge damage to trans rights and healthcare, use it as one of their main arguments to delegitimise trans women, and routinely attack trans women with it. Even if Bailey's intentions were good, he failed miserably and produced far more harm than anything else.
I'll try my best to express it, even if I feel it makes me look stupid:
Trying to improve how activism is done: figuring out ways, ones that can reasonably be taught, to maximise the positive impact activists and activist organisations can have in advancing their cause.
Activist organisations that are composed of volunteers and don't hire professionals are limited in what they can learn about their craft. Typically, activists can figure out what seems to work or not by trial and error and by looking at others, but only when there is feedback one can correctly eyeball.
So there is no reason to believe that the efficiency of these activists and organisations can't be improved.
An individual studying communication and organisation wouldn't be likely to push the efficiency frontier of marketing or of professional organisations that deal in communication, but even bringing the efficiency of volunteers closer to the current efficiency of professionals would be a huge improvement, able to produce a lot of positive value for the world, if one chooses the right organisations to boost.
Currently I'm focusing on the communication of mainstream causes that deal with x-risk-related issues; the second step would be to use the strategies learned to boost non-mainstream causes that try to address things even more relevant to x-risks (if anyone is already involved in a similar attempt or cause, they are welcome to contact me; I'd love a chance to talk about this and see if cooperation is possible).
First steps I should currently be taking
Essentially, I think I should be developing a system in Excel that would let one classify social media posts according to their characteristics, and then investigate at a statistical level what works and what doesn't.
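Something like this minimal sketch of what I mean, except in Excel (I've written it in Python/pandas just to illustrate the idea; the characteristics, column names and numbers are all made-up examples, not real data):

```python
import pandas as pd

# Made-up example data: one row per social media post, hand-tagged
# with its characteristics plus the engagement it actually received.
posts = pd.DataFrame([
    {"has_image": True,  "tone": "personal", "length": 120, "shares": 34, "likes": 210},
    {"has_image": False, "tone": "factual",  "length": 480, "shares": 5,  "likes": 40},
    {"has_image": True,  "tone": "factual",  "length": 300, "shares": 12, "likes": 95},
    {"has_image": False, "tone": "personal", "length": 150, "shares": 20, "likes": 130},
])

# Average engagement grouped by each characteristic, to eyeball
# which features correlate with posts that "work".
for feature in ["has_image", "tone"]:
    print(posts.groupby(feature)[["shares", "likes"]].mean())

# Correlation between post length and engagement.
print(posts[["length", "shares", "likes"]].corr())
```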
I've started, but I'm continuing slowly, because it's hard: it's something I'm really not familiar with, and I have an unhealthy attitude of flinching away from anything that's hard for me in a way that makes me feel stupid and out of my league.

The second thing I should be doing is an "inadequacy analysis" of the current processes in the organisation I'm in, to see all the low-hanging fruit one could pick to improve performance.
So far I've failed to identify any fruit but two (the statistical analysis is one; the second is how work is distributed to volunteers, which seems an easier fix), because I'm likely overly worried about "shooting my foot off and falling flat on my face in a way that makes me look stupid", so I'm flinching away again.
I did manage to correct some other major procrastination problems, and I'm now able to reliably get hours of work done for this project, but I have so far oriented this work in too many directions (like trying to study negotiation tactics for the future, rationality, persuasion strategy, and communication strategies on social media, all at once), and so I couldn't really focus a significant enough effort on actually making progress with any one thing.
Trying to fix the problem by creating habits and incentives that would orient me toward the most important things I should be doing, rather than the most "interesting" things I could be doing that are somehow related to the project.
I might also need to learn more efficient ways to study and practice things; so far I'm still studying as if to pass a written exam on the material.
I'm not 100% sure I understood the first paragraph, could you clarify it for me if I got it wrong?
Essentially, the "efficient-markets-as-high-status-authorities" mindset I was trying to describe seems to me that work as such:
Given a problem A, say providing life-saving medicine to the maximum number of people, it assumes that letting agents motivated by profit act freely, unrestricted by regulations or policies, even ones aimed at fixing problem A, would provide said medicine to more people than an intentional policy by a government trying to provide said medicine to the maximum number of people.
The market doesn't seem to have a utility function in this model, but every agent in this market (that is able to survive in it) is motivated by a utility function that just wants to maximise profit.
Part of the reason for the assumption that a "free market of agents motivated by profit" should be so good at producing solutions for problem A (saving lives with medicine) is that the "free market" is awesomely good at pricing actions and at finding ways to make profits, because a lot of agents are trying different things as best they can to make a profit, and everything that works gets copied. (If anyone holds a roughly related theory and feels I've butchered or misstated the reasoning involved, you are welcome to state it correctly; I'm genuinely interested.)
My main objection to this is that I fail to see how it differs from asking an unaligned AI, not superintelligent but still a lot smarter than you, to get your mother out of a burning building so that you'll press the reward button the AI wants you to press.
If I understood your first paragraph correctly, we are both generally skeptical that a market of agents set on maximising profit would be, on average across many different possible cases, good at generating value other than maximised profit.
Thank you for the clarification between unregulated and free.
I was aware that one wouldn't lead to the other, but I'm now unsure how many of the people I've talked to about this had that distinction in mind. I've seen a lot of arguments for deregulation in the political press that appeal to the idea of the "free market", so I think I usually assumed that someone arguing for one of these positions would take a free market to be an unregulated one and not foresee this obvious problem.
I actually can’t recall seeing anyone make the mistake of treating efficient markets like high-status authorities in a social pecking order.
I've seen often enough, or at least I think I've seen often enough, people treating efficient markets, or just the "free, deregulated market", as some kind of benevolent godly being able to fix just about any problem.
I admit that I come from the opposite corner and that I flinched at the first paragraphs of the explanation of efficient markets, but I still feel that a lot of bright people aren't asking the questions:
"Is it more profit-efficient to fix the problem or to just cheat?"
"Can actors get more profit by causing damages worse than the benefits they provide?"
"Is the share of actors that, seeing that the cheaters niche of the market is already filled when they get there, would go on to do okayish profits by trying to genuinely fix the problem able to produce more public value than the damage cheaters produce?"
Asking an unregulated free market to fix a problem in exchange for rewards is like asking an unaligned intelligence running on thousands of human brains to do it.
I have seen more blatant examples of this regarding the concept of the free market, but a lot of people still seem to interpret the notion of an "efficient market" as "and given the wisdom of the efficient market, the economy will improve and produce more value for everyone", and I feel the two views are related, though I might be wrong about how many people keep a clear distinction between the two concepts in their heads.
"If these investments really are bogus and will horribly crush the economy when they collapse, surely someone in the efficient market would have seen it coming" is the mindset I'm trying to describe, though this mindset seem to have a blurry idea of what an efficient market is about.
A journalist thinks that a candidate who talks about ending the War on Drugs isn’t a “serious candidate.” And the newspaper won’t cover that candidate because the newspaper itself wants to look serious… or they think voters won’t be interested because everyone knows that candidate can’t win, or something? Maybe in a US-style system, only contrarians and other people who lack the social skill of getting along with the System are voting for Carol, so Carol is uncool the same way Velcro is uncool and so are all her policies and ideas? I’m not sure exactly what the journalists are thinking subjectively, since I’m not a journalist. But if an existing politician talks about a policy outside of what journalists think is appealing to voters, the journalists think the politician has committed a gaffe, and they write about this sports blunder by the politician, and the actual voters take their cues from that. So no politician talks about things that a journalist believes it would be a blunder for a politician to talk about. The space of what it isn’t a “blunder” for a politician to talk about is conventionally termed the “Overton window.”
I'd agree with Simplicio that voters' "stupidity", as in "ignorance and inability to judge correctly even issues where a scientific consensus has been reached and where it really feels like a good, intuitive idea to do a ten-minute internet search and check what the most accredited institutions are saying on the matter", would interact a lot with the border of the Overton window.
If 90% of voters were able to mock any "stupid" idea suggested, moving out of the Overton window by lowering the quality of the ideas discussed would be plain suicide, while moving up would sometimes be rewarded. Attempts to shift the Overton window downward, such as "hey, let's completely go against what (insert science field) says about (insert important issue), and let's (choose between: prohibiting a particular subgroup's therapies even if science says it's really a good idea to provide said therapies / arguing against preventing a key crisis that will produce unbelievable damage in the short-term future / proposing a completely unfounded model of how social issue X works along with a solution unrelated to any actual finding on the matter, with a track record of failures)", would be harshly punished by the voters, whereas right now these seem to make up roughly 30% of the politics being discussed.
Still, I guess Cecie's theory can explain the source of this "stupidity" through systemic failures in other parts of society, such as information and education, whereas if we just ascribe it to widespread individual "stupidity" and "sheepishness" we are no less confused, but perhaps more so.
I wonder about that.
I'd expect we'd first see a huge number of newspaper articles and websites trying to stir up health scares about "lab meat", and an ungodly amount of memes about "real men eating real meat" or "only real meat has real taste", and then governments would ramp up subsidies to traditional farms because of "cultural activities" and whatever. Oh, and a lot of jokes about the synthetic meat that many sci-fi dystopias feature.
Old, powerful lobbies don't like the free market regulating itself, at all, and turning harmful/obsolete stuff into a cultural/identity/political tribal battle is their first strategy for hindering it.
I'd agree it will eventually become the solution, but I expect it to go slightly worse than the energy transition.
Computer game characters also exhibit “intentions” and such, but there’s nobody home a lot of the time, unless you’re playing against another person.
Yes, but what we know about the structure of a computer program is greatly different from what we know about the structure of an animal brain. More complex brains seem to share a lot of our own architecture, mammal brains are ridiculously complex, and mammals show a lot of behaviour that isn't purely directed at acquiring food, reproducing, and running from predators.
For animals such as frogs and bugs, which seem to be built more like "sensory input goes in, reflex comes out", I'd accept more doubt about whether the "somebody's home" metaphor holds true; for mammals and other smarter animals, the doubts are a lot less believable.
It seems cows might be smarter than dogs and highly intelligent, and dogs are currently being discussed as possibly having self-recognition, since they pass olfactory tests that require it (from what I saw, the tests are a bit more complex than just requiring the dog to have a "this-is-your-urine-mark-for-your-territory.exe" in its brain).
Generally speaking, cows show long-term social relationships with each other, good problem-solving skills, and long-term effects on their emotional range from negative experiences. I haven't been able to find information on cows passing or failing self-recognition tests, visual or otherwise, but given the intelligence they show I'd put them pretty high in moral meaningfulness.
Pigs are notoriously smart and have passed the self-recognition test, as Pattern commented.
Though I think my main point is that even simpler animals would have some scaled-down moral weight, as long as their brain architecture allows for doubt about whether our experience of "being home", feeling pain, etc., is in some way generalisable to theirs.
If I had to lose my higher cognitive functions and be reduced to animal levels of intelligence, I wouldn't really be okay with agreeing now to be subjected to significant pain in exchange for a trivial benefit, on the grounds that I wouldn't be sapient.
Note: this isn't really aimed at turning LessWrongers vegan. There are convincing reasons to be vegan based on the impact on humans, but if you are already trying to be an effective altruist by doing a hard job, I can accept the need to conserve willpower and efficiency, though I guess one could consider whether one could reduce consumption without risk.
I think the issue of the moral weight of animals should be considered independently from the consequences it might hold for one's diet or behaviour, or we're just back to plain rationalisation.
I do agree with everything you said.
Right now, farming animals seems to be a huge zoonosis risk. If I remember correctly, Covid-19 may have spread from exotic animals being sold in high numbers, and it jumped from humans to minks in farms, spread like wildfire in the packed environment, gathered all sorts of mutations, and then jumped back to humans.
Farming animals is also not sustainable at all at the level of tech, resources and consumption we have now. I'd expect the impact of farming to kill at least some tens of millions of people in a moderately bad global warming scenario; it's already producing humanitarian crises now, and I'm afraid global warming increases extinction risk because it makes us more likely to botch AGI.
I had just suggested the rule for an entirely hypothetical scenario where we are asked to trade human lives against animal lives, because I was trying to discuss the moral situation "trade animal lives and suffering against human convenience" on its own.