Recent Discussion

I've been thinking a lot about exploiting the capitalistic tendencies of most social systems to solve difficult social issues on a theoretical level (not very well, unfortunately).

I think most ideas usually prescribe more/less/the same amount of something, and that instead of fighting the capitalistic tendency to produce more and more by calling for less and less, we should push for more and more, collapse the existing system (and future systems like it), and instead force a better, more equitable system to form. How to push without plainly causing harm/destroying the entire system/in a realisti... (Read more)

Congrats, you've invented accelerationism!

1 · interstice · 2h: A better system won't just magically form itself after the existing system has been destroyed. In all likelihood, what forms will be either a far more corrupt and oligarchical system, or no system at all. I think a better target for intervention would be attempting to build superior alternatives, so that something is available when the existing systems start to fail. In education, for example, Lambda School is providing a better way for many people to learn programming than college. Note also that existing systems of power are very big, so efforts to damage them probably have low marginal impact; building initially small new things can have much higher marginal impact. If the systems are as corrupt as you think they are, they should destroy themselves on their own in any case.
Conditions for Mesa-OptimizationΩ
55 · 4mo · 11 min read · Ω 16
10 · Wei_Dai · 17h: The Risks from Learned Optimization paper and this sequence don't seem to talk about the possibility of mesa-optimizers developing from supervised learning and the resulting inner alignment problem. The part that gets closest is: "First, though we largely focus on reinforcement learning in this sequence, RL is not necessarily the only type of machine learning where mesa-optimizers could appear. For example, it seems plausible that mesa-optimizers could appear in generative adversarial networks." I wonder if this was intentional, and if not, maybe it would be worth making a note somewhere in the paper/posts that an oracle/predictor trained on sufficiently diverse data using SL could also become a mesa-optimizer (especially since this seems counterintuitive and might be overlooked by AI researchers/builders). See related discussion here [].
4 · Wei_Dai · 17h: I meant that claim to apply to "realistic" tasks (which I don't yet know how to define). Machine learning seems hard to do without search, if that counts as a "realistic" task. :) I wonder if you can say something about what your motivation is to talk about this, i.e., are there larger implications if "just heuristics" is enough for arbitrary levels of performance on "realistic" tasks?
Machine learning seems hard to do without search, if that counts as a "realistic" task. :)

Humans and systems produced by meta learning both do reasonably well at learning, and don't do "search" (depending on how loose you are with your definition of "search").

I wonder if you can say something about what your motivation is to talk about this, i.e., are there larger implications if "just heuristics" is enough for arbitrary levels of performance on "realistic" tasks?

It's plausible to me that for task... (Read more)

I live in Iran, and here people strongly believe in Avicenna's humorism (or what popular culture takes it to be, anyway). It is believed on the level of being "common sense." For example, if you eat fish, milk, broccoli, and tomato sauce, all of which are "cold", you're supposed to balance that out by eating walnuts and dates. My personal impression is that there is probably some truth to this simplistic model of nutrition, as I see a lot of anecdotal evidence for it, but, well, I'd like to see what the science says on the subject.

Note that the humorism believed in here (Iran) is not a strawma

... (Read more)
The unexpected difficulty of comparing AlphaStar to humans
36 · 14h · 25 min read

Artificial intelligence defeated a pair of professional Starcraft II players for the first time in December 2018. Although this was generally regarded as an impressive achievement, it quickly became clear that not everybody was satisfied with how the AI agent, called AlphaStar, interacted with the game, or how its creator, DeepMind, presented it. Many observers complained that, in spite of DeepMind’s claims that it performed at similar speeds to humans, AlphaStar was able to control the game with greater speed and accuracy than any human, and that this was the reason why it prevailed.

Although... (Read more)

I found a YouTube channel that has been providing commentary on suspected games of AlphaStar on the ladder. They're presented from a layman's perspective, but they might be valuable for getting an idea of what the current AI is capable of.

3 · gwern · 3h: I'd add [] to the chronology.
False Dilemmas w/ exercises
17 · 18h · 5 min read

This is the third post in the Arguing Well sequence, but it can be understood on its own. This post is influenced by False Dilemma, The Third alternative.

A false dilemma is of the form “It’s either this, or that. Pick one!” It tries to make you choose from a limited set of options, when, in reality, more options are available. With that in mind, what’s wrong with the following examples?

Ex. 1: You either love the guy or hate him

Counterargument 1: “Only a Sith deals in absolutes!”

Counterargument 2: I can feel neutral towards the guy

Ex. 2: You can only ad... (Read more)

1 · elriggs · 2h: Under the Algorithm heading, everything below it is a spoiler and has a scroll bar. Is this what you see?

I fixed the issue on mine. I created and shared a draft with you reproducing the error.

6 · Donald Hobson · 2h: "Either my curtains are red, or they are blue" would be a false dilemma that doesn't fit any category. You can make a false dilemma out of any pair of non-mutually-exclusive predicates; there is no need for them to refer to values or actions.
1 · elriggs · 2h: This is good, thanks! I want to get this right though, so the general form would be: "X is only compatible with Y". And your example is "My curtain is only compatible with being red or blue", which could generalize to "This object is only compatible with these properties". Would this work as a 5th/6th category? I think it's useful to have categories, but it might be better to just give the above general form, and then give possible general categories (like actions, values, properties, etc.). Thoughts?
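Donald's point can be made mechanical: a dilemma "either P or Q" misleads whenever the offered options fail to exhaust the real possibility space. A minimal sketch of that check, with a made-up option space for the curtain example (the color sets here are illustrative assumptions, not from the thread):

```python
# Hypothetical possibility space vs. the options the dilemma offers.
colors = {"red", "blue", "green", "white"}   # assumed real possibilities
dilemma = {"red", "blue"}                    # options the dilemma presents

# Possibilities the dilemma silently excludes; non-empty => false dilemma.
missing = colors - dilemma
print(sorted(missing))
```

The same subtraction works for any pair of predicates, which is why the "properties" framing generalizes beyond values and actions.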
Category Qualifications (w/ exercises)
24 · 3d · 4 min read

This is the second post in the Arguing Well sequence. This post is influenced by A Human's Guide to Words and Against Lie Inflation.


In the last post, we discussed a common problem in arguments that Prove Too Much. In this post, we’ll generalize that problem to help determine useful categories. But before we go on, what’s wrong with these arguments?

Ex. 1 [Stolen from slatestarcodex]

“A few months ago, a friend confessed that she had abused her boyfriend. I was shocked, because this friend is one of the kindest and gentle
... (Read more)
2 · elriggs · 15h: A high-status person violating norms for trendsetting/counter-signalling is violating expectations for a specific purpose, very much like a comedian violates them for laughs. I agree that violating expectations helps achieve certain goals. But if my goal is to argue well, communicate well, and find more accurate beliefs, then I should be focused on not violating expectations, for the sake of clarity. [Note: I am finding a lot of value in our conversation thread so far, and I appreciate your input. It's really forcing me to figure out when and why and how this concept is useful or when it's not.]
1 · Slider · 3h: What if the audience's expectations are based on faulty beliefs? In particular, some given topic might have a bunch of entrenched assumptions, so that there are positions that can't be expressed without violating expectations. In the very limit, if the communication doesn't violate expectations then it can't convey information; the Shannon entropy is zero. There are probably multiple kinds of surprise here. The "easy" kind would be if nobody expected anybody to say "the sky is red". The "hard" kind would be if one means the lowest-wavelength kind of light by "the sky is blue". Exhausting the easy kind can be done relatively effectively and straightforwardly. But when there are conceptual problems, the hard kind is the main source of progress. If you encounter evidence that can't be handled with your current conceptual palette, you must come up with new concepts in order to accommodate reality. Those updates tend to be laborious, but they tend to be the valuable ones.
1 · elriggs · 15h: If you're claiming "threats should be taken seriously and punished", then we agree. If you're claiming "we should punish groups that threaten violence for political reasons as 'terrorist'", then we might agree, but it's not a big deal and not the point of this post. If you're claiming that, if 100 random US citizens are told "X is a group of terrorists" and asked what actions the speaker is trying to imply X engages in, the majority of the group will write "only threatens violence for political reasons", then we disagree. I predict they will write a mix of "threatens and commits acts of terror", but never "only threatens". I think you would not be misleading if you said "X is a group of terrorists, but only the kind that threatens violence but hasn't actually injured/murdered people, but that's still bad and I think we should take it much more seriously than we are right now", and that this statement would pass the "100 random people" test above. If you disagree with my prediction, then that's just a difference in our priors on how other people qualify that word; this isn't the point of the post. If you disagree with the "100 random people" test being a good test, then that is relevant to the post.

While the policy suggestion is indeed outside the scope of the discussion, I feel it would be important to process it differently: "groups that threaten violence for political reasons are terrorist" and "we should punish terrorists". Calling someone a terrorist is not itself a punishment (unless, again, the label triggers unstated mechanisms that are beyond deliberate, conscious, or official control). In the topic area it is not unheard of for "terrorist" to be a special position that warrants different procedure. There ... (Read more)

The 3 Books Technique for Learning a New Skill
136 · 8mo · 1 min read

When I'm learning a new skill, there's a technique I often use to quickly gain the basics without getting drowned in the plethora of resources that exist. I've found that just 3 resources that cover the skill from 3 separate viewpoints (along with either daily practice or a project) are enough to quickly get all the pieces I need to learn the new skill.

I'm partial to books, so I've called this The 3 Books Technique, but feel free to substitute books for courses, mentors, or videos as needed.

The "What" Book

The "What" book is used as r... (Read more)

Other than suspecting I may have an aptitude, my interest in computer programming is driven by finding two fields cool and fun-sounding: data science, and applications of blockchain technology to things like verifying carbon sequestration and other changes in reality. Quite a bit of social science I deeply admire has been done using data science, and I have a couple of friends working to improve the world using blockchain technologies whom I also admire. I want to see if I am good at programming to see if I can participate in these endeavors I admire—or at l

... (Read more)
1 · DragorCochrane · 2h: Thank you so much for all your advice. You guys are awesome. Like An1lam and mr-hire point out though, I would actually need a project to work on. A practical, immediate application makes skills learnable much faster. Any advice on that?

The "paradox of tolerance" is a continually hot topic, but I've not seen it framed as a member of a category of fallacies in which a principle is conceptualized as either absolute or hypocritical, and the absolute conception is then rejected as self-contradictory or incoherent. Other examples of commonly absolutized principles are pacifism, pluralism, humility, openness, specific kinds of freedoms, etc.

I've been provisionally calling it the 'false self-contradiction fallacy', meaning a specialized case of black-and-white fallacy as applied to ethical, moral or practical ... (Read more)

The closest existing label turns out to be absolutism fallacy; I've posted a more focused question about the same topic elsewhere.

Meetups as Institutions for Intellectual Progress
47 · 1d · 7 min read

Epistemic status: Not a historian of science, but I have thought fairly extensively about meetups. Kind of making this up as I go along, almost certainly missing important points.

Other meta: Written all in one sitting, to not let the perfect be the enemy of the good. No one proofread it, so hopefully there aren't sentences that just cut off in the middle. Also, forgive my excessive use of scare quotes.

tl;dr: The difference between historical salons and LW meetups is that meetups do not feel like the place where progress is made. They’re not doing research or publishing anything. I... (Read more)

But as things are, I expect that even if one of your meetup experiments failed, it would give us useful data.

It's one thing to run a meetup experiment. It's another to globally say that everyone should run their meetups in a certain way.

Global coordination needs much more buy-in from other people.

5 · ChristianKl · 3h: "I'm also in the process of creating a Facebook group for attendees of all meetups worldwide." There used to be a LessWrong Facebook group, and now there isn't anymore. Are you aware of what happened? What kind of governance do you want for the new group?
2 · ChristianKl · 3h: I'm not sure how "official meetups" would be any different by nature of being "official", or even what official means. The idea also seems a bit strange to me because I don't think the team has any claim to being an official arbiter of the term LessWrong. LessWrong Germany e.V. is an NGO with a 5-figure yearly budget. It seems to me that an individual Toastmasters club has a lot less license to innovate than our local meetup has. A Toastmasters club can't say "let's run this marathon together under the logo of our club", but our local meetup would have no problem with running a marathon together as a LessWrong team. It seems to me like making a top-down decision about what topics people should discuss is a way to remove agency from individual meetups. Our meetups in Berlin also aren't discussions about topics but rather about doing rationality exercises together.
2 · Charlie Steiner · 11h: Yeah, retreading the same ground seems like a necessary and normal part of having a culture with common knowledge. Creating new stuff is hard, and creating it as a group rather than working alone is an additional hard thing. This can often be at odds with the normal meetup goal of inclusiveness. The coolest new progress I've experienced at a meetup was in attacking epistemology problems (like the raven paradox), which kind of makes sense if we imagine that this sort of problem was hard because it was confusing, but not too complicated and not requiring much domain expertise outside the LW curriculum.
The YouTube Revolution in Knowledge Transfer
31 · 20h · 3 min read

Growing up as an aspiring javelin thrower in Kenya, the young Julius Yego was unable to find a coach: in a country where runners command the most prestige, mentorship was practically nonexistent. Determined to succeed, he instead watched YouTube recordings of Norwegian Olympic javelin thrower Andreas Thorkildsen, taking detailed notes and attempting to imitate the fine details of his movements. Yego went on to win gold in the 2015 World Championships in Beijing and silver in the 2016 Rio de Janeiro Olympics, and he holds the third-longest javelin throw on record. He acquired a coach only six mon... (Read more)

ChuckMcM, 3 days ago:

I am always amazed when people make comments like this: "The results of the University of Texas at Austin’s first full-semester foray into massive open online courses, or MOOCs, are in."

"Professor Michael Webber’s “Energy 101,” which had an enrollment that peaked at around 44,000 students, had 5,000 receive a certificate of completion — about 13 percent of the roughly 38,000 students who ultimately participated."

So let's unpack this a bit. Professor Webber created a class called "Energy 101" and processed 5,000 students through it t

... (Read more)
4 · uncomputable · 18h: I'm with Viliam as regards the MOOCs. If we looked at statistics about how many people go from searching for javelin-throwing videos on YouTube to successfully throwing a javelin without injuring themselves or others, the percentage is probably quite low. We'd see MOOCs doing better if the subset of the population who click the "start" button cared a fraction as much about the material as those who jump through the process of applying to a university, paying piles of money, and waiting until the start of a semester to begin learning.
Realism and Rationality
19 · 3d · 19 min read

Format warning: This post has somehow ended up consisting primarily of substantive endnotes. It should be fine to read just the (short) main body without looking at any of the endnotes, though. The endnotes elaborate on various claims and distinctions and also include a much longer discussion of decision theory.

Thank you to Pablo Stafforini, Phil Trammell, Johannes Treutlein, and Max Daniel for comments on an initial draft.

When discussing normative questions, many members of the rationalist community identify as anti-realists. But normative anti-realism seems to me to be in tension with some o

... (Read more)

The quote from Eliezer is consistent with #1, since it's bad to undermine people's ability to achieve their goals.

More generally, you might believe that it's morally normative to promote true beliefs (e.g. because they lead to better outcomes) but not believe that it's epistemically normative, in a realist sense, to do so (e.g. the question I asked above, about whether you "should" have true beliefs even when there are no morally relevant consequences and it doesn't further your goals).

Sayan's Braindump
2 · 14d

What gadgets have improved your productivity?

For example, I started using a stylus a few days ago and realized it can be a great tool for a lot of things!

I have been thinking about these questions a lot without actually reaching anywhere.

What is the nature of non-dual epistemology? What does it mean to 'reason' from the Intentional Stance [], from inside of an agent?

1 · Answer by Teerth Aloke · 8h: In one word: Bicameralism.
The strategy-stealing assumptionΩ
46 · 2d · 11 min read · Ω 17

Suppose that 1% of the world’s resources are controlled by unaligned AI, and 99% of the world’s resources are controlled by humans. We might hope that at least 99% of the universe’s resources end up being used for stuff-humans-like (in expectation).

Jessica Taylor argued for this conclusion in Strategies for Coalitions in Unit-Sum Games: if the humans divide into 99 groups, each of which acquires influence as effectively as the unaligned AI, then by symmetry each group should end up with as much influence as the AI, i.e. the groups should collectively end up with 99% of the influence.
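The symmetry argument reduces to simple bookkeeping. A toy sketch with the post's numbers, under the stated assumption that all 100 actors acquire influence equally effectively:

```python
# 99 human groups plus 1 unaligned AI, all assumed equally effective.
n_human_groups = 99
n_actors = n_human_groups + 1

# By symmetry, each actor expects an equal share of influence.
share_per_actor = 1 / n_actors           # 0.01 per actor
human_share = n_human_groups / n_actors  # 0.99 for the human groups combined
print(share_per_actor, human_share)
```

The interesting question, which the post goes on to examine, is when the "equally effective" assumption actually holds.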

This argument rests on what I... (Read more)

Is the aligned AI literally applying a planning algorithm to the same long-term goal as the unaligned AI, and then translating that plan into a plan for acquiring flexible influence, or is it just generally trying to come up with a plan to acquire flexible influence?

The latter

It is trying to find a strategy that's instrumentally useful for a variety of long-term goals

It's presumably trying to find a strategy that's good for the user, but in the worst case where it understands nothing about the user it still shouldn't do any worse than ... (Read more)

Just Imitate Humans?Ω
12 · 2mo · 1 min read · Ω 8

Do people think we could make a singleton (or achieve global coordination and preventative policing) just by imitating human policies on computers? If so, this seems pretty safe to me.

Some reasons for optimism: 1) these could be run much faster than a human thinks, and 2) we could make very many of them.

Acquiring data: put a group of people in a house with a computer. Show them things (images, videos, audio files, etc.) and give them a chance to respond at the keyboard. Their keyboard actions are the actions, and everything between actions is an observation. Then learn the policy of the group ... (Read more)
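The data-then-imitation loop described above can be caricatured in a few lines. This is only an illustrative sketch: the observations and actions are made up, and a real system would learn the policy with function approximation rather than a lookup table.

```python
from collections import Counter, defaultdict

# (observation, keyboard action) pairs gathered from the human group.
demos = [
    ("prompt_math", "type_answer"),
    ("prompt_math", "type_answer"),
    ("image_cat",   "type_caption"),
    ("prompt_math", "ask_clarification"),
]

# Tally which actions the humans took for each observation.
counts = defaultdict(Counter)
for obs, act in demos:
    counts[obs][act] += 1

def imitation_policy(obs):
    """Return the action the humans most often took for this observation."""
    return counts[obs].most_common(1)[0][0]

print(imitation_policy("prompt_math"))  # "type_answer"
```

The safety argument in the post is precisely that such a policy never does anything the demonstrating humans wouldn't have done, modulo approximation error.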

RE: "Imitation learning considered unsafe?" (I'm the author):

The post can basically be read as arguing that human imitation seems especially likely to produce mesa-optimization.

I agree with your response; this is also why I said: "Mistakes in imitating the human may be relatively harmless; the approximation may be good enough".

I don't agree with your characterization, however. The concern is not that it would have roughly human-like planning, but rather super-human planning (since this is presumably simpler according to most ... (Read more)

1 · michaelcohen · 15h: "Another complication here is that the people trying to build ~AIXI can probably build an economically useful ~AIXI using less compute than you need for ~HSIFAUH (for jobs that don't need to model humans), and start doing their own doublings." Good point. Regarding the other two points, my intuition was that a few dozen people could work out the details satisfactorily in a year. If you don't share this intuition, I'll adjust downward on that. But I don't feel up to putting in those man-hours myself. It seems like there are lots of people without a technical background who are interested in helping avoid AI-based X-risk. Do you think this is a promising enough line of reasoning to be worth some people's time?
Three Stories for How AGI Comes Before FAI
23 · 17h · 5 min read

To do effective differential technological development for AI safety, we'd like to know which combinations of AI insights are more likely to lead to FAI vs UFAI. This is an overarching strategic consideration which feeds into questions like how to think about the value of AI capabilities research.

As far as I can tell, there are actually several different stories for how we may end up with a set of AI insights which makes UFAI more likely than FAI, and these stories aren't entirely compatible with one another.

Note: In this document, when I say "FAI", I mean any superintelligent system which do

... (Read more)

Trying to create an FAI from alchemical components is obviously not the best idea. But it's not totally clear how much of a risk these components pose, because if the components don't work reliably, an AGI built from them may not work well enough to pose a threat.

I think that using alchemical components in a possible FAI can lead to serious risk if the people developing it aren't sufficiently safety-conscious. Suppose that, either implicitly or explicitly, the AGI is structured using alchemical components as follows:

  1. A module for forming beliefs abou
... (Read more)
notjaelkoh's Shortform
1 · 20h

Solving Climate Change/Environmental Degradation (CC/ED)

I treat lobbyists as the root cause of the problem, but CC/ED is probably an unavoidable facet of capitalism. (Marx probably said something about it, idk.)

Stuff that might work:

1. Bringing down the Capitalistic Democratic model of governance. (haha)

Stuff that won't work:

1. A ranking system/app like Facebook that just ranks everyone's ability at stopping lobbyists (this is a horrible example, I just use it because it's so general: you can literally rank anyone's ability at doing anything ... (Read more)

1 · notjaelkoh · 14h: Intuition Pump -> Break down every word of a sentence and "pump" (edit/adjust/change) it to learn something about it: "He hit me when I was eating a piece of bread" -> "She hit her when she was eating 100 pieces of meat." (We adjust the "variables" of the sentence to derive some higher-level meaning, namely some cultural significance in whether female-on-female assault is perceived as worse than male-on-(presumably)male assault.) Ignore the dumb example though.

How do we solve poverty? A good policy (independent of government and dependent on capitalistic greed) is 1) profitable and 2) replicable. Suppose there's a family of 5 people. Call them 1, 2, 3, 4, 5 respectively. 1 is a 10-year-old boy who will die of starvation in 20 days if you don't help him. In most cases, we assume that we should help 1 through charities and whatnot. Unfortunately, 1s die all the time. How do we fix this?

Suppose a greedy businessman meets 1 on the street and offers to feed 1 for 1 year in exchange for 1% of 1's salary for the rest of his life. Is this ethical? Is this legal? Suppose it takes $500 to feed 1 for 1 year. Suppose there's a 50% chance that 1% of 1's lifetime earnings is more than $500 (presumably because 1 is likely to die at ages 11-18).

The first reaction to this (pretty bad) hypothetical situation is disgust and outcry. How could you possibly profit from the poor? Well, in some parts of Asia, sweatshops are already doing some combination of this situation (replace greedy businessman with sweatshop, replace 1 year with 5 years, replace 1% of salary with X amount of work and X% chance of death). Is it not unethical to let 1s without access to sweatshops die? What if we replace GB (greedy businessman) with kind-hearted Bill Gates? What about the Ugandan government? What about Goldman Sachs? What if it was to feed 1's family instead? What if instead of asking for money, you ask that 1 help other 1s in the future?
(Surely the pay it forward system has to be both e
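The hypothetical deal above has a simple break-even point worth making explicit: with a $500 cost and a 1% income share, the businessman only comes out ahead if 1's lifetime earnings exceed $50,000. A quick check using the comment's own numbers:

```python
cost = 500          # dollars to feed 1 for a year, per the hypothetical
share_percent = 1   # the businessman's cut of lifetime earnings, in percent

# Break-even lifetime earnings: cost / (share_percent / 100).
breakeven_earnings = cost * 100 // share_percent
print(breakeven_earnings)  # 50000
```

The "50% chance the share exceeds $500" claim is then a claim that half of the 1s earn more than $50,000 over their lifetimes.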
Effective Altruism and Everyday Decisions
22 · 2d · 1 min read
  • Ask for your drink without a straw.
  • Unplug your microwave when not in use.
  • Bring a water bottle to events.
  • Stop using air conditioning.
  • Choose products that minimize packaging.

I've recently heard people advocate for all of these, generally in the form of "here are small things you can be doing to help the planet." In the EA Facebook group someone asked why we haven't tried to make estimates so we can prioritize among these. Is it more important to reuse containers, or to buy locally made soap?

I think the main reason we haven't put a lot of work into quantifying the impacts of t... (Read more)

I think this ties into the Hansonian view that people don't want A for X, they actually want Y, and that we should design systems that give them Y while appearing to give them X.

Apologies for the generalisation. With regard to your topic: people don't want to A (refuse straws) because of environmental concerns; rather, they actually want to look like they care about the environment, because it's high status.

If one is really concerned with solving the root problems of environmentalism, then we need to somehow make people want to prevent agricult... (Read more)

3 · Donald Hobson · 17h: If we stop doing something that almost all first-world humans are doing (say 1 billion people), then our impact will be about a billionth of the size of the problem. Given the size of impact that an effective altruist can hope to have, this tells us why non-actions don't have super-high utilities in comparison. If there were 100,000 effective altruists (probably an overestimate), this would mean that all effective altruists refraining from doing X would make the problem 0.01% better. Both how hard it is to refrain, and the impact if you manage it, depend on the problem size: all pollution vs. plastic straws.

Assume that this change took only 0.01% of the effective altruists' time (10 seconds per day, 4 of which you are asleep for). Clearly this change has to be something as small as avoiding plastic straws, or smaller. Assume linearity in work and reward, the normal assumption being diminishing returns. This makes the payoff equivalent to all effective altruists working full time on solving the problem, and solving it. Technically, you need to evaluate the marginal value of one more effective altruist. If it was vitally important that someone worked on AI, but you have far more people than you need to do that, and the rest are twiddling their thumbs, get them reusing straws. (Actually, get them looking for other cause areas; reusing straws only makes sense if you are confident that no other priority causes exist.)

Suppose Omega came to you and said that if you started a compostable straw business, there was a 0.001% chance of success, by which Omega means solving the problem without any externalities (the straws are the same price, just as easy to use, don't taste funny, etc.). Otherwise, the business will waste all your time and do nothing. If this doesn't seem like a promising opportunity for effective altruism, don't bother with reusable straws either.
In general the chance of success is 1/( Number of people using plastic straws X Proportion of time wasted av
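The arithmetic in the comment can be laid out explicitly (the head counts are the comment's own rough estimates, not measured figures):

```python
doers = 1_000_000_000   # first-world humans doing the harmful thing
eas = 100_000           # effective altruists who might refrain

# Fraction of the problem removed if every EA refrains: 0.01%.
problem_reduction = eas / doers
print(problem_reduction)  # 0.0001

# 0.01% of a 24-hour day, in seconds: roughly the "10 seconds per day" cost.
seconds_per_day = 0.0001 * 24 * 3600
print(seconds_per_day)  # about 8.64
```

This is why the comment concludes that the effort-to-impact ratio of refraining is comparable to working on the problem directly, under the stated linearity assumption.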
Utility uncertainty vs. expected information gainΩ
6 · 5d · 1 min read · Ω 3

It is a relatively intuitive thought that if a Bayesian agent is uncertain about its utility function, it will act more conservatively until it has a better handle on what its true utility function is.

This might be deeply flawed in a way that I'm not aware of, but I'm going to point out a way in which I think this intuition is slightly flawed. For a Bayesian agent, a natural measure of uncertainty is the entropy of its distribution over utility functions (the distribution over which possible utility function it thinks is the true one). No matter how uncertain a Bayesian agent is abou... (Read more)

It seems this would only be the case if it had a deeper utility function that placed great weight on it 'discovering' its other utility function.

This isn't actually necessary. If it has a prior over utility functions and some way of observing evidence about which one is real, you can construct the policy which maximizes expected utility in the following sense: it imagines a utility function is sampled from the set of possibilities according to its prior probabilities, and it imagines that utility function is what it's scored on. This na... (Read more)
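The construction in that reply can be written out directly. A minimal sketch with made-up numbers (two candidate utility functions, two actions; the prior and scores are illustrative assumptions):

```python
# Prior over which candidate utility function is the "true" one.
prior = {"u1": 0.6, "u2": 0.4}

# Each candidate utility function scores each available action.
utilities = {
    "u1": {"a": 1.0, "b": 0.0},
    "u2": {"a": 0.0, "b": 2.0},
}

def expected_utility(action):
    # Average the candidates' scores, weighted by the prior.
    return sum(p * utilities[u][action] for u, p in prior.items())

best = max(["a", "b"], key=expected_utility)
print(best, expected_utility(best))  # b 0.8
```

Note the chosen action "b" is not the favorite of the most probable utility function; the agent simply maximizes the prior-weighted average, with no extra machinery rewarding it for "discovering" the true function.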

Matt Goldenberg's Short Form Feed
32 · 3mo · 1 min read

Where I write up some small ideas I've been having that may eventually become their own top-level posts. I'll start populating it with a few ideas I've posted as Twitter/Facebook thoughts.

1 · An1lam · 17h: This note won't make sense to anyone who isn't already familiar with the Sociopath framework in which you're discussing this, but I did want to call out that Venkat Rao (at least when he wrote the Gervais Principle) explicitly stated that sociopaths are amoral and has fairly clearly (especially relative to his other opinions) stated that he thinks having more Sociopaths wouldn't be a bad thing. Here are a few quotes from Morality, Compassion, and the Sociopath [] which talk about this:

So yes, this entire edifice I am constructing is a determinedly amoral one. Hitler would count as a sociopath in this sense, but so would Gandhi and Martin Luther King.

In all this, the source of the personality of this archetype is distrust of the group, so I am sticking to the word "sociopath" in this amoral sense. The fact that many readers have automatically conflated the word "sociopath" with "evil" in fact reflects the demonizing tendencies of loser/clueless group morality. The characteristic of these group moralities is automatic distrust of alternative individual moralities. The distrust directed at the sociopath though, is reactionary rather than informed.

Sociopaths can be compassionate because their distrust only extends to groups. They are capable of understanding and empathizing with individual pain and acting with compassion. A sociopath who sets out to be compassionate is strongly limited by two factors: the distrust of groups (and therefore skepticism and distrust of large-scale, organized compassion), and the firm grounding in reality. The second factor allows sociopaths to look unsentimentally at all aspects of reality, including the fact that apparently compassionate actions that make you "feel good" and assuage guilt today may have unintended consequences that actually create more evil in the long term. This is what makes even good sociopaths often seem callous to even those among the c
2 · mr-hire · 16h: Rao's sociopaths are Kegan 4.5; they're nihilistic and aren't good for long-lasting organizations, because they view the notion of organizational goals as nonsensical. I agree that there's no moral bent to them, but if you're trying to create an organization with a goal, they're not useful. Instead, you want an organization that can develop Kegan 5 leaders.
7 · Raemon · 16h: This doesn't seem like it's addressing An1lam's question, though. Gandhi doesn't seem nihilist. I assume (from this quote, which was new to me) that, in Kegan terms, Rao probably meant something ranging from 4.5 to 5.

I think Rao was at Kegan 4.5 when he wrote the sequence and didn't realize Kegan 5 existed. Rao was saying "there's no moral bent" about Kegan 4.5 because he was at the stage of realizing there is no such thing as morals.

At that level you can also view Kegan 4.5's as obviously correct and as the ones who end up moving society forward in interesting directions; they're forces of creative destruction. There's no view of Kegan 5 at that level, so you'll mistake Kegan 5's as either Kegan 3's or other Kegan 4.5... (Read more)
