Shortform Content [Beta]

Exploratory, draft-stage, rough, and rambly thoughts are all welcome on Shortform.

What gadgets have improved your productivity?

For example, I started using a stylus a few days ago and realized it can be a great tool for a lot of things!

Solving Climate Change/Environmental Degradation (CC/ED)

I treat lobbyists as the root cause of the problem, but CC/ED is probably an unavoidable facet of capitalism. (Marx probably said something about it, idk.)

Stuff that might work:

1. Bringing down the capitalist democratic model of governance. (haha)

Stuff that won't work:

1. A ranking system/app like Facebook that just ranks everyone's ability at stopping lobbyists (this is a horrible example; I just use it because it's so general - you can literally rank anyone's ability at doing anything ...

Intuition Pump -> Break down every word of a sentence and "pump" (edit/adjust/change) it to learn something about it:

He hit me when I was eating a piece of bread. -> She hit her when she was eating 100 pieces of meat. (We adjust the "variables" of the sentence to derive some higher-level meaning, namely a cultural question: is female-on-female assault perceived as worse than male-on-(presumably)male assault?)

Ignore the dumb example though. How do we solve poverty?

A good policy (independent of government and dependent o...


The role of the Kegan 5 in a good organization:

1. Reinvent the rules and mission of the organization as the landscape changes, and frame them in a way that makes sense to the Kegan 3s and 4s.

2. Notice when sociopaths are arbitraging the difference between the rules and the terminal goals, and shut it down.


Sociopaths (in the Gervais Principle sense) are powerful because they're Kegan 4.5. They know how to take the realities of Kegan 4's and 3's and deftly manipulate them, forc...

mr-hire (+2, 11h): Rao's sociopaths are Kegan 4.5; they're nihilistic and aren't good for long-lasting organizations because they view the notion of organizational goals as nonsensical. I agree that there's no moral bent to them, but if you're trying to create an organization with a goal, they're not useful. Instead, you want an organization that can develop Kegan 5 leaders.
Raemon (+3, 11h): This doesn't seem like it's addressing An1lam's question, though. Gandhi doesn't seem nihilist. I assume (from this quote, which was new to me) that in Kegan terms, Rao probably meant something ranging from 4.5 to 5.

I think Rao was at Kegan 4.5 when he wrote the sequence and didn't realize Kegan 5 existed. Rao was saying "There's no moral bent" to Kegan 4.5 because he was at the stage of realizing there was no such thing as morals.

At that level you can also view Kegan 4.5's as obviously correct, and as the ones who end up moving society forward in interesting directions; they're forces of creative destruction. There's no view of Kegan 5 at that level, so you'll mistake Kegan 5's for either Kegan 3's or other Kegan 4.5...

(This is a list.)

Since bookmarking comments hasn't been implemented yet, I think I'll put them here.

(Without votes so they don't clog up space on recent discussion.)

Also, comments on these might be better placed on the pages where they originally appear.

Pattern (+0, 8d): []
Pattern (+0, 19d): Sequences: []

There's a phenomenon I currently hypothesize to exist where direct attacks on the problem of AI alignment are criticized much more often than indirect attacks.

If this phenomenon exists, it could be advantageous to the field in the sense that it encourages thinking deeply about the problem before proposing solutions. But it could also be bad because it disincentivizes work on direct attacks on the problem (if one is criticism-averse and would prefer their work be seen as useful).

I have arrived at this hypothesis from my observations: I have watched p...

Did you have some specific cases in mind when writing this? For example, HCH is interesting and not obviously going to fail in the ways that some other proposals I've seen would, and the proposal there seems to have gotten better as more details have been fleshed out, even if there's still some disagreement on things that can be tested eventually even if not yet. Against this we've seen lots of things, like various oracle AI proposals, that to my mind usually have fatal flaws right from the start due to misunderstanding something that they ca...

Raemon (+6, 2d): Nod. This is part of a general problem where vague things that can't be proven not to work are met with less criticism than "concrete enough to be wrong" things. A partial solution is a norm wherein "concrete enough to be wrong" is seen as praise, and something people go out of their way to signal respect for.

Meta-philosophy hypothesis: Philosophy is the process of reifying fuzzy concepts that humans use. By "fuzzy concepts" I mean things where we can say "I know it when I see it." but we might not be able to describe what "it" is.

Examples that I believe support the hypothesis:

  • This shortform is about the philosophy of "philosophy" and this hypothesis is an attempt at an explanation of what we mean by "philosophy".

  • In epistemology, Bayesian epistemology is a hypothesis that explains the process of learning.

  • In ethics, an ethical theory attempts to make e


> "I know it when I see it." but we might not be able to describe what "it" is.

Hard-to-generate, easy-to-verify functions. Related: Gendlin's 'sharp' blank, or a blank that knows what it is looking for, e.g. tip-of-the-tongue phenomena, or forgetting what you're looking for and then remembering when you see it.
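One way to make the "hard to generate, easy to verify" asymmetry concrete is subset-sum, sketched below. This is purely illustrative (the numbers and function names are my own, not from the thread): checking a proposed answer is linear-time, while finding one naively means searching an exponential space.

```python
from itertools import combinations

def verify(nums, subset, target):
    """Easy: check a proposed subset in linear time."""
    return set(subset) <= set(nums) and sum(subset) == target

def generate(nums, target):
    """Hard: brute-force search over all 2^n subsets."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 9, 8, 4, 5, 7]
solution = generate(nums, 15)      # expensive search
print(verify(nums, solution, 15))  # cheap check -> True
```

The same shape appears in the tip-of-the-tongue case: the "blank" can instantly verify a candidate word even though it cannot produce one.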

One of my favorite little tidbits from working on this post: realizing that idea inoculation and the Streisand effect are opposite sides of the same heuristic.

Bubbles in Thingspace

It occurred to me recently that, by analogy with ML, definitions might occasionally be more like "boundaries and scoring-algorithms in thingspace" than clusters per se (messier! no central example! no guaranteed contiguity!). Given the need to coordinate around definitions, most of them are going to have a simple and somewhat-meaningful center... but for some words, I suspect there are dislocated "bubbles" that use the same word for a completely different concept.

Homophones are one of the clearest examples.
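A toy sketch of such a "bubble" (the numbers are invented for illustration): a one-dimensional thingspace where one label occupies two disjoint regions, so its centroid (the would-be central example) falls in neither bubble, while a nearest-neighbor rule respects both.

```python
# Toy 1-D "thingspace": the label "bank" occupies two disjoint bubbles
# (riverbank senses near 0, financial senses near 10).
bank_points = [0.1, 0.3, 0.5, 9.7, 9.9, 10.2]
other_points = [4.8, 5.0, 5.2]   # an unrelated concept in between

# A cluster-style summary puts the "central example" in the gap,
# right on top of the unrelated concept.
centroid = sum(bank_points) / len(bank_points)

def nearest_label(x):
    """A 1-nearest-neighbour boundary handles the bubbles fine."""
    best = min(bank_points + other_points, key=lambda p: abs(p - x))
    return "bank" if best in bank_points else "other"

print(round(centroid, 1))   # ~5.1: nowhere near either bubble
print(nearest_label(9.8))   # -> bank
print(nearest_label(5.1))   # -> other
```

Homophones behave like the two bank-bubbles: same label, disjoint regions, no meaningful center.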

Been mulling over doing a podcast in which each episode is based on acquiring a particular skillset (self-love, focus, making good investments) instead of just interviewing a particular person.

I interview a few people who have a particular skill (e.g. self-love, focus, creating cash flow businesses), and model the cognitive strategies that are common between them. Then interview a few people who struggle a lot with that skill, and model the cognitive strategies that are common between them. Finally, model a few people who used to be bad at the skil...

mr-hire (+2, 2d): I think it's probably likely that gaining knowledge in this way will have systematic biases (OK, this is probably true of all types of knowledge-acquisition strategies, but you pointed out some good ones for this particular knowledge-gathering technique). Anyway, based on my own research (and practical experience over the past few months doing this sort of modelling for people with/without procrastination issues), here are some of the things you can do to reduce the bias:

  • Try to inner-sim using the strategy yourself and see if it works.
  • Model multiple people, and find the strategies that seem to be commonalities.
  • Check for congruence with people as they're talking. Use common indicators of cached answers, like instant answers or lack of emotional charge.
  • Make sure people are embodied in a particular experience as they discuss, rather than trying to "figure themselves out" from the outside.
  • Use introspection tools from a variety of disciplines, like Thinking at the Edge, coherence therapy, etc., that can allow people to get better access to internal models.

All that being said, there will still be bias, but I think with these techniques there's not SO much bias that it's a useless endeavor.
William_Darwin (+2, 2d): Sounds interesting. I think it may be difficult to find a person, let alone multiple people on a given topic, who have a particular skill but are also able to articulate it and/or identify the cognitive strategies they use successfully. Regardless, I'd like to hear about how people reduce repetitive talk in their own heads - how to focus on new thoughts as opposed to old, recurring ones... if that makes sense.

Is this ruminating, AKA repetitively going over bad memories and negative thoughts? Or is it more getting stuck with cached thoughts and not coming up with original things?

I think it's safe to say that many LW readers don't feel like spirituality is a big part of their life, yet many (probably most) people do experience a thing that goes by many names---the inner light, Buddha-nature, shunyata, God---and falls under the heading of "spirituality". If you're not sure what I'm talking about, I'm pointing to a common human experience you aren't having.

Only, I don't think you're not having it, you just don't realize you are having those experiences.

One way some people get in ...

Ben Pace (Moderator Comment, +11):

[Mod note] I thought for a while about how shortform interacts with moderation here. When Ray initially wrote the shortform announcement post, he described the features, goals, and advice for using it, but didn't mention moderation. Let me follow up by saying: you're welcome and encouraged to enforce whatever moderation guidelines you choose to set on shortform, using tools like comment removal, user bans, and such. As a reminder, see the FAQ section on moderation for instructions on how to use the mod tools. Do whatever you want to help you think your tho...

G Gordon Worley III (+7, 5d): So, having a little more space from all this now, I'll say that I'm hesitant to try to provide justifications for two reasons. First, certain parts of the argument require explaining complex internal models of human minds that are a level more complex than I can explain even though I'm using them (I only seem to be able to interpret myself coherently one level of organization less than the maximum level of organization present in my mind). Second, other parts of the argument require gnosis of certain insights that I (and, to the best of my knowledge, no one) know how to readily convey without hundreds to thousands of hours of meditation and one-on-one interaction (though I do know a few people who continue to hope they may yet discover a way to make that kind of thing scalable, even though we haven't figured it out in 2500 years, maybe because we were missing something important).

So it is true that I can't provide adequate episteme of my claim, and maybe that's what you're reacting to. I don't consider this a problem, but I also recognize that within some parts of the rationalist community it is considered a problem (I model you as being one such person, Duncan). Given that, I can see why from your point of view it looks like I'm just making stuff up, or worse, since I can't offer "justified belief" that you'd accept as "justified". And I'm not much interested, in this particular case, in changing your mind, as I don't yet completely know how to generate that change in stance towards epistemology in others, even though I encountered evidence that led me to that conclusion myself.
Vaniver (+23, 4d): There's a dynamic here that I think is somewhat important: socially recognized gnosis. That is, contemporary American society views doctors as knowing things that laypeople don't know, and views physicists as knowing things that laypeople don't know, and so on.

Suppose a doctor examines a person and says "ah, they have condition X," and Amy responds with "why do you say that?", and the doctor responds with "sorry, I don't think I can generate a short enough explanation that is understandable to you." It seems like the doctor's response to Amy is 'socially justified', in that the doctor won't really lose points for referring to a pre-existing distinction between those-in-the-know and laypeople (except maybe for doing it rudely or gracelessly). There's an important sense in which society understands that it in fact takes many years of focused study to become a physicist, and physicists should not be constrained by 'immediate public justification' or something similar.

But then there's a social question of how to grant that status. One might imagine that we want astronomers to be able to do their astronomy and have their unintelligibility be respected, while we don't want to respect the unintelligibility of astrologers.

So far I've been talking 'nationally' or 'globally', but I think a similar question holds locally. Do we want it to be the case that 'rationalists as a whole' think that meditators have gnosis and that this is respectable, or do we want 'rationalists as a whole' to think that any such respect is provisional or 'at individual discretion' or a mistake? That is, when you say:

> I don't consider this a problem, but I also recognize that within some parts of the rationalist community that is considered a problem (I model you as being one such person, Duncan).

I feel hopeful that we can settle whether or not this is a problem (or at least achieve much more mutual understanding and clarity).
> So it is true that I can't provide adequate episteme of my claim,

I seem to differently discount different parts of what I want. For example, I'm somewhat willing to postpone fun to low-probability high-fun futures, whereas I'm not willing to do the same with romance.

I keep seeing these articles about the introduction of artificial intelligence/data science to football and basketball strategy. What's crazy to me is that it's happening now instead of much, much earlier. The book Moneyball was published in 2003 (the movie in 2011), spreading the story of how the use of statistics changed the game when it came to every aspect of managing a baseball team. After reading it, I and many others thought to ourselves "this would be cool to do in other sports" - using data would be interesting in every area of every...


Part of the problem is that applying those insights in a way that beats trained humans is hard: until recently, the models couldn't handle all the variables and data humans could, and so ignored many things that made a difference. Now that more data can be fed into the models, they can make the same or better predictions than humans and thus stand a chance of outperforming them, rather than making "correct" but poorly informed decisions that, in the real world, would have lost games.

hereisonehand (+2, 5d): Another weird takeaway is the timeline. I think my intuition whenever I hear about a good idea currently happening is that because it's happening right now, it's probably too late for me to get in on it at all because everyone already knows about it. I think that intuition is overweighted. If there's a spectrum from ideas being fully saturated to completely empty of people working on them, when good ideas break in the news they are probably closer to the latter than I give them credit for being. At least, I need to update in that direction.
JustMaier (+1, 5d): I think this is caused by the fact that we lack tooling to adequately assess the amount of free energy available in new markets sparked by new ideas. Currently it seems the only gauge we have is media attention and investment announcements. Taking the time to assess an opportunity is operationally expensive, and I think I've optimized to accept that there's probably little opportunity given that everyone else is observing the same thing. However, I'm not sure that it makes sense to adjust my optimization without first increasing my efficiency in assessing opportunities.

If you deal with some of your problems by distracting yourself, then as long as you have those problems you'll be distracted. You can do most of the other stuff you want to do, even while being distracted. But there are some things you can't do while distracted, like some kinds of intellectual work.

Pattern (+1, 9d): Can't you distract yourself with intellectual work?
jimrandomh (+2, 8d): In theory you might, but in practice you can't. Distraction-avoidant behavior favors things that you can get into quickly, on the order of seconds - things like checking for Facebook notifications, or starting a game which has a very fast load time. Most intellectual work has a spin-up, while you recreate mental context, before it provides rewards, so distraction-avoidant behavior doesn't choose it.

Hmmm... I think personal experience tells me that distraction-avoidant behaviour will still choose intellectual work, as long as it is quicker than the alternative.

I might choose a game over writing a LW shortform but I will still choose a LW shortform over writing a novel.

Many biohacking guides suggest using melatonin. Does liquid melatonin spoil at high temperature if put in tea (95 degrees Celsius)?

More general question: how do I even find answers to questions like this one?

When I did a quick Google search, I started with:

"melatonin stability temperature"



From a quick flick through a few abstracts, I can't see anything involving temperatures higher than 37 °C, i.e. body temperature.

Melatonin is not actually a protein (it's a small indoleamine molecule derived from tryptophan), so the rule of thumb that many proteins denature above 41 °C doesn't directly apply; the relevant question is whether the molecule itself degrades at high temperatures.

My (jumped to) conclusion:

No specific data found.

Melatonin may not be stable at high temperatures, so avoid putting it in hot tea.

I would appreciate an option to hide the number of votes that posts have. Maybe not hide entirely, but set them to only display at the bottom of a post, and not at the top nor on the front page. With the way votes are currently displayed, I think I'm getting biased for/against certain posts before I even read them, just based on the number of votes they have.

Raemon (+3, 3d): ah, whoops.
habryka (+3, 3d): Yeah, this was originally known as the "Anti-Kibitzer" on the old LessWrong. It isn't something we prioritized, but I think GreaterWrong has an implementation of it. Though it would also be pretty easy to create a Stylish script for it (this hides it on the frontpage, and makes the color white on the post-page, requiring you to select the text to see the score): []
> pretty easy to create a Stylish script for it

Oh, good idea! I don't have Stylish installed, but I have something similar, and I was able to hide it that way. Thanks!
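For reference, a user-style of the kind habryka describes might look roughly like this sketch. The selector names here are hypothetical; the real LessWrong class names would need to be read out of the page source.

```css
/* Hypothetical selectors, not the actual LessWrong class names. */
/* Hide karma entirely on the frontpage listing. */
.frontpage .karma-score { display: none; }
/* On post pages, render the score white-on-white so it is
   only visible when the text is selected. */
.post-page .karma-score { color: white; }
```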

Epistemic status: Thinking out loud.

Introducing the Question

Scientific puzzle I notice I'm quite confused about: what's going on with the relationship between thinking and the brain's energy consumption?

On one hand, I'd always been told that thinking harder sadly doesn't burn more energy than normal activity. I believed that, and had even come up with a plausible story about how evolution optimizes for genetic fitness, not intelligence, and introspective access is pretty bad as it is, so it's not that surprising that we can't crank up our brain's energy con


A competition on solving math problems via AI is coming.

  • The problems are from the International Mathematical Olympiad (IMO).
  • They want to formalize all the problems in the language of Lean (a theorem prover). They haven't figured out how to do all of that, e.g. how to formalize problems of the form "determine the set of objects satisfying the given property", as can be seen in
  • A contestant must submit a computer program that will take a problem's descriptio
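The formalization difficulty can be illustrated with a toy sketch (Lean 4 with mathlib assumed; the statements are my own illustration, not from the competition):

```lean
import Mathlib.Tactic

-- A plain "prove this statement" problem is straightforward to state:
theorem sum_sq_nonneg (a b : ℤ) : 0 ≤ a ^ 2 + b ^ 2 := by positivity

-- A "determine the set of objects..." problem is awkward: the answer
-- set S must itself appear in the formal statement, together with a
-- proof that it is exactly the solution set, e.g.
--   theorem solution_set_eq : {x : ℤ | P x} = S := ...
-- so a program cannot just "prove the statement"; it must also
-- produce S before there is anything to prove.
```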
An1lam (+2, 3d): Can you quantify soon :) ? For example, I'd be willing to bet at 1/3 odds that this will be solved in the next 10 years conditional on a certain amount of effort being put in, and more like 1/1 odds for the next 20 years. It's hard to quantify the conditional piece, but I'd cash it out as something like: if researchers put in the same amount of effort into this that they put into NLP/image recognition benchmarks. I don't think that'll happen, so this is purely a counterfactual claim, but maybe it will help ground any subsequent discussion with some sort of concrete claim?
Matthew Barnett (+5, 3d): By soon I mean 5 years. Interestingly, I have a slightly higher probability that it will be solved within 20 years, which highlights the difficulty of saying ambiguous things like "soon."

That is interesting! I should be clear that my odds ratios are pretty tentative given the uncertainty around the challenge. For example, I literally woke up this morning and thought that my 1/3 odds might be too conservative given recent progress on 8th grade science tests and theorem proving.

I created three PredictionBook predictions to track this if anyone's interested (5 years, 10 years, 20 years).
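For the odds quoted in this thread, assuming "1/3 odds" is read as 1:3 for:against (my reading, not stated explicitly above), the implied probabilities work out as in this small sketch:

```python
def odds_to_prob(for_, against):
    """Odds of for:against imply probability for / (for + against)."""
    return for_ / (for_ + against)

print(odds_to_prob(1, 3))  # "1/3 odds" -> 0.25
print(odds_to_prob(1, 1))  # "1/1 odds" -> 0.5
```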

Converting this from a Facebook comment to LW Shortform.

A friend complains about recruiters who send repeated emails saying things like "just bumping this to the top of your inbox" when they have no right to be trying to prioritize their emails over everything else my friend might be receiving from friends, travel plans, etc. The truth is they're simply paid to spam.

Some discussion of repeated messaging behavior ensued. These are my thoughts:

I feel conflicted about repeatedly messaging people. All the following being factors in this conflict...

Eliezer has written about the notion of security mindset, and there's an important idea that attaches to that phrase, which some people have an intuitive sense of and ability to recognize, but I don't think Eliezer's post quite captured the essence of the idea, or presented anything like a usable roadmap of how to acquire it.

An1lam's recent shortform post talked about the distinction between engineering mindset and scientist mindset, and I realized that, with the exception of Eliezer and perhaps a few people he works closely with, all o...

Wei_Dai (+3, 3d): Can you give some specific examples of me having security mindset, and why they count as having security mindset? I'm actually not entirely sure what it is or that I have it, and would be hard pressed to come up with such examples myself. (I'm pretty sure I have what Eliezer calls "ordinary paranoia" at least, but am confused/skeptical about "deep security".)

Sure, but let me clarify that I'm probably not drawing as hard a boundary between "ordinary paranoia" and "deep security" as I should be. I think Bruce Schneier's and Eliezer's buckets for "security mindset" blended together in the months since I read both posts. Also, re-reading the logistic success curve post reminded me that Eliezer calls into question whether someone who lacks security mindset can identify people who have it. So it's worth noting that my ability to identify people with security mindset is itself suspect by this criteria (there's no pub

Wei_Dai (+5, 3d): Combining hash functions is actually trickier than it looks, and some people are doing research in this area and deploying solutions. See [] and []. It does seem that if cryptography people had more of a security mindset (that are not being defeated) then there would be more research and deployment of this already.
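As a concrete illustration of why combining is subtle, here is the naive concatenation combiner (my own sketch, not from the papers linked above): it is collision-resistant as long as either component is, but Joux's multicollision attack shows that concatenating two iterated (Merkle-Damgard) hashes adds less security than the doubled output length suggests.

```python
import hashlib

def concat_combiner(data: bytes) -> bytes:
    """Naive combiner: concatenate SHA-256 and SHA3-256 digests.
    A collision for the pair requires colliding both hashes at once,
    but for iterated hashes, multicollisions make this combiner only
    about as strong as its stronger component, not their sum."""
    return hashlib.sha256(data).digest() + hashlib.sha3_256(data).digest()

digest = concat_combiner(b"hello")
print(len(digest))  # 64: two 32-byte digests side by side
```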