Shortform Content [Beta]

AllAmericanBreakfast's Shortform

A celebrity is someone famous for being famous.

Is a rationalist someone famous for being rational? Someone who’s leveraged their reputation to gain privileged access to opportunity, other people’s money, credit, credence, prestige?

Are there any arenas of life where reputation-building is not a heavy determinant of success?

A physicist is someone who is interested in and studies physics.

A rationalist is someone who is interested in and studies rationality.

2Viliam8hA rationalist is someone who can talk rationally about rationality, I guess. :P

One difference between rationality and fame is that you need some rationality in order to recognize and appreciate rationality, while fame can be recognized and admired also (especially?) by people who are not famous. Therefore, rationality has a limited audience.

Suppose you have a rationalist who "wins at life". How would a non-rational audience perceive them? Probably as someone "successful", which is a broad category that also includes e.g. lottery winners. Even people famous for being smart, such as Einstein, are probably perceived as "being right" rather than as being good at updating, research, or designing experiments.

A rationalist can admire another rationalist's ability to change their mind. And also "winning at life", to the degree we can control for their circumstances (privilege and luck), so that we can be confident it is not mere "success" we admire, but rather "success disproportionate to resources and luck". This would require either that the rationalist celebrity regularly publishes their thought processes, or that you know them personally. Either way, you need lots of data about how they actually succeeded.

You could become a millionaire by buying Bitcoin anonymously, so that would be one example. Depends on what precisely you mean by "success": is it something like "doing/getting X" or rather "being recognized as X"? The latter is inherently social; the former you can often achieve without anyone knowing about it. Sometimes it is easier to achieve things if you don't want to take credit; for example, if you need the cooperation of a powerful person, it can be useful to convince them that X was actually their idea. Or you can have the power, but live in the shadows, while other people are in the spotlight, and only they know that they actually take commands from you.
To be more specific, I think you could make a lot of money by learning something like programming, getting
4AllAmericanBreakfast4hCertainly it is possible to find success in some areas anonymously. No argument with you there!

I view LW-style rationality as a community of practice, a culture of people aggregating, transmitting, and extending knowledge about how to think rationally. As in "The Secret of Our Success," we don't accomplish this by independently inventing the techniques we need to do our work. We accomplish this primarily by sharing knowledge that already exists.

Another insight from TSOOS is that people use prestige as a guide for who they should imitate. So rationalists tend to respect people with a reputation for rationality. But what if a reputation for rationality can be cultivated separately from tangible accomplishments? In fact, prestige is already one step removed from the tangible accomplishments. But how do we know if somebody is prestigious? Perhaps a reputation can be built not by gaining the respect of others through a track record of tangible accomplishments, but by persuading others that:

a) You are widely respected by other people whom they haven't met, or by anonymous people they cannot identify, making them feel behind the times, out of the loop.

b) The basis on which people conventionally allocate prestige is flawed, and they should do it differently, in a way that is favorable to you, making them feel conformist or conservative.

c) Other people's track records of tangible accomplishments are in fact worthless, because they lack the incredible value of the project that the reputation-builder is "working on," or are suspect in terms of their actual utility. This makes people insecure.

d) They can participate in the incredible value you are generating by evangelizing your concept, and thereby evangelizing you. Or of course, just by donating money. This makes people feel a sense of meaning and purpose.

I could think of other strategies for building hype.
One is to participate in cooperative games, whereb
AllAmericanBreakfast's Shortform

Thinking, Too Fast and Too Slow

I've noticed that there are two important failure modes in studying for my classes.

Too Fast: This is when learning breaks down because I'm trying to read, write, compute, or connect concepts too quickly.

Too Slow: This is when learning fails, or just proceeds too inefficiently, because I'm being too cautious, obsessing over words, trying to remember too many details, etc.

One hypothesis is that there's some speed of activity that's ideal for any given person, depending on the subject matter and their current level of comfort wi... (read more)

Alex Ray's Shortform

1. What am I missing from church?

(Or, in general, by lacking a religious/spiritual practice I share with others)

For the past few months I've been thinking about this question.

I haven't regularly attended church in over ten years.  Given how prevalent it is as part of human existence, and how much I have changed in a decade, it seems like "trying it out" or experimenting is at least somewhat warranted.

I predict that there is a church in my city that is culturally compatible with me.

Compatible means a lot of things, but mostly means that I'm better off ... (read more)

jacobjacob's Shortform Feed

Very uncertain about this, but makes an interesting claim about a relation between temperature, caffeine and RSI: 

Troy Macedon's Shortform

What are the odds that an immortal is experiencing youthhood?

There's a temporal self-location issue with being immortal. Since an immortal lives for an infinite amount of time, almost all of his experiences are in the year infinity AD. This implies that you're not an immortal, since your age is around 30yo, not infinity. The odds of you having your experience right now, as a relatively youthful man, are basically 0 if almost all of your experiences are in the infinite future. It seems like an immortal should always be at some unknown age; his boyhood so far ... (read more)

Pontor's Shortform

Schelling talks about “the right to be sued” as an important right that businesses need to protect for themselves, not because anyone likes being sued, but because only businesses that can be sued if they slip up have enough credibility to attract customers.

-- Scott Alexander

I think about this every couple of weeks. Seems deep and underappreciated.

15Zack_M_Davis7hI would recommend actually reading Schelling's The Strategy of Conflict. Popularizers like Alexander do great and important work in exposing the highlights to a large audience, but there's no real substitute for getting the details from the primary literature.

Seconded. Also Skyrms' "Evolution of the Social Contract".

Measure's Shortform

Is there a name for this mental state I sometimes find myself in after reading for a while? My breathing slows, I start to feel very "focused", and my physical perceptions start to blur. The book or screen I'm reading seems simultaneously both small/near and huge/far, my limbs feel simultaneously heavy and weightless, and time seems to be simultaneously slowing down and rushing forward. Has anyone here experienced this, and do you know what it's called?

You're describing the state of Flow.

3Kaj_Sotala11hCould it be a form of hyperfocus?
3avturchin13hMaybe, trance?
ricraz's Shortform

Oracle-genie-sovereign is a really useful distinction that I think I (and probably many others) have avoided using mainly because "genie" sounds unprofessional/unacademic. This is a real shame, and a good lesson for future terminology.

After rereading the chapter in Superintelligence, it seems to me that "genie" captures something akin to act-based agents. Do you think that's the main way to use this concept in the current state of the field, or do you have other applications in mind?

2DanielFilan1dPerhaps the lesson is that terminology that is acceptable in one field (in this case philosophy) might not be suitable in another (in this case machine learning).
2adamShimi1dIs that from Superintelligence? I googled it, and that was the most convincing result.
ricraz's Shortform

I suspect that AIXI is misleading to think about in large part because it lacks reusable parameters - instead it just memorises all inputs it's seen so far. Which means the setup doesn't have episodes, or a training/deployment distinction; nor is any behaviour actually "reinforced".

2DanielFilan2dWell now I'm less sure that it's incorrect. I was originally imagining that like in Solomonoff induction, the TMs basically directly controlled AIXI's actions, but that's not right: there's an expectimax. And if the TMs reinforce actions by shaping the rewards, in the AIXI formalism you learn that immediately and throw out those TMs.

Oh, actually, you're right (that you were wrong). I think I made the same mistake in my previous comment. Good catch.

Troy Macedon's Shortform

Proposing a "Law Law": applying the Law of Identity to decision-making.

The law of identity is descriptive. It states that some thing is described as itself: X = X. A valid description describes itself.

I think there's a prescriptive analog as well: a Law Law. It would state that some decision-making framework must output itself along with its answer. A valid prescription prescribes itself. Otherwise, it's necessarily a wrong decision-making framework. No further research into the proposed prescription would be needed!

An obvious example of a prescription tha... (read more)

2Pattern4dWe shouldn't be using should statements. (And yet we are.) The statement can only be made if it isn't being followed - where's the paradox? For comparison: A library has a sign which says "No talking in the library." Someone talks in the library. Someone goes "Shhh!". "Why?" A librarian says "No talking in the library."
1Troy Macedon4dLibrarians are allowed to talk. To correct your analogy: it would be as if no one was talking and then a patron suddenly told another patron in the library "No talking in the library," out loud, by talking. When you introduce different reference classes like that, you have to be careful because of implicit assertions. For example: a robot vegan cafe where there's a sign "No meat allowed," and a human, who is composed of meat, walks in without breaking the rules.

I'm saying the rules differ from how they are said - and the apparent conflict results from the difference.

AllAmericanBreakfast's Shortform

The Rationalist Move Club

Imagine that the Bay Area rationalist community did all want to move. But no individual was sure enough that others wanted to move to invest energy in making plans for a move. Nobody acts like they want to move, and the move never happens.

Individuals are often willing to take some level of risk and make some sacrifice up-front for a collective goal with big payoffs. But not too much, and not forever. It's hard to gauge true levels of interest based off attendance at a few planning meetings.

Maybe one way to solve this is to ask for ... (read more)

Matt Goldenberg's Short Form Feed

Alright, now somebody needs to write the "Pain is a contextually useful unit of effort whose value varies depending on your situation, genetics, and upbringing" post.

I sort of want to create a gpt-3 bot that automatically does this for any X is Good or X is Bad post.

TurnTrout's shortform feed

Over the last 2.5 years, I've read a lot of math textbooks. Not using Anki / spaced repetition systems over that time has been an enormous mistake. My factual recall seems worse-than-average among my peers, but when supplemented with Anki, it's far better than average (hence, I was able to learn 2000+ Japanese characters in 90 days, in college). 

I considered using Anki for math in early 2018, but I dismissed it quickly because I hadn't had good experience using that application for things which weren't languages. I should have at least tried to see if... (read more)

Alex Ray's Shortform

Thinking more about the singleton risk / global stable totalitarian government risk from Bostrom's Superintelligence, human factors, and theory of the firm.

Human factors represent human capacities or limits that are unlikely to change in the short term.  For example, the number of people one can "know" (for some definition of that term), limits to long-term and working memory, etc.

Theory of the firm tries to answer "why are economies markets but businesses autocracies" and related questions.  I'm interested in the subquestion of "what factors giv... (read more)

4MakoYass3dDid Bostrom ever call it singleton risk? My understanding is that it's not clear that a singleton is more of an x-risk than its negation: a liberal multipolar situation under which many kinds of defecting/crony factions can continuously arise.

I don't know if he used that phrasing, but he's definitely talked about the risks (and advantages) posed by singletons.

MikkW's Shortform

I'm quite baffled by the lack of response to my recent question asking which AI-researching companies are good to invest in (as in, would have good impact, not necessarily be most profitable). It indicates either A) most LW'ers aren't investing in stocks (which is a stupid thing not to be doing), or B) they are investing in stocks, but aren't trying to think carefully about what impact their actions have on the world, and their own future happiness (which indicates a massive failure of rationality)

Even putting this aside, the fact that nobody jumped at the c... (read more)

2MakoYass3dI don't understand. What do you mean by contextualizing?
5John_Maxwell2moFor what it's worth, I get frustrated by people not responding to my posts/comments on LW all the time. This post was my attempt at a constructive response to that frustration. I think if LW was a bit livelier I might replace all my social media use with it. I tried to do my part to make it lively by reading and leaving comments a lot for a while, but eventually gave up.
DanielFilan's Shortform Feed

A rough and dirty estimate of the COVID externality of visiting your family in the USA for Christmas when you don't feel ill:

You incur some number of μCOVIDs[*] a week, let's call it x. Since the incubation time is about 5 days, let's say that your chance of having COVID is about 5x/7,000,000 when you arrive at the home of your family with n other people. In-house attack rate is about 1/3, I estimate based off hazy recollections, so in expectation you infect 5xn/21,000,000 people, which is about xn/4,000,000 people.

How bad is it to infect one family member... (read more)

Note: this calculation only accounts for you infecting your relatives who then infect others, and not your relatives infecting you and you infecting others. Accounting for this should probably raise the cost by a factor of 2.

2DanielFilan3dNote: this calculation assumes that travelling is not risky at all. Realistically that should be bundled into x.
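The back-of-the-envelope estimate above can be sketched in a few lines of code. This is a rough illustration of the comment's own numbers (5-day incubation, ~1/3 in-house attack rate, "hazy recollections"), not epidemiological advice; the function name and example inputs are invented for illustration.

```python
# Rough sketch of the expected-infections estimate from the comment above.
# Constants (5-day incubation, ~1/3 in-house attack rate) are the comment's
# hazy figures, not authoritative epidemiology.

def expected_family_infections(microcovids_per_week: float, n_family: int) -> float:
    """Expected number of family members you infect during the visit."""
    # ~5/7 of a week's exposure is still incubating when you arrive;
    # each microCOVID is a one-in-a-million chance of infection.
    p_arrive_infected = (5 / 7) * microcovids_per_week / 1_000_000
    attack_rate = 1 / 3  # in-house attack rate, per the comment
    return p_arrive_infected * attack_rate * n_family

# Example: 100 microCOVIDs/week, visiting a household with 4 other people.
risk = expected_family_infections(100, 4)  # roughly 100 * 4 / 4_000_000
```

Per the replies, doubling the result approximately accounts for the reverse direction (relatives infecting you, and you infecting others), and travel risk should be folded into the weekly μCOVID figure.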
Troy Macedon's Shortform

How to simulate observers without getting simulated yourself.

A common argument for the simulation hypothesis is that if we simulate too many observers relative to our unsimulated population, then we'll end up in a state where most observers are simulated; therefore, by the self-indication assumption, we're already in a situation of being simulated ourselves! I'll refer to this as the simulation tragedy.

 I'll introduce a few methods we can use to avoid such an outcome. Why? Because I believe it's better to be unsimulated than simulated. The goal h... (read more)

For a while, we've been exploring a similar question but more in the direction of pre-committing to giving simulants better lives, rather than just not bringing them into existence:

Trivially, if we prevent simulatees from using anthropic reasoning, or any method of self-location, then the only thing you'll need to do to ensure your status as a nonsimulatee is to just self-locate every once in a while.

Doesn't that protocol just allow some people to prov... (read more)

Rafael Harth's Shortform

More on expectations leading to unhappiness: I think the most important instance of this in my life has been the following pattern.

  • I do a thing where there is some kind of feedback mechanism
  • The reception is better than I expected, sometimes by a lot
    • I'm quite happy about this, for a day or so
    • I immediately and unconsciously update my standards upward to consider the reception the new normal
  • I do a comparable thing, the reception is worse than the previous time
    • I brood over this failure for several days, usually with a major loss of productivity

O... (read more)


I hope you are trying to understand the causes of the success (including luck) instead of just mindlessly following a reward signal. Not even rats mindlessly obey reward signals.

2Viliam4dThere are relative differences in both poor and rich countries; people anywhere can imagine what it would be like to live like their more successful neighbors. But maybe the belief in social mobility makes it worse, because it feels like you could be one of those on the top. (What's your excuse for not making a startup and selling it for $1M two years later?)

I don't have a TV and I use ad-blockers online, so I have no idea what a typical experience looks like. The little experience I have suggests that TV ads are about "desirable" things, but online ads mostly... try to make you buy some unappealing thing by telling you a thousand times that you should buy it. Although once in a while they choose something that you actually want, and then the thousand reminders can be quite painful.

People in poor countries probably spend much less time watching ads.
3Troy Macedon4dYou touched on a good point. There seems to be tension between expecting what your life could be (like in the movies), vs expecting what your self could be (like a genius). When those two don't match up you get issues. Advertising seems to be about trying to define what is "good" and what is "bad" in the audience's psyche. It's why Mercedes ads still get shown in poor neighborhoods. Those ads aren't there for the poor to buy a Mercedes. They're there to remind the poor that a Mercedes is "good," so when they see a Mercedes owner, that association follows and benefits said owner.
sguyrep's Shortform

Testing and Quarantine

Toon Alfrink's sketchpad

Here's an idea: we hold the Ideological Turing Test (ITT) world championship. Candidates compete to pass as broad a range of views as possible.

Points awarded for passing a test are commensurate with the number of people that subscribe to the view. You can subscribe to a bunch of them at once.

The awarding of failures and passes is done anonymously. Points can be awarded partially, according to what % of judges give a pass.

The winner is made president (or something)
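The proposed scoring rule can be sketched as code. This is a minimal illustration under the assumptions above (score for a view = subscriber count times the fraction of judges who award a pass, summed over attempted views); the view names, subscriber counts, and pass fractions are all made up.

```python
# Minimal sketch of the proposed ITT scoring rule: points for a view are
# proportional to how many people subscribe to it, scaled by the fraction
# of judges awarding a pass. All numbers below are invented for illustration.

def itt_score(results: dict[str, tuple[int, float]]) -> float:
    """Sum of subscribers * pass_fraction over all views attempted."""
    return sum(subs * pass_frac for subs, pass_frac in results.values())

candidate = {
    "view_A": (1_000_000, 0.8),  # 1M subscribers, 80% of judges pass
    "view_B": (250_000, 0.5),    # 250k subscribers, split judges
}
total = itt_score(candidate)  # 800_000 + 125_000 = 925_000
```

Weighting by subscriber count rewards passing widely held views, while the partial-pass fraction implements the anonymous judging described above.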


Why aren't presidential races already essentially ITT Tournaments? It would seem like that skill would make you really good at drawing support from lots of different demographics.

2ChristianKl4dPrizes are not constraints for running tournaments, and treating them like they are makes it harder to bring a tournament to reality.
2mr-hire3dI guess it depends on why you want to run the tournament.