All of toonalfrink's Comments + Replies

Toon Alfrink's sketchpad

Cause X candidate:

Buddhists claim that they can put brains in a global maximum of happiness, called enlightenment. Assuming that EA aims to maximize happiness plain and simple, this claim should be taken seriously. It currently takes decades for most people to reach an enlightened state. If some sort of medical intervention can reduce this to mere months, this might drive mass adoption and create a huge amount of utility.

Sabien on "work-life" balance

"Whereas if you're good at your work and you think that your job is important, there's an intervening layer or three—I'm doing X because it unblocks Y, and that will lead to Z, and Z is good for the world in ways I care about, and also it earns me $ and I can spend $ on stuff..."


Yes, initially there might be a few layers, but there's also the experience of being really good at what you do, being in flow, at which point Y and Z just kind of dissolve into X, making X feel valuable in itself, like jumping on a trampoline.

Seems like this friend wants to be in this... (read more)

Toon Alfrink's sketchpad

A thought experiment: would you A) murder 100 babies or B) murder 100 babies? You have to choose!

The consequentialist case for social conservatism, or “Against Cultural Superstimuli”

Sidestepping the politics here: I've personally found that avoiding (super)stimuli for a week or so, either by not using any electronic devices or by going on a meditation retreat, tends to be extremely effective in increasing my ability to regulate my emotions. Semi-permanently.

I have no substitute for it, it's my panacea against cognitive dissonance and mental issues of any form. This makes me wonder: why aren't we focusing more on this from an applied rationality point of view? 

Kenny (+1, 3mo): I think this basic idea – 'avoid superstimuli' – is in fact focused-on pretty reliably! What makes you think "we" aren't focusing on it as much as "we" should? (And who's "we" exactly?)

Forcing yourself to keep your identity small is self-harm

This seems to be a fully general counterargument against any kind of advice.
As in: "Don't say 'do X' because I might want to do not X which will give me cognitive dissonance which is bad"

You seem to essentially be affirming the Zen concept that any kind of "do X" will imply that X is better than not X, i.e. a dualistic thought pattern, which is the precondition for suffering.

But besides that idea I don't really see how this post adds anything. 

Not to mention that identity tends to already be an instance of "X is better than not X". Paul Graham is sayi... (read more)

Well, I don't know if I'd call it a fully general argument against taking any kind of advice. I'd say it's more like a fully general argument against directly trying to optimize for the thing being measured, i.e. an argument that you should avoid Goodhart's curse.

As to the post not adding anything, I guess that's true in a certain sense, but I find many people, myself included, are kinda bad at taking general advice and successfully applying it in all specific situations, so it's often helpful to take something general and consider specific instances of... (read more)

RSS Feeds are fixed and should be properly functional this time

Since this is the first Google result and seems out of date, how do we get the RSS link nowadays?

habryka (+2, 5mo): In the left sidebar menu, click on Subscribe (RSS/Email).

Toon Alfrink's sketchpad

I may have finally figured out the use of crypto.

It's not currency per se, but the essential use case of crypto seems to be to automate the third party.

This "third party" can be many things. It can be a securities dealer or broker. It can be a notary. It can be a judge that is practicing contract law.

Whenever a third party somehow allows coordination to take place, and the particular case doesn't require anything but mechanical work, crypto can do it better.

A securities dealer or broker doesn't beat a protocol that matches buyers and sel... (read more)
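
To make the broker example concrete, here is a minimal sketch of a purely mechanical matching rule, the kind of "third party" work that could in principle be encoded in a contract. All names, prices, and the OrderBook structure are made up for illustration, not any real exchange's or chain's API.

```python
# Toy order-matching engine: the "third party" (broker/exchange) reduced to a
# mechanical rule. Purely illustrative, not a real trading system.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Order:
    trader: str
    price: float   # limit price
    qty: int

@dataclass
class OrderBook:
    bids: List[Order] = field(default_factory=list)   # buy orders
    asks: List[Order] = field(default_factory=list)   # sell orders

    def match(self) -> List[Tuple[str, str, float, int]]:
        """Cross the book: highest bid against lowest ask, as long as they overlap."""
        trades = []
        self.bids.sort(key=lambda o: -o.price)
        self.asks.sort(key=lambda o: o.price)
        while self.bids and self.asks and self.bids[0].price >= self.asks[0].price:
            bid, ask = self.bids[0], self.asks[0]
            qty = min(bid.qty, ask.qty)
            trades.append((bid.trader, ask.trader, ask.price, qty))
            bid.qty -= qty
            ask.qty -= qty
            if bid.qty == 0:
                self.bids.pop(0)
            if ask.qty == 0:
                self.asks.pop(0)
        return trades

book = OrderBook()
book.bids.append(Order("alice", price=101.0, qty=10))
book.asks.append(Order("bob", price=100.0, qty=6))
print(book.match())  # [('alice', 'bob', 100.0, 6)] -- no discretionary middleman needed
```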

Ego syntonic thoughts and values

I was in this "narcissist mini-cycle" for many years. Many google searches and no luck. I can't believe that I finally found someone who recognizes it. Thank you so much.

fwiw, what got me out of it was to attend a Zen temple for 3 months or so. This didn't make me less narcissistic, but somehow gave me the stamina to actually achieve something that befit my inflated expectations, and now I just refer back to those achievements to quell my need for greatness. At least while I work on lowering my expectations.

Babies and Bunnies: A Caution About Evo-Psych

It does not, but consider 2 adaptations:
A: responds to babies and more strongly to bunnies
B: responds to babies only

B would seem more adaptive. Why didn't humans evolve it?

Plausible explanation: A is simpler and therefore more likely to result from a random DNA fluctuation.
Is anyone doing research into which kinds of adaptations are more likely to appear like this?

When Money Is Abundant, Knowledge Is The Real Wealth

Can you come up with an example that isn't AI? Most fields aren't rife with infohazards, and 20% certainty of funding the best research will just divide your impact by a factor of 5, which could still be good enough if you've got millions.

For what it's worth, given the scenario that you've at least got enough to fund multiple AI researchers and your goal is purely to fix AI, I concede your point.

johnswentworth (+4, 6mo): How about cancer research? This page [https://projectalpha.mit.edu/pos/] lists rate of success of clinical trials in different subfields; oncology clinical trials have a success rate of around 4%. I would also guess that a large chunk of the "successes" in fact do basically-nothing and made it through largely by being the one-in-twenty which hit 95% significance by chance, or managed to p-hack, or the like. From an inside view, most cancer research I've seen indeed looks pretty unhelpful based on my understanding of biology and how-science-works in general (and this goes double for any cancer research "using machine learning", which is a hot subfield).

More generally: we live in a high-dimensional world [https://www.lesswrong.com/posts/pT48swb8LoPowiAzR/everyday-lessons-from-high-dimensional-optimization]. Figuring out "which direction to search in" is usually a much more taut constraint than having the resources to search. Brute-force searching a high-dimensional space requires resources exponential in the dimension of the space. Combine that with misaligned incentives for researchers, and our default expectation should usually be that finding the right researchers to fund is more of a constraint than resources.

When Money Is Abundant, Knowledge Is The Real Wealth

I don't like this post because it ignores that instead of yachts you can simply buy knowledge for money. There's plenty of research that isn't happening because it isn't being funded.

johnswentworth (+2, 6mo): This runs into the same problem: unless you're already an expert, you can't distinguish actually-useful research from the piles of completely useless research (much of it by relatively high-status researchers).

An example close to LW's original goals: imagine an EA five years ago, wanting to donate to research on safe/friendly AI. They hear somebody argue about how important it is for AI research to be open-source so that the benefits of AI can be reaped by everyone. They're convinced, and donate to a group trying to create widely-available versions of cutting-edge algorithms. From an X-risk standpoint, they've probably done close-to-nothing at best, and there's an argument to be made that their impact was net harmful. One needs to already have some amount of expertise in order to distinguish useful research to fund.

Great minds might not think alike

A Shor-to-Constance translation would be lossy because the latter language is not as expressive or precise as the former.

Great minds might not think alike

Great work. 

I wonder just how far this concept can be stretched. Is focusing a translation from the part of you that thinks in feelings to the part of you that thinks in words? If you're translating some philosophical idea into math, are you just translating from the language of one culture to the language of another?

And if so, it strikes me that some languages are more effective than others. Constance may have had better ideas, but if Shor knew the same stuff as Constance (in his own language) perhaps he would have done better. Shor's language seems ... (read more)

Toon Alfrink's sketchpad

Personalized mythic-mode rendition of Goodhart's law:

"Everyone wants to be a powerful uncompromising force for good, but spill a little good and you become a powerful uncompromising force for evil"

How to Ignore Your Emotions (while also thinking you're awesome at emotions)

The parent-child model is my cornerstone of healthy emotional processing. I'd like to add that a child often doesn't need much more than your attention. This is one analogy for why meditation works: you just sit down for a while and you just listen.

The monks in my local monastery often quip about "sitting in a cave for 30 years", which is their suggested treatment for someone who is particularly deluded. This implies a model of emotional processing which I cannot stress enough: you can only get in the way. Take all distractions away from someone and t... (read more)

Toon Alfrink's sketchpad

Look, if you can't appreciate the idea because you don't like its delivery, you're throwing away a lot of information.

ChristianKl (−1, 8mo): Giving specific criticisms to proposals that people make isn't throwing away information.

Toon Alfrink's sketchpad

It's supposed to read like "this idea is highly unpolished"

ChristianKl (−5, 9mo)

Toon Alfrink's sketchpad

Here's an idea: we hold the Ideological Turing Test (ITT) world championship. Candidates compete to pass as broad a range of views as possible.

Points awarded for passing a test are commensurate with the number of people who subscribe to the view. You can subscribe to a bunch of them at once.

The awarding of failures and passes is done anonymously. Points can be awarded partially, according to what % of judges give a pass.

The winner is made president (or something).
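
To make the scoring rule concrete, here is a minimal sketch under the assumptions above (score for a view = subscribers to that view × fraction of judges who gave a pass); the function name, views, and numbers are all hypothetical.

```python
# Toy scoring for the proposed ITT championship:
# a candidate's score for a view = (subscribers to that view) * (fraction of judges who passed it).

def itt_score(attempts, subscribers):
    """attempts: {view: [bool, ...]} judge verdicts for each view the candidate attempted.
    subscribers: {view: int} how many people hold each view."""
    total = 0.0
    for view, verdicts in attempts.items():
        pass_fraction = sum(verdicts) / len(verdicts)
        total += subscribers[view] * pass_fraction
    return total

subscribers = {"view_A": 1_000_000, "view_B": 50_000}
candidate = {"view_A": [True, True, False, True],   # 75% of judges gave a pass
             "view_B": [True, True, True, True]}    # unanimous pass
print(itt_score(candidate, subscribers))  # 800000.0 = 1e6*0.75 + 5e4*1.0
```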

MakoYass (+2, 8mo): Why aren't presidential races already essentially ITT Tournaments? It would seem like that skill would make you really good at drawing support from lots of different demographics.

ChristianKl (−3, 9mo): That basically reads like "just so you know, I'm kidding".

Sunny's Shortform

It might be hard to take a normative stance, but if culture 1 makes you feel better AND leads to better results AND helps people individuate and makes adults out of them, then maybe it's just, y'know, better. Not "better" in the naive mistake-theorist assumption that there is such a thing as a moral truth, but "better" in the correct conflict-theorist assumption that it just suits you and me and we will exert our power to make it more widely adopted, for the sake of us and our enlightened ideals.

Toon Alfrink's sketchpad

Case study: A simple algorithm for fixing motivation

So here I was, trying to read through an online course to learn about cloud computing, but I wasn't really absorbing any of it. No motivation.

Motives are a chain, ending in a terminal goal. Lack of motivation meant that my System 1 did not believe what I was doing would lead to achieving any terminal goal. The chain was broken.

So I traversed the chain to see which link was broken.

  • Why was I doing the online course? Because I want to become better at my job.
    • Do I still think doing the online course wil
... (read more)
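
A minimal sketch of that chain-traversal, assuming the chain is just an ordered list of links and that some introspective check stands in for "does System 1 still buy this link?"; the link strings and the endorses stub are hypothetical.

```python
# Walk the motivation chain from the immediate action toward the terminal goal,
# and report the first link that System 1 no longer believes in.

def find_broken_link(chain, endorses):
    """chain: list of 'X -> Y' link descriptions, ordered from the concrete
    action up to the terminal goal.
    endorses: callable standing in for the gut-level check 'do I still buy this link?'"""
    for i, link in enumerate(chain):
        if not endorses(link):
            return i, link
    return None  # chain intact; lack of motivation lies elsewhere

chain = [
    "doing the online course -> getting better at my job",
    "getting better at my job -> earning/securing resources",
    "resources -> living with close friends in a vibrant community",  # terminal-ish goal
]

# Introspection stub: here we pretend the first link is the one that broke.
broken = find_broken_link(chain, endorses=lambda link: "online course" not in link)
print(broken)  # (0, 'doing the online course -> getting better at my job')
```
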
Matt Goldenberg (+3, 1y): Another thing you can do when you get to that top level: "Is this the best way to get that? (living with close friends at the center of a vibrant community), if not, what is?"

Toon Alfrink's sketchpad

Question for the Kegan levels folks: I've noticed that I tend to regress to level 3 if I enter new environments that I don't fully understand yet, and that this tends to cause mental issues because I don't always have the affirmative social environment that level 3 needs. Do you relate? How do you deal with this?

Matt Goldenberg (+4, 1y): By dealing with trauma and taking shame and guilt as object. By incorporating the need for belonging into my systemic understanding. I had this for a long time but it's pretty rare now.

Toon Alfrink's sketchpad

As someone who never came across religion before adulthood, I've been trying to figure it out. Some of its claims seem pretty damn nonsensical, and yet some of its adherents seem pretty damn well-adjusted and happy. The latter means there's gotta be some value in there.

The most important takeaway so far is that religious claims make much more sense if you interpret them as phenomenological claims. Claims about the mind. When Buddhists talk about the 6 worlds, they talk about 6 states of mood. When Christians talk about a coven... (read more)

MikkW (+1, 2mo): My shortform on religion being about belief taxes [https://www.lesswrong.com/posts/D4dAic97EWM9S7Lnu/mikkw-s-shortform?commentId=Y8EH7y5GZ5zHbsHtQ] may interest you

Toon Alfrink's sketchpad

Have you tried opiates? You don't need to be in pain for them to make you feel great.

Swimmer963 (+4, 1y): I had an above knee amputation due to cancer in March and have been on opiates – several different kinds, less over time, sometimes when I was in a lot of pain and sometimes more prophylactically when I'm not in pain but am preparing for something I expect to be painful. I mostly hate the experience of being on them, especially the "high" if I take it before I'm actually in pain from physiotherapy or whatever. (I do appreciate being in less pain. Pain is bad.)

I...guess it's interesting and I could see a different person liking the experience? I get a lot of dissociative effects, especially with the IV opiates they gave me in hospital. (Feeling like I'm floating above my body, feeling like I don't have free will and am just watching my actions happen from a distance.) I don't particularly enjoy this. They also make me feel tired and out of it / cognitively impaired, and I am really, really averse to that. I ended up drinking so much coffee in the hospital trying to fight this off.

My guess is that brains vary and some people would experience this as "feeling great". (I've noticed this with other things like stimulants; I really like how coffee makes me feel, for example, but I know a lot of people who experience it as anxiety/unpleasant jitteriness.)

toonalfrink (+2, 1y): As for being on ibogaine, a high dose isn't fun for sure, but microdoses are close to neutral and their therapeutic value makes them net positive

Toon Alfrink's sketchpad

Ibogaine seems to reset opiate withdrawal. There are many stories of people with 20-year-old heroin addictions being cured within one session.

If this is true, and there are no drawbacks, then we basically have access to wireheading. A happiness silver bullet. It would be the hack of the century. Distributing ibogaine + opiates would be the best known mental health intervention by orders of magnitude.

Of course, that's only if there are no unforeseen caveats. Still, why isn't everybody talking about this?

romeostevensit (+4, 1y): Anti-tolerance drugs seem neglected, tractable, and scalable. We've done some shallow investigation at QRI and think it is pretty promising. Have been keeping it as a bullet point as we ask around in funding and academic circles. It's an area that could use a dedicated effort for sure.

Swimmer963 (+4, 1y): Based on a quick glance at the Wikipedia page, it looks like ibogaine may have a significant risk of toxicity (and also the experience of being on it does not sound necessarily fun? I would not choose to take it): https://en.wikipedia.org/wiki/Ibogaine

Also, I think this would rely on opiates being a pleasure-causing experience for everyone or almost everyone, which doesn't seem obviously true to me. (Source: recently had major surgery, had experience of various opiates including given by IV, kind of hate all of them except for the part where they result in less physical pain.)

Paul Crowley's Shortform

Did Dominic Cummings in fact try a "Less Wrong approach" to policy making? If so, how did it fail, and how can we learn from it? (if not, ignore this)

Toon Alfrink's sketchpad

I did all the epistemic virtue. I rid myself of my ingroup bias. I ventured out on my own. I generated independent answers to everything. I went and understood the outgroup. I immersed myself in lots of cultures that win at something, and I've found useful extracts everywhere.

And now I'm alone. I don't fully relate to anyone in how I see the world, and it feels like the inferential distance between me and everyone else is ever increasing. I've lost motivation for deep friendships, it just doesn't seem compatible with learning new t... (read more)

Eli Tyre (+2, 1y): I have often felt similarly.

romeostevensit (+2, 1y): The sense of ungroundedness sine waves over time afaict. The old strategies for connection had untenable foundations (e.g. tacit shared metaphysical assumptions), so you'll need to learn new ones. The Charisma Myth and NVC are good for bootstrapping some of the skills. Motivation in the new regime can't arise because you don't have even the proto-skills necessary to get a rewarding feedback loop going yet.

Dagon (+4, 1y): Can't speak for anyone else, and I don't know what my counterfactual selves feel like. https://www.lesswrong.com/posts/6NvbSwuSAooQxxf7f/beware-of-other-optimizing - I don't know if you and I are similar in ways that matter on this topic. In fact, I don't know what mental features are important for how to optimize on this topic. Anyway, this is not advice, simply a framing that works for me.

For me, I believe it's worth it. The tunnel widens a lot, and has LOTS of interesting features in it, but it does not end - there is a fairly fundamental truth underlying that loneliness, and I don't know of any acceptable ways for me to deny or forget that truth (to myself). I've become hyper-aware of the complexity and variance in humanity, and in myself moment-to-moment and year to year. This makes me quite able to have deep connections with many people, EVEN WHILE understanding that they model the universe differently than I on many dimensions. We can't have and don't need agreement on everything, or even on ontologically fundamental topics. We can agree that sitting around a campfire talking about our human experiences is desirable, and that's enough. With other groups, I can explore moral philosophy without a realism assumption, even if I don't particularly want to hang out with them on less intellectual topics.

Is competition good?

I don't recall; this is one of those concepts that you kind of assemble out of a bunch of conversations with people who already presuppose it.

What Money Cannot Buy

Here's another: probing into their argument structure a bit and checking if they can keep it from collapsing under its own weight.

https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies

What Money Cannot Buy

Probably the skill of discerning skill would be easier to learn than... every single skill you're trying to discern.

Conflict vs. mistake in non-zero-sum games

"The outgroup is evil, not negotiating in good faith, and it's an error to give them an inch. Conflict theory is the correct one for this decision."

Which outgroup? Which decision? Are you saying this is universally true?

Dagon (+3, 1y): [note: written awhile ago, and six votes netting to zero indicate that it's at best a non-helpful comment] For some outgroups and decisions, this applies. It doesn't need to be universal, only exploitable.

Often mistake theory is helpful in identifying acceptable compromises and maintaining future cooperation. Occasionally, mistake theory opens you to disaster. You shouldn't bias toward one or the other, you should evaluate which one has the most likely decent outcomes.

Also, I keep meaning to introduce "incompetence theory" (or maybe "negligence theory") - some outgroups aren't malicious and aren't so diametrically opposed to your goals that it's an intentional conflict, but they're just bad at thinking and can't be trusted to cooperate.

"human connection" as collaborative epistemics

Forgive me for stating things more strongly than I mean them. It’s a bad habit of mine.

I’m coming from the assumption that people are much more like Vulcans than we give them credit for. Feelings are optimizers. People who do things that aren’t in line with their stated goals aren’t always biased. In many cases they misstate their goals but don’t actually fail to achieve them.

See my last shortform for more on this.

Pattern (+1, 2y): The one about lookup tables?

Toon Alfrink's sketchpad

So here are two extremes. One is that human beings are a complete lookup table. The other is that human beings are perfect agents with just one goal. Most likely both are somewhat true. We have subagents that are more like the latter, and subsystems more like the former.

But the emphasis on "we're just a bunch of hardcoded heuristics" is making us stop looking for agency where there is in fact agency. Take for example romantic feelings. People tend to regard them as completely unpredictable, but it is actually possible to predict to some extent whether y

... (read more)
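
A toy sketch of the two extremes, assuming we caricature them as a stimulus-response table versus a single-goal optimizer; the class names, observations, and goal function are made up for illustration.

```python
# Two toy extremes for modeling a person: a pure lookup table (stimulus -> response)
# versus a pure agent (pick whichever action scores best on a single goal).

class LookupTableAgent:
    def __init__(self, table):
        self.table = table                      # hardcoded heuristics

    def act(self, observation):
        return self.table.get(observation, "default_reaction")

class GoalDirectedAgent:
    def __init__(self, goal, actions):
        self.goal = goal                        # scores how well an action serves the goal
        self.actions = actions

    def act(self, observation):
        return max(self.actions, key=lambda a: self.goal(observation, a))

reflex = LookupTableAgent({"sees_cute_person": "heart_rate_up"})
planner = GoalDirectedAgent(goal=lambda obs, a: 1.0 if a == "ask_them_out" else 0.0,
                            actions=["ask_them_out", "look_away"])
print(reflex.act("sees_cute_person"), planner.act("sees_cute_person"))
```
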
Dagon (+2, 2y): These aren't contradictory or extremes of a continuum, they're different levels of description of agency. A complete enough lookup table is indistinguishable from having goals. A deep enough neural net (say, a brain) is a pretty complete lookup table. The "one goal" idea is a slightly confused modeling level - "goal" isn't really unitary or defined well enough to say whether a set of desires is one conjunctive goal or many coordinated and weighted goals.

[anonymous] (+5, 2y): Neither one of those assumptions is true. There’s a lot we don’t know about neuroscience, but we do know that we’re not giant lookup tables, and we don’t have singular goals.

Moloch feeds on opportunity

"I am aware that confessing to this in most places would be seen as a huge social faux pas, I'm hoping LW will be more understanding."

You're good. You're just confessing something that is true for most of us anyway.

Where I have a big disagreement is in the lesson to take from this. Your argument is that we should essentially try to turn off status as a motivator. I would suggest it would be wiser to try to better align status motivations with the things we actually value.

Up to a point. It is certainly true that status motivations have led to g... (read more)

Bucky (+4, 2y): I think that’s a good explanation. I agree that the solution to Akrasia I describe is kind of hacked together and is far from ideal. If you have a better solution to this I would be very interested and it would change my attitude to status significantly. I suspect that this is the largest inferential gap you would have to cross to get your point across to me, although as I mentioned I’m not sure how central I am as an example.

I’m not sure suffering is the correct frame here - I don’t really feel like Akrasia causes me to suffer. If I give in then I feel a bit disappointed with myself but the agent which wants me to be a better person isn’t very emotional (which I think is part of the problem). Again there may be an inferential gap here.

Toon Alfrink's sketchpad

Right, right. So there is a correlation.

I'll just say that there is no reason to believe that this correlation is very strong.

I once won a Mario Kart tournament without feeling my hands.

Give praise

"People generally only discuss 'status' when they're feeling a lack of it"

While this has been true for other posts that I wrote about the subject, this post was actually written from a very peaceful, happy, almost sage-like state of mind, so if you read it that way you'll get closer to what I was trying to say :)

Give praise

I appreciate your review.

Most of your review assumes that my intent was to promote praise regardless of honesty, but quite the opposite is true. My intent was for people to pause, take a breath, think for a few moments about what good things others are doing, and then thank them for it, but only if they felt compelled to do so.

Or I'll put it this way: it's not about pretending to like things, it's about putting more attention to the things about others that you already like. It's about gratefulness, good faith and recognition. It's abou... (read more)

Toon Alfrink's sketchpad

I have gripes with EAs who try to argue about which animals have consciousness. They assume way too readily that consciousness and valence can be inferred from behavior at all.

It seems quite obvious to me that these people equate their ability to empathize with an animal with the ability for the animal to be conscious, and it seems quite obvious to me that this is a case of mind projection fallacy. Empathy is just a simulation. You can't actually see another mind.

If you're going to make guesses about whether a species is conscious, you sho... (read more)

Matthew Barnett (+4, 2y): Not to be pedantic, but what else could consciousness possibly be, except for a way of describing the behavior of some object at a high level of abstraction? If consciousness was not a behavior, but instead was some intrinsic property of a system, then you run into the exact same argument [https://plato.stanford.edu/entries/zombies/] that David Chalmers uses to argue that philosophical zombies are conceivable. This argument was forcefully rebutted [https://www.lesswrong.com/posts/7DmA3yWwa6AT5jFXt/zombies-redacted] in the sequences.

ETA: When I say behavior, I mean it in the physical sense. A human who is paralyzed but nonetheless conscious would not be behaviorally identical to a dead human. Superficially yes, but behavior means more than seeing what goes on outside. While you might say that I'm simply using a different definition of behavior than you were, I think it's still relevant, because any evolutionary reason for consciousness must necessarily show up in observational behavior, or else there is no benefit and we have a mystery.

Isnasene (+2, 2y): I think people who refer to animal behavior in making statements about consciousness are making a claim more along the lines of "given that a being has a brain with superficial similarities to ours and was evolved via a process similar to our own evolution, we can take its behavior as higher level indicators of what its brain is doing and infer things about consciousness." Otherwise, these people would also grant consciousness to all sorts of things we make with superficially human behavior but obviously different mechanisms (ie non-playable characters in MMOs, chatbots).

I read a lot more about consciousness back in the day and I'm not convinced that neural correlates are any better evidence for consciousness than behavior, given that the beings we're considering already have brains. I'm no expert, but per Wikipedia on neural correlates of consciousness, we don't have much in terms of neural correlates. Per Open Philanthropy's 2017 report [https://www.openphilanthropy.org/2017-report-consciousness-and-moral-patienthood] on consciousness, on cortex-requiring views (CRVs) we're not really sure how important having a cortex is for consciousness. And from the same report, there aren't really any clear biological factors that can be used to draw lines about consciousness.

Moreover, people who have done way more thorough research into correlates of consciousness [https://www.rethinkpriorities.org/blog/2019/6/13/invertebrate-sentience-table] than me use both (ie anatomical features [https://forum.effectivealtruism.org/posts/HrkBavxo4eLwK9C9s/features-relevant-to-invertebrate-sentience-part-1] as an example of neural correlates, motivational trade-offs [https://forum.effectivealtruism.org/posts/QNEEwERsuAd5ockbQ/features-relevant-to-invertebrate-sentience-part-2] as an example of behavior). Given that animals already have a bunch of similarities to humans, it strikes me as a mistake not to consider behavior at all.

Dagon (+2, 2y): A reductio ad absurdum for this is the strong skeptical position: I have no particular reason to believe that anything is conscious. All configurations of quantum space are equally valuable, and any division into "entities" with different amounts of moral weight is ridiculous.

Lanrian (+2, 2y): The strong version of this can't be true. You claiming that you're conscious is part of your behaviour. Hopefully, it's approximately true that you would claim that you're conscious iff you believe that you're conscious. If behaviour doesn't at all correlate with consciousness, it follows that your belief in consciousness doesn't at all correlate with you being conscious. Which is a reductio, because the whole point with having beliefs is to correlate them with the truth.

Raemon's Shortform

I'm looking forward to a bookshelf with LW review books in my living room. If nothing else, the very least this will give us is legitimacy, and legitimacy can lead to many good things.

Hazard (+5, 2y): +1 excitement about bookshelves :)

Affordance Widths

To me, the most useful part of this post is that it introduces this idea that affordances are personal, i.e. some people are allowed to do X while others are not. I like to see this as part of the pervasive social machinery that is Omega.

I imagine people of a certain political background wanting to sneer at me, as in, "why did it take someone in your in-group to tell you this?"

To which I admit that, indeed, I should have listened. But I suppose I didn't (enough), and now I did, so here we are with a post that made my worldview more empathetic. The bottom line is what matters.

Transhumanism as Simplified Humanism

This post has been my go-to definition of Transhumanism ever since I first read it.

It's hard to put into words why I think it has so much merit. To me it just powerfully articulates something that I hold as self-evident, that I wish others would recognize as self-evident too.

Tradition is Smarter Than You Are

To me, this is exactly what the LW community (and the broader progressive tribe surrounding it) needs to hear. This post, along with other developments of thought in the same direction, has caused a major shift in how I think about changing things.

The first quote is most important, and I find myself using it quite often if I'm met with a person my age (or even older) that dismisses a tradition as obviously dumb. Why do you think the tradition exists in the first place? If you don't know, how can you be so sure it doesn't serve some function?... (read more)

The Tails Coming Apart As Metaphor For Life

I still find myself using this metaphor a lot in conversations. That's a good benchmark for usefulness.

The Costly Coordination Mechanism of Common Knowledge

I could write a paragraph to explain some concept underlying a decision I made. OR there could be a word for this concept, in which case I can just use the word. But I can't use that word if it's not commonly understood.

The set of things that are common knowledge in a group of people is the group's epistemic starting point. Imagine you had to explain your niche ideas about AI without using any concepts invented after 1900. You'd be speaking to toddlers.

I needed "common knowledge" to be common knowledge. It is part of our skill of upgrading skills. It's at the core of group rationality.

A voting theory primer for rationalists

This post introduced me to a whole new way of thinking about institutional/agency design. Most of the merit, for me, was in pointing out that this field exists. The subject is close to one of the core subjects of my thoughts, which is how to design institutions that align selfish behavior with altruistic outcomes at different hierarchical levels, from the international to the cultural, national, communal, relational, and as far down as the subagent level.

Ben Pace (+2, 2y): Note that this is just a nomination, there'll be a whole month for reviews. But the nomination deadline is Monday, so get them in quick! :)

Is daily caffeine consumption beneficial to productivity?

I don't think this is the right question to ask. Even if the net alertness gain of a cup of coffee is 0, it is still worth consuming during moments when alertness is worth more, and abstaining during moments when relaxation is worth more. Net alertness is not net EV.
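
A toy illustration with made-up numbers: even if the alertness gain and the later dip cancel out (net alertness 0), timing the gain to a moment when alertness is worth more still gives positive expected value. The time slots and values below are hypothetical.

```python
# Hypothetical numbers: coffee adds +1 alertness during a morning meeting and
# costs -1 alertness during an afternoon walk (net alertness change = 0),
# but alertness is worth more during the meeting than during the walk.

alertness_change = {"morning_meeting": +1.0, "afternoon_walk": -1.0}
value_of_alertness = {"morning_meeting": 3.0, "afternoon_walk": 1.0}  # utils per unit

net_alertness = sum(alertness_change.values())                                        # 0.0
net_ev = sum(alertness_change[t] * value_of_alertness[t] for t in alertness_change)   # +2.0
print(net_alertness, net_ev)
```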

ChristianKl (+4, 2y): There are different ways to consume caffeine. A lot of people consume caffeine as part of their morning routine. It's useful to know whether or not that increases productivity even when there are also other ways to consume caffeine.

RAISE post-mortem

Good catch, fixed it.

100x is obviously a figure of speech. I'd love to see someone do some research into this and publish the actual numbers.

moses (+4, 2y): The number could easily be infinity; I have no problem imagining that most people have zero positive impact for more than half the years of their careers (even the ones that end up having some positive impact overall)

RAISE post-mortem

I suppose I was naive about the amount of work that goes into creating an online course. I had been a student assistant where my professor would meet with me and the other assistants to plan the entirety of the course a day before it started. Of course this was different because there was already a syllabus and the topic was well understood and well demarcated.

Also, I had visited Berkeley around that time, and word was out about a new prediction that the singularity was only 15 years ahead. I felt like I had no choice but to try and do something. Start mov... (read more)

Pablo (+8, 2y): Can you say more about this?