All of ZeitPolizei's Comments + Replies

The preprint for the article on cognitive decline due to long COVID was shared in the LessWrong Telegram group last October. I looked over it at the time and wrote down some notes, which I will reproduce here. Note I haven't looked through the final version of the article to check if things still match up.

I skimmed the article a bit, some things I noted:
- N=84,285 is the number of people who took the cognitive tests. Only 361 had a positive corona test
- In the abstract they say they controlled for age, gender, education level, income, racial-ethnic group a

... (read more)

I would not interpret your case as severe according to this grade. The specification "some assistance usually required" seems like they mean your reaction is so bad you need help eating/washing/using the toilet, which I assume was not the case for you. While -- especially for a young healthy person -- staying in bed all day is a "marked reduction" of activity, there's still quite some room for further worsening before you're at a point where it's life threatening.

While the wording could be clearer, if my interpretation is correct I would agree this is an OK grading to use.

What about Israel? The R-value is the highest since the first wave despite ~60% vaccinated, with afaict mostly mRNA vaccines: OurWorldInData link.

Luckily, hospitalizations/deaths also appear not to be strongly affected.

The main advantage for underage drinking is that a bartender only has to check the birth date on the ID, whereas for self-exclusion, they would have to check the ID against a database, or there would have to be some kind of icon on the ID.

In principle, I guess you could also think about low-tech solutions. For example, people who want to opt out of alcohol might have some slowly dissolving tattoo / dye placed somewhere on their hand or something. This would eliminate the need for any extra ID checks, but has the big disadvantage it would be visible most of the time.

What was wrong with the original plan to open source the vaccine to any company that wanted to make it and have them compete to scale up production?

Bill Gates himself addresses this question here:

The reasoning as I understand it: Vaccine production is too complicated for open access to work well. There is a significant risk that something goes wrong and the vaccine factory has to shut down, so it is better for that factory to be producing a vaccine they know exactly how to do. Oxford partnering with AstraZeneca ensures th... (read more)

Some information regarding the start and end times:

  • Lunch + Shuttle (included) 30.08. 12:00-14:00 @ Akazienstraße 27, 10823 Berlin
  • Regular check in 15:00-16:00 @ Badeweg 1, 14129 Berlin
  • Check out on 02.09. until 10:00
  • Meeting Rooms are free to use until 15:00 on 02.09 for co-working after the event
Most importantly, this framing is always about drawing contrasts: you're describing ways that your culture _differs_ from that of the person you're talking to. Keep this point in the forefront of your mind every time you use this method: you are describing _their_ culture, not just yours. [...] So, do not ever say something like "In my culture we do not punish the innocent" unless you also intend to say "Your culture punishes the innocent" -- that is, unless you intend to start a fight.

Does this also apply to your own personal... (read more)

Does this also apply to your own personal culture (whether aspiring or as-is), or "just" the broader context culture?

We're talking about a tool for communicating with many different people with many different cultures, and with people whose cultures you don't necessarily know very much about. So the bit you quoted isn't just making claims about my culture, or even one of the (many) broader context cultures, it's making claims about the correct prior over all such cultures.

But what claims exactly? I intended these two:

  1. When you say, "In my culture X",
... (read more)

Oooooh, I like this a lot. In particular, this resolves for me a bit of tension about why I liked the above comment and also disagreed with it—you've helped me split those reactions out into two different buckets. Seems relevant to common-knowledge-type stacks as well.

test comment

I agree in principle, though this depends of course on how far in advance it is announced. If it's reasonable to expect that it's possible to fill 20 slots with 2 months' advance notice, this gives more flexibility in planning.

"Retreat" in the sense of a spiritual retreat, but with the topic of rationality instead of meditation or spirituality. Following the same principle as, e.g. the Czech EA retreat.

"Rationality" as it is generally understood on LessWrong. So this is aimed at people who aspire to be more rational and want to interact with like-minded people.

Good point! I hadn't really thought of Facebook and the local groups for advertising.

[Note: mostly just me trying to order my thoughts, kind of hoping someone can see and tell me where my confusion comes from]

So the key insight regarding suffering seems to be that pain is not equal to suffering. Instead there is a mental motion (flinching away from pain) that produces (or is equal to?) suffering. And whereas most people see pain as intrinsically bad, Looking allows you to differentiate between the pain and the flinching away, realizing that pain in and of itself is not bad. It also allows you to get rid of the flinching away, thus eliminat... (read more)

Eisher Saroya (5y):
I wonder about this too. If there is pleasure and the mental experience of welcoming a pleasure, then what happens if you stop 'welcoming a pleasure'? Wouldn't you no longer be motivated to pursue pleasure? How would you ever feel happy? Would pleasure feel 'bland' or unsatisfying? I also wonder if it's possible to mistakenly decouple pleasure and 'welcoming a pleasure' without ever meditating?
In this post [], Nate Soares outlines a thing he calls "Moving Towards the Goal", which feels incredibly relevant to this conversation. I'd highly recommend Nate's Replacing Guilt [] sequence. In a very concrete, "traditional LW" way, he lays out how you can still do cool stuff, yet not think in terms of shoulds, guilt, or intrinsically good or bad.
I'm confused about what you mean by "intrinsically bad" here, and especially given the relationship of the second question to the first question, suspect that your concept of "intrinsically bad" conflates at least two things. Your second question is much easier to answer: yes, you can defuse from flinching, and yes, that makes it less unpleasant. Yes, there is a mental motion of welcoming an experience, and you can do it to any experience, not just pleasurable ones; you can even find joy in welcoming any experience, not just pleasurable ones.

I am still confused about what you mean by "intrinsically good." Because you want to. (I'm not sure how to explain what I mean by this. For me the internal experience of "I want this" is quite different from the experience of "I am chasing after this in order to escape from pain / suffering," but the distinction may not be experientially clear for many / most people.)

Yes, lots. I used to flinch away from pain constantly; I do it less now, which means I'm more free to do things that I want to do, like make music and hug people and generally flourish and encourage human flourishing. Also, I increasingly suspect you have some confusion wrapped up in your concept of "intrinsically good / bad."

Nope. Uh... all of... the other ones? I'm confused about what you mean by this and what it would mean to answer this question, mostly because I don't know what you mean by "matter."

Yes, I think so too. Can you try paying a lot of attention to what comes up when you think about the concept of "intrinsically good" or "intrinsically bad" (edit: also "suffering" and "mattering") and just write down literally everything that pops into your head, including words or sentences that sound outrageous or too dramatic or whatever?
Another theme of the book that reaches its crescendo…

The paragraph beginning with this sentence is duplicated and butchered.

Charlie Steiner (5y):
Thanks, fixed! I probably tried to edit the post on mobile.

I haven't really understood where the fakeness in the framework is. And the other comments also seem not to acknowledge that it is a fake framework, which I am interpreting as people taking this framework at face value to be true or real. I suspect I haven't quite understood what is meant by "fake framework".

I'm currently seeing two main ways in which I can make the fakeness make sense to me:

  1. People do step out of their roles quite often in real life, breaking the expectations of the web. So the framework works better for broad strokes predicti
... (read more)
I haven't really understood where the fakeness in the framework is.

Well, by my model of epistemic hygiene, it's therefore especially important to label it "fake" as you step into using it. Otherwise you risk forgetting that it's an interpretation you're adding, and when you can't notice the interpretations you're adding anymore then you have a much harder time Looking at what's true.

In my usage, "fake" doesn't necessarily mean "wrong". It means something more like "illusory". T... (read more)

I would say that it's fake in that we're not literally actors playing roles while being unaware of it, events in our life are not literally scenes in a play, etc. Lots of metaphors that are easy to understand and which import useful reasoning rules [] from the domain of theater, but the things that they refer to are probably implemented quite differently in brains.
[comment deleted] (5y)

Ah, you seem to automatically interpret "math is useless" as meaning "math is useless to me". But people can also mean − and that's what I was trying to get at − that "math is of no use for anything, to anyone". This would be the "X is good" being threatened, as Kaj pointed out.

I still don't find this threatening. It's clearly false and I'm not worried about more people believing it; furthermore, again, even if they did, it would only hurt them, not me.

Is there a difference between what you are describing and simply having a more or less nuanced view on the matter? It seems like you're confirming exactly what Paul Graham describes. You've made your identity as a mathematician smaller and are thus no longer threatened by people expressing certain opinions on math. But there are still things that are fundamental to your identity as a mathematician, that need protecting. If someone says "math is useless" does that not evoke a feeling of needing to defend maths?

When Paul Graham says "smaller" he means chucking out labels entirely: And no, people saying "math is useless" does not evoke any feeling in me of needing to defend math. For many people it's just true for them and that's something I can get behind. Most people throughout history did not learn math and were fine, and even most people today need very little math to get by. It also just wouldn't actually hurt me personally for this meme to spread; if anything, to the extent that I think math is useful, other people thinking math is useless reduces my competition. They're just denying themselves an incredibly useful tool.

Could you paint a more detailed picture of what you mean by happiness? There is a wide range of things that can be called happiness, and I assume you only mean some of them. In particular, I don't think you mean the happiness you feel when you get a reward, because that's what we are actually optimized for achieving.

A lot of people asked this, so I've added a note at the end of the post.

It seems no one on LW is able to explain to you how and why people want different material. To my mind, Kaj's explanation is perfectly clear. I'm afraid it's up to you to figure it out for yourself. Until you do, people will keep giving you invalid arguments, or downvote and ignore you.

how will writing it again change anything?

Why should anyone answer this question? Kaj has already written an answer to this question above, but you don't understand it. How will writing it again change anything? You still won't understand it. This request for an explanation makes no sense whatsoever. It's not that you understand the answer and have some complaint and want it to be better in some way, you just won't understand.

You claim you want to be told when you're mistaken, but you completely dismiss any and all arguments. You're just like "thes... (read more)

Do you want new material which is the same as previous material, or different? If the same, I don't get it. If different, in what ways and why?

I suspect you may be thinking of the thing where people prefer e.g. a (A1) 100% chance of winning 100€ (how do I make a dollar sign?) to a (A2) 99% chance of winning 105€, but at the same time prefer (B2) a 66% chance of winning 105€ to (B1) a 67% chance of winning 100€. This is indeed irrational, because it means you can be exploited. But depending on your utility function, it is not necessarily irrational to prefer both A1 to A2 and B1 to B2.
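A quick numerical sketch of the last point (the utility function below is an illustrative assumption, not something from the original comment): with a sufficiently risk-averse utility function, an expected-utility maximizer prefers both A1 over A2 and B1 over B2, so that combination of preferences is consistent.

```python
import math

def u(x):
    # A concave (risk-averse) utility function; one illustrative choice
    # among many that make the A1/B1 preference pattern consistent.
    return 1 - math.exp(-x / 20)

def eu(p, prize):
    # Expected utility of a lottery paying `prize` with probability p.
    return p * u(prize)

A1, A2 = eu(1.00, 100), eu(0.99, 105)
B1, B2 = eu(0.67, 100), eu(0.66, 105)

print(A1 > A2, B1 > B2)  # True True: preferring A1 and B1 is EU-consistent
```

The exploitable pattern is preferring A1 *and* B2, since the B lotteries are (almost exactly) the A lotteries scaled down by a common probability factor.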

The current topic is epistemology, not the color of the sky, so you don't get to gloss over epistemology as you might in a conversation about some other topic.

So because the discussion in general is about epistemology, you won't accept any arguments for which the epistemology isn't specified, even if the topic of that argument doesn't pertain directly to epistemology, but if the discussion is about something else, you will just engage with the arguments regardless of the epistemology others are using?

That seems… unlikely to work well... (read more)

I'm literally asking you to specify your epistemology. Offer some rival to CR...? Instead you offer me Occam's Razor which is correct according to some unspecified epistemology you don't want to discuss. CR is a starting point. Do you even have a rival starting point which addresses basic questions like how to create and evaluate ideas and arguments, in general? Seems like you're just using common sense assumptions, rather than scholarship, to evaluate a variant of Occam's Razor (in order to defend induction). CR, as far as I can tell, is competing not with any rival philosophy (inductivist or otherwise) but with non-consumption of philosophy. (But philosophy is unavoidable so non-consumption means using intuition, common sense, cultural defaults, bias, etc., rather than thinking about it much.) If you want stories about my discussions with DD, ask on the FI forum, not here.

Given some data and multiple competing hypotheses that explain the data equally well, the laws of probability tell us that the simplest hypothesis is the likeliest. We call this principle of preferring simpler hypotheses Occam's Razor. Moreover, using this principle works well in practice. For example in machine learning, a simpler model will often generalize better. Therefore I know that Occam's Razor is "any good". Occam's Razor is a tool that can be used for problems as described by the italicized text above. It makes no claims ... (read more)

Epistemology tells you things like what an argument is and how to evaluate whether ideas are good or bad, correct or incorrect. I'm saying you need to offer any epistemology at all under which the arguments you're currently making are correct. Supposedly you have an induction-based epistemology (I presume), but you haven't been using it in your comments, you're using some other unspecified epistemology to guide what you think is a reasonable argument. The current topic is epistemology, not the color of the sky, so you don't get to gloss over epistemology as you might in a conversation about some other topic.
how do you know Occam's Razor is any good?

Imo chapter 28 of this book gives a good sense why Occam's Razor is good. I'll try to explain it here briefly as I understand it.

Suppose we have a class of simple models, with three free binary parameters, and a class of more complex models, with ten free binary parameters. We also have some data, and we want to know which model we should choose to explain the data. A priori, out of the parameter sets for the simple model each has a probability of 1/8 of being the best one, whereas for the complex m... (read more)
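A back-of-the-envelope version of this argument (the 50/50 prior split between the two model classes is an assumption made for illustration): if the prior is spread uniformly over each class's parameter settings, then a simple hypothesis that explains the data as well as a complex one ends up far more probable.

```python
# Prior mass per parameter setting under an even split between model classes.
simple_settings = 2 ** 3    # 3 free binary parameters -> 8 hypotheses
complex_settings = 2 ** 10  # 10 free binary parameters -> 1024 hypotheses

prior_simple = 0.5 / simple_settings    # 1/16 per simple hypothesis
prior_complex = 0.5 / complex_settings  # 1/2048 per complex hypothesis

# If one hypothesis from each class explains the data equally well, the
# likelihoods cancel and the posterior odds equal the prior odds.
posterior_odds = prior_simple / prior_complex
print(posterior_odds)  # 128.0 -- the simple hypothesis wins by a wide margin
```

This is the "Bayesian Occam factor" idea from the chapter mentioned above: complexity is penalized automatically because a bigger hypothesis class must spread its prior mass more thinly.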

None of this is relevant to specifying the prior epistemology you are using to make this argument, plus you begin with "simple models" but don't address evaluating explanations/arguments/criticisms.

No, because the main reason I recommended this is that I only have a vague understanding of what is meant by civilisational inadequacy.

Eliezer recently wrote a book about it and expanded on the concept in multiple posts. I would assume that a good portion of the LesserWrong readership has read it. If there's a shorter definition somewhere I'm happy to link it.

It would be nice to include a definition of what is meant by civilisational inadequacy, or at least a link to a reference.

Is there a link you would recommend?

OK, if I'm interpreting this correctly, "consistency" could be said to be the ability to make a plan and follow through with it, barring new information or unexpected circumstances. So the actions the CDT agent has available aren't just "say yes" and "say no" but also "say yes, get into the car, and bring the driver 1000$ once you are in the city", interpreting all of that as a single action.

However, in that case, it is not necessary to distinguish between detecting lies and simulating.

[comment deleted] (6y)

How is detecting lies fundamentally different from simulation? What is a lie? If I use a memory charm on myself, such that I honestly believe I will pay 1000$, but only until I arrive in the city, would that count as a lie? Isn't the whole premise of the hitchhiker problem, that the driver cannot be tricked, and you're just saying "Ah, but if the driver can be tricked in this way, this type of decision theorist can trick her!"

[comment deleted] (6y)

Is anything known about how many people who weren't already rationalists have been inspired by HPMOR to make a serious effort at being rational and changing the world, and (even harder to find out) what they have actually done as a result?

I have been keeping track of which people have read at least parts of HPMOR either directly or indirectly because of my recommendation, so I think I can give at least a rough idea of what the answer may look like.

All of this is as far as I know, I haven't directly asked many of the people about this.

Including my

... (read more)
Thanks! Just to check I've understood, these are all people who had no previous exposure to the LW community before you said "hey, why don't you read this thing"? Do you have any conjectures about how the results might have been different if HPMOR hadn't existed and instead you'd pointed them at the LW website, or the "Sequences", or some non-LW resource with related content (e.g., one of the various books about irrationality and cognitive biases and the like)?

Right now I'm exploring the possibility of setting up a site similar to yourmorals so that the survey can be effectively broken up and hosted in a way where users can sign in and take different portions of it at their leisure.

It may be worth collaborating with the EA community on this, since there is considerable overlap, both in participants and in the kinds of surveys people may be interested in.

I'd consider putting FRI closer to Effective Altruism, since they are also concerned with suffering more generally.

Do you have criteria for including fiction? Other relevant fiction I am aware of:

Also Vernor Vinge is spelled with an 'o'.

Thank you for your comments. I have included them in version 1.1 of the map [], where I have swapped FRI and OpenAI/DeepMind, added Crystal Trilogy and corrected the spelling of Vernor Vinge.

I think both private non-anonymous reactions and public anonymous reactions are likely to be valuable, whereas public non-anonymous reactions could be potentially harmful and private anonymous reactions seem mostly useless.

"I've seen this" coming from the parent poster and "nice post" are valuable feedback for the author of the post/comment, but less useful information for other people so it would best be private and non-anonymous.

Reactions that say something about the content of a comment, like "interesting" or "con

... (read more)

StackExchange also has a minimum reputation requirement for votes to count. When you try to vote on something it displays a box saying the vote was recorded but doesn't change the publicly displayed vote count.

What I don't like about the way it is implemented on StackExchange is that it seems it's not possible to take back a vote until you have enough reputation to vote at all.

Besides protecting the vote count from being distorted by newcomers, I think the main advantage is that it makes it much harder to farm a bunch of karma with sockpuppet accounts.
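As a sketch of the mechanism being described (the threshold value and function name are made up for illustration; this is not StackExchange's actual code): the vote is always acknowledged to the voter, but it only moves the public tally once the account clears the reputation bar, which makes fresh sockpuppet accounts useless for vote farming.

```python
MIN_REP = 15  # hypothetical reputation threshold for votes to count

def cast_vote(voter_reputation: int, public_count: int) -> tuple[bool, int]:
    """Record a vote; return (acknowledged, new public vote count)."""
    acknowledged = True  # the voter always sees "your vote was recorded"
    if voter_reputation >= MIN_REP:
        public_count += 1  # only established accounts move the public tally
    return acknowledged, public_count

# A freshly created sockpuppet can't inflate the count:
print(cast_vote(0, 10))   # (True, 10)
print(cast_vote(50, 10))  # (True, 11)
```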

From what I've read so far, I think Information Theory, Inference and Learning Algorithms does a rather good job of conveying the intuitions behind topics.

Cool, thanks! I'll try it out.

It reminds me a lot of the "mastermind group" thing, where we had weekly hangouts to talk about our goals etc. The America/Europe group eventually petered out (see here for retrospective by regex), the Eurasia/Australia group appears to be ongoing albeit with only two (?) participants.

There have also been online reading groups for the sequences, iirc. I don't know how those went though.

forums, wikis, open source software

I see a few relevant differences:

  • number of participants: If there are very many people, most of which are only sporadicall
... (read more)
We might be able to apply these "differences" to our attempt. A lot of the value we're talking about here is just some basic direction to get started and help when you get stuck. That's a pretty "small barrier to entry", and then "small incremental improvements". Could we dedicate a Slack channel to video tutoring? My experience with small IRC groups is that there is a small number of experts who check in frequently, or at least daily. Then the beginners will occasionally pop in and ask questions. If they're patient enough to stay on, an expert usually answers within the day, and often it starts a real-time chat when the expert mentions the beginner's handle. We could use the Slack channel to ask questions to get started or when we get stuck. If an appropriate teacher is on, then they can start a video chat/screen share on another site. There would be no obligation for a certain time limit.
I'll second that it's relevant. Links should say what they point to though. In this case, it was: Idea for LessWrong: Video Tutoring

I think this is a great idea, likely to have positive value for participants. So going Hamming questions on this, I think two things are important.

  1. I think the most likely way this is going to "fail", is that a few people will get together, then meet about three times, and then it will just peter out, as participants are not committed enough to participate long-term. Right now, I don't think I personally would participate without there being a good reason to believe participants will keep showing up, like financial incentives, for example.
  2. Don't
... (read more)
Failure seems like the default outcome. How do we avoid that? Have there been other similar LessWrong projects like this that worked or didn't? Maybe we can learn from them. Group projects can work without financial incentives. Most contributors to wikis and open-source software, and web forums like this one, aren't paid for that. Assume we've made it work well, hypothetically. How did we do it?

To survive and increase one's power are instrumentally convergent goals of any intelligent agent, which means that evolution does not select for any specific type of mind, ethics, or final values.

But, as you also point out, evolution "selects on the criterion of ingroup reproductive fitness", which does select for a specific type of mind and ethics, especially if you also have the constraint, that the agent should be intelligent. As far as I am aware all of the animals considered the most intelligent are social animals (octopi may be an excep... (read more)

There is also Bertrand, which is organic. The ingredient list looks like it would be pretty tasty, but it costs 9€ per day.

What's wrong with (instrumental) Rationality?

"Rationality" is the tool, but by itself, doesn't describe what goals and values the tool is being used to promote. There can be rational altruists, rational hedonists, rational omnicidal maniacs who want to eliminate suffering by eliminating life, rational egoists, and so on.

Yeah, the estimates will always be subjective to an extent, but whether you choose historical figures, or all humans and fictional characters that ever existed, or whatever, shouldn't make a huge difference to your results, because, in Bayes' formula, the ratio P(C|E)/P(C) ¹ should always be roughly the same, regardless of filter.

¹ C: coin exists
E: person existed
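To spell out why the ratio is roughly filter-independent (my gloss on the comment, not a claim made in the original), Bayes' rule rewrites the update factor as a likelihood ratio:

```latex
\frac{P(C \mid E)}{P(C)} = \frac{P(E \mid C)}{P(E)}
```

As long as P(E | C) and P(E) are computed under the same reference class, widening or narrowing that class rescales numerator and denominator together, so the update factor should barely move.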

why can't Mary just look at the neural spike trains of someone seeing red?

Why can't we just eat a picture of a plate of spaghetti instead of actual spaghetti? Because a representation of some thing is not the thing itself. Am I missing something?

Yes: it is about a kind of knowledge. The banal truth here is that knowing about a thing doesn't turn you into it. The significant and contentious claim is that there are certain kinds of knowledge that can only be accessed by instantiating a brain state. The existence of such subjective knowledge leads to a further argument against physicalism.

The AI analogue would be: If the AI has the capacity to wirehead itself, it can make itself enter the color perception subroutines. Whether something new is learned depends on the remaining brain architecture. I would say, in the case of humans, it is clear that whenever something new is experienced, the human learns what that experience feels like. I reckon that for some people with strong visualization (in a broad sense) abilities it is possible to know what an experience feels like without experiencing first hand by synthesizing a new experience from pr... (read more)

The stronger someone's imaginative ability is, the more their imagining an experience is actually having it, in terms of brain states... and the less it is a counterexample to anything relevant. If the knowledge the AI gets from the colour routine is unproblematically encoded in a string of bits, why can't it just look at the string of bits... for that matter, why can't Mary just look at the neural spike trains of someone seeing red?

From the cover text of How to Build a Brain it seems the main focus is on the architecture of Spaun, and I suspect it does not actually give a proper introduction to other areas of computational neuroscience. That said, I wouldn't be surprised if it is the most enjoyable book on the topic that you can find. I have read Computational Neuroscience by Hanspeter Mallot, which is very short, weird and not very good. I'm currently about halfway through Theoretical Neuroscience by Dayan and Abbott. My impression is, it might be decent for people with a s... (read more)

Thanks for the info :) Yes, that's true. I ordered Theoretical Neuroscience a couple of days ago together with Mathematics for Neuroscientists by Gabbiani and Cox. No one teaches computational neuroscience at our university, so I have to try to learn this field by myself.

All the links direct me to Ohio State University email login.

I think I fixed it, but just in case, here's the article link: []

Human Learning and Memory, by David A. Lieberman (2012)

A well-written overview of current knowledge about human learning and memory. Of special interest:

  • the use of reinforcement as a teacher, parent, pet-owner or for self-improvement
  • for me personally: strategy to combat insomnia (results pending)
  • implications of memory research for study strategies

I used a measuring cup (iirc 75ml) for the powder. My typical meal would be three cups of powder and 300ml water. It's quite thick that way, my friend used more water.
