Actually, I missed this one. I agree with you. 

I would edit this into the main post.  I am a programmer, but I missed it. 

Wow, it really, really worked out) I want my prestige points)

The word 'rational' is properly used to talk about cognitive algorithms which systematically promote map-territory correspondences or goal achievement. 

I disagree with the definition "systematically promote map-territory correspondences" because for me it is "maps all the way down": we never perceive the territory directly; we perceive and manipulate the world via models (maps). Finding models that work (that enable goal achievement/winning) is the essence of intelligence. "All models are wrong, some are useful". Even if we get down to the truly elemental parts of reality and can essentially equate our most granular map with the territory out there, in practice we still won't care much about this perfect map, because it is computationally intractable. Let's take Newtonian mechanics and General Relativity, for example. We know that General Relativity is "truer", but we don't use it for calculating pendulum dynamics at the Earth's surface; the differences it models are negligible compared to other, more relevant sources of error.

Second: I'm mostly making an empirical claim as to what seems to happen to individual people (and more noticeably to groups-of-people) if they focus on the slogan "rationality is winning."

This is the core claim I think!

The feedbackloops are long/slow/noisy, which makes it hard to learn if what you're trying is working.

Definitely! If the feedback loops are long, slow and noisy, then learning is long, slow and noisy. That's why I give examples of areas where the feedback loops are short, fast and nearly noise-free. These are areas that worked for me with astonishing efficiency; I would not be the person I am otherwise. And I chose them explicitly for this reason.

If you set out to systematically win, many people end up pursuing a lot of strategies that are pretty random. And maybe they're good strategies! But bucketing all of them under "rationality" starts to deflate the meaning of the word.

"pretty random" sounds to me like the exact opposite of rational and winning)

People repeatedly ask "but, isn't it rationality to believe false things?"

Here I make an extremely strong claim: it is never rational to believe false things. Personal integrity is the cornerstone of rationality and winning. This is a topic for a blog post of its own, so I won't go into it further here.

Similarly and more specifically: a lot of things-that-win in some respects are woo-y, and while I think there's in fact good stuff in some woo, the first generation of rationalists exploring that woo were rationalists with a solid epistemic foundation. Subsequent generations came more for the woo than for the rationality (See Salvage Epistemology). 

"Woo" is stuff that doesn't fit into your clear, self-consistent world model. There is a lot of useful stuff out there that you guys ignore: the Copenhagen interpretation, the humanities, biology, religion, etc. If you don't understand why it makes sense, you don't understand it, full stop. I believe that mining woo for useful stuff is exactly how you do original research. It worked wonders for me! But integrity goes first! You shouldn't just replace your model with the foreign one or do "model averaging"; you should grok what those guys get that you are missing and incorporate it into your model. Integrity and good epistemology are a must; if you don't have those yet, don't touch woo! This is power, aka dark arts; it will corrupt you.

In both the previous two bullets, the slogan "rationality is winning" is really fuzzy and makes it harder to discern "okay which stuff here is relevant?". Whereas "rationality is the study of cognitive algorithms that systematically arrive at truth and succeeding at your goals" at least somewhat 

I go for "rationality is cognitive algorithms that systematically arrive at succeeding at your goals". 

Third: The valley of bad rationality means that study of systemized winning is not guaranteed to actually lead to winning, even en-net over the course of your entire lifetime. 

In my experience there is a valley of bad X for every theory X. This is what you have to overcome. I agree that many perish in it. But the success of those who pass through is well worth it. I think we should add more "here be dragons" and "most of you will perish" and "like, seriously, 90% will end up worse off by trying this". It's not for everybody; you need to have character.

Fourth: Honestly, while I think LessWrong culture is good at epistemics, addressing motivated cognition, and some similar things... I don't have a strong reason to believe that we are particularly good at systematically winning across domains (except in domains where epistemics are particularly relevant)

I am really sorry to say this. I love LW, I took a lot from it, and I deeply respect a lot of people here, I mean genius-level people, but yep, LW sucks at winning, and you are not even good at epistemics in the areas that matter most to you. Let's do something about it, let's win?)

So, fifth: So, to answer df fd's challenge here: 

I got into Rationality for a purpose if it is not the best way to get me to that purpose [i.e. not winning] then Rationality should be casted down and the alternative embraced. 

A lot of my answer here is "sure, that might be fine!" I highly recommend you focus on winning, and use whatever tools are appropriate, which sometimes will be "study/practice cognitive algorithms shaped" and sometimes will have other shapes. 

I do agree there is a meta-level skill of figuring out what tools to use, and I do think that meta-level skill is still pretty central to what I call rationality (which includes "applying cognitive algorithms to make good decisions"). But it's not necessarily the case that studying that skill will pay off.

Linguistically, I think it's correct to say "the rational move is the one that resulted in you winning (given your starting resources, including knowledge)", but, "that was the rational move" doesn't necessarily equal "'rationality' as a practice was helpful."

Hope that helps explain where I'm coming from.

This one I just agree with. 

fyi I explicitly included this, I just warned that it wouldn't necessarily pay off in time to help

I see from the 5th point that you explicitly included it, sorry for missing it. I just tend to get stuck writing good, deliberate replies, so I explicitly decided to contribute whatever I realistically can. 

I still stand by the position that this one (I call it critical thinking) should come first. It's true that there is no guarantee it will pay off in time for everybody. But if you skip it, how do you distinguish between woo and rationality? I think you are just doomed in that case. Here be dragons; most of you will perish on the way. 

True (Scottish) Rationality is winning. First, whom do we call a rational agent in economics? A utility optimiser under constraints: an agent that makes optimal choices and attains the maximum utility possible. That's basically winning. But real life is more complicated: all agents are computationally constrained, and the "best case" is not only practically unattainable, it is uncomputable, so we cannot even compare against it. So we talk about doing "better than normally expected", "better than others", etc. When I say "winning" I mean achieving ambitious goals.

But achieving ambitious goals in real life is mainly not about calculating the optimal choices! It is mainly about character, integrity, execution, leadership and a lot of other stuff! How come I still claim that Rationality is Winning? What use is knowing what to do if in practice you don't do it? Well, that's the point! An "optimal" strategy that is infeasible is not optimal)

But why focus on rationality at all if other stuff is more important? Because, well, your character, integrity, execution, resources, etc are not under your direct control except via the decisions that you make. You get them by making rational decisions. Making decisions that make you win and achieve your (ambitious) goals is what I call rationality.

"Sometimes the way you win is by copying what your neighbours are doing, and working hard." And in this case behaving rationally is copying what your neighbours do and working hard. Doing anything else is irrational, IMHO. Figuring out whom, when and how to copy is a huge part of rationality! We also call it critical thinking. Knowing how and when to work hard is another one! Why do you exclude one of the most important cognitive algorithms "sifting out the good from the bad" from "the study (and applied skill) of finding cognitive algorithms that form better beliefs and make better decisions"? If you are not good at critical thinking, how do you know that LW is not complete bullshit?

"Developing rationality" as a goal is an awful one, because you don't get feedback, and learning doesn't happen without feedback. "Winning" may be a great one IFF you pick fields with strong, immediate, unbiased practical feedback. For example: playing poker, data science in an executive position, all kinds of trading (RTB, crypto, systematic, HFT, just taking advantage of opportunities when they present themselves), or doing research purely to figure things out for yourself because your life and money (the universal resource) depend on it (or because it's fun and later your life will depend on it :) ). These are all examples from my life, and they worked wonders for me.

I am sorry if I am coming across a little aggressive; I think this is a great post raising a great point. I am just a rude post-USSR trader, and I believe that being direct and to the point is the best way to communicate and to show respect)

I never had an opportunity to participate in CFAR workshops and that's a pity. I would be happy to discuss this stuff further because I think both sides have useful stuff to share.

How did you get the implied probabilities from the vanilla option markets? I don't know an obvious way to do it; perhaps the simplest (and wrong) approximation would be to take the BS IV of the traded vanillas at the strike price and plug it into the binary BS formula? 

Dividing the option price by the current spot or futures price of the underlying is definitely not a correct way to do it.
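For concreteness, here is a minimal sketch of the flat-smile approximation described above: take the vanilla IV at the strike and compute N(d2), the Black-Scholes risk-neutral probability of finishing above the strike. The function name is mine, and note this is exactly the "simplest (and wrong)" version, since it ignores the volatility smile (the smile-consistent digital price is -dC/dK, which adds a vega times dIV/dK skew correction).

```python
from math import log, sqrt, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def digital_prob_flat_smile(spot, strike, t, r, iv):
    """Risk-neutral P(S_T > strike) under flat-smile Black-Scholes: N(d2).

    Ignores skew: the smile-consistent digital is -dC/dK, which adds
    a vega * d(iv)/d(strike) correction term on top of this.
    """
    d1 = (log(spot / strike) + (r + 0.5 * iv * iv) * t) / (iv * sqrt(t))
    d2 = d1 - iv * sqrt(t)
    return norm_cdf(d2)
```

For an at-the-money option (spot = strike = 100, 1 year, zero rate, 20% IV) this gives roughly 0.46, slightly below one half because of the lognormal drift term.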

Snorting peptides directly is hilarious! I should do lines of peptides at the next corona party :)

Theoretically, it shouldn’t cause an immune response, as peptides shouldn’t be immunogenic on their own; that’s why you need chitosan as a delivery mechanism and adjuvant. However, who knows? Was it actually researched and proven that peptides on their own do not cause an immune response, no matter how big the dose and what the route of administration? I could well imagine that this is simply a theoretical conclusion that was never empirically verified, or that it was only verified by injection but not by snorting (and the peripheral immune system is triggered by pure peptides while the systemic one is not), or that it was only tried at much lower doses. Even if a 10-100 times higher dose of pure peptides is equivalent to chitosan+peptides, this is likely of little commercial interest, as chitosan “enhancement” is cheaper and more scalable at scale than peptide production. So it might actually work, and it is cool that you’ve tried it and there is evidence that “you were congested for a few days like after previous vaccine applications and it looks like nothing really bad had happened”.

What dose of pure peptides did you use, per peptide and in total? The currently recommended dose is ~200ug in total, no matter how many component peptides there are. If you do 5 peptides, as in v10, this is 40ug per peptide. If you do 1mg = 1000ug of pure peptide, then this is 25 times more dakka, and it might well trigger the same response as 40ug of peptide+chitosan.
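The dose arithmetic above, spelled out (values are the ones from the comment; the variable names are mine):

```python
# All quantities in micrograms (ug).
total_dose_ug = 200                    # currently recommended total dose (~200ug)
n_peptides = 5                         # e.g. the v10 formulation
per_peptide_ug = total_dose_ug / n_peptides        # 40ug per component peptide

pure_single_peptide_ug = 1000          # 1mg of a single pure peptide
dakka_factor = pure_single_peptide_ug / per_peptide_ug  # 25x "more dakka"

print(per_peptide_ug, dakka_factor)    # 40.0 25.0
```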

I think the currently recommended dose is likely too low for average human risk preferences. Organisations are extremely risk-averse: negative side effects are given much more weight than unrealised benefits, so the recommended dose is likely to be “the smallest one that kind of works without any side effects” rather than the optimal one. There is almost no evidence on the efficiency of different doses, and this is all guesswork built on the guesswork of others in loosely related cases, e.g. the 500ug cancer peptide vaccine dosage.

Congestion after vaccine/peptide administration could be due to many factors, so I consider it only very weak evidence. It can be purely psychosomatic. The vaccine is acidic, so it irritates your nasal mucosa. Anything you put in your nose may irritate it and cause congestion. You applied pure peptides after receiving several doses of the vaccine and after enough time had passed to develop immunity, so the congestion may indeed be an immune response, but one that only happens when you already have immunity and won’t be triggered when you have none. Actually, this is what your data suggests: “Previously, on doses 3-6 of the vaccine, I had consistently been congested for a couple days after”. I understand this as you having no congestion after doses 1-2 and getting it after all further doses. If so, then it looks like the actually important immune response of a naive immune system happens without congestion, while the immune response of a trained immune system happens with congestion.

Yes, exactly. "None of us has tested positive using insensitive commercial point-of-care tests"

I haven't. Firstly, there is no proper data, just some bits of evidence. Secondly, yep, I am pretty sure that they would get in trouble if they did anything that looked like a trial, so I assume that they stay on the safe side and well, don't do anything like a trial.

> The radvac vaccine will have serious side effects (i.e. besides stuffy nose for a day) for >50% of people who try it

It should be well below 1%. Firstly, if it were bad enough to cause serious side effects for >50% of people who try it, would the RaDVaC team risk promoting it? Secondly, if it were that bad, wouldn’t we hear bad stories about side effects? Thirdly, getting serious side effects accidentally in >50% of cases sounds pretty hard on its own.

> The radvac vaccine induces antibodies detectable in a standard commercial blood test in most people, using the dosage in the paper with 2 booster shots

<1%, because the RaDVaC team has tried it and didn’t manage to get any positive result.

> The radvac vaccine induces antibodies detectable in a standard commercial blood test in most people, using "more dakka", for some reasonable version of "more dakka"

This greatly depends on what “more dakka” and “reasonable version” mean. I assume that “reasonable version” implies "doesn't cause too much harm due to immune system overstimulation”. If “more dakka” means simply a higher dosage, then I think this is unlikely (5%), because 1) the RaDVaC team experimented on themselves quite a bit, they received a lot of dakka, but no commercial blood test detection, and 2) the RaDVaC team seems reasonable enough to have tried this approach if it looked promising. If “more dakka“ includes stronger adjuvants (chitosan is considered a weak but safe one), then it is much more likely (20%?), because the RaDVaC team didn’t investigate those (for a reason), and it sounds plausible that you can get an immune response by irritating the immune system really, really strongly.