All of Samuel Hapák's Comments + Replies

Actually, you've got it backwards. So-called intellectual property lacks the typical attributes of property:

– exclusivity: if I take it from you, you don’t have it anymore

– enforceability: it’s not trivial to even find out that your “art was stolen”

– independence: I can violate your IP by accident even if I’ve never seen any of your works (typical for patents); this can’t happen with proper property

– clear definition: you usually don’t need courts to decide whether I actually took your car or not.

Besides that, IP is in direct conflict with proper property rights ... (read more)

Nonsense. The fact that you can see some vague parallels between phlogiston and electrons or energy doesn’t make phlogiston theory any good. The fact that you can’t decide whether phlogiston represents electrons or energy should be a hint here.

A scientific theory should give useful predictions about the world and help us compress information. The phlogiston theory does neither.

I would be careful to distinguish between fraud and police-state allegations of fraud. The Aaron Swartz case is clearly the latter, and that at least deserves to be mentioned in the article.

The line between good and bad is thin. This technique can be and often is misused for manipulation. The white-hat use of this technique is to make the other person stop and think.

Of course, my rewrite was hyperbole ;)

But you are right about value subjectivity. “I feel” is an amazing technique to de-escalate conflicts and build rapport. You cannot disagree with my feelings! That’s quite powerful.

I agree with you that these are useful in dialogues, whether in person or in a comments section.

I don’t believe they (usually) have a place in books or blog posts. Those are not situations requiring conflict de-escalation. The “I think” is filler because it is implied: of course the author writes what he thinks.

I disagree with this. As a writer, I don't mean the same thing by "I think it cost over $100" versus "it cost over $100". The latter is more confident; I don't intend to literally never be wrong when I say things like it, but I do intend to very rarely be wrong. The former suggests that I don't remember very well and I didn't look it up.

And as a reader, I think I roughly by-default expect writers to be doing the same, and if they regularly say things unhedged that turn out to be false (or that I think they couldn't possibly know) I lose respect for them. I don't know how common it is for other readers to read like me, or other writers to write like me. But I'd be surprised if either demographic was fewer than 10% on LW.

I weakly predict that if you compare the typical writer who doesn't use "I think" to the typical writer who does, the one who doesn't is less capable of distinguishing what-is from what-seems-to-be; and is less well-calibrated if you press them to put probabilities on their statements.
Too powerful. You can say anything, claim anything, under the guise of "feeling", and shut down anyone who disagrees, because "muh FEELINGS!" Every sentence beginning "I feel that" is false, because what follows those words is always a claim about how the world is, never a feeling.

You are taking this to the extreme. The goal is to make the text succinct, to get rid of fillers. It doesn’t mean that you can’t make likelihood statements when warranted; just don’t start every sentence with an agnostic “maybe” or “I think”.

I think you might be taking this to the extreme. I guess that the goal might be to make text succinct, or maybe to get rid of fillers. I would probably say that it doesn’t mean that you can’t make likelihood statements when warranted, but it might be better to not to start every sentence with agnostic “maybe”, or “I think”.

Fwiw I think if I were rewriting the first paragraph to self-aware style I'd go for something like: And yeah, I do think that's an improvement in terms of things I'd personally like to read. It doesn't just acknowledge uncertainty, but subjectivity. E.g. I think the "I feel like" makes it easier for me to react like "interesting, I don't feel like that, I wonder why you do" versus "what, no I'm not". (Or maybe my rewrite doesn't actually reflect what you think? Like, maybe you're confident that you're speaking for Pinker as well as just yourself, in which case you could start with "Pinker would say", or "I think Pinker would say" if you're less confident.)
Cleo Nardo, 2 months ago:
Avoiding hedging is only one aspect of classic style. I would also recommend against hedging, but I would replace hedging with more precise notions of uncertainty.

My understanding of Pascal's Mugging is the following:

A robber approaches you, promising you lots of utility in exchange for $1. The probability that he isn't lying is extremely low, yet the utility is extremely high, so you give him the $1.

The above reasoning has one trivial flaw. How do you know that there isn't a person testing your virtues who would actually give you lots of utility if you refused to give this person $1? What makes you think that receiving lots of utility when you succumb to the robber is more probable than receiving lots of utility when you stand up to him?
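The symmetry argument can be put in expected-utility terms. A toy calculation (the probability and utility figures here are made up purely for illustration):

```python
# Toy expected-utility comparison, assuming the two exotic hypotheses are
# equally improbable:
#   H1: the mugger pays out a huge utility U if you hand over $1.
#   H2: a hidden virtue-tester pays out the same U if you refuse.
p = 1e-20   # (made-up) probability assigned to each exotic hypothesis
U = 1e30    # (made-up) promised utility

ev_pay    = p * U - 1   # comply: maybe win U, surely lose the $1
ev_refuse = p * U       # refuse: maybe win U, keep the $1

# With symmetric hypotheses, the huge payoffs cancel out and only the
# certain $1 matters, so refusing dominates.
print(ev_refuse > ev_pay)
```

The point is not the specific numbers but that, absent a reason to weight H1 above H2, the astronomical payoffs drop out of the comparison.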

Agreed. Hitler was a total amateur. The Commies killed far more people and actually managed to terrorize people for almost a century.

The following two points contradict each other:

  1. In theory at least, Georgism is correct that a land tax will not cause rents to rise.
  2. Land will become cheaper to buy, and there will be more pressure to use it in an economically viable way.

I believe the second is correct, whereas the first isn't. The extra pressure will result in real rents rising. If the land is under-utilised today (e.g., empty, or a garden where a nice big house could stand), the maximum possible rent is not being charged. A possible reason might be that the current landlord can't make... (read more)

The thing here is that ancient people discovered that notes whose frequencies are in a ratio of small integers sound good together, e.g. 2:3.

For a long time, people created scales trying to have as many nice ratios as possible. This has problems. I’ll let you think about those yourself.

Then some guy figured out that the human ear is not perfect and we can’t really tell whether we hear 2:3 or 2:2.9966, and came up with the idea of using 12th roots of 2.

Now, try to compute 2^(7/12). Try also 2^(4/12). You see?
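For anyone who'd rather let a computer do the arithmetic, a quick check of those two powers against the just ratios they approximate:

```python
# In 12-tone equal temperament, the k-th semitone above a base note has
# frequency ratio 2**(k/12). Compare the equal-tempered fifth and major
# third with the just-intonation ratios 3:2 and 5:4.
fifth = 2 ** (7 / 12)   # 7 semitones: the perfect fifth
third = 2 ** (4 / 12)   # 4 semitones: the major third

print(f"2^(7/12) = {fifth:.4f}  vs  3/2 = {3/2}")   # ~1.4983 vs 1.5
print(f"2^(4/12) = {third:.4f}  vs  5/4 = {5/4}")   # ~1.2599 vs 1.25
```

Doubling 1.4983 gives the 2:2.9966 mentioned above: close enough to 2:3 that the ear doesn't complain.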

 You might ask "... but why do small-integer ratio sound good?". The most plausible explanation I know of is due to William Sethares, whose book Tuning, Timbre, Spectrum, Scale I highly recommend. He reckons (with some evidence) that it goes like this:

  • If you play pure sine waves together and ask people what sounds nice, there isn't any strong preference for small-integer ratios. What there is is a dislike of notes that are almost but not quite coincident.
  • Real musical notes are not pure sine waves, but they can be considered (via Fourier analysis) to b
... (read more)
For those who don't want to break out a calculator, Wikipedia has the relevant table. You can see the perfect fourth and perfect fifth are very close to 4/3 and 3/2 respectively. This is basically just a coincidence, and we use 12 notes per octave because these almost-nice fractions exist. A major scale uses the 2212221 pattern because that hits all the best matches with low denominators, skipping 16/15 but hitting 9/8, for example.

It’s more than that: there is no uncertainty about the probabilities, and yet there can be a conflict.

You might want to fight even if it’s more expensive than compromising, because you are playing a repeated game. You don’t want to send a signal that you are the kind of person who compromises with an aggressor.

Ege Erdil, 1 year ago:
Two points about this:

1. In repeated versions of the hawk-dove game, it's difficult to get war in an equilibrium unless there are some players who have very high discount rates for the future. There's always the implicit off-equilibrium threat of resisting if someone deviates and tries to go to war, but such an off-equilibrium threat doesn't need to be observed in order to be effective. Maybe the best example of this is monetary policy: the US doesn't have a hyperinflation because people understand the Federal Reserve's commitment to contract monetary policy enormously if that happened. However, in fact we never observe the Federal Reserve's commitment to fight hyperinflation be tested by the US dollar becoming worthless overnight. You can say that in the real world these commitments will always be tested, but the reason they are tested is usually because of disagreement about how credible the commitments are, not because there are some madmen going around who wage war for fun.

2. What I've outlined in the post is not the only way in which war can happen, since the world is complicated and a big phenomenon like war is going to defy any simple explanation. I think it's better to read the post as saying "ambiguity causes conflict", as is stated in the title, rather than "all conflict is caused by ambiguity", which would be false.

A few quick examples:

Why is it “out of curiosity” and not “out of the curiosity”?

Why “see in context” and not “see in the context”? (See the button below this form)

Why “hide previous comment” and not “hide the previous comment”? (See the button above this form)

Just pitching in on the last two: there's an abbreviated register of speech in English called 'note-taking register' that has crept its way into a lot of parts of speech and writing, including website navigation. Dropping the definite article (or most articles in general) is a core part of that register. I suspect dropping the definite article in 'refresh page' is not related to definiteness, it's a linguistic tendency towards abbreviation. Funnily enough, it's a trait shared by the stereotypical 'robot voice', as well as 'baby voice' and some others. 
There are no real answers to these. Explanations for linguistic rules are no more than ways of remembering them. Different languages, even when the same concepts apply to them, have different rules about them. For example, "Curiosity killed the cat" vs. "La curiosité a tué le chat." French uses the definite article more than English does. Why? It just does. Russian doesn't have articles at all. In fact, over- or under-use of "the" is one of the main signs that tells me that the writer is a foreigner. English treats month and day names as proper nouns, so capitalises them; French does not, while German capitalises everything. Go back a few centuries and English capitalised every important noun, and the really important ones would get caps and small caps.
These are just one native speaker's impressions, so take them with a grain of salt. Your first two examples, to me, scan as being about abstract concepts; respectively: the emotion/quality of curiosity and the property of being in context. This Quora result indicates that it's a quality of "definiteness" that determines when articles get dropped (maybe as a second-language learner you're likely to already have this as explicit knowledge, but find it difficult to intuit). In those examples, the meaning doesn't rely on pointing at two specific "curiosity" and "context" objects that have to be precisely designated; it relies on the set phrases "out of curiosity" and "in context" that respectively describe an unmentioned action or object. I think the article in the last example is dropped for a completely different reason. The "definiteness" argument doesn't apply, but my instinct is that this is simple terseness in the communication from UI to user. Describing every UI element with precise language would result in web pages that resemble legal documents.
Articles are hard! I was lucky enough to be raised bilingual, so I'm somewhat adept at navigating between different article schemes. I won't claim these are hard and fast rules in English, but:

1. 'Curiosity' is an abstract noun (e.g. liberty, anger, parsimony). These generally don't take articles, unless you need to distinguish between subcategories (e.g. 'the liberty of the yard' vs. 'the liberty of the French').

2. 'Context' can refer to either a specific context (e.g. 'see in the proper context'), in which case the article is included, or the broad category (e.g. 'context is everything'). 'See in the context' is not ungrammatical, but it's usually awkward, because without an adjective it's unclear which context you are talking about. (And if you were referring to one that was previously established, you would use 'that context' or 'this context'.) However, in the particular case of the button, 'see in the context' would be acceptable, because the identity of 'the context' is clear! I doubt a native English speaker would say that, though, because it's not idiomatic.

3. 'Hide the previous comment' is actually correct here! However, in human-machine interfaces, articles, prepositions, and pronouns are often omitted to save space/mental effort.

I am not a native speaker. Funnily enough, I don't make the kinds of mistakes you mention; there are other, much more counterintuitive aspects of the language for me:

  1. Capitalisation. That the century should be lowercase is obvious, but why is "English language" written with a capital E? And why March with a capital M? Those are not proper nouns either.
  2. Commas. I would expect more people to have a problem with commas than with a phenomenon vs phenomena.
  3. Articles. I can never decide whether to put an article there or not.
Do you have some examples of sentences where this is particularly tricky? (I'm just asking out of curiosity -- I'm a monolingual English speaker who never really studied the language explicitly, so the non-native perspective can be an interesting prompt to think about the rules that seem intuitive but are often pretty weird and complicated when you think about them.)
  1. Yes
  2. Yes
  3. Only if they volunteer.
  4. It’s called taxes.
  5. Why would I? Everyone should have a right to harm themselves if they wish so.
  6. Never. Privacy is much more important than a few lives saved.

Five days is too short, but if the judge instead said you'd die within the next month and it would be a complete surprise to you, it would be easy to execute.


The algorithm is simple: every day, the executioner flips a coin. If it's heads, the prisoner lives another day. If it's tails, the prisoner dies that day. Because there is a 50% chance to live, whenever the prisoner is taken to be executed, it is a surprise to him. The only problem is if, by some small chance, tails never comes up within the stipulated timeframe. In the original problem, the chance of that is 1/32. Pretty small, but non-zero. For 30 days, however, it's about 10^-9, which is small enough to be pretty damn sure that the judgement will be fulfilled.
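The failure probabilities above are just powers of 1/2; a quick check:

```python
# Probability that the coin-flip executioner never flips tails in n days,
# i.e. the probability the sentence is NOT carried out within the window.
def p_never_executed(n_days: int, p_heads: float = 0.5) -> float:
    return p_heads ** n_days

print(p_never_executed(5))    # 0.03125, i.e. 1/32 for the original 5-day window
print(p_never_executed(30))   # ~9.3e-10 for a 30-day window
```

So stretching the window from 5 to 30 days shrinks the chance of the sentence going unfulfilled by about a factor of 30 million.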

Nate Silver's predictions changed too much over time. If those probabilities were legit, you'd be able to sell binary options based on them. If Nate did that, he'd go bankrupt, because he created lots of arbitrage opportunities.

That paper doesn't actually justify why 538's probabilities don't form a martingale. (In fact it's plausible that they do; to demonstrate they aren't, I'd want to see someone show a strategy which successfully arbitrages the probabilities.) Since 538's model isn't open source, it's pretty difficult to say whether or not it is a true martingale, but that paper definitely doesn't show it. If we take a similar model which is open source (specifically The Economist's model), we can see that it is not far from being a martingale; specifically, it would be one if they added forecasting for their fundamentals model (not difficult, just painful). I don't think the difference made by the fundamentals model is that significant, so I think it would have been fairly difficult for anyone to arbitrage those odds. (Not that they were correct, just that they were broadly time-consistent.)
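For readers unfamiliar with the term: a forecast is a martingale when, averaged over what might be observed next, tomorrow's probability equals today's, which is exactly what rules out arbitrage in expectation. A Bayesian update has this property by construction. A toy illustration (the coin biases and prior here are made up):

```python
# Toy model: a coin's bias is either 0.6 ("candidate wins" world) or 0.4,
# with a 50/50 prior. A Bayesian forecaster's probability path is a
# martingale: the observation-weighted expected next probability equals
# the current one.
def update(p: float, heads: bool) -> float:
    """Posterior P(bias = 0.6) after observing one flip."""
    lik = 0.6 if heads else 0.4        # likelihood under bias 0.6
    other = 0.4 if heads else 0.6      # likelihood under bias 0.4
    return p * lik / (p * lik + (1 - p) * other)

p = 0.5
p_heads = p * 0.6 + (1 - p) * 0.4      # marginal chance of seeing heads
expected_next = p_heads * update(p, True) + (1 - p_heads) * update(p, False)

print(abs(expected_next - p) < 1e-12)  # no expected drift, so no free arbitrage
```

A forecast that predictably drifts (the accusation against 538) would fail this check, and betting on the drift would be the arbitrage strategy.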

You make a few wrong assumptions here:

  1. “Rent is much cheaper than a mortgage.” I don't know about your country, but where I live a mortgage is cheaper than rent. People actually buy flats, rent them out, pay the mortgage out of the rental income, and even make a small profit on top of it.
  2. Comparing a house to farmland is not fair. I could just as well compare a house to owning kitchen utensils. Most of the people I know still cook their own food instead of ordering deliveries. And most of the people I know buy cars instead of renting them.
  3. When you rent, you don't have control over your hou
... (read more)
Point 4 said a slightly different way: when you rent you are arguably shorting housing since you need to live someplace. When you own 1 place you are neutral and you aren't long in housing until you own multiple places.
It can be, if rented accommodation is publicly owned and not-for-profit, but the UK has stepped back from that.

Most people believe in the existence of the world around us. When I say existence, I don't mean existence in the mathematical sense, as in "there exists ε > 0"; I mean existence as something "real", "fundamental".

On the other hand, the very same people believe there is no such thing as a "soul, qualia, god, call it whatever you like". I find that strange, because every single argument against the existence of a soul can be used against the existence of the physical world around us.

Take this one for example:

"Of c... (read more)

Something is missing in this explanation. Why isn’t everyone super rich?

Yeah, if measuring along multiple dimensions works well, why isn't everyone in the right tail of this single dimension?

Multiple reasons:

  • First, reaching a useful Pareto frontier still isn't easy. For the sort of examples in the post, we're talking about effort equivalent to two or three separate PhD's, plus enough work in the relevant fields to master them. You'd have to clear a certain bar for intelligence and diligence and financial slack just for that to be an option.
  • Second, "super rich" isn't quite the right metric. Academics usually aren't measuring their success in dollars, for instance, and status is unfortunately more zero-sum
... (read more)

One answer is that everyone _IS_ super-rich, compared to any median or average in history.

But also missing is scalability of opportunity and size of market for a given point on a price/performance curve. It's worth noting that the best table-tennis player in the world makes NOTHING if the second- through billionth-best don't play against them. Also, a whole lot of frontiers in that multi-dimensional space have so much demand that the best, second-best, and billionth-best are all at full capacity, and there remains money for the billion-and-first best to make some money at it.

It makes a huge difference whether the reviewer is some anonymous person unrelated to the journal or the editor-in-chief of the journal itself. I don't think it's appropriate to call the latter peer review (there are no "peers" involved), but that's not important.

The editor-in-chief has a strong motivation to run a good-quality journal. If he rejects a good article, it's his loss. An anonymous peer, on the contrary, has a stronger motivation to use the review as an opportunity to promote (get citations for) his own research than t... (read more)

Oh, I think we both definitely agree that science has changed a lot. I do also think that it still very clearly has maintained a lot of its structure from its very early days, and to bring things back to John's top level point, it is less obvious that that structure would redevelop if we were to give up completely on academia or something like that.

The Royal Society in 1660 and current academia are very different beasts. For example, the current citations/journals game is a pretty new phenomenon. Peer review wasn't really a thing 100 years ago. Neither were complex grant applications.

I thought peer review had always been a core part of science in some form or another. I think you might be confusing external peer review and editorial peer review. As this Wikipedia article says:

The first record of an editorial pre-publication peer-review is from 1665 by Henry Oldenburg, the founding editor of Philosophical Transactions of the Royal Society at the Royal Society of London.[2][3][4]
The first peer-reviewed publication might have been the Medical Essays and Observations, published by the Royal Society of Edinburgh in 1731. The present-day peer-r
... (read more)

Academia in its current form isn’t Lindy. It’s not as if we’ve been doing this for thousands of years. The current system of academia is at most 70 years old.

The broader institutions around academia have been around since at least the Royal Society, which was founded in 1660. That's usually the age I would put the rough institutions surrounding academia.

Is it necessarily so? Today, doing science means you spend a considerable portion of your time doing bullshit instead of actual research. Wouldn't you be in a much better position to do quality research if you were earning a good salary, saving a big portion of it, and doing science as a hobby?

It's possible. That's what I myself am doing: supporting myself with a part-time job while I self-study and do independent FAI research. However, it's harder to have credibility in the eyes of the public on this path. And for good reason: the public has no easy way to tell apart a crank from a lone genius, since it's hard to judge expertise in a domain unless you yourself are an expert in it. One could argue that academia acts as a reasonable approximation of eigendemocracy and thereby solves this problem. Anyway, if the scientists with credibility are the ones who don't care about scientific integrity, that seems bad for public epistemology.

Some important things can be a source of income, such as farming. Farming is pretty important, and there are no huge issues with farmers doing it for profit.

Problems happen when there is a huge disconnect between value and reward. This happens in basic research a lot, because researchers don't have any direct customers.

Arguably, in basic research, you in principle can't have any customers. Your customers are future researchers who will build on top of your research. They would be able to decide whether your work was valuable or whether it was crap, but you'd be pretty old or dead by that time.

Very nice. A few notes:

1. Wrong incentives are no excuse for bad behaviour; people should rather quit their jobs than engage in it.

2. The world isn't black or white; sometimes there is a gray zone where you contribute enough to be net-positive while cutting some corners to get your contribution accepted.

3. People tend to overestimate their contribution and underestimate the impact of their behaviour, so 2. is quite dangerous.

4. In an environment with sufficiently strong wrong incentives, the result is that only those with weak morals survive. Natural selection.

5. There is a lot of truth in Taleb's position that research should not be a source of your income, but rather a hobby.

As a synthesis of points 1 and 4: it is both the incentives and you. The incentives explain why the game is so bad, but you have to ask yourself why you still keep playing it. A researcher with more personal integrity would avoid the temptation/pressure to do sloppy science... and perhaps lose the job as a result. The sloppy science itself would remain, only done by someone else.
Is this specific to research? Given unaligned incentives and Goodhart, I think you could make an argument that _nothing important_ should be a source of income. All long-term values-oriented work should be undertaken as hobbies. (Note - this is mostly a reductio argument. My actual opinion is that the split between hobby and income is itself part of the incorrect incentive structure, and there's no actual way to opt out. As such, you need to thread the needle of doing good while accepting some and rejecting other incentives.)

Yeah, I think you're right. There are two types of explanations:

  • those which compress information
  • those which provide us with faster algorithms to reason about the world

Three-body systems are an example of the latter, as is a lot of math and computer science.

A good property of a scientific theory is that it serves as data compression: the fewer bits you need to explain the world around you, the better the theory. This is, IMO, a very good definition of what an explanation is.

Also, the compression is usually lossy, as with Newtonian mechanics.
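The theory-as-compression idea can be illustrated with an ordinary compressor: data generated by a simple law has a short description, while noise does not. A sketch (the generating rule and sizes are arbitrary choices for the demo):

```python
import random
import zlib

# Data produced by a simple law compresses well: the regularity is the "theory".
lawful = bytes(i % 7 for i in range(10_000))   # simple generating rule

# Random noise has no rule to exploit, so it is essentially incompressible.
random.seed(0)
noise = bytes(random.randrange(256) for _ in range(10_000))

print(len(zlib.compress(lawful)))   # tiny: the rule "repeat 0..6" captures it all
print(len(zlib.compress(noise)))    # close to 10,000: no shorter description exists
```

A general-purpose compressor is of course a crude stand-in for a scientific theory, but the asymmetry between lawful data and noise is the same phenomenon.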

Agreed that this points in the right direction. I think there's more to it than that though. Consider for example a three-body problem under Newtonian mechanics. Then there's a sense in which specifying the initial masses and velocities of the bodies, along with Newton's laws of motion, is the best way to compress the information about these chaotic trajectories. But there's still an open question here, which is why are three-body systems chaotic? Two-body systems aren't. What makes the difference? Finding an explanation probably doesn't allow you to compress any data any more, but it still seems important and interesting. (This seems related to a potential modification of your data compression standard: that good explanations compress data in a way that minimises not just storage space, but also the computation required to unpack the data. I'm a little confused about this though.)

True, but that’s usually a very artificial context. Often when someone claims they know the probabilities accurately enough, they are mistaken or lying.

There is one other explanation for the results of those experiments.

In the real world, it's quite uncommon that somebody tells you exact probabilities; instead, you need to infer them from the situation around you. And we people pretty much suck at assigning numeric values to probabilities. When I say 99%, it probably means something like 90%. When I say 90%, I'd guess 70% corresponds to that.

But that doesn't mean that people behave irrationally. If you view the proposed scenarios through this lens, it's more like:

a) Certainty ... (read more)

Adele Lopez, 4 years ago:
I think you're right that this is part of where the intuition comes from. But it's still irrational in a context where you actually know the probabilities accurately enough.

And what about this argument:

As civilisation progresses, it becomes increasingly cheaper to destroy the world, to the point where any lunatic can do so. It might be that physical laws make it much harder to protect against destruction than to actually destroy; this actually seems to be the case with nuclear weapons.

Certainly, at least 1 in a million people in this world would currently choose to destroy it all if they could.

It might be that we reach this level of knowledge before we manage to travel across solar systems.

I'd think that some of these alien civilisations would have figured it out in time: implanted everyone with neural chips that override any world-ending decision, kept technological discoveries above a certain level available only to a small fraction of the population or in the hands of an aligned AI, or something. An aligned AI definitely seems able to face a problem of this magnitude, and we'd likely either get that or botch it before reaching the technological level at which any lunatic can blow up the planet.

Very simple. To prove it for an arbitrary number of values, you just need to prove that h_i being true increases its expected “probability to be assigned” after measurement, for each i.

If you define T as h_i and F as NOT h_i, you have reduced the problem to the two-value version.
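The underlying identity (the expected posterior equals the prior, i.e. conservation of expected evidence) is easy to verify numerically. A sketch with made-up likelihoods:

```python
# Conservation of expected evidence, checked numerically for H in {T, F}.
# The prior and the likelihoods P(d_i | T), P(d_i | F) over three possible
# measurement outcomes are made-up numbers; any valid distributions work.
prior_T = 0.3
lik_T = [0.7, 0.2, 0.1]   # P(d_i | T), sums to 1
lik_F = [0.1, 0.3, 0.6]   # P(d_i | F), sums to 1

expected_posterior = 0.0
for lt, lf in zip(lik_T, lik_F):
    p_d = prior_T * lt + (1 - prior_T) * lf   # marginal P(d_i)
    posterior = prior_T * lt / p_d            # Bayes update on observing d_i
    expected_posterior += p_d * posterior     # weight by how likely d_i is

print(abs(expected_posterior - prior_T) < 1e-9)   # expected posterior == prior
```

The upward and downward updates in cases 1 and 2 exactly cancel on average; the asymmetry only appears when conditioning on H = T actually being the case.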

There is actually a much easier and more intuitive proof.

For simplicity, let's assume H takes only two values, T (true) and F (false).

Now, let's assume that God knows that H = T, but the observer (me) doesn't. If I now measure some dependent variable D and obtain the value d_i, I'll either:

1. Update my probability of T upwards if d_i is more probable under T than in general.

2. Update my probability of T downwards if d_i is less probable under T than in general.

3. Not change my probability of T at all, if d_i is the same as in general.

(In... (read more)

Ronny Fernandez, 4 years ago:
I had already proved it for two values of H before I contacted Sellke. How easily does this proof generalize to multiple values of H?