

Specifically, I would love to see a better argument for it being ahead of Helion (if it is actually ahead, which would be a surprise and a major update for me).

I agree with Jeffrey Heninger's response to your comment. Here is a (somewhat polemical) video which illustrates the challenges of Helion's unusual D-He3 approach compared to the standard D-T approach that CFS follows. It supports some of Jeffrey's points and makes additional claims, e.g. that Helion's current operational proof-of-concept reactor Trenta is far from adequate for scaling to a production reactor once safety and regulatory demands are taken into account (though I haven't looked into whether CFS might be affected by this just the same).

For example, the way scientific experiments work, your p-value either passes the (arbitrary) threshold, or it doesn't, so you either reject the null, or fail to reject the null, a binary outcome.

Ritualistic hypothesis testing with significance thresholds is mostly used in the social sciences, psychology, and medicine, and not so much in the hard sciences (although arbitrary thresholds like 5 sigma are used in physics to claim the discovery of new elementary particles, such thresholds rarely show up in ordinary physics papers). Since it requires deliberate effort to get into the mindset of the null ritual, I don't think that technical and scientific-minded people just start thinking like this on their own.
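To make the binary nature of the ritual concrete, here is a toy sketch in pure Python (the function names and the example statistics are mine, chosen for illustration): two test statistics carrying nearly identical evidence land on opposite sides of the arbitrary 0.05 threshold and receive opposite verdicts.

```python
import math

def p_value_two_sided(z: float) -> float:
    """Two-sided p-value for a standard-normal test statistic."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def null_ritual_decision(z: float, alpha: float = 0.05) -> str:
    """The binary verdict of ritual significance testing."""
    p = p_value_two_sided(z)
    return "reject null" if p < alpha else "fail to reject null"

# Nearly identical evidence, opposite verdicts:
print(null_ritual_decision(1.97))  # p ~ 0.049 -> "reject null"
print(null_ritual_decision(1.95))  # p ~ 0.051 -> "fail to reject null"
```

The discontinuity at alpha is the point: the underlying evidence varies continuously, but the ritual's output is binary.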

I think the simple explanation that the effect of improving code quality is harder to measure and communicate to management is sufficient to explain your observations. To get evidence one way or another, we could also look at what people do when the incentives change: in personal projects, where management incentives are absent, I think few people are more likely to make small performance improvements than to improve code quality.

Unfortunately, what I would call the bailey is quite common on Lesswrong. It doesn't take much digging to find quotes like this in the Sequences and beyond:

This is a shocking notion; it implies that all our twins in the other worlds— all the different versions of ourselves that are constantly split off, [...]

Thanks, I see we already had a similar argument in the past.

I think there's a bit of motte and bailey going on with the MWI. The controversy and philosophical questions are about multiple branches / worlds / versions of persons being ontological units. When we try to make things rigorous, only the wave function of the universe remains as a coherent ontological concept. But if we don't have a clear way from the latter to the former, we can't really say clear things about the parts which are philosophically interesting.

I’m reluctant to engage with extraordinarily contrived scenarios in which magical 2nd-law-of-thermodynamics-violating contraptions cause “branches” to interfere.

Agreed. Roland Omnes tries to calculate how big the measurement apparatus of Wigner needs to be in order to measure his friend and gets 10^(10^18) degrees of freedom ("The Interpretation of Quantum Mechanics", section 7.8).

But if we are going to engage with those scenarios anyway, then we should never have referred to them as “branches” in the first place, ...

Well, that's one of the problems of the MWI: how do we know when we should speak of branches? Decoherence works very well for all practical purposes but it is a continuous process so there isn't a point in time where a single branch actually splits into two. How can we claim ontology here?

Answer by paragonal

I'm not an expert but I would say that I have a decent understanding of how things work on a technical level. Since you are asking very general questions, I'm going to give quite general thoughts.

(1) The central innovation of the blockchain is the proof-of-work mechanism. It is an ingenious idea which tackles a specific problem (reaching consensus among possibly adversarial parties in a global setting without an external source of trust).
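The core of the mechanism can be sketched in a few lines. This is a minimal hash-puzzle illustration, not Bitcoin's actual protocol: the point is the asymmetry between producing the proof (many hash attempts) and checking it (one hash).

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce so the block's hash starts with `difficulty` hex zeros.

    Producing the nonce requires many hash attempts (costly work);
    anyone can verify it with a single hash (cheap verification).
    """
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = proof_of_work("block 42: Alice pays Bob 1 coin")
assert verify("block 42: Alice pays Bob 1 coin", nonce)
```

Because the work is tied to the block's content, rewriting history means redoing the work for every subsequent block, which is what lets mutually distrusting parties agree on a single chain.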

(2) Since Bitcoin has made the blockchain popular, everybody wants to have the specific problem it allegedly solves, but almost nobody actually does.

(3) Proof-of-work has a certain empirical track record, but mostly for cryptocurrencies, and in the regime where the transaction validators' main profit comes from minting new coins.

(4) Proof-of-work isn't sustainable. The things which it secures (Bitcoin, smart contracts, etc.) are secure only as long as an increasing amount of energy is put into the system. Sure, traditional institutions like the government or a central bank also use a lot of energy but they can't be abandoned as easily as a blockchain because they are monopolies which have proven to be robust over long time scales. Proof-of-work becomes increasingly unstable when the monetary incentive for the people doing the validations goes down.

(5) Other proposed consensus mechanisms (proofs-of-something-else) remove the central innovation of the blockchain. I don't see them as ingenious ideas like proof-of-work but mostly as wishful thinking, wanting to have one's cake and eat it too. I'm open to changing my mind here, but I don't see any evidence yet.

(6) I don't share the optimism that clever technological solutions which bypass trust will lead to a flourishing society. I think the empirical link between inter-personal trust and a flourishing society is strong. Also, insofar as trust in people and institutions is bypassed, it is replaced by trust in code. I think it is worthwhile to spell this out explicitly.

(7) Comparisons with the .com bubble don't seem sensible to me. Bitcoin has been popular for ten years now, and I still see only pyramid schemes and no sensible applications of the blockchain. Bitcoin and NFTs aren't used; people invest in them and hold them. Crypto right now is almost completely about the anticipated future value of things. In contrast, during the .com bubble there were many websites which were heavily used at the time.

(8) Moxie Marlinspike, the founder of Signal, also makes some interesting points regarding web3: we already have an example of a decentralized system becoming widespread, namely the internet. Did people take matters into their own hands and run their own servers? No. What happened was the emergence of centralized platforms, and the same thing is already happening with blockchains. I think at least some of the potential people see in blockchains won't be realized because of this dynamic.

Rick Beato has a video about people losing their absolute pitch with age (it seems to happen to everyone eventually). There is a lot of anecdata from people who have experienced this, both in the video and in the comments.

Some report that after experiencing a shift in their absolute pitch, all music sounds wrong. Some of them adapted somehow (it's unclear to me how much development of relative-pitch abilities was involved), while others report not having noticed that their absolute pitch had shifted. Some report that only after they had lost their absolute pitch completely were they able to develop certain relative-pitch abilities.

Overall, people's reported experiences in the comments vary a lot. I wouldn't draw strong conclusions from them. In any case, I find it fascinating to read about these perceptions.

I am quite skeptical that hearing like a person with absolute pitch can be learned because it seems to be somewhat incompatible with relative pitch.

People with absolute pitch report that if a piece of music is played at a slightly lower or higher pitch, it sounds out of tune. If this feeling persists throughout the piece, the person isn't hearing relatively. So even if a person with relative pitch learned to name played notes absolutely, I don't think the hearing experience would be the same.

So I think you can't have both absolute pitch and relative pitch in the full sense. (I do think that you can improve at naming played notes, singing notes correctly without a reference note from outside your body, etc.)

Thanks for this pointer. I might check it out when their website is up again.

Many characteristics have been proposed as significant, for example:

  • It's better if fingers have less traveling to do.
  • It's better if consecutive taps are done with different fingers or, better yet, different hands.
  • It's better if common keys are near the fingers' natural resting places.
  • It's better to avoid stretching and overusing the pinky finger, which is the weakest of the five.
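These criteria can be combined into a cost function for comparing layouts. The sketch below is a toy model: the finger assignments, penalties, and weights are all made up for illustration, and real layout optimizers score key positions against large text corpora.

```python
# Toy cost model encoding the criteria above. All weights are
# illustrative guesses, not measured ergonomic data.
HOME_ROW = {"a": ("left", "pinky"), "s": ("left", "ring"),
            "d": ("left", "middle"), "f": ("left", "index"),
            "j": ("right", "index"), "k": ("right", "middle"),
            "l": ("right", "ring")}

def typing_cost(text: str) -> float:
    cost = 0.0
    prev = None
    for ch in text:
        if ch not in HOME_ROW:
            cost += 2.0          # off the home row: the finger must travel
            prev = None
            continue
        hand, finger = HOME_ROW[ch]
        cost += 1.5 if finger == "pinky" else 1.0  # penalize the weakest finger
        if prev is not None:
            prev_hand, prev_finger = prev
            if (hand, finger) == (prev_hand, prev_finger):
                cost += 1.0      # same finger twice in a row: worst case
            elif hand == prev_hand:
                cost += 0.5      # same hand: worse than alternating hands
        prev = (hand, finger)
    return cost
```

Under this model, alternating hands ("fj") scores better than two fingers on one hand ("fd"), which scores better than the same finger twice ("ff") — the ordering the bullet points argue for.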


Just an anecdotal experience: I, too, have wrist problems. I have tried touch typing with 10 fingers a couple of times and my problems got worse each time. My experience agrees with the point about the pinky above but many consecutive taps with non-pinky fingers on the same hand also make my wrist problems worse. If traveling less means more of those, I prefer traveling more. (But consecutive taps on different hands are good for me.)

Since many consecutive taps with different fingers on the same hand seem to be part of the idea behind all keyboard layouts, I expect the costs of switching from the standard layout to an idiosyncratic one to outweigh the benefits.

For now, I have given up on using all 10 fingers. My current typing system is a 3+1 finger system with a little bit of hawking. I'd like to be able to touch type perfectly but this seems to be quite hard without using 10 fingers. I don't feel very limited by my typing speed, though.
