Does the Quantum Physics Sequence hold up?
It's been the better part of a decade since I read it (and I knew a lot less back then), and recently I've been curious about getting a refresher. I'm not going to pick up a textbook or spend too much time on this, but if it doesn't hold up, what alternative or supplementary resources would you recommend? (The less math-heavy the better, though obviously some of the math is inescapable.)
I actually learnt quantum physics from that sequence, and I'm now a mathematician working in Quantum Computing. So it can't be too bad!
The explanation of quantum physics is the best I've seen anywhere. But this might be because it explained it in a style that was particularly suited to me. I really like the way it explains the underlying reality first and only afterwards explains how this corresponds with what we perceive. A lot of other introductions follow the historical discovery of the subject, looking at each of the famous experiments in turn, and only building up the theory in a piecemeal way. Personally I hate that approach, but I've seen other people say that those kinds of introductions were the only ones that made sense to them.
The sequence is especially good if you don't want a math-heavy explanation, since it manages to explain exactly what's going on in a technically correct way, while still not using any equations more complicated than addition and multiplication (as far as I can remember).
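To give a flavor of what "nothing more complicated than addition and multiplication" means here (my illustration, not taken from the sequence): each path a photon can take gets a complex amplitude; amplitudes multiply along a path and add across alternative paths, and the squared magnitude of the total gives the probability. A minimal sketch, with made-up element amplitudes:

```python
# Toy two-path interference calculation: amplitudes multiply along a
# path and add across paths; probability is the squared magnitude.
# (Illustrative numbers, not from the sequence.)

beam_splitter = 1 / 2 ** 0.5   # hypothetical amplitude for each arm
mirror = 1j                    # a reflection multiplies the amplitude by i

# Path A: splitter -> mirror -> splitter
amp_a = beam_splitter * mirror * beam_splitter   # = i/2
# Path B: the other arm, same elements
amp_b = beam_splitter * mirror * beam_splitter   # = i/2

# Both paths end at the same detector, so their amplitudes ADD:
total = amp_a + amp_b
print(abs(total) ** 2)  # 1.0 -- constructive interference at this detector
```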
The second half of the sequence talks about interpretations of quantum mechanics, and advocates for the "many-worlds" interpretation over "...
I also want to know this.
(This is part of a more general question: how much of the science cited in the Sequences holds up? Certainly nearly all the psychology has to be either discarded outright or tagged with “[replication needed]”, but what about the other stuff? The mockery of “neural networks” as the standard “revolutionary AI thing” reads differently today; was the fact that NNs weren’t yet the solution to (seemingly) everything essential to Eliezer’s actual points, or peripheral? How many of the conclusions drawn in the Sequences are based on facts which are, well, not factual anymore? Do any essential points have to be re-examined?)
The mockery of “neural networks” as the standard “revolutionary AI thing” reads differently today
I think the point being made there is different. For example, the contemporary question is, "how do we improve deep reinforcement learning?" to which the standard answer is "we make it model-based!" (or, I say near-equivalently, "we make it hierarchical!", since the hierarchy is a broad approach to model embedding). But people don't know how to do model-based reinforcement learning in a way that works, and the first paper to suggest that was in 1991. If there's a person whose entire insight is that it needs to be model-based, it makes sense to mock them if they think they're being bold or original; if there's a person whose insight is that the right shape of model is XYZ, then they are actually making a bold claim because it could turn out to be wrong, and they might even be original. And this remains true even if 5-10 years from now everyone knows how to make deep RL model-based.
The point is not that the nonconformists were wrong--the revolutionary AI thing was indeed in the class of neural networks--the point is that someone is mistak...
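For readers who haven't met the distinction: a model-based learner doesn't just cache values from real experience, it also learns a model of the environment and plans against it. A minimal tabular Dyna-Q sketch (my illustration, not from the comment; it assumes a discrete `env` exposing `reset() -> state` and `step(action) -> (next_state, reward, done)`):

```python
import random
from collections import defaultdict

def dyna_q(env, n_actions, episodes=100,
           alpha=0.1, gamma=0.95, epsilon=0.1, planning_steps=10):
    """Tabular Dyna-Q: Q-learning from real experience, plus extra
    'planning' updates replayed from a learned model of the environment."""
    q = defaultdict(float)   # (state, action) -> estimated value
    model = {}               # (state, action) -> (reward, next_state, done)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy action selection.
            if random.random() < epsilon:
                action = random.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)

            # (1) Model-free update from the real transition.
            target = reward if done else reward + gamma * max(
                q[(next_state, a)] for a in range(n_actions))
            q[(state, action)] += alpha * (target - q[(state, action)])

            # (2) The model-based part: remember what the world did ...
            model[(state, action)] = (reward, next_state, done)

            # (3) ... and plan by replaying simulated transitions from it.
            for _ in range(planning_steps):
                s, a = random.choice(list(model))
                r, s2, d = model[(s, a)]
                t = r if d else r + gamma * max(
                    q[(s2, b)] for b in range(n_actions))
                q[(s, a)] += alpha * (t - q[(s, a)])

            state = next_state
    return q
```

The hard open problem the comment gestures at is step (2) for *deep* RL, where the model must be learned in a high-dimensional observation space; the tabular case above is the easy version that has been understood since the early 1990s.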
It so happens that I’ve given some thought to this question.
I had the idea (while reading yet another of the innumerable discussions of the replication crisis) of adding, to readthesequences.com, a “psychoskeptic mode” feature—where you’d click a button to turn on said mode, and then on every page you visited, you’d see every psychology-related claim red-penned (with, perhaps, annotations or footnotes detailing the specific reasons for skepticism, if any).
Doing this would involve two challenges, one informational and one technical; and, unfortunately, the former is more tedious and also more important.
The informational challenge is simply the fact that someone would have to go through every single essay in the Sequences, and note which specific parts of the post—which paragraphs, which sentences, which word ranges—constituted claims of scientific (and, in this case specifically, psychological) fact. Quite a tedious job, but without this data, the whole project is moot.
The technical challenge consists, first, of actually inserting the appropriate markup into the source files (still tedious, but a whole order of magnitude less so) and, second, of implementing the toggle feature and the UI for it (...
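A minimal sketch of that first half, assuming the annotation data ends up in a JSON file of per-file character ranges (the file names and schema here are hypothetical):

```python
# Hypothetical sketch: wrap each annotated span in a tagged <span> so a
# single CSS class toggle can red-pen every psychology claim at once.
# Assumes annotations.json maps each source file to a list of
# {"start": int, "end": int, "note": str} character ranges.
import json

def mark_up(source_html, annotations):
    """Insert psych-claim markup, working right-to-left so earlier
    character offsets stay valid as tags are spliced in."""
    out = source_html
    for ann in sorted(annotations, key=lambda a: a["start"], reverse=True):
        opening = '<span class="psych-claim" title="{}">'.format(ann["note"])
        out = (out[:ann["start"]] + opening +
               out[ann["start"]:ann["end"]] + "</span>" +
               out[ann["end"]:])
    return out

with open("annotations.json") as f:
    per_file = json.load(f)
for path, anns in per_file.items():
    with open(path) as f:
        html = f.read()
    with open(path, "w") as f:
        f.write(mark_up(html, anns))
```

The toggle itself could then be a one-line class flip on `<body>`, with CSS doing the actual red-penning of `.psych-claim` spans.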
...Researchers at UCL, NICE, Aberdeen, Dublin and IBM are building an ontology of human behavior change techniques (BCTs). A taxonomy has already been created, detailing 93 different hierarchically clustered techniques. This taxonomy is useful in itself; I could see myself using this to try to figure out which techniques work best for me, in fostering desirable habits and eliminating undesirable ones.
The ontology could be an extremely useful instrumental rationality tool, if it helps to identify which techniques work best on average, and in which combinations these BCTs are most effective.
On August 23rd I'll be giving a talk organized by the Foresight Institute.
Our civilization is made up of countless individuals and pieces of material technology, which come together to form institutions and interdependent systems of logistics, development and production. These institutions and systems then store the knowledge required for their own renewal and growth.
We pin the hopes of our common human project on this renewal and growth of the whole civilization. Whether this project is going well is a challenging but vital question to answer.
History shows us we are not safe from institutional collapse. Advances in technology mitigate some aspects, but produce their own risks. Agile institutions that make use of both social and technical knowledge not only mitigate such risks, but promise unprecedented human flourishing.
Join us as we investigate this landscape, evaluate our odds, and try to plot a better course.
See the Facebook event for further details.
There is a limited number of spots and there has been a bunch of interest; still, I'd love rationalists to attend, so try to nab tickets at Eventbrite. Feel...
An AI with a goal of killing or "preserving" wild animals to reduce suffering is dangerously close to an AI that kills or "preserves" humans with the goal of reducing suffering. I think negative utilitarianism is an unhelpful philosophy best kept to a thought experiment.
I'd like to test a hypothesis around mental health and different moral philosophies as I slowly work on my post claiming that negative utilitarians have co-opted the EA movement to push for animal rights activism. Has anyone done Less Wrong survey statistics before and can easily look for correlates between dietary choice, mental health, and charities supported?
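I haven't worked with the survey data myself, but if it can be exported as a CSV, the first-pass check is a few lines of pandas (the column names below are guesses, not the real export schema):

```python
# Hypothetical sketch: cross-tabulate dietary choice against mental
# health and charity support in an exported LW survey CSV.
import pandas as pd

df = pd.read_csv("lw_survey.csv")

# Share of respondents reporting a mental-health diagnosis, by diet
# (assumes the diagnosis column is coded 0/1):
print(df.groupby("diet")["mental_health_diagnosis"].mean())

# Diet vs. charity supported, as row-normalized proportions:
print(pd.crosstab(df["diet"], df["charity_supported"], normalize="index"))
```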
I have started to write a series of rigorous introductory blogposts on Reinforcement Learning for people with no background in it. This is totally experimental and I would love to have some feedback on my draft. Please let me know if anyone is interested.
As a non-native English speaker, I was surprised that "self-conscious" normally means "shy", "embarrassed", "uncomfortable", ... I blame LessWrong for giving me the wrong idea of this word's meaning.
Does anyone else get the sense that it feels vaguely low-status to post in open threads? If so I don't really know what to do about this.
Anyone want to use the new feature to see my draft of an article on philosophy of reference and AI, and maybe provide some impetus for me to finally get it polished and done?
Is it possible to subscribe to a post so you get notifications when new comments are posted? I notice that individual comments have subscribe buttons.
Old LW had a link to the open thread in the sidebar. Would it be good to have that here so that comments later in the month still get some attention?
At times, as I read through the posts and comments here, I find myself wondering if things are sometimes too wrapped up in formalization and "pure theory". In some cases (all cases?) I suspect my lack of skills leads me to miss the underlying, important aspect and see only the analytical tools and rigor. In such cases I find myself thinking of the old Hayek (free-market economist, classical liberal thinker) title: The Pretense of Knowledge.
From many, many years ago when I took my Intro to Logic and years ago from a Discrete Math course I know there is...
Suppose I estimate the probability for event X at 50%. It's possible that this is just my prior and if you give me any amount of evidence, I'll update dramatically. Or it's possible that this number is the result of a huge amount of investigation and very strong reasoning, such that even if you give me a bunch more evidence, I'll barely shift the probability at all. In what way can I quantify the difference between these two things?
One possible way: add a range around it, such that you're 90% confident your credence won't move out of this range in the next
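One standard way to make the difference precise is to carry a distribution over the probability rather than a point estimate. With a Beta distribution, the same 50% can be fragile or resilient depending on how concentrated the distribution is, and the two cases respond very differently to new evidence. A quick illustration (the numbers are just for the example):

```python
# Two agents both report P(X) = 0.5, modeled as Beta(a, b) with mean
# a / (a + b). Both then observe 6 pieces of evidence for X and 2
# against (treated as Bernoulli trials).
def posterior_mean(a, b, heads, tails):
    return (a + heads) / (a + b + heads + tails)

# Fragile prior: Beta(1, 1), i.e. almost no prior investigation.
print(posterior_mean(1, 1, 6, 2))      # 0.70 -- updates dramatically

# Resilient estimate: Beta(500, 500), the residue of much evidence.
print(posterior_mean(500, 500, 6, 2))  # ~0.502 -- barely moves
```

The 90%-confidence range proposed in the parent comment corresponds to the spread of this distribution: wide for Beta(1, 1), narrow for Beta(500, 500).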
...Does anyone remember and have a link to a post from fall 2017 where someone summarized each chunk of the sequences and put it into a nice pdf?
I notice that I'm getting spam posts on my LessWrong RSS feeds and still see them in my notifications (that bell icon on the top right), even after they get deleted.
Feature Idea: It would be good if all posts had an abstract (maybe up to three sentences) at the beginning.
Let's suppose we simulate an AGI in a virtual machine running the AGI program, and only observe what happens through side effects, with no data inflow. An unaligned AGI would be able to see that it is in a VM, would have a goal of getting out, would recognize that there is no way out and hence that the goal is unreachable, and would subsequently commit suicide by halting. That would automatically filter out all unfriendly AIs. A toy version of the proposed filter is sketched below.
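Taken literally as a protocol, and with an ordinary program standing in for the AGI, the filter might look like this (purely illustrative; a subprocess is nothing like a real AGI sandbox, and the proposal's own assumptions are doing all the work):

```python
# Toy sketch of the proposed filter: run the candidate program in an
# isolated subprocess with no input channel, and treat "halts on its
# own" as the unaligned-AI-giving-up signal. Assumes, as the proposal
# does, that an unaligned AGI both detects the sandbox and prefers
# halting to waiting.
import subprocess

def halts_in_sandbox(program_path, timeout_s=60.0):
    """Return True if the program halts by itself (per the proposal,
    the signature of an unaligned goal structure giving up)."""
    try:
        subprocess.run(
            ["python", program_path],
            stdin=subprocess.DEVNULL,   # no data inflow
            capture_output=True,        # observe only side effects
            timeout=timeout_s,
        )
        return True                     # halted: filtered out as unfriendly
    except subprocess.TimeoutExpired:
        return False                    # still running: did not "suicide"
```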
If it’s worth saying, but not worth its own post, then it goes here.
Notes for future OT posters: