All of afterburger's Comments + Replies

3abcd_z8y1) What error bars? Are you talking about the relative uncertainty of the priors? Because it's assumed that these priors are only my best estimate. You're not saying anything new here. 2) "how hot/successful she is relative to you": I haven't heard of any studies that positively correlate physical appearance and financial success with reduced time to get over a long-term relationship. I admit that it's possible that there's a connection, but as of yet I haven't seen any evidence that would persuade me to include it. 3) "if she broke up with you she has likely wanted to do better for some time." Again, you're making unjustified claims with no evidence to support them. "The usual post-breakup advice is to focus on your own goals for a while." Now this I agree with.
Harry Potter and the Methods of Rationality discussion thread, part 24, chapter 95

I have greater than 5% confidence that Voldemort is three characters: Quirrell (via possession), Harry (via soul-copying ritual) and Dumbledore (via improved Imperius).

2kgalias8yWhy would Quirrell try to undermine Dumbledore in Harry's eyes, then?
0William_Quixote8yThis theory overlooks some very important information. Although it's a possible deduction from in-universe information (if I saw magic, then I'm a lot more likely to think I'm in a simulation), it overlooks that this is a story. "And then he woke up" is a classic case of a terrible, unsatisfying ending. The writer wants the book to be good and wants people to recommend it to other people.
2TrE8yIt seems ridiculously complicated. Simple hypotheses backed by evidence trump complex hypotheses.
Where Are We the Weakest?

I know cookies make me unhappy in the long run, but I enjoy eating cookies in the short run. I could name a bunch of parts of the cookie-eating experience that I like, such as the feeling of sleepiness and contentment caused by eating a lot.

You could argue that any feeling is "brainwashing", meaning that my feelings are controlled by my physical brain, which is something separate from me. I am deeply uncomfortable with all of the current solutions to the hard problem of consciousness. If I am self-aware rather than a philosophical zombie, then it seems like all matter must be aware in that same sense.

"Can we know what to do about AI?": An Introduction

That sounds exciting too. I don't know enough about this field to get into a debate about whether to save the metaphorical whales or the metaphorical pandas first. Both approaches are complicated. I am glad that MIRI exists, and I wish the researchers good luck.

My main point re: "steel-manning" the MIRI mission is that you need to make testable predictions and then test them or else you're just doing philosophy and/or politics.

Help please! Making a good choice between two jobs

Stay in London, and study in the evenings if you want. Benjamin Franklin said "three removes is as bad as a fire", meaning there's a high cost to rebuilding your social network. I'd guess it would take you about 18 months to fully build new friendships. I moved to a non-ideal city for work (twice!) and it set my career back by a couple of years. The cost of living in Glasgow is lower precisely because people would rather live in London.

If you want to fully maximize utility, you're making a false choice by just looking at the two jobs. Get back in grad sch... (read more)

Where Are We the Weakest?

I agree. Whatever process copies rational conclusions back into subconscious emotional drivers of behavior doesn't seem to work too well. I enjoy cookies just about every day, despite having no rational reason to eat them that often. Eating cookies does not fit into my long-term utility-maximizing plans, but I am reluctant to brainwash myself.

1rosecongou8yIn all seriousness, how do you know that you're not simply brainwashed into believing cookies are making you happy? For example, during my religious years, attending a 5-hour prayer meeting made me feel happier -- even ones where not much English was spoken. Much of this was a learned association between attendance and the feeling of "doing the right thing," in retrospect. Once I no longer thought of it as "the right thing," the happiness I derived from it waned.
"Can we know what to do about AI?": An Introduction

Thanks for the thoughtful reply!

What code (short of a fully functioning AGI) would be at all useful here?

Possible experiments could include:

  • Simulate Prisoner's Dilemma agents that can run each other's code (see the sketch below). Add features to the competition (e.g. group identification, resource gathering, paying a cost to improve intelligence) to better model a mix of humans and AIs in a society. Try to simulate what happens when some agents gain much more processing power than others, and what conditions make this a winning strategy. If possible, match results to real-w

... (read more)
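Below is a minimal, illustrative sketch of what that first experiment could look like: one-shot Prisoner's Dilemma agents that are handed their opponent's decision function and may simulate it before moving. The agent names (DefectBot, CooperateBot, MirrorBot), the payoff values, and the recursion-depth limit are assumptions made for the sketch, not anything from MIRI or from the comment above.

```python
# Sketch: Prisoner's Dilemma agents that can run each other's code.
# All names and payoff values are illustrative placeholders.

COOPERATE, DEFECT = "C", "D"
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def defect_bot(opponent, depth):
    # Ignores the opponent entirely and always defects.
    return DEFECT

def cooperate_bot(opponent, depth):
    # Always cooperates, regardless of the opponent.
    return COOPERATE

def mirror_bot(opponent, depth):
    # Simulates the opponent playing against mirror_bot and copies its move.
    # The depth limit prevents infinite regress when two mirror_bots meet.
    if depth <= 0:
        return COOPERATE  # optimistic default at the recursion floor
    return opponent(mirror_bot, depth - 1)

def play(agent_a, agent_b, depth=3):
    # Each agent receives the other agent's function and a simulation budget.
    move_a = agent_a(agent_b, depth)
    move_b = agent_b(agent_a, depth)
    return PAYOFFS[(move_a, move_b)]

if __name__ == "__main__":
    agents = {"DefectBot": defect_bot, "CooperateBot": cooperate_bot,
              "MirrorBot": mirror_bot}
    scores = {name: 0 for name in agents}
    for name_a, a in agents.items():
        for name_b, b in agents.items():
            if name_a < name_b:  # each unordered pair plays once
                pa, pb = play(a, b)
                scores[name_a] += pa
                scores[name_b] += pb
    print(scores)
```

Group identification, resource gathering, or paying a cost for extra simulation depth could then be layered on as additional parameters to play() and the tournament loop.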
2Viliam_Bur8yMake it scientific articles instead. Thus MIRI will get more publications. :D You can also make different expert systems compete with each other by trying to get the most publications and citations.
Harry Potter and the Methods of Rationality discussion thread, part 23, chapter 94

If Ringmione is true, then I would assign over 50% probability to Dumbledore having noticed it and not called Harry out on it, in the same way that Dumbledore appeared to have noticed Harry in Azkaban and chose not to reveal it. I suspect Dumbledore is still just fighting the War, and believes that Harry is the key to defeating Voldemort and/or actually is Voldemort, and so Dumbledore did not reveal Ringmione because he believes Harry is trying to do the right thing and revealing Ringmione would cause a disastrous confrontation.

4Velorien8yGiven that all of Dumbledore's subsequent actions, including some pretty drastic decisions, were made on the assumption that Voldemort had returned, based solely on the evidence of the Azkaban break-in, this seems unlikely. He even told Bones that he had only given each cell a quick examination due to the sheer number he had to look through, which is an unnecessary detail in-universe, but out-of-universe explains to the reader how he could have overlooked Harry's concealment. ETA (this does mean "edited to add", right?): If Dumbledore was already working on the assumption that Harry was involved in the breakout, he would not have been so surprised that retrieving him from Mary's Place early would cause a paradox.
3gjm8yReally?
"Can we know what to do about AI?": An Introduction

Your arguments would be much more convincing if you showed results from actual code. In engineering fields, including control theory and computer science, papers that contain mathematical arguments but no test data are much more likely to have errors than papers that include test data, and most highly-cited papers include test data. In less polite language, you appear to be doing philosophy instead of science (science requires experimental data, while philosophy does not).

I imagine you have not actually written code because it seems too hard to do anythin... (read more)

3JoshuaZ8yWhat would you want this code to do? What code (short of a fully functioning AGI) would be at all useful here? Can you expand on this, possibly with example tasks, because I'm not sure what you are requesting here. This is a trenchant critique, but it ultimately isn't that strong: having trouble predicting should be a reason, if anything, to be more worried rather than less. This is missing the primary concern of people at MIRI and elsewhere. The concern isn't anything like more and more competing AIs gradually coming online that are slightly smarter than baseline humans. The concern is that the first true AGI will self-modify to become far smarter and more capable of controlling the environment around it than anything else. In that scenario, issues like anti-trust or economics aren't relevant. It is true that on balance human lives have become better and safer, but that isn't by itself a strong reason to think that trend will continue, especially when considering hypothetical threats such as the AGI threat, whose effects would be fundamentally discontinuous with prior human trends in standards of living.
Welcome to Less Wrong! (5th thread, March 2013)

Hello! I'm here because...well, I've read all of HPMOR, and I'm looking for people who can help me find the truth and become more powerful. I work as an engineer and read textbooks for fun, so hopefully I can offer some small insights in return.

I'm not comfortable with death. I've signed up for cryonics, but still perceive that option as risky. As a rough estimate, it appears that current medical research is about 3% of GDP and extends lifespans by about 2 years per decade. I guess that if medical research spending were increased to 30% of current GDP, the... (read more)
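As a purely illustrative back-of-envelope reading of those numbers, assuming (very naively) that lifespan gains scale linearly with research spending:

$$\frac{2\ \text{years gained per decade}}{3\%\ \text{of GDP}} \times 30\%\ \text{of GDP} \approx 20\ \text{years gained per decade}$$

At that rate, gains would outpace the ten calendar years that elapse each decade; diminishing returns would presumably make the real figure much lower.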

0idea218yHi, afterburger. It seems right to me that you are not comfortable with death; the opposite would be unnatural. I don't know whether you have ever heard of this person: https://en.wikipedia.org/wiki/Nikolai_Fyodorovich_Fyodorov "Fedorov argued that the struggle against death can become the most natural cause uniting all people of Earth, regardless of their nationality, race, citizenship or wealth (he called this the Common Cause)." Fedorov's speculations about a future resurrection of all, although seen today as a joke, are at least able to beat Pascal's wager, and, if we keep in mind the possibilities of new particle physics, it is rational to hope that an extremely altruistic future humanity could decide to resurrect all of us, using technology that today we cannot imagine (the same way that current technology could never have been imagined by Plato or Aristotle). Although science and technology may have limits, the most important issue here has to do with motivations. Why would a future humanity be interested in acting this way? The only thing we could do today to help with that would be to start building the moral and cultural foundation of a fully altruistic and rational society (which would inevitably be extremely economically efficient). And that has not been done yet.