Recent Discussion

Princeton neuroscientist Michael Graziano wrote the book Rethinking Consciousness (2019) to explain his "Attention Schema" theory of consciousness (endorsed by Dan Dennett![1]). If you don't want to read the whole book, you can get the short version in this 2015 article.

I'm particularly interested in this topic because, if we build AGIs, we ought to figure out whether they are conscious, and/or whether that question matters morally. (As if we didn't already have our hands full thinking about the human impacts of AGI!) This book is nice and concrete and computational, and I think it at least of

... (Read more)

If consciousness only “emerges” when an information-processing system is constructed at a higher level, then that implies that the whole is something other than the aggregate of its many individual interactions. This is unlike shminux’s description of liquid water emerging from H2O interactions, which confuses map and territory. If a physical description stated that an interaction is conscious if and only if it is part of an information-processing system, that is something that cannot be determined with local information at the exact time and place

... (read more)
TAG (1 point, 2h): Well, I wasn't nitpicking you. Friedenbach was asserting locality+determinism. You are asserting locality+nondeterminism, which is OK.
Mark_Friedenbach (2 points, 1h): FWIW I was asserting this: The only thing non-deterministic in QM is the Born rule, which isn’t part of a MWI block universe formulation. (You need a source of randomness to specify where “you” will end up in the future evolution of the universe, but not to specify all paths you might end up in.)
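For reference, the Born rule being referred to is the one probabilistic ingredient of textbook quantum mechanics; a standard statement (added here for readers, not part of the comment) is:

```latex
% Born rule: for a measurement with eigenstates |i>, performed on state |psi>,
P(i) = \left| \langle i \mid \psi \rangle \right|^2 ,
% while the Schrodinger evolution of the full state is deterministic:
i\hbar \, \frac{\partial}{\partial t} \lvert \psi(t) \rangle = \hat{H} \, \lvert \psi(t) \rangle .
```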
TAG (1 point, 2h): Again: Chalmers doesn't think p-zombies are actually possible. That doesn't follow from (1). It would follow from the claim that everyone is a zombie, because then there would be nothing to consciousness except false claims to be conscious. However, if you take the view that reports of consciousness are caused by consciousness per se, then consciousness per se exists and needs to be explained separately from reports and behaviour.
Fiddle Effects Tech

Imagine you're a fiddle player who primarily plays without effects, but would occasionally like to be able to play with them. What can you do?

One option is to put a pickup on the fiddle and run that into guitar pedals. This will work, but pickups generally sound much worse than clip-on mics like the ubiquitous AT PRO-35. Since you're mostly playing uneffected, you don't want to give that up.

Another option is to get a vocal effects processor. For example, I have a VoiceTone D1. These take balanced XLR from the mic, send balanced XLR to the board, and provide phantom power, so t... (Read more)

Bay Solstice 2019 Retrospective

I was the Creative Director for last year’s Winter Solstice in the Bay Area. I worked with Nat Kozak and Chelsea Voss, who were both focused more on logistics. Chelsea was also the official leader who oversaw both me and Nat and had final say on disputes. (However, I was granted dictatorial control over the Solstice arc and had final say in that arena.) I legit have no idea how any one of us would have pulled this off without the others; love to both of them and also massive respect to Cody Wild, who somehow ran the entire thing herself in 2018.

While I worked with a bunch of other people on So

... (Read more)
Raemon (21 points, 11h): You can watch it here [https://drive.google.com/file/d/1Dx0zHyWGpzEQQxF4b3aESTC5ylviPvA_/view?fbclid=IwAR2k3pIPJuEFhJlwAA_XqLeEFqS_huKs1TuTUkecleqcLgiWolO2aOce24U].

How many people raised their hands when Eliezer asked about the probability estimate? When I was watching the video I gave a probability estimate of 65%, and I'm genuinely shocked that "not many" people thought he had over a 55% chance. This is Eliezer we're talking about.............

Tenoke (25 points, 7h): I have a bunch of comments on this:

  1. I really liked the bit. Possibly because I've been lowkey following his efforts.
  2. He looks quite good, and I like the beard on him.
  3. I've always thought that his failed attempts at researching weight loss and applying what he learned were a counterexample to how applicable LW/EY rationality is. Glad to see he solved it when it became more important.
  4. Eliezer clearly gets too much flack in general, and especially in this case. It's not like I haven't criticised him, but come on.
  5. Really? Fine, you don't know him, but if you don't know EY and are at a rationalist event, why would you be surprised by not knowing a speaker? From the public's reaction to his opening it should've been clear most people did know him.
  6. I'm not against the concept of triggering - some stuff can be triggering, including eating disorders, but like this? Can a person not talk at all about weight gain/loss? Is the solstice at all LW-related if things can't be discussed even at their fairly basic (and socially accepted) level?

Please, if you hated it give a detailed response as to why. I'm genuinely curious.
Said Achmiz (20 points, 8h): That was excellent, and I quite enjoyed watching it. I’m not going to spoil it for anyone who’s not seen it yet, of course, but I just want to say: Well done, Eliezer!

Tl;dr: I’ll try here to show how CFAR’s “art of rationality” has evolved over time, and what has driven that evolution.

In the course of this, I’ll introduce the distinction between what I’ll call “reality-revealing puzzles” and “reality-masking puzzles”—a distinction that I think is almost necessary for anyone attempting to develop a psychological art in ways that will help rather than harm. (And one I wish I’d had explicitly back when the Center for Applied Rationality was founded.)

I’ll also be trying to elaborate, here, on the notion we at CFAR have recently been tossing around about CFAR be

... (Read more)

I'm reminded of the post Purchase Fuzzies and Utilons Separately.

The actual human motivation and decision system operates by something like "expected valence" where "valence" is determined by some complex and largely unconscious calculation. When you start asking questions about "meaning" it's very easy to decouple your felt motivations (actually experienced and internally meaningful System-1-valid expected valence) from what you think your motivations ought to be (something like "utility maximization", whe... (read more)
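A minimal sketch of what "choose by expected valence" could look like computationally -- my own toy construction to illustrate the comment, with made-up options and numbers:

```python
# Toy sketch: action selection by "expected valence".
# All option names and numbers are invented for illustration.

def expected_valence(outcomes):
    """Probability-weighted sum of felt valence over imagined outcomes."""
    return sum(p * v for p, v in outcomes)

options = {
    # option: list of (probability, felt valence) pairs from System-1 simulation
    "work on the report": [(0.7, -1.0), (0.3, 2.0)],
    "scroll the internet": [(0.9, 0.5), (0.1, -0.5)],
    "call a friend":       [(0.6, 1.5), (0.4, 0.0)],
}

# The system picks whatever currently *feels* best in expectation.
choice = max(options, key=lambda o: expected_valence(options[o]))
print(choice, {o: round(expected_valence(v), 2) for o, v in options.items()})
```

The point of the toy is only that the quantity being maximized is the felt, System-1 "valence", not an explicitly endorsed utility.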

Hazard (4 points, 3h): It might be useful to know that I'm not that sold on a lot of singularity stuff, and the parts of rationality that have affected me the most are some of the more general thinking principles. "Look at the truth even if it hurts" / "Understanding tiny amounts of evo and evo psych ideas" / "Here's 18 different biases, now you can tear down most people's arguments". It was those ideas (a mix of the naive and sophisticated form of them) + my own idiosyncrasies that caused me a lot of trouble. So that's why I say "rationalist memes". I guess that if I bought more singularity stuff I might frame it as "weird but true ideas".
Kaj_Sotala (3 points, 4h): I feel like Vaniver's interpretation of self vs. no-self [https://www.lesswrong.com/posts/dMeRKq6tXWqcNxujt/self-and-no-self] is pointing at a similar thing; would you agree? I'm not entirely happy with any of the terminology suggested in that post; something like "seeing your preferences realized" vs. "seeing the world clearly" would in my mind be better than either "self vs. no-self" or "design specifications vs. engineering constraints". In particular, Vaniver's post makes the interesting contribution of pointing out that while "reasoning vs. rationalization" suggests that the two would be opposed, seeing the world clearly vs. seeing your preferences realized can be opposed, mutually supporting, or orthogonal. You can come to see your preferences more realized by deluding yourself, but you can also deepen both, seeing your preferences realized more because you are seeing the world more clearly. In that ontology, instead of something being either reality-masking or reality-revealing, it can

  * A. Cause you to see your preferences more realized and the world more clearly
  * B. Cause you to see your preferences more realized but the world less clearly
  * C. Cause you to see your preferences less realized but the world more clearly
  * D. Cause you to see your preferences less realized and the world less clearly

But the problem is that a system facing a choice between several options has no general way to tell whether some option it could take is actually an instance of A, B, C or D, or if there is a local maximum that means that choosing one possibility increases one variable a little, but another option would have increased it even more in the long term. E.g. learning about the Singularity makes you see the world more clearly, but it also makes you see that fewer of your preferences might get realized than you had thought. But then the need to stay alive and navigate the Singularity successfully pushes you into D, where you are so focused on trying to invest all
AnnaSalamon (3 points, 5h): I like your example about your math tutoring, where you "had a fun time” and “[weren’t] too results driven” and reality-masking phenomena seemed not to occur. It reminds me of Eliezer talking about how the first virtue of rationality is curiosity. I wonder how general this is. I recently read the book “Zen Mind, Beginner’s Mind,” where the author suggests that difficulty sticking to such principles as “don’t lie,” “don’t cheat,” “don’t steal,” comes from people being afraid that they otherwise won’t get a particular result, and recommends that people instead… well, “leave a line of retreat” wasn’t his suggested ritual, but I could imagine “just repeatedly leave a line of retreat, a lot” working for getting unattached. Also, I just realized (halfway through typing this) that cousin_it and Said Achmiz say the same thing in another comment [https://www.lesswrong.com/posts/byewoxJiAfwE6zpep/reality-revealing-and-reality-masking-puzzles#yuKSyGzgfwdDaLt6j].
Exploring safe exploration

This post is an attempt at reformulating some of the points I wanted to make in “Safe exploration and corrigibility” in a clearer way. This post is standalone and does not assume that post as background.

In a previous comment thread, Rohin argued that safe exploration is currently defined as being about the agent not making “an accidental mistake.” I think that definition is wrong, at least to the extent that I think it both doesn't make much sense and doesn't describe how I actually expect current safe exploration work to be useful.

First, what does it mean for a failure to be an “accident?” Th

... (Read more)
A particular prediction I have now, though weakly held, is that episode boundaries are weak and permeable, and will probably be obsolete at some point. There are a bunch of reasons I think this, but maybe the easiest to explain is that humans learn and are generally intelligent, and we don't have episode boundaries.
Given this, I think the "within-episode exploration" and "across-episode exploration" relax into each other, and (as the distinction of episode boundaries fades) turn into the same thing, which I think is fine to call
... (read more)
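To make the within-/across-episode distinction concrete, here is a self-contained toy sketch (my own illustration, not from the post; the environment and learning rule are deliberately trivial):

```python
# Minimal sketch of where "within-episode" and "across-episode" exploration
# live in an episodic RL loop. Everything here is an invented toy example.
import random

class ToyEnv:
    """A 5-state corridor; reaching state 4 ends the episode with reward 1."""
    def reset(self):
        self.state = 0
        return self.state
    def step(self, action):                  # action is +1 (right) or -1 (left)
        self.state = max(0, min(4, self.state + action))
        done = self.state == 4
        return self.state, (1.0 if done else 0.0), done

class ToyAgent:
    def __init__(self):
        self.q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
        self.eps = 0.5
    def begin_episode(self):
        # Across-episode exploration: decay the noise over the agent's "lifetime",
        # a choice whose payoff only shows up in *future* episodes.
        self.eps *= 0.99
    def act(self, s):
        # Within-episode exploration: epsilon-greedy noise, trying something
        # unfamiliar now to do better later *in this same episode*.
        if random.random() < self.eps:
            return random.choice((-1, 1))
        return max((-1, 1), key=lambda a: self.q[(s, a)])
    def learn(self, s, a, r, s2):
        target = r + 0.9 * max(self.q[(s2, b)] for b in (-1, 1))
        self.q[(s, a)] += 0.1 * (target - self.q[(s, a)])

env, agent = ToyEnv(), ToyAgent()
for episode in range(200):
    agent.begin_episode()
    s, done = env.reset(), False             # the episode boundary: state is thrown away here
    while not done:
        a = agent.act(s)
        s2, r, done = env.step(a)
        agent.learn(s, a, r, s2)
        s = s2
# If env.reset() never happened (one long lifetime, as with humans), "later in this
# episode" and "in later episodes" would be the same thing, and the two kinds of
# exploration would collapse into one.
```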
Can we always assign probabilities?

Epistemic status: I wrote this post quickly, and largely to solicit feedback on the claims I make in it. This is because (a) I’m not sure about these claims (or how I’ve explained them), and (b) the question of what I should believe on this topic seems important in general and for various other posts I’m writing. (So please comment if you have any thoughts on this!)

I’ve now read a bunch on topics related to the questions covered here, but I’m not an expert, and haven’t seen or explicitly looked for a direct treatment of the questions covered here. It’s very possible this has already been thoro

... (Read more)

Just a passing thought here. Is probability really the correct term? I wonder if what we do in these types of cases is more an assessment of our confidence in our ability to extrapolate from past experience into new, and often completely different, situations.

If so, that is really not a probability about the event we're thinking about -- though perhaps it could be seen as one about our ability to make "wild" guesses (and yes, that is hyperbole) about stuff we don't really know anything about. Even there I'm not sure probability is t... (read more)

MichaelA (1 point, 5h): There are definitely a lot of different types of questions. There are also definitely multiple interpretations of probability. (This post presumes a Bayesian/subjectivist interpretation of probability, but a major contender is the frequentist view.) And it's definitely possible that there are some types of questions where it's more common, empirically speaking, to use one interpretation of probability than another, and possibly where that's more useful too. But I'm not aware of it being the case that probabilities just have to mean a different thing for different types of questions. If that's roughly what you meant, could you expand on that? (That might go to the heart of the claim I'm exploring the defensibility of in this post, as I guess I'm basically arguing that we could always assign at least slightly meaningful subjective credences to any given claim.) If instead you meant just that "a 0.001% chance of god being real" could mean either "a 0.001% chance of precisely the Judeo-Christian God being real, in very much the way that religion would expect" or "a 0.001% chance that any sort of supernatural force at all is real, even in a way no human has ever imagined at all", and that those are very different claims, then I agree.
MichaelA (1 point, 5h): I don't understand the last half of that last sentence. But as for the rest, if I'm interpreting you correctly, here's how I'd respond: The probability of a god existing is not necessarily equal to the probability of "the set of concepts [being] in any way possible" (or we might instead say something like "it being metaphysically possible", "the question even being coherent", or similar). Instead, it's less than or equal to that probability. That is, a god can indeed only exist if the set of concepts are in any way possible, but it seems at least conceivable that the set of concepts could be conceivable and yet it still happen to be that there's no god. And in any case, for the purposes of this post, what I'm really wondering about is not what the odds of there being a god are, but rather whether and how we can arrive at meaningful probabilities for these sorts of claims. So I'd then also ask whether and how we can arrive at a meaningful probability for the claim "It is metaphysically possible/in any way possible that there's a god" (as a separate claim to whether there is a god). And I'd argue we can, through a process similar to the one described in this post. To sketch it briefly, we might think about previous concepts that were vaguely like this one, and whether, upon investigation, they "turned out to be metaphysically possible". We might find they never have ("yet"), but that that's not at all surprising, even if we assume that those claims are metaphysically possible, because we just wouldn't expect to have found evidence of that anyway. In which case, we might be forced to either go for way broader reference classes (like "weird-seeming claims", or "things that seemed to violate occam's razor unnecessarily"), or abandon reference class forecasting entirely, and lean 100% on inside-view type considerations (like our views on occam's razor and how well this claim fits with it) or our "gut feelings" (hopefully honed by calibration training). I think the pro
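The inequality in the comment above can be written out explicitly (my rendering of the point, not the commenter's notation):

```latex
P(\text{god exists})
  = P(\text{god exists} \mid \text{metaphysically possible}) \cdot P(\text{metaphysically possible})
  \le P(\text{metaphysically possible}).
```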
MichaelA (1 point, 5h): Your comment made me realise that I skipped over the objection that the questions are too ambiguous to be worth engaging with. I've now added a paragraph to fix that: I think the reason why I initially skipped over that without noticing I'd done so was that:

  * this post was essentially prompted by the post from Chris Smith with the "Kyle the atheist" example
  * Smith writes in a footnote "For the benefit of the doubt, let’s assume everyone you ask is intelligent, has a decent understanding of probability, and more or less agrees about what constitutes an all-powerful god."
  * I wanted to explore whether the idea of it always being possible to assign probabilities could stand up to that particularly challenging case, without us having to lean on the (very reasonable) strategy of debating the meaning of the question. I.e., I wanted to see if, if we did agree on the definitions, we could still come to meaningful probabilities on that sort of question (and if so, how).

But I realise now that it might seem weird to readers that I neglected to mention the ambiguity of the questions, so I'm glad your comment brought that to my attention.
How to Escape From Immoral Mazes

Previously in sequence and most on point: What is Success in an Immoral Maze?, How to Identify an Immoral Maze

This post deals with the goal of avoiding or escaping being trapped in an immoral maze, accepting that for now we are trapped in a society that contains powerful mazes. 

We will not discuss methods of improving conditions (or preventing the worsening of conditions) within a maze, beyond a brief note on what a CEO might do. For a middle manager anything beyond not making the problem worse is exceedingly difficult. Even for the CEO this is an extraordinarily difficult task.   

To rescue so... (Read more)

As George Carlin says, some people need practical advice. I didn't know how to go about providing what such a person would need, on that level. How would you go about doing that?

The solution is probably not a book. Many books have been written on escaping the rat race that could be downloaded in the next 5 minutes, yet people don't, and if some do in reaction to this comment they probably won't get very far.

Problems that are this big and resistant to being solved are not waiting for some lone genius to find the 100,000 word combination that... (read more)

jmh (1 point, 4h): First, to be clear, I have not closely read all the series or even this one completely -- just feeling sick today so not focused. However, I did have a thought I wanted to get out; it may have been well addressed already. It seems that we are perhaps missing an element here. Is it possible that, even if one is working in a moral maze when viewed from the setting of the entire corporate structure, the various levels within it don't really impose the same problems? Think of this as a setting where we see the whole as one large pond. But what if, rather than one large pond, what we have is actually a collection of connected smaller ponds, and the maze really only applies in some of them and at the collection-of-ponds level? Is there something of a fallacy-of-composition error potential here? The whole is a moral maze, but many of the ponds it is composed of lack that character? If so, then it may well be possible to escape the maze without having to quit the job.
Zvi (3 points, 16h):
1: First 'are' should be 'what if'.
2: Difference is that the third-to-last question is about the 'can't afford it' concern, which is distinct from generally being trapped. Could see changing it to be the last three, or unifying the notes.
3: Differently. Arcane here means 'complex and obscure details that need to be mastered and done correctly, or it won't work'. Incantation here means 'a thing you say in order to evoke a particular response', in this case a social web pattern.
Pattern (3 points, 18h): This is a really great post.
Moloch Hasn’t Won

This post begins the Immoral Mazes sequence. See introduction for an overview of the plan. Before we get to the mazes, we need some background first.

Meditations on Moloch

Consider Scott Alexander’s Meditations on Moloch. I will summarize here. 

Therein lie fourteen scenarios where participants can be caught in bad equilibria.

  1. In an iterated prisoner’s dilemma, two players keep playing defect (see the toy simulation after this excerpt).
  2. In a dollar auction, participants massively overpay.
  3. A group of fishermen fail to coordinate on using filters that efficiently benefit the group, because they can’t punish those who don’t profi by not usi
... (Read more)
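As referenced in scenario 1, here is a toy simulation of how mutual defection persists between two myopic best-responders (my own illustrative code, not from the post; the payoffs are standard prisoner's dilemma values):

```python
# Toy illustration of scenario 1: in an iterated prisoner's dilemma between two
# myopic best-responders, mutual defection is stable even though mutual
# cooperation pays more.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(their_last_move):
    """Pick the move that maximizes my payoff against their last move."""
    return max("CD", key=lambda m: PAYOFF[(m, their_last_move)])

a, b = "C", "C"          # even starting from mutual cooperation...
history = []
for _ in range(10):
    a, b = best_response(b), best_response(a)
    history.append((a, b))

print(history)           # ...both players defect forever after round 1
```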

Happy to delete the word 'you' there since it's doing no work. Not going to edit this version, but will update OP and mods are free to fix this one. Also took opportunity to do a sentence break-up.

As for saying explicitly that slavery is bad, well, pretty strong no. I'm not going to waste people's time doing that, nor am I going to invite further concern trolling, or the implication that when I do not explicitly condemn something it means I might secretly support it or something. If someone needs reassurance that someone talking about slavery as one of the horrible things also opposes a less horrible form of slavery, then they are not the target audience.

What is Success in an Immoral Maze?

Previously in Sequence: Moloch Hasn’t Won, Perfect Competition, Imperfect Competition, Does Big Business Hate Your Family?, What is Life in an Immoral Maze?, Stripping Away the Protections

Immoral Mazes are terrible places to be. Much worse than they naively appear. They promise the rewards and trappings of success. Do not be fooled. 

If there is one takeaway I want everyone to get from the whole discussion of Moral Mazes, it is this:

Being in an immoral maze is not worth it. They couldn’t pay you enough. Even if they could, they definitely don’t. If you end up CEO, you still lose. These lives ar... (Read more)

zby (1 point, 6h): How does that relate to all that was said (and sung) about the 'rat race'?

'Rat race' is a highly related concept. It's mostly a subset, I think, although your view of the term may vary. Rat race illustrates the idea that when all the workers try harder to get ahead of other workers, everyone does lots more work, often to no useful end, without people on net getting ahead. Or, alternatively, that you do all this work just to stay in place. It certainly has implications of 'what I am doing doesn't actually matter' and also 'what I am doing is a zero-sum game', which implies the first thing.

How to Identify an Immoral Maze

Previously in sequence: Moloch Hasn’t Won, Perfect Competition, Imperfect Competition, Does Big Business Hate Your Family?, What is Life in an Immoral Maze?, Stripping Away the Protections, What is Success in an Immoral Maze?

Immoral mazes (hereafter mazes), as laid out in the book Moral Mazes, are toxic organizations. Working for them puts tremendous pressure on you to prioritize getting ahead in the organization over everything else. Middle managers are particularly affected – they are pushed to sacrifice not only all of their time, but also things such

... (Read more)
ErickBall (1 point, 14h): One factor is that the military has a pretty consistent policy of moving officers around to different postings every few years. You never work with the same people very long, except maybe at the very top. This might help enable some of the outrunning-your-mistakes phenomenon mentioned above, but it also probably means you can't develop the kind of interpersonal politics you might see in a big corporation.

When you say "the military" do you mean "the US military" here? I would be surprised if that's a consistent phenomena over the different militarizes that exist.

Mark_Friedenbach (5 points, 17h): That's a fully generic response though. Any combination of goals/drives could have a (possibly non-linear) mapping which turns them into a single unified goal in that sense, or vice versa. Let me put it more simply: can achieving "self-determination" alleviate your need to eat, sleep, and relieve yourself? If not, then there are some basic biological needs (maintenance of which is a goal) that have to be met separately from any "ultimate" goal of self-determination. That's the sense in which I considered it obvious we don't have singular goal systems.
mr-hire (2 points, 17h): Yeah, I think that if the brain in fact is mapped that way it would be meaningful to say you have a single goal. Maybe, it depends on how the brain is mapped. I know of at least a few psychology theories which would say things like avoiding pain and getting food are in the service of higher psychological needs. If you came to believe for instance that eating wouldn't actually lead to those higher goals, you would stop. I think this is pretty unlikely. But again, I'm not sure.
Mark_Friedenbach (2 points, 18h): Thanks, I learned something. Although for the purposes of this discussion it seems that Maslow's specific factorization of goals is questionable, but not the general idea of a hierarchy of needs. Does that sound reasonable?

Well, it sounds to me like it's more of a heterarchy than a hierarchy, but yeah.

So, a living being is composed of multiple parts that act pretty much in tandem except in extreme situations like cancer; how does that work?

Inspired by my post on problems with causal decision theory (CDT), here is a hacked version of CDT that seems to be able to imitate timeless decision theory (TDT) and functional decision theory[1] (FDT), as well as updateless decision theory (UDT) under certain circumstances.

Call this ACDT, for (a)causal decision theory. It is, essentially, CDT which can draw extra, acausal arrows on the causal graphs, and which attempts to figure out which graph represents the world it's in. The drawback is its lack of elegance; the advantage, if it works, is that it's simple to specify and focuses attention

... (Read more)
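A rough sketch of the ACDT idea as I read it: keep several candidate graphs, some with extra acausal arrows, score them against experience, then act as a CDT agent inside the best-fitting graph. This is my own toy construction on Newcomb's problem, not code from the post, and all numbers are invented:

```python
# Toy ACDT sketch on Newcomb's problem: candidate models of how the predictor's
# opaque box depends on my action, scored against past observations.

# Each candidate graph maps my action to P(opaque box contains $1M).
GRAPHS = {
    "causal_only":       lambda action: 0.5,                                   # no arrow from action to box
    "with_acausal_link": lambda action: 0.99 if action == "one-box" else 0.01, # extra acausal arrow
}

def expected_payout(graph, action):
    p_million = GRAPHS[graph](action)
    base = 1_000 if action == "two-box" else 0
    return base + p_million * 1_000_000

def graph_likelihood(graph, observations):
    """Score a graph against past (action, got_million) observations."""
    score = 1.0
    for action, got_million in observations:
        p = GRAPHS[graph](action)
        score *= p if got_million else (1 - p)
    return score

# Past experience: one-boxers got the million, two-boxers didn't.
history = [("one-box", True), ("one-box", True), ("two-box", False)]

best_graph = max(GRAPHS, key=lambda g: graph_likelihood(g, history))
best_action = max(["one-box", "two-box"], key=lambda a: expected_payout(best_graph, a))
print(best_graph, best_action)   # the acausal-link graph fits better, and the agent one-boxes
```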
rmoehn (3 points, 10h): To other readers: If you see broken image links, try right-click+View Image, or open the page in Chrome or Safari. In my Firefox 71 they are not working.

That's annoying - thanks for pointing it out. Any idea what the issue is?

Update: Beliefs that are about the world in general, and not about yourself in particular (i.e. things you don't want to say about yourself)

Neat solution, but I feel the dynamic of adviser/advisee is more like "I acquired this piece of wisdom over time through experiences, hardships, etc., so that you may not have to go through all of them." And mostly I think this is what lures people in. So I think it won't play out the same way, thus defeating the original intention of the solution.

I am okay with people sharing their experiences and wisdom they acquired as a result of their journey, but what irks me is the extrapolation of it without sharing the downsides.

I’ve spent a lot of time defending LW authors’ right to have the conversation they want to have, whether that be early stage brainstorming, developing a high context idea, or just randomly wanting to focus on some particular thing. 

LessWrong is not only a place for finished, flawless works. Good intellectual output requires both Babble and Prune, and in my experience the best thinkers often require idiosyncratic environments in order to produce and refine important insights. LessWrong is a full-stack intellectual pipeline. 

But the 2018 Review is supposed to be late stage in that pipe

... (Read more)

Yeah, true, that seems like a fair reason to point out for why there wouldn't be more reviews. Thanks for sharing your personal reasons.

Previously: Eliezer's "Against Rationalization" sequence

I've run out of things to say about rationalization for the moment. Hopefully there'll be an Against Rationalization III a few years from now, but ideally some third author will write it.

For now, a quick recap to double as a table of contents:

... (Read more)
Raemon (3 points, 18h): Congrats! Note that if you go to the library page and scroll down a bit, you'll find a "create sequence" button, which you can use if you want to create a formal sequence for this. (Also happy to help with this if the UI is confusing – we haven't really optimized our sequence UI as much as we'd like.)

So *that's* where that UI lives! I did look for it.

Might go back and convert this into a proper sequence when I get back from Mystery Hunt.

In this post I'd basically like to collect some underappreciated points about utility functions that I've made in the comments of various places, but which I thought were worth collecting into a proper, easily-referenceable post. The first part will review the different things referred to by the term "utility function", review how they work, and explain the difference between them. The second part will explain why -- contrary to widespread opinion on this website -- decision-theoretic utility functions really do need to be bounded.

(It's also worth noting that as a consequence, a number of the decis

... (Read more)
Isnasene (1 point, 16h): Ahh, thanks for clarifying. I think what happened was that your modus ponens was my modus tollens -- so when I think about my preferences, I ask "what conditions do my preferences need to satisfy for me to avoid being exploited or undoing my own work?" whereas you ask something like "if my preferences need to correspond to a bounded utility function, what should they be?" [1]. As a result, I went on a tangent about infinity to begin exploring whether my modified notion of a utility function would break in ways that regular ones wouldn't. I agree, one shouldn't conclude anything without a theorem. Personally, I would approach the problem by looking at the infinite wager comparisons discussed earlier and trying to formalize them into additional rationality conditions. We'd need

  * an axiom describing what it means for one infinite wager to be "strictly better" than another.
  * an axiom describing what kinds of infinite wagers it is rational to be indifferent towards

Then, I would try to find a decisioning-system that satisfies these new conditions as well as the VNM-rationality axioms (where VNM-rationality applies). If such a system exists, these axioms would probably bar it from being represented fully as a utility function. If it didn't, that'd be interesting. In any case, whatever happens will tell us more about either the structure our preferences should follow or the structure that our rationality-axioms should follow (if we cannot find a system). Of course, maybe my modification of the idea of a utility function turns out to show such a decisioning-system exists by construction. In this case, modifying the idea of a utility function would help tell me that my preferences should follow the structure of that modification as well. Does that address the question? [1] From your post:

Ahh, thanks for clarifying. I think what happened was that your modus ponens was my modus tollens -- so when I think about my preferences, I ask "what conditions do my preferences need to satisfy for me to avoid being exploited or undoing my own work?" whereas you ask something like "if my preferences need to correspond to a bounded utility function, what should they be?" [1]

That doesn't seem right. The whole point of what I've been saying is that we can write down some simple conditions that ought to be true in order to avoid being exploitable or othe

... (read more)
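For readers who want the standard concrete example behind the infinite-wager worry, the usual St. Petersburg-style construction (a textbook illustration added here, not taken from the post) goes:

```latex
% If U is unbounded, pick outcomes x_1, x_2, \dots with U(x_n) \ge 2^n.
% The wager paying x_n with probability 2^{-n} then has divergent expected utility:
\mathbb{E}[U] = \sum_{n=1}^{\infty} 2^{-n} \, U(x_n) \ge \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n} = \infty ,
% so comparisons among such wagers (and lotteries mixing them in) break down --
% the kind of pathology a boundedness requirement is meant to rule out.
```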
Using Expert Disagreement

Previously: Testing for Rationalization


One of the red flags was "disagreeing with experts". While all the preceding tools apply here, there's a suite of special options for examining this particular scenario.

The "World is Mad" Dialectic

Back in 2015, Ozymandias wrote:

I think a formative moment for any rationalist– our “Uncle Ben shot by the mugger” moment, if you will– is the moment you go “holy shit, everyone in the world is fucking insane.”
First, you can say “holy shit, everyone in the world is fucking insane. Ther
... (Read more)
The lying theory is tricky, as it can explain anything.

The lying theory can explain away any "evidence", but not tell you what the truth is - at best it can tell you where the truth is not.
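In Bayesian terms (my framing of the point, not the commenters'): a hypothesis that can "explain anything" assigns roughly the same likelihood to every observation, so no particular piece of evidence can sharply disconfirm it -- and by the same token it singles out nothing to expect next.

```latex
% If H ("everyone is lying") predicts every observation E_1, \dots, E_N about equally,
P(E_i \mid H) \approx \tfrac{1}{N} \quad \text{for all } i ,
% then updates, which run through the likelihood ratio in
\frac{P(H \mid E_i)}{P(\lnot H \mid E_i)}
  = \frac{P(E_i \mid H)}{P(E_i \mid \lnot H)} \cdot \frac{P(H)}{P(\lnot H)} ,
% can never drive P(E_i | H) toward zero: H is hard to rule out, but it also
% points at no particular truth.
```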

A rant against robots

What comes to your mind when you hear the term "artificial intelligence" (or "artificial general intelligence")? And if you want to prepare for the future, what should come to your mind?

It seems that when most people hear AI, they think of robots. Weirdly, this observation includes both laymen and some top academics. Stuart Russell's book (which I greatly enjoyed) is such an example. It often presents robots as an example of an AI.

But this seems problematic to me. I believe that we should dissociate a lot more AIs from robots. In fact, given that most people will neverthel... (Read more)

Typically, Legg-Hutter intelligence does not seem to require any "embodied intelligence".

Don't make the mistake of basing your notions of AI on uncomputable formalisms. That mistake has destroyed more minds on LW than probably anything else.
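For reference, the Legg-Hutter measure mentioned above is defined over disembodied input/output policies, which is why it requires no body -- and it involves Kolmogorov complexity, which is what makes the formalism uncomputable. As I recall the definition (readers should check the original paper):

```latex
% Legg-Hutter universal intelligence of a policy \pi:
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi} ,
% where E is the class of computable environments, K(\mu) is the Kolmogorov
% complexity of environment \mu (uncomputable), and V_{\mu}^{\pi} is the
% expected total reward \pi obtains in \mu. Nothing in the definition refers
% to a body or a robot.
```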

Donald Hobson (1 point, 18h): Algorithms don't have a single "power" setting. It is easier to program a single computer than to make a distributed fault tolerant system. Algorithms like AlphaGo are run on a particular computer with an off switch, not spread around. Of course, a smart AI might soon load its code all over the internet, if it has access. But it would start in a box.

This post summarizes the sequence on value learning. While it doesn’t introduce any new ideas, it does shed light on which parts I would emphasize most, and the takeaways I hope that readers get. I make several strong claims here; interpret these as my impressions, not my beliefs. I would guess many researchers disagree with the (strength of the) claims, though I do not know what their arguments would be.

Over the last three months we’ve covered a lot of ground. It’s easy to lose sight of the overall picture over such a long period of time, so let's do a brief recap.

The “obvious” approach

H... (Read more)

I feel like you are trying to critique something I wrote, but I'm not sure what? Could you be a bit more specific about what you think I think that you disagree with?

(In particular, the first paragraph sounds like a statement that I myself would make, so I'm not sure how it is a critique.)
