I'm laying out my thoughts in order to get people thinking about these points and perhaps correct me. I definitely don't endorse deferring to anything I say, and I would write this differently if I thought people were likely to do so. 1. OpenAI's model of "deploy as early as...
but I am pretty sure that there is a program that you can write down that has the same structural property of being interpretable in this way, where the algorithm also happens to define an AGI.
Interesting. I have semi-strong intuitions in the other direction. These intuitions are mainly from thinking about what I call the Q-gap, inspired by Q Home's post and this quote:
…for simple mechanisms, it is often easier to describe how they work than what they do, while for more complicated mechanisms, it is usually the other way around.
Intelligent processes are anabranching rivers of causality: they start and end at highly concentrated points, but the route between them is incredibly hard... (read more)
There's a funny self-contradiction here.[1]
If you learn from this essay, you will then also see how silly it was that it had to be explained to you in this manner. The essay is littered with appeals to historical anecdotes, and invites you to defer to the way those figures went about things because it's evident they had some success.
Bergman, Grothendieck, and Pascal all do this.
If the method itself doesn't make sense to you by the light of your own reasoning, it's not something you should be interested in taking seriously. And if the method makes sense to you on its own, you shouldn't care whether big people have or haven't tried it before.
But whatever... (read more)
I'm curious to know what people are downvoting.
My uncharitable guess? People are doing negative selection over posts, instead of "ruling posts in, not out". Posts like this one that go into a lot of specific details present voters with many more opportunities to disagree with something. So when readers downvote based on the first objectionable thing they find, writers are disincentivised from going into detail.
Plus, the author uses a lot of jargon and makes up new words, which some people somehow associate with epistemic inhumility. Whereas I think writers should be making up new word candidates ~most of the time they might have something novel & interesting to say.
Interesting! I came to it from googling for definitions of the CLT in terms of convolutions. But I have one gripe:
does that mean the form of my uncertainty about things approaches Gaussian as I learn more?
I think a counterexample would be your uncertainty over the number of book sales for your next book. There are recursive network effects such that more book sales cause more book sales. The more books you (first-order) expect to sell, the more books you ought to (second-order) expect to sell. In other words, your expectation over X indirectly depends on your expectation over X (or at least, it ought to, insofar as there's recursion in the territory as... (read more)
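(A toy sketch of the kind of process I have in mind, with made-up numbers: sum many small independent shocks and the CLT pushes the totals toward a Gaussian, but let each period's sales compound on the sales so far and the distribution across runs comes out heavily right-skewed instead. The step counts and growth-factor ranges below are arbitrary illustrations, not a model of actual book-sales data.)

```python
import numpy as np

rng = np.random.default_rng(0)
n_steps, n_runs = 200, 10_000

# Additive case: the total is a sum of many small independent shocks,
# so the CLT applies and the distribution across runs is roughly Gaussian.
additive = rng.uniform(0.0, 2.0, size=(n_runs, n_steps)).sum(axis=1)

# Reinforcing case: each period's sales scale with sales so far ("more book
# sales cause more book sales"), so shocks compound multiplicatively and the
# distribution across runs ends up heavily right-skewed, not Gaussian.
reinforcing = np.ones(n_runs)
for _ in range(n_steps):
    reinforcing *= rng.uniform(0.8, 1.25, size=n_runs)  # hypothetical growth factors

def skew(x):
    return float(((x - x.mean()) ** 3).mean() / x.std() ** 3)

print(f"additive sum:        skewness ~ {skew(additive):.2f}")     # near zero
print(f"reinforcing product: skewness ~ {skew(reinforcing):.2f}")  # large, positive
```

(Taking logs of the reinforcing process turns the product back into a sum, which is why the limit there is log-normal rather than normal.)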
I've taken to calling it the 'Q-gap' in my notes now. ^^'
You can understand AlphaZero's fundamental structure so well that you're able to build it, yet be unable to predict what it can do. Conversely, you can have a statistical model of its consequences that lets you predict what it will do better than any of its engineers, yet know nothing about its fundamental structure. There's a computational gap between the system's fundamental parts and its consequences.
The Q-gap refers to the distance between these two explanatory levels.
...for simple mechanisms, it is often easier to describe how they work than what they do, while for more complicated mechanisms, it is usually the other way around. (read more)
Yeah, a lot of "second-best theories" are due to small-mindedness xor realistic expectations about what you can and cannot change. And a lot of inadequate equilibria are stuck in equilibrium due to the repressive effect the Overton window has on people's ability to imagine.
A general frame I often find comes in handy while analysing systems is to look for equilibria, figure out the key variables sustaining them (e.g., strategic complements, balancing selection, latency or asymmetrical information in commons-tragedies), and, well, that's it. Those are the leverage points of the system. If you understand them, you're in a much better position to evaluate whether a suggested change might work, is guaranteed to fail, or suffers from a lack of imagination.
Suggestions that fail to consider the relevant system variables are often what I call "second-best theories". Though they might be locally correct, they're also blind to the broader implications or underappreciative... (read more)
I dislike the frame of "charity" & "steelmanning". It's not usefwl for me because it assumes I would feel negatively about seeing some patterns in the first place, and that I need to correct for this by overriding my habitual soldier-like attitudes. But the value of "interpreting patterns usefwly" is extremely general, so it's a distraction to talk as if it's exclusive to the social realm.
Anyway, this reminded me of what I call "analytic" and "synthetic" thinking. They're both thinking-modes, but they emphasise different things.
I'm laying out my thoughts in order to get people thinking about these points and perhaps correct me. I definitely don't endorse deferring to anything I say, and I would write this differently if I thought people were likely to do so.
Strong upvote, but I disagree on something important. There's an underlying generator that chooses which simulacra to do a weighted average over in its response. The notion that you can "speak" to that generator is a type error, perhaps akin to thinking that you can speak to the country 'France' by calling its elected president.
My current model says that the human brain also works by taking the weighted (and normalised!) average (the linear combination) over several population vectors (modules) and using the resultant vector to stream a response. There are definite experiments showing that something like this is the case for vision and motor commands, and strong reasons to suspect that this is... (read more)
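(For concreteness, a minimal sketch of what I mean by a normalised weighted average over population vectors; the "modules", dimensions, and weights below are made up purely for illustration, not drawn from any neural data.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "modules": each proposes a direction (a population vector)
# in some shared representation space, plus a weight (e.g., its activation).
n_modules, dim = 4, 8
proposals = rng.normal(size=(n_modules, dim))  # one vector per module
weights = np.array([0.1, 0.5, 0.3, 0.1])       # made-up activations

# Normalised weighted average: a convex combination of the modules' vectors.
weights = weights / weights.sum()
resultant = weights @ proposals                # shape (dim,)

# The resultant vector is what would then drive the streamed response.
print(resultant)
```

(Because the weights are normalised to sum to one, the resultant lies in the convex hull of the modules' vectors: it's a compromise between them rather than a winner-take-all pick.)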
I'm sad this comment was interpreted as "combative" (based on Elizabeth's reaction). It's probably a reasonable prediction/interpretation, but it's far from what I intended to communicate. I wanted my comment to be read with some irony: it's a pity that this post has to be written like this in order to get through to most readers, because most readers aren't yet at the point where they can benefit from its wisdom unless it's presented to them in this manner.