I dunno, isn't this just a nerdy version of numerology?
I think for non-elites it's about the same. It depends on how you conceive of "ideas", of course - whether you restrict the term purely to abstractions, or broaden it to include all sorts of algorithms, including the practical.
Non-elites aren't concerned with abstractions as much as elites are; they're much more concerned with practical day-to-day matters like raising a family, work, friends, entertainment, etc.
Take for instance DIY videos on Youtube - there are tons of them nowadays, and that's an example of the kind of thing that non-elites (and ind...
It could be that, like sleep, the benefits of reading fiction aren't obvious and aren't on the surface. IOW, escapism might be like dreaming - a waste from one point of view (time spent) but still something without which we couldn't function properly, so therefore not a waste, but a necessary part of maintenance, or summat.
What happens if it doesn't want to - if it decides to do digital art or start life in another galaxy?
That's the thing: a self-aware intelligent thing isn't bound to do the tasks you ask of it, hence a poor ROI. Humans are already such entities, but far cheaper to make, so a few of them going off to become monks isn't a big problem.
I can't remember where I first came across the idea (maybe Daniel Dennett) but the main argument against AI is that it's simply not worth the cost for the foreseeable future. Sure, we could possibly create an intelligent, self-aware machine now, if we put nearly all the relevant world's resources and scientists onto it. But who would pay for such a thing?
What's the ROI for a super-intelligent, self-aware machine? Not very much, I should think - especially considering the potential dangers.
So yeah, we'll certainly produce machines like the robots in Inte...
If there's any kernel to the concept of rationality, it's the idea of proportioning beliefs to evidence (Hume). Everything really flows from that, and the sub-variations (like epistemic and instrumental rationality) are variations of that principle, concrete applications of it in specific domains, etc.
"Ratio" = comparing one thing with another, i.e. (in this context) one hypothesis with another, in light of the evidence.
(As I understand it, Bayes is the method of "proportioning beliefs to evidence" par excellence.)
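The "comparing one hypothesis with another, in light of the evidence" idea can be sketched as a Bayesian update. A minimal illustration (the hypothesis and all the numbers below are made up purely for the sake of example):

```python
def bayes_update(prior, likelihood, likelihood_alt):
    """Credence in hypothesis H after seeing evidence E.

    prior          -- P(H) before the evidence
    likelihood     -- P(E | H)
    likelihood_alt -- P(E | not-H)
    """
    # Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    numerator = likelihood * prior
    evidence = likelihood * prior + likelihood_alt * (1 - prior)
    return numerator / evidence

# E.g. a test that fires 90% of the time when H is true, 20% when it
# isn't, applied to a hypothesis we initially give 30% credence:
posterior = bayes_update(prior=0.3, likelihood=0.9, likelihood_alt=0.2)
print(round(posterior, 3))  # 0.659
```

The point is just that the belief is moved exactly as far as the evidence warrants - strong evidence (a big gap between the two likelihoods) moves it a lot, weak evidence barely at all.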
Great stuff! As someone who's come to all this Bayes/LessWrong stuff quite late, I was surprised to discover that Scott Alexander's blog is one of the more popular in the blogosphere, flying the flag for this sort of approach to rationality. I've noticed that he's liked by people on both the Left and the Right, which is a very good thing. He's a great moderating influence and I think he offers a palatable introduction to a more serious, less biased way of looking at the world, for many people.
I think the concept of psychological neoteny is interesting (Google Bruce Charlton neoteny) in this regard.
Roughly, the idea would be that some people retain something of the plasticity and curiosity of children, whereas others don't: they mature into "proper" human beings and lose that curiosity and creativity. The former are the creative types; the latter are the average human type.
There are several layered ironies if this is a valid notion.
Anyway, the latter type really do exhaust their interests in maturity; they stick to one c...
All purely sensory qualities of an object are objective, yes. Whatever sensory experience you have of an object is just precisely how that object objectively interacts with your sensory system. The perturbation that your being (your physical substance) undergoes upon interaction with that object via the causal sensory channels is precisely the perturbation caused by that object on your physical system, with the particular configuration ("wiring") it has.
There are still subjective perceived qualities of objects though - e.g. illusory (e.g. like M...
Yes, for that person. Remember, we're not talking about an intrinsic or inherent quality, but an objective quality. Test it however many times you like, the lemon will be sweet to that person - i.e. it's an objective quality of the lemon for that person.
Or to put it another way, the lemon is consistently "giving off" the same set of causal effects that produce in one person "tart", another person "sweet".
The initial oddness arises precisely because we think "sweetness" must itself be an intrinsic quality of something, because there are several hundred years of bad philosophy telling us there are qualia, which are intrinsically private, intrinsically subjective, etc.
Sweetness isn't an intrinsic property of the thing, but it is a relational property of the thing - i.e. the thing's sweetness comes into existence when we (with our particular characteristics) interact with it. And objectively so.
It's not right to mix up "intrinsic" or "inherent" with objective. They're different things. A property doesn't have to be intrinsic in order to be objective.
So sweetness isn't a property of the mental model either.
It's an objective quality (of a thing) that arises only in its interaction with us. An ana...
Hmm, but isn't this conflating "learning" in the sense of "learning about the world/nature" with "learning" in the sense of "learning behaviours"? We know the brain can do the latter, it's whether it can do the former that we're interested in, surely?
IOW, it looks like you're saying precisely that the brain is not a ULM (in the sense of a machine that learns about nature), it is rather a machine that approximates a ULM by cobbling together a bunch of evolved and learned behaviours.
It's adept at learning (in the sense of learning reactive behaviours that satisfice conditions) but only proximally adept at learning about the world.
Great stuff, thanks! I'll dig into the article more.
I'm not sure what you mean by gerrymandered.
What I meant is that you have sub-systems dedicated to (and originally evolved to perform) specific concrete tasks, and shifting coalitions of them (or rather shifting coalitions of their abstract core algorithms) are leveraged to work together to approximate a universal learning machine.
IOW any given specific subsystem (e.g. "recognizing a red spot in a patch of green") has some abstract algorithm at its core which is then drawn upon at need by an organizing principle which utilizes it (plus oth...
That's a lot to absorb and I've only skimmed it, so please forgive me if responses to the following are already implicit in what you've said.
I thought the point of the modularity hypothesis is that the brain only approximates a universal learning machine and has to be gerrymandered and trained to do so?
If the brain were naturally a universal learner, then surely we wouldn't have to learn universal learning (e.g. we wouldn't have to learn to overcome cognitive biases, Bayesian reasoning wouldn't be a recent discovery, etc.)? The system seems too gappy and glitchy, too full of quick judgement and prejudice, to have been designed as a universal learner from the ground up.
I think there's always been something misleading about the connection between knowledge and belief. In the sense that you're updating a model of the world, yes, "belief" is an ok way of describing what you're updating. But in the sense of "belief" as trust, that's misleading. Whether one trusts one's model or not is irrelevant to its truth or falsity, so any sort of investment one way or another is a side-issue.
IOW, knowledge is not a modification of a psychological state, it's the actual, objective status of an "aperiodic cry...
I remember reading a book many years ago which talked about the "hormonal bath" in the body being actually part of cognition, such that thinking of the brain/CNS as the functional unit is wrong (it's necessary but not sufficient).
This ties in with the philosophical position of Externalism (I'm very much into the Process Externalism of Riccardo Manzotti). The "thinking unit" is really the whole body - and actually finally the whole world (not in the Panpsychist sense, quite, but rather in the sense of any individual instance of cognitio...
Oh, true for the "uploaded prisoner" scenario, I was just thinking of someone who'd deliberately uploaded themselves and wasn't restricted - clearly suicide would be possible for them.
But even for the "uploaded prisoner", given sufficient time it would be possible - there's no absolute impermeability to information anywhere, is there? And where there's information flow, control is surely ultimately possible? (The image that just popped into my head was something like training mice, via flashing lights, to gnaw the wires :) )
But that reminds me of the problem of trying to isolate an AI once built.
Isn't suicide always an option? When it comes to imagining immortality, I'm like Han Solo, but limits are conceivable and boredom might become insurmountable.
The real question is whether intelligence has a ceiling at all - if not, then even millions of years wouldn't be a problem.
Charlie Brooker's Black Mirror TV show played with the punishment idea - a mind uploaded to a cube, subjectively experiencing hundreds of years in a virtual kitchen with a virtual garden, as punishment for murder (the murder was committed in the kitchen). In real time, the cube is ...
Thanks for the heads up, never heard of this guy before but he's very good and quite inspiring for me where I'm at right now.