Comments

Thanks for the heads-up. I'd never heard of this guy before, but he's very good and quite inspiring for where I'm at right now.

I think for non-elites it's about the same. It depends on how you conceive of "ideas", of course: whether you restrict the term purely to abstractions, or broaden it to include all sorts of algorithms, including practical ones.

Non-elites aren't as concerned with abstractions as elites are; they're much more concerned with practical day-to-day matters like raising a family, work, friends, entertainment, etc.

Take for instance DIY videos on YouTube - there are tons of them nowadays, and that's the kind of thing that non-elites (and indeed elites, to the extent that they actually care about DIY) will benefit from tremendously. And I think it's natural for a non-elite individual to check out a few (after all, it's pretty costless, except for a tiny bit of time) and sift out what seem like the best methods.

It could be that, like sleep, the benefits of reading fiction aren't obvious or on the surface. IOW, escapism might be like dreaming: a waste from one point of view (time spent), but still something without which we couldn't function properly - therefore not a waste, but a necessary part of maintenance, or summat.

What happens if it doesn't want to - if it decides to do digital art or start life in another galaxy?

That's the thing: a self-aware, intelligent thing isn't bound to do the tasks you ask of it - hence the poor ROI. Humans are already such entities, but far cheaper to make, so a few who go off and become monks aren't a big problem.

I can't remember where I first came across the idea (maybe Daniel Dennett), but the main argument against AI is that it's simply not worth the cost for the foreseeable future. Sure, we could possibly create an intelligent, self-aware machine now, if we put nearly all the world's relevant resources and scientists onto it. But who would pay for such a thing?

What's the ROI for a super-intelligent, self-aware machine? Not very much, I should think - especially considering the potential dangers.

So yeah, we'll certainly produce machines like the robots in Interstellar - clever expert systems with a simulacrum of self-awareness. Because there's money in it.

But the real thing? Not likely. It only becomes likely much further down the line, when it's cheap enough to do for fun. And I think by that time, experience with less powerful genies will have given us enough feedback to do it safely.

If there's any kernel to the concept of rationality, it's the idea of proportioning beliefs to evidence (Hume). Everything really flows from that; the sub-varieties (like epistemic and instrumental rationality) are concrete applications of that principle in specific domains.

"Ratio" = comparing one thing with another, i.e. (in this context) one hypothesis with another, in light of the evidence.

(As I understand it, Bayes is the method of "proportioning beliefs to evidence" par excellence.)
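The odds form of Bayes' rule makes the "proportioning" literal: multiply your prior odds on a hypothesis by the likelihood ratio of the evidence. A minimal sketch (the function names and the numbers are my own illustration, not from any particular source):

```python
# Odds form of Bayes' rule: posterior odds = prior odds * likelihood ratio.
# The likelihood ratio measures how much more probable the evidence is
# if the hypothesis is true than if it is false.

def update_odds(prior_odds, p_evidence_given_h, p_evidence_given_not_h):
    """Return the posterior odds on H after one piece of evidence."""
    likelihood_ratio = p_evidence_given_h / p_evidence_given_not_h
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    """Convert odds (e.g. 3.0, meaning 3:1) to a probability."""
    return odds / (1 + odds)

# Start at even odds (1:1); the evidence is three times as likely
# under H (0.75) as under not-H (0.25), so belief rises to 3:1.
posterior = update_odds(1.0, 0.75, 0.25)
print(posterior)                # 3.0
print(odds_to_prob(posterior))  # 0.75
```

Stronger evidence (a larger likelihood ratio) moves belief more; evidence equally likely either way (ratio 1) doesn't move it at all. That's the proportioning.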

Great stuff! As someone who's come to all this Bayes/LessWrong stuff quite late, I was surprised to discover that Scott Alexander's blog is one of the more popular in the blogosphere, flying the flag for this sort of approach to rationality. I've noticed that he's liked by people on both the Left and the Right, which is a very good thing. He's a great moderating influence, and I think that for many people he offers a palatable introduction to a more serious, less biased way of looking at the world.

I think the concept of psychological neoteny is interesting (Google Bruce Charlton neoteny) in this regard.

Roughly, the idea is that some people retain something of the plasticity and curiosity of children, whereas others don't: they mature into "proper" human beings and lose that curiosity and creativity. The former are the creative types; the latter are the average human type.

There are several layered ironies if this is a valid notion.

Anyway, the latter type really do exhaust their interests in maturity - they stick to one career, their interests are primarily friends and family, etc. - so it's easy to see how, for them, life might be "done" at some point. For geeks, nerds, artists, and probably a lot of scientists too, the curiosity never ends; there's always interest in what happens next, what's around the corner, so for them the idea of life extension and immortality is a positive.

All purely sensory qualities of an object are objective, yes. Whatever sensory experience you have of an object is precisely how that object objectively interacts with your sensory system. The perturbation your being (your physical substance) undergoes via the causal sensory channels is precisely the perturbation that object causes in a physical system with your particular configuration ("wiring").

There are still subjective perceived qualities of objects, though - e.g. illusory ones (like the Müller-Lyer lines, but not "illusions" like the famous "bent" stick in water, which is a genuine sensory experience), pleasant, inspiring, etc.

I'm calling "sensory" the experience (the perturbation of one's being) itself, and "perception" the interpretation of it (i.e. the hypothetical projection of a cause of the perturbation outside the perturbation itself). In doing this I'm "tidying up" what ordinary language often mixes (e.g. sometimes what I'm calling sensory experiences are called "perceptions", and vice versa). At least, there are these two quite distinct things or processes going on in reality. There may also be caveats about at what level the brain leaves off sensory reception and starts actively interpreting; I'm not 100% sure about that.
