hosford42
hosford42 has not written any posts yet.

You should read up on regularization and the no free lunch theorem if you aren't already familiar with them.
A theory is a model for a class of observable phenomena. A model is constructed from smaller primitive (atomic) elements connected together according to certain rules. (Ideally, the model's behavior or structure is isomorphic to that of the class of phenomena it is intended to represent.) We can take this collection of primitive elements, plus the rules for how they can be connected, as a modeling language. Now, depending on which primitives and rules we have selected, it may become more or less difficult to express a model with behavior isomorphic to the original…
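The point about primitives and rules can be made concrete with a toy sketch (my own illustration, not part of the original comment): the same "phenomenon" gets a very different description length depending on which primitives the modeling language happens to provide.

```python
# Toy illustration: the same phenomenon -- here, a long periodic string --
# has wildly different model sizes in two hypothetical modeling languages.

phenomenon = "ab" * 500  # 1000 characters of raw "observations"

# Language 1 offers only a LITERAL primitive: the model must spell
# everything out, so its size tracks the raw data.
model_lang1 = ("LITERAL", phenomenon)

# Language 2 adds a REPEAT primitive: the same phenomenon now has a tiny
# model, because the language's rules happen to fit its structure.
model_lang2 = ("REPEAT", "ab", 500)

def expand(model):
    """Interpret a model back into the phenomenon it represents."""
    op = model[0]
    if op == "LITERAL":
        return model[1]
    if op == "REPEAT":
        return model[1] * model[2]
    raise ValueError(op)

# Both models are behaviorally isomorphic to the phenomenon...
assert expand(model_lang1) == expand(model_lang2) == phenomenon

# ...but their description lengths differ by roughly 50x:
print(len(repr(model_lang1)))  # on the order of 1000
print(len(repr(model_lang2)))  # on the order of 20
```

The apparent "complexity" of the phenomenon is not fixed; it depends on the relationship between the phenomenon and the language chosen to describe it.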
In brainstorming, a common piece of advice is to let down your guard and just let the ideas flow without any filters or critical thinking, and then follow up with a review to select the best ones rationally. The concept here is that your brain has two distinct modes of operation, one for creativity and one for analysis, and that they don't always play well together, so by separating their activities you improve the quality of your results. My personal approach mirrors this to some degree: I rapidly alternate between these two modes, starting with a new idea, then finding a problem with it, then proposing a fix, then finding a new…
The only way to pretend that human value isn't just another component of how humans historically have done this is by attributing some sort of transcendent quality to human biology (i.e., a soul or something).
Human values are special because we are human. Each of us is at the center of the universe, from our own perspective, regardless of what the rest of the universe thinks of that. It's the only way for anything to have value at all, because there is no other way to choose one set of values over another except that you happen to embody those values. The paperclip maximizer's goals do not have value with respect to our…
Regarding this post and the complexity of value:
Taking a paperclip maximizer as a starting point, the machine can be divided up into two primary components: the value function, which dictates that more paperclips is a good thing, and the optimizer that increases the universe's score with respect to that value function. What we should aim for, in my opinion, is to become the value function to a really badass optimizer. If we build a machine that asks us how happy we are, and then does everything in its power to improve that rating (so long as it doesn't involve modifying our values or controlling our ability to report them), that is the…
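The value-function/optimizer split described above can be sketched in code. This is a minimal toy of my own construction (the names `paperclip_value`, `reported_happiness`, and the hill-climbing loop are all hypothetical illustrations, not anything from the post): the optimizer has no goals of its own, and "becoming the value function" amounts to plugging a human-centered scoring rule into the same machinery.

```python
import random

def paperclip_value(state):
    """Value function of a paperclip maximizer: more paperclips is better."""
    return state["paperclips"]

def reported_happiness(state):
    """The comment's proposal: human self-reports become the value function."""
    return state["avg_reported_happiness"]

def optimize(value_fn, state, steps=2000, seed=0):
    """A generic hill-climbing optimizer. It simply increases whatever
    score the plugged-in value function reports; it is agnostic about
    which world states are actually good."""
    rng = random.Random(seed)
    for _ in range(steps):
        candidate = dict(state)
        key = rng.choice(list(candidate))          # perturb one feature of the world
        candidate[key] += rng.choice([-1, 1])
        if value_fn(candidate) > value_fn(state):  # keep only improvements
            state = candidate
    return state

world = {"paperclips": 0, "avg_reported_happiness": 0}
print(optimize(paperclip_value, world))     # drives paperclips up, ignores happiness
print(optimize(reported_happiness, world))  # same optimizer, human-centered goal
```

The optimizer code is identical in both runs; only the value function changes. That decoupling is the whole point of the comment's proposal.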
It is better in the sense that it is ours. For an agent with values, embedded in a much greater universe that might contain other agents with other values, it is an inescapable quality of life that ultimately the only thing making one particular set of values matter more to that agent is that those values are its own.
One of the values we happen to hold is respect for others' values. But this particular value becomes self-contradictory when taken to its natural conclusion. To take it to its conclusion would be to say that nothing matters in the end, not even what we ourselves care…
I didn't miss the point; I just had one of my own to add. I gave the post a thumbs-up before I made my comment, because I agree with the overwhelming majority of it and have dealt with people who have some of the confusions described therein. Anyway, thanks for explaining.
I guess relevance is a matter of perspective. I was not aware that my ideas were not novel; they were at least my own and not something I parroted from elsewhere. Thanks for taking the time to explain, and no, I feel much better now.
My first comment ever on this site promptly gets downvoted without explanation. If you disagree with something I said, at least speak up and say why.
"If evolutionary biology could explain a toaster oven, not just a tree, it would be worthless."
But it can, if you consider a toaster to be an embodied meme. Of course, the evolution that applies to toasters is more Lamarckian than Darwinian, but it's still evolution. Toaster designs that have higher utility to human beings lead to higher rates of reproduction, indirectly by human beings. The basic elements of evolution, namely mutation and reproduction, are all there.
What's interesting is that while natural evolution of biological organisms easily gets stuck in local optima, the backwards retina being an example, artificial evolution of technology often does not, due to the human mind being…
You can't get to the outside. No matter what perspective you are indirectly looking from, you are still ultimately looking from your own perspective. (True objectivity is an illusion - it amounts to you imagining you have stepped outside of yourself.) This means that, for any given phenomenon you observe, you are going to have to encode that phenomenon into your own internal modeling language first to understand it, and you will therefore perceive some lower bound on complexity for the expression of that phenomenon. But that complexity, while it seems intrinsic to the phenomenon, is in fact intrinsic to your relationship to the…