TimFreeman

Humans can be recognized inductively: pick a time, such as the present, when it is not common to manipulate genomes. Define a human to be everyone who is genetically human at that time, plus all descendants produced by the naturally occurring reproductive process, along with some constraints on each life from conception to the present to rule out various kinds of manipulation.
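
To make the shape of that inductive definition concrete, here is a minimal sketch in Python. The Person fields and the particular constraints are placeholders I made up for illustration; only the general structure (a base-case population plus unmanipulated natural descendants) comes from the definition above.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical data model; the field names and constraints are illustrative placeholders.
@dataclass
class Person:
    name: str
    genetically_human_at_start: bool = False  # base case: genetically human at the chosen time
    naturally_conceived: bool = True          # produced by the naturally occurring process
    genome_manipulated: bool = False          # stands in for "constraints ruling out manipulation"
    parents: List["Person"] = field(default_factory=list)

def is_human(p: Person) -> bool:
    """Base case: member of the start-time population.
    Inductive case: an unmanipulated, naturally conceived descendant of recognized humans."""
    if p.genetically_human_at_start:
        return True
    if not p.naturally_conceived or p.genome_manipulated:
        return False
    return bool(p.parents) and all(is_human(parent) for parent in p.parents)

# A natural child of two start-time humans counts; a genome-manipulated child does not.
alice = Person("Alice", genetically_human_at_start=True)
bob = Person("Bob", genetically_human_at_start=True)
assert is_human(Person("Carol", parents=[alice, bob]))
assert not is_human(Person("Dave", parents=[alice, bob], genome_manipulated=True))
```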

Or maybe just say that the humans are the genetic humans at the start time, and that's all. Caring for the initial set of humans should lead to caring for their descendants because humans care about their descendants, so if you're doing FAI you're done. If you want to recognize humans for some other purpose this may not be sufficient.

Predicting human behavior seems harder than recognizing humans, so it seems to me that you're presupposing the solution of a hard problem in order to solve an easy problem.

An entirely separate problem is that if you train to discover what humans would do in one situation and then stop training and then use the trained inference scheme in new situations, you're open to the objection that the new situations might be outside the domain covered by the original training.
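
As a toy illustration of that objection (not a model of humans), here is a sketch where a model fit on a narrow range of situations is queried outside that range; the polynomial fit is just a stand-in for whatever training scheme is actually used.

```python
import numpy as np

# Fit "behavior" on situations drawn from a narrow training domain, then query far outside it.
rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, size=50)      # situations covered by the original training
y_train = np.sin(2 * np.pi * x_train)         # observed behavior in those situations
coeffs = np.polyfit(x_train, y_train, deg=5)  # the trained inference scheme

x_new = 3.0                                   # a new situation outside the training domain
print(np.polyval(coeffs, x_new))              # far from the true value sin(6*pi) = 0
```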

Hyperventilating leads to hallucinations instead of stimulation. I went to a Holotropic Breathwork session once. Some years before that, I went to a Sufi workshop in NYC where Hu was chanted to get the same result. I have to admit I cheated at both events -- I limited my breathing rate or depth so not much happened to me.

Listening to the reports from the other participants of the Holotropic Breathwork session made my motives very clear to me. I don't want any of that. I like the way my mind works. I might consider making purposeful and careful changes to how my mind works, but I do not want random changes. I don't take psychoactive drugs for the same reason.

If you give up on the AIXI agent exploring the entire set of possible hypotheses and instead have it explore a small fixed list, the toy models can be very small. Here is a unit test for something more involved than AIXI that's feasible because of the small hypothesis list.
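
The linked unit test is not reproduced here, but a minimal sketch of the kind of toy model that becomes feasible with a small fixed hypothesis list might look like the following; the two hypothetical environments and the myopic action choice are my own simplifications, not AIXI proper.

```python
# Toy agent over a small fixed hypothesis list (a stand-in for AIXI's full hypothesis space).
# Each hypothesis predicts the reward for the most recent action in the history.

def env_rewards_a(history):
    return 1 if history[-1] == "a" else 0   # hypothesis: action "a" pays off

def env_rewards_b(history):
    return 1 if history[-1] == "b" else 0   # hypothesis: action "b" pays off

HYPOTHESES = [env_rewards_a, env_rewards_b]

def choose_action(history, posterior):
    """Pick the action with the highest posterior-expected immediate reward."""
    def expected_reward(action):
        return sum(p * h(history + [action]) for p, h in zip(posterior, HYPOTHESES))
    return max(["a", "b"], key=expected_reward)

def update_posterior(posterior, history, observed_reward):
    """Zero out hypotheses inconsistent with the observation, then renormalize."""
    weights = [p if h(history) == observed_reward else 0.0
               for p, h in zip(posterior, HYPOTHESES)]
    total = sum(weights)
    return [w / total for w in weights]

# "Unit test": with the true environment rewarding "b", the agent should settle on "b".
true_env = env_rewards_b
posterior, history = [0.5, 0.5], []
for _ in range(3):
    action = choose_action(history, posterior)
    history.append(action)
    posterior = update_posterior(posterior, history, true_env(history))
assert choose_action(history, posterior) == "b"
```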

Getting a programming job is not contingent on getting a degree. There's an easy test for competence at programming in a job interview: ask the candidate to write code on a whiteboard. I am aware of at least one Silicon Valley company that does that, and I have observed them hire people who never finished their BS in CS. (I'd rather ask candidates to write code and debug on a laptop, but the HR department won't permit it.)

Getting a degree doesn't hurt. It might push up your salary -- even if one company has enough sense to evaluate the competence of a programmer directly, the other companies offering jobs to that programmer are probably looking at credentials, so it's rational for a company to base salaries on credentials even if it is willing to hire someone who doesn't have them. Last I checked, a BS in CS made sense financially, an MS made some sense too, and a PhD was not worth the time unless you want a career writing research papers. I got a PhD, apparently to postpone coming into contact with the real world. Do not do that.

If you can't demonstrate competent programming in a job interview (either due to stage fright or due to not being all that competent), getting a degree is very important. I interview a lot of people and see a lot of stage fright. I have had people I worked with and knew to be competent not get hired because of how they responded emotionally to the interview situation. What I'm calling "stage fright" is really cognitive impairment due to the emotional situation; it is usually less intense than the troubles of a thespian trying to perform on stage. Until you've done some interviews, you don't know how much the interview situation will impair you.

Does anyone know if ex-military people get stage fright at job interviews? You'd think that being trained to kill people would fix the stage fright when there's only one other person in the room and that person is reasonably polite, but I have not had the opportunity to observe both the interview of an ex-military person and their performance as a programmer in a realistic work environment.

I have experienced consequences of donating blood too often. The blood donation places check your hemoglobin, but I have experienced iron deficiency symptoms when my hemoglobin was normal and my serum ferritin was low. The symptoms were insomnia and twitchy legs when I was trying to sleep, and the iron deficiency was confirmed with a ferritin test. The symptoms went away and my ferritin went back to normal when I took iron supplements and stopped donating blood, and I stopped the iron supplements after the normal ferritin test.

The blood donation places will encourage you to donate every 2 months, and according to a research paper I found when I was having this problem, essentially everyone will have low serum ferritin if they do that for two years.

I have no reason to disagree with the OP's recommendation of donating blood every year or two.

Well, I suppose it's an improvement that you've identified what you're arguing against.

Unfortunately the statements you disagree with don't much resemble what I said. Specifically:

"The argument you made was that copy-and-destroy is not bad because a world where that is done is not worse than our own."

I did not compare one world to another.

"Pointing out that your definition of something, like harm, is shared by few people is not argumentum ad populum, it's pointing out that you are trying to sound like you're talking about something people care about but you're really not."

I did not define "harm".

The disconnect between what I said and what you heard is big enough that saying more doesn't seem likely to make things better.

The intent to make a website for the purpose of fostering rational conversation is good, and this one is the best I know, but it's still so cringe-inducing that I ignore it for months at a time. This dialogue was typical. There has to be a better way but I don't know what it is.

Nothing I have said in this conversation presupposed ignorance, blissful or otherwise.

I give up, feel free to disagree with what you imagine I said.

Check out Argumentum ad Populum. With all the references to "most people", you seem to be committing that fallacy so often that I am unable to identify anything else in what you say.

"This reasoning can be used to justify almost any form of 'what you don't know won't hurt you'. For instance, a world where people cheated on their spouse but it was never discovered would function, from the point of view of everyone, as well as or better than the similar world where they remained faithful."

Your example is too vague for me to want to talk about. Does this world have children that are conceived by sex, children that are expensive to raise, and property rights? Does it have sexually transmitted diseases? Does it have paternity tests? Does it have perfect contraception? You stipulated that the affairs are never discovered, so liberal use of paternity tests implies no children from the affairs.

I'm also leery of the example because I'm not sure it's relevant. If you turn off the children, in some scenarios you turn off the evolution, so my idea of looking at evolution to decide what concepts are useful doesn't work. If you leave the children in the story, then for some values of the other unknowns jealousy is part of the evolutionarily stable strategy, so your example may not work.

Can you argue your point without relying so much on the example? "Most of us think X is bad" is perhaps true for the person-copying scheme and if that's the entire content of your argument then we can't address the question of whether most of us should think X is bad.

OTOH, some such choices are worse than others.

If you have an argument, please make it. Pointing off to a page with a laundry list of 37 things isn't an argument.

One way to find useful concepts is to use evolutionary arguments. Imagine a world in which it is useful and possible to commute back and forth to Mars by copy-and-destroy. Some people do it and endure arguments about whether they are still the "same" person when they get back; some people don't do it because of philosophical reservations about being the "same" person. Since we hypothesized that visiting Mars this way is useful, the ones without the philosophical reservation will be better off, in the sense that if visiting Mars is useful enough they will be able to out-compete the people who won't visit Mars that way.
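
As a toy illustration of that out-competing claim, here is a replicator-dynamics sketch; the fitness numbers are arbitrary assumptions, chosen only to show that any persistent advantage from commuting eventually dominates.

```python
# Replicator dynamics with two types: "commuters" (no philosophical reservation)
# and "abstainers". The fitness values below are made-up illustrative numbers.
commuter_fitness, abstainer_fitness = 1.05, 1.00
commuter_share = 0.01   # commuters start as a small minority

for generation in range(400):
    mean_fitness = (commuter_share * commuter_fitness
                    + (1 - commuter_share) * abstainer_fitness)
    commuter_share = commuter_share * commuter_fitness / mean_fitness

print(round(commuter_share, 3))   # approaches 1.0: the reservation is selected against
```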

So if you want to say that going places by copy-and-destroy is a bad thing for the person taking the trip, you should be able to describe the important way in which this hypothetical world where copy-and-destroy is useful is different from our own. I can't do that, and I would be very interested if you can.

Freezing followed by destructive upload seems moderately likely to be useful in the next few decades, so this hypothetical situation with commuting to Mars is not irrelevant.

"Suppose we define a generalized version of Solomonoff Induction based on some second-order logic. The truth predicate for this logic can’t be defined within the logic and therefore a device that can decide the truth value of arbitrary statements in this logic has no finite description within this logic. If an alien claimed to have such a device, this generalized Solomonoff induction would assign the hypothesis that they're telling the truth zero probability, whereas we would assign it some small but positive probability."

I'm not sure I understand you correctly, but there are two immediate problems with this:

  • If the goal is to figure out how useful Solomonoff induction is, then "a generalized version of Solomonoff Induction based on some second-order logic" is not relevant. We don't need random generalizations of Solomonoff induction to work in order to decide whether Solomonoff induction works. I think this is repairable; see below.
  • Whether the alien has a device that does such-and-such is not a property of the world, so Solomonoff induction does not assign a probability to it. At any given time, all you have observed is the behavior of the device for some finite past, and perhaps what the inside of the device looks like, if you get to see it. Any finite amount of past observations will be assigned positive probability by the universal prior, so there is never a moment when you encounter a contradiction.

If I understand your issue right, you can explore the same issue using stock Solomonoff induction: What happens if an alien shows up with a device that produces some uncomputable result? The prior probability of the present situation will become progressively smaller as you make more observations and asymptotically approach zero. If we assume quantum mechanics really is nondeterministic, that will be the normal case anyway, so nothing special is happening here.
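
Here is a minimal numerical sketch of that limiting behavior, assuming a tiny fixed list of computable predictors in place of the universal prior and a genuinely random bit source in place of the uncomputable device; it illustrates the point but is not actual Solomonoff induction.

```python
import random

random.seed(0)

# Each hypothesis assigns a fixed probability that the next bit is 1 (kept simple on purpose).
hypotheses = {"mostly zeros": 0.1, "fair coin": 0.5, "mostly ones": 0.9}
prior = {name: 1.0 / len(hypotheses) for name in hypotheses}
prefix_probability = {name: 1.0 for name in hypotheses}

for step in range(1, 101):
    bit = random.getrandbits(1)   # the "uncomputable" (here: genuinely random) source
    for name, p_one in hypotheses.items():
        prefix_probability[name] *= p_one if bit == 1 else 1.0 - p_one
    mixture = sum(prior[n] * prefix_probability[n] for n in hypotheses)
    assert mixture > 0.0          # every finite prefix keeps positive probability
    if step % 25 == 0:
        print(step, mixture)      # shrinks toward zero, roughly like 2**-step
```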
