Schlega

Comments
Schlega

I'm not a fan of Evangelion or Doctor Who, but I've been enjoying Shinji and the Doctor.

Schlega

I'm in the under-qualified but interested camp. I'll plan on coming.

Schlega

In my experience, trying to choose what I care about does not work well; it has only increased my own suffering.

Is the problem that thinking about the amount of suffering in the world makes you feel powerless to fix it? If so, you can probably make yourself feel better by focusing on what you can do to have some positive impact, even if it is small. If you think "donating to science" is the best way to have a positive impact on the future, then by all means do that, and think about how the research you are helping to fund will one day reduce the suffering that all future generations will have to endure.

Schlega

I was thinking that using (length of program) + (memory required to run the program) as a penalty makes more sense to me than (length of program) + (size of impact). I am assuming that any program that can simulate X minds must be able to handle numbers the size of X, so it would need more than log₂(X) bits of memory, which makes the prior less than 2^(-log₂ X) = 1/X.
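
To sketch the arithmetic (the notation is mine, not from the post: |p| is the program's length, m(p) its memory requirement, logs base 2):

\[ |p| + m(p) \;\ge\; m(p) \;>\; \log_2 X \quad\Rightarrow\quad P(p) \;\le\; 2^{-(|p| + m(p))} \;<\; 2^{-\log_2 X} = \tfrac{1}{X} \]

so a program that claims to simulate X minds gets a prior strictly below 1/X under this penalty.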

I wouldn't be overly surprised if there were some other situation that breaks this idea too, but I was just posting the first thing that came to mind when I read this.

Schlega

Edit: formatting fixed. Thanks, wedrifid.

My response to the mugger:

  • You claim to be able to simulate 3^^^^3 unique minds.
  • It takes log₂(3^^^^3) bits just to count that many things, so my absolute upper bound on the prior probability that an agent can do this is 1/3^^^^3.
  • My brain is unable to process enough evidence to overcome this (see the sketch after this list), so unless you can use your matrix powers to give me access to sufficient computing power to change my mind, get lost.
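
Spelling out the last two bullets, this is just Bayes' theorem in log-odds form (again, my notation):

\[ \log_2 \frac{P(H \mid E)}{P(\neg H \mid E)} = \log_2 \frac{P(H)}{P(\neg H)} + \log_2 \frac{P(E \mid H)}{P(E \mid \neg H)} \]

Starting from prior odds of about \(1 : 3\uparrow\uparrow\uparrow\uparrow 3\), reaching even odds requires roughly \(\log_2(3\uparrow\uparrow\uparrow\uparrow 3)\) bits of likelihood-ratio evidence, itself an astronomically large number, far beyond what a human brain can process.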

My response to the scientist:

  • Why yes, you do have sufficient evidence to overturn our current model of the universe, and if your model is sufficiently accurate, the computational capacity of the universe is vastly larger than we thought.
  • Let's try building a computer based on your model and see if it works.
Schlega

This has changed my mind. Including examples that require slightly different thought patterns seems to be a good idea.

Schlega

I agree that if the numbers given in the example were trustworthy, then it would be a good example. The part that confused me was that there would be no incentive to start the project unless the original estimate of the cost was significantly less than $7000. It seems reasonable to expect that the next $4000 you spend will have a similar effect on your expected cost to finish. If you perpetually think "Just another $5000 and I will be done", then you are no better off than if you think "I already spent so much, I can't just quit now."

The more money that is sunk into it, the stronger the evidence that you are bad at estimating the cost. I assume that this evidence is implied to be included in the new cost estimate, but I think a general audience would not immediately notice this.

[This comment is no longer endorsed by its author]
Schlega

The iPhone app example in the presentation confuses me.

The way it is presented, it looks like the conclusion is that you should always be willing to spend an additional $6999.99 no matter how much you have already spent. If current you is willing to spend the extra money regardless of whether you have already spent $4000 or $10999.99, then I don't see why future you would feel any different.

I would think that you should take into account the fact that your original estimate of the cost was too low. Given that this is the case, you should expect that your current estimate of the cost to finish is also too low. I would multiply the cost-to-finish estimate by (current estimated total cost) / (original estimate) and only continue if the result is less than $7000.
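
A rough sketch of that rule in code (the function name is mine, and the $3000 naive-remaining and $5000 original-estimate figures are made up for illustration; the $4000 already spent and the $7000 threshold come from the example):

    def adjusted_cost_to_finish(spent, naive_remaining, original_estimate):
        """Scale the naive cost-to-finish estimate by how badly the project
        has been underestimated so far: (current estimated total) / (original)."""
        current_total_estimate = spent + naive_remaining
        return naive_remaining * current_total_estimate / original_estimate

    adjusted = adjusted_cost_to_finish(spent=4000, naive_remaining=3000,
                                       original_estimate=5000)
    print(adjusted)          # 4200.0: the naive $3000, inflated by 7000/5000
    print(adjusted < 7000)   # True, so continuing still looks worthwhile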

Going over this in the presentation would introduce complications to the problem that would most likely lead to even more confusion, but when the details are left out, it looks like you are endorsing the same decisions that the sunk cost fallacy would lead to. I suggest changing the example to something else entirely.

[This comment is no longer endorsed by its author]
Schlega

Thank you for replying. This showed up just as I was editing the parent.

Schlega

This was highly entertaining. I hope to see more of this in the future.

EDIT: Never mind the stuff I said below. I figured it out.

This got me started on the Hardest Logic Puzzle Ever. I seem to be making an error in reasoning, but I can't identify the problem.

There are 12 possible ways to label A, B, C, da, and ja: 3! = 6 assignments of True/False/Random to the gods, times 2 possible meanings for da/ja. Three yes-no questions can only distinguish between 2^3 = 8 states, so it must be possible to label A, B, and C (only 6 states) without ever learning what da and ja mean.
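
The counting, as a quick sanity check in code (nothing here touches the actual solution):

    from itertools import permutations

    god_assignments = list(permutations(["True", "False", "Random"]))  # to A, B, C
    word_meanings = [{"da": "yes", "ja": "no"}, {"da": "no", "ja": "yes"}]

    print(len(god_assignments) * len(word_meanings))  # 12 full states
    print(2 ** 3)                                     # 8 outcomes from 3 yes-no questions
    print(len(god_assignments))                       # 6 labelings of A, B, C alone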

Random's answer to any question is not correlated with the content of that question, so it seems impossible to extract any information from it. It is not possible to guarantee that Random is never asked a question, so that leaves only 2 useful questions in the worst case, which can distinguish at most 2^2 = 4 states. But there are 6 possible states, so it seems like it should be impossible.

Assume the answer to the first question is da. From there, I would try to formulate a question so that the honest answer must also be da. If I get 'da', then B is either True or Random; otherwise, it is either False or Random. This reduces the problem to 4 states, which would be solvable if we knew which answer meant yes and which meant no. As it stands, I can only tell whether the answers to questions 1 and 3 are the same or different, so I am still left with 2 possibilities.

I assume this was discussed in the post you linked to, but I would rather not read through the comments there for fear that I will read someone else's complete solution. Without completely giving it away, can someone please help clear up my confusion?
