Ben

Physicist and dabbler in writing fantasy/science fiction.

Comments

Ben82

Confusion slain!

I forgot that there were leftover chips awarded to the player with the most goal-suit cards (I now remember seeing that in the rules, but wrote it off as a way of handling the fact that the number of goal-suit cards and players can both vary, so there would be rounding errors, and didn't keep it in mind). That achieves the same kind of thing I was gesturing at (rewarding having most of a suit), but much more elegantly.

Thank you for clarifying that.

Ben3-2

Something that confuses me a bit about Figgie is that not only is it a zero-sum game (which is fine), but every individual exchange is also zero-sum (which seems not fine). If I imagine a group of 4 people playing it, and two of them just say "I won't do any trading at all, I'll just take my dealt hand (without looking at it) to the end of the round", while the other two players trade, then on average the scores of the two trading players will be the same as those of the two who don't trade. This seems like a problem: if your assessment is that the other players are more skilled than you, then it is optimal simply not to engage.

I haven't played it, so this idea might be very silly, but it feels like the scoring should reward players who have concentrated their hand into one particular suit (even if it's not the goal suit). Then, in the example above, the two players who trade can help one another end up with lopsided hands (e.g. one has lots of hearts, the other lots of spades), so that the group that trades has a relative advantage over the group that doesn't.

As a candidate rule it would be something like: at round end, every spade you hold makes you pay 1 chip to the person with the most spades, and likewise for every suit except the goal suit.
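A rough sketch of how that payout could work, with made-up 10-card hands and with tie handling (splitting the pot between tied leaders) as my own assumption:

```python
from collections import Counter

def side_payouts(hands, goal_suit):
    """Net chip change per player under the candidate rule: for each
    non-goal suit, every card of that suit costs its holder 1 chip,
    paid to whoever holds the most of that suit."""
    net = {p: 0 for p in hands}
    suits = {s for hand in hands.values() for s in hand}
    for suit in suits - {goal_suit}:
        counts = {p: hand[suit] for p, hand in hands.items()}
        top = max(counts.values())
        winners = [p for p, c in counts.items() if c == top]
        pot = sum(c for p, c in counts.items() if p not in winners)
        for p, c in counts.items():
            if p not in winners:
                net[p] -= c
        for w in winners:                 # split the pot on ties (my assumption)
            net[w] += pot // len(winners)
    return net

# Hypothetical 4-player hands; suppose diamonds is the goal suit.
hands = {
    "A": Counter(spades=7, hearts=1, diamonds=2),
    "B": Counter(hearts=6, spades=2, clubs=2),
    "C": Counter(spades=1, clubs=5, diamonds=4),
    "D": Counter(clubs=3, hearts=3, diamonds=4),
}
print(side_payouts(hands, goal_suit="diamonds"))
# {'A': 2, 'B': 0, 'C': 4, 'D': -6} -- chips are conserved, and the
# players with lopsided hands (A, C) come out ahead.
```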

Ben684

As an illustration of the power of Wikipedia: when I first found LessWrong while googling something, I read a couple of articles, thought "this seems good, if a bit weird", and then read its Wikipedia page before going any further.

At the time my view (which I remember giving to a friend who was looking up the website after I recommended an article) was "It's mostly right, but fixates a weird amount on the basilisk thing".

It certainly works. Another friend of mine recently opined "everyone on LW is an idiot, because basilisk thing", which was interesting because I didn't know that friend knew about LW at all, and from the mention of the basilisk it seemed likely they had only read its Wikipedia page. (To them, the argument is not "all users of this website are idiots because I think one topic discussed on it once was dumb", but rather "all users of this website are idiots because the one thing the website apparently discusses seems dumb". It matters that the basilisk in Wikipedia's LW article was not one topic out of 10 or 20, but the single one.)

I am here because the Wikipedia page didn't put me off reading LW. 

Answer by Ben20

We expect heat to flow from hot to cold; devices that deviate from this are thermodynamically unlikely, which is another way of saying that they require a low-entropy source (as you said). Low entropy = thermodynamically unlikely. This means that heat pumps are extremely non-random, so any system that looks random (a hot cup of tea) is going to be a very bad candidate. Similarly, I think things like weather phenomena are a bad place to look.

Living creatures can do thermodynamically unlikely things. As an example, lots of (all?) individual cells move various chemicals (like salt ions) against their concentration gradients, i.e. from a place of low concentration to a place of high concentration. This is active transport. It is just as thermodynamically unlikely as a heat pump, but it's a "salt pump" rather than a "heat pump", so it's not exactly what you asked for.

My feeling is that an actual "heat pump" (with heat, not salt) must occur in some organisms, and I think I have found a borderline example at this link (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3962001/#:~:text=After%20getting%20hot%20enough%20the,the%20nest%20surface%2015%2C%2046.):

"In the spring ants are observed to create clusters on the mound surface as they bask in the sun. Their bodies contain a substantial amount of water which has high thermal capacity making ant bodies an ideal medium for heat transfer. After getting hot enough the ants move inside the nest where the accumulated heat is released."

If we suppose that, as a result of this, the inside of the ants' nest ends up warmer than the air outside, then I think this possibly counts. It's a heat pump where the working fluid is living ants: the cold ones leave to bask in the sun, then return hot.

It's borderline because there is cheating going on, in that the sun is much hotter than the inside of the ants' nest (I assume), and they are using the sun to heat themselves up. Ideally we want ants that carry around little compressible air sacs which they let expand outside (so the gas cools and soaks up heat from the cool air) and squeeze shut inside (so the gas warms and releases that heat), so that they unambiguously take heat from the cool air outside and deposit it in the warmer air inside their nest.

Ben40

I think the best model for "why ban fresh bread" is something like what lexande said, but modified like this:

- People were buying fresh bread every day and (if wealthy) throwing out the bread from the day before (or on some longer horizon). The idea was maybe that by preventing this use of flour (optimised for niceness) economic forces would then optimise better for calories.
- A solidarity thing? Upping the price would disproportionately hurt the poor. Pushing the market by lowering the quality is more egalitarian in some sense: it pushes the rich into buying something more expensive, and the poor into just having worse bread.
- Bakers could optimise bread for deliciousness on the day of baking, or (somehow) make bread that lasts longer. Longer-lasting bread would improve efficiency, since less of it would go bad.
- It was just really dumb. E.g. people were arriving at the baker's to find the bread sold out, were furious, and petitioned the government to do something. The government (for some bizarre reason) believed that if sales were delayed a day then the bakeries wouldn't be out of bread when you visited (although they might be out of bread they were allowed to sell you).
 

Ben20

This is quite similar to the "swampman" thought experiment (https://en.wikipedia.org/wiki/Donald_Davidson_(philosopher)).

My thoughts: Assuming there is no subjective experience after death (no afterlife or anything), then it is sort of trivial that subjective experience ends at death, so you don't ever experience it.

Now, my read on your argument is that in a sufficiently big universe or multiverse there will be many "mes" with exactly the same subjective experiences so far, and that whenever one (or a large number) of those "mes" dies, there will be some others who are narrowly saved at the last moment: just as they wheeze their last breath an alien turns up and heals them, or they turn out to have been in a simulation the whole time, or similar.

However, it remains the case that before the death there were N copies, and afterwards there were N-1. It's not like you "merged with" or "snapped into" the surviving ones; you are not causally propagating yourself into them. It's just that you have accepted a worldview in which it is possible, even likely, that there are people arbitrarily similar to you.

My feeling is that it's like this analogy. Imagine that in the near future all records of the works of Shakespeare (all of them, including all quotes) are lost forever, but it just so happens that, by complete coincidence, there are pebbles on a beach in another galaxy that can be read in binary (dark/pale pebbles as 1/0) as spelling out the full works of Shakespeare to the letter. Does that make it any less of a loss that the works were lost here on Earth?

Answer by Ben3-2

If you think consciousness is a real, existent thing then it has to be in either:

The Software, or

The Hardware.

Assuming it to be in the hardware causes some weird problems, like "which atom in my brain is the conscious one?", or "No, Jimmy can't be conscious because he had a hip replacement, so his hardware now contains non-biological components".

Most people therefore assume it is in the software. Hence a simulation of you, even one done by thousands of apes working it out on paper, is imagined to be as conscious as you are. If it helps, that simulation (assuming it works) will say the same things you would. So if you think it's not conscious, then you also think that everything you do and say, in some sense, does not depend on your being conscious, because a simulation can do and say the same without the consciousness.

There is an important technicality here. If I am simulating a projectile then the real projectile has mass, but my simulation software doesn't have any mass. That doesn't imply that the projectile's mass does not matter: my simulation software has a parameter for the mass, which has some kind of mapping onto the real mass. A really detailed simulation of every neuron in your brain will have some kind of emergent combination of parameters that has some kind of mapping onto the real consciousness I assume you possess. If the consciousness is assumed to be software, then you have two programs that do the same thing. I don't think there is any super solid argument that forces you to accept that this thing that maps 1:1 to your consciousness is itself conscious. But there also isn't any super solid argument that forces you to accept that other people are conscious. So at some point I think it's best to shrug and say "if it quacks like consciousness".
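To make the parameter-mapping point concrete, here is a toy sketch of my own (nothing from the original question): the number stored in mass_kg has no mass itself, yet it stands in for the real projectile's mass and changes what the simulation predicts, just as real mass changes the real trajectory.

```python
import math

def simulate_range(mass_kg, v0, angle_rad, drag_coeff=0.1, dt=0.001, g=9.81):
    """Point mass with linear air drag. The mass_kg parameter is the
    software-side stand-in for the real projectile's mass."""
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(angle_rad), v0 * math.sin(angle_rad)
    while y >= 0.0:
        ax = -drag_coeff * vx / mass_kg   # heavier projectiles are slowed less by drag
        ay = -g - drag_coeff * vy / mass_kg
        vx += ax * dt
        vy += ay * dt
        x += vx * dt
        y += vy * dt
    return x

# The simulation itself is weightless, but changing the mass parameter
# changes the predicted range.
print(simulate_range(mass_kg=0.5, v0=30.0, angle_rad=0.7))
print(simulate_range(mass_kg=5.0, v0=30.0, angle_rad=0.7))
```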

Ben76

I agree with the post generally. However, the chef example is (I think) somewhat flawed, as with all TV the footage is edited before you see it, so you have no idea how many pieces of advice the chef mentor gave that were edited out. In the UK version of The Apprentice the contestants would have a 3-hour planning session that was edited down to 5 minutes, so you knew that whatever they were talking about in that 5 minutes of footage was the decision that would dominate their performance, meaning (as a viewer) it was very easy to see what was going to go wrong ahead of time.

Ben20

Exactly this. Your client is charged with 9 murders. You, followed by all other lawyers, refuse to defend them because they are so obviously guilty. They go to prison. But, they only killed 8 people. The real culprit in the 9th case goes free. 

Ben63

Very possible. I am not fully convinced. The dog had to identify the people who had food in their bags, and tell them apart from all the people who used to have food in those same bags, or who were eating on the flight and had food on their breath or hands. A dog trying to identify (for example) cannabis would probably have an easier time.

My stance is not "I know 100% that sniffer dogs are a silver bullet", but the weaker position "The majority of the value of a sniffer dog comes from it actually smelling things, rather than giving the officer controlling it a plausible way of profiling based on other (possibly protected) characteristics."
