Edited to add: Well, that was quick. Doesn't look like the bottom fell out.
Edited again: Here's the criminal complaint against the alleged operator. The details at least make sense as a story: in the early days of Silk Road, the alleged operator had really lousy opsec, linking his name to the Silk Road project. Then later, he seems to have been scammed by a guy who first tried to extort him, then pretended to be a hit-man who would kill the extortionist.
I need some advice. I recently moved to a city and I don't know how to stop myself from giving money to strangers! I consider this charity to be questionable and, at the very least, inefficient. But when someone gets my attention and asks me specifically for a certain amount of money and tells me about themselves, I won't refuse. I don't even feel annoyed when it happens, but I do want it to stop happening. What can I do?
The obvious precommitment to make is to never carry cash. I am strongly considering this and could probably do so, but it is nice to have at least enough for a bus trip, a quick lunch, or an emergency. I have tried keeping a running tally of the number of people refused, planning that when it reached, say, 20, I would donate something to a known legitimate charity. While doing so makes me feel better about passing beggars by, it doesn't help once someone gets me one-on-one. So I've never gotten to that tally without resetting it first by succumbing to someone. Is there some way to not look like an easy mark? Are there any good standard pieces of advice and resources for this?
However, I always find these exchanges to be really fascinating from the ... (read more)
Assume that they're scamming. It will often be true, and even when they're honest, giving money to panhandlers is an inefficient use of charity. Remind yourself that you already have a budget for charity and that you're sending it to GiveWell or MIRI or whatever.
And yet people here are still surprised that gatekeepers could lose at the AI
box game.
9Viliam_Bur10y
Keep your head up [http://en.wikipedia.org/wiki/Alexander_technique] and your
back straight, look towards the horizon, and walk at a steady pace.
Avoid places with a high density of scammers, if you can. (For example, in my
city that would be around the train station.)
Did you notice [http://lesswrong.com/lw/if/your_strength_as_a_rationalist/]
immediately that the person is lying to you (pretending to care about time, but
actually not caring), therefore you have no social obligation to interact with
them?
I keep an attitude that if someone is manipulating me like this, I owe them
nothing socially... I give myself permission to just walk away without any
explanation or interaction, or to lie to them (even in a very transparent
manner: "sorry, I don't have any money"; they did it first, so they have no
right to complain). Saying "sorry, I am in a hurry" and walking away without
looking at them should work in most cases (and is even socially acceptable if
you care about that aspect).
More meta: I have a problem giving you good advice, because I have no idea why
you behave this way. I don't know what precisely happens in your head during the
interaction, which is why I can't be specific about which parts of that you need
to change (because it starts in the head). It is an interaction: they are
playing their parts of the script, you are playing your part
[http://en.wikipedia.org/wiki/Games_People_Play_%28book%29]. The key is to stop
playing your part (because obviously, they have no motivation to stop playing
theirs).
Is it difficult for you to realize that you are being scammed? Or do you suspect
this, but you don't feel certain about your judgement? Or are you pretty sure
about your judgement, but you don't know how to stop the interaction without...
feeling bad about yourself? Seems to me the last one is more likely. If that is
true, please explain the details. Do you believe you should feel bad about
yourself for not giving money to strangers (because you imagine s
2tgb10y
It just, you know, feels like yes they could use this money more than I could. I
know that there's a good chance they're lying, but they're lying to spice up a
story about something they probably do need for one reason or another. It's not
an entirely rational choice, I admit, but it always seems like a rather minor
favor that really won't hurt me much this time. It just happens far too
frequently for my comfort, which is why I consider it a problem. I don't even
feel bad about having given them the money, even in retrospect. I just know
that I can't give money to everyone who asks for it and that by conceding I'm
encouraging even more of this
exploitation. (But neither do I think I get a significant 'warm fuzzies' feeling
for giving as it seems to be cancelled by the "what am I doing?" in the back of
my head.)
I guess that EY's tale here
[http://lesswrong.com/lw/6z/purchase_fuzzies_and_utilons_separately/] about
holding open doors and letting people know they left the car trunk open is why I
keep doing this. If I modify myself to completely ignore these little things...
what will I lose? Can I really just not ever give anyone the time? What about
all those times when they really did just need to know the time, or wanted to
charge their phone, or whatever? Those happen, probably more often than times
when they're just tricks by scammers. That's why I was looking at solutions
like not carrying cash - a way to keep responding to people while being unable
to fall for the scams.
For the record, this was the first time I've given out more than a dollar or
two. My original post has probably made it seem like I do this more often and
more egregiously than I do, partly because I was carried away by that particular
exchange and partly because prior to moving this never happened so anything
seems like a lot.
Edit: In fact, now I can think of at least one situation in which I had to ask
strangers for some quarters in order to be able to pay to park and catch my
train. The only difference in this situation b
An idea: Next time, try to estimate how much money such a person makes. As a rough estimate, divide the money you gave them by the length of your interaction. (To get a more precise estimate, you would have to follow them and observe how much other people give them, but that could be pretty dangerous for you.)
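A back-of-the-envelope version of that estimate (my own sketch; the amounts are invented, and it is an upper bound since it assumes donors arrive back to back):

```python
# Rough upper-bound estimate of a panhandler's implied hourly earnings.
amount_given = 5.00            # dollars handed over in one interaction (invented)
interaction_minutes = 2.0      # how long the exchange took (invented)
hourly_rate = amount_given / interaction_minutes * 60
print(f"implied earnings: ${hourly_rate:.2f}/hour")   # $150.00/hour
```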
Years ago I made a similar estimate for a beggar on a street (people dropped money to his cap, so it was easy to stand nearby, watch for a few minutes and calculate), and the conclusion was that his income was above average for my country.
By the way, these people destroy a lot of social capital by their actions. They make life more difficult for people who genuinely want to ask for the time, or how to get somewhere, or similar things. They condition people against having small talk with people they don't know. -- So if you value people being generally kind to strangers, remember that these scammers make their money by destroying that value.
It feels like it, but it's wrong. And you are actively making the situation
worse. Better to melt your coins and burn your bills. These people could use food,
shelter and some skills to earn an honest living. There are charitable
organizations providing these services, find the best ones and donate to them.
Next time you give a dollar to a beggar, think of how your selfish feel-good act
makes the world a worse place to live.
0tgb10y
Thanks, this is probably the tack I need to take.
7Lumifer10y
Don't visit the third world. Ever.
5Douglas_Knight10y
On the contrary, a visit to an actually poor place might give him the context to
reevaluate the first world poor.
0tgb10y
Too late multiple times over, sorry. Though I haven't since I was old enough to
really have any money on me.
2hyporational10y
People don't leave their car trunks open for deception unless they're
kidnappers. If you can't tell if people are lying or not, please just ignore
them. Otherwise you're encouraging the dishonest ones to harass other people
too.
2kalium10y
I'll be willing to help out if I hear anything other than a request for money,
or if I see an obvious problem I can help with (like a cyclist with a flat tire
when I have a patch kit in my pocket). I just categorically don't allow
"kindness to strangers" to translate to "giving money to strangers," and as soon
as money comes up I say I'm broke (which is not true, but not that far from it
either), figuratively close my ears, and walk away.
I suppose it helps that most panhandlers in my area have signs. Most
non-sign-carriers who approach me want directions or some such. Maybe I just
look like a bad scam target though.
1A1987dM10y
Yes, that's what I usually do. (Sometimes I give them trivial amounts of money
like €0.50 instead.)
Yep... People trying to dishonestly manipulate me trigger a heckuva memetic
immune response, akin to refusing an offer in the Ultimatum game.
9juliawise10y
I hate feeling I have to walk by a person panhandling and not respond at all -
it makes me feel like a bad person. I had been told not to make eye contact
unless you're going to give money, but I've recently changed my strategy and
started smiling before giving my standard, "no, sorry" to the request for cash.
Recently I flashed a smile as I strode by a man on the sidewalk. He smiled back
and said, "God bless you for that smile." It felt like we connected, which is
what people are generally going for when they give money (unless it's just to
avoid feeling guilty).
Yvain's take on all this: http://squid314.livejournal.com/340483.html
0drethelin10y
This is usually what I do.
8DavidS10y
Let me suggest a world view which is much less negative than the other replies:
I view panhandlers as vendors of warm fuzzies and therefore treat them as I
would any other street vendor whose product I am most likely not interested in.
In particular, I have no reason to be hostile to them, or to be disrespectful of
their trade.
If they engage me politely, I smile and say "No thanks." I think the second word
there is helpful to my mindset and also makes their day a little better. If they
become hostile or unpleasant, I feel no guilt about ignoring them; they have
given me good reason to suspect their fuzzies are of low quality. If they have a
particularly amusing approach, and I feel like treating myself, I give them
money. (EG The woman who offered to bet me a dollar that she could "knock down
this wall", gesturing at a nearby brick building. It was obviously a setup, but
it was worth paying a dollar to learn the punchline, and she delivered it well.)
I developed this mindset while living in Berkeley, CA near Telegraph and walking
everywhere, which I suspect means that I was encountering panhandlers at a rate
about as high as anyone in the first world.
I also, of course, contribute significant portions of money to charities which
can do a lot more good with it. If you are looking for a charity which
specifically aids people in a situation similar to the ones you are refusing,
you may want to consider the HOPE program: http://www.thehopeprogram.org/ . In
2007, Givewell said about them "For donors
looking to help extremely disadvantaged adults obtain relatively low-paying
jobs, we recommend HOPE."
http://www.givewell.org/united-states/charities/HOPE-Program . There is an
argument (and Givewell makes it) that helping extremely disadvantaged adults in
the first world obtain relatively low-paying jobs is so much harder than helping
poor people in the third world that it shou
8JoshuaFox10y
I was cured after I naively gave money to a street beggar, and was pursued for
more money, to the point that I felt threatened.
My usual procedure in the US is to actively pretend that beggars, and those who
look like them, don't exist. Phil Collins wouldn't like it
[http://www.metrolyrics.com/another-day-in-paradise-lyrics-phil-collins.html],
but after that occasion and one or two like it, I feel scared. I truly admire a
certain friend who can chit-chat on a friendly basis with a street person.
As I got older and more confident, I developed other practices:
1. Someone asked for money for food, so I handed her a bag of fancy chocolate
almonds I had in my hand. She looked like that wasn't what she was
expecting.
2. In a friendly way, I told a collector for some ineffective charity that, in
honor of his request, I would give 100 NIS more than usual to my regular
charity, but not his. Chutzpah.
3. When a collector for some ineffective charity comes up to me, I solicit him,
in a friendly way, to give money to my favorite charity before he has a
chance to ask. Once I got 1 NIS this way, so I felt obliged to give him a
(different) shekel. I then had fun ceremonially taking that 1 NIS coin to
the treasurer, along with my usual donation.
4. Once I asked a phone collector for some ineffective charity, in a friendly
way, to decide on my behalf: Should I give 100 NIS to a certain truly worthy
cause, or deny it to that worthy cause and give it to her charity. She got
quite tangled up trying to answer.
In short, I became a little obnoxious. The fact that I regularly give a good
amount to charity is probably what gave me the psychological leeway to do this.
(And I wouldn't do any of that to a more-or-less worthy charity, or if a friend
asked.)
4A1987dM10y
A few months ago I was with a co-worker in the centre of a foreign capital,
waiting for some other people, and some guy approached us offering to sell us
some marijuana. I told him “I quit smoking five years ago” and we kept
talking about that for about half a minute before he left.
My co-worker was very annoyed that I didn't just ignore the guy.
That is freakin' awesome.
7moridinamael10y
Remind yourself that the panhandler is defecting (in the Prisoner's Dilemma
sense) by putting you in that situation. Remind yourself that they are actively
and premeditatedly manipulating you through a set of known exploitable
psychological levers. There is a strain of Dark Arts to this advice, because you
are choosing to preemptively deflect your empathy with a feeling of
defensiveness. It is nonetheless true that the panhandler is being rude,
definitionally, and that you are being tricked.
6hyporational10y
I'm terribly sorry for my strong reaction, but this whole post reeks of
abuser-attracting vulnerability so much that it's making me angry. It's not
difficult
to imagine beggars can sense you a mile away.
What the hell? People are robbing your time in the street and lying to your face
to get your money too, and you are considering inconveniencing your own life to
accommodate them? Just learn to tell a white lie like the rest of humanity; it
doesn't even matter if you do it badly in a case like this. All you need is an
attitude change, not a bag of tricks.
I'm going to appeal to your altruism. You're making lying for money profitable.
When you give away your hard earned money it doesn't hurt just you or the
potential charities.
I'm not sure what makes me so angry about this... it just seems that
submissiveness is a relatively common failure mode for otherwise smart
people.
ETA: in Europe, begging is highly organized, so you would likely be financing
organized crime.
ETA2:
Yes, stop giving money to these people. Of course they recognize you, it's their
job.
6Moss_Piglet10y
It seems like your problem might be having too much empathy for strangers,
which (at least when dealing with panhandlers) shouldn't theoretically be too
hard to deal with. If you cultivate a mindset of viewing beggars as parasites
and degenerates you ought to be able to resist any impulses of sympathy which
come up, especially since you already know that you're not helping them and many
are in fact con-artists. It shouldn't really affect your other charity giving
much either, since my understanding is that EA mostly focuses on giving medical
aid to foreigners rather than dealing with poverty in areas with high costs of
living like American cities.
On the other hand, it's very possible empathy isn't your real problem here. The
feeling of gratitude (even faux-gratitude) and generosity from handing a few
bucks to a hobo is a big rush; I certainly get more utility out of my spare
change that way than I ever would buying junk food with it. If that's your issue
then it might be smart to do what you're doing now and poison the good feeling
by re-framing it as something shameful.
5ChristianKl10y
Instead of thinking about stopping giving them the money, think about stopping
giving them the time to tell you a long story.
5kalium10y
Don't turn your head in their direction. Don't change your pace. Don't make eye
contact. It gets easier.
Does your city have transit passes or RFID stored-value cards? It may be
possible for you to be prepared to take the bus without carrying cash. As for
lunch, is it uncommon for restaurants in your area to accept credit cards?
4Username10y
"Sorry man, I don't have cash."
If you feel bad about lying (given that it's not a good idea to give money to
panhandlers, you shouldn't), take a note of how much money you would have given
them and donate double that to your nearest food bank/shelter. There, now you
actually helped them.
19eB110y
Others have recommended keeping your eyes away from them; I'll add the
possibility of wearing headphones and sunglasses to give you plausible
deniability, which will probably make you feel better psychologically even
though it should have no impact.
Another idea: you could keep a few quarters in your pocket and just give them a
quarter as quickly as you can; then at least you are limiting the damage to a
trivial amount. I have never tried this idea.
A website shouldn't just go down when the people managing it stop working; it's not like they're pedaling away inside the servers. Block the federal highways with army tanks: sorry, the government is closed.
There is a nontrivial set of the voting public who legitimately believe that money turns into working tech via magical alchemy.
The name derives from the National Park Service's alleged habit of saying that any cuts would lead to an immediate closure of the wildly popular Washington Monument.
As a sysadmin, if I were to be furloughed indefinitely I would probably spin down any nontrivial servers. A server that goes wrong and can't be accessed is a really, really, really, really terrible-horrible-no-good-very-bad thing. And things go wrong on a regular basis in normal times; when the government is shut down and a million things that get done every day suddenly stop being done, something somewhere is going to break. Some 12-year-old legacy cron job sitting in an obscure corner of an obscure server, written by a long-departed contractor, is going to notice that the foobar queue is empty, which turns out to be an undefined behavior because the foobar queue has always had stuff going through it before, so it executes an else branch it's never had occasion to execute, which sends raw debugging information to a production server because the contractor was bad at things, and also included passwords in their debugging because they were really bad at things...
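For flavor, a hypothetical sketch of the kind of booby trap described; every name, address, and password here is invented for illustration, not taken from any real government system:

```python
import smtplib
from email.message import EmailMessage

DEBUG_RECIPIENT = "contractor@example.com"  # long-departed contractor (invented)
DB_PASSWORD = "hunter2"                     # hard-coded "for debugging" (invented)

def handle(item):
    print("processing", item)               # stand-in for the real work

def process_foobar_queue(queue):
    if queue:
        for item in queue:                  # the branch exercised every day
            handle(item)
    else:
        # The else branch that never ran while the queue always had items:
        # dump raw state, credentials included, through a production relay.
        msg = EmailMessage()
        msg["From"] = "cron@prod.example.com"
        msg["To"] = DEBUG_RECIPIENT
        msg["Subject"] = "DEBUG: foobar queue empty?!"
        msg.set_content(f"state dump: db_password={DB_PASSWORD}")
        with smtplib.SMTP("prod-mail.example.com") as relay:
            relay.send_message(msg)
```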
This is actually a terrible example of Washington Monument Syndrome.
"
Hi, Server admin here... We cost money as does our infrastructure, I imagine a site that large costs a very good deal, we aren't talking five bucks on bluehost here.
I am private sector, but if I were to be furloughed for an indeterminate amount of time, I would really have two options:
Leave things on autopilot until the servers inevitably break or the site crashes, at which point parts or all of it will be left broken without notice or explanation. Or put up a splash page, spin down 99% of my infrastructure (that splash page can run on a five-dollar Bluehost account), and then leave. I won't be able to come in while furloughed to put it back up after it crashes.
If you really think web apps keep themselves running 24/7 without intervention, we really have been doing a great job with that illusion, and I guess the sleepless nights have been worth it to be successfully taken for granted."
This is true; however keeping a website running is still very, very cheap
compared to almost anything else the government does, including functions that
are continuing as usual during the shutdown.
If web apps are too high maintenance, that does not explain the shutdown of
government Twitters (example: https://twitter.com/NOAA, which went to the extra
effort of posting that "we
won't be tweeting 'cause shutdown.") I note with amusement however that the
Health and Human Services Twitter is alive and well and tweeting about the ACA.
1Sly10y
"This is true; however keeping a website running is still very, very cheap
compared to almost anything else the government does, including functions that
are continuing as usual during the shutdown."
This is literally irrelevant when the non-essential services have to be shut
down. If your techs get furloughed, shutting down the site is appropriate.
The twitter accounts are "shut down" in the sense that the employee who would
have done the tweeting is now furloughed and can't. Putting out a tweet
explaining the upcoming lapse makes a whole lot of sense to me.
6[anonymous]10y
The Shutdown Wasn’t Pointless. It Revealed Information
[http://wjspaniel.wordpress.com/2013/10/17/the-shutdown-wasnt-pointless-it-revealed-information/]
William Spaniel says on Twitter that he is not sure how he feels about our
models of war also explaining U.S. Congress bargaining. War being politics by
other means, I say we should obviously expect the models to work to a limited
extent. Democracy is a highly ritualized form of civil war; not just any kind
of war, but the kind practiced in the 19th century, when democracy began its
march. Instead of drafting a mob and then ordering it to shoot the opposing
mob, you assemble your respective mobs in an orderly fashion and count them via
voting.
Since Samuel Colt [http://en.wikipedia.org/wiki/Samuel_Colt] made men equal in
the 19th century, you assume the slightly larger mob wins. Some nations even
factor in territory held to decide outcomes. After elections both mobs go safely
home and about their business, while in theory the government implements a real
outcome of the simulated war.
I'm half expecting that sooner or later someone will realize that with current
technology you can win civil wars with drones against mobs, and democracy will
be discarded in favor of a more stable equilibrium. On the other hand, early
20th-century thinkers, futurists and fiction writers
[http://en.wikipedia.org/wiki/The_Shape_of_Things_to_Come#Plot] expected people
to realize air power changed the calculus of war and for this change to impact
politics quite profoundly. Arguably maybe we would even be better off had they
been right. All power to the pilots!
[http://unqualified-reservations.blogspot.com/2008/07/olxiii-tactics-and-structures-of-any.html]
Yet they weren't. Evidence against.
6solipsist10y
I suspect it would be illegal to run those servers. The Anti-Deficiency Act
[http://www.gao.gov/legal/lawresources/antideficiencybackground.html] forbids
the government from "involving the government in any obligation to pay money
before funds have been appropriated". The Army can't purchase new tanks, NASA
can't order a new space shuttle, and I bet most agencies can't rack up more
obligations with their ISPs and electric companies.
This act, by the way, is the reason nonessential workers are forbidden from
volunteering for work.
6ChristianKl10y
http://www.ncbi.nlm.nih.gov/ still seems to be running.
2[anonymous]10y
BLAST and PubMed are running automatically but there is no updating of either of
them with new materials.
2fubarobfusco10y
Theater, certainly; in the sense of staging an elaborate show for the public
(see also "security theater") — but why kabuki specifically?
9TheOtherDave10y
As arundelo notes, it's a trope.
I think it's meant to evoke the extremely stylized, not at all realistic nature
of the art form... that is, it's not that the audience is being tricked into
thinking something is going on, it's that the audience is willingly going along
with the story being told.
7arundelo10y
It's a common usage in some circles. Jon Lackman wrote a Slate article
criticizing it
[http://www.slate.com/articles/life/the_good_word/2010/04/its_time_to_retire_kabuki.html].
0fubarobfusco10y
Okay, so it's like "Chinese fire drill"
[https://en.wikipedia.org/wiki/Chinese_fire_drill]. Got it.
-4Multiheaded10y
Actually, y'all wrong. It was simply a fun idea for celebrating 4chan's 10th
birthday [https://twitter.com/4chan/status/384895444096016384].
I've heard several stories in the last few months of former theists becoming atheists after reading The God Delusion or a similar Four-Horsemen tract. This conflicts with my prior model of those books being mostly paper applause lights that couldn't possibly change anyone's mind.
Insofar as atheism seems like super-low-hanging fruit on the tree of increased sanity, having an accurate model for what gets people to take a bite might be useful.
Has anyone done any research on what makes former believers drop religion? More generally, any common triggers that lead people to try to get more sane?
I can tell you what triggered me becoming an atheist.
I was reading a lot of Isaac Asimov books, including the non-fiction ones. I gained respect for him. After learning he was an atheist, it started being a possibility I considered. From there, I was able to figure out which possibility was right on my own.
This seems to be a trend. I never seriously worried about animals until joining felicifia.org where a lot of people do. I never seriously considered that wild animals' lives aren't worth living until I found out some of the people on there do. I think it's a lot harder to seriously consider an idea if nobody you respect holds it. Just knowing that a good portion of the population is atheist isn't enough. Once you know one person, it doesn't matter how many people hold the opposite opinion. You are now capable of considering it.
I didn't think unfriendly AI was a serious risk until I came here, but that might have been more about the arguments. I figured that an AI could just be programmed to do what you tell it to and nothing more (and from there can be given Asimov-style laws). It wasn't until I learned more about the nature of intelligence that I realized that that is not likely going to be easy. Intelligence is inherently goal-based, and it will maximize whatever utility function you give it.
Theism isn't just about god. It also has social, and therefore strong emotional, consequences. If I stop being a theist, does it mean I will lose my friends, my family will become colder to me, and I will lose access to the world's widest social networks?
In such a case, the new required information isn't a disproved miracle or an essay on Occam's razor. That has zero impact on the social consequences. It's more important to get evidence that there are a lot of atheists, that they can be happy, and that some of them are considered very cool even outside of atheist circles. (And after having this evidence, somehow, the essays about Occam's razor become more convincing.)
Or let's look at it from the opposite side: Even the most stupid demonstrations of faith send the message that it is socially accepted to be religious; that after joining a religion you will never be alone. Religion is so widespread not because the priests are extra cool or extra intelligent. It's because they are extra visible and extra audacious: they have no problem declaring that everyone who disagrees with them is stupid and evil and will go to hell (or some more polite version of this, which still gets the message across) -- a... (read more)
That looks like more of a reply to the parent comment than to mine.
1Viliam_Bur10y
Under the usual convention that "reply to" means "disagree with", it certainly
does. :D
Although the "some of them are considered very cool even outside of atheist
circles" part was inspired by you mentioning Asimov. (Only the remaining 99%
aren't.)
1[anonymous]10y
My original question was basically asking for evidence for your hypothesis
(religion is mostly a social motivated-reasoning thing, and the best way to fix
it is to demonstrate (over)confidence and social acceptance) or for an
alternative hypothesis. It sounds plausible, but I don't think anyone has
actually tried to
check with any degree of rigor.
6Bakkot10y
There's a PDF (legal, even!) here
[http://pub.uni-bielefeld.de/publication/1782990], linked next to "download".
See also their website
[http://www.uni-bielefeld.de/(en)/theologie/forschung/religionsforschung/forschung/streib/dekonversion/],
which is probably more digestible.
2palladias10y
Well, this is anecdata, but when I was an atheist, I found God Delusion
frustrating and not worth handing to my Christian friends, since it attacked
lowest common denominator Christianity a lot, and my friends tended to be nerdy
Thomists. Plus, I find a lot of Four Horseman stuff frustrating because they
rarely construct something of their own to defend (though I understand the sense
of urgency to knock people out of their current worldview -- if you find it
abhorrent enough -- and let them land where they may).
6[anonymous]10y
You say "when I was an atheist". Running into ex-atheists is a rare thing,
especially here - may I ask what changed your mind?
2Vaniver10y
I recently came across this, from the theist perspective (i.e. they tracked down
people who had left and interviewed them, with the hope of preventing that in
the future), and I remember it hinged mostly on social factors. (The enthusiastic
youth pastor quits, and is replaced by someone that doesn't know the Bible as
well, etc.)
I'm sure there are some people who deconverted because of reading those books,
but it's likely that they also would have deconverted if they moved from Town A
to Town B, for example, so that doesn't seem like a terribly effective way to
reach everyone.
0Salutator10y
I think another thing to remember here is sampling bias. The actual
conversion/deconversion is probably mostly the end point of a lengthy
intellectual process. People far along that process probably aren't very
representative of people not going through it, and it would be much more
interesting to know what gets the process started.
To add some more anecdata, my reaction
[http://last-conformer.net/2012/11/15/the-counter-productiveness-of-mockery/] to
that style of argumentation was almost diametrically opposed. I suspect this is
fairly common on both sides of the divide, but not being convinced by some
specific argument just isn't such a catchy story, so you would hear it less.
Mark sighs sadly. “Never mind… it’s obvious you don’t know. Maybe all pebbles are magical to start with, even before they enter the bucket. We could call that position panpebblism.”
This is clearly a joke at the expense of some existing philosophical position called pan[something] but I can't find the full name, which may be necessary to make the joke understandable in French. Can anyone help?
I initially read it as an allusion to Panpsychism
[http://en.wikipedia.org/wiki/Panpsychism], or maybe to a generic pan-x-ism
[http://www.petemandik.com/blog/2008/03/13/precedents-of-pan-x-ism/]. But, in
retrospect, the position that "all pebbles are magical to start with" should be
called "panmagism" or something. Panpebblism means that there is a pebble in
everything (or everyone). So I am no longer sure what Eliezer meant.
3witzvo10y
I think he's just using the prefix "pan-"
[http://dictionary.reference.com/browse/Pan-] to mean all, though perhaps
pantheism is relevant.
0Roxolan10y
I'll just keep the prefix/suffix as is and hope for the best then
("pancailloutisme").
In the past few hours, my total karma score has dropped by fifteen points. It looks like someone is going back through my old comments and downvoting them. A quick sample suggests that they've hit everything I've posted since some time in August, regardless of topic.
Is this happening to anyone else?
Anyone with appropriate access care to investigate?
To whoever's doing this — Here's the signal that your action sends to me: "Someone, about whom all you know is that they have an LW account that they use to abuse the voting system, doesn't like you." This is probably not what you mean to convey, but it's what comes across.
That kind of stuff happens quite often.
[https://www.google.com/search?q=karmassassination+site:lesswrong.com]
0Moss_Piglet10y
Maybe it's just me not knowing much about website design, but this seems like a
problem which could be mitigated with automatic controls on the karma system.
Like, for example, you have a limit of +/- n net karma you can award to any
given poster within a given time window t. Or, if someone's rate of downvoting
any given poster cracks some ceiling, it sends up an automatic mod flag that
there might be an attack going on (see the sketch below).
Ideally, of course, we could just abide by the honor system, but from a
pragmatic perspective it might make more sense to set up stronger safeguards as
an additional measure.
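A minimal sketch of the controls proposed above; all names and thresholds are invented for illustration, and this is of course not LessWrong's actual voting code:

```python
import time
from collections import defaultdict, deque

MAX_NET_KARMA = 10            # the "+/- n net karma" cap per voter/target pair
WINDOW_SECONDS = 24 * 3600    # the time window t
DOWNVOTE_FLAG_CEILING = 15    # in-window downvotes before flagging the mods

class KarmaGuard:
    def __init__(self):
        # (voter, target) -> deque of (timestamp, vote), vote in {+1, -1}
        self.history = defaultdict(deque)

    def record_vote(self, voter, target, vote, now=None):
        now = time.time() if now is None else now
        votes = self.history[(voter, target)]
        while votes and now - votes[0][0] > WINDOW_SECONDS:
            votes.popleft()                 # forget votes outside the window
        net = sum(v for _, v in votes)
        if abs(net + vote) > MAX_NET_KARMA:
            return "rejected"               # per-target net karma cap hit
        votes.append((now, vote))
        downvotes = sum(1 for _, v in votes if v == -1)
        if downvotes > DOWNVOTE_FLAG_CEILING:
            return "flagged-for-mods"       # possible karmassassination
        return "accepted"

guard = KarmaGuard()
print(guard.record_vote("voter_a", "poster_b", -1))   # "accepted"
```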
1A1987dM10y
That can also be implemented more softly by still allowing anyone to vote on
anyone as much as they want, but requiring a captcha for each vote after a
given limit.
I got an offer of an in-person interview from a tech company on the left coast. They want to know my current salary and expected salary. Position is as a software engineer. Any ideas on the reasonable range? I checked Glassdoor and the numbers for the company in question seem to be 100k and a bit up. I suppose, actually, that this tells me what I need to know, but honestly it feels awfully audacious to ask for twice what I'm making at the moment. On the other hand I don't want to anchor a discussion that may seriously affect my life for the next few years at too small a number. So, I'm seeking validation more than information. Always audacity?
Always ask as much as you can. Otherwise you are just donating the money to your boss. If you hate having too much money, consider donating to MIRI or CFAR or GiveWell instead. Or just send it to me. (Possible exception is if you work for a charity, in which case asking less than you could is a kind of donation.)
The five minutes of negotiating your salary are likely to have more impact on your future income than the following years of hard work. Imagine yourself a few years later, trying to get a 10% increase and hearing a lot of bullshit about how the economic situation is difficult (hint: it is always difficult), so you should all just work harder and maybe later, but no promises.
it feels awfully audacious to ask for twice what I'm making at the moment
I know. Been there, twice. (Felt like an idiot after realising that I worked for a quarter of my market price at the first company. Okay, that's exaggerated, because my market price increased with the work experience. But it was probably half of the market price.)
The first time, I was completely inexperienced about negotiating. It went like: "So tell me how much you want." "Uhm, you tell me how much you give peop... (read more)
Asking for more than all the money is trivial. Don't even get me started on how
much someone who is good at math can ask for. This is obviously not a good
strategy. There is an optimum amount to ask for. How do you find it?
3Moss_Piglet10y
By looking at the distribution of that industry's / company's wages for someone
of your qualifications and asking for something on the high end. They will then
either accept or try to bargain you down. Either way, you will most likely end
up with more than what you would have gotten otherwise.
In other words, exactly what Viliam_Bur said to begin with.
3RolfAndreassen10y
Well yes, but how much can I ask? :) At any rate I went for 125k, which seems to
be in the upper third or so of what Glassdoor reports. Thanks for the
encouragement.
5Viliam_Bur10y
When the first two companies say that they would hire you if you asked a bit
less, and you refuse, and the third company gives you as much as you asked,
then you know you are working for a market salary. Until then you are probably
too cheap.
Sorry, I am not from USA, so I am unable to give specific numbers. I guess you
should ask for 140k now, and be willing to get down to 125k (prepare some
excuse, such as "normally I would insist on 140k, but since this is the work I
always wanted to have, and [insert all the benefits your interviewer mentioned],
I'd say we have a deal").
Don't deliberately screw yourself over. Don't accept less than the average for your position, and either point-blank refuse to give them negotiating leverage by telling them your current salary, or lie.
Look up what Ramit Sethi has to say about salary negotiation. He really outlines how things look from the other side and how asking for your 100k is not nearly as audacious as it seems.
You may feel better about being audacious if you do an explicit cost-of-living
calculation given the rent and price differential. If you see that maintaining
the same standard of living is going to cost you 80k, then 100k stops seeming
like a huge number.
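A toy version of that calculation (a sketch in Python; every number is made up, chosen only to show how a figure like 80k can fall out):

```python
# Toy cost-of-living comparison -- all inputs invented for illustration.
current_salary = 55_000   # current total compensation
current_rent = 12_000     # per year, current city
target_rent = 28_000      # per year, left-coast city
price_ratio = 1.2         # non-rent cost of living, target vs. current

non_rent_spending = current_salary - current_rent
equivalent_salary = target_rent + non_rent_spending * price_ratio
print(f"same standard of living: ${equivalent_salary:,.0f}")  # ~ $79,600
```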
It's also true that there is only epsilon chance of screwing yourself. Nobody is
going to reject you because the expected salary number you suggested was too
high; it makes no sense. You could suggest 150k and the only bad thing that will
happen is you might only get offered 120k.
0RolfAndreassen10y
No doubt you are correct; anyway, it's only a job interview. Other fish in the
sea, if necessary.
7Ben_LandauTaylor10y
Always audacity! If you ask for a number that's too high, they are extremely
unlikely to withdraw the offer. Anecdotally, a very good friend of mine was just
able to negotiate a 50% increase in his starting salary in a similar-sounding
situation.
3RolfAndreassen10y
Ok. I took a deep breath, closed my eyes, and said "125000". Hope it wasn't too
low.
0JoshuaFox10y
Rolf, you work in an industry where people are becoming millionaires and
billionaires overnight. Maybe you won't manage that, but no need to be
embarrassed for raking it in.
Note that even though you don't need to reveal your salary in negotiations,
current salary often anchors negotiations in your next one as well as your
current one, illogical though that may be. So the deal you make now has
long-term implications. Also, in yet another of those biases they talk about
here, a high salary may, within limits, make your bosses think you are a better
worker who deserves a higher status.
I would like to eventually create a homeschooling repository. Probably with research that might help people in deciding whether or not to homeschool their children, as well as resources and ideas for teaching rationality (and everything else) to children.
I have noticed that there have been several questions in the past open threads about homeschooling and unschooling. One of the first things I plan to do is read through all past LessWrong discussions on the topic. I haven't really started researching yet, but I wanted to start by asking if anyone had anything that they think would belong in such a repository.
I would also be interested in hearing any personal opinions on the matter.
Homeschooling is like growing your own food (or doing any other activity where you don't take advantage of division of labor): if you enjoy it, have time for it and are good at it, it's worth trying. Otherwise it's useless frustration.
I couldn't agree more about division of labor in general, but with the current state of the public school system, I do not trust them to do a good job of teaching anything.
I do not have the time or patience for it, and probably am not good at it, but fortunately my partner would be the one teaching.
Good compared to what? Compared to other developed countries, compared to what
they could do if they spent their resources more wisely, compared to what you
could do homeschooling your kid?
A lot of the criticism of US schools is based on the first two criteria, but the
third one should be the one that matters for you - even if they do a crappy job
compared to Europe or Canada, they might still do a better job than you on your
own, especially if you take into account things like learning to get along with
peers.
(That being said, I don't know enough about either your situation or even US
schools (I live in France); I'm just wary of the jump from "schools are bad" to
"I can do better than schools".)
0Manfred10y
Student achievement in US schools compared to e.g. Finnish schools is to a large
extent a reflection of the much greater inequality in the US. If you're a middle
class parent and you're not living in a high-poverty neighborhood, your kid will
be totally fine going to public school.
4Scott Garrabrant10y
What does "totally fine" mean? I wouldn't describe 99% of the population as
"totally fine" in terms of education and rational thought.
2Manfred10y
I don't think a system that can promise you 3 standard deviations of improvement
has been invented. Look at twin studies.
2Scott Garrabrant10y
I agree that 3 standard deviations of improvement of a random person is a lot to
ask for. However, I can easily see that someone with built in potential to be 3
sd above average could be brought down to near average by the wrong system.
My expectation of my children's potential is very dependent on how heritable
intelligence is, and I admittedly do not know much about that.
4Emile10y
As far as I know, most estimates point to around 50% genetic, but parenting
style doesn't explain much of the remaining 50%
See this:
http://infoproc.blogspot.fr/2009/11/mystery-of-nonshared-environment.html
5Barry_Cotter10y
Given the Bloom two sigma phenomenon, it would not surprise me if unschooling +
1 hour of tuition per day beat regular school. And if you read LessWrong,
there's a reasonable p() that an hour of a grad student's time isn't that
expensive.
8Viliam_Bur10y
I googled the "Bloom two sigma phenomenon" and... correct me if I am wrong, but
I parsed it as:
"If we keep teaching students each lesson until they understand it, and only
then move to the next lesson (as opposed to, I guess, moving ahead at
predetermined time intervals), they will be in the top 2 percent of all
students."
What exactly is the lesson here? The weaker form seems to be -- if students
don't understand their lessons, it really makes a difference at tests. (I guess
this is not a big surprise.) The stronger form seems to be -- in standard
education, more than 90% of students don't understand the lessons. Which
suggests that of the money given to education, the huge majority is wasted.
Okay, not wasted completely; "worse than those who really understand" does not
necessarily mean "understands nothing". But still... I wonder how much
additional money would be needed to give decent education to everyone, and how
much would the society benefit from that.
Based on my experience as a former teacher, the biggest problem is that many
students just don't cooperate and do everything they can to disrupt the lesson.
(In homeschool and private tutoring, you don't have these classmates!) And in
many schools teachers are completely helpless about that, because the rules
don't allow them to do anything that could really help. Any attempt to make
education more efficient would have to deal with the disruptive students,
perhaps by removing them from the mainstream. And the remaining ones should learn
until they understand. Perhaps with some option for the smarter ones to move
ahead faster.
2TsviBT10y
Are you kidding? Did you go to school? Teaching material to a class of 10 (let
alone 20 or 50) K-12 kids, selected only by location and socio-economic class,
is a ridiculously overconstrained problem. To give one of the main problems: for
each concept you teach, you have to choose how long to explain it and give
examples. If you move on, then any kid who didn't really get it will become very
lost for the rest of the year (I'm thinking of technical subjects, where you
have long dependent chains of concepts). If you keep dropping kids, then
everyone gets lost. If you wait until everyone gets it, then you go absurdly
slow. My little brother has been "learning" basic arithmetic in his (small,
private) school for six years.
0shminux10y
Not sure what exactly in my comment you are objecting to so vehemently. The
issues you describe are exactly the same as with any mass production, including
food.
4TsviBT10y
If we are talking about babysitting, then of course I agree - much more
efficient to have one person babysit 15 kids.
If we are talking about learning, then I am vehemently objecting to "a normal
education is just/almost as good as homeschooling".
The point of the example I gave (mastery learning vs. speed in the classroom)
was that you can't mass produce education in the naive way. Taking advantage of
division of labor in this context would mean hiring tutors.
More generally, every form of utilitarianism I've seen assumes that you should value people equally, regardless of how close they are to you in your social network. How much damage are you obligated to do to your own society for people who are relatively distant from it?
It's melatonin; melatonin is so cheap that you actually wouldn't save much, if any, money by sending your customers fakes. And the effect is clear enough that they'd quickly call you on fakes.
And they may look shady simply because they're not competently run. To give an example, I've been running an ad from a modafinil seller, and as part of the process, I've gotten some data from them - and they're easily costing themselves half their sales due to basic glaring UI issues in their checkout process. It's not that they're scammers: I know they're selling real modafinil from India and are trying to improve. They just suck at it.
.. no.
I kinda assumed I wouldn't be able to get one since I don't have any obvious
sleeping issues. "I did my independent research and figured it would improve my
sleep beyond the baseline" wouldn't work, I think.
6Douglas_Knight10y
What's the harm in trying?
(Lying to your doctor could be dangerous. So don't do that.)
Just say "I think my sleep could be better." It's true and baseline is vague
enough that doctors don't mind improving people beyond it.
Doctors do get nervous when people they don't know come in asking for a
particular drug, even something like melatonin or a hair-loss drug. This is much
more likely to work if you have a regular doctor.
Going back to the original question, can you order it off of amazon.com (not
co.uk)?
0kalium10y
What is the danger in telling your doctor you have insomnia when you don't?
0Douglas_Knight10y
This particular example is probably safe, but I think it's better to give more
generalizable advice.
0bramflakes10y
I assumed that Amazon would be smart enough to block orders shipped to countries
where the products are illegal or restricted, but I'm unsure whether independent
sellers associated with amazon have the same restriction. In any case I bought
some from another UK-based site. It was only like 20 quid for half a year's
worth of pills so I don't consider it much of a loss if they don't arrive or are
just sugar pills (which gwern points out is unlikely).
If I make a target, but instead of making it a circle, I make it an immeasurable set, and you throw a dart at it, what's the probability of hitting the target?
I suppose the question is: What should you do if you're offered a bet on whether
the dart will hit the target or not?
There's no way to avoid the question other than arguing somehow that you'll
never encounter an immeasurable set.
Immeasurable objects are physically impossible. The actual target will be
measurable, even if the way you came up with it was to try to follow the
"instructions" that describe an immeasurable set.
0D_Alex10y
Hmm. What is the exact length of your, say, pen? Is it a rational number or a
real number... I mean the EXACT length...?
Note that if the answer to the last question is "it is a real number", then it is
possible to construct the bet as proposed by the OP.
Before you quote "Planck's Length" in your reply, there is currently no directly
proven physical significance of the Planck length (at least according to
Wikipedia).
7Quinn10y
For the same reasons you outline above
[http://lesswrong.com/r/discussion/lw/ir4/open_thread_september_30_october_6_2013/9tkv],
I'm okay with fighting this hypothetical target.
If I must dignify the hypothesis with a strategy: my "buy" and "sell" prices for
such a bet correspond to the inner and outer measures of the target,
respectively.
If you construct a set in real life, then you have to have some way of judging whether the dart is "in" or "out". I reckon that any method you can think of will in fact give a measurable set.
Alternatively, there are several ways of making all sets measurable. One is to reject the Axiom of Choice. The AoC is what's used to construct immeasurable sets. It's consistent in ZF without AoC that all sets are Lebesgue measurable.
If you like the Axiom of Choice, then another alternative is to only demand that your probability measure be finitely additive. Then you can give a "measure" (such finitely additive measures are actually called "charges") such that all sets are measurable. What's more you can make your probability charge agree with Lebesgue measure on the Lebesgue measurable sets. (I think you need AoC for this though.)
In L.J. Savage's "The Foundations of Statistics" the axioms of probability are justified from decision theory. He only ever manages to prove that probability should be finitely additive; so maybe it doesn't have to be countably additive. One bonus of finite additivity for Bayesians is that lots of improper priors become proper. For example, there's a uniform probability charge on the naturals.
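To make that last example concrete, here is a sketch (my own, standard textbook material rather than Savage's construction) of the uniform probability charge on the naturals:

```latex
% Natural density of A \subseteq \mathbb{N}, when the limit exists:
\[
  d(A) \;=\; \lim_{n \to \infty}
    \frac{\lvert A \cap \{1, \dots, n\} \rvert}{n}.
\]
% d is finitely additive where defined, and d(\{k\}) = 0 for every k,
% so no countably additive probability measure can extend it. Using
% the Hahn-Banach theorem (a weak consequence of AC), d extends to a
% finitely additive probability charge defined on all subsets of
% \mathbb{N}: the "uniform probability charge on the naturals".
```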
I've been thinking about this a bit more. My current thinking is basically what
Coscott said
[http://lesswrong.com/lw/ir4/open_thread_september_30_october_6_2013/9tk1]:
1. We only care about probabilities if we can be forced to make a bet.
2. In order for it to be possible to decide who won the bet, we need that
(almost always) a measurement to some finite accuracy will suffice to
determine whether the dart is in or out of the set.
3. Thus the set has a boundary of measure zero.
4. Thus the set is measurable.
What we have shown is that in any bet we're actually faced with, the sets
involved will be measurable.
(The steps from 2 to 3 and 3 to 4 are left as exercises. (I think you need
Lebesgue measurable sets rather than just Borel measurable ones))
Note that the converse fails: I believe you can't make a bet on whether or not
the dart fell on a rational number, even though the rationals are measurable.
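For the exercise from 3 to 4, a sketch (my own reconstruction, not from the comment above); write int(S) and cl(S) for interior and closure, and ∂S = cl(S) \ int(S) for the boundary:

```latex
% Claim: if \mu(\partial S) = 0 then S is Lebesgue measurable.
\[
  \operatorname{int}(S) \;\subseteq\; S \;\subseteq\; \operatorname{cl}(S),
  \qquad
  S \;=\; \operatorname{int}(S) \,\cup\, \bigl(S \cap \partial S\bigr).
\]
% int(S) is open, hence Borel. S \cap \partial S is a subset of the
% null set \partial S, hence Lebesgue measurable by completeness of
% Lebesgue measure; this is exactly why Lebesgue rather than Borel
% measurability is needed. A Borel set union a null set is Lebesgue
% measurable, so S is.
% (For 2 to 3: a boundary point is one where every finite-accuracy
% interval meets both S and its complement, so the points never
% decided by finitely many rolls are exactly those of \partial S;
% "almost always decided" therefore forces \mu(\partial S) = 0.)
```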
2Pfft10y
Here's a variant which is slightly different, and perhaps stronger since it also
allows some operations with "infinite accuracy".
In order to decide who won the bet, we need a referee. A natural choice is to
say that the referee is a Blum-Shub-Smale machine
[http://en.wikipedia.org/wiki/Blum%E2%80%93Shub%E2%80%93Smale_machine], i.e. a
program that gets a single real number x∈[0,1] as input, and whose operations
are: loading real number constants; (exact) addition, substraction,
multiplication and division; and branching on whether a≤b (exactly).
Say you win if the machine accepts x in a finite number of steps. Now, I think
it's always the case that the set of numbers which are accepted after n steps is
a finite union of (closed or open) intervals. So then the set of numbers that
get accepted after any finite number of steps is a countable union of finite
unions of intervals, hence Borel.
7Scott Garrabrant10y
Immeasurable sets are not something in the real world that you can throw a dart
at.
I can rephrase your problem to be: "If I have an immeasurable set X in the unit
interval, [0,1), and I generate a uniform random variable from that interval,
what is the probability that that variable is in X?"
The problem is that a "uniform random variable" on a continuous interval is a
more complicated concept than you think. Let me explain, by first giving an
example where X is measurable, let's say X=[0,pi-3). We analyze random continuous
variables by reducing to random discrete variables. We can think of a "uniform
random variable" as a sequence of digits in a decimal expansion which are
determined by rolling a 10 sided die. So for example, we can roll the die, and
get 1,4,6,2,9,..., which would correspond to .14629..., which is not in the set
X. Notice that while in principle we might have to roll the die arbitrarily many
times, we actually only had to roll the die 3 times in this case, because once
we got 1,4,6, we knew the number was too big to be in the set X. We can use this
fact that we almost always have to roll the die only a finite number of times to
get a definition of the "probability of being in X." In this case, we know that
the probability is between .141 and .142, by considering 3 die rolls, and if we
consider more die rolls, we get more accuracy that converges to a single number,
pi-3.
Now, let's look at what goes wrong if X is not measurable. The problem here is
that the set is so messy that even if we know the first finitely many digits of
a random number, we won't be able to tell if the number is in X. This stops us
from carrying out a procedure like the one above and defining what we mean.
Is this clear?
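To make the die-rolling procedure concrete, a small simulation (my own sketch, using X = [0, pi-3) as in the example above):

```python
import math
import random

def roll_until_decided(target=math.pi - 3, max_rolls=50):
    """Roll decimal digits of a uniform random number in [0, 1) until
    the digits so far decide membership in [0, target). Returns
    (in_target, rolls_used); in_target is None in the essentially
    never-occurring case where the roll cap is hit undecided."""
    low, width = 0.0, 1.0
    for n in range(1, max_rolls + 1):
        digit = random.randrange(10)       # one roll of the 10-sided die
        width /= 10.0
        low += digit * width               # number lies in [low, low + width)
        if low + width <= target:          # interval entirely inside X
            return True, n
        if low >= target:                  # interval entirely outside X
            return False, n
    return None, max_rolls

# Estimate P(X) and the average number of rolls needed to decide.
trials = 100_000
hits = rolls = 0
for _ in range(trials):
    inside, n = roll_until_decided()
    hits += bool(inside)                   # undecided counts as a miss
    rolls += n
print(f"estimated P = {hits / trials:.4f}  (true value {math.pi - 3:.4f})")
print(f"average rolls to decide: {rolls / trials:.2f}")
```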
5Oscar_Cunningham10y
EDIT: I retract the following. The problem with it is that Coscott is arguing
that "something in the real world that you can throw a dart at" implies
"measurable" and he does this by arguing that all sets which are "something in
the real world that you can throw a dart at" have a certain property which
implies measurability. My "counterexamples" are measurable sets which fail to
have this property, but this is the opposite of what I would need to disprove
him. I'd need to find a set with this property that isn't measurable. In fact, I
don't think there is such a set; I think Coscott is right.
The sets with this property (that you can tell whether your number is in or out
after only finitely many dice rolls) are the open sets, not the measurable sets.
For example, the set [0,pi-3] is measurable but not open. If the die comes up
(1,4,1,5,9,...) then you'll never know if your number is in or out until you
have all the digits. For an even worse example take the rational numbers:
they're measurable (measure zero) but any finite decimal expansion could be
leading to a rational or an irrational.
3arundelo10y
That doesn't seem right to me. Take as my target the open set (0, pi-3). If I
keep rolling zeros I'll never be able to stop. (Edit: I know that the
probability of rolling all zeros approaches 0 as the number of die rolls
approaches infinity, but I assume that a demon can take over the die and start
feeding me all zeros, or the digits of pi-3 or whatever. As I think about this
more I'm thinking maybe what you said works if there is no demon. Edit 2: Or
not. If there's no demon and my first digit is 0 then I can stop, but that's
only because 0 is expressible as an integer divided by a power of ten. If
there's no demon and I roll the first few digits of pi-3, I know that I'll
eventually go over or under pi-3, but I don't know which, and it doesn't matter
whether pi-3 itself is in my target set.)
Every die roll tells me that the random number I'm generating lies in the closed
interval [x, x+1/10^n], where x is the decimal expansion I've generated so far
and n is how many digits I've generated. If at some point I start rolling all 0s
or all 9s I'll be rolling forever if the number I'm generating is a limit point
of the target set, even if it's not in the target set.
3Oscar_Cunningham10y
I should have been more accurate and said "If the random number that you'll
eventually get does in fact lie in the set, then you'll find out about this fact
after a finite number of rolls."
This really does define open sets, since for any point in an open set there's an
open ball of radius epsilon about it which is in the set, and then the interval
[x, x+1/10^n] has to be in that ball once 1/10^n < epsilon/2.
EDIT: (and the converse also holds, I think, but it requires some painfully
careful thinking because of the non-uniqueness of decimal expansions)
I think a more exact representation of what Coscott actually said is the
following property: "We almost always only have to roll the die finitely many
times to determine whether the point is in or out."
This still doesn't specify measurable sets (because of the counterexample given
by the rationals). I think the type of set that this defines is "Sets with
boundary of measure zero" where the boundary is the closure minus the interior.
Note that the rationals in [0,1) have boundary everywhere (i.e. boundary of
measure 1).
0arundelo10y
Ah, so if my target set is (0, pi-3) and the demon feeds me the digits of pi-3,
I will be rolling forever, but if the demon feeds me the digits of pi-3-epsilon
(or any other number in (0, pi-3)) I will be able to stop after a finite number
of rolls.
That sounds right to me, although I don't understand measure very well. I was
informally thinking of this property as "continuousness".
0Scott Garrabrant10y
Yeah, but I can't explain that without analysis not appropriate for a Less
Wrong post. I remember that the probability class I took in undergrad dodged
the measure theory questions by defining probabilities on open sets, which
actually works for most reasonable questions. I think such a simplification is
appropriate, but I should have had a disclaimer.
0MrMind10y
What Quinn said.
Throwing darts at a non-measurable set was a technique used to 'prove' the
continuum hypothesis.
1Douglas_Knight10y
Could you give a reference? Are you assuming choice?
6MrMind10y
Sorry, it was used to 'disprove' the continuum hypothesis.
It's the Freiling axiom
[http://en.wikipedia.org/wiki/Freiling's_axiom_of_symmetry].
1Douglas_Knight10y
Thanks! I think it's quite reasonable to reject choice and take as an axiom that
all sets are measurable, so I'm interested in the consequences of it. I'd always
been told that the continuum hypothesis is orthogonal to everything people care
about, but that's only after assuming choice.
2MrMind10y
There's also the Axiom of Determinacy that rejects Choice and, when paired with
the existence of a very strong measurable cardinal, gives a very broad class of
measurable sets.
0Douglas_Knight10y
Could you give an example of a set whose measurability I might care about, other
than subsets of R? something for random processes?
Could you give a reference for the combination?
0MrMind10y
Well, I guess this pretty much depends on the area you're working on. I'm
interested in the foundations of mathematics, for which measurable cardinals
are of big importance (for example, they are the smallest critical point of an
embedding of transitive models, or the smallest large cardinal property that
cannot be shown to exist inside the smallest inner model). Outside of that
area, I guess the interest is all about R and descriptive set theory.
Edit: It's not true that measurable cardinals are the smallest large cardinals
that do not exist in L. Technically, the consistency strength in question is
called 0#, and between that and measurables there are Ramsey cardinals.
Well, the definitive source is Kanamori's book "The higher infinite", but it's
advanced. Some interesting things can be scooped up from Wikipedia's entry about
the axiom of determinacy [http://en.wikipedia.org/wiki/Axiom_of_determinacy].
-2Thomas10y
CH is orthogonal to ZF. CH is orthogonal to ZFC.
If ZFC is inconsistent, then ZF is also inconsistent.
AC is orthogonal to CH.
2JoshuaZ10y
That doesn't actually answer Douglas's statement that the continuum hypothesis
is orthogonal to everything people care about if one assumes choice. In fact
Doug's statement is more or less correct. See in particular discussion here
[http://math.stackexchange.com/questions/472957/the-continuum-hypothesis-the-axiom-of-choice].
In particular, ZF + CH implies choice for sets of real numbers, which is what we
care about for most practical purposes.
2Douglas_Knight10y
A comment at your link baldly asserts that ZF+CH implies choice for sets of real
numbers, but the link seems otherwise irrelevant. Do you have a better citation?
In particular, what do you mean by CH without choice? In fact, the comment
asserts that ZF+CH implies R is well-orderable, which I don't think is true
under weaker notions of CH.
0JoshuaZ10y
CH in that context then is just that there are no sets of cardinality between
that of R and N. You can't phrase it in terms of alephs (since without choice
alephs aren't necessarily well-defined). As for a citation, I think Caicedo's
argument here
[http://math.stackexchange.com/questions/314741/question-about-generalized-continuum-hypothesis]
can be adapted to prove the statement in question.
2Douglas_Knight10y
I said that I doubt your claim, so blog posts proving different things aren't
very convincing. Maybe I'm confused by the difference between choice and
well-ordering, but imprecise sources aren't going to clear that up.
In fact, it was Caicedo's post that led me to doubt Buie. Everything Caicedo
says is local. In particular, he says that CH(S) and CH(2^S) imply that S is
well-orderable. Buie makes the specific claim that CH implies R is
well-orderable, which sounds stronger and thus unlikely to be proved by local
methods. I guess it is not exactly stronger, though, because the hypothesis is
a little different (CH=CH(N), not CH(R)).
--------------------------------------------------------------------------------
Alephs are defined without choice. They are bijective equivalence classes of
ordinals. In any event, ℵ_1 is the union of countable ordinals. Sometimes they
are called cardinals.
It is widely reported that the weak CH is the statement that every uncountable
subset of the reals is bijective with the reals, while the strong CH is that
the reals are bijective with ℵ_1. I think you and Buie are simply confusing the
two statements.
--------------------------------------------------------------------------------
Also, sometimes people use "weak continuum hypothesis" to mean 2^ℵ_0 < 2^ℵ_1; I
think it is strictly weaker than the statement that there are no sets between
ℵ_0 and 2^ℵ_0.
0JoshuaZ10y
Hmm, that does make it seem like I may be confused here. A possible repaired
statement would then use GCH in some form rather than just CH, and that should
go through, since GCH will imply that for all infinite S, CH(S) and CH(2^S),
which will allow one to use Caicedo's argument. Does that at least go through?
I think you are likely correct here.
2Douglas_Knight10y
Yes, Caicedo mentions that GCH implies AC. This is a theorem of Sierpiński, who
proved it locally. Specker strengthened the local statement to CH(S)+CH(2^S) =>
S is well-orderable. It is open whether CH(S) => S is well-orderable.
0JoshuaZ10y
Ok. That makes sense. I think we're on the same page now, and you are correct
that Buie and I were confused about the precise versions of the statements in
question.
There seems to be a consensus among people who know what they're talking about that the fees you pay on actively managed funds are a waste of money. But I saw some friends arguing about investing on Facebook, with one guy claiming that index funds are not actually the best way to go for diversified investing that doesn't waste money on fees. Does anyone know if there is anything to this? More specifically, are Vanguard's funds really as cheap as advertised, or is there some catch to them?
The idea is that you can't, on average and in the long term, beat the market. So paying
extra money for a fund that claims to be able to do that is an unnecessary
gamble. Accumulating the expertise to evaluate a fund's ability to perform
better than the market would give you the ability to just invest at that level
anyway, so you might as well save your time and money and stick it in the
cheapest market funds you can manage.
Yes, some strategies beat the market, sometimes (they also sometimes fail
catastrophically). But you can do comparably well in the long term by having a
very low-cost, low-effort strategy that frees up a lot of time and
effort for other pursuits.
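To put rough numbers on the fee drag (a minimal sketch; the 7% gross return and
the fee levels are illustrative assumptions, not claims about any particular
fund):

    def terminal_wealth(principal, years, gross_return, expense_ratio):
        # Compound annual growth, net of a yearly percentage fee on assets.
        net = (1 + gross_return) * (1 - expense_ratio)
        return principal * net ** years

    for fee in (0.001, 0.01, 0.02):  # 0.1%, 1%, 2% expense ratios
        print(f"{fee:.1%} fee: {terminal_wealth(10_000, 30, 0.07, fee):,.0f}")
    # With these assumed inputs the 2% fund ends up more than 40% below
    # the 0.1% fund after 30 years.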
You can look up expense ratios on Google, Morningstar, etc. Vanguard does pretty
well. They're pretty well represented here
[http://etfdb.com/compare/lowest-expense-ratio/].
2Douglas_Knight10y
This seems like a really weird question. If your friend is advocating something
else, how about you tell us what it is? If your friend is knocking Vanguard, but
not specifying what's better, why should I care? Your last sentence suggests
that Vanguard is lying about its fees. That would be a reasonable thing to say
in isolation, but it's not true.
0Viliam_Bur10y
Most likely the alternative is to pay his friend to actively manage your money
using his secret knowledge.
This does not automatically mean the friend is wrong. But I also wouldn't expect
any kind of proof or guarantee. We are moving from the evidence area to the
"just trust me, I'm smart" area.
2solipsist10y
Asset allocation matters too. Vanguard target retirement funds
[https://personal.vanguard.com/us/funds/vanguard/TargetRetirementList] follow
the conventional wisdom (more stocks when you're young, more bonds when you're
older) and are pretty cheap. Plowing all new investments into a single
target-date fund is good advice for most people*.
I implemented a scheme to lower my expenses from 0.17% to 0.09%, but it was not
worth the time, hassle, and tax complications.
*People who should do something more complicated include retirees, who should
strongly consider buying an annuity, and people who are saving to donate to
charity [http://www.overcomingbias.com/2013/04/more-now-means-less-later.html].
2ChristianKl10y
The issue with an index fund that's based on something like the S&P 500 is that
the S&P 500 changes over time.
If a company loses its place in the S&P 500, all the index funds based on the
S&P 500 dump its stock on the market. On average that's not going to be a good
trade. The same goes for buying the companies that just made it into the
S&P 500. On average you are going to lose some money to hedge funds or
investment banks who take the other side of those trades.
In general you can expect that if you invest money in the stock market, big
powerful banks have some way to screw you. But they won't take all your money,
and index funds are still a good choice if you don't want to invest too much
time thinking about investing.
4wgd10y
This sounds like a sufficiently obvious failure mode that I'd be extremely
surprised to learn that modern index funds operate this way, unless there's some
worse downside that they would encounter if their stock allocation procedure was
changed to not have that discontinuity.
1Lumifer10y
They do because their promise is to match the index, not produce better returns.
Moreover, S&P500 is cap-weighted so even besides membership changes it is
rebalanced (the weights of different stocks in the portfolio change) on a
regular basis. That also leads to rather predictable trades by the indexers.
0ChristianKl10y
Being an index fund is fundamentally about changing your portfolio when the
index changes. There's no real way around it if you want to be an index fund.
1solipsist10y
If you could consistently make money by shorting stocks that are about to fall
off an index, the advantage would be arbitraged to oblivion.
1ChristianKl10y
The question is whether you know that the stocks are about to fall off the
index before other market participants do. If your high frequency trading
algorithm is the first to know that a stock is about to fall off an index, then
you make money with it.
Using the effect to make money isn't easy because it requires having information
before other market participants. That doesn't change anything about whether the
index funds on average lose money on trades to update their portfolio to index
changes.
0shminux10y
There is no catch, you don't pay anything other than their advertised fees.
Andrew Tobias [http://en.wikipedia.org/wiki/Andrew_Tobias] has been using them
as an example of a great way to invest in the market for years. (His writings on
investing are great, ignore anything he says about politics on his blog
[http://andrewtobias.com/column/].)
-4V_V10y
IIUC, since most transactions in the stock market are zero-sum (at least in
terms of money), the fact that index funds make money despite using very simple
and predictable strategies implies that on average, everybody else manages to do
worse.
3Lumifer10y
Nope, that's not how it works. Just because the transaction is zero-sum doesn't
mean the value is zero-sum.
Consider (abstract) agriculture. You buy seeds, that's a "zero-sum" transaction,
plant them, wait for them to grow, pick the harvest and sell it in another
"zero-sum" transaction. Both your transactions with the market are zero-sum and
yet... :-)
Specifically, the stock market is not a zero-sum game. Therefore the fact that
(some) index funds (sometimes) make money does not imply that everybody else
does worse.
0V_V10y
Yes, but you can't plant stock options in the ground, and in fact you can't
really do anything with them other than selling them or keeping them and
cashing the dividends (assuming that you don't buy enough shares of a company
to gain control of it).
Since different people can assign different utility to cash and can discount
future utility differently, it is possible that a transaction is positive sum:
e.g. consider an old person with a short remaining life expectancy selling all
their stocks to a young person. But at the level of large investment funds and
banks, these effects should mostly cancel out, therefore the stock market is
approximately zero-sum (up to events that alter the amount of available stocks,
such as defaults, IPOs and recapitalizations).
1Lumifer10y
Stock options are very different from stock shares. I assume you're talking
about shares.
Stock shares represent part ownership of a company. The fact that you're likely
to be a minority owner and have no control over the company does not change the
fact that you are still legally entitled to a share of the company's value. If
the company's value rises, the value of your share rises as well.
If you're talking about personal subjective utility, every voluntary transaction
is positive-sum. But that's irrelevant for the purpose of this discussion since
here we are talking dollars and not utilons.
You still don't understand. Public companies (generally) create value. This
value accrues to the owners of the companies who are the holders of stock
shares. In the aggregate, stock holders own all the public companies. If the
public companies produced value, say, this year, the value of the companies
themselves increased. This means that the worth of the stock in the stock market
has increased -- even if no transactions have taken place. That is why the
stock market is not a zero-sum game.
0V_V10y
Yes, sorry about the imprecision.
I didn't claim that owning stock shares produces no value. I claimed that most
trades involving stocks (those which neither increase nor decrease the amount of
stocks) are zero-sum w.r.t. monetary value.
Consider Alice and Bob who have the same utility function w.r.t. money and the
same discounting strategy. Alice sells a share to Bob at price X. Alice is
betting that the total discounted utility from the dividends gained by owning
the share indefinitely is less than the immediate utility of owning X, while Bob
is betting that it is greater than that. Clearly they can't both be right. The
gain of one is the loss of the other.
0shminux10y
Sounds like a fully general counter-argument against investing.
0V_V10y
No, you can invest in index funds and make money or invest in securities not
traded on the stock market.
0Lumifer10y
Generally it depends and there are certainly exceptions, but this position is a
good prior to be modified by evidence. Absent evidence it stands :-)
Investing is complicated. There is no simple, bulletproof, one-size-fits-all
recipe. To talk about "the best way" you need to start by specifying your goals
and constraints (including things like risk tolerance) -- that's surprisingly
hard.
Douglas_Knight just fixed it. (It's a wiki; in the future, just fix it!)
0Viliam_Bur10y
I was also curious about why exactly the latter link does not work. By reading
the URLs, I would expect the former to be "Open Threads within the Discussion
subreddit" and the latter "Open Threads, anywhere". If this understanding is
correct, they should produce the same results. Which means either my model is
wrong, or there is a bug in LW software.
3Douglas_Knight10y
No, the main subreddit is weird and a lot of things (eg, tags) that don't
specify a subreddit are in main. The All subreddit ought to do both main and
discussion, but its tags [http://lesswrong.com/r/all/tag/open_thread/] only
cover main. Also, there are a lot of bugs in main and all.
Am I mistaken, or do the Article Navigation buttons only ever take me to posts in Main, even if I start out from a post in Discussion? Is this deliberate? Why?
You correctly identify a bug. Here is another bug, which is less consistent.
Posts in discussion have two URLs, one marked discussion, one not. For this open
thread, here
[http://lesswrong.com/r/discussion/lw/ir4/open_thread_september_30_october_6_2013/]
is the discussion link. Its tag links in the lower right corner of the post send
you to discussion posts (but the tag links in the article navigation don't). I
got that discussion link from a page of new discussion articles. But if I
instead go to Coscott's submitted page, I get this link
[http://lesswrong.com/lw/ir4/open_thread_september_30_october_6_2013/], which
looks like it's in main, with tag links also in main.
Another PT:LoS question. In Chapter 8 ("Sufficiency, Ancillarity and all that"), there's a section on Fisher information. I'm very interested in understanding it, because the concept has come up in important places in my statistics classes, without any conceptual discussion of it - it's in the Cramer-Rao bound and the Jeffreys prior, but it looks so arbitrary to me.
Jaynes's explanation of it as a difference in the information different parameter values give you about large samples is really interesting, but there's one step of the math that I just c... (read more)
There's no first-order term because you are expanding around a maximum of the
log posterior density. Similarly, the second-order term is negative (well,
negative definite) precisely because the posterior density falls off away from
the mode. What's happening in rough terms is that each additional piece of data
has, in expectation, the effect of making the log posterior curve down more
sharply (around the true value of the parameter) by the amount of one copy of
the Fisher information matrix (this is all assuming the model is true, etc.).
You might also be interested in the concept of "observed information," which
represents the negative of the Hessian of the (actual not expected)
log-likelihood around the mode.
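In symbols, for a scalar parameter (a sketch of the expansion under discussion;
\hat\theta is the posterior mode, J the observed information, and the model is
assumed true):

    \log p(x \mid \theta) \approx \log p(x \mid \hat\theta)
    - \tfrac{1}{2}\, J(\hat\theta)\, (\theta - \hat\theta)^2,
    \qquad
    J(\hat\theta) = -\left. \frac{\partial^2 \log p(x \mid \theta)}
    {\partial \theta^2} \right|_{\theta = \hat\theta},

    \mathbb{E}\big[ J(\hat\theta) \big] \approx n\, I(\theta_0),
    \qquad
    I(\theta_0) = \mathbb{E}\!\left[ \left(
    \frac{\partial \log p(x_1 \mid \theta)}{\partial \theta}
    \right)^{\!2} \right]_{\theta = \theta_0}.

The first-order term vanishes because the gradient is zero at the mode, and
each observation adds, in expectation, one copy of the Fisher information
I(\theta_0) to the curvature.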
0alex_zag_al10y
ah, thank you! It makes me so happy to finally see why that first term
disappears.
But now I don't see why you subtract the second-order terms.
I mean, I do see that since you're at a maximum, the value of the function has
to decrease as you move away from it.
But, in the single-parameter case, Jaynes's formula becomes
\log{p(x|\theta)} = \log{p(x|\theta_0)} - \frac{\partial^2 \log{p(x|\theta)}}{\partial \theta^2}(\delta\theta)^2
But that second derivative there is negative. And since we're subtracting it,
the function is growing as we move away from the maximum!
1witzvo10y
Yes, that formula doesn't make sense (you forgot the 1/2, by the way). I believe
8.52/8.53 should not have a minus there and 8.54 should have a minus that it's
missing. Also 8.52 should have expected values or big-O probability notation.
This is a frequentist calculation so I'd suggest a more standard reference like
Ferguson
[http://www.amazon.com/Course-Sample-Chapman-Statistical-Science/dp/0412043718/ref=sr_1_12?ie=UTF8&qid=1381121913&sr=8-12&keywords=asymptotic+statistics]
There is too much unwarranted emphasis on ketosis when it comes to Keto diets, rather than hunger satiation. That might sound like a weird claim since the diet is named after ketosis, but when it comes to the efficacy of the Keto diet for weight loss, with no regard to potential health or cognitive effects, ketosis has little to do with weight loss. Most attempts to explain the Keto diet almost always start with an explanation of what ketosis is, with an emphasis on attaining ketosis rather than hunger satiation and caloric deficit. Here is an intro excerpt... (read more)
That is true. However, for many people "health benefits" above and beyond
losing weight are a major advantage of keto diets.
Losing weight isn't the be-all and end-all of the way you eat.
0niceguyanon10y
Yeah, that is why I maintain a keto-esque diet: because I believe in the long
term effects of reduced consumption of processed carbs and high glycemic index
foods. A scan of the front page of r/keto turns up only one post about positive
health/cognitive effects; almost every post has to do with "look at how much
weight I lost", which is a shame, because you're right when you say that losing
weight isn't the be-all and end-all of the way you eat.
Yet another newbie question. What's the rational way to behave in a prediction market where you suspect that other participants might be more informed than you?
Here's a toy model to explain my question. Let's say Alice has flipped a fair coin and will reveal the outcome tomorrow. You participate in a prediction market over the outcome of the coin. The only participant besides you is Bob. Also you know that Alice has flipped another fair coin to decide whether to tell Bob the outcome of the first coin in advance. What trades should you offer to Bob, and wha... (read more)
Stay out of the market.
Alternatively, if you have a strong prior, you can treat the bets of other
better-informed participants as evidence and do Bayesian updating. But it will
have to be a pretty strong prior to still bet against them.
Of course, if the market has both better-informed and worse-informed
participants and you know who they are, you can just bet together with the
better-informed participants.
0[anonymous]10y
Have you seen my reply to Coscott? You can't naively treat the actions of less
informed people as evidence, because they might be rational and try to mislead
you. (See "bluffing" and "sandbagging" in poker.) The game-theoretic view makes
everything simpler, in this case almost too simple.
5Scott Garrabrant10y
You will not take a bet with Bob. If he does not know the result of the coin, he
will not take anything worse than even odds.
You should clearly not offer him even odds. If you offer him anything else, he
will accept if and only if he knows you will lose.
4cousin_it10y
Hang on, I just realized there's a much simpler way to analyze the situations I
described, which also works for more complicated variants like "Bob gets a 50%
chance to learn the outcome, but you get a 10% chance to modify it afterward".
Since money isn't created out of nothing, any such situation is a zero-sum game.
Both players can easily guarantee themselves a payoff of 0 by refusing all
offers. Therefore the value of the game is 0. Nash equilibrium or
subgame-perfect equilibrium, it doesn't matter: rational players don't play.
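A sketch of that argument in symbols, assuming payoffs are money and the game
is zero-sum:

    u_1 + u_2 = 0, \qquad v = \max_{\sigma} \min_{\tau} u_1(\sigma, \tau).
    \text{Refusing all trades forces } u_1 = 0 \Rightarrow v \ge 0;
    \text{ player 2 can likewise force } u_2 = 0 \text{, i.e. } u_1 \le 0,
    \Rightarrow v \le 0. \text{ Hence } v = 0.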
That leads to the second question: which assumptions should we relax to get a
nontrivial model of a prediction market, and how do we analyze it?
Robin Hanson argues that prediction markets should be subsidized by those who want the information. (They can also be subsidized by "noise" traders who are not maximizing their expected money from the prediction market.) Under these conditions, the expected value for rational traders can be positive.
Good link, thanks. So Robin knows that zero-sum markets will be "no-trade" in
the theoretical limit. Can you explain a little about the mechanism of
subsidizing a prediction market? Just give stuff to participants? But then the
game stays constant-sum...
8badger10y
Basically, you'd like to reward everyone according to the amount of information
they contribute. The game isn't constant sum overall since the amount of
information people bring to the market can vary. Ideally, you'd still like the
total subsidy to be bounded so there's no chance for infinite liability.
Depending on how the market is structured, if someone thinks another person has
strictly more information than them, they should disclose that fact and receive
no payout (at least in expectation). Hanson's market scoring rules reward
everyone according to how much they improve on the last person's prediction. If
Bob participates in the market before you, you should just match his prediction.
If you participate before him, you can give what information you do have and
then he'll add his unique information later.
3cousin_it10y
Many thanks for the pointer to LMSR! That seems to answer all my questions.
(Why aren't scoring rules mentioned in the Wikipedia article on prediction
markets? I had a vague idea of what prediction markets were, but it turns out I
missed the most important part, and asked a whole bunch of ignorant questions...
Anyway, it's a relief to finally understand this stuff.)
5badger10y
They should be. Just a matter of someone stepping up to write that section. The
modern theory on market makers has existed for less than a decade and only
matured in the last few years, so it just hasn't had time to percolate out. Even
here on Less Wrong, where prediction markets are very salient and Hanson is well
known, there isn't a good explanation of the state of the art. I have a sequence
in the works on prediction markets, scoring rules, and mechanism design in an
attempt to correct that.
1cousin_it10y
That would be great! If you need someone to read drafts, I'd be very willing :-)
0ChristianKl10y
There's no problem with the game being constant sum.
0saturn10y
I always assumed it was by selling prediction securities for less than they will
ultimately pay out.
4Scott Garrabrant10y
The assumption you should relax is that of an objective probability. If you
treat probabilities as purely subjective, and take saying that P(X)=1/3 to mean
that my decision procedure thinks the world with not-X is twice as important as
the world with X, then we can make a trade.
Let's say I say P(X)=1/3 and you say P(X)=2/3, and I bet you a dollar that not-X.
Then I pay you a dollar in the world that I do not care about as much, and you
pay me a dollar in the world that you do not care about as much. Everyone wins.
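The arithmetic, with each side valuing the bet by its own probabilities:

    \mathbb{E}_{\text{me}}[\text{payoff}]
      = \tfrac{2}{3}(+1) + \tfrac{1}{3}(-1) = +\tfrac{1}{3}
      \quad (\text{I bet on not-}X \text{ with } P(X) = \tfrac{1}{3}),
    \qquad
    \mathbb{E}_{\text{you}}[\text{payoff}]
      = \tfrac{2}{3}(+1) + \tfrac{1}{3}(-1) = +\tfrac{1}{3}
      \quad (\text{you bet on } X \text{ with } P(X) = \tfrac{2}{3}).

Realized payoffs sum to zero, but each side's expectation under its own measure
is positive.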
This model of probability is kind of out there, but I am seriously considering
that it might be the best model. Wei Dai argues for it here
[http://lesswrong.com/lw/1iy/what_are_probabilities_anyway/].
0cousin_it10y
I know Wei's model and like it a lot, but it doesn't solve this problem. With
subjective probabilities, the exchange of information between players in a
market becomes very complicated, like Aumann agreement but everyone has an
incentive to mislead everyone else. How do you update when the other guy
announces that they're willing to make such-and-such bet? That depends on why
they announce it, and what they anticipate your reaction to be. When you're
playing poker and the other guy raises, how do you update your subjective
probabilities about their cards? Hmm, depends on their strategy. And what does
their strategy depend on? Probably Nash equilibrium considerations. That's why
I'd prefer to see a solution stated in game-theoretic terms, rather than
subjective probabilities.
ETA: see JGWeissman's and badger's comments, they're what I wanted to hear. The
answer is that we relax the assumption of zero-sum, and set up a complex system
of payouts to market participants based on how much information they give to the
central participant. It turns out that can be done just right, so the Nash
equilibrium for everyone is to tell their true beliefs to the central
participant and get a fair price in return.
2badger10y
Game theory in these settings is built on subjective probabilities! The standard
solution concept in incomplete-information games is even known as Bayes-Nash
equilibrium [http://en.wikipedia.org/wiki/Bayes-Nash_equilibrium].
The LMSR is stronger strategically than Nash equilibrium, assuming everyone
participates only once. In that case, it's a dominant strategy to be honest,
rather than just a best response. If people participate multiple times, the
Bayes-Nash equilibrium is harder to characterize. See Gao et al (2013)
[http://www.eecs.harvard.edu/econcs/pubs/Gao_ec13.pdf] for the best current
description, which roughly says you shouldn't reveal any information until the
very last moment. The paper has an overview of the LMSR for anyone interested.
0cousin_it10y
Thanks for the link to Gao et al. It looks like the general problem is still
unsolved, would be interesting to figure it out...
0Scott Garrabrant10y
Maybe I should try to turn this comment into a full discussion post.
What's the LMSR prediction market scoring rule? We've just started an ad-hoc prediction market at work for whether some system will work, but I can't remember how to score it.
The log market scoring rule (LMSR) depends on there being an order to the stated
probabilities, so the payoffs would be different for the order NS, SD, AK than
for the order AK, SD, NS.
Given a particular order, the payoff for the i-th probability submitted is
log(p_i^k) - log(p_{i-1}^k) if event k occurs. For example, if the order is NS,
SD, AK and the system does work, AK's payoff is log(.35) - log(.75). If the
system doesn't work, AK's payoff is log(.65) - log(.25).
I haven't seen this written about anywhere, but if you just have probabilities
submitted simultaneously and you don't want to fix an order, one way to score
them would be log(p_i^k) - (1/n) * sum_{j != i} log(p_j^k) (the log of the
probability person i gives to event k, minus the average of the log
probabilities everyone else gave, including the house, assuming there are n
participants plus the house). This is just averaging over the payoffs of every
possible ordering of submission. So, for these probabilities, AK's score if the
system worked would be log(.35) - (log(.75) + log(.5) + log(.5))/3.
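A minimal Python sketch of the sequential version (the submission probabilities
here are back-solved from the worked numbers above, so treat them as
illustrative):

    import math

    def lmsr_payoffs(submissions, house_prior, outcome):
        # Sequential log market scoring: each participant is paid
        # log(p_i) - log(p_{i-1}) evaluated at the realized outcome,
        # where p_0 is the house prior.
        prob_of_outcome = lambda p: p if outcome else 1 - p
        payoffs, prev = {}, house_prior
        for name, p in submissions:  # in the order they were submitted
            payoffs[name] = (math.log(prob_of_outcome(p))
                             - math.log(prob_of_outcome(prev)))
            prev = p
        return payoffs

    # Order NS, SD, AK; the system works (outcome=True).
    print(lmsr_payoffs([("NS", 0.5), ("SD", 0.75), ("AK", 0.35)],
                       house_prior=0.5, outcome=True))
    # AK's payoff comes out to log(.35) - log(.75), matching the example.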
From thought experiment to real experiment. I mean really, how could they NOT
build it once they thought of it? link
[https://en.wikipedia.org/wiki/Elitzur%E2%80%93Vaidman_bomb-tester]
I installed solar panels, which were pretty expensive, but pay back as they generate electricity.
The common question was "How long will it take to earn your investment back?" I understand why they're asking. The investment is illiquid, even more than a long-term bank deposit. But if I wanted to get my money "back," I'd keep it in my checking account. The question comes from a tendency to privilege a bird in the hand over those that are still in the bush.
The important point they should ask about is my pr... (read more)
Correct.
If you want to be even more correct :-) you should estimate your IRR (internal
rate of return [http://en.wikipedia.org/wiki/Internal_rate_of_return]) and
compare it with your opportunity costs for the money invested.
0JoshuaFox10y
Yes, good point. It took me a while to figure out by myself the best way I
should be calculating my rate of return on my variable-return investments,
before discovering this in Excel.
But in this case, the panels produce (hopefully) a pretty constant annual amount
of electricity, and the price I get is a fixed amount, so it seems that
calculating IRR is easy.
As long as we are on the topic, maybe the smart folks here can explain,
mathematically, why the summation formula for IRR does not admit a closed-form
solution? I asked on Quant StackExchange
[http://quant.stackexchange.com/questions/8576/why-is-there-no-closed-form-equation-for-xirr]
and didn't get much of an answer.
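For what it's worth, setting NPV to zero makes the IRR a root of a degree-n
polynomial in 1/(1+r), and by Abel-Ruffini polynomials of degree five and
higher have no general closed-form solution, so IRR is found numerically. A
minimal bisection sketch (the cash flows are made up):

    def npv(rate, cashflows):
        # Net present value of cashflows[t] received at the end of year t.
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
        # Bisect for the rate where NPV crosses zero (assumes one sign change).
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if npv(mid, cashflows) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2

    # Hypothetical panels: pay 10,000 now, receive 900/year for 20 years.
    print(irr([-10_000] + [900] * 20))  # about 6.4% with these made-up numbers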
2Lumifer10y
Your calculation is presumably for a fairly long term. In the long term prices
don't remain fixed, things break down, need maintenance, etc. For example, hail
might damage some of your panels. Or your roof might start to leak and the
presence of the panels will substantially add to the cost of repairing it.
0JoshuaFox10y
Excellent. I had an overly-simplistic mental model in which the panels would
last until they fail.
But yes, unexpected costs that are nonetheless below full price of the panels
are a real possibility.
As the partial government shutdown enters its third day, many House Republicans are determined to keep fighting, even though they see no plausible way out of the current impasse, because they've come so far they cannot imagine backing down now. "I think there's a sense that for us to do a clean CR now -- then what the hell was this about?" one Republican House member told me. "So I don't think it's going to end anytime soon."
The occasional phenomenon where people go downvote every comment by someone they disagree with could be limited by only allowing people to downvote comments made within the last week.
Or limit the number of votes one person can give to another within a time period. I think most vendetta voting happens in the heat of the moment. I don't like not being able to vote on old comments, or skewing the voting on either side.
I like this fix. If the mass voters tend to have low karma, you could also make
this a fix that only applies to people below some karma threshold.
1gjm10y
In the only cases I've seen where I've had grounds for suspicion about who was
doing the karmassassination, the person I thought was the culprit was a
high-karma long-established LWer.
(But there is some bias here; such people are likely to be more salient as
candidates for who the culprit is. And in no case have I been very sure who was
responsible.)
1hyporational10y
I think even the best of us are susceptible to the keyboard warrior berserk
mode.
I always wondered if an algorithm could be implemented akin to the PageRank algorithm. A vote from someone counts more if the person votes seldom, and it counts more if the person is upvoted frequently by people with high vote weight.
Could you explain this bit? I'd expect someone who votes seldom to have lower
quality votes, because ey're likely to read less of LW.
2Scott Garrabrant10y
The assumption is that we will capture the variable of "how well do they know
lesswrong" by measuring how much they are upvoted. I think the most important
part is that votes by people with high karma give more karma. The best kind of
upvote is one by someone who is very very popular on lesswrong because they say
lots of important stuff, but almost never thinks anything is worth upvoting.
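A toy sketch of the proposed weighting; everything here, from the damping
constant to the normalization, is a made-up illustration of the shape of the
idea, not a worked-out design:

    from collections import defaultdict

    def vote_weights(upvotes, rounds=20):
        # upvotes: list of (voter, author) pairs, one per upvote.
        # A vote counts more when the voter votes seldom and is themselves
        # upvoted by high-weight voters, akin to PageRank.
        users = {u for pair in upvotes for u in pair}
        weight = {u: 1.0 for u in users}
        votes_cast = defaultdict(int)
        for voter, _ in upvotes:
            votes_cast[voter] += 1
        for _ in range(rounds):
            score = defaultdict(float)
            for voter, author in upvotes:
                # seldom voters spread the same influence over fewer votes
                score[author] += weight[voter] / votes_cast[voter]
            total = sum(score.values()) or 1.0
            weight = {u: 0.15 + 0.85 * score[u] * len(users) / total
                      for u in users}  # damping as in PageRank
        return weight

    print(vote_weights([("a", "b"), ("a", "c"), ("b", "c"), ("c", "b")]))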
1Rob Bensinger10y
Ah. If that's the goal, I suggest increasing the impact of votes the more
upvoted someone is, and increasing the upness of votes the more often she
downvotes relative to upvoting. If I'm popular and upvote a whole lot of things,
that seems like a possible reason to weight my downvotes more strongly. But if
I'm popular and don't vote for much of anything at all, it's not as clear to me
why that's a reason to take my vote more seriously than if I were equally
popular but participated in the voting system more. The latter just seems to
discourage popular people from voting very much.
If we want to encourage our popular people to vote more, we should increase the
power of their votes the more votes they make, rather than decreasing it.
6Scott Garrabrant10y
I did not know this was a thing, but I do not think this is a worthwhile fix. If
a user experiences a sudden drop in karma, and a lot of -1 posts, they should be
able to report the user, and a mod should be able to check and punish them and
fix the problem. We do not want a fix which shows up as an inconvenience often
for a problem which is only rarely a problem.
7Risto_Saarelma10y
I've never seen a mod capable of checking who downvoted what reacting in any way
when this
[http://lesswrong.com/lw/77b/please_do_not_downvote_every_comment_or_post/] has
[http://lesswrong.com/lw/9l7/whats_going_on_here/] come
[http://lesswrong.com/lw/fnk/meta_retributive_downvoting_why/] up
[http://lesswrong.com/lw/hgm/open_thread_may_1731_2013/919h].
8Scott Garrabrant10y
I would guess that a mod being capable of checking that would be an easier or at
least not much harder fix than a time limit on voting down.
5shminux10y
Having been on both sides of a flash downvote (guilty!), I can tell you that
these vendettas are not very effective on a forum of this size. Enough people
read old comments and tend to upvote comments they otherwise wouldn't if they
feel it's unfairly penalized, even if the comment is old. It's a lot more
effective to post a quality reply which convinces other readers that the comment
in question deserves a downvote.
0Protagoras10y
Hmmm. It's true that one of the cases where I'm most likely to upvote is where I
see a comment that looks fine to me that has a negative point total. But flash
downvoting won't necessarily produce any negative point totals, and I'm a lot
less likely to spring into action for a post that merely doesn't have quite as
many positive points as it should (since I rarely have a very firm idea of how
many positive points anything should have). Then again, perhaps in cases where
nothing actually goes negative, not much harm is really being done anyway. So I
may agree with you, but I'm not sure you've got exactly the right reason.
I guess mostly my own feeling is that the karma system seems to work pretty well
as is. When I see a comment getting downvoted to oblivion, it usually seems to
deserve it, and the quality of conversation around here usually seems above the
internet average. I'm sure karma doesn't precisely measure what it's supposed to
measure (whatever that is anyway), but I'm inclined to suspect that trying to
make it do so is likely to end up being more trouble than it's worth.
What's the relationship between Epistemology and Ontology? Are both worthy of attention, or do you get the other for free when you deal with one of them?
An exceedingly complicated and controversial question! Some have argued that you
only need epistemology, or even that epistemology is all you can get; you can
only know what you can know, so you might as well confine your attention to the
knowable, and not worry whether there might be things that are which are not
knowable. Others claim that it's obvious that whether things exist or not surely
doesn't depend on whether they're known, and it's even less likely that it could
depend on such a suspicious, hypothetical property as knowability. Of course,
the latter view doesn't entail that one should favor ontology over epistemology,
but trying to balance both introduces very difficult problems of how to tie the
two together, so it is fairly common to take one as primary and use it to settle
questions about the other.
One might wonder what practical consequences one choice or the other might have,
and here again there is much controversy. The pro-ontology faction claims that
emphasizing epistemology encourages subjectivism and relativism and weakens our
grasp on reality. The pro-epistemology faction replies that emphasizing ontology
is exactly as relative (or non-relative) as emphasizing epistemology, it's just
that when ontology is emphasized, biases are hidden because the focus is turned
away from questions of how actual humans arrive at their ontological
conclusions.
Personally, I am tentatively on the side of the epistemologists, but it seems to
me that details matter a great deal, and there are far too many details to
discuss in a comment (indeed, a book is likely insufficient).
0ChristianKl10y
Even when insufficient, is there a book or other source that you could
recommend?
0Protagoras10y
Hmmm. Bas van Fraassen's The Scientific Image takes the side of the
epistemologists on scientific questions. I take Kant to be an advocate for the
epistemologists in his Critique of Pure Reason, though he makes some effort to
be a compromiser. Rae Langton argues that the compromises in Kant are genuinely
important, and so advocates a role for both epistemology and ontology, in her
Kantian Humility. Heidegger seemed to want to make ontology primary, but I can't
really recommend anything he wrote. It's difficult to know exactly what to
recommend, because this issue is thoroughly entangled with a host of other
issues, and any discussion of it is heavily colored (and perhaps heavily
distorted) by whichever other issues are also on the table. Still, those are a
few possibilities which come to mind.
0ChristianKl10y
When focusing on an issue such as the friendliness of an FAI, do you think
that's in the domain of epistemology or ontology?
0Protagoras10y
I feel like it's more epistemological, but then I tend to think everything is.
Perhaps it is another symptom of my biases, but I think it more likely that
trying to build an AI will help clarify questions about ontology vs.
epistemology than that anything in our present knowledge of ontology vs.
epistemology will help in devising strategies for building an AI.
0ChristianKl10y
Cyc calls itself an ontology. Doesn't any AI need such an ontology to reason
about the world?
0Protagoras10y
Well, this would be an example of one of the projects that I think may teach us
something. But if you are speaking of "an ontology," rather than just
"ontology," you may be talking about some theory of relativized ontologies, but
more likely you're not speaking about ontology in the same way as those who
prioritize it over epistemology. Those who make epistemology primary still talk
about things, they just disagree with the ontologists about complicated aspects
of our relationship to the things and what our talk about the things means.
0ChristianKl10y
I'm not sure. Barry Smith, who leads Basic Formal Ontology, which gets used in
medical informatics, writes in his "Against Fantology
[http://ontology.buffalo.edu/bfo/Against_Fantology.pdf]" paper sentences like:
Bayesianism as described by Yvain
[http://slatestarcodex.com/2013/08/06/on-first-looking-into-chapmans-pop-bayesianism/]
seems a bit like what Barry Smith describes as spreadsheet ontology, with
probability values instead of logical true/false values.
Even if ontological questions can't be settled in a way that decides which
ontology is more correct than another, it seems to me that you have to settle
on one ontology to use for your AGI. Different choices of how you structure
that ontology will have a substantial effect on the way the AGI reasons.
I'm requesting recommendations for guides to meditation.
I've had great success in the past with 'sleeping on it' to solve technical problems. This year I've been trying power-napping during lunch to solve the morning's problems in the afternoon, though I'm not sure the success of power-napping is any better than the control group. The next step is to see if I can step away from the old hamfisted methods and get results from meditation.
You need to formulate a (much) more precise question. And, preferably, one which
can be answered with more than "It depends".
2blacktrance10y
Unfortunately, I don't even know enough about the issue to be able to formulate
it much better. Of the controversial issues that have an impact on politics,
it's one of the ones I know least about. The most I can elaborate is: To what
extent does fracking have an environmental impact, particularly with regard to
groundwater?
Does anyone have a good resource on learning how to format graphs and diagrams?
What are the effects on the reader of having 90%, 100% or 110% spacing between letters? When should one center text? What about bold and italics?
Is there a good research-based resource that explains the effects that those choices have on the reader?
Don't have a formal source, but I can give you a quick rundown of the advice my
group ends up giving to every student we work with (a minimal plotting sketch
applying several of these follows the list):
* Label the dang axes.
* Make the axis labels bigger.
* Make histogram lines thicker; make dots larger.
* If the dots are very dense, don't use dots, use a color scale.
* For the sake of the absent gods, don't make your colour scale
brown-yellow-lightgray-black-darkbrown-darkgray-darkyellow, as one often-used
plotting package did by default. (It was an inheritance from the early
nineties, and honestly it was still weird.) Make it something that humans
naturally read as a scale, eg blue to red by way of violet, dark green to
light green, or blue to red by way of the rainbow.
* On a white background, do not use yellow or bright green unless the
individual dots or areas are large. Lines, generally speaking, are not large.
* Put a legend in one corner, explaining what the line styles mean.
* If you're using (eg) triangles for one data type and circles for another,
make the points bigger. Yes, it likely looks perfectly clear on your screen,
to your young eyes, at a distance of a foot. You will eventually present it
on a crappy twenty-year-old projector to men of sixty and seventy sitting at
the back of a large auditorium. EMBIGGEN THE DANG POINTS. Also, use colours
to further clarify the difference, unless colour is indicating a different
dimension of information.
* Make bin sizes a round number - 1, 2, or 5 - in a unit of interest.
* If plotting numbers of something, indicate the bin size by labeling the y
axis (for example) "Events / 2 MeV".
* As a general rule, make both a linear and a semilog plot. You can skip the
linear if there are no features of interest at high densities, and the
semilog if there are no features of interest at low densities.
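As promised above the list, a minimal matplotlib sketch applying several of
these rules (the data and sizes are arbitrary):

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    energy = rng.exponential(scale=20, size=1000)  # fake data, in "MeV"

    fig, ax = plt.subplots(figsize=(8, 5))
    bins = np.arange(0, 100, 2)  # round bin size: 2 MeV
    ax.hist(energy, bins=bins, histtype="step", linewidth=2.5, label="signal")
    ax.set_xlabel("Energy [MeV]", fontsize=16)   # label the dang axes, big
    ax.set_ylabel("Events / 2 MeV", fontsize=16)
    ax.tick_params(labelsize=14)
    ax.legend(loc="upper right", fontsize=14)    # legend in one corner
    ax.set_yscale("log")  # the semilog version; drop this line for linear
    plt.tight_layout()
    plt.show()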
3A1987dM10y
Here are a few reasons not to do that
[http://root.cern.ch/drupal/content/rainbow-color-map]. (Not to mention the
possibility of colour-blind viewers.)
0NancyLebovitz10y
Thanks for the link. I recommend reading it to anyone who's interested in how
data gets (mis)represented.
0RolfAndreassen10y
Well, you have to admit it's still a big improvement over the old ROOT default.
:)
0A1987dM10y
Well, the old default does make local variations more visible, especially for
the colour-blind. OTOH I agree that telling at a glance which of two widely
separated spots on the graph has a higher value is all but outright impossible
with it.
1ChristianKl10y
How do I know that they are big enough?
3RolfAndreassen10y
When the seventy-year-old at the back of the large auditorium with the cheap,
ancient projector can read them. Alternatively, when your boss stops
complaining. Lines are too thick if they overlap; dots are too big when you
can't easily tell the difference between high and medium density. (And if this
happens at the default dot size, switch to a colour scale.)
If you're doing PowerPoint or similar presentation tools, you want your axis
labels to be the same size as your bullet-point text. One trick I sometimes use
is to white out the axis labels in the image file of my plot, and put them back
in using the same text tool that's creating my bullets.
0Douglas_Knight10y
How many of those suggestions could be replaced by "use ggplot2"?
0RolfAndreassen10y
Within our group, none, because then we'd have to learn R. For ChristianKl,
quite possibly all of them.
4daenerys10y
Those sorts of questions are asked in a field called Information Visualization,
which is a part of Human Factors Engineering.
0ChristianKl10y
What's a good resource to learn about it? Is there a textbook you can recommend?
4Lumifer10y
Look up Edward Tufte [http://www.edwardtufte.com/tufte/], and in particular his
seminal book The Visual Display of Quantitative Information
[http://www.amazon.com/The-Visual-Display-Quantitative-Information/dp/0961392142].
0A1987dM10y
Look at the graphs at 50% of their actual size (or less) and notice how much
effort it takes you to read them. I'd guess that decently correlates with how
much effort it takes someone with worse visual acuity to read them at full size.
An earthrise that might be witnessed from the surface of the Moon would be quite unlike moonrises on Earth. Because the Moon is tidally locked with the Earth, one side of the Moon always faces toward Earth. Interpretation of this fact would lead one to believe that the Earth's position is fixed in the lunar sky and no earthrises can occur; however, the Moon librates slightly, which causes the Earth to draw a Lissajous figure on the sky. This figure fits inside a rectangle 15°48' wide and 13°20' high (in angular dimensions), while the angular diameter of the Earth as seen from the Moon is only about 2°. This means that earthrises are visible near the edge of the Earth-observable surface of the Moon (about 20% of the surface). Since a full libration cycle takes about 27 days, earthrises are very slow, and it takes about 48 hours for Earth to clear its diameter.
The Earthrise videos from Apollo were shot while orbiting around the Moon.
0ZankerH10y
The Earthrise videos were shot while orbiting around the Moon in an
(approximately) equatorial circular orbit, not from the surface. That's why the
effect is similar.
1Thomas10y
Agree. My intention was and still is that people would understand how peculiar
the sky is above the Moon. For one, there is Earth just hanging (oscillating a
little, it's true) in the sky. Here we have Polaris doing the same, but in the
northern, not the equatorial, sky. It's worth mentioning that from the Moon's
south pole you always see the Sun and the Earth: the Sun orbiting around, the
Earth not.
So the other week I read about viewquakes. I also read about things a CS major could do that aren't covered by the usual curriculum. And then this article about the relationship escalator. Those gave me not quite a viewquake but clarified a few things I already had in mind and showed me some I had not.
What I am wondering now is: can anyone here give me a non-technical viewquake? What non-technical resources can give me the strongest viewquake, akin to the CS major answer? By non-technical I mean material that doesn't fall into the usual STEM spectrum peop... (read more)
Many non-technical viewquakes are deep in mindkilling territory. I guess I
better refrain from giving specific examples, but it may seem from outside like
this:
A: I read this insightful book / article / website and it completely changed the
way I see the world.
B: Dude, you are completely brainwashed and delusional.
The lesson is that "dramatically changing one's world view" is not necessarily
the same as "corresponding better with the territory". And it can be sometimes
difficult to evaluate the latter. Just because many people firmly believe theory
X is true, does not make it true. Just because many people firmly believe theory
X is false, does not make it false. For many theories you will find both kinds
of people.
4RomeoStevens10y
I had a viewquake a few years ago when I stayed silent with a group of friends I
normally would have interacted with. Their subconscious prodding of me to
fulfill my usual social role revealed to me that I even had a specific role in
the group in the first place, and subsequently opened me up to a lot of things
that I had disregarded before.
0Barry_Cotter10y
In English one would use STEM (science, technology, engineering, mathematics)
instead of MINT.
0Metus10y
I knew something was off with the abbreviation. Thanks for the correction.
-3ChristianKl10y
How about reading one of the books in the first link? Otherwise
https://www.quora.com/Jobs-1/Whats-something-that-is-common-knowledge-at-your-work-place-but-would-be-mind-blowing-to-the-rest-of-us
[https://www.quora.com/Jobs-1/Whats-something-that-is-common-knowledge-at-your-work-place-but-would-be-mind-blowing-to-the-rest-of-us]
is a good thread.
Could you explain in what way that answer caused a viewquake? I see some
information that some people might not have known beforehand but it doesn't seem
to me that fundamental.
Typing isn't taught in universities but being reminded that typing is important
for programming doesn't change anything groundbreaking about the world.
5Emile10y
Gah!
That annoying website not only wants your email, it also wants you to fill in a
bunch of information so it can "send you updates", just so you are allowed to
read it.
And then when you join, it will display a message to all your contacts that you
are "following their answers", of course without telling you anything.
2[anonymous]10y
Contacts on what? Your comment makes it sound like it will use its
authorization with a Google Account to send spam. And I just clicked the
permission button 30 seconds before reading it.
More specifically: I connected to Quora using my Facebook account. When I connected, within the Quora system the message "Viliam is following your questions and answers" was sent to all Quora users who are also my Facebook contacts.
As far as I know, it didn't do anything outside of Quora. But even this is kinda creepy. I discovered it when one of those users asked me in a FB message why exactly am I following his questions (in given context, it seemed like a rather creepy action by me). I didn't even know what he was speaking about.
So the lesson is that if Quora later shows you announcements like: "XYZ is interested in your questions", it most likely means that XYZ simply joined Quora, and Quora knows you two know each other. (Also, you can remove the people you are following in Quora settings. You probably didn't even know you are "following" them, did you?)
I hate this kind of behavior, when social networks pretend their users have some activity among them, when in reality they don't. But I generalize this suspicion to all software. Whenever some software tells me: "Your friend XYZ wants you to do this, or tells you that", I always assume it is a lie. And if my friends XYZ really wants me to do something, they should tell me that using their own words outside of the system I don't know. For example by phone, email, or facebook (not auto-generated) message.
Already read most of them.
I quote myself:
The CS example showed that a college curriculum is not comprehensive and that
there are quite a few concrete skills worth naming, improving on the sorry
saying "You go to college not only for the curriculum but so much more".
0ChristianKl10y
Did you previously expect that college curriculums are actually optimized to
teach all skills that are needed on the job?
2Metus10y
No, but neither did I think that the relationship escalator is a natural state
of the world. But having something like that spelled out when one has not
thought about it can be very helpful.
In February 2013, IBM announced that Watson software system's first commercial application would be for utilization management decisions in lung cancer treatment at Memorial Sloan–Kettering Cancer Center in conjunction with health insurance company WellPoint.[13] IBM Watson’s business chief Manoj Saxena says that 90% of nurses in the field who use Watson now follow its guidance.[14]
How do you know, when you work on a project like Watson, whether the work you are doing is dangerous and could result in producing a UFAI? Didn't they essentially build an oracle AGI?
What heuristic should someone building a new AI use to decide whether it's essential to talk with MIRI about it?
Why would they talk to MIRI about it at all?
They're the ones with the actual AI expertise, having built the damn thing in
the first place, and have the most to lose from any collaboration (the source
code of a commercial or military grade AI is a very valuable secret).
Furthermore, it's far from clear that there is any consensus in the AI community
about the likelihood of a technological singularity (especially the subset which
FOOMs belong to) and associated risks. From their perspective, there's no reason
to pay MIRI any attention at all, much less bring them in as consultants.
If you think that MIRI ought to be involved in those decisions, maybe first
articulate what benefit the AI researchers would gain from collaboration in
terms that would be reasonable to someone who doesn't already accept any of the
site dogmas or hold EY in any particular regard.
0ChristianKl10y
As far as I understand, it's MIRI's position that they ought to be involved
when dangerous things might happen.
But what about someone who does accept the site dogmas in principle but still
does some work in AI?
0Moss_Piglet10y
I'm sorry, I didn't get much sleep last night, but I can't parse this sentence
at all. Could you rephrase it for me?
0drethelin10y
well step one is ever having heard of MIRI or thought about UFAI in any context
except that of HAL or Skynet
0ChristianKl10y
I doubt that's enough. If someone still wants to do AI research after having
heard of UFAI, he needs some decision criteria to decide when it's time to
contact MIRI.
0shminux10y
The decision criteria are easy: talk/listen to the recognized AI research
experts with a proven track record. Then weigh their arguments, as well as those
of MIRI. It's the weight assignment that's not obvious.
0ChristianKl10y
If you have a potentially dangerous idea then talking to recognized AI research
experts might itself be dangerous.
0shminux10y
No, not really. If the situation is anything like that in math, physics,
chemistry or computer science, unless you put your 10k hours into it, your
odds of coming up with a new idea are remote.
0ChristianKl10y
I don't believe that to be true, as ideas can sometimes come from integrating
knowledge of different fields.
An anthropologist who learned a new paradigm about human reasoning from
studying the way some African tribe reasons about the world can reasonably
bring a new idea into computer science. He will need some knowledge about
computer science, but no 10k hours.
In http://meaningness.com/metablog/how-to-think
[http://meaningness.com/metablog/how-to-think] David Chapman describes how he
solved AI problems by using various tools.
When tackling one problem, the problem wasn't that difficult if you had
knowledge of a certain field of logic. He solved another problem through
anthropology. According to him, advances are often a function of having access
to a particular mental tool to which no one else who tackled the problem had
access.
Putting in a lot of time means that you have access to a lot of tools and know of
many problems. But if you put all your time into learning the same tools that
people in the field already use, you probably don't have many mental tools that
few people in a given field possess.
Paradigm changing inventions often come into fields through people who are
insider/outsiders. They are enough of an insider to understand the problem but
they bring expertise from another field. See "The Economy of Cities" by Jane
Jacobs for more on that point.
0shminux10y
I concede that a math expert can start usefully contributing to a math-heavy
area fairly quickly. Having expertise in an unrelated area can also be useful,
as a supplement, not as a substitution. I do not recall a single amateur having
contributed to math or physics in the last century or so.
0ChristianKl10y
Do you consider the invention of the Chomsky hierarchy to lie outside the field
of math? Do you think that Chomsky had 10k hours of math expertise when he wrote
it down?
Regardless, having less than 10k hours in a field and being an amateur are two
different things.
I don't hold economists in very high regard, but I would expect that one of
them has contributed at least a little bit to physics.
I remember chatting with a friend who studies math and computer science. My
background is bioinformatics. If my memory is right, he was working on a
project that an applied mathematics group gave him because he knew something
about mathematical technique XY. He needed to find some constants that were useful for
another algorithm. He had a way to evaluate the utility of a certain value as a
constant. His problem was that he had a 10 dimensional search space and didn't
really know how to search effectively in it.
In my bioinformatics classes I learned algorithms that you can use for a task
like that. I'm no math expert but in that particular problem I still could
provide useful input.
I would expect that there are quite a few areas where statistical tools
developed within bioinformatics can be useful for people outside of it.
But to come back to the topic of AI: a math expert working in some obscure
subfield of math could plausibly do something that advances AI a lot without
being an AI expert himself.
0shminux10y
Don't know. Maybe a resident mathematician would chime in.
I am not aware of any. Possibly something minor, who knows.
Yes, indeed, that sounds quite plausible. Whether this something is important
enough to be potentially dangerous is a question to be put to an expert in the
area.
I saw this post from EY a while ago and felt kind of repulsed by it:
I no longer feel much of a need to engage with the hypothesis that rational agents mutually defect in the oneshot or iterated PD. Perhaps you meant to analyze causal-decision-theory agents?
Never mind the factual shortcomings, I'm mostly interested in the rejection of CDT as rational. I've been away from LW for a while and wasn't keeping up on the currently popular beliefs on this site, and I'm considering learning a bit more about TDT (or UDT or whatever the current iteration is called... (read more)
The question "which decision theory is superior?" has this flavor of "can my dad beat up your dad?"
CDT is what you use when you want to make decisions from observational data or RCTs (in medicine, and so on).
TDT is what you use when "for some reason" your decisions are linked to what counterfactual versions/copies of yourself decided. Standard CDT doesn't deal with this problem, because it lacks the language/notation to talk about these issues. I argue this is similar to how EDT doesn't handle confounding properly because it lacks the language to describe what confounding even means. (Although I know a few people who prefer a decision algorithm that is in all respects isomorphic to CDT, but which they prefer to call EDT for I guess reasons having to do with the formal epistemology they adopted. To me, this is a powerful argument for not adopting a formal epistemology too quickly :) )
I think it's more fruitful to think about the zoo of decision theories out there in terms of what they handle and what they break on, rather than in terms of anointing some of them with the label "rational" and others with the label "irrational." These labels carry no information. There is probably no total ordering from "best to worst" (for example people claim EDT correctly one-boxes on Newcomb, whereas CDT does not. This does not prevent EDT from being generally terrible on the kinds of problems CDT handles with ease due to a worked out theory of causal inference).
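(To make the Newcomb comparison concrete, here is the standard calculation with the conventional illustrative numbers: a predictor that is right 99% of the time, $1,000 in the transparent box, $1,000,000 in the opaque one. EDT conditions on the act:

E[\text{one-box}] = 0.99 \cdot 1{,}000{,}000 = 990{,}000
E[\text{two-box}] = 0.99 \cdot 1{,}000 + 0.01 \cdot 1{,}001{,}000 = 11{,}000

so it one-boxes. CDT treats the boxes as already filled, so two-boxing dominates: it yields exactly $1,000 more in either causal state.)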
I don't like the notion of using different decision theories depending on the
situation, because the very idea of a decision theory is that it is consistent
and comprehensive. Now if TDT were formulated as a plugin that seamlessly
integrated into CDT in such a way that the resulting decision theory could be
applied to any and all problems and would always yield optimal results, then
that would be reason for me to learn about TDT. However, from what I gathered
this doesn't seem to be the case?
4Slackson10y
TDT performs exactly as well as CDT on the class of problems CDT can deal with,
because for those problems it essentially is CDT. So in practice you just use
normal CDT algorithms except for when counterfactual copies of yourself are
involved. Which is what TDT does.
0Vaniver10y
I argue that there's a mapping in the opposite direction: if you add extra
nodes to any problem that looks like a problem where TDT and CDT disagree, and
adjust which node is the decision node, then you can make CDT and TDT agree (and
CDT give the "TDT solution"). This is obvious in the case of Newcomb's Problem,
for example.
2IlyaShpitser10y
I guess it's true that CDT needed lots of ideas to work. TDT has one idea: "link
counterfactual decisions together." So it is not an unreasonable view that TDT
is an addendum to CDT, and not vice versa, since CDT is intellectually richer.
3Slackson10y
This is essentially what the TDT paper argues. It's been a while since I've read
it, but at the time I remember being sufficiently convinced that it was strictly
superior to both CDT and EDT in the class of problems that those theories work
with, including problems that reflect real life.
-2Andreas_Giger10y
I think people have slightly misunderstood what I was referring to with this:
My question was whether there is a conclusive, formal proof for this, not
whether this is widely accepted on this site (I already realized TDT is
popular). If someone thinks such a proof is given somewhere in an article (this
one? [http://intelligence.org/files/TDT.pdf]) then please direct me to the point
in the article where I can find that proof. I'm very suspicious about this
though, since the wiki makes blatantly false claims, e.g. that TDT performs
better in one-shot PD than CDT, while in fact it can only perform better if
access to source code is given. So the wiki article feels more like promotion
than anything.
Also, I would be very interested to hear about what kind of reaction from the
scientific community TDT has received. Like, very very interested.
-4Oscar_Cunningham10y
Then no. In "normal" situations CDT does as well as anything else.
Silk Road drugs market shut down; alleged operator busted.
Bitcoin drops from $125 to $90 in heavy trading.
Edited to add: Well, that was quick. Doesn't look like the bottom fell out.
Edited again: Here's the criminal complaint against the alleged operator. The details at least make sense as a story: in the early days of Silk Road, the alleged operator had really lousy opsec, linking his name to the Silk Road project. Then later, he seems to have got scammed by a guy who first threatened to extort him, then pretended to be a hit-man who would kill the extortionist.
If anyone wants to read all the primary source documents, see http://www.reddit.com/r/SilkRoad/comments/1nmiyb/compiling_all_dprrelevant_pages_suggestions_needed/
I need some advice. I recently moved to a city and I don't know how to stop myself from giving money to strangers! I consider this charity to be questionable and, at the very least, inefficient. But when someone gets my attention and asks me specifically for a certain amount of money and tells me about themselves, I won't refuse. I don't even feel annoyed that it happened, but I do want to have it not happen again. What can I do?
The obvious precommitment to make is to never carry cash. I am strongly considering this and could probably do so, but it is nice to be able to have at least enough for a bus trip, a quick lunch or for some emergency. I have tried to give myself a running tally of number of people refused and when that gets to, say, 20, I would donate something to a known legitimate charity. While doing so makes me feel better about passing beggars by, it doesn't help once someone gets me one-on-one. So I've never gotten to that tally without resetting it first by succumbing to someone. Is there some way to not look like an easy mark? Are there any good standard pieces of advice and resources for this?
However, I always find these exchanges to be really fascinating from the ... (read more)
The basic answer is not to talk to these people.
Do not answer questions about what time it is, do not enter any conversations at all. At most say "sorry" and walk on.
Just. Do. Not. Talk. To. Them.
Assume that they're scamming. It will often be true, and even when it's honest, giving money to panhandlers is an inefficient use of charity. Remind yourself that you already have a budget for charity and that you're sending it to GiveWell or MIRI or whatever.
An idea: Next time try to estimate how much money such a person makes. As a rough estimate, divide the money you gave them by the length of your interaction. (To get a more precise estimate, you would have to follow them and observe how much other people give them, but that could be pretty dangerous for you.)
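(A worked example with made-up numbers: 5 euros given after a 3-minute pitch is

\frac{5\ \text{EUR}}{3\ \text{min}} \times 60\ \text{min/hr} = 100\ \text{EUR/hr},

which is the comparison that makes the estimate sobering.)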
Years ago I made a similar estimate for a beggar on a street (people dropped money into his cap, so it was easy to stand nearby, watch for a few minutes and calculate), and the conclusion was that his income was above average for my country.
By the way, these people destroy a lot of social capital by their actions. They make life more difficult for people who genuinely want to ask for the time, or how to get somewhere, or similar things. They condition people against having small talk with people they don't know. -- So if you value people being generally kind to strangers, remember that these scammers make their money by destroying that value.
Interesting statements I ran into with regard to the kabuki-theater aspects of the so-called United States federal government shutdown of 2013. This resulted in, among other things, the closing down of websites.
I was interested to learn that this kind of thing has a name: Washington Monument Syndrome.
As a sysadmin, if I were to be furloughed indefinitely I would probably spin down any nontrivial servers. A server that goes wrong and can't be accessed is a really, really, really, really terrible-horrible-no-good-very-bad thing. And things go wrong on a regular basis in normal times; when the government is shut down and a million things that get done every day suddenly stop being done, something somewhere is going to break. Some 12-year-old legacy cron job sitting in an obscure corner of an obscure server written by a long-departed contractor is going to notice that the foobar queue is empty, which turns out to be an undefined behavior because the foobar queue has always had stuff going through it before, so it executes an else branch it's never had occasion to execute, which sends raw debugging information to a production server because the contractor was bad at things, and also included passwords in their debugging because they were really bad at things...
This is actually a terrible example of Washington Monument Syndrome.
" Hi, Server admin here... We cost money as does our infrastructure, I imagine a site that large costs a very good deal, we aren't talking five bucks on bluehost here.
I am private sector, but if I were to be furloughed for an indeterminate amount of time you really have two options. Leave things on autopilot until the servers inevitably break or the site crashes at which point parts or all of it will be left broken without notice or explanation. Or put up a splash page and spin down 99% of my infrastructure (That splash page can run on a five dollar bluehost account) and then leave. I won't be able to come in while furloughed to put it up after it crashes.
If you really think web apps keep themselves running 24/7 without intervention, we really have been doing a great job with that illusion, and I guess the sleepless nights have been worth it to be successfully taken for granted."
I've heard several stories in the last few months of former theists becoming atheists after reading The God Delusion or a similar Four-Horsemen tract. This conflicts with my prior model of those books being mostly paper applause lights that couldn't possibly change anyone's mind.
Insofar as atheism seems like super-low-hanging fruit on the tree of increased sanity, having an accurate model for what gets people to take a bite might be useful.
Has anyone done any research on what makes former believers drop religion? More generally, any common triggers that lead people to try to get more sane?
Edit: Found a book: Deconversion: Qualitative and Quantitative Results from Cross-Cultural Research in Germany and the United States of America. It's recent (2011) and seems to be the best research on the subject available right now. Does anyone have access to a copy?
I can tell you what triggered me becoming an atheist.
I was reading a lot of Isaac Asimov books, including the non-fiction ones. I gained respect for him. After learning he was an atheist, it started being a possibility I considered. From there, I was able to figure out which possibility was right on my own.
This seems to be a trend. I never seriously worried about animals until joining felicifia.org where a lot of people do. I never seriously considered that wild animals' lives aren't worth living until I found out some of the people on there do. I think it's a lot harder to seriously consider an idea if nobody you respect holds it. Just knowing that a good portion of the population is atheist isn't enough. Once you know one person, it doesn't matter how many people hold the opposite opinion. You are now capable of considering it.
I didn't think unfriendly AI was a serious risk until I came here, but that might have been more about the arguments. I figured that an AI could just be programmed to do what you tell it to and nothing more (and from there can be given Asimov-style laws). It wasn't until I learned more about the nature of intelligence that I realized that that is not likely going to be easy. Intelligence is inherently goal-based, and it will maximize whatever utility function you give it.
Theism isn't just about god. It also has social, and therefore strong emotional, consequences. If I stop being a theist, does it mean I will lose my friends, my family will become colder to me, and I will lose access to the world's widest social networks?
In such a case, the new required information isn't a disproved miracle or an essay on Occam's razor. That has zero impact on the social consequences. It's more important to get evidence that there are a lot of atheists, that they can be happy, and that some of them are considered very cool even outside of atheist circles. (And after having this evidence, somehow, the essays about Occam's razor become more convincing.)
Or let's look at it from the opposite side: Even the most stupid demonstrations of faith send the message that it is socially accepted to be religious; that after joining a religion you will never be alone. Religion is so widespread not because the priests are extra cool or extra intelligent. It's because they are extra visible and extra audacious: they have no problem declaring that everyone who disagrees with them is stupid and evil and will go to hell (or some more polite version of this, which still gets the message across) -- a... (read more)
I'm in the process of translating some of the Sequences into French. I have a quick question.
From The Simple Truth:
This is clearly a joke at the expense of some existing philosophical position called pan[something] but I can't find the full name, which may be necessary to make the joke understandable in French. Can anyone help?
In the past few hours, my total karma score has dropped by fifteen points. It looks like someone is going back through my old comments and downvoting them. A quick sample suggests that they've hit everything I've posted since some time in August, regardless of topic.
Is this happening to anyone else?
Anyone with appropriate access care to investigate?
To whoever's doing this — Here's the signal that your action sends to me: "Someone, about whom all you know is that they have an LW account that they use to abuse the voting system, doesn't like you." This is probably not what you mean to convey, but it's what comes across.
I got an offer of an in-person interview from a tech company on the left coast. They want to know my current salary and expected salary. Position is as a software engineer. Any ideas on the reasonable range? I checked Glassdoor and the numbers for the company in question seem to be 100k and a bit up. I suppose, actually, that this tells me what I need to know, but honestly it feels awfully audacious to ask for twice what I'm making at the moment. On the other hand I don't want to anchor a discussion that may seriously affect my life for the next few years at too small a number. So, I'm seeking validation more than information. Always audacity?
Always ask as much as you can. Otherwise you are just donating the money to your boss. If you hate having too much money, consider donating to MIRI or CFAR or GiveWell instead. Or just send it to me. (Possible exception is if you work for a charity, in which case asking less than you could is a kind of donation.)
The five minutes of negotiating your salary are likely to have more impact on your future income than the following years of hard work. Imagine yourself a few years later, trying to get a 10% increase and hearing a lot of bullshit about how the economic situation is difficult (hint: it is always difficult), so you should all just work harder and maybe later, but no promises.
I know. Been there, twice. (Felt like an idiot after realising that I worked for a quarter of my market price at the first company. Okay, that's exaggerated, because my market price increased with the work experience. But it was probably half of the market price.)
The first time, I was completely inexperienced about negotiating. It went like: "So tell me how much you want." "Uhm, you tell me how much you give peop... (read more)
Don't deliberately screw yourself over. Don't accept less than the average for your position, and either point-blank refuse to give them negotiating leverage by telling them your current salary, or lie.
For better, longer advice see [Salary Negotiation for Software Engineers](http://www.kalzumeus.com/2012/01/23/salary-negotiation)
I'm afraid I couldn't quite bring myself to follow all the advice in your link, but at any rate I increased my number to 125k. So, it helped a bit. :)
Look up what Ramit Sethi has to say about salary negotiation. He really outlines how things look from the other side and how asking for your 100k is not nearly as audacious as it seems.
I would like to eventually create a homeschooling repository. Probably with research that might help people in deciding whether or not to homeschool their children, as well as resources and ideas for teaching rationality (and everything else) to children.
I have noticed that there have been several questions in the past open threads about homeschooling and unschooling. One of the first things I plan to do is read through all past lesswrong discussions on the topic. I haven't really started researching yet, but I wanted to start by asking if anyone had anything that they think would belong in such a repository.
I would also be interested in hearing any personal opinions on the matter.
Homeschooling is like growing your own food (or doing any other activity where you don't take advantage of division of labor): if you enjoy it, have time for it and are good at it, it's worth trying. Otherwise it's useless frustration.
I couldn't agree more about division of labor in general, but with the current state of the public school system, I do not trust them to do a good job of teaching anything.
I do not have the time or patience for it, and probably am not good at it, but fortunately my partner would be the one teaching.
Mindkilling for utilitarians: Discussion of whether it would have made sense to shut down the government to try to prevent the war in Iraq
More generally, every form of utilitarianism I've seen assumes that you should value people equally, regardless of how close they are to you in your social network. How much damage are you obligated to do to your own society for people who are relatively distant from it?
How can I acquire melatonin without a prescription in the UK? The sites selling it all look very shady to me.
It's melatonin; melatonin is so cheap that you actually wouldn't save much, if any, money by sending your customers fakes. And the effect is clear enough that they'd quickly call you on fakes.
And they may look shady simply because they're not competently run. To give an example, I've been running an ad from a modafinil seller, and as part of the process, I've gotten some data from them - and they're easily costing themselves half their sales due to basic glaring UI issues in their checkout process. It's not that they're scammers: I know they're selling real modafinil from India and are trying to improve. They just suck at it.
If I make a target, but instead of making it a circle, I make it an immeasurable set, and you throw a dart at it, what's the probability of hitting the target?
In other words, "what is the measure of an unmeasurable set?". The question is wrong.
I'll never encounter an immeasurable set.
If you construct a set in real life, then you have to have some way of judging whether the dart is "in" or "out". I reckon that any method you can think of will in fact give a measurable set.
Alternatively, there are several ways of making all sets measurable. One is to reject the Axiom of Choice. The AoC is what's used to construct immeasurable sets. It's consistent in ZF without AoC that all sets are Lebesgue measurable.
If you like the Axiom of Choice, then another alternative is to only demand that your probability measure be finitely additive. Then you can give a "measure" (such finitely additive measures are actually called "charges") such that all sets are measurable. What's more you can make your probability charge agree with Lebesgue measure on the Lebesgue measurable sets. (I think you need AoC for this though.)
In L.J. Savage's "The Foundations of Statistics" the axioms of probability are justified from decision theory. He only ever manages to prove that probability should be finitely additive; so maybe it doesn't have to be countably additive. One bonus of finite additivity for Bayesians is that lots of improper priors become proper. For example, there's a uniform probability charge on the naturals.
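(A sketch of that last example: the uniform charge on the naturals can be built from natural density,

\mu(A) = \lim_{n \to \infty} \frac{|A \cap \{1, \dots, n\}|}{n},

defined where the limit exists and extended to all subsets by a choice-based argument (a Banach limit). Then \mu(\{k\}) = 0 for every k while \mu(\mathbb{N}) = 1, so \sum_k \mu(\{k\}) = 0 \neq 1 = \mu(\bigcup_k \{k\}): finitely additive, but not countably additive.)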
Topic: Investing
There seems to be a consensus among people who know what they're talking about that the fees you pay on actively managed funds are a waste of money. But I saw some friends arguing about investing on Facebook, with one guy claiming that index funds are not actually the best way to go for diversified investing that does not waste any money on fees. Does anyone know if there is anything to this? More specifically, are Vanguard's funds really as cheap as advertised, or is there some catch to them?
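The fee-drag part, at least, is simple arithmetic. A sketch with made-up but typical numbers (7% gross return, a 0.1% index-fund fee versus a 1% active-fund fee, 30 years):

(1 + 0.07 - 0.001)^{30} \approx 7.40 \qquad (1 + 0.07 - 0.01)^{30} \approx 5.74

so the pricier fund must outperform by enough to recover roughly 22% of final wealth before it breaks even. Whether a given cheap fund has some other catch (tracking error, fees on particular share classes) is a separate question from the fee drag itself.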
To find previous Open Threads, click on the "open_thread" link in the list of tags below the article. It will show you this page:
http://lesswrong.com/r/discussion/tag/open_thread/
For some reason that I don't understand, the Special threads wiki page has a link to this:
http://lesswrong.com/tag/open_thread/
...but that page doesn't work well.
Am I mistaken, or do the Article Navigation buttons only ever take me to posts in Main, even if I start out from a post in Discussion? Is this deliberate? Why?
Another PT:LoS question. In Chapter 8 ("Sufficiency, Ancillarity and all that"), there's a section on Fisher information. I'm very interested in understanding it, because the concept has come up in important places in my statistics classes, without any conceptual discussion of it - it's in the Cramer-Rao bound and the Jeffreys prior, but it looks so arbitrary to me.
Jaynes's explanation of it as a difference in the information different parameter values give you about large samples is really interesting, but there's one step of the math that I just c... (read more)
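(For anyone stuck on the same section, the standard definitions, not specific to Jaynes's notation: the Fisher information is

I(\theta) = E\left[\left(\frac{\partial}{\partial\theta} \ln f(X;\theta)\right)^2\right] = -E\left[\frac{\partial^2}{\partial\theta^2} \ln f(X;\theta)\right]

under the usual regularity conditions. The Cramer-Rao bound says \mathrm{Var}(\hat\theta) \ge 1/(n I(\theta)) for an unbiased estimator from n i.i.d. samples, and the Jeffreys prior p(\theta) \propto \sqrt{I(\theta)} is exactly the choice that is invariant under reparameterization, which is where the apparent arbitrariness goes away.)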
There is too much unwarranted emphasis on ketosis when it comes to Keto diets, rather than on hunger satiation. That might sound like a weird claim since the diet is named after ketosis, but when it comes to the efficacy of the Keto diet for weight loss, with no regard to potential health or cognitive effects, ketosis has little to do with weight loss. Most attempts to explain the Keto diet almost always start with an explanation of what ketosis is, with an emphasis on attaining ketosis rather than on hunger satiation and caloric deficit. Here is intro excerpt... (read more)
Yet another newbie question. What's the rational way to behave in a prediction market where you suspect that other participants might be more informed than you?
Here's a toy model to explain my question. Let's say Alice has flipped a fair coin and will reveal the outcome tomorrow. You participate in a prediction market over the outcome of the coin. The only participant besides you is Bob. Also you know that Alice has flipped another fair coin to decide whether to tell Bob the outcome of the first coin in advance. What trades should you offer to Bob, and wha... (read more)
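The toy model at least can be worked out. A sketch, assuming you quote Bob a bid b and an ask a on a contract paying 1 if heads: an uninformed Bob values the contract at 1/2 and won't trade across a spread containing 1/2, while an informed Bob buys at a only when he knows it's heads (costing you 1 - a) and sells at b only when he knows it's tails (costing you b). Your expected loss per unit quoted is

\frac{1}{2}\left[\frac{1}{2}(1 - a) + \frac{1}{2} b\right],

which is positive whenever a < 1 or b > 0. With no outside subsidy, the only quotes that don't lose in expectation are the vacuous a = 1, b = 0: the adverse-selection / no-trade logic in miniature.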
Robin Hanson argues that prediction markets should be subsidized by those who want the information. (They can also be subsidized by "noise" traders who are not maximizing their expected money from the prediction market.) Under these conditions, the expected value for rational traders can be positive.
What's the LMSR prediction market scoring rule? We've just started an ad-hoc prediction market at work for whether some system will work, but I can't remember how to score it.
Say I have these bets:
House: 50%
Me: 50%
SD: 75%
AK: 35%
what is the payout/loss for each player?
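Not sure how the stated percentages map onto trades, but here is a minimal sketch under one natural reading: each bet moves the market probability of "yes" to the bettor's stated number, and b is a made-up liquidity parameter. LMSR with cost function C(q) = b ln(sum_i exp(q_i/b)) is equivalent to a sequential logarithmic scoring rule: whoever moves the probability of the realized outcome from p_old to p_new is paid b ln(p_new / p_old) at resolution (negative means they pay in).

```python
import math

def lmsr_settlement(prob_history, outcome_is_yes, b=10.0):
    """prob_history: list of (trader, probability_of_yes) in trade order,
    starting with the market maker's opening probability."""
    payouts = {}
    for (_, p_prev), (trader, p_new) in zip(prob_history, prob_history[1:]):
        p_old = p_prev if outcome_is_yes else 1.0 - p_prev
        p_cur = p_new if outcome_is_yes else 1.0 - p_new
        # log scoring: pay b*ln(p_new/p_old) on the realized outcome
        payouts[trader] = payouts.get(trader, 0.0) + b * math.log(p_cur / p_old)
    return payouts

history = [("House", 0.50), ("Me", 0.50), ("SD", 0.75), ("AK", 0.35)]
print(lmsr_settlement(history, outcome_is_yes=True))
# approx {'Me': 0.0, 'SD': +4.05, 'AK': -7.62}
print(lmsr_settlement(history, outcome_is_yes=False))
# approx {'Me': 0.0, 'SD': -6.93, 'AK': +9.56}
```

On this reading "Me" breaks even either way (no move from 50%), and the House's worst-case subsidy for a binary market opened at 50% is bounded by b ln 2.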
Does anyone have any short thought experiments that have caused them to experience viewquakes on their own?
Here's a twist on prospect theory.
I installed solar panels, which were pretty expensive, but pay back as they generate electricity.
The common question was "How long will it take to earn your investment back?" I understand why they're asking. The investment is illiquid, even more than a long-term bank deposit. But if I wanted to get my money "back," I'd keep it in my checking account. The question comes from a tendency to privilege a bird in the hand over those that are still in the bush.
The important point they should ask about is my pr... (read more)
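(Illustrative, with made-up numbers: a $10,000 installation saving $800/year for 25 years has a simple payback of 10{,}000 / 800 = 12.5 years, which sounds unimpressive; but viewed as an investment, its internal rate of return r solves

10{,}000 = \sum_{t=1}^{25} \frac{800}{(1+r)^t},

which gives r of roughly 6%, and that rate, not the payback time, is the number to compare against the alternatives for the money.)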
Sunk cost fallacy spotted in an unusually pure state at unusually high levels:
I find it quite possible that w... (read more)
The occasional phenomenon where people go downvote every comment by someone they disagree with could be limited by only allowing people to downvote comments made within the last week.
Or limit the number of votes one person can give to another within a time period. I think most vendetta voting happens in the heat of the moment. I don't like not being able to vote on old comments, or skewing the voting on either side.
I always wondered if an algorithm could be implemented akin to the PageRank algorithm. A vote from someone counts more if the person votes seldom, and it counts more if the person is upvoted frequently by people with high vote weight.
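A minimal sketch of what that could look like; the 1/log damping, the floor, the normalization and all names are invented for illustration, not a worked-out proposal:

```python
import math

def vote_weights(votes, users, iterations=50):
    """votes: dict mapping (voter, target) -> +1 or -1."""
    weight = {u: 1.0 for u in users}
    for _ in range(iterations):
        # reputation: weighted sum of votes received
        rep = {u: 0.0 for u in users}
        for (voter, target), v in votes.items():
            rep[target] += v * weight[voter]
        # how often each user votes; frequent voters get damped
        counts = {u: 0 for u in users}
        for voter, _target in votes:
            counts[voter] += 1
        # a vote counts more if the voter is well-regarded and votes seldom;
        # the floor keeps weights positive, the rescaling keeps them bounded
        weight = {u: max(rep[u], 0.1) / math.log(2 + counts[u]) for u in users}
        total = sum(weight.values())
        weight = {u: w * len(users) / total for u, w in weight.items()}
    return weight

users = ["alice", "bob", "carol"]
votes = {("alice", "bob"): 1, ("carol", "bob"): 1, ("bob", "carol"): 1}
print(vote_weights(votes, users))  # bob converges to the highest weight
```

Like PageRank it is a fixed-point iteration: weights feed into reputations, which feed back into weights. The open problems are also PageRank's: collusion rings, and how to seed users with no history.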
What's the relationship between Epistemology and Ontology? Are both worthy of attention, or do you get the other for free when you deal with one of them?
I'm requesting recommendations for guides to meditation.
I've had great success in the past with 'sleeping on it' to solve technical problems. This year I've been trying power-napping during lunch to solve the morning's problems in the afternoon; I'm not sure the success of power-napping is any better than the control group. The next step is to see if I can step away from the old hamfisted methods and get results from meditation.
This may be treading close to a mindkilling topic, but - what's the scientific consensus on fracking?
Does anyone have a good resource on learning how to format graphs and diagrams?
What are the effects on the reader of having 90%, 100% or 110% spacing between letters? When should one center text? What about bold and italics?
Is there a good research-based resource that explains the effects those choices have on the reader?
If you didn't know it already
Not quite true:
So the other week I read about viewquakes. I also read about things a CS major could do that aren't covered by the usual curriculum. And then this article about the relationship escalator. Those gave me not quite a viewquake but clarified a few things I already had in mind and showed me some I had not.
What I am wondering is now, can anyone here give me a non-technical viewquake? What non-technical resources can give me the strongest viewquake akin to the CS major answer? With non-technical I mean material that doesn't fall into the usual STEM spectrum peop... (read more)
Try this version of the link.
More specifically: I connected to Quora using my Facebook account. When I connected, within the Quora system the message "Viliam is following your questions and answers" was sent to all Quora users who are also my Facebook contacts.
As far as I know, it didn't do anything outside of Quora. But even this is kinda creepy. I discovered it when one of those users asked me in a FB message why exactly am I following his questions (in given context, it seemed like a rather creepy action by me). I didn't even know what he was speaking about.
So the lesson is that if Quora later shows you announcements like: "XYZ is interested in your questions", it most likely means that XYZ simply joined Quora, and Quora knows you two know each other. (Also, you can remove the people you are following in Quora settings. You probably didn't even know you are "following" them, did you?)
I hate this kind of behavior, when social networks pretend their users have some activity among them, when in reality they don't. But I generalize this suspicion to all software. Whenever some software tells me: "Your friend XYZ wants you to do this, or tells you that", I always assume it is a lie. And if my friend XYZ really wants me to do something, they should tell me that using their own words outside of the system: by phone, email, or a (not auto-generated) Facebook message, for example.
Didn't the paper show TDT performing better than CDT in Parfit's Hitchhiker?
That might count as being of similar dubiousness, although I like this quote by Eliezer arguing otherwise: