Interesting comment by Gregory Cochran on torture not being useless as is often claimed.
Torture can be used to effectively extract information. I can give you lots of examples from WWII. People often say that it’s ineffective, but they’re lying or deluded. Mind you, you have to use it carefully, but that’s true of satellite photography. Note: my saying that it works does not mean that I approve of it.
... At the Battle of Midway, two American fliers, whose planes had been shot down near the Japanese carriers, were pulled out of the water and threatened with death unless they revealed the position of the American carriers. They did so, and were then promptly executed. Later, at Guadalcanal, the Japanese captured an American soldier who told them about a planned offensive – with that knowledge the Japanese withdrew from the area about to be attacked. I don't know why he talked [the guy didn't survive] – maybe a Japanese interrogator spent a long time building a bond of trust with that Marine. But probably not. For one thing, time was short. I see people saying that building such a bond is in the long run more effective, but of course in war, time is often short.
We seem to like "protecting" ought by making false claims about what is.
Possibly related to the halo or overjustification effects; arguments as soldiers seems especially applicable - admitting that torture may actually work is stabbing one's other anti-torture arguments in the back.
I read somewhere that lying takes more cognitive effort than telling the truth. So it might follow that if someone is already under a lot of stress -- being tortured -- then they are more likely to tell the truth.
On the other hand, telling the truth can take more effort than just saying
something. Very modest levels of stress or fatigue make it harder for me to
remember where, when, and with whom something happened.
8shminux10y
I agree that it is a PC thing to say now in US liberal circles that torture
doesn't work. The original context was different, however: torture is not
necessarily more effective than other interrogation techniques, and is often
worse and less reliable, so, given its high ethical cost to the interrogator, it
should not be a first-line interrogation technique. This eventually morphed into
the (mostly liberal) meme "torture is always bad, regardless of the situation".
This is not very surprising, lots of delicate issues end up in a silly or
simplistic Schelling point, like no-spanking, zero-tolerance of drugs, no
physical contact between students in school, age restrictions on sex, drinking,
etc.
0Douglas_Knight10y
Could you provide evidence for this claim?
2shminux10y
Going by the links on Wikipedia
[http://en.wikipedia.org/wiki/Effectiveness_of_torture_for_interrogation]. A
quote:
8Eugine_Nier10y
This has interesting implications for consequentialism vs. deontology.
Consequentialists, at least around here, like to accuse deontologists of jumping
through elaborate hoops with their rules to get the consequences they want.
However, it is just as common (probably more so) for consequentialists to jump
through hoops with their utility function (and even their predictions) to be
able to obey the deontological rules they secretly want.
2shminux10y
Real humans are neither consequentialists nor deontologists, so pretending to be
one of these results in arguments like that.
3NancyLebovitz10y
Certainly true-- I believe a lot of claims about the healthiness of
vegetarianism fall into that category.
Another problem is taking something that's true in some cases, or even
frequently, and claiming that it's universal. In the case of torture, it's one
thing to claim that torture rarely produces good information, and another to
claim that it never does.
Arguments as soldiers
[http://blogs.swarthmore.edu/burke/blog/2013/10/03/the-cheese-stands-alone/#comments]
with regard to universities divesting from fossil fuels.
0GLaDOS10y
The point on torture being useful seems really obvious in hindsight. Before
reading this I pretty much believed it was useless. I think that belief settled
in my head in the mid-2000s, arriving straight from political debates. Apparently knowing
history can be useful!
Overall his comment is interesting, but I think the article has more important
implications; someone should post it. So I did.
[http://lesswrong.com/r/discussion/lw/iuc/link_distance_from_harvard/] (^_^)
-3ChristianKl10y
I don't see anything insightful about the statement. It's rather trivial to
point out that there were events where torture produced valuable information.
Nobody denies that point. It rather sounds like he doesn't understand the
position against which he's arguing.
It's not like any other kind of intelligence. This ignores the psychological
effects of the torture on the person doing the torturing. Interrogators feel
power over a prisoner and get information from them. That makes them pay too
much attention to that information in contrast to other information.
2Eugine_Nier10y
And this is different from someone who, say, spends a lot of effort turning an
agent, or designing a spy satellite, how?
5ChristianKl10y
Beating someone else up triggers primal instincts. Designing a spy satellite or
using its information doesn't.
There's motivated reasoning involved in assessing the information that you get
by doing immoral things as high-value.
Pretending that there are no relevant psychological effects from the torture on
the person doing the torturing just indicates unfamiliarity with the arguments
for the position that torture isn't effective.
I would add that, as far as the description of the Battle of Midway in the
comment goes, threatening people with execution isn't something that in the US
would officially count as torture. Prosecutors in Texas do it all the time to
get people to agree to plea bargains. It's disgusting, but not on the same level
as putting electrodes on someone's genitals. It also doesn't have the same
effects on the people doing the threatening as allowing them to inflict physical
pain.
If you threaten someone with death unless he gives you information, you also
don't have the same problem of false information that someone will give you to
make the pain stop immediately.
As far as the other example in that battle goes, the author of the comment
doesn't even know whether torture was used, and seems to think that there are no
psychological tricks you can play to get information in a short amount of time.
Again, an indication of not having read much about how interrogation works.
Here on LessWrong we have AI players who get gatekeepers to let the AI go in two
hours of text-based communication. As far as I understand, Eliezer pulled off
that feat without having professional-grade training in interrogation. If you
accept that's possible in two hours, do you really think that a professional
can't get useful information from a prisoner in a few hours without using
torture?
0Eugine_Nier10y
From what I heard, most of said psychological tricks rely on the person you're
interrogating not knowing that you're unwilling to torture them.
Not reliably. This worked on about half the people.
Depending on the prisoner. There are certainly many cases of prisoners who don't
talk. If the prisoners are say religious fanatics loyal to their cause, this is
certainly very hard.
4drethelin10y
getting half your prisoners to capitulate is still pretty damn good.
0ChristianKl10y
Being able to read body language very well is also a road to information. You
can use Barnum statements to give the subject the impression that you have more
knowledge than you really have; then they aren't doing anything wrong if they
tell you what you know already.
In the case in the comment, the example was an American soldier, who probably
doesn't count as a religious fanatic. The person who wrote it suggested that the
fast transfer of information is evidence of there being torture involved.
It was further evidence for my claim that the person who wrote the supposedly
insightful comment didn't research this topic well.
My case wasn't that there is certain evidence that torture doesn't work, but
that the person who wrote the comment isn't familiar with the subject matter,
and as a result the comment doesn't count as insightful.
Nothing works 100% reliably.
-4Pentashagon10y
Similarly, basilisks would work as motivation to develop a certain kind of FAI
but there's a ban on discussing them here. Why? Isn't it worth credibly
threatening to torture people for 50 years to eventually save some large number
of future people from dust specks (or worse) by more rapidly developing FAI?
It's possible that the harm to society of knowing about and expecting torture is
greater than the benefit of using torture. In that case, torturing in absolute
secret seems to be the way to maximize utility. Not particularly comforting.
2drethelin10y
A) It's not credible. B) The basilisk only "works" on a very few people, and as
far as I can tell it only makes them upset and unhappy rather than making them
work as hard as they can on FAI. C) Getting people on your side is pretty
important. Telling people they will be tortured if they don't get on your side
is not a very good move for a small organization.
0Eugine_Nier10y
Um, the threat of torture only works if people know about the threat.
Some early experimental studies with LSD suggested that doses of LSD too small to cause any noticeable effects may improve mood and creativity. Prompted by recent discussion of this claim and the purely anecdotal subsequent evidence for it, I decided to run a well-powered randomized blind trial of 3-day LSD microdoses from September 2012 to March 2013. No beneficial effects reached statistical significance, and there were worrisome negative trends. LSD microdosing did not help me.
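For a blinded self-trial like this, one simple stdlib-only way to check whether active and placebo days differ is a permutation test on the difference of means. This is a generic sketch, not the trial's actual data or analysis; the mood ratings below are made-up placeholders.

```python
import random
import statistics

def permutation_test(a, b, n_iter=10000, seed=0):
    """Two-sided permutation test on the difference of means.
    Returns the fraction of shuffled relabelings at least as extreme
    as the observed difference (an empirical p-value)."""
    rng = random.Random(seed)
    observed = statistics.mean(a) - statistics.mean(b)
    pooled = list(a) + list(b)
    n = len(a)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_iter

# Made-up daily mood ratings (1-10) on blinded active vs. placebo days:
active = [6, 7, 5, 8, 6, 7, 5, 6]
placebo = [6, 6, 5, 7, 6, 6, 6, 5]

p_value = permutation_test(active, placebo)
```

A large p-value from data like this would match the trial's null result; the test makes no distributional assumptions, which suits small self-experiment samples.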
I recently played and won an additional game of AI Box with DEA7TH. Obviously, I played as the AI. This game was conducted over Skype.
I'm posting this in the open thread because unlike my last few AI Box Experiments, I won't be providing a proper writeup (and I didn't think that just posting "I won!" is enough to validate starting a new thread). I've been told (and convinced) by many that I was far too leaky with strategy and seriously compromised the future winning chances of both myself and future AIs. The fact that one of my gatekeepers guessed my tactic(s) was the final straw. I think that I've already provided enough hints for aspiring AIs to win, so I'll stop giving out information.
I guess you used words. That seems to be all the tactical insight needed to
develop an effective counter-strategy. I really don't get how this escaping
thing works on people. Is it due to people being systematically overconfident in
their own stubbornness? I mean I know I couldn't withstand torture for long. I
expect even plain interrogation backed by credible threats would break me over
time. Social isolation and sleep deprivation would break me too. But one hour of
textual communication with a predefined and gamified objective and no negative
external consequences? That seems so trivial...
Other people have expressed similar sentiments, and then played the AI Box experiment. Even the ones who didn't lose still updated to "definitely could have lost in a similar scenario."
Unless you have reason to believe your skepticism comes from a different place than theirs, you should update towards gatekeeping being harder than you think.
The heuristic of ignoring secretive experiments that don't publish their details
has served me well in the past.
6Sly10y
I have played the game twice and updated in the opposite direction you claim.
In fact, my victories were rather trivial. This is despite the AIs trying really
really hard.
3ChristianKl10y
Did you play against AIs that have won sometime in the past?
0Sly10y
I do not honestly know. I will happily play a "hard" opponent like Eliezer or
Tux. As I have said before, I estimate a 99%+ chance of victory.
1wedrifid10y
Unless I have already heard the information you have provided and updated on it,
in which case updating again at your say so would be the wrong move. I don't
tend to update just because someone says more words at me to assert social
influence. Which is kind of the point, isn't it? Yes, I do have reason to
believe that I would not be persuaded to lose in that time.
Disagreement is of course welcome if it is expressed in the form of a wager
where my winnings would be worth my time and the payoff from me to the
gatekeeper is suitable to demonstrate flaws in probability estimates.
1A1987dM10y
Probably you're right, but as far as I can tell the rules of the game don't
forbid the use of ASCII art.
0wedrifid10y
Just so long as it never guesses my fatal flaw
[http://en.wikipedia.org/wiki/Whitespace_(programming_language)].
I was surprised by the breadth of ideas he addresses. It blew my mind that he
put that together in under a month.
2David_Gerard10y
I assume he's been thinking about this stuff for years, given he's known the
people in the Reactionary subculture that long.
4CAE_Jones10y
He wrote Reactionary Philosophy in an enormous, planet-sized nutshell
[http://slatestarcodex.com/2013/03/03/reactionary-philosophy-in-an-enormous-planet-sized-nutshell/]
back in March, as a precursor to a reactionary take-down essay that never seemed
to materialize, other than a few bits and pieces, such as the one on how war is
on the decline
[http://slatestarcodex.com/2013/05/22/apart-from-better-sanitation-and-medicine-and-education-and-irrigation-and-public-health-and-roads-and-public-order-what-has-modernity-done-for-us/].
This FAQ seems to be the takedown he was aiming for, so I imagine he's been
building it for at least the past seven months, probably longer.
(ETA: In the comments on the Anti-reactionary FAQ, Scott says it took roughly a
month, so I guess it wasn't as much of an on-going project as I predicted.)
3niceguyanon10y
I'm keeping this question in this thread so as not to spam political talk on the
new open thread.
What does a post scarcity society run by reactionaries look like? If state
redistribution is not something that is endorsed, what happens to all the people
who have no useful skills? In a reactionary utopia where there is enough
production but lacking an efficient way to distribute resources based on ability
or merit, what happens to the people who have been effectively replaced by
automation? Is it safe to assume that there are no contemporary luddites among
reactionaries?
4[anonymous]10y
I can answer that question to a certain extent, as I've talked to several people
in reaction who have thought about it, as have I. At least once we look far into
the posthuman era, it might be most easily imagined as a society of gods above
and beasts below, something the ancient Greeks found little difficulty
imagining, and which certainly didn't diminish their humanity in their eyes. An
important difference from the posthuman fantasies often imagined is that the
superiority of transhuman minds would not be papered over by fictional legal
equality. There would be a hierarchy based on the common virtues the society
held in regard, and there would be efforts to ensure the virtues remained the
same, to prevent value drift. Much of the society would be organized along the
lines of striving to enable (post)human flourishing as defined by the values of
the society.
An aristocracy prevailing, indeed "rule of the best", with at least a ceremonial
Emperor at its apex. Titles of nobility were, in theory, awarded in ancient
society both to incentivize people toward long-term planning, to define their
place, to formalize their unique influence as owners of land and warriors, to
define the social circle you are expected to compare yourself with, and in
expectation of the good use of such privilege by people from excellent families.
Extending and indeed much improving such a concept offers fascinating
possibilities, compatible with human imagination and preferences; think of the
sway that nobility, let alone magical or good queens, dukes and knights, hold
over even our modern imagination.
Consider that in a world where aging is cured and heredity, be it genetic or
otherwise, is fully understood, where minds are emulated and merge and diverge,
the line between you and your ancestors/previous versions blurs. A family with
centuries of diligent service, excellence, virtue, daring and achievement... I
can envision such a grand noble lineage made up of essentially one person.
But this is an individual aspect of the
2CAE_Jones10y
From what I understood based on reading the anti-reactionary faq, Scott's
interpretation of Moldbug's interpretation of an ideal reactionary king would
either arrange infrastructure such that there are always jobs available, or
start wireheading the most useless members of society (though if I'm reading it
right, Moldbug isn't all that confident in that idea, either). I'd not mind a
correction (as Scott points out, either option would be woefully inefficient
economically).
0A1987dM10y
This makes me suspect he may have much more free time than I guessed, so I no
longer despair of a new LW survey in the foreseeable future.
0A1987dM10y
It hasn't appeared in the “Recent on Rationality Blogs” sidebar on LW yet. How
long does that normally take? 24 hours?
0Adele_L10y
It seems likely that this post has been blocked from appearing there, due to its
political and controversial nature.
As a part of my Master’s thesis in Computer Science, I am designing a game which seeks to teach its players a subfield of math known as Bayesian networks, hopefully in a fun and enjoyable way. This post explains some of the basic design and educational philosophy behind the game, and will hopefully also convince you that educational games don’t have to suck.
I will start by discussing a simple-but-rather-abstract math problem and look at some ways by which people have tried to make math problems more interesting. Then I will consider some of the reasons why the most-commonly used ways of making them interesting are failures, look at the things that make the problems in entertainment games interesting and the problems in most edutainment games uninteresting, and finally talk about how to actually make a good educational game. I’ll also talk a bit about how I’ll try to make the math concerning Bayesian networks relevant and interesting in my game, while a later post will elaborate more on the design of the game.
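For readers who haven't met them before, a Bayesian network is just a joint distribution factored along a graph. A minimal two-node example (Rain → WetGrass) with inference by brute-force enumeration might look like the following; this is a generic textbook sketch with made-up probabilities, not anything from the game's actual design.

```python
# A minimal two-node Bayesian network: Rain -> WetGrass.
# The joint factors as P(Rain, Wet) = P(Rain) * P(Wet | Rain).
P_rain = {True: 0.2, False: 0.8}                      # prior P(Rain)
P_wet_given_rain = {True:  {True: 0.9, False: 0.1},   # P(Wet | Rain)
                    False: {True: 0.1, False: 0.9}}

def joint(rain, wet):
    """P(Rain=rain, Wet=wet) from the factored distribution."""
    return P_rain[rain] * P_wet_given_rain[rain][wet]

def posterior_rain_given_wet(wet=True):
    """P(Rain=True | Wet=wet) by enumeration and normalization."""
    numer = joint(True, wet)
    denom = sum(joint(r, wet) for r in (True, False))
    return numer / denom
```

Observing wet grass raises the probability of rain from the 0.2 prior to about 0.69; that update, scaled to larger graphs, is the kind of reasoning the game would be teaching.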
I was thinking recently that if soylent kicks something off and 'food replacement' -type things become a big deal, it could have a massive side effect of putting a lot of people onto diets with heavily reduced animal and animal product content. Its possible success could inadvertently be a huge boon for animals and animal activists.
Personally, I'm somewhat sympathetic towards veganism for ethical reasons, but the combination of trivial inconvenience and lack of effect I can have as an individual has prevented me from pursuing such a diet. Soylent would allow me to do so easily, should I want to. Similarly, there are people who have no interest in animal welfare at all. If 'food replacements' become big, it could mean the incidental conversion of those who might otherwise never have considered veganism or vegetarianism to a lifestyle that fits within those bounds, for only personal cost or convenience reasons.
I anticipate artificial meat having a much bigger impact than meal-replacement
products. I anticipate that demand for soylent-like meal replacement products
among the technophile cluster will peak within the next three years, and will
wager $75 to someone's $100 that this is the case if someone can come up with a
well-defined metric for checking this.
5[anonymous]10y
Note that the individual impact you can have by being a vegetarian is actually
pretty big
[http://lesswrong.com/lw/hox/effective_altruism_through_advertising/]. Sure,
it's small in terms of percentage of the problem, but that's the wrong way to
measure effect. If you saw a kid tied to railroad tracks, you wouldn't leave
them there on account of all the children killed by other causes every day.
5James_Miller10y
Let $X= the cost to me of being a vegetarian. I'm indifferent between donating
$X to the best charity I can find or being a vegetarian. For what values of $X
would you advise me to become a vegetarian assuming that if I don't become a
vegetarian I really will donate an extra $X to, say, MIRI
[http://intelligence.org/]?
1kalium10y
Being a vegetarian does not have a positive monetary cost, unless it makes you
so unhappy that you find yourself less motivated at work and therefore earn less
money or some such. Meat may be heavily subsidized in the US, but it's still
expensive compared to other foods.
5James_Miller10y
I would rather pay $8,000 a year than be a vegetarian. Consequently, if my
donating $8,000 to a charity would do more good for the rest of the world than
my becoming a vegetarian would, it's socially inefficient for me to become a
vegetarian.
5kalium10y
You can make a precommitment to do only one or the other, but if you become
vegetarian you don't actually lose the $8,000 and become unable to give it to
MIRI. In this sense it is not a true tradeoff unless happiness and income are
easily interconvertible for you.
-1James_Miller10y
I have a limited desire to incur costs to help sentients who are neither my
friends nor family. This limited desire creates a "true tradeoff".
-1[anonymous]10y
I fight the hypothetical - there is no such tradeoff.
A more concrete hypothetical: Suppose that every morning when you wake up you're
presented with a button. If you press the button, an animal will be tortured for
three days, but you can eat whatever you want that day. If you don't press the
button, there's no torture, but you can't eat meat. By the estimates in this
paper [http://www.utilitarian-essays.com/dollar-worth.pdf], that's essentially
the choice we all make every day (taking the 3:1 ratio times a_m times l_m gives
at least 1000 animal-days of suffering avoided per year of vegetarianism, or
roughly 3 days of torture per day of vegetarianism).
Anyway - you should not be a vegetarian iff you would press the button every
day.
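The parenthetical arithmetic can be checked directly. The numbers below simply restate the comment's claimed lower bound (with the 3:1 torture-to-ordinary-suffering ratio already folded in, per the linked paper's notation); they are not independent estimates.

```python
# The comment's claimed lower bound: >= 1000 animal-days of suffering
# avoided per year of vegetarianism, already adjusted by the 3:1 ratio.
animal_days_avoided_per_year = 1000
days_per_year = 365

# Suffering avoided per individual day of vegetarianism:
torture_days_per_day = animal_days_avoided_per_year / days_per_year
# ~2.74, which the comment rounds to "3 days of torture per day"
```

Whether the 1000-animal-day figure itself holds is exactly what the surrounding thread disputes; the sketch only shows that the per-day conversion follows from it.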
5James_Miller10y
This is absurd. I really, really would rather pay $8,000 a year than be a
vegetarian. Do you think I'm lying or don't understand my own preferences? (I'm
an economist and so understand money and tradeoffs and I'm on a paleo diet and
so understand my desire for meat.)
I would rather live in a world in which I donate $8,000 a year to MIRI and press
the button to one in which I'm a vegetarian and donate nothing to charity.
0[anonymous]10y
There is no market for your proposed trade. In this case using money as a proxy
for utility/preference doesn't net you any insight because you can't exchange
vegetarianism or animal-years-of-torture for anything else. Of course you can
convert to dollars if you really want to, but you have to convert both sides -
how much would you have to be paid to allow an animal to be tortured for three
days? (This is equivalent to the original question, we've just gone through some
unnecessary conversions).
0hyporational10y
Have you/they thought about other environmental implications? Processing
everything down to simple nutrients to make the drink doesn't sound very energy
efficient. Might compete with eating meat, but definitely not with veganism.
I like my meat, btw.
0Nectanebo10y
Personally, I haven't really thought of it. Might be an angle worth looking at
the product from, you're right.
I haven't really been following their progress or anything, so I don't know, but
it's possible they've touched on it at some point before. You could dig around
on the soylent forum or even start the topic yourself if you really felt like
it. I think the creators of the product are reasonably active on there.
0passive_fist10y
One of the primary ingredients of soylent is whey protein, which is produced
from cow's milk. It is not a vegan product.
Whey is a byproduct of cheesemaking, which is why it is currently relatively
inexpensive. If people started consuming whey protein en masse, it would shift
the economics of whey production and dairy cow breeding in potentially highly
unfavorable directions for both the cows and the soylent enthusiasts (because it
would become more expensive).
Sadly, there doesn't seem to be any viable alternative to whey at this point (if
there was, they'd use that, but there isn't).
8Nectanebo10y
It doesn't use whey for protein any more. Apparently the only issue for veganism
(and vegetarianism) at the moment is fish oil for Omega 3s.
0passive_fist10y
I didn't know that. What does it use instead of whey?
4Nectanebo10y
Rice Protein, it seems.
Relevant blog posts:
soylent blog, 2013-07-24
soylent blog, 2013-08-27
link to blog [http://blog.soylent.me/]
So it was whey, then it was rice protein and pea protein, now it's just rice
protein.
The final ingredient list hasn't been settled yet, though they seem to be
getting close. They said they'll post it once it's done.
0passive_fist10y
Thanks for the info. While I suppose this is an improvement, I wonder about the
scalability of this approach and the impact on the environment. Rice doesn't
exactly produce that much protein per acre of land. I'll have to look at the
numbers though.
I also wonder where they're sourcing Lysine from.
I know someone who has a young child who is very likely to die in the near future. This person has (most likely) never heard of cryonics. My model of this person is very unlikely to decide to preserve their child even if they knew about it.
I don't know if I should say something. At first I was thinking that I should because the social ramifications are negligible. After thinking about it for a while, I changed my mind and decided that possibly I was just trying to absolve myself of guilt at the cost of offending a grieving parent. I am not sure if this is just rationalization.
Does the person have the financial means to pay for cryonics out of pocket?
It probably won't be possible to get life insurance for the child.
0Scott Garrabrant10y
I am not sure. I think so.
6Dorikka10y
Attempting to highlight relevant variables:
* how likely your persuasion is to offend the parents (which is a probability
density, not binary, of course)
* how much you care whether you offend parents (see previous parenthetical)
* U(child lives a long time) - U(child dies as expected)
* P(child lives a long time | child gets frozen)
* P(child gets frozen | you try to persuade the parents)
Edited to fix formatting.
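Those variables slot into a standard expected-value comparison between speaking up and staying silent. A sketch follows; every number here is a made-up placeholder to show the structure, not an estimate of anything.

```python
def expected_gain(p_frozen_given_persuade, p_lives_given_frozen,
                  u_lives, u_dies, p_offend, cost_of_offense):
    """Expected utility of attempting persuasion, relative to staying
    silent: the chance the attempt leads to a long life, weighted by
    how much that outcome is worth, minus the expected cost of offense."""
    upside = p_frozen_given_persuade * p_lives_given_frozen * (u_lives - u_dies)
    downside = p_offend * cost_of_offense
    return upside - downside

# Placeholder inputs, purely to illustrate the structure:
gain = expected_gain(p_frozen_given_persuade=0.05,
                     p_lives_given_frozen=0.02,
                     u_lives=1000.0, u_dies=0.0,
                     p_offend=0.7, cost_of_offense=1.0)
```

A positive result favors attempting persuasion under the chosen inputs; the interesting work, as the list suggests, is all in estimating the probabilities and utilities.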
5Moss_Piglet10y
You should reconsider this assumption. I would imagine that making suggestions
about what to do with someone's soon-to-be-dead child's body would be looked
upon coldly at best and with active hostility at worst. It's like if you
suggested you knew a really good mortician; it's just not the sort of thing
you're supposed to be saying.
There's also the fact that, as a society, we are very keen when watching the
bereaved for signs they haven't accepted the death. To most people cryonics
looks like a sort of pseudoscientific mummification and the idea that such a
person could be revived as delusional. It is easy to imagine that if your friend
shelled out hundreds of thousands on your say-so for such a project people might
see you as preying on a mentally vulnerable person.
This is not to make a value judgement or a suggestion, just pointing out that
the social consequences are quite possibly non-negligible.
5James_Miller10y
If you have not signed up for cryonics yourself, you could ask this person for
advice as to whether you should. If you have signed up, you could work this into
a conversation. Or just find some video or article likely to influence the
parent and forward it to him, perhaps an article mentioning Kim Suozzi.
3Scott Garrabrant10y
The only plausible ways I can think to bring it up are:
1) Directly
2) Talk about it to someone else with him in the room
3) Convince someone else who is very close to him but not directly dealing with
the loss of their child to consider it, and possibly bring it up for me
I think if I were to bring it up, I would take the third path.
What expert advice is worth buying? Please be fairly specific and include some conditions on when someone should consider getting such advice and focus on individuals and families versus, say, corporations.
I ask because I recently brainstormed ways that I could be spending my money to make my life better and this was one thing that I came up with and realized I essentially never bought except for visiting my doctor and dentist. Yet there are tons of other experts out there willing to give me advice for a fee: financial advisers, personal trainers, nutritionists, lawyers, auto-mechanics, home inspectors, and many more.
Therapy probably has the most impact on an individual's life satisfaction.
3Suryc1110y
Sources, please?
0Lumifer10y
It depends on your needs, doesn't it?
Specify what you want, see if you know how to get there -- and if you don't,
check if someone will provide a credible roadmap for a fee...
0Barry_Cotter10y
Personal fitness folk: doing Starting Strength is three hours a week that will
make all the rest much better, and a personal trainer will make your form good,
which is really important. If your conscientiousness is normal, tutors rock. If
you can afford one, hire a tutor.
3RomeoStevens10y
Most personal trainers will not be able to help you have awesome form in
powerlifting (starting strength) lifts. You're better off submitting videos of
your form to forums devoted to such things than with the average PT.
How many people here use Anki, or other Spaced Repetition Software (SRS)?
[pollid:565]
I'm finding it pretty useful and wondering why I didn't use it more intensively before. Some stuff I've been adding into Anki:
Info about data structures and algorithms (I'm reading a book on them, and think it's among the most generally useful knowledge for a programmer)
Specific commands for tools I use a lot (git, vim, bash - stuff I used to put into a cheat sheet)
Some Japanese (seems at least half of Anki users use it to learn Japanese)
Tidbits from lukeprog's posts on procrastination
Some Haskell (I'm not studying it intensively, but doing a few exercises now and then, and adding what I learn in Anki)
I have much more stuff I'd like to Ankify (my notes on Machine Learning, databases, on the psychology of learning; various inspirational quotes, design patterns and high-level software architecture concepts ...).
Some ways I got better at using Anki:
I use far fewer pre-made decks
I control the new-cards-per-day depending on how much I care about a topic. I don't care much about vim, so have 3 to 5 new cards per day, but go up to 20 for procrastination or
I've abandoned many decks almost completely because I made cards that were too complex.
Make the cards simple [http://www.supermemo.com/articles/20rules.htm] and combat
interference [http://en.wikipedia.org/wiki/Interference_theory]. That doesn't
mean you can't learn complex concepts. Now that I've got it right, I can go
through hundreds of reviews per day if I've fallen behind a bit, and don't find
it exhausting. If I manage to review every day, it's because I'm doing it first
in the morning.
I use a plugin/option to make the answer show automatically after 6 seconds, so
it's easy to spot cards that are formatted badly or cause interference, and take
too much time.
4ChristianKl10y
Some general Anki tips: If you use it to learn a foreign language use the
Awesome TTS plugin [https://ankiweb.net/shared/info/301952613]. Whenever Anki
displays a foreign word it should also play the corresponding sound. Don't try
to consciously get the sound. Just let Anki play the sound in the background.
I use a plugin that adds extra buttons to new cards
[https://ankiweb.net/shared/info/468253198]. I changed it in a way that gives the
6th button a timeframe of 30-60 days till the new card shows up the second time.
I use that button for cards that are highly redundant.
Frozen fields [https://ankiweb.net/shared/info/2819760111] is a plugin that's
useful for creating cards and I wouldn't want to miss it. It allows you to
prevent specific fields in the new-card dialog from being cleared when you
create a new card.
Quick Colour Changing [https://ankiweb.net/shared/info/2491935955] is another
useful addon. It allows you to use color more effectively to highlight aspects of
cards.
I have written my more general thoughts about how to use Anki in another
thread:
http://lesswrong.com/r/discussion/lw/isu/advice_for_a_smart_8yearold_bored_with_school/9vlh
One of the core ideas I've developed recently is that you really want
to make cards as easy as possible. I think the problem with most premade cards
you find online is that they just aren't easy enough. They take too much
for granted.
Take an issue such as the effect of epinephrine on the heart. It raises heart
rate. Most of the decks you find out there would ask something like: "What's
the effect of epinephrine on the heart?" That's wrong. That's not basic enough.
It's much simpler to ask: "epinephrine ?(lowers/raises)? heart rate"
I think that idea also helps a lot with language learning. I think the classic
idea of asking "What does good mean in French?" is problematic. If you look in
the dictionary
2shokwave10y
Using it regularly is the most important thing by far. I don't use it anymore;
the cost of starting back up seems too high (in that I try and fail to
re-activate that habit). I wish I hadn't let that happen. Don't be me; make Anki
a hardcore habit.
4Emile10y
Why not just restart from scratch with empty decks? It should be less daunting
at first...
My strategy to avoid losing the habit is having decks I care less about than
others, so that when I stopped using Anki for a few weeks, I only had to catch
up on the "important" decks first, which was less daunting than catching up with
everything (I eventually caught up with all the decks, somewhat to my
surprise).
I'm also more careful than before in what I let in - if content seems too
unimportant, it gets deleted. If it's difficult, it gets split up or rewritten.
And I avoid adding too many new cards.
54hodmt10y
Continuing with your current deck should be strictly superior to starting from
scratch, because you will remember a substantial portion of your cards despite
being late. Anki even takes this into account in its scheduling, adjusting the
difficulty of cards you remembered in that way. If motivation is a problem, Anki
2.x series includes a daily card limit beyond which it will hide your late
reviews. Set this to something reasonable and pretend you don't have any late
cards. Your learning effectiveness will be reduced but still better than
abandoning the deck.
I've previously let Anki build up a backlog of many thousand unanswered cards. I
cleared it gradually over several months, using Beeminder for motivation.
0Emile10y
True, I forgot about that option - I actually discovered it after I had cleared
my backlog, and thought "hm, that could've been useful too..."
1ChristianKl10y
I think when restarting a deck after a long time it's important to use the
delete button a lot. There might be cards that you just don't want to learn and
it's okay to delete them.
You could also gather the cards you think are really cool and move them into a
new deck and then focus on learning that new deck.
0Barry_Cotter10y
When using pre-made decks, the only efficient way is to follow along; i.e., if
you don't know the source book/course, they're not very good. Partial exception:
vocabulary lists.
2Emile10y
Agreed - and you can even go wrong with vocabulary lists if they're too advanced
(some German vocabulary got overwhelming for me, I just dropped everything).
Another partial exception can be technical references (learning keywords in a
programming language or git commands).
People who want to eat fewer animal products usually have a set of foods that are always okay and a set of foods that are always not (which sometimes still includes some animal products, such as dairy or fish), rather than trying to eat animal products less often without completely prohibiting anything. I've heard that this is because people who try to eat fewer animal products usually end up with about the same diet they had when they were not trying.
I wonder whether trying to eat more of something that tends to fill the same role as animal products would be an effective way to eat fewer animal products.
I currently have a fridge full of soaking dried beans that I have to use up, and the only way I know how to serve beans is the same as the way I usually eat fish, so I predict I'll be eating much less fish this week than I usually do (because if I get tired of rice and beans, rice and fish won't be much of a change). I'm not sure whether my result would generalize to people who use more than five different dinner recipes, though. I should also add that my main goal is learning how to make cheap food taste good by getting more practice cooking beans - eating fewer animal products would just be a side effect.
Now that I write this, I'm wishing I'd thought to record what food I ate before filling my fridge with beans. (I did write down what I could remember.)
People who you know want to eat fewer animal products. If I just decided to eat
less meat, you'd be much less likely to find out this fact about me than if I
decided to become fully lacto-ovo-vegetarian.
2mare-of-night10y
Good point.
0ChristianKl10y
I don't think that's an accurate description of the average vegetarian. A lot of
self-labeled vegetarians do eat animal products from time to time.
Most people who tell you that they try to eat only healthy food and no junk
food, still eat junk food from time to time. The same goes for vegetarians
eating flesh.
Additionally, eating less red meat is part of the official mantra on healthy
eating. A lot of people subscribe to the idea that limiting the amount of red
meat they eat is good, while not eliminating it completely.
0tgb10y
I find this hard to believe, knowing several people who have become vegetarians
and vegans and hardly ever eating meat myself. Do you have any support for this
claim? Anecdotally, one new vegan (from being a vegetarian) stopped eating pizza
which had previously been more-or-less a mainstay of his. My sister became a
vegetarian as a kid despite actually quite liking meat at the time; not only did
her eating habits change, but those of my entire family did, significantly. My
parents describe it as going from thinking "What meat is for dinner?" to
thinking "What is for dinner?" every night.
2philh10y
I think that was "people who try to eat fewer animal products without completely
prohibiting anything". It seems plausible to me.
0mare-of-night10y
Yes, this is what I meant.
0tgb10y
Okay, that sounds plausible.
0kalium10y
Prohibiting particular foods on certain days is also popular: "Meatless Mondays"
or Catholic-style fasts.
I would like recommendations for a small, low-intensity course of study to improve my understanding of pure mathematics. I'm looking for something fairly easygoing, with low time-commitment, that can fit into my existing fairly heavy study schedule. My primary areas of interest are proofs, set theory and analysis, but I don't want to solve the whole problem right now. I want a small, marginal push in the right direction.
My existing maths background is around undergrad-level, but heavily slanted towards applied methods (calculus, linear algebra), statist...
If you like Haskell's type system I highly recommend learning category theory.
This book [http://www.math.mcgill.ca/triples/Barr-Wells-ctcs.pdf] does a good
job. Category theory is pretty abstract, even for pure math. I love it.
2Adele_L10y
Essentially, this kind of math is called category theory. There is this book
[http://lesswrong.com/lw/ioo/book_review_basic_category_theory_for_computer/],
which is highly recommended, and fills your criteria decently well. I am
currently working through this book, and I am happy to discuss things with you
if you would like.
0Scott Garrabrant10y
I am not sure if it is good for your background and needs, but I would like to
mention The Book of Numbers
[http://www.amazon.com/The-Book-Numbers-John-Conway/dp/038797993X]. I read and
understood this book in high school without any formal training of calculus. I
think this book is very effective at showing people how math can be beautiful in
a context that does not have many prerequisites.
I sometimes use the term ‘accessible’ in the Microsoft sense.
The mouthful version of ‘accessible’ is something like this: To abstractly describe the character of a human interactive or processed experience when it is tailored to not exceed the limitations of the particular human being to which it is being presented.
So, if you are blind or paralyzed, your disability prevents you from using a computer terminal in the normal way without some assistive technology. If you are confined to a wheelchair, you cannot easily enter a bu
I upvoted this, even though the part where wealth is suggested as a filter for
competence completely fails to distinguish the Bill Gateses (rich because
competent) from the Paris Hiltons (rich because someone somewhere in the
ancestry was competent and/or lucky). (Though it's possible I just upvoted it
because it starts out talking about accessibility and how the existence of
imperfect beings kinda nukes the idea of libertarian free will, both of which I
wish more people understood.)
0Douglas_Knight10y
After Conrad decided to give 97% of his fortune to charity, it appears to me
that Paris will earn more money than she will inherit. Even if she is as stupid
as the character she plays, she has acquired competent agents.
I don't have much of a point, but people who win the fame tournament are
probably not famous by accident.
His argument against Haidt's moral-foundations-theory ideas about the differing psychology of liberals and conservatives is similar to the ones Vladimir_M and Bryan Caplan made, but he upgrades it with a plausible explanation for why it might seem otherwise. The references are well worth checking out.
I recently found out a surprising fact from this paper by Scott Aaronson. P=NP does not (given current results) imply that P=BQP. That is, even if P=NP there may still be substantial speedups from quantum computing. This result was surprising to me, since for most computational classes we normally think about that are a little larger than P, they end up equaling P if P=NP. This is due to the collapse of the polynomial hierarchy. Since we cannot resolve that BQP lives in the polynomial hierarchy, we can't make that sort of argument.
Sure, but that's just saying that P=NP is not a robust hypothesis. Conditional
on P=NP, what odds do you put that P is not P^#P or PSPACE? (though maybe the
first is a robust hypothesis that doesn't cover BQP)
0JoshuaZ10y
I'm not sure. If P=NP this means I'm drastically wrong about a lot of my
estimates. Estimating how one would update conditioning on a low probability
event is difficult because it means there will be something really surprising
happening, so I'd have to look at how we proved that P=NP to see what the
surprise ended up being. But, if that does turn out to be the case, I'm fairly
confident I'd then assign a pretty high probability to P=PSPACE. On the other
hand we know that of the inequalities between P, NP, PSPACE and EXP, at least
one of them needs to be strict. So why should I then expect it to be strict on
that end? Maybe I should then believe that PSPACE=EXP? PSPACE feels closer to P
than to EXP but that's just a rough feeling, and we're operating under the
hypothetical that we find out that a major intuition in this area is wrong.
The linked givewell posts discuss microloans, not outright grants. Link to
Blattman's paper: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2268552
Has someone attempted to make the jump from the reported data to QALY estimates
or some other comparable measure?
I like the ideas of (1) providing an alternative video introduction, because
some people like that stuff, and (2) having the last part of "what to do after
reading LessWrong".
I think the rationality videos should even be linked from the LW starting page.
Or even better, the LW starting page should start with a link saying "if you are
here for the first time, click here", which would go to a wiki page, which would
contain the links to videos (with small preview images) on the top.
0Ben Pace10y
Cheers - yeah, especially for my friends for whom reading a couple of those
posts would be a big deal, the talks are very useful. I'll make a top-level
comment on next week's open thread proposing the idea :)
Added: By the way, as to the 'post LW' section, you might've noticed that the
last post in 'welcome to Bayesianism' is a critique of LessWrong as a shiny
distraction rather than of actual practical use. I'm hoping the whole thing
leads people to be more practically rational and involved in EA and CFAR.
0FiftyTwo10y
Might be useful to have introduction points for people with a certain degree of
preexisting knowledge of the subject but from other sources. E.g. If I want to
introduce a philosophy postgrad to lesswrong I would want to start with a
summary of lesswrong's specific definition of 'rationality' and how it compares
to other versions, rather than starting from scratch.
0Ben Pace10y
I'm sorry, I had a little difficulty parsing your comment; are you saying that
my introduction would be useful for a philosophy postgrad, or that my summary is
starting from scratch and the former would be something for someone to work on?
LW tells people to upvote good comments and downvote bad comments. Where do I set the threshold of good/bad? Is it best for the community if I upvote only exceptionally good comments, or downvote only very bad comments, or downvote all comments that aren't exceptionally good, or something else? Has this been studied? Is it possible to make a karma system where this question doesn't arise?
Information theory says that you communicate the most if you send the three
signals of up, down, nothing equally often. This would be a psychological
disaster if everyone did it, but maybe you should.
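The information-theoretic claim can be sanity-checked with a quick Shannon-entropy calculation (a sketch; the function name and the example probabilities are mine):

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Using up/down/no-vote equally often maximizes information per reader:
print(entropy([1/3, 1/3, 1/3]))     # log2(3) ≈ 1.585 bits
# A typical skewed policy (mostly abstaining) carries less:
print(entropy([0.05, 0.05, 0.90]))  # ≈ 0.569 bits
```

The uniform distribution always maximizes entropy over a fixed alphabet, which is the sense in which equal use of the three signals "communicates the most."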
2witzvo10y
It seems to me that the total voting ought to reflect the "net reward" we want
to give the poster for their action of posting, like a trainer rewarding good
behavior or punishing bad. For this reason, my voting usually takes into account
the current total score. I think the community already abides by this for most
negatively scored posts -- they usually don't sail much below -2. For posts that
I feel I really benefited from, though, I don't really follow my own policy per
se. -- I just "pay back" what I got out of it to them.
I basically only downvote if there's some line of argument that I object to in
the post. I should more often say specifically what I'm objecting to when I do
this.
My opinion is it has to depend on the current score of the post. [At least under
the current system, which reports, if you will, net organic responses; in a
different system where responses were from solicited peer-review requests,
different behavior would be warranted.]
Good questions. I don't know. There's some further discussion here
[http://lesswrong.com/lw/z/information_cascades/].
1hyporational10y
This should be implemented in the system if done at all. Downvoting
"nondeservingly" upvoted posts will make obvious but true comments look
controversial. I think inconsistently meta-gaming the system just makes it less
informative.
If you don't think something deserves the upvotes, but isn't wrong, then simply
don't vote.
ETA: I assume you didn't mean that downvoting to balance the votes is good, but
you didn't mention it either.
0witzvo10y
Good point. I don't actually do that, I do the "don't vote" policy you
mentioned, but I hadn't thought about why, or even noticed that I do it
correctly. Thanks. Your point that it would make the voting look controversial
is well taken.
I would be tempted to upvote something that I thought had karma that was too
low. This would tend to cause it to look "controversial" when, maybe, I agreed
that it deserved a negative score. Is upvoting behavior also a bad idea in this
case and I should just "not vote"?
I don't see how that's possible without it having more information.
I don't want to overthink this too much as I can't help but think that these
issues are artifacts of the voting system itself being a bit crude: e.g. should
I be able to "vote" for a target karma score instead of just up or down? The
score of the post could be the median target score.
0hyporational10y
I don't know. I'm quite green here too. I don't usually read heavily downvoted
comments, as they're hidden by default. Downvoted comments are less visible
anyway, so any meta-gaming on them has less meaningful impact.
I might upvote a downvoted comment if I don't understand why it's downvoted and
want it to be more visible so that discussion continues. It would be good to
follow up with a comment to clarify that, but many times I'm too lazy :(
I think making the system more complicated would just make people go even more
meta.
0Scott Garrabrant10y
I think that if we could coordinate perfectly what we mean by good comments, and
each comment has a score between 0 and 1, then we should all upvote a comment
with a positive score with a probability equal to its score, and downvote a
comment with negative score with probability equal to its negative score.
0witzvo10y
This would cause the karma assigned to a post to drift over time unboundedly
with expectation of: (the traffic that it recieves)*(the average score of
voters), which seems problematic to me.
Nitpick: maybe you want the score to run between -1 and 1 and voting probability
to be according to the absolute score? I'm confused by your phrase "comment with
negative score".
0Scott Garrabrant10y
"negative score" means the negative of the score you give. If you give -1/2, you
downvote with probability 1/2.
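The unbounded-drift objection can be illustrated with a toy simulation of the positive-score case (all names and parameters here are hypothetical):

```python
import random

def simulate_karma(traffic, score, seed=0):
    """Each viewer upvotes with probability `score` (0..1); karma is the sum of votes."""
    rng = random.Random(seed)
    return sum(1 for _ in range(traffic) if rng.random() < score)

# Karma grows roughly linearly with traffic (expectation = traffic * score),
# so two equally good comments end up with very different scores if one
# simply gets more readers:
print(simulate_karma(1_000, 0.4))   # ≈ 400
print(simulate_karma(10_000, 0.4))  # ≈ 4000
```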
0[anonymous]10y
If we could coordinate perfectly, we'd delegate all the voting to one person.
Can you try solving the problem with weaker assumptions?
Why are AMD and Intel so closely match in terms of processor power?
If you separated two groups and incentivized them to develop the best processors and came back in 20 years, I wouldn't expect both groups to have performed comparably. Particularly so if the one that is doing better is given more access to resources. I can think of a number of potential explanations, none of which are entirely satisfactory to me, though. Some possibilities:
there is more cross-talk between the companies than I would guess (through hiring former employees, reading pa
I'm afraid I didn't keep information about the citation, but when I was reading
up on chip fabs for my essay [http://www.gwern.net/Slowing%20Moore%27s%20Law] I
ran into a long article claiming that there is a very strong profit motive for
companies to stack themselves into an order from most expensive & cutting-edge
to cheapest & most obsolete, and that the leading firm can generally produce
better or cheaper but this 'uses up' R&D and they want to dribble it out as
slowly as possible to extract maximal consumer surplus.
4Vaniver10y
There is lots of cross-talk. Note also that Intel and AMD buy tools from other
companies- and so if Cymer [http://www.cymer.com/] is making the lasers that
both use for patterning, then neither of them has a laser advantage.
I find it in general very hard to predict what kind of acceptance my posts will receive, based on the karma score each gets. While as a policy I try not to post strategically (that is, rationality quotes, pandering to the Big Karmers, etc.), but only those things I find relevant or interesting for this site, I have found no way to reliably gauge the outcome. It is particularly bewildering to me that comments that (I hope) are insightful get downvoted to the limit of oblivion or simply ignored, while trivial comments or requests for clarification are the most upvoted. Has someone constructed a model of how the consensus works here on LW? Just curious...
Curious about specific examples.
This can have many reasons. Posting too late, when people don't read the
article. Using difficult technical arguments, so people are not sure about their
correctness, so they don't upvote.
0MrMind10y
If you click on my name, the first two comments at -2 are the ones: I was
seriously trying to contribute to the discussion.
Yeah, this does not bother me much; I'm more puzzled by the "trivial comment ->
loads of karma" side: "How did you make those graphs" and "How do you say 'open
secret' in English" attracted 5 karma each. "Loads" here should be understood
relative to the average number of points my posts receive.
Before, I modeled karma as a kind of power-law: all else being equal, those who
have more karma will receive more karma for their comment. So I guessed that the
more you align to the modus cogitandi of Big Karmers, the more karma you will
receive. This doesn't explain the situation above, though.
Upvoted to reinforce explaining what your votes mean.
3A1987dM10y
Upvoted because I was going to write the same thing, and upvoting the comment is
what I usually do when I see that someone has already written what I was going
to write.
5witzvo10y
+1 for explaining why. I'm not sure I agree with the behavior particularly,
since it could give a lot of credit for something relatively obvious. I probably
wouldn't do it if the question had more than +5 already unless I was really
glad.
Oh, I will give extra +1's when the context made me think it would be hard for
the person to ask the question they asked, e.g. because it challenges something
they'd been assuming.
6witzvo10y
As a rule I don't think it's productive to worry about karma too much, and I'm
going to assume you agree and that you're asking "what am I missing, here" which
is a perfectly useful question.
Before I get into your question, here's an example
[http://lesswrong.com/lw/cz8/proposal_show_up_and_down_votes_separately/6sq6]
that was at -2 when I encountered it, but that I see has now risen to having +5,
so there's definitely some fluidity to the outcome (you might be interested in
the larger discussion on that page anyway).
So the two examples that you mention at -2 presently are 1
[http://lesswrong.com/lw/ish/open_thread_october_7_october_12_2013/9vty] and 2
[http://lesswrong.com/lw/ish/open_thread_october_7_october_12_2013/9vrn].
Part of the problem in those examples seems to be an issue of language, but I
don't think that's all of it. For example, you offer to clarify that when you
say "natural inclination" you mean an "innate impulse [that] is strongly present
almost universally in humans" and give examples of things humans seek regularly
("eating, company, sex"). From my interpretation of the other posts, when they
say "natural inclination" they mean "behavior that would be observed in a group
of humans (of at least modest size) unless laws or circumstances specifically
prevent it". I suspect that the downvotes could be because your meaning was
sufficiently unexpected that even when you wrote to clarify what it was, they
couldn't believe that that was what you meant. And, on balance, no, that doesn't
seem right to me since you were making an honest effort to clarify terms.
For what it's worth, here's why I'd object to your choice of terms, and this
could explain some of the downvotes, since it's obviously much less effort to
just downvote than explain. I'd object because your definition inserts an
implied "and the situation is normal" into the definition. For example, in
normal situations a person would rather have an ice cream than kill someone. But
if the situ
19eB110y
There are possible privileged situations, however. If you are in the environment
of evolutionary adaptedness, living with your tribe out on the African savannah,
how many days per year are you going to have an "inclination" to kill another
human, vs. how many days are you going to have an "inclination" to eat, have sex
and socialize. I'm guessing the difference is something like 1 vs. 360, unless
tribal conflicts were much more common in that environment than I expect, and
people desired to kill during those conflicts more than I expect (furthermore I
would expect people to see it as an unfortunate but necessary action, which
doesn't jibe with my sense of the definition of "inclination", but that's not
critical to the point). Clearly putting them on the same level carves up human
behavior in a particular way which is not obvious just from the term "natural
inclination."
2witzvo10y
That all seems fair to me. To be honest I haven't read enough of the context to
know how relevant these distinctions are to it, and I agree the term seems
problematic which is all the more reason that trying to nail it down is actually
useful behavior, hence MrMind's concern, I guess.
0hyporational10y
One reason is people vote to signal simple agreement.
Not saying it would work, but there could be "warm fuzzy votes" that don't
contribute to karma at all, or contribute much less, and are shown separately.
Comments could be arranged by those too if need be. It would be an interesting
experiment to see how much people agree with posts that have no other value.
0witzvo10y
As for a model... obviously not a full model:
Statements that are short and that are non-controversially in line with the
position that most readers would approve of and flow with the context well and
get a lot of "traffic" are the most likely to have skyrocketing +1's.
If it has a useful insight or a link to an important resource this also helps,
but only if it's lucid enough in its explanation.
I am interested in reading further on objective vs subjective Bayesianism, and possibly other models of probability. I am particularly interested in something similar to option 4 in What Are Probabilities, Anyway. Any recommendations on what I should read?
I recently memorized an 8-word passphrase generated by Diceware.
Given recent advances in password cracking, it may be a good time to start updating your accounts around the net with strong, prescriptively-generated passphrases.
Added: 8-word passphrases are overkill for most applications. 4-word passphrases are fairly secure under most circumstances, and the circumstances in which they are not may not be helped by longer passphrases. The important thing is avoiding password reuse and predictable generation mechanisms.
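For illustration, here is a minimal sketch of Diceware-style generation using a cryptographically secure RNG; the tiny word list is a stand-in for the real 7776-word Diceware list:

```python
import secrets

# Toy word list standing in for the real 7776-word Diceware list.
WORDS = ["correct", "horse", "battery", "staple", "cloud", "anchor", "maple", "orbit"]

def diceware_phrase(n_words, wordlist):
    """Pick each word independently with a CSPRNG, as Diceware's dice rolls do."""
    return " ".join(secrets.choice(wordlist) for _ in range(n_words))

print(diceware_phrase(4, WORDS))
```

The security comes entirely from the unpredictable selection and the list size, not from the words themselves, which is why a predictable generation mechanism (picking words yourself) defeats the scheme.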
I find it much easier to use random-character passwords. Memorize a few, then
cycle them. You'll pretty much never have to update them. If you can't memorize
them all, use software for that.
5arundelo10y
The "dictionary attacks" sentence is a non sequitur. The number of possible
eight-word Diceware passwords [http://www.wolframalpha.com/input/?i=7776%5E8] is
within an order of magnitude of the number of possible 16-character line noise
passwords [http://www.wolframalpha.com/input/?i=94%5E16].
4hyporational10y
You're right, removed it. I'm not sure I understand why people prefer using
passphrases though. Isn't it incredibly annoying to type them over and over
again?
5arundelo10y
I think the main advantage is that they're easier to memorize
[http://xkcd.com/936/].
Another is that, although they're harder to type because they're longer, they're
easier to type because they don't have a bunch of punctuation and uppercase
letters, which are harder to type on some smartphones (and slower to type on a
regular keyboard). And while I'm at it, one more minor advantage (not relevant
for people making up their own passwords) is that the average person does not
know punctuation characters very well, e.g., does not know the difference
between a slash and a backslash.
0hyporational10y
They may be easier to type the first few times, but after your "muscle memory"
gets it even the trickiest line noise is a breeze.
That smartphone thing is a good point, though. My phone is my greatest security
risk because of this problem. Probably should ditch the special characters.
0Douglas_Knight10y
Yes, no one should use line noise passwords because they are hard to type. If
you want 100 bits in your password, you should not use 16 characters of line
noise. But maybe you should use 22 lower case letters.
The xkcd cartoon is correct that the passwords people do use are much less
secure than they look, but that is not relevant to this comparison. And
lparrish's links say that low entropy pass phrases are insecure.
But why do you want 100 bit passwords? The very xkcd cartoon you cite says that
44 bits is plenty. And even that is overkill for most purposes. Another xkcd
[https://xkcd.com/792/] says "The real modern danger is password reuse." Without
indicating when you should use strong passwords, I think this whole thread is
just fear-mongering.
0lsparrish10y
According to the Diceware FAQ
[http://world.std.com/~reinhold/dicewarefaq.html#howlong], large organizations
might be able to crack passphrases 7 words or less in 2030. Of course that's
different from passwords (where you have salted hashes and usually a limit on
the number of tries), but I think when it comes to establishing habits / placing
go-stones against large organizations deciding to invest in snooping to begin
with, it is worthwhile. Also, eight words isn't that much harder than four words
(two sets of four).
One specific use I have in mind where this level of security is relevant is
bitcoin brainwallets for prospective cryonics patients. If there's only one way
to gain access to a fortune, and it involves accessing the memories of a
physical brain, that increases the chances that friendly parties would
eventually be able to reanimate a cryonics patient. (Of course, it also means
more effort needs to go into making sure physical brains of cryonics patients
remain in friendly hands, since unfriendlies could scan for passphrases and
discard the rest.)
0hyporational10y
I don't understand what you mean by this. How are salting and limits properties
of passwords (but not passphrases)?
0lsparrish10y
What I meant is that those properties are specific to the secret part of login
information used for online services, as distinct from secret information used
to encrypt something directly.
0[anonymous]10y
Sorry, what I meant is something more like 'encryption phrases' and
'challenge words'. Either context could in principle refer to a word or a
phrase, actually. However, when you are encrypting secret data that needs to
stay that way for the long term, such as your private PGP key, it is more
important to pick something that can't conceivably be brute forced, hence the
usage of the term 'passphrase' usually applies to that. If someone steals your
hard drive or something, your private key will only stay private for as long as
the passphrase you picked is hard to guess, and they could use that to decrypt
any incoming messages that used your public key.
When you are simply specifying how to gain access to an online service, it is a
bit less crucial to prevent the possibility of brute forcing (so a shorter
'password' is sort of okay), but it is crucial for the site owner to use things
like salt [https://crackstation.net/hashing-security.htm] and
collision-resistant hash functions to prevent preimage attacks
[http://en.wikipedia.org/wiki/Preimage_attack], in the event that the
password-hash list is stolen. (Plaintext passwords should never be stored, but
unsalted hashes are also bad.)
If someone was using a randomly generated phrase of 4+ words or so for their
'password', salt would be more or less unnecessary due to the extremely high
probability that it is unique to begin with. This makes for one less thing you
have to trust the site owner for (but then, you do still have to trust that they
aren't storing plaintext, that the hash they use is preimage-resistant, etc).
I'm not sure if it is possible to use salt with something like PGP. I imagine
the random private key is itself sufficient to make the encrypted key as a whole
unique. Even if the passphrase itself were not unique, it would not be obvious
that it isn't until after it is cracked. The important thing to make it
uncrackable is that it be long and equiprobable with lots of other possibilities
(which inc
-1Douglas_Knight10y
Yes, there are some uses. I'm not convinced that you have any understanding of
the links in your first comment and I am certain that it was a negative
contribution to this site.
If you really are doing this for such long term plans, you should be concerned
about quantum computers and double your key length. That's why NSA doesn't use
128 bits. Added: but in the particular application of bitcoin, quantum computers
break it thoroughly.
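The "double your key length" advice comes from Grover's algorithm, which searches an n-bit keyspace in roughly 2^(n/2) quantum operations; a back-of-envelope sketch of the standard halving rule:

```python
# Grover's algorithm searches an n-bit keyspace in about 2**(n/2) quantum
# operations, so a symmetric key's effective strength is roughly halved.
def effective_bits_against_grover(key_bits):
    return key_bits // 2

assert effective_bits_against_grover(128) == 64   # marginal against a quantum attacker
assert effective_bits_against_grover(256) == 128  # doubling the key restores the margin
```

(Grover only applies to brute-force search; as noted, public-key schemes like the ECDSA behind Bitcoin addresses fall to Shor's algorithm outright rather than being merely weakened.)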
1lsparrish10y
Well, that's harsh. My main intent with the links was to show that the system
for picking the words must be unpredictable, and that password reuse is harmful.
I can see now that 8-word passphrases are useless if the key is too short or
there's some other vulnerability, so that choice probably gives us little more
than a false sense of security.
This is news to me. However, I had heard that there are only 122 bits due to the
use of RIPEMD-160 as part of the address generation mechanism.
0hyporational10y
Rudeness doesn't help people change their minds. Please elaborate what you mean
by this. Even if he's wrong, the following discussion could be a positive
contribution.
3Omid10y
There are 7776 words in Diceware's dictionary. Would you rather memorize 8 short
words, 22 letters (a-z, case insensitive), or 16 characters (a-z, case sensitive,
plus numerals and punctuation marks)?
4hyporational10y
If I really had to type them in myself every time I wanted to use them, 16
random characters absolutely. Repeatedly typing the 8 words compared to 16
characters probably takes more time in the long run than memorizing the random
string. Memorizing random letters isn't significantly easier in my experience
than memorizing random characters.
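For what it's worth, Omid's three options are deliberately entropy-matched, which a quick calculation shows (assuming a ~94-symbol printable-character set for the third option):

```python
import math

def entropy_bits(symbols, length):
    """Bits of entropy for `length` independent uniform picks from `symbols` options."""
    return length * math.log2(symbols)

diceware = entropy_bits(7776, 8)              # 8 Diceware words
letters = entropy_bits(26, 22)                # 22 case-insensitive letters
mixed = entropy_bits(26 * 2 + 10 + 32, 16)    # 16 chars from ~94 printables

print(round(diceware, 1), round(letters, 1), round(mixed, 1))
# -> 103.4 103.4 104.9
```

All three sit around 103-105 bits, so the choice really is just about which is easiest for you to memorize and type.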
I find myself over-sensitive to negative feedback and under-responsive to positive feedback.* Does anyone have any advice/experience on training myself to overcome that?
*This seems to be a general issue in people with depression/anxiety; I think it's something to do with how dopamine and serotonin mediate the reward system, but I'm not an expert on the subject. Curiously, sociopaths have the opposite issue, under-responding to negative feedback.
Spend more cognitive resources on dealing with positive feedback.
When someone says that you have a nice shirt, think about why they said it.
Probably they wanted to make you feel good. What does that mean? They care about
making you feel good. You matter to them.
Gratitude journaling is a tool with a good evidence base. At the end of every
day, write down all good feedback that you got. It doesn't matter if it was
trivial. Just write stuff down.
Meditation is also a great tool.
I wouldn't be sure about that claim. I think sociopaths rather have different
criteria of what constitutes negative feedback. I think physical pain would have
the same effect on a sociopath as on a regular person.
2PECOS-910y
The Feeling Good Handbook
[http://www.amazon.com/Feeling-Good-Handbook-David-Burns/dp/0452281326] has good
evidence as a treatment for depression and could help you to identify and
address your automatic thoughts caused by negative feedback.
I'd like to highly recommend Computational Complexity by Christos H. Papadimitriou. Slightly dated in a fast changing field, but really high quality explanations. Takes a bit more of a logic-oriented approach than Hopcroft and Ullman in Introduction to Automata Theory, Languages, and Computation. I think this topic is extremely relevant to decision theory for bounded agents.
Thanks for the recommendation, but isn't this sort of thing better suited for
the Media thread?
6JayDee10y
I would recommend the Best Textbooks on Every Subject
[http://lesswrong.com/lw/3gu/the_best_textbooks_on_every_subject/] thread,
rather. This comment (upvoted, incidentally) very nearly meets the requirements
there:
Those who have been reading LessWrong in the last couple of weeks will have little difficulty recognizing the poster of the following. I'm posting this here, shorn of identities and content, as there is a broader point to make about Dark Arts.
These are, at the time of writing, his two most recent comments. I will focus on the evidential markers, and have omitted everything else. I had to skip entirely over only a single sentence of the original, and that sentence was the hypothetical answer to a rhetorical question.
I'm having difficulty recognizing the poster of the following, and searching
individual phrases is only turning up this comment. While I approve of making
broad points about Dark Arts, I'm worried that you're doing so with a parable
rather than an anecdote, which is a practice I disapprove of.
3shminux10y
I'm guessing that RichardKennaway means this post
[http://lesswrong.com/user/JoshElders/submitted/] and the related open thread.
2Vaniver10y
I, thankfully, missed that the first time around. Worry resolved. (Also, score
one for the deletion / karma system, that that didn't show up in Google
searches.)
0NancyLebovitz10y
I couldn't figure it out, either-- the good news is that someone who's so vague
has a reasonable chance of being so boring as to be forgettable.
0TheOtherDave10y
I'm fairly certain that the user RK is referring to was deleted from the site.
EDIT: But I am wrong! He wrote a post that got deleted, and I got confused.
0arundelo10y
Not as of this writing. [http://lesswrong.com/user/JoshElders/overview/]
0TheOtherDave10y
corrected, thanks
0hyporational10y
I agree that being slippery and vague is usually bad, and one way to employ Dark
Arts.
However, avoiding qualifiers of uncertainty and not softening one's statements
at all exposes oneself to other kinds of dark arts. Even here, it's not
reasonable to expect conversants to be mercifully impartial about everything.
Someone who expects strong opposition would soften their language more than
someone whose statements are noncontroversial.
0Richard_Kennaway10y
There's slippery, and there's vague. The one that I have not named is certainly
being slippery, yet is not at all vague. It is quite clear what he is
insinuating, and on close inspection, clear that he is not actually saying it.
Qualifiers of uncertainty should be employed to the degree that one is actually
uncertain, and vagueness to the degree that one's ideas are vague. In diplomacy
it has been remarked that what looks like a vague statement may be a precise
statement of a deliberately vague idea.
-2niceguyanon10y
If your concerns are valid, then hiding the identity of the accused doesn't help
those who are not aware of whom you are talking about. We're all grown-ups
here; we can handle it.
8Viliam_Bur10y
I think the pattern is also important per se. You can meet the pattern in the
future, in another place.
It's a pattern of how to appear reasonable, cast doubt on everything, and yet
never say anything tangible that could be used against you. It's a way to
suggest that other people are wrong somehow, without accusing them directly, so
they can't even defend themselves. It is not even clear if the person doing this
has some specific mission, or if breeding uncertainty and suspicion is their
sole mission.
And the worst thing is, it works. When it happens, expect such a person to be
upvoted, and people who point at them (such as Richard) downvoted.
0Richard_Kennaway10y
As Viliam_Bur says, it is the general pattern that is my subject here, not to
heap further opprobrium on the one who posted what I excerpted. Goodness knows
I've been telling him to his virtual face enough of what I think of him already.
More from Tolkien.
[http://books.google.co.uk/books?id=12e8PJ2T7sQC&pg=PT142&lpg=PT142&dq=%22Suddenly+another+voice+spoke,+low+and+melodious%22]
I have a hard time terminating certain subroutines in my brain. This most regularly happens when I am thinking about a strategy game or math that I am really interested in. I will continue thinking about whatever it is that is distracting me even when I try not to.
The most visible consequence of this is that it sometimes interferes with my sleep. I usually get to bed at a regular time, but if I get distracted it could take hours for me to get to sleep, even if I cut myself off from outside stimulus. It can also be a ... (read more)
This is pretty much what meditation is for — minus the "force", that is.
2JayDee10y
I use certain videogames for something similar. I've collected a bunch of
(Nintendo DS, generally) games that I can play for five minutes or so to pretty
much reset my mind. Mostly it's something I use for emotions, but the basic idea
is to focus on something that takes up all of that kind of attention - that
fully focuses that part of my brain which gets stuck on things.
Key to this was finding games that took all my attention while playing, but had
an easy stopping point after five minutes or so of play - Game Center CX / Retro
Game Challenge is my go-to, with arcade-style gameplay where a win or loss comes
up fairly quickly.
2Viliam_Bur10y
StepMania [http://en.wikipedia.org/wiki/StepMania] is great for this (needs
specialized hardware). It engages both the mind and the body. When playing at a
challenging level, I must pay full attention to the game -- if my mind starts
focusing on any idea, I lose immediately.
2Emile10y
Intensive exercise - I remember P.J.Eby saying he'd use intensive exercise (in
his case I think it was running across his house) as a "reset button" for the
mind. It's pretty cheap to try! (I have occasionally done that - pushups, usually
- though it's more often to get rid of angry annoyance than distractions)
1kalium10y
Physical pain will do it. Exercise is one option, but for me it always seems to
be the bad "I am destroying my joints" kind of pain so I stop before it hurts
enough to reset my thought patterns. Holding a mug of tea that's almost but not
quite hot enough to burn, and concentrating on that feeling to the exclusion of
everything else, seems to work decently. A properly forceful backrub is better,
though it requires a partner. And if your partner is a sadist then you begin to
have many excellent options.
0hibiscus10y
Addressing the sleep half: if meditation or sleep visualization exercises are
hard for you, try coloring something really intricate and symmetrical. Like
these [http://mandalacoloringmeditation.com/mandala-coloring/free-downloads/].
The idea is to keep your brain engaged enough to not think about the intrusive
thing you were thinking about before, but calm enough to move towards sleep.
0hyporational10y
I read fiction or easy nonfiction. This distracts me from other thoughts, but
isn't engaging enough to keep me awake.
0Lumifer10y
Alcohol.
3Dorikka10y
I don't have a citation, but I've heard that alcohol will screw with your sleep.
Might want to Google if you're thinking about going that route.
-1Lumifer10y
I don't know if a citation would help -- alcohol's effect on sleep (and other
things) is fairly personal. If you don't already know, you'll need to experiment
and find out how it works for you.
In any case, alcohol is just the easiest of the hit-the-brain-below-the-cortex
options. There are other alternatives, too, e.g. sex or stress.
I'd love to hear some first hand accounts. It sounds like all the things I enjoyed about going to church when I was a Christian, without the Christianity part.
If you enjoyed going to church as a Christian, and considered it enough to make
this post, then you should probably just go. There is not much penalty for
trying.
I go to a UU church, which looks kind of similar. (They are not all atheist, but
they are all different things and agree to disagree about theology.) I don't
really enjoy the singing that much, at least not the hymns, and I still enjoy
the experience as an atheist. Just don't expect to get the same level of
intelligence or rationality you get from here though. If you are looking for
good philosophical discussion, that probably isn't the place to get it.
Overview of systemic errors in science-- wishful thinking, lack of replication, inept use of statistics, sloppy peer review. Probably not much new to most readers here, but it's nice to have it all in one place. The article doesn't address fraud very much because it may have a small effect compared to unintentionally getting things wrong.
Account of a retraction by an experiment's author Doing the decent thing when Murphy attacks. Most painful sentence: "First, we found that one of the bacterial strains we had relied on for key experiments was mislabel... (read more)
Stock market investment would seem like a good way to test predictive skills, have there been any attempts to apply lw style rationality techniques to it?
I disagree and hope that more people would update regarding this belief. There
is no alpha (risk adjusted excess returns), at least not for you. Here is why:
1. For all intents and purposes, stock markets are efficient; even if you
don't agree, you would still have to answer the question "what degree of
inefficiency is there that will allow you to extract or arbitrage gains?"
Your "edge" is going to be very, very small if you even have one.
2. Assuming you have identified measurable inefficiencies, your trading costs
will negate it.
3. The biggest players have access to better information, both insider and
public, at faster speeds than you could ever attain and they already
participate in 'statistical arbitrage' on a huge scale. This all makes the
stock market very efficient, and very difficult for you, the individual
investor to game a meaningful edge.
4. The assumption that one could test for significantly better predictive
skills in the stock market would imply that risk-free arbitrage is common –
you could just buy one stock and sell an index fund or vice versa, then
apply this with the law of large numbers and voilà, you are now a millionaire
– but alas, this does not commonly happen.
-2Lumifer10y
I happen to disagree. I don't think this statement is true.
First, there are many more financial markets than the stock market. Second, how
do you know that stock markets are efficient?
That seems to be a bald assertion with no evidence to back it up, especially
given that we haven't specified what kind of trading we are talking about.
The biggest players have their own set of incentives and limitations, they are
not necessarily the best at what they do, and, notably, they are not interested
in trades/strategies where the payoffs are not measured in many millions of
dollars.
I don't see how that implies it. Riskless arbitrage, in any case, does not
require any predictive skills given that it's arbitrage and riskless. You test
for predictive skills in the market by the ability to consistently produce alpha
(properly defined and measured).
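For concreteness, "alpha (properly defined and measured)" is usually the intercept of a CAPM-style regression of portfolio excess returns on market excess returns; a minimal sketch in plain Python, with purely hypothetical return numbers:

```python
def mean(xs):
    return sum(xs) / len(xs)

def capm_alpha(portfolio, market, risk_free=0.0):
    """Per-period CAPM alpha and beta: beta = cov(p, m) / var(m);
    alpha is the mean excess return not explained by market exposure."""
    p = [r - risk_free for r in portfolio]
    m = [r - risk_free for r in market]
    mp, mm = mean(p), mean(m)
    beta = (sum((a - mp) * (b - mm) for a, b in zip(p, m))
            / sum((b - mm) ** 2 for b in m))
    alpha = mp - beta * mm
    return alpha, beta

# Hypothetical monthly returns, purely illustrative:
port = [0.02, -0.01, 0.03, 0.01, -0.02, 0.04]
mkt = [0.01, -0.02, 0.02, 0.00, -0.01, 0.03]
alpha, beta = capm_alpha(port, mkt)
```

A positive alpha over just a few periods means almost nothing, which is the crux of the disagreement above: distinguishing skill from variance requires a long, consistent track record.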
6niceguyanon10y
Upvoted because your reservations are probably echoed by many.
I'd like to change your mind specifically when it comes to "playing the stock
market" for excess returns. My full statement is "There is no alpha (risk
adjusted excess returns), at least not for you". This reflects my belief that
while alpha is certainly measurable and some entities may achieve long term
alpha, for most people this will not happen and will be a waste of time and
money.
First, the OP mentions the stock market; I'm not particularly picking on it. Second, for
all intents and purposes for the individual, it is. Think about it this way:
instead of asking whether or not the stock market is efficient, as if it were
binary, let's just ask how efficient it is. In the set of all markets, is the
stock market among the most efficient markets that exist? I see no reason
why it wouldn't be. Have you ever played poker with 9 of the best players
in the world? Chances are you haven't, because they aren't likely to be part of
your local game, but the stock market is easy to enter and anyone may
participate. While you sit there analyzing your latest buy-low-and-sell-high
strategy, you are playing against top-tier mathematicians and computer engineers
working together with the backing of institutions. A lone
but very smart and rational programmer isn't likely to win. Why would
you choose to make that the playground for testing your prediction skills?
There are better places, like PredictionBook.
Even dirt-cheap discount brokers charge about $5 a trade, but if you were
something of a professional then you could join a prop firm and get even cheaper
rates, maybe $.005 per share. But now you have the problem of maintaining a volume of
trades in order to keep that rate. If you are a buy-and-holder you would still
need to diversify and balance your portfolio with transaction trades, 1. to
prove you statistically did better than the market rather than by variance, 2. to
prevent individ
3Lumifer10y
That's a remarkably low bar.
A great deal of things will not happen "for most people". Getting academic
tenure, for example. Or having a net wealth of $1m. Or having travelled to the
Galapagos Islands. Etc., etc.
Yes, but that's the basic uninformed default choice when people talk about
financial markets. It's like "What do you think about hamburgers? Oh, I think
McDonalds is really yucky". Um, there's more than that.
If you look at what's available for an individual American investor with, say,
$5-10K to invest, she can invest in stocks or bonds or commodities (agricultural
or metals or oil or precious metals or...) or currencies or spreads or
derivatives -- and if you start looking at getting exposure through ETFs, you
can invest into pretty much anything.
The focus on the stock market is pretty much a remnant from days long past.
I don't know. It depends on how smart and skilled he is.
He might also join forces with some smart friends. Become, y'know, one of those
teams of "top tier mathematicians and computer engineers" who eat the lunch of
plain-vanilla investors. But wait, if the markets are truly efficient, what are
these top-tier people doing in there anyway? :-/
Because the outcomes are direct and unambiguous. Because some people like
challenges. Because it's a way to become rich quickly.
Mutual fund managers are very restricted in what they can do. Besides outright
constraints (for example, they can't go short) they are slaves to their
benchmarks.
Oh, no. "Riskless" and "I think it's as good as riskless" are very, very
different things.
That doesn't get you anywhere near "riskless". That just makes you hedged with
respect to the market, hopefully beta-hedged and not just dollar-hedged.
True, but people show a very consistent ability to come up with new ones when
old ones die.
In any case, no one is arguing that you can find a trade or a strategy and then
milk it forever. You only need to find a strategy that will work for long enough
for you to
4WalterL10y
Disclaimer: I day trade, so this might be influenced by defensiveness.
The thinking patterns I've learned on LW haven't really helped me to discover
any new edge over the markets. Investment, or speculation, feels more like Go or
blackjack as an activity. Being a rationalist doesn't directly help me notice
new trades or pick up on patterns that the analysts I read haven't already seen.
On the other hand, the most difficult thing about dealing with financial matters
is remaining calm and taking the appropriate action. LW techniques have helped
me with this a lot. I believe that reading LW has made me a more consistent
trader.
I'm not sure that the above was written clearly, so let me try again. My
proficiency as a speculator goes up and down based on my state of mind. Reading
LW hasn't made the ups higher, but it's made me less likely to drop to a valley.
On a tangent, while I'm thinking about it.
Has anyone else just been baldly disbelieved if they mention that they made
money in a nontraditional way? The only other time I've seen it happen is making
money at Vegas. I've met people who seem to have 'The House Always Wins', or
'You Can't Beat The Market' or 'Sweepstakes/Lotteries Are A Waste Of Money' as
an article of faith to the point that, presented with a counter example, they
deny reality.
4[anonymous]10y
At my current level of investment, I probably have received substantial benefit
from other skills that seem Less Wrong related that are not predictive, like not
panicking, understanding risk tolerance and better understanding the math behind
why diversification works.
But I suppose those aren't particularly unique to Less Wrong even though I feel
like reading the site does help me apply some of those lessons.
0ChristianKl10y
I would guess that to the extent that some hedge fund uses LW-style rationality
techniques to train the predictive skills of their staff, they wouldn't be public
about effective techniques.
A while back I posted a comment on the open thread about the feasibility of permanent weight loss. (Basically: is it a realistic goal?) I didn't get a response, so I'm linking it here to try again. Please respond here instead of there. Note: most likely some of my links to studies in that comment are no longer valid, but at least the citations are there if you want to look those up.
I think the substance is that there are plenty of people who change their weight
permanently. On the other hand the evidence for particular interventions isn't
that good.
7passive_fist10y
None of those address permanent weight loss per se. They all address the more
specific problem of permanent weight loss through dietary modification.
A successful approach to weight loss would incorporate a change in diet and
exercise habits along with an investigation of the 'root cause' of the excess
weight, i.e. the psychological factor that causes excessive eating (Depression?
Stress? Pure habit? etc.)
I also question your implicit premise that "If it ain't permanent it ain't worth
doing". That sounds like a rationalization to me. For a woman who's 25 and
looking to maximize her chance of reproductive success (finding a mate), 'just 5
years' of weight loss would be extraordinarily superior to no weight loss.
Permanent weight loss would be only marginally better.
5shokwave10y
(Barring you being a metabolic mutant. If you have tried counting calories and
it didn't work for you, then please ignore this post; weight loss is a lot more
complicated than how I am about to describe it here.)
Permanent weight loss is possible and feasible; however it will probably require
constant effort to maintain.
For example, count your daily caloric intake on myfitnesspal.com (my username is
shokke, if you wish to use the social aspect of it too). Eat at a caloric
deficit (TDEE, your total daily energy expenditure, minus ~500 kcal) until your
desired weight is attained, then continue counting calories and eat at maintenance (TDEE) indefinitely. If you stop
counting calories you will very likely regain that weight.
This requires you to count calories for the rest of your life, or at least until
you no longer care about your weight. Or we develop a better method of weight
control.
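To make the arithmetic concrete, here's a rough linear estimate using the common (and admittedly crude) approximation of ~7700 kcal per kg of body fat; treat it as a planning aid, not a physiological model:

```python
def weeks_to_goal(current_kg, goal_kg, daily_deficit_kcal=500, kcal_per_kg=7700):
    """Rough linear estimate of time to reach a goal weight at a steady
    daily caloric deficit. ~7700 kcal per kg of fat is a common approximation;
    real loss slows as TDEE falls with body weight."""
    total_kcal = (current_kg - goal_kg) * kcal_per_kg
    return total_kcal / (daily_deficit_kcal * 7)

# e.g. losing 10 kg at a 500 kcal/day deficit:
print(round(weeks_to_goal(90, 80), 1))  # -> 22.0 weeks
```

Note the model assumes the deficit stays constant; in practice TDEE drops as you lose weight, so the tail end takes longer than the formula suggests, which is part of why the "count indefinitely" advice above matters.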
5FiftyTwo10y
Is there a lesswrong group on myfitnesspal? Can we make one?
Edit
I've just made one [http://www.myfitnesspal.com/groups/home/17058-lesswrong]
I believe there is a named cognitive bias for this concept but a preliminary search hasn't turned anything up:
The tendency to use measures or proxies that are easily available rather than the ones that most accurately measure the outcome you actually care about.
I have this fragment of a memory of reading about some arcane set of laws or customs to do with property and land inheritance. It prevented landowners from selling their land, or splitting it up, for some reason. This had the effect of inhibiting agricultural development sometime in the feudal era or perhaps slightly after. Anyone know what I'm talking about?
(I'm aware of the opposite problem, that of estates being split up among all children (instead of primogeniture) which caused agricultural balkanization and prevented economies of scale.)
This sounds like the system that France had before the first French Revolution. That is, up until 1789; I'm not sure when it started. I wouldn't be surprised if a similar system existed in other European countries at around the same time, but I'm not sure which. (I've only been reading history for a couple years, and most of it has been research for fiction I wanted to write, so my knowledge is pretty specifically focused.)
Under this system, the way property is inherited depends on the type of property. Noble propes is dealt with in the way you describe - it can't be sold or given away, and when the owner dies, it has to be given to heirs, and it can't be split among them very much. My notes say the amount that goes to the main heir is the piece of land that includes the main family residence plus 1/2 - 4/5 of everything else, which I think means there's a legal minimum within that range that varies by province, but I'm not completely sure. Propes* includes lands and rights over land (land ownership is kind of weird at this time - you can own the tithe on a piece of land but not the land itself, for example) that one has inherited. Noble propes is propes that belongs to a noblepers... (read more)
I always wondered why people didn't just buy a square inch of land if that's all
it took to be noble.
6mare-of-night10y
Yeah, at least in France, land can't make you noble, even if it's a whole noble
fief with a title attached. (Then you're just a rich commoner who owns a title
but can't use it.) You could become noble by holding certain jobs for a long
enough time (usually three generations), though. And people did buy those. (Not
through bribes - the royal government sold certain official posts to raise
revenues, so it was legal.)
There was also a sort of real estate boom after the revolutionary government
passed some laws to make it easier for commoners to buy land, which was sort of
like what you describe - all the farmers who could afford it would buy all the
land they could at higher values than it was worth, because it made them feel
like they were rich landowners.
6Protagoras10y
Adam Smith reported that this was how the law worked in the Spanish territories
in the Americas, in order to ensure the continued existence of a wealthy and
powerful landed aristocracy and so maintain social stability. He theorized that
this policy was the reason that the Spanish territories were so much poorer than
the English territories, even though the former had extensive gold deposits and
the latter did not.
0bramflakes10y
Yeah, I did some more research; apparently they were called "fee tails" or
"entails". They were designed to keep large estates "in the family", even if
that ended up being a burden to future generations.
As I want to fix my sleep cycle I am looking for a proper full-spectrum light bulb to screw into my desk lamp. But when I shop for "full spectrum" lights it turns out that they only have three peaks and do not even come near a black body spectrum. Is there something for less than a small fortune, suitable for a student like me? E27 socket, available in the EU.
I can ask more generally: What is the lighting situation at your desk and at your home? I aim for lighting very low in blue in the evening and as close to full daylight during work. For th... (read more)
What evidence do you have that full spectrum light is beneficial? It seems you
already know that it's the blue spectrum that primarily controls the circadian
rhythm.
0Metus10y
No particular evidence, but the closer light is to natural sunlight the better
it looks. I could also argue that the closer I come to 'natural' conditions,
that is, sun-like light, the better I should fare.
1RomeoStevens10y
Orange goggles/glasses for late at night aren't that bad and are very cheap. I
don't have a good solution for the full spectrum issue. MIRI is getting by with
the regular full spectrum bulbs AFAIK (is there a followup on the very bright
lights experiment?)
0David_Gerard10y
I use a bedside lamp with a full-size Edison screw (I think E27 is full size).
Daylight-spectrum bulbs are readily available in all manner of fittings on eBay.
Last lot we got were 6x30W (equivalent 150W) with UK bayonet fittings for £5
each (though I don't use something that bright for my bedside lamp).
The essence of EA is that people are equal, regardless of location. In other words, you'd rather give money to poor people in far away countries than people in your own country if it's more effective, even though the latter feel intuitively more close to you. People care more about their own countries' citizens even though they may not even know them. Often your own country's citizens are similar to you culturally and in other ways, more than people in far-way countries and you might feel a certain bond with your ... (read more)
Essentially, I could do things that help other people and me, or I could do things that only help other people and I don't get anything (except a good feeling) from them. The latter set contains many more options, and also more diverse options, so it is pretty likely that the efficient solution for maximizing global utility is there.
I am not saying this to argue that one should choose the latter. Rather my point is that people sometimes choose the former and pretend they chose the latter, to maximize signalling of their altruism.
"I donate money to ill people, and this is completely selfless because I am healthy and expect to remain healthy." So, why don't you donate to ill people in poor countries instead of your neighborhood? Those people could buy greater increase in health for the same cost. "Because I care about my neighbors more. They are... uhm... my tribe." So you also support your tribe. That's not completely selfless. "That's a very extreme judgement. Supporting people in my tribe is still more altruistic than many other people do, so what's your... (read more)
Also, if it turns out that I have three sub-budgets as you describe here (X, Y,
Z) and there exist three acts (Ax, Ay, Az) which are optimal for each budget,
but there exists a fourth act B which is just-barely-suboptimal in all three, it
may turn out that B is the optimal thing for me to do despite not being optimal
for any of the sub-budgets. So optimizing each budget separately might not be
the best plan.
Then again, it might.
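TheOtherDave's point is easy to see with made-up numbers (the utilities here are purely illustrative):

```python
# Hypothetical utilities of each act along the three sub-budget axes (X, Y, Z):
acts = {
    "Ax": (10, 0, 0),  # optimal for budget X alone
    "Ay": (0, 10, 0),  # optimal for budget Y alone
    "Az": (0, 0, 10),  # optimal for budget Z alone
    "B":  (9, 9, 9),   # just-barely-suboptimal on every single axis
}

# Optimizing each sub-budget separately picks Ax, Ay, Az...
best_per_axis = [max(acts, key=lambda a: acts[a][i]) for i in range(3)]
# ...but summing across axes, B dominates everything:
best_overall = max(acts, key=lambda a: sum(acts[a]))

print(best_per_axis)  # -> ['Ax', 'Ay', 'Az']
print(best_overall)   # -> B
```

B totals 27 utility against 10 for each specialist act, so per-budget optimization can leave a lot on the table (assuming, of course, that utilities across budgets are commensurable enough to sum).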
3Viliam_Bur10y
Generally, you are right. But in effective altruism, the axis of "helping other
people" is estimated to do a hundred times more good if you use a separate budget
for it.
This may be suboptimal for the other axes, though. Taking the pledge
[http://www.givingwhatwecan.org/taking-the-pledge] and having your name on the
list could help along the "signalling philanthropy" axis.
0TheOtherDave10y
Fair point.
0[anonymous]10y
Expanding on this, isn't there an aspect of purchasing fuzzies in the usual form
of effective altruism? I know there's been a lot of talk of vegetarianism and
animal-welfare on LW, but there's something in it that's related to this issue.
At least some people believe it's been pretty conclusively proven that mammals
and some avians have a subjective experience and the ability to suffer, in the
same way humans have. In this way humans, mammals, and those avian species are
equal - they have roughly the same capacity to suffer. Also, with over 50
billion animals used to produce food and other commodities every year, one could
argue that the scope of suffering in this sphere is greater than among
humankind.
So let's assume that the animals used in the livestock have an equal ability to
suffer when compared to humans. Let's assume that the scope of suffering is
greater in the livestock industry than among humans. Let's also assume that
we can more easily reduce this suffering than the suffering of humans. I don't
think it's a stretch to say that these three assumptions could actually be true
and this post [http://lesswrong.com/lw/i3s/why_eat_less_meat/] analyzed these
factors in more detail. From these assumptions, we should conclude not only that
we should become vegetarians, like this post
[http://lesswrong.com/lw/i3s/why_eat_less_meat/] argues, but also that
animal welfare should be our top priority. It is our moral imperative to
allocate all the resources we dedicate to buying utilons to animal welfare,
until the marginal utility for it is lower than for human welfare.
Again, just playing devil's advocate. Are there other reasons to help humans
other than the fact they belong to our tribe more than animals? The
counterarguments raised in this post by RobbBB
[http://lesswrong.com/lw/i3s/why_eat_less_meat/9fm2] are very relevant,
especially 3. and 4. Maybe animals don't actually have the subjective experience
of suffering and what we think as suffering
1Viliam_Bur10y
I had this horrible picture of a future where human-utilons-maximizing altruists
distribute nets against mosquitoes as the most cost-efficient tool to reduce the
human suffering, and the animal-utilons-maximizing altruists sabotage the net
production as the most cost-efficient tool to reduce the mosquito suffering...
0[anonymous]10y
That's a worthwhile concern, but I personally wouldn't make the distinction
between animal-utilons and human-utilons. I would just try to maximize utilons
for conscious beings in general. Pigs, cows, chickens and other farm animals
belong in that category, mosquitoes, insects and jellyfish don't. That's also
why I think eating insects is on par with vegetarianism because you're not
really hurting any conscious beings.
1hyporational10y
Since we're playing the devil's advocate here: much more important than
geographical and cultural proximity to me would be how many values I share with
these people I'm helping, were I ever to come in even remote contact with them
or their offspring.
Would you effective altruist people donate mosquito nets to baby eating aliens
if it cost effectively relieved their suffering? If not, where do you draw the
line in value divergence? Human?
So, what's all this about a Positivist debacle I keep hearing about? Who were the positivists, what did we have in common with them, what was different, and how and why did they fail?
Positivism states that the only authentic knowledge is that which allows verification and assumes that the only valid knowledge is scientific.[2] Enlightenment thinkers such as Henri de Saint-Simon, Pierre-Simon Laplace and Auguste Comte believed the scientific method, the circular dependence of theory and observation, must replace metaphysics in the history of thought.
I'm no expert on the history of epistemology, but this
[http://lesswrong.com/lw/ss/no_logical_positivist_i/] may answer some of your
questions, at least as they relate to Eliezer's particular take on our agenda.
-2Scott Garrabrant10y
We consider probabilities authentic knowledge. Since we are Bayesians and not
Frequentists, those probabilities are sometimes about questions which cannot be
scientifically tested. Science requires repeatable verification, and our
probabilities don't stand up to that test.
2Scott Garrabrant10y
I assume this was downvoted for inaccuracy. If so, I would like to know what
you think is wrong, please.
How can I learn to sleep in a noisy environment?
For several years now I've lived in loud apartments, where I can often hear conversations or music late into the night.
I often solve this problem by wearing earplugs. However, I don't want to sleep with earplugs every night, and so I've made a number of attempts to adjust to the noise without earplugs, either going "cold-turkey" for as long as I can stand, or by progressively increasing my exposure to night-time noise.
Despite several years of attempts, I don't think I've habituated at all. What giv... (read more)
Since you are already fine with white noise, you should try using white noise to
drown out the music or voices. A quick search for white noise on the internet
led me to SimplyNoise [simplynoise.com], where you can stream white noise over
the internet. If that doesn't work, try a phone app.
0Richard_Kennaway10y
I don't need such a thing for sleeping, but I find SimplyNoise
[http://simplynoise.com/] gives a satisfactory sound having a much steeper
fall-off with frequency than white noise (flat spectrum of energy vs. frequency)
or pink noise (3dB fall-off per octave), both of which sound unpleasantly harsh
to me. They also have a few soundscapes (thunderstorm, river, etc.). The app is
not free, but cheap, and there are also pay-what-you-want mp3 download files.
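A quick numerical sketch of the difference between these noise colors (numpy assumed). The much steeper fall-off described above is characteristic of brown (Brownian, or "red") noise, roughly 6 dB per octave, which can be approximated as the cumulative sum of white noise:

```python
import numpy as np

def lowfreq_fraction(x):
    """Fraction of signal power below 1/16 of the sampling rate (DC excluded)."""
    p = np.abs(np.fft.rfft(x)) ** 2
    return p[1:len(p) // 8].sum() / p[1:].sum()

rng = np.random.default_rng(0)
white = rng.standard_normal(2 ** 14)  # flat power spectrum
brown = np.cumsum(white)              # ~6 dB/octave fall-off (Brownian / "red")

print(lowfreq_fraction(white))  # ~0.125: power spread evenly across frequencies
print(lowfreq_fraction(brown))  # ~0.999: power concentrated at low frequencies
```

The concentration of energy at low frequencies is exactly why brown noise sounds less harsh than white or pink noise.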
Let's assume society decides that eating meat from animals lacking self-awareness is ethical, and anything with self-awareness is not ethical to eat, and that we have a reliable test to tell the difference. Is it ethical to deliberately breed tasty animals to lack self-awareness, both before or after their species has self-awareness?
My initial reaction to the latter is 'no, it's not ethical, because you would necessarily be using force on self-aware entities as part of the breeding process'. The first part of the question seems to lean towards 'yes', but t... (read more)
I think any question of the form "Assume X is ethical, is X' also ethical?" is
inherently malformed. If my ethics do not follow X, then the change in my ethics
which causes me to include X may be very relevant to X'.
I don't think anyone who is a vegetarian regardless of self-awareness would be
able to answer the question you are asking.
I think the big question that implies this one is "Should we eat baby humans?
Why?"
I believe the answer is "No, because there is no convenient place to draw the
line between baby and adult, so we should put the line at the beginning, and
because other people may have strong emotional attachment to the baby."
I think the first part of my reason is eliminated by your "reliable test." If
the test is completely reliable, that is a very good place to draw the line.
The second part is not going away. It has evolved in us over a very long
time; however, it is not clear whether people will form the same attachment to
non-human babies. I think that our attachment to non-humans is much lower, and
that there is no significant difference between their attachment before and
after self-awareness.
However, the question asked assumes that our ethics distinguish between
creatures with and without self awareness. If that distinction is caused by us
having different levels of emotional attachment to the animal depending on its
self awareness, then it would change my answer.
0savanik10y
As for the first part, I would say that it's fairly common for an individual and
a society to not have perfectly identical values or ethical rules. Should I be
saying 'morals' for the values of society instead?
I would hope that ethical vegetarians can at least give me the reasons for their
boundaries. If they're not eating meat because they don't want animals to
suffer, they should be able to define how they draw the line where the capacity
to suffer begins.
You do bring up a good point - most psychologists would agree that babies go
through a period before they become truly 'self-aware', and I have a great deal
of difficulty conceiving of a human society that would advocate 'fresh baby
meat' as ethical. Vat-grown human meat, I can see happening eventually. Would
you say the weight there is more on the side of 'This being will, given standard
development, gain self-awareness', or on the side of 'Other self-aware beings
are strongly attached to this being and would suffer emotionally if it died'?
The second one seems to be more the way things currently function - farmers
remind their kids not to name the farm animals because they might end up on
their plate later. But I think the first one can be more consistently applied,
particularly if you have non-human (particularly non-cute) intelligences.
1Scott Garrabrant10y
'This being will, given standard development, gain self-awareness' is a common
reason that I missed.
I am partially confused by it, because this notion of "standard development" is
not easily defined, like "default" in negotiations.
0savanik10y
You could put strict statistical definitions around it if you wanted, but the
general idea is, 'infants grow up to be self-aware adults'.
This may not always be true for exotic species. Plenty of species in nature, for
example, reproduce by throwing out millions of eggs / spores / what have you,
of which only a small fraction grow up to be adults. Ideally, any sort of rule
you'd come up with should be universal, regardless of the form of intelligence.
At some point, some computer programs would have to be considered to be people
and have a right to existence. But at what stage of development would that
happen?
I've got a few questions about Newcomb's Paradox. I don't know if this has already been discussed somewhere on LW or beyond (granted, I haven't looked as intensely as I probably should have) but here goes:
If I were approached by Omega and he offered me this deal and then flew away, I would be skeptical of his ability to predict my actions. Is the reason that these other five people two-boxed and got $1,000 due to Omega accurately predicting their actions? Or is there some other explanation… like Omega not being a supersmart being and he never puts $1 milli... (read more)
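One way to quantify that skepticism is to ask how accurate a predictor has to be before one-boxing has the higher expected payoff. A sketch with the standard $1,000,000 / $1,000 amounts (this is just the straightforward expected-value arithmetic, not a resolution of the paradox):

```python
M, K = 1_000_000, 1_000  # big-box and small-box payoffs

def ev_one_box(p):
    # Box B is full exactly when Omega correctly predicted one-boxing.
    return p * M

def ev_two_box(p):
    # Two-boxers always get the $1,000, plus $1M only if Omega mispredicted.
    return (1 - p) * M + K

# Break-even accuracy: p*M = (1-p)*M + K  =>  p = (M + K) / (2*M) = 0.5005
for p in (0.5, 0.5005, 0.6, 0.99):
    print(p, ev_one_box(p), ev_two_box(p))
```

So under this calculation even a barely-better-than-chance predictor (50.05% accuracy) makes one-boxing the better bet, which means the "Omega isn't really that smart" hypothesis only rescues two-boxing if Omega is almost exactly at chance.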
I had a random-ish thought about programming languages, which I'd like comments on: It seems to me that every successful programming language has a data structure that it specialises in and does better than other languages. Exaggerating somewhat, every language "is" a data structure. My suggestions:
* C is pointers
* Lisp is lists (no, really?)
* Ruby is closures
* Python is dicts
* Perl is regexps
Now this list is missing some languages, for lack of my familiarity with them, and also some structures. For example, is there a language which "is" strings? And on this model, what is Java?
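For what it's worth, the "Python is dicts" claim can be made fairly literal: much of Python's own machinery (instance attributes, module namespaces) is exposed as ordinary dicts.

```python
import math

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(1, 2)
print(p.__dict__)             # instance attributes live in a dict: {'x': 1, 'y': 2}
print(vars(p) is p.__dict__)  # True: vars() is just a view onto that same dict
print(type(math.__dict__))    # <class 'dict'>: module namespaces are dicts too
```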
Well, different languages are based on different ideas. Some languages explore
the computational usefulness of a single data structure, like APL with arrays or
Forth with stacks. Lisp is pretty big, but yes you could say it emphasizes
lists. (If you're looking for a language that emphasizes strings, try SNOBOL or
maybe Tcl?) Other languages explore other ideas, like Haskell with purity,
Prolog with unification, or Smalltalk with message passing. And there are
general-purpose languages that don't try to make any particular point about
computation, like C, Java, JavaScript, Perl, Python, Ruby, PHP, etc.
3Lumifer10y
I don't think this idea works.
Pointers in C aren't data structures -- they are a low-level tool for
constructing data structures. Neither closures nor regexps are "data
structures". And Perl was historically well-known for relying on hashes -- which
you instead credited to Python as dicts.
Certainly each programming language has a "native" programming style that it
usually does better than other languages -- but that's a different thing.
0Viliam_Bur10y
Java is classes -- a huge set of standardized classes, so for most things you
want to do, you choose one of those standard classes instead of deciding "which
one of the hundred libraries made for this purpose should I use in this
project?".
At least this was until the set of standardized classes became so huge that it
often contains two or three different ways to do the same thing, and for web
development external libraries are used anyway. (So we have AWT, Swing and
JavaFX; java.io and java.nio; but we are still waiting for the lambda
functions.)
0[anonymous]10y
Different languages are good at different things. For some languages it happens
to be a data structure:
* Lisp is lists
* Tcl is strings
* APL is arrays
* Forth is stacks
* SQL is tables
Other languages are good at something specific which isn't a data structure
(Haskell, Prolog, Smalltalk etc.) And others are general languages that don't
try to make any particular point about computation (C, Java, JavaScript, Perl,
Python, Ruby etc.)
0Cyan10y
I'm not sure R fits this metaphor -- the closest I can get is "R is CRAN", but
the C?AN concept is not unique to R. Hmm... maybe R is data.frames. Java is
prepare your anus for objects.
Interesting comment by Gregory Cochran on torture not being useless as is often claimed.
... (read more)
Possibly related to the halo or overjustification effects; arguments as soldiers seems especially applicable - admitting that torture may actually work is stabbing one's other anti-torture arguments in the back.
I read somewhere that lying takes more cognitive effort than telling the truth. So it might follow that if someone is already under a lot of stress -- being tortured -- then they are more likely to tell the truth.
I decided to publish http://www.gwern.net/LSD%20microdosing ; summary:
Discussion elsewhere:
AI Box Experiment Update
I recently played and won an additional game of AI Box with DEA7TH. Obviously, I played as the AI. This game was conducted over Skype.
I'm posting this in the open thread because unlike my last few AI Box Experiments, I won’t be providing a proper writeup (and I didn't think that just posting "I won!" is enough to validate starting a new thread). I've been told (and convinced) by many that I was far too leaky with strategy and seriously compromised future winning chances of both myself and future AIs. The fact that one of my gatekeepers guessed my tactic(s) was the final straw. I think that I’ve already provided enough hints for aspiring AIs to win, so I’ll stop giving out information.
Sorry, folks.
This puts my current AI Box Experiment record at 2 wins and 3 losses.
Other people have expressed similar sentiments and then played the AI Box experiment. Even the ones who didn't lose updated to "definitely could have lost in a similar scenario."
Unless you have reason to believe your skepticism comes from a different place than theirs, you should update towards gatekeeping being harder than you think.
The Anti-Reactionary FAQ by Yvain. Konkvistador notes in the comments he'll have to think about a refutation, in due course.
I continue blogging on the topic of educational games: Teaching Bayesian networks by means of social scheming, or, why edugames don’t have to suck
I was thinking recently that if Soylent kicks something off and 'food replacement'-type things become a big deal, it could have the massive side effect of putting a lot of people onto diets with heavily reduced animal and animal-product content. Its possible success could inadvertently be a huge boon for animals and animal activists.
Personally, I'm somewhat sympathetic towards veganism for ethical reasons, but the combination of trivial inconvenience and the lack of effect I can have as an individual has prevented me from pursuing such a diet. Soylent would allow me to do so easily, should I want to. Similarly, there are people who have no interest in animal welfare at all. If 'food replacements' become big, it could mean the incidental conversion of those who might otherwise never have considered veganism or vegetarianism to a lifestyle that fits within those bounds, purely for personal cost or convenience reasons.
I know someone whose young child is very likely to die in the near future. This person has (most likely) never heard of cryonics. My model of this person says they are very unlikely to decide to preserve their child even if they knew about it.
I don't know if I should say something. At first I was thinking that I should because the social ramifications are negligible. After thinking about it for a while, I changed my mind and decided that possibly I was just trying to absolve myself of guilt at the cost of offending a grieving parent. I am not sure if this is just rationalization.
Advice?
What expert advice is worth buying? Please be fairly specific and include some conditions on when someone should consider getting such advice and focus on individuals and families versus, say, corporations.
I ask because I recently brainstormed ways that I could be spending my money to make my life better and this was one thing that I came up with and realized I essentially never bought except for visiting my doctor and dentist. Yet there are tons of other experts out there willing to give me advice for a fee: financial advisers, personal trainers, nutritionists, lawyers, auto-mechanics, home inspectors, and many more.
How many people here use Anki, or other Spaced Repetition Software (SRS)?
[pollid:565]
I'm finding it pretty useful and wondering why I didn't use it more intensively before. Some stuff I've been adding into Anki:
I have much more stuff I'd like to Ankify (my notes on Machine Learning, databases, on the psychology of learning; various inspirational quotes, design patterns and high-level software architecture concepts ...).
Some ways I got better at using Anki:
People who want to eat fewer animal products usually have a set of foods that are always okay and a set of foods that never are (where the okay set sometimes still includes some animal products, such as dairy or fish), rather than trying to eat animal products less often without completely prohibiting anything. I've heard that this is because people who try to eat fewer animal products usually end up with about the same diet they had when they were not trying.
I wonder whether trying to eat more of something that tends to fill the same role as animal products would be an effective way to eat fewer animal products.
I currently have a fridge full of soaking dried beans that I have to use up, and the only way I know how to serve beans is the same as the way I usually eat fish, so I predict I'll be eating much less fish this week than I usually do (because if I get tired of rice and beans, rice and fish won't be much of a change). I'm not sure whether my result would generalize to people who use more than five different dinner recipes, though. I should also add that my main goal is learning how to make cheap food taste good by getting more practice cooking beans - eating fewer animal products would just be a side effect.
Now that I write this, I'm wishing I'd thought to record what food I ate before filling my fridge with beans. (I did write down what I could remember.)
I would like recommendations for a small, low-intensity course of study to improve my understanding of pure mathematics. I'm looking for something fairly easygoing, with low time-commitment, that can fit into my existing fairly heavy study schedule. My primary areas of interest are proofs, set theory and analysis, but I don't want to solve the whole problem right now. I want a small, marginal push in the right direction.
My existing maths background is around undergrad-level, but heavily slanted towards applied methods (calculus, linear algebra), statist... (read more)
Inaccessible Is Ungovernable
... (read more)
Is disgust "conservative"? Not in a Liberal society (or likely anywhere else) by Dan Kahan
His argument against Haidt's moral foundations theory of the psychological differences between liberals and conservatives is similar to the ones Vladimir_M and Bryan Caplan made, but he upgrades it with a plausible explanation for why it might seem otherwise. The references are well worth checking out.
I recently found out a surprising fact from this paper by Scott Aaronson. P=NP does not (given current results) imply that P=BQP. That is, even if P=NP there may still be substantial speedups from quantum computing. This result was surprising to me, since most computational classes we normally think about that are a little larger than P end up equaling P if P=NP. This is due to the collapse of the polynomial hierarchy. Since we cannot show that BQP lies in the polynomial hierarchy, we can't make that sort of argument.
Apparently recent work shows that directly giving grants in developing countries has high rates of return. This more or less confirms what GiveWell has said before about microfinance.
My current guide to reading LessWrong can be found here.
I would like to know what people think about my potentially adding it to the sequences page, along with Academian's and XiXiDu's guides.
Just looking for feedback. Cheers.
Apparently replies to myself no longer show up in my inbox.
Kudos to whoever made that happen.
LW tells people to upvote good comments and downvote bad comments. Where do I set the threshold of good/bad? Is it best for the community if I upvote only exceptionally good comments, or downvote only very bad comments, or downvote all comments that aren't exceptionally good, or something else? Has this been studied? Is it possible to make a karma system where this question doesn't arise?
Why are AMD and Intel so closely matched in terms of processor power?
If you separated two groups and incentivized them to develop the best processors and came back in 20 years, I wouldn't expect both groups to have performed comparably -- particularly if the one that is doing better is given more access to resources. I can think of a number of potential explanations, none of which are entirely satisfactory to me. Some possibilities:
I find it in general very hard to predict what kind of reception my posts will receive, based on the karma points of each.
While as a policy I try not to post strategically (that is, rationality quotes, pandering to the Big Karmers, etc.), but only those things I find relevant or interesting for this site, I have found no way to reliably gauge the outcome.
It is particularly bewildering to me that comments that (I hope) are insightful get downvoted to the edge of oblivion or simply ignored, while simple requests for clarification are the most upvoted.
Has someone constructed a model of how consensus works here on LW? Just curious...
I don't know about other people but when I upvote a simple question I'm saying "yeah I was wondering this too"
Upvoted because yeah I do this too.
I am interested in reading further on objective vs subjective Bayesianism, and possibly other models of probability. I am particularly interested in something similar to option 4 in What Are Probabilities, Anyway. Any recommendations on what I should read?
I recently memorized an 8-word passphrase generated by Diceware.
Given recent advances in password cracking, it may be a good time to start updating your accounts around the net with strong, prescriptively-generated passphrases.
Added: 8-word passphrases are overkill for most applications. 4-word passphrases are fairly secure under most circumstances, and the circumstances in which they are not may not be helped by longer passphrases. The important thing is avoiding password reuse and predictable generation mechanisms.
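For anyone who wants to generate one, a minimal sketch (the six-word list here is a toy stand-in; a real Diceware list has 7776 = 6^5 words, giving about 12.9 bits of entropy per word, so roughly 52 bits for four words and 103 for eight):

```python
import secrets

def diceware_passphrase(wordlist, n_words=4):
    # secrets uses the OS CSPRNG; don't use the `random` module for this.
    return " ".join(secrets.choice(wordlist) for _ in range(n_words))

# Toy stand-in list -- use the real EFF/Diceware wordlist in practice.
words = ["correct", "horse", "battery", "staple", "cloud", "anvil"]
print(diceware_passphrase(words, n_words=4))
```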
I find myself oversensitive to negative feedback and under-responsive to positive feedback.* Does anyone have any advice or experience on training myself to overcome that?
*This seems to be a general issue in people with depression/anxiety; I think it's something to do with how dopamine and serotonin mediate the reward system, but I'm not an expert on the subject. Curiously, sociopaths have the opposite issue, under-responding to negative feedback.
I'd like to highly recommend Computational Complexity by Christos H. Papadimitriou. Slightly dated in a fast changing field, but really high quality explanations. Takes a bit more of a logic-oriented approach than Hopcroft and Ullman in Introduction to Automata Theory, Languages, and Computation. I think this topic is extremely relevant to decision theory for bounded agents.
Those who have been reading LessWrong in the last couple of weeks will have little difficulty recognizing the poster of the following. I'm posting this here, shorn of identities and content, as there is a broader point to make about Dark Arts.
These are, at the time of writing, his two most recent comments. I will focus on the evidential markers, and have omitted everything else. I had to skip entirely over only a single sentence of the original, and that sentence was the hypothetical answer to a rhetorical question.
... (read more)
Here is a problem that I regularly face:
I have a hard time terminating certain subroutines in my brain. This most regularly happens when I am thinking about a strategy game or math that I am really interested in. I will continue thinking about whatever it is that is distracting me even when I try not to.
The most visible consequence of this is that it sometimes interferes with my sleep. I usually get to bed at a regular time, but if I get distracted it could take hours for me to get to sleep, even if I cut myself off from outside stimulus. It can also be a ... (read more)
Has anyone had any experience with http://sundayassembly.com ?
I'd love to hear some first-hand accounts. It sounds like all the things I enjoyed about going to church when I was a Christian, without the Christianity part.
Overview of systemic errors in science-- wishful thinking, lack of replication, inept use of statistics, sloppy peer review. Probably not much new to most readers here, but it's nice to have it all in one place. The article doesn't address fraud very much because it may have a small effect compared to unintentionally getting things wrong.
Account of a retraction by an experiment's author: Doing the decent thing when Murphy attacks. Most painful sentence: "First, we found that one of the bacterial strains we had relied on for key experiments was mislabel... (read more)
Stock market investment would seem like a good way to test predictive skills; have there been any attempts to apply LW-style rationality techniques to it?
Has anyone used Fitbit or similar products for tracking activity and sleep?
A while back I posted a comment on the open thread about the feasibility of permanent weight loss. (Basically: is it a realistic goal?) I didn't get a response, so I'm linking it here to try again. Please respond here instead of there. Note: most likely some of my links to studies in that comment are no longer valid, but at least the citations are there if you want to look them up.
I believe there is a named cognitive bias for this concept, but a preliminary search hasn't turned anything up: the tendency to use measures or proxies that are easily available rather than the ones that most accurately measure the outcome you care about.
Anyone know what it might be called?
http://en.wikipedia.org/wiki/Attribute_substitution ?
Calling all history buffs:
I have this fragment of a memory of reading about some arcane set of laws or customs to do with property and land inheritance. It prevented landowners from selling their land, or splitting it up, for some reason. This had the effect of inhibiting agricultural development sometime in the feudal era or perhaps slightly after. Anyone know what I'm talking about?
(I'm aware of the opposite problem, that of estates being split up among all children (instead of primogeniture) which caused agricultural balkanization and prevented economies of scale.)
This sounds like the system that France had before the first French Revolution. That is, up until 1789; I'm not sure when it started. I wouldn't be surprised if a similar system existed in other European countries at around the same time, but I'm not sure which. (I've only been reading history for a couple years, and most of it has been research for fiction I wanted to write, so my knowledge is pretty specifically focused.)
Under this system, the way property is inherited depends on the type of property. Noble propes is dealt with in the way you describe - it can't be sold or given away, and when the owner dies, it has to be given to heirs, and it can't be split among them very much. My notes say the amount that goes to the main heir is the piece of land that includes the main family residence plus 1/2 - 4/5 of everything else, which I think means there's a legal minimum within that range that varies by province, but I'm not completely sure. Propes* includes lands and rights over land (land ownership is kind of weird at this time - you can own the tithe on a piece of land but not the land itself, for example) that one has inherited. Noble propes is propes that belongs to a noblepers... (read more)
As I want to fix my sleep cycle, I am looking for a proper full-spectrum bulb to screw into my desk lamp. But when I shop for "full spectrum" lights, it turns out they have only three spectral peaks and do not even come near a black body in their output. Is there something for less than a small fortune, suitable for a student like me? E27 socket, available in the EU.
I can ask more generally: What is the lighting situation at your desk and at your home? I aim for lighting very low in blue in the evening and as close to full daylight as possible during work. For th... (read more)
I have a question about Effective Altruism:
The essence of EA is that people are equal, regardless of location. In other words, you'd rather give money to poor people in far-away countries than to people in your own country if it's more effective, even though the latter feel intuitively closer to you. People care more about their own country's citizens even though they may not even know them. Often your own country's citizens are similar to you culturally and in other ways, more than people in far-away countries, and you might feel a certain bond with your ... (read more)
Maybe it is a problem of purchasing fuzzies and utilons together, and also being hypocritical about it.
Essentially, I could do things that help other people and me, or I could do things that only help other people and I don't get anything (except a good feeling) from them. The latter set contains many more options, and also more diverse options, so it is pretty likely that the efficient solution for maximizing global utility is there.
I am not saying this to argue that one should choose the latter. Rather my point is that people sometimes choose the former and pretend they chose the latter, to maximize signalling of their altruism.
LWers may appreciate this Onion-style satire: "Another Empty, Lifeless Planet Found".
I've been on a scifi audio kick lately and was wondering are there any good sites other than sffaudio?