I know this comes up from time to time, but how soon until we split into more subreddits? Discussion is a bit of a firehose lately, and has changed drastically from its earlier role as a place to clean up your post and get it ready for main. We get all kinds of meetup stuff, philosophical issues, and so forth, which mostly lack relevance to me. Not knocking the topics (they are valuable to the people they serve), but they aren't helpful for me.
Mostly I am interested in scientific/technological stuff, especially if it is fairly speculative and in need of advocacy. Cryonics, satellite-based computing, cryptocurrency, open source software. Assessing probability and/or optimal development paths with statistics and clean epistemology is great, but I'm not super enthused about probability theory or philosophy for its own sake.
Simply having more threads in the techno-transhumanist category could increase the level of fun for me. But there also needs to be more of a space for long-term discussions. Initial reactions often aren't as useful as considered reactions a few days later. When they get bumped off the list in only a few days, that makes it harder to come back with considered responses, and it makes for fewer considered counter-responses. Ultimately the discussion is shallower as a result.
Also, the recent comments bar on the right is less immediately useful because you have to click to the Recent Comments page and scroll back to see anything more than a few hours in the past.
I guess instead of complaining publicly, it would be better to send a private message to a person who can do something about it, preferably with a specific suggestion, and a link to a discussion which proves that many people want it.
Having long-term threads separately seems to be a very popular idea... there were even some polls in the past to prove it.
MIRI's strategy for 2013 involves more strongly focusing on math research, which I think is probably the right move, even though it leaves them with less use for me. (Math isn't my weakest suit, but it's not my strongest, either.)
My current understanding of how hypnosis works is:
The overwhelming majority of our actions happen automatically, unconsciously, in response to triggers. Those can be external stimuli, or internal stimuli at the end of a trigger-response chain started by an external stimulus. Stimulus-response mappings are learned through reinforcement. Examples: walking somewhere without thinking about your route (and sometimes arriving and noticing you intended to go someplace else), unthinkingly drinking from a cup in front of you. (Finding and exploiting those triggers is incredibly useful if you have executive function issues.)
This "free won't" isn't very reliable. In particular, there's very little you can do about imagery ("Don't think of a purple elephant"). Examples: advertising, priming effects, conformity.
Conscious processes can't multitask much, so by focusing attention elsewhere, stimuli cause responses more reliably and less consciously. See any study on cognitive
3jimmy8yAs far as I can tell, it's more of a spandrel than anything. As a general rule,
anything you can do with "hypnosis", you can do without. Depending on what
you're doing with it, it can be more of a feature or more of a bug inherent to
the architecture.
I could probably give a better answer if you explained exactly what you mean by
"hypnosis", since no one can agree on a definition.
4Paul Crowley8yDennett makes a good case for the word "spandrel" not really meaning much in
"Darwin's Dangerous Idea".
The Linear Interpolation Fallacy: that if a lot of something is very bad, a little of it must be a little bad.
Most common in politics, where people describe the unpleasantness of Somalia or North Korea when arguing for more or less government regulation as if it had some kind of relevance. Silliest is when people try to argue over which of the two is worse. Establishing the silliness of this is easy. Somalia beats assimilation by the Borg, so government power is bad. North Korea beats the Infinite Layers of the Abyss, so government power is good. Surely no universal principle of government can be changed by which contrived example I pick.
And, with a little thought, it seems clear that there is some intermediate amount of government that supports the most eudaemonia. Figuring out what that amount is, and which side of it any given government lies on, are important and hard questions. But looking at the extremes doesn't tell us anything about them.
(Treating "government power" as a scalar can be another fallacy, but I'll leave that for another post.)
3Viliam_Bur8yMore nasty details: An amount of government which supports the most eudaemonia
in the short term, may not be the best in the long term. For example, it could
create a situation where the government can expand easily and has natural
incentives to expand. Also, the specific amount of government may depend
significantly on the technological level of society; inventions like the internet or
home-made pandemic viruses can change it.
3[anonymous]8yI think the "non-scalar" point is a much more important take-away.
Generalizing: "Many concepts which people describe in linear terms are not
actually linear, especially when those concepts involve any degree of
complexity."
0dspeyer8ySome discussion of that [http://lesswrong.com/lw/mm/the_fallacy_of_gray/59q0]
2[anonymous]8yI've seen that applied to all kinds of things, ranging from vitamins
[http://lesswrong.com/lw/20i/even_if_you_have_a_nail_not_all_hammers_are_the/]
to sentences starting with “However”
[http://itre.cis.upenn.edu/~myl/languagelog/archives/003721.html], to name the
first two that spring to mind.
What is the smartest group/cluster/sect/activity/clade/clan that is mostly composed of women? Related to the other thread on how to get more women into rationality besides HPMOR.
Ashkenazi dancing groups? Veterinary college students? Linguistics students? Lily Allen admirers?
No seriously, name your guesses for really smart groups, identity labels, etc. that you are nearly certain have more women than men.
Academic psychologists are mostly female. That would seem to be a pretty good target audience for LW. There are a few other academic areas that are mostly female now, but keep in mind that many academic fields are still mostly male even though most new undergraduates in the area are female.
There are lists online of academic specialty by average GRE scores. Averaging the verbal and quantitative scores, and then determining which majority-female discipline has the highest average would probably get you close to your answer.
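A minimal sketch of that procedure, with entirely made-up numbers (the real GRE-by-field averages and sex ratios would have to be filled in):

```python
# Hypothetical figures only -- substitute real GRE and sex-ratio data.
fields = {
    # field: (mean verbal GRE, mean quantitative GRE, share of women)
    "Psychology":          (158, 149, 0.72),
    "Veterinary medicine": (155, 153, 0.78),
    "Linguistics":         (160, 152, 0.60),
    "Mechanical eng.":     (152, 162, 0.14),
}

# Average verbal and quantitative, keeping only majority-female fields.
majority_female = {
    name: (verbal + quant) / 2
    for name, (verbal, quant, share_women) in fields.items()
    if share_women > 0.5
}

best = max(majority_female, key=majority_female.get)
print(best, majority_female[best])
```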
1[anonymous]8yWell, keep in mind that 75% of LWers are under 31
[http://lesswrong.com/lw/fp5/2012_survey_results/#more] anyway, so it's the sex
ratio among the younger cohorts you mainly care about, not the sex ratio
overall.
2knb8yBut it isn't the undergrads you're looking for if you want the "smartest mostly
female group." Undergrads are less bright on average than advanced degree
holders due to various selection effects.
5diegocaleiro8yI think we are aiming for "females who can become rationalists" which means that
expected smarts are more valuable than real smarts, in particular if the real
ones were obtained through decades (implying the person will then be less
flexible, since older).
2[anonymous]8yIME, among post-docs there might not be as many females as among freshers, but
there definitely are more than among tenured professors.
6Mitchell_Porter8yProfessional associations for women in the smartest professions.
6NancyLebovitz8yOne of my friends has nominated the student body at Bryn Mawr.
6Dias8yBryn Mawr has gone downhill a lot since the top female students got the chance
to go to Harvard, Yale, etc. instead of here. Bryn Mawr does have a cognitive
bias course (for undergraduates) but the quality of the students is not that
high.
Of course, Bryn Mawr does excellently at the only-women part, and might do well
overall once we take into account that constraint.
0NancyLebovitz8yAnd another friend has recommended DC WebWomen [http://dcwebwomen.org/].
0drethelin8yGender studies graduate programs.
1Jonathan_Graehl8yaren't plenty of other arts+humanities fields female-majority now when you look
at newly minted phds?
0drethelin8ydunno! It was just a guess
0ThrustVectoring8yI'm not entirely sure that targeted recruitment of feminists is a good idea. It
seems to me like a good way to get LW hijacked into a feminist movement.
7Randy_M8yLessWrong+?
5Viliam_Bur8yLessIncorrect
6bogus8yI agree, and would expand this to any politically motivated movement (including
libertarians, moldbuggians etc.). After all, this is the main rationale for our
norm of not discussing politics on LW itself.
5ThrustVectoring8yPolitical movements in general care more about where you are and your usefulness
as a soldier for their movement than how you got there. It's something that we
are actively trying to avoid.
0jooyous8yI'm going to take a blind guess and say nurses. Someone tell me how I did!
9Jonathan_Graehl8ynurses are smart, but not impressively so.
How much difference can nootropics make to one's studying performance / habits? The problems are with motivation (the impulse to learn useful stuff winning out over the impulse to waste your time) and concentration (not losing interest / closing the book as soon as the first equation appears -- or, to be more clear, as soon as I anticipate a difficult task lying ahead). There are no other factors (to my knowledge) that have a negative impact on my studying habits.
Or, to put it differently: if a defective motivational system is the only thing standing between me and success, can I turn into an uber-nerd that studies 10 h/day by popping the right pills?
EDIT: Never messed with my neurochemistry before. Not depressed, not hyperactive... not ruling out some ADD though. My sleep "schedule" is messed up beyond belief; in truth, I don't think I've even tried to sleep like a normal person since childhood. Externally imposed schedules always result in chronic sleep deprivation; I habitually push myself to stay awake until a later hour than I went to sleep at the previous night (/morning/afternoon) -- all of which means I don't trust myself to further mess with my sleeping habits. From what I've read so far, selegiline seems closest to the effects I'm looking for, but then again all I know about nootropics I've learned in the past 6 hours. I can't guarantee I can find most substances in my country.
... Bad or insufficient sleep can cause catastrophic levels of akrasia. Fix that, then if you still have trouble, consider other options. Results should be apparent in days, so it is not a very hard experiment to carry out: set alarms on your phone or something for when to go to bed, and make your bedroom actually dark (this causes deeper sleep). You should get more done overall because you will waste less of your waking hours.
-1Dahlen8yYou're right about that, but the problem with lack of motivation persists even
during times when I can set my own schedule and get as much sleep as I need.
(Well, to put it precisely, not sleeping enough guarantees that I won't get
anything done out of my own choice, but sleeping enough doesn't guarantee that I
will, not even closely.)
4Qiaochu_Yuan8yI agree with ThrustVectoring that you'll probably get more mileage out of
implementing something like a GTD system (or at least that doing this will be
cheaper and seems like it would complement any additional mileage you get out of
nootropics). There are lots of easy behavioral / motivational hacks you can use
before you start messing with your neurochemistry, e.g. rewarding your inner
pigeon [http://lesswrong.com/lw/fc3/checklist_of_rationality_habits/].
I've had some success recently with Beeminding [https://www.beeminder.com/] my
Pomodoros [http://www.pomodorotechnique.com/]. It forces me to maintain a
minimal level of work per unit time (e.g. recently I was at the MIRI workshop
[http://intelligence.org/2013/03/07/upcoming-miri-research-workshops/], and even
though ordinarily I would have been able to justify not doing anything else
during that week I still spent 25 minutes every day working on problem sets for
grad school classes) which I'm about to increase.
7Dahlen8yTried. Failed. Everything that requires me, in my current state, to police
myself, fails miserably. It's like my guardian demon keeps whispering in my ear,
"hey... who's to stop me from breaking the same rules that I have set for
myself?" -- cue yet another day wasted.
Eat candy every time I clear an item off my to-do list? Eat candy even when I
don't!
Pomodoros? Y-yeah, let's stop this timer now, shall we -- I've just got this
sudden imperious urge to play a certain videogame, 10 minutes into my Pomodoro
session...
Schedule says "do 7 physics problems"? Strike that, write underneath "browse
4chan for 7 hours"!
... I don't know, I'm just hopeless. Not just lazy, but... meta-lazy too?
Sometimes I worry that I was born with exactly the wrong kind of brain for
succeeding (in my weird definition of the word); like utter lack of
conscientiousness is embedded inextricably into the very tissues of my brain.
That's why nootropics are kind of a last resort for me.
... I don't know, I'm just hopeless. Not just lazy, but... meta-lazy too? Sometimes I worry that I was born with exactly the wrong kind of brain for succeeding (in my weird definition of the word); like utter lack of conscientiousness is embedded inextricably into the very tissues of my brain. That's why nootropics are kind of a last resort for me.
I could have easily written this exact same post two years ago. I used to be incredibly akratic. For example, at one point in high school I concluded that I was simply incapable of doing any schoolwork at home. I started a sort of anti-system where I would do all the homework and studying I could during my free period the day it was due, and simply not do the rest. This was my "solution" to procrastination.
Starting in January, however, I made a very conscious effort to combat akrasia in my life. I made slow, frustrating progress until about a week and a half ago where something "clicked" and now I spend probably 80% of my free time working on personal projects (and enjoying it). I know, I know, this could very easily be a temporary peak, but I have very high hopes for continuing to improve.
2Dahlen8yPlease do go on; I'd be very much interested in what you have to say.
8gothgirl4206668yOkay.
To be honest, it's really hard to say exactly what led to my change in
willpower/productivity. Now that I actually try to write down concrete things I
do that I didn't do two months ago, it's hard, and my probability that my recent
success is a fluke has gone up a little.
I feel like what happened is that after reading a few self-help books and
thinking a lot about the problem, I ended up completely changing the way I think
about working in a difficult-to-describe way. It's kind of like how when I first
found LessWrong, read through all the sequences, and did some musings on my own,
I completely changed the way I form beliefs. Now I say to myself stuff like "How
would the world look differently if x were true?" and "Of all the people who
believe x will happen to them, how many are correct?", even without consciously
thinking about it. Perhaps more importantly, I also stopped thinking certain
thoughts, like "all the evidence might point to x, but it's morally right to
believe y, so I believe y", etc.
Similarly, I now have a bunch of mental habits related to getting myself to
work harder and snap out of pessimistic mindstates, but since I wasn't handed
them all in one nicely arranged body of information like I was with LessWrong,
and had to instead draw from this source and that source and make my own
inferences, I find it really hard to think in concrete terms about my new mental
habits. Writing down these habits and making them explicit is one of my goals,
and if I end up doing that, I'll probably post it somewhere here. But until
then, what I can do is point you in the direction of what I read, and outline a
few of what I think are the biggest things that helped me.
The material I read was
* various LessWrong writings
* PJ Eby's Thinking Things Done
* Succeed: How We Can Reach Our Goals by Heidi Halvorson
* Switch: How to Change When Change Is Hard by Chip and Dan Heath
* Feeling Good: The New Mood Therapy by David D. Burns
* The Procrastina
1TheOtherDave8ySome people in a similar position recruit other people to police them when their
ability to police themselves is exhausted/inadequate. Of course, this requires
some kind of policing mechanism... e.g., one whereby the coach can unilaterally
withhold rewards/invoke punishments/apply costs in case of noncompliance.
0Paul Crowley8yHave you tried setting very small and easy goals and seeing if you can meet
those?
0Dahlen8yI have made many incremental steps towards modifying some behaviours in a
desired direction, yes, but they don't tend to be consciously directed. When
they are, I abandon them soon; no habit formation results from these attempts. I am
making progress, but it seems to be largely outside of my control.
0Qiaochu_Yuan8yHave you tried Beeminder? That's less self-policing and more Beeminder policing
you, as long as you haven't sunk so low as to lie to Beeminder. Alternatively,
there are probably things you can do to cultivate self-control in general,
although I'm not sure what those would be (I've been practicing with denying
myself various things for a while now).
2Dahlen8yNo way, it's the stupidest thing I could do with my already very very limited
financial resources. That sort of way of motivating yourself is really sort of a
luxury, at least when viewed from my position. Lower middle class folks in
relatively poor countries can't afford to gamble their meagre savings on a
fickle motivation; any benefit I could derive from it is easily outweighed by
the very good chance of digging myself into a financial hole... so, I can't take
that risk.
9Qiaochu_Yuan8yI can think of much stupider things. Doesn't the fact that you have limited
finances make this an even better tool to use (in that you'll be more motivated
not to lose money)? The smallest pledge is $5 and if you stay on track (it helps
to set small goals at first) you never have to pay anything. I think you're
miscalibrated about how risky this is.
And how were you planning on obtaining nootropics if your finances are so
limited?
2Dahlen8y... No. It doesn't work like that at all. That's the definition of digging
myself into a hole. Will I be struggling to get out of it all the more so? Yes,
I will, but at a cost greater than what I was initially setting out to
accomplish. I'd rather be unmotivated than afraid of going broke.
Possibly. The thing is, around here, even $5 is... Well, not much by any
measure, but it doesn't feel negligible, you know what I'm saying? Someone of
median income couldn't really say it's no big deal if they come to realize the
equivalent of $5 is missing from their pockets. It probably doesn't feel like
that to an American, so I understand why you may think I'm mistaken.
I can afford to spend a few bucks on a physical product with almost guaranteed
benefits. I can't afford to bet money on me doing things I have a tendency to do
very rarely. In one case I can expect to get definite value from the money I
spend, in the other I'm basically buying myself some worries. (I should,
perhaps, add that the things I want to motivate myself to do don't have a chance
of earning me income any time soon.)
Of course; it wasn't meant to be understood literally.
--
The bottom line is, they're not getting my money. I'm really confident that it's
a good decision, and have really good reasons to be suspicious of any attempts
to get me to pay for something, and there are really many things out there that
are obviously useful enough that I don't need to be persuaded into buying them.
So... I appreciate that you mean to help, it's more than one can ask from
strangers, but I strongly prefer alternatives that are either free, guaranteed,
or ideally both.
8gwern8yYou could try using Beeminder without giving them money.
0OrphanWilde8yA habit I'm working on developing is to ask a mental model of a Manager what I
-should- be doing right now. As long as I don't co-opt the Manager, and as long
as there's a clearly preferable outcome, it seems to work pretty well.
Even when there isn't a clearly preferable outcome, the mental conversation
helps me sort out the issue. (And having undertaken this, I've discovered I used
to have mental conversations with myself all the time, and at some point lost
and forgot the habit.)
2DaFranker8yI've tried similar approaches. From that opening line and with sane priors, you
can probably get a pretty good idea of what the results were.
For me, and I suspect many others for whom all self-help and motivational
techniques and hacks just "inexplicably" fail and who "must be doing them
wrong", the problem lies almost entirely within one single, simple assumption that
seems to work naturally for the authors, but which is for me a massive amount of
cognitive workload that continuously taxes my mental energy.
And said assumption that I refer to is precisely here:
The question I shall ask, to illustrate my point, is: If you were programming a
computer to do this (e.g. open a chat window with someone posing as a Manager
for the appropriate discussion), how would you go about it?
More importantly, how does the program know when to open the window?
Suppose the program has access to your brain and can read what you're thinking,
and also has access to a clock.
Well, there are three most obvious, simple answers, in order of code complexity:
1. Keep the chat window open all the time. This is obviously costly
attention-wise (but not for the program), and the window is always in the
way, and chances are that after a while you'll stop noticing that window and
never click on it anymore, and it will lose all usefulness. It then becomes
a flat tax on your mind that is rendered useless.
2. Open the chat window at specific intervals. This brings another question:
how often? If it's too often, it gets annoying, and it opens too many times
when not needed, and eventually that'll cause the same problems as solution
1. If it's not often enough, then you won't get much benefit from it
whenever you would need it. And even if it is a good interval, you'll still
sometimes open it when not needed, or not open it when it was needed more
often that day or in the middle of an interval. We can do better.
3. Look for the kind of situ
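To make the trade-off in option 2 concrete, here is a minimal sketch of fixed-interval prompting (my own illustration, not DaFranker's code); the interval constant is the whole design problem, which is exactly the complaint above.

```python
# Fixed-interval prompting (option 2 above), purely illustrative.
# Too short an interval and it degrades into option 1's "flat tax";
# too long and it misses the moments when it was actually needed.

import time

PROMPT_INTERVAL_SECONDS = 30 * 60  # hypothetical choice: every 30 minutes

def prompt_manager() -> None:
    # Stand-in for "open the chat window with the Manager".
    print("Manager: what should you be doing right now?")

def run() -> None:
    while True:
        prompt_manager()
        time.sleep(PROMPT_INTERVAL_SECONDS)

if __name__ == "__main__":
    run()
```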
3OrphanWilde8yAm I correct in ascertaining that your issue is less making the right decisions,
and more trying to remember to consciously make decisions at all?
0DaFranker8yIn some sense, yes.
However, sometimes it gets much more complex. It can very well happen that I
insert a trigger to "must go do dishes once X is done", but then I think "Hmm,
maybe I should go do dishes" at some point in the future when I'm in-between
activities, and X happens to be done, but (and this is the gut-kicking bastard):
Thinking that I should do the dishes is not properly linked to checking whether
X is done, and thus I don't see the process that tells me that X is done so I
should do the dishes!
And therefore what happens afterwards, instead of me realizing that X is done and
getting up to do the dishes, is me thinking "yeah, I should, but meh, this is more
interesting". And X has never crossed my mind during this entire internal
exchange. And now I'm back to tabsploding / foruming / gaming. And then three
hours later I realize that all of this happened when I finally think of X. Oops.
So yes. "Trying to remember" is an active-only process for me. Something must
trigger it. The thoughts and triggers do not happen easily and automatically at
the right and proper times. Once the whole process is there and [Insert favorite
motivational hack] is actually in my stream-of-consciousness, then this whole
"motivation" thing becomes completely different and much easier to solve.
Unfortunately, I do not yet have access to technology of sufficient
sophistication to externalize and fully automate this process. I've dreamed of
it for years though.
1OrphanWilde8yThis may be a stupid question, but I have to ask:
Have you tried designing solutions for this problem? Pomodoro and the like are
designed to combat akrasia; they're designed to supplement or retrain willpower.
They're solutions for the wrong problem; your willpower isn't entering into it.
Hypothesis: Pomodoro kind-of sort-of worked for you for a short period of time
before inexplicably failing. You might not have even consciously noticed it
going off.
3DaFranker8yIf I'm reading you correctly, that hypothesis is entirely correct. Pomodoro is
also not the only thing where this has happened. In most cases, I don't
consciously realize what happens until later, usually days or weeks after the
fact.
I've tried coming up with some solutions to the problem, yes, but so far there's
only three avenues that I've tried that had promising results:
* Use mental imagination techniques to train habits: imagine arriving in
situation or getting feelings X, anchor that situation or feeling to action
Y. This works exceptionally well and easily for me, but... Yep. Doing the
training is itself something that suffers from this problem. I would need to
use it to train using it. Which I can't, 'cause I'm not good enough at it (I
tried). Some bootstrapping would be required for this to be a reliable
method, but it's also in itself a rather expensive and time-consuming
exercise (not the same order of magnitude as constant mindfulness, though),
so I'd prefer better alternatives.
* Spam post-its or other forms of fixed visual / auditory reminders in the
appropriate contexts, places and times. Problem is, this becomes like the
permanent or fixed-timed chat windows in the programmed Manager example - my
brain learns to phase them out or ignore them, something which is made
exponentially worse when trying to scale things up to more things.
* Externalize and automate using machines and devices. Setting programmatic
reminders on my phone using tasker
[https://play.google.com/store/apps/details?id=net.dinglisch.android.taskerm]
is the best-working variant I've found so far, but the app is difficult to
handle and crashes often - and every single time it crashes, I lose
everything (all presets, all settings, all events, everything - as if I had
reinstalled the app completely). I gave up on that after about the fourth
time I spent hours configuring it and then lost everything from a single
unrel
0OrphanWilde8yI actually suffer from exactly the same issue. (I opted to try to run the
Manager app full-time, although I'm not having a lot of luck training myself to
actually do it. I figure any wasted brain cycles probably weren't being used
anyway, given that I couldn't remember to do things that required using
them.)
Thus far the only real "hack" I've worked out is to constantly change reminder
mechanisms. I'm actually fine with highly disruptive alerts - my favorite alarm
is also the most annoying - but the people around me tend to hate them.
Hacks aside, routine has been the only thing I've found that helps, and helps
long-term. And given my work schedule, which can vary from "Trying to find
something to do" to "Working eighteen hours days for two weeks straight" with
just about everything in the middle, routine has been very hard to establish.
However, I have considerably better luck limiting my routine; waking up at 6 AM
every day, and dedicating this time strictly to "Stuff that needs doing", has
worked for me in the past. (Well, up until a marathon work period.)
4RomeoStevens8yNicotine has been a significant help with motivation. I only vape e-liquid with
nicotine when I am studying. This seems to have resulted in a large reduction in
ugh fields.
2ThrustVectoring8yIt depends way too much on what you get to give a blanket category answer.
Adderall/other ADD medications have a proven track record. Modafinil is likely
also helpful, esp with concentration and having more time in general to get
things done.
Honestly, if you're anything like me, you'd get a lot more mileage out of
implementing an organization and time management system.
My own deconversion was prompted by realizing that Rand sucked at psychology. Most of her ideas about how humans should think and behave fail repeatedly and embarrassingly as you try to apply them to your life and the lives of those around you. In this way, the disease gradually cures itself, and you eventually feel like a fool.
It might also help to find a more powerful thing to call yourself, such as Empiricist. Seize onto the impulse that it is not virtuous to adhere to any dogma for its own sake. If part of Objectivism makes sense, and seems to work, great. Otherwise, hold nothing holy.
Michael Huemer explains why he isn't an Objectivist here and this blog is almost nothing but critiques of Rand's doctrines. Also, keep in mind that you are essentially asking for help engaging in motivated cognition. I'm not saying you shouldn't in this case, but don't forget that is what you are doing.
With that said, I enjoyed Atlas Shrugged. The idea that you shouldn't be ashamed for doing something awesome was (for me, at the time I read it) incredibly refreshing.
2mstevens8yQuoting from the linked blog:
"Assume that a stranger shouted at you "Broccoli!" Would you have any idea what
he meant? You would not. If instead he shouted "I like broccoli" or "I hate
broccoli" you would know immediately what he meant. But the word by itself,
unless used as an answer to a question (e.g., "What vegetable would you like?"),
conveys no meaning"
I don't think that's true? Surely the meaning is an attempt to bring that
particular kind of cabbage to my attention, for as yet unexplained reasons.
1Desrtopa8yThat's a possible interpretation, but I wouldn't say "surely."
Some other possibilities.
The person picked the word apropos of nothing because they think it would be
funny to mess with a stranger's head.
It's some kind of in-joke or code word, and they're doing it for the amusement
of someone else who's present (or just themselves if they're the sort of person
who makes jokes nobody else in the room is going to get.)
The person is confused or deranged.
3TheOtherDave8yIf I heard someone shout "Broccoli" at me without context, my first assumption
would be that they'd actually said something else and I'd misunderstood.
0mstevens8yBut this doesn't seem particularly different from the ambiguity in all language.
The linked site seems to suggest there's some particular lack of meaning in
isolated words.
2mstevens8yMy reaction to Rand is pretty emotional, rather than "I see why her logic is
correct!", which I think justifies the motivated cognition aspect a little bit.
-2blacktrance8ySome of Huemer's arguments against Objectivism are good (particularly the ones
about the a priori natures of logic and mathematics), but his arguments against
the core of Objectivism (virtue ethical egoism) fall short, or at best
demonstrate why Objectivism is incomplete rather than wrong.
-1OrphanWilde8yHis arguments against her ethical system seem... confused.
She pretty much acknowledged that life as a good thing is taken as a first
principle, what he calls a suppressed premise - she was quite open about it, in
fact, as a large part of her ethical arguments were about ethical systems which
-didn't- take life as a good thing as a first principle.
His arguments about a priori knowledge, however, are fatally flawed. What he
calls a priori knowledge only seems intuitive once you've understood it. Try
teaching addition to somebody sometime. Regardless of whether a priori truths
exist, we only recognize their truth by reference to experience. Imagine you
live someplace where addition and subtraction never work - addition wouldn't be
intuitively true; it would be nonsense. Do you think you could get a child to
grasp addition while you perform sleight of hand and change the number of apples
or whatnot as you demonstrate the concepts? He's regarding knowledge from the
perspective of somebody who has already internalized it.
You have to have a strong grasp of abstract concepts before you can start
building the kind of stuff he insists is a priori, concepts which are built up
through experience with things like language. Mathematics wasn't discovered, it
was invented, just as much as an electric motor was invented. (You can suppose
that mathematics exists in an infinite plane of possible abstractions, but the
same is true of the electric motor.) That we chose the particular mathematics we
did is a result of the experiences we as a species have had over the past few
dozen thousand years.
(Or, to take a page out of Descartes - what would his a priori knowledge look
like if a devil were constantly changing the details of the world he lived in?
Playing sleight of hand with the apples, as it were?)
3blacktrance8yThe idea of a priori knowledge is not that it's intuitive, but that it is not
dependent on experience for it to be conceivable. Though addition may be hard to
teach without examples, it abstractly makes sense without reference to anything
in the physical world. Similarly, the truth of the statement "a bachelor is an
unmarried man" does not require any experience to know - its truth comes from
the definition of the word "bachelor".
1Strelok8yIf I am understanding your statement here correctly, you are saying that a
priori knowledge hinges on the idea that concepts can be acquired independently
of experience. If that is what you are saying, then you would be incorrect. Very
few philosophers who accept the idea of a priori knowledge—or more
appropriately: a priori justification—think that human-beings ever acquire
concepts innately or that they can otherwise conceive of them independently of
experience. A proposition is knowable a priori if it is justifiable by appeal to
pure reason or thought alone. Conversely, a proposition is knowable a posteriori
if it is justifiable in virtue of experience; where any relevant, constitutive
notion of experience would have as its meaning (a) some causally conditioned
response to particular, contingent features of the world, and (b) doxastic
states that have as their content information concerning such contingent
features of the actual world as contrasted with other possible worlds.
-3OrphanWilde8ySomebody defined the operation of addition - it did not arise out of pure
thought alone, as is evidenced by the fact that nobody bothered to define some
other operation by which two compounds could be combined to produce a lesser
quantity of some other compound (at least until people began formalizing
chemistry). There are an infinite number of possible operations, most of which
are completely meaningless for any purpose we would put them to. Knowledge of
addition isn't knowledge at all until you have something to add.
"Qwerms are infantile eloppets." Is this a true statement or not? I could
-define- a qwerm to be an infantile eloppet, but that doesn't represent any
knowledge; in the pure abstract, it is an empty referential, devoid of meaning.
Everything in the statement "a bachelor is an unmarried man" is tied to
real-world things; whatever knowledge is contained there is experience-driven; if
the words mean something else - and those words are given meaning by our
experiences - the statement could be true or false.
Kant, incidentally, did not define a priori knowledge to be that which is
knowable without experience (the mutation of the original term which Ayn Rand
harshly criticized), but rather that which is knowable without reference to
-specific- experience, hence his use of the word "transcendent". If putting one
rock and another together resulted in three rocks, our concept of mathematics
would be radically different, and addition would not merely fail to reflect
reality, it would not for any meaningful purpose exist. Transcendent truths are
arrived at through experience, they simply don't require any -particular-
experience to be had in order to be true.
In Kantian terms, a priori, I know if I throw a rock in the air it will fall. My
a posteriori knowledge will be that the rock did in fact fall. There are other
transcendental things, but transcendental knowledge is generally limited to
those things which can be verified by experience (he argued that transcend
0[anonymous]8yThe problem here is that you seem to be presupposing the odd idea that, in order
for any proposition to be knowable a priori, its content must also have been
conceived a priori. (At least for the non-Kantian conceptions of the a priori).
It would be rare to find a person who held the idea that a concept be acquired
without having any experience related to it. Indeed, such an idea seems entirely
incapable of being vindicated. If I expressed a proposition such as "nothing can
be both red and green all over at the same time" to a person who had no relevant
perceptual experience with the colors I am referring to and who had failed to
acquire the relevant definitions of the color concepts I am using, then that
proposition would be completely nonsensical and unanalyzable for such a person.
However, this has no bearing on the concept of a priori knowledge whatsoever.
The only condition for a priori knowledge is for the expressed proposition to be
justifiable by appeal to pure reason.
8Vaniver8yAre you looking to treat symptoms? If so, which ones?
3OrphanWilde8yLaughs I'm an Objectivist by my own accord, but I may be able to help if you
find this undesirable.
The shortest - her derivations from her axioms have a lot of implicit and
unmentioned axioms thrown in ad-hoc. One problematic case is her defense of
property - she implicitly assumes no other mechanism of proper existence for
humans is possible. (And her "proper existence" is really slippery.)
This isn't necessarily a rejection - as mentioned, I am an Objectivist - but it
is something you need to be aware of and watch out for in her writings. If a
conclusion doesn't seem to be quite right or doesn't square with your own
conception of ethics, try to figure out what implicit axioms are being slipped
in.
Reading Ayn Rand may be the best cure for Randianism, if Objectivism isn't a
natural philosophy for you, which by your apparent distress it isn't. (Honestly,
though, I'd stay the hell away from most of the critics, who do an absolutely
horrible job of attacking the philosophy. They might be able to cure you of
Randianism, but largely through misinformation and unsupported emotional
appeals, which may just result in an even worse recurrence later.)
1Viliam_Bur8yPlease correct me if I'm wrong, but it seems to me that she also did some
variant of "Spock Rationality". More precisely, it seems to me as if her heroes
have one fixed emotion (mild curious optimism?) all the time; and if someone
doesn't, that is only to show that Hero1 is not as perfect as Hero2 whose
emotional state is more constant.
5OrphanWilde8yI've mentioned this here before, but prior to reading Atlas Shrugged, I truly
believed in Spock Rationality. I used meditation to eliminate emotions as a
teenager because I saw them as irrelevant.
Atlas Shrugged convinced me that emotions were a valuable thing to have. So I
don't really see Spock Rationality in the characters.
The closest any of the characters comes to that is Galt, and it is heavily
implied he went through the same kind of utter despair as all the other
characters in the book. It's more or less stated outright that the Hero
characters experience greater emotions, in wider variety, than other characters,
and particularly the villains; the level emotions of, for example, Galt, are not a
result of having no emotions, but of having experienced such suffering that what he
experiences in the course of the book is insignificant by comparison.
(Relentless optimism and curiosity are treated as morally superior attitudes, I
grant, but I'd point out that this is a moral standard held to some degree here
as well. Imagine the response to somebody who insisted FAI was impossible and we
were all doomed to a singularity-induced hell. This community is pretty much
defined by curiosity, and to a lesser but still important extent optimism, in
the sense that we can accomplish something.)
2Viliam_Bur8ySome explanation: Recently I watched the beginning of Atlas Shrugged: Part I
[http://en.wikipedia.org/wiki/Atlas_Shrugged:_Part_I], and there was this
dialog, about 10 minutes from the beginning:
I haven't watched the whole movie yet, and I don't remember whether this was also
in the book. But this is what made me ask. (Also some other things seemed to
match this pattern.)
Of course there are other explanations too: Dagny can simply be hostile to
James; both implicitly understand the dialog is about a specific subset of
feelings; or this is specifically Dagny's trait, perhaps because she hasn't
experienced anything worth being emotional about, yet.
EDIT: Could you perhaps write an article about the reasonable parts of
Objectivism? I think it is worth knowing the history of previous self-described
rationality movements, what they got right, what they got wrong, and generally
what caused them to not optimize the known universe.
1[anonymous]8yI thought the exchange was supposed to be interpreted sarcastically, but the
acting in the movie was so bad it was hard to tell for sure. Having read most of
Rand during a misspent youth, I agree with OrphanWilde's interpretation of
Rand's objectivist superheroes being designed specifically to feel emotions that
are "more real" than everyday "human animals."
For what it's worth, in my opinion the only reasonable part of Objectivism is
contained in The Romantic Manifesto, which deals with all of this "authentic
emotions" stuff in detail.
0NancyLebovitz8yI also read it as Dagny being sarcastic, or at least giving up on trying to
convey anything important to James. (I haven't seen the movie-- Dagny was so
badly miscast that I didn't think I could enjoy it.)
I think a thing that's excellent in Rand, and not put front and center by much of
anyone else, is that wanting to do things well is a primary motivation for some
people.
4[anonymous]8yNot to be snide, but... Plato? Aristotle? Kant? Nietzsche?
-1OrphanWilde8yI'd have to buy another copy of the book (I have a tendency to give my copies
away - I've gone through a few now), so I'm not sure. In the context of the
book, this would be referring to a specific subset of feelings (or more
particularly, guilt, which Ayn Rand utterly despised, and which James was kind
of an anthropomorphism of). Whether that's an appropriate description in the
context of the scene itself, I'm not sure.
(God the movie sucked. About the only thing I liked was that the villains were
updated to fit the modern era to be more familiar. They come off as strawmen in
the book unless you're familiar with the people they're caricatures of.)
2mstevens8yI initially thought she was being sarcastic. However on seeing this discussion I
find the "specific subset of feelings" theory more plausible. She's rejecting
the "feelings" James has.
3TimS8yHeinlein? I found Stranger in a Strange Land
[http://www.amazon.com/Stranger-Strange-Land-Robert-Heinlein/dp/0441790348] to
be an interesting counterpoint to Atlas Shrugged.
Both feature characters with super-human focus / capability (Rearden and
Valentine Michael Smith). And they have totally different effects on societies
superficially similar to each other (and to our own).
There's more to say about Rand in particular, but we should probably move to the
media thread for that specifically (Or decline to discuss for Politics is the
Mindkiller [http://lesswrong.com/lw/gw/politics_is_the_mindkiller/] reasons).
Suffice it to say that uncertainty about how to treat the elite productive
elements in society predates the 1950s and 1960s.
1[anonymous]8yTime Enough for Love is an even better anti-Atlas Shrugged.
4NancyLebovitz8yWhy?
0mstevens8yI like my Heinlein, but I don't see the connection.
2CarlShulman8yThe (libertarian, but not Randian) philosopher Michael Huemer has an essay
entitled "Why I'm not an objectivist." It's not perfect, but at least the
discussion [http://home.sprynet.com/~owl1/rand.htm#5] of Rand's claim that
respect for the libertarian rights of others follows from total egoism is good.
2FiftyTwo8yGenuine question: What do you find appealing about it? I've always found the
writing impenetrable and the philosophy unappealing.
0mstevens8yThe writing, I agree, is pretty bad, and she has an odd obsession with trains
and motors. I can just about understand the "motor" part because it allows some
not very good "motor of the world" metaphors.
The appealing part is the depiction of the evil characters as endlessly
dependent on the hero characters, and their view of them as an inexhaustible
source of resources for whatever they want, and the rejection of this.
3Viliam_Bur8yThe obsession with trains is probably because in the era when Ayn Rand lived, people
working with trains were an intellectual elite. They (1) worked with technology,
and often (2) travelled across the world and shared ideas with similar people.
If you worked at a railroad, sometimes you got free rides anywhere as an
employment benefit. It was an era before the internet, when the best way to share
ideas with bright people was to meet them personally.
In other words, if she lived today, she would probably write about hackers, or
technological entrepreneurs. John Galt would be the inventor of internet, or
nanotechnology, or artificial intelligence. (And he would use modafinil instead
of nicotine.)
2VCavallo8yCan you explain what you mean by this? I ask because I don't know what this
means and would like to. Others here clearly seem to get what you're getting at.
Some Google searching was mostly fruitless and since we're here in this direct
communication forum I'd be interested in hearing it directly.
Thanks!
2mstevens8yI read the book Atlas Shrugged [http://en.wikipedia.org/wiki/Atlas_Shrugged] by
Ayn Rand where she sets out her philosophical views.
I found them worryingly convincing. Since they're also unpleasant and widely
rejected, I semi-jokingly semi-seriously want people to talk me out of them.
1RomeoStevens8yThink carefully through egoism.
hint: Vs rtbvfg tbnyf naq orunivbef qba'g ybbx snveyl vaqvfgvathvfunoyr sebz gur
tbnyf naq orunivbef bs nygehvfgf lbh'ir cebonoyl sbetbggra n grez fbzrjurer va
lbhe hgvyvgl shapgvba.
0blacktrance8yUbjrire, gur tbnyf bs rtbvfgf qb ybbx qvssrerag sebz gur tbnyf bs nygehvfgf, ng
yrnfg nygehvfgf nf Enaq qrsvarq gurz.
-1RomeoStevens8yFur svtugf fgenj nygehvfgf jvgu n fgenj rtbvfg ervasbeprq jvgu n pbng unatre be
gjb.
3Viliam_Bur8yI don't have a link, but I remember reading somewhere that altruism was
originally defined as self-destructive behavior -- ignoring one's own
utility function and only working for others -- and only later it was
modified to mean... non-psychopathology.
In other words, it was "egoism" which became a strawman by not being allowed
to become more reasonable, while its opposite, "altruism", was allowed to
become more sane than originally defined.
In a typical discussion, the hypothetical "altruist" is allowed to reflect on
their actions, and try to preserve themselves (even if only to be able to help
more people in the future), while the hypothetical "egoist" is supposed to be
completely greedy and short-sighted.
0OrphanWilde8yhttp://hubcap.clemson.edu/~campber/altruismrandcomte.pdf
[http://hubcap.clemson.edu/~campber/altruismrandcomte.pdf]
Page 363 or so.
Auguste Comte coined the term "altruist", and it's been toned down considerably
from his original version of it, which held, in James Fieser's terms, that "An
action is morally right if the consequences of that action are more favorable
than unfavorable to everyone except the agent"
It's a pretty horrific doctrine, and the word has been considerably watered down
since Comte originally coined it. That's pretty much the definition that Ayn
Rand assaulted.
0RichardKennaway8yDepends on the discussion. Reasonable egoism is practically the definition of
"enlightened self-interest"
[http://archive.mises.org/12675/voltaire-on-enlightened-self-interest/].
4Viliam_Bur8yYeah, that's the point. To get the answer "egoism", one defines egoism as
enlightened self-interest, and altruism as self-destructive behavior. To get the
answer "altruism", one defines altruism as enlightened pro-social behavior, and
egoism as short-sighted greed. Perhaps less extremely than this, but usually
from the way these words are defined you understand which one of them is the
applause light for the person asking the question.
(I typically meet people for whom "altruism" is the preferred applause light,
but of course there are groups which prefer "egoism".)
0blacktrance8yJuvyr ure ivyynvaf ner fbzrjung rknttrengrq va gur frafr gung crbcyr va cbjre
hfhnyyl qba'g guvax va gubfr grezf (gubhtu gurve eurgbevp qbrf fbzrgvzrf fbhaq
fvzvyne), va zl rkcrevrapr gurer vf n tbbq ahzore bs beqvanel crbcyr jub guvax
dhvgr fvzvyneyl gb ure ivyynvaf. Enaq'f rknttrengvba vf cevznevyl gung vg vf
ener gb svaq nyy bs gur artngvir genvgf bs ure ivyynvaf va crbcyr jub qb zbenyyl
bowrpgvbanoyr guvatf, ohg ng yrnfg n srj bs gubfr genvgf ner gurer.
Gung'f fbzrjung orfvqrf gur cbvag, gubhtu. Znal crbcyr jubz Enaq jbhyq qrfpevor
nf nygehvfgf ner abg yvxr gur ivyynvaf bs ure obbxf va gung gurl trarenyyl qba'g
jnag gb sbepr bguref gb borl gurve jvyy (ng yrnfg abg rkcyvpvgyl). Vafgrnq,
gurve crefbany orunivbe vf frys-unezvat (vanccebcevngr srryvatf bs thvyg, ynpx
bs nffregvirarff, oryvrs gung gur qrfverf bs bguref ner zber vzcbegnag guna
gurve bja, qrfver gb cyrnfr bguref gb gur cbvag gung gur ntrag vf haunccl,
npgvat bhg bs qhgl va gur qrbagbybtvpny frafr, trahvar oryvrs va Qvivar Pbzznaq,
rgp). Nygehvfz vf arprffnevyl onq, ohg nygehvfgf ner abg arprffnevyl crbcyr jub
unez bguref - vg vf cbffvoyr naq pbzzba sbe gurve orunivbef/oryvrsf gb znvayl
unez gurzfryirf.
Enaq'f ivyynvaf ner nygehvfgf, ohg abg nyy Enaqvna nygehvfgf ner ivyynvaf - znal
ner ivpgvzf bs artngvir fbpvrgny abezf, pbtavgvir qvfgbegvbaf, onq cneragvat,
rgp.
0Douglas_Knight8yI think that most people find that it wears off after a couple of months.
0[anonymous]8yWhat do you believe, and why do you believe it?
Alternatively: What do you value, and why do you value it?
-2Yuyuko8yWe find that death grants a great deal of perspective!
5ModusPonies8yFind at least one person who you can easily communicate with (i.e., small
inferential distances) and whose opinion you trust. Have a long conversation
about your hopes and dreams. I recommend doing this in person if at all
possible.
3lsparrish8yA good place to start the search is the intersection of "things I find
enjoyable" and "things that are scarce / in demand".
3diegocaleiro8ySee which time discounts and distance discounts you make for how much you care
about others. Compare how much you care about others with how much you care
about you. Act accordingly.
To know what you care about in the first place, either assess happiness at
random times and activities, or go through Connection Theory and Goal factoring.
7NancyLebovitz8yWhy do you recommend Connection Theory?
0diegocaleiro8yIt's been done to me and I like it.
It's been done to me, too, and as I recall, it didn't do all that much good. The major good effect that I can remember is indirect-- it was something to be able to talk about the inside of my head with someone who found it all interesting and a possibly useful tool for untangling problems-- this helped pull me away from my usual feeling that there's something wrong/defective/shameful about a lot of it.
1Armok_GoB8yLook into my eyes. You want to give all your money to the MIRI. You want to
give all your money to the MIRI. You want to give all your money to the MIRI.
4FiftyTwo8ySadly I have +2 hypnosis resistance, nice try.
0[anonymous]8yFind at least one person who you can easily communicate with (i.e., small
inferential distances) and who you trust. Talk at length.
Edit: We reached our deadline on May 1st. Site is live.
Some of you may recall the previous announcement of the blog. I envisioned it as a site that discusses right wing ideas: sanity-checking but not value-checking them, steelmanning both the ideas themselves and the counterarguments. Most of the authors should be sympathetic to them, but a competent loyal opposition should be sought out. In sum, a kind of inversion of the LessWrong demographics (see Alternative Politics Question). Outreach will not be a priority; mutual aid on an epistemically tricky path of knowledge-seeking is.
The current core group working on making the site a reality consists of me, ErikM, Athrelon, KarmaKaiser, MichaelAnissimov, and Abudhabi. As we approach launch time I've just sent out an email update to other contributors and those who haven't yet contributed but have contacted me. If you are interested in the hard-to-discuss subjects or the politics and want to join as a coauthor or approved commenter (we are seeking more), send me a PM with an email address or comment here.
7bogus8yThis is a great idea. We should create rationalist blogs for other political
factions too, such as progressivism, feminism, anarchism, green politics and
others. Such efforts could bring our programme of "raising the sanity waterline"
to the public policy sphere -- and this might even lay some of the groundwork
for eventually relaxing the "no politics at LW" rule.
7[anonymous]8yAs I wrote before:
I don't expect LessWrong itself to become a good venue to discuss politics. I do
think LessWrong could keep its spot at the center of a "rationalist" blogosphere
that may be slowly growing. Discussions between different value systems part of
it might actually be worth following! And I do think nearly all political
factions within such a blogosphere would find benefits
[http://lesswrong.com/lw/66/rationality_common_interest_of_many_causes/] in
keeping their norms as sanity friendly as possible.
0Viliam_Bur8yI would like to see one site to describe them all. To describe all those parts
which can be defended rationally, with clear explanations and evidence.
0bogus8yYes, the issue-position-argument
[https://pdf.uservoice.com/forums/159963-pdf-applied/suggestions/2860176-an-issue-position-argument-database-for-all-persis]
(IPA) model was developed for such purposes, and similar models are widely cited
in the academic literature about argumentation and computer support for same,
etc. (One very useful elaboration of this is called TIPAESA, for: time, issue,
position, argument, evidence, source, authority. Unfortunately, I do not know of
a good reference for this model; it seems that it was only developed informally,
by anonymous folks on some political wikis.) But it's still useful to have
separately managed sites for each political faction, if only so that each
faction can develop highly representative descriptions of their own positions.
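For concreteness, here is a rough sketch of how the issue-position-argument structure might be laid out as plain data classes; the field names follow the TIPAESA expansion given above (time, issue, position, argument, evidence, source, authority), but the exact schema is a guess rather than a published specification.

```python
# Sketch of the IPA / TIPAESA structure described above; field names are
# assumptions based on the expansion given in the comment, not a standard.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    source: str      # where the evidence comes from
    authority: str   # who stands behind it
    time: str        # when it was produced or when it applies
    summary: str

@dataclass
class Argument:
    text: str
    supports: bool   # True if it supports its position, False if it attacks it
    evidence: List[Evidence] = field(default_factory=list)

@dataclass
class Position:
    statement: str
    arguments: List[Argument] = field(default_factory=list)

@dataclass
class Issue:
    question: str
    positions: List[Position] = field(default_factory=list)
```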
4drethelin8y"Approved Commenter" sounds pretty thought police-ey
6drethelin8yso sign me up!
5wedrifid8yThat would seem to fit with the theme rather well.
3[anonymous]8yJames Goulding aka Federico formerly of studiolo has joined us as an author.
3MugaSofer8yI hold more liberal than conservative beliefs, but I'm increasingly reluctant to
identify with any position on the left-right "spectrum". I definitely hold or
could convincingly steelman lots of beliefs associated with "conservativism",
especially if you include criticism of "liberal" positions. Would this be
included in the sort of demographic you're seeking?
6RichardKennaway8yInteresting stuff. Some links to the original material:
Original paper (paywalled) [http://prl.aps.org/abstract/PRL/v110/i16/e168702]
Original paper (free)
[http://www.alexwg.org/publications/PhysRevLett_110-168702.pdf]. (Does not
include supplementary material.)
Summary paper about the paper. [http://physics.aps.org/articles/v6/46]
Their software. [http://www.entropica.com/] Demo video, further details only on
application.
Author 1 [http://www.alexwg.org/]. Author 2 [http://math.mit.edu/~freer/].
On the one hand, these are really smart guys, no question. On the other, toy
demos + "this could be the solution to AI!" => likely to be a damp squib.
4gwern8yI've skimmed the paper and read the summary publicity, and I don't really get
how this could be construed as a general intelligence. At best, I think they
may've encoded a simple objective definition of a convergent AI drive like 'keep
your options open and acquire any kind of influence' but nothing in it seems to
map onto utility functions or anything like that.
0RichardKennaway8yI think that's an accurate informal summary of their basic mechanism.
Personally, I'm not impressed by utility functions (or much else in AGI, for
that matter), so I don't rate the fact that they aren't using them as a point
against.
2gwern8yI do, because it seems like in any nontrivial situation, simply grasping for
entropies ignores the point of having power or options, which is to aim at some
state of affairs which is more valuable than others. Simply buying options is
worthless and could well be actively harmful if you keep exposing yourself to
risks you could've shut down. They mention that it works well as a strategy in
Go playing... but I can't help but think that it must be in situations where
it's not feasible to do any board evaluation at all and where one is maximally
ignorant about the value of anything at that point.
-4timtyler8yAs I understand it, it's more a denial of that claim. The point is to maximise
entropy, and values are a means to that end.
Obviously, this is counter-intuitive, since orthodoxy has this relationship the
other way around: claiming that organisms maximise correlates of their own power
- and the entropy they produce is a byproduct. MaxEnt suggests that this
perspective may have things backwards.
0drethelin8ywhat's your preferred system for encoding values?
0RichardKennaway8y"Value" is just another word for "utility", isn't it? It's the whole idea of
utility maximisation as a fundamental principle that I think is misguided. No, I
don't have a better idea; I just think that that one is a road that, where AGI
is concerned, leads nowhere.
But AGI is not something I work on. There is no reason for anyone who does to
pay any attention to my opinions on the subject.
-2timtyler8yThe idea is that entropy can be treated as utility.
Thus entropy maximisation
[http://en.wikipedia.org/wiki/Maximum_entropy_thermodynamics]. Modern
formalizations are largely based on ideas discovered by E. T. Jaynes
[http://en.wikipedia.org/wiki/Edwin_Thompson_Jaynes].
Here is Roderick Dewar explaining the link
[http://arxiv.org/ftp/cond-mat/papers/0005/0005382.pdf].
2gwern8yI'm aware of maxent (and that's one reason why in my other comment I mentioned
the Go playing as probably reflecting a situation of maximum ignorance), but I
still do not see how maximizing entropy can possibly lead to fully intelligent
utility-maximizing behavior, or if it is unable to do so, why we would give a
damn about what maximizing entropy does. What is the maximal entropy state of
the universe but something we would abhor like a uniform warm gas? To return to
the Go playing: entropy maximization may be a useful heuristic in some positions
- but the best Go programs do not purely maximize entropy and ignore the value
of positions or ignore the value of winning.
-2timtyler8yThat's an argument from incredulity, though. Hopefully, I can explain:
If you have a maximiser of A, the ability to constrain that maximiser, and the
ability to generate A, you can use it to maximise B by rewarding the production
of B with A. If A = entropy and B = utility, Q.E.D.
Of course if you can't constrain it you just get an entropy maximiser. That
seems like the current situation with modern ecosystems. These dissipate
mercilessly, until no energy gradients - or anything else of possible value - is
left behind.
By their actions shall ye know them. Humans generate large quantities of
entropy, accelerating universal heat death. Their actions clearly indicate that
they don't really care about averting universal heat death.
In general, maximisers don't necessarily value the eventual results of their
actions. A sweet taste maximiser might not value tooth decay and obesity.
Organisms behave as though they like dissipating. They don't necessarily like
the dissipated state their actions ultimately lead to.
Maximisation is subject to constraints. Go programs are typically constrained to
play go.
An entropy maximiser whose only actions were placing pieces on go boards in
competitive situations might well attempt to play excellent go - to make humans
feed it power and make copies of it.
Of course, this is a bit different from what the original article is talking
about. That refers to "maximizing accessible future game states". If you know
go, that's pretty similar to winning. To see how, consider a variant of go in
which both passing and suicide are prohibited.
2gwern8yThat seems to simply be buck-passing. What does this gain us over simply
maximizing B? If we can compute how to maximize a predicate like A, then what
stops us from maximizing B directly?
Pretty similar, yet somehow, crucially, not the same thing. If you know go,
consider a board position in which 51% of the board has been filled with your
giant false eye, you move, and there is 1 move which turns it into a true eye
and many moves which don't. The winning-maximizing move is to turn your false
eye into a true eye, yet this shuts down a huge tree of possible futures in
which your false eye is killed, thousands of stones are removed from the board,
and you can replay the opening with its beyond-astronomical number of possible
futures...
0timtyler8yYou said you didn't see how having an entropy maximizer would help with
maximizing utility. Having an entropy maximizer would help a lot. Basically
maximizers are very useful things - almost irrespective of what they maximize.
Sure. I never claimed they were the same thing.
If you forbid passing, forbid suicide, and aim to minimize your opponent's
possible moves, that would make a lot more sense - as a short description of a
go-playing strategy.
6gwern8ySo maximizers are useful for maximizing? That's good to know.
-2timtyler8yThat's trivializing the issue. The idea is that maximisers can often be
repurposed to help other agents (via trade, slavery etc).
It sounds as though you originally meant to ask a different question. You can
now see how maximizing entropy would be useful, but want to know what advantages
it has over other approaches.
The main advantage I am aware of associated with maximizing entropy is one of
efficiency. If you maximize something else (say carbon atoms), you try and leave
something behind. By contrast, an entropy maximizer would use carbon atoms as
fuel. In a competition, the entropy maximizer would come out on top - all else
being equal.
It's also a pure and abstract type of maximisation that mirrors what happens in
natural systems. Maybe it has been studied more.
0gwern8yI already saw how it could be useful in a handful of limited situations - that's
why I brought up the Go example in the first place!
As it stands, it sounds like a limited heuristic and the claims about
intelligence grossly exaggerated.
0timtyler8yEntropy maximisation purports to explain all adaptation. However, it doesn't
tell us much that we didn't already know about how to go about making good
adaptations. For one thing, entropy maximisation is a very old idea - dating
back at least to Lotka, 1922
[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1085052/].
So, if you allow me to divide by zero, I can derive a contradiction from the basic rules of arithmetic to the effect that any two numbers are equal. But there's a rule that I cannot divide by zero. In any other case, it seems like if I can derive a contradiction from basic operations of a system of, say, logic, then the logician is not allowed to say "Well...don't do that".
So there must be some other reason for the rule, 'don't divide by zero.' What is it?
You can totally divide by zero, but the ring you get when you do that is the zero ring, and it only has one element. When you start with the integers and try dividing by nonzero stuff, you can say "you can't do that" or you can move out of the integers and into the rationals, into which the integers embed (or you can restrict yourself to only dividing by some nonzero things - that's called localization - which is also interesting). The difference between doing that and dividing by zero is that nothing embeds into the zero ring (except the zero ring). It's not that we can't study it, but that we don't want to.
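To spell out the collapse in symbols (just a restatement of the point above, nothing beyond it): if 0 had a multiplicative inverse u, then, using the standard ring fact that 0·x = 0,

$$ 1 = 0 \cdot u = 0 \qquad\Longrightarrow\qquad x = x \cdot 1 = x \cdot 0 = 0 \ \text{ for every } x, $$

so the only ring in which 0 is invertible is the one-element zero ring.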
Also, in the future, if you want to ask math questions, ask them on math.stackexchange.com (I've answered a version of this question there already, I think).
I mean if you localize a ring at zero you get the zero ring. Equivalently, the unique ring in which zero is invertible is the zero ring. (Some textbooks will tell you that you can't localize at zero. They are haters who don't like the zero ring for some reason.)
The theorems work out nicer if you don't. A field should be a ring with exactly two ideals (the zero ideal and the unit ideal), and the zero ring has one ideal.
3Oscar_Cunningham8yAh, so it's for exactly the same reason that 1 isn't prime.
7Qiaochu_Yuan8yYes, more or less. On nLab this phenomenon is called too simple to be simple
[http://ncatlab.org/nlab/show/too+simple+to+be+simple].
1Oscar_Cunningham8yWe often want the field without zero to form a multiplicative group, and this
isn't the case in the ring with one element (because the empty set lacks an
identity and hence isn't a group). Indeed we could take the definition of a
field to be "a commutative ring whose nonzero elements form a group under
multiplication", and this is fairly elegant.
The rule isn't that you cannot divide by zero. You need a rule to allow you to divide by a number, and the rule happens to only allow you to divide by nonzero numbers.
There are also lots of things logicians can tell you that you're not allowed to do. For example, you might prove that (A or B) is equivalent to (A or C). You cannot proceed to cancel the A's to prove that B and C are equivalent, unless A happens to be false. This is completely analogous to going from AB = AC to B = C, which is only allowed when A is nonzero.
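A quick brute-force check of that cancellation point, as a sketch (the enumeration is my own illustration, not anything from the comment):

```python
# Enumerate all truth assignments: (A or B) can equal (A or C) even when B != C,
# and every counterexample has A == True -- so "cancelling" A is only safe when A is false.
from itertools import product

for A, B, C in product([False, True], repeat=3):
    if (A or B) == (A or C) and B != C:
        print(f"A={A}, B={B}, C={C}: (A or B) == (A or C), yet B != C")
```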
0kpreid8yHowever, {false, true} - {true} has only one member, and so values from it
become constant, whereas ℝ - {0} has many members and can therefore remain
significant.
For the real numbers, the equation a x = b has infinitely many solutions if a = b = 0, no solutions if a = 0 but b ≠ 0, and exactly one solution whenever a ≠ 0. Because there's nearly always exactly one solution, it's convenient to have a symbol for "the one solution to the equation a x = b" and that symbol is b / a; but you can't write that if a = 0 because then there isn't exactly one solution.
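The same case analysis as a display (nothing here beyond the paragraph above):

$$
ax = b \quad\text{has}\quad
\begin{cases}
\text{exactly one solution, } x = b/a, & \text{if } a \neq 0,\\
\text{no solution,} & \text{if } a = 0 \text{ and } b \neq 0,\\
\text{every } x \text{ as a solution,} & \text{if } a = b = 0.
\end{cases}
$$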
3latanius8yDidn't they do the same with set theory? You can derive a contradiction from the
existence of "the set of sets that don't contain themselves"... therefore, build
a system where you just can't do that.
(of course, coming from the axioms, it's more like "it wasn't ever allowed",
like in Kindly's comment, but the "new and updated" axioms were invented
specifically so that wouldn't happen.)
-1OrphanWilde8yWe divide by zero all the time, actually; derivatives are the long way about
dividing by zero. We just work very carefully to cancel the actual zero out of
the equation.
The rule is less "Don't divide by zero", as much as "Don't perform operations
which delete your data." Dividing by zero doesn't produce a contradiction, it
eliminates meaning in the data. You -can- divide by zero, you just have to do so
in a way that maintains all the data you started with. Multiplying by zero
eliminates data, and can be used for the same destructive purpose.
4[anonymous]8yI completely fail to understand how you got such a doctrine on dividing by zero.
Mathematics just doesn't work like that.
0OrphanWilde8yAre you denying this as somebody with strong knowledge of mathematics?
(I need to know what prior I should assign to this conceptualization being
wrong. I got it from a mathematics instructor, quite possibly the best I ever
had, in his explanation on why canceling out denominators doesn't fix
discontinuities.)
ETA: The problem he was demonstrating it with focused more on the error of
-adding- information than removing it, but he did show us how information could
be deleted from an equation by inappropriately multiplying by or dividing by
zero, showing how discontinuities could be removed or introduced. He also
demonstrated a really weird function involving a square root which had two
solutions, one of which introduced a discontinuity, one of which didn't.
3[anonymous]8yI'm a graduate student, working on my thesis.
I accept that this is some pedagogical half-truth, but I just don't see how it
benefits people to pretend mathematics cares about whether or not you "eliminate
meaning in the data." There's no meta-theorem that says information in an
equation has to be preserved, whatever that means.
2mstevens8yDividing by zero leads to a contradiction
[http://www.math.utah.edu/~pa/math/0by0.html]
Never divide by zero [http://www.math.utah.edu/online/1010/zero/]
Division by zero [http://en.wikipedia.org/wiki/Division_by_zero]
-6OrphanWilde8y
0[anonymous]8yThanks, that's helpful. But I guess my point is that it seems to me to be a
problem for a system of mathematics that one can do operations which, as you
say, delete the data. In other words, isn't it a problem that it's even possible
to use basic arithmetical operations to render my data meaningless? If this were
possible in a system of logic, we would throw the system out without further
ado.
And while I can construct a proof that 2=1 (what I called a contradiction,
namely that a number be equal to its successor) if you allow me to divide by
zero, I cannot do so with multiplications. So the cases are at least somewhat
different.
7Jonii8yQiaochu_Yuan already answered your question, but because he was pretty technical
with his answer, I thought I should try to simplify the point here a bit. The
problem with division by zero is that division is essentially defined through
multiplication and existence of certain inverse elements. It's an axiom in
itself in group theory that there are inverse elements, that is, for each a,
there is x such that ax = 1. Our notation for x here would be 1/a, and it's easy
to see why a 1/a = 1. Division is defined by these inverse elements: a/b is
calculated by a * (1/b), where (1/b) is the inverse of b.
But, if you have both multiplication and addition, there is one interesting thing. If we assume addition is the group operation for all numbers (and we use "0" to signify the additive neutral element you get from adding together an element and its additive inverse, that is, "a + (-a) = 0"), and we want multiplication to work the way we like it to work (so that a*(x + y) = (a*x) + (a*y), that is, distributivity holds), something interesting happens.
Now, the neutral element 0 is such that x + 0 = x; this is by definition of a neutral element. Now watch the magic happen: 0*x = (0 + 0)*x = 0*x + 0*x, so 0*x = 0*x + 0*x. We subtract 0*x from both sides, leaving us with 0*x = 0.
It doesn't matter what you are multiplying 0 with, you always end up with zero. So, assuming 1 and 0 are not the same number (in the zero ring they are the same; indeed, 0 = 1 is the only number in the entire zero ring), you can't get a number x such that 0*x = 1. Lacking an inverse element for 0, there's no obvious way to define what it would mean to divide by zero. There are special situations where there is a natural way to interpret what it means to divide by zero; in those cases, go for it. However, that is separate from the division defined for other numbers.
And, if you end up dividing by zero because you somewhere assumed that there
actually was such a number x that 0*x = 1, well, that's just your own
clumsiness.
Also, you can prove 1=2 i
3[anonymous]8yExcellent explanation, thank you. I've been telling everyone I know about your
resolution to my worry. I believe in math again.
Maybe you can solve my similarly dumb worry about ethics: If the best life is
the life of ethical action (insofar as we do or ought to prefer to do the
ethically right thing over any other comforts or pleasures), and if ethical
action consists at least largely in providing and preserving the goods of life
for our fellow human beings, then if someone inhabited the limit case of the
best possible life (by permanently providing immortality, freedom, and happiness
for all human beings), wouldn't they at the same time cut everyone else off from
the best kind of life?
4drethelin8yEthical action is defined by situations. The best life in the scenario where we
don't have immortality, freedom, and happiness is to try to bring them about, but
the best life in the scenario where we already have them is something different.
0[anonymous]8yGood! That would solve the problem, if true. Do you have a ready argument for
this thesis (I mean "but the best life in the scenario where we already have
them is something different.")?
0drethelin8y"If true" is a tough thing here because I'm not a moral realist. I can argue by
analogy for the best moral life in different scenarios being a different life
but I don't have a deductive proof of anything.
By analogy: the best ethical life in 1850 is probably not identical to the best
ethical life in 1950 or in 2050, simply because people have different capacities
and there exist different problems in the world. This means the theoretical most
ethical life is actually divorced from the real most ethical life, because no
one in 1850 could've given humanity all those things, and working toward them would've
taken away ethical effort from, e.g., abolishing slavery. Ethics under uncertainty
means that more than one person can be living the subjectively ethically perfect
life even if only one of them will achieve what their goal is because no one
knows who that is ahead of time.
0Watercressed8yI think you mean x + 0 = x
1Jonii8yyes. yes. i remember thinking "x + 0 =". after that it gets a bit fuzzy.
-4OrphanWilde8yYou can do the same thing in any system of logic.
In more advanced mathematics you're required to keep track of values you've
canceled out; the given equation remains invalid even though the cancelled value
has disappeared. The cancellation isn't real; it's a notational convenience
which unfortunately is promulgated as a real operation in mathematics classes.
All those cancelled-out values are in fact still there. That's (one of) the
mistakes performed in the proof you reference.
1Jonii8yThis strikes to me as massively confused.
Keeping track of cancelled values is not required as long as you're working with
a group, that is, a set (like the reals) and an operation (like addition) that
follows the kind of rules addition of integers and multiplication of non-zero
real values do. If you are working with a group, there's no sense in which those
canceled out values are left dangling. Once you cancel them out, they are gone.
http://en.wikipedia.org/wiki/Group_%28mathematics%29
[http://en.wikipedia.org/wiki/Group_%28mathematics%29] <- you can check group
axioms here, I won't list them here.
Then again, canceling out, as it is procedurally done in math classes, requires
each and every group axiom. That basically means it's nonsense to speak of
canceling out with structures that aren't groups. If you tried to cancel out
stuff with a non-group, that'd be basically assuming stuff you know ain't true.
Which begs a question: What are these structures in advanced maths that you
speak of?
Today, I finally took a racial/sexual Implicit Association Test.
I had always more or less accepted that it was, if not perfect, at least a fairly meaningful indicator of some sort of bias in the testing population. Now, I'm rather less confident in that conclusion.
According to the test, in terms of positive associations, I rank black women above black men above white women above white men. I do not think this is accurate.
Obviously, this is an atypical result, but I believe that I received it due to confounding factors which prevented the test from being a... (read more)
Academic research tends to randomize everything that can be randomized, including the orders of the different IAT phases, so your first concern shouldn't be an issue in published research. (The keyword for this is "order effect.")
The IAT is one of several different measures of implicit attitudes which are used in research. When taking the IAT it is transparent to the participant what is being tested in each phase, so people could try harder on some trials than on others, but that is not the case with many of the other tests (many use subliminal priming, e.g. flashing either a black man's face or a white man's face on the screen for 20ms immediately before showing the stimulus that participants are instructed to respond to). The different measures tend to produce relatively similar results, which suggests that effort doesn't have that big of an effect (at least for most people). I suspect that this transparency is part of the reason why the IAT has caught on in popular culture - many people taking the test have the experience of it getting harder when they're doing a "mismatched" pairing; they don't need to rely solely on the website's report of their results.
The survey that you took is not part of the IAT. It is probably a separate, explicit measure of attitudes about race and/or gender (do any of these questions look familiar?).
0Desrtopa8yNone of those questions were on the survey, but some of the questions on the
survey were similar.
The descriptions of the other measures of implicit attitudes given on that page
aren't in-depth enough for me to critique them effectively for methodology. The
first question that comes to mind though, is to what extent these tests have
been calibrated against associations that we already know about. For example, if
people are given implicit association tests which match words with pictures of,
say, smiling children with candy versus pictures of people with injuries, how do
they score?
0NancyLebovitz8yI haven't heard of any attempts at comparing implicit association tests to
behavior.
I keep accidentally accumulating small trinkets as presents or souvenirs from well-meaning relatives! Can anyone suggest a compact unit of furniture for storing/displaying these objects? Preferably in a way that is scalable, minimizes dustiness and falling-off and has pretty good ease of packing/unpacking. Surely there's a lifehack for this!
Or maybe I would appreciate suggestions on how to deal with this social phenomenon in general! I find that I appreciate the individual objects when I receive them, but after that initial moment, they just turn into ... stuff.
The Girl Scouts currently offer a badge in the "science of happiness." I don't have a daughter, but if you do, perhaps you should look into the "science of style" badge as well.
So far, I haven't found a good way to compare organizations for the blind other than reading their wikipedia pages.
And, well, blindness organizations are frankly a political issue. Finding unbiased information on them is horribly difficult. Add to this my relatively weak Google-fu, and I haven't found much.
Conclusions:
NFB is identity politics. They're also extremely assertive.
AFB focuses on technology, inherited Helen Keller's everything, etc.
ACB... umm... exists. They did give me a scholarship, and made the case for accessible money (Good luck with t
3RolfAndreassen8yPerhaps it would be easier to help if you said what you wanted help with. "The
most to offer" in what specific area?
0CAE_Jones8yThe trouble is that there are multiple areas of interest, and I'm not sure which
is best to focus on: life skills? Technology? Programs that I could improve?
Etc. My primary strategy has been to determine the goals of each organization
and how much success they've had in achieving them, and the trouble is that
these are hard to measure (we can tell how much policy influence NFB has had, at
least. I haven't found much about how many of AFB's recommendations have been
enacted.).
0RolfAndreassen8yThen it seems that you should recurse a level: Rather than trying to evaluate
the organisations, you should be deciding which of the possible
organisation-goals is most important. When you've decided that, judge which
organisation best achieves that optimal goal.
2CAE_Jones8yI still can't find much useful information on the AFB, but the NFB publicizes
most of their major operations. The only successful one I've come across so far
is the cancellation of the ABC sitcom "Good and Evil" (it's worth noting that
ABC denied that the NFB protests had anything to do with this). They don't seem
to be having success at improving Kindle accessibility, which is more a
political matter than a technological one (Amazon eventually cut communications
with them). They're protesting Goodwill because 64/165 of their stores pay
disabled employees less than minimum wage, in a manner that strikes me as poorly
thought out (it seems to me that Goodwill has a much better image than the NFB,
so this will most likely cost the NFB a lot of political capital).
This isn't really enough for me to determine whether they're powerful, or just
loud, but so far it's making me update ever so slightly in favor of just loud.
It is worth noting that all of the above information came from publications
written by NFB members, mostly hosted on NFB web sites. If my confidence in
their abilities is hurt by writings seemingly designed to favor them, I can only
imagine what something more objective would look like.
[edit]Originally typed Givewell instead of Goodwill! Fixed![/edit]
0CAE_Jones8yLighthouse International publishes scientific-looking research (although most of
them appear to be single studies with small sample sizes, so they could stand
further vetting). This
[http://www.lighthouse.org/research/archived-studies/overprotection/] and this
[http://www.lighthouse.org/research/archived-studies/enhancing/] match my
experience pretty well, although matching my experience isn't what I'd call
criteria for effectiveness. If nothing else, I expect that they would be the
most likely to help me get a quantitative picture of other organizations.
I would like to recommend Nick Winter's book, The Motivation Hacker. From an announcement posted recently to the Minicamp Graduates mailing list:
"The book takes Luke's post about the Motivation Equation and tries to answer the question, how far can you go? How much motivation can you create with these hacks? (Turns out, a lot.) Using the example of eighteen missions I pursued over three months, it goes over in more detail how to get yourself to want to do what you always wanted to want to do."
(Disclaimer: I hadn't heard of Nick Winter until a fri... (read more)
Sex. I have a problem with it and would like to solve it.
I get seriously anxious every time I'm about to have sex for the first time with a new partner. Further times are great and awesome. But the first time leaves me very anxious; which makes me delay it as much as I can. This is not optimal.
I don't know how to fix it; if anyone can help, I'd be very grateful.
--
I notice I'm confused: I always tried to keep a healthy life: sleeping many hours, no alcohol, no smoke. I've just been living 5 days in a different country with some friends. We sleep 7 hour... (read more)
Re: sex... is there anyone with whom you're already having great awesome sex who would be willing to help out with some desensitization? For example, adding role-playing "our first time" to your repertoire? If not, how would you feel about hiring sex workers for this purpose?
Re: lifestyle... list the novel factors (dancing 4 hrs/night, spending time with people rather than alone, sleeping <7 hrs/night, diet changes, etc. etc. etc.). When you're back home, identify the ones that are easy to introduce and experiment with introducing them, one at a time, for a week. If you don't see a benefit, move on to the next one. If none of them work, try them all at once. If that doesn't work, move on to the difficult-to-introduce ones and repeat the process.
Personally, I would guess that several hours of sustained exercise and a different diet are the primary factors, but that's just a guess.
1newguy8yre: sex Not at the moment, but in some 2 months that roleplaying stuff would be
possible yes. I tried looking for some affect hacking on the website but didn't
find much practical advice unfortunately.
wrt sex workers, no great moral objection, besides an initial emotional ugh,
but I'm unsure on how helpful it could be.
re: lifestyle this is somewhat what I had in mind, thank you.
3drethelin8ycould be a sign of a mold infestation or other environmental thing where you
normally live
2newguy8ywill do and report back.
No, I never did try that, I feel it will be only very catastrophic thoughts; I
will try to track it when the opportunity arises and update.
3falenas1088yAre you significantly happier now than before?
2newguy8yVery much so yes. Potential big confounder: never been around so many beautiful
& nice females (I'm a straight male).
But my moodflow varies between long lasting moods of feeling slightly good and
slightly bad and for the days I've been here I get consistent "great" ratings -
I feel awesome all the time.
6falenas1088yThe feeling happier part could explain looking and feeling healthier alone. I'm
stepping into the realm of guesswork here, but I would say that being around
others that you enjoy hanging out with could be the cause, or the increased
exercise from dancing so much.
As for the cigarettes and alcohol: although there are long-term risks associated
(especially for the cigarettes), that doesn't mean they cause negative short-term
effects.
As for 7 hours of sleep tops, there's evidence that around 7 hours might be
best.
0drethelin8ycould be a sign of a mold infestation or other environmental thing where you
normally live
7Viliam_Bur8yIsolating and mass producing the happiness mold could be the best invention
since penicillin. :D
2Manfred8yI will make the typical recommendation: cognitive behavioral therapy
[http://en.wikipedia.org/wiki/Cognitive_behavioral_therapy] techniques. Try to
notice your emotions and responses, and just sort them into helpful or not
helpful. Studies also seem to show that this sort of thing works better when
you're talking with a professional.
1MixedNuts8yThe standard strategy seems to be to work up to sex very progressively, going a
little further on each encounter, so there's never any bright line to cross. Why
is this failing for you?
0newguy8yMaybe because there is always a clear line? I go from meeting to kissing quite
fast, and from kissing to being in my bedroom also quite fast, so there is no
small progression, it's meeting, kissing, then we end up at a sex-appropriate
place and I go through it, but I'm incredibly anxious.
2MixedNuts8yBy "quite fast" do we mean a few hours, or a few dates? If the latter: You are
in fact allowed not to have sex on the first date, or the first time they're in
your bedroom. You can go as far as you're comfortable with and no further - and
know where you'll stop in advance, so you're not anxious beforehand, and then go
a little further on subsequent dates.
Is your anxiety tied to specific acts, or to sex itself? Does it help if I point
out that the boundaries of what counts as sex are very blurry, and do your
anxieties change if you change what you think of as sex?
2newguy8y3 meetings, wouldn't call them dates.
I understand that, but it somehow makes me feel bad to have them there and ready,
when I'm the one who actually also wants to but somehow/for some reason can't.
Just first-time sex as in intercourse. Well, in my mind sex = intercourse [as in
penis in vagina], everything else is "fooling around". [Not debating
definitions, just saying how it feels to me].
I don't know I need to test it, but that might be useful to try, to try to think
of sex as being something else.
4MixedNuts8ySounds like your problems could cancel out. If you decline intercourse but "fool
around" a lot, they're unlikely to be too unhappy about it.
2newguy8yThis worked out (n = 3). I explicitly say that it is unlikely intercourse will
happen (to them and myself), and when it does it just feels natural, no bright
line. Thank you, this was a big problem!
A few of you may know I have a blog called Greatplay.net, located at... surprise... http://www.greatplay.net. I’ve heard from some people that they discovered my site much later than they otherwise would have because the name of the site didn’t communicate what it was about well and sounded unprofessional.
Why Greatplay.net in the first place? I picked it when I was 12, because it was (1) short, (2) pronounceable, (3) communicable without any risk of the other person misspelling it, and (4) did not communicate any information about what the site would be about, so I coul... (read more)
2Jonii8yI don't think you need to change the domain name. For marketability, you might
wanna have the parts named so that stuff within your site becomes a brand in
itself, so greatplay.net becomes associated with "utilitarianism", "design",
etc. Say, I read a blog by a chemist who has a series of blog posts titled "stuff
i won't work with". I can't remember the domain name, but I know that whenever
I want to read about a nasty chemical, I google that phrase.
0tondwalkar8yDibs on 'A Utilful Mind' if you don't take it?
1peter_hurford8yI ended up going with Everyday Utilitarian [http://www.everydayutilitarian.com],
so you can have it.
One question I like to ask in response to questions like this is "what do you plan on doing with this information?" I've generally found that thinking consequentially is a good way to focus questions.
7wedrifid8yThe simplest way of categorizing this would be based on the biology of which
nerves are involved. It appears that the tickle sensation involves
signals from nerve fibres associated with both pain and touch.
[http://en.wikipedia.org/wiki/Tickling] So... "Kind of".
6[anonymous]8yIn case the answer to Qiaochu_Yuan's question
[http://lesswrong.com/r/discussion/lw/h7r/open_thread_april_1530_2013/8sia] is
something like “I'm trying to establish the moral status of tickling in my
provisional moral system”, note that IIUC the sensation felt when eating spicy
foods is also pain according to most definitions, but a moral system according
to which eating spicy foods is bad can go #$%& itself for all that I'm
concerned.
Does anyone have any real-world, object-level examples of degenerate cases?
I think degeneracy has some mileage in terms of explaining certain types of category error, (eg. "atheism is a religion"), but a lot of people just switch off when they start hearing a mathematical example. So far, the only example I've come up with is a platform pass at a train station, which is a degenerate case of a train ticket. It gets you on the platform and lets you travel a certain number of stops (zero) down the train line.
2TimS8yGrabbing someone by the arm and dragging them across the room as a degenerate
case of kidnapping?
Trading a gun for drugs [http://en.wikipedia.org/wiki/Smith_v._United_States] as
a degenerate case of "Using a firearm in a drug transaction"? On a related note,
receiving the gun [http://en.wikipedia.org/wiki/Watson_v._United_States] is not
using a firearm in a drug transaction.
I'm sure there are more examples in the bowels of criminal law (and law
generally).
2Alejandro18yComplete anarchy as the degenerate case of a government system?
Sleeping on the floor as the degenerate case when discussing different kinds of
beds and mattresses?
Asexuality as the degenerate case of hetero/homo/bi sexuality?
2RolfAndreassen8ySerious-ish answers: The degenerate case of dieting is when you increase your
calorie intake by zero. (Also applies to government budgets, although it's then
usually referred to as a "cut".)
The degenerate case of tax reform is to pass no new laws.
The degenerate case of keeping kosher (also halal, fasting, giving things up for
Lent) is to eat anything you like.
The degenerate case of a slippery-slope argument is to say "If we do X, X will
follow, and then we'll be sure to have X, from which we'll certainly get X".
(That is, this argument is the limit as epsilon goes to zero of the argument X
-> X+epsilon -> X+2 epsilon...).
Mainly in jest: Dictatorship considered as a degenerate case of democracy: One
Man, One Vote - he is The Man, he has The Vote.
Conversely, democracy considered, Moldbug-style, as the degenerate case of
dictatorship: Each of N citizens has 1/N of the powers of the dictator.
1kpreid8yNot going anywhere is degenerate travel (but can be an especially restful
vacation).
There's a phenomenon I'd like more research done on. Specifically, the ability to sense solid objects nonvisually without direct physical contact.
I suspect that there might be some association with the human echolocation phenomenon. I've found evidence that there is definitely an audio component; I entirely by accident simulated it in a wav file (It was a long time before I could listen to that all the way through, for the strong sense that something was reaching for my head; system2 had little say in the matter).
I've also done my own experiments involving... (read more)
In chapter 1 of his book Reasoning about Rational Agents, Michael Wooldridge identifies some of the reasons for trying to build rational AI agents in logic:
There are some in the AI research community who believe that logic is (to put it crudely) the work of the devil, and that the effort devoted to such problems as logical knowledge representation and theorem proving over the years has been, at best, a waste of time. At least a brief justification for the use of logic therefore seems necessary.
First, by fixing on a structured, well-defined artificial lan
1lukeprog8yIn An Introduction to MultiAgent Systems
[http://www.amazon.com/Introduction-MultiAgent-Systems-Michael-Wooldridge/dp/0470519460/]
, he writes:
I started following DavidM's meditation technique. Is there anything that I should know? Any advice or reasons on why I should choose a different type of meditation?
2Tenoke8yFWIW adding tags to distracting thoughts and feelings seems like a useful thing
(for me) even when not meditating and I haven't encountered this act of labeling
in my past short research on meditation.
Sometimes, success is the first step towards a specific kind of failure.
I heard that the most difficult moment for a company is the moment it starts making decent money. Until then, the partners shared a common dream and worked together against the rest of the world. Suddenly, the profit is getting close to one million, and each partner becomes aware that he made the most important contributions, while the others did less critical things which technically could be done by employees, so having to share the whole million with them equally is completely stupi... (read more)
4OrphanWilde8yTrue. It's harder to fake rationality than it is to fake the things that matter
today, however (say, piety). And given that the sanity waterline has increased
enough that "rational" is one of the most desirable traits for somebody to have,
fake signaling should be much harder to execute. (Somebody who views rationality
as such a positive trait is likely to be trying to hone their own rationality
skills, after all, and should be harder to fool than the same person without any
such respect for rationality or desire to improve their own.)
8Viliam_Bur8yFaking rationality would be rather easy: Criticize everything which is not
generally accepted [http://lesswrong.com/lw/1ww/undiscriminating_skepticism/]
and always find biases in people you disagree with
[http://lesswrong.com/lw/he/knowing_about_biases_can_hurt_people/] (and since
they are humans, you always find some). When "rationality" becomes a popular
word, you can get many followers by doing this.
Here I assume that the popularity of the word "rationality" will come before
there are millions of x-rationalists to provide feedback against wannabe
rationalists. It would be enough if some political movement decided to use this
word as their applause light.
1OrphanWilde8yDo you see any popular people here you'd describe as faking rationality? Do we
seem to have good detectors for such behavior?
We're a pretty good test case for whether this is viable or not, after all.
(Less so for somebody co-opting words, granted...)
8Viliam_Bur8yThe community here is heavily centered around Eliezer. I guess if someone
started promoting some kind of fake rationality here, sooner or later they would
get into conflict with Eliezer, and then most likely lose the support of the
community.
For another wannabe rationalist guru it would be better to start their own
website, not interact with people on LW, but start recruiting somewhere else,
until they have greater user base than LW. At the moment their users notice LW,
all they have to do is: 1) publish a few articles about cults and mindkilling,
to prime their readers, and 2) publish a critique of LW with hyperlinks to all
currently existing critical sources. The proper framing would be that LW is a
fringe group which uses "rationality" as applause lights, but fails horribly
(insert a lot of quotations and hyperlinks here), and discussing them is really
low-status.
It would help if the new rationalist website had a more professional design, and
emphasised its compatibility with mainstream science, e.g. by linking to
high-status scientific institutions, and sometimes writing completely
uncontroversial articles about what those institutions do. In other words, the
new website should be optimized to get 100% approval of the RationalWiki
community. (For someone trying to do this, becoming a trusted member of
RationalWiki community could be a good starting point.)
2David_Gerard8yI'm busy having pretty much every function of RW come my way, in a Ponder
Stibbons-like manner, so if you can tell me where the money is in this I'll see
what I can come up with. (So far I've started a blog with no ads. This may not
be the way to fame and fortune.)
1gwern8yThe money or lack thereof doesn't matter, since RW is obviously not an
implementation of Viliam's proposed strategy: it fails on the ugliness with its
stock MediaWiki appearance, has too broad a remit, and like El Reg it shoots
itself in the foot with its oh-so-hilarious-not! sense of humor (I dislike
reading it even on pages completely unrelated to LW). It may be successful in
its niche, but its niche is essentially the same niche as /r/atheism or Richard
Dawkins - mockery of the enemy leavened with some facts and references.
If - purely hypothetically speaking here, of course - one wished to discredit LW
by making the respective RW article as negative as possible, I would expect it
to do real damage. But not be any sort of fatal takedown that set a mainstream
tone or gave a general population its marching orders, along the lines of
Shermer's 'cryonics is a scam because frozen strawberries' or Gould's Mismeasure
of Man's 'IQ is racist, involved researchers like Morton faked the data because
they are racist, and it caused the Holocaust too'.
-1MugaSofer8ySo ... RationalWiki, then.
2David_Gerard8yAccomplishment is a start. Do the claims match the observable results?
-1private_messaging8yYeah, because true rationality is going to be supporting something like cryonics
that you personally believe in.
0NancyLebovitz8yI can't see any good general solutions. People are limited to their own
judgement about whether something which purports to be selling rationality
actually makes sense.
You take your chances with whether martial arts and yoga classes are useful and
safe.
LW et al. does have first mover advantage and hopefully some prestige as a
result, and I'm hoping that resources for the general public will be
developed here. On the other hand, taking sufficient care to develop workshops
which actually work takes time-- and that's workshops for people whose
intelligence level is similar to that of the people putting on the workshops.
If we assume that rationalists should win, even over fake rationalists, then
maybe we should leave the possibility open that rationalists who are actually in
the situation of competing with fake rationalists should be in a better position
to find solutions because they'll know more than we do now.
0Viliam_Bur8yI also don't have a solution besides reminding the rationalists that we run on
corrupted hardware, and the strong feeling of "these people around me are
idiots, I could do it hundred times better" is an evolutionary adaptation for
situations when there are many resources and no significant external enemy. (And
by the way, this could explain a lot of individualism our society has these
days.) We had a few people here who got offended e.g. by Eliezer's certainty
about quantum physics, and tried to split, and failed.
So perhaps the risk is actually small. Fake rationalists may be prone to
self-sabotage [http://lesswrong.com/lw/hf/debiasing_as_nonselfdestruction/]. The
proverbial valley of bad rationality surrounding
[http://lesswrong.com/lw/he/knowing_about_biases_can_hurt_people/] the castle of
rationality can make being a half-rationalist even worse than being a
non-rationalist. So the rationalists may have a hard time fighting pure
superstition, but the half-rationalists will just conveniently destroy
themselves.
The first mover advantage works best if all players are using the same strategy.
But sometimes the new player can learn from older players' mistakes, and does
not have to pay the costs. (Google wasn't the first search engine; Facebook
wasn't the first social network; MS Windows wasn't the first operating system
with graphical interface.) The second player could learn from LW's bad PR. But
it is likely that being completely irrational would be even more profitable for
them, if profit would be the main goal.
6gwern8ySDr actually gave me his research-edition Emotiv EPOC, but... I haven't actually
gotten around to using it because I've been busy with things like Coursera and
statistics. So, eventually! Hopefully.
-2OrphanWilde8yHm. Do you know of any resources on how to use EEG information to improve your
thought processes?
(I'm highly tempted to put some of my tax return to trying it out; partially for
improvement purposes, partially because I'm curious how much is going on inside
my mind I'm unaware of.)
4gwern8yAnything labeled 'neurofeedback' seems like a good place to start. I presently
have few ideas about how to use it, aside from seeing if it's a good way to
quantify meditation quality and hence have more direction in meditation than
random books and 'well, it seems to be helping a little'.
2Zaine8yEEG machines measure frequency of neuronal firing in the cortex. The greater the
frequency, the more asynchronous the firing and thus the more active the brain.
Learning how to read EEG output
[https://en.wikipedia.org/wiki/Electroencephalography#Comparison_table] requires
training, but there might be computer programs for that. To use the machine
effectively, identify an activity for which you'd like to measure your brain
waves, exempli gratia:
Measure degrees of neuronal firing asynchrony during work periods (pomodoros) -
useful for calibrating an accurate feeling of focus; measure success of
meditation (gamma wave output), as gwern noted; measure which break activities
actually induce a relaxed state; and of course check quality of sleep.
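For "computer programs for that", here is a minimal sketch of the usual first step, estimating band power from a recording with Welch's method. The sampling rate, band edges, and the synthetic test signal are generic assumptions of mine, not anything specific to the EPOC or to the comment above.

```python
# Minimal band-power sketch: given a single-channel EEG trace sampled at fs Hz,
# estimate how much power falls in the conventional delta/theta/alpha/beta/gamma bands.
import numpy as np
from scipy.signal import welch

def band_powers(trace, fs=128):
    freqs, psd = welch(trace, fs=fs, nperseg=fs * 2)  # power spectral density, 2 s windows
    bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
             "beta": (13, 30), "gamma": (30, 45)}
    powers = {}
    for name, (lo, hi) in bands.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = np.trapz(psd[mask], freqs[mask])  # integrate the PSD over the band
    return powers

# Synthetic check: 60 s of noise plus a 10 Hz (alpha-band) oscillation.
fs = 128
t = np.arange(0, 60, 1 / fs)
trace = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(len(t))
print(band_powers(trace, fs))  # alpha should dominate
```

Comparing, say, the alpha or gamma numbers across pomodoros, breaks, or meditation sessions is then just bookkeeping.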
0Emile8yI'm pretty curious about those and have considered buying one, but didn't really
think it worthwhile - I tried one and was not very impressed, though if I have a
lot of time I might give it a stab.
Cal Newport and Scott H. Young are collaborating to start a deliberate practice course by email. Here's an excerpt from one of Cal's emails to inquiring people:
The goal of the course is simple: to teach you how to apply the principles of deliberate practice to become a stand out in your job.
Why is this important? The Career Capital Theory I teach in my latest book and on Study Hacks maintains that the skills that make you remarkable are also your leverage for taking control of your working life, and transforming it into a source of passion.
On an uncharitable reading, this sounds like two wide-eyed broscientist prophets who found The One Right Way To Have A Successful Career (because by doing this their career got successful, of course), and are now preaching The Good Word by running an uncontrolled, unblinded experiment for which you pay 100$ just to be one of the lucky test subjects.
Note that this is from someone who's never heard of "Cal Newport" or "Scott H. Young" before now, or perhaps just doesn't recognize the names. The facts that they've sold popular books with "get better" in the description and that they are socially recognized as scientists are rather impressive, but they don't substantially raise my prior that this will work.
So if you've already tried some of their advice in enough quantity that your updated belief that any given advice from them will work is high enough and stable enough, this seems more than worth 100$.
If it works, the possible monetary benefits alone probably outweigh the upfront costs; and even setting that aside, depending on the kind of career you're in, the VoI and RoI here might be quite high, so this might need only a 30% to 50% probability of being useful to be worth the time and money.
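As a rough back-of-the-envelope check (the numbers are illustrative assumptions of mine, not the commenter's): if the total cost is C (the 100$ fee plus however you value the time) and the benefit conditional on the course being useful is B, buying is worthwhile in expectation when

$$ p \cdot B \ge C \qquad\Longleftrightarrow\qquad p \ge \frac{C}{B}, $$

so with, say, C = 200$ and B anywhere between 400$ and 650$, the required probability of usefulness lands in roughly the 30%-50% range mentioned above.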
0MileyCyrus8yThey seem to get more respect on LW than average career advice bloggers, so I
was hoping someone who was familiar would comment. Nonetheless, I'm upvoting you
because it's good to hear an outsider's opinion.
0David_Gerard8yIt usually goes there, yes - presumably it was put in Main in error.
0diegocaleiro8yBefore this I had checked two other open threads, including the last one, and
when the link is open it shows "Main" dark-shaded on top of them.
http://lesswrong.com/lw/h3w/open_thread_april_115_2013/
[http://lesswrong.com/lw/h3w/open_thread_april_115_2013/]
For the time being I switched it to discussion.
6Vaniver8yUnfortunately, this is not an indicator that the post is actually in Main.
2diegocaleiro8yHow bizarre. :)
0lsparrish8yOriginally, they were generally in Main, since Discussion was just for putting
posts that need cleanup work. Eventually this was changed though, and we usually
keep open threads in Discussion these days.
In a few places — possibly here! — I've recently seen people refer to governments as being agents, in an economic or optimizing sense. But when I reflect on the idea that humans are only kinda-sorta agents, it seems obvious to me that organizations generally are not. (And governments are a sort of organization.)
People often refer to governments, political parties, charities, or corporations as having goals ... and even as having specific goals which are written down here in this constitution, party platform, or mission statement. They express dismay and ou... (read more)
0fubarobfusco8y"Entity that acts like it has goals." If someone says, "The Democratic Party
wants to protect the environment" or "The Republican Party wants to lower the
national debt," they are attributing goals to an organization.
0Qiaochu_Yuan8yCan you give an example of something that's not an agent?
I encountered this cute summary of priming findings, thought you guys might like it, too:
You are walking into a room. There is a man sitting behind a table. You sit down across from him. The man sits higher than you, which makes you feel relatively powerless. But he gives you a mug of hot coffee. The warm mug makes you like the man a little more. You warm to him so to speak. He asks you about your relationship with your significant other. You lean on the table. It is wobbly, so you say that your relationship is very stable. You take a sip from the coffee
Amanda Knox and evolutionary psychology - two of LessWrong's favorite topics, together in one news article / opinion piece.
The author explains the anti-Knox reaction as essentially a spandrel of an ev. psych reaction. Money quote:
In our evolutionary past, small groups of hunter-gatherers needed enforcers, individuals who took it upon themselves to punish slackers and transgressors to maintain group cohesion. We evolved this way. As a result, some people are born to be punishers. They are hard-wired for it.
I'm skeptical of the ev. psych because it... (read more)
7komponisto8yThe phenomenon of altruistic punishment itself is apparently not just a matter
of speculation. Another quote from Preston's piece:
He links to this PNAS paper [http://www.pnas.org/content/100/6/3531.long] which
uses a computer simulation to model the evolution of altruistic punishment. (I
haven't looked at it in detail.)
Whatever the explanation for their behavior (and it really cries out for one),
the anti [http://truejustice.org]-Knox [http://perugiamurderfile.org] people
[http://perugiamurderfile.net] are truly disturbing, and their existence has
taught me some very unpleasant but important lessons about Homo sapiens.
(EDIT: One of them, incidentally, is a mathematician who has written a book
about the misuse of mathematics in trials -- one of whose chapters argues, in a
highly misleading and even disingenuous manner, that the acquittal of Knox and
Sollecito represents such an instance.)
2TimS8ySkimming the PNAS paper, it appears that the conclusion is that evolved group
co-operation is not mathematically stable without evolved altruistic punishment.
I.e. populations with only evolved co-operation drift towards populations
without any group focused evolved traits, but altruistic punishment seems to
exclude enough defectors that evolved co-operation maintained frequency in the
population.
Which makes sense, but I'm nowhere close to qualified to judge the quality of
the paper or its implications for evolutionary theory.
I am aware that there have been several discussions over to what extent x-rationality translates to actual improved outcomes, at least outside of certain very hard problems like metaethics. It seems to me that one of the best ways to translate epistemic rationality directly into actual utility is through financial investment/speculation, and so this would be a good subject for discussion (I assume it probably has been discussed before, but I've read most of this website and cannot remember any in depth-thread about this, except for the mention of markets b... (read more)
0Viliam_Bur8yBring some women to the team. (Yeah, that just changes the problem to a harder
one: Where to find enough women rationalists interested in finance?) Or have
multiple men on the team, and let them decide through some kind of voting. This
would work only if their testosterone level fluctuations are uncorrelated. You
could do some things to prevent that, e.g. forbid them to meet in person, and
make their coordination as impersonal as possible, to prevent them from making
each other angry.
This sounds like a huge complication to compensate for a single source of bias,
so it needs some measurement. If this could help the team make millions, perhaps
it is worth doing.
Maybe irrationality could be modelled as just another cost of participating in
the market. There are many kinds of costs which one has to pay to participate in
the market. You pay for advertising, for transferring goods from the place they
are produced to the customer, etc. Your own body must be fed and clothed.
Irrationality is a cost of using your brain.
If you transferred your cargo by ship, especially a few centuries ago, you
would have to accept that some of your ships would sink. And yet, you could
make a profit, on average. Similarly, if you use a human brain to plan your
business, you have to accept that some of your plans will fail. The profit
can still be possible, on average.
2NancyLebovitz8yThis is just from memory, but I think testosterone levels aren't (just?) about
anger. Again from memory, testosterone goes up from winning, so the problem is
overconfidence from previous victories.
0wedrifid8yI'm afraid that is the opposite of a solution to this particular problem. Even
neglecting the fluctuation in women's testosterone levels and considering only
the stereotypical androgenic behaviour of the males, this can be expected to (if
anything) increase the risk-taking behaviours of the kind warned against here.
Adding females to an already aggressive male group gives them prospective mates
to show off to. The linked to article mentions observations of this.
(There may be other reasons to bring more women onto your professional trading
team. Just not this one.)
I wonder if many people are putting off buying a bitcoin to hang onto, due more to trivial inconvenience than calculation of expected value. There's a bit of work involved in buying bitcoins, either getting your funds into mtgox or finding someone willing to accept paypal/other convenient internet money sources.
8Qiaochu_Yuan8yWhat if we're putting off buying a bitcoin because we, uh, don't want to?
3lsparrish8yOk... Well... If that's the case, and if you can tell me why you feel that way,
I might have a response that would modify your preference. Then again, your
reasoning might modify my own preference. Cryptic non-argument isn't
particularly interesting, or helpful for coming to an Aumann Agreement.
Edit: Here is my response
[http://lesswrong.com/lw/h8z/bitcoins_are_not_digital_greenbacks/].
1) I am not at all convinced that investing in bitcoins is positive expected value, 2) they seem high-variance and I'm wary about increasing the variance of my money too much, 3) I am not a domain expert in finance and would strongly prefer to learn more about finance in general before making investment decisions of any kind, and 4) your initial comment rubbed me the wrong way because it took as a standing assumption that bitcoins are obviously a sensible investment and didn't take into account the possibility that this isn't a universally shared opinion. (Your initial follow-up comment read to me like "okay, then you're obviously an idiot," and that also rubbed me the wrong way.)
If the bitcoin situation is so clear to you, I would appreciate a Discussion post making the case for bitcoin investment in more detail.
2RomeoStevens8yRegulatory uncertainty swamps any quantitative analysis, I think.
5Kaj_Sotala8yThe standard advice is that normal people should never try to beat the market by
picking any single investment, but rather put their money in index funds. The
best publicly available information is already considered to be reflected in the
current prices: if you recommend buying a particular investment, that implies
that you have knowledge that the best traders currently on the market do not
have. As a friend commented:
So if you think that people should be buying Bitcoins, it's up to you to explain
why the standard wisdom on investment is wrong in this case.
(For what it's worth, personally I do own Bitcoins, but I view it as a form of
geek gambling, not investment. It's fun watching your coins lose 60% in value
and go up 40% from that, all within a matter of a few days.)
5RomeoStevens8yBitcoins are more like investing in a startup. The plausible scenarios for
bitcoins netting you a return commensurate with the risk involve it disrupting
several $100-billion-plus markets (PayPal, Western Union). I think investing in
startups that have plausible paths towards such disruption is worthy of a
small portion of your portfolio.
2RomeoStevens8yIt should be significantly better on May 6th, presuming the CoinLab/Silicon
Valley Bank/MtGox stuff goes live.
2wedrifid8yAt the level of buying just one bitcoin the convenience is more than trivial.
Even just the financial burden of the bank transfers changes the expected
value calculation quite a bit (although the cost seems to be decreasing
somewhat).
0[anonymous]8yIn case anyone has difficulty with the convenience factor:
I have four bitcoins that I bought for about 100 USD. Currently MtGox is at
under 80 USD. As long as it remains at or below that rate, I am willing to sell
these at 100 USD via paypal. It's not a great price but it is much more
convenient to pay by paypal than the other methods available to buy bitcoins.
* Only lesswrongers with decent karma and posting history qualify.
* The intended purpose is for you to hold them long-term against the off chance
that it becomes a mainstream currency in the long term. Please hold them for
at least a year.
* They can be converted to paper wallet form, which I will do for you if you
opt to trust me (because as far as I know this implies I would have access
unless/until I delete the wallet) and send it to you by snail-mail.
* I will most likely be using the proceeds to buy more via IRC.
PM if interested.
3beoShaffer8yBuss's Evolutionary Psychology is good if you are specifically looking for the
evolutionary psychology element; I'm not so sure about general evolutionary
biology books. Also, we have a dedicated textbook thread
[http://lesswrong.com/lw/3gu/the_best_textbooks_on_every_subject/].
0kgalias8yThanks. I'm aware of the topic, but sadly there's not much there related to
evolution (though I did rediscover that khan academy has some stuff). Is there
any merit to this criticism?
http://www.amazon.com/review/R3NG0J7T66E9N4/ref=cm_cr_pr_viewpnt#R3NG0J7T66E9N4
[http://www.amazon.com/review/R3NG0J7T66E9N4/ref=cm_cr_pr_viewpnt#R3NG0J7T66E9N4]
4gwern8yHe's been asked before and denied it, IIRC.
0shminux8yI guess he is just a natural genius. Nietzsche would have looked up to him.
4gwern8yOr he's just channeling regular skeptic/geek/transhumanist memes from fiction
etc. Manipulative evil AIs? Well, that's like every other Hollywood movie with
an AI in it...
2shminux8yI did not mean just this one strip, but what he draws/writes on philosophy,
religion, metaethics and transhumanism in general.
I have noticed an inconsistency between the number of comments actually present on a post and the number declared at the beginning of its comments section, the former often being one less than the latter.
For example, of the seven discussion posts starting at "Pascal's wager" and working back, the "Pascal's wager" post at the moment has 10 comments and says there are 10, but the previous six all show a count one more than the actual number of visible comments. Two of them say there is 1 comment, yet there are no comments and the text &qu... (read more)
5pragmatist8yA short while ago, spam comments in French were posted to a bunch of discussion
threads. All of these were deleted. I'm guessing this discrepancy is a
consequence of that.
This has most likely been mentioned in various places, but is it possible to make new meetup posts (via the "Add new meetup" button) show up only under "Nearest Meetups", and not in Discussion? Also, how about renaming the link to "Upcoming Meetups" to match the title on that page, and listing more than two - perhaps a rolling schedule of the next 7 days?
Is there a nice way of being notified about new comments on posts I found interesting / commented on / etc? I know there is a "comments" RSS feed, but it's hard to filter out interesting stuff from there.
... or a "number of green posts" indicator near the post titles when listing them? (I know it's a) takes someone to code it b) my gut feeling is that it would take a little more than usual resources, but maybe someone knows of an easier way of the same effect.)
3Oscar_Cunningham8yI don't quite see what you mean here. Do you know that each post has its own
comments RSS feed?
0latanius8y... this is the thing I've been looking for! (I think I had some strange cached
thought from who knows where that posts do not have comments feeds, so I didn't
even check... thanks for the update!)
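If anyone wants to turn those per-post feeds into notifications, here is a rough sketch using the feedparser library; the feed URL is a placeholder you would replace with the comments-feed link of the post you care about:

```python
import time
import feedparser  # pip install feedparser

# Placeholder: paste in the per-post comments feed URL you want to watch.
FEED_URL = "PASTE-THE-POST-COMMENTS-FEED-URL-HERE"
seen = set()

while True:
    for entry in feedparser.parse(FEED_URL).entries:
        key = entry.get("id", entry.get("link"))
        if key not in seen:
            seen.add(key)
            print(entry.get("author", "?"), "-", entry.get("link", ""))
    time.sleep(15 * 60)   # poll every 15 minutes
```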
Is there anyone going to the April CFAR Workshop who could pick me up from the airport? I'll be arriving at San Francisco International at 5 PM if anyone can help me get out there. (I think I have a ride back to the airport after the workshop covered, but if I don't I'll ask about that separately.)
Hey; we (CFAR) are actually going to be running shuttles from SFO Thursday evening, since the public transit time / drive time ratio is so high for the April venue. So we'll be happy to come pick you up, assuming you're willing to hang out at the airport for up to ~45 min after you get in. Feel free to ping me over email if you want to confirm details.
Who is the best pro-feminist blogger still active? In the past I enjoyed reading Ozy Frantz, Clarisse Thorn, Julia Wise and Yvain, but none of them post regularly anymore. Who's left?
4shminux8yYvain still posts regularly (Google slate star codex), but he is not
pro-feminist, he is anti-bias.
0Omid8yHe's slowing down and shifting focus
[http://slatestarcodex.com/2013/06/19/a-new-chapter-in-the-codex/] which makes
him an unreliable source for rigorous defenses of feminism.
2Alicorn8yIf you liked Ozy, you might like Pervocracy too.
I wrote something on Facebook recently that may interest people, so I'll cross-post it here.
Cem Sertoglu of Earlybird Venture Capital asked me: "Will traders be able to look at their algorithms, and adjust them to prevent what happened yesterday from recurring?"
My reply was:
I wouldn't be surprised if traders will be able to update their algorithms so that this particular problem doesn't re-occur, but traders have very little incentive to write their algorithms such that those algorithms would be significantly more robust in general. The approaches th
Considering making my livejournal into something resembling the rationality diaries (I'd keep the horrible rambling/stupid posts for honesty/archival purposes). I can't tell if this is a good idea or not; the probability that it'd end like everything else I do (quietly stewing where only I bother going) seems absurdly high. On the other hand, trying to draw this kind of attention to it and adding structure would probably help spawn success spirals. Perhaps I should try posting on a schedule (Sunday/Tuesday/Thursday seems good, since weekends typically suck... (read more)
0CAE_Jones8yI think I'm going with something less structured, but will gear it more toward
rationality techniques, past and present, etc, and will post more often (the
three a week mentioned in the parent is what I'll be shooting for). (Previously,
I mostly just used livejournal as a dumping ground for particularly unhappy days,
hence all the stupid from before April 2013.) I was also encouraged by the idea
of web serial novels, and may or may not try to make it wind up looking like
such a thing, somehow.
I started browsing under Google Chrome for Android on a tablet recently. Since there's no tablet equivalent of mouse hovering, to see where a link points without opening it I have to press and hold on it. For off-site links in posts and comments, though, LW passes them through api.viglink.com, so I can't see the original URL through press-and-hold. Is there a way to turn that function off, or an Android-compatible browser plugin to reverse it?
Would you like to gain experience in non-profit operations by working for the Centre for Effective Altruism, a young and rapidly expanding charity based in Oxford? If so, we encourage you to apply to join our Graduate Volunteer Scheme as Finance and Fundraising Manager.
I've always felt that Atlas Shrugged was mostly an annoying ad nauseam attack on the same strawman over and over, but given the recent critique of Google, Amazon and others working to minimize their tax payments, I may have underestimated human idiocy:
the Public Accounts Committee, whose general verdict was that while companies weren't doing anything legally wrong when they shifted profits around the world to lower their total tax bill, the practice was "immoral".
On the other hand, these are people wearing their MP hats, they probably sing a... (read more)
3ahbwramc8yNo, I didn't delete it. It went down to -3 karma, which apparently hides it on
the discussion page. That's how I'm assuming it works anyway, given that it
reappeared as soon as it went back up to -2. Incidentally, it now seems to be
attracting random cold fusion "enthusiasts" from the greater internet, which was
not my intention.
7TimS8yThe hide / not hide can be set individually by clicking Preferences next to
one's name. I think you are seeing the result for the default settings - I
changed mine a while ago and don't remember what the default is.
Is there any way to see authors ranked by h-index? Google Scholar seems not to have that functionality, and online lists only exist for some topics...
Lewis, Dennett and Pinker, for instance, have nearly the same h-index.
Ed Witten's is much larger than Stephen Hawking's... etc.
If you know where to find listings of top h-indexes, please let me know!
0ahbwramc8yDepends, do you work at a university or research institution, or have access to
one? The scientific database Web of Science
[http://apps.webofknowledge.com/WOS_GeneralSearch_input.do?highlighted_tab=WOS&product=WOS&last_prod=WOS&SID=3A8ELD4J2McJ8Hf5K8j&search_mode=GeneralSearch]
has an author search function, and it can give you a full citation report for
any scientist in the database with a bunch of useful info, including h-index.
0diegocaleiro8yThat is not what I want; I want a list ordered by h-index. But thanks for
letting me know.
What I want is, say, a top 100 h-indexes of all time, or a top 100 within math,
within biology, within analytic philosophy, etc...
I remember seeing a post (or more than one?) where Yudkowsky exhorts smart people (e.g. hedge fund managers) to conquer mismanaged countries, but I can't find it by googling.
Then again, in the Muggle world, all of the extremely intelligent people Harry knew about from history had not become evil dictators or terrorists. The closest thing to that in the Muggle world was hedge-fund managers, and none of them had tried to take over so much as a third-world country, a point which put upper bounds on both their possible evil and possible goodness.
I heard a speaker claim that the frequency of names in the Gospels matches the list of most popular names in the time and place they are set, not the time and place they are accepted to have been written in. I hadn't heard this argument before and couldn't think of a refutation. Assuming his facts are accurate, is this a problem?
8gwern8yA problem for what? It's not much evidence for a historical-realist-literalist
viewpoint, because the usual mythicist or less-literal theories generally
believe that the original stories would have gotten started around the time they
are set in, and so could be expected to mimic the name distribution of the
setting, and keep the mimicking (while warping and evolving in many other ways)
until such time as they are compiled by a scribe and set down into a textual
form.
Few think that the Gospels were made up out of whole cloth in 300 AD and hence
having verisimilitude (names matching the 30s AD) is a surprising feature and evidence
against the whole-cloth theory. Generally, both believers and mythicists think
some stories and myths and sayings and parables got started in the 30s+ AD and
passed down and eventually written down, possibly generations later, at various
points like the 90s AD; what they disagree on is how much the oral transmission
and disciples affected things and what the origin was.
Toying around with the Kelly criterion, I get that the amount I should spend on insurance increases with my income, though my intuition says that the higher your income is, the less you should insure. Can someone less confused about the Kelly criterion provide some kind of calculation?
For anyone asking: I wondered, given income and savings rate, how much should be invested in bonds, stocks, etc., and how much should be put into insurance, e.g. health, fire, car, etc., from a purely monetary perspective.
4RolfAndreassen8yThe Kelly criterion returns a fraction of your bankroll; it follows that for any
(positive-expected-value) bet whatsoever, it will advise you to increase your
bet linearly in your income. Could this be the problem, or have you already
taken that into account?
That aside, I'm slightly confused about how you can use the Kelly criterion in
this case. Insurance must necessarily have negative expected value for the
buyer, or the insurer makes no profit. So Kelly should be advising you not to
buy any. How are you setting up the problem?
3Metus8yWell, that is exactly the point. It confuses me that the richer I am, the more
insurance I should buy, though the richer I am, the better I am able to absorb
the risk of not buying any insurance.
Yes and no. The insurer only makes a profit if the total cost of insurance is
still lower than the value to you of avoiding the uninsured case. What you pay
the insurer for is that the insurer takes on a risk you yourself are not able to
survive (financially), that is, catastrophically high costs of medical
procedures, liabilities or similar. It is easily possible for the average Joe to
foot the bill if he breaks a $5 mug, but it would be catastrophic for him if he
runs into an oil tank and has to foot the $10,000,000 bill to clean up the
environment. (This example is not made up but actually happened around here.)
It is here where my intuition says that the richer you are, the less insurance
you need. I could also argue that if it were the other way around, so that you
should insure more the richer you are, insurance couldn't exist, since the
insurer would then be the one who should buy insurance from the poor!
You can use the Kelly criterion in any case, either negative or positive
expected value. In the case of negative value it just tells you to take the
other side of the bet or to pay to avoid the bet. The latter is exactly what
insurance is.
I model insurance from the point of view of the buyer. In any given time frame,
I can avoid the insured event with probability q, saving the cost of insurance
b. Or I could lose and have to pay a with probability p = 1 - q. This is the
case of not buying insurance, though it is available. So if f = p/a - q/b is
negative I should insure; if f is positive, I should take the risk. This follows
my intuition insofar as catastrophic but improbable risks (very high a, very
low p) should be insured but not probable and cheap liabilities (high p, low a).
The trick is now that f is actually the fraction of my bankroll I have to
invest. So the richer
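For concreteness, here is a minimal sketch of the underlying Kelly-style comparison (maximise expected log wealth) rather than the f formula above. It assumes full coverage, and the loss, probability and premium figures are made up:

```python
import math

def expected_log_wealth(wealth, loss, p_loss, premium, insured):
    """Expected log wealth with or without a full-coverage policy."""
    if insured:
        return math.log(wealth - premium)          # pay the premium, the loss is covered
    q = 1 - p_loss
    return q * math.log(wealth) + p_loss * math.log(wealth - loss)

def should_insure(wealth, loss, p_loss, premium):
    return (expected_log_wealth(wealth, loss, p_loss, premium, True)
            > expected_log_wealth(wealth, loss, p_loss, premium, False))

# Same risk and the same negative-expected-value premium, different bankrolls:
loss, p_loss, premium = 240_000, 0.001, 400        # expected loss 240 < premium 400
for wealth in (250_000, 2_500_000, 250_000_000):
    verdict = "insure" if should_insure(wealth, loss, p_loss, premium) else "self-insure"
    print(f"wealth {wealth:>11,}: {verdict}")
```

With these made-up numbers the smallest bankroll buys the policy and the larger ones self-insure, which is the direction the intuitions in this thread point: the premium has negative expected value, but for a small bankroll the log-wealth hit from the uncovered loss outweighs it.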
6RichardKennaway8yThe Kelly formula assumes that you can bet any amount you like, but there are
only so many things worth insuring against. Once those are covered, there is no
opportunity to spend more, even if you're still below what the formula says.
In addition, what is a catastrophic loss, hence worth insuring against, varies
with wealth. If the risks that you actually face scale linearly with your
wealth, then so should your expenditure on insurance. But if, having ten times
the wealth, your taste is only to live in twice as expensive a house, drive
twice as expensive a car, etc., then this will not be the case. You will run out
of insurance opportunities even faster than when you were poorer. At the Jobs or
Gates level of wealth, there are essentially no insurable catastrophes. Anything
big enough to wipe out your fortune would also wipe out the insurance company.
0Metus8yYour reply provides part of the missing piece. Given that I am above some kind
of absolute measure of poverty, empirically having twice as much disposable
income won't translate into twice as much insurable assets. This limits the
portion of bankroll that can be spent on insurance. Also, Kelly assumes an
unlimited supply of bets, which is not that far from the truth: theoretically I
can ask the insurer to give twice the payout for twice the cost of insurance.
And still, your answer doesn't quite answer my original question. I asked:
given (monthly) income, savings rate and maybe wealth, what is an optimal
allocation between insurance and investments, e.g. bonds or equity? And even
assuming that I keep my current assets but double my income and wealth, Kelly
still says to buy insurance, though you note that anything Gates would want to
insure against would ruin the insurer, and my intuition still says that Gates
does not insure anything that I would, like a car, house or health costs.
3RolfAndreassen8yPerhaps the problem lies in the dichotomy "buy insurance" versus "do not buy".
It seems to me that you have, in fact, got three, not two, options:
a) Buy insurance from someone else
b) Spend the money
c) Save the money, in effect buying insurance from yourself.
I think option c) is showing up in your analysis as "do not buy insurance",
which should be reserved for b). You are no doubt correct that Gates does not
buy car insurance (unless perhaps he is forced to by law), but that does not
mean he is not insured. In effect he is acting as his own insurer, pocketing the
profit.
It seems to me, then, that Kelly is telling you that the richer you are, the
more you should set aside for emergencies, which seems to make sense; but it
cannot distinguish between self-insurance and buying an insurance policy.
2Metus8ySo you say that if Kelly says to buy insurance for $300 and the insurance costs
$100, I should not buy the policy but set aside $300 in case of emergency?
2RolfAndreassen8yInsurance whose payout is only three times the policy cost should rather be
classified as a scam. More generally, I think the strategy would be thus: Kelly
tells you to take some amount of money and spend it on insurance. If that amount
is enough to cover the payout of the insurance policy, then you should not pay
the premium; instead you should put the money in savings and enjoy the interest
payments. Only if the amount Kelly assigns to insurance is too small to cover
the payout should you consider paying the premium.
1gwern8yDepends on the cost of the risk, no? For a first generation XBox 360, paying
half the price for a new replacement is not obviously a bad deal...
2RolfAndreassen8yOk, in that case it's rather the Xbox that is the scam, but I stand by the use
of the word. If that sort of insurance is a good deal, you're being screwed over
somewhere. :)
0gwern8yI wouldn't say that; as always, the question is whether the good is +EV and the
best marginal use of your money. If the console costs $3 and insurance costs $1
and there's a >33% chance the console will break and you'll use the insurance,
given how much fun you can have with an Xbox is that really a scam? I wouldn't
say so.
In practice the insurance that you can buy is too limited and the odds too bad
to actually make it a good deal; I did some basic analysis of the issue at
http://www.gwern.net/Console%20Insurance
[http://www.gwern.net/Console%20Insurance] and you're better off self-insuring,
at least with post-second-generation Xbox 360s (the numbers look really bad for
the first-generation but hard sources are hard to come by).
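A back-of-the-envelope version of the $3/$1 example above (risk-neutral, ignoring deductibles and the time value of money):

```python
# gwern's illustrative numbers: $3 console, $1 warranty, >33% failure chance.
price, premium, p_break = 3.0, 1.0, 0.34
expected_payout = p_break * price            # 0.34 * 3 = 1.02
print("buy the warranty" if expected_payout > premium else "self-insure")
print("break-even failure probability:", premium / price)   # 1/3, matching the ">33%" above
```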
1RichardKennaway8yYou can ask, but your insurer will decline. You can only insure your house for
what it's worth.
0Metus8yMaybe in that case. In the case of life insurance I have practically unlimited
options. Whether I insure for $1M or $1k in case of my death is up to me.
0RichardKennaway8yTerm life insurance, which pays out nothing if you live beyond the term, is like
other insurance: it is only worth buying to protect against some specific risk
(e.g. mortgage payments) consequent on an early death.
Life assurance is different, in that the event insured against is certain to
happen. That is why it is called assurance: assuredly, you will die. As such, it
is primarily an investment, together with an insurance component that guarantees
a payout even if you die prematurely. As an investment, you can put as much as
you like into it, but if your heirs will not be financially stricken if you die
early, you -- or rather, they -- do not need the insurance part.
1aleksiL8yYou have it backwards. The bet you need to look at is the risk you're insuring
against, not the insurance transaction.
Every day you're betting that your house won't burn down today. You're very
likely to win but you're not making much of a profit when you do. What fraction
of your bankroll is your house worth, how likely is it to survive the day and
how much will you make when it does? That's what you need to apply the Kelly
criterion to.
0Metus8yHave you read my reply to RichardKennaway
[http://lesswrong.com/lw/h7r/open_thread_april_1530_2013/8tj3]? I explicitly
look at the case you mention.
Here's something I think should exist, but don't know if it does: a list of interesting mental / neurological disorders, referencing the subjects they have bearing on.
0Qiaochu_Yuan8yWhat do you mean by "interesting" and "subjects"?
4sixes_and_sevens8yBy "interesting" I mean "of interest to the Less Wrong community", specifically
because they provide insight into common failure modes of cognition, and by
extension, cognition itself.
By "subjects", I mean the topics of inquiry they provide insight into.
Here [http://lesswrong.com/lw/20/the_apologist_and_the_revolutionary/] is an
example that should hopefully pin down what I am talking about. I personally
have a mental catalogue of such disorders, and given their prevalence in
discussion around here I suspect a number of other people do as well. It would
be nice if we all had one big catalogue.
So, I have a primitive system for keeping track of my weight: I weigh myself daily and put the number in a log file. Every so often I make a plot. Here is the current one. I have been diligent about writing down the numbers, but I have not made the plot for at least a year, so while I was aware that I'm heavier now than during last summer, I had no idea of the visual impact of that weight loss and regain. My immediate thought: Now what the devil was I doing in May of 2012, and can I repeat it this year and avoid whatever happened in July-August?
0Qiaochu_Yuan8yIf you put the data into a Google Doc, you can get a plot that updates whenever
you update the log. That's what I've been doing.
3RolfAndreassen8yConvenient, but
a) I like having things in a text file that I can open with a flick of the
keyboard on the same system I'm working on anyway.
b) Making my own plot, I have full control of the formatting, plus I can do
things like fit trends over given periods, mark out particular dates, or
otherwise customise.
c) I dread the day when some overeager Google popup tells me that "It looks
like you're trying to control your weight! Would you like me to show you some
weight-loss products?"
(At least one of these items not intended seriously).
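For anyone keeping a similar plain-text log, a minimal sketch of the make-your-own-plot approach, assuming one "YYYY-MM-DD weight" pair per line (adjust the parsing and the highlighted date range to taste):

```python
from datetime import datetime
import matplotlib.pyplot as plt

# Assumed log format: one "YYYY-MM-DD  weight" pair per line, e.g. "2013-04-15  78.4".
dates, weights = [], []
with open("weight.log") as f:
    for line in f:
        if line.strip():
            day, value = line.split()[:2]
            dates.append(datetime.strptime(day, "%Y-%m-%d"))
            weights.append(float(value))

# Simple 7-day moving average as a trend line.
window, trend = 7, []
for i in range(len(weights)):
    chunk = weights[max(0, i - window + 1):i + 1]
    trend.append(sum(chunk) / len(chunk))

plt.plot(dates, weights, ".", alpha=0.4, label="daily")
plt.plot(dates, trend, "-", label=f"{window}-day average")
plt.axvspan(datetime(2012, 5, 1), datetime(2012, 6, 1), alpha=0.1, label="May 2012")
plt.ylabel("weight")
plt.legend()
plt.show()
```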
0DaFranker8yYou mean (a), right? 'cause "flick of the keyboard" is kind of funny, but setting
that up for a particular text file sounds awfully... unworkable.
(point (c) is not nearly as unrealistic as it might seem at first - they're
pretty much already there to some extent)
3RolfAndreassen8yOh, I absolutely believe that Google will tell you about weight-loss products if
they detect you tracking a number that looks reasonable for a human weight in
pounds, and that they have an algorithm capable of doing that. It's the
overeager popup with the near-quote of Clippy (the original Microsoft version,
not our friendly local Clippy who, while he might want to turn you into your
component atoms for reuse, is at least not unbearably upbeat about it) that's
unrealistic.
"Flick of the fingers on the keyboard", then: From writing here it is
Windows-Tab, Windows-1, C-x b, w-e-i-Tab, Enter. If the file wasn't already open
in emacs, replace C-x b with C-x C-f.
0DaFranker8yAh, yes, the mighty emacs.
I should get around to installing and using that someday. >.<
-1Zaine8yIf you can, buy a machine that measures your body fat percentage as well
(bioelectrical impedance) - it's a more meaningful statistic. If you're
measuring once per month, under consistent hydration and bowel volume, it could
be pretty convenient. The alternative, buying callipers with which you'd perform
a skinfold test, requires you train yourself in their proper use (perhaps
someone could teach you).
North Korea is threatening to start a nuclear war. The rest of the world seems to be dismissing this threat, claiming it's being done for domestic political reasons. It's true that North Korea has in the past made what have turned out to be false threats, and the North Korean leadership would almost certainly be made much worse off if they started an all out war.
But imagine that North Korea does launch a first strike nuclear attack, and later investigations reveal that the North Korean leadership truly believed that it was about to be attacked and so mad... (read more)
3gwern8yWhy do we care what they think, and can you name previous examples of this?
2James_Miller8yAs someone who studies lots of history while often thinking, "how could they
have been this stupid? Didn't they know what would happen?", I thought it useful
to frame the question this way.
Hitler's professed intentions were not taken seriously by many.
Hitler's professed intentions were not taken seriously by many.
Taken seriously... when? Back when he was a crazy failed artist imprisoned after a beer hall putsch, sure; up to the mid-1930s people took him seriously but were more interested in accommodationism. After he took Austria, I imagine pretty much everyone started taking him seriously, with Chamberlain conceding Czechoslovakia but then deciding to go to war if Poland was invaded (hardly a decision to make if you didn't take the possibilities seriously). Which it then was. And after that...
If we were to analogize North Korea to Hitler's career, we're not at the conquest of France, or Poland, or Czechoslovakia; we're at maybe breaking treaties & remilitarizing the Rhineland in 1936 (Un claiming to abandon the cease-fire and closing down Kaesŏng).
One thing that hopefully the future historians will notice is that when North Korea attacks, it doesn't give warnings. There were no warnings or buildups of tension or propaganda crescendos before bombing & hijacking & kidnapping of Korean airliners, the DMZ ax murders, the commando assault on the Blue House, the sinking of the Cheonan, kidnapping Korean or Japanese c... (read more)
3Qiaochu_Yuan8yCertainly the consequences of us being wrong are bad, but that isn't necessarily
enough to outweigh the presumably low prior probability that we're wrong. (I'm
not taking a stance on how low this probability is because I don't know enough
about the situation.) Presumably people also feel like there are game-theoretic
reasons not to respond to such threats.
1FiftyTwo8yThere is an issue of ability vs. intention: no matter whether the North Korean
leadership wants to destroy the US or South Korea, they don't have the ability
to do any major harm. The real fear is that the regime collapses and we're left
with a massive humanitarian crisis.
4drethelin8yPretty sure nuking Seoul is worse than the regime in NK collapsing. I think
annexation by either China or SK would be way better than the current system of
starvation in NK.
2NancyLebovitz8yAny thoughts about what a relatively soft landing for NK would look like?
1shminux8yMaybe a slow and controlled introduction of free enterprise, Deng
Xiaoping-style, while maintaining a tight grip on political freedoms, at least
until the economy recovers somewhat, could soften it. Incidentally, this is
apparently the direction Kim Jong-un is carefully steering towards. Admittedly,
slacklining seems like child's play compared to the perils he'd have to go
through to land softly.
1TimS8ySome here
[http://www.samefacts.com/2010/05/international-affairs/asia/tke-korean-reunification-taboo/]
. One of the most interesting parts of the essay was the aside claiming that NK
saber rattling is an intentional effort to distract S. Korean, US, and
Chinese attention from thinking about the mechanics of unification.
Edit: I'll just quote the interesting paragraph:
Emphasis mine.
0[anonymous]8yI was talking about this with a friend of mine, and it does seem like there is
no outcome that's not going to be hugely, hideously expensive. The costs of a
war are obviously high - even if they don't/can't use nukes, they could knock
Seoul right out of the global economy. But even if it's peaceful you'd have this
tidal wave of refugees into China and the South, and South Korea will be paying
reunification costs for at least the next decade or so.
You can sort of see why SK and China are willing to pay to keep the status quo,
and screw the starving millions.
3gwern8yFar longer than that. (West) Germany is apparently still effectively subsidizing
(former) East Germany, more than 2 decades after unification - and I have read
that West & East Germany were much closer in terms of development than North &
South Korea are now. For the total costs of reunification, 'trillions' is
probably the right order of magnitude to be looking at (even though it would
eventually more than pay for itself, never mind the moral dimension).
1[anonymous]8yI quite agree, on both parts. 25 million new consumers, catch-up growth, road
networks from Seoul to Beijing, navigable waters, less political risk premium,
etc.
It's a gloomy picture though. A coup seems unlikely (given the Kim-religion) and
it'll probably be 2050-70 until Jong-un dies. I've got two hopes: the recent
provocation is aimed at a domestic audience, and once he's proved himself he'll
pull a Burma; or the international community doesn't blink and resume aid,
forcing them into some sort of opening. Not very high hopes though.
4gwern8yTo expand: a massive burst of cheap labor, a peace dividend in winding down both
militaries (on top of the reduction in risk premium) such as closing down
military bases taking valuable Seoul-area real estate, and access to all NK's
mineral and natural resources.
0drethelin8yI had a little dream scenario in my head when Jong Il died that Jong Un would
have been secretly rebellious and reasonable and start implementing better
policy bit by bit, but that clearly didn't happen. My hope is that whoever
actually has their hands on the buttons in charge of the bombs and military is
more reasonable than Jong-Un, and that he gets taken out either by us or by
someone close to him who has a more accurate view of reality. At this point, the
international rhetoric would immediately start being toned down, and the de
facto government could start making announcements about the world changing its
mind or something to smooth over increased cooperation and peace and foreign
aid.
0Estarlio8yI think part of the problem is that we don't know whether they seem to be crazy
or not.
I want to change the stylesheets on a wordpress blog so the default font is Baskerville. I'm not too experienced with editing CSS files, anyone here good at that? I know how to manually make each paragraph Baskerville.
Are you a guy that wants more social interaction? Do you wish you could get complimented on your appearance?
Grow a beard! For some reason, it seems to be socially acceptable to compliment guys on a full, >1", neatly trimmed beard. I've gotten compliments on mine from both men and women, although requests to touch it come mostly from the latter (but aren't always sexual--women with no sexual attraction to men also like it). Getting the compliments pretty much invariably improves my mood; so I highly recommend it if you have the follicular support.
9TimS8yBecause of differences in local culture, please list what country you live in,
and perhaps what region.
0khafra8yI thought of listing "Southeast USA." However, a large metropolitan area in
Florida, where I live, is a fairly cosmopolitan blend of Western culture. Not
super-hip like a world-class city; and not provincial like 50 miles in any
direction.
And the compliments have come from diverse sources--women at clubs, women on
college campuses, military officers, people one socioeconomic class up and
down...
0CAE_Jones8yI've heard people get complimented on their beards quite a bit in the past 8
years or so. Central/south Arkansas, but the locations were specifically college
campuses (I think there was some beard-complimenting going around at ASMSA
(residential high school intended to be more college-like), but I could be
misremembering since the person I'm thinking of was active in other places I
went). It was recommended to me, I think more than once.
Howdy - comment to some person I haven't identified but will probably read this:
I appreciate the upvotes, but please only upvote my comments if you agree with them/like them/find them interesting/whatever. I'm trying to calibrate what the Less Wrong community wants/doesn't want, and straight-ticket upvoting messes with that calibration, which is already dealing with extremely noisy and conflicting data.
I came across this post on Quora and it strikes me as very plausible. The summary is essentially this: "Become the type of person who can achieve the things you want to achieve." What's your (considered) opinion?
Also, this seems relevant to the post I linked, but I'm not sure exactly how.
It's an old point, probably made by Robin Hanson, that if you want to donate to charity you should actually boast about it as much as possible to get your friends to do the same, rather than doing the status-preserving, humble saint act.
I think it might be worth making an app on facebook, say, that would allow people to boast anonymously. Let's say you're offered the chance to see if your friends are donating. Hopefully people bite - curiosity makes them accept (no obligation to do anything after all). But now they know that their friends are giving and the... (read more)
0drethelin8yeh, I'm unselfish enough to donate but selfish enough not to boast about it
0zslastman8yRight, but if you were given the opportunity for all your friends to know that
one of their friends had donated, you'd take it, right? No social cost to you.
-2MugaSofer8yEh, I'm selfish enough not to donate but unselfish enough to fill out an
anonymous form claiming I donate vast sums.
0zslastman8yCould be made verifiable by communicating with charities, to stop people doing
that. Good point though.
-2MugaSofer8yBut if you stop people lying, you'll reduce the peer pressure!
Hey, if it works for sex stuff ...
Help me get matrix multiplication? (Intuitively understand.) I've asked google and read through http://math.stackexchange.com/questions/31725/what-does-matrix-multiplication-actually-mean and similar pages&articles , and I get what linear functions mean. I've had it explained in terms of transformation matrices and I get how those work and I'm somewhat familiar with them from opengl. But it's always seemed like additional complexity that happens to work (and sometimes happens to work in a cute way) because it's this combination of multiplication and ad... (read more)
[This comment is no longer endorsed by its author]Reply
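For anyone with the same question: the usual intuition is that the product A·B is defined exactly so that it is the matrix of the composed map "apply B, then A", which is why the row-times-column rule looks the way it does. A quick numerical check with arbitrary example matrices:

```python
import numpy as np

# The matrix of "apply B, then A" is exactly A @ B.
rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(2, 3))
B = rng.integers(-5, 5, size=(3, 4))
x = rng.integers(-5, 5, size=(4,))

composed = A @ (B @ x)        # apply B to x, then A to the result
product = (A @ B) @ x         # apply the single matrix A @ B to x
assert np.array_equal(composed, product)

# Column j of A @ B is A applied to column j of B: the image of the j-th basis
# vector under the composed map. Entry (i, j) is row i of A dotted with column j of B.
print(A @ B)
```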
Just got bitten again by the silent -5 karma bug that happens when a post upthread from the one you're replying to gets downvoted below the threshold while you're writing your reply. If we can spare the developer resources, which I expect we can't, it would be nice if that didn't happen.
[This comment is no longer endorsed by its author]Reply
Overheard this on the bus: “If Christians are opposed to abortion because they think fetuses are people, how come they don't hold funerals for miscarriages?”
9RolfAndreassen8yI would suppose that some of them do. I would further suppose that it's not
actually a bad idea, if the pregnancy was reasonably advanced. The grief is, I
believe, rather similar to that experienced by someone losing a child that had
been brought to term. To the extent that funerals are a grief-coping mechanism,
people probably should hold them for miscarriages.
9gwern8yA google search for 'miscarriage funeral' suggests that people do, yes, but it's
sufficiently rare that one can write articles about it.
9NancyLebovitz8yThey seem to be somewhat more standard in Japan and Taiwan
[http://en.wikipedia.org/wiki/Mizuko_kuy%C5%8D].
5skeptical_lurker8yMy favorite argument about abortion is to point out that if the soul enters the
body at conception, as identical twins split after conception, this logically
implies that one twin has no soul, and thus is evil. The evil twin can usually
be identified by sporting a goatee.
That's based on the unstated but incorrect premise that souls are indivisible and only distributed in whole number amounts. Anyone who's spent time around identical twins can tell that they only have half a soul each.
4skeptical_lurker8yOf course - this explains identical twin telepathy
[http://tvtropes.org/pmwiki/pmwiki.php/Main/TwinTelepathy]!
5shminux8yThe quality of your self-congratulatory argument is just as bad as that of
arguments against evolution. Maybe souls twin, too. Or maybe fetus twinning is
caused by the need to accommodate a surplus soul. Or...
0skeptical_lurker8yI would like to point out that my argument is against 'the soul enters the body
at conception' not 'there exists a soul'. If souls twin, then this provides an
example where the soul enters the body after conception, proving my point.
There are plenty of beliefs in souls that do not require them entering the body
at conception. Some Hindus would say that the body, like all material objects,
is maya, or illusion, and only consciousness exists, and thus the question 'does
the soul enter the body at conception?' is meaningless.
I wouldn't say I agree with this point of view, but it's a lot more reasonable.
5Alicorn8yOr genetic chimeras, who are fused fraternal twin embryos - have they got two
souls?
3skeptical_lurker8yInteresting - I had forgotten about that. If one actually assigned a non-trivial
probability to the hypothesis that the soul enters the body at conception, one
could do the stats to see if chimeras are more likely to exhibit multiple
personality disorder!
-2MugaSofer8yMaybe one of them is dead? The one that didn't form the brain, I guess.
Although, if you can have a soul at conception, the brain must be unnecessary
... hmm, transplant patients ...
4Vaniver8yThis actually happens, sometimes. Perinatal hospice
[http://www.perinatalhospice.org/] is also a thing.
4Jayson_Virissimo8yThen again, there are plenty of jurisdictions that will charge someone with
double-murder [http://en.wikipedia.org/wiki/Double_murder] if they intentionally
kill a woman they know to be pregnant (and I'm sure at least some of these
jurisdictions allow abortion). Curious. Also, some do have funerals for their
miscarried babies, but I have no idea whether Christians do so at higher rates.
-2MugaSofer8yData point: I have, in fact, been to such a funeral. However, it wasn't
official.
A NASA physicist called Harold White suggests that if he tweaks the design of an 'Alcubierre Drive', extremely fast space travel is possible. It bends spacetime around itself, apparently. I don't know enough about physics to be able to call 'shenanigans' - what do other people think?
1kpreid8yIANAPhysicist, but what seem to me to be the main points:
* The Wikipedia article [http://en.wikipedia.org/wiki/Alcubierre_drive] seems
to be a fairly good description of problems. Briefly: The warp bubble is a
possible state of the universe solely considering the equations of general
relativity. We don't yet know whether it is compatible with the rest of
physics.
* The Alcubierre drive requires a region of space with negative energy density;
we don't know any way to produce this, but if there is it would involve some
currently-unknown form of matter (which is referred to as “exotic matter”,
which is just a catch-all label, not something specific).
* The work described in the article consists of two things:
1. Refining the possible state to have less extreme requirements while still
being FTL.
2. Conducting experiments which study an, ah, extremely sub-FTL state which
is similar in some sense to the warp bubble. This part seems to me to
have a high chance of being just more confirmation of what we already
know about general relativity.
2DaFranker8yI've read and been told that this is not entirely accurate; apparently, tiny
pockets with effectively this effect have been created in labs by abusing things
I don't understand.
However, it's apparently still under question whether these can be aggregated
and scaled up at all, or if they are isolated events that can only be made under
specific one-off circumstances.
All (90%) of rationalist women who would not otherwise have become rationalist women became so because of Harry Potter and the Methods of Rationality.
I'm not sure that's true. When I looked in the 2012 survey, I didn't see any striking gender disparity based on MoR: http://lesswrong.com/lw/fp5/2012_survey_results/8bms - something like 31% of the women found LW via MoR vs 21% of the men, but there are just not that many women in the survey...
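If anyone wants to check how much a gap like 31% vs 21% could be noise at that sample size, here is a quick sketch; the counts below are placeholders, not the actual survey figures:

```python
from scipy.stats import fisher_exact

# Placeholder counts -- substitute the real numbers from the survey results post.
women_via_mor, women_total = 31, 100
men_via_mor, men_total = 210, 1000

table = [[women_via_mor, women_total - women_via_mor],
         [men_via_mor, men_total - men_via_mor]]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio {odds_ratio:.2f}, p = {p_value:.3f}")
```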
-2diegocaleiro8yThat does not factor in the main point: "that would not otherwise have become
rationalist". There are loads of women out there on a certain road into
rationalism. Those don't matter. By definition, they will become rationalists
anyway.
There are large numbers who could, and we don't know how large, or how else they
could, except HPMOR
3TimS8yLeaving aside gwern's rudeness, he is right - if MoR doesn't entice more women
towards rationality than the average intervention, and your goal is to change
the current gender imbalance among LW-rationalists, then MoR is not a good
investment for your attention or time.
0gwern8yI'm sorry, I was just trying to interpret the claim in a non-stupidly
unverifiable and unprovable sense.
-2diegocaleiro8yIt is not a claim, it is an assumption that the reader ought to take for
granted, not verify. If I thought there were reliable large N data of a double
blind on the subject, I'd simply have linked the stats. As I know there are not,
I said something based on personal experience (as one should) and asked for
advice on how to improve the world, if the world turns out to correlate with my
experience of it.
Your response reminds me of Russell's joke about those who believe that "all
murderers have been caught, since all murderers we know have been caught"...
The point is to find attractors, not to reject the stats.
3gwern8yಠ_ಠ All (90%) of rationalist women who would not otherwise have become
rationalist women became so because of Baby Eaters in "Three Worlds Collide".
Thus, we need 50 Shades of Cooked Babies.
As well as good marketing designs of things that attract women into rationality.
Does this strike you as dubious? Well, it is not a claim, it is an assumption
that the reader ought to take for granted, not verify!
8ModusPonies8yFanfiction readers tend to be female. HPMoR has attracted mostly men. I'm
skeptical that your strategy will influence gender ratio.
Possible data point: are Luminosity fans predominantly female?
4falenas1088yWait, the question isn't whether HPMoR attracted more women than men; it's
whether the women-to-men ratio is higher than for other things that attract
people.
0latanius8yP(Luminosity fan | reads this comment) is probably not a good estimate... (count
me in with a "no" data point though :)) Also, what is the ratio of "Luminosity
fan because of Twilight" and "read it even though... Twilight, and liked it"
populations?
(with "read Twilight because of Luminosity" also a valid case.)
7shminux8yReminds me of this [http://www.smbc-comics.com/?id=1962]
3diegocaleiro8yWe can't afford not to do both
[http://www.kickstarter.com/projects/16029337/goldieblox-the-engineering-toy-for-girls]
0NancyLebovitz8yGoldieBlox
[http://www.amazon.com/Goldie-Blox-The-Spinning-Machine/dp/B00BCXU3PQ] got
funded at almost double its goal, has been produced, and is received with
enthusiasm by at least a fair number of little girls.
0diegocaleiro8yYes, the owner is making more than $300,000 per month in sales, or so claims Tim
Ferriss. Awesome, isn't it?
4mstevens8yI am hoping for someone to write Anita Blake, Rational Vampire Hunter.
Or the rationalist True Blood (it already has "True" in the title!)
5NancyLebovitz8yIs anyone working on rationalist stand-alone fiction?
Actually, what I meant was "Is anyone in this community working on rationalist
stand-alone fiction?".
4mstevens8yNot that I've seen. It'd be cool though. I think maybe you can see traces in
people like Peter Watts, but if you take HPMOR as the defining example, I can't
think of anything.
0NancyLebovitz8yLee Child (the Jack Reacher series) presents a good bit of clear thinking.
1TimS8yI've always found Stross (and to a lesser extent, Scalzi) to be fairly
rationalist - in the sense that I don't see anyone holding the idiot ball all
that frequently. People do stupid things, but they tend not to miss the obvious
ways of implementing their preferences.
2Document8yIsn't that a tautology?
Edit: missed this subthread
[http://lesswrong.com/lw/h7r/open_thread_april_1530_2013/8uiu?context=2] already
discussing that; sorry.
0[anonymous]8yFanfiction readers tend to be female. If this strategy were going to work, it
would have worked already.
This site is filled with examples, but this one is particularly noteworthy because they're completely unsurprised and, indeed, claim it as confirming evidence for their beliefs.
Is anyone here skilled at avoiding strawmanning and categorizing people's views? We could do with some tricks for this, kind of like the opposite of "feminist bingo".
4shminux8yBut we won't point fingers at anyone in particular, no.
Anyway, steelmanning seems like the standard approach here, if rarely used.
-2MugaSofer8yIs this intended as a criticism? I can't tell.
Steelmanning is great, but you can still end up "steelmanning" your stereotype
of someone's arguments, which is more what I'm worried about.
Do we know any evolutionary reason why hypnosis is a thing?
My current understanding of how hypnosis works is:
The overwhelming majority of our actions happen automatically, unconsciously, in response to triggers. Those can be external stimuli, or internal stimuli at the end of a trigger-response chain started by an external stimulus. Stimulus-response mapping are learnt through reinforcement. Examples: walking somewhere without thinking about your route (and sometimes arriving and noticing you intended to go someplace else), unthinkingly drinking from a cup in front of you. (Finding and exploiting those triggers is incredibly useful if you have executive function issues.)
In the waking state, responses are sometimes vetted consciously. This causes awareness of intent to act. Example: those studies where you can predict when someone will press a button before they can.
This "free won't" isn't very reliable. In particular, there's very little you can do about imagery ("Don't think of a purple elephant"). Examples: advertising, priming effects, conformity.
Conscious processes can't multitask much, so by focusing attention elsewhere, stimuli cause responses more reliably and less consciously. See any study on cognitive
Recantation by Gregory Cochran
The Linear Interpolation Fallacy: that if a lot of something is very bad, a little of it must be a little bad.
Most common in politics, where people describe the unpleasantness of Somalia or North Korea when arguing for more or less government regulation as if it had some kind of relevance. Silliest is when people try to argue over which of the two is worse. Establishing the silliness of this is easy. Somalia beats assimilation by the borg, so government power is bad. North Korea beats the Infinite Layers of the Abyss, so government power is good. Surely no universal principle of government can be changed by which contrived example I pick.
And, with a little thought, it seems clear that there is some intermediate amount of government that supports the most eudaemonia. Figuring out what that amount is and which side of it any given government lies on are important and hard questions. But looking at the extremes doesn't tell us anything about them.
(Treating "government power" as a scalar can be another fallacy, but I'll leave that for another post.)
What is the smartest group/cluster/sect/activity/clade/clan that is mostly composed of women? Related to the other thread on how to get more women into rationality besides HPMOR.
Ashkenazi dancing groups? Veterinarian College students? Linguistics students? Lilly Allen admirers?
No seriously, name guesses of really smart groups, identity labels etc... that you are nearly certain have more women than men.
Academic psychologists are mostly female. That would seem to be a pretty good target audience for LW. There are a few other academic areas that are mostly female now, but keep in mind that many academic fields are still mostly male even though most new undergraduates in the area are female.
There are lists online of academic specialty by average GRE scores. Averaging the verbal and quantitative scores, and then determining which majority-female discipline has the highest average would probably get you close to your answer.
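A rough sketch of that ranking procedure (the rows are placeholders, not real GRE or gender-ratio figures; fill them in from the published tables):

```python
# Placeholder data: (field, GRE verbal, GRE quantitative, fraction female).
fields = [
    ("placeholder field A", 156, 149, 0.72),
    ("placeholder field B", 153, 151, 0.58),
    ("placeholder field C", 159, 160, 0.35),
]

majority_female = [(name, (v + q) / 2) for name, v, q, frac in fields if frac > 0.5]
best = max(majority_female, key=lambda item: item[1])
print("highest average GRE among majority-female fields:", best)
```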
How much difference can nootropics make to one's studying performance / habits? The problems are with motivation (the impulse to learn useful stuff winning out over the impulse to waste your time) and concentration (not losing interest / closing the book as soon as the first equation appears -- or, to be more clear, as soon as I anticipate a difficult task laying ahead). There are no other factors (to my knowledge) that have a negative impact on my studying habits.
Or, to put it differently: if a defective motivational system is the only thing standing between me and success, can I turn into an uber-nerd that studies 10 h/day by popping the right pills?
EDIT: Never messed with my neurochemistry before. Not depressed, not hyperactive... not ruling out some ADD though. My sleep "schedule" is messed up beyond belief; in truth, I don't think I've even tried to sleep like a normal person since childhood. Externally imposed schedules always result in chronic sleep deprivation; I habitually push myself to stay awake till a later hour than I had gone to sleep at the previous night (/morning/afternoon) -- all of this meaning, I don't trust myself to further mess with my sleeping habits. Of what I've read so far, selegiline seems closest to the effects I'm looking for, but then again all I know about nootropics I've learned in the past 6 hours. I can't guarantee I can find most substances in my country.
... Bad or insufficient sleep can cause catastrophic levels of akrasia. Fix that; then, if you still have trouble, consider other options. Results should be apparent in days, so it is not a very hard experiment to carry out - set alarms on your phone or something for when to go to bed, and make your bedroom actually dark (this causes deeper sleep). You should get more done overall because you will waste less of your waking hours.
I could have easily written this exact same post two years ago. I used to be incredibly akratic. For example, at one point in high school I concluded that I was simply incapable of doing any schoolwork at home. I started a sort of anti-system where I would do all the homework and studying I could during my free period the day it was due, and simply not do the rest. This was my "solution" to procrastination.
Starting in January, however, I made a very conscious effort to combat akrasia in my life. I made slow, frustrating progress until about a week and a half ago where something "clicked" and now I spend probably 80% of my free time working on personal projects (and enjoying it). I know, I know, this could very easily be a temporary peak, but I have very high hopes for continuing to improve.
So, keep your head up, I... (read more)
I've been reading Atlas Shrugged and seem to have caught a case of Randianism. Can anyone recommend treatment?
My own deconversion was prompted by realizing that Rand sucked at psychology. Most of her ideas about how humans should think and behave fail repeatedly and embarrassingly as you try to apply them to your life and the lives of those around you. In this way, the disease gradually cures itself, and you eventually feel like a fool.
It might also help to find a more powerful thing to call yourself, such as Empiricist. Hold onto the principle that it is not virtuous to adhere to any dogma for its own sake. If part of Objectivism makes sense, and seems to work, great. Otherwise, hold nothing holy.
Michael Huemer explains why he isn't an Objectivist here and this blog is almost nothing but critiques of Rand's doctrines. Also, keep in mind that you are essentially asking for help engaging in motivated cognition. I'm not saying you shouldn't in this case, but don't forget that is what you are doing.
With that said, I enjoyed Atlas Shrugged. The idea that you shouldn't be ashamed for doing something awesome was (for me, at the time I read it) incredibly refreshing.
Request for practical advice on determining/discovering/deciding 'what you want.'
It's been done to me, too, and as I recall, it didn't do all that much good. The major good effect that I can remember is indirect-- it was something to be able to talk about the inside of my head with someone who found it all interesting and a possibly useful tool for untangling problems-- this helped pull me away from my usual feeling that there's something wrong/defective/shameful about a lot of it.
What did you get out of Connection Theory?
More Right
Edit: We reached our deadline on May 1st. Site is live.
Some of you may recall the previous announcement of the blog. I envisioned it as a site that discusses right-wing ideas: sanity-checking them, but not value-checking them, and steelmanning both the ideas themselves and the counterarguments. Most of the authors should be sympathetic to them, but a competent loyal opposition should be sought out. In sum, a kind of inversion of the LessWrong demographics (see Alternative Politics Question). Outreach will not be a priority; mutual aid on an epistemically tricky path of knowledge-seeking is.
The current core group working on making the site a reality consists of me, ErikM, Athrelon, KarmaKaiser, MichaelAnissimov, and Abudhabi. As we approach launch time, I've just sent out an email update to other contributors and to those who haven't yet contributed but have contacted me. If you are interested in the hard-to-discuss subjects or the politics and want to join as a coauthor or approved commenter (we are seeking more), send me a PM with an email address or comment here.
Article on an attempt to explain intelligence in thermodynamic terms.
I have a super dumb question.
So, if you allow me to divide by zero, I can derive a contradiction from the basic rules of arithmetic to the effect that any two numbers are equal. But there's a rule that I cannot divide by zero. In any other case, it seems like if I can derive a contradiction from basic operations of a system of, say, logic, then the logician is not allowed to say "Well...don't do that".
So there must be some other reason for the rule, 'don't divide by zero.' What is it?
We don't divide by zero because it's boring.
You can totally divide by zero, but the ring you get when you do that is the zero ring, and it only has one element. When you start with the integers and try dividing by nonzero stuff, you can say "you can't do that" or you can move out of the integers and into the rationals, into which the integers embed (or you can restrict yourself to only dividing by some nonzero things - that's called localization - which is also interesting). The difference between doing that and dividing by zero is that nothing embeds into the zero ring (except the zero ring). It's not that we can't study it, but that we don't want to.
Also, in the future, if you want to ask math questions, ask them on math.stackexchange.com (I've answered a version of this question there already, I think).
I mean if you localize a ring at zero you get the zero ring. Equivalently, the unique ring in which zero is invertible is the zero ring. (Some textbooks will tell you that you can't localize at zero. They are haters who don't like the zero ring for some reason.)
The theorems work out nicer if you don't. A field should be a ring with exactly two ideals (the zero ideal and the unit ideal), and the zero ring has only one ideal.
The rule isn't that you cannot divide by zero. You need a rule to allow you to divide by a number, and the rule happens to only allow you to divide by nonzero numbers.
There are also lots of things logicians can tell you that you're not allowed to do. For example, you might prove that (A or B) is equivalent to (A or C). You cannot proceed to cancel the A's to prove that B and C are equivalent, unless A happens to be false. This is completely analogous to going from AB = AC to B = C, which is only allowed when A is nonzero.
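A quick brute-force check of that analogy (my own illustration, not part of the parent comment): the only rows of the truth table where (A or B) = (A or C) fails to force B = C are exactly the rows where A is true.

```python
# Enumerate all truth assignments and print the "cancellation" counterexamples.
from itertools import product

for A, B, C in product([False, True], repeat=3):
    if (A or B) == (A or C) and B != C:
        print(f"counterexample: A={A}, B={B}, C={C}")
# Only rows with A=True are printed, matching "cancellation is allowed when A is false".
```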
For the real numbers, the equation a x = b has infinitely many solutions if a = b = 0, no solutions if a = 0 but b ≠ 0, and exactly one solution whenever a ≠ 0. Because there's nearly always exactly one solution, it's convenient to have a symbol for "the one solution to the equation a x = b", and that symbol is b / a; but you can't write that if a = 0, because then there isn't exactly one solution.
This is true of any field, almost by definition.
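Spelled out for a general field F (this is just a restatement of the parent comments, not an addition):

```latex
\text{For } a, b \in F:\qquad
\#\{\, x \in F : ax = b \,\} \;=\;
\begin{cases}
1 & \text{if } a \neq 0 \quad (\text{the unique solution } x = a^{-1}b \text{ is what } b/a \text{ denotes}),\\
0 & \text{if } a = 0,\ b \neq 0,\\
|F| & \text{if } a = 0,\ b = 0.
\end{cases}
```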
Today, I finally took a racial/sexual Implicit Association Test.
I had always more or less accepted that it was, if not perfect, at least a fairly meaningful indicator of some sort of bias in the testing population. Now, I'm rather less confident in that conclusion.
According to the test, in terms of positive associations, I rank black women above black men above white women above white men. I do not think this is accurate.
Obviously, this is an atypical result, but I believe that I received it due to confounding factors which prevented the test from being a... (read more)
Academic research tends to randomize everything that can be randomized, including the orders of the different IAT phases, so your first concern shouldn't be an issue in published research. (The keyword for this is "order effect.")
The IAT is one of several different measures of implicit attitudes which are used in research. When taking the IAT it is transparent to the participant what is being tested in each phase, so people could try harder on some trials than on others, but that is not the case with many of the other tests (many use subliminal priming, e.g. flashing either a black man's face or a white man's face on the screen for 20ms immediately before showing the stimulus that participants are instructed to respond to). The different measures tend to produce relatively similar results, which suggests that effort doesn't have that big of an effect (at least for most people). I suspect that this transparency is part of the reason why the IAT has caught on in popular culture - many people taking the test have the experience of it getting harder when they're doing a "mismatched" pairing; they don't need to rely solely on the website's report of their results.
The survey that you took is not part of the IAT. It is probably a separate, explicit measure of attitudes about race and/or gender (do any of these questions look familiar?).
Yet another fake number of sex partners self-reported:
Unless, of course, Canadian men tap the border.
Note: it basically evens out if you remove the boasters claiming 20+ partners.
I keep accidentally accumulating small trinkets as presents or souvenirs from well-meaning relatives! Can anyone suggest a compact unit of furniture for storing/displaying these objects? Preferably something that is scalable, minimizes dust and things falling off, and is reasonably easy to pack and unpack. Surely there's a lifehack for this!
Or maybe I would appreciate suggestions on how to deal with this social phenomenon in general! I find that I appreciate the individual objects when I receive them, but after that initial moment, they just turn into ... stuff.
spice racks!
In that case my further advice is: Cumin! Garlic! Pepper! Coriander!
The Girl Scouts currently offer a badge in the "science of happiness." I don't have a daughter, but if you do, perhaps you should look into the "science of style" badge as well.
So far, I haven't found a good way to compare organizations for the blind other than reading their wikipedia pages.
And, well, blindness organizations are frankly a political issue. Finding unbiased information on them is horribly difficult. Add to this my relatively weak Google-fu, and I haven't found much.
Conclusions:
I would like to recommend Nick Winter's book, The Motivation Hacker. From an announcement posted recently to the Minicamp Graduates mailing list:
"The book takes Luke's post about the Motivation Equation and tries to answer the question, how far can you go? How much motivation can you create with these hacks? (Turns out, a lot.) Using the example of eighteen missions I pursued over three months, it goes over in more detail how to get yourself to want to do what you always wanted to want to do."
(Disclaimer: I hadn't heard of Nick Winter until a fri... (read more)
Sex. I have a problem with it and would like to solve it. I get seriously anxious every time I'm about to have sex for the first time with a new partner. Subsequent times are great and awesome, but the first time leaves me very anxious, which makes me delay it as much as I can. This is not optimal. I don't know how to fix it; if anyone can help, I'd be very grateful.
--
I notice I'm confused: I always tried to keep a healthy life: sleeping many hours, no alcohol, no smoke. I've just been living 5 days in a different country with some friends. We sleep 7 hour... (read more)
Re: sex... is there anyone with whom you're already having great awesome sex who would be willing to help out with some desensitization? For example, adding role-playing "our first time" to your repertoire? If not, how would you feel about hiring sex workers for this purpose?
Re: lifestyle... list the novel factors (dancing 4 hrs/night, spending time with people rather than alone, sleeping <7 hrs/night, diet changes, etc.). When you're back home, identify the ones that are easy to introduce and experiment with introducing them, one at a time, for a week each. If you don't see a benefit, move on to the next one. If none of them work, try them all at once. If that doesn't work, move on to the difficult-to-introduce ones and repeat the process.
Personally, I would guess that several hours of sustained exercise and a different diet are the primary factors, but that's just a guess.
Spend enough time in a third (and possibly a fourth) place to see whether your mood improves.
In re anxiety: have you tried tracking exactly what you think before first time sex?
A few of you may know I have a blog called Greatplay.net, located at... surprise... http://www.greatplay.net. I've heard from some people that they discovered my site much later than they otherwise would have, because the name of the site didn't communicate what it was about and sounded unprofessional.
Why Greatplay.net in the first place? I picked it when I was 12, because it was (1) short, (2) pronounceable, (3) communicable without any risk of the other person misspelling it, and (4) did not communicate any information about what the site would be about, so I coul... (read more)
Is tickling a type of pain?
Dissolve the question.
One question I like to ask in response to questions like this is "what do you plan on doing with this information?" I've generally found that thinking consequentially is a good way to focus questions.
Does anyone have any real-world, object-level examples of degenerate cases?
I think degeneracy has some mileage in terms of explaining certain types of category error (e.g. "atheism is a religion"), but a lot of people just switch off when they start hearing a mathematical example. So far, the only example I've come up with is a platform pass at a train station, which is a degenerate case of a train ticket: it gets you onto the platform and lets you travel a certain number of stops (zero) down the train line.
Anyone want to propose any others?
There's a phenomenon I'd like more research done on. Specifically, the ability to sense solid objects nonvisually without direct physical contact.
I suspect that there might be some association with the human echolocation phenomenon. I've found evidence that there is definitely an audio component; I simulated it entirely by accident in a WAV file (it was a long time before I could listen to that all the way through, because of the strong sense that something was reaching for my head; System 2 had little say in the matter).
I've also done my own experiments involving... (read more)
In chapter 1 of his book Reasoning about Rational Agents, Michael Wooldridge identifies some of the reasons for trying to build rational AI agents in logic:
... (read more)
I started following DavidM's meditation technique. Is there anything that I should know? Any advice, or reasons why I should choose a different type of meditation?
Sometimes, success is the first step towards a specific kind of failure.
I heard that the most difficult moment for a company is the moment it starts making decent money. Until then, the partners shared a common dream and worked together against the rest of the world. Suddenly, the profit is getting close to one million, and each partner becomes aware that he made the most important contributions, while the others did less critical things which technically could be done by employees, so having to share the whole million with them equally is completely stupi... (read more)
It's much easier to signal rationality than to actually be rational.
Does anybody on here use at-home EEG monitors? (Something like http://www.emotiv.com/store/hardware/epoc-bci-eeg/developer-neuroheadset/ although that one looks rather expensive)
If you do, do you get any utility out of them?
Cal Newport and Scott H. Young are collaborating to start a deliberate practice course by email. Here's an excerpt from one of Cal's emails to inquiring people:
... (read more)
Errh.
On an uncharitable reading, this sounds like two wide-eyed broscientist prophets who found The One Right Way To Have A Successful Career (because by doing this their careers got successful, of course), and are now preaching The Good Word by running an uncontrolled, unblinded experiment for which you pay $100 just to be one of the lucky test subjects.
Note that this is from someone who had never heard of "Cal Newport" or "Scott H. Young" before now, or perhaps just doesn't recognize the names. The fact that they've sold popular books with "get better" in the description and are socially recognized as scientists is rather impressive, but it doesn't substantially raise my prior that this will work.
So if you've already tried enough of their advice that your updated belief that any given piece of advice from them will work is high enough and stable enough, this seems more than worth $100.
Just the possible monetary benefits probably outweigh the upfront costs if it works. Even without that, depending on the kind of career you're in, the VoI and RoI here might be quite high, so the course might need only a 30% to 50% probability of being useful for it to be worth the time and money.
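As a back-of-the-envelope check (all of these dollar figures are my assumptions, not anything the course advertises):

```python
# Break-even probability for the course; every number here is an assumption.
cost_course = 100          # advertised fee, in dollars
hours_spent = 10           # assumed time investment
value_per_hour = 20        # assumed opportunity cost of that time
value_if_it_works = 1000   # assumed career benefit if the advice pays off

total_cost = cost_course + hours_spent * value_per_hour
break_even_p = total_cost / value_if_it_works
print(f"Worth it if P(it works) > {break_even_p:.0%}")   # -> 30%
```

A larger assumed payoff or a cheaper time cost moves the threshold around, which is presumably how one lands in the 30% to 50% range mentioned above.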
I think that the open thread belongs in Discussion, not Main.
The Care and Feeding of Your Extrovert
In a few places — possibly here! — I've recently seen people refer to governments as being agents, in an economic or optimizing sense. But when I reflect on the idea that humans are only kinda-sorta agents, it seems obvious to me that organizations generally are not. (And governments are a sort of organization.)
People often refer to governments, political parties, charities, or corporations as having goals ... and even as having specific goals which are written down here in this constitution, party platform, or mission statement. They express dismay and ou... (read more)
[link] XKCD on saving time: http://xkcd.com/1205/. Though it will probably be mostly unseen, as the month is about to end.
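The comic's underlying arithmetic is easy to redo for your own routine tasks (the five-year horizon is the one the strip uses):

```python
# How long you can spend automating a routine task before it stops paying for itself.
def worth_spending(seconds_saved_per_use: float, uses_per_day: float, years: float = 5) -> float:
    """Return the break-even effort in hours over the given horizon."""
    return seconds_saved_per_use * uses_per_day * 365 * years / 3600

print(worth_spending(30, 1))   # shaving 30 seconds off a daily task justifies ~15 hours
```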
I encountered this cute summary of priming findings, thought you guys might like it, too:
... (read more)
How do you people pronounce MIRI? To rhyme with Siri?
Amanda Knox and evolutionary psychology - two of LessWrong's favorite topics, together in one news article / opinion piece.
The author explains the anti-Knox reaction as essentially a spandrel of an ev. psych reaction. Money quote:
I'm skeptical of the ev. psych because it... (read more)
I am aware that there have been several discussions about to what extent x-rationality translates into actual improved outcomes, at least outside of certain very hard problems like metaethics. It seems to me that one of the best ways to translate epistemic rationality directly into actual utility is through financial investment/speculation, so this would be a good subject for discussion. (I assume it probably has been discussed before, but I've read most of this website and cannot remember any in-depth thread about this, except for the mention of markets b... (read more)
I wonder if many people are putting off buying a bitcoin to hang onto, due more to trivial inconvenience than calculation of expected value. There's a bit of work involved in buying bitcoins, either getting your funds into mtgox or finding someone willing to accept paypal/other convenient internet money sources.
1) I am not at all convinced that investing in bitcoins is positive expected value, 2) they seem high-variance and I'm wary about increasing the variance of my money too much, 3) I am not a domain expert in finance and would strongly prefer to learn more about finance in general before making investment decisions of any kind, and 4) your initial comment rubbed me the wrong way because it took as a standing assumption that bitcoins are obviously a sensible investment and didn't take into account the possibility that this isn't a universally shared opinion. (Your initial follow-up comment read to me like "okay, then you're obviously an idiot," and that also rubbed me the wrong way.)
If the bitcoin situation is so clear to you, I would appreciate a Discussion post making the case for bitcoin investment in more detail.
Request for a textbook (or similar) follow-up to The Selfish Gene and/or The Moral Animal. Preferably with some math, but that's not necessary.
I could swear Zach Weiner reads this forum.
I have noticed an inconsistency between the number of comments actually present on a post and the number declared at the beginning of its comments section, the former often being one less than the latter.
For example, of the seven discussion posts starting at "Pascal's wager" and working back, the "Pascal's wager" post at the moment has 10 comments and says there are 10, but the previous six all show a count one more than the actual number of visible comments. Two of them say there is 1 comment, yet there are no comments and the text ... (read more)
This has most likely been mentioned in various places, but would it be possible for new meetup posts (made via the "Add new meetup" button) to show up only under "Nearest Meetups" and not in Discussion? Also, renaming the link to "Upcoming Meetups" to match the title on that page, and listing more than two, perhaps as a rolling schedule of the next 7 days.
Is there a nice way of being notified about new comments on posts I found interesting / commented on / etc? I know there is a "comments" RSS feed, but it's hard to filter out interesting stuff from there.
... or a "number of green posts" indicator near the post titles when listing them? (I know it's a) takes someone to code it b) my gut feeling is that it would take a little more than usual resources, but maybe someone knows of an easier way of the same effect.)
Is there anyone going to the April CFAR Workshop who could pick me up from the airport? I'll be arriving at San Francisco International at 5 PM if anyone can help me get out there. (I think I have a ride back to the airport after the workshop covered, but if I don't, I'll ask about that separately.)
Hey; we (CFAR) are actually going to be running shuttles from SFO Thursday evening, since the public transit time / drive time ratio is so high for the April venue. So we'll be happy to come pick you up, assuming you're willing to hang out at the airport for up to ~45 min after you get in. Feel free to ping me over email if you want to confirm details.
Who is the best pro-feminist blogger still active? In the past I enjoyed reading Ozy Frantz, Clarisse Thorn, Julia Wise and Yvain, but none of them post regularly anymore. Who's left?
Is there a secret URL to display the oldest LW posts?
I wrote something on Facebook recently that may interest people, so I'll cross-post it here.
Cem Sertoglu of Earlybird Venture Capital asked me: "Will traders be able to look at their algorithms, and adjust them to prevent what happened yesterday from recurring?"
My reply was:
... (read more)
Considering making my livejournal into something resembling the rationality diaries (I'd keep the horrible rambling/stupid posts for honesty/archival purposes). I can't tell if this is a good idea or not; the probability that it'd end like everything else I do (quietly stewing where only I bother going) seems absurdly high. On the other hand, trying to draw this kind of attention to it and adding structure would probably help spawn success spirals. Perhaps I should try posting on a schedule (Sunday/Tuesday/Thursday seems good, since weekends typically suck... (read more)
I started browsing under Google Chrome for Android on a tablet recently. Since there's no tablet equivalent of mouse hovering, to see where a link points without opening it I have to press and hold on it. For off-site links in posts and comments, though, LW passes them through api.viglink.com, so I can't see the original URL through press-and-hold. Is there a way to turn that function off, or an Android-compatible browser plugin to reverse it?
(Edit: Posted and discussed here.)
Some folks here might want to know that the Center for Effective Altruism is recruiting for a Finance & Fundraising Manager:
I've always felt that Atlas Shrugged was mostly an annoying, ad nauseam attack on the same strawman over and over, but given the recent critique of Google, Amazon, and others for working to minimize their tax payments, I may have underestimated human idiocy:
On the other hand, these are people wearing their MP hats, they probably sing a... (read more)
What happened to that article on cold fusion? Did the author delete it?
Is there any way to see authors ranked by h-index? Google Scholar seems not to have that functionality, and online lists only exist for some topics...
Lewis, Dennett, and Pinker, for instance, have nearly the same h-index.
Ed Witten's is much larger than Stephen Hawking's, etc.
If you know where to find listings of top h-indexes, please let me know!
Art Carden, guest blogger at EconLog, advocates Bayes' theorem as a strategy for maintaining serenity here.
I remember seeing a post (or more than one?) where Yudkowsky exhorts smart people (e.g. hedge fund managers) to conquer mismanaged countries, but I can't find it by googling.
Does anyone have a link?
—HPMoR Chapter 86
If you had something more specific in mind, I can't recall it offhand.
I heard a speaker claim that the frequency of names in the Gospels matches the list of most popular names in the time and place they are set, not the time and place they are accepted to have been written in. I hadn't heard this argument before and couldn't think of a refutation. Assuming his facts are accurate, is this a problem?
Toying around with the Kelly criterion, I get that the amount I should spend on insurance increases with my income, though my intuition says that the higher your income, the less you should insure. Can someone less confused about the Kelly criterion provide some kind of calculation?
For anyone asking: I wondered, given an income and savings rate, how much should be invested in bonds, stocks, etc., and how much should be put into insurance (e.g. health, fire, car), from a purely monetary perspective.
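Not a full answer, but here is a minimal sketch (with made-up numbers) of the log-utility version of the decision: buy the policy exactly when expected log wealth is higher with it than without. With a fixed loss and premium, it reproduces the intuition that a bigger bankroll should self-insure more; if your calculation let the insured loss scale with income, that may be where the counterintuitive result came from.

```python
# Log-utility (Kelly-style) insurance decision; all numbers are illustrative assumptions.
import math

def expected_log_wealth(wealth, loss, p_loss, premium, insured):
    """Expected log of end-of-period wealth, with or without full coverage."""
    if insured:
        return math.log(wealth - premium)   # pay the premium, never suffer the loss
    return (1 - p_loss) * math.log(wealth) + p_loss * math.log(wealth - loss)

loss, p_loss, premium = 20_000, 0.01, 300   # expected loss is only $200; premium is $300
for wealth in (25_000, 50_000, 200_000):
    insure = expected_log_wealth(wealth, loss, p_loss, premium, True)
    skip = expected_log_wealth(wealth, loss, p_loss, premium, False)
    print(f"bankroll {wealth:>7}: {'insure' if insure > skip else 'self-insure'}")
# With these numbers, only the smallest bankroll should pay the above-fair premium.
```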
Jimmy Kimmel's show has no trouble finding people on the street to give a confident answer to a nonsensical question. (Happily, not all the interviewees do this.)
Here's something I think should exist, but don't know if it does: a list of interesting mental / neurological disorders, referencing the subjects they have bearing on.
Does this exist already?
So, I have a primitive system for keeping track of my weight: I weigh myself daily and put the number in a log file. Every so often I make a plot. Here is the current one. I have been diligent about writing down the numbers, but I have not made the plot for at least a year, so while I was aware that I'm heavier now than during last summer, I had no idea of the visual impact of that weight loss and regain. My immediate thought: Now what the devil was I doing in May of 2012, and can I repeat it this year and avoid whatever happened in July-August?
Hmm... com... (read more)
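For anyone who wants the plot to happen automatically instead of once a year, a minimal sketch (assuming a plain "YYYY-MM-DD weight" line format, which may not match the original log file):

```python
# Read a daily weight log and plot it; assumes one "YYYY-MM-DD weight" entry per line.
import datetime as dt
import matplotlib.pyplot as plt

dates, weights = [], []
with open("weight.log") as f:
    for line in f:
        day, value = line.split()
        dates.append(dt.date.fromisoformat(day))
        weights.append(float(value))

plt.plot(dates, weights)
plt.xlabel("date")
plt.ylabel("weight")
plt.title("Daily weigh-ins")
plt.savefig("weight.png")   # run from cron or Task Scheduler to get a fresh plot daily
```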
North Korea is threatening to start a nuclear war. The rest of the world seems to be dismissing this threat, claiming it's being done for domestic political reasons. It's true that North Korea has in the past made what have turned out to be false threats, and the North Korean leadership would almost certainly be made much worse off if they started an all out war.
But imagine that North Korea does launch a first strike nuclear attack, and later investigations reveal that the North Korean leadership truly believed that it was about to be attacked and so mad... (read more)
Taken seriously... when? Back when he was a crazy failed artist imprisoned after a beer hall putsch, sure; up to the mid-1930s people took him seriously but were more interested in accommodationism. After he took Austria, I imagine pretty much everyone started taking him seriously, with Chamberlain conceding Czechoslovakia but then deciding to go to war if Poland was invaded (hardly a decision to make if you didn't take the possibilities seriously). Which it then was. And after that...
If we were to analogize North Korea to Hitler's career, we're not at the conquest of France, or Poland, or Czechoslovakia; we're at maybe breaking treaties & remilitarizing the Rhineland in 1936 (Un claiming to abandon the cease-fire and closing down Kaesŏng).
One thing that hopefully the future historians will notice is that when North Korea attacks, it doesn't give warnings. There were no warnings or buildups of tension or propaganda crescendos before bombing & hijacking & kidnapping of Korean airliners, the DMZ ax murders, the commando assault on the Blue House, the sinking of the Cheonan, kidnapping Korean or Japanese c... (read more)
I want to change the stylesheet on a WordPress blog so the default font is Baskerville. I'm not too experienced with editing CSS files; is anyone here good at that? I know how to manually make each paragraph Baskerville.
Are you a guy that wants more social interaction? Do you wish you could get complimented on your appearance?
Grow a beard! For some reason, it seems to be socially acceptable to compliment guys on a full, >1", neatly trimmed beard. I've gotten compliments on mine from both men and women, although requests to touch it come mostly from the latter (but aren't always sexual--women with no sexual attraction to men also like it). Getting the compliments pretty much invariably improves my mood; so I highly recommend it if you have the follicular support.
How feasible is it for a private individual in a western developed country to conduct or commission their own brain scan?
Howdy - a comment to some person I haven't identified but who will probably read this:
I appreciate the upvotes, but please only upvote my comments if you agree with them/like them/find them interesting/whatever. I'm trying to calibrate what the Less Wrong community wants/doesn't want, and straight-ticket upvoting messes with that calibration, which is already dealing with extremely noisy and conflicting data.
Now that school's out for the summer, I have an additional 40 hours per week or so of free time.
How would you use that?
Anybody on here ever sold eggs (female human gametes)? Experiences? Advice on how best to do it?
I came across this post on Quora and it strikes me as very plausible. The summary is essentially this: "Become the type of person who can achieve the things you want to achieve." What's your (considered) opinion?
Also, this seems relevant to the post I linked, but I'm not sure exactly how.
It's an old point, probably made by Robin Hanson, that if you want to donate to charity you should actually boast about it as much as possible to get your friends to do the same, rather than doing the status-preserving, humble saint act.
I think it might be worth making an app on Facebook, say, that would allow people to boast anonymously. Let's say you're offered the chance to see whether your friends are donating. Hopefully people bite - curiosity makes them accept (there's no obligation to do anything, after all). But now they know that their friends are giving and the... (read more)
I found this blog post on the commensurate wage fallacy, and am wondering if there are any glaring errors (you know, besides the lack of citations).
Help me get matrix multiplication? (Intuitively understand it, that is.) I've asked Google and read through http://math.stackexchange.com/questions/31725/what-does-matrix-multiplication-actually-mean and similar pages & articles, and I get what linear functions mean. I've had it explained in terms of transformation matrices, and I get how those work and I'm somewhat familiar with them from OpenGL. But it's always seemed like additional complexity that happens to work (and sometimes happens to work in a cute way) because it's this combination of multiplication and ad... (read more)
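One framing that helped me, sketched in numpy (my example, not from the linked thread): the row-times-column rule is exactly what you need so that the product matrix represents "apply one linear map, then the other."

```python
# Matrix multiplication as composition of linear maps on R^2.
import numpy as np

rotate = np.array([[0.0, -1.0],
                   [1.0,  0.0]])   # rotate 90 degrees counterclockwise
scale = np.array([[2.0, 0.0],
                  [0.0, 3.0]])     # stretch x by 2, y by 3

v = np.array([1.0, 1.0])

step_by_step = rotate @ (scale @ v)   # scale first, then rotate
composed = (rotate @ scale) @ v       # one matrix that does both at once

print(step_by_step, composed)         # both print [-3.  2.]
```

The columns of `rotate @ scale` are just where the composed map sends the basis vectors, which is where the "multiply rows by columns" formula comes from.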
Are there any psychometric or aptitude tests that are worth taking as an adult?
Just got bitten again by the silent -5 karma bug that happens when a post upthread from the one you're replying to gets downvoted below the threshold while you're writing your reply. If we can spare the developer resources, which I expect we can't, it would be nice if that didn't happen.
Overheard this on the bus: “If Christians are opposed to abortion because they think fetuses are people, how come they don't hold funerals for miscarriages?”
That's based on the unstated but incorrect premise that souls are indivisible and only distributed in whole number amounts. Anyone who's spent time around identical twins can tell that they only have half a soul each.
This article is fascinating: http://io9.com/5963263/how-nasa-will-build-its-very-first-warp-drive
A NASA physicist called Harold White suggests that if he tweaks the design of an 'Alcubierre Drive', extremely fast space travel is possible. It bends spacetime around itself, apparently. I don't know enough about physics to be able to call 'shenanigans' - what do other people think?
All (90%) of rationalist women who would not otherwise have become rationalist women became so because of Harry Potter and the Methods of Rationality.
Thus, we need 50 shades of Grey Matter.
We also need good marketing: designs for things that attract women into rationality.
Which are the bestselling books if you only consider women? What about the best movies for women?
I'm not sure that's true. When I looked in the 2012 survey, I didn't see any striking gender disparity based on MoR: http://lesswrong.com/lw/fp5/2012_survey_results/8bms - something like 31% of the women found LW via MoR vs 21% of the men, but there are just not that many women in the survey...
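To make the "not that many women" point concrete, here is a rough sketch with invented sample sizes (the real survey counts would need to be substituted): the confidence interval on a 31%-vs-21% gap is wide when one group is small.

```python
# Approximate 95% CI for the difference between two proportions.
# The counts below are placeholders, NOT the actual survey numbers.
import math

women_mor, women_n = 31, 100     # assumed: 31% of ~100 women found LW via MoR
men_mor, men_n = 210, 1000       # assumed: 21% of ~1000 men

p_w, p_m = women_mor / women_n, men_mor / men_n
se = math.sqrt(p_w * (1 - p_w) / women_n + p_m * (1 - p_m) / men_n)
diff = p_w - p_m
print(f"difference {diff:.0%}, 95% CI roughly "
      f"({diff - 1.96 * se:.0%}, {diff + 1.96 * se:.0%})")
# With these made-up counts the interval runs from about 1% to 19%: a noisy estimate.
```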
Your Strength As A Rationalist [LINK]
This site is filled with examples, but this one is particularly noteworthy because they're completely unsurprised and, indeed, claim it as confirming evidence for their beliefs.
Is anyone here skilled at avoiding strawmanning and categorizing people's views? We could do with some tricks for this, kind of like the opposite of "feminist bingo".
This is a funny video for people familiar with the r/atheism community.