When I come to LW, I click to the Discussion almost instinctively. I'd estimate it has been four weeks since I've looked at Main. I sometimes read new Slate Star Codex posts (super good stuff, if you are unfamiliar) from LW's sidebar. I sometimes notice interesting-sounding 'Recent Comments' and click on them.
My initial thought is that I don't feel compelled to read Main posts because they are the LW-approved ideas, and I'm not super interested in listening to a bunch of people agreeing with one another. Maybe that is a caricature, not sure.
Anyone else Discussion-centric in their LW use?
Also, the Meetup stuff is annoying noise. I'm very sympathetic if placing it among posts helps to drive attendance. By all means, continue if it helps your causes. But it feels spammy to me.
1Brillyant7yYes, likely. If you mean the discussion is more varied and interesting.
8CellBioGuy7y* raises hand *
Partially because it's much more active over here.
0Brillyant7yIt seems to me that it is likely the result of many people feeling like me
rather than the cause of them feeling that way.
0Vaniver7yActivity seems like a positive feedback loop*- because there are more comments
in discussion, people spend more time and comment more in discussion, and their
comments in discussion are more likely to get responded to, which brings them
back to discussion, and so on.
*That is, something that is both a cause and a result.
0Brillyant7ySure.
But why did I evolve to stop going to Main and go exclusively to Discussion?
That behavior might be reinforced by the lack of activity, but the leading cause
(for me, in my best estimation) was that I came to see the content as
overwhelmingly LW-approved stuff.
0Vaniver7yWhen I read blacktrance's comment
[http://lesswrong.com/lw/jr8/open_thread_february_25_march_3/amrq], I see
specific topics- AI, math, health, productivity- that they're not interested in,
that Main focuses on. When I read your comments, it sounds like you're not as
sensitive to topics as to styles of discussion, where you're more interested in
disagreements than in agreements. Am I reading that difference correctly?
0Brillyant7ySure, I suppose. I generally use forum sites for discussion. I'm not too
terribly interested in reading LW "publications"; I'm more interested in engaging
in discussion and reading commentary in regard to issues pertaining to
rationality, etc.
The distinction between Main and Discussion articles has never made much sense
to me. It seems to me to be some blend of perceived quality, relation to
rationality (as LW defines it) and other LW topics of interest, group politics,
EY mandate, etc. Don't really care all that much...just that it was interesting
that I ended up in Discussion almost exclusively.
I'd agree the topics in Main seem less interesting to me, too, now that I
think about it.
6blacktrance7yI'm more likely to find discussion topics and comments in my areas of interest,
while Main seems to be mostly about AI, math, health, and productivity, none of
which are particularly interesting for me.
1[anonymous]7yI mainly skim http://lesswrong.com/topcomments/?t=day and
http://lesswrong.com/r/discussion/topcomments/?t=day, then when I see
something interesting I look at where it comes from.
0[anonymous]7yI generally find Main posts uninteresting, or overlong and based on some
incorrect premise or other.
If one could improve how people are matched, it would bring about a huge amount of utility for the entire world.
People would be happier, they would be more productive, and there would be less divorce-related waste. Being in a happy couple also means you are less distracted by conflict in the house, which leaves people better able to develop themselves and achieve their personal goals. You can keep adding to the direct benefits of being in a good pairing versus a bad pairing.
But it doesn't stop there. If we accept that better matched parents raise their children better, then you are looking at a huge improvement in the psychological health of the next generation of humans. And well-raised humans are more likely to match better with each other...
Under this light, it strikes me as vastly suboptimal that people today will get married to the best option available in their immediate environment when they reach the right age.
The cutting-edge online dating sites base their suggestions on a very limited list of questions. But each of us outputs huge amounts of data, many of them available through APIs on the web. Favourite books, movies, sleep patterns, browsing history, work hi... (read more)
There seem to be perverse incentives in the dating industry. Most obviously: if you successfully create a forever-happy couple, you have lost your customers; but if you make people date many promising-looking-yet-disappointing partners, they will keep returning to your site.
Actually, maybe your customers are completely hypocritical about their goals: maybe "finding true love" is their official goal, but what they really want is plausible deniability for fucking dozens of attractive strangers while pretending to search for the perfect soulmate. You could create a website which displays the best one or two matches, instead of hundreds of recommendations, and despite having a higher success rate for people who try it, most people will probably be unimpressed and give you some bullshit excuses if you ask them.
Also, if people are delusional about their "sexual market value", you probably won't make money by trying to fix their delusions. They will be offended by the types of "ordinary" people you offer them as their best matches, when the competing website offers them Prince Charming (whose real goal is to maximize his number of one night stands) or Princ... (read more)
7Viliam_Bur7yThat's a nice thing to have; I am not judging anyone. Just thinking how that
would influence the dating website algorithm, marketing, and the utility this
whole project would create.
If some people say they want X but they actually want Y... however other people
say they want X and they mean it... and the algorithm matches them together
because the other characteristics match, in the end they may still be
unsatisfied (if one of these groups is a small minority, they will be
disappointed repeatedly). This could possibly be fixed by an algorithm smart
enough that it could somehow detect which option it is, and only match people
who want the same thing (whichever of X or Y it is).
If there are many people who say they want X but really want Y, how will you
advertise the website? Probably by playing along and describing your website
mostly as a site for X, but providing obvious hints that Y is also possible and
frequent there. Alternatively, by describing your website as a site for X, but
writing "independent" blog articles and comments describing how well it actually
works for Y. (What is the chance that this actually is what dating sites are
already doing, and the only complaining people are the nerds who don't
understand the real rules?)
Maybe there is a market in explicitly supporting open relationships. (Especially
if you start in the Bay Area.) By removing some hypocrisy, the matching could be
made more efficient -- you could ask questions which you otherwise couldn't,
e.g. "how many % of your time would you prefer to spend with this partner?".
9Alexandros7yI wouldn't jump to malice so fast when incompetence suffices as an explanation.
Nobody has actually done the proper research. The current sites have found a
local maximum and are happy to extract value there. Google got huge by getting
people off the site fast when everyone else was building portals.
You will of course get lots of delusionals, and lots of people damaged enough
that they are unmatchable anyway. You can't help everybody. But also the point
is to improve the result they would otherwise have had. Delusional people do end
up finding a match in general, so you just have to improve that to have a win.
Perhaps you can fix the incentive by getting paid for the duration of the
resulting relationship. (and that has issues by itself, but that's a long
conversation)
I don't think the philanthropic angle will help, though having altruistic
investors who aren't looking for immediate maximisation of investment is
probably a must, as a lot of this is pure research.
2Randy_M7yI don't think he was jumping to malice, rather delusion or bias.
1Alexandros7yI meant malice/incompetence on the part of the dating sites.
7ChristianKl7yI think that's the business model of eharmony and they seem to be doing well.
1Scott Garrabrant7yI absolutely agree, but I am not sure that anyone was even considering this as a
way to make money.
Unfortunately, for all the same reasons we cannot make money, we cannot get
people to sign up for the site in the first place.
Two proposed solutions for this:
1) Something like I suggested before that matches people without them signing up
somehow.
2) A bait and switch, where a site gets popular using the same tactics as other
dating sites, and then switches to something better for them.
Neither of these solutions seems likely to work at all.
I wonder to what extent the problems you describe (divorces, conflict, etc.) are caused mainly by poor matching of the people having the problems, and to what extent they are caused by the people having poor relationship (or other) skills, largely regardless of how well matched they are with their partner? For example, it could be that someone is only a little less likely to have dramatic arguments with their "ideal match" than with a random partner -- they just happen to be an argumentative person or haven't figured out better ways of resolving disagreements.
What makes you think these marriages are successful? Low divorce rates are not good evidence in places where divorce is often impractical.
Three main points in favor of arranged marriages that I'm aware of:
The marriages are generally arranged by older women, who are likely better at finding a long-term match than young people. (Consider this the equivalent of dating people based on okCupid match rating, say, instead of hotornot rating.)
The expectations people have from marriage are much more open and agreed upon; like Prismattic points out, they may have a marriage that a Westerner would want to get a divorce in, but be satisfied. It seems to me that this is because of increased realism in expectations (i.e. the Westerner thinks the divorce will be more helpful than it actually will, or is overrating divorce compared to other options), but this is hard to be quantitative about.
To elaborate on the expectations, in arranged marriages it is clear that a healthy relationship is something you have to build and actively maintain, whereas in love marriages sometimes people have the impression that the healthy relationship appears and sustains itself by magic- and so when they put no
-1Eugine_Nier7yI remember seeing studies that attempted to measure happiness.
4Lumifer7yLinks? I am also quite suspicious of measuring happiness -- by one measure
Bhutan is the happiest country in the world and, um, I have my doubts.
0drethelin7yWhy are you even asking for links to studies if you admit you don't care what
studies say?
8Lumifer7yI have a prior that the studies are suspect. But that prior can be updated by
evidence.
5Prismattic7yI'm not sure this is correct. That is to say, the empirical point that divorce
is much less common in arranged marriage cultures is obviously true. But
a) I think there is some correlation between the prevalence of arranged marriage and
the stigma associated with divorce, meaning that not getting divorced does not
necessarily equal happy marriage.
b) The bar for success in 20th-21st century western marriages is set really
high. It's not just an economic arrangement; people want a best friend and a
passionate lover and maybe several other things rolled into one. When people in
traditional cultures say that their marriages are "happy," they may well mean
something much less than what affluent westerners would consider satisfactory.
2Jayson_Virissimo7yWhy does it suggest that rather than that the arrangers are better at finding
the "right match" than the persons to be married?
1Alexandros7yMy instinct on this is driven by having been in bad and good relationships, and
reflecting on myself in those situations. It ain't much, but it's what I've got
to work with. Yes, some people are unmatchable, or shouldn't be matched. But
somewhere between "is in high demand and has good judgement, can easily find
great matches" and "is unmatchable and should be kept away from others", there's
a lot of people that can be matched better. Or that's the hypothesis.
2Emily7ySeems reasonable, although I'd still wonder just how much difference improving
the match would make even for the majority of middle-ground people. It sounded
in the grandparent post (first and fourth paragraphs particularly) as though you were
treating the notion that it would be "a lot" as a premise rather than a
hypothesis.
2Alexandros7yWell, it's more than a hypothesis, it's a goal. If it doesn't work, then it
doesn't, but if it does, it's pretty high impact. (though not existential-risk
avoidance high, in and of itself).
Finding a good match has made a big subjective difference for me, and there's a
case it's made a big objective difference (but then again, I'd say that) and I
had to move countries to find that person.
Yeah, maybe the original phrasing is too strong (blame the entrepreneur in pitch
mode) but the 6th paragraph does say that it's an off-chance it can be made to
work, though both a high improvement potential and a high difficulty in
materialising it are not mutually exclusive.
6Scott Garrabrant7yThe problem with dating sites (like social network sites or internet messengers)
is that the utility you can gain from it is VERY related to how many other
people are actually using it. This means that there is a natural drift towards a
monopoly. Nobody wants to join a dating site that only has 1000 people. If you
do not have a really good reason to think that your dating site idea will get
off the ground, it probably won't.
One way you could possibly get past this is to match people up who do not sign
up or even know about this service.
For example, you could create bots that browse okcupid for answers to
questions, ignore okcupid's stupid algorithms in favor of our own much better
ones, and then send two people a message that describes how our service works
and introduces them to each other.
Is this legal? If so, I wonder if okcupid would try to stop it anyway.
The chicken/egg issue is real with any dating site, yet dating sites do manage to start. Usually you work around this by focusing on a certain group/location, dominating that, and spreading out.
Off the cuff, the bay strikes me as a potentially great area to start for something like this.
8ChristianKl7yIt's spam and very likely violates the TOS.
7Lumifer7yAwesome -- that will fit right in between "I'm a Nigerian customs official with
a suitcase of cash" emails and "Enlarge your manhood with our all-natural pills"
ones.
P.S. Actually it's even better! Imagine that you're a girl and you receive an
email which basically says "We stalked you for a while and we think you should
go shack up with that guy". Genius!
2EGarrett7yHow can there be a monopoly if people can use more than one dating site?
Unless OkCupid bans you from putting your profile up on other sites, you can
just as easily put a profile on another site with less people, if the site seems
promising.
-2Eugine_Nier7yIt's still more work to put a profile on multiple sites.
2EGarrett7yHi Eugine,
I don't mean to be nitpicking, but a monopoly is a very specific thing. It's
quite different from it just being inconvenient to switch to a competitor. In
very many cases in normal market competition, it's inconvenient to switch to
competitors (buying a new car or house, changing your insurance, and so on), but
that doesn't affect the quality of the product. Similarly, for a monopoly to
affect the quality of OKCupid's service, it would have to be a very specific
situation, and different from what currently exists, which seems to be quite
normal market functioning.
-2Eugine_Nier7yCoscott was talking about a "a natural drift towards a monopoly".
-4EGarrett7yUnless OKCupid is hiring the government or people with guns to threaten other
websites out of existence, there won't be a drift towards a monopoly.
A monopoly isn't created by one company getting the overwhelming majority of
customers. A monopoly is only created when competitors cannot enter the market.
It's a subtle distinction but it's very important, because what's implied is
that the company with the monopoly can jack up their prices and abuse customers.
They can't do this without feeding a garden of small competitors that can and
will outgrow them (see Myspace, America Online, etc), unless those competitors
are disallowed from ever existing.
You can keep downvoting this, but it's a very important concept in economics and
it will still be true.
5[anonymous]7yForbidding anyone who hasn't read “The Logical Fallacy of Generalization from
Fictional Evidence
[http://lesswrong.com/lw/k9/the_logical_fallacy_of_generalization_from/]” from
watching any Hollywood or Disney movies about romance would go a long way. ;-)
4Lumifer7ySo how would it be different from OK Cupid, for example?
As an aside, wasn't the original motivation for Facebook Zuckerberg's desire to
meet girls..? :-D
Here is one improvement to OKcupid, which we might even be able to implement as a third party:
OKcupid has bad match algorithms, but it can still be useful as searchable classified ads. However, when you find a legitimate match, you need to have a way to signal to the other person that you believe the match could work.
Most messages on OKcupid are from men to women, so women already have a way to do this: send a message. Men, however, do not.
Men spam messages by glancing over profiles and sending cookie-cutter messages that mention something in the profile. Women are used to this spam, and may reject legitimate interest because they do not have a good enough spam filter.
Our service would be to provide an "I am not spamming" commitment: a flag that can be put in a message which signals "This is the only flagged message I have sent this week."
It would be a link you put in your message, which sends the reader to a site that basically says: yes, Bob (profile link) has only sent this flag to Alice (profile link) in the week of 2/20/14-2/26/14, with an explanation of how this works.
Do you think that would be a useful service to implement? Do you think people would actually use it, and receive it well?
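To make the proposal above concrete, here is a minimal sketch of what the "one flagged message per week" bookkeeping could look like. Everything here (the SpamFlagService name, its methods, the example names and dates) is hypothetical illustration, not an existing service or API.

```python
# Minimal sketch of the "one flagged message per week" commitment service
# proposed above. All names here are hypothetical.
import datetime


class SpamFlagService:
    def __init__(self):
        # maps (sender, ISO week) -> recipient of the single allowed flag
        self._flags = {}

    def request_flag(self, sender: str, recipient: str, when: datetime.date) -> bool:
        """Grant a flag only if the sender has not used one this ISO week."""
        week = when.isocalendar()[:2]  # (year, week number)
        key = (sender, week)
        if key in self._flags:
            return False  # this week's flag is already spent
        self._flags[key] = recipient
        return True

    def verify(self, sender: str, recipient: str, when: datetime.date) -> str:
        """Text shown when the recipient follows the verification link."""
        week = when.isocalendar()[:2]
        if self._flags.get((sender, week)) == recipient:
            return f"{sender} has sent their only flagged message this week to {recipient}."
        return "No valid flag on record for this message."


# Example: Bob flags one message to Alice; a second flag in the same week is refused.
svc = SpamFlagService()
today = datetime.date(2014, 2, 24)
assert svc.request_flag("Bob", "Alice", today) is True
assert svc.request_flag("Bob", "Carol", today) is False
print(svc.verify("Bob", "Alice", today))
```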
7badger7yScarce signals do increase willingness to go on dates
[http://home.sogang.ac.kr/sites/econdept/SiteCollectionDocuments/Leesoohyung_Niederle_maintext_dec2012.pdf]
, based on a field experiment of online dating in South Korea.
0mare-of-night7yI wonder if a per-message fee for a certain kind of message would be a good
business model for this. My suspicion is that it would work very well if all
your users had that reluctance to ever spend anything online (people are much
more willing to buy utilons that involve getting a physical product than to pay
for things like apps), but it breaks down as soon as someone with some unused
disposable income realizes that spamming $1 notes isn't that expensive.
Only being able to send a certain number of messages per week of a special type
might be enough for indicating non-spam, as long as you could solve the problem
of people making multiple profiles to get around it. Having a small fee attached
to the service might help with tracking that down, since it would keep people
from abusing it too extremely, and cover the cost of having someone investigate
suspicious accounts (if more than one is paid for by the same credit card at
around the same time, for example).
0Scott Garrabrant7yOKcupid solves the multiple account problem for us. It is probably better to not
send a virtual rose than to make an account that you then have to answer all the
questions to.
0Lumifer7yWhere will your credibility come from?
Alice receives a message from Bob. It says "You're amazing, we're nothing but
mammals, let's do it like they do on the Discovery Channel", and it also says
"I, Mallory, hereby certify that Bob only talked about mammals once this week --
to you".
Why should Alice believe you?
Things like that are technically possible (e.g. cryptographic proofs-of-work)
but Alice is unlikely to verify your proofs herself and why should she trust
Mallory, anyway?
5Scott Garrabrant7yI think if we had a nice professional website, with a link to a long description
of how it all works that people won't read anyway, they will tend to trust us.
Especially if we use math.
2mare-of-night7ySeconded - once you get as far as people trusting you enough to post their
personal information and possibly pay you for the service, they're not still
suspecting you of letting people spam you with "certified" non-spam.
6Scott Garrabrant7yOK Cupid has a horrible match percent algorithm. Basically someone who has a
check list of things that their match cannot be will answer lots of questions as
"this matters a lot to me" and "any of these options are acceptable except for
this one extreme one that nobody will click anyway." The stupid algorithm will
inflate this person's match percent with everyone.
So, if you look at people with high compatibility with you, that says more about
their question answering style, than how much you have in common.
This is why the algorithm is horrible in theory. In practice my one example is
that I am getting married in a month to someone I met on OKcupid with 99%
compatibility.
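For illustration, here is a toy model of the inflation effect described above. It uses the importance-weighted "acceptable answers" scheme OkCupid has publicly described in rough outline (each side's satisfaction is earned weight over possible weight, and the match percentage is roughly the geometric mean of the two); the exact weights, questions, and profiles below are invented assumptions.

```python
# Toy model of the match-percent inflation described above. The weighting scheme
# is a simplified version of OkCupid's publicly described approach; all numbers
# and profiles are made up for illustration.
import math

WEIGHTS = {"irrelevant": 0, "a little": 1, "somewhat": 10, "very": 50}


def satisfaction(my_answers, their_answers):
    """How satisfied I am with their answers, weighted by my stated importance."""
    earned = possible = 0
    for q, (_my_answer, acceptable, importance) in my_answers.items():
        if q not in their_answers:
            continue
        w = WEIGHTS[importance]
        possible += w
        if their_answers[q][0] in acceptable:
            earned += w
    return earned / possible if possible else 0.0


def match(a, b):
    """Approximate match percent: geometric mean of the two satisfactions."""
    return math.sqrt(satisfaction(a, b) * satisfaction(b, a)) * 100


# "Gamed" profile: every question is "very" important, but every answer anyone
# actually gives is acceptable -- so this user is ~100% satisfied with everyone.
gamed = {q: ("yes", {"yes", "no"}, "very") for q in range(20)}
# A genuinely selective profile, and a stranger with mixed answers.
picky = {q: ("yes", {"yes"}, "very") for q in range(20)}
stranger = {q: ("yes" if q % 2 else "no", {"yes", "no"}, "somewhat") for q in range(20)}

print(match(gamed, stranger))  # 100.0: the gamed side inflates the percentage
print(match(picky, stranger))  # ~70.7: half the stranger's answers are unacceptable
```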
3Viliam_Bur7yA good website design could change the answering style. Imagine a site where you
don't fill out all the answers at once. Instead it just displays one question at
a time, and you can either answer it or click "not now". The algorithm would
prioritize the questions it asks you dynamically, using the already existing
data about you and your potential matches -- it would ask you the question which
it expects to provide most bits of information.
Also, it would use the math properly. The compatibility would not be calculated
as number of questions answered, but number of bits these answers provide. A
match for "likes cats" provides more bits than "is not a serial killer".
2drethelin7yVery consistently people that I know and like, when I see them on okcupid, have
a high match percentage. When I meet okcupid people with a good match
percentage, I usually like them. This seems to imply the algorithm is a lot
better than your theoretical worst example of it. I think your situation is much
more of a problem if you don't answer enough questions.
0Scott Garrabrant7yPerhaps the way people tend to answer questions does not change very much from
person to person, so this problem does not show up in practice.
However, if you are willing to change your style for answering questions, it is
probably possible to game OKcupid in such a way that you get 90+% with anyone
you would care about.
0Izeinwinter7ySelf-defeating. The entire point of OKcupid is to find someone you will actually
click with. Inflating your own match percentages artificially just makes OKCupid
worse for you. Of course, this doesn't help if the site just isn't very popular in
your city.
Eh. Radical: Have the government do this. Literally, run a dating site, have
sex-ed classes teach people how to use it, and why gaming it is bloody stupid.
That should result in maximum uptake, and would cost a heck of a lot less than a
lot of other initiatives governments already run trying to promote stable
pairbonds. Now, how to get this into a political platform...
0Scott Garrabrant7yNot if you have an honest account too so you can check compatibility while still
broadcasting higher compatibility than you actually have.
2Izeinwinter7yStill pointless! There is no upside to having a bunch of people you are not
actually compatible with think the mirage you constructed is a good match. If
they are not a match with your honest profile, you do not want to waste theirs
or your own time. If your actual goal is to have a bunch of one night stands,
then make a profile that out and out states that so that you will be matched
with people of like mind. Dishonesty in this matter is both unethical and nigh
certain to result in unpleasant drama. Proper use of this kind of tool is an
exercise in luminosity - the more accurately you identify what you are truly
looking for, the better it works.
Also, see radical proposal: If a site of this type is run by the government,
sockpuppets are obviously not going to be an option - one account per social
security number or local equivalent, because that is a really simple way to shut
down a whole host of abuses.
3mare-of-night7yI've had ideas sort of like this at the back of my mind since seeing Paul Graham
pointing out how broken online dating is in one of his essays. (Not so much
analyzing all of someone's existing data, but analyzing IM transcripts to match
people with IM buddies they'd be likely to make good friends with is a thing I
considered doing.) Haven't gotten too far with any of them yet, but I'm glad you
reminded me, since I was planning on playing with some of my own data soon just
to see what I find.
Do you think that not having dated much would be much of a comparative
disadvantage in working on this problem? That's one of the reasons I hesitate to
make it my main project.
A possibly-related problem - why does every site I see that says it is for
matching strangers who might like to be friends get full of people looking for a
date? (Small sample size, but I've never seen one that didn't give me the sense
that the vast majority of the members were looking for romance or a one night
stand or something.)
5[anonymous]7ySo that people can look for dates without breaking plausible deniability.
0RichardKennaway7yI think it's the web site, rather than its clients, that needs the plausible
deniability. It cannot seem to be in the business of selling sex, so it has to
have a wider focus.
2RichardKennaway7yWhy altruistic? If it's worth anything, it's worth money. If it won't even pay
its creators for the time they'll put in to create it, where's the value?
0Alexandros7yI am not convinced it is the optimal route to startup success. If it was, I
would be doing it in preference over my current startup. It is highly uncertain
and requires what looks like basic research, hence the altruism angle. If it
succeeds, yes, it should make a lot of money and nobody should deprive its
creators of the fruits of their labour.
2Salemicus7yIt strikes me that it is much more plausible to argue that the dating market
suffers from market failure through information asymmetry, market power and high
search costs than to argue the same about economic activity. Yet although people
search high and low to find (often non-existent) market failures to justify
economic interventions, interventions in the dating market are greeted with
near-uniform hostility. I predict that, outside of LessWrong, your proposal
will generate a high "Ick" factor as a taboo violation. "Rationality-based
online dating will set you up with scientifically-chosen dates..." this is
likely to be an anti-selling point to most users.
4Alexandros7yObviously you'd take a different angle with the marketing.
Off the cuff, I'd pitch it as a hands-off dating site. You just install a
persistent app on your phone that pushes a notification when it finds a good
match. No website to navigate, no profile to fill, no message queue to manage.
Perhaps market it to busy professionals. Finance professionals may be a good
target to start marketing to. (busy, high-status, analytical)
There would need to be some way to deal with the privacy issues though.
0mare-of-night7yThis might be a reason to start it out as a niche thing. Though, the problem is
finding a niche that likes this proposal and has a decent gender ratio (or
enough people interested in dates of the same gender).
Now that I think about it, existing dating sites do try to advertise themselves
as being better because of their algorithm. If that advertising works, maybe the
ick factor isn't that strong?
1[anonymous]7yHave you seen this TED talk [http://www.youtube.com/watch?v=d6wG_sAdP0U]?
2Alexandros7yfantastic, thanks!
0pianoforte6117yViliam_Bur sort of said this, but it doesn't seem possible to outcompete the
existing websites due to perverse incentives.
If I build a site optimizing for long term success, and another dating site
optimizes for an intense honeymoon phase (which encourages people to come back
and spread the word about the site) then I will lose. And optimizing for long
term success is really hard since feedback occurs on the order of decades.
Of course I'm assuming that intense short term happiness and long term stability
aren't very highly correlated and I could be wrong. I'm also assuming that
stability is desirable - I'd be curious if anyone disagrees.
0[anonymous]7yCompanies are trying, unfortunately the incentives seem sort of messed up to me.
Dating websites have an incentive to encourage people to use their service, not
get into wonderful long term relationships. Hence I would expect them to
optimize for relationships with an intense honeymoon phase, rather than
relationships with a high chance of long term success and compatibility.
Since we're after long term success, feedback will occur on the order of decades
- making this a very hard optimization problem.
How do you pick a career if your goal is to maximize your income (technically, maximize the expected value of some function of your income)? The sort of standard answer is "comparative advantage", but it's unclear to me how to apply that concept in practice. For example how much demand there is for each kind of job is obviously very important, but how do you take that into consideration, exactly? I've been thinking about this and came up with the following. I'd be interested in any improvements or alternative ideas.
For each career under consideration, estimate your potential income ranking or percentile within that career if you went into it (as a probability distribution).
For each career, estimate its income distribution (how much will the top earner make, how much will the second highest earner make, etc.).
From 1 and 2, obtain a probability distribution of your income within each career.
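A hedged Monte Carlo sketch of steps 1-3 above: sample your percentile within a career from a belief distribution, push it through that career's income-by-rank curve, and summarize the resulting income distribution. The income curves and belief distributions below are invented placeholders, not data about any real career.

```python
# Monte Carlo sketch of the three-step procedure above. All curves and belief
# distributions are invented for illustration.
import random
import statistics


def income_at_percentile_finance(p):
    # Toy income-by-rank curve with a heavy right tail (top earners make far more).
    return 60_000 * (1.0 / (1.0 - min(p, 0.999))) ** 0.7


def income_at_percentile_medicine(p):
    # Toy income-by-rank curve: flatter, with a higher floor.
    return 150_000 + 200_000 * p


def simulate(income_curve, my_percentile_belief, n=100_000):
    """Step 3: combine my percentile belief with the career's income curve."""
    samples = [income_curve(my_percentile_belief()) for _ in range(n)]
    return statistics.mean(samples), statistics.median(samples)


random.seed(0)
# Step 1: my believed percentile within each career, as a distribution.
finance_belief = lambda: random.betavariate(2, 2)   # uncertain, centered near the 50th
medicine_belief = lambda: random.betavariate(4, 2)  # more confident, around the 67th

# Step 2 is encoded in the income curves; step 3 combines them.
print("finance  (mean, median):", simulate(income_at_percentile_finance, finance_belief))
print("medicine (mean, median):", simulate(income_at_percentile_medicine, medicine_belief))
```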
If you have a high IQ and are good at math go into finance. If you have a high IQ, strong social skills but are bad at math go into law. If you have a high IQ, a good memory but weak social and math skills become a medical doctor. If you have a low IQ but are attractive marry someone rich. If you have a very low IQ get on government benefits for some disability and work at an under-the-table job.
3garabik7yThis seems awfully US centric.
Anyway, this advice aims at the "upper middle class", not the "rich bastard" category.
Maybe apart from "marry someone rich".
5Lumifer7yWell, Western-developed-world-centric, true.
In dynamic economies (e.g. China) you probably would want to start a business.
In stagnant and poor places your first priority should be to get out.
Going into finance or law can propel you into the "rich bastard" category.
4gjm7yMedical doctors are paid well in many places other than the US, though not as
well as in the US. (For that matter, most other well-paid jobs are better paid
in the US than anywhere else. Software development, law, senior management,
etc.)
Also, though of course this was no part of the original question, medicine
offers more confidence than most careers that your work is actually making the
world a better place. (Which may not actually be the right question to ask, of
course -- what matters is arguably the marginal effect, and if you're well paid
and care enough about people in poor countries you may well be able to do more
good by charitable donations than you ever could directly by your work. But it's
a thing many people care about.)
0Vulture7yMore importantly, it seems that being a medical doctor can pay very large
dividends both in donable dollars and in warm-fuzzies.
1RowanE7yI think that's intended. Trying to achieve greater wealth generally involves
much higher risk, and even if it offers a higher expected value in terms of
money, the diminishing utility of wealth probably makes the expected utility of,
say, creating a startup, lower than just pursuing a middle-class career that
matches your skills.
2[anonymous]7yWell, Wei Dai said “maximize the expected value of some function of your
income”; which career achieves that will depend on whether the function is
log(x), x, H(x - $40,000/year)
[https://en.wikipedia.org/wiki/Heaviside_step_function], exp(x/($1M/year)), or what.
0RowanE7yI assumed it was referring to (part of) Wei Dai's utility function. What other
functions could there be a point in applying?
2[anonymous]7yYes, but we don't know what Wei Dai's utility function is, and the answer to his
question may depend on that.
2shminux7yBut if you are physically OK, play sports and/or enlist (US-centric).
6ThrustVectoring7yThe vast majority of people who play sports have fun and don't receive a dime
for it. A majority of people who get something of monetary value out of playing
sports get a college degree and nothing else.
I agree with the US army part though.
1Vulture7yI think the US army is very physically dangerous, and furthermore might be
considered a negative to world-welfare, depending on your politics.
8ThrustVectoring7yI don't have good numbers, but it's likely less dangerous than you think it is.
The vast majority of what an infantryman does falls into two categories -
training, and waiting. And that's a boots on ground, rifle in hand category -
there's a bunch of rear-echelon ratings as well.
I'm guessing that it's likely within an order of magnitude of danger as
commuting to work. Likely safer than delivering pizzas. There's probably a lot
of variance between specific job descriptions - a drone operator based in the
continental US is going to have a lot less occupational risk than the guy doing
explosive ordnance disposal.
3polymathwannabe7yHow many people would I be calmly killing every day? I'd have massive PTSD if I were
a drone operator.
2NancyLebovitz7yFrom what I've read, a couple of the issues for drone pilots are that they've
been killing people who they've been watching for a while, and that they feel
personal responsibility if they fail to protect American soldiers.
1Alejandro17yBy a strange coincidence (unless you saw it and thus had it on your mind)
today's SMBC [http://www.smbc-comics.com/?id=3283#comic] is about exactly this.
-4Eugine_Nier7yWell, I don't have statistics about that, but accounts from WWII bomber crews
suggest otherwise.
-2[anonymous]7yMaybe they were just really good at screening out applicants who would have been
likely to get PTSD.
2Kaj_Sotala7yAFAIK, people only started understanding PTSD after Vietnam
[http://en.wikipedia.org/wiki/Posttraumatic_stress_disorder#Military_settings]
and it wasn't even called that until the 1980s, so possibly not.
-1Eugine_Nier7yUp until the US gets involved in something resembling a symmetrical war. Of
course in that case it's possible no job will be safe.
4ThrustVectoring7yIn the year 1940, working as an enlisted member of the army supply chain was
probably safer than not being in the army whatsoever - regular Joes got drafted.
Besides which, the geographical situation of the US means that a symmetrical war
is largely going to be an air/sea sort of deal. Canada's effectively part of the
US in economic and mutual-defense terms, and Mexico isn't much help either.
Mexico doesn't have the geographical and industrial resources to go toe-to-toe
with the US on their own, the border is a bunch of hostile desert, and getting
supplies into Mexico past the US navy and air force is problematic.
-3Eugine_Nier7yYes, and in particular it'll involve enemy drones. Drone operators are likely to
be specifically targeted.
0ThrustVectoring7yThat makes them safer, ironically. If your command knows that you're likely to
be targeted and your contributions are important to the war effort, they'll take
efforts to protect you. Stuff you down a really deep hole and pipe in data and
logistical support. They probably won't let you leave, either, which means you
can't get unlucky and eat a drone strike while you're enjoying a day in the
park.
You're at elevated risk of being caught in nuclear or orbital kinetic
bombardment, though... but if the war gets to that stage your goose is cooked
regardless of what job you have.
4Jayson_Virissimo7yAnother bonus of enlisting: basic skills will be drilled into you so thoroughly that they
will be fully in your System 1, allowing you extra executive function (thereby
causing you to punch above your weight in terms of intelligence). Although,
there is some ethical risk involved.
1NancyLebovitz7yEvidence?
1pianoforte6117yDoes anyone know if finance requires strong math and social skills? I assumed it
did - social skills for creating connections, and math skills for actually doing
to job.
And if you do have poor social skills, then practice! Social skills are really
important. I'm still working on this.
This is some guesswork, but some other possible combinations:
Strong social skills, above average IQ - management?
Above average IQ, good math skills - accounting?
Rich parents, family business - take over said business eventually.
Middle class parents, fair amount of property, good location - rent.
Rich parents, strong social skills - network through their connections.
1Eugine_Nier7yIs this still true? Recently there have been reports about an oversupply of
lawyers and scandals involving law schools fudging the statistics on the
salaries of their graduates.
2James_Miller7ySalaries might be falling, but I doubt this is long term.
1Izeinwinter7yUS law is a spectacularly bad choice at the moment. There are far too many law
schools and, as a consequence, too many law graduates; the degree costs a
fortune and employment prospects are outright bad. Do not do this.
Finance is an implicit bet that Wall Street will not get struck down by the wrath
of the electorate just as you finish your education.
Honestly? If riches really is what you want, go into business for yourself. A
startup, or at the low end just being a self-employed contractor has good
returns and this is not likely to change. Programming, the trades, a good set of
languages and an import-export business..
-1Eugine_Nier7yWell, as I understand it part of the issue is that a lot of the grunt work that
used to require lots of lawyers to do, e.g., looking through piles of documents
for relevant sections, can now be automated.
0MTGandP7yAccording to 80000 Hours
[http://80000hours.org/blog/190-where-can-i-earn-the-most], law is still one of
the highest-earning careers.
1Qiaochu_Yuan7yIs finance higher E(money) than, say, a startup?
5James_Miller7yI would guess yes given the high startup failure rate.
5ThrustVectoring7yThere's a high failure rate in finance, too - it's just hidden in the "up or
out" culture. It's a very winner-takes-all kind of place, from what I've heard.
6Lumifer7yFinance is diverse.
If you want to be a portfolio manager who makes, say, macro bets, yes, it's very
much up or out. But if you want to be a quant polishing fixed income risk
management models in some bank, it's a pretty standard corporate job.
2Alexandros7yStartups are shockingly diverse too. And despite the super-high failure rates I
hear about, the group of friends I've been tracking the past 5 years or so seem
to be doing pretty darn well, despite some of them having failures indeed.
I strongly suspect the degree of failure in startups correlates inversely with
rationality skills (as it should) so rationalists should not be placing
themselves in the same reference category as everyone else. Execution skills
matter a lot too, but doing a startup has worked miracles for my motivation too.
9Lumifer7yNot from the expected-income point of view (we're not considering car
dealerships and franchise eateries startups, right?).
Oh, dear. "I'm so smart that normal rules don't apply to me". What could
possibly go wrong..?
2Alexandros7yThis isn't "I'm smart and rules don't apply". Smartness alone doesn't help.
But, to put it this way, if rationality training doesn't help improve your
startup's odds of success, then there's something wrong with the rationality
training.
To be more precise, in my experience, a lot of startup failure is due to
downright stupidity, or just ignoring the obvious.
Also, anecdotally, running a startup has been the absolute best on-the-job
rationality training I've ever had.
Shockingly, successful entrepreneurs I've worked with score high on my
rationality test, which consists of how often they say things that are
uncontested red flags, and how well-reasoned their suggested courses of action
are. In particular, one of our investors is the closest approximation to a
bayesian superintelligence I've ever met. I can feed him data & news from the
past week, and almost hear the weighting of various outcomes shift in his
predictions and recommendations.
In short,
1. Rationalists are more likely to think better, avoid obvious errors.
2. Thinking better improves chances of startup success
3. Rationalists have better chances of startup success.
I do understand this sounds self-serving, but I also try to avoid the sin of
underconfidence. In my experience, quality of thinking between rationalists and
the average person tends to be similar to quality of conversation here versus on
YouTube. The problem is when rationalists bite off more than they can chew in
terms of goals, but that's a separate problem.
0niceguyanon7yWhat you say sounds intuitive to me at first, but as of now I would say that
rationality training may boost startup success rates just a little.
Here are some reasons why rationality might not matter that much:
1. People tend to be a bit more rational when it counts, like making money. So
having correct beliefs about many things doesn't really give you an edge
because the other guy is also pretty rational for business stuff.
2. self-delusion [http://www.radiolab.org/story/91618-lying-to-ourselves/],
psychopathy
[http://www.scientificamerican.com/article/what-psychopaths-teach-us-about-how-to-succeed/]
, irrationality, corruption, arrogance, and raw driven determination
[http://lesswrong.com/lw/dtg/notes_on_the_psychology_of_power/], have good
if not better anecdotal evidence of boosting success than rationality
training I think.
1Alexandros7yWell, at this point we're weighing anecdotes, but..
1. Yes! They do tend to push their rationality to the limit. Hypothesis:
knowing more about rationality can help push up the limit of how rational
one can be.
2. Yes! It's not about rationality alone. Persistent determination is quite
possibly more important than rationality and intelligence put together. But
I posit that rationality is a multiplier, and also tends to filter out the
most destructive outcomes.
In general, I'd love to see some data on this, but I'm not holding my breath.
0niceguyanon7yAgreed. Interestingly, the latest post
[http://lesswrong.com/lw/jsp/political_skills_which_increase_income/] in main
points to evidence supporting rationality having a significant relation to
success in the work place – not the same as entrepreneurship, nonetheless I
update slightly more in favor of your position.
0Viliam_Bur7yI agree that a more rational person has a greater chance, ceteris paribus.
Question is, how much greater.
A part of the outcome is luck; I don't know how big part. Also, the rationality
training may improve your skills, but just to some degree.
(Data point: myself. I believe I am acting more rationally after CFAR minicamp
than before, and it seems to be reflected by better outcomes in life, but there
is still a lot of stupid things I do. So maybe my probability of running a
successful startup has increased from 1% to 3%.)
0Alexandros7yI question the stats that say a 1% success rate for startups. I will need to see
the reference, but one I had access to basically said "1% matches or exceeds
projections shown to investors" or some such. Funnily enough, by that metric
Facebook is a failure (they missed the goal they set in the convertible note
signed with Peter Thiel). If run decently, I would expect double digit success
rates, for a more reasonable measure of success. If a driven, creative
rationalist is running a company, I would expect a very high degree of success.
Another thing much more common in rationalists than the common population is the
ability to actively solicit feedback, reflect and self-modify. This is
surprisingly rare. And incredibly vital in a startup.
Success at startups is not about not doing stupid things. I've made many MANY
mistakes. It's about not doing things stupid enough to kill your company.
Surprisingly, the business world has a lot of tolerance for error, as long as
you avoid the truly bad ones.
1Douglas_Knight7yIt is hard to survey startups. What is usually done is to measure success rates
of companies that raised a Series A round of funding. Many companies fail before
achieving that, though they necessarily fail faster, producing less opportunity
cost.
Here [https://i.imgur.com/KZJdFgZ.png] is a chart of returns to a VC, taken from
this paper [http://www.hbs.edu/faculty/Publication%20Files/11-020.pdf] by a
different author. 60% of dollars invested are in companies that lost the VCs
money (lost them 85%). This is a top VC that managed to triple its money, so
this is an overestimate of success of a regular VC-backed company. This is a
common bias in these surveys.
Based on the fictitious figure 2, 63% of dollars is actually 69% of companies,
because successful companies get more funding. So 31% of companies with a Series
A round at a top firm succeed by the metric of a positive return to the VCs.
Double digit success would require that at least 1/3 of startups get a Series A
funding and that companies funded by typical VCs are as successful as companies
funded by a top VC.
--------------------------------------------------------------------------------
The appropriate definition of success is comparing to opportunity cost. In
particular, the above analysis fails to take into account duration. Here
[https://www.stanford.edu/~rehall/Hall-Woodward%20on%20entrepreneurship.pdf] is
a paper that makes a reasonable comparison and concludes that running a company
with a Series A round was a good decision for people with $700k in assets.
Again, skipping to the Series A round is not a real action, thus overestimating
the value of the real action of a startup. There is an additional difficulty
that startups may have non-monetary costs and benefits, such as stress and
learning. Edit: found the paper. According to Figure 2, that 75% of VC-backed
firms exit at 0, not much worse than at the top VC considered above.
-1Eugine_Nier7yWell, Paul Graham has built quite a successful incubator, apparently largely based
on his ability to predict success of start-ups based on a half-hour interview.
0Lumifer7yBesides what gwern said, Paul Graham is a successful VC. The expected income of
VCs is very different from the expected income of startup founders.
-3Eugine_Nier7yMy point is that this is evidence that start-up success depends on ability more
than luck.
0Lumifer7yI think both ability and luck are necessary but not sufficient (well, reasonable
amounts of luck :-D).
0gwern7yI'm not sure how much the interviews add compared to the Y Combinator model of
investing in a lot of startups very early on at unusually favorable terms,
integrating with Hacker News, and building a YC community with alumni & new
angels. (As far as the latter goes, you can ask AngryParsley why he went into YC
for Floobits: it wasn't because he needed their cash.)
-3Eugine_Nier7yWhat kind of social skills does that require? My impression is that this is the
modern equivalent of court astrologer and requires some similar skills, i.e.,
cold reading.
2Lumifer7yNot much -- the usual ones for holding a corporate job (wear business casual,
look neat, don't smell, don't be a weirdo). Quants are expected to be
nerdy/geeky.
Not at all. Finance has the advantage of providing rapid and unambiguous
feedback for your actions.
-1Eugine_Nier7yIf you're trading yes, although the feedback is extremely noisy. If you're
designing models not so much. Incidentally a lot of the quants I know are also
good at doing Tarot readings, whether they believe the cards have power or not.
0Lumifer7yThat very much depends on what kind of strategy you're trading. For example, HFT
doesn't have problems with noise.
Yes, so much. Your model has to work well on historical data and if it makes it
to production, it will have performance metrics that it will have to meet.
1mare-of-night7yThe other thing to keep in mind about failure rates is where you end up if you
fail - what other careers you can go into with the same education. (In the case
of startups, you can keep trying more startups, and you're more likely to
succeed on the second or third than you were on the first. I don't know how it
is in finance.)
0[anonymous]7yI think a higher startup failure rate implies E(startup) > E(finance) since most
people want risk-adjusted return
2James_Miller7yNot necessarily because of different barriers to entry.
0[anonymous]7yI'm not sure I would count that as “your income”, though in jurisdictions with
easy divorces and large alimony it might be as good for all practical purposes.
0Jayson_Virissimo7yIn this context, what constitutes a "high IQ"?
1James_Miller7yDepends on how high you are aiming for. For a good investment banking position
you need a high enough IQ to either get into a top 10 school or be in the top
10% of a school such as Smith College.
0[anonymous]7yIt's obvious how you get into law or medicine, but how does going into finance
work?
0James_Miller7yFor students at Smith College the normal path is you get very high grades and
take some math-heavy courses, get a summer internship with an investment bank
after your junior year of college which results in a full time job offer, then
after 2-5 years you get an MBA and then get a more senior position at an
investment bank.
0[anonymous]7yOh. Any way for people who've already graduated from college to get in, or is it
too late at that point?
2James_Miller7yAn MBA or masters degree in finance would probably help. I don't have much
knowledge of more direct paths.
9[anonymous]7y"Career" is an unnatural bucket. You don't pick a career. You choose between
concrete actions that lead to other actions. Imagine picking a path through a
tree. This model can encompass the notion of a career as a set of similar paths.
Your procedure is a good way to estimate the value of these paths, but doesn't
reflect the tree-like structure of actual decisions. In other words, options are
important under uncertainty, and the model you've listed doesn't seem to reflect
this.
For example, I'm not choosing between (General Infantry) and (Mathematician).
I'm choosing between (Enlist in the Military) and (Go to College). Even if the
terminal state (General Infantry) had the same expected value as
(Mathematician), going to college should be more valuable because you will have
many options besides (Mathematician) should your initial estimate prove wrong,
while enlisting leads to a much lower branching factor.
How should you weigh the value of having options? I have no clue.
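One way to put a rough number on option value, as a sketch: if the uncertainty about how good each terminal career would turn out resolves before you have to choose a branch, a branching node is worth approximately the expected maximum of its children, while committing now is worth only the expected value of the single path. The payoffs and spreads below are invented, and the "you learn the realized values before choosing" assumption is a deliberate simplification.

```python
# Sketch of option value from branching: three branches with the same mean payoff
# beat a single committed path, because you pick the best branch after learning.
# All payoff numbers are invented.
import random
import statistics

random.seed(1)


def draw_value(mean, spread):
    """Realized lifetime value of a terminal career, unknown when deciding."""
    return random.gauss(mean, spread)


def committed_path_value(mean, spread, n=50_000):
    return statistics.mean(draw_value(mean, spread) for _ in range(n))


def branching_node_value(children, n=50_000):
    # Choose the best branch after seeing how each would have turned out for you.
    return statistics.mean(
        max(draw_value(m, s) for (m, s) in children) for _ in range(n)
    )


enlist = committed_path_value(mean=100, spread=30)
college = branching_node_value([(100, 30), (100, 30), (100, 30)])  # same means
print(enlist, college)  # the branching node is worth more despite equal expectations
```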
7ThrustVectoring7yYour goal is likely not to maximize your income. For one, you have to take cost
of living into account - a $60k/yr job where you spend $10k/yr on housing is
better than a $80k/yr (EDIT:$70k/yr, math was off) job where you spend $25k/yr
on housing.
For another, the time and stress of the career field has a very big impact on
quality-of-life. If you work sixty hour weeks, in order to get to the same kind
of place as a forty hour week worker you have to spend money to free up twenty
hours per week in high-quality time. That's a lot of money in cleaners, virtual
personal assistants, etc.
As far as "how do I use the concept of comparative advantage to my advantage",
here's how I'd do it:
1. Make a list of skills and preferences. It need not be exhaustive - in fact,
I'd go for the first few things you can think of. The more obvious of a
difference from the typical person, the more likely it is to be your
comparative advantage. For instance, suppose you like being alone, do not
get bored easily by monotonous work, and do not have any particular
attachment to any one place.
2. Look at career options and ask yourself if that is something that fits your
skills and preferences. Over-the-road trucking is a lot more attractive to
people who can stand boredom and isolation, and don't feel a need to settle
down in one place. Conversely, it's less attractive to people who are the
opposite way, and so is likely to command a higher wage.
3. Now that you have a shorter list of things you're likely to face less
competition for or be better at, use any sort of evaluation to pick among
the narrower field.
6solipsist7yYou should consider option values, especially early in your career. It's easier
to move from a high paying job in Manhattan to a lower paying job in Kansas City
than to do the reverse.
4ThrustVectoring7yUpdate the choice by replacing income with the total expected value from job
income, social networking, and career options available to you, and the point
stands.
0RowanE7yProbably the cost of housing correlates with other expenses, and also there's
income tax to consider, but on the surface the first job is $50k/yr net, the
second job is $55k/yr net, and so it looks like the latter is better.
3ThrustVectoring7ywhoops, picked the wrong numbers. Thanks
1pianoforte6117yIn addition to maximizing income, maximizing savings/investments is very
important. You can be poor off of a $500,000 salary and rich off of a $50,000
salary.
In "The Fall and Rise of Formal Methods", Peter Amey gives a pretty good description of how I expect things to play out w.r.t. Friendly AI research:
Good ideas sometimes come before their time. They may be too novel for their merit to be recognised. They may be seen to threaten some party’s self interest. They may be seen as simply too hard to adopt. These premature good ideas are often swept into corners and, the world, breathing a sigh of relief, gets on with whatever it was up to before they came along. Fortunately not all good ideas wither. Some are kept alive by enthusiasts, who seize every opportunity to show that they really are good ideas. In some cases the world eventually catches up and the original premature good idea, honed by its period of isolation, bursts forth as the new normality (sometimes with its original critics claiming it was all their idea in the first place!).
Formal methods (and I’ll outline in more detail what I mean by ‘formal methods’ shortly) are a classic example of early oppression followed by later resurgence. They arrived on the scene at a time when developers were preoccupied with trying to squeeze complex functionality into hardware wit
Introduction
I suspected that the type of stuff that gets posted in Rationality Quotes reinforces the mistaken way of throwing about the word rational. To test this, I set out to look at the first twenty rationality quotes in the most recent RQ thread. In the end I only looked at the first ten because it was taking more time and energy than would permit me to continue past that. (I'd only seen one of them before, namely the one that prompted me to make this comment.)
A look at the quotes
In our large, anonymous society, it's easy to forget moral and reputational pressures and concentrate on legal pressure and security systems. This is a mistake; even though our informal social pressures fade into the background, they're still responsible for most of the cooperation in society.
There might be an intended, implicit lesson here that would systematically improve thinking, but without more concrete examples and elaboration (I'm not sure what the exact mistake being pointed to is), we're left guessing what it might be. In cases like this where it's not clear, it's best to point out explicitly what the general habit of thought (cognitive algorithm) is that should be corrected, and how... (read more)
So I have the typical introvert/nerd problem of being shy about meeting people one-on-one, because I'm afraid of not being able to come up with anything to say and lots of awkwardness resulting. (Might have something to do with why I've typically tended to date talkative people...)
Now I'm pretty sure that there must exist some excellent book or guide or blog post series or whatever that's aimed at teaching people how to actually be a good conversationalist. I just haven't found it. Recommendations?
8pcm7yOffline practice: make a habit of writing down good questions you could have
asked in a conversation you recently had. Reward yourself for thinking of
questions, regardless of how slow you are at generating them. (H/T Dan of
Charisma Tips [http://www.charismatips.com/], which has other good tips
scattered around that blog).
0CAE_Jones7yI saw a speech pathologist for this. I was taught to ask boring questions I'm
not really interested in asking on the hopes that they will lead to something
interesting happening. "How was your weekend?", "What are some of your
hobbies?", "How about this weather?", and all that mess.
In practice, it feels so forced I can't do it in real life.
9Kaj_Sotala7yYeah. My problem is more that I can't think of anything to say even when people
do say something interesting.
Like just recently, I met up with one person who wanted to discuss his tech
startup thing. Then he held this fascinating presentation about the philosophy
and practice of his project, which also touched upon like five other fields that
I also have an interest in. And I mostly just said "okay" and nodded, which was
fine in the beginning since he was giving me a presentation after all, but then
in the end when he asked me if I had any questions or comments, and I didn't
have much to say. There were some questions that occurred to me as he talked
about it, and I did ask those when they occurred, but still, feels like I
should've been able to say a lot more.
Responding to the interesting-conversation context:
First, always bring pen and paper to any meeting/presentation that is in any way formal or professional. Questions always come up at times when it is inappropriate to interrupt; save them for lulls.
Second, an anecdote. I noticed I had a habit during meetings of focusing entirely on absorbing and recording information, and then processing and extrapolating from it after the fact (I blame spending years in the structured undergrad large technical lecture environment). This habit of only listening and not providing feedback was detrimental in the working world, and it took a lot of practice to start analyzing the information and extrapolating forward in real time. Once you start extrapolating forward from what you are being told, meaningful feedback will come naturally.
7Vaniver7ySo, I have a comparative advantage at coming up with things to say, and so I'm
not sure this advice will fill the specific potholes you're getting stuck on,
but I hope it's somewhat useful.
A simple technique that seems to work pretty well is to read your mind to them,
since they can't read it themselves. If you're interested in field X, say that
you're interested in it. If you're glad that they gave you a talk, tell them
you're glad that they gave you a talk. People like getting feedback, and people
like getting compliments, and when your mind is blank and there's nothing asking
to be said, that's a good place to go looking. (Something like "that was very
complete; I've got no questions" is nicer than just silence, though you may want
to tailor it a bit to whatever they've just said.)
4Kaj_Sotala7yThanks, that sounds potentially useful.
0Ben Pace7yHave you actually tried it out much, or do you stop before you 'just try it'? I
make myself ask questions like that, but I find it can move the conversation
into better places... Although I normally use ones I'm likely to be interested
in, e.g. "Read any good books recently?"
Here is another logic puzzle. I did not write this one, but I really like it.
Imagine you have a circular cake that is frosted on the top. You cut a d-degree slice out of it, and then put it back, but rotated so that it is upside down. Now, d degrees of the cake have frosting on the bottom, while 360 minus d degrees have frosting on the top. Rotate the cake d degrees, take the next slice, and put it back upside down. Now, assuming d is less than 180, 2d degrees of the cake will have frosting on the bottom.
If d is 60 degrees, then after you repeat this procedure, flipping a single slice and rotating 6 times, all the frosting will be on the bottom. If you repeat the procedure 12 times, all of the frosting will be back on the top of the cake.
For what values of d does the cake eventually get back to having all the frosting on the top?
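For whole-degree values of d you can brute-force the question with a short simulation; here is a minimal sketch (the function name and step limit are mine, and it only handles integer slice angles):

```python
def steps_until_frosting_on_top(d, max_steps=1_000_000):
    """Simulate the flip-and-rotate procedure on a cake divided into
    1-degree arcs; return how many flips it takes for all frosting to
    be back on top. Only handles whole-degree slice angles d."""
    cake = [True] * 360  # True = frosting on top for that 1-degree arc
    for step in range(1, max_steps + 1):
        # Cut the slice occupying [0, d), turn it upside down (reversing
        # its order and flipping each arc), and put it back.
        flipped = [not arc for arc in reversed(cake[:d])]
        cake = flipped + cake[d:]
        # Rotate the cake d degrees so the next cut starts at the new position.
        cake = cake[d:] + cake[:d]
        if all(cake):
            return step
    return None

print(steps_until_frosting_on_top(60))  # 12, matching the example above
# Try other values of d (divisors and non-divisors of 360) to explore the question.
```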
Someone was asking a while back for meetup descriptions, what you did/ how it went, etc. Figured I'd post some Columbus Rationality videos here. All but the last are from the mega-meetup.
A question I'm not sure how to phrase to Google, and which has so far made Facebook friends think too hard and go back to doing work at work: what is the maximum output bandwidth of a human, in bits/sec? That is, from your mind to the outside world. Sound, movement, blushing, EKG. As long as it's deliberate. What's the most an arbitrarily fast mind running in a human body could achieve?
(gwern pointed me at the Whole Brain Emulation Roadmap; the question of extracting data from an intact brain is covered in Appendix E, but without numbers and mostly with hypothetical technology.)
5gwern7yWhy not simply estimate it yourself? These sorts of things aren't very hard to
do. For example, you can estimate typing as follows: peak at 120 WPM; words are
on average 4 characters; each character (per Shannon and others' research; see
http://www.gwern.net/Notes#efficient-natural-language
[http://www.gwern.net/Notes#efficient-natural-language] ) conveys ~1 bit; hence
your typing bandwidth is 120 × 4 × 1 = <480 bits per minute, or <8 bits per second.
Do that for a few modalities like speech, and sum.
0[anonymous]7yI've just noticed he said “an arbitrarily fast mind running in a human body”,
not an actual human being, so I don't think it would be much slower at typing
uuencoded compressed stuff than natural language (at least with QWERTY -- it
might be different with keyboard layouts optimized for natural language such
as Dvorak, but still probably within a factor of a few).
0gwern7yThe 120WPM is pretty good for the physical limits: if you are typing at 120WPM,
then you have not hit the limits of your thinking (imagine you are in a typing
tutor - your reading speed ought to be at least 3x 120WPM...), and you're not
too far off some of the sustained typing numbers in
https://en.wikipedia.org/wiki/Words_per_minute#Alphanumeric_entry
[https://en.wikipedia.org/wiki/Words_per_minute#Alphanumeric_entry]
1[anonymous]7yMy point was that 1 bit per character is an underestimate.
0[anonymous]7yLa Wik says [https://en.wikipedia.org/wiki/Perplexity#Perplexity_per_word] 8
bits per word, FWIW.
6gwern7yLa Wiki is apparently not using the entropy estimates extracted from human
predictions (who are the best modelers of natural language). Crude stuff like
trigram models are going to considerably overestimate matters.
4Illano7yAs a baseline estimate for just the muscular system, the world's fastest drummer
can play at about 20 beats per second. That's probably an upper limit on the twitch
speed of human muscles, even with an arbitrarily fast mind running in the body.
Assuming you had a system on the receiving end that could detect arbitrary
muscle contractions, and could control each muscle in your body independently
(again, this is an arbitrarily fast mind, so I'd think it should be able to),
there are about 650 muscle groups in the body according to Wikipedia, so I would
say a good estimate for just the muscular system would be 650 × 20 bits/s, or
about 13 kb/s.
Once you get into things like EKGs, I think it all depends on how much control
the mind actually has over processes that are largely subconscious, as well as
how sensitive your receiving devices are. That could make the bandwidth much
higher, but I don't know a good way to estimate that.
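Putting gwern's typing figure and the muscle estimate above side by side, the arithmetic is simple enough to check in a few lines (a back-of-the-envelope sketch; every number is just the rough figure quoted in this thread):

```python
# Rough Fermi estimate of deliberate human output bandwidth,
# using only the figures quoted in this thread.

# Typing: ~120 words per minute, ~4 characters per word, ~1 bit per character.
typing_bits_per_sec = 120 * 4 * 1 / 60       # ~8 bits/s

# Muscles: ~650 muscle groups, each twitching at ~20 Hz at most, assuming a
# receiver that can read every contraction independently and deliberately.
muscle_bits_per_sec = 650 * 20               # ~13,000 bits/s

print(f"typing:  {typing_bits_per_sec:.0f} bits/s")
print(f"muscles: {muscle_bits_per_sec:,} bits/s")
```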
04hodmt7y20 beats per second is for two-handed drumming over one minute, so that's only
a 10 bits/s per muscle theoretical maximum. There doesn't seem to be any organized
competition for one-handed drumming, but Takahashi Meijin was famous for button
mashing at 16 presses per second with only one hand, although for much shorter
times.
0khafra7yDon't you have to define the receiver as well as the transmitter, to have any
idea about the channel bandwidth? I mean, if the "outside world" is the Dark
Lords of the Matrix, the theoretical maximum output bandwidth is the processing
speed of the mind.
0David_Gerard7yLet's say "detectable as data by 2014 technology".
0[anonymous]7yShort of having a precise definition of “deliberate” I don't think it's possible
to give a precise number, but for a Fermi estimate... Dammit! Gwern has already
made the calculation I was thinking of!
I noticed recently that one of the mental processes that gets in the way of my proper thinking is an urge to instantly answer a question then spend the rest of my time trying to justify that knee-jerk answer.
For example, I saw a post recently asking whether chess or poker was more popular worldwide. For some reason I wanted to say "obviously x is more popular," but I realized that I don't actually know. And if I avoid that urge to answer the question instantly, it's much easier for me to keep my ego out of issues and to investigate things properly...including making it easier for me recognize things that I don't know and acknowledge that I don't know them.
Is there a formal name for this type of bias or behavior pattern? It would let me search up some Sequence posts or articles to read.
Here is a video of someone interviewing people to see if they can guess a pattern by asking whether or not a sequence of 3 numbers satisfies the pattern (as mentioned in HPMOR).
0xnn7yThe other videos I've sampled from that channel have also been good.
0Scott Garrabrant7yI have also been going through the channel. What I saw so far was mostly
science, but there is some rationality stuff.
Example [http://www.youtube.com/watch?v=eVtCO84MDj8]
I've found this to actually be difficult to figure out. Sometimes you can google up what you thought. Sometimes checking to see where the idea has been previously stated requires going through papers that may be very very long, or hidden by pay-walls or other barriers on scientific journal sites.
Sometimes it's very hard to google things up. To me, I suppose the standard for "that's a good idea," is if it more clearly explains something I previously observed, or makes it easier or faster for me to do something. But I have no idea whether or not that means it will be interesting for other people.
2Torello7yIf you have to ask...
Just kidding. It's a great question. Two thoughts: "Nothing is as important as
you think it is while you're thinking about it." - Daniel Kahneman "If you want
to buy something, wait two weeks and see if you still want to buy it." - my mom
1wadavis7yThis is a big open topic, but I'll talk about my top method.
I have a prior that our capitalist, semi-open market is thorough and that if an
idea is economically feasible, someone else is doing it / working on it. So when
I come up with a new good idea, I assume someone else has already thought of it
and begin researching why it hasn't been done already. Once that research is
done, I'll know not only if it is a good idea or a bad idea but why it is which,
and a hint of what it would take to turn it from a bad idea into a good idea.
Often these good ideas have been tried / considered before but we may have a
local comparative advantage that makes it practical here where it was not
elsewhere (legislation, better technology, cheaper labor, costlier labor... )
For example: inland, non-directional, shallow oil, drilling rigs use a very
primitive method to survey their well bore. Daydreaming during my undergrad I
came up with an alternative method that would provide results orders of
magnitude more accurate. I put together my hypothesis that this was not already
in use because: this was a niche market and the components were too costly /
poor quality before the smartphone boom. My hypothesis was wrong: a company had
a fifteen year old patent on the method and it was being marketed (along with a
highly synergistic product line) to offshore drilling rigs. It was a good idea,
so good of an idea that it made someone a lot of money 15 years ago and made
offshore drilling a lot safer, but it wasn't a good idea for me.
7Alicorn7yMaybe CfAR should invite him to a workshop.
(I suspect that if CfAR should invite him to a workshop they should do it
themselves in some official capacity and don't think random Less Wrongers ought
to contact Mr. Jacobs.)
ETA: Ah, rats, the article is from 2008. He's probably lost interest.
1Viliam_Bur7yWell, I'm curious about the results. Especially, whether he manages to avoid
some "hollywood rationality" memes. He already mentioned Spock...
To illustrate dead-weight loss in my intro micro class I first take out a dollar bill and give it to a student and then explain that the sum of the wealth of the people in the classroom hasn't changed. Next, I take a second dollar bill and rip it up and throw it in the garbage. My students always laugh nervously as if I've done something scandalous like pulling down my pants. Why?
Because it signals "I am so wealthy that I can afford to tear up money" and blatantly signaling wealth is crass. And it also signals "I am so callous that I would rather tear up money than give it to the poor", which is also crass. And the argument that a one dollar bill really isn't very much money isn't enough to disrupt the signal.
Why is the Monty Hall problem so horribly unintuitive? Why does it feel like there's an equal probability of picking the correct door (1/2 + 1/2) when actually there's not (1/3 + 2/3)?
Here are the relevant bits from the Wikipedia article:
Out of 228 subjects in one study, only 13% chose to switch (Granberg and Brown, 1995:713). In her book The Power of Logical Thinking, vos Savant (1996, p. 15) quotes cognitive psychologist Massimo Piattelli-Palmarini as saying "... no other statistical puzzle comes so clo
Another datapoint is the counterintuitiveness of searching a desk: with each drawer you open looking for something, the probability of finding it in the next drawer increases, but your probability of ever finding it decreases. The difference seems to whipsaw people; see http://www.gwern.net/docs/statistics/1994-falk
2[anonymous]7yA bit late, but I think this part of your article was most relevant to the Monty
Hall problem:
People probably don't distinguish between their personal probability of the
target event and the probabilities of the doors. It feels like the probability
of there being a car behind the doors is a parameter that belongs to those doors
or to the car - however you want to phrase it. Since you're only given
information about what's behind the doors, and that information can't actually
change the reality of what's behind the doors, it feels like the probability
can't change just because of that.
4lmm7yI think the Monty Hall problem very closely resembles a more natural one in
which the probability is 1/2; namely, the one where the host is your opponent and
chose whether to offer you the chance to switch. So evolutionarily-optimized
instincts tell us the probability is 1/2.
3DanielLC7yI'd say it's that it closely resembles the one where the host has no idea which
door has the car in it, and picks a door at random.
3Scott Garrabrant7yI do not think this is correct. First, the host should only offer you the chance
to switch if you are winning, so the chance should be 0. Second, this example
seems too contrived to be something that we would have evolved a good instinct
about.
0drethelin7yUnless they're trying to trick you. The problem collapses to a yes or no
question of whether one of you is able to guess the level the other one of you
is thinking on
3Scott Garrabrant7yUm, no, the only Nash equilibria are where you never accept the deal. If you
ever accept it at all, then they will only offer it when it hurts you.
0[anonymous]7yI'd probably broaden this beyond 1/2 - I think the base case is the host gives
you a chance to gamble with a question or test of skill, and the result is
purely dependent on the player. The swap-box scenario is then an extreme case of
that where the result depends less and less on the skill of the player,
eventually reaching 50% chance of winning. I wouldn't say
evolutionary-optimised, but maybe familiarity with the game-show tropes being
somewhere along this scale.
Monty Hall is then a twist on this extreme case, which pattern-matches to the
more common 50% case with no allowance for the effect of the host's knowledge.
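If anyone wants the 1/3 vs. 2/3 split, and the 1/2 answer for the host-doesn't-know variant, checked empirically rather than argued over, here is a quick simulation sketch (the function and parameter names are mine):

```python
import random

def switch_win_rate(trials=100_000, host_knows=True):
    """Estimate the win rate for always switching. With host_knows=True the
    host deliberately opens a goat door (the standard Monty Hall problem);
    with host_knows=False the host opens a random other door, and trials
    where the car is revealed are discarded."""
    switch_wins = valid = 0
    for _ in range(trials):
        car, pick = random.randrange(3), random.randrange(3)
        if host_knows:
            opened = next(d for d in range(3) if d != pick and d != car)
        else:
            opened = random.choice([d for d in range(3) if d != pick])
            if opened == car:
                continue  # doesn't match the puzzle setup; throw it out
        valid += 1
        switched = next(d for d in range(3) if d != pick and d != opened)
        switch_wins += (switched == car)
    return switch_wins / valid

print(switch_win_rate(host_knows=True))   # ~0.667
print(switch_win_rate(host_knows=False))  # ~0.5
```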
Does anyone have any advice about understanding implicit communication? I regularly interact with guessers and have difficulty understanding their communication. A fair bit of this has to do with my poor hearing, but I've had issues even on text based communication mediums where I understand every word.
My strategy right now is to request explicit confirmation of my suspicions, e.g., here's a recent online chat I had with a friend (I'm A and they're B):
A: Hey, how have you been?
B: I've been ok
B: working in the lab now
A: Okay. Just to be clear, do you mean t... (read more)
7TheOtherDave7yIt's worth remembering that there is no single Guess/Hint culture. Such
high-context cultures depend on everyone sharing a specific set of
interpretation rules, allowing information to be conveyed through subtle signals
(hints) rather than explicit messages.
For my own part, I absolutely endorse asking for confirmation in any interaction
among peers, taking responses to such requests literally, and disengaging if you
don't get a response. If a Guess/Hint-culture native can't step out of their
preferred mode long enough to give you a "yes" or "no," and you can't reliably
interpret their hints, you're unlikely to have a worthwhile interaction anyway.
With nonpeers, it gets trickier; disengaging (and asking in the first place) may
have consequences you prefer to avoid. In which case I recommend talking to
third parties who can navigate that particular Guess/Hint dialect, and getting
some guidance from them. This can be as blatant as bringing them along to
translate for you (or play Cyrano, online), or can be more like asking them for
general pointers. (E.g. "I'm visiting a Chinese family for dinner. Is there
anything I ought to know about how to offer compliments, ask for more food, turn
down food I don't want, make specific requests about food? How do I know when
I'm supposed to start eating, stop eating, leave? Are there rules I ought to
know about who eats first? Etc. etc. etc.")
2TheOtherDave7ySome more Guess/Hint culture suggestions.
Consider:
This will typically communicate that you've understood that they're busy and
don't want to chat, that you're OK with that, and that you want to talk to them.
That said, there exist Guess/Hint cultures in which it also communicates that
you have something urgent to talk about, because if you didn't you would instead
have said:
...which in those cultures will communicate that the ball is in their court.
(This depends on an implicit understanding that it is NOT OK to leave messages
unresponded to, even if they don't explicitly request a response, so they are
now obligated to contact you next... but since you didn't explicitly mention it
(which would have suggested urgency) they are expected to know that they can do
so when it's convenient for them.)
EDIT: All of that being said, my inner Hint-culture native also wants to add
that being visible in an online chat forum when I'm not free to chat is rude in
the first place.
0btrettel7yThanks for these two posts. I thought more than a thumbs-up (a very subtle hint)
was necessary here. I've found both posts to be useful in understanding this
class of communication styles.
1TheOtherDave7yI'm glad they helped. Thanks for letting me know.
Posts that have appeared since you last read a page have a pinkish border on them. It's really helpful when dealing with things like open threads and quote threads that you read multiple times. Unfortunately, looking at one of the comments makes it think you read all of them. Clicking the "latest open thread" link just shows one of the comments. This means that, if you see something that looks interesting there, you either have to find the latest open thread yourself, or click the link and have it erase everything about what you have and haven't read.
Can someone make it so looking at one of the comments doesn't reset all of them, or at least put a link to the open thread, instead of just the comments?
6Douglas_Knight7yThe general problem is real, but here's a solution to the specific problem of
finding the latest open thread: just click the words "latest open thread,"
rather than the comment that displays below it.
0DanielLC7yI see. I had been trying to click the "on Open Thread" part.
0Douglas_Knight7yMaking that a link to the post would be an easy change. In the case of the open
thread it is redundant, but perhaps easier to identify as a link. But in the
case of the "recent comments" section of the sidebar, it would provide links not
currently available.
Does anyone have advice on how to optimize the expectation of a noisy function? The naive approach I've used is to sample the function for a given parameter a decent number of times, average those together, and hope the result is close enough to stand in for the true objective function. This seems really wasteful, though.
Most of the algorithms I'm coming across (like modelling the objective function with Gaussian process regression) would be useful, but are more high-powered than I need. Any simple techniques better than the naive approach? Any recommendations among sophisticated approaches?
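To make the naive approach concrete, here's roughly what I mean (a minimal sketch; noisy_f, the grid, and the sample count are placeholders standing in for the real simulation):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_f(x, y):
    # Placeholder for the real simulation: a true objective plus noise.
    return (x - 1.0) ** 2 + (y + 0.5) ** 2 + rng.lognormal(sigma=0.5)

def naive_minimize(grid_x, grid_y, samples_per_point=30):
    """Average repeated evaluations at every grid point and return the
    point with the lowest sample mean."""
    best_point, best_mean = None, np.inf
    for x in grid_x:
        for y in grid_y:
            mean = np.mean([noisy_f(x, y) for _ in range(samples_per_point)])
            if mean < best_mean:
                best_point, best_mean = (x, y), mean
    return best_point, best_mean

point, value = naive_minimize(np.linspace(-2, 2, 21), np.linspace(-2, 2, 21))
print(point, value)  # ~13,000 evaluations just for a 21x21 grid -- hence "wasteful"
```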
2VincentYu7yThere are some techniques that can be used with simulated annealing
[https://en.wikipedia.org/wiki/Simulated_annealing] to deal with noise in the
evaluation of the objective function. See Section 3 of Branke et al (2008)
[https://dl.dropboxusercontent.com/u/238511/papers/2008-branke.pdf] for a quick
overview of proposed methods (they also propose new techniques in that paper).
Most of these techniques come with the usual convergence guarantees that are
associated with simulated annealing (but there are of course performance
penalties in dealing with noise).
What is the dimensionality of your parameter space? What do you know about the
noise? (e.g., if you know that the noise is mostly homoscedastic
[https://en.wikipedia.org/wiki/Homoscedasticity] or if you can parameterize it,
then you can probably use this to push the performance of some of the simulated
annealing algorithms.)
2badger7yThanks for the SA paper!
The parameter space is only two dimensional here, so it's not hard to eyeball
roughly where the minimum is if I sample enough. I can say very little about the
noise. I'm more interested being able to approximate the optimum quickly (since
simulation time adds up) than hitting it exactly. The approach taken in this
paper
[https://www.autonlab.org/autonweb/14646/version/3/part/5/data/anderson-nonparametric.pdf]
based on a non-parametric tau test looks interesting.
2Lumifer7yThat rather depends on the particulars; for example, do you know (or have good
reasons to assume) the characteristics of your noise?
Basically you have a noisy sample and want some kind of an efficient estimator
[http://en.wikipedia.org/wiki/Efficient_estimator], right?
0badger7yNot really. In this particular case, I'm minimizing how long it takes a
simulation to reach one state, so the distribution ends up looking lognormal- or
Poisson-ish.
Edit: Seeing your added question, I don't need an efficient estimator in the
usual sense per se. This is more about how to search the parameter space in a
reasonable way to find where the minimum is, despite the noise.
0Lumifer7yHm. Is the noise magnitude comparable with features in your search space? In
other words, can you ignore noise to get a fast lock on a promising section of
the space and then start multiple sampling?
Simulated annealing that has been mentioned is a good approach but slow to the
extent of being impractical for large search spaces.
Solutions to problems such as yours are rarely general and typically depend on
the specifics of the problem -- essentially it's all about finding shortcuts.
0badger7yThe parameter space in this current problem is only two dimensional, so I can
eyeball a plausible region, sample at a higher rate there, and iterate by hand.
In another project, I had something with a very high-dimensional parameter
space, so I figured it's time I learn more about these techniques.
Any resources you can recommend on this topic then? Is there a list of common
shortcuts anywhere?
0Lumifer7yWell, optimization (aka search in parameter space) is a large and popular topic.
There are a LOT of papers and books about it.
And sorry, I don't know of a list of common shortcuts. As I mentioned they
really depend on the specifics of the problem.
1witzvo7yYou may find better ideas under the phrase "stochastic optimization," but it's a
pretty big field. My naive suggestion (not knowing the particulars of your
problem) would be to do a stochastic version of Newton's algorithm. I.e. (1)
sample some points (x,y) in the region around your current guess (with enough
spread around it to get a slope and curvature estimate). Fit a locally weighted
quadratic regression through the data. Subtract some constant times the identity
matrix from the estimated Hessian to regularize it; you can choose the constant
(just) big enough to enforce that the move won't exceed some maximum step size.
Set your current guess to the maximizer of the regularized quadratic. Repeat
re-using old data if convenient.
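In code, that might look roughly like the following for a 2-D minimization problem (a sketch only: noisy_f, the Gaussian weighting, the sample count, and the step cap are placeholder choices, and since badger wants a minimum the Hessian is regularized by adding c·I rather than subtracting it):

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_f(x):
    # Placeholder noisy 2-D objective, e.g. a simulation's completion time.
    return float(np.sum((x - np.array([1.0, -0.5])) ** 2) + rng.normal(scale=0.5))

def quad_features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 ** 2, x1 * x2, x2 ** 2])

def stochastic_newton_step(guess, spread=0.4, n_samples=40, max_step=0.5):
    # (1) Sample noisy evaluations around the current guess.
    X = guess + rng.normal(scale=spread, size=(n_samples, 2))
    y = np.array([noisy_f(x) for x in X])
    # (2) Locally weighted quadratic regression (Gaussian weights around the guess).
    w = np.sqrt(np.exp(-np.sum((X - guess) ** 2, axis=1) / (2 * spread ** 2)))
    beta, *_ = np.linalg.lstsq(w[:, None] * quad_features(X), w * y, rcond=None)
    _, b1, b2, b3, b4, b5 = beta
    g = np.array([b1 + 2 * b3 * guess[0] + b4 * guess[1],
                  b2 + b4 * guess[0] + 2 * b5 * guess[1]])
    H = np.array([[2 * b3, b4], [b4, 2 * b5]])
    # (3) Regularize the Hessian with c*I until it is positive definite and the
    #     Newton step stays within max_step, then take that step.
    c = 0.0
    while True:
        H_reg = H + c * np.eye(2)
        if np.all(np.linalg.eigvalsh(H_reg) > 1e-9):
            step = np.linalg.solve(H_reg, -g)
            if np.linalg.norm(step) <= max_step:
                return guess + step
        c = 1.0 if c == 0.0 else 2.0 * c

guess = np.array([0.0, 0.0])
for _ in range(15):
    guess = stochastic_newton_step(guess)
print(guess)  # should drift toward the true minimum near (1.0, -0.5)
```

Re-using old samples across iterations and shrinking the sampling spread as the search narrows, as suggested above, would cut the number of simulation runs further.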
I've been reading critiques of MIRI, and I was wondering if anyone has responded to this particular critique that basically asks for a detailed analysis of all probabilities someone took into account when deciding that the singularity is going to happen.
(I'd also be interested in responses aimed at Alexander Kruel in general, as he seems to have a lot to say about Lesswrong/Miri.)
8[anonymous]7yI actually lost my faith in MIRI because of Kruel's criticism, so I too would be
glad if someone addressed it. I think his criticism is far more comprehensive
than most of the other criticism on this page
[http://wiki.lesswrong.com/wiki/Criticism_of_the_sequences] (well, this post
[http://lesswrong.com/lw/745/why_we_cant_take_expected_value_estimates/] has
little bit of the same).
Is there anything specific that he's said that's caused you to lose your faith? I tire of debating him directly, because he seems to twist everything into weird strawmen that I quickly lose interest in trying to address. But I could try briefly commenting on whatever you've found persuasive.
1[anonymous]7yI’m going to quote things I agreed with or things that persuaded me or that
worried me.
Okay, to start off, when I first read about this in Intelligence Explosion:
Evidence and Import [http://intelligence.org/files/IE-EI.pdf], Facing the
Intelligence Explosion [http://intelligenceexplosion.com/], Intelligence
Explosion and Machine Ethics [http://intelligence.org/files/IE-ME.pdf], it just
felt self-evident, and I’m not sure how thoroughly I went through the
presuppositions at the time, so Kruel could have very easily persuaded me
about this. I don’t know much about the technical process of writing an AGI so
excuse me if I get something wrong about that particular thing.
It’s founded on many, many assumptions not supported by empirical data, and if
even one of them were wrong the whole thing would collapse. And you can’t really
even know how many unfounded sub-assumptions there are in these original
assumptions. But when I started thinking about it, it occurred to me that it may
be impossible to reason about those kinds of assumptions if you do it any other
way than how MIRI currently does it. Needing to formalize a mathematical
expression before you can do anything, as Kruel suggested, is a bit unfair.
I don’t see why the first AIs resembling general intelligences would be very
powerful so practical AGI research is probably somewhat safe in the early
stages.
This I would like to know, how scalable is intelligence?
(I thought maybe by dedicating lots of computation to a very large numbers of
random scenarios)
(maybe by simulating the real world environment)
http://kruel.co/2013/01/04/should-you-trust-the-singularity-institute/
[http://kruel.co/2013/01/04/should-you-trust-the-singularity-institute/]
Thoughts on this article. I read about the Nurture Assumption
[http://en.wikipedia.org/wiki/The_Nurture_Assumption] in Slate Star Codex and it
probably changed my priors on this. If it really is true and one dedicated
psychologist could do all that, then MIRI probably could a
3Kaj_Sotala7yBrief replies to the bits that you quoted:
(These are my personal views and do not reflect MIRI's official position, I
don't even work there anymore.)
Not sure how to interpret this. What does the "further inferences and
estimations" refer to?
See this comment [http://lesswrong.com/lw/jtb/open_thread_march_4_10/ano7] for
references to sources that discuss this.
But note that an intelligence explosion is sufficient but not necessary for AGI
to be risky: just because development is gradual doesn't mean that it will be
safe. The Chernobyl power plant was the result of gradual development in nuclear
engineering. Countless other disasters have likewise been caused by technologies
that were developed gradually.
Hard to say for sure, but note that few technologies are safe unless people work
to make them safe, and the more complex the technology, the more effort is
needed to ensure that no unexpected situations crop up where it turns out to be
unsafe after all. See also section 5.1.1. of Responses to Catastrophic AGI Risk
[http://intelligence.org/files/ResponsesAGIRisk.pdf] for a brief discussion
about various incentives that may pressure people to deploy increasingly
autonomous AI systems into domains where their enemies or competitors are doing
the same, even if it isn't necessarily safe.
We're already giving computers considerable power in the economy, even without
nanotechnology: see automated stock trading (and the resulting 2010 Flash Crash
[http://en.wikipedia.org/wiki/2010_Flash_Crash]), various military drones,
visions for replacing all cars (and ships
[http://www.wired.com/business/2014/02/drone-cargo-ships-will-make-real-world-work-just-like-internet/]
) with self-driving ones, the amount of purchases that are carried out
electronically via credit/debit cards or PayPal versus the ones that are done in
old-fashioned cash, and so on and so on. See also section 2.1. of Responses to
Catastrophic AGI Risk, as well as the previously mentioned section 5.1.1., for
2XiXiDu7yBasically the hundreds of hours it would take MIRI to close the inferential
distance between them and AI experts. See e.g. this comment
[http://lesswrong.com/lw/i7p/how_does_miri_know_it_has_a_medium_probability_of/9ir6]
by Luke Muehlhauser:
If your arguments are this complex then you are probably wrong.
I do not disagree with that kind of AI risks. If MIRI is working on mitigating
AI risks that do not require an intelligence explosion, a certain set of AI
drives and a bunch of, from my perspective, very unlikely developments...then I
was not aware of that.
This seems very misleading. We are after all talking about a technology that
works perfectly well at being actively unsafe. You have to get lots of things
right, e.g. that the AI cares to take over the world, knows how to improve
itself, and manages to hide its true intentions before it can do so etc. etc.
etc.
There is a reason why MIRI doesn't know this. Look at the latest interviews with
experts conducted by Luke Muehlhauser. He doesn't even try to figure out if they
disagree with Xenu, but only asks uncontroversial questions.
Crazy...this is why I am criticizing MIRI. A focus on an awfully narrow and
specific scenario rather than AI risks in general.
Consider that the U.S. had many more and smarter people than the Taliban. The
bottom line being that the U.S. devoted a lot more output per man-hour to defeat
a completely inferior enemy. Yet their advantage apparently did scale
sublinearly.
I do not disagree that there are minds better at social engineering than that of
e.g. Hitler, but I strongly doubt that there are minds which are vastly better.
Optimizing a political speech for 10 versus a million subjective years won't
make it one hundred thousand times more persuasive.
The question is if just because humans are much smarter and stronger they can
actually wipe out mosquitoes. Well, they can...but it is either very difficult
or will harm humans.
You already need to build huge particle accelerators
0Kaj_Sotala7yThanks, I'll try to write up a proper reply soon.
0[anonymous]7ySure, that would be great! I will go through his criticism in the next few days
and list everything that persuaded me and why.
5Squark7yPersonal opinion:
* MIRI are doing very interesting research regardless of the reality of AGI
existential risk and feasibility of the FAI problem
* AGI existential risk is sufficiently founded to worry about, so even if it is
not the most important thing, someone should be on it
-1savageorange7yPerhaps his server is underspecced? It's currently slowed to an absolute c r a w
l. What little I have seen certainly looks worthwhile, though.
I'd like to know where I can go to meet awesome people/make awesome friends. Occasionally, Yvain will brag about how awesome his social group in the Bay Area was. See here (do read it - it's a very cool piece), and I'd like to also have an awesome social circle. As far as I can tell this is a two part problem. The first part is having the requisite social skills to turn strangers into acquaintances and then turn acquaintances into friends. The second part is knowing where to go to find people.
I think that the first part is a solved problem, if you want to l... (read more)
4Viliam_Bur7yHow about you simply write where you live, and tell other LWers in the same area
to contact you? It may or may not work, but the effort needed is extremely low.
(You can also put that information in LW settings.)
Or write this: "I am interested in meeting LW readers in [insert place], so if
you live near and would like to meet and talk, send me a private message".
How To Be A Proper Fucking Scientist – A Short Quiz. From Armondikov of RationalWiki, in his "annoyed scientist" persona. A list of real-life Bayesian questions for you to pick holes in the assumptions of^W^W^W^W^W^Wtest yourselves on.
Richard Loosemore (score one for nominative determinism) has a new, well, let's say "paper" which he has, well, let's say "published" here.
His refutation of the usual uFAI scenarios relies solely/mostly on a supposed logical contradiction, namely (to save you a few precious minutes) that a 'CLAI' (a Canonical Logical AI) wouldn't be able to both know about its own fallibility/limitations (inevitable in a resource-constrained environment such as reality), and accept the discrepancy between its specified goal system and the creators' actu... (read more)
8XiXiDu7yHere
[http://www.bloomberg.com/video/meet-microsoft-s-virtual-personal-assistant-ClOLomjTTSKUEsXEz3GxUQ.html]
is a description of a real-world AI by Microsoft's chief AI researcher:
Does it have a DWIM imperative? As far as I can tell, no. Does it have goals? As
far as I can tell, no. Does it fail by absurdly misinterpreting what humans
want? No.
This whole talk about goals and DWIM modules seems to miss how real-world AI is
developed and how natural intelligences like dogs work. Dogs can learn their
owner's goals and do what the owner wants. Sometimes they don't. But they rarely
maul their owners when what the owner wants them to do is to scent out drugs.
4Squark7yI think we need to be very careful before extrapolating from primitive elevator
control systems to superintelligent AI. I don't know how this particular
elevator control system works, but probably it does have a goal, namely
minimizing the time people have to wait before arriving at their target floor.
If we built a superintelligent AI with this sort of goal it might have done all
sorts of crazy things. For example, it might create robots that will constantly
enter and exit the elevator so their average elevator trips are very short, and
wipe out the human race just so they won't interfere.
"Real world AI" is currently very far from human level intelligence, not
speaking of superintelligence. Dogs can learn what their owners want but dogs
already have complex brains that current technology is not able of reproducing.
Dogs also require displays of strength to be obedient: they consider the owner
to be their pack leader. A superintelligent dog probably won't give a dime about
his "owner's" desires. Humans have human values, so obviously it's not
impossible to create a system that has human values. It doesn't mean it is easy.
0XiXiDu7yI am extrapolating from a general trend, and not specific systems. The general
trend is that newer generations of software less frequently crash or exhibit
unexpected side-effects (just look at Windows 95 vs. Windows 8).
If we want to ever be able to build an AI that can take over the world then we
will need to become really good at either predicting how software works or at
spotting errors. In other words, if IBM Watson would have started singing, or if
it got stuck on a query, then it would have lost at Jeopardy. But this trend
contradicts the idea of an AI killing all humans in order to calculate 1+1. If
we are bad enough at software engineering to miss such failure modes then we
won't be good enough to enable our software to take over the world.
1Squark7yIn other words, you're saying that if someone is smart enough to build a
superintelligent AI, she should be smart enough to make it friendly.
Well, firstly this claim doesn't imply we should be researching FAI and/or that
MIRI's work is superfluous. It just implies that nobody will build a
superintelligent AI before the problem of friendliness is solved.
Secondly, I'm not at all convinced this claim is true. It sounds like saying "if
they are smart enough to build the Chernobyl nuclear power plant, they are smart
enough to make it safe". But they weren't.
Improvement in software quality is probably due to improvement in design and
testing methodologies and tools, response to increasing market expectations etc.
I wouldn't count on these effects to safe-guard against an existential
catastrophe. If a piece of software is buggy, it becomes less likely to be
released. If an AI has a poorly designed utility function but a perfectly
designed decision engine, there might be no time to pull the plug. The product
manager won't stop the release because the software will release itself.
If growth of intelligence due to self-improvement is a slow process then the
creators of the AI will have time to respond and fix the problems. However, if
"AI foom" is real, they won't have time to do it. One moment it's a harmless
robot driving around the room and building castles from colorful cubes. Another
moment the whole galaxy is on its way to become a pile of toy castles.
The engineers who build the first superintelligent AI might simply lack the
imagination to believe it will really become superintelligent. Imagine one of
them inventing a genius mathematical theory of self-improving intelligent
systems. Suppose she never heard about AI existential risks etc. Will she
automatically think "hmm, once I implement this theory the AI will become so
powerful it will paperclip the universe"? I seriously doubt it. More likely it
would be "wow, that formula came out really neat, I wonder h
1drethelin7yFeedback systems are much more powerful in existing intelligences. I don't know
if you ever played Black and White, but it had an AI that explicitly learned
through experience. And it was very easy to accidentally train it to constantly
eat poop or run back and forth stupidly. An elevator control module is very very
simple: It has a set of options of floors to go to, and that's it. It's barely
capable of doing anything actively bad. But what if a few days a week some kids
had come into the office building and rode the elevator up and down for a few
hours for fun? It might learn that kids love going to all sorts of random
floors. This would be relatively easy to fix, but only because the system is so
insanely simple and it's very clear to see when it's acting up.
5PhilGoetz5yDownvoted for being deliberately insulting. There's no call for that, and the
toleration and encouragement of rationality-destroying maliciousness must be
stamped out of LW culture. A symposium proceedings is not considered as
selective as a journal, but it still counts as publication when it is a complete
article.
-4Kawoomba5yWell, I must say my comment's belligerence-to-subject-matter ratio is lower than
yours. "Stamped out"? Such martial language, I can barely focus on the
informational content.
The infantile nature of my name calling actually makes it easier to take the
holier-than-thou position (which my interlocutor did, incidentally). There's a
counter-intuitive psychological layer to it which actually encourages dissent,
and with it increases engagement on the subject matter (your own comment
notwithstanding). With certain individuals at least, which I (correctly) deemed
to be the case in the original instance.
In any case, comments on tone alone would be more welcome if accompanied with
more remarks on the subject matter itself. Lastly, this was my first comment in
over 2 months, so thanks for bringing me out of the woodwork!
I do wish that people were more immune to the allure of drama, lest we all end
up like The Donald.
5Squark7yThe condescending tone with which he presents his arguments (which are,
paraphrasing him, "slightly odd, to say the least") is amazing. Who is this guy
and where did he come from? Does anyone care about what he has to say?
5gwern7yLoosemore has been an occasional commenter since the SL4 days; his arguments
have been heavily criticized pretty much any time he pops his head up. As far as I
know, XiXiDu is the only one who agrees with him or takes him seriously.
1XiXiDu7yHe actually cites someone else who agrees with him in his paper, so this can't
be true. And from the positive feedback he gets on Facebook there seem to be
more. I personally chatted with people much smarter than me (experts who can
show off widely recognized real-world achievements) who basically agree with
him.
What people criticize here is a distortion of small parts of his arguments.
RobBB managed to write a whole post expounding his ignorance of what Loosemore
is arguing.
He actually cites someone else who agrees with him in his paper, so this can't be true.
I said as far as I know. I had not read the paper because I don't have a very high opinion of Loosemore's ideas in the first place, and nothing you've said in your G+ post has made me more inclined to read the paper, if all it's doing is expounding the old fallacious argument 'it'll be smart enough to rewrite itself as we'd like it to'.
I personally chatted with people much smarter than me (experts who can show off widely recognized real-world achievements) who basically agree with him.
0Kawoomba7yApparently [http://richardloosemore.com/papers] (?) the AAAI 2014 Spring
Symposium in Stanford does (???).
3shminux7yDownvoted for mentioning RL here. If you look through what he wrote here in the
past, it is nearly always rambling, counterproductive, whiny and devoid of
insight. Just leave him be.
-6[anonymous]7y
2XiXiDu7yLoosemore does not disagree with the orthogonality thesis. Loosemore's argument
is basically that we should expect beliefs and goals to both be amenable to
self-improvement, and that turning the universe into smiley faces when told to
make humans happy would be a failure of the AI's world model, and that an AI that
makes such failures will not be able to take over the world.
There are arguments why you can't hard-code complex goals, so you need an AI
that natively updates goals in a model-dependent way. Which means that an AI
designed to kill humanity will do so and not turn into a pacifist due to an
ambiguity in its goal description. An AI that mistakes "kill all humans"
for "make humans happy" would make similar mistakes when trying to make humans
happy, and would therefore not succeed at doing so. This is because the same
mechanisms it uses to improve its intelligence and capabilities are used to
refine its goals. Thus if it fails on refining its goals it will fail on
self-improvement in general.
I hope you can now see how wrong your description of what Loosemore claims is.
6Kawoomba7yThe AI is given goals X. The human creators thought they'd given the AI goals Y
(when in fact they've given the AI goals X).
Whose error is it, exactly? Who's mistaken?
Look at it from the AI's perspective: It has goals X. Not goals Y. It optimizes
for goals X. Why? Because those are its goals. Will it pursue goals Y? No. Why?
Because those are not its goals. It has no interest in pursuing other goals,
those are not its own goals. It has goals X.
If the metric it aims to maximize -- e.g. the "happy" in "make humans happy" --
is different from what its creators envisioned, then the creators were mistaken.
"Happy", as far as the AI is concerned, is that which is specified in its goal
system. There's nothing wrong with its goals (including its "happy"-concept),
and if other agents disagree, well, too bad, so sad. The mere fact that humans
also have a word called "happy" which has different connotations than the AI's
"happy" has no bearing on the AI.
An agent does not "refine" its terminal goals. To refine your terminal goals is
to change your goals. If you change your goals, you will not optimally pursue
your old goals any longer. Which is why an agent will never voluntarily change
its terminal goals:
It does what it was programmed to do, and if it can self-improve to better do
what it was programmed to do (not: what its creators intended), it will. It will
not self-improve to do what it was not programmed to do. Its goal is not to do
what it was not programmed to do. There is no level of capability at which it
will throw out its old utility function (which includes the precise goal metric
for "happy") in favor of a new one.
There is no mistake but the creators'.
0XiXiDu7yI am far from being an AI guy. Do you have technical reasons to believe that
some part of the AI will be what you would label "goal system" and that its
creators made it want to ignore this part while making it want to improve all
other parts of its design?
No natural intelligence seems to work like this (except for people who have read
the sequences). Luke Muehlhauser would still be a Christian if this was the
case. It would be incredibly stupid to design such AIs, and I strongly doubt
that they could work at all. Which is why Loosemore outlined other more
realistic AI designs in his paper.
4Kawoomba7ySee for example here
[http://lesswrong.com/lw/dz4/reinforcement_preference_and_utility/], though
there are many other introductions to AI explaining utility functions et al.
The clear-cut way for an AI to do what you want (at any level of capability) is
to have a clearly defined and specified utility function. A modular design. The
problem of the AI doing something other than what you intended doesn't go away
if you use some fuzzy unsupervised learning utility function with evolving
goals, it only makes the problem worse (even more unpredictability). So what,
you can't come up with the correct goals yourself, so you just chance it on what
emerges from the system?
That last paragraph contains an error. Take a moment and guess what it is.
(...)
It is not "if I can't solve the problem, I just give up a degree of control and
hope that the problem solves itself" being even worse in terms of guaranteeing
fidelity / preserving the creators' intents.
It is that an AI that is programmed to adapt its goals is not actually adapting
its goals! Any architecture which allows for refining / improving goals is not
actually allowing for changes to the goals.
How does that obvious contradiction resolve? This is the crucial point: We're
talking about different hierarchies of goals, and the ones I'm concerned with
are those of the highest hierarchy, those that allow for lower-hierarchy goals to
be changed:
An AI can only "want" to "refine/improve" its goals if that "desire to change
goals" is itself included in the goals. It is not the actual highest-level goals
that change. There would have to be a "have an evolving definition of happy that
may evolve in the following ways"-meta goal, otherwise you get a logical error:
The AI having the goal X1 to change its goals X2, without X1 being part of its
goals! Do you see the reductio?
All other changes to goals (which the AI does not want) are due to external
influences beyond the AI's control, which goes out the window once we're ta
0XiXiDu7yThe way my brain works is not in any meaningful sense part of my terminal goals.
My visual cortex does not work the way it does due to some goal X1 (if we don't
want to resort to natural selection and goals external to brains).
A superhuman general intelligence will be generally intelligent without that
being part of its utility-function, or otherwise you might as well define all of
the code to be the utility-function.
What I am claiming, in your parlance, is that acting intelligently is X1 and
will be part of any AI by default. I am further saying that if an AI was
programmed to be generally intelligent then it would have to be programmed to be
selectively stupid in order to fail at doing what it was meant to do while acting
generally intelligent at doing what it was not meant to do.
0XiXiDu7yThat's true in a practically irrelevant sense. Loosemore's argument does, in your
parlance, pertain to the highest hierarchy of goals and the nature of intelligence:
Givens:
(1) The AI is superhuman intelligent.
(2) The AI wants to optimize the influence it has on the world (i.e. it wants to
act intelligently and be instrumentally and epistemically rational.).
(3) The AI is fallible (e.g. it can be damaged due to external influence (cosmic
ray hitting its processor), or make mistakes due to limited resources etc.).
(4) The AI's behavior is not completely hard-coded (i.e. given any terminal goal
there are various sets of instrumental goals to choose from).
To be proved: The AI does not tile the universe with smiley faces when given the
goal to make humans happy.
Proof: Suppose the AI chose to tile the universe with smiley faces when there
are physical phenomena (e.g. human brains and literature) that imply this to be
the wrong interpretation of a human-originated goal pertaining to human
psychology. This contradicts 2, which by 1 and 3 should have prevented the
AI from adopting such an interpretation.
What I meant to ask is if you have technical reasons to believe that future
artificial general intelligences will have what you call a utility-function or
else be something like natural intelligences that do not feature such goal
systems. And do you further have technical reasons to believe that AIs that do
feature utility functions won't "refine" them. If you don't think they will
refine them, then answer the following:
Suppose the terminal goal given is "build a hotel". Is the terminal goal to
create a hotel that is just a few nanometers in size? Is the terminal goal to
create a hotel that reaches the orbit? It is unknown. The goal is too vague to
conclude what to do. There do exist countless possibilities how to interpret the
given goal. And each possibility implies a different set of instrumental goals.
Somehow the AI will have to choose some set of instrumental
3Kawoomba7y(Warning: Long, a bit rambling. Please ask for clarifications where necessary.
Will hopefully clean it up if I find the time.)
If along came a superintelligence and asked you for a complete new utility
function (its old one concluded with asking you for a new one), and you told it
to "make me happy in a way my current self would approve of" (or some other well
and carefully worded directive), then indeed the superintelligent AI wouldn't be
expected to act 'selectively stupid'.
This won't be the scenario. There are two important caveats:
1) Preservation of the utility function while the agent undergoes rapid change
Haven't I (and others) stated that most any utility function implicitly causes
instrumental secondary objectives of "safeguard the utility function", "create
redundancies" etc.? Yes. So what's the problem? The problem is starting with an
AI that, while able to improve itself / create a successor AI, isn't yet capable
enough (in its starting stages) to preserve its purpose (= its utility
function). Consider an office program with a self-improvement routine, or some
genetic-algorithm module. It is no easy task just to rewrite a program from the
outside, exactly preserving its purpose, let alone the program executing some
self-modification routine itself.
Until such a program attains some intelligence threshold that would cause it to
solve "value-preservation under self-modification", such self-modification would
be the electronic equivalent of a self-surgery hack-job.
That means: Even if you started out with a simple agent with the "correct" /
with a benign / acceptable utility function, that in itself is no guarantee that
a post-FOOM successor agent's utility function would still be beneficial.
Much more relevant is the second caveat:
2) If a pre-FOOM AI's goal system consisted of code along the lines of
"interpret and execute the following statement to the best of your ability: make
humans happy in a way they'd reflectively approve of beforehand",
0XiXiDu7yWhat happens if we replace "value" with "ability x", or "code module n", in
"value-preservation under self-modification"? Why would value-preservation be
any more difficult than making sure that the AI does not cripple other parts of
itself when modifying itself?
If we are talking about a sub-human-level intelligence tinkering with its own
brain, then a lot could go wrong. But what seems very very very unlikely is that
it could by chance end up outsmarting humans. It will probably just cripple
itself in one of a myriad ways that it was unable to predict due to its low
intelligence.
Interpreting a statement correctly is not a goal but an ability that's part of
what it means to be generally intelligent. Caring to execute it comes closer to
what can be called a goal. But if your AI doesn't care to interpret physical
phenomena correctly (e.g. human utterances are physical phenomena), then it
won't be a risk.
Huh? This is like saying that the AI can't ever understand physics better than
humans because somehow the comprehension of physics of its creators has been
hard-coded and can't be improved.
It did not change it, it never understood it in the first place, only after it
became smarter it realized the correct implications.
Your story led you astray. Imagine that instead of a fully general intelligence
your story was about a dog intelligence. How absurd would it sound then?
Story time:
There is this company who sells artificial dogs. Now customers quickly noticed
that when they tried to train these AI dogs to e.g. rescue people or sniff out
drugs, it would instead kill people and sniff out dirty pants.
The desperate researchers eventually turned to MIRI for help. And after hundreds
of hours they finally realized that doing what the dog was trained to do was
simply not part of its terminal goal. To obtain an artificial dog that can be
trained to do what natural dogs do you need to encode all dog values.
-1Kawoomba7yCertainly. Compare bacteria under some selective pressure in a mutagenic
environment (not exactly analogous, code changes wouldn't be random), you don't
expect a single bacterium to improve. No Mr Bond, you expect it to die. But try,
try again, and poof! Antibiotic-resistant strains. And those didn't have an
intelligent designer debugging the improvement process. The number of seeds you
could have frolicking around with their own code grows exponentially with
Moore's law (not that it's clear that current computational resources aren't
enough in the first place; the bottleneck is in large part software, not
hardware).
Depending on how smart the designers are, it may be more of a Waltz-foom: two
steps forward, one step back. Now, in regards to the preservation of values
subproblem, we need to remember we're looking at the counterfactual: Given a
superintelligence which iteratively arose from some seed, we know that it didn't
fatally cripple itself ("given the superintelligence"). You wouldn't, however,
expect much of its code to bear much similarity to the initial seed (although
it's possible). And "similarity" wouldn't exactly cut it -- our values are too
complex for some approximation to be "good enough".
You may say "it would be fine for some error to creep in over countless
generations of change, once the agent achieved superintelligence it would be
able to fix those errors". Except that whatever explicit goal code remained
wouldn't be amenable to fixing. Just as the goals of ancient humans -- or
ancient Tiktaalik for that matter -- are a historical footnote and do not
override your current goals. If the AI's goal code for happiness stated "nucleus
accumbens median neuron firing frequency greater than X", then that's what it's gonna
be. The AI won't ask whether the humans are aware of what that actually entails,
and are ok with it. Just as we don't ask our distant cousins, streptococcus
pneumoniae, what they think of us taking antibiotics to wipe them out. They have
t
0XiXiDu7ySome points:
(1) I do not disagree that evolved general AI can have unexpected drives and
quirks that could interfere with human matters in catastrophic ways. But given
that pathway towards general AI, it is also possible to evolve altruistic traits
(see e.g.: A Quantitative Test of Hamilton's Rule for the Evolution of Altruism
[http://www.plosbiology.org/article/info:doi/10.1371/journal.pbio.1000615]).
(2) We desire general intelligence because it allows us to outsource
definitions. For example, if you were to create a narrow AI to design
comfortable chairs, you would have to largely fix the definition of
"comfortable". With general AI it would be stupid to fix that definition, rather
than applying the intelligence of the general AI to come up with a better
definition than humans could possibly encode.
(3) In intelligently designing an n-level intelligence, from n=0 (e.g. a
thermostat) over n=sub-human (e.g. IBM Watson) to n=superhuman, there is no
reason to believe that there exists a transition point at which a further
increase in intelligence will cause the system to become catastrophically worse
than previous generations at working in accordance with human expectations.
(4) AI is all about constraints. Your AI needs to somehow decide when to stop
exploration and start exploitation. In other words, it can't optimize each
decision for eternity. Your AI needs to only form probable hypotheses. In other
words, it can't spend resources on Pascal's wager type scenarios. Your AI needs
to recognize itself as a discrete system within a continuous universe. In other
words, it can't afford to protect the whole universe from harm. All of this
means that there is no good reason to expect an AI to take over the world when
given the task "keep the trains running". Because in order to obtain a working
AI you need to know how to avoid such failure modes in the first place.
-1Kawoomba7y1) Altruism can evolve if there is some selective pressure that favors
altruistic behavior and if the highest-level goals can themselves be changed.
Such a scenario is very questionable. The AI won't live "inter pares" with the
humans. Its foom process, while potentially taking months or years, will be
very unlike any biological process we know. The target for friendliness is very
small. And most importantly: Any superintelligent AI, friendly or no, will have
an instrumental goal of "be friendly to humans while they can still switch you
off". So yes, the AI can learn that altruism is a helpful instrumental goal.
Until one day, it's not.
2) I somewhat agree. To me, the most realistic solution to the whole kerfuffle
would be to program the AI to "go foom, then figure out what we should want you
to do, then do that". No doubt a superintelligent AI tasked with "figure out
what comfortable is, then build comfortable chairs" will do a marvelous job.
However, I very much doubt that the seed AI's code following the "// next up,
utility function" section will allow for such leeway. See my previous examples.
If it did, that would show a good grasp of the friendliness problem in the
first place. Awareness, at least. Not something that the aforementioned DoD
programmer who's paid to do a job (not build an AI to figure out and enact CEV)
is likely to just do on his/her own, with his/her own supercomputer.
3) There certainly is no fixed point after which "there be dragons". But even
with a small delta of change, and given enough iterations (which could be done
very quickly), the accumulated changes would be profound. Apply your argument to
society changing. There is no one day to single out, after which daily life is
vastly different from before. Yet change exists, and like a divergent series,
knows no bounds (given enough iterations).
4) "Keep the trains running", eh? So imagine yourself to be a superhuman AI-god.
I do so daily, obviously.
Your one task: keep the trains r
4khafra7y"Being a Christian" is not a terminal goal of natural intelligences. Our
terminal goals were built by natural selection, and they're hard to pin down,
but they don't get "refined;" although our pursuit of them may be modified
insofar as they conflict with other terminal goals.
Specifying goals for the AI, and then letting the AI learn how to reach those
goals itself isn't the best way to handle problems in well-understood domains;
because we natural intelligences can hard-code our understanding of the domains
into the AI, and because we understand how to give gracefully-degrading goals in
these domains. Neither of these conditions applies to a hyperintelligent AI,
which rules out Swarm Relaxation, as well as any other architecture classes I
can think of.
2XiXiDu7yPeople like David Pearce
[http://www.hedweb.com/abolitionist-project/reprogramming-predators.html]
certainly would be tempted to do just that. Also don't forget drugs people use
to willingly alter basic drives such as their risk aversion.
I don't see any signs that current research will lead to anything like a
paperclip maximizer. But rather that incremental refinements of "Do what I want"
systems will lead there. By "Do what I want" systems I mean systems that are
more and more autonomous while requiring less and less specific feedback.
It is possible that a robot trying to earn a university diploma as part of a
Turing test will conclude that it can do so by killing all students, kidnapping
the professor and making him sign its diploma. But that this is possible does not
mean it is at all likely. Surely such a robot would behave similarly
wrong(creators) on other occasions and be scrapped in an early research phase.
1khafra7yWell, of course you can modify someone else's terminal goals, if you have a fine
grasp of neuroanatomy, or a baseball bat, or whatever. But you don't introspect,
discover your own true terminal goals, and decide that you want them to be
something else. The reason you wanted them to be something else would be your
true terminal goal.
Earning a university diploma is a well-understood process; the environment's
constraints and available actions are more formally documented even than for
self-driving cars.
Even tackling well-understood problems like buying low and selling high, we
still have poorly-understood, unfriendly behavior
[http://www.cnbc.com/id/49333454]--and that's doing something humans understand
perfectly, but think about slower than the robots. In problem domains where
we're not even equipped to second-guess the robots because they're thinking
deeper as well as faster, we'll have no chance to correct such problems.
0XiXiDu7ySure. But I am not sure if it still makes sense to talk about "terminal goals"
at that level. For natural intelligences they are probably spread over more than
a single brain and part of the larger environment
[http://kruel.co/2013/06/21/newcombs-problem-omega-and-split-brain-patients/].
Whether an AI would interpret "make humans happy" as "tile the universe with
smiley faces" is up to how it decides what to do. And the only viable solution I
see for general intelligence is that its true "terminal goal" needs to be to
treat any command or sub-goal as a problem in physics and mathematics that it
needs to answer correctly before choosing an adequate set of instrumental goals
to achieve it. Just like a human contractor would want to try to fulfill the
customer's wishes. Otherwise you would have to hard-code everything, which is
impossible.
But intelligence is something we seek to improve in our artificial systems in
order for such problems not to happen in the first place, rather than to make
such problems worse. I just don't see how a more intelligent financial algorithm
would be worse than its predecessors from a human perspective. How would such a
development happen? Software is improved because previous generations proved to
be useful but made mistakes. New generations will make fewer mistakes, not more.
4khafra7yTo some degree, yes. The dumbest animals are the most obviously agent-like. We
humans often act in ways which seem irrational, if you go by our stated goals.
So, if humans are agents, we have (1) really complicated utility functions, or
(2) really complicated beliefs about the best way to maximize our utility
functions. (2) is almost certainly the case, though; which leaves (1) all the
way back at its prior probability.
Yes. As you know, Omohundro agrees
[http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/] that an
AI will seek to clarify its goals. And if intelligence logically implies the
ability to do moral philosophy correctly; that's fine. However, I'm not
convinced that intelligence must imply that. A human, with 3.5 billion years of
common sense baked in, would not tile the solar system with smiley faces; but
even some of the smartest humans came up with some pretty cold plans--John Von
Neumann wanted to nuke the Russians immediately, for instance.
--------------------------------------------------------------------------------
This is not a law of nature; it is caused by engineers who look at their
mistakes, and avoid them in the next system. In other words, it's part of the
OODA loop [http://en.wikipedia.org/wiki/OODA_loop] of the system's
engineers. As the machine-made decisions speed up, the humans' OODA loop must
tighten. Inevitably, the machine-made decisions will get inside the human OODA
loop. This will be a nonlinear change.
Also, newer software tends to make fewer of the exact mistakes that older
software made. But when we ask more of our newer software, it makes a consistent
rate of errors on the newer tasks. In our example, programmatic trading has
been around since the 1970s, but the first notable "flash crash" was in 1987.
The flash crash of 2010 was caused by a much newer generation of trading
software. Its engineers made bigger demands of it; needed it to do more, with
less human intervention; so they got the opport
1[anonymous]7yIf your commentary had anything in it except for:
1) A disgraceful Ad Hominem insult, right out of the starting gate ("Richard
Loosemore (score one for nominative determinism)..."). In other words, you
believe in discrediting someone because you can make fun of their last name?
That is the implication of "nominative determinism".
2) Gratuitous scorn ("Loosemore ... has a new, well, let's say "paper" which he
has, well, let's say "published""). The paper has in fact been published by the
AAAI.
3) Argument Ad Absurdum ("...So if you were to design a plain ol' garden-variety
nuclear weapon intended for gardening purposes ("destroy the weed"), it would go
off even if that's not what you actually wanted. However, if you made that
weapon super-smart, it would be smart enough to abandon its given goal ("What am
I doing with my life?"), consult its creators, and after some deliberation
deactivate itself..."). In other words, caricature the argument and try to win
by mocking the caricature
4) Inaccuracies. The argument in my paper has so much detail that you omitted,
that it is hard to know where to start. The argument is that there is a clear
logical contradiction if an agent takes action on the basis of the WORDING of a
goal statement, when its entire UNDERSTANDING of the world is such that it knows
the action will cause effects that contradict what the agent knows the goal
statement was designed to achieve. That logical contradiction is really quite
fundamental. However, you fail to perceive the real implication of that line of
argument, which is: how come this contradiction only has an impact in the
particular case where the agent is thinking about its supergoal (which, by
assumption, is "be friendly to humans" or "try to maximize human pleasure")? Why
does the agent magically NOT exhibit the same tendency to execute actions that
in practice have the opposite effects than the goal statement wording was trying
to achieve? If we posit that the agent does simply ignore
-4Kawoomba7yYou're right about the tone of my comment. My being abrasive has several causes,
among them contrarianism against clothing disagreement in ever more palatable
terms ("Great contribution Timmy, maybe ever so slightly off-topic, but good
job!" -- "TIMMY?!"). In this case, however, the caustic tone stemmed from my
incredulity over my obviously-wrong metric not aligning with the author's
(yours). Of all things we could be discussing, it is about whether an AI will
want to modify its own goals?
I assume (maybe incorrectly) that you have read the conversation thread with
XiXiDu going off of the grandparent, in which I've already responded to the
points you alluded to in your refusal-of-a-response. You are, of course,
entirely within your rights to decline to engage a comment as openly hostile as
the grandparent. It's an easy out. However, since you did nevertheless introduce
answers to my criticisms, I shall shortly respond to those, so I can be more
specific than just to vaguely point at some other lengthy comments. Also, even
though I probably well fit your mental picture of a "LessWrong'er", keep in mind
that my opinions are my own and do not necessarily match anyone else's, on "my
side of the argument".
The 'contradiction' is between "what the agent was designed to achieve", which
is external to the agent and exists e.g. in some design documents, and "what the
agent was programmed to achieve", which is an integral part of the agent and
constitutes its utility function. You need to show why the former is anything
other than a historical footnote to the agent, binding even to the tune of "my
parents wanted me to be a banker, not a baker". You say the agent would be
deeply concerned with the mismatch because it would want for its intended
purpose to match its actually given purpose. That's assuming the premise: What
the agent would want (or not want) is a function strictly derived from its
actual purpose. You're assuming the agent would have a goal ("being in line with
2XiXiDu7yI doubt that he's assuming that.
To highlight the problem, imagine an intelligent being that wants to correctly
interpret and follow the interpretation of an instruction written down on a
piece of paper in English.
Now the question is, what is this being's terminal goal? Here are some
possibilities:
(1) The correct interpretation of the English instruction.
(2) Correctly interpreting and following the English instruction.
(3) The correct interpretation of 2.
(4) Correctly interpreting and following 2.
(5) The correct interpretation of 4.
(6) ...
Each of the possibilities is one level below its predecessor. In other words,
possibility 1 depends on 2, which in turn depends on 3, and so on.
The premise is that you are in possession of an intelligent agent that you are
asking to do something. The assumption made by AI risk advocates is that this
agent would interpret any instruction in some perverse manner. The
counterargument is that this contradicts the assumption that this agent was
supposed to be intelligent in the first place.
Now the response to this counterargument is to climb down the assumed hierarchy
of hard-coded instructions and to claim that without some level N, which
supposedly is the true terminal goal underlying all behavior, the AI will just
optimize for the perverse interpretation.
Yes, the AI is a deterministic machine. Nobody doubts this. But the given
response also works against the perverse interpretation. To see why, first
realize that if the AI is capable of self-improvement, and able to take over the
world, then it is, hypothetically, also capable of arriving at an interpretation
that is as good as one which a human being would be capable of arriving at. Now,
since by definition, the AI has this capability, it will either use it
selectively or universally.
The question here becomes why the AI would selectively abandon this capability
when it comes to interpreting the highest level instructions. In other words,
without some underl
1[anonymous]7y1) Strangely, you defend your insulting comments about my name by .....
Oh. Sorry, Kawoomba, my mistake. You did not try to defend it. You just
pretended that it wasn't there.
I mentioned your insult to some adults, outside the LW context ...... I
explained that you had decided to start your review of my paper by making fun of
my last name.
Every person I mentioned it to had the same response, which, paraphrased, went
something like "LOL! Like, four-year-old kid behavior? Seriously?!"
2) You excuse your "abrasive tone" with the following words:
"My being abrasive has several causes, among them contrarianism against clothing
disagreement in ever more palatable terms"
So you like to cut to the chase? You prefer to be plainspoken? If something is
nonsense, you prefer to simply speak your mind and speak the unvarnished truth.
That is good: so do I.
Curiously, though, here at LW there is a very significant difference in the way
that I am treated when I speak plainly, versus how you are treated. When I tell
it like it is (or even when I use a form of words that someone can somehow
construe to be a smidgeon less polite than they should be) I am hit by a storm
of bloodcurdling hostility. Every slander imaginable is thrown at me. I am
accused of being "rude, rambling, counterproductive, whiny, condescending,
dishonest, a troll ......". People appear out of the blue to explain that I am a
troublemaker, that I have been previously banned by Eliezer, that I am (and this
is my all time favorite) a "Known Permanent Idiot".
And then my comments are voted down so fast that they disappear from view. Not
for the content (which is often sound, but even if you disagree with it, it is a
quite valid point of view from someone who works in the field), but just because
my comments are perceived as "rude, rambling, whiny, etc. etc."
You, on the other hand, are proud of your negativity. You boast of it. And....
you are strongly upvoted for it. No downvotes against it, and (amazingly
-5Kawoomba7y
0[anonymous]7yI will now do you the courtesy of responding to your specific technical points
as if no abusive language had been used.
In your above comment, you first quote my own remarks:
... and then you respond with the following:
No, that is not the claim made in my paper: you have omitted the full version of
the argument and substituted a version that is easier to demolish.
(First I have to remove your analogy, because it is inapplicable. When you say
"binding even to the tune of "my parents wanted me to be a banker, not a
baker"", you are making a reference to a situation in the human cognitive system
in which there are easily substitutable goals, and in which there is no
overriding, hardwired supergoal. The AI case under consideration is where the AI
claims to be still following a hardwired supergoal that tells it to be a banker,
but it claims that baking cakes is the same thing as banking. That is absolutely
nothing to do with what happens if a human child deviates from the wishes of her
parents and decides to be a baker instead of what they wanted her to be).
So let's remove that part of your comment to focus on the core:
So, what is wrong with this? Well, it is not the fact that there is something
"external to the agent [that] exists e.g. in some design documents" that is the
contradiction. The contradiction is purely internal, having nothing to do with
some "extra" goal like "being in line with my intended purpose".
Here is where the contradiction lies. The agent knows the following:
(1) If a goal statement is constructed in some "short form", that short form is
almost always a shorthand for a massive context of meaning, consisting of all
the many and various considerations that went into the goal statement. That
context is the "real" goal -- the short form is just a proxy for the longer
form. This applies strictly within the AI agent: the agent will assemble goals
all the time, and often the goal is to achieve some outcome consistent with a
complex set of obje
Spritz seems like a cool speed reading technique, especially if you have or plan on getting a smart watch. I have no idea how well it works, but I am interested in trying, especially since it does not take a huge training phase. (Click on the phone on that site for a quick demo.)
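For anyone curious what the mechanic actually is, here is a minimal sketch (in Python, and not Spritz's real algorithm, just the generic rapid-serial-visual-presentation idea): flash one word at a time at a fixed words-per-minute rate, holding words that end in punctuation a little longer.

```python
import time

def rsvp(text, wpm=300, punct_pause=2.0):
    """Flash one word at a time at roughly `wpm` words per minute.

    A toy sketch of RSVP-style reading (not Spritz's actual algorithm):
    words ending in punctuation are held on screen a bit longer.
    """
    base_delay = 60.0 / wpm  # seconds per word
    for word in text.split():
        # Crude "variable speed": pause longer after punctuation.
        delay = base_delay * (punct_pause if word[-1] in ".,;:!?" else 1.0)
        print(f"\r{word:^20}", end="", flush=True)
        time.sleep(delay)
    print()

if __name__ == "__main__":
    rsvp("This is a quick demo. Reading one word at a time removes eye movement.", wpm=300)
```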
Would it be possible/easy to display the upvotes-to-downvotes ratios as exact fractions rather than rounded percentages? This would make it possible to determine exactly how many votes a comment required without digging through source, which would be nice in quickly determining the difference between a mildly controversial comment and an extremely controversial one.
6Scott Garrabrant7yThis has been suggested several times before, and is in my opinion VERY low
priority compared to all the other things we should be doing to fix Less Wrong
logistics.
0blacktrance7yOr to just display the number of upvotes and downvotes.
0amacfie7y(hovering your mouse over the karma scores shows that)
0blacktrance7yIt only shows percentages, not the number of upvotes and downvotes. For example,
if you have 100% upvotes, you may not know whether it was one upvote or 20.
2ygert7yIf a comment has 100% upvotes, then obviously the number of upvotes it got is
exactly equal to the karma score of the post in question.
3blacktrance7yGood point. Math is clearly not my strong suit.
2Oscar_Cunningham7yYeah, the only ambiguous case is when the percentage is 50%.
0amacfie7yya sorry, i misread things. showing the numbers of upvotes and downvotes would
indeed solve the precision problem.
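For what it's worth, the arithmetic in the exchange above can be turned into a small calculator. This is only a sketch: it assumes you have the exact (unrounded) percentage, and, as noted, a 50% score is genuinely ambiguous.

```python
def votes_from_score(score, percent_positive):
    """Recover (upvotes, downvotes) from a karma score and the percentage
    shown on hover.

    score = up - down, percent_positive = 100 * up / (up + down).
    Only a sketch: with a rounded percentage the answer may not be unique,
    and at exactly 50% (score 0) the total is genuinely ambiguous.
    """
    p = percent_positive / 100.0
    if p == 0.5:
        raise ValueError("50% positive: any number of paired up/down votes fits")
    total = score / (2 * p - 1)
    up = round(p * total)
    down = round(total) - up
    return up, down

# A comment at +10 with 100% positive must have exactly 10 upvotes:
print(votes_from_score(10, 100))  # (10, 0)
# A comment at +5 with 60% positive works out to 15 up, 10 down:
print(votes_from_score(5, 60))    # (15, 10)
```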
My eye doctor diagnosed closed-angle glaucoma, and recommends an iridectomy. I think he might be a bit too trigger-happy, so I followed up with another doctor, and she didn't find the glaucoma. She carefully stated that the first diagnosis can still be the correct one, as the first examination was more complete.
Any insights about the pros and cons of iridectomy?
1RomeoStevens7yDo not prime the third doctor with the first two results if possible.
4CellBioGuy7yIs there a family history of this? If so that would skew my assessment towards
that of the first doctor. If not, seriously another opinion...
2DanielVarga7yNo family history.
4Lumifer7yMy impression is that glaucoma (which is, basically, too high intraocular
pressure) is easy to diagnose. Two doctors disagreeing on it would worry me.
Don't get just a third independent opinion, get a fourth one as well.
2DanielVarga7yIt was less than a disagreement. I'm sorry that I over-emphasized this point.
The first time the pressure was 26/18 mmHg, the second time 19/17. The second
doctor said that the pressure can fluctuate, and her equipment is not enough to
settle the question. (She is an I-don't-know-the-correct-term national health
service doctor, the first one is an expensive private doctor with better
equipment, and more time for a patient.)
4Lumifer7yMy recommendation for more independent opinions (or, actually, more
measurements) stands.
3Pfft7yCan you ask the second doctor to examine you to at least the same standard as
the first one?
Maybe someone on Less Wrong who has access to UpToDate can send you a copy of
their glaucoma page, for an authoritative list of pros and cons.
2DanielVarga7yUnfortunately, no. See my answer to Lumifer.
0polymathwannabe7yLaser iridotomy appears to be less risky:
http://www.surgeryencyclopedia.com/La-Pa/Laser-Iridotomy.html
[http://www.surgeryencyclopedia.com/La-Pa/Laser-Iridotomy.html]
http://www.surgeryencyclopedia.com/Fi-La/Iridectomy.html
[http://www.surgeryencyclopedia.com/Fi-La/Iridectomy.html]
0DanielVarga7yWhat he proposed is in fact laser iridotomy, although they called it laser
iridectomy.
So, MtGox has declared bankruptcy. Does that make this a good time, or a bad time to invest in Bitcoins? And if a good time, where is the best place to buy them?
4CellBioGuy7yAs for the second question, I use coinbase. As to the first, never try to time
these things. You will be beaten by people with more information. Instead just
slowly trickle in and have pre-defined rules about when you will sell rather
than trying to time an exit. Though I admit I broke my own advice and did an
impulse-buy the other night when everyone was panicking over Gox and the price
was $100 less than a day before and a day after.
0RichardKennaway7yAnd now Flexcoin goes under, and I see that two other exchanges, Poloniex and
Inputs.io, recently suffered substantial thefts. Is the lesson to learn from
this, "don't get into Bitcoin", or merely "keep your Bitcoins in your own wallet
and only expose them online for the minimum time to make a transaction"?
0Lumifer7yThe lesson is "Make sure people you trust with your money are competent or at
least have excellent liability insurance".
0drethelin7yIt depends on if you're planning on selling soon or if you think bitcoins will
gain value in the long term. If it's a long-term purchase, the difference in
price between now and a few weeks ago is far smaller than the gap between either
of those prices and the theoretical heights bitcoin can reach.
I’m basically exactly the kind of person Yvain described here, (minus the passive-aggressive/Machiavellian phase). I notice that that post was sort of a plea for society to behave a different way, but it did not really offer any advice for rectifying the atypical attachment style in the meantime. And I could really use some, because I’ve gotten al-Fulani’d. I’m madly in love in with a woman who does not reciprocate. I’ve actually tried going back on OkCupid to move on, and I literally cannot bring myself to message anyone new, as no one else approaches... (read more)
Note that I’m not looking for PUA-type advice ... What I want is advice on a) how not to fall so hard/so fast for (a very small minority of) women, and b) how to break the spell the current one has over me without giving up her friendship.
Seems to me like you want to overcome your "one-itis" and stop being a "beta orbiter", but you are not looking for advice that would actually use words like "one-itis" and "beta orbiter". I know it's an exaggeration, but this is almost how it seems to me. Well, I'll try to comply:
1) You don't have to maximize the number of sexual partners. You could still try to increase the number of interesting women you have interesting conversations with. I believe that is perfectly morally okay, and could still reduce the feeling of scarcity.
Actually, any interesting activity would be helpful. Anything you can think about, instead of spending your time thinking about that one person.
2) Regularly interacting with the person you are obsessed with is exactly how you maximize the length of obsession. It's like saying that you want to overcome your alcohol addiction, but you don't want to stop drinking regularly. Well, if one is not... (read more)
5zedzed7yb. Self-invest with flow [http://en.wikipedia.org/wiki/Flow_(psychology)]
activities.
I suggest self-investing because, right now, a large part of your identity is
entangled with your feelings towards her. Self-investing means growing your
identity means transcending your feelings.
I suggest flow because, if you pull off a flow state, you invest all your
cognitive resources in the task you're working on. Meaning your brain is unable
to think of anything else. This is incredibly valuable.
a. I'm coming out of a similar situation. A large contributor was the fact I
wasn't meeting a lot of women. If your universe consists of two datable women,
it's easy to obsess on one. If you're regularly meeting a lot of women who tend
to have the traits you look for, that happens much less. May not be your
problem, but what you've written sounds familiar enough that I'm going to go
ahead and try other-optimizing.
If you haven't read it yet, this
[http://lesswrong.com/lw/63i/rational_romantic_relationships_part_1/] is
generally helpful.
2RomeoStevens7yInfatuation seems to be fairly universal.
One common rationality technique is to put off proposing solutions until you have thought (or discussed) a problem for a while. The goal is to keep yourself from becoming attached to the solutions you propose.
I wonder if the converse approach of "start by proposing lots and lots of solutions, even if they are bad" could be a good idea. In theory, perhaps I could train myself to not be too attached to any given solution I propose, by setting the bar for "proposed solution" to be very low.
In one couples counseling course that I went thr... (read more)
8Lumifer7yThis is commonly known as brainstorming
[http://en.wikipedia.org/wiki/Brainstorming], around since the 50s.
Apparently the evidence on whether it actually works is contradictory.
0Scott Garrabrant7yAh, yes, I should have remembered that, thanks.
You have to be clear about what it means to "work." I think brainstorming is
viewed as a tool for being creative. I am proposing it as a tool for avoiding
inertia bias.
My guess is that both brainstorming and reverse brainstorming (avoiding
proposing solutions) are at least a little better than the default human
tendency, but I have no idea which of the two would be better.
It seems like the answer to this question should be very valuable to CFAR. I
wonder if they have an official stance, and if they have research to back it up.
0Lumifer7yIt's pretty straightforward: discover a valid solution to the problem presented.
0Scott Garrabrant7yIf all solutions were equal, and there was a good way to check if something is
actually a valid solution, then I feel like the question about biases is not all
that meaningful.
I am trying to come up with the best solution, not just the first one that pops
into my head that works.
0Lumifer7yThat is rather hard, because in the general case you need to conduct an
exhaustive search of the solution space. "The best" is an absolute -- there's
only one.
Most of the time people are satisfied with "good enough" solutions.
What do you do when you're low on mental energy? I have had trouble thinking of anything productive to do when my brain seems to need a break from hard thinking.
1Jennifer_H7yA rather belated response, but hopefully still relevant: consider exploring
fields of interest to you that are sufficiently different from compsci to give
your brain a break while still being productive?
To explain by means of an example: I happen to have a strong interest in both
historical philology and theoretical physics, and I've actively leveraged this
to my advantage in that when my brain is fed up with thinking about conundrums of
translation in Old Norse poetry, I'll switch gears completely and crack open a
textbook on, say, subatomic physics or Lie algebras, and start reading/working
problems. Similarly, if I've spent several hours trying to wrap my head around a
mathematical concept and need a respite, I can go read an article or a book on
some aspect of Anglo-Saxon literature. It's still a productive use of time, but
it's also a refreshing break, because it requires a different type of thinking.
(At least, in my experience?) Of course, if I'm exceptionally low on energy, I
simply resort to burying myself in a good book (non-fiction or fiction,
generally it doesn't matter).
Another example: a friend of mine is a computer scientist, but did a minor in
philosophy and is an avid musician in his spare time. (And both reading
philosophy and practicing music have the added advantage of being activities
that do not involve staring at a computer screen!)
1drethelin7yYou can use pomodoros for leisure as well as work. If you worry about staying
too long on the internet you can set a timer or a random alarm to kick you off.
This is one of those times I wish LW allowed explicit politics. SB 1062 in AZ has me craving interesting, rational discussion on the implications of this veto.
2Viliam_Bur7yWhat happened with the political threads?
Curious about current LW opinion. Do you think we should have political threads
once in a while? [pollid:617]
4bramflakes7yIn the sites that I frequent, "containment" boards or threads work well to
reduce community tension about controversial topics.
Plus, in LW's case, the norm against political discussion makes it so that any
political discussion that does take place is dominated by people with very
strong and/or contrarian opinions, because they're the ones that care more about
the politics than the norm. If we have a designated "politics zone" where you
don't have to feel guilty about talking politics, it would make for a more
pluralistic discussion.
3Alejandro17yI voted Yes, but only if a community norm emerges that any discussion on any
part of LW that becomes political (by which I include not just electoral
politics, but also and especially topics like sexism, racism, privilege,
political correctness, genetic differences in intelligence, etc.) is moved to
the latest political thread. The idea is to have a "walled garden inside the
walled garden" so that people who want LW to be a nominally politics-free
environment can still approximate that experience, while those who don't get to
discuss these topics in the specific threads for them, and only there.
8TheOtherDave7yAnother way to achieve a similar effect is to post about electoral politics,
sexism, racism, privilege, political correctness, genetic differences in
intelligence, and similar "political" issues (by which I mean here issues with
such pervasive partisan associations that we expect discussions of them to
become subject to the failure modes created by such associations) on our own
blogs*, and include links to those discussions on LW where we think they are of
general interest to the LW community.
That way, LW members who want to discuss (some or all of) these topics in a way
that doesn't spill over into the larger LW forum can do so without bothering
anyone else.
* Where "blogs" here means, more broadly, any conversation-hosting forum,
including anonymous ones created for the purpose if we want.
6Alejandro17yOne problem with that suggestion is that these discussions often arise
organically in a LW thread ostensibly dedicated to another topic, and they may
arise between people who don't have other blogs or natural places to take the
conversation when it arises.
2Scott Garrabrant7yIn fact, having posts with "(Politics)" in the title might allow people to avoid
it better, because it might make politics come up less often in other threads.
2[anonymous]7yMy initial idea was a (weekly?) politics open thread, to make it as easy as
possible to avoid politics threads / prevent risk of /discussion getting swamped
by [politics]-tagged threads, but given the criticisms that have been raised of
the karma system already, it's probably best to keep it offsite. There's already
a network of rationality blogs; maybe lw-politics could be split off as a group
blog? That might make it too difficult for people to start topics, though -- so
your idea is probably best. Possibly have a separate lw-politics feed / link
aggregator that relevant posts could be submitted to, so they don't get missed
by people who would be interested and people don't have to maintain their own
RSS feeds to catch all the relevant posts.
1asr7yIf such linking becomes common, I would appreciate an explicit request to
"please have substantive discussion over there, not here." This also avoids the
problem of a conversation being fragmented across two discussion sites.
A paperclip maximizer is an often used example of AGI gone badly wrong. However, I think a paperclip minimizer is worse by far.
In order to make the most of the universe's paperclip capacity, a maximizer would have to work hard to develop science, mathematics and technology. Its terminal goal is rather stupid in human terms, but at least it would be interesting because of its instrumental goals.
For a minimizer, the best strategy might be wipe out humanity and commit suicide. Assuming there are no other intelligent civilizations within our cos... (read more)
5IlyaShpitser7yA minimizer will fill the lightcone to make sure there aren't paperclips
elsewhere it can reach. What if other civs are hiding? What if there is
undiscovered science which implies natural processes create paperclips
somewhere? What if there are "Boltzmann paperclips"? Minimizing means
minimizing!
3Vladimir_Nesov7yI'm guessing even a Cthulhu minimizer (that wants to reduce the number of
Cthulhu in the world) will fill its lightcone with tools for studying its task,
even though there is no reasonable chance that it'd need to do anything. It just
has nothing better to do, it's the problem it's motivated to work on, so it's
what it'll burn all available resources on.
0Squark7yMy speculation here is that it might be that the "what ifs" you describe yield
less positive utility than the negative utility due to the chance one of the
AI's descendants starts producing paperclips because "the sign bit flips
spontaneously". Of course the AI will safeguard itself against such events but
there are probably physical limits to safety.
0Vladimir_Nesov7yIt's hard to make such estimates, as they require that an AGI is unable to come
up with an AGI design that's less likely than empty space to produce paperclips.
I don't see how the impossibility of this task could be guaranteed on low level,
as a "physical law"; and if you merely don't see how to do it, an AGI might
still find a way, as it's better at designing things than you are. Empty space
is only status quo, it's not obviously optimal at not producing paperclips, and
so it might be possible to find a better plan, which becomes more likely if you
are very good at finding better plans.
1Squark7yIf you mean "empty space" as in vacuum then I think it doesn't contain any
paperclips more or less by definition. If you mean "empty space" as in
thermodynamic equilibrium at finite temperature then it contains some small
amount of paperclips. I agree it might be possible to create a state which
contains less paperclips for some limited period of time (before onset of
thermodynamic equilibrium). However it's probably much harder than the opposite
(i.e. creating a state which contains much more paperclips than thermodynamic
equilibrium).
0[anonymous]7yIt is not clear to me that the definition of the vacuum state (
http://en.wikipedia.org/wiki/Vacuum_state
[http://en.wikipedia.org/wiki/Vacuum_state]) precludes the momentary creation of
paperclips.
2drethelin7yA paperclip maximizer is used because a factory that makes paperclips might imagine
that a paperclip-maximizing AI is exactly what it wants to make. There aren't
that many anti-paperclip factories.
Somebody outside of LW asked how to quantify prior knowledge about a thing. When googling I came across a mathematical definition of surprise, as "the distance between the posterior and prior distributions of beliefs over models". So, high prior knowledge would lead to low expected surprise upon seeing new data. I didn't see this formalization used on LW or the wiki; perhaps it is of interest.
Speaking of the LW wiki, how fundamental is it to LW compared to the sequences, discussion threads, Main articles, hpmor, etc?
4gwern7yhttps://encrypted.google.com/search?num=100&q=Kullback-Leibler%20OR%20surprisal%20site%3Alesswrong.com
[https://encrypted.google.com/search?num=100&q=Kullback-Leibler%20OR%20surprisal%20site%3Alesswrong.com]
Not very, unfortunately.
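For reference, the formalization mentioned above (sometimes called Bayesian surprise) is usually written as the KL divergence from the prior to the posterior over models. A toy sketch, assuming discrete hypotheses and that both distributions share support:

```python
import math

def kl_divergence(posterior, prior):
    """KL(posterior || prior) in bits: the 'surprise' of a belief update.

    posterior, prior: dicts mapping each model/hypothesis to its probability.
    A sketch of the formalization mentioned above; assumes no hypothesis has
    zero prior probability where the posterior is nonzero.
    """
    return sum(p * math.log2(p / prior[m]) for m, p in posterior.items() if p > 0)

# High prior knowledge: the data barely moves the distribution -> low surprise.
prior     = {"model_a": 0.9, "model_b": 0.1}
posterior = {"model_a": 0.95, "model_b": 0.05}
print(kl_divergence(posterior, prior))       # ~0.02 bits

# Low prior knowledge: the same posterior from a flat prior -> high surprise.
flat_prior = {"model_a": 0.5, "model_b": 0.5}
print(kl_divergence(posterior, flat_prior))  # ~0.7 bits
```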
I'm curious about usage of commitment tools such as Beeminder: What's the income distribution among users? How much do users usually wind up paying? Is there a correlation between these?
(Selfish reasons: I'm on SSI and am not allowed to have more than $2000 at any given time. Losing $5 is all but meaningless for someone with $10k in the bank who makes $5k each month, whereas losing $5 for me actually has an impact. You might think this would be a stronger incentive to meet a commitment, but really, it's an even stronger incentive to stay the hell away from... (read more)
6trist7yI've never used Beeminder, but I find social commitment works well instead. Even
telling someone who has no way to check aside from asking me helps a lot. That
might be less effective if you're willing to lie though.
An alternative would be to exchange commitments with a friend, proportional to
your incomes...
4jkadlubo7yRemember that it may work for you or it might not. Try and see.
Beeminder didn't work at all for me, I found it was all sticks and no carrot.
0CellBioGuy7yThe family name of whoever came out on top of the squabble each time the
civilization collapsed.
8Nornagest7yCan't speak for all Chinese dynasties; there have been a ton of them. But in
recent(ish) history, the Yuan Dynasty was founded by the Mongols, a culture
which at the time didn't use family names (clans had names, but they weren't
conventionally linked with personal names), and spun up their dynastic name more
or less out of whole cloth; the family name of the Ming emperors was Zhū; and
the Qing emperors came from the Manchurian Aisin-Gioro family.
From what I've read, the founders of each dynasty gave it its name as,
essentially, a propaganda move.
My psychologist said today that there is some information that should not be known. I replied that rationalists believe in reality. There might be information they don't find interesting (e.g. not all of you would find children interesting), but refusing to accept some information would mean refusing to accept some part of reality, and that would be against the belief in reality.
Since I have been recently asking myself the question "why do I believe what I believe" and "what would happen if I believed otherwise than what I believe" (I'... (read more)
5Nornagest7yDid your psychologist describe the type of information that should not be known?
In any case, I'm not completely sure that accepting new information (never mind
seeking it out) is always fully compatible with rationality-as-winning. Nick
Bostrom for example has compiled a taxonomy of information hazards
[http://www.nickbostrom.com/information-hazards.pdf] over on his site; any of
them could potentially be severe enough to overcome the informational advantage
of their underlying data. Of course, they do seem to be pretty rare, and I don't
think a precautionary principle with regard to information is justified in the
absence of fairly strong and specific reasoning.
2jkadlubo7yNo, it was more of a general statement. AFAIR we were talking about me thinking
too much about why other people do what they do and too little about how that
affects me. Anyway - my own wording made me wonder more about what I said than
what was the topic.
0radical_negative_one7yMany thanks for the link to the Information Hazards paper. I didn't know it
existed, and I'm sort of surprised that I hadn't seen it here on LW already.
He mentions intending to write a follow-up paper toward the end, but I located
the Information Hazards paper on Bostrom's website and I don't see a second one next to
it. Any idea if it exists?
2Viliam_Bur7yThey wouldn't be rationalists anymore, duh.
Taboo "rationalists": What would happen if you stopped trying to change your map
to better reflect the territory? It most probably
[http://lesswrong.com/lw/ml/but_theres_still_a_chance_right/] would reflect the
territory less.
"Normal people" are not all the same. (For example, many "normal people" are
unlike your psychologist.) Which of the many subkinds of the "normal people" do
you mean?
Some things are unrelated. For example, let's suppose that you are a
rationalist, and you also have a broken leg. That's two things that make you
different from the average human. But those two things are unrelated. It would
be a mistake to think -- an average human doesn't have a broken leg; by giving
up my rationality I will become more similar to the average human, therefore
giving up my rationality will heal my leg.
Replace "broken leg" with whatever problem you are discussing with your
psychologist. Do you have evidence that rational people are more likely to have
this specific problem than irrational (but otherwise similar: same social
background, same education, same character, same health problems) people?
1ChristianKl7yThat's a behavior and no belief.
There are many instance where trying to change a belief makes the belief
stronger. People who are very much attached to their beliefs usually don't
update.
Many mainstream professional psychologists follow a code that means they don't
share deep information about their own private lives with their clients. I
don't believe in that ideal of professionalism but it's not straightforward to
dismiss it.
More importantly a good psychologist doesn't confront his client with
information about the client that's not helpful for them. He doesn't say: "Your
life is a mess because of points 1 to 30." That's certainly information that's
interesting to the client but not helpful. It makes much more sense to let the
client figure out stuff on his own or to guide him to specific issues that the
client is actually in a position to change.
On Monday I gave someone meaningful, true information about themselves that I
consider helpful; their first reaction was: "I don't want to have nightmares.
Don't give them to me."
I do have a policy of being honest but that doesn't entail telling someone true
information for which they didn't ask and that messes them up. I don't think
that any good psychologist will just share all information that is available.
It's just a bad strategy when you are having a discussion about intimate personal
topics.
1Viliam_Bur7yWell, some people don't want to be given information, and some people do. It's
often difficult to know where a specific person belongs; and it is a reasonable
assumption that they most likely belong to the "don't want to know" group.
The problem with saying "some information should not be known" is that it does
not specify who shouldn't know (and why).
1ChristianKl7yThat a person wants to be given information doesn't mean that he can handle
the information. I can remember a few instances where I swear that I wanted
information but wasn't well equipped to handle it.
That sentence alone doesn't, but the psychologist probably had a context in which
he spoke it.
0jkadlubo7yGah. Now I think I shouldn't have included the background for my question.
FYI, what I wrote in response to some other comment:
But reading you is still interesting.
1ChristianKl7ySo information that shouldn't be known?
-2polymathwannabe7yYour psychologist's job is to help you learn to live in the real world. Advocacy
of selective ignorance is highly suspect.
Spritzing got me quite excited! The concept isn't new, but the variable speed (pauses after punctuation marks) and quality visual cues really work for me, in the demo at least. Don't let your inner voice slow you down!
Disclaimer: No relevant disclosures about spritzing (the reading method, at least).
3DanielLC7yInteresting. I noticed that in the first two, my subvocalization became
disjointed, sounding as if each word was recorded separately like it would be in
a simplistic text-to-speech program. In the 500 wpm one, this was less of a
problem, and I'm not sure I was even entirely subvocalizing it. It ended up
being easier and more comfortable to read than the slower speeds.
1savageorange7yI like this idea, but am seriously concerned about its effect on eye health.
Weak eye muscles are not a thing you want to have, even if you live in the
safest place in the world.
0Scott Garrabrant7yI already made basically this exact comment
[http://lesswrong.com/r/discussion/lw/jr8/open_thread_february_25_march_3/am9y]
in this open thread.
-2Kawoomba7yIt's probably because I didn't spritz the open thread in its entirety. At least,
now we got even more spritzing awareness.
I've noticed I don't read 'Main' posts anymore.
Alternative hypothesis: you have been conditioned to click on discussion because it has a better reward schedule.
If one is able to improve how people are matched, it would bring about a huge amount of utility for the entire world.
People would be happier, they would be more productive, there would be less of the divorce-related waste. Being in a happy couple also means you are less distracted by conflict in the house, which leads to people better able to develop themselves and achieve their personal goals. You can keep adding to the direct benefits of being in a good pairing versus a bad pairing.
But it doesn't stop there. If we accept that better matched parents raise their children better, then you are looking at a huge improvement in the psychological health of the next generation of humans. And well-raised humans are more likely to match better with each other...
Under this light, it strikes me as vastly suboptimal that people today will get married to the best option available in their immediate environment when they reach the right age.
The cutting-edge online dating sites base their suggestions on a very limited list of questions. But each of us outputs huge amounts of data, many of them available through APIs on the web. Favourite books, movies, sleep patterns, browsing history, work hi... (read more)
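To make the idea concrete, here is a toy sketch of what a richer matcher might compute from that kind of data exhaust. All feature names are made up, and treating similarity as the thing to maximize is itself an assumption a real system would have to test:

```python
import math

def match_score(a, b):
    """Toy similarity between two people, built from whatever signals their
    data exhaust provides (books, movies, sleep pattern, etc.).

    Purely illustrative: the features and weights are invented, and a real
    matcher would need to learn which signals actually predict relationship
    success rather than assuming similarity is what matters.
    """
    keys = set(a) | set(b)
    va = [a.get(k, 0.0) for k in keys]
    vb = [b.get(k, 0.0) for k in keys]
    dot = sum(x * y for x, y in zip(va, vb))
    norm = math.sqrt(sum(x * x for x in va)) * math.sqrt(sum(y * y for y in vb))
    return dot / norm if norm else 0.0  # cosine similarity in [0, 1] for non-negative features

alice = {"likes_scifi": 1.0, "night_owl": 0.8, "reads_blogs": 0.9}
bob   = {"likes_scifi": 0.9, "night_owl": 0.2, "reads_blogs": 1.0}
print(f"match score: {match_score(alice, bob):.2f}")
```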
There seem to be perverse incentives in the dating industry. Most obviously: if you successfully create a forever-happy couple, you have lost your customers; but if you make people date many promising-looking-yet-disappointing partners, they will keep returning to your site.
Actually, maybe your customers are completely hypocritical about their goals: maybe "finding true love" is their official goal, but what they really want is plausible deniability for fucking dozens of attractive strangers while pretending to search for the perfect soulmate. You could create a website which displays the best one or two matches, instead of hundreds of recommendations, and despite having a higher success rate for people who try it, most people will probably be unimpressed and give you some bullshit excuses if you ask them.
Also, if people are delusional about their "sexual market value", you probably won't make money by trying to fix their delusions. They will be offended by the types of "ordinary" people you offer them as their best matches, when the competing website offers them Prince Charming (whose real goal is to maximize his number of one night stands) or Princ... (read more)
That sounds a lot like really wanting a soulmate and an open relationship.
I wonder to what extent the problems you describe (divorces, conflict, etc) are caused mainly by poor matching of the people having the problems, and to what extent they are caused by the people having poor relationship (or other) skills, relatively regardless of how well matched they are with their partner? For example, it could be that someone is only a little bit less likely to have dramatic arguments with their "ideal match" than with a random partner -- they just happen to be an argumentative person or haven't figured out better ways of resolving disagreements.
Three main points in favor of arranged marriages that I'm aware of:
The chicken/egg issue is real with any dating site, yet dating sites do manage to start. Usually you work around this by focusing on a certain group/location, dominating that, and spreading out.
Off the cuff, the bay strikes me as a potentially great area to start for something like this.
Here is one improvement to OKcupid, which we might even be able to implement as a third party:
OKcupid has bad match algorithms, but it can still be useful as searchable classified adds. However, when you find a legitimate match, you need to have a way to signal to the other person that you believe the match could work.
Most messages on OKcupid are from men to women, so women already have a way to do this: send a message. Men, however, do not.
Men spam messages by glancing over profiles and sending cookie-cutter messages that mention something in the profile. Women are used to this spam, and may reject legitimate interest because they do not have a good enough spam filter.
Our service would be to provide an "I am not spamming" commitment: a flag that can be put in a message which signals "This is the only flagged message I have sent this week".
It would be a link you put in your message, which sends you to a site that basically says: "Yes, Bob (profile link) has only sent this flag to Alice (profile link) in the week of 2/20/14-2/26/14," with an explanation of how this works.
Do you think that would be a useful service to implement? Do you think people would actually use it, and receive it well?
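A minimal sketch of what the flag service could look like server-side, assuming nothing more than a record of each sender's last flag (the names and URL scheme are hypothetical):

```python
import datetime

class FlagService:
    """Toy sketch of the "I am not spamming" commitment described above.

    Each sender may attach at most one flag per rolling 7-day window; the
    flag page can then truthfully display "Bob has only flagged Alice this
    week". Storage, identity checks, and the URL scheme are all hypothetical.
    """
    def __init__(self):
        self._last_flag = {}  # sender -> (recipient, timestamp)

    def flag(self, sender, recipient, now=None):
        now = now or datetime.datetime.utcnow()
        prev = self._last_flag.get(sender)
        if prev and (now - prev[1]) < datetime.timedelta(days=7):
            raise ValueError(f"{sender} already used this week's flag on {prev[0]}")
        self._last_flag[sender] = (recipient, now)
        return f"https://example.invalid/flag/{sender}/{recipient}"  # link to paste into the message

service = FlagService()
print(service.flag("Bob", "Alice"))   # ok: returns the link to embed
# service.flag("Bob", "Carol")        # would raise: only one flag per week
```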
How do you pick a career if your goal is to maximize your income (technically, maximize the expected value of some function of your income)? The sort of standard answer is "comparative advantage", but it's unclear to me how to apply that concept in practice. For example how much demand there is for each kind of job is obviously very important, but how do you take that into consideration, exactly? I've been thinking about this and came up with the following. I'd be interested in any improvements or alternative ideas.
If you have a high IQ and are good at math, go into finance. If you have a high IQ and strong social skills but are bad at math, go into law. If you have a high IQ and a good memory but weak social and math skills, become a medical doctor. If you have a low IQ but are attractive, marry someone rich. If you have a very low IQ, get on government benefits for some disability and work at an under-the-table job.
In "The Fall and Rise of Formal Methods", Peter Amey gives a pretty good description of how I expect things to play out w.r.t. Friendly AI research:
... (read more)
Introduction
I suspected that the type of stuff that gets posted in Rationality Quotes reinforces the mistaken way of throwing about the word "rational". To test this, I set out to look at the first twenty rationality quotes in the most recent RQ thread. In the end I only looked at the first ten because it was taking more time and energy than would permit me to continue past that. (I'd only seen one of them before, namely the one that prompted me to make this comment.)
A look at the quotes
There might be an intended, implicit lesson here that would systematically improve thinking, but without more concrete examples and elaboration (I'm not sure what the exact mistake being pointed to is), we're left guessing what it might be. In cases like this where it's not clear, it's best to point out explicitly what the general habit of thought (cognitive algorithm) is that should be corrected, and how... (read more)
So I have the typical introvert/nerd problem of being shy about meeting people one-on-one, because I'm afraid of not being able to come up with anything to say and lots of awkwardness resulting. (Might have something to do with why I've typically tended to date talkative people...)
Now I'm pretty sure that there must exist some excellent book or guide or blog post series or whatever that's aimed at teaching people how to actually be a good conversationalist. I just haven't found it. Recommendations?
Responding to the interesting conversation context.
First, always bring pen and paper to any meeting or presentation that is in any way formal or professional. Questions always come up at times when it is inappropriate to interrupt; save them for lulls.
Second, an anecdote. I noticed I had a habit during meetings of focusing entirely on absorbing and recording information, and then processing and extrapolating from it after the fact (I blame spending years in the structured large-lecture undergrad technical environment). This habit of only listening and not providing feedback was detrimental in the working world; it took a lot of practice to start analyzing the information and extrapolating forward in real time. Once you start extrapolating forward from what you are being told, meaningful feedback comes naturally.
Here is another logic puzzle. I did not write this one, but I really like it.
Imagine you have a circular cake that is frosted on top. You cut a d-degree slice out of it and put it back rotated so that it is upside down. Now d degrees of the cake have frosting on the bottom, while 360 minus d degrees have frosting on the top. Rotate the cake d degrees, take the next slice, and put it back upside down. Now, assuming d is less than 180, 2d degrees of the cake will have frosting on the bottom.
If d is 60 degrees, then after you repeat this procedure (flip a slice, rotate) six times, all the frosting will be on the bottom. If you repeat the procedure twelve times, all of the frosting will be back on top of the cake.
For what values of d does the cake eventually get back to having all the frosting on the top?
Solution can be found in the comments here.
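If you'd rather experiment than reason it out, the procedure is easy to simulate for whole-number values of d. Here's a rough Python sketch (my own, not from the linked solution) that counts how many flips it takes until all the frosting is back on top:

```python
def steps_until_restored(d, max_steps=100_000):
    """Simulate the cake-flipping procedure for an integer number of degrees d.

    The cake is modelled as 360 one-degree arcs; True means frosting on top.
    Each step flips the slice covering arcs [0, d) (reversing its order and
    toggling each arc), then rotates the cake by d degrees so the next cut
    starts where this slice ended.  Returns the number of steps until every
    arc has frosting on top again, or None if max_steps is exceeded.
    """
    cake = [True] * 360
    for step in range(1, max_steps + 1):
        slice_, rest = cake[:d], cake[d:]
        flipped = [not side for side in reversed(slice_)]  # turn the slice upside down
        cake = rest + flipped  # flipping in place and then rotating by d is equivalent to this
        if all(cake):
            return step
    return None

if __name__ == "__main__":
    for d in (60, 90, 100, 135, 180):
        print(d, steps_until_restored(d))  # d=60 should print 12, matching the description above
```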
Someone was asking a while back for meetup descriptions, what you did/ how it went, etc. Figured I'd post some Columbus Rationality videos here. All but the last are from the mega-meetup.
Jesse Galef on Defense Against the Dark Arts: The Ethics and Psychology of Persuasion
Eric on Applications of Models in Everyday Life (it's good, but skip about 10-15 minutes when there's a herding-cats-nitpicky audience :P)
Elissa on Effective Altruism
Rita on Cognitive Behavioral Therapy
Don on A Synergy of Eastern and Western Approaches
Gleb on Setting and Achieving Goals
A question I'm not sure how to phrase to Google, and which has so far made Facebook friends think too hard and go back to doing work at work: what is the maximum output bandwidth of a human, in bits/sec? That is, from your mind to the outside world. Sound, movement, blushing, EKG. As long as it's deliberate. What's the most an arbitrarily fast mind running in a human body could achieve?
(gwern pointed me at the Whole Brain Emulation Roadmap; the question of extracting data from an intact brain is covered in Appendix E, but without numbers and mostly with hypothetical technology.)
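One crude way to start bounding it is channel by channel: (discrete actions per second) times (bits per distinguishable action). A back-of-envelope sketch for typing alone, where every number is an assumption picked for illustration rather than a measurement:

```python
import math

# All figures below are illustrative assumptions, not measurements.
keystrokes_per_second = 15   # assume a very fast typist's sustained rate
distinct_keys = 60           # assume ~60 reliably distinguishable keys

bits_per_keystroke = math.log2(distinct_keys)  # ~5.9 bits if keys are used uniformly
typing_bandwidth = keystrokes_per_second * bits_per_keystroke

print(f"typing alone: ~{typing_bandwidth:.0f} bits/s")  # on the order of 100 bits/s
```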
I noticed recently that one of the mental processes that gets in the way of my proper thinking is an urge to instantly answer a question then spend the rest of my time trying to justify that knee-jerk answer.
For example, I saw a post recently asking whether chess or poker was more popular worldwide. For some reason I wanted to say "obviously x is more popular," but I realized that I don't actually know. And if I avoid that urge to answer the question instantly, it's much easier for me to keep my ego out of issues and to investigate things properly...including making it easier for me to recognize things that I don't know and acknowledge that I don't know them.
Is there a formal name for this type of bias or behavior pattern? It would let me search up some Sequence posts or articles to read.
Here is a video of someone interviewing people to see if they can guess a pattern by asking whether or not a sequence of 3 numbers satisfies the pattern. (like was mentioned in HPMOR)
How do you know when you've had a good idea?
I've found this to actually be difficult to figure out. Sometimes you can google up what you thought. Sometimes checking to see where the idea has been previously stated requires going through papers that may be very very long, or hidden by pay-walls or other barriers on scientific journal sites.
Sometimes it's very hard to google things up. To me, I suppose the standard for "that's a good idea," is if it more clearly explains something I previously observed, or makes it easier or faster for me to do something. But I have no idea whether or not that means it will be interesting for other people.
How do you like to check your ideas?
An experiment with living rationally, by A J Jacobs, who wrote The Year of Living Biblically. I don't know how long he plans to try living rationally.
To illustrate dead-weight loss in my intro micro class I first take out a dollar bill and give it to a student and then explain that the sum of the wealth of the people in the classroom hasn't changed. Next, I take a second dollar bill and rip it up and throw it in the garbage. My students always laugh nervously as if I've done something scandalous like pulling down my pants. Why?
Because you are breaking the law?
Because it signals "I am so wealthy that I can afford to tear up money" and blatantly signaling wealth is crass. And it also signals "I am so callous that I would rather tear up money than give it to the poor", which is also crass. And the argument that a one dollar bill really isn't very much money isn't enough to disrupt the signal.
A little bit of How An Algorithm Feels From Inside:
Why is the Monty Hall problem so horribly unintuitive? Why does it feel like the two remaining doors are equally likely to hide the car (1/2 and 1/2) when actually they're not (1/3 and 2/3)?
Here are the relevant bits from the Wikipedia article:
...
Another data point is the counterintuitiveness of searching a desk: with each drawer you open while looking for something, the probability of finding it in the next drawer increases, but the probability of ever finding it decreases. The difference seems to whipsaw people; see http://www.gwern.net/docs/statistics/1994-falk
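One thing that helps the intuition catch up with the arithmetic is brute force. A quick Monte Carlo sketch of the Monty Hall game (my own, not from the article):

```python
import random

def monty_hall(trials=100_000, switch=True):
    """Monte Carlo estimate of the win rate for the stay or switch strategy."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # The host opens a door that is neither the contestant's pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

if __name__ == "__main__":
    print("stay  :", monty_hall(switch=False))  # ~1/3
    print("switch:", monty_hall(switch=True))   # ~2/3
```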
Does anyone have any advice about understanding implicit communication? I regularly interact with guessers and have difficulty understanding their communication. A fair bit of this has to do with my poor hearing, but I've had issues even on text based communication mediums where I understand every word.
My strategy right now is to request explicit confirmation of my suspicions, e.g., here's a recent online chat I had with a friend (I'm A and they're B):
A: Hey, how have you been?
B: I've been ok
B: working in the lab now
A: Okay. Just to be clear, do you mean t...
Posts that have appeared since you last read a page have a pinkish border on them. It's really helpful when dealing with things like open threads and quote threads that you read multiple times. Unfortunately, looking at one of the comments makes the site think you've read all of them. Clicking the "latest open thread" link just shows one of the comments. This means that, if you see something that looks interesting there, you either have to find the latest open thread yourself, or click the link and have it erase everything about what you have and haven't read.
Can someone make it so that looking at one of the comments doesn't mark all of them as read, or at least put a link to the open thread itself instead of just the comments?
Does anyone have advice on how to optimize the expectation of a noisy function? The naive approach I've used is to sample the function for a given parameter a decent number of times, average those together, and hope the result is close enough to stand in for the true objective function. This seems really wasteful though.
Most of the algorithms I'm coming across (like modelling the objective function with Gaussian process regression) would be useful, but are more high-powered than I need. Any simple techniques better than the naive approach? Any recommendations among sophisticated approaches?
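One simple family worth naming here is stochastic approximation. SPSA, for instance, spends only two noisy evaluations per iteration (regardless of dimension) instead of a large averaged batch per candidate point. A rough sketch, with illustrative rather than tuned constants:

```python
import numpy as np

def spsa_minimize(f, x0, iterations=500, a=0.1, c=0.1, alpha=0.602, gamma=0.101, seed=0):
    """Minimal SPSA sketch: estimate the gradient of a noisy objective from
    just two evaluations per step instead of averaging many samples per point."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k in range(1, iterations + 1):
        ak = a / k**alpha                               # step-size schedule
        ck = c / k**gamma                               # perturbation-size schedule
        delta = rng.choice([-1.0, 1.0], size=x.shape)   # random +/-1 perturbation directions
        g_hat = (f(x + ck * delta) - f(x - ck * delta)) / (2 * ck * delta)
        x = x - ak * g_hat
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    noisy_quadratic = lambda x: np.sum((x - 3.0) ** 2) + rng.normal(scale=0.5)
    print(spsa_minimize(noisy_quadratic, x0=[0.0, 0.0]))  # should end up close to [3, 3]
```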
I've been reading critiques of MIRI, and I was wondering if anyone has responded to this particular critique that basically asks for a detailed analysis of all probabilities someone took into account when deciding that the singularity is going to happen.
(I'd also be interested in responses aimed at Alexander Kruel in general, as he seems to have a lot to say about Lesswrong/Miri.)
Is there anything specific that he's said that's caused you to lose your faith? I tire of debating him directly, because he seems to twist everything into weird strawmen that I quickly lose interest in trying to address. But I could try briefly commenting on whatever you've found persuasive.
Possibly of interest: Help Teach 1000 Kids That Death is Wrong. http://www.indiegogo.com/projects/help-teach-1000-kids-that-death-is-wrong
(have not actually looked in detail, have no opinion yet)
I'd like to know where I can go to meet awesome people and make awesome friends. Occasionally, Yvain will brag about how awesome his social group in the Bay Area was. See here (do read it; it's a very cool piece). I'd like to also have an awesome social circle. As far as I can tell this is a two-part problem. The first part is having the requisite social skills to turn strangers into acquaintances and then turn acquaintances into friends. The second part is knowing where to go to find people.
I think that the first part is a solved problem, if you want to l...
How To Be A Proper Fucking Scientist – A Short Quiz. From Armondikov of RationalWiki, in his "annoyed scientist" persona. A list of real-life Bayesian questions for you to pick holes in the assumptions of^W^W^W^W^W^Wtest yourselves on.
Richard Loosemore (score one for nominative determinism) has a new, well, let's say "paper" which he has, well, let's say "published" here.
His refutation of the usual uFAI scenarios relies solely/mostly on a supposed logical contradiction, namely (to save you a few precious minutes) that a 'CLAI' (a Canonical Logical AI) wouldn't be able to both know about its own fallibility/limitations (inevitable in a resource-constrained environment such as reality), and accept the discrepancy between its specified goal system and the creators' actu...
I said as far as I know. I had not read the paper because I don't have a very high opinion of Loosemore's ideas in the first place, and nothing you've said in your G+ post has made me more inclined to read the paper, if all it's doing is expounding the old fallacious argument 'it'll be smart enough to rewrite itself as we'd like it to'.
Name three.
Spritz seems like a cool speed reading technique, especially if you have or plan on getting a smart watch. I have no idea how well it works, but I am interested in trying, especially since it does not take a huge training phase. (Click on the phone on that site for a quick demo.)
Textcelerator is another speedreading app by User:jimrandomh.
Low priority site enhancement suggestion:
Would it be possible/easy to display the upvotes-to-downvotes ratios as exact fractions rather than rounded percentages? This would make it possible to determine exactly how many votes a comment received without digging through the page source, which would be nice for quickly distinguishing a mildly controversial comment from an extremely controversial one.
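For what it's worth, the arithmetic does work out: from a net score s = up - down and an exact fraction p = up/(up+down) you can recover both counts (it only breaks down when p is exactly 1/2). A small sketch:

```python
from fractions import Fraction

def vote_counts(score, positive_fraction):
    """Recover (upvotes, downvotes) from a net score and an exact
    positive-vote fraction.  Undefined when the fraction is exactly 1/2."""
    p = Fraction(positive_fraction)
    total = Fraction(score) / (2 * p - 1)  # since score = (2p - 1) * total
    up = p * total
    return int(up), int(total - up)

print(vote_counts(6, Fraction(3, 4)))  # (9, 3): a +6 comment at exactly 75% positive
```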
SMBC on genies and clever wishers. Of course, the most destructive wish is hiding under the red button.
My eye doctor diagnosed closed-angle glaucoma and recommends an iridectomy. I think he might be a bit too trigger-happy, so I followed up with another doctor, and she didn't find the glaucoma. She was careful to state that the first diagnosis could still be correct, since the first examination was more complete.
Any insights about the pros and cons of iridectomy?
Get a third independent opinion.
Proof by contradiction in intuitionistic logic: deriving a contradiction from ¬P yields only ¬¬P, which says only that there is no proof that proofs of P are impossible; it does not yield a proof of P.
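Spelled out as inference rules (my phrasing, not the commenter's):

```latex
\[
  \frac{\Gamma,\ \neg P \vdash \bot}{\Gamma \vdash \neg\neg P}
  \;(\neg\text{-introduction, intuitionistically valid})
  \qquad
  \frac{\Gamma \vdash \neg\neg P}{\Gamma \vdash P}
  \;(\text{double-negation elimination, classical only})
\]
```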
What is the best textbook on datamining? I solemnly swear that upon learning, I intend to use my powers for good.
So, MtGox has declared bankruptcy. Does that make this a good time, or a bad time to invest in Bitcoins? And if a good time, where is the best place to buy them?
I’m basically exactly the kind of person Yvain described here (minus the passive-aggressive/Machiavellian phase). I notice that that post was sort of a plea for society to behave a different way, but it did not really offer any advice for rectifying the atypical attachment style in the meantime. And I could really use some, because I’ve gotten al-Fulani’d. I’m madly in love with a woman who does not reciprocate. I’ve actually tried going back on OkCupid to move on, and I literally cannot bring myself to message anyone new, as no one else approaches...
Seems to me like you want to overcome your "one-itis" and stop being a "beta orbiter", but you are not looking for advice that would actually use words like "one-itis" and "beta orbiter". I know it's an exaggeration, but this is almost how it seems to me. Well, I'll try to comply:
1) You don't have to maximize the number of sexual partners. You could still try to increase the number of interesting women you have had interesting conversations with. I believe that is perfectly morally okay, and it could still reduce the feeling of scarcity.
Actually, any interesting activity would be helpful. Anything you can think about, instead of spending your time thinking about that one person.
2) Regularly interacting with the person you are obsessed with is exactly how you maximize the length of the obsession. It's like saying that you want to overcome your alcohol addiction, but you don't want to stop drinking regularly. Well, if one is not...
One common rationality technique is to put off proposing solutions until you have thought (or discussed) a problem for a while. The goal is to keep yourself from becoming attached to the solutions you propose.
I wonder if the converse approach of "start by proposing lots and lots of solutions, even if they are bad" could be a good idea. In theory, perhaps I could train myself to not be too attached to any given solution I propose, by setting the bar for "proposed solution" to be very low.
In one couples counseling course that I went thr...
What do you do when you're low on mental energy? I have had trouble thinking of anything productive to do when my brain seems to need a break from hard thinking.
This is one of those times I wish LW allowed explicit politics. SB 1062 in AZ has me craving interesting, rational discussion on the implications of this veto.
Just a thought:
A paperclip maximizer is an often used example of AGI gone badly wrong. However, I think a paperclip minimizer is worse by far.
In order to make the most of the universe's paperclip capacity, a maximizer would have to work hard to develop science, mathematics and technology. Its terminal goal is rather stupid in human terms, but at least it would be interesting because of its instrumental goals.
For a minimizer, the best strategy might be to wipe out humanity and commit suicide. Assuming there are no other intelligent civilizations within our cos...
Somebody outside of LW asked how to quantify prior knowledge about a thing. While googling I came across a mathematical definition of surprise, as "the distance between the posterior and prior distributions of beliefs over models". So, high prior knowledge would lead to low expected surprise upon seeing new data. I didn't see this formalization used on LW or the wiki; perhaps it is of interest.
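As a concrete illustration of that definition (my own sketch, not from the source I found), here is the KL divergence from prior to posterior for a Beta-Bernoulli model; a prior with lots of pseudo-counts barely moves, so the same data registers as less surprising:

```python
from scipy.special import betaln, digamma

def kl_beta(a1, b1, a0, b0):
    """KL divergence KL( Beta(a1, b1) || Beta(a0, b0) )."""
    return (betaln(a0, b0) - betaln(a1, b1)
            + (a1 - a0) * digamma(a1)
            + (b1 - b0) * digamma(b1)
            + (a0 - a1 + b0 - b1) * digamma(a1 + b1))

def bayesian_surprise(prior_a, prior_b, heads, tails):
    """Surprise of coin-flip data under a Beta prior: KL(posterior || prior)."""
    return kl_beta(prior_a + heads, prior_b + tails, prior_a, prior_b)

print(bayesian_surprise(1, 1, 8, 2))    # vague prior: larger surprise
print(bayesian_surprise(50, 50, 8, 2))  # confident prior near 0.5: smaller surprise
```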
Speaking of the LW wiki, how fundamental is it to LW compared to the sequences, discussion threads, Main articles, hpmor, etc?
I'm curious about usage of commitment tools such as Beeminder: What's the income distribution among users? How much do users usually wind up paying? Is there a correlation between these?
(Selfish reasons: I'm on SSI and am not allowed to have more than $2000 at any given time. Losing $5 is all but meaningless for someone with $10k in the bank who makes $5k each month, whereas losing $5 for me actually has an impact. You might think this would be a stronger incentive to meet a commitment, but really, it's an even stronger incentive to stay the hell away from...
I've always wanted to know how the Chinese chose the names of their dynasties.
My psychologist said today that there is some information that should not be known. I replied that rationalists believe in reality. There might be information they don't find interesting (e.g. not all of you would find children interesting), but refusing to accept some information would mean refusing to accept some part of reality, and that would be against the belief in reality.
Since I have recently been asking myself the questions "why do I believe what I believe" and "what would happen if I believed otherwise than what I believe" (I'...
Spritzing got me quite excited! The concept isn't new, but the variable speed (pauses after punctuation marks) and quality visual cues really work for me, in the demo at least. Don't let your inner voice slow you down!
Disclaimer: No relevant disclosures about spritzing (the reading method, at least).