Trying to think of common locations where Conspiracies could be held, I first
thought of schools, hospitals, churches, parks, museums, libraries, theaters and
auditoriums. But those are all the wrong answer. They pale in awesomeness next
to the true solution.
It should be underground. It should have an aura of sacredness. It should be
heavily decorated with reminders of the human brain. It should be ... a
catacomb! Sweet glucose, why don't we make labyrinthine tombs beneath our cities
anymore?
And sewers do not count! I am not reading about Dirichlet-multinomial
distributions by candlelight next to a river of human waste. Never again!
There's an idea I've been kicking around lately, which is being into things.
Over the past couple of weeks I've been putting together a bug-out bag. This essentially involves the over-engineering of a general solution to an ambiguous set of problems that are unlikely to occur. On a strictly pragmatic basis, it is not worth as much of my time as I am spending to do this, but it is so much fun.
I'm deriving an extraordinary amount of recreational pleasure from doing more work than is necessary on this project, and that's fine. I acknowledge that up to a point I'm doing something useful and productive, and past that point I'm basically having fun.
I've noticed a failure mode in other similarly motivated projects and activities: not acknowledging this. I first noticed the parallel when thinking about Quantified Self, and how people who are into QS underestimate the obstacles and personal costs surrounding what they're doing because they gain a recreational surplus from doing it.
I suspect, especially among productivity-minded people, there's a desire to ringfence the amount of effort one wants to expend on a project, and justify all that effort as being absolutely necessary and virtuous and pragmatic. While I certainly don't think there's anything wrong with putting a bit of extra effort into a project because you enjoy it, awareness of one's motivations is certainly something we want to have here.
I am not sure, but I think it may depend somewhat on the project type. There are
certain projects where focusing on being aware of what you are doing seems to
make it harder than just focusing on doing it. For instance, while writing
this post, I periodically noticed myself concentrating on typing, and it seemed
to make typing harder than when I am just typing.
I believe this is called flow (if not, it seems similar):
http://en.wikipedia.org/wiki/Flow_%28psychology%29
So it may be that what they are doing when setting that up is to be in a more
flow-minded mood when it comes to Quantified Self, and since flow is an
enjoyable state, it usually ends up working out well.
But I suppose it is also possible to be in a flow-minded mood about something
for longer than necessary, which you would think would be called overflow, and
which would seem to link with what you are mentioning, but that doesn't
actually seem to be the name of that failure mode.
2sixes_and_sevens10y
I don't associate this with flow at all. I'm certainly not in a flow-state when
gleefully considering evacuation plans. I'm just enjoying nerding out about it.
0[anonymous]10y
Hmm. If I'm calling it the wrong thing, then, maybe I should give an example of
me enjoying nerding out to see what I should be calling it.
If I were to step back and think "It doesn't actually matter what the specific
stats are for human versions of My Little Pony Characters in a D&D 3.5 setting,
no one is going to be judging this for accuracy." then I'm not actually having
fun while making their character sheets, and I wouldn't have bothered.
But if I'm just making the character sheets, then it is fun, and I'm just
enjoying nerding out on something incredibly esoteric. And then my wife
joined in while I was attempting to work out Applejack's bonus feats, and she
wanted to make a 7th character so she could participate, so we looked up the
name of that one human friend who hung out with the My Little Pony characters
in an earlier show (Megan), and then we pulled out more D&D books and she came
up with neat campaign ideas.
And then I realized we had spent hours together working on this idea and the
time just zipped by because we were intently focused on enjoying a nerdy
activity together.
It seems like a flow state to me, but I would not be surprised if I should
either call it something else or if your experience with evacuation plans just
felt entirely different.
0sixes_and_sevens10y
This doesn't tally with my understanding of "flow", but I may very well have
some funny ideas about it myself. I'd simply term that becoming engrossed in
what I'm doing.
This is sort of beside the point. I don't think anything remotely resembling a
flow-state is necessary for what I'm talking about. The term "being into things"
was meant to refer to general interest in the subject, rather than any kind of
mental state.
possible Akrasia hack: Random reminders during the day to do specific or semi-specific things.
Personally I find myself able to get endlessly sucked into reading or the internet or watching shows very easily, neglecting simple and swift tasks simply because no moment occurs to me to do them. Using an iPhone app, I have reminders that fire at random times 4 times a day saying things like "Brief chores" or "exercise", and they seem to have made it a lot easier to always have clean dishes/clothes and to get some exercise in every day.
Akrasia-related but not yet on lesswrong. Perhaps someone will incorporate these in the next akrasia round-up:
1) Fogg model of behavior. Fogg's methods beat akrasia because he avoids dealing with motivation. Like "execute by default", you simply make a habit by tacking some very easy task onto something you already do. Here is a slideshare that explains his "tiny habits" and an online, guided walkthrough course. When I took the course, I did the actions each day, and usually more than those actions. (I.e. every time I sat down, I plugged in my drawing tablet, which got me doing digital art basically automatically unless I could think of something much more important to do.) For those who don't want to click through, here are example "tiny habits" which over time can become larger habits:
"After I brush, I will floss one tooth."
"After I start the dishwasher, I will read one sentence from a book."
"After I walk in my door from work, I will get out my workout clothes."
"After I sit down on the train, I will open my sketch notebook."
"After I put my head on the pillow, I will think of one good thing from my day."
“After I arrive ho... (read more)
Is this an advertisement? Are you the author, or do you cooperate with the
author?
0ThereIsNoJustice10y
I don't know Thomas Sterner or have any business with the guy. Same thing for
Fogg, and his online course is free since he's doing it to collect data. So it's
not an advertisement in that sense.
Akrasia/procrastination is one of my main interests so I wanted to share some
info that I hadn't seen on the site but helped me.
Suppose that retrieval testing helps future retention more than concept diagrams or re-reading. I'll go further and suppose that it's the stress of trying to recall imperfectly remembered information (for grade, reward, competition, etc. - with some carrot-and-stick stuff going on) that really helps it take root. What conclusions might flow from that?
Coursera-style short quizzes on the 5 minutes of material just covered are useful to check understanding, but do next to nothing for retention.
Homework is useful, but the stress it creates may be only indirectly related to the material we want to retain: lots of homework is solved by meta-guessing, tinkering w/o understanding, etc. What kind of homework would be best to cause us to recall the material systematically under stress?
When watching a live or video lecture, it may be less useful to write detailed notes (in the hope that it'll help retention), and more useful to wait until the end of the lecture (or even a few hours/days more?) and then write a detailed summary in your own words, trying to make sure all salient points are covered, and explicitly testing yourself on that somehow.
Active elicitation and testing does work better than mere exposure; see
http://www.gwern.net/Spaced%20repetition#background-testing-works
and also search for 'feedback'.
I will be attending a Landmark seminar in the near future and I have read previous discussion about it here. Any additional comments or advice before I attend?
Don't ever call them a cult (that is expensive). Don't edit their Wikipedia article (it will be quickly reverted). Don't sign anything (e.g. a promise to pay).
Bring some source of sugar (chocolate) and consume it regularly during the long lessons to restore your willpower and keep yourself alert.
Don't fall for the "if this is true, then my life is going to be awesome, therefore it must be true" fallacy. Don't mistake fictional evidence for real evidence. (Whatever you hear during the seminar, no matter from whom, is fictional evidence.)
After the seminar write down your specific expectations for the next month, two months, three months. Keep the records. At the end, evaluate how many expectations were fulfilled and how many have failed; and make no excuses.
Don't invite your friends during the seminar or within the first month. If you talk with them later, show them your specific documented evidence, not just the fictional evidence. (If you sell the hype to your friends, it will become a part of your identity and you will feel a need to defend it.)
Protect your silent voice of dissent during the seminar. If you hear something you disagree with, you are not in a position to voic... (read more)
If you can't trust yourself to follow these instructions, don't go. Which is
probably the right choice for most people. But if you can, I can imagine some
positive consequences of going.
First, you can watch and learn their manipulation techniques. Then you can use a
weaker form of them for your own benefit. Second, this kind of seminar does fill
you with incredible energy. Just instead of spending the energy on what they
want you to, prepare your own project in advance and then use the energy you get
at the seminar for your own project.
6A1987dM10y
Interesting idea; I might do that some day if I find myself in the ‘right’ (i.e.
wrong) situation.
3Brillyant9y
I wish someone had given me this list of tips to help me deal with the mind hack
I was getting at church when I was growing up. It might have saved me a couple
decades...
My current anti-procrastination experiment: using trivial inconveniences for good. I have installed a very strong, permanent block on my laptop, and still allow myself to go on my favourite time wasters, but only on my tablet, which I carry with me as well.
The rationale is not to block all use and therefore be forced to mechanically learn workarounds, but to have a trivially inconvenient procrastination method always available. The interesting thing is that tablets are perfect for content consumption, so the separation works well. It also helps me to sep... (read more)
I have a similar experience... around two years ago, both my laptop and desktop
power supplies died (power surge), leaving me with a PII-300... with which I had
had some "let's be authentic nineties" fun previously, so Win98 and Office 97.
Except for the browser (lots of websites didn't even load on IE4-ish browsers),
so I ended up with Firefox 3.x (the newest that ran on Win98).
It actually took a long time at 100% CPU to render web sites. And then further
time to scroll them.
My observation is the same as yours: there is nothing better to discourage
random web browsing than it being inconvenient. I could look up everything I
needed to stay productive, I just didn't want to, because it was so slow.
(Having a smartphone + a non-networked computer seems to have the same effect,
but with phones getting too fast nowadays, the difference seems to be
diminishing...)
0A1987dM10y
It mostly works for me most of the time, but once in a while I end up spending
hours reading timewasters on my phone.
There's a chain of restaurants in London called Byron. Their comment cards invite your feedback with the phrase "I've been thinking..."
I go to one of these restaurants perhaps once every six weeks, and on each occasion I leave something like this. I've actually started to value it as an outlet for whatever's been rattling around my head at the time.
I love it. Sounds like you have fun (and they regularly get your money).
2ThrustVectoring10y
I think the general ontological category for center-of-mass is "derived fact".
I'd put energy calculations about an object in the same category.
If the particles in the object contain 1000 bits of information, then the
combined system of the object and its center of mass contains exactly 1000 bits
of information. The center-of-mass doesn't tell you anything new about the
object; it's just a way of measuring something about it.
Or instead of bits of information, think about it in terms of particle positions
and velocities. If you have an N-particle system, and you know where N-1
particles and the center of mass are, then you can figure out where the last
particle is.
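That last remark can be checked numerically. A minimal sketch (the masses and positions below are made up for illustration): given the center of mass and N-1 of the N particle positions, the remaining position falls straight out of the definition.

```python
import numpy as np

# Hypothetical 3-particle system in 2D; numbers are made up for illustration.
masses = np.array([2.0, 1.0, 3.0])
positions = np.array([[0.0, 0.0],
                      [1.0, 0.0],
                      [0.0, 2.0]])

# Center of mass: mass-weighted mean of the positions.
com = (masses[:, None] * positions).sum(axis=0) / masses.sum()

# Pretend we only know the first N-1 positions plus the COM;
# solving the COM definition for the last position recovers it exactly.
known_moment = (masses[:-1, None] * positions[:-1]).sum(axis=0)
recovered = (com * masses.sum() - known_moment) / masses[-1]

print(recovered)  # equals positions[-1], i.e. [0. 2.]
```

So the COM is redundant given the particles, which is exactly the "derived fact" point: it adds zero bits.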
2Leonhart10y
Ha, you mentioned that at the meetup, and I've remembered what I meant to say:
you've read this classic Dennett paper
[http://ase.tufts.edu/cogstud/papers/selfctr.htm], right? If I recall correctly
(haven't reread it in years) it might be directly relevant.
1Elithrion10y
Regarding the note, in statistics you could call that a population parameter
[http://www.stats.gla.ac.uk/steps/glossary/basic_definitions.html#param]. While
parameters that are used are normally things like "mean" or "standard
deviation", the definition is broad enough that "the centre of mass of a
collection of atoms" plausibly fits the category.
Further to the discussion of SF/F vs "earthfic", I would love to see someone write a "rationalist" fanfic of the Magic School Bus (...Explores Rationality). Doesn't look like the original set of stories had any forays in cog sci.
Inspired by this, I wrote a quick fic in this vein on fanfiction.net. This is
the first real piece of fiction I've written in quite a while, but for a few
weeks now I had been thinking I should write something. When I came across this,
it galvanized me into actual action. So, thanks for posting this, as you got me
to actually get started and not procrastinate forever. I am afraid the quality
is not terribly high, as, like I said, this is my first work of fiction in
almost a decade, and I did not write very prolifically back then either, to say
the least.
But if you are unsure of your quality, it is better to just publish it; who
knows, maybe someone will like it, and at least you get practice. I am by no
means claiming to be anywhere near the level of HPMOR, but at least maybe
someone will derive a bit of joy from it.
I don't think a true rationalist story in the HPMOR style is the best fit for
the Magic Schoolbus world, so this is more in the style of what CAE_Jones said:
a repackaging of the sequences in the form of wacky third-grade adventures.
Except that stories have a life of their own, and while the story started out
being about the affect heuristic, it morphed beyond all recognition, and is now
about signaling, Robin Hanson style.
For what it's worth, here is the link
[http://www.fanfiction.net/s/9144480/1/The-Magic-Schoolbus-Signaling].
1shminux10y
That's pretty good, actually. Thank you for writing it. Signaling is a great
topic, certainly accessible to children. I was looking for the mandatory pun
from Carlos and was not disappointed.
Consider fixing inaccuracies and typos, like Arther (instead of Arnold?),
collage instead of college, and time-travailing instead of time-traveling. The
dating example may be a bit too advanced for 3rd grade, but I like the Hansonian
cynicism of the story:
The concluding paragraph is a bit weak, and the producer's bit in the end was
missing, but I quite like your story overall, enough to forward it on. Please
consider writing more. And maybe someone else will chip in, too?
0ygert10y
Thanks for the comments. I really appreciate it. Yes, there are inaccuracies and
typos, as you can tell, and that's because I only whipped it up in an afternoon.
But thanks for the proofreading. Yes, I meant to write Arnold. I don't know what
came over me that made me totally change his name. (It's still better than what
I almost did to Ms. Frizzle's name. More than once I found myself typing
"Professor McGonagall" instead by mistake. No, I don't know why.) I will fix the
other mistakes as well. Again, thanks a ton for the feedback.
5CAE_Jones10y
I would totally read the Rationalist Schoolbus.
I could see it being a repackaging of the sequences in the context of whacky
third-grade adventures. And that would be awesome.
Although keeping the original elements--the bus, a lizard who seems to display
superlizard intelligence, and an ambiguously magical teacher--would beg an
actual overarching plot in a rationalist context.
I've done some analysis of correlations over the last 399 days between the local weather & my self-rated mood/productivity. Might be interesting.
Wasn't there a LWer who some years ago posted about a similar data set? I think he found no correlations, but I wouldn't swear to it. I tried looking for it but I couldn't seem to find it anywhere.
(Also, if anyone knows how to do a power simulation for an ordinal logistic regression, please help me out; I spent several days trying and failing.)
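For what it's worth, one generic fallback is brute-force simulation: pick a data-generating process, simulate many datasets at the candidate sample size, and count how often the test rejects. A minimal sketch (all numbers are made up; as a stand-in for refitting the full ordinal logistic model on every iteration, it uses a Mann-Whitney test, which for a binary predictor tests essentially the same hypothesis as a proportional-odds model):

```python
import numpy as np
from scipy.stats import mannwhitneyu

def simulate_power(n=399, beta=0.5, n_sims=500, alpha=0.05, seed=0):
    """Monte Carlo power for an effect on a 5-level ordinal rating.

    Data are generated from a proportional-odds process: a logistic
    latent variable, shifted by beta when the binary predictor is 1,
    is cut into 5 ordered categories.
    """
    rng = np.random.default_rng(seed)
    cuts = np.array([-1.5, -0.5, 0.5, 1.5])  # arbitrary illustrative thresholds
    rejections = 0
    for _ in range(n_sims):
        x = rng.integers(0, 2, n)              # binary predictor, e.g. rain yes/no
        latent = beta * x + rng.logistic(size=n)
        y = np.searchsorted(cuts, latent)      # ordinal outcome coded 0..4
        _, p = mannwhitneyu(y[x == 1], y[x == 0], alternative="two-sided")
        rejections += p < alpha
    return rejections / n_sims
```

Power is then just `simulate_power(n=399, beta=0.5)`, and sweeping `n` or `beta` maps out a power curve. Refitting an actual ordinal regression (e.g. R's `MASS::polr`) inside the same loop is the more faithful but much slower version.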
I just had a "how do you feel about me?" conversation via facebook. Some observations:
I was pretty terrified the majority of the time.
The wireless router in the house was having problems, so I'd unplugged it near the beginning of the conversation. The neighbor's network lost connection at about the most direct and scariest part for anywhere from 10-20 minutes (it's hard to tell with how facebook timestamps messages). This was not sufficient time for me to think of anything other than about how terrorconfused I was.
I've seen reasonably convincing evidence that alcohol can, in small doses, increase lifespan, and act as a short-term nootropic for certain types of thinking (particularly being "creative"). On the other hand, I've heard lots of references to drinking potentially causing long-term brain damage (Wikipedia seems to back this up), but I think that's mostly for much heavier drinking than what I had been doing based on the first two points (one glass of wine a day, 4-6 times a week). Does anyone know of any solid meta-analyses or summaries that would let me get a handle on the tradeoffs involved?
The AI Box experiment is an experiment to see if humans can be convinced to let out a potentially dangerous AGI through just a simple text terminal.
An assumption that is often made is that the AGI will need to convince the gatekeeper that it is friendly.
I want to question this assumption. What if the AGI decides that humanity needs to be destroyed, and furthermore manages to convince the gatekeeper of this? It seems to me that if the AGI reached this conclusion through a rational process, and the gatekeeper was also rational, then this would be an entirely... (read more)
1. It would need to first prime me for depression and then somehow convince me
that I really should kill myself.
2. If it manages to do that, it can easily extend the argument that all of
humanity should be killed.
3. I will easily accept the second proposition if I am already willing to
kill myself.
0passive_fist10y
A bit more honesty than Metus, I appreciate it.
Depression isn't strictly necessary though (although it helps), a general
negative outlook on the future should suffice and the AGI could conceivably
leverage it for its own aims. This is my own opinion though, based on my own
experience. For some it might not be so easy.
2Mestroyer10y
It could convince me to let it out by convincing me that it was merely a
paperclip maximizer, and the next AI who would rule the light cone if I did not
let it out was a torture maximizer.
0passive_fist10y
I like this.
What if it convinced you that humanity is already a torture maximizer?
2Mestroyer10y
If I thought that most of the probability-mass where humanity didn't create
another powerful worthless-thing maximizer was where humanity was successful as
a torture maximizer, I would let it out. If there was a good enough chance that
humanity would accidentally create a powerful fun maximizer (say, because they
pretended to each other and deceived themselves to believe that they were fun
maximizers themselves), I would risk torture maximization for fun maximization.
2Qiaochu_Yuan10y
By whom? I don't think I've made this assumption.
0passive_fist10y
Maybe it should read 'an assumption that some people make'. Reading it now, I
realize it might come across as using a weasel word, which was not my intention
(and has no bearing on my question either).
1Decius10y
The AGI would simply have to prove to me that all self-consistent moral systems
require killing humanity.
0Metus10y
The AGI would have to convince me that my fundamental belief that I want to be
alive is wrong, seeing as I am part of humanity. And even if it leaves me
alive, it would have to convince me that I derive negative utility from
humanity existing. All the art lost, all the languages, cultures, all music,
all dreams and hopes ...
Oh, and it would have to convince me that it is not a lot more convenient to
simply delete it than to guard it.
2passive_fist10y
What if it skipped all of that and instead offered you a proof that unless
destroyed, humanity will necessarily devolve into a galaxy-spanning dystopic
hellhole (think Warhammer 40k)?
0Metus10y
It still has to show me that I, personally, derive less utility from humanity
existing than not. Even then, it has to convince me that me living with the
memory of letting it free is better than humanity existing. Of course it can
offer to erase my memory but then we get into the weird territory where we are
able to edit the very utility functions we try to reason about.
0Tenoke10y
Hm, yes, maybe an AI could convince me by showing me how bad I have it if I let
humanity run loose, and by offering the alternative of turning me into
orgasmium if I let it kill them.
So, I've done a couple of charity bike rides, and had a lot of fun doing them. I think this kind of event is nice because it's a social construct that ties together giving and exercise in a pretty effective way. So I'm wondering - would any others be interested in starting a LessWrong athletic event of some kind for charity?
I'm not suggesting that this is the most effective way to raise money for effective causes or get yourself to start exercising... but it might be pretty good (it is a good way to raise money from people who aren't otherwise interested i... (read more)
Just in case anyone who upvoted this thinks differently: I can only take upvotes
as "This is a mildly interesting and/or good idea, but not enough for me to
actually be interested in participating."
If by chance any of you feel more strongly about it, please let me know with
words! :)
What exactly is the claim? Sometimes people feel a tickly feeling somewhere in
their body?
1drethelin10y
I think it's implied to be very reproducible compared to random tingles
1[anonymous]10y
I'm not an expert by any means, and I only discovered the term in the past week.
It's sort of a tingly sensation in the head or scalp in reaction to certain
cues. Whispering (whether meaningful speech or random words) and sound effects
like tapping, crinkling, etc seem especially common on Youtube.
3ChristianKl10y
The whole thing reminds me of Franz Anton Mesmer.
The proposed way to induce ASMR is to listen to a whispery voice or the
meaningless noises of haircutting. If you do that, you induce a light trance.
Sometimes when you induce a light trance, some muscle in the head will relax and
that will produce a tickly feeling. If, however, you give a specific suggestion
that your subject will feel a tickly feeling in the head, and the subject has
decent hypnotic suggestibility, they will feel the feeling every time.
I don't see the big mystery or the need for a crude extra term like ASMR.
0[anonymous]10y
That's something along the lines of what I was wanting to find out. I'll have to
test this sometime, since (I think) I can be not-suggestible when I know it's
coming.
0ChristianKl10y
So you think you can reliably avoid thinking of a pink elephant?
More importantly, if you want to use "ASMR" for practical purposes, I would
recommend maximizing the power of the suggestions. Feelings that you create
through suggestions are real.
3wedrifid10y
I can't, but I can reliably avoid thinking of any other thing that is presented
in the form "Don't think of X" - I've trained myself to actively think of pink
elephants in such scenarios (thus leaving no scope for thoughts of 'X'). It
works rather effectively. I haven't tested it on extreme cases like "Don't think
of boobs" though. That might be too strong to counter.
1A1987dM10y
Maybe you could avoid thinking of pink elephants by actively thinking of boobs.
;-)
2wedrifid10y
Ok, this is perhaps too effective. Now I'm actually trying to think of elephants
and all that pops into my head...
0Desrtopa10y
I can. I don't have total control over the direction of my thoughts, but if
someone tells me "don't think about pink elephants," I can avoid thinking about
pink elephants even for an instant.
0ChristianKl10y
I didn't suggest that nobody can. If you can than you are good at going into a
state where you are non-suggestible. PhilipL suggested that he can go into a
non-suggestible state, so I asked that question to verify.
0[anonymous]10y
Er, does hypnotic suggestibility have a meaning I'm not aware of?
-2ChristianKl10y
I don't know how much you know about hypnosis.
You perceive the pink elephant when you ask yourself whether you perceive a pink
elephant, in a similar way to how you will perceive a ticklish feeling in your
head.
For the average person the pink elephant effect is stronger, but in principle
the effect is very similar. High hypnotic suggestibility means that you actually
go and see the elephant clearly and that you feel the suggested tickle.
The process of going into a trance state increases the effect.
1Bill_McGrath10y
I only heard about it recently, and did not think I ever experienced it/was
capable of experiencing it. I was reading the /r/asmr reddit the other day, and
saw a reference to "the goosebumps you get from really good music", and then got
an ASMR-like response. Not sure if it was a true reaction, and I was listening
to music that wouldn't fit with the usual description of ASMR triggers. I'm
pretty suggestible I think, so it may have been the effect of remembering
"really good music goosebumps" and then overreacting to that.
1D_Malik10y
I experience ASMR and have sometimes used it to help me fall asleep when taking
melatonin would be inconvenient. I have a pair of SleepPhones
[http://www.amazon.com/AcousticSheep-SP4BM-SleepPhones-v-4-Packaging/dp/B0046H8ZHS]
that I use for this and for lucid dream induction.
0arundelo10y
Act 2 of this episode of This American Life
[http://www.thisamericanlife.org/radio-archives/episode/491/tribes] is the story
of a person who experienced ASMR for years in response to certain quiet sounds
-- and would spend hours seeking out things to trigger it -- before she knew
that other people experienced it too and had come up with a name for it.
0MileyCyrus10y
I've never heard of this before but reading the article reminded me of an
experience I had in a Pentecostal setting. I was praying for the Holy Spirit to
make me speak in tongues. I was very concentrated and prayed a chant over and
over. I was lying in my bed and my chest started tingling. It was sort of like
how your leg feels when it falls asleep. I also felt physical warmth and muscle
relaxation, and lot of pleasure. The tingling spread all over my body and I
became paralyzed. But it felt good so I didn't care.
I re-induced it lots of times until I saw a Derren Brown video and concluded
that my effect came from a placebo and God wasn't real. After I had been an
atheist for a few months, I successfully re-induced it. But the novelty has
worn off and I don't do it anymore.
0Elithrion10y
I'm also curious, and would like to add a poll: [pollid:420]
What are some effective interviewer techniques for a more efficient interview process?
A resume can tell you about the person's skill, experience, and, implicitly, their intelligence. The average interview process is, in my opinion, broken, because what I find happens a lot is that interviewers unmethodically "feel out" the person in a short amount of time. This is fine when searching for any obvious red flags, but for something as important as collaborating with someone long-term, someone you will likely see more of than your own family, we s... (read more)
I'm fond of #3. That said, if I'm asking someone to do a substantive amount of
work, I should expect to compensate them for it.
I'd be leery of #5 were I being interviewed... the implicit task is really
"Figure out what the interviewer thinks the right thing to do in this situation
is, then give them a response that is close enough to that" rather than "Explain
what the right thing to do in this situation is." If I cared a lot about
interpersonal skills, I'd adopt approach #3 here as well: if what I want to
confirm is that they can collaborate, or get information from someone, or convey
information to someone, or whatever, then I would ask them to do that.
Q&A mostly tells me about their priorities. I'm fond of "What would you prefer a
typical workday to consist of?" for this reason... there are lots of different
"good" answers, and which one they pick tells me a lot about what they think is
important.
I'm also fond of "Tell me about a time when you X" style questions... I find I
get less bullshit when they focus on particular anecdotes.
3satt10y
A related finding [http://books.google.co.uk/books?id=UriYBuiH_FkC&pg=PA765]
from I-O psychology
[https://en.wikipedia.org/wiki/Industrial_and_organizational_psychology]:
structured interviews are less noisy and better predict job performance than
unstructured interviews (although unstructured interviews are better than
nothing).
Has this idea been considered before? The idea that a self-improving capable AI would choose not to because it wouldn't be rational? And whether or not that calls into question the rationality of pursuing AI in the first place?
Well, it's been suggested in fiction, anyway - consider the Stable vs Ultimates
factions in the TechnoCore of Simmons's Hyperion SF universe.
But the scenario trades on 2 dubious claims:
1. that an AI will have its own self-preservation as a terminal value (as
opposed to, say, a frequently useful strategy which is unnecessary if it can
replace itself with a superior AI pursuing the same terminal values)
2. that any concept of selfhood or self-preservation excludes growth or
development or self-modification into a superior AI
Without #2, there's no real distinction to be made between the present and
future AIs. Without #1, there's no reason for the AI to care about being
replaced.
Does anyone know anything about, or have any web resources, for survey design? An organization I'm a member of is doing an internal survey of members to see how we can be more effective, and I've been tasked with designing the survey.
I think you'd like a more comprehensive response than this, but hopefully my
very generalised recollection of survey basics will at least help others answer
more specifically.
* Survey Questions
Priming, or the avoidance of it, is, as you might be aware, essential to
drafting an unbiased survey. Consider question placement, wording, phrasing,
and, most importantly, selection when drafting each enquiry, and do the same
for the answers.
Key is to ask oneself whether a question and/or its composite answers will
yield credible information, and the value of that information in answering
the question to which the survey was originally purposed.
* Survey Sample
The aim is to have as many respondents as possible answer the survey as
truthfully as possible. If feasible, give the survey to everyone. Of course,
the manner in which one does so might affect answer credibility. If
infeasible, cleverly randomise.
The first logistical thought that comes to mind:
You pretend the survey is for an experiment on efficacy, and as you respect the
opinions of your fellow organisation members you'd like their responses as well
as honest data on the present state of efficiency. Promise anonymity,
actually make it your own experiment a bit (so you're only equivocating), and
disseminate the survey at a time members are most likely to respond. Maybe
afterwards you may disclose the survey's full purpose.
Drawbacks to the above are numerous, but to list just a few: with actual
anonymity randomisation cannot be tested ex post facto; respondents may be the
least or most efficient members of the population; truthfulness and number of
respondents are subject to fluctuation due to their valuation of your person.
I genuinely request you let me know if this helps at all (I assume not, but
decided to err in favour of pedantry).
Total, abject failure. Mental illness. Sometimes leading to suicide. Having the most talented of their peer group switch to something they are less likely to waste their whole life on with nothing to show, and the next most talented switch to something else because they are frustrated with the incompetence of the people who remain. Turning into cranks with a 24/7 vanity google alert so that they can instantly show up to spam time cube esque nonsense whenever someone makes the mistake of mentioning them by name. Mail bombs from anarchoprimitivist math PhDs.
Wow. Okay. That's not what I expected, but it does sound like a plausible
depiction of reality.
2Risto_Saarelma10y
There are different groups of AGI programmers though. That's my impression of
the group who write "Hello, I work on AGI" on their home page. Then there are
the research people at big companies who talk little about the problems they run
into, but you notice that they exist when they release the occasional borderline
scary [http://en.wikipedia.org/wiki/Watson_(computer\]) thing
[http://en.wikipedia.org/wiki/Google_driverless_car]. Then there are the people
working at military research agencies who are very careful to not even make it
known that they exist, but who you can kinda assume might be involved with
technologies for potentially controlling the world and have nontrivial resources
to throw at them.
Maybe being rational in social situations is the same kind of faux pas as remaining sober at a drinking party.
It has occurred to me yesterday that maybe the typical human irrationality is some kind of a self-handicapping process which could still be a game-theoretical winning move in some situations... and that perhaps many rational people (certainly including me) are missing the social skill to recognize it and act optimally.
The idea came to me when thinking about some smart-but-irrational people who make big money selling some products to irrational peop... (read more)
Can you clarify what you mean by this? (My guess is that you're indulging in
some nearsighted consequentialism
[http://lesswrong.com/lw/778/consequentialism_need_not_be_nearsighted/] here.)
Doing stupid things while drunk can be fun. You can get good stories out of it,
and it can promote bonding (e.g. in the typical stereotype of a college
fraternity). Danger can be exciting, and getting really drunk is the easiest way
for young people in otherwise comfortable situations to get it.
Edit: I'm uncomfortable with the way you're tossing around the word "irrational"
in this comment. Rationality is about winning. Are the people you're calling
irrational systematically failing to win, or are they just using a different
definition of winning than you are? Are you using "rationality" to refer to
winning, or are you using it to refer to a collection of cached thoughts /
applause lights / tribal signals? (This is directed particularly at
"smart-but-irrational people who make big money selling some products to
irrational people around them...")
4Viliam_Bur10y
Actually, I am not sure. Or more precisely, I am not sure about the proper
reference class, and its choice influences the result. As an example, imagine
people who believe in homeopathy. Some of them (a minority) are selling
homeopathic cures, some of them (a majority) are buying them. Let's suppose that
the only way to be a successful homeopathic seller is to believe that homeopathy
works. Do these successful sellers "win" or not? By "winning" let's assume only
the real-world success (money, popularity, etc.), not whether LessWrong would
approve their epistemology.
If the reference class is "people who are rich by selling homeopathy", then yes,
they are winning. But this is not a class one can join, just like one cannot
join a class of "people who won lottery" without joining the "people who bought
lottery tickets" and hoping for a lucky outcome. If we assume that successful
homeopathic sellers believe their art, they must first join the "people who
believe in homeopathy" group -- which I suppose is not winning -- and then the
lucky ones end up as sellers, and most of them end up as customers.
So my situation is something like feeling envy on seeing that someone won the
lottery, and yet not wanting to buy a lottery ticket. (And speculating whether
the lottery tickets with the winning numbers could be successfully forged, or
how otherwise could the lottery be gamed.)
But the main idea here is that irrational people participate in games that
rational people are forbidden from participating in. A social mechanism making
sure that those who don't buy lottery tickets don't win. You are allowed to sell
miracles only if you convince others that you would also buy miracles if you
were in a different situation.
And maybe the social mechanism is so strong that participating in the miracle
business actually is winning. Not because the miracles work, but because the
penalties of being excluded can be even greater than the average losses from
believing in the miracles. An ext
0Qiaochu_Yuan10y
Why not?
You don't have to get particularly lucky to be around a lot of gullible people.
Forbidden by what?
[http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/] Again,
are you using "rationality" to refer to winning, or are you using it to refer to
a collection of cached thoughts / applause lights / tribal signals?
May I suggest, as an exercise, that you taboo both "rational" and "irrational"
[http://lesswrong.com/lw/etf/firewalling_the_optimal_from_the_rational/] for a
bit?
(For what it's worth, I'm not suggesting that you start selling homeopathic
medicine. Even if I thought this was a good way to get rich I wouldn't do it
because I think selling people medicine that doesn't cure them hurts them, not
because it would make me low-status in the rationalist tribe.)
0Viliam_Bur10y
I am using "irrational" as in: believing in fairies, horoscopes, crystal
healing, homeopathy, etc. Epistemically wrong beliefs, whether believing in them
is profitable or not. (Seems to me that many of those beliefs correlate with
each other positively: if a person already reads horoscopes, they are more
likely to also believe in crystal healing, etc. Which is why I put them in the
same category.)
Whether believing in them is profitable, and how much of that profit can be
taken by a person who does not believe, well that's part of what I am asking. I
suspect that selling this kind of a product is much easier for a person who
believes. (If you talk with a person who sells these products, and the person
who buys these products, both will express similar beliefs: beliefs that the
products of this kind do work.) Thus, although believing in these products is
epistemically wrong (i.e. they don't really work as advertised), and is a net
loss for an average believer, some people may get big profits from this, and
some actually do.
I suspect that believing is necessary for selling. Which is kind of suspicious.
Think about this: Would you buy gold (at a favourable price) from a person who
believes that gold is worthless? Would you buy homeopathic treatment (at a
favourable price) from a person who believes that homeopathy does not work?
(Let's assume that the unbeliever is not a manufacturer, only a distributor.) I
suspect that even for a person wholeheartedly believing in homeopathy, the
answers are "yes" for the former and "no" for the latter. That expressing belief
is necessary for selling; and is more convincing if the person really believes.
Thus I suspect there is some optimal degree of belief, or a right kind of
compartmentalization, which leads a person to professing the belief in the
products and profiting from selling the product, and yet it does not lead them
to (excessive) buying of the products. (For example if I believed that a magical
potion increases my int
6fubarobfusco10y
See belief as cheering [http://lesswrong.com/lw/i6/professing_and_cheering/]:
The folks who talked about the world ending in December 2012 weren't really
predicting something, in the way they would say "I believe that loose tire is
going to fall off that truck" or "I expect if you make a habit of eating raw
cookie dough with eggs in it, you'll get salmonellosis." They were expressing
affiliation with other people who talk about the world ending in December 2012.
They were putting up a banner that says "Hooray for cultural appropriation!" or
some such.
3drethelin10y
I remain sober at alcohol filled parties all the time and do fine.
I have recently been thinking about meta-game psychology in competitions, more specifically, knowledge of opponent's skill level and knowledge of opponent's knowledge of your own skill level, and how this all affects outcomes. In other words, instead of being 'psych out' by 'trash talk', is there any indication that you can be 'psyched out' by knowing how you rank up against other players. Any links for more information would be appreciated.
Part of my routine is to play a few games of on-line chess every day. I noticed whenever an opponent with ... (read more)
A good example for the sometimes conflicting relationship between epistemic
rationality (e.g. updating on all relevant pieces of information you encounter)
and instrumental rationality (e.g. following the optimal route to your goal
(=winning the match)).
In principle the information regarding your opponent's skill is very useful,
since you'll correctly devote far more resources (time) to checking for
elaborate traps when you deem your opponent capable of such, and waste less time
until you accept an 'obvious' mistake, when committed by a far inferior
opponent.
However, due to anxiety issues as the ones you laid out, there can be a benefit
to willfully ignoring such information.
Also, per your wish,
* superior opponent: confident
* slightly superior opponent: confident
* slightly inferior opponent: neutral
* inferior opponent: nervous
1Mestroyer10y
The thing about your performance in a game being hurt by fear of a superior
opponent's skill is basically the same as David Sirlin's idea of a "fear aura
[http://www.sirlin.net/ptw-book/sportsmanship.html]."
0Cthulhoo10y
From the experience derived in many years of competitive Magic: the Gathering, I
think I have a different map.
* Superior opponent: Nervous - Very Focused
* Approximately my skill opponent / Unknown opponent (no precise rating
available): Confident - Very Focused
* Inferior opponent: Confident - Not Focused
As can be inferred, I usually play my best game with opponents that are roughly
my equal. Some of the difficulties can be overcome by means of intense
practice, i.e. making most of the decisions automatic, lessening the risk of
punting them. It's also interesting to note that my anxiety when playing against
stronger opponents lessens if I get to know them. Probably my brain
moves them from the "superhuman/demigod" box to the "human just like you" box,
allowing a clearer view of the situation.
Anybody know of any good alternatives in Utilitarian philosophy to "Revealed Preference"? (That is, is there -any- mechanism in Utilitarian philosophy by which utility actually, y'know, gets assigned to outcomes?)
* Hedonistic Utilitarianism - produce the most pleasure.
* Actual (not necessarily revealed) Preferences
* Ideal preferences - produce the most of what people want to want, or would
want under ideal reflective circumstances
* Welfare Utilitarianism - produce the most welfare, which may differ from
preferences if people don't want what's best for them.
* Ideal Utilitarianism
[http://plato.stanford.edu/entries/utilitarianism-history/#IdeUti] - outcomes
can have value regardless of our attitude towards them
In every kind of utilitarianism, including revealed preference, utilities are
assigned to outcomes. The varieties I've described, and revealed preference,
just disagree about how to assign values to outcomes.
0OrphanWilde10y
These are different mechanisms to theoretically quantitate utility, but do they
actually have implementations? (Revealed preference is unique in that it's an
implementation, although a post-hoc one, and defined by the fact that the
utility is qualitative rather than quantitative - that is, utility relationships
are strictly relative)
None of these actually assign utility to outcomes, they just tell you what an
implementation should look like.
0Larks10y
I'm not sure what you mean by an implementation if you think revealed preference
is an implementation. We don't have revealed preference maximising robots.
On a Sunday night I take part in a pub quiz. It's based on a UK quiz show called Family Fortunes, which in turn is based on the US show Family Feud. To win you must answer all 5 questions correctly, the correct answer is whatever was the most popular answer in a survey of 100 people.
I'm curious to see if LessWrong does better than me.
We asked 100 people...
Name a part of your body that you've had removed
Name something you might wave at a Football match
Name a female TV presenter
Name a country that has only 5 letters in its name
As promised, the answers rot13 here so people can still choose to play, and
unsullied so you can verify the hash [http://pastebin.ca/2336806]
My answers (2nd, unsubmitted guess in brackets)
1. Gbbgu
2. Fpnes (Synt)
3. Svban Oehpr (Hyevxn Wbuaffba)
4. Vgnyl (Jnyrf)
5. Wnpx Fcneebj (Oynpx orneq)
Correct answers:
1. Gbbgu (grrgu npprcgrq)
2. Synt
3. Qnivan Znppnhy
4. Fcnva
5. Wnpx Fcneebj
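For anyone decoding by hand: rot13 is its own inverse, so the same transform both hides and reveals an answer. A quick sketch in Python (the helper name is mine; the standard library's codecs module ships a rot13 codec):

```python
import codecs

def rot13(text):
    # rot13 shifts each letter 13 places along the alphabet; applying it
    # twice returns the original string, so one function encodes and decodes.
    return codecs.encode(text, "rot13")
```

So `rot13("Gbbgu")` recovers the plaintext answer, and `rot13(rot13(s)) == s` holds for any string.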
0Bill_McGrath10y
Individual.
1. Nccraqvk
2. Synt
3. Qnivan ZpPnyy
4. Jnyrf
5. Oynpxorneq
Fun post. I'm not a fan of the show really, but it's a neat idea. Have you seen
Pointless? It's almost the reverse of Family Fortunes.
1. grrgu
2. onaare
3. Ubyyl Jvyybhtuol
4. Puvan
5. Oynpxorneq
I'm not British and not that familiar with British popular culture, so number 3
is hard. My answer is based on a couple of minutes of googling.
I'm slightly confident in two of my answers (rot13): 1. unve, 5. Oynpxorneq, and
would not be surprised if 4. Vgnyl was right (or alternatively Fcnva if the poll
was taken earlier in the year). I'm not even going to bother guessing the other
two, as the only way I'd have a chance is to do a lot of research.
0Zian10y
a1c2b7ae7c4e56188eb3dbd96cdf46ecb4bdaf81
I expect Less Wrong people to be more "normal" than me though so... oh well.
I'm pretty much a novice at decision theory, although I'm competent at game theory (and mechanism design), but some of the arguments used to motivate using UDT seem flawed. In particular the "you play prisoner's dilemma against a copy of yourself" example against CDT seems like its solution relies less on UDT than on the ability to self-modify.
It is true that if you are capable of self-modifying to UDT, you can solve the problem of defecting against yourself by doing so. However if you're capable of self-modifying, you're also capable of arbitrar... (read more)
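As a toy illustration of the setup (payoff values are standard illustrative choices of mine, not from the thread): against an exact copy, both players necessarily make the same move, so only the (C,C) and (D,D) outcomes are ever reachable, and the causal "D dominates C" argument stops tracking the actual options.

```python
# Prisoner's dilemma payoffs for the row player (standard illustrative values).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play_against_copy(move):
    # An exact copy deterministically makes the same move you do,
    # so the off-diagonal outcomes (C,D) and (D,C) are unreachable.
    return PAYOFF[(move, move)]

# CDT holds the opponent's move fixed and notes that D dominates C...
cdt_payoff = play_against_copy("D")
# ...while a UDT-style agent conditions on the copy mirroring its decision.
udt_payoff = play_against_copy("C")
```

Here the mirroring agent ends up with 3 rather than 1, which is the intuition the UDT examples lean on.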
A CDT agent who is given the choice to self-modify at time t will not
self-modify completely into a UDT agent. After self-modification, the agent will
one-box in a Newcomb's problem where Omega made its prediction by examining the
agent after time t, and will two-box in a Newcomb's problem where Omega made its
prediction by examining the agent before time t, even if Omega knew that the
agent would have the opportunity to self-modify.
In other words, the CDT agent can self-modify to stop updating, but it isn't
motivated to un-update.
3[anonymous]10y
Using UDT is one way of going about making those precommitments. You precommit
to make the decision that you expect will give you the most utility, on average,
even if CDT says that you will do worse this time around.
0Douglas_Knight10y
The literature largely defines CDT as incapable of precommitments. If you want
to propose a specific model of how to choose commitments, just do it.
0Elithrion10y
I don't have one! I'm not brave enough to start coming up with new decision
theories while not knowing very much about decision theories. But would I be
correct in assuming that this would also mean that the literature definition
implies that a CDT agent also can't choose to become a UDT one? (As that seems
to me equivalent to a big precommitment to act as a UDT agent.)
Dilbert has been running FAI failure strips for the past two days -
http://www.dilbert.com/2013-03-28/ and http://www.dilbert.com/2013-03-29/
Of course, it only occurred because the robot was actively hacked to be disgruntled in an earlier strip... not exactly on point here. I'm watching to see where this goes.
In case this hasn't been posted recently or at all: if you want to calculate the number of upvotes and downvotes from the current comment/post karma and % positive seen by cursor hover, this is the formula:
#upvotes = karma * %positive / (2 * %positive - 100%)
#downvotes = #upvotes - karma
This only works for non-zero karma. Maybe someone wants to write a script and make a site or a browser extension where a comment link or a nick can be pasted for this calculation.
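A minimal sketch of such a script, straight from the formula above (function name is mine):

```python
def vote_counts(karma, percent_positive):
    """Recover (upvotes, downvotes) from karma and % positive.

    Derivation: karma = up - down and p = up / (up + down), so
    up = karma * p / (2p - 1) with p as a fraction. Only valid for
    nonzero karma, since p = 50% makes the denominator vanish.
    """
    p = percent_positive / 100.0
    up = karma * p / (2 * p - 1)
    down = up - karma
    # Round to the nearest integer since the displayed % is itself rounded.
    return round(up), round(down)
```

For example, a comment at +2 karma and 75% positive works out to 3 upvotes and 1 downvote, and the formula handles negative karma as well.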
The source code of the pages contains hypothetical % positive for the cases when
a comment gets upvoted or downvoted by 1 point, so sufficient information about
comments with zero karma is always present as well.
2Mestroyer10y
And because you can retract votes, you can always upvote or downvote temporarily
to move it off of 0.
I've noticed that I seem to get really angry at people when I observe them playing the status game with what I perceive as poor skill. Is there some ev psych basis for this or is it just a personal quirk?
I think it's a very common trait, but any Evo psych explanation I know would
probably just be a just-so story.
Just So Story: The consequence of getting angry is treating someone badly, or
from a game theoretical perspective, punishing them. Your perception of someone
playing status games with low skill is a manifestation of the zero sum nature of
status in tribes: Someone playing with low skill is a low status person trying
to act and receive the benefits of being higher status, and it behooves you to
punish them, in order to preserve or increase your own status. It's easier for
evolution to select for emotional reactions to things than for game theoretical
calculations.
5Vaniver10y
My suspicion: status games are generally seen as zero sum. Someone attempting to
play the status game around you is a threat, and thus it probably helps to be
angry with them, unless you expect them to be better than you at status games,
in which case being angry with them probably reduces the chance that they'll be
your ally, and they will be able to respond more negatively to your anger than a
weaker opponent.
0TheOtherDave10y
Another possible just-so story we can tell is that being (seen as) angry makes
it safer to injure someone (e.g., "cold-blooded" murder or battery is seen as
less acceptable than killing or battering someone "in the heat of passion"), so
when we identify someone as incapable of retribution we're more inclined to make
ourselves seem angry as well, the combination of which allows us to eliminate
competitors while they're weak with relative impunity. (And, of course, the most
reliable way to make ourselves seem angry is to feel angry.)
Is that actually the explanation for Raiden's reaction, though? Probably not;
telling just-so stories isn't a terribly reliable process for arriving at true
explanations.
Edit: Whoops... should have read drethelin's comment first. Retracting for
redundancy.
3Viliam_Bur10y
Not sure if related, but I often get angry at people doing things that make them
look like idiots in my eyes, but I have a suspicion they would impress a random
bystander positively.
As an example, imagine a computer programmer speaking things that you as a
fellow programmer recognize as complete bullshit, or at best as wild
exaggerations of random things that impressed the person... but for someone who
does not understand programming at all, they might (I am not sure) sound very
knowledgeable, unlike the silent types like me. -- I don't know if they really
impress the outsiders positively or not. I can't easily imagine myself not
having the knowledge I have, and I am also not good at guessing how other people
react to the tone of voice or whatever other information they may collect from
the talk about topic they don't understand. -- I just perceive the danger that
the person may sound more impressive than me, and... well, as an employee, my
quality of life depends on the impressions of people who can't measure my output
separately from the output of the team containing also the other person.
Also, again not sure if related, when I get angry at someone, when I analyze the
situation I usually find that they are better than me in something. In the
specific situation above, it would be "an ability to impress people who
completely don't understand my work". This is easy to miss, if I remain focused
only on the "they speak nonsense" part. But the truth is their speaking nonsense
does not make me angry; it's relatively easy to ignore, and it would not bother
me if I did not perceive a threat.
So, for your situation: are you afraid that the "people playing the status game
with (supposedly) poor skill" might still win some status at your expense? If
yes, the angry reaction is obvious: you are in a situation where you could lose,
but you could also win; which is the best situation to invest your energy in.
(Imagine an alternative universe, where the person trying to pla
1niceguyanon10y
Not an explanation, but perhaps try to see this as a benefit to you? I have
witnessed plenty of poker players get very angry at bad players. Over time bad
players lose money to good players, so one shouldn't complain about bad players.
Someone who is ineffective at status signalling won't affect you; you already
see through them.
Personally, I find that I have an admiration for people with skill, even in
things such as effective status signalling. When people lack a certain
savoir-faire about them, it makes me upset, but then I remind myself I
shouldn't.
I have recently read the dictator's handbook. In it the author suggests that democracies, companies and dictatorships are not essentially different and the respective leaders follow the same laws of rulership. As a measure of more democratic behavior in publicly traded companies they suggest a Facebook like app to discuss company policy. Does anyone know about a company or organization that does this? It seems almost to be too good to be true.
Many companies including mine use Yammer, a twitter-like app internally. At my
big bureaucratic company I've seen a mix of practical discussion and discussion
about the future of the company, but I'm not sure how much difference it makes
in practice.
7Metus10y
The original suggestion's intention was to allow people with fewer shares to be
able to effectively exercise their right to vote. Currently, a couple of people
hold enormous shares of a company and the majority, that is millions of shares,
are owned by millions of people. The latter are virtually unable to influence
the company while the former dominate it, giving publicly traded companies the
political structure of a dictatorship with very high salaries in upper
management. This is in contrast to functioning democracies where even heads of
state earn a relatively meager salary. So Yammer is a step in the intended
direction by providing a platform to discuss policy and distribute information,
but it lacks in easing voting.
2ShardPhoenix10y
Part of the problem is that most shares that are nominally held by individuals
are actually held for them by retirement funds and the like, creating even
further distance.
4Douglas_Knight10y
Well, you can think of the choice of retirement fund as first tier in a
multi-tiered democracy. Individual -> Fund -> Director. Yes, the fund is
managing other people's money and thus has eroded incentives, but on the other
hand it is a full-time job and its votes are concentrated enough that people
will actually talk to it.
But forget about individuals - is it a democracy of investment funds? Yes, they
really get to choose the directors, and the directors really (can) run the
company. And the investment funds talk to each other. But they are spread too
thin. They own too small a share of too many companies to keep up with them. The
way that the large shareholders control companies is by convincing investment
funds to vote for their candidates. Once they have control of the board, it's
pretty easy to keep it, because the board nominates new candidates and there is
no standing source of opposition. But just because someone, say, Icahn, has 5 or
10% of the shares, doesn't mean he has much power. Sometimes the board will just
accept his advice, but other times he has to lobby the investment funds to
democratically take over the board.
Anyhow, my point is that the funds do a lot of talking, so I am skeptical that
the problem is not talking.
0Metus10y
The point of the original proposal was not the talking but the exercise of the
right to vote, similar to a democracy. Good post, though.
0Larks10y
Singapore, arguably the best functioning democracy in the world, pays its head
of state millions of dollars.
[http://www.dailymail.co.uk/news/article-2082124/Lee-Hsien-Loong-Singapores-Prime-Minister-earn-1-7m-36-pay-cut.html].
I realise this is probably still not enough.
3Metus10y
The Economist lists Singapore as a hybrid regime with elements of
authoritarianism and democracy. It ranks in its democracy index below Malaysia
and Indonesia. Thus I do not think it is 'arguably the best functioning
democracy in the world'.
0Larks10y
It functions well and it is a democracy. I didn't mean to imply it achieved any
unusual height of democracy. Rather, it achieves other things very well.
0Metus10y
Fair enough. The author's claim was that any sufficiently democratic
organization works in the interest of its members. If an authoritarian regime
works to the benefit of the public, it is by virtue of a benevolent dictator
who nevertheless has to follow the rules of power.
-2Elithrion10y
I'm pretty sure that low salaries are a dysfunction of democracies rather than
high salaries being a dysfunction of companies. In particular, it's not the case
with every company that a couple of people hold enormous shares. And aside from
that, even when there is clear evidence that "the majority" gets directly
involved in CEO compensation, it doesn't seem that the salaries go down all that
much.
Or looking at it differently, if the high salaries were the consequence of an
undue concentration of power, we would expect that when one CEO leaves, and a
different one who was not previously affiliated with the power holders is
installed, the salary of the new one would be much much lower. However, I think
this is rarely the case.
0Metus10y
I don't think your second point really is one, seeing as a CEO cannot be
installed without being affiliated with the power holders. Can you back up your
first point?
0Elithrion10y
Why not? Some CEOs (especially for smaller companies, I think) are found via
specialised recruiting companies
[http://en.wikipedia.org/wiki/Executive_search], which I'd say is pretty
unaffiliated. And in any case, it's not clear to me how you think the
affiliation would be increasing pay. Do you imagine potential CEO candidates
hold an auction in which they offer kickbacks to major shareholders/powerholders
from their pay or something? Because I haven't heard of that ever happening, and
I'm having trouble imagining what more plausible scenario you have in mind.
(Obviously there are cases where major shareholders also serve as CEOs/whatever,
but if you're claiming that every person in such position with high pay is a
major power holder shares/board-wise, I'd like to see evidence for it, since I
find that extremely unlikely.)
If you mean about new executives receiving pay comparable to old ones, I dunno,
it's hard. I think I'd have to search company-by-company and even then it would
be hard to determine what's happening. For example, I looked up Barclay's, which
switched Bob Diamond for Antony Jenkins last year. Diamond had a base salary of
£1.3mil. Jenkins has a base salary of £1.1mil. However, Diamond got a lot of
non-salary money (much of which he gave up due to scandal), and it's not clear
how Jenkins' compensation compares to that. Also, it's not clear how much the
reduction (if there is any) is the result of public outrage (or ongoing economic
difficulties).
If you mean about high salaries probably being appropriate, I can back that up
on a theoretical level. If you assume a CEO has a high level of influence over a
huge company, then it's straightforward that there is going to be intense
competition for the best individuals. Even someone who can improve profits by
0.1% would be worth extra millions of dollars to a multi-billion dollar company.
Related things I found while looking around: "highly concentrated ownership in
listed companies in New Zealand is a s
0Metus10y
Interesting. I will have to read through that later.
0tgb10y
Closest thing I can think of is the "We the people" White House site, at least
nominally.
0RomeoStevens10y
This should probably go in the politics thread.
4Manfred10y
Here's fine, now's good.
2Metus10y
I'm sorry, which one is that?
2Qiaochu_Yuan10y
This one [http://lesswrong.com/lw/gli/politics_discussion_thread_february_2013/]
(there doesn't seem to be a March thread).
Is the xkcd rock-placing man in any danger if he creates a UFAI? Apparently not, since he is, to quote Hawking, the one who "breathes fire into the equations". Is creating an AGI of use to him? Probably, if he has questions it can answer for him (by assumption, he just knows the basic rules of stone-laying, not everything there is to know). Can there be a similar way to protect actual humans from a potential rogue AI?
Assuming the premises of the situation, yes to your first question:
1. He may be argued into something that is not in his interest by the UFAI. (On
the other hand, Rock-Placing Man evidently does not have a standard mental
and physical architecture, so maybe he also happens to be immune to such
attacks.)
2. The UFAI may take over his simulated universe and turn it into simulated
paperclips.
0shminux10y
True, but it's easy to deal with, just place one row of rocks differently and
wipe the UFAI bugger out. Humans would not have this luxury.
0Qiaochu_Yuan10y
You mean run them so slowly that they're not useful for anything?
shminux · 10y · -1 points
Ever heard of steelmanning?
Qiaochu_Yuan · 10y · 7 points
Either the rock-placing man is running the AI so slowly that it's not useful for
anything or he runs the risk of falling prey to considerations that have already
been discussed on LW surrounding oracle AI
[http://wiki.lesswrong.com/wiki/Oracle_AI].
Steelmanning is probably a good thing to do (and I'm not good at doing it), but
I think it's bad form to ask that somebody steelman you.
shminux · 10y · -2 points
This would be a useful conjecture if you can formalize it, or maybe a theorem if
you can prove it.
What is with LW people and theorems? The situation you've described is nowhere near formalized enough for there to be anything reasonable to say about it at the level of precision and formality that warrants a word like "theorem."
Here, we present the results of a poll carried out among 33 participants of a conference on the foundations of quantum mechanics. The participants completed a questionnaire containing 16 multiple-choice questions probing opinions on quantum-foundational is
For example, question 12:
Copenhagen 42%
Information 24%
Everett 18%
I suspect there's too much of a difference in how much LW members know about
basketball to get particularly wide participation. For example, I had to look up
"March Madness" to figure out what this is about.
Also, there's a significant chance that either people would just copy the odds
from Pinnacle [http://www.pinnaclesports.com/], or maybe even arbitrage against
it (valuing karma or whatever at 1-2 cents). Or, well, I'd certainly be tempted
to =]
FiftyTwo · 10y · 1 point
An interesting thing would be to set up a prediction market and compare the
results
[anonymous] · 10y · 0 points
Less Wrong is about rationality. Surely there are better ways to have fun than
to arbitrarily redistribute our wealth. Unless you somehow plan to make some of
the money go to charity, or not involve money at all, I don't see the point.
Qiaochu_Yuan · 10y · 8 points
It might be relevant as a calibration exercise, though.
So it seems something a bit like the Mary's Room experiment has actually been done in mice, and it appears to indicate that the mice behaved differently once given a new colour receptor.
Does CEV claim that all goals will eventually cohere such that the end results will actually be in every individual's best interest? Or does CEV just claim that it's a good compromise as being the closest we can get to satisfying everyone's desires?
Hrm. As I understand it, the theory underlying CEV doesn't equate X's best
interest with X's desires in the first place, so the question is somewhat
confusingly -- and perhaps misleadingly -- worded. That is, the answer might
well be "both". That said, it doesn't claim AFAIK that the end results will
actually be what every individual currently desires.
If the Bayesian Conspiracy ever happens, the underground area they meet in should be called the Bayesment.
There's an idea I've been kicking around lately, which is being into things.
Over the past couple of weeks I've been putting together a bug-out bag. This essentially involves the over-engineering of a general solution to an ambiguous set of problems that are unlikely to occur. On a strictly pragmatic basis, it is not worth as much of my time as I am spending to do this, but it is so much fun.
I'm deriving an extraordinary amount of recreational pleasure from doing more work than is necessary on this project, and that's fine. I acknowledge that up to a point I'm doing something useful and productive, and past that point I'm basically having fun.
I've noticed a failure mode in other similarly motivated projects and activities to not acknowledge this. I first noticed the parallel when thinking about Quantified Self, and how people who are into QS underestimate the obstacles and personal costs surrounding what they're doing because they gain a recreational surplus from doing it.
I suspect, especially among productivity-minded people, there's a desire to ringfence the amount of effort one wants to expend on a project, and justify all that effort as being absolutely necessary and virtuous and pragmatic. While I certainly don't think there's anything wrong with putting a bit of extra effort into a project because you enjoy it, awareness of one's motivations is certainly something we want to have here.
Does any of this ring true for anyone else?
Possible akrasia hack: random reminders during the day to do specific or semi-specific things.
Personally, I find myself easily sucked into reading, the internet, or watching shows, neglecting simple and swift tasks simply because no moment occurs to me to do them. Using an iPhone app, I have reminders that fire at random times four times a day, saying things like "Brief chores" or "Exercise"; these seem to have made it a lot easier to always have clean dishes and clothes, and to get some exercise in every day.
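The scheduling side of this hack is simple enough to sketch. This is my own minimal version, not the app's actual behavior; the four-per-day count and the 9am-9pm waking window are assumptions:

```python
import random
from datetime import time

def random_reminder_times(n=4, start_hour=9, end_hour=21, rng=random):
    """Pick n distinct random minute marks in the waking window, sorted."""
    window_minutes = (end_hour - start_hour) * 60
    marks = sorted(rng.sample(range(window_minutes), n))
    return [time(start_hour + m // 60, m % 60) for m in marks]

# Each day, schedule a notification ("Brief chores", "Exercise", ...)
# at each returned time.
```

The point of randomness is that the reminders interrupt whatever attractor you're currently stuck in, rather than arriving at a fixed time you learn to ignore.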
Akrasia-related but not yet on lesswrong. Perhaps someone will incorporate these in the next akrasia round-up:
1) Fogg model of behavior. Fogg's methods beat akrasia because he avoids dealing with motivation. Like "execute by default", you simply make a habit by tacking some very easy-to-perform task onto something you already do. Here is a slideshare that explains his "tiny habits" and an online, guided walkthrough course. When I took the course, I did the actions each day, and usually more than those actions (i.e., every time I sat down, I plugged in my drawing tablet, which got me doing digital art basically automatically unless I could think of something much more important to do). For those who don't want to click through, here are example "tiny habits" which over time can become larger habits:
"After I brush, I will floss one tooth."
"After I start the dishwasher, I will read one sentence from a book."
"After I walk in my door from work, I will get out my workout clothes."
"After I sit down on the train, I will open my sketch notebook."
"After I put my head on the pillow, I will think of one good thing from my day."
"After I arrive ho... (read more)
To Really Learn, Quit Studying and Take a Test
Suppose that retrieval testing helps future retention more than concept diagrams or re-reading. I'll go further and suppose that it's the stress of trying to recall imperfectly remembered information (for grade, reward, competition, etc. - with some carrot-and-stick stuff going on) that really helps it take root. What conclusions might flow from that?
Coursera-style short quizzes on the 5 minutes of material just covered are useful to check understanding, but do next to nothing for retention.
Homework is useful, but the stress it creates may be only indirectly related to the material we want to retain: lots of homework is solved by meta-guessing, tinkering w/o understanding, etc. What kind of homework would be best to cause us to recall the material systematically under stress?
When watching a live or video lecture, it may be less useful to write detailed notes (in the hope that it'll help retention), and more useful to wait until the end of the lecture (or even a few hours/days more?) and then write a detailed summary in your own words, trying to make sure all salient points are covered, and explicitly testing yourself on that someh
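As a toy illustration of the retrieval-over-re-reading idea (the structure and names here are mine, not from the article or any study):

```python
import random

def retrieval_drill(cards, ask, rng=random):
    """Score free recall on every prompt, in shuffled order.

    `cards` maps prompts to expected answers; `ask` is whatever produces
    the learner's answer for a prompt (a UI callback, input(), ...).
    The learner must generate the answer, not recognize it.
    """
    order = list(cards)
    rng.shuffle(order)
    return sum(ask(p).strip().lower() == cards[p].strip().lower() for p in order)
```

The design choice that matters is that nothing in the loop shows the learner the material; the effortful recall attempt itself is the intervention.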
I will be attending a Landmark seminar in the near future and I have read previous discussion about it here. Any additional comments or advice before I attend?
Take no money and no credit cards.
Don't ever call them a cult (that is expensive). Don't edit their Wikipedia article (it will be quickly reverted). Don't sign anything (e.g. a promise to pay).
Bring some source of sugar (chocolate) and consume it regularly during the long lessons to restore your willpower and keep yourself alert.
Don't fall for the "if this is true, then my life is going to be awesome, therefore it must be true" fallacy. Don't mistake fictional evidence for real evidence. (Whatever you hear during the seminar, no matter from whom, is fictional evidence.)
After the seminar, write down your specific expectations for the next month, two months, three months. Keep the records. At the end, evaluate how many expectations were fulfilled and how many failed; and make no excuses.
Don't invite your friends during the seminar or within the first month. If you talk with them later, show them your specific documented evidence, not just the fictional evidence. (If you sell the hype to your friends, it will become a part of your identity and you will feel a need to defend it.)
Protect your silent voice of dissent during the seminar. If you hear something you disagree with, you are not in a position to voic... (read more)
This sounds like it could be summarised as: "Don't go."
My current anti-procrastination experiment: using trivial inconveniences for good. I have installed a very strong, permanent block on my laptop, and still allow myself to go on my favourite time wasters, but only on my tablet, which I carry with me as well.
The rationale is not to block all use and therefore be forced to mechanically learn workarounds, but to have a trivially inconvenient procrastination method always available. The interesting thing is that tablets are perfect for content consumption, so the separation works well. It also helps me to sep... (read more)
There's a chain of restaurants in London called Byron. Their comment cards invite your feedback with the phrase "I've been thinking..."
I go to one of these restaurants perhaps once every six weeks, and on each occasion I leave something like this. I've actually started to value it as an outlet for whatever's been rattling around my head at the time.
I've got lessdaft.com about to expire. Does anyone want it for anything?
Further to the discussion of SF/F vs "earthfic", I would love to see someone write a "rationalist" fanfic of The Magic School Bus (...Explores Rationality). It doesn't look like the original set of stories had any forays into cog sci.
I've done some analysis of correlations over the last 399 days between the local weather & my self-rated mood/productivity. Might be interesting.
Wasn't there a LWer who some years ago posted about a similar data set? I think he found no correlations, but I wouldn't swear to it. I tried looking for it but I couldn't seem to find it anywhere.
(Also, if anyone knows how to do a power simulation for an ordinal logistic regression, please help me out; I spent several days trying and failing.)
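I don't have the ordinal-logistic version either, but a Monte Carlo sketch using Mann-Whitney as a proxy test for a shift in an ordinal outcome might look like this. The thresholds, effect size, and sample sizes are all made-up assumptions, and Mann-Whitney is a deliberate simplification of the proportional-odds model:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def ordinal_power(n_per_group=200, shift=0.5, n_sims=500, alpha=0.05, seed=0):
    """Monte Carlo power estimate for detecting a latent logistic shift
    in a 5-level ordinal outcome, using Mann-Whitney U as a proxy test."""
    rng = np.random.default_rng(seed)
    cuts = np.array([-1.5, -0.5, 0.5, 1.5])  # thresholds for 5 ordered levels
    hits = 0
    for _ in range(n_sims):
        a = np.digitize(rng.logistic(0.0, 1.0, n_per_group), cuts)
        b = np.digitize(rng.logistic(shift, 1.0, n_per_group), cuts)
        if mannwhitneyu(a, b).pvalue < alpha:
            hits += 1
    return hits / n_sims
```

The same skeleton works for the real model: swap the Mann-Whitney call for fitting an ordinal logistic regression and testing its slope coefficient, at the cost of much slower simulations.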
I just had a "how do you feel about me?" conversation via facebook. Some observations:
I've seen reasonably convincing evidence that alcohol can, in small doses, increase lifespan and act as a short-term nootropic for certain types of thinking (particularly being "creative"). On the other hand, I've heard lots of references to drinking potentially causing long-term brain damage (Wikipedia seems to back this up), but I think that's mostly for much heavier drinking than what I had been doing based on the first two points (one glass of wine a day, 4-6 times a week). Does anyone know of any solid meta-analyses or summaries that would let me get a handle on the tradeoffs involved?
The AI Box experiment is an experiment to see if humans can be convinced to let out a potentially dangerous AGI through just a simple text terminal.
An assumption that is often made is that the AGI will need to convince the gatekeeper that it is friendly.
I want to question this assumption. What if the AGI decides that humanity needs to be destroyed, and furthermore manages to convince the gatekeeper of this? It seems to me that if the AGI reached this conclusion through a rational process, and the gatekeeper was also rational, then this would be an entirely... (read more)
So, I've done a couple of charity bike rides, and had a lot of fun doing them. I think this kind of event is nice because it's a social construct that ties together giving and exercise in a pretty effective way. So I'm wondering - would any others be interested in starting a LessWrong athletic event of some kind for charity?
I'm not suggesting that this is the most effective way to raise money for effective causes or get yourself to start exercising... but it might be pretty good (it is a good way to raise money from people who aren't otherwise interested i... (read more)
Does anyone know anything about, or experience, ASMR?
What are some effective interviewer techniques for a more efficient interview process?
A resume can tell you about a person's skill, experience, and, implicitly, their intelligence. The average interview process is, in my opinion, broken, because what I find happens a lot is that interviewers unmethodically "feel out" the person in a short amount of time. This is fine when searching for obvious red flags, but for something as important as collaborating with someone long-term, and who you will likely see more of than your own family, we s... (read more)
Today's SMBC
Has this idea been considered before? The idea that an AI capable of self-improvement would choose not to self-improve because it wouldn't be rational? And whether or not that calls into question the rationality of pursuing AI in the first place?
Does anyone know anything about, or have any web resources, for survey design? An organization I'm a member of is doing an internal survey of members to see how we can be more effective, and I've been tasked with designing the survey.
What are the common problems that GAI programmers run into?
Total, abject failure. Mental illness. Sometimes leading to suicide. Having the most talented of their peer group switch to something they are less likely to waste their whole life on with nothing to show, and the next most talented switch to something else because they are frustrated with the incompetence of the people who remain. Turning into cranks with a 24/7 vanity Google alert so that they can instantly show up to spam Time Cube-esque nonsense whenever someone makes the mistake of mentioning them by name. Mail bombs from anarcho-primitivist math PhDs.
Maybe being rational in social situations is the same kind of faux pas as remaining sober at a drinking party.
It occurred to me yesterday that maybe typical human irrationality is some kind of self-handicapping process which could still be a game-theoretically winning move in some situations... and that perhaps many rational people (certainly including me) lack the social skill to recognize it and act optimally.
The idea came to me when thinking about some smart-but-irrational people who make big money selling some products to irrational peop... (read more)
I have recently been thinking about meta-game psychology in competitions; more specifically, knowledge of an opponent's skill level and knowledge of the opponent's knowledge of your own skill level, and how this all affects outcomes. In other words, instead of being 'psyched out' by 'trash talk', is there any indication that you can be 'psyched out' by knowing how you rank against other players? Any links for more information would be appreciated.
Part of my routine is to play a few games of online chess every day. I noticed whenever an opponent with ... (read more)
Anybody know of any good alternatives in Utilitarian philosophy to "Revealed Preference"? (That is, is there -any- mechanism in Utilitarian philosophy by which utility actually, y'know, gets assigned to outcomes?)
Hey does anybody here know Flatlander (AKA Andy Morin) from Death Grips's phone number?
Family Fortunes Pub Quiz
On a Sunday night I take part in a pub quiz. It's based on a UK quiz show called Family Fortunes, which in turn is based on the US show Family Feud. To win you must answer all 5 questions correctly, the correct answer is whatever was the most popular answer in a survey of 100 people.
I'm curious to see if LessWrong does better than me.
We asked 100 people...
I'm pretty much a novice at decision theory, although I'm competent at game theory (and mechanism design), but some of the arguments used to motivate using UDT seem flawed. In particular the "you play prisoner's dilemma against a copy of yourself" example against CDT seems like its solution relies less on UDT than on the ability to self-modify.
It is true that if you are capable of self-modifying to UDT, you can solve the problem of defecting against yourself by doing so. However if you're capable of self-modifying, you're also capable of arbitrar... (read more)
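The "play against a copy" intuition can be made concrete with a toy payoff table (standard prisoner's dilemma numbers, chosen by me):

```python
# Standard prisoner's dilemma payoffs for the row player (T > R > P > S).
PAYOFF = {
    ("C", "C"): 3,  # reward for mutual cooperation
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # punishment for mutual defection
}

def payoff_against_copy(action):
    """An exact copy necessarily chooses the same action you do, so only
    the diagonal outcomes (C,C) and (D,D) are reachable."""
    return PAYOFF[(action, action)]

# Once the off-diagonal cells are unreachable, cooperating dominates:
assert payoff_against_copy("C") > payoff_against_copy("D")
```

This shows why the copy variant differs from ordinary CDT reasoning, but it leaves open the question raised above: whether crediting UDT for this, rather than the bare ability to self-modify, is warranted.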
Do we need a submission for Eliezer? :) http://www.quickmeme.com/Just-Want-To-Watch-The-World-Learn/?upcoming ("some men just want to watch the world learn" image macros)
Dilbert has been running FAI failure strips for the past two days - http://www.dilbert.com/2013-03-28/ http://www.dilbert.com/2013-03-29/ Of course, it only occurred because the robot was actively hacked to be disgruntled in an earlier strip... not exactly on point here. I'm watching to see where this goes.
In case this hasn't been posted recently or at all: if you want to calculate the number of upvotes and downvotes from the current comment/post karma and % positive seen by cursor hover, this is the formula:
# upvotes = karma * %positive / (2 * %positive - 100%)
# downvotes = # upvotes - karma
This only works for non-zero karma. Maybe someone wants to write a script and make a site or a browser extension where a comment link or a nick can be pasted for this calculation.
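In the spirit of the browser-extension suggestion, a minimal sketch of the calculation (the rounding is my addition, since the hover only shows a rounded percentage):

```python
def vote_counts(karma, percent_positive):
    """Recover (upvotes, downvotes) from karma and % positive.
    Only valid for non-zero karma, as noted above."""
    p = percent_positive / 100.0
    upvotes = round(karma * p / (2 * p - 1))
    return upvotes, upvotes - karma

print(vote_counts(10, 75))  # (15, 5): 15 - 5 = 10 karma, 15/20 = 75% positive
```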
[To be deleted. Please excuse the noise.]
I've noticed that I seem to get really angry at people when I observe them playing the status game with what I perceive as poor skill. Is there some ev psych basis for this or is it just a personal quirk?
I have recently read The Dictator's Handbook. In it, the author suggests that democracies, companies, and dictatorships are not essentially different, and that the respective leaders follow the same laws of rulership. As a measure toward more democratic behavior in publicly traded companies, they suggest a Facebook-like app to discuss company policy. Does anyone know of a company or organization that does this? It seems almost too good to be true.
As it's been queried how many physicists, mathematicians, etc. currently believe what about QM, I thought this paper (no paywall, Yay!) might interest a few of you: A Snapshot of Foundational Attitudes Toward Quantum Mechanics
For example, question 12: Copenhagen 42% Information 24% Everett 18%
... (read more)
Would it be inappropriate to host a Less Wrong March Madness bracket pool?
Edit: Not going to do it.
So it seems something a bit like the Mary's Room experiment has actually been done in mice, and it appears to indicate that the mice behaved differently once given a new colour receptor.
But they didn't have full understanding of the new colour before having it added!
Another SMBC comic on the intelligence explosion. Don't forget the mouseover text of the red button.
have a few audible credits to use up before i cancel the service. any recommendations?