In the notes for the current chapter of HPMOR, we have the following:
General P/S/A: If you were good at algebra and are presently making less than $120,000/year, you should test yourself to see if you enjoy computer programming. Demand for programmers far outweighs supply, and if you have high talent it's an extremely easy and well-paying career to enter. I expect that at least 1% of the people reading this could be better employed as programmers than in their present occupations.
I greatly enjoy programming, and am currently employed at about half that doing tech support, where my only time to actively program is in bash scripts. I followed the link to the Quixey challenge, and while I was not solving them in under a minute, I am consistently solving the practice problems. My question is this: now what?
I have no experience in actual development, beyond the algorithm analysis classes I took 6 years ago. I have a family of 6, and live in the KCMO area -- how do I make the jump into development, from no background? Anyone have any experience with that transition?
I don't, but you might want to check out communities like Slashdot
[http://slashdot.org] or Stack Overflow [http://stackoverflow.com] if you don't
get responses here.
A meta-anthropic explanation for why people today think about the Doomsday Argument: observer moments in our time period have not solved the doomsday argument yet, so only observer moments in our time period are thinking about it seriously. Far-future observer moments have already solved it, so a random sample of observer moments that think about the doomsday argument and still are confused are guaranteed to be on this end of solving it.
(I don't put any stock in this. [Edit: this may be because I didn't put any stock in the Doomsday argument either.])
You have reduced the DA to an absurdity, which comes from the DA itself. Clever.
Self-reference is quite a dangerous thing for a statement. If a statement can
refer to itself, it is often prone to paradoxical consequences that invalidate
it.
3Oscar_Cunningham11y
If the conditions of this argument were true, it would annul the Doomsday
Argument, thus bringing about its own conditions!
0Grognor11y
Yes, that's my favorite thing about it and the reason I considered it worthy of
posting. (It only works if everyone knows about it, though.)
1orthonormal11y
The moon and sun are almost exactly the same size as seen from Earth, because in
worlds where this is not the case, observers pick a different interesting
coincidence to hold up as non-anthropic in nature.
0Grognor11y
What?
0orthonormal11y
Meta-anthropics is fun!
0steven046111y
But if even a tiny fraction of future observers thinks seriously about the
hypothesis despite knowing the solution...
1Grognor11y
My current guess is that having the knows-the-solution property puts them in a
different reference class. But if even a tiny fraction deletes this knowledge...
0syzygy11y
Isn't this true about any conceivable hypothesis?
7Grognor11y
Yes, but most hypotheses don't take the form, "Why am I thinking about this
hypothesis?" and so your comment is completely irrelevant.
To elaborate: the doomsday argument says that the reason we find ourselves here
rather than in an intergalactic civilization of trillions is because such a
civilization never appears. I give a different explanation which relies on the
nature of anthropic arguments in general.
A notion I got from reading the game company discussion-- how much important invention comes from remembering what you wanted before you got used to things?
I didn't want to put this as a discussion post in its own right, since it's not really on topic, but I suspect it might be of use to people. I'd like a "What the hell do you call this?" thread. It's hard to Google a concept, even when it might be a well-established idea in some discipline or other. For example:
Imagine you're playing a card game, and another player accidentally exposes their cards just before you make some sort of play. You were supposed to make that play in ignorance, but you now can't. There are several plays you could make...
The English Stack Exchange [http://english.stackexchange.com/] is a great site
for getting answers to "what is a word or short phrase for ... ?" questions.
3Grognor11y
That heuristic where, to make questions of fact easier to process internally,
you ask "what does the world like if X is true? what are the consequences and
testable predictions of X?" rather than just "is X true?" which tends to just
query your inner Google and return the first result, oftentimes after a period
of wait that feels like thinking but isn't.
I want to know what to call that heuristic.
-2[anonymous]11y
Making beliefs pay rent?
0[anonymous]11y
What is it called when you meet two acquaintances and begin to introduce them to
each other, only to realize that you have forgotten both of their names?
How difficult would it be to code the user displays so that they also show average karma per comment, or better yet a karma histogram? Would that significantly increase the time it takes the site to load?
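The arithmetic itself is trivial -- a minimal sketch, assuming the site could hand over a list of per-comment scores (this is not LW's actual schema); the real cost would be the extra query per pageload:

    from collections import Counter

    def karma_summary(comment_scores):
        # comment_scores: one karma score per comment, e.g. [3, 0, -1, 7, 3]
        average = sum(comment_scores) / len(comment_scores) if comment_scores else 0.0
        histogram = Counter(comment_scores)  # score -> number of comments
        return average, histogram

    avg, hist = karma_summary([3, 0, -1, 7, 3])
    print(avg)         # 2.4
    print(dict(hist))  # {3: 2, 0: 1, -1: 1, 7: 1}

So the load-time question is really about fetching and caching the scores, not about computing the summary.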
Because the number of quotes already used is increasing, and the number of LW users is increasing, I propose that the next quotes thread should include a new rule: use the search feature to make sure your quote has not already been posted.
For balance, once every two years there could be a thread for already posted
quotes. Like "choose the best quote ever", to filter the best from the best.
Then the winning quotes could randomly appear on the LW homepage.
4Oscar_Cunningham11y
It's already considered bad form to repeat a quote. I thought this was one of
the listed rules, but since it isn't (at least in the current thread) I agree
that it should be added.
3TimS11y
"No repeats" should be in the rules, but a posting on the rationality quotes
pages is not and should not be a certification that the poster has investigated
and is confident that there is no repeat.
If I had to investigate that hard before posting on that thread, I'd never do
it because it wouldn't be worth the investment of time. And the real
consequences for repeating a quote are so low. In short:
Good rule.
Bad rule, as phrased.
0wedrifid11y
It certainly should be a certification that the poster copied some keywords
from the quote into the search box and pressed enter.
If you are referring specifically to the literal meaning of 'sure' then fine. If
you refer to the more casual meaning of "yeah, I checked this with search" then
I disagree and would suggest that you implement the "it's not worth it for you"
contingency.
1TimS11y
I've always found the search engine quite clunky, and of questionable
reliability. I think an actually explicit social norm will solve most of the
problem. That said, I won't be put out if posting rationality quotes is not
worth my effort.
3NancyLebovitz11y
So far as I know, the rule is just that a quote shouldn't have appeared in a
quotes thread, but if it's appeared elsewhere, it's ok to post it in a quotes
thread.
A cached thought: We need a decent search engine, and the more posts and
comments accumulate, the more we need it.
1Grognor11y
I don't. Posting rationality quotes is one of the few things new members can do
effectively, and new members are the least liable to know of any social norms.
That's why I said make the search feature explicit. Also, it's good at finding
quotes, since exact words are used, if at all possible (which is why it's not
called "Rationality Paraphrases").
0TimS11y
I suspect most of our disagreement is about how bad it is for there to be
repeats. At the level of bad I assign, making the norm explicit is sufficient to
diminish the problem sufficiently. You think the downside is a bit worse, so you
support a more intrusive, but more effective, solution.
I want to post some new decision theory math in the next few days. The problem is that it's a bit much for one post, and I don't like writing sequences, and some people don't enjoy seeing even one mathy post, never mind several. What should I do? Compress it into one post, make it a sequence, keep it off LW, or something else?
I for one often don't do more than skim mathy posts, but I think they're
important and I'm glad people make them. (So my vote is for either one post or a
sequence, and it sounds like you're leaning towards the former.)
Edit:
The reasons I often skim mathy posts (probably easy to guess, but included for
completeness):
1. The math is often above my level.
2. They take more time and attention to read than non-mathy ones.
--------------------------------------------------------------------------------
[quoted passage not preserved]
-- Neal Stephenson, Reamde
[http://www.amazon.com/Reamde-A-Novel-Neal-Stephenson/dp/0061977969/]
2GLaDOS11y
Those people need to learn to live with seeing math if they want to be on a site
trying its best to refine human rationality.
Post it please.
1cousin_it11y
I already have: 1
[http://lesswrong.com/lw/b0e/a_model_of_udt_without_proof_limits/], 2
[http://lesswrong.com/lw/b0c/the_limited_predictor_problem/].
0GLaDOS11y
Yay ^_^
1WrongBot11y
My preference would be for one post per major idea, however short or long that
ends up.
Please keep posting mathy stuff here, I find it extremely interesting despite
not having much of a math background.
Hi. Long time reader, first time poster (under a new name). I posted once before, then quit because I am not good at math and this website doesn't offer many examples of worked out problems of Bayes theorem.
I have looked for a book or website that gives algebraic examples of basic Bayesian updates. While there are many books that cover Bayes, all require calculus, which I have not taken.
In a new article by Kaj_Sotala, fallacies are interpreted in the light of Bayes theorem. I would like to participate in debates and discussion where I can identify common ...
My favorite explanation
[http://oscarbonilla.com/2009/05/visualizing-bayes-theorem/] of Bayes' Theorem
barely requires algebra. (If you don't need the extended explanation, just
scroll to the bottom, where the problem is solved.)
1lucent11y
That is a good article. I am looking for dozens of examples, all worked out in
subtly different ways, just like a mini-textbook. OR a chapter in a regular text
book. I'll probably have to just do it myself.
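To illustrate the kind of worked example being asked for -- a single algebra-only Bayesian update in the classic diagnostic-test setup (the numbers here are made up for illustration):

    # Prior: 1% of patients have the disease.
    p_disease = 0.01

    # Likelihoods: the test is positive for 80% of sick patients,
    # and (falsely) for 9.6% of healthy patients.
    p_pos_given_disease = 0.80
    p_pos_given_healthy = 0.096

    # Total probability of a positive test:
    p_pos = (p_pos_given_disease * p_disease
             + p_pos_given_healthy * (1 - p_disease))

    # Bayes' theorem: P(disease | positive test)
    posterior = p_pos_given_disease * p_disease / p_pos
    print(round(posterior, 3))  # 0.078 -- a positive test moves 1% up to ~7.8%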
I decided to finally start reading The Hanson-Yudkowsky AI-Foom Debate. I am not sure how much time I will have, but I will post my thoughts along the way as replies to this comment. This is also an opportunity for massive downvotes :-)
In The Weak Inside View [http://lesswrong.com/lw/vz/the_weak_inside_view/]
Eliezer Yudkowsky writes that it never occurred to him that his views about
optimization ought to produce quantitative predictions.
Eliezer further argues that we can't use historical evidence to evaluate
completely new ideas.
Not sure what he means by "loose qualitative conclusions".
He says that he can't predict how long it will take an AI to solve various
problems.
Argh...I am getting the impression that it was a really bad idea to start
reading this at this point. I have no clue what he is talking about.
I don't know what the law of 'Accelerating Change' is and what exogenous means
and what ontologically fundamental means and why not even such laws can break
down beyond a certain point.
Oh well...I'll give up and come back to this when I have time to look up every
term and concept and decrypt what he means.
0Grognor11y
Some context:
He means that, because the inside view is weak, it cannot predict exactly how
powerful an AI would foom, exactly how long it would take for an AI to foom,
exactly what it might first do after the foom, exactly how long it will take for
the knowledge necessary to make a foom, and suchlike. Note how three of those
things I listed are quantitative. So instead of strong, quantitative predictions
like those, he sticks to weak general qualitative ones: "AI go foom."
He means, in this example anyway, that the reasoning "historical trends usually
continue" applied to Moore's Law doesn't work when Moore's Law itself creates
something that affects Moore's Law. In order to figure out what happens, you
have to go deeper than "historical trends usually continue".
I didn't know what exogenous means when I read this either, but I didn't need
to in order to understand. (I deigned to look it up. It means generated by the
environment,
not generated by organisms. Not a difficult concept.) Ontologically fundamental
is a term we use on LW all the time; it means at the base level of reality, like
quarks and electrons. The Law of Accelerating Change is one of Kurzweil's
inventions; it's his claim that technological change accelerates itself.
Indeed, if you're not even going to try to understand, this is the correct
response, I suppose.
Incidentally, I disapprove of your using the open thread as your venue for this
rather than commenting on the original posts asking for explanations. And giving
up on understanding rather than asking for explanations.
2khafra11y
He's not really giving up, he's using a Roko algorithm
[http://lesswrong.com/lw/6kv/guardian_column_on_ugh_fields_mentions_lw/4htf]
again.
-1XiXiDu11y
In retrospect I wish I would have never come across Less Wrong :-(
9TheOtherDave11y
This is neither a threat nor a promise, just a question: do you estimate that
your life would be improved if you could somehow be prevented from ever viewing
this site again? Similarly, do you estimate that your life would be improved if
you could somehow be prevented from ever posting to this site again?
-3XiXiDu11y
I have been trying this for years now, but just giving up sucks as well. So
I'll again log out now and (try to) not come back for a long time (years).
-3XiXiDu11y
My intuitive judgement of the expected utility of reading what Eliezer Yudkowsky
writes is low enough that I can't get myself to invest a lot of time on it. How
could I change my mind about that? It feels like reading a book on string
theory: there are no flaws in the math, but you also won't learn anything new
about reality.
ETA That isn't the case for all people. I have read most of Yvain's posts for
example because I felt that it is worth it to read them right away. ETA2 Before
someone is going to nitpick, I haven't read posts like 'Rational Home Buying'
because I didn't think it would be worth it. ETA3 Wow I just realized that I
really hate Less Wrong, you can't say something like 99.99% and mean "most" by
it.
I thought it might help people to see exactly how I think about everything as I
read it and where I get stuck.
I do try, but I got the impression that it is wrong to invest a lot of time on
it at this point when I haven't even learnt basic math yet.
Now you might argue that I invested a lot of time into commenting here, but
that was due to weakness of will and psychological distress rather than
anything else. Deliberately reading the Sequences is very different here,
because it
takes an effort that is high enough to make me think about the usefulness of
doing so and decide against it.
When I comment here it is often because I feel forced to do it. Often because
people say I am wrong etc. so that I feel forced to reply.
4NancyLebovitz11y
I don't know if it's something you want to take public, but it might make sense
to do a conscious analysis of what you're expecting the sequences to be.
If you do post the analysis, maybe you can find out something about whether the
sequences are like your mental image of them, and even if you don't post, you
might find out something about whether your snap judgement makes sense.
1XiXiDu11y
In Engelbart As UberTool?
[http://www.overcomingbias.com/2008/11/engelbarts-uber.html] Robin Hanson talks
about a dude who actually tried to apply recursive self-improvement to his
company. He is still trying [http://dougengelbart.org/home/welcome-redirect.html]
(wow!).
It seems humans, even groups of humans, are not capable of fast recursive
self-improvement. That they didn't take over the world might be partly due to
strong competition from other companies that are constantly trying the same.
What is it that is missing that doesn't allow one of them to prevail?
Robin Hanson further asks what would have been a reasonable probability estimate
to assign to the possibility of a company taking over the world at that time.
I have no idea how I could possibly assign a number to that. I would just have
said that it is unlikely enough to be ignored. Or that there is not enough data
to make a reasonable guess either way. I don't have the resources to take every
idea seriously and assign a probability estimate to it. Some things just get
discounted by my intuitive judgment.
0Viliam_Bur11y
I would guess that the reason is people don't work with exact numbers, only with
approximations. If you make a very long equation, the noise kills the signal. In
mathematics, if you know "A = B" and "B = C" and "C = D", you can conclude that
"A = D". In real life your knowledge is more like "so far it seems to me that
under usual conditions A is very similar to B". A hypothetical perfect Bayesian
could perhaps assign some probability and work with it, but even our estimates
of probabilities are noisy. Also, the world is complex, things do not add to
each other linearly.
I suspect that when one tries to generalize, one gets a lot of general rules
with maybe 90% probabilities. Try to chain a dozen of them together, and the
result is pathetic. It is like saying "give me a static point and a lever and I
will move the world" only to realize that your lever is too floppy and you can't
move anything that is too far and heavy.
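To put a number on that (assuming, charitably, that the rules are independent):

    # Chain a dozen rules that each hold with 90% probability:
    print(0.9 ** 12)  # ~0.282 -- the conclusion holds less than a third of the time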
0XiXiDu11y
In Fund UberTool? [http://www.overcomingbias.com/2008/11/fund-ubertool.html],
Robin Hanson talks about a hypothetical company that applies most of its
resources to its own improvement until it would burst out and take over the
world. He further asks what evidence it would take to convince you to invest in
them.
This post goes straight to the heart of Pascal's mugging, vast utilities that
outweigh tiny probabilities. I could earn a lot by investing in such a company
if it all works as promised. But should I do that? I have no idea.
What evidence would make me invest money into such a company? I am very risk
averse. Given my inability to review mathematical proofs and advanced technical
proofs of concept, I'd probably be hesitant and fear that they are bullshitting
me.
In the end I would probably not invest in them.
0faul_sname11y
By "a hypothetical company that applies most of its resources to its own
improvement" do you mean a tech company? Because that's exactly what tech
companies do, and they seem to be pretty powerful, if not "take over the world"
powerful. And I do invest in those companies.
-1XiXiDu11y
In Friendly Teams [http://www.overcomingbias.com/2008/11/englebart-not-r.html]
Robin Hanson talks about the guy who tried to get his company to undergo
recursive self-improvement and how he was a really smart fellow who saw a lot of
things coming.
Robin Hanson further argues that key insights are not enough but that it takes
many small insights that are the result of a whole society of agents.
Robin further asks what it is that makes the singleton AI scenario more
reasonable if it does not work out for groups of humans, not even remotely. Well, I
can see that people would now say that an AI can directly improve its own
improvement algorithm. I suppose the actual question that Robin asks is how the
AI will reach that point in the first place. How is it going to acquire the
capabilities that are necessary to improve its capabilities indefinitely?
f.lux and sleep aid follow-up: About a month or two ago, I posted on the open thread about some things I was experimenting with to get to bed regularly at a decent hour. Here are the results:
f.lux: I installed f.lux on my computer. This is a program that through the course of the day, changes your display from blue light to red light, on the theory that the blue light from your computer keeps you awake. When I first installed it, the red tint to my screen was VERY noticeable. Anecdotally, I ended up feeling EXTREMELY tired right after installing it, and fe...
I have been wondering for a while: what's the source for the A Human's Guide to Words sequence? I mean, EY had to come up with that somehow, and unlike with the probability and cognitive science stuff, I have no idea what kind of books inspired A Human's Guide to Words. What are the keywords here?
Eliezer had read Language in Thought and Action
[http://en.wikipedia.org/wiki/Language_in_Thought_and_Action] prior to writing
this sequence, and he might have gotten some of it from Steven Pinker or the MIT
Encyclopedia of the Cognitive Sciences as well.
1beoShaffer11y
If I understand correctly, it was partially inspired by general semantics
[http://www.generalsemantics.org/].
I'm often walking to somewhere and I notice that I have a good amount of thinking time, but that I find my head empty. Has anyone any good ideas on useful things to occupy my mind during such time? Visualisation exercises, mental arithmetic, thinking about philosophy?
It depresses me a little, how much easier it is to make use of nothing but a pen and paper than it is to make use of the time when even that is removed and one has only one's own mind.
How often do you think in words, and how often in visuals, sounds, and so on? Do
you normally think by picturing things, or engaging in an internal monologue, or
what? Or is the distribution sort of even?
0oliverbeatson11y
I'd say something like internal monologue, for thinking anyway (this may be
internally sounded, I know that I think word-thoughts in my own voice, but I
regularly think much faster than I could possibly speak, until I realise that
fact, when the voice becomes slow and I start repeating myself, and then get
annoyed at my brain for being so distracting).
For calculating or anything vaguely mathematical I use abstractly spatial/visual
sorts of thoughts -- abstract meaning I don't have sufficient awareness of the
architecture of my brain to tell you accurately what I even mean. Generally I'm
not very visual, but I would say I use a spatial sort of visual awareness quite
often in thought. If this makes sense.
Does this imply something about the sorts of tasks I could do that were most
useful? I'm intrigued by the reasons you have for requesting the data you did.
:)
0Crux11y
I requested that data because for some reason, in my own experience, I've
noticed the tendency you mentioned in your previous post as being strongest when
I'm trying to avoid the internal monologue way of thinking.
If I try to avoid using words in my thought process, I often find myself walking
around empty-headed for some reason. It's as if it's a lot harder to start a
non-verbal thought, or something. I don't know.
When walking around with a lot of thinking time on my hands, I've found a lot of
success keeping myself occupied by simply saying words to myself and then seeing
where it takes me. For example, I may vocalize in my head "epistemology", or
"dark arts", or something like that, and then see where it takes me (making sure
to start verbalizing my thought process if I stall at any point).
Maybe I'm on a different topic though. Are you simply asking what you should
spend your time thinking about, and I'm going into the topic of how to start a
thought process (whatever it is)? This seems like an unlikely interpretation
though because you said the problem is not having a pen and paper, which
suggests to me that you know what to think about, but end up not doing anything
if you can't write or draw.
Sorry if this is pretty messy. I wanted to respond to this, but didn't have much
time.
0oliverbeatson11y
I see, that's interesting. That feels recognisable: I think when I hear my own
voice/internal monologue, it brings to memory things I've already said or talked
about, so I dwell on those things rather than think of fresh topics. So I think
of the monologue itself as being the source of the stagnant thinking, and shut
it down hoping insight will come to me wordlessly. Having said all that about
having an internal monologue, I now think I do have a fair number of non-verbal
thoughts, but these still use some form of mental labelling to organise concepts
as I think about them.
That sounds like an interesting experiment to do; next time I need to travel
bipedally I'll get on to checking out those default conceptual autocompletes*
that I get from different words. Thanks!
*Hoping I haven't been presumptuous in my use of technical metaphors -- in the
course of writing this I've had to consciously rein in my desire to use
programming metaphors for how my brain seems to work.
I suppose among the questions I was interested in, was indeed what I should
spend my time thinking about. I had the idea that there must be
high-computational-requiring and low-requisite-knowledge-requiring mental tasks,
akin to how one learning electronics might spend time extrapolating the design
of a one-bit adder with a pen and paper and requisite knowledge of logic gates.
But crucially, without a pen and paper. So in what area can I use my
pre-existing knowledge to productively generate new ideas or thoughts without a
pen and paper? Possibly advancing in some sense my 'knowledge' of those areas at
the same time.
Sidenote: I like reading detailed descriptions of people's thought-processes
like this, because of the interleaved data on what they pay attention to when
thinking; and especially when there isn't necessarily a point to it in the
sequences-/narrative-/this post has a lesson related to this anecdote-style, and
when it's just describing the mechanics of their thought stream for the sake of
un...
I'm an undergraduate student majoring in computer science. What career and subsequent studies should I aim for in order to be able to solve interesting and useful problems?
Did you folk see this one?
The Problem with 'Friendly' Artificial Intelligence - Adam Keiper and Ari N. Schulman.
Wow, something has gone horribly wrong if this is outsiders' perception of FAI
researchers.
1Vladimir_Nesov11y
The article Tim linked is a reply to another article
[http://www.thenewatlantis.com/publications/machine-morality-and-human-responsibility]
that only quotes some of CFAI, so it's possible that the author was only exposed
to the quotations from CFAI in that article.
Universal power switch symbols are counter-intuitive. A straight line ends. It doesn't go anywhere. It should mean "stop." A circle is continuous and should mean "on". A line penetrating a circle has certain connotations that mean keep it going (or coming), but definitely not "standby". How can we change this?
Polyamory: if anyone is interested in my notes ( http://dl.dropbox.com/u/5317066/2012-gwern-polyamory.txt ), I've updated them with a big extract from Anapol 2010 - apparently she noticed a striking frequency of Asperger's in polyamory circles. Of course LW has never been accused of hosting very many of those...
The Poly-English Dictionary
[http://www.polyfamilies.com/poly-english-dictionary.html] may need updating.
2wedrifid11y
I think I just got converted. I'm willing to sleep with lots of people so long
as it means I get to hang out with lots of nerds and discuss fantasy books. Hang
on... how many females are there in this community? 3?
Are there any good examples of what would be considered innate human abilities (cognitive or otherwise) that are absent or repressed in an entire culture?
For example, are there examples of culture-wide face-blindness/prosopagnosia? Are there examples of cultures that can't apply the Gaze heuristic, or can't subitize?
This is for reasoning about criticisms of universal grammar, in particular the lack of recursion in the Pirahã language, so that one is kind of begging the question. The closest I can come up with at the moment (which really isn't very close ...
A vague discussion of AI risks has just broken out at http://marginalrevolution.com/marginalrevolution/2012/03/amazing-bezos.html#comments Marginal Revolution gets a lot of readers who are roughly in the target demographic for LW - anyone fancy having a go at making a sensible comment in that thread that points people in the right direction?
Any recommendations for books/essays on contemporary hermeneutics whose authors are aware of Schellingian game theory and signalling games? Google Scholar has a few suggestions but not many and they're hard to access.
Would it be useful to make a compressed version of the Sequences, at the ratio of one Sequence into one article, which is approximately one article into one paragraph? It would provide initial information for people who would like to read the Sequences but do not have enough time. Each paragraph would be followed by a "read more" hyperlink to the original article.
There are summary posts like this
[http://lesswrong.com/lw/od/37_ways_that_words_can_be_wrong/], but if you're
thinking about a more coherent presentation, "one article into one paragraph"
probably won't work.
A proposal: make public an anonymised dataset of all Karma activity over an undisclosed approximate three-month period from some point in the past 18 months.
What I would like is a list of anonymised users, a list of posts and comments in the given three-month period (stripped of content and ancestry, but keeping a record of authorship), and all incidents of upvotes and downvotes between them that took place in the given period. This is for purposes of observing trends in Karma behaviour, and also sating my curiosity about how some sort of graph-theoretic-...
Is the LW database structure available? If yes, you could prepare some SELECT
queries and ask admins to run them for you and send you the result.
Anonymization: Replace user ids with "f(id+c)" where "f" is a hash function and
"c" is a constant that will be modified by the admin before running you script.
Replace times of karma clicks with "ym(time+r)" where "r" is a random value
between 0 and 30 days, and "ym" is a function that returns only month and year.
Select only data from the recent year and only from users who were active
during the whole year (made at least one vote in the first and last months of
the time period). Would such data still be useful to you?
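For concreteness, a minimal sketch of that scheme in Python (the function names and details here are illustrative, not an actual LW script):

    import hashlib
    import random
    from datetime import datetime, timedelta

    SECRET = "replace-me"  # the constant "c", chosen by the admin, never published

    def anon_user(user_id):
        # f(id+c): a keyed hash, irreversible without the secret constant
        return hashlib.sha256("{}{}".format(user_id, SECRET).encode()).hexdigest()[:12]

    def fuzz_vote_time(vote_time):
        # ym(time+r): shift by a random 0-30 days, keep only year and month
        shifted = vote_time + timedelta(days=random.uniform(0, 30))
        return shifted.strftime("%Y-%m")

    print(anon_user(12345))                              # e.g. 'a1b2c3d4e5f6'
    print(fuzz_vote_time(datetime(2012, 3, 14, 9, 26)))  # '2012-03' or '2012-04'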
0sixes_and_sevens11y
My day job is DB admin and development. In the unlikely event of LW back-end
admin-types being comfortable running a query sent in by some dude off the site,
I wouldn't be comfortable giving it to them. The effort of due diligence on a
foreign script is probably greater than that required to put it together.
The data I want correspond to:
* the IDs (i.e. primary key, not the username) of all the users
* the IDs (PK) and authorship (user ID) of all posts and comments in a
contiguous ~3 month period
* the adjacency of users and posts as upvotes and downvotes over this period (I
assume this is a single junction table)
If I were providing this data, I would also scramble the IDs in some fashion
while maintaining the underlying relationships, as consecutive IDs could provide
some small clue as to the identity and chronology of users or posts. While this
is pretty straightforward, the mechanism for such scrambling should not be known
to recipients of the data.
Is there a term in many-party game theory for a no-win, no-lose scenario; that is, one where by sacrificing a chance of winning you can prevent losing (neutrality or a draw)?
I don't know any game theory terms, but in law, there's the high-low
[http://www.settlementperspectives.com/2008/12/what-high-low-agreements-can-do-for-you-settlement-structures-part-iii/]
agreement, where the plaintiff agrees that the maximum exposure is X, and the
defendant agrees that the minimum exposure is Y (a lower number). It aims to
reduce the volatility of trial.
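To illustrate the volatility point with toy numbers (made-up payoffs, not the legal mechanics): both options below have the same expected value, but the hedge trades away the win to rule out the loss.

    def expected_value(outcomes):
        # outcomes: list of (probability, payoff) pairs
        return sum(p * v for p, v in outcomes)

    gamble = expected_value([(0.5, +1), (0.5, -1)])  # can win, can lose
    hedge  = expected_value([(1.0, 0)])              # guaranteed draw
    print(gamble, hedge)  # 0.0 0.0 -- same on average, very different risk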
Jane McGonigal's new project SuperBetter may be useful to you as an incentive framework for self-improvement.
I've been using the Epic Win [http://www.rexbox.co.uk/epicwin/] iPhone app as an
organizer, task reminder and somewhat effective akrasia-defeater for about a
year now, and think it has helped me quite a bit. SuperBetter is similar, but
has more aspects, and is not portable (for now). I anticipate that I will prefer
Epic Win's simplicity and accessibility to SuperBetter.
The Essence Of Science Explained In 63 Seconds
A one-minute piece of Feynman lecture candy wrapped in reasonable commentary. Excellent and, most importantly, brief intro-level thinking about our physical world. Apologies if it has been linked to before, especially since I can't say I would be surprised if it was.
Here it is, in a nutshell: The logic of science boiled down to one, essential idea. It comes from Richard Feynman, one of the great scientists of the 20th century, who wrote it on the blackboard during a class at Cornell in 1964.
[YouTube video]
Think about ...
[This comment is no longer endorsed by its author]
Is it possible to increase our computational resources by putting ourselves in a simulation run in such a way as to not require as much quantum wave function collapse to produce a successive computational state?
Something I would quite like to see after looking at this post: a poll of LW users' stances on polarised political issues.
There are a whole host of issues which we don't discuss for fear of mindkilling. While I would expect opinion to be split on a lot of politically sensitive subjects, I would be fascinated to see if the LW readership came down unilaterally on some unexpected issue. I'd also be interested to see if there are any heavily polarised political issues that I currently don't recognise as such.
Why would this be a bad idea?
I would be astonished if one result of such a poll was not quite a lot of
discussion of the polarized political issues that we don't discuss for fear of
mindkilling. Whether that's a bad thing or not depends on your beliefs about
such discussion, of course.
Also, if what you're interested in is (a) issues where we all agree, and (b)
issues you don't think of as polarized political issues in the first place, it
seems a poll is neither necessary nor sufficient for your goals. For any stance
S, you can find out whether S is in class (a) by writing up S and asking if
anyone disagrees. And no such poll will turn up results about any issue the poll
creator(s) didn't consider controversial enough to include in the poll.
That said, I'd be vaguely interested (not enough to actually do any work to find
out) in how well LW users can predict how popular various positions are on LW,
and how well/poorly accuracy in predicting the popularity of a position
correlates with holding that position among LW users.
0sixes_and_sevens11y
How I imagined it going:
0) Prohibit actual discussion of the subjects in question, with the
understanding that comments transgressing this rule would be downvoted to
oblivion by a conscientious readership (as they generally are already)
1) Request suggestions for dichotomies that people believe would split popular
opinion. Let people upvote and downvote these on the basis of whether they'd be
fit for the purpose of the poll.
2) Take the most popular dichotomies and put them in a poll, with a "don't care"
and "wrong dichotomy" option, which I hope are fairly self-explanatory.
2) a) To satisfy your curiosity on how well LW users can predict the beliefs of
other LW users, also have a "what do you think most LW users would pick as an
answer to this question?" option.
3) Have people vote, and see what patterns emerge.
Does anyone know much about general semantics? Given the very strong outside-view similarities between it and Less Wrong, not to mention the extent to which it directly influenced the Sequences, it seems like its history could provide some useful lessons. Unfortunately, I don't really know that much about it.
EDIT: disregard this comment, I mistook general semantics for, well, semantics.
I'm no expert on semantics but I did take a couple of undergrad courses on
philosophy of language and so forth. My impression was that EY has already taken
all the good bits, unless you particularly feel like reading arguments about
whether a proposition involving "the current king of France" can have a truth
value or not. (actually, EY already covered that one when he did rubes and
bleggs).
In a nutshell, the early philosophers of language were extremely concerned about
where language gets its meaning from. So they spent a lot of time talking about
what we're doing when we refer to people or things, eg. "the current king of
France" and "Sherlock Holmes" both lack real-world referents. And then there's
the case where I think your name is John and refer to you as such, but your name
is really Peter, so have I really succeeded in referring to you? And at some
point Tarski came up with "snow is white" is a true proposition if and only if
snow is white. And that led into the beginning of modern day
formal/compositional semantics, where you have a set of things that are snow,
and a set of things that are white, and snow is white if and only if the set of
things that are snow overlaps completely with the set of things that are white.
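That last clause is essentially a subset test; a toy version, with made-up extensions:

    # "Snow is white" is true iff everything in the set of snow-things
    # is also in the set of white-things.
    snow_things = {"snowball", "snowdrift"}
    white_things = {"snowball", "snowdrift", "chalk", "milk"}

    snow_is_white = snow_things <= white_things  # subset test
    print(snow_is_white)  # True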
0beoShaffer11y
I see. Do you know much about the history of it as a movement? While I do have
some interest in the actual content of the area, I was mostly looking at it as
a potential member of the same reference class as LW. Specifically, I was
wondering if its history might contain lessons that are generally useful to any
organization that is trying to improve people's thinking abilities,
particularly those that have formed a general philosophy based on insights
gained from cross-disciplinary study.
1erratio11y
My apologies, I went off in completely the wrong direction there. I don't know
too much of it as a movement, other than that all the accounts of it I've seen
make it sound distinctly cultish, and that the movement was carried almost
entirely by Korzybski and later by one of his students.
0NancyLebovitz11y
I was and am very influenced by Stuart Chase's The Tyranny of Words-- what I
took away from it is to be aware that you never have the complete story, and
that statements frequently need to be pinned down as to time, place, and degree
of generality.
Cognitive psychology has a lot of overlap with general semantics-- I don't know
whether there was actual influence or independent invention of ideas.
I just thought of a way to test one of my intuitions about meta-ethics, and I'd appreciate others thoughts.
I believe that human morality is almost entirely socially constructed (basically an anti-realist position). In other words, I think that the parts of the brain that implement moral decision-making are incredibly plastic (at least at some point in life).
Independently, I believe that behaviorism (i.e. the modern psychological discipline descended from classical conditioning and operant conditioning) is just decision theory plus an initially plastic pun...
People have irrational beliefs. When people come to lesswrong and talk about them, many say "oops" and change their mind. However, often they keep their decidedly irrational beliefs despite conversation with other Lesswrongers who often point out where they went wrong, and how they went wrong, and perhaps a link to the Sequence post where the specific mistake is discussed in more detail.
Some examples:
http://lesswrong.com/user/Jake_Witmer/
This guy was told he was being Mindkilled. Many people explained to him what was wrong with his thinking, and...
Half the people you listed were insanely rude in pretty much every single
comment they posted.
Jake Witmer pretty much accused everyone who downvoted him of communism.
911truther deliberately chose a provocative name and kept wailing in every
single post about the downvotes he received (which of course caused him to get
more downvotes).
sam0345's main problem wasn't that he was irrational, it was that he was an ass
all the time.
But I don't even know why you chose to list the above as belonging to the same
category as decent people like Mitchell_Porter and MrHen, people who don't
follow assholish tactics, and are therefore generally well received and treated
as proper members of the community, even if occasionally downvoted (whether
rightly or wrongly). As you yourself saw.
The main problem with half the people you listed was that they were assholes,
not that they were wrong. If people enjoy being assholes, if their utility
function doesn't include a factor for being nice at people, how do you change
that with mere unbiasing? Not caring about how whether you treat others nicely
or nastily has to do with empathy, not with intellectual power.
-2Nectanebo11y
The rudeness wouldn't help with the downvotes, I can understand that.
But the factor that I was pointing out, and the common factor for my grouping
them together, was the lack of being able to say "oops". I am sorry, I didn't
make it very clear. That's why I listed the assholes with the nice people.
MrHen left LessWrong believing in a God, and Mitchell_Porter (as far as I can
tell) still believes dualism needs to be true if colour exists (or whatever his
argument was; I'm embarrassing myself by trying to simplify it when I had a poor
understanding of what he was trying to say). They were/are also great
rationalists apart from that, and they both make sure to be very humble in
general while on the site.
The other three were often rude, but the main reason I pointed them out was
their lack of ability to say "oops" when their rational failings were pointed
out to them. Unlike the other two, these three then proceeded to act very
douchey until driven from the site, but their first posts are much less
abrasive and rude.
In general though, if they aren't going to work out that they are wrong at
LessWrong, where are they going to?
Some of these people may work it out with time, and it may be unreasonable to
expect them to change their mind straight away.
But this should show at least how difficult it is for an irrational person to
attempt to become more rational; it's like having to know the rules in order to
learn the rules.
What does it take to commit to wanting rationality from a beginning of
irrationality?
These examples show the existence of people on LessWrong who aren't rational,
and while that isn't a surprise, I feel like the LessWrong community should
perhaps learn from the failings of some of these people, in order to better
react to situations like this in the future, or something. I don't know.
In any case, thank you for replying.
3GLaDOS11y
Compartmentalization [http://lesswrong.com/tag/compartmentalization/].
Bold statement that somehow still seems true: Most LessWrongers probably have a
belief of comparable wrongness. MrHen is just unlucky.
1Mitchell_Porter11y
The argument is that for dualism not to be true, we need a new ontology of
fundamental quantum monads that no-one else quite gets. :-) My Chalmers-like
conclusion that the standard computational theory of mind implies dualism, is an
argument against the standard theory.
0TheOtherDave11y
Deciding that being less wrong than I am now is valuable, realizing that doing
what I've been doing all along is unlikely to get me there, and being willing to
give up familiar habits in exchange for alternatives that seem more likely to
get me there. These are independently fairly rare and the intersection of them
is still more so.
This doesn't get me to wanting "rationality" per se (let alone to endorsing any
specific collection of techniques, assumptions, etc., still less to the specific
collection that is most popular on this site), it just gets me looking for some
set of tools that is more reliable than the tools I have.
I've always understood the initial purpose of LW to be to present a specific
collection of tools such that someone who has already decided to look can more
easily settle on that specific collection (which, of course, is endorsed by the
site founder as particularly useful), at-least-ostensibly in the hope that some
of them will subsequently build on it and improve it.
Getting someone who isn't looking to start looking is a whole different problem,
and more difficult on multiple levels (practical, ethical, etc.).
0Viliam_Bur11y
You need some initial luck
[http://lesswrong.com/lw/rs/created_already_in_motion/]. It's like the human
mind is a self-modifying system, where the rules can change the rules, and
again, and again. Thus the human mind floats around in a mindset space. The
original
setting is rather fluid, for evolutionary reasons -- you should be able to join
a different tribe if it becomes essential for your survival. On the other hand,
the mindset space contains some attractors; if you happen to have some set of
rules, these rules keep preserving themselves. Rationality could be one of these
attractors.
Is the inability to update one's mind really so exceptional on LW? One way of
not updating is "blah, blah, blah, I don't listen to you". This happens a lot
everywhere on the internet, but for these people probably LW is not attractive.
The more interesting case is "I listen to you, and I value our discussion, but I
don't update". This seems paradoxical. But I think it's actually not unusual...
the only unusual thing is the naked form -- people who refuse to update, and
recognize that they refuse to update. The usual form is that people pretend to
update... except that their updates don't fully propagate. In other words, there
is no update, only belief in update. Things like: yeah I agree about Singularity
and stuff, but somehow I don't subscribe for cryopreservation; and I agree human
lives are valuable and there are charities which can save hundred human lifes
for every dollar sent to them, but somehow I didn't send a single dollar yet;
and I agree that rationality is very important and being strategic can increase
one's utility, and then I procrastinate on LW and other web sites and my
everyday life goes on without any changes.
We are so irrational that even our attempts to become rational are horribly
irrational, and that's why they often fail.
4Grognor11y
Absolutely nothing. Your sample suffers from selection bias: it's all the worst
examples you can think of. Please don't make a discussion post about this.
0GLaDOS11y
Not really. He had major problems with his tone though.
Recommendations for a book/resource on comparative religion/mythology, ideally theory-laden and written by someone with good taste for hermeneutics? Preferably something that doesn't assume that gods aren't real. (I'm approaching the subject from the Gaimanian mythological paradigm, i.e. something vaguely postmodern and vaguely Gods Need Prayer Badly, but that perspective is only provisional and I value alternative perspectives.)
I mean, the classic is Joseph Campbell and The Hero with a Thousand Faces.
There's also The Masks of God and other books by him.
2khafra11y
It's not book-length, but Eric S. Raymond's Dancing With the Gods
[http://www.catb.org/~esr/writings/dancing.html] treats them as, at least,
intersubjectively real.
0Will_Newsome11y
I've read it. ESR is... a young soul, hard for me to learn from.
0Will_Newsome11y
Thanks yo, will read.
3Incorrect11y
What's your empirical definition of god here?
1NancyLebovitz11y
Not what you're asking for, but possibly interesting: A World Full of Gods: An
Inquiry into Polytheism
[http://www.amazon.com/World-Full-Gods-Inquiry-Polytheism/dp/0976568101], a
polytheistic theology. The author said it was the first attempt at such.
This review
[http://www.amazon.com/review/R1EIXMHA3BUQ2X/ref=cm_cr_dp_perm?ie=UTF8&ASIN=0976568101&nodeID=283155&tag=&linkCode=]
has enough quotes that you should be able to see whether you want to read it.
[Weird irrational rant]
A week and a half ago, I either caught some bug or went down with food poisoning. Anyway, in the evening I suddenly felt like shit and my body temperature jumped to 40C. My mom gave me some medicine and told me to try and get some sleep. My state of mind felt a bit altered, and I started praying fervently to VALIS. My Gnostic faith has been on and off for the last few years, but in that moment, I was suddenly convinced that it was a test of some sort, and that the fickle nature of reality would be revealed to me if I wouldn't waver i...
Activity on these seems to be dying down, so my own reply to this comment is a poll.
Upvote this comment if you prefer the status quo of two open threads per month. Downvote it if you prefer to go back to one open thread per month.
The Unreasonable Effectiveness of Data talk by Peter Norvig.
Richard Carrier's book looks like it's going to spread the word of Bayes. To the theists, too. And there's a media-friendly academic fight in progress. Just the thing!
I think I have seen offers to help edit LW posts, but can't remember where. Does anyone know what I may be thinking of?