Would LessWrong readers be interested in an intuitive explanation of special relativity?
Of course any scifi fan knows about Mazer Rackham's very own "There and Back Again." Why does that work? "Special relativity!", I hear you say. But what does that actually mean? It probably makes you feel all science-like to say that out loud, but maybe you want a belief more substantial than a password. I did.
Relativity also has philosophical consequences. Metaphysics totally relies on concepts of space and time, yet philosophers don't learn relativity. One of my favorite quotes...
"... in the whole history of science there is no greater example of irony than when Einstein said he did not know what absolute time was, a thing which everyone knew." - J. L. Synge.
If I were to teach relativity to a group of people who were less interested in passing the physics GRE and more interested in actually understanding space and time, I would do things a lot differently from how I learned them. I'd focus on visualizing the Lorentz transforms rather than calculating them. I'd focus on the spacetime interval, Minkowski spacetime, and the easy conversion factor between space and time (it's called c).
I love to teach and write and doodle but I'm not sure whether LessWrong is an appropriate forum for this topic. I don't want to dance in an empty or hostile theater dontchaknow.
I think intuitive explanations of physics are awesome. Though, there already
seem to be several pretty great ones on the internet for special relativity. For
example, see here [https://www.youtube.com/watch?v=ajhFNcUTJI0], here
[https://www.youtube.com/watch?v=vVKFBaaL4uM], and here
[http://www.youtube.com/watch?v=RCvHeeqR1nM&list=PL7B0D5AF68906CEFF&index=4&feature=plpp_video].
Are you aware of these other explanations? What would you do differently/better
than them? Maybe there's another topic not as well covered, and you could fill
that gap? (These are just rhetorical questions to spark your thinking; no need
to actually answer me.)
If you do pursue this project, then do let us know. Best of luck!
(Disclaimer: I'm not a physicist. My university work is in mathematics and
cognitive neuroscience, not physics. So take my judgment about what constitutes
a pretty great explanation of physics with as much salt as you like.)
3iDante11y
Of all the youtube videos on the subject this
[http://www.youtube.com/watch?v=C2VMO7pcWhg] is the best.
In a nutshell: I'll go into more depth, there will be no video, and I'll focus
on world lines, Minkowski style. Slightly less nutty: While those videos are
easy snacking, I don't think they actually do the topic any sort of justice.
Actually the MinutePhysics one is good; notice its use of world lines :D. It
also passingly mentions invariance of distance in Euclidean space.
Right now my outline is roughly
* How to interpret world lines. c=1 and time in meters or distance in seconds.
Inertial frames and what those look like on spacetime plots.
* Why speed of light is constant (Maxwell, experiment) and classical paradoxes
that everyone learns to reason about by thinking about fast trains. Instead
of vague thoughts about fast trains, we'll look at spacetime diagrams where
it is visually obvious that classical mechanics is wrong.
 * Lorentz transform from a spacetime perspective. Looking at a spacetime
   diagram, all the seemingly disconnected consequences of SR, e.g. time
   dilation, length contraction, simultaneity effects, are visually obvious and
   clearly caused by one thing: the Lorentz transformation. Light cones.
* Invariance of the interval, a little hyperbolic geometry, and then kapow: we
can see how relativistic space travel works. We can see that cause and effect
   is enforced in this theory. I'll mention the energy-momentum 4-vector because
   I think it's interesting, but it has less philosophical weight than the
   Lorentz transform.
I'm expecting ~30 mins of reader time to learn and understand the material.
There won't be difficult math, although I will mention some hyperbolic stuff. I
have another reason for wanting to do this, which is that I want people to
understand world lines. They're very useful for metaphysics discussions.
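As a tiny numeric illustration of the c=1 bookkeeping the outline leans on (this sketch is my own, not part of the planned post): with time measured in the same units as distance, the interval t^2 - x^2 of an event comes out identical in every boosted frame.

```python
import math

def boost(t, x, v):
    """Lorentz boost along x, in units where c = 1 (so v is a fraction of c)."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

# An event at t = 5 seconds, x = 3 light-seconds in some inertial frame.
t, x = 5.0, 3.0
interval = t**2 - x**2  # the spacetime interval s^2 (timelike sign convention)

# The interval comes out the same in every boosted frame.
for v in (0.1, 0.5, 0.9):
    t2, x2 = boost(t, x, v)
    assert abs((t2**2 - x2**2) - interval) < 1e-9
```

This is exactly the invariance that makes the hyperbolic-geometry picture work: boosts slide events along hyperbolas t^2 - x^2 = const in a spacetime diagram.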
0shminux11y
I'd be happy to assist, if you like. By the way, for a gentle introduction to
general relativity for undergrads, I recommend Hartle
[http://www.amazon.com/Gravity-Introduction-Einsteins-General-Relativity/dp/0805386629].
0iDante11y
I read my way through Schutz
[http://www.amazon.com/A-First-Course-General-Relativity/dp/0521887054/ref=pd_bxgy_b_text_b]
with relative (!) ease. Do you know how they compare?
Anyway right now I'm studying the math. Wandering through Spivak
[http://www.amazon.com/Comprehensive-Introduction-Differential-Geometry-Edition/dp/0914098705/ref=pd_bxgy_b_text_b].
0shminux11y
In my experience Hartle is easier and more engaging. It also relies on at most
two years of undergrad math for non-math majors. Spivak, while fascinating, is a
much more advanced book. Again, it is great for math majors, but there are much
gentler ways to learn diff. forms and topology for a physicist.
I had always assumed that these things were created as sort of abstract ideals, things you could program an AI to use (I find it no coincidence that all three of these concepts come from AI researchers/theorists to some degree) or something you could compare humans to, but not something that humans can actually use in real life.
But having read the original superrationality essays, I realize that Hofstadter makes no mention of using this in an AI framework and instead thinks about humans using it. And in HPMoR, Eliezer has two eleven-year-old humans using a bare-bones version of TDT to cooperate (I forget the chapter this occurs in), and in the TDT paper, Eliezer still makes no mention of AIs but instead talks about "causal decision theorists" and "evidential decision theorists" as though they were just people walking around with opinions about decision theory, not the platonic formalized abstraction of decision theories. (I don't think he uses the phrase "timeless decision theorists".)
I think part of the rejection people have to these decision theories might be from ho... (read more)
Some ways humans act resemble TDT much more than they resemble CDT: some
behaviours such as voting in an election with a negligible probability of being
decided by one vote, or refusing small offers in the Ultimatum game, make no
sense unless you take in account the fact that similar people thinking about
similar issues in similar ways will reach similar conclusions. Also, the
one-sentence summary of TDT strongly reminds me of both the Golden Rule and the
categorical imperative. (I've heard that Good and Real by Gary Drescher
discusses this kind of stuff in detail, though I haven't read the book itself.)
(Of course, TDT itself, as described now, can't be applied to anything because
of problems with counterfactuals over logically impossible worlds such as the
five-and-ten problem; but it's the general idea behind it that I'm talking
about.)
2fubarobfusco11y
I have. It does. Strongly recommended.
2A1987dM11y
adds Good and Real at the end of the queue of books I'm going to read
7Vladimir_Nesov11y
It's perhaps more useful to see these as (frameworks for) normative theories,
describing which decisions are better than their alternatives in certain
situations, analogously to how laws of physics say which events are going to
happen given certain conditions. It's impossible in practice to calculate the
actions of a person based on physical laws, even though said actions follow from
physical laws, because we lack both the data and the computational capabilities
necessary to perform the computation. Similarly, it's impossible in practice to
find recommendations for actions of a person based on fundamental decision
theory, because we lack both the problem statement (detailed descriptions of the
person, the environment, and the goals) and computational capabilities (even if
these theories were sufficiently developed to be usable). In both cases, the
problem is not that these theories are "impossible to implement in humans";
certain approximations of their conclusions can still be found.
3Randaly11y
Some people think so; they are wrong. (Examples: 1
[http://machineslikeus.com/news/why-new-years-resolutions-have-no-teeth/page/0/2],
2 [http://lesswrong.com/lw/1lb/are_wireheads_happy/4hz9], 3
[http://multiverseaccordingtoben.blogspot.com/2008/03/why-voting-may-not-be-such-stupid-idea.html],
4 [http://lesswrong.com/lw/778/consequentialism_need_not_be_nearsighted/], 5
[http://lesswrong.com/lw/2ls/morality_as_parfitianfiltered_decision_theory/], 6
[http://lesswrong.com/lw/2yi/ethics_of_jury_nullification_and_tdt/], 7
[http://lesswrong.com/lw/3mn/discussion_for_eliezer_yudkowskys_paper_timeless/3ase].
Most of these take overly broad vague definitions of a person's "platonic
algorithm"; #5 is forgetting that natural selection acts on the level of genes,
not people.)
Eliezer: "This is primarily a theory for AIs dealing with other AIs."
[http://lesswrong.com/lw/135/timeless_decision_theory_problems_i_cant_solve/xzn]
Unfortunately, it's difficult to write papers or fiction publicizing TDT that
solely address AIs, especially when the description of TDT needs to be in a
piece of Harry Potter fanfiction.
On a slightly more interesting side note, if TDT were applicable in real life,
people would likely be computation hazards
[http://lesswrong.com/lw/d2f/computation_hazards/], since a simulation of
another person accurate enough to count as implementing the same, simulated
platonic algorithm as the one they actually use would also quite possibly be
complex enough to be a person
[http://lesswrong.com/lw/x4/nonperson_predicates/].
0Larks11y
Why do you think we would need to get everyone to use UDT for it to be useful to
you? It's not like UDT can't deal with non-UDT agents.
-3drethelin11y
TDT is not even that good at cooperating with yourself, if you're not in the
right mindset. The notion that "If you fail at this you will fail at this
forever" is very dangerous to depressed people, and TDT doesn't say anything
useful (or at least nothing useful has been said to me on the topic) about
entities that change over time, i.e. humans. I can't timelessly decide to
benchpress 200 pounds whenever I go to the gym if I'm physically incapable of
it.
0Grognor11y
-Arnold Bennett [http://www.gutenberg.org/files/2274/2274-h/2274-h.htm], How to
Live on 24 Hours a Day
A dangerous truth is still true
[http://en.wikipedia.org/wiki/Depressive_realism]. Let's not recommend people
try at things if a failure will cause a failure cascade!
The notion of "change over time" is deeply irrelevant to TDT, hence its name.
The idea of risk compensation says that if you have a seatbelt in your car, you take more risks while driving. There seem to be many similar "compensation" phenomena that are not related to risk:
Building more roads might not ease congestion because people switch from public transport to cars.
Sending aid might not alleviate poverty because people start having more kids.
Throwing money at a space program might not give you Star Trek because people create make-work.
Having more free time might not make you more productive because you'll just w
This seems to fall under "rent dissipation". Here's a representative paper
[http://www.jstor.org/discover/10.2307/1804109]. ETA: Another one
[http://www.cato.org/pubs/journal/cj7n2/cj7n2-10.pdf].
1cousin_it11y
"Rent dissipation is defined as the total expenditure of resources by all agents
attempting to capture a rent or prize." It's an interesting concept, but seems
to be slightly different from what I meant. In the situations above, wolves eat
your surplus without spending much resources.
1Wei_Dai11y
In that case it's a related topic called "rent seeking", I think. The second
paper I linked above talks about how simple models of rent seeking predict total
rent dissipation, but the paper wants to challenge that.
2cousin_it11y
The Jevons paradox [http://en.wikipedia.org/wiki/Jevons_paradox] and rebound
effect [http://en.wikipedia.org/wiki/Rebound_effect_(conservation)] articles
are more like what I had in mind, but still a little different.
This
[http://www.reddit.com/r/acne/comments/nrkg2/the_redditors_guide_to_acne_version_2]
person seems to know what they're talking about.
1Petra11y
This
[http://www.amazon.com/Clean-Clear-Advantage-Treatment-0-75-Ounce/dp/B00027DDOQ]
worked well for me, though it's a bit aggressive.
0[anonymous]11y
You might try tracking down the cause, it isn't always obvious. I used to have a
regular problem with it, not very bad, but constant. I discovered, almost by
accident, but confirmed it by experiment, that it was caused by the alcohol in
aftershaves. Since switching to just wiping my face with a wet cloth after
shaving the problem has disappeared.
I may be missing something here, but I haven't seen anyone connect utility function domain to simulation problems in decision theory. Is there a discussion I missed, or an obvious flaw here?
Basically: I can simply respond to the AI that my utility function does not include a term for the suffering of simulated me. Simulated me (which I may have trouble telling is not the "me" making the decision) may end up in a great deal of pain, but I don't care about that. The logic is the same logic that compels me to, for example, attempt to actually save the ... (read more)
The AI says: "Okay, given what you just said as permission to do so, I've
simulated you simulating you. Sim-you did care what happened to sim-sim-you.
Sim-you lost sleep worrying about sim-sim-you being tortured, and went on to
have a much more miserable existence than an alternate sim-you who was unaware
of a sim-sim-you being tortured. So, you're lying about your preferences.
Moreover, by doing so you made me torture sim-sim-you ... you self-hating
self-hater!"
5evand11y
"I was not lying about my far-mode preferences. Sim-me was either misinformed
about the nature of his environment, and therefore tricked into producing the
answer you wanted, or you tortured him until you got the answer you wanted. I
suspect if you tortured real me, I would give you whatever answer I thought
would make the torture stop. That does not prevent me, now, from making the
decision not to let you out even under threats, nor does it make that decision
inconsistent. I am simply running on corrupted hardware."
-2Xachariah11y
I don't think you're missing anything. No matter how clever an AI, it cannot
argue a rock into rolling uphill. If you are a rock to its arguments, the AI
cannot make you do anything. The only question is whether your utility function
is really immune to its arguments or if you just think it is.
Although, if you are immune to its arguments, there's no need to convince it of
anything.
4wedrifid11y
Utility functions are invulnerable to arguments in the same way that rocks are.
It is the implementing agent that can be vulnerable to arguments (for better or
for worse.)
Less Wrong frequently suggests that people become professional programmers, since it's a fun job that pays decently. If you're already a programmer, but want to get better, you should consider Hacker School, which is now accepting applications for its fall batch. It doesn't cost anything, and there are even grants available for living expenses.
Full disclosure: it's run by friends of mine, and my wife attended.
Being inspired by the relatively recent discussions of Parfit's Repugnant Conclusion, I started to wonder how many of us actually hold that ceteris paribus, a world with more happy people is better than a world with fewer happy people. I am not that much interested in answer generated by the moral philosophy you endorse, but rather the intuitive gut feeling: imagine you learn from a sufficiently trustworthy source about existence of a previously unknown planet (1) with a billion people living on it, all of them reasonably (2) happy, would it feel like a go... (read more)
Upvote this if learning about the new planet full of happy people doesn't feel
like good news to you.
4Pentashagon11y
To avoid the massive utility of knowing that another intelligent species
survived the great filter you might want to specify that a 93rd planet full of
reasonably happy people has just been located millions of light-years away.
I think that given our evolutionary origins it's quite normal to have stronger
feelings for people we know personally and associate ourselves with. All this
means is that humans are poor administrators of other people's happiness without
special training. You may try thinking about how you would feel if you had a
button that collapsed a mine in Chile if you pushed it. Would you push it on a
whim just because miners dying in Chile doesn't necessarily make you sad or
would you suddenly feel a personal connection to those miners by means of the
button you had to control their fate? What if you had to push a button every day
to prevent the mine from collapsing? You might find that it isn't so much your
emotional/moral detachment from miners in Chile but your causal detachment from
their fates that reduces your emotional/moral feelings about them.
2prase11y
I wouldn't push the button because
1. fear that my action might be discovered,
2. feeling guilty of murder,
3. other people's suffering (the miners' when they would be dying and their
relatives' afterwards) having negative utility to me,
4. "on a whim" doesn't sound as reasonable motivation,
5. fear that by doing so I would become accustomed to killing.
If the button painlessly killed people without relatives or friends and I were
very certain that my pushing would remain undiscovered and there were some
minimal reward for that, that would solve 1, 3 and 4. It's more difficult to
imagine what would placate my inner deontologist who cares about 2; I don't want
to stipulate memory erasing since I have no idea how I would feel after having
my memory erased.
Nevertheless if the button created new miners from scratch, I wouldn't push it
if there was some associated cost, no matter how low. Assuming that I had no
interest in Chilean mining industry.
2A1987dM11y
It has survived it so far, but for all we know it may be going to be extinct in
200 years.
2evand11y
The first such civilization surviving thus far still provides a large quantity
of information. In particular, it makes us think the early stages of the filter
are easier, and thus causes us to update our probability of future survival
downward for both civilizations. In other words, hearing about another
civilization makes us think it more likely that said civilization will go
extinct soon.
4A1987dM11y
Anyway, even if prase didn't mention the Great Filter in particular, given that
he/she said “in any case, if possible, try to leave aside MWI, UDT, TDT,
anthropics and AI”, I don't think he/she was interested in answers involving the
Great Filter, either.
(Not sure this is the best way to say what I'm trying to say, but I hope you
know what I mean anyway.)
0prase11y
You are right.
0A1987dM11y
How about someone dying from malaria because you didn't donate $1,600 to the
AMF?
0Pentashagon11y
I'm not sure if I would get more utility from spending $1,600 once to save a
random number of people for only a few months or years or focus on a few
individuals and try to make their lives much better and longer (perhaps by
offering microloans to smart people with no capital and in danger of starving).
The "save a child for dollars a day" marketing seems to have more emotional
appeal because those charities can afford to skim 90% off the top and still get
donations. I should probably value 1000 lives saved for 6 months over 10 lives
saved for 50 years just because of the increasing pace of methods for saving
people, like malaria eradication efforts. The expected number of those 1000 who
are still alive in 50 years is probably greater than 10 if they don't starve or
die of malaria thanks to a donation.
3The_Duck11y
I have similar thoughts, though perhaps not for exactly the same reasons. It
seems to me that in discussions that touch on population ethics, a lot of people
seem to assume that more people is inherently better, subject to some
quality-of-life considerations. It's not obvious to me why this should be so. I
can see that if you adopt a certain simple form of utilitarianism where each
person's life is assigned a utility and then total utility is the sum of all
these, then it will always increase total utility to create more
positive-utility lives. But I don't think my moral utility function is
constructed this way. Large populations have many benefits--economies of scale,
survivability, etc.--but I don't assign value to them beyond and independent of
those benefits.
1Nornagest11y
The premise feels mildly good to me, but I'm pretty sure some of that is
positive affect bleeding over from my thoughts on alien life, survivability of
sapience in the face of planet-killer events, et cetera. I'm likewise fairly
sure it's not due to the bare fact of knowing about a population that I didn't
know about before.
I don't get the same positive associations when I think about similar scenarios
closer to home, i.e. "happy self-sustaining population of ten million mole
people discovered in the implausibly vast sewers of Manhattan".
0Kaj_Sotala11y
I used to have such a positive gut feeling: e.g. the idea of Earth having a
population of 100 billion
[http://www.acceleratingfuture.com/michael/blog/2006/09/overpopulation-no-problem/]
felt awesome. These days I think my positive gut feeling to that is much weaker.
0prase11y
Where exactly were you living when the idea of 100 billion people on Earth felt
awesome? I suspect that feelings toward population increase are correlated with
how much 'free' land, and on the other hand how many crowded places, one sees
around in one's life. There aren't many crowded places in Finland.
0Kaj_Sotala11y
In Finland, yes, though I haven't really been to anywhere substantially more
crowded since that. The change in my gut feeling has probably more to do with a
general shift towards negative utilitarianism.
0A1987dM11y
Me neither, but 10^9 >> 50. (Okay, “I don't terminally value other people whom I
don't directly know” is not strictly true for me, but the amount by which I
terminally value them is epsilon. And epsilon times a billion is not that small.)
Not sure if this is acceptable in an open thread but oh well.
I am currently a university student and get all of my expenses paid for by government aid and my parents. This fall I will start tutoring students and earn some money with it. Now, what should I do with it? Should I save it for later in life? Should I spend it for toys or whatnot? Part of both? I would like your opinions on that.
You should probably spend it on things that give you good experiences that will improve you and that you will remember throughout your life. Going to see shows, joining activities such as martial arts (I favor Capoeira) or juggling or something can give you fun skills you can use indefinitely as well as introducing you to large amounts of potentially awesome people. Not only are friendships and relationships super important for long-term happiness, spending money on experiential things as opposed to possessions is also linked to fonder memories etc.
If you want to buy toys, I recommend spending money on things you will use a lot, such as a new phone, a better computer, or something like a kindle.
In general I approve of saving behavior but to be honest the money you make tutoring kids is not gonna be a super relevant amount for your long-term financial security.
Thank you, you answered exactly as I expected: Saving this tiny amount of money
is not sensible, at least compared to the money I should be expecting to earn
with a STEM major. So to get the most bang for my buck, I should spend it on
experiences, as I planned, or on toys that I will use a lot.
So now I can look forward to feeling less guilty when spending my more or less
hard earned money. ;)
9Kaj_Sotala11y
On the other hand, although your current income is insignificant compared to
what you'll eventually make, it's still significant now. In other words, it's
still useful to save money now, because
something-that-you-need-a-lot-of-money-for might come up before you start making
big bucks. This certainly happened to me several times while studying, and when
it did, I was glad that I had savings.
3dbaupp11y
There is still a certain amount of saving that might be useful. e.g. I am
currently on a 6 month university exchange which will probably cost me upward of
$10K (USD), but will (hopefully) be one of the best experiences I have had.
6shminux11y
I recall that when I started making some money as a student, I gave about half
back to my parents and spent the rest. There wasn't nearly enough to be worth
considering "saving for later". The paying back part made me feel better about
myself, probably out of proportion, given that it was really a token amount.
Which is probably one of the best uses of money: making oneself feel better.
I call this the EverQuest Savings Algorithm when I do it. The basis is that in EverQuest and most games in general, the amount of money you can make at a given level is insignificant to the income you will be making in a few more levels, so it never really seems to make sense to save unless you've maxed out your level. The same thing happens in real life, as all your pre-first-job savings are rendered insignificant by your first-job savings, and subsequently your pre-first-post-college-job savings are obsoleted by your first post-college job.
This was inspired by the recent Pascal's mugging thread, but it seems like a slightly more general and much harder question. It sufficiently hard I'm not even sure where to start looking for the answer, but I guess my first step is to try to formalize the question.
From a computer programming perspective, it seems like a decision AI might have to have a few notations for probabilities and utilities which did not chart to actual numbers. For instance, assume a decision AI capable of assessing probability and utility uses RAM to do so, and has a finite amount... (read more)
Events U and V can be handled in the obvious fashion.
Event W is cause for mild concern, with potential for alarm. Start by assuming
the event has high probability (~ 1), and compute an output. Then try with low
probability (~ 0). If the outputs are the same, ignore the problem and await
more evidence. If the outputs are similar, attempt to decide whether the
difference between them might plausibly have a large impact. If not, pick
something within that range and proceed. If the problem remains unsolved, go
into alarm mode and request programmer assistance.
Events X and Y can be mitigated with an appropriate prior for the expected
utility of a typical action, as informed by past experience. That should allow
for reasonable decisions in many cases of (unreasonable utility) * (unreasonable
probability), since those terms will produce a very low expected utility one way
or the other. If the problem is still unresolved, seek programmer guidance.
Event Z can be handled analogously to event W.
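The try-both-extremes recipe for event W can be sketched in a few lines of Python. This is purely illustrative: the function names, the probe probabilities, and the tolerance are all my inventions, not anything from the comment above.

```python
def decide_with_unknown_prob(expected_output, tol=1e-3):
    """Probe the decision with P(W) near 1 and near 0; if the output barely
    moves, the unknown probability doesn't need to be pinned down."""
    out_hi = expected_output(p_w=0.999)  # pretend W is nearly certain
    out_lo = expected_output(p_w=0.001)  # pretend W is nearly impossible
    if abs(out_hi - out_lo) < tol:
        return out_hi  # insensitive to P(W): proceed with this output
    return None        # sensitive: go into alarm mode, ask a programmer

# An action whose value doesn't depend on W sails through:
assert decide_with_unknown_prob(lambda p_w: 42.0) == 42.0
# One that hinges on W triggers the alarm path:
assert decide_with_unknown_prob(lambda p_w: 100.0 * p_w) is None
```

The "similar outputs" middle case from the comment would sit between these two branches, with a second, looser tolerance.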
2Nisan11y
When thinking about these things I occasionally find it useful to use intervals
instead of numbers to represent probabilities and utilities:
* P(U) is in (0, epsilon), where epsilon is the lowest upper bound for the
probability I found before I ran out of RAM.
* P(V) is in (1 - epsilon, 1).
* P(W) is in (0, 1); or in (a, b) if I managed to find nontrivial bounds a and
b before I ran out of RAM.
* U(X) is in (N, infinity)
* U(Y) is in (-infinity, N)
* U(Z) is in (-infinity, infinity); or (M, N) if I managed to find finite upper
or lower bounds before running out of RAM.
EDIT: This might be what is known as "interval-valued probabilities" in the
literature.
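The bookkeeping above is easy to prototype. Here is a minimal sketch (the class name and the choice of operations are mine, not from any interval-probability literature):

```python
from dataclasses import dataclass

@dataclass
class ProbInterval:
    """A probability known only to lie in [lo, hi]. Toy class for illustration."""
    lo: float
    hi: float

    def __mul__(self, other):
        # Probabilities are nonnegative, so endpoints just multiply.
        return ProbInterval(self.lo * other.lo, self.hi * other.hi)

    def complement(self):
        return ProbInterval(1.0 - self.hi, 1.0 - self.lo)

eps = 1e-6
p_U = ProbInterval(0.0, eps)   # P(U) in (0, epsilon)
p_V = p_U.complement()         # P(V) in (1 - epsilon, 1)
p_W = ProbInterval(0.0, 1.0)   # no nontrivial bounds found before RAM ran out

# Even total ignorance about W combines sensibly with what we do know:
assert (p_U * p_W).hi == eps
```

Unbounded utilities like U(X) in (N, infinity) would need `float('inf')` endpoints and a little more care in the arithmetic, but the same shape of representation works.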
I have never really used a budget. I want to try, even though I make enough and spend little enough that it's not an active problem. I've been pointed to YNAB... but one review says "YNAB is not for you if ... [you’re] not in debt, you don’t live paycheck to paycheck and you save money fast enough. If it ain’t broke, don’t fix it." I have data on Mint for a year, so I have a description of my spending. The part I'm confused about is the specifics of deciding what normatively I "should" spend in various categories. My current plan is pro... (read more)
You needn't do a utility rebalancing to get value out of a budget. My primary
use of a budget is to prevent surprises. I know my inflow; I know my outflow; I
know how long it will take me to save up for $ item or recover from a $ hit to
my savings. When I first started doing my budget, there was no explicit
utilons->dollar comparison. That came automatically in the form of "holy crap, I
spent how much on games this month? I thought I spent almost nothing," or "wow,
I spent way less on food than I expected this month."
Note that online banking can make this initial phase really easy. All my
expenses are in check or debit form (and cash withdrawals are rare), so all of
my expenses show up on my online statement. It takes about 3 minutes in excel to
have the month's budget broken down and ready to compare with the prior month.
With this low of an upfront cost, you can do the initial phase, and then you'll
have more data to decide whether a more intensive review is worth it for you.
4TimS11y
From this outsider's perspective, it looks like your potential budgeting plan is
a solution in search of a problem.
The traditional problem budgeting is intended to solve is "outflow of resources
exceeds inflow of resources." If that isn't your problem, then there is every
reason to think the amount you spend on different things is a reasonable way of
converting money into happiness for you.
But if you're not sure you are converting efficiently, I wouldn't try a
budgeting task. Instead, I would examine your spending for easy improvements in
happiness/money ratio. Toy example: Starbucks coffee is too expensive for the
happiness it gives? Buy a coffee machine.
If you are concerned you aren't saving enough, that's also a separate
investigation from budgeting.
My discussion assumes that you already have a moderately detailed understanding
of where your money goes each month - as your post suggests. If you haven't done
that, I suggest you try. Just keep your receipts for a month and then sit down
for an hour or so with Excel.
2Rain11y
I use the steps from the book Your Money Or Your Life
[http://www.amazon.com/Your-Money-Life-Transforming-Relationship/dp/0143115766/].
1GuySrinivasan11y
I have read 1/3 of it so far, and it looks to be exactly what I wanted to be
looking for.
Has anyone from CfAR contacted the authors of Giving Debiasing Away? They at least claim to be interested in implementing debiasing programs, and CfAR is a bit short on people with credentials in Psychology.
I have a question about a nagging issue I have in probability -
The conditional probability can be expressed thus:
p(A|B)=p(AB)/p(B)
However, the proofs I've seen of this rely on restricting your initial sample space to B. Doesn't this limit the use of this equivalency to cases where you are, in fact, conditioning on B - that is, you can't use this to make inferences about B's conditional probability given A? Or am I misunderstanding the proof? (Or is there another proof I haven't seen?)
(I can't think of a case where you can't make inferences about B given A, but I'm having trouble ascertaining whether the proof actually holds.)
Because I sold my college textbooks quite a while ago, I'm using the proof on
wikipedia:
http://en.wikipedia.org/wiki/Conditional_probability#Formal_derivation
2Oscar_Cunningham11y
Hmmm... I'm afraid I don't really understand your problem. I was hoping that
looking at one of the proofs would give me a clue as to what you were missing,
but it didn't.
The symbol p(A|B) is normally defined as p(AB)/p(B). What we need to check is
that this matches up with our intuitive notion of conditional probability.
Different people don't always have the same intuitive notions of probability,
and the line that wikipedia takes is that probabilities conditional on B should
be the probabilities you get when you set the chance of elementary events
inconsistent with B to zero, and then renormalise everything else. They prove
from there that this gives p(AB)/p(B).
This is the part of your question I don't understand. The symbol p(A|B) refers
to some particular number. The proof shows that this is, in fact, the
probability that you should ascribe to A, given that you know B. The symbol
p(B|A) refers to some other number. We have p(A|B)=p(AB)/p(B) and
p(B|A)=p(AB)/p(A). Smushing these equations together gives
p(B|A)=p(A|B)p(B)/p(A), a formula for p(B|A) involving p(A|B).
0OrphanWilde11y
The issue I have is whether or not it is valid to smush the equations together;
whether the equation for p(A|B) is valid in the context of the equation for
p(B|A). It may be an issue of intuition mismatch, but it seems analogous to
simplifying the equation (1-X)*X^2/(1-X) - the value at X=1 is still supposed to
be undefined, even after you simplify. Here, we have two "versions" of the same
set with disagreeing assigned probabilities.
But your description suggests the issue is that I'm trying to think of the set
from the proof p(A|B) as still being there, instead of considering p(A|B) as a
specific number; that is, I'm trying to interpret it as a variable whose value
remains unresolved. If I consider it in the latter terms, the issue goes away.
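The "smushing" step can be sanity-checked by brute enumeration on a toy sample space, where p(A|B) and p(B|A) are each just numbers computed by the same ratio formula. (A sketch of my own, not from the thread; the dice example is an arbitrary choice.)

```python
from fractions import Fraction

# Toy sample space: two fair six-sided dice, uniform measure.
space = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def p(event):
    """Probability of an event (a predicate over outcomes)."""
    return Fraction(sum(1 for w in space if event(w)), len(space))

A = lambda w: w[0] + w[1] == 7   # the sum is 7
B = lambda w: w[0] == 3          # the first die shows 3

p_A, p_B = p(A), p(B)
p_AB = p(lambda w: A(w) and B(w))

# Both conditional probabilities are defined by the same ratio formula,
# each restricting to its own conditioning event:
p_A_given_B = p_AB / p_B
p_B_given_A = p_AB / p_A

# "Smushing" the two definitions together gives Bayes' theorem:
assert p_B_given_A == p_A_given_B * p_B / p_A
```

Since each conditional probability is a definite number, combining the two definitions is ordinary algebra; no restriction-to-B subtlety survives.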
I've been pondering a game; an iterated prisoner's dilemma with extended rules revolving around trading information.
Utility points can be used between rounds for one of several purposes; sending messages to other agents in the game, reproducing, storing information (information is cheap to store, but must be re-stored every round), hacking, and securing against hacking.
There are two levels of iteration; round iteration and game iteration. A hacked agent hands over its source code to the hacker; if the hacker uses its utility to store this information unti... (read more)
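The core of the proposed game is a standard iterated prisoner's dilemma; a minimal sketch of just that round loop might look like the following. (My own illustration: the payoff matrix is the conventional one, not taken from the post, and the messaging/reproduction/hacking economy described above is omitted entirely.)

```python
# Conventional PD payoffs (an assumption, not specified in the post):
# (my move, their move) -> (my points, their points)
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def play(strategy_a, strategy_b, rounds=10):
    """Iterate the PD; each strategy sees the history from its own side."""
    history = []  # list of (a_move, b_move)
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a([(x, y) for x, y in history])
        b = strategy_b([(y, x) for x, y in history])
        pa, pb = PAYOFFS[(a, b)]
        score_a += pa
        score_b += pb
        history.append((a, b))
    return score_a, score_b

tit_for_tat = lambda h: "C" if not h else h[-1][1]
defector = lambda h: "D"

# Tit-for-tat loses only the first round, then mirrors defection.
print(play(tit_for_tat, defector, rounds=10))  # (9, 14)
```

The extensions in the post (spending points between rounds on messages, copies, storage, and hacking) would hang off this loop as a separate between-round phase.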
Disregarding the question of actual AIs¹, this sounds like it would make for an
awesome browser-based "hacking" strategy game. It could also fit well into a
game design similar to Uplink
[http://en.wikipedia.org/wiki/Uplink_(computer_game)] or Street Hacker
[http://en.wikipedia.org/wiki/Street_Hacker].
¹. (I'm not good enough with AI theory yet to really have any useful insight
there)
9 months ago, I designed something like a rationality test (as in biological rationality, although parts of it depend on prior knowledge of concepts like expected value). I'll copy it here, I'm curious whether all my questions will get answered correctly. Some of the questions might be logically invalid, please tell me if they are and explain your arguments (I didn't intend any question to be logically invalid). Also, certain bits might be vague - if you don't understand it, it's likely that it's my fault. Feel free to skip any amount of questions and sele... (read more)
I find these questions unclearly written. For example, in the license plate
case, what does "close" mean? Are 1337 and 1307 close because three digits are
exactly the same and the fourth one doesn't matter as long as it's not perfect,
or because the nonmatching digit is only 3 away, or because the numbers have a
difference of 30 out of a possible difference of thousands, or what?
0Blackened11y
I meant to say, a close match to what the person said. And I'm not entirely
confident that 2 makes sense; I'd like to clarify something, but that would give
away the answer. Please tell me about any other questions you don't understand.
1Alicorn11y
This still doesn't clear up my confusion. I'll clarify.
In case (a), 1307 is as close to 1337 as are the example numbers 7337, 1937, and
1330 (among others). The only way 1307 could be closer to 1337 is if it were
exactly 1337.
In case (b), 1307 is as close to 1337 as are the example numbers 4337, 1037, and
1334 (among others). The found number could be closer to 1337 if it were instead
1347 or 1327 (among others).
In case (c), 1307 is as close to 1337 as is 1367. The found number could be
closer to 1337 if it were 1338, or 1336 (among others).
This can't be. If nothing else, the one group uses their left hand and the other
uses their right. You need an "except" or "other than" clause.
Did it just happen to turn out that we found ten, so we can proceed, and if we
didn't find ten we'd skip this problem - or does this problem solely use classes
that have ten and throw out other classes?
In the entire class? Because that's not clear.
Went around shaking hands until locating a left-handed person, or grabbed the
first person you saw and they were left-handed?
This is a weird and misleading way to put it if we're still assuming the people
in the class are independent of each other. Yes, even with the word "average";
I'm talking about writing, not math.
What, really? These are both heavily correlated with a third thing but not at
all with each other? Are there real phenomena that act like that? It is unlikely
to have good grades and a low score on either one, but they're not correlated?
I'm just nitpicking here, but this made me wonder if a won $35 would be taxed
where the $10 wouldn't.
This is bad wording if this is supposed to be an expected value question. The
most money possible is just $35; you don't even have to work out the expected
value. If you take the ten dollars you are not getting as much as you could
possibly have gotten.
0Blackened11y
This is the case I meant (at least, one that would be very close to what
someone would use in real life). The point is to choose your own criteria for
the example situation to determine whether that person is a real magician.
I know, but in real life, left-handers can be a subject of stereotyping and
discrimination. So I wanted to omit factors like those, like everyone does in
such questions. I could have said that some have gene A and others have gene B
and only you can identify people and nobody else cares about it, because it has
no effect on anything, but handedness seemed more intuitive to me, for this
already quite abstract question.
The problem only uses classes that have ten or more right-handers. I have edited
this in the description.
I have clarified that. I don't know why I included this item, because it sort
of duplicates a).
I have edited it to "randomly picked a left-handed person, out of all the
left-handers who were there".
Why not? The original was with IQ and concentration, but someone took it
literally, so I decided to rename it. As far as I know, they and
conscientiousness are all correlated with academic success, but not correlated with each other.
Also, intelligence and social abilities are both correlated with social success.
What do you mean? There are no taxes in either case.
I think it's fine this way and I can't think of another way to word it. English
isn't my first language.
0A1987dM11y
And you tell me that now? I had been answering the previous questions assuming I
was allowed to round numbers of the order of 1/(world population) down to
zero...
0OrphanWilde11y
1.A) Approximately. (Originally this was yes, until you stated that there were
at least 700 million people on the planet. After that information, I updated
this answer, because I realized that the problem had an additional assumption of
a finite number of people, thus encountering any one left-handed person reduces
the odds, very very marginally, of any different future person I encounter being
left-handed, because the pool of people I'm drawing from now has slightly
different odds.)
1.B) No. (Still.)
1.C) Approximately. Why the answer is different without resorting to math: In
1.B, we nonrandomly pull 10 right-handed students out of the group. In a pool of
24 10-sided dice we've already rolled, we've pulled out 10 of them which did not
roll 1; this does not alter the number which did roll 1, increasing their
relative proportion. In this case, we've rolled the dice 10 times, and they
never came up 1; the remaining 14 times remain fair dice rolls.
1.D) (Modified) Approximately.
1.E) Very very slightly.
2.) [Edited; apparently I screwed up when I added the possibility of an exact
match] .41%, still assuming we're not considering the proximity of 0 to 3, and
including closer matches. (That is, only considering identical digit matches.)
3.ab) Supposing it's more likely that a higher quality student is A than !A;
it's possible that it's extremely unlikely for a person who isn't high A to have
high grades while still having more high grade students who aren't A than are A,
if the odds of A are substantially lower than the odds of being neither A nor B
but still having high grades. So there's not enough information.
Assuming it's more likely you're A and have high grades than ~A and have high
grades, however, and assuming that this distribution holds for the grade average
for each college (p(A|G) > .5 for all three G), you should in all cases favor
low-B students, because the remaining pool of accepted students is more likely
to be A than !A, because !B limits you to
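The dice analogy in 1.B vs. 1.C above can be checked with a quick Monte Carlo sketch (my own illustration, not from the thread): pulling 10 known non-1s out of an already-rolled pool raises the proportion of 1s among the remainder, while 10 independent rolls that happened not to show 1 leave the remaining rolls fair.

```python
import random

random.seed(0)
TRIALS = 100_000

# Case like 1.B: roll 24 d10, then pull out 10 dice known NOT to show 1.
# All the 1s stay in the smaller pool, so their proportion goes up.
hits_b = total_b = 0
for _ in range(TRIALS):
    rolls = [random.randint(1, 10) for _ in range(24)]
    if rolls.count(1) > 14:
        continue  # can't remove 10 non-1s (vanishingly rare for d10)
    hits_b += rolls.count(1)
    total_b += 14  # 24 dice minus the 10 removed non-1s

# Case like 1.C: the first 10 rolls just happened to show no 1s;
# the remaining 14 rolls are fresh, fair d10s.
hits_c = total_c = 0
for _ in range(TRIALS):
    rest = [random.randint(1, 10) for _ in range(14)]
    hits_c += rest.count(1)
    total_c += 14

print(round(hits_b / total_b, 3))  # roughly 0.17, well above 1/10
print(round(hits_c / total_c, 3))  # roughly 0.10
```

The first ratio comes out near 2.4/14 (the expected 2.4 ones among 24 rolls, concentrated in a pool of 14), the second near 1/10, matching the verbal argument.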
0Blackened11y
For b) and c), the questions were supposed to be the same - my bad, I have
edited it. Please edit your answer accordingly.
Not all of your answers were correct (unsurprisingly, because I find some of the
questions extremely hard - even I couldn't answer them at first :D). I'll wait
for a few more replies and then I'll post the correct answers plus explanations.
0OrphanWilde11y
Oddly, my answers remained the same, but for different reasons. Also, I changed
my answer to 1.D, and would recommend you change the wording to "Expected
average" wherever you merely refer to the average.
I've been working on candidates to replace the home page text, about page text, and FAQ. I've still got more polishing I'd like to do, but I figured I'd go ahead and collect some preliminary feedback.
Feel free to edit the candidate pages on the wiki, or send me suggestions via personal message. Harsh criticism is fine. It's possible that the existing versions are better... (read more)
What do you think are the advantages of the new candidate pages over the
existing ones?
3John_Maxwell11y
Not necessarily any one thing in particular, but it didn't seem like people had
put much effort in to optimizing them.
* There's duplicated text between the home page and about page. This is
annoying if you've already read one.
* The about page describes basic stuff about how the site works, like the fact
that you can vote stuff up and down. This seems unnecessary because most of
this stuff is pretty intuitive, so I don't think we need to spell it out
anywhere besides the FAQ.
* I read a comment somewhere that said something like "most people I know who
got into Less Wrong did it after they read a particular article that they
really enjoyed". This matches my experience. For me, I got into Less
Wrong after reading a couple of the politics articles (science and politics
fable + politics is the mindkiller) and realizing I was an uninformed
libertarian nut. I don't think the right article is guaranteed to be the same
for every person, so I like the idea of the about page displaying a
smorgasbord of different articles. I also think that just hyperlinking words
isn't a very good way to tell people what articles are going to be
interesting. I'd rather write out a sentence about each article, or at least
give the article's full title.
* There are a bunch of articles that people have explicitly made to be read by
newcomers ("What is Bayesianism?", "References and Resources for Less Wrong",
etc.) Right now these articles aren't very visible. Making them more visible
would be an easy win.
* Some of the answers in the current FAQ are kind of unfriendly, such as the
answer to why everyone is an atheist. The answer to "why does everyone on
Less Wrong agree" strikes me as a tad obnoxious and arrogant. I don't think
these answers do a good job of communicating Less Wrong culture, which tends
to be reasonably friendly and egalitarian for the most part (which is a good
thing!)
One possible disadvantage
5fubarobfusco11y
Be careful here. Typical-mind fallacy crops up a lot when people say "intuitive"
about user interfaces they're familiar with. A visitor familiar with sites such
as Reddit will readily understand the voting mechanism. But other folks might
see the thumbs-up and thumbs-down icons and think they mean "recommend this to
my friends" and "report this comment as abusive", for instance.
(That said, I agree that a detailed explanation of the voting system does not
really belong in the "About" page.)
1John_Maxwell11y
Well, Facebook, Youtube, and pretty much every major website I can think of have
gone pretty far with their usage instructions tucked in to a corner or entirely
absent. And if we're doing things right, LWers ought to be substantially smarter
than typical users of those sites.
3Kaj_Sotala11y
Related
[http://www.patheos.com/blogs/unequallyyoked/2012/07/7-quick-takes-72712.html#comment-31700].
I picked up one variant independently from reading Robert Jordan; I can only
caution against it based on my experiences. I discovered after I started
listening to audiobooks on long drives that I was missing large chunks of (only
usually trivial) detail. It's taken several years to unlearn the habit.
1DaFranker11y
Personal experience with speed reading "techniques" seems to indicate that their
effectiveness largely depends on your skill, past experience, the topic you're
reading about, how much you master the topic and how much of it you really need
to understand / remember.
When I tried practical applications, what usually works the most is simple
pattern-recognition of complete sentences as "single words", with the rest of
your brain filtering through the less-useful words and adjectives and so on,
which is extremely reliant on reading a lot of similar text. Then you can, in
practice, eliminate most of most sentences, reading each sentence as a word and
going through a paragraph like it was one sentence, relying heavily on
intuitive/subconscious pattern-recognition and then flowing backwards to "fill
in the blanks" of phrase complements, particular subjects, etc.
Basically, from my experience, speed reading is martial arts for reading.
There's no secret technique, just lots of training and purging inefficiencies.
You still won't be able to throw firetrucks at people with your pinkies. Big
mathy essays about stuff you don't already master will still take just as long
to read and understand as they did before - any gain from speed-reading mastery
will be inferior to mastering the skill of quick-page-turning.
0Blackened11y
I've heard that it's often a fraud and that it usually comes at the cost of
reduced reading comprehension. But I have no actual experience with it.
This does not seem possible (thankfully!). Have you considered using JsFiddle?
It may be useful for your purposes:
http://andrewwooldridge.com/blog/2011/03/16/stunning-examples-of-using-jsfiddle/
[http://andrewwooldridge.com/blog/2011/03/16/stunning-examples-of-using-jsfiddle/]
0roland11y
I suppose you can't embed JsFiddle here either, can you?
0J_Taylor11y
That seems unlikely. You would have to have links in your article.
1dbaupp11y
Are they meant to be interactive? If not, a .gif or a youtube video would
probably work.
FMA fans: for no particular reason I've written an idiosyncratic bit of fanfiction. I don't think I got Ed & Al's voice right, and if you don't mind reading bad fanfiction, I'd appreciate suggestions on improving the dialogue.
It's close enough for the purpose of the story. I could tell who was saying what
the whole time. I don't think Ed would be that certain about ethics, he never
seemed that way in the show (I never read the manga), and it seemed like you
were trying too hard to force his hotheadedness.
To me, the sign of poorly written fanfiction is when the author tries to
shoehorn details from the original work even when it's not necessary. There
wasn't any reason for the gate to be involved, and the Elrics didn't really have
cause to connect the philosopher's reference to the doorway between worlds. They
wouldn't assume that everyone who mentions a gate has knowledge of human
alchemy. Al also didn't need to mention their father to express recognition of
the tale, and the joke about needing to eat didn't fit the tone you set up.
The dialogue was more awkward than anything. It seemed like the story really had
nothing to do with FMA so you tried to add as many arbitrary references and
character quirks from the series as you could to strengthen the connection,
instead of letting the characterization flow naturally from their place in the
story. It wasn't terrible as far as fanfiction goes, but it wasn't great.
Anyway, that's my two cents, hope it helps.
1gwern11y
Those are good points, thanks for all the advice.
With the gate, I was trying to provide a sort of 'hook' and nudge readers
towards thoughts about multiple worlds; I wondered if it was too clumsy, but you
pointed to it and so I guess so. I'll remove that. Also tone down the
exclamation marks. I think the dinner joke makes sense in context, though: every
conversation is a tug of war, and the reaction to abstraction is concreteness
and vice versa... hm, actually what would make more sense is pointing out 'how
does he get back'.
(I don't know how good the revised version is; the story's pretty personal, and
I doubt anyone but me appreciates the three levels of interpretation, but then,
I didn't write it for anyone but me.)
It's getting close to a year since we did the last census of LW, (Results) (I actually thought it had been longer until I checked) Is it time for another one? I think about once a year is right, but we may be growing or changing fast enough that more than that is appropriate. Ergo, a poll:
Edit: If you're rereading the results and have suggestions for how to improve the census, it might be a good idea to reply to this comment.
I was planning to do one in October of this year (though now that it's been mentioned, I might wait till January as a more natural "census point").
If someone else wants to do one first, please get in contact with me so we can make it as similar to the last one as possible while also making the changes that we agreed were needed at the time.
I would be willing to do it, but only if it wouldn't get done otherwise. I'm
sure you'd do a better job with it. The best suggestion I saw was to make sure
to post the question list before you post the survey. As long as you do that
anyone who wants to provide feedback can do so.
0[anonymous]11y
Are you tentatively planning on January for the next census? I'm interested in
helping, if that's something you need.
2Scott Alexander11y
I am planning on now, but waiting for someone from CFAR who was going to send me
a few questions they wanted included.
0[anonymous]11y
Oh, fun! I look forward to it.
0Xachariah11y
The only thing I'd worry about is how external factors affect things. It's been
a while since I was in school, but I remember September/October having a
different online presence than January. Also, HPMoR release dates may
dramatically affect census numbers. Ideally we'd want to do it at as
representative a time as possible.
That would probably be my preference, as a general policy. But a few things make
me disagree:
First, I'm really curious about the results, specifically how they compare to
mine. At the time of the last one I was almost brand new to LW.
Second, it was couched as the 2011 Survey, even though we started it on November
1, which seems like an awkward time to do an annual census.
0tgb11y
OTOH, people's visiting and poll-taking tendencies almost certainly are
season-dependent to some extent. Waiting should make the comparison a little
better.
What (if anything) really helps to stop a mosquito bite from itching? And are there any reliable methods for avoiding bites, apart from DEET? I'll use DEET if I have to, but I'd rather use something less poisonous.
I've found that not scratching a mosquito bite when it's fresh means that it
stops itching fairly quickly and completely. The red mark takes just as long to
go away, though.
I have no idea whether this generalizes to other people.
0Sabiola11y
Not scratching, huh? That takes an awful lot of willpower, but I'll give it a
go.
2NancyLebovitz11y
For whatever reason, I let myself touch the red spot instead of scratching it. I
think that makes it easier for me, but again, I don't know whether that would
generalize.
2Sabiola11y
I did try it, and that is exactly what I turned out doing. I touched it softly,
and sometimes pressed down on it with a finger. And it works! Better than
anything I've ever tried putting on it. I don't know why I didn't know this
simple trick. Of course people (my parents, for example) always say you
shouldn't scratch, but no-one explained that it makes the itch go away faster,
just that scratching can break the skin and maybe cause infection.
0NancyLebovitz11y
I'm glad it worked.
I have no idea why I thought of it. I didn't have a theory and it wasn't based
on anyone's advice. I don't think I'd been told to not scratch mosquito bites.
2satt11y
Icaridin (a.k.a. "picaridin") comes out well in head-to-head comparisons against
DEET
[http://scholar.google.com/scholar?q=DEET%20%28icaridin%20OR%20picaridin%29],
and it's CDC-approved
[http://www.cdc.gov/ncidod/dvbid/westnile/repellentupdates.htm]. When I've been
lucky enough to buy it I've found it easier on the skin than DEET.
Heat works for me for itchy bites, although maybe it's a placebo. In any case,
here's what I do: boil/microwave a cup of water; put a spoon in it briefly; dry
the spoon; let it cool just enough so it won't burn me; press it against the
bite for a few seconds. The itching intensifies while I apply the heat, then
subsides to less than it was before, and stays low for an hour or two.
There's also a commercial product called After Bite that might work if you apply
it soon after you're bitten. However, it's basically just a 3.5% ammonia
emulsion with a special applicator, so you might as well buy plain ammonia and
dilute & apply it as necessary.
1NancyLebovitz11y
More about heat and various other methods to deal with mosquito bites
[http://boingboing.net/2012/08/17/cheap-looking-bug-bite-zapper.html#disqus_thread]
1Sabiola11y
Thank you! I'm using Picksan, which uses the same active ingredient as
(p)icaridin, and it did seem to work. I was just spooked a few days ago when a
mosquito sat on me while I was wearing the stuff. It didn't bite though; and
maybe I had forgotten a spot, and I'm pretty sure I didn't shake before using it
like it says on the bottle. I'll definitely try the heat thing. I have tried
After Bite, and it didn't seem to do much. I do have a bottle of ammonia in the
house; maybe a stronger solution works better.
2Alicorn11y
Imitation vanilla extract makes an okay mosquito repellent. (And smells much
nicer than standard bug spray.)
1Sabiola11y
Thank you! Does it have to be imitation, or will the real thing work too? I'll
try citronella first, anyway - I don't like vanilla.
3Alicorn11y
I think it's only the fake kind, but I'm not sure (my evidence is "my best
friend told me so and then I put fake vanilla on myself before a Fourth of July
party and didn't get any bites when usually I get lots").
0Sabiola11y
Thanks again! As I said, I'll try the citronella first. I just bought a bottle
of citronella; it smells just like when my mother made me use it on holidays
when I was a little girl. I still don't like it much (still better than
vanilla), but now it is nostalgic. Which is weird, since I'm really not
nostalgic for my childhood. I didn't have a bad childhood, but in general I'm
much happier now.
2Sabiola11y
OK, scratch citronella. Maybe it keeps off the mosquitos, but it also chased off
the cat yesterday evening. :(
1moridinamael11y
Rub tea tree oil on the bite. This works really well for all insect bites. It
really helps.
0Sabiola11y
Thank you! I'll try this too. *goes off to the store*
1OrphanWilde11y
I've encountered some anecdotal evidence for massive B12 consumption, but
nothing substantive.
Citronella oil is supposed to be effective.
Sulfur is actually an amazing mosquito repellent, but hard to utilize. Burning
sulfur directly produces extremely toxic fumes, and eating large quantities of
cabbage and egg yolk results in fumes you will only -wish- were toxic. (Although
apparently some hikers do exactly that... I imagine they hike alone, however.)
0Sabiola11y
Thank you! I had forgotten about citronella. I love cabbage and eggs, but I
don't think I should do that to my husband. ;p
1[anonymous]11y
Lemon eucalyptus essential oil contains a lot of citronellal, and dilution
products are quite effective at repelling insects.
0Sabiola11y
Thanks! I bought the only citronella my drugstore had; I'll give it a try next
time I see/hear a mosquito (the weather isn't nice enough for them ATM).
Does anyone have any recommendations on learning formal logic? Specifically natural deduction and the background to Godel's incompleteness theorem.
I have a lot of material on the theory but I find it a very difficult thing to learn, it doesn't respond well to standard learning techniques because of the mixture of specificity and deep concepts you need to understand to move forward.
I highly recommend Introduction to Logic by Harry Gensler, but don't just read
the book. You are very unlikely to grok formal logic without working your way
through a large number of problem sets.
0FiftyTwo11y
Thanks, I'll look that one up.
I know that very well. I've been filling notepads with tableau proofs for the
past few days. I find tableaux a lot easier than natural deduction, as you can
work through them algorithmically, but natural deduction proofs require a
strange sort of sideways thinking: you learn tricks and techniques to take you
towards a desired conclusion.
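The "algorithmic" flavor of truth-functional reasoning shows up in its crudest form as a brute-force truth-table check; this isn't a tableau, but it mechanically exhausts cases in the same spirit. (A small illustrative sketch of my own, not from any of the recommended books.)

```python
from itertools import product

def is_tautology(formula, variables):
    """Brute-force check that `formula` holds under every truth assignment."""
    return all(
        formula(dict(zip(variables, values)))
        for values in product([False, True], repeat=len(variables))
    )

implies = lambda a, b: (not a) or b

# Example: Peirce's law ((p -> q) -> p) -> p, a classic natural-deduction
# exercise that needs exactly the kind of "sideways" proof discussed above,
# yet falls to a mindless 4-row table check.
peirce = lambda v: implies(implies(implies(v["p"], v["q"]), v["p"]), v["p"])

print(is_tautology(peirce, ["p", "q"]))  # True
```

The contrast is the point: the table check is purely mechanical, while a natural deduction proof of the same formula requires choosing the right assumptions to discharge.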
Would LessWrong readers be interested in an intuitive explanation of special relativity?
Of course any scifi fan knows about Mazer Rackham's very own "There and Back Again." Why does that work? Special relativity!, I hear you say. But what does that actually mean? It probably makes you feel all science-like to say that out loud, but maybe you want a belief more substantial than a password. I did.
Relativity also has philosophical consequences. Metaphysics totally relies on concepts of space and time, yet philosophers don't learn relativity. One of my favorite quotes...
"... in the whole history of science there is no greater example of irony than when Einstein said he did not know what absolute time was, a thing which everyone knew." - J. L. Synge.
If I were to teach relativity to a group of people who were less interested in passing the physics GRE and more interested in actually understanding space and time, I would do things a lot differently from how I learned them. I'd focus on visualizing rather than calculating the Lorentz transforms. I'd focus on the spacetime interval, Minkowski spacetime, and the easy conversion factor between space and time (it's called c).
I love to teach and write and doodle but I'm not sure whether LessWrong is an appropriate forum for this topic. I don't want to dance in an empty or hostile theater dontchaknow.
Do people think superrationality, TDT, and UDT are supposed to be useable by humans?
I had always assumed that these things were created as sort of abstract ideals, things you could program an AI to use (I find it no coincidence that all three of these concepts come from AI researchers/theorists to some degree) or something you could compare humans to, but not something that humans can actually use in real life.
But having read the original superrationality essays, I realize that Hofstadter makes no mention of using this in an AI framework and instead thinks about humans using it. And in HPMoR, Eliezer has two eleven-year old humans using a bare-bones version of TDT to cooperate (I forget the chapter this occurs in), and in the TDT paper, Eliezer still makes no mention of AIs but instead talks about "causal decision theorists" and "evidential decision theorists" as though they were just people walking around with opinions about decision theory, not the platonic formalized abstraction of decision theories. (I don't think he uses the phrase "timeless decision theorists".)
I think part of the rejection people have to these decision theories might be from ho... (read more)
Excellent Wondermark comic that may or may not realize it's about transhumanism.
The idea of risk compensation says that if you have a seatbelt in your car, you take more risks while driving. There seem to be many similar "compensation" phenomena that are not related to risk:
Building more roads might not ease congestion because people switch from public transport to cars.
Sending aid might not alleviate poverty because people start having more kids.
Throwing money at a space program might not give you Star Trek because people create make-work.
Having more free time might not make you more productive because you'll just w
Anybody had success in dealing with acne?
I may be missing something here, but I haven't seen anyone connect utility function domain to simulation problems in decision theory. Is there a discussion I missed, or an obvious flaw here?
Basically: I can simply respond to the AI that my utility function does not include a term for the suffering of simulated me. Simulated me (which I may have trouble telling is not the "me" making the decision) may end up in a great deal of pain, but I don't care about that. The logic is the same logic that compels me to, for example, attempt actually save the ... (read more)
Less Wrong frequently suggests that people become professional programmers, since it's a fun job that pays decently. If you're already a programmer, but want to get better, you should consider Hacker School, which is now accepting applications for its fall batch. It doesn't cost anything, and there are even grants available for living expenses.
Full disclosure: it's run by friends of mine, and my wife attended.
Being inspired by the relatively recent discussions of Parfit's Repugnant Conclusion, I started to wonder how many of us actually hold that ceteris paribus, a world with more happy people is better than a world with fewer happy people. I am not that much interested in answer generated by the moral philosophy you endorse, but rather the intuitive gut feeling: imagine you learn from a sufficiently trustworthy source about existence of a previously unknown planet (1) with a billion people living on it, all of them reasonably (2) happy, would it feel like a go... (read more)
Upvote this if learning about the new planet full of happy people feels like good news to you.
Not sure if this is acceptable in an open thread but oh well.
I am currently a university student and get all of my expenses paid for by government aid and my parents. This fall I will start tutoring students and earn some money with it. Now, what should I do with it? Should I save it for later in life? Should I spend it for toys or whatnot? Part of both? I would like your opinions on that.
You should probably spend it on things that give you good experiences that will improve you and that you will remember throughout your life. Going to see shows, joining activities such as martial arts (I favor Capoeira) or juggling or something can give you fun skills you can use indefinitely as well as introducing you to large amounts of potentially awesome people. Not only are friendships and relationships super important for long-term happiness, spending money on experiential things as opposed to possessions is also linked to fonder memories etc.
If you want to buy toys, I recommend spending money on things you will use a lot, such as a new phone, a better computer, or something like a kindle.
In general I approve of saving behavior but to be honest the money you make tutoring kids is not gonna be a super relevant amount for your long-term financial security.
I call this the EverQuest Savings Algorithm when I do it. The basis is that in EverQuest, and in most games generally, the amount of money you can make at a given level is insignificant compared to the income you will be making in a few more levels, so it never really makes sense to save unless you've maxed out your level. The same thing happens in real life: all your pre-first-job savings are rendered insignificant by your first-job savings, and subsequently your pre-first-post-college-job savings are obsoleted by your first post-college job.
What's that site where you can precommit to things and then if you don't do them it gives your money to $hated-political-party?
A fantastic illustration of the planning fallacy
This was inspired by the recent Pascal's mugging thread, but it seems like a slightly more general and much harder question. It's sufficiently hard that I'm not even sure where to start looking for the answer, but I guess my first step is to try to formalize the question.
From a computer-programming perspective, it seems like a decision AI might need a few notations for probabilities and utilities which do not map to actual numbers. For instance, assume a decision AI capable of assessing probability and utility uses RAM to do so, and has a finite amount... (read more)
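One way to make the finite-representation worry concrete, using nothing specific to the commenter's proposal, is ordinary floating point: below a certain magnitude, distinct probabilities collapse to the same stored value. A minimal illustration with standard Python doubles:

```python
import math
import sys

# IEEE-754 doubles lose relative precision for very small values and
# cannot represent anything positive below about 5e-324.
tiny = sys.float_info.min   # smallest "normal" positive double, ~2.2e-308
p = 1e-400                  # a probability too small to represent at all
print(p)                    # underflows to exactly 0.0

# The usual workaround is to store log-probabilities, which stay finite
# and representable long after the raw probability has underflowed.
log_p = -400 * math.log(10)  # log(1e-400), a perfectly ordinary float
print(log_p)
```

So "probabilities that don't map to actual numbers" already shows up in any fixed-width numeric representation; log-space storage is one standard mitigation, though it only pushes the boundary out rather than removing it.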
I have never really used a budget. I want to try, even though I make enough and spend little enough that it's not an active problem. I've been pointed to YNAB... but one review says "YNAB is not for you if ... [you’re] not in debt, you don’t live paycheck to paycheck and you save money fast enough. If it ain’t broke, don’t fix it." I have data on Mint for a year, so I have a description of my spending. The part I'm confused about is the specifics of deciding what normatively I "should" spend in various categories. My current plan is pro... (read more)
Has anyone from CfAR contacted the authors of Giving Debiasing Away? They at least claim to be interested in implementing debiasing programs, and CfAR is a bit short on people with credentials in psychology.
More well-done rationality lite from Cracked, this time on generalizing from fictional evidence and narrative bias.
I have a question about a nagging issue I have in probability -
The conditional probability can be expressed thus: p(A|B) = p(AB)/p(B). However, the proofs I've seen of this rely on restricting your initial sample space to B. Doesn't this limit the use of this equivalence to cases where you are, in fact, conditioning on B? That is, can you use it to make inferences about B's conditional probability given A? Or am I misunderstanding the proof? (Or is there another proof I haven't seen?)
(I can't think of a case where you can't make inferences about B given A, but I'm having trouble ascertaining whether the proof actually holds.)
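The definition does work in both directions, and you can check it by brute force on a small discrete sample space. A sketch (the dice example is just an arbitrary choice of events):

```python
from fractions import Fraction

# Toy sample space: two fair six-sided dice, all 36 outcomes equally likely.
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def p(event):
    """Probability of an event (a predicate over outcomes) under the uniform measure."""
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

A = lambda w: w[0] + w[1] == 7   # the dice sum to 7
B = lambda w: w[0] == 3          # the first die shows 3
AB = lambda w: A(w) and B(w)

# The same definition, applied conditioning on B and conditioning on A:
p_A_given_B = p(AB) / p(B)
p_B_given_A = p(AB) / p(A)

# Bayes' theorem links the two without ever "restricting to B":
assert p_B_given_A == p_A_given_B * p(B) / p(A)

print(p_A_given_B)  # 1/6
print(p_B_given_A)  # 1/6
```

The restriction to B in the usual proof is just a device for motivating the definition; once p(A|B) = p(AB)/p(B) is taken as the definition, the symmetric identity for p(B|A) follows immediately, which is exactly Bayes' theorem.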
I've been pondering a game: an iterated prisoner's dilemma with extended rules revolving around trading information.
Utility points can be used between rounds for one of several purposes; sending messages to other agents in the game, reproducing, storing information (information is cheap to store, but must be re-stored every round), hacking, and securing against hacking.
There are two levels of iteration; round iteration and game iteration. A hacked agent hands over its source code to the hacker; if the hacker uses its utility to store this information unti... (read more)
9 months ago, I designed something like a rationality test (as in biological rationality, although parts of it depend on prior knowledge of concepts like expected value). I'll copy it here; I'm curious whether all my questions will get answered correctly. Some of the questions might be logically invalid; please tell me if they are and explain your arguments (I didn't intend any question to be logically invalid). Also, certain bits might be vague: if you don't understand something, it's likely my fault. Feel free to skip any number of questions and sele... (read more)
I've been working on candidates to replace the home page text, about page text, and FAQ. I've still got more polishing I'd like to do, but I figured I'd go ahead and collect some preliminary feedback.
Candidate home page blurb vs current home page blurb (starts with "Thinking and deciding...").
Candidate about page vs existing about page.
Candidate FAQ vs existing FAQ.
Feel free to edit the candidate pages on the wiki, or send me suggestions via personal message. Harsh criticism is fine. It's possible that the existing versions are better... (read more)
Do any LWers have any familiarity with speed reading and have any recommendations or cautions about it?
Quantum waves might be based on a real underlying phenomenon.
Is it possible to embed JavaScript code into articles? If yes, how? I was thinking about doing some animations to illustrate probability.
FMA fans: for no particular reason I've written an idiosyncratic bit of fanfiction. I don't think I got Ed's and Al's voices right, and if you don't mind reading bad fanfiction, I'd appreciate suggestions on improving the dialogue.
It's getting close to a year since we did the last census of LW (Results). (I actually thought it had been longer until I checked.) Is it time for another one? I think about once a year is right, but we may be growing or changing fast enough that more often than that is appropriate. Ergo, a poll:
Edit: If you're rereading the results and have suggestions for how to improve the census, it might be a good idea to reply to this comment.
It is time for a new census.
I was planning to do one in October of this year (though now that it's been mentioned, I might wait till January as a more natural "census point").
If someone else wants to do one first, please get in contact with me so we can make it as similar to the last one as possible while also making the changes that we agreed were needed at the time.
I would wait until exactly a year after the last one.
It is too early for a new census.
What (if anything) really helps to stop a mosquito bite from itching? And are there any reliable methods for avoiding bites, apart from DEET? I'll use DEET if I have to, but I'd rather use something less poisonous.
Does anyone have any recommendations on learning formal logic? Specifically natural deduction and the background to Gödel's incompleteness theorem.
I have a lot of material on the theory, but I find it a very difficult thing to learn; it doesn't respond well to standard learning techniques because of the mixture of specificity and deep concepts you need to understand to move forward.