Luke wrote a detailed description of his approach to beating procrastination (here if you missed it).
Does anyone know if he's ever given an update anywhere as to whether or not this same algorithm works for him to this day? He seems to be very prolific and I'm curious about whether his view on procrastination has changed at all.
I have no idea. The selection isn't the best one ever (I haven't even
heard of some of them), but it can be improved for next time based on this time.
I wrote a logic puzzle, which you may have seen on my blog. It has gotten a lot of praise, and I think it is a really interesting puzzle.
Imagine the following two-player game. Alice secretly fills 3 rooms with apples. She has an infinite supply of apples and infinitely large rooms, so each room can have any non-negative integer number of apples. She must put a different number of apples in each room. Bob will then open the doors to the rooms in any order he chooses. After opening each door and counting the apples, but before he opens the next door, Bob must accept or reject that room. Bob must accept exactly two rooms and reject exactly one room. Bob loves apples, but hates regret. Bob wins the game if the total number of apples in the two rooms he accepts is as large as possible. Equivalently, Bob wins if the single room he rejects has the fewest apples. Alice wins if Bob loses.
Which of the two players has the advantage in this game?
This puzzle is a lot more interesting than it looks at first, and the solution can be seen here.
I would also like to see some of your favorite logic puzzles. If you have any puzzles that you really like, please comment and share.
To make sure I understand this correctly: Bob cares about winning, and getting
no apples is as good as 3^^^3 apples, so long as he rejects the room with the
fewest, right?
0Scott Garrabrant10y
That is correct.
3solipsist10y
A long one-lane, no passing highway has N cars. Each driver prefers to drive at
a different speed. They will each drive at that preferred speed if they can, and
will tailgate if they can't. The highway ends up with clumps of tailgaters led
by slow drivers. What is the expected number of clumps?
4Scott Garrabrant10y
My Answer
2solipsist10y
You got it.
0Scott Garrabrant10y
I am not sure what the distribution is.
5gjm10y
The distribution; see e.g. here.
2Scott Garrabrant10y
Ah, yes, thank you.
0mwengler10y
Coscott's solution seems incorrect for N=3. Label 3 cars: 1 is fastest, 2 is 2nd
fastest, 3 is slowest. There are 6 possible orderings for the cars on the road.
These are shown with the cars appropriately clumped and the number of clumps
associated with each ordering:
1 2 3 .. 3 clumps
1 32 .. 2 clumps
21 3 .. 2 clumps
2 31 .. 2 clumps
312 .. 1 clump
321 .. 1 clump
The mean number of clumps is 11/6, but Coscott's
solution gives 10/6.
Fix?
2Scott Garrabrant10y
My solution gives 11/6
1mwengler10y
Dang you are right.
0mwengler10y
Coscott's solution also wrong for N=4, actual solution is a mean of 2, Coscott's
gives 25/12.
2Scott Garrabrant10y
4 with prob 1/24, 3 with prob 6/24, 2 with prob 11/24, 1 with prob 6/24
Mean of 25/12
How did you get 2?
0mwengler10y
Must have counted wrong. Counted again and you are right.
Great problems though. I cannot figure out how to conclude it is the solution
you got. Do you do it by induction? I think I could probably get the answer by
induction, but haven't bothered trying.
5Scott Garrabrant10y
Take the kth car. It is at the start of a clump if it is the slowest of the
first k cars. The kth car is therefore at the start of a clump with
probability 1/k. The expected number of clumps is the sum over all cars of the
probability that that car is at the front of a clump.
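A minimal simulation sketch of this argument (illustrative only; the function and variable names are mine, not from the thread). A car heads a clump exactly when it is a running minimum of the speeds from the front, so the simulated mean should match the harmonic number 1 + 1/2 + ... + 1/n:

    import random

    def count_clumps(speeds):
        # A car starts a clump iff it is slower than every car ahead of it,
        # i.e. it is a running minimum of the front-to-back speed sequence.
        clumps = 0
        slowest_so_far = float("inf")
        for s in speeds:
            if s < slowest_so_far:
                clumps += 1
                slowest_so_far = s
        return clumps

    def mean_clumps(n, trials=200000):
        total = sum(count_clumps(random.sample(range(n), n)) for _ in range(trials))
        return total / trials

    n = 4
    print(mean_clumps(n))                         # ~2.083 by simulation
    print(sum(1.0 / k for k in range(1, n + 1)))  # 1 + 1/2 + 1/3 + 1/4 = 25/12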
0[anonymous]10y
Hurray for the linearity of expected value!
2Scott Garrabrant10y
Imagine that you have a collection of very weird dice. For every prime between 1
and 1000, you have a fair die with that many sides. Your goal is to generate a
uniform random integer from 1 to 1001 inclusive.
For example, using only the 2 sided die, you can roll it 10 times to get a
number from 1 to 1024. If this result is less than or equal to 1001, take that
as your result. Otherwise, start over.
This algorithm uses on average 10240/1001 = 10.228770... rolls. What is the
smallest expected number of die rolls needed to complete this task?
When you know the right answer, you will probably be able to prove it.
Solution
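For concreteness, here is a sketch of the baseline algorithm described above (the ten-flip rejection sampler), not of the intended optimal solution; the function name is my own:

    import random

    def uniform_1_to_1001_via_coin():
        # Ten flips of the 2-sided die give a uniform number in 0..1023;
        # accept if it maps into 1..1001, otherwise start over.
        rolls = 0
        while True:
            value = 0
            for _ in range(10):
                value = 2 * value + random.randint(0, 1)  # one flip per bit
                rolls += 1
            if value < 1001:
                return value + 1, rolls  # result in 1..1001, plus rolls used

Averaging the returned roll counts over many calls approaches 10240/1001 = 10.2288..., matching the figure above.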
2Strilanc10y
If you care about more than the first roll, so you want to make lots and lots of
uniform random numbers in 1, 1001, then the best die is (rot13'd) gur ynetrfg
cevzr va enatr orpnhfr vg tvirf lbh gur zbfg ragebcl cre ebyy. Lbh arire qvfpneq
erfhygf, fvapr gung jbhyq or guebjvat njnl ragebcl, naq vafgrnq hfr jung vf
rffragvnyyl nevguzrgvp pbqvat.
Onfvpnyyl, pbafvqre lbhe ebyyf gb or qvtvgf nsgre gur qrpvzny cbvag va onfr C.
Abgvpr gung, tvira gung lbh pbhyq ebyy nyy 0f be nyy (C-1)f sebz urer, gur
ahzore vf pbafgenvarq gb n cnegvphyne enatr. Abj ybbx ng onfr 1001: qbrf lbhe
enatr snyy ragveryl jvguva n qvtvg va gung onfr? Gura lbh unir n enaqbz bhgchg.
Zbir gb gur arkg qvtvg cbfvgvba naq ercrng.
Na vagrerfgvat fvqr rssrpg bs guvf genafsbezngvba vf gung vs lbh tb sebz onfr N
gb onfr O gura genafsbez onpx, lbh trg gur fnzr frdhrapr rkprcg gurer'f n fznyy
rkcrpgrq qrynl ba gur erfhygf.
I give working code in "Transmuting Dice, Conserving Entropy".
0Scott Garrabrant10y
I will say as little as possible to avoid spoilers, because you seem to have
thought enough about this to not want it spoiled.
The algorithm you are describing is not optimal.
Edit: Oh, I just realized you were talking about generating lots of samples. In
that case, you are right, but you have not solved the puzzle yet.
0Luke_A_Somers10y
Ebyy n friragrra fvqrq qvr naq n svsgl guerr fvqrq qvr (fvqrf ner ynoryrq mreb
gb A zvahf bar). Zhygvcyl gur svsgl-guerr fvqrq qvr erfhyg ol friragrra naq nqq
gur inyhrf.
Gur erfhyg jvyy or va mreb gb bar gubhfnaq gjb. Va gur rirag bs rvgure bs gurfr
rkgerzr erfhygf, ergel.
Rkcrpgrq ahzore bs qvpr ebyyf vf gjb gvzrf bar gubhfnaq guerr qvivqrq ol bar
gubhfnaq bar, be gjb cbvag mreb mreb sbhe qvpr ebyyf.
0Scott Garrabrant10y
You can do better :)
0Luke_A_Somers10y
Yeah, I realized that a few minutes after I posted, but didn't get a chance to
retract it... Gimme a couple minutes.
Vf vg gur fnzr vqrn ohg jvgu avar avargl frira gjvpr, naq hfvat zbq 1001? Gung
frrzf njshyyl fznyy, ohg V qba'g frr n tbbq cebbs. Vqrnyyl, gur cebqhpg bs gjb
cevzrf jbhyq or bar zber guna n zhygvcyr bs 1001, naq gung'f gur bayl jnl V pna
frr gb unir n fubeg cebbs. Guvf qbrfa'g qb gung.
0Scott Garrabrant10y
I am glad someone is thinking about it enough to fully appreciate the solution.
You are suggesting taking advantage of 709*977=692693. You can do better.
0Luke_A_Somers10y
You can do better than missing one part in 692693? You can't do it in one roll
(not even a chance of one roll) since the dice aren't large enough to ever
uniquely identify one result... is there SOME way to get it exactly? No... then
it would be a multiple of 1001.
I am presently stumped. I'll think on it a bit more.
ETA: OK, instead of having ONE left over, you leave TWO over. Assuming the new
pair is around the same size that nearly doubles your trouble rate, but in the
event of trouble, it gives you one bit of information on the outcome. So, you
can roll a single 503 sided die instead of retrying the outer procedure?
Depending on the pair of primes that produce the two-left-over, that might be
better. 709 is pretty large, though.
1Scott Garrabrant10y
The best you can do leaving 2 over is 709*953=675677, coincidentally using the
same first die. You can do better.
0mwengler10y
It is interesting to contemplate that the almost-fair solution favors Bob:
Bob counts the number of apples in the 1st room and accepts it unless it has
zero apples in it, in which case he rejects it.
If he hasn't rejected room 1, he counts the apples in room 2; if there are more
than in room 1 he accepts it, else he rejects it.
For all possible arrangements of apples EXCEPT those where one room has zero
apples, Bob has a 50% chance of getting it right. But for arrangements where one
room has zero apples in it, Bob has a 5/6 chance of winning and only a 1/6
chance of losing.
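A small enumeration sketch checking those two probabilities (illustrative only; the code and names are mine, not mwengler's). It applies the strategy above to every ordering of a fixed triple of distinct apple counts:

    from itertools import permutations

    def bob_wins(rooms):
        # Strategy above: reject room 1 iff it is empty; otherwise accept it,
        # then reject room 2 iff it has fewer apples than room 1.
        if rooms[0] == 0:
            rejected = 0
        elif rooms[1] < rooms[0]:
            rejected = 1
        else:
            rejected = 2  # accepted rooms 1 and 2, forced to reject room 3
        return rooms[rejected] == min(rooms)  # win iff rejected room is smallest

    def win_rate(a, b, c):
        orders = list(permutations((a, b, c)))
        return sum(bob_wins(o) for o in orders) / len(orders)

    print(win_rate(1, 2, 3))  # no empty room: 0.5
    print(win_rate(0, 1, 2))  # one room empty: 5/6 = 0.8333...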
I think in some important sense this is the telling limit of why Coscott is
right and how Alice can force a tie, but not win, if she knows Bob's strategy.
If Alice knew Bob was using this strategy, she would never put zero apples in
any room, and she and Bob would tie, i.e. Alice is able to force him
arbitrarily close to 50:50.
And the strategy relies upon the asymmetry in the problem: you can
go arbitrarily high in apples but you can't go arbitrarily low. Initially I was
thinking Coscott's solution must be wrong, that it must be equivocating somehow
on the fact that Alice can choose ANY number of apples. But I think it is right,
but that every strategy Bob uses to win can be defeated by Alice if she knows
what his strategy is. I think without proof, that is :)
0Scott Garrabrant10y
Right about what? The hint I give at the beginning of the solution? My solution?
Watch your quantifiers. The strategy you propose for Bob can be responded to by
Alice never putting 0 apples in any room. This strategy shows that Bob can force
a tie, but this is not an example of Bob doing better than a tie.
0mwengler10y
Right about it not being a fair game. My first thought was that it really is a
fair game and that by comparing only the cases where fixed numbers a, b, and c
are distributed you get the slight advantage for Bob that you claimed. That if
you considered ALL possibilities you would have no advantage for Bob.
Then I thought you have a vanishingly small advantage for Bob if you consider
Alice using ALL numbers, including very very VERY high numbers, where the
probability of ever taking the first room becomes vanishingly small.
And then by thinking of my strategy, of only picking the first room when you
were absolutely sure it was correct, i.e. it had in it as low a number of apples
as a room can have, I convinced myself that there really is a net advantage to
Bob, and that Alice can defeat that advantage if she knows Bob's strategy, but
Alice can't find a way to win herself.
So yes, I'm aware that Alice can defeat my 0 apple strategy if she knows about
it, just as you are aware that Alice can defeat your 2^-n strategy if she knows
about that.
0Scott Garrabrant10y
What? I do not believe Alice can defeat my strategy. She can get arbitrarily
close to 50%, but she cannot reach it.
2.5 years ago I made an attempt to calculate an upper bound for the complexity of the currently known laws of physics. Since the issue of physical laws and complexity keeps coming up, and my old post is hard to find with google searches, I'm reposting it here verbatim.
I would really like to see some solid estimates here, not just the usual hand-waving. Maybe someone better qualified can critique the following.
By "a computer program to simulate Maxwell's equations" EY presumably means a linear PDE solver for initial boundary value problems. The same general type of code should be able to handle the Schroedinger equation. There are a number of those available online, most written in Fortran or C, with the relevant code size about a megabyte. The Kolmogorov complexity of a solution produced by such a solver is probably of the same order as its code size (since the solver effectively describes the strings it generates), so, say, about 10^6 "complexity units". It might be much lower, but this is clearly the upper bound.
One wrinkle is that the initial and boundary conditions also have to be given, and the size of the relevant data heavily depends on the desired precision...
Interesting recent paper: "Is ZF a hack? Comparing the complexity of some
(formalist interpretations of) foundational systems for mathematics", Wiedijk;
he formalizes a number of systems in Automath.
2shminux10y
This makes sense for mathematical systems. I wonder if it is possible to do
something like this for a mathematical model of a physical phenomenon.
2Squark10y
It shouldn't be that hard to find code that solves a non-linear PDE. A Google
search reveals http://einsteintoolkit.org/ , an open-source toolkit that does
numerical General Relativity.
However, QFT is not a PDE, it is a completely different object. The keyword here
is lattice QFT. Google reveals this gem: http://xxx.tau.ac.il/abs/1310.7087
Nonperturbative string theory is not completely understood, however all known
formulations reduce it to some sort of QFT.
I've written a game (or see (github)) that tests your ability to assign probabilities to yes/no events accurately using a logarithmic scoring rule (called a Bayes score on LW, apparently).
For example, in the subgame "Coins from Urn Anise," you'll be told: "I have a mysterious urn labelled 'Anise' full of coins, each with possibly different probabilities. I'm picking a fresh coin from the urn. I'm about to flip the coin. Will I get heads? [Trial 1 of 10; Session 1]". You can then adjust a slider to select a number a in [0,1]. As you adjust a, you adjust the payoffs that you'll receive if the outcome of the coin flip is heads or tails. Specifically you'll receive 1+log2(a) points if the result is heads and 1+log2(1-a) points if the result is tails. This is a proper scoring rule in the sense that you maximize your expected return by choosing a equal to the posterior probability that, given what you know, this coin will come out heads. The payouts are harshly negative if you have false certainty. E.g. if you choose a=0.995, you'd only stand to gain 0.993 if heads happens but would lose 6.644 if tails happens. At the moment, you don't know much about the coin, but as... (read more)
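A minimal sketch of the stated scoring rule (the function name is mine, not taken from the game's code):

    import math

    def log_score(a, heads):
        # 1 + log2(a) points on heads, 1 + log2(1 - a) points on tails;
        # expected score is maximized by setting a = P(heads), making it proper.
        return 1 + math.log2(a if heads else 1 - a)

    print(log_score(0.995, True))   #  0.9928: small gain when near-certainty is right
    print(log_score(0.995, False))  # -6.6439: large loss when it is wrong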
This game has taught me something. I get more enjoyment than I should out of
watching a random variable go up and down, and probably should avoid gambling.
:)
1Emile10y
Nice work, congrats! Looks fun and useful, better than the calibration apps I've
seen so far (including one I made, that used confidence intervals - I had a
proper scoring rule too!)
My score:
0mcoram10y
Thanks Emile,
Is there anything you'd like to see added?
For example, I was thinking of running it on nodejs and logging the scores of
players, so you could see how you compare. (I don't have a way to host this,
right now, though.)
Or another possibility is to add diagnostics. E.g. were you setting your guess
too high systematically or was it fluctuating more than the data would really
say it should (under some models for the prior/posterior, say).
Also, I'd be happy to have pointers to your calibration apps or others you've
found useful.
0[anonymous]10y
Thank you. I really, really want to see more of these.
Feature request #976: More stats to give you an indication of overconfidence /
underconfidence. (e.g. out of 40 questions where you gave an answer between .45
and .55, you were right 70% of the time).
Brought to mind by the recent post about dreaming on Slate Star Codex:
Has anyone read a convincing refutation of the deflationary hypothesis about dreams - that is, that there aren't any? In the sense of nothing like waking experience ever happening during sleep; just junk memories with backdated time-stamps?
My brain is attributing this position to Dennett in one of his older collections - maybe Brainstorms - but it probably predates him.
Stimuli can be incorporated into dreams - for example, if someone in a sleep lab sees you are in REM sleep and sprays water on you, you're more likely to report having had a dream that it was raining when you wake up. Yes, this has been formally tested. This provides strong evidence that dreams are going on during sleep.
More directly, communication has been established between dreaming and waking states by lucid dreamers in sleep labs. Lucid dreamers can make eye movements during their dreams to send predetermined messages to laboratory technicians monitoring them with EEGs. Again, this has been formally tested.
This question reminds me of
http://lesswrong.com/lw/8wi/inverse_pzombies_the_other_direction_in_the_hard/
0[anonymous]10y
Would this be refuted by cases where lucid dreamers were able to communicate
(one way) with researchers during their dreams through eye movements?
http://en.wikipedia.org/wiki/Lucid_dream#Perception_of_time
0Alejandro110y
Indeed, there is an essay in Brainstorms articulating this position. IIRC
Dennett does not explicitly commit to defending it, rather he develops it to
make the point that we do not have a privileged, first-person knowledge about
our experiences. There is conceivable third-person scientific evidence that
might lead us to accept this theory (even if, going by Yvain's comment, this
does not seem to actually be the case), and our first-person intuition does not
trump it.
I wrote a piece for work on quota systems and affirmative action in employment ("Fixing Our Model of Meritocracy"). It's politics-related, but I did get to cite a really fun natural experiment and talk about using quotas to counter the availability heuristic.
This is a tangent, but since you mention the "good founders started
[programming] at 13" meme, it's a little bit relevant ...
I find it deeply bizarre that there's this idea today among some programmers
that if you didn't start programming in your early teens, you will never be good
at programming. Why is this so bizarre? Because until very recently, there was
no such thing as a programmer who started at a young age; and yet there were
people who became good at programming.
Prior to the 1980s, most people who ended up as programmers didn't have access
to a computer until university, often not until graduate school. Even for
university students, relatively unfettered access to a computer was an unusual
exception, found only in extremely hacker-friendly cultures such as MIT.
Put another way: Donald Knuth probably didn't use a computer until he was around
20. John McCarthy was born in 1927 and probably couldn't have come near a
computer until he was a professor, in his mid-20s. (And of course Alan Turing,
Jack Good, or John von Neumann couldn't have grown up with computers!)
(But all of them were mathematicians, and several of them physicists. Knuth, for
one, was also a puzzle aficionado and a musician from his early years — two
intellectual pursuits often believed to correlate with programming ability.)
In any event, it should be evident from the historical record that people who
didn't see a computer until adulthood could still become extremely proficient
programmers and computer scientists.
I've heard some people defend the "you can't be good unless you started early"
meme by comparison with language acquisition. Humans generally can't gain
native-level fluency in a language unless they are exposed to it as young
children. But language acquisition is a very specific developmental process that
has evolved over thousands of generations, and occurs in a
developmentally-critical period of very early childhood. Programming hasn't been
around that long, and there's
8Viliam_Bur10y
Seems to me that using computers since your childhood is not necessary, but
there is something which is necessary, and which is likely to be expressed in
childhood as an interest in computer programming. And, as you mentioned, in the
absence of computers, this something is likely to be expressed as an interest in
mathematics or physics.
So the correct model is not "early programming causes great programmers", but
rather "X causes great programmers, and X causes early programming; therefore
early programming correlates with great programmers".
Starting early with programming is not strictly necessary... but these days when
computers are almost everywhere and they are relatively cheap, not expressing
any interest in programming during one's childhood is evidence that this person is
probably not meant to be a good programmer. (The only question is how strong
this evidence is.)
Comparing with language acquisition is wrong... unless the comparison is true
for mathematics. (Is there research on this?) Again, the model "you need
programming acquisition as a child" would be wrong, but the model "you need math
acquisition as a child, and without this you later will not grok programming"
might be correct.
0Pfft10y
Yeah, I think this is explicitly the claim Paul Graham made, with X = "deep
interest in technology".
4bogus10y
There is a rule of thumb that achieving exceptional mastery in any specific
field requires 10,000 hours of practice. This seems to be true across fields:
classical musicians, chess players, sports players, scholars/academics, etc.
It's a lot easier to meet that standard if you start from childhood. Note that
people who make this claim in the computing field are talking about hackers, not
professional programmers in a general sense. It's very possible to become a
productive programmer at any age.
3Douglas_Knight10y
The only aspect of language with a critical period is accent. Adults commonly
achieve fluency. In fact, adults learn a second language faster than children.
-3Creutzer10y
As far as I know, the degree to which second-language speakers can acquire
native-like competence in domains other than phonetics is somewhat debated.
Anecdotally, it's a rare person who manages to never make a syntactic error that
a native speaker wouldn't make, and there are some aspects of language (I'm told
that subjunctive in French and aspect in Slavic languages may be examples) that
may be impossible to fully acquire for non-native speakers.
So I wouldn't accept this theoretical assertion without further evidence; and
for all practical purposes, the claim that you have to learn a language as a
child in order to become perfect (in the sense of native-like) with it is true.
2Emile10y
Not my downvotes, but you're probably getting flak for just asserting stuff and
then demanding evidence for the opposing side. A more mellow approach like "huh
that's funny I've always heard the opposite" would be better received.
0Creutzer10y
Indeed, I probably expressed myself quite badly, because I don't think what I
meant to say is that outrageous: I heard the opposite, and anecdotally, it seems
right - so I would have liked to see the (non-anecdotal) evidence against it.
Perhaps I phrased it a bit harshly because what I was responding to was also
just an unsubstantiated assertion (or, alternatively, a non-sequitur in that it
dropped the "native-like" before fluency).
2Lumifer10y
Links? As far as I know it's not debated.
That's, ahem, bullshit. Why in the world would some features of syntax be
"impossible to fully acquire"?
For all practical purposes it is NOT true.
3Creutzer10y
You may easily know more about this issue than me, because I haven't actually
researched this.
That said, let's be more precise. If we're talking about mere fluency, there is,
of course, no question.
But if we're talking about actually native-equivalent competence and
performance, I have severe doubts that this is even regularly achieved. How many
L2 speakers of English do you know who never, ever pick an unnatural choice from
among the myriad of different ways in which the future can be expressed in
English? This is something that is completely effortless for native speakers,
but very hard for L2 speakers.
The people I know who are candidates for that level of proficiency in an L2 are
at the upper end of the intelligence spectrum, and I also know a non-dumb person
who has lived in a German-speaking country for decades and still uses wrong
plural formations. Hell, there's people who are employed and teach at MIT and so
are presumably non-dumb who say things like "how it sounds like".
The two things I mentioned are semantic/pragmatic, not syntactic. I know there
is a study that shows L2 learners don't have much of a problem with the
morphosyntax of Russian aspect, and that doesn't surprise me very much. I don't
know and didn't find any work that tried to test native-like performance on the
semantic and pragmatic level.
I'm not sure how to answer the "why" question. Why should there be a critical
period for anything? ... Intuitively, I find that semantics/pragmatics, having
to do with categorisation, is a better candidate for something
critical-period-like than pure (morpho)syntax. I'm not even sure you need
critical periods for everything, anyway. If A learns to play the piano starting
at age 5 and B starts at age 35, I wouldn't be surprised if A is not only on
average, but almost always, better at age 25 than B is at 55. Unfortunately,
that's basically impossible to study while controlling for all confounders like
general intelligence, quality of instruction, a
1Lumifer10y
You are committing the nirvana fallacy. How many native speakers of English
never make mistakes or never "pick an unnatural choice"?
For example, I know a woman who immigrated to the US as an adult and is fully
bilingual. As an objective measure, I think she had the perfect score on the
verbal section of the LSAT. She speaks better English than most "natives". She
is not unusual.
Tell your French linguist to go into the countryside and listen to the French of
the uneducated native speakers. Do they make mistakes?
-1Creutzer10y
I'm not talking about performance errors in general. I'm talking about the fact
that it is extremely hard to acquire native-like competence wrt the semantics
and pragmatics of the ways in which English allows one to express something
about the future.
Your utterance of this sentence severely damages your credibility with respect
to any linguistic issue. The proper way to say this is: she speaks higher-status
English than most native speakers. Besides, the fact that she gets perfect
scores on some test (whose content and format is unknown to me), which
presumably native speakers don't, suggests that she is far from an average
individual anyway.
Also, that you're not bringing up a single relevant study that compares
long-time L2 speakers with native speakers on some interesting, intricate and
subtle issue where a competence difference might be suspected leaves me with a
very low expectation of the fruitfulness of this discussion, so maybe we should
just leave it at that. I'm not even sure to what extent we aren't simply talking
past each other because we have different ideas about what native-like
performance means.
They don't, by definition; not the way you probably mean it. I wouldn't know why
the rate of performance errors should correlate in any way with education
(controlling for intelligence). I also trust the man's judgment enough to assume
that he was talking about a sort of error that stuck out because a native
speaker wouldn't make it.
3Lumifer10y
I don't think so. This looks like an empirical question -- what do you mean by
"extremely hard"? Any evidence?
No, I still don't think so -- for either of your claims. Leaving aside my
credibility, non-black English in the United States (as opposed to the UK) has
few ways to show status and they tend to be regional, anyway. She speaks better
English (with some accent, to be sure) in the usual sense -- she has a rich
vocabulary and doesn't make many mistakes.
While that is true, your claims weren't about averages. Your claims were about
impossibility -- for anyone. An average person isn't successful at anything,
including second languages.
0Creutzer10y
I don't know if anybody has ever studied this (I would be surprised if they
had), so I have only anecdotal evidence from the uncertainty I myself experience
sometimes when choosing between "will", "going to", plain present, "will +
progressive", and present progressive, and from the testimony of other highly
advanced L2 speakers I've talked to who feel the same way - while native
speakers are usually not even aware that there is an issue here.
How exactly is "rich vocabulary" not high-status? (Also, are you sure it
actually contains more non-technical lexemes and not just higher-status
lexemes?) I'm not exactly sure what you mean by "mistakes". Things that are
ungrammatical in your idiolect of English?
I actually made two claims. The one was that it's not entirely clear that there
aren't any such in-principle impossibilities, though I admit that the case for
them isn't very strong. I will be very happy if you give me a reference
surveying some research on this and saying that the empirical side is really
settled and the linguists who still go on telling their students that it isn't
are just not up-to-date.
The second is that in any case, only the most exceptional L2 learners can in
practice expect to ever achieve native-like fluency.
0Lumifer10y
It seems you are talking about being self-conscious, not about language fluency.
Why in the world would there be "in-principle impossibilities" -- where does
this idea even come from? What possible mechanism do you have in mind?
Well, let's get specific. Which test do you assert native speakers will pass and
ESL people will not (except for the "most exceptional")?
0Creutzer10y
I didn't say it was about fluency. But I don't think it's about
self-consciousness, either. Native speakers of a language pick the appropriate
tense and aspect forms of verbs perfectly effortlessly - or how often do you
hear a native speaker of English use a progressive in a case where it strikes
you as inappropriate and you would say that they should really have used a plain
tense here, for example?* - while for L2 speakers, it is generally pretty hard
to grasp all the details of a language's tense/aspect system.
*I'm choosing the progressive as an example because it's easiest to describe,
not because I think it's a candidate for serious unacquirability. It's known to
be quite hard for native speakers of a language that has no aspect, but it's
certainly possible to get to a point where you don't use the progressive wrongly
essentially ever.
For syntax, you would really need to be a strong Chomskian to expect any such
things. For semantics, it seems to be a bit more plausible a priori: maybe as an
adult, you have a hard time learning new ways of carving up the world?
I don't know of a pass/fail format test, but I expect reading speed and the
speed of their speech to be lower in L2 speakers than in L1 speakers of
comparable intelligence. I would also expect that if you measure cognitive load
somehow, language processing in an L2 requires more of your capacity than
processing your L1. I would also expect that the active vocabulary of L1
speakers is generally larger than that of an L2 speaker even if all the words in
the L1 speaker's active lexicon are in the L2 speaker's passive vocabulary.
0NancyLebovitz10y
I wonder if there's an implication that colloquial language is more complex than
high status language.
5arundelo10y
The things being measured are different. To a first approximation, all native
speakers do maximally well at sounding like a native speaker.
Lumifer's friend may indeed speak like a native speaker (though it's rare for
people who learned as adults to do so), but she cannot be better at it than
"most 'natives'".
What she can be better at than most natives is:
* Vocabulary.
* Speaking a high-status dialect (e.g., avoiding third person singular "don't",
double negatives, and "there's" + plural).
* Using complex sentence structures.
* Avoiding disfluencies.
It is possible, though, for a lower-status dialect to be more complex than a
higher-status one. Example: the Black English verb system.
2tut10y
Or maybe it means that high status and low status English have different
difficulties, and native speakers tend to learn the one that their parents use
(finding others harder) while L2 speakers learn to speak from a description of
English which is actually a description of a particular high status accent
(usually either Oxford or New England, I think).
1taelor10y
The "Standard American Accent" spoken in the media and generally taught to
foriegners is the confusingly named "Midwestern" Accent, which due to internal
migration and a subsequent vowel shift, is now mostly spoken in California and
the Pacific Northwest.
Interestingly enough, my old Japanese instructor was a native Osakan, whose
natural dialect was Kansai-ben; despite this, she conducted the class using the
standard, Tokyo Dialect.
0Pfft10y
If all you are saying is that people who start learning a language at age 2 are
almost always better at it than people who start learning the same language at
age 20, I don't think anyone would disagree. The whole discussion is about
controlling for confounders...
0Creutzer10y
Yes and no - the whole discussion is actually two discussions, I think.
One is about in-principle possibility, the presence of something like a critical
period, etc. There it is crucial for confounders.
The second discussion is about in-practice possibility, whether people starting
later can reasonably expect to get to the same level of proficiency. Here the
"confounders" are actually part of what this is about.
0Viliam_Bur10y
Bonus points for giving a specific example, which helped me to understand your
point, and at this moment I fully agree with you. Because I understand the
example; my own language has something similar, and I wouldn't expect a stranger
to use this correctly. The reason is that it would be too much work to learn
properly, for too little benefit. It's a different way to say things, and you
only achieve a small difference in meaning. And even if you asked a non-linguist
native, they would probably find it difficult to explain the difference
properly. So you have little chance to learn it right, and also little
motivation to do so.
Here is my attempt to explain the examples from the link, pages 3 and 4. (I am
not a Russian language speaker, but my native language is also Slavic, and I
learned Russian. If I got something wrong, please correct me.)
"ya uslyshala ..." = "I heard ..."
"mne poslyshalis ..." = "to-me happened-to-be-heard ..."
"ya xotel ..." = "I wanted ..."
"mne xotelos ..." = "to-me happened-to-want ..."
That's pretty much the same meaning, it's just that the first variant is "more
agenty", and the second variant is "less agenty", to use the LW lingo. But
that's kinda difficult to explain explicitly, becase... you know, how exactly
can "hearing" (not active listening, just hearing) be "agenty"; and how exactly
can "wanting" be "non-agenty"? It doesn't seem to make much sense, until you
think about it, right? (The "non-agenty wanting" is something like: my emotions
made me want. So I admit that I wanted, but at the same time I deny full
responsibility for my wanting.)
As a stranger, what is the chance that (1) you will hear it explained in a way
that will make sense to you, (2) you will remember it correctly, and (3) when
the opportunity comes, you will remember to use it? Pretty much zero, I guess.
Unless you decide to put an extra effort into this aspect of the language
specifically. But considering the costs and benefits, you are extremely unlikely
to do so.
2Douglas_Knight10y
The paper doesn't even find a statistically significant difference. The point
estimate is that advanced L2 do worse than natives, but natives make almost as
many mistakes.
0Creutzer10y
They did find differences with the advanced L2 speakers, but I guess we care
about the highly advanced ones. They point out a difference at the bottom of
page 18, though admittedly, it doesn't seem to be that big a deal and I
don't know enough about statistics to tell whether it's very meaningful.
0IlyaShpitser10y
'mne poslyshalos' I think. This one has connotations of 'hearing things,'
though.
0Viliam_Bur10y
Note: "Mne poslyshalis’ shagi na krishe." was the original example; I just
removed the unchanging parts of the sentences.
0IlyaShpitser10y
Ah I see, yes you are right. That is the correct plural in this case. Sorry
about that! 'Mne poslyshalos chtoto' ("something made itself heard by me") would
be the singular, vs the plural above ("the steps on the roof made themselves
heard by me."). Or at least I think it would be -- I might be losing my ear for
Russian.
0Douglas_Knight10y
What do you mean by "theoretical"? Is this just an insult you fling at people
you disagree with?
-2Creutzer10y
Huh? What a curious misunderstanding! The "theoretical" referred just to the -
theoretical! - question of whether it's in principle possible to acquire
native-like proficiency, which was contrasted with my claim that even if it is,
most people cannot expect to reach that state in practice.
0Douglas_Knight10y
I thought that my choice of the word "commonly" indicated that I was not talking
about the limits of the possible.
0Creutzer10y
You really think it's common for L2 speakers to achieve native-like levels of
proficiency? Where do you live and who are these geniuses? I'm serious. For
example, I see people speaking at conferences who have lived in the US for
years, but aren't native speakers, and they are still not doing so with
native-like fluency and eloquence. And presumably you have to be more than
averagely intelligent to give a talk at a scientific conference...
I'm not talking about just any kind of fluency here, and neither was
fubarobfusco, I assume. I suspect I was trying to interpret your utterance in a
way that I didn't assign very low probability to (i.e. not as claiming that it's
common for people to become native-like) and that also wasn't a non-sequitur wrt
the claim you were referring to (by reducing native-like fluency to some weaker
notion) and kind of failed.
0Douglas_Knight10y
Maybe I should have said "routinely" rather than "commonly." But the key
differentiator is effort.
I don't care about your theoretical question of whether you can come up with a
test that L2 speakers fail. I assume that fubarobfusco meant the same thing I
meant. I'm done.
0mwengler10y
Suppose you replaced it with the idea that people who started programming when
they were 13 have a much easier time becoming good programmers as adults, and so
are overrepresented among programmers at every level. Does that still sound
bizarre?
0[anonymous]10y
Donald Knuth was probably doing real math in his early teens. Maybe this counts.
0JQuinton10y
A similar argument was presented in an article at Slate, "Affirmative action
doesn't work. It never did. It's time for a new solution.":
An interesting quote, I wonder what people here will make of it...
True rationalists are as rare in life as actual deconstructionists are in university English departments, or true bisexuals in gay bars. In a lifetime spent in hotbeds of secularism, I have known perhaps two thoroughgoing rationalists—people who actually tried to eliminate intuition and navigate life by reasoning about it—and countless humanists, in Comte’s sense, people who don’t go in for God but are enthusiasts for transcendent meaning, for sacred pantheons and private chapels. They hav
I can't tell if the author means "rationalists" in the technical sense (i.e. as
opposed to empiricists) but if he doesn't then I think it's unfair of him to
require that rationalists "eliminate intuition and navigate life by reasoning
about it", since this is so clearly irrational (because intuition is so
indispensably powerful).
0Vulture10y
I loved this quote. I think it's a characterization of UU-style humanism that is
fair but that they would probably agree with.
Speed reading doesn't register many hits here, but in a recent thread on subvocalization there are claims of speeds well above 500 WPM.
My standard reading speed is about 200 WPM (based on my eReader statistics; it varies by content). I can push myself to maybe 240, but it is not enjoyable (I wouldn't read fiction at this speed), and I reach 450-500 WPM with RSVP.
My aim this year is to get myself to a 500+ WPM base speed (i.e. usable also for leisure reading and without RSVP).
Is this even possible? Claims seem to be contradictory.
Does anybody have recommendations on systems th... (read more)
Something I recently noticed: steelmanning is popular on LessWrong. But the sequences contain a post called Against Devil's Advocacy, which argues strongly against devil's advocacy, and steelmanning often looks a lot like devil's advocacy. What, if anything is the difference between the two?
Steelmanning is about fixing errors in an argument (or otherwise improving it), while retaining (some of) the argument's assumptions. As a result, the argument becomes better, even if you disagree with some of the assumptions. The conclusion of the argument may change as a result, what's fixed about the conclusion is only the question that it needs to clarify. Devil's advocacy is about finding arguments for a given conclusion, including fallacious but convincing ones.
So the difference is in the direction of reasoning and intent regarding epistemic hygiene. Steelmanning starts from (somewhat) fixed assumptions and looks for more robust arguments following from them that would address a given question (careful hypothetical reasoning), while devil's advocacy starts from a fixed conclusion (not just a fixed question that the conclusion would judge) and looks for convincing arguments leading to it (rationalization with allowed use of dark arts).
A bad aspect of a steelmanned argument is that it can be useless: if you don't accept the assumptions, there is often little point in investigating their implications. A bad aspect of a devil's advocate's argument is that it may be misleading, acting as filtered evidence for the chosen conclusion. In this sense, devil's advocates exercise the skill of coming up with misleading arguments, which might be bad for their ability to reason carefully in other situations.
But what if you steelman devil's advocacy to exclude fallacious but convincing
arguments?
0Vladimir_Nesov10y
Then the main problem is that it produces (and exercises the skill of producing)
arguments that are filtered evidence in the direction of the predefined
conclusion, instead of well-calibrated consideration of the question on which
the conclusion is one position.
0ChrisHallquist10y
So I'm still not sure what the difference with steelmanning is supposed to be,
unless it's that with steelmanning you limit yourself to fixing flaws in your
opponents' arguments that can be fixed without essentially changing their
arguments, as opposed to just trying to find the best arguments you can for their
conclusion (the latter being a way of filtering evidence?)
That would seem to imply that steelmanning isn't a universal duty. If you think
an argument can't be fixed without essentially changing it, you'll just be
forced to say it can't be steelmanned.
6Jayson_Virissimo10y
As far as I can tell...nothing. Most likely, there are simply many LessWrongers
(like me) that disagree with E.Y. on this point.
2Douglas_Knight10y
What leads you to believe that you disagree with Eliezer on this point? I
suspect that you are just going by the title. I just read the essay and he
endorses lots of practices that others call Devil's Advocacy. I'm really not
sure what practice he is condemning. If you can identify a specific practice
that you disagree with him about, could you describe it in your own words?
An article on samurai mental tricks. Most of them will not be that surprising to LWers, but it is nice to see modern results have a long history of working.
Does anyone have advice for getting an entry level software-development job? I'm finding a lot seem to want several years of experience, or a degree, while I'm self taught.
Ignore what they say on the job posting, apply anyway with a resume that links to your Github, websites you've built, etc. Many will still reject you for lack of experience, but in many cases it will turn out the job posting was a very optimistic description of the candidate they were hoping to find, and they'll interview you anyway in spite of not meeting the qualifications on the job listing.
This is just a guess, but I think it might be helpful to include some
screenshots (in color) of the programs, websites, etc. That would make them
"more real" to the person who reads this. At least, save them some
inconvenience. Of course, I assume that the programs and websites have a nice
user interface.
It's also an opportunity for an interesting experiment: randomly send 10 resumes
without the screenshots, and 10 resumes with screenshots. Measure how many
interview invitations you get from each group.
If you have a certificate from Udacity or other online university, mention that,
too. Don't list it as formal education, but somewhere in the "other courses
and certificates" category.
2ChrisHallquist10y
I think ideally, you want your code running on a website where they can interact
with it, but maybe a screenshot would help entice them to go to the website. Or
help if you can't get the code on a website for some reason.
1ChristianKl10y
You want to signal a hacker mindset. Instead of focusing on including
screenshots, it might be more effective to write your resume in LaTeX.
1Viliam_Bur10y
It depends on your model of who will be reading your resume.
I realized that my implicit model is some half-IT-literate HR person or manager.
Someone who doesn't know what LaTeX is, and who couldn't download and compile
your project from Github. But they may look at a nice printed paper and say:
"oh, shiny!" and choose you instead of some other candidate.
Live in a place with lots of demand. Silicon Valley and Boston are both good choices; there may be others but I'm less familiar with them.
Have a github account. Fill it with stuff.
Have a personal site. Fill it with stuff.
Don't worry about the degree requirements; everybody means "Bachelor's of CS or equivalent".
Don't worry about experience requirements. Unlike the degree requirement this does sometimes matter, but you won't be able to tell by reading the advert so just go ahead and apply.
Prefer smaller companies. The bigger the company, the more likely it is that your resume will be screened out by some automated process before it can reach someone like me. I read people's githubs; HR necessarily does not.
Practicing whiteboard-style interview coding problems is very helpful. The best
places to work will all make you code in the interview [1] so you want to feel
at-ease in that environment. If you want to do a practice interview I'd be up
for doing that and giving you an honest evaluation of whether I'd hire you if I
were hiring.
[1] Be very cautious about somewhere that doesn't make you code in the
interview: you might end up working with a lot of people who can't really code.
1maia10y
If you have the skills to do software interviews well, the hardest part will be
getting past resume screening. If you can, try to use personal connections to
bypass that step and get interviews. Then your skills will speak for themselves.
I got to design my first infographic for work and I'd really appreciate feedback (it's here: "Did We Mess Up on Mammograms?").
I'm also curious about recommendations for tools. I used Easl.ly which is a WYSIWYG editor, but it was annoying in that I couldn't just tell it I wanted an mxn block of people icons, evenly spaced, but had to do it by hand instead.
A TEDx video about teaching mathematics; it is in Slovak, so you have to select English subtitles: "Mathematics as a source of joy". Had to share it, but I am afraid the video does not explain too much, and there is not much material in English to link to -- I only found two articles. So here is a bit more info:
The video is about an educational method of a Czech math teacher Vít Hejný; it is told by his son. Prof. Hejný created an educational methodology based mostly on Piaget, but specifically applied to the domain of teaching mathematics (elementary- and... (read more)
This was fun. I like how he emphasizes that every kid can figure out all of math
by herself, and that thinking citizens are what you need for a democracy rather
than a totalitarian state - because the Czech Republic was a communist
dictatorship only a generation ago, and many teachers were already teachers
then.
1Viliam_Bur10y
A cultural detail which may help to explain this attitude:
In communist countries a career in science or in math/physics education was
a very popular choice for smart people. It was maybe the only place where you
could use your mind freely, without being afraid of contradicting something the
Party said (which could ruin your career and personal life).
So there are many people here who have both "mathematics" and "democracy" as
applause lights. But I'd say that after the end of communist regime the quality
of math education actually decreased, because the best teachers suddenly had
many new career paths available. (I was in a math-oriented high school when the
regime ended, and most of the best teachers left the school within two years,
and started their private companies or non-governmental organizations; usually
somehow related to education.) Even the mathematical curriculum of prof. Hejný
was invented during communism... but only under democracy does his son have the
freedom to actually publish it.
0chaosmage10y
That's very true. Small addition: Many smart people went into medicine, too.
Sometimes I feel like looking into how I can help humanity (e.g. 80000 hours stuff), but other times I feel like humanity is just irredeemable and may as well wipe itself off the planet (via climate change, nuclear war, whatever).
For instance, humans are so facepalmingly bad at making decisions for the long term (viz. climate change, running out of fossil fuels) that it seems clear that genetic or neurological enhancements would be highly beneficial in changing this (and other deficiencies, of course). Yet discourse about such things is overwhelmingly neg... (read more)
You know how when you see a kid about to fall off a cliff, you shrug and don't do anything because the standards of discourse aren't as high as they could be?
A task with a better expected outcome is still better (in expected outcome), even if it's hopeless, silly, not as funny as some of the failure modes, not your responsibility or in some way emotionally less comfortable.
You're of course correct. I'm tempted to question the use of "better" (i.e. it's
a matter of values and opinion as to whether it's "better" if humanity wipes
itself out or not), but I think it's pretty fair to assume (as I believe
utilitarians do) that less suffering is better, and theoretically less suffering
would result from better decision-making and possibly from less climate change.
Thanks for this.
8RomeoStevens10y
https://en.wikipedia.org/wiki/Identifiable_victim_effect
Also, would you still want to save a drowning dog even if it might bite you out
of fear and misunderstanding? (let's say it is a small dog and a bite would not
be drastically injurious)
-1ricketybridge10y
True, true. But it's still hard for me (and most people?) to circumvent that
effect, even while I'm aware of it. I know Mother Theresa actually had a
technique for it (to just think of one child rather than the millions in need).
I guess I can try that. Any other suggestions?
I'll pretend it's a cat since I don't really like small dogs. ;-) Yes, of course
I'd save it. I think this analogy will help me moving forward. Thank you! ^_^
2RomeoStevens10y
No problem. I have an intuition that IMing might be more productive than
structured posts if you're exploring this space and want to cover a bunch of
ground quickly. Feel free to ping me on gtalk if you're interested.
romeostevensit is my google.
7mwengler10y
I think it is amazingly myopic to look at the only species that has ever started
a fire or crafted a wheel and conclude that
The idea that climate change is an existential risk seems wacky to me. It is not
difficult to walk away from an ocean which is rising at even 1 m a year and no
one hypothesizes anything close to that rate. We are adapted to a broad range of
climates and able to move north south east and west as the winds might blow us.
Running out of fossil fuels, thinking we are doing something wildly stupid with
our use of fossil fuels seems to me to be about as sensible as thinking a
centrally planned economy will work better. It is not intuitive that a centrally
planned economy will be a piece of crap compared to what we have, but it turns
out to be true. Thinking you or even a bunch of people like you with no track
record doing ANYTHING can second guess the markets in fossil fuels, well it
seems intuitively right, but if you ever get involved in testing your intuitions
I don't think you'll find it holds up. And if you think even doubling the
price of fossil fuels really changes the calculus by much, I think Europe and
Japan have lived that life for decades compared to the US, and yet the US is the
home to the wackiest and ill-thought-out alternatives to fossil fuels in the
world.
Can anybody explain to me why creating a wildly popular luxury car which
effectively runs on burning coal is such a boon to the environment that it
should be subsidized at $7500 by the US federal government and an additional
$2500 by states such as California which has been so close to bankruptcy
recently? Well that is what a Tesla is, if you drive one in a country with coal
on the grid, and most of Europe, China, and the US are in that category. The
Tesla S Performance puts out the same amount of carbon as a car getting
25 mpg of gasoline. (I originally wrote 14 mpg here; see the correction below.)
2roystgnr10y
The Tesla S takes about 38 kW-hr to go 100 miles, which works out to around 80
lb CO2 generated. 14mpg would be 7.1 gallons of gasoline to go 100 miles, which
works out to around 140 lb CO2 generated. I couldn't find any independent numbers
for the S Performance, but Tesla's site claims the same range as the regular S
with the same battery pack.
The rest of your point seems to hold, though; if the subsidy is predicated on
reducing CO2 emissions then the equivalent of 25mpg still isn't anything to brag
about.
2Nornagest10y
This is likely an overestimation, since it assumes that you're exclusively
burning coal. Electricity production in the US is about 68% fossil, the rest
deriving from a mixture of nuclear and renewables; the fossil-fuel category also
includes natural gas, which per your link generates about 55-60% the CO2 of coal
per unit electricity. This varies quite a bit state to state, though, from
almost exclusively fossil (West Virginia; Delaware; Utah) to almost exclusively
nuclear (Vermont) or renewable (Washington; Idaho).
Based on the same figures and breaking it down by the national average of coal,
natural gas, and nuclear and renewables, I'm getting a figure of 43 lb CO2 / 100
mi, or about 50 mpg equivalent. Since its subsidies came up, California burns
almost no coal but gets a bit more than 60% of its energy from natural gas; its
equivalent would be about 28 lb CO2.
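A rough back-of-the-envelope sketch of these calculations; the emission factors are approximations I am assuming for illustration (about 2.1 lb CO2/kWh for coal, 1.2 for gas, 19.6 lb per gallon of gasoline), not figures taken from the linked sources:

    EV_KWH_PER_100MI = 38          # Tesla S figure quoted above
    LB_CO2_PER_GALLON = 19.6       # gasoline combustion (approximate)
    LB_CO2_PER_KWH = {"coal": 2.1, "gas": 1.2, "clean": 0.0}  # approximate

    def ev_lb_co2_per_100mi(mix):
        # mix maps source -> fraction of generation; fractions sum to 1
        return EV_KWH_PER_100MI * sum(LB_CO2_PER_KWH[s] * f for s, f in mix.items())

    def mpg_equivalent(lb_per_100mi):
        # mpg of a gasoline car emitting the same CO2 per 100 miles
        return 100 / (lb_per_100mi / LB_CO2_PER_GALLON)

    for name, mix in [("coal only", {"coal": 1.0, "gas": 0.0, "clean": 0.0}),
                      ("rough US mix", {"coal": 0.39, "gas": 0.28, "clean": 0.33})]:
        lb = ev_lb_co2_per_100mi(mix)
        print(name, round(lb), "lb CO2/100mi,", round(mpg_equivalent(lb)), "mpg equiv")

With these assumed factors, coal-only comes out near 80 lb and ~25 mpg equivalent, and the national-mix figure lands in the mid-40s of lb CO2, in the same ballpark as the 43 lb / ~50 mpg estimate above.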
0mwengler10y
Yes, but that should be the right comparison to make. Consider two alternatives:
1) World generates N kWh + 38 kWh to fuel a Tesla to go 100 miles.
2) World generates N kWh and puts 4 gallons of gasoline in a car to go 100 miles.
If we are interested in minimizing CO2 emissions, then in world 2 compared to
world 1 we will generate 38 kWh fewer from our dirtiest plant on the grid, which
is going to be a coal-fired plant.
So in world 1 we have an extra 80 lbs of CO2 emission from electric generation
and 0 from gasoline. In world 2 we have 80 lbs less of CO2 emission from
electric generation and add 80 lbs from gasoline.
When adding electric usage, you need to "bill" it at the marginal costs to
generate that electricity, which is true both in terms the price you charge
customers for it and the CO2 emissions you attribute to it.
The US, China, and most of Europe have a lot of coal in the mix on the grid.
Until they scrub coal or stop using it, it seems very clear that the Tesla puffs
out the same amount of CO2 as a 25 mpg gasoline powered car.
3Nornagest10y
It's true that most of the flexibility in our power system comes from dirty
sources, and that squeezing a few extra kilowatt-hours in the short term
generally means burning more coal. If we're talking policy changes aimed at
popularizing electric cars, though, then we aren't talking a megawatt here or
there; we've moved into the realm of adding capacity, and it's not at all
obvious that new electrical capacity is going to come from dirty sources -- at
least outside of somewhere like West Virginia. On those kinds of scales, I think
it's fair to assume a mix similar to what we've currently got, outside of
special cases like Germany phasing out its nuclear program.
(There are some caveats; renewables are growing strongly in the US, but nuclear
isn't. But it works as a first approximation.)
3mwengler10y
The global installed capacity of coal-fired power generation is expected to
increase from 1,673.1 GW in 2012 to 2,057.6 GW by 2019, according to a report
from Transparency Market Research. Coal-fired electrical-generation plants are
being started up in Europe—and comparatively clean gas-fired generating capacity
is being shut down.
Coal electric generation isn't going away anytime soon. The only reason coal may
look like it is declining in the US is that natural gas generation in the US is
currently less expensive than coal. But in Europe, coal is less expensive and,
remarkably, generating companies respond by turning up coal and turning down
natural gas.
2Nornagest10y
Doesn't need to be going away for my argument to hold, as long as the relative
proportions are favorable -- and as far as I can tell, most of that GIC delta in
coal is happening in the developing world, where I don't see too many people
buying Teslas. Europe and the US project new capacity disproportionately in the
form of renewables; coal is going up in Europe, but less quickly.
This isn't ideal; I'm generally long on wind and solar, but if I had my way we'd
be building Gen IV nuclear reactors as fast as we could lay down concrete. But
neither is it as grim as the picture you seem to be painting.
3mwengler10y
I would agree with that. Certainly my initial picture was just wrong. Even
using coal as the standard, the Tesla is as good as a 25 mpg gasoline car. For
that size and quality of car, that is actually not bad, but it is best in class,
not revolutionary.
As to subsidizing a Tesla as opposed to a 40 mpg diesel, for example, as long as
we use coal for electricity, we are better off adding a 40 mpg diesel to the
fleet than adding a Tesla. This is almost just me hating on subsidies,
preferring that we just tax fuels proportional to their carbon content and let
market forces decide how to distribute that distortion.
3Nornagest10y
That probably is better baseline policy from a carbon minimization perspective,
yeah; I have similar objections to the fleet mileage penalties imposed on
automakers in the US, which ended up contributing among other things to a good
chunk of the SUV boom in the '90s and '00s. Now, I can see an argument for
subsidies or even direct grants if they help kickstart building EV
infrastructure or enable game-changing research, but that should be narrowly
targeted, not the basis of our entire approach.
Unfortunately, basic economic literacy is not exactly a hallmark of
environmental policy.
0Douglas_Knight10y
Yes, but marginal analysis requires identifying the correct margin. If you
charge your car during the day at work, you are increasing peak load, which is
often coal. If you charge your car at night, you are contributing to base load.
This might not even require building new plants! This works great if you have
nuclear plants. With a sufficiently smart grid, it makes erratic sources like
wind much more useful.
2mwengler10y
I do agree using the rate for coal is pessimistic.
On further research, I discover that Li-ion batteries are very energetically
expensive to produce. Producing and then recycling them takes about 430 kWh of
energy per kWh of battery capacity. Li-ion can be recharged 300-500 times.
Using 430 recharges and amortizing production costs across all uses of the
battery, we see that we have 1 kWh of production energy used for every 1 kWh of
storage the battery accomplished during its lifetime.
So now we have the more complicated accounting question: how much carbon do we
associate with constructing the battery vs. how much with charging the battery?
If construction and charging come from the same grid, we charge the same rate
for both.
And of course, to be fair, we need to figure the cost to refine a gallon of
gasoline. The estimates out there are pretty wacky, but the numbers range from
6 kWh to 12 kWh per gallon. The higher numbers include quite a bit of natural
gas used directly in the process, and using natural gas directly is about twice
as efficient as making electricity with it.
All in all, it looks to me like we have about 100% overhead on battery
production energy, and say 8 kWh to make a gallon of gas, for about 25% overhead
on gasoline in energy terms (a gallon of gasoline holds roughly 34 kWh of
chemical energy, and 8/34 ≈ 24%).
Let's assign 1.3 lbs of CO2 per kWh of electricity, which is the 2009 US average
adjusted 7.5% for delivery losses.
Then a gallon of gasoline gives 19 lbs from the gasoline + 10.4 lbs from
making/transporting the gasoline.
A Tesla costs 1.3 × 38 ≈ 49 lbs CO2 to go 100 miles from electric charge + 49
lbs CO2 from amortizing the battery's production energy over its lifetime.
Tesla ≈ 99 lbs CO2 per 100 miles.
99 lbs of CO2 comes from 99/29.4 ≈ 3.4 gallons of fuel.
So using the US average CO2 load per kWh of electricity, loading the Tesla with
100% overhead for battery production, and loading gasoline with the overhead
from refining, mining, and transport, we get a Tesla S roughly equivalent to a
30 mpg car in CO2 emissions.
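The same accounting end to end, as a sketch (all inputs are the estimates given in this comment, not authoritative lifecycle data):

```typescript
// End-to-end CO2 accounting for the Tesla vs. gasoline, using the
// estimates from the comment above (not authoritative lifecycle data).
const lbCo2PerKwh = 1.3;       // 2009 US average, adjusted for delivery losses
const kwhPer100Miles = 38;

// Battery production amortizes to ~1 kWh per kWh delivered
// (430 kWh of production energy / ~430 charge cycles): 100% overhead.
const chargeLb = lbCo2PerKwh * kwhPer100Miles;      // ~49 lb / 100 mi
const batteryLb = chargeLb;                         // 100% overhead
const teslaLbPer100Mi = chargeLb + batteryLb;       // ~99 lb / 100 mi

// Gasoline: 19 lb/gal burned + ~8 kWh of refining energy per gallon.
const gasolineLbPerGallon = 19 + 8 * lbCo2PerKwh;   // ~29.4 lb/gal
const equivalentGallons = teslaLbPer100Mi / gasolineLbPerGallon; // ~3.4 gal
const equivalentMpg = 100 / equivalentGallons;      // ~30 mpg

console.log(`Tesla S ≈ ${equivalentMpg.toFixed(0)} mpg equivalent`);
```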
That number is actually extremely impressive for the class of car.
0Douglas_Knight10y
Your lithium-ion numbers match my understanding of batteries in general: they
cost as much energy to create as their lifetime capacity. That's why you can't
use batteries to smooth out erratic power sources like wind, or inflexible ones
like nuclear.
I'm skeptical that it's a good idea to focus on the energy used to create the
battery. There's energy used to create all the rest of the car, and certainly
energy to create the gasoline-powered car that you're using as a benchmark.
Production energy is difficult to compute and I think most people do such a bad
job that I think it's better to use price as a proxy.
0mwengler10y
You are right I did my math wrong.
To make it a little clearer to people following along: 80 lbs of CO2 are
generated to move a Tesla 100 miles using coal-generated electricity, and 80
lbs of CO2 to move a 25 mpg gasoline car 100 miles.
I'll address why the coal number is the right one in commenting on the next
comment.
-2drethelin10y
It's not difficult to walk away from an ocean? Please explain New Orleans.
Teslas (and other stuff getting power from the grid) currently run mostly on
coal, but ideally they can be run off (unrealistically) solar or wind or
(realistically) nuclear.
1mwengler10y
It's not difficult to walk away from an ocean? Please explain New Orleans.
Are you under the impression that the climate-change rise in ocean level will
look like a dike breaking? All references to sea levels rising report less than
1 cm a year, but let's say that rises 100-fold to 1 m/yr. New Orleans flooded a
few meters in at most a few days, about 1 m/day.
A factor of 365 in rate could well be the subtle difference between finding
yourself on the roof of a house and finding yourself living in a house a few
miles inland.
-5drethelin10y
-6Izeinwinter10y
6Viliam_Bur10y
If you think helping humanity is (in the long term) a futile effort, because
humans are so stupid they will destroy themselves anyway... I'd say the
organization you are looking for is CFAR.
So, how would you feel about making a lot of money and donating to CFAR? (Or
other organization with a similar mission.)
4ricketybridge10y
How cool, I've never heard of CFAR before. It looks awesome. I don't think I'm
capable of making a lot of money, but I'll certainly look into CFAR.
Edit: I just realized that CFAR's logo is at the top of the site. Just never
looked into it. I am not a smart man.
3Locaha10y
Taboo humanity.
2Slackson10y
I can't speak for you, but I would hugely prefer for humanity to not wipe itself
out, and even if it seems relatively likely at times, I still think it's worth
the effort to prevent it.
If you think existential risks are a higher priority than parasite removal,
maybe you should focus your efforts on those instead.
-2ricketybridge10y
Serious, non-rhetorical question: what's the basis of your preference? Anything
more than just affinity for your species?
I'm not 100% sure what you mean by parasite removal... I guess you're referring
to bad decision-makers, or bad decision-making processes? If so, I think
existential risks are interlinked with parasite removal: the latter causes or at
least hastens the former. Therefore, to truly address existential risks, you
need to address parasite removal.
6Slackson10y
If I live forever, through cryonics or a positive intelligence explosion before
my death, I'd like to have a lot of people to hang around with. Additionally,
the people you'd be helping through EA aren't the people who are fucking up the
world at the moment. Plus there isn't really anything directly important to me
outside of humanity.
Parasite removal refers to removing literal parasites from people in the third
world, as an example of one of the effective charitable causes you could donate
to.
0ricketybridge10y
EA? (Sorry to ask, but it's not in the Less Wrong jargon glossary and I haven't
been here in a while.)
Oh. Yes. I think that's important too, and it actually pulls on my heart strings
much more than existential risks that are potentially far in the future, but I
would like to try to avoid hyperbolic discounting and try to focus on the most
important issue facing humanity sans cognitive bias. But since human motivation
isn't flawless, I may end up focusing on something more immediate. Not sure yet.
6Emile10y
EA is Effective Altruism.
0ricketybridge10y
Ah, thanks. :)
2[anonymous]10y
I find it fascinating to observe.
-1ricketybridge10y
I assume you're talking about the facepalm-inducing decision-making? If so,
that's a pretty morbid fascination. ;-)
0DanielLC10y
If you're looking for ways to eliminate existential risk, then knowing that
humanity is about to kill itself no matter what you do, so that you're just
putting extinction off a few years instead of a few billion, matters. If you're
just looking for ways to help individuals, it's pretty irrelevant. I guess it
means that what matters is what happens now, instead of the flow-through
effects after a billion years, but it's still a big effect.
If you're suggesting that the life of the average human isn't worth living, then
saving lives might not be a good idea, but there are still ways to help keep the
population low.
Besides, if humanity was great at helping itself, then why would we need you? It
is precisely the fact that we allow extreme inequality to exist that means that
you can make a big difference.
0ChristianKl10y
I think you underrate the existential risks that come along with substantial
genetic or neurological enhancements. I'm not saying we shouldn't go there but
it's no easy subject matter. It requires a lot of thought to address it in a way
that doesn't produce more problems than it solves.
For example the toolkit that you need for genetic engineering can also be used
to create artificial pandemics which happen to be the existential risk most
feared by people in the last LW surveys.
When it comes to running out of fossil fuels we seem to be doing quite well.
Solar energy halves in cost every 7 years. The sun doesn't shine the whole day,
so there's still further work to be done, but it doesn't seem like an
insurmountable challenge.
0ricketybridge10y
It's true, I absolutely do. It irritates me. I guess this is because the ethics
seem obvious to me: of course we should prevent people from developing a
"supervirus" or whatever, just as we try to prevent people from developing
nuclear arms or chemical weapons. But steering towards a possibly better
humanity (or other sentient species) just seems worth the risk to me when the
alternative is remaining the violent apes we are. (I know we're hominids, not
apes; it's just a figure of speech.)
That's certainly a reassuring statistic, but a less reassuring one is that solar
power currently supplies less than one percent of global energy usage!! Changing
that (and especially changing that quickly) will be an ENORMOUS undertaking, and
there are many disheartening roadblocks in the way (utility companies, lack of
government will, etc.). The fact that solar itself is getting less expensive is
great, but unfortunately the changing over from fossil fuels to solar (e.g.
phasing out old power plants and building brand new ones) is still incredibly
expensive.
3ChristianKl10y
Of course the ethics are obvious. The road to hell is paved with good
intentions. 200 years ago burning all those fossil fuels to power steam engines
sounded like a really great idea.
If you simply try to solve problems created by people adopting technology by
throwing more technology at it, that's dangerous.
The wise way is to understand the problem you are facing and make specific
interventions that you believe will help. CFAR-style rationality training might
sound less impressive than changing around people's neurology, but it might be
an approach with far fewer ugly side effects.
CFAR style rationality training might seem less technological to you. That's
actually a good thing because it makes it easier to understand the effects.
It depends on what issue you want to address. Given how things are going,
technology evolves in a way where I don't think we have to fear that we will
have no energy when coal runs out. There's plenty of coal around, and green
energy evolves fast enough for that task.
On the other hand, we don't want to burn that coal. I want to eat tuna that's
not full of mercury, and there's already a recommendation from the European
Food Safety Authority against eating tuna every day because there's so much
mercury in it. I want fewer people getting killed by fossil fuel emissions. I
also want less greenhouse gas in the atmosphere.
If you want to do policy that pays off in 50 years looking at how things are at
the moment narrows your field of vision too much.
If solar continues its price development and costs 1/8 of what it does now in
21 years, you won't need government subsidies to get people to prefer solar
over coal. With another 30 years of deployment we might not burn any coal in 50
years.
If you think lack of government will or utility companies are the core problem,
why focus on changing human neurology? Addressing politics directly is more
straightforward.
When it comes to solar power, it might also be that nobody will use any solar
panels in 50 years.
0ricketybridge10y
It's a start, and potentially fewer side effects is always good, but think of it
this way: who's going to gravitate towards rationality training? I would bet
people who are already more rational than not (because it's irrational not to
want to be more rational). Since participants are self-selected, a massive part
of the population isn't going to bother with that stuff. There are similar
issues with genetic and neurological modifications (e.g. they'll be expensive,
at least initially, and therefore restricted to a small pool of wealthy people),
but given the advantages over things like CFAR I've already mentioned, it seems
like it'd be worth it...
I have another issue with CFAR in particular that I'm reluctant to mention here
for fear of causing a shit-storm, but since it's buried in this thread,
hopefully it'll be okay. Admittedly, I only looked at their website rather than
actually attending a workshop, but it seems kind of creepy and culty--rather
reminiscent of Landmark, for reasons not the least of which is the fact that
it's ludicrously, prohibitively expensive (yes, I know they have "fellowships,"
but surely not that many. And you have to use and pay for their lodgings? wtf?).
It's suggestive of mind control in the brainwashing sense rather than
rationality. (Frankly, I find that this forum can get that way too, complete
with shaming thought-stopping techniques (e.g. "That's irrational!"). Do you (or
anyone else) have any evidence to the contrary? (I know this is a little
off-topic from my question -- I could potentially create a workshop that I don't
find culty -- but since CFAR is currently what's out there, I figure it's
relevant enough.)
You could be right, but I think that's rather optimistic. This blog post speaks
to the problems behind this argument pretty well, I think. Its basic gist is
that the amount of energy it will take to build sufficient renewable energy
systems demands sacrificing a portion of the economy as is, to a point that no
politician would support.
0knb10y
Pretty sure you just feel like bragging about how much smarter you are than the
rest of the world. If you think people have to be as smart as you think you are
to be worth protecting, you are a bad person.
0skeptical_lurker10y
Well, there has not been a nuclear war yet (excluding WWII where deaths from
nuclear weapons were tiny in proportion), climate change has only been a known
risk for a few decades, and progress is being made with electric cars and solar
power. Things could be worse. Instead of moaning, propose solutions: what would
you do to stop global warming when so much depends on fossil fuels?
On a separate note, I agree with the kneejerk reactions, but its a temporary
cultural thing, caused partially by people basing morality on fiction. Get one
group of people to watch GATTACA and another to watch Ghost in the shell, and
they would have very different attitudes towards transhumanism. More
interestingly, cybergoths (people who like to dress as cyborgs as a fashion
statement) seem to be pretty open to discussions of actual brain-computer
interfaces, and there is music with H+ lyrics being released on actual record
labels and bought by people who like the music and are not transhumanists...
yet.
In conclusion, once enhancement becomes possible I think there will be a
sizeable minority of people who back it - in fact this has already happened
with modafinil and students.
3ricketybridge10y
Yes, and that seems truly damaging. I get the need to create conflict in
fiction, but it seems to come always at the expense of technological progress,
in a way I've never really understood. When I read Brave New World, I genuinely
thought it truly was a "brave new world." So what if some guy was conceived
naturally?? Why is that inherently superior? Sounds like status quo bias, if you
ask me. Buncha Luddite propaganda.
I've actually been working on a pro-technology, anti-Luddite text-based game.
Maybe working on it is in fact a good idea towards balancing out the propaganda
and changing public opinion...
0Izeinwinter10y
"Reactors by the thousand". Fissile and fertile materials are sufficiently
abundant that we could run an economy much larger than the present one entirely
on fission for millions of years, and doing so would have considerably lower
average health impacts and costs than what we are actually doing. - The fact
that we still burn coal is basically insanity, even disregarding climate change,
because of the sheer toxicity of the wastestream from coal plants. Mercury has
no halflife.
All this talk of P-zombies. Is there even a hint of a mechanism that anybody can think of to detect if something else is conscious, or to measure their degree of consciousness assuming it admits of degree?
I have spent my life figuring other humans are probably conscious purely on an Occam's razor kind of argument that I am conscious and the most straightforward explanation for my similarities and grouping with all these other people is that they are in relevant respects just like me. But I have always thought that increasingly complex simulations of hu...
Is this going to become an even harder distinction to make as tech continues to get better?
Wei once described an interesting scenario in that vein. Imagine you have a bunch of human uploads, computer programs that can truthfully say "I'm conscious". Now you start optimizing them for space, compressing them into smaller and smaller programs that have the same outputs. Then at some point they might start saying "I'm conscious" for reasons other than being conscious. After all, you can have a very small program that outputs the string "I'm conscious" without being conscious.
So you might be able to turn a population of conscious creatures into a population of p-zombies or Elizas just by compressing them. It's not clear where the cutoff happens, or even if it's meaningful to talk about the cutoff happening at some point. And this is something that could happen in reality, if we ask a future AI to optimize the universe for more humans or something.
Also this scenario reopens the question of whether uploads are conscious in the first place! After all, the process of uploading a human mind to a computer can also be viewed as a compression step, which can fold constant computations into literal constants, etc. The usual justification says that "it preserves behavior at every step, therefore it preserves consciousness", but as the above argument shows, that justification is incomplete and could easily be wrong.
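The degenerate endpoint of that compression is easy to exhibit concretely; a program with the right output on this one question and clearly no inner life:

```typescript
// The degenerate endpoint of the compression: same output as the
// upload on this one question, and clearly no inner life.
console.log("I'm conscious");
```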
Suppose you mean lossless compression. The compressed program has ALL the same
outputs to the same inputs as the original program.
Then if the uncompressed program running had consciousness and the compressed
program running did not, you have either proved or defined consciousness as
something which is not an output. If it is possible to do what you are
suggesting then consciousness has no effect on behavior, which is the
presumption one must make in order to conclude that p-zombies are possible.
From an evolutionary point of view, can a feature with no output, absolutely
zero effect on the interaction of the creature with its environment ever evolve?
There would be no mechanism for it to evolve, there is no basis on which to
select for it. It seems to me that to believe in the possibility of p-zombies is
to believe in the supernatural, a world of phenomena such as consciousness that
for some reason is not allowed to be listed as a phenomenon of the natural
world.
At the moment, I can't really distinguish how a belief that p-zombies are
possible is any different from a belief in the supernatural.
Years ago I thought an interesting experiment to do in terms of artificial
consciousness would be to build an increasingly complex verbal simulation of a
human, to the point where you could have conversations involving reflection with
the simulation. At that point you could ask it if it was conscious and see what
it had to say. Would it say "not so far as I can tell?"
The p-zombie assumption is that it would say "yeah I'm conscious duhh what kind
of question is that?" But the way a simulation actually gets built is you have
the list of requirements and you keep accreting code until all the requirements
are met. If your requirements included a vast array of features but NOT the
feature that it answer this question one way or another, conceivably you could
elicit an "honest" answer from your sim. If all such sims answer "yes," you
might conclude that somehow in the coll
3crazy8810y
I haven't thought about this stuff for a while and my memory is a bit hazy in
relation to it so I could be getting things wrong here but this comment doesn't
seem right to me.
First, my p-zombie is not just a duplicate of me in terms of my input-output
profile. Rather, it's a perfect physical duplicate of me. So one can deny the
possibility of zombies while still holding that a computer with the same
input-output profile as me is not conscious. For example, one could hold that
only carbon-based life could be conscious, deny the possibility of zombies
(deny that a physical duplicate of a conscious carbon-based lifeform could lack
consciousness), and still deny that an identical input-output profile implies
consciousness.
Second, if it could be shown that the same input-output profile could exist
even with consciousness removed, this doesn't show that consciousness can't
play a
causal role in guiding behaviour. Rather, it shows that the same input-output
profile can exist without consciousness. That doesn't mean that consciousness
can't cause that input-output profile in one system and something else cause it
in the other system.
Third, it seems that one can deny the possibility of zombies while accepting
that consciousness has no causal impact on behaviour (contra the last sentence
of the quoted fragment): one could hold that the behaviour causes the conscious
experience (or that the thing which causes the behaviour also causes the
conscious experience). One could then deny that something could be physically
identical to me but lack consciousness (that is, deny the possibility of
zombies) while still accepting that consciousness lacks causal influence on
behaviour.
Am I confused here or do the three points above seem to hold?
0mwengler10y
I think formally you are right.
But I think that if consciousness is essential to how we get important aspects
of our input-output map, then I think the chances of there being another
mechanism that works to get the same input-output map are equal to the chances
that you could program a car to drive from here to Los Angeles without using any
feedback mechanisms, by just dialing in all the stops and starts and turns and
so on that it would need ahead of time. Formally possible, but absolutely
bearing no real relationship to how anything that works has ever been built.
I am not a mathematician about these things, I am an engineer or a physicist in
the sense of Feynman.
1cousin_it10y
A few points:
1) Initial mind uploading will probably be lossy, because it needs to convert
analog to digital.
2) I don't know if even lossless compression of the whole input-output map is
going to preserve everything. Let's say you have ten seconds left to live. Your
input-output map over these ten seconds probably doesn't contain many
interesting statements about consciousness, but that doesn't mean you're allowed
to compress away consciousness. And even on longer timescales, people don't seem
to be very good at introspecting about consciousness, so all your beliefs about
consciousness might be compressible into a small input-output map. Or at least
we can't say that input-output map is large, unless we figure out more about
consciousness in the first place!
3) Even if consciousness plays a large causal role, I agree with crazy88's point
that consciousness might not be the smallest possible program that can fill that
role.
4) I'm not sure that consciousness is just about the input-output map. Doesn't
it feel more like internal processing? I seem to have consciousness even when
I'm not talking about it, and I would still have it even if my religion
prohibited me from talking about it. Or if I was mute.
0mwengler10y
It is not your actual input-output map that matters, but your potential. What is
uploaded must be information about the functional organization of you, not some
abstracted mapping function. If I have 10 s left to live and I am uploaded, my
upload should type this comment in response to your comment above even if it is
well more than 10 s since I was uploaded.
If with years of intense and expert schooling I could say more about
consciousness, then that is part of my input-output map. My upload would need to
have the same property.
Might not be, but probably is. Biological function seems to be very efficient,
with most bio features not equalled in efficiency by human manufactured systems
even now. The chances that evolution would have created consciousness if it
didn't need to seem slim to me. So as an engineer trying to plan an attack on
the problem, I'd expect consciousness to show up in any successful upload. If it
did not, that would be a very interesting result. But of course, we need a way
to measure consciousness to tell whether it is there in the upload or not.
To the best of my knowledge, no one anywhere has ever said how you go about
distinguishing between a conscious being and a p-zombie.
I mean your input-output map writ broadly. But again, since you don't even know
how to distinguish a conscious me from a p-zombie me, we are not in a position
yet to worry about the input-output map and compression, in my opinion.
If a simulation of me can be complete, able to attend graduate school and get 13
patents doing research afterwards, able to carry on an obsessive relationship
with a married woman for a decade, able to enjoy a convertible he has owned for
8 years, able to post on lesswrong posts much like this one, then I would be
shocked if it wasn't conscious. But I would never know whether it was conscious,
nor for that matter will I ever know whether you are conscious, until somebody
figures out how to tell the difference between a p-zombie and a conscious
being.
0cousin_it10y
Even if that's true, are you sure that AI will be optimizing us for the same mix
of speed/size that evolution was optimizing for? If the weighting of speed vs
size is different, the result of optimization might be different as well.
Can you expand what you mean by "writ broadly"? If we know that speech is not
enough because the person might be mute, how do you convince yourself that a
certain set of inputs and outputs is enough?
That said, if you also think that uploading and further optimization might
accidentally throw away consciousness, then I guess we're in agreement.
0mwengler10y
I was thinking of uploads in the Hansonian sense, a shortcut to "building" AI.
Instead of understanding AI/consciousness from the ground up and designing an
AI de novo, we simply copy an actual person. Copying the person, if successful,
produces a computer run person which seems to do the things the person would
have done under similar conditions.
The person is much simpler than the potential input-output map. The human
system has memory, so a semi-complete input-output map could not be generated
unless you started with a myriad of fresh copies of the person and ran them
through all sorts of conceivable lifetimes.
You seem to be presuming the upload would consist of taking the input-output map
and, like a smart compiler, trying to invent the least amount of code that would
produce that, or in another metaphor, try to optimally compress that
input-output map. I don't think this is at all how an upload would work.
Consider duplicating or uploading a car. Would you drive the car back and forth
over every road in the world under every conceivable traffic and weather
condition, and then take that very large input-output map and try to compress
and upload that? Or would you take each part of the car and upload it, and its
relationship, when assembled, to each other part in the car? You would do the
second; there are too many possible inputs to imagine the input-output approach
could be even vaguely as efficient.
So I am thinking of Hansonian uploads for Hansonian reasons, and so it is fair
to insist we do something which is more efficient, upload a copy of the machine
rather than a compressed input-output map, especially if the ratio of efficiency
is > 10^100:1.
I think I have explained that above. To characterize the machine by its
input-output map, you need to consider every possible input. In the case of a
person with memory, that means every possible lifetime: the input-output map is
gigantic, much bigger than the machine itself, which is the brain/body.
What I t
1cousin_it10y
Well, presumably you don't want an atom-by-atom simulation. You want to at least
compress each neuron to an approximate input-output map for that neuron,
observed from practice, and then use that. Also you might want to take some
implementation shortcuts to make the thing run faster. You seem to think that
all these changes are obviously harmless. I also lean toward that, but not as
strongly as you, because I don't know where to draw the line between harmless
and harmful optimizations.
0jefftk10y
Right; with lossless compression then you're not going to lose anything. So
cousin_it probably means lossy compression, like with jpgs and mp3s, smaller
versions that are very similar to what you had before.
1ChristianKl10y
It depends on whether you subscribe to materialism. If you do, then there's
nothing to measure. Consciousness might even be a tricky illusion, as Dennett
suggests.
If, on the other hand, you do believe that there's something beyond
materialism, there are plenty of frameworks to choose from that provide ideas
about what one could measure.
0mwengler10y
OMG then someone should get busy! Tell me what I can measure and if it makes any
kind of sense I will start working on it!
0ChristianKl10y
I do have a qualia for perceiving whether someone else is present in a
meditation or is absent-minded. It could be that it's some mental reaction that
picks up microgestures or some other thing that I don't consciously perceive
and summarizes that information into a qualia for mental presence.
Investigating how such a qualia works is what I would personally do if I wanted
to investigate consciousness.
But you probably have no such qualia, so you either need someone who has or
develop it yourself. In both cases that probably means seeking a good meditation
teacher.
It's a difficult subject to talk about in a medium like this, where people who
are into a spiritual framework that has some model of what consciousness
happens to be have phenomenological primitives that the audience I'm addressing
doesn't have. In my experience most of the people who I consider capable in
that regard are very unwilling to talk about details with people who don't have
the phenomenological primitives to make sense of them. Instead of answering a
question directly, a Zen teacher might give you a koan and tell you to come
back in a month when you have built the phenomenological primitives to
understand it, except that he doesn't tell you about phenomenological
primitives.
0shminux10y
I don't know of a human-independent definition of consciousness, do you? If not,
how can one say that "something else is conscious"? So the statement
will only make sense once there is a definition of consciousness not relying on
being a human or using one to evaluate it. (I have a couple ideas about that,
but they are not firm enough to explicate here.)
2mwengler10y
I don't know of ANY definition of consciousness which is testable,
human-independent or not.
1Scott Garrabrant10y
Integrated Information Theory is one attempt at a definition. I read about it a
little, but not enough to determine if it is completely crazy.
1fluchess10y
IIT provides a mathematical approach to measuring consciousness. It is not
crazy, and has a significant number of good papers on the topic. It is
human-independent.
0Viliam_Bur10y
I don't understand it, but from reading the wikipedia summary it seems to me it
measures the complexity of the system. Complexity is not necessarily
consciousness.
According to this theory, what is the key difference between a human brain,
and... let's say a hard disk of the same capacity, connected to a
high-resolution camera? Let's assume that the data from the camera are being
written in real time to pseudo-random parts of the hard disk. The pseudo-random
parts are chosen by calculating a checksum of the whole hard disk. This system
obviously is not conscious, but seems complex enough.
2fluchess10y
IIT proposes that consciousness is integrated information.
The key difference between a brain and the hard disk is that the disk has no
way of knowing what it is actually sensing. The brain can tell the difference
between many more senses, and can receive and use more forms of information.
The camera is not conscious of the fact that it is sensing light and colour.
This article is a good introduction to the topic, and the photodiode example in
the paper is the simple version of your question:
http://www.biolbull.org/content/215/3/216.full
1Viliam_Bur10y
Thanks! The article was good. At this moment, I am... not convinced, but also
not able to find an obvious error.
Paraphrased from #lesswrong: "Is it wrong to shoot everyone who believes Tegmark level 4?" "No, because, according to them, it happens anyway". (It's tongue-in-cheek, for you humorless types.)
I am still seeking players for a multiplayer game of Victoria 2: Hearts of Darkness. We have converted from an earlier EU3 game, itself converted from CK2; the resulting history is very unlike our own. We are currently in 1844:
Islamic Spain has publicly declared half of Europe to be dar al Harb, liable to attack at any time, while quietly seeking the return of its Caribbean colonies by diplomatic means.
The Christian powers of Europe discuss the partition of Greece-across-the-sea, the much-decayed final remnant of the Roman Empire, which nonetheless rule
Additionally, playing in an MP campaign offers all sorts of opportunities for
sharpening your writing skills through stories set in the alternate history!
0Vaniver10y
If you play in this game, you get to play with not one, but two LWers! I am
Spain, beacon of learning, culture, and industry.
0bramflakes10y
Other than the alternate start, are there any mods?
0RolfAndreassen10y
Yes, we have redistributed the RGOs for great balance, and stripped out the
nation-specific decisions.
It's a repost from last week.
Though rereading it, does anyone know whether Zach knows about MIRI and/or
lesswrong? I expect "unfriendly human-created Intelligence " to parse to AI with
bad manners to people unfamiliar with MIRI's work, which is probably not what
the scientist is worried about.
1Lumifer10y
I expect "unfriendly human-created Intelligence " to parse to HAL and Skynet to
regular people.
1Vulture10y
The use of "friendly" to mean "non-dangerous" in the context of AI is, I
believe, rather idiosyncratic.
I'm interested in learning pure math, starting from precalculus. Can anyone give advise on what textbooks I should use?
Here's my current list (a lot of these textbooks were taken from the MIRI and LW's best textbook list):
I advise that you read the first 3 books on your list, and then reevaluate. If you do not know any more math than what is generally taught before calculus, then you have no idea how difficult math will be for you or how much you will enjoy it.
It is important to ask what you want to learn math for. The last four books on your list are categorically different from the first four (or at least three of the first four). They are not a random sample of pure math, they are specifically the subset of pure math you should learn to program AI. If that is your goal, the entire calculus sequence will not be that useful.
If your goal is to learn physics or economics, you should learn calculus, statistics, analysis.
If you want to have a true understanding of the math that is built into rationality, you want probability, statistics, logic.
If you want to learn what most math PhDs learn, then you need things like algebra, analysis, topology.
Thanks, I made an edit you might not have seen, I mentioned I do have experience
with calculus (differential, integral, multi-var), discrete math (basic graph
theory, basic proofs), just filling in some gaps since it's been awhile since
I've done 'math'. I imagine I'll get through the first two books quickly.
Can you recommend some algebra/analysis/topology books that would be a natural
progression of the books I listed above?
2Nisan10y
In my experience, "analysis" can refer to two things: (1) A proof-based calculus
course; or (2) measure theory, functional analysis, advanced partial
differential equations. Spivak's Calculus is a good example of (1). I don't have
strong opinions about good texts for (2).
2Nisan10y
Dummit & Foote's Abstract Algebra is a good algebra book and Munkres' Topology
is a good topology book. They're pretty advanced, though. In university one
normally one tackles them in late undergrad or early grad years after taking
some proof-based analysis and linear algebra courses. There are gentler
introductions to algebra and topology, but I haven't read them.
0cursed10y
Great, I'll look into the Topology book.
2gjm10y
A couple more topology books to consider: "Basic Topology" by Armstrong, one of
the Springer UTM series; "Topology" by Hocking and Young, available quite cheap
from Dover. I think I read Armstrong as a (slightly but not extravagantly
precocious) first-year undergraduate at Cambridge. Hocking and Young is less fun
and probably more of a shock if you've been away from "real" mathematics for a
while, but goes further and is, as I say, cheap.
2Vladimir_Nesov10y
Given how much effort it takes to study a textbook, cost shouldn't be a
significant consideration (compare a typical cost per page with the amount of
time per page spent studying, if you study seriously and not just cram for
exams; the impression from the total price is misleading). In any case, most
texts can be found online.
2gjm10y
And yet, sometimes, it is. (Especially for impecunious students, though that
doesn't seem to be quite cursed's situation.)
Some people may prefer to avoid breaking the law.
8Nornagest10y
There's some absurd recency effects in textbook publishing. In well-trodden
fields it's often possible to find a last-edition textbook for single-digit
pennies on the dollar, and the edition change will have close to zero impact if
you're doing self-study rather than working a highly exact problem set every
week.
(Even if you are in a formal class, buying an edition back is often worth the
trouble if you can find the diffs easily, for example by making friends with
someone who does have the current edition. I did that for a couple semesters in
college, and pocketed close to $500 before I started getting into textbooks
obscure enough not to have frequent edition changes.)
1Scott Garrabrant10y
I am not going to be able to recommend any books. I learned all my math directly
from professors' lectures.
What is your goal in learning math?
If you want to learn for MIRI purposes, and you've already seen some math, then
relearning calculus might not be worth your time.
0cursed10y
I have a degree in computer science, looking to learn more about math to apply
to a math graduate program and for fun.
2Scott Garrabrant10y
My guess is that if you have an interest in computer science, you will have the
most fun with logic and discrete math, and will not have much fun with the
calculus.
If you are serious about getting into a math graduate program, then you have to
learn the calculus stuff anyway, because it is a large part of the Math GRE.
2lmm10y
It's worth mentioning that this is a US peculiarity. If you apply to a program
elsewhere there is a lot less emphasis on calculus.
0somervta10y
But you should still know the basics of calculus (and linear algebra) - at
least the equivalent of calc 1, 2 & 3.
2Nisan10y
Maybe the most important thing to learn is how to prove things. Spivak's
Calculus might be a good place to start learning proofs; I like that book a lot.
1ricketybridge10y
For what it's worth, I'm doing roughly the same thing, though starting with
linear algebra. At first I started with multivariable calc, but when I found it
too confusing, people advised me to skip to linear algebra first and then return
to MVC, and so far I've found that that's absolutely the right way to go. I'm
not sure why they're usually taught the other way around; LA definitely seems
more like a prereq of MVC.
I tried to read Spivak's Calc once and didn't really like it much; I'm not sure
why everyone loves it. Maybe it gets better as you go along, idk.
I've been doing LA via Gilbert Strang's lectures on the MIT Open CourseWare, and
so far I'm finding them thoroughly fascinating and charming. I've also been
reading his book and just started Hoffman & Kunze's Linear Algebra, which
supposedly has a bit more theory (which I really can't go without).
Just some notes from a fellow traveler. ;-)
0Vladimir_Nesov10y
"Not liking" is not very specific. It's good all else equal to "like" a book,
but all else is often not equal, so alternatives should be compared from other
points of view as well. It's very good for training in rigorous proofs at
introductory undergraduate level, if you do the exercises. It's not necessarily
enjoyable.
It's a much more advanced book, more suitable for a deeper review somewhere at
the intermediate or advanced undergraduate level. I think Axler's "Linear
Algebra Done Right" is better as a second linear algebra book (though it's less
comprehensive), after a more serious real analysis course (i.e. not just Spivak)
and an intro complex analysis course.
1ricketybridge10y
Oh yeah, I'm not saying Spivak's Calculus doesn't provide good training in
proofs. I really didn't even get far enough to tell whether it did or not, in
which case, feel free to disregard my comment as uninformed. But to be more
specific about my "not liking", I just found the part I did read to be more
opaque than engaging or intriguing, as I've found other texts (like Strang's
Linear Algebra, for instance).
Edit: Also, I'm specifically responding to statements that I thought referring
to liking the book in the enjoyment sense (expressed on this thread and
elsewhere as well). If that's not the kind of liking they meant, then my comment
is irrelevant.
Damn, really?? But I hate it when math books (and classes) effectively say
"assume this is true" rather than delve into the reason behind things, and those
reasons aren't explained until 2 classes later. Why is it not more pedagogically
sound to fully learn something rather than slice it into shallow,
incomprehensible layers?
1Qiaochu_Yuan10y
I think people generally agree that analysis, topology, and abstract algebra
together provide a pretty solid foundation for graduate study. (Lots of
interesting stuff that's accessible to undergraduates doesn't easily fall under
any of these headings, e.g. combinatorics, but having a foundation in these
headings will equip you to learn those things quickly.)
For analysis the standard recommendation is baby Rudin, which I find dry, but it
has good exercises and it's a good filter: it'll be hard to do well in, say,
math grad school if you can't get through Rudin.
For point-set topology the standard recommendation is Munkres, which I generally
like. The problem I have with Munkres is that it doesn't really explain why the
axioms of a topological space are what they are and not something else; if you
want to know the answer to this question you should read Vickers. Go through
Munkres after going through Rudin.
I don't have a ready recommendation for abstract algebra because I mostly didn't
learn it from textbooks. I'm not all that satisfied with any particular abstract
algebra textbooks I've found. An option which might be a little too hard but
which is at least fairly comprehensive is Ash, which is also freely legally
available online.
For the sake of exposure to a wide variety of topics and culture I also
strongly, strongly recommend that you read the Princeton Companion. This is an
amazing book; the only bad thing I have to say about it is that it didn't exist
when I was a high school senior. I have other reading recommendations along
these lines (less for being hardcore, more for pleasure and being exposed to
interesting things) at my blog.
4Vladimir_Nesov10y
I feel that it's only good as a test or for review, and otherwise a bad
recommendation, made worse by its popularity (which makes its flaws harder to
take seriously), and the widespread "I'm smart enough to understand it, so it
works for me" satisficing attitude. Pugh's "Real Mathematical Analysis" is a
better alternative for actually learning the material.
0MrMind10y
I would preface any textbook on topology with the first chapter of Ishan's
"Differential geometry". It builds the reason for studying topology and why the
axioms have the shape they have in a wonderful crescendo, and at the end even
dabbles a bit into nets (non-point-set topology). It's very clear and builds a
lot of intuition.
Also, as a side dish in a topology lunch, the peculiar "Counterexamples in
topology".
1Vladimir_Nesov10y
Keep a file with notes about books. Start with Spivak's "Calculus" (do most of
the exercises at least in outline) and Polya's "How to Solve It", to get a
feeling of how to understand a topic using proofs, a skill necessary to properly
study texts that don't have exceptionally well-designed problem sets.
(Courant&Robbins's "What Is Mathematics?" can warm you up if Spivak feels too
dry.)
Given a good text such as Munkres's "Topology", search for anything that could
be considered a prerequisite or an easier alternative first. For example,
starting from Spivak's "Calculus", Munkres's "Topology" could be preceded by
Strang's "Linear Algebra and Its Applications", Hubbard&Hubbard's "Vector
Calculus", Pugh's "Real Mathematical Analysis", Needham's "Visual Complex
Analysis", Mendelson's "Introduction to Topology" and Axler's "Linear Algebra
Done Right". But then there are other great books that would help to appreciate
Munkres's "Topology", such as Flegg's "From Geometry to Topology", Stillwell's
"Geometry of Surfaces", Reid&Szendrői's "Geometry and Topology", Vickers's
"Topology via Logic" and Armstrong's "Basic Topology", whose reading would
benefit from other prerequisites (in algebra, geometry and category theory) not
strictly needed for "Topology". This is a downside of a narrow focus on a few
harder books: it leaves the subject dry. (See also this comment.)
0iarwain110y
I'm doing precalculus now, and I've found ALEKS to be interesting and useful.
For you in particular it might be useful because it tries to assess where you're
up to and fill in the gaps.
I also like the Art of Problem Solving books. They're really thorough, and if
you want to be very sure you have no gaps then they're definitely worth a look.
Their Intermediate Algebra book, by the way, covers a lot of material normally
reserved for Precalculus. The website has some assessments you can take to see
what you're ready for or what's too low-level for you.
0[anonymous]10y
Given your background and your wish for pure math, I would skip the calculus
and applications of linear algebra and go directly to basic set theory, then
abstract algebra, then mathy linear algebra or real analysis, then topology.
Or do discrete math directly if you already know how to write a proof.
I am going to organize a coaching course to learn Javascript + Node.js.
My particular technology of choice is node.js because:
If starting from scratch, having to learn just one language for both frontend and backend makes sense. Javascript is the only language you can use in a browser, and you will have to learn it anyway. They say it's a kind of Lisp or Scheme in disguise and a pretty cool language by itself.
Node.js is a modern asynchronous web framework, made by running Javascript code server-side on Google's open-source V8 JavaScript Engine. It seems to b
I would suggest using AngularJS instead, since it can be purely client-side
code; you don't need to deal with anything server-side.
There are also some nice online development environments like codenvy that can
provide a pretty rich environment and I believe have some collaborative
features too (instead of using dropbox, doodle and slideshare, maybe).
If all those technologies seem intimidating, some strategies:
 * Focus on a subset, e.g. only HTML and CSS
* Use Anki a lot - I've used anki to put in git commands, AngularJS concepts
and CSS tricks so that even if I wasn't actively working on a project using
those, they'd stay at the back of my mind.
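Returning to the Node.js pitch above: for concreteness, a server can be as small as the following minimal sketch, which uses only Node's built-in http module (shown in TypeScript, which compiles directly to the JavaScript discussed here):

```typescript
import { createServer } from "http";

// Minimal Node.js HTTP server: the same language serves the backend
// and (via the browser) the frontend, which is the point made above.
const server = createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Hello from Node\n");
});

server.listen(3000, () => {
  console.log("Listening on http://localhost:3000");
});
```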
Has anyone else had one of those odd moments when you've accidentally confirmed reductionism (of a sort) by unknowingly responding to a situation almost identically to the last time or times you encountered it? For my part, I once gave the same condolences to an acquaintance who was living with someone we both knew to be very unpleasant, and also just attempted to add the word for "tomato" in Lojban to my list of words after seeing the Pomodoro technique mentioned.
A freaky thing I once saw... when my daughter was about 3, there were certain
things she responded to verbally. I can't remember what the thing was in this
example, but it was something like me asking her "who is your rabbit?" and her
replying "Kisses" (which was the name of her rabbit).
I had videoed some of this exchange and was playing it on a TV with her in the
room. I was appalled to hear her responding "Kisses" upon hearing me on the TV
saying "who is your favorite rabbit." Her response was extremely similar to her
response on the video, with tremendous overlap in timing, tone, and inflection.
Maybe 20 to 50 ms off in timing (it almost sounded like unison).
I really had the sense that she was a machine and it did not feel good.
After a brain surgery, my father developed Anterograde amnesia. Think Memento by Chris Nolan. His reactions to different comments/situations were always identical. If I were to mention a certain word, it would always invoke the same joke. Seeing his wife wearing a certain dress always produces the same witty comment. He was also equally amused by his wittiness every time.
For several months after the surgery he had to be kept on tight watch, and was prone to just do something that was routine pre-op, so we found a joke he finds extremely funny and which he hasn't heard before the surgery, and we would tell it every time we want him to forget where he was going. So, he would laugh for a good while, get completely disoriented, and go back to his sofa.
For a long while, we were unable to convince him that he had a problem, or even that he had had the surgery (he would explain the scar away through some fantasy). And even when we managed, it lasted only for a minute or two. Since then, I've developed several signals I would use if I found myself in an isomorphic situation. I had already read HPMoR by that time, but have discarded Harry's lip-biting as mostly pointless in real life.
These are both pretty much exactly what I'm thinking of! The feeling that
someone (or you!) is/are a terrifyingly predictable black box.
0DanielLC10y
My goal in life is to become someone so predictable that you can figure out what
I'll do just by calculating what choice would maximize utility.
0Bayeslisk10y
That seems eminently exploitable and consequently extremely dangerous. Safety
and unexpected delight lie in unpredictability.
2BloodyShrimp10y
This doesn't seem related to reductionism to me, except in that most
reductionists don't believe in Knightian free will.
0Bayeslisk10y
Sort of, in the sense of human minds being more like fixed black boxes than one
might like to think. What's Knightian free will, though?
0BloodyShrimp10y
Knightian uncertainty is uncertainty where probabilities can't even be applied.
I'm not convinced it exists. Some people seem to think free will is rescued by
it; that the human mind could be unpredictable even in theory, and this somehow
means it's "you" "making choices". This seems like deep confusion to me, and so
I'm probably not expressing their position correctly.
Reductionism could be consistent with that, though, if you explained the mind's
workings in terms of the simplest Knightian atomic thingies you could.
0Bayeslisk10y
Can you give me some examples of what some people think constitutes Knightian
uncertainty? Also: what do they mean by "you"? They seem to be postulating
something supernatural.
1BloodyShrimp10y
Again, I'm not a good choice for an explainer of this stuff, but you could try
http://www.scottaaronson.com/blog/?p=1438
0Bayeslisk10y
Thanks! I'll have a read through this.
0BloodyShrimp10y
I decided I should actually read the paper myself, and... as of page 7, it sure
looks like I was misrepresenting Aaronson's position, at least. (I had only
skimmed a couple Less Wrong threads on his paper.)
2NancyLebovitz10y
In my case, it seems more likely that the other person will remember that I'd
said the same thing before.
0Bayeslisk10y
In mine, too, at least for the first few seconds. Otherwise, knowing I had
already responded a certain way, I would probably respond differently.
I am interested in this, or possibly a different closely-related thing.
I accept the logical arguments underlying utilitarianism ("This is the morally
right thing to do.") but not the actionable consequences. ("Therefore, I should
do this thing.") I 'protect' only my social circle, and have never seen any
reason why I should extend that.
5blacktrance10y
What does "the morally right thing to do" mean if not "the thing you should do"?
3VAuroch10y
To rephrase: I accept that utilitarianism is the correct way to extrapolate our
moral intuitions into a coherent generalizable framework. I feel no 'should'
about it -- no need to apply that framework to myself -- and feel no cognitive
dissonance when I recognize that an action I wish to perform is immoral, if it
hurts only people I don't care about.
0mwengler10y
Ultimately I think that is the way all utilitarianism works. You define an in
group of people who are important, effectively equivalently important to each
other and possibly equivalently important to yourself.
For most modern utilitarians, the in-group is all humans. Some modern
utilitarians put mammals with relatively complex nervous systems in the group,
and for the most part become vegetarians. Others put everything with a nervous
system in there and for the most part become vegans. Very darn few put all life
forms in there as they would starve. Implicit in this is that all life forms
would place negative utility on being killed to be eaten which may be reasonable
or may be projection of human values on to non-human entities.
But logically it makes as much sense to shrink the group you are utilitarian
about as to expand it. Only Americans seems like a popular one in the US when
discussing immigration policy. Only my friends and family has a following. Only
LA Raiders fans or Manchester United fans seems to also gather its proponents.
Around here, I think you find people trying to put all thinking things, even
mechanical, in the in-group, perhaps only all conscious thinking things. Maybe
the way to create a friendly AI would be to make sure the AI never values its
own life more than it values its own death, then we would always be able to turn
it off without it fighting back.
Also, I suspect in reality you have a sliding scale of acceptance, that you
would not be morally neutral about killing a stranger on the road and taking
their money if you thought you could get away with it. But you certainly won't
accord the stranger the full benefit of your concern, just a partial benefit.
1VAuroch10y
Oh, there are definitely gradations. I probably wouldn't do this, even if I
could get away with it. I don't care enough about strangers to go out of my way
to save them, but neither do I want to kill them. On the other hand, if it was a
person I had an active dislike for, I probably would. All of which is basically
irrelevant, since it presupposes the incredibly unlikely "if I thought I could
get away with it".
1deskglass10y
I used to think I thought that way, but then I had some opportunities to
casually steal from people I didn't know (and easily get away with it), but I
didn't. With that said, I pirate things all the time despite believing that
doing so frequently harms the content owners a little.
0VAuroch10y
I have taken that precise action against someone who mildly annoyed me. I
remember it (and the perceived slight that motivated it), but feel no guilt over
it.
0Squark10y
By utilitarian you mean:
1. Caring about all people equally
2. Hedonism, i.e. caring about pleasure/pain
3. Both of the above (=Bentham's classical utilitarianism)?
In any case, what answer do you expect? What would constitute a valid reason?
What are the assumptions from which you want to derive this?
0[anonymous]10y
I mean this.
I do not expect any specific answer.
For me personally, probably nothing, since, apparently, I neither really care
about people (I guess I overintellectualized my empathy), nor about pleasure
and suffering. The question, however, was asked mostly to better understand
other people.
I don't know any.
0Scott Garrabrant10y
You can band together lots of people to work together towards the same
utilitarianism.
0[anonymous]10y
i.e. change happiness-suffering to something else?
0Scott Garrabrant10y
I don't know how to parse that question.
I am claiming that people with no empathy at all can agree to work towards
utilitarianism, for the same reason they can agree to cooperate in the repeated
prisoner's dilemma.
2Lumifer10y
I don't understand why this is an argument in favor of utilitarianism.
A bunch of people can agree to work towards pretty much anything, for example
getting rid of the unclean/heretics/untermenschen/etc.
0Scott Garrabrant10y
I think you are taking this sentence out of context. I am not trying to present
an argument in favor of utilitarianism. I was trying to explain why empathy is
not necessary for utilitarianism.
I interpreted the question as "Why (other than my empathy) should I try to
maximize other people's utility?"
2Lumifer10y
Right, and here is your answer:
I don't understand why this is a reason "to maximize other people's utility".
0Scott Garrabrant10y
You can entangle your own utility with other's utility, so that what maximizes
your utility also maximizes their utility and vice versa. Your terminal value
does not change to maximizing other people's utility, but it becomes a side
effect.
2Lumifer10y
So you are basically saying that sometimes it is in your own self-interest ("own
utility") to cooperate with other people. Sure, that's a pretty obvious
observation. I still don't see how it leads to utilitarianism.
If your terminal value is still self-interest but it so happens that there is a
side effect of increasing other people's utility -- that doesn't look like
utilitarianism to me.
0Scott Garrabrant10y
I was only trying to make the obvious observation.
Just trying to satisfy your empathy does not really look like pure
utilitarianism either.
0[anonymous]10y
There's no need to parse it anymore, I didn't get your comment initially.
I agree theoretically, but I doubt that utilitarianism can bring more value to
an egoistic agent than being egoistic without regard to other humans' happiness.
1Scott Garrabrant10y
I agree in the short term, but many of my long term goals (e.g. not dying)
require lots of cooperation.
-4Viliam_Bur10y
I guess the reason is maximizing one's utility function, in general. Empathy is
just one component of the utility function (for those agents who feel it).
If multiple agents share the same utility function, and they know it, it should
make their cooperation easier, because they only have to agree on facts and
models of the world; they don't have to "fight" against each other.
2[anonymous]10y
Apparently, we mean different things by "utilitarianism". I meant a moral system
whose terminal goal is to maximize pleasure and minimize suffering in the whole
world, while you're talking about an agent's utility function, which may have no
regard for pleasure and suffering.
I agree, though, that it makes sense to try to maximize one's utility function,
but to me that's just egoism.
Criticism's well and good, but 140 characters or less of out-of-context
quotation doesn't lend itself to intelligent criticism. From the looks of that
feed, about half of it is inferential distance problems and the other half is
sacred cows, and neither one's very interesting.
If we can get anything from it, it's a reminder that killing sacred cows has
social consequences. But I'm frankly tired of beating that particular drum.
0[anonymous]10y
Things like this merely mean that you exist and someone else has noticed it.
EDIT: This particular site does margin trading differently to how I thought margin trading normally works. So... disregard everything I just said?
Bitcoin economy and a possible violation of the efficient market hypothesis.
With the growing maturity of the Bitcoin ecosystem, there has appeared a website which allows leveraged trading, meaning that people who think they know which way the price is going can borrow money to increase their profits. At the time of writing, the bid-ask spread for the rates offered is 0.27% - 0.17% per day, which is 166% - 86% per ... (read more)
The exchange can just fail in a large variety of ways and close (go bankrupt).
If you're not "insured" you are exposed to the trading risk and insurance costs
what, about 30%? and, of course, it doesn't help you with the exchange
counterparty risk.
0skeptical_lurker10y
30% per annum? Even if this were true (and it sounds quite high, as I mentioned
with Gwern's 1% per month estimate), then providing liquidity with them would
still be +EV (86% increase vs 30% risk).
3Lumifer10y
Um, did you make your post without actually reading the Bitfinex site about how
it works..?
2skeptical_lurker10y
Upvoted for pointing out my stupid mistake (I assumed it works in a certain way,
and skipped reading the vital bit).
2skeptical_lurker10y
Ahh, oops. I think I missed the last line... I thought if someone exceeded their
margin, they were forced to close their position so that no money was lost.
0niceguyanon10y
There is risk baked in from the fact that depositors are on the hook if trades
cannot be unwound quickly enough, and because this is Bitcoin, where volatility
is crazy, there is even more of this risk.
For example, assume you lend money for some trader to go long, and now say that
prices suddenly drop so quickly that it puts the trader beyond a margin call --
in fact it puts him at liquidation. Uh oh... the trader's margin wallet is now
depleted. Who makes up the balance? The lenders. They actually do mention this
on their website. But they don't tell you what the margin call policy is, and
that is a really important part of the risk. If they allow a trader to put up
only $50 of a $100 position and call it in when the trader's portion hits 25%,
that would be normal for something like index equities but pretty insane for
something like Bitcoin.
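To make the lenders' exposure concrete, here's a toy sketch of the arithmetic. All numbers are invented for illustration; the thread doesn't tell us the exchange's actual margin policy.

    def liquidation_drop(trader_margin, position, maintenance_frac):
        # Price drop at which the trader's equity falls to
        # maintenance_frac of the posted margin, triggering liquidation.
        return trader_margin * (1 - maintenance_frac) / position

    def lender_loss(trader_margin, position, price_drop):
        # Loss that falls on lenders if the price gaps down by
        # price_drop before the position can be force-closed.
        borrowed = position - trader_margin
        equity = position * (1 - price_drop) - borrowed
        return max(0.0, -equity)

    # Trader posts $50 of a $100 position, margin call at 25%:
    print(liquidation_drop(50, 100, 0.25))  # 0.375 -> liquidate at -37.5%
    # If the price gaps 60% down before liquidation executes, the
    # trader's $50 is wiped out and lenders eat the shortfall:
    print(lender_loss(50, 100, 0.60))       # 10.0

The gap between the margin-call trigger and the point where equity hits zero is the lenders' only cushion, and Bitcoin can blow through that much of a position's value very fast.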
How does solipsism change one's pattern of behavior, compared to other things being alive? I noticed that when you take enlightened self-interest into account, it seems that many behaviors don't change regardless of whether the people around you are sentient or not.
For example, if you steal from your neighbor, you can observe that you run the risk of him catching you, and thus you having to deal with consequences that will be painful or unpleasant. Similarly, assuming you're a healthy person, you have a conscience that makes you feel bad about certain thin... (read more)
I'm certainly comfortable with violent fantasy when the roles are acted out.
This suggests to me that if I were convinced that certain person-seeming things
were not alive, not conscious, not what they seemed, this might tip me into
some violent behaviors. I think at minimum I would experiment with it: try a
slap here, a punch there. And where I went from there would depend on how it
felt, I suppose.
Also, I would almost certainly steal more stuff if I were convinced that
everything was landscape.
0hyporational10y
In fantasies you're in total control. The same applies to video games, for
example. The risk of severe retaliation isn't real.
3ahbwramc10y
Well, the obvious difference would be that non-solipsists might care about what
happens after they die, and act accordingly.
1MrMind10y
When I was younger and studying analytical philosophy, I noticed the same thing.
Unless solipsism morphs into apathy, there are still 'representations' you can't
control and that you can care about. Unless it alters your values, there should
be no difference in behaviour either.
0DanielLC10y
If I didn't care about other people, I wouldn't worry about donating to
charities that actually help people. I'd donate a little to charities that make
me look good, and if I'm feeling guilty and distracting myself doesn't seem to
be cost-effective, I'd donate to charities that make me feel good. I would still
keep quite a bit of my money for myself, or at least work less.
As it is, I've figured that other people matter, and some of them are a lot
cheaper to make happy than me, so I decided that I'm going to donate pretty much
everything I can to the best charity I can find.
0[anonymous]10y
If there were no other beings that could consciously suffer, I would probably
adopt a morality that would be utterly horrible in the real world. Video games
might hint at how solipsism would make you behave.
I participated in an economics experiment a few days ago, and one of the tasks was as follows. Choose one of the following gambles where each outcome has 50% probability
Option 1: $4 definitely
Option 2: $6 or $3
Option 3: $8 or $2
Option 4: $10 or $1
Option 5: $12 or $0
I chose option 5 as it has the highest expected value. Asymptotically this is the best option, but for a single trial, is it still the best option?
Technically, it depends on your utility function. However, even without knowing your utility function, I can say that for such a low amount of money, your utility function is very close to linear, and option 5 is the best.
Here's one interesting way of viewing it that I once read:
Suppose that the option you chose, rather than being a single trial, were actually 1,000 trials. Then, risk averse or not, Option 5 is clearly the best approach. The only difficulty, then, is that we're considering a single trial in isolation. However, when you consider all such risks you might encounter in a long period of time (e.g. your life), then the situation becomes much closer to the 1,000 trial case, and so you should always take the highest expected value option (unless the amounts involved are absolutely huge, as others have pointed out).
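A quick simulation makes the 1,000-trial point vivid. This is just a sketch, assuming the five gambles exactly as stated, each outcome at 50%:

    import random

    # The five gambles from the experiment: (heads payout, tails payout),
    # each outcome with 50% probability.
    options = {1: (4, 4), 2: (6, 3), 3: (8, 2), 4: (10, 1), 5: (12, 0)}

    rng = random.Random(0)
    for opt, (heads, tails) in options.items():
        ev = (heads + tails) / 2
        total = sum(rng.choice((heads, tails)) for _ in range(1000))
        print(f"option {opt}: EV {ev:4.1f}/trial, 1000-trial total {total}")
    # Option 5's total almost always comes out on top over 1000 trials;
    # the single-trial risk has been averaged away.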
As a poker player, the idea we always batted back and forth was that Expected
Value doesn't change over shorter sample sizes, including a single trial.
However you may have a risk of ruin or some external factor (like if you're poor
and given the option of being handed $1,000,000 or flipping a coin to win
$2,000,001).
Barring that, if you're only interested in maximizing your result, you should
follow EV. Even in a single trial.
3Lumifer10y
That depends on your utility function, specifically your risk tolerance. If
you're risk-neutral, option 5 has the highest value, otherwise it depends.
1Dagon10y
Clearly option 5 has the highest mean outcome. If you value money linearly (that
is, $12 is exactly 3 times as good as $4) and there's no special utility
threshold along the way (or disutility at $0), it's the best option.
For larger values, your value for money may be nonlinear (meaning: the
difference between $0 and $50k may be much much larger than the difference
between $500k and $550k to your happiness), and then you'll need to convert the
payouts to subjective value before doing the calculation. Likewise if you're in
a special circumstance where some threshold has special value to you - if you
need $3 for bus fare home, then options 1 and 2 become much more attractive.
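For instance, here is the "convert payouts to subjective value first" step as a sketch, using a square-root utility purely as a stand-in for risk aversion (nothing in the experiment specifies this function):

    from math import sqrt

    # Expected utility of each 50/50 gamble under u(x) = sqrt(x),
    # a toy concave utility where the first dollars matter more.
    options = {1: (4, 4), 2: (6, 3), 3: (8, 2), 4: (10, 1), 5: (12, 0)}
    for opt, (heads, tails) in options.items():
        print(opt, round((sqrt(heads) + sqrt(tails)) / 2, 3))
    # 1: 2.0   2: 2.091   3: 2.121   4: 2.081   5: 1.732
    # This agent ranks option 3 first and option 5 dead last, even
    # though option 5 has the highest expected dollar value.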
0DanielLC10y
That depends on the amount of background money and randomness you have.
Although I can't really see any case where I wouldn't pick option five. Even if
that's all the money I will ever have, my lifespan, and by extension my
happiness, will be approximately linear with time.
If you specify that I get that much money each day for the rest of my life, and
that's all I get, then I'd go for something lower risk.
-2jobe_smith10y
In general, picking the highest EV option makes sense, but in the context of
what sounds like a stupid/lazy economics experiment, you have a moral duty to do
the wrong thing. Perhaps you could have flipped a coin twice to choose among the
first 4 options? That way you are providing crappy/useless data and they have to
pay you for it!
0fluchess10y
Why do I have a moral duty to do the wrong thing? Shouldn't I act in my own
self-interest to maximise the amount of money I make?
An Iterated Prisoner's Dilemma variant I've been thinking about —
There is a pool of players, who may be running various strategies. The number of rounds played is randomly determined. On each round, players are matched randomly, and play a one-shot PD. On the second and subsequent rounds, each player is informed of its opponent's previous moves; but players have no information about what move was played against them last round, nor whether they have played the same opponent before.
In other words, as a player you know your current opponent's move history — ... (read more)
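In case anyone wants to play with it, here's a minimal harness for the variant as described. The two strategies are just examples I made up, not part of the original idea:

    import random

    # Minimal harness: random pairing each round; each player sees only
    # its current opponent's full move history. Assumes an even pool.

    PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
              ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

    def always_defect(opp_history):
        return "D"

    def reputation(opp_history):
        # Cooperate with players who have mostly cooperated with others.
        if not opp_history:
            return "C"
        return "C" if opp_history.count("C") / len(opp_history) >= 0.5 else "D"

    def tournament(strategies, rounds=200, seed=0):
        rng = random.Random(seed)
        histories = [[] for _ in strategies]
        scores = [0] * len(strategies)
        for _ in range(rounds):
            order = list(range(len(strategies)))
            rng.shuffle(order)
            for i, j in zip(order[::2], order[1::2]):
                mi = strategies[i](histories[j])
                mj = strategies[j](histories[i])
                si, sj = PAYOFF[(mi, mj)]
                scores[i] += si
                scores[j] += sj
                histories[i].append(mi)
                histories[j].append(mj)
        return scores

    print(tournament([always_defect] * 4 + [reputation] * 4))

Whether cooperation survives here depends heavily on the reputation threshold, which is part of what makes the variant interesting.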
Self-driving cars had better use (some approximation of) some form of acausal decision theory, even more so than a singleton AI, because the former will interact in PD-like and Chicken-like ways with other instantiations of the same algorithm.
Self-driving cars have very complex goal metrics, along the lines of getting to
the destination while disrupting traffic the least (still grossly
oversimplifying).
The manufacturer is interested in every one of its cars getting to the
destination in the least time, so the cars are programmed to optimize for the
sake of all cars. They're also interested in getting human drivers to buy their
cars, which makes not driving like a jerk a goal as well. PD is problematic when
agents are selfish, not when agents entirely share a goal. Think of two people
in a PD played for money who both want to donate all proceeds to the same
charity. This changes the payoffs to the point where it's not a PD any more.
0A1987dM10y
Depends on who those humans are. For a large fraction of low-IQ young males...
3private_messaging10y
I dunno, having a self-driving jerk car takes away whatever machismo one could
have about driving... there's something about a car where you can go macho and
drive manual to be a jerk.
I don't think it'd help sales at all if self driving cars were causing accidents
while themselves evading the collision entirely.
6Douglas_Knight10y
Already deployed is a better example: computer network protocols.
4Error10y
Or different algorithms. How long after wide release will it be before someone
modifies their car's code to drive aggressively, on the assumption that cars
running the standard algorithm will move out of the way to avoid an accident?
(I call this "driving like a New Yorker." New Yorkers will know what I mean.)
5private_messaging10y
That's like driving without a license. Obviously the driver (software) has to be
licensed to drive the car, just as persons are. Software that operates deadly
machinery has to be developed in specific ways, certified, and so on and so
forth, for how many decades already? (Quite a few)
I have been reviewing FUE hair transplants, and I would like LWers' opinion. I'm actually surprised this isn't covered, as it seems relevant to many users.
As far as I can tell, the downsides are:
Mild scarring on the back of the head
Doesn’t prevent continued hair loss, so if you get e.g. a bald spot filled in, then in a few years you will have an isolated oasis of hair
Cost
Mild pain/hassle in the initial weeks.
Possibility of finding a dodgy surgeon
The scarring is basically covered if you have a couple of days’ hair growth there and I am fine with tha... (read more)
This is quite far down the page, even though I posted it a few hours ago. Is
that an intended effect of the upvoting/downvoting system? (it may well be - I
don't understand how the algorithm assigns comment rankings)
0Oscar_Cunningham10y
Just below and to the right of the post there's a choice of which algorithm to
use for sorting comments. I don't remember what the default is, but I do know
that at least some of them sort by votes (possibly with other factors). I
normally use the sorting "Old" (i.e. oldest first) and then your comment is near
the bottom of the page since so many were posted before it.
0Douglas_Knight10y
The algorithm is a complicated mix of recency and score, but on an open thread
that only lasts a week, recency is fairly uniform, so it's pretty much just
score.
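For the curious, here is the general shape of such a ranking, as a sketch. This is just a generic recency-decay illustration, not LessWrong's actual formula:

    import time

    def hotness(score, posted_at, half_life_hours=24.0):
        # Generic "mix of recency and score": halve the weight of a
        # comment's score for every half_life_hours of age.
        # NOT LessWrong's actual algorithm -- purely illustrative.
        age_hours = (time.time() - posted_at) / 3600.0
        return score / 2 ** (age_hours / half_life_hours)

When every comment is at most a week old, the recency term varies far less than it would across the whole site, so score dominates -- which is the point above.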
I'm looking into Bayesian Reasoning and trying to get a basic handle on it and how it differs from traditional thinking. When I read about how it (apparently) takes into account various explanations for observed things once they are observed, I was immediately reminded of Richard Feynman's opinion of Flying Saucers. Is Feynman giving an example of proper Bayesian thinking here?
http://www.youtube.com/watch?v=wLaRXYai19A
It's certainly in the right spirit. He's reasoning backwards in the same way
Bayesian reasoning does: here's what I see; here's what I know about possible
mechanisms for how that could be observed and their prior probabilities; so
here's what I think is most likely to be really going on.
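The same backwards step as one explicit update, with numbers invented purely for illustration:

    # One Bayes update on "I saw a strange light", illustrative only.
    prior_odds = 1 / 10000         # saucer : mundane, before the sighting
    likelihood_ratio = 0.9 / 0.09  # P(light | saucer) / P(light | mundane)
    posterior_odds = prior_odds * likelihood_ratio
    print(posterior_odds)          # 0.001 -- a factor-of-ten update toward
                                   # saucers, yet "mundane" still wins easily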
Since people were pretty encouraging about the quest to do one's part to help humanity, I have a follow-up question. (Hope it's okay to post twice on the same open thread...)
Perhaps this is a false dichotomy. If so, just let me know. I'm basically wondering if it's more worthwhile to work on transitioning to alternative/renewable energy sources (i.e. we need to develop solar power or whatever else before all the oil and coal run out, and to avoid any potential disastrous climate change effects) or to work on changing human nature itself to better address ... (read more)
The core question is: "What kind of impact do you expect to make if you work on
either issue?"
Do you think there's work to be done in the space of solar power development
that people other than yourself aren't effectively doing? Do you think there's
work to be done in terms of better judgment and decision-making that other
people aren't already doing?
The problem with coal isn't that it's going to run out but that it kills
hundreds of thousands of people via pollution and that it creates climate
change.
Why? To me it seems much more effective to focus on more cognitive issues when
you want to improve human judgment. Developing training to help people calibrate
themselves against uncertainty seems to have a much higher return than trying to
do fMRI studies or brain implants.
0ricketybridge10y
I'm familiar with questions like these (specifically, from 80000 hours), and I
think it's fair to say that I probably wouldn't make a substantive contribution
to any field, those included. Given that likelihood, I'm really just trying to
determine what I feel is most important so I can feel like I'm working on
something important, even if I only end up taking a job over someone else who
could have done it equally well.
That said, I would hope to locate a "gap" where something was not being done
that should be, and then try to fill that gap, such as volunteering my time for
something. But there's no basis for me to surmise at this point which issue I
would be able to contribute more to (for instance, I'm not a solar engineer).
At the moment, yes, but it seems like it has limited potential. I think of it a
bit like bootstrapping: a judgment-impaired person (or an entire society) will
likely make errors in determining how to improve their judgment, and the
improvement seems slight and temporary compared to more fundamental, permanent
changes in neurochemistry. I also think of it a bit like people's attempts to
lose weight and stay fit. Yes, there are a lot of cognitive and behavioral
changes people can make to facilitate that, but for many (most?) people, it
remains a constant struggle -- one that many people are losing. But if we could
hack things like that, "temptation" or "slipping" wouldn't be an issue.
From what I've gathered from my reading, the jury is kind of out on how
disastrous climate change is going to be. Estimates seem to range from
catastrophic to even slightly beneficial. You seem to think it will definitely
be catastrophic. What have you come across that makes you so certain?
0DanielLC10y
The economy is quite capable of dealing with finite resources. If you have land
with oil on it, you will only drill if the price of oil is increasing more
slowly than interest. If this is the case, then drilling for oil and using the
value generated by it for some kind of investment is more helpful than just
saving the oil.
Climate change is still an issue of course. The economy will only work that out
if we tax energy in proportion to its externalities.
We should still keep in mind that climate change is a problem that will happen
in the future, and we need to look at the much lower present value of the cost.
If we have to spend 10% of our economy on making it twice as good a hundred
years from now, it's most likely not worth it.
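The discounting step, as a sketch. The 3% rate is an assumption on my part; picking the rate is the genuinely contested part:

    # Present value of a benefit that arrives a century from now,
    # at an assumed 3% annual discount rate.
    benefit = 1.0                   # normalize the future gain to 1
    present_value = benefit / 1.03 ** 100
    print(round(present_value, 3))  # ~0.052

At that rate a dollar of benefit a century out is worth about five cents today, which is how "spend 10% now to double output in a hundred years" can fail a cost-benefit test.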
I am not sure if this deserves its own post. I figured I would post here and then add it to discussion if there is sufficient interest.
I recently started reading Learn You A Haskell For Great Good. This is the first time I have attempted to learn a functional language, and I am only a beginner in imperative languages (Java). I am looking for some exercises that could go along with the e-book. Ideally, the exercises would encourage learning new material in a similar order to how the book is presented. I am happy to substitute/complement with a different re... (read more)
* I would heartily recommend Project Euler for Haskell and to anyone picking up
a new language (or programming for the first time).
* For Haskell specific problems, there is 99 Haskell problems.
* For building monad intuition, there's a tutorial with some problems here.
* This is a tutorial where you implement a Scheme in Haskell.
* Programming Praxis has a bunch of practice exercises.
* I haven't tried this project out, but it's supposed to allow you to work on
TopCoder problems with Haskell.
* There is a Haskell course with problems being put together here. I'm not sure
how it works, though, and documentation is sparse.
* There's more advice here.
* If you're looking for Haskell code to read, I would start with this
simplified version of the Prelude.
0JMiller10y
Awesome, thanks so much! If you were to recommend one of these resources to
begin with, which would it be?
0adbge10y
Happy to help!
I like both Project Euler and 99 Haskell problems a lot. They're great for
building success spirals.
-1Douglas_Knight10y
Why are you committed to that book? SICP is a well-tested introductory textbook
with extensive exercises. Added: I meant to say that it is functional.
0JMiller10y
I'm not. The reason I picked it up was because it happens to be the book
recommended in MIRI's course suggestions, but I am not particularly attached to
it. Looking again, it seems they do actually recommend SICP on lesswrong, and
Learnyouahaskell on intelligence.org.
Thanks for the suggestion.
Modafinil is prescription-only in the US, so to get it you have to do illegal things. However, I note that (presumably due to some legislative oversight?) the related drug Adrafinil is unregulated, you can buy it right off Amazon. Does anyone know how Adrafinil and Modafinil compare in terms of effectiveness and safety?
No, you don't have to do illegal things. Another option is to convince your
doctor to give you a prescription. I think people on LW greatly overestimate the
difficulty of this.
0hg0010y
Some info on getting a prescription here:
http://www.bulletproofexec.com/q-a-why-i-use-modafinil-provigil/
I think ADD/ADHD will likely be a harder sell; my impression is that people are
already falsely claiming that in order to get Adderall etc.
2Douglas_Knight10y
I don't even mean to suggest lying. I mean something simple like "I think this
drug might help me concentrate."
A formal diagnosis of ADD or narcolepsy is carte blanche for amphetamine
prescription. Because it is highly scheduled and, moreover, has a big black
market, doctors guard this diagnosis carefully. Whereas, modafinil is lightly
scheduled and doesn't have a black market (not driven by prescriptions), so they
are less nervous about giving it out in ADD-ish situations.
But doctors very much do not like it when a new patient comes in asking for a
specific drug.
3Lumifer10y
See Gwern's page.
2RomeoStevens10y
Adrafinil has additional downstream metabolites besides just modafinil, but I
don't know exactly what they are. Some claim it is harder on the liver,
implying some of the metabolites are mildly toxic, but that's not really saying
much. Lots of stuff we eat is mildly toxic. Adrafinil is generally well
tolerated, and if your goal is finding out the effects of modafinil on your
system and you can't get modafinil itself, I would say go for it. If you then
decided to take moda long term, I would say do more research.
IANAD. Research thoroughly and consult with a doctor if you have any medical
conditions or are taking any medications.
Andy Weir's "The Martian" is absolutely fucking brilliant rationalist fiction, and it was published in paper book format a few days ago.
I pre-ordered it because I love his short story The Egg, not knowing I'd get a super-rationalist protagonist in a radical piece of science porn that downright worships space travel. Also, fart jokes. I love it, and if you're an LW type of guy, you probably will too.
[This comment is no longer endorsed by its author]
One person being horribly tortured for eternity is equivalent to that one person being copied infinite times and having each copy tortured for the rest of their life. Death is better than a lifetime of horrible torture, and 3^^^3, despite being bigger than a whole lot of numbers, is still smaller than infinity.
What if the 3^^^3 people were one immortal person?
1RowanE10y
Well then the answer is still obviously death, and that fact has become more
immediately intuitive - probably even those who disagreed with my assessment of
the original question would agree with my choice given the scenario "an immortal
person is tortured forever or an otherwise-immortal person dies"
0DanielLC10y
Being horribly tortured is worse than death, so I'd pick death.
-3jobe_smith10y
I would solicit bids from the two groups. I imagine that the 3^^^3 people would
be able to pay more to save their lives than the 1 person would be able to pay
to avoid infinite torture. Plus, once I make the decision, if I sentence the 1
person to infinite torture I only have to worry about their friends/family and I
have 3^^^3 allies who will help defend me against retribution. Otherwise, the
situation is reversed and I think it's likely I'll be murdered or imprisoned if
I kill that many people. Of course, if the scenario is different, like the 3^^^3
people are in a different galaxy (not that that many people could fit in a
galaxy) and the 1 person is my wife, I'll definitely wipe out all those assholes
to save my wife. I'd even let them all suffer infinite torture just to keep my
wife from experiencing a dust speck in her eye. It is Valentine's Day, after all!
Yvain has started a nootropics survey: https://docs.google.com/forms/d/1aNmqagWZ0kkEMYOgByBd2t0b16dR029BoHmR_OClB7Q/viewform
Background: http://www.reddit.com/r/Nootropics/comments/1xglcg/a_survey_for_better_anecdata/ http://www.reddit.com/r/Nootropics/comments/1xt0zn/rnootropics_survey/
I hope a lot of people take it; I'd like to run some analyses on the results.
2.5 years ago I made an attempt to calculate an upper bound for the complexity of the currently known laws of physics. Since the issue of physical laws and complexity keeps coming up, and my old post is hard to find with google searches, I'm reposting it here verbatim. ... (read more)
I've written a game (or see (github)) that tests your ability to assign probabilities to yes/no events accurately using a logarithmic scoring rule (called a Bayes score on LW, apparently).
For example, in the subgame "Coins from Urn Anise," you'll be told: "I have a mysterious urn labelled 'Anise' full of coins, each with possibly different probabilities. I'm picking a fresh coin from the urn. I'm about to flip the coin. Will I get heads? [Trial 1 of 10; Session 1]". You can then adjust a slider to select a number a in [0,1]. As you adjust a, you adjust the payoffs that you'll receive if the outcome of the coin flip is heads or tails. Specifically you'll receive 1+log2(a) points if the result is heads and 1+log2(1-a) points if the result is tails. This is a proper scoring rule in the sense that you maximize your expected return by choosing a equal to the posterior probability that, given what you know, this coin will come out heads. The payouts are harshly negative if you have false certainty. E.g. if you choose a=0.995, you'd only stand to gain 0.993 if heads happens but would lose 6.644 if tails happens. At the moment, you don't know much about the coin, but as... (read more)
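For anyone who wants to check the payoffs, the rule is easy to reproduce. A sketch of the scoring formula as described above (the function names are mine, not the game's):

    from math import log2

    def score(a, heads):
        # 1 + log2(a) points on heads, 1 + log2(1 - a) on tails.
        return 1 + log2(a) if heads else 1 + log2(1 - a)

    def expected_score(a, p):
        # Expected points if the coin truly lands heads with probability p.
        return p * score(a, True) + (1 - p) * score(a, False)

    grid = [k / 1000 for k in range(1, 1000)]
    print(max(grid, key=lambda a: expected_score(a, 0.7)))  # 0.7
    print(round(score(0.995, heads=False), 3))              # -6.644

The expected score peaks exactly at the true probability, which is what "proper" means here, and the second line reproduces the harsh false-certainty payoff quoted above.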
Brought to mind by the recent post about dreaming on Slate Star Codex:
Has anyone read a convincing refutation of the deflationary hypothesis about dreams - that is, that there aren't any? In the sense of nothing like waking experience ever happening during sleep; just junk memories with backdated time-stamps?
My brain is attributing this position to Dennett in one of his older collections - maybe Brainstorms - but it probably predates him.
Stimuli can be incorporated into dreams - for example, if someone in a sleep lab sees you are in REM sleep and sprays water on you, you're more likely to report having had a dream it was raining when you wake up. Yes, this has been formally tested. This provides strong evidence that dreams are going on during sleep.
More directly, communication has been established between dreaming and waking states by lucid dreamers in sleep labs. Lucid dreamers can make eye movements during their dreams to send predetermined messages to laboratory technicians monitoring them with EEGs. Again, this has been formally tested.
I wrote a piece for work on quota systems and affirmative action in employment ("Fixing Our Model of Meritocracy"). It's politics-related, but I did get to cite a really fun natural experiment and talk about quotas for the use of countering the availability heuristic.
An interesting quote, I wonder what people here will make of it... (read more)
Speed reading doesn't register many hits here, but in a recent thread on subvocalization there are claims of speeds well above 500 WPM.
My standard reading speed is about 200 WPM (based on my eReader statistics; it varies by content). I can push myself to maybe 240, but it is not enjoyable (I wouldn't read fiction at this speed), and 450-500 WPM with RSVP.
My aim this year is to get to a 500+ WPM base speed (i.e. usable also for leisure reading and without RSVP). Is this even possible? Claims seem to be contradictory.
Does anybody have recommendations on systems th... (read more)
Something I recently noticed: steelmanning is popular on LessWrong. But the sequences contain a post called Against Devil's Advocacy, which argues strongly against devil's advocacy, and steelmanning often looks a lot like devil's advocacy. What, if anything is the difference between the two?
Steelmanning is about fixing errors in an argument (or otherwise improving it), while retaining (some of) the argument's assumptions. As a result, the argument becomes better, even if you disagree with some of the assumptions. The conclusion of the argument may change as a result, what's fixed about the conclusion is only the question that it needs to clarify. Devil's advocacy is about finding arguments for a given conclusion, including fallacious but convincing ones.
So the difference is in the direction of reasoning and intent regarding epistemic hygiene. Steelmanning starts from (somewhat) fixed assumptions and looks for more robust arguments following from them that would address a given question (careful hypothetical reasoning), while devil's advocacy starts from a fixed conclusion (not just a fixed question that the conclusion would judge) and looks for convincing arguments leading to it (rationalization with allowed use of dark arts).
A bad aspect of a steelmanned argument is that it can be useless: if you don't accept the assumptions, there is often little point in investigating their implications. A bad aspect of a devil's advocate's argument is that it may be misleading, acting as filtered evidence for the chosen conclusion. In this sense, devil's advocates exercise the skill of coming up with misleading arguments, which might be bad for their ability to reason carefully in other situations.
An article on samurai mental tricks. Most of them will not be that surprising to LWers, but it is nice to see modern results have a long history of working.
Does anyone have advice for getting an entry level software-development job? I'm finding a lot seem to want several years of experience, or a degree, while I'm self taught.
Ignore what they say on the job posting, apply anyway with a resume that links to your Github, websites you've built, etc. Many will still reject you for lack of experience, but in many cases it will turn out the job posting was a very optimistic description of the candidate they were hoping to find, and they'll interview you anyway in spite of not meeting the qualifications on the job listing.
I got to design my first infographic for work and I'd really appreciate feedback (it's here: "Did We Mess Up on Mammograms?").
I'm also curious about recommendations for tools. I used Easl.ly, which is a WYSIWYG editor, but it was annoying in that I couldn't just tell it I wanted an m-by-n block of people icons, evenly spaced; I had to do it by hand instead.
A TEDx video about teaching mathematics; it's in Slovak, so you'll have to select English subtitles: "Mathematics as a source of joy". I had to share it, but I am afraid the video does not explain much, and there is not much material in English to link to -- I only found two articles. So here is a bit more info:
The video is about the educational method of the Czech math teacher Vít Hejný; it is told by his son. Prof. Hejný created an educational methodology based mostly on Piaget, but specifically applied to the domain of teaching mathematics (elementary- and... (read more)
Sometimes I feel like looking into how I can help humanity (e.g. 80000 hours stuff), but other times I feel like humanity is just irredeemable and may as well wipe itself off the planet (via climate change, nuclear war, whatever).
For instance, humans are so facepalmingly bad at making decisions for the long term (viz. climate change, running out of fossil fuels) that it seems clear that genetic or neurological enhancements would be highly beneficial in changing this (and other deficiencies, of course). Yet discourse about such things is overwhelmingly neg... (read more)
You know how when you see a kid about to fall off a cliff, you shrug and don't do anything because the standards of discourse aren't as high as they could be?
Me neither.
A task with a better expected outcome is still better (in expected outcome), even if it's hopeless, silly, not as funny as some of the failure modes, not your responsibility or in some way emotionally less comfortable.
All this talk of P-zombies. Is there even a hint of a mechanism that anybody can think of to detect if something else is conscious, or to measure their degree of consciousness assuming it admits of degree?
I have spent my life figuring other humans are probably conscious purely on an Occam's razor kind of argument that I am conscious and the most straightforward explanation for my similarities and grouping with all these other people is that they are in relevant respects just like me. But I have always thought that increasingly complex simulations of hu... (read more)
Wei once described an interesting scenario in that vein. Imagine you have a bunch of human uploads, computer programs that can truthfully say "I'm conscious". Now you start optimizing them for space, compressing them into smaller and smaller programs that have the same outputs. Then at some point they might start saying "I'm conscious" for reasons other than being conscious. After all, you can have a very small program that outputs the string "I'm conscious" without being conscious.
So you might be able turn a population of conscious creatures into a population of p-zombies or Elizas just by compressing them. It's not clear where the cutoff happens, or even if it's meaningful to talk about the cutoff happening at some point. And this is something that could happen in reality, if we ask a future AI to optimize the universe for more humans or something.
Also this scenario reopens the question of whether uploads are conscious in the first place! After all, the process of uploading a human mind to a computer can also be viewed as a compression step, which can fold constant computations into literal constants, etc. The usual justification says that "it preserves behavior at every step, therefore it preserves consciousness", but as the above argument shows, that justification is incomplete and could easily be wrong.
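A minimal illustration of the worrying step in code (the "inner process" is a stand-in, of course):

    # A behavior-preserving "compression": fold the computation into its
    # constant output. Externally indistinguishable; internally, the
    # inner process no longer happens at all.

    def inner_process():
        # Stand-in for an arbitrarily rich computation (e.g. an upload).
        return sum(i * i for i in range(10000)) > 0

    def report_original():
        return "I'm conscious" if inner_process() else ""

    def report_compressed():
        return "I'm conscious"   # same outputs forever, no inner process

    assert report_original() == report_compressed()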
Paraphrased from #lesswrong: "Is it wrong to shoot everyone who believes Tegmark level 4?" "No, because, according to them, it happens anyway". (It's tongue-in-cheek, for you humorless types.)
I am still seeking players for a multiplayer game of Victoria 2: Heart of Darkness. We have converted from an earlier EU3 game, itself converted from CK2; the resulting history is very unlike our own. We are currently in 1844.
BBC Radio : Should we be frightened of intelligent computers? http://www.bbc.co.uk/programmes/p01rqkp4 Includes Nick Bostrom from about halfway through.
I don't think it has already been posted here on LW, but SMBC has a wonderful little strip about UFAI: http://www.smbc-comics.com/?id=3261#comic
I'm interested in learning pure math, starting from precalculus. Can anyone give advice on what textbooks I should use? Here's my current list (a lot of these textbooks were taken from the MIRI and LW best textbook lists):
I'm w... (read more)
I advise that you read the first 3 books on your list, and then reevaluate. If you do not know any more math than what is generally taught before calculus, then you have no idea how difficult math will be for you or how much you will enjoy it.
It is important to ask what you want to learn math for. The last four books on your list are categorically different from the first four (or at least three of the first four). They are not a random sample of pure math, they are specifically the subset of pure math you should learn to program AI. If that is your goal, the entire calculus sequence will not be that useful.
If your goal is to learn physics or economics, you should learn calculus, statistics, analysis.
If you want to have a true understanding of the math that is built into rationality, you want probability, statistics, logic.
If you want to learn what most math PhDs learn, then you need things like algebra, analysis, topology.
I am going to organize a coaching course to learn Javascript + Node.js.
My particular technology of choice is node.js because:
Has anyone else had one of those odd moments when you've accidentally confirmed reductionism (of a sort) by unknowingly responding to a situation almost identically to the last time or times you encountered it? For my part, I once gave the same condolences to an acquaintance who was living with someone we both knew to be very unpleasant, and also just attempted to add the word for "tomato" in Lojban to my list of words after seeing the Pomodoro technique mentioned.
Are there any reasons for becoming utilitarian, other than to satisfy one's empathy?
Would just like to make sure everyone here is aware of LessWrong.txt
Why?
Andy Weir's "The Martian" is absolutely fucking brilliant rationalist fiction, and it was published in paper book format a few days ago.
I pre-ordered it because I love his short story The Egg, not knowing I'd get a super-rationalist protagonist in a radical piece of science porn that downright worships space travel. Also, fart jokes. I love it, and if you're an LW type of guy, you probably will too.
Would you prefer that one person be horribly tortured for eternity without hope or rest, or that 3^^^3 people die?