All of PuyaSharif's Comments + Replies

Is this paper formally modeling human (ir)rational decision making worth understanding?

Even though I have a chapter in a textbook, it's not a measure of quality :) Conference proceedings are sometimes published as a book, with an ISBN and all.

Please recommend some audiobooks

I guess the bottom line is that, when it comes to fields like philosophy and history, the literature will be heavily biased by its authors, and if one really wants to reduce this bias, one must consult multiple sources.

0gjm7yYes. There's another single-volume history of philosophy, by Anthony Kenny, that's alleged to be good. I would expect Kenny to have a quite different set of biases from Russell's (and for what it's worth less like my own than Russell's). I have it on my shelves but it's one of the hundreds I haven't read yet so I can't endorse it (or the reverse) independently. I've no idea whether there's an audiobook of it.
Please recommend some audiobooks

Wonderful recommendation. I am listening to 'A History of Western Philosophy' at the moment and I enjoy every single minute of it. It's my clean-and-cook book. Not only is it a literary masterpiece, it is a well-researched account of exactly what the name says. As a bonus you get the whole story commented on by one of the greatest philosophers of the 20th century.

0gjm7yI have heard it claimed by people who know more about the history of philosophy than I do that it's less than perfectly reliable, and in particular that if Russell's account makes someone look silly then you should consider seriously the possibility that they were distinctly less silly than Russell makes them look. (But I agree that it's a lovely book, and I wouldn't discourage anyone from reading it.)
ICONN 2012 nanotechnology conference in Perth

Academic conferences tend to be very technical, so don't expect to be able to fully follow the talks. A review paper

2Solvent10yThank you for the paper. I accept your challenge. :)
What jobs are safe in an automated future?

By human-equivalent I'd guess you mean equivalent in many, if not all, aspects of human intelligence. I wouldn't dare to have an opinion at the moment.

Anyone else?

What jobs are safe in an automated future?

Yes I am, and I'll soon start looking for PhD positions either in physics or some interdisciplinary field of interest. I know I seem a bit over-optimistic, and that such radical changes may take at least 30-50 years, but I'd guess most of us will still be alive by then, so it's still relevant. My main point is that, step by step, theoretical tasks will move into the space of computation and the job of the theoretician will evolve into something else. If one day the computers in our computer-aided research start to output suggestions for models, or links between s... (read more)

What jobs are safe in an automated future?

I agree that the conceptual (non-simply-symbol-processing) part of theoretical physics is the tricky part to automate. Even if I am willing to accept that that last 1% will remain a monopoly of human beings, then that's it: theoretical physics will asymptotically reduce to that 1% and stay there until AGI arrives. It's not bound to change overnight, but the change will be the product of many small changes where computers start to aid us not just by doing the calculations and simulations but with more advanced tasks where we can input sets of equa... (read more)

What jobs are safe in an automated future?

For example, music composition, writing fiction, and similar artistic endeavors require that the artist know what people enjoy. I think that that will be done by humans for the foreseeable future.

Regarding music composition: there are already algorithms being developed for predicting a song's potential to become a hit. The next step could be algorithms that create the songs themselves. It's all about optimization with positive feedback. Algorithm: create a piece of art A such that A has a high probability of satisfying those experiencing it. Input... (read more)
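A minimal sketch of such a positive-feedback loop, in Python. The hit_probability scoring model and the mutate operator here are hypothetical placeholders, not any existing system; the point is only the structure: propose a variant, score its predicted appeal, keep improvements.

import random

def hit_probability(piece):
    # Hypothetical stand-in for a learned model that predicts how likely
    # a piece is to be enjoyed (a hit-prediction classifier).
    return random.random()

def mutate(piece):
    # Hypothetical operator that returns a slightly altered variant of a piece.
    return piece + [random.choice("ABCDEFG")]

def compose(generations=1000):
    # Hill-climb toward pieces the scoring model rates highly:
    # propose a variant, keep it if the predicted appeal improves.
    best = list("CCGGAAG")  # arbitrary seed melody
    best_score = hit_probability(best)
    for _ in range(generations):
        candidate = mutate(best)
        score = hit_probability(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

melody, score = compose()
print("".join(melody), score)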

1beoShaffer10yActually we already have AI composers. http://hplusmagazine.com/2010/03/22/has-emily-howell-passed-musical-turing-test/
What jobs are safe in an automated future?

My goal was/is to start a discussion around: (1) strategies today for maximizing the probability of being needed in the future; (2) even more interesting, which tasks are hard or easy to automate, and why; (3) the consequences automation will have on the global economy. So far, the comments cover a little bit of each.

What jobs are safe in an automated future?

1. Hindsight bias? Quite a diagnosis there. I never specified the level of those algorithms.

2. Which part of theoretical physics is not math? Experiments confirm or reject theoretical conclusions and point theoretical work in different directions. But that theoretical work is, in the end, symbol processing - something that computers are pretty good at. There could be a variety of ways for a computer to decide whether a theorem is interesting, just as there are for a human. Scope, generality and computability of the theorems could be factors. Input Newtonian mechanics and the mathematics of 1850, and output Hamiltonian mechanics just based on the generality of that framework.

3shminux10yI have, in my reply: probably AGI-level, i.e. too far into the haze of the future to be considered seriously. Probably the 1% that counts the most (I agree, 99% of theoretical physics is math, as I found out the hard way). It's finding the models that make the old experiments make sense and that make new interesting predictions that turn out to be right that is the mysterious part. How would you program a computer that can decide, on its own, that adding the gauge freedom to the Maxwell equations would actually make them simpler and lay foundations for nearly all of modern high-energy physics? That the Landau pole is not an insurmountable obstacle, despite all the infinities? That 2D models like graphene are worth studying? That can resolve the current mysteries, like high-Tc superconductivity, the still-mysterious foundations of QM, the cosmological mysteries of dark matter and dark energy, the many problems in chemistry, biology, society, etc.? Sure, it is all "symbol manipulation", but so is everything humans do, if you agree that we are (somewhat complicated) Turing machines and Markov chains. If you assert that it is possible to do all this with anything below an AGI-level complexity, I hope that you are right, but I am extremely skeptical.
What jobs are safe in an automated future?

1. Maybe I should clarify: Are the tasks previously done by bank tellers becoming automated? Yes. The fact that the number of bank tellers has increased does not invalidate my statement. If there were no internet banking or ATMs, the increase would be much larger, right? So it's trivial to see that the number of bank tellers can increase at the same time as bank teller jobs are lost to automated systems.

2. I'll give you an extreme one. I am a few steps away from earning a degree in theoretical physics, specializing in quantum information theory. Theoretical quantu... (read more)

0syllogism10yAre you a grad student? Because I don't know much about theoretical physics, but I find it very hard to believe much academic research could be automated. I'm a post-doc doing research on computational linguistics. I can't imagine automating my work.
0prase10yDo you have any idea how to do it? What does organising the mathematics and physics actually look like? What are the right algorithms? Nobody probably doubts that theoretical work can be done by machines. But your original claim was stronger: that these tasks are "fairly easy to represent virtually" and that the conjecture that they are going to be the last to go is a misconception.
8shminux10yYou have fallen victim to the hindsight bias [http://lesswrong.com/lw/il/hindsight_bias/]. The parameter space of the ways of reconciling Special Relativity with Newtonian gravity is quite large, even assuming that this goal would have occurred to anyone but Einstein at that time (well, Hilbert did the math independently, after communicating with Einstein for some time). Rejecting the implicit and unquestionable idea of a fixed background spacetime was an extreme leap of genius. The "right algorithms" would probably have to be the AGI-level ones. "Theoretical quantum information theory" is math, not a natural science, and math is potentially easier to automate. Still, feel free to research the advances in automated theorem proving, and, more importantly, in automated theorem stating, a much harder task. How would a computer know what theorems are interesting?
The punishment dilemma

There is nothing wrong with the consistency. At least not in principle. A crime could still be defined as a crime and the punishment could go towards zero asymptotically.

The punishment dilemma

Yes of course, you are free to do it yourself, but it is assumed that on the large scale, even including retaliations (which are crimes), crime rates would go down. And in a society with no punishments, would it be rational to do that? (Given that the friends or relatives of that guy could come after you for coming after him, then others after them for coming after you, and so on...?)

3RolfAndreassen10yIt's hard to reason about your hypothetical, because it seems to directly contradict actual experience. But it seems fairly straightforward to reason about what I, personally, would do: I want the chance of crimes against me minimised, so I accept the no-laws state as the best way of getting that. But if I nonetheless am among the unlucky ones, then I want revenge, so I get revenge. By hypothesis, the chance that the other guy's friends will successfully punish me has got to be small, because we've established that crime is very low in this hypothetical universe.
3daenerys10yYou know this doesn't have to be relegated to "thought experiment". It's pretty much the story of a large part of human history: There were no state-enforced laws, per se, but you didn't kill people, because then their family/clan/tribe would kill you. Of course then your family would do the same, and well..you can see how this might be a never-ending downward spiral. A more sophisticated way would be to pay a weregild [http://en.wikipedia.org/wiki/Weregild] . (I killed your brother. Sorry. Here's some goats.) If you are interested in seeing how this actually worked out, early Icelandic society is a pretty good microcosm. If you want to make learning it fun, read a saga. (Here's Njal's Saga [http://omacl.org/Njal/] )
Review of Kahneman, 'Thinking, Fast and Slow' (2011)

Good! Now I have two recently published, very interesting books to read! Kahneman's, and Michael Nielsen's Reinventing Discovery. (I'll submit a review of M.N.'s as soon as I've read it.)

Rational to distrust your own rationality?

You see, the reason why it is discussed as an "effect" or "paradox" is that even if your risk aversion ("oh no, what if I lose") is taken into account, it is strange to take 1A together with 2B. A risk-averse person might "correctly" choose 1A, but for that person to be consistent in their choices, they have to choose 2A. Not 1A and 2B together.

My suggestion is that the slight increase in complexity in 1B adds to your risk (external risk + internal risk) and therefore, within your given risk profile, makes 1A and 2B a consistent combination.

0endoself10yWell when I look at experiment 1, I feel the risk. My brain simulates my reaction upon getting nothing and does not reduce its emotional weight in accordance with its unlikeliness. Looking at experiment 2, I see the possibility and think, "Well, I'd be screwed either way if I'm not lucky, so I'll just look at the other possibility.". My system 1 thought ignores the 89% vs 90% distinction as pointless, and, while not consistent with its other decision, it is right to do so.
Rational to distrust your own rationality?

One way of testing: Have two questions just like in the Allais experiment. Make the experiment in five different versions where choice 1B has increasing complexity but the same expected utility. See if 1B-aversion correlates with complexity.
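A minimal sketch of how that could be scored, with entirely hypothetical payoffs (all five versions kept at a common expected value of 2400) and made-up aversion rates; complexity is measured crudely as the number of distinct outcomes.

from statistics import correlation  # available in Python 3.10+

# Five versions of lottery 1B: more and more distinct outcomes,
# but the same expected value of 2400 (payoffs are hypothetical).
versions_1B = [
    {2400: 1.00},
    {2500: 0.50, 2300: 0.50},
    {2500: 0.33, 2400: 0.34, 2300: 0.33},
    {2600: 0.25, 2500: 0.25, 2300: 0.25, 2200: 0.25},
    {2600: 0.20, 2500: 0.20, 2400: 0.20, 2300: 0.20, 2200: 0.20},
]

def expected_value(lottery):
    return sum(payoff * p for payoff, p in lottery.items())

assert all(abs(expected_value(v) - 2400) < 1e-6 for v in versions_1B)

# Hypothetical experimental data: fraction of subjects avoiding 1B per version.
aversion_rate = [0.20, 0.24, 0.31, 0.36, 0.42]
complexity = [len(v) for v in versions_1B]

# A clearly positive correlation would support the complexity-aversion idea.
print(correlation(complexity, aversion_rate))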

0JoshuaZ10yOoh. I like that. That's a much more direct test than my suggestion.
Rational to distrust your own rationality?

'Rational', as in rational agent, is a pretty well-defined concept in rational choice theory/game theory/decision theory. That is what I refer to when I use the word.

Truth & social graces

Why interfere and not let your kid develop his own ways? Answering "How are you?" in detail sounds to me like a fantastic trait of his personality.

When I was 7 years old I stopped calling my parents mom and dad and switched over to calling them by their names. I just couldn't understand the logic of other people calling them one thing and me calling them something else. Happily, nobody tried to "correct" me according to social rules, and still today it wouldn't cross my mind to call my mother 'mother'!

0CronoDAS10yI call my mother "Mom" and my father "Norman". I'm not sure why; all I know is that I started when I was small.
0irrational10yI am in fact not planning to interfere for now.
Satisficers want to become maximisers

Can you really assume the agent to have a utility function that is both linear in paperclips (which implies risk neutrality) and bounded + monotonic?

0Stuart_Armstrong10yNo; but you can assume it's linear up to some bound.
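One way to write that down (my notation, just a sketch): for a paperclip count n and bound B,

U(n) = \min(n, B),

which is linear (and hence risk-neutral) for n \le B, monotonic, and bounded above by B.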
Rational toy buying

When I was eight or nine I got one of those electricity/magnetism experiment kits. Boy, did I love that kit! I did the motor, electric bell, and electromagnet experiments over and over again for maybe a year, and then moved on to building my own electronic stuff from components I found tearing old TVs and radios apart. I soon had a little club at home, teaching my friends!

Some years ago, when my cousin had just turned nine, I got him a kit and hoped to see him become as interested in electronics as I was at his age. But he hardly opened the box, and when I c... (read more)

The Need for Universal Experience Classes

It depends on how you define 'use'. People are trying to make sense of reality all the time. Different scenarios need different tools and different ways of thinking. Basic high school science helps you understand parts of the news flow, some aspects of the mechanisms of your household appliances, transportation-related concepts like time, velocity and acceleration, your body, and so on.

-1TimS10yAs a general principle, resolving ambiguities in other people's assertions so those assertions are true is more charitable, and more likely to allow us to understand their point.
The Need for Universal Experience Classes

shminux: I am not questioning the overall thesis of the post. Just reacting to:

"I think the problem here is that people can’t understand what is really important. Calculus, mechanical physics, chemistry, microiology, etc. are interesting to learn, perhaps. But, they are relatively advanced topics. People don’t use them in daily life unless they are professionals. Why not learn things that we think about every day instead of those that will frankly be useless to most? "

0shminux10yI suppose I can see how one could interpret the post the way you did, though the author emphatically did not advocate teaching art instead of science, just a different way of teaching (or, rather, not teaching) in general.
2TimS10yIsn't this statement true?
The Need for Universal Experience Classes

Calculus, mechanical physics, chemistry, microbiology, etc. are areas describing objective reality. They explain how the world we live in works on a fundamental level, i.e. the very fabric of reality. Not only do they give answers to basic questions of human life, they also push students toward systematic, analytical thinking and questioning.

Do you really mean that people would be better off never being exposed to ("interesting but useless") natural science? Would you prefer a society where most people don't have a clue about how things aro... (read more)

3shminux10yI suspect that you are making and destroying a straw man [http://en.wikipedia.org/wiki/Straw_man] here. The original (admittedly rather rambling) post did not advocate never exposing students to science, but rather a specific way of doing it, a sort of a loose version of Socratic questioning [http://en.wikipedia.org/wiki/Socratic_questioning].
0[anonymous]10yActually, I was never saying that we shouldn't learn about science or any other "areas describing objective reality." I love science myself. I just think that we shouldn't focus on these areas exclusively, like many schools do now. For example, I think that an engineer who loves listening to music all day should at least learn the basic theory of how music works, so that he can appreciate it more intellectually and understand it better. Why should we choose to understand some aspects of life and not others?
The self-unfooling problem

This problem reminds me of the movie Memento. The lead character was unable to make any new memories, and his mind was reset every two or three minutes. Nevertheless, he kept trying to find his wife's killer, and kept a record of new leads by taking pictures with a Polaroid camera, keeping notes, and tattooing pieces of information onto his body. Great movie!

[SEQ RERUN] Torture vs. Dust Specks

An interesting related question would be: what would people in a big population of size Q choose if given the alternatives: extreme pain with probability p = 1/Q, or tiny pain with probability p = 1? In the framework of expected utility theory you'd have to include not only the sizes of the pains and the size of the population but also the risk aversion of the person asked. So it's not only about adding up small utilities.
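As a sketch of that comparison (my notation): let v be a disutility function encoding the responder's risk attitude, with v(0) = 0. For each individual the choice is then between

\frac{1}{Q}\, v(d_{\text{extreme}}) \quad \text{and} \quad v(d_{\text{tiny}}),

so unless v is linear, the answer is not settled by simply comparing Q \cdot d_{\text{tiny}} with d_{\text{extreme}}.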

9ArisKatsaris10yTo consider it like this misses the point of the exercise: which is to treat each individual dust speck as the tiniest amount of disutility you can imagine, and multiply those tiny disutilities. If you treat the dust specks differently, as representing a small probability of a huge disutility (death) instead, the equation becomes different in the minds of many, because it's now about adding small probabilities instead of adding small disutilities. In short: you ought to consider the least convenient world, in which you are assured that the momentary inconvenience/annoyance of the dust speck is all the disutility these people will suffer if you choose "dust specks" -- they won't be involved in accidents of any kind, etc.
The self-fooling problem.

I see your point. A reduction in the number of easily searched places will indeed make it more difficult for B to find the coin, even though B will have a smaller space to search. The question that remains is: given a mathematical description of the search/hide space, what probability distribution over locations (randomization process) minimizes the probability of B finding the coin?

The self-fooling problem.

As some comments have pointed out, there are some loopholes in the original formulation, and I will do my best to close these or accept the fact that they're not closeable (which would be interesting in its own right).

Let's try a simpler formulation. Basically, what is being asked here is: given two intelligences A and B, where A and B are identical (perfect copies), can A have a strategy that minimizes the probability of B finding the coin?

Further: Any chain of reasoning leading to a constrained set of available locations followed by randomization could be used by B to constrain the set of locations to search. Is it therefore possible to beat complete randomization?

7Nornagest10yYes. You need to weight locations according to the time it takes to search them and then make a random selection from that weighted set; that'll give you longer search times on average than an unweighted random pick from a large set where most of the elements take a trivially small time to search. I could take a stab at proving that mathematically, if you're comfortable with some abstraction. You can beat even that by cleverly exploiting features of the setup, as I and muflax did in our responses to the OP, but that's admittedly not quite in keeping with the spirit of the problem.
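A minimal Python sketch of that weighting idea; the search times are invented, and the searcher is assumed to check the cheapest locations first, which is my simplification rather than anything in the parent comment.

import random

# Hypothetical search times (in minutes) for each hiding spot.
search_time = {"sofa cushions": 1, "bookshelf": 3, "attic": 30, "garden": 45}

def hide_spot(times):
    # Pick a hiding spot with probability proportional to its search time,
    # per the weighting idea above.
    spots = list(times)
    return random.choices(spots, weights=[times[s] for s in spots], k=1)[0]

def expected_search_time(hide_probs, times):
    # Expected time for a searcher who checks spots in order of increasing
    # search cost (cheapest first) until the coin is found.
    order = sorted(times, key=times.get)
    total, elapsed = 0.0, 0.0
    for spot in order:
        elapsed += times[spot]
        total += hide_probs[spot] * elapsed
    return total

uniform = {s: 1 / len(search_time) for s in search_time}
weighted = {s: search_time[s] / sum(search_time.values()) for s in search_time}
print(expected_search_time(uniform, search_time))   # baseline
print(expected_search_time(weighted, search_time))  # longer on average
print(hide_spot(search_time))                       # one weighted draw

For these invented numbers, the time-weighted distribution roughly doubles the expected search time relative to a uniform pick.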
What are you working on?

Over the last ten years, a sub-field of quantum information theory has emerged: quantum game theory. Regard it as the intersection of quantum mechanics and game theory. It deals with game-theoretic situations where the participants use entangled quantum states, quantum superposition, and unitary operations as resources to gain advantages over their classical counterparts.

I am designing and (trying to) solve a quantum game using three-level quantum states, qutrits (as opposed to the usual qubits, which are two-level systems).
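For concreteness, a minimal NumPy sketch of the kinds of resources mentioned above (a maximally entangled two-qutrit state and local unitary "moves"); this is a generic illustration, not the specific game being designed.

import numpy as np

d = 3  # qutrit: three-level quantum system

# Maximally entangled two-qutrit state |psi> = (|00> + |11> + |22>) / sqrt(3),
# one common choice of shared resource in quantized games.
psi = sum(np.kron(np.eye(d)[i], np.eye(d)[i]) for i in range(d)) / np.sqrt(d)

def random_unitary(dim, rng):
    # Random unitary via QR decomposition, standing in for a player's move.
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(0)
U_a, U_b = random_unitary(d, rng), random_unitary(d, rng)  # local moves

# Each player acts only on their own qutrit; outcomes are measured in the
# computational basis and would then be mapped to payoffs by the game's rules.
state = np.kron(U_a, U_b) @ psi
probabilities = np.abs(state) ** 2  # 9 joint outcomes (i, j) with i, j in {0, 1, 2}
print(probabilities.reshape(d, d))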

Meetup : Stockholm meetup

This is fantastic! I came across this site a while back and promised myself that I would submit something as soon as I could find the time. Then it slipped my occupied mind and I forgot about the very existence of LW! I happened to take a look tonight, and the first thing I saw was that there is a meetup here in Stockholm 36 hours and 15 minutes from now. What a nice coincidence! I'll be very happy to join. I'm a sucker for all kinds of discussions: game theory, economics, politics, decision theory, technology, artificial intelligence, fundamental questions in philosophy, physics, you name it. See you guys!