An Open Thread: a place for things foolishly April, and other assorted discussions.
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
Update: Tom McCabe has created a sub-Reddit to use for assorted discussions instead of relying on open threads. Go there for the sub-Reddit and discussion about it, and go here to vote on the idea.
It doesn't seem like it's ever going to be mentioned otherwise, so I thought I should tell you this:
Lesswrong is writing a story, called "Harry Potter and the Methods of Rationality". It's just about what you'd expect; absolutely full of ideas from LW.com. I know it's not the usual fare for this site, but I'm sure a lot of you have enjoyed Eliezer's fiction as fiction; you'll probably like this as well.
Who knows, maybe the author will even decide to decloak and tell us who to thank?
-- Al Gore on Futurama
Yeah, I don't think I can plausibly deny responsibility for this one.
Googling either (rationality + fanfiction) or even (rational + fanfiction) gets you there as the first hit, just so ya know...
Also, clicking on the Sitemeter counter and looking at "referrals" would probably have shown you a clickthrough from a profile called "LessWrong" on fanfiction.net.
Want to know the rest of the plot? Just guess what the last sentence of the current version is about before I post the next part on April 3rd. Feel free to post guesses here rather than on FF.net, since a flood of LW.com reviewers would probably sound rather strange to them.
Voldemort's Killing Curse had an epiphenomenal effect: Harry is a p-zombie. ;)
I don't like where this is headed - Harry isn't provably friendly and they're setting him loose in the wizarding world!
This Harry and Ender are both terrified of becoming monsters. Both have a killer instinct. Both are much smarter than most of their peers. Ender's two sides are reflected in the monstrous Peter and the loving Valentine. The two sides of Potter-Evans-Verres are reflected in Draco and Hermione. The environments are of course very similar: both are in very abnormal boarding schools teaching them things regular kids don't learn.
Oh, and now the Defense Against the Dark Arts prof is going to start forming "armies" for practicing what is now called "Battle Magic" (like the Battle Room!).
And the last chapter's disclaimer?
If the parallels aren't intentional I'm going insane.
It's almost done, actually. Here's a sneak preview of the next chapter:
I think you underestimate the real-world value of Just Testing It. If I got a mysterious letter in the mail and Mom told me I was a wizard and there was a simple way to test it, I'd test it. Of course I know even better than rationalist!Harry all the reasons that can't possibly be how the ontologically lowest level of reality works, but if it's cheap to run the test, why not just say "Screw it" and test it anyway?
Harry's decision to try going out back and calling for an owl is completely defensible. You just never have to apologize for doing a quick, cheap experimental test, pretty much ever, but especially when people have started arguing about it and emotions are running high. Start flipping a coin to test if you have psychic powers, snap your fingers to see if you can make a banana, whatever. Just be ready to accept the result.
A "Jedi"? Obi-Wan Kenobi?
I wonder if you mean old Ben Kenobi. I don't know anyone named Obi-Wan, but old Ben lives out beyond the dune sea. He's kind of a strange old hermit.
Example of teachers not getting past Guessing the Teacher's Password: debating teachers on the value of pi. Via Gelman.
After the top level post about it, I bought a bottle of Melatonin to try. I've been taking it for 3 weeks. Here are my results.
Background: Weekdays I typically sleep for ~6 hours, with two half-hour naps in the middle of the day (once at lunch and once when I get home from work). Weekends I sleep till I feel like getting up, so I usually get around 10-11 hours.
I started with a 3mg pill, then switched to a ~1.5 mg pill (I cut them in half) after being extremely tired the next day. I take it about an hour before I go to sleep.
The first thing I noticed was that it makes falling asleep much easier. It's always been a struggle for me to fall asleep (usually I have to lie there for an hour or more), but now I'm almost always out cold within 20 minutes.
I've also noticed that I feel much less tired during the day, which was my impetus for trying it in the first place. However, I'm not sure how much of this is a result of needing less sleep, and how much is a result of me falling asleep faster and thus sleeping for longer. But it's definitely noticeable.
Getting up in the morning is not noticeably easier.
No evidence that it's habit forming. I'm currently not taking it on weekends (I found mys...
I have a couple of problems with anthropic reasoning, specifically the kind that says it's likely we are near the middle of the distribution of humans.
First, this relies on the idea that a conscious person is a random sample drawn from all of history. Okay, maybe; but it's a sample size of 1. If I use anthropic reasoning, I get to count only myself. All you zombies were selected as a side-effect of me being conscious. A sample size of 1 has limited statistical power.
ADDED: Although, if the future human population of the universe were over 1 trillion, a sample size of 1 would still give 99% confidence.
Second, the reasoning requires changing my observation. My observation is, "I am the Xth human born." The odds of being the 10th human and the 10,000,000th human born are the same, as long as at least 10,000,000 humans are born. To get the doomsday conclusion, you have to instead ask, "What is the probability that I was human number N, where N is some number from 1 to X?" What justifies doing that?
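For concreteness, the simplest version of the update being questioned here (Gott's delta-t argument, under a uniform self-sampling assumption) fits in a few lines. This is a sketch, not an endorsement; the 60-billion figure is an illustrative round number for humans born so far.

```python
def doomsday_upper_bound(birth_rank: float, confidence: float) -> float:
    """If your birth rank X is a uniform draw from 1..N, then
    P(X/N >= 1 - confidence) = confidence, which gives the bound
    N <= X / (1 - confidence) on the total number of humans N."""
    return birth_rank / (1 - confidence)

# With ~60 billion humans born so far (illustrative figure):
print(doomsday_upper_bound(60e9, 0.95))  # about 1.2 trillion total, at 95%
```

Note that the whole argument turns on treating yourself as a uniform random sample of all humans, which is exactly the premise disputed above.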
Some fantastic singularity-related jokes here:
http://crisper.livejournal.com/242730.html
http://www4.gsb.columbia.edu/ideasatwork/feature/735403/Powerful+Lies
A couple of articles on the benefits of believing in free will:
Vohs and Schooler, "The Value of Believing in Free Will"
Baumeister et al., "Prosocial Benefits of Feeling Free"
The gist of both is that groups of people experimentally exposed to statements in favour of either free will or determinism[1] acted, on average, more ethically after the free will statements than the determinism statements.
References from a Sci. Am. article.
[1] Cough.
ETA: This is also relevant.
I've written a reply to Bayesian Flame, one of cousin_it's posts from last year. It's titled Frequentist Magic vs. Bayesian Magic. I'd appreciate some review and comments before I post it here. Mainly I'm concerned about whether I've correctly captured the spirit of frequentism, and whether I've treated it fairly.
BTW, I wish there were a "public drafts" feature on LessWrong, where I could make a draft accessible to others by URL without it showing up in recent posts, so I wouldn't have to post a draft elsewhere to get feedback before officially publishing it.
Y'know, there's something this blogger I read once wrote that seems kinda applicable here:
The London meet is going ahead. Unless someone proposes a different time, or taw's old meetings are still going on and I just didn't know about them, it will be:
5th View cafe, on top of Waterstone's bookstore near Piccadilly Circus, Sunday, April 4 at 4 PM
Roko, HumanFlesh, I've got your numbers and am hoping you'll attend and rally as many Londoners as you can.
EDIT: Sorry, Sunday, not Monday.
No. Moving non-rigidly breaks things; differences in acceleration between different parts of an object are what break it.
I recently found something that may be of interest to LW readers:
This post at the Lifeboat Foundation blog announces two tools for testing your "Risk Intelligence":
The Risk Intelligence Game, which consists of fifty statements about science, history, geography, and so on, and your task is to say how likely you think it is that each of these statements is true. Then it calculates your risk intelligence quotient (RQ) on the basis of your estimates.
The Prediction Game, which provides you with a bunch of statements, and your task is to say how likely...
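Neither post says how the RQ score is actually computed, but the standard way to grade probability estimates against true/false answers is a proper scoring rule such as the Brier score. A minimal sketch, with made-up answers:

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and outcomes.
    0.0 is perfect; always answering 50% scores 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# (probability you assigned to "true", whether it was in fact true)
answers = [(0.9, 1), (0.8, 1), (0.6, 0), (0.3, 0), (0.1, 0)]
print(round(brier_score(answers), 3))  # 0.102
```

A proper scoring rule rewards you for reporting your actual credence, so there's no incentive to hedge toward 50% or exaggerate toward certainty.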
Karma creep: It's pleasant to watch my karma going up, but I'm pretty sure some of it is for old comments, and I don't know of any convenient way to find out which ones.
If some of my old comments are getting positive interest, I'd like to revisit the topics and see if there's something I want to add. For that matter, if they're getting negative karma, there may be something I want to update.
US Government admits that multiple-time convicted felon Pfizer is too big to fail. http://www.cnn.com/2010/HEALTH/04/02/pfizer.bextra/index.html?hpt=Sbin
Did the corporate death penalty fit the crime(s)? Or, how can corporations be held accountable for their crimes when their structure makes them unpunishable?
Is there any evidence that Bruce Bueno de Mesquita is anything other than a total fraud?
Applied rationality April Edition: convince someone with currently incurable cancer to sign up for cryonics: http://news.ycombinator.com/item?id=1239055
Hacker News rather than Reddit this time, which makes it a little easier.
A recent study (hiding behind a paywall) indicates people overestimate their ability to remember and underestimate the usefulness of learning. More ammo for the sophisticated arguer and the honest enquirer alike.
My parents are both vegetarian, and have been since I was born. They brought me up to be a vegetarian. I'm still a vegetarian. Clearly I'm on shaky ground, since my beliefs weren't formed from evidence, but purely from nurture.
Interestingly my parents became vegetarian because they perceived the way animals were farmed to be cruel (although they also stopped eating non-farmed animals such as fish), however my rationalization for not eating meat is that it is the killing of animals that is wrong (generalising from the belief that killing humans is worse tha...
I hope this isn't a vegatarianism argument, but remember that you have to rehabilitate both killing and cruelty to justify eating most meat, even if killing alone has held you back so far.
Perhaps the folks at LW can help me clarify my own conflicting opinions on a matter I've been giving a bit of thought lately.
Until about the time I left for college, most of my views reflected those of my parents. It was a pretty common Republican party-line cluster, and I'm concerned that I've anchored closer to favoring the death penalty than I should. I read studies about how capital punishment disproportionately harms minorities, and I think Robin Hanson had more to say about differences in social tier. Early in my college time, t...
My take on capital punishment is that it's not actually that important an issue. With pretty much anything that you can say about the death penalty, you can say something similar about life imprisonment without parole (especially with the way that the death penalty is actually practiced in the United States). Would you lock an innocent man in a cell for the rest of his life to keep 19 bad ones locked up?
Virtually zero chance of recidivism? True for both. Very expensive? Check. Wrongly convicted innocent people get screwed? Check - though in both cases they have a decent chance of being exonerated after conviction before getting totally screwed (and thus only being partially screwed). Could be considered immoral to do something so severe to a person? Check. Deprives people of an "inalienable" right? Check (life/liberty). Strongly demonstrates society's disapproval of a crime? Check (slight edge to capital punishment, though life sentences would be better at this if the death penalty wasn't an option). Applied disproportionately to certain groups? I think so, though I don't know the research. Strong deterrent? It seems like the death penalty should be a bit...
As you grow up, you start to see that the world is full of waste, injustice and bad incentives. You try frantically to tell people about this, and it always seems to go badly for you.
Then you grow up a bit more, get a bit wise, and realize that the mother-of-all-bad-incentives, the worst injustice, and the greatest meta-cause of waste ... is that people who point out such problems get punished, (especially) including pointing out this problem. If you are wise, you then become an initiate of the secret conspiracy of the successful.
Discuss.
In some periods of my life I've read about a book a day (almost entirely fiction), but I mostly look back at those periods with regret, because I suspect my reading was largely based on the desire to escape an unpleasant reality that I understood as inherent to reality rather than something contingent that I could do something about.
As an adult I have found myself reading non-fiction directed at life goals more often and fiction relatively less. Every so often I go 3 months without reading a book but other times I get through maybe 1 a week, but part of ...
1) I will do all things such that they maximize expected paperclip content of the universe, trading off smaller paperclip quantities for larger ones. I can't express a more specific algorithm than that without knowing the particulars of the situation.
2) I will do much better than humans at finding the ultimate morality* of the universe because I can spend all my resources to make perfect copies of myself that share my values and update knowledge and reflective value equilibria among each other, rather than having to pursue other values like "signalin...
It was so filled with wrong I couldn't even bother to finish it, and I usually enjoy crackpots from TED.
Harris has also written a blog post nominally responding to 'many of my [Harris'] critics' of his talk, but it seems to be more of a reply to Sean Carroll's criticism of Harris' talk (going by this tweet and the many references to Carroll in Harris' post). Carroll has also briefly responded to Harris' response.
Does brain training work? Not according to an article that has just appeared in Nature. Paper here, video here or here.
Rats have some ability to distinguish between correlation and causation
Can something be mathematical and yet not strict?
Overly-simple mathematical models don't always work in the real world.
David Chalmers has written up a paper based on the talk he gave at 2009 Singularity Summit:
From the blog post where he announced the paper:
PDF: "Are black hole starships possible?"
This paper examines the possibility of using miniature black holes for converting matter to energy via Hawking radiation, and propelling ships with that. Pretty interesting, I think.
I'm no physicist and not very math-literate, but there is one issue I pondered: namely, how would it be possible to feed matter to a mini black hole that has an attometer-scale event horizon and radiates petajoules of energy in all directions? The black hole would be an extremely tiny target behind a barrier of ridiculous energy density. The paper, rudimentary as it is, does not discuss this feeding issue.
Why doesn't brain size matter? Why is a rat with its tiny brain smarter than a cow? Why does the cow bother devoting all those resources to expensive gray matter? Eliezer posted this question in the February Open Topic, but no one took a shot at it.
FTA: "In the real world of computers, bigger tends to mean smarter. But this is not the case for animals: bigger brains are not generally smarter."
This statement seems ripe for semantic disambiguation. Cows can "afford" a larger brain than rats can, and although "large cow brain < sma...
I'd like to plug a facebook group:
Once we reach 4,096 members, everyone will donate $256 to SingInst.org.
Folks may also be interested in David Robert's group:
1 million people, $100 million to defeat aging.
My mother's sister has two children. One is eleven and one is seven. They are both being given an unusually religious education. (Their mother, who is Catholic, sent them to a prestigious Jewish pre-school, and they seem to be going through the usual Sunday School bullshit.) I find this disturbing and want to proselytize for atheism to them. Any advice?
ETA: Their father is non-religious. I don't know why he's putting up with this.
I'd put it differently: There's nothing intrinsically wrong with a 16-year-old and a 30-year-old having sex, any more than there is anything intrinsically wrong with two 30-year-olds having sex. There may be extrinsic factors in either case that make it problematic (somebody's being coerced or forced, somebody's married to someone else, somebody's intoxicated, somebody's being manipulative to get the sex). The way our society is set up, the first case is dramatically more likely to feature such extrinsic factors than the second case.
In a gravitational field steep enough to have nonnegligible tides (that is the phenomenon you were referring to, right?), there is no reference frame in which all parts of you remain at rest without tearing you apart. You can define some point in your head to be at rest, but then your feet are accelerating; and vice versa.
An Amanda-Knox-type situation would be one where the priors are extreme and there are obvious biases and probability-theoretic errors causing people to overestimate the strength of the evidence.
I think one would have to know a fair amount of biochemistry in order for food controversies to seem this way.
Although one might potentially be able to apply the heuristic "look at which side has the more generally impressive advocates" -- which works spectacularly well in the Knox case -- to an issue like this.
"Update: Tom McCabe has created a sub-Reddit to use for assorted discussions instead of relying on open threads. Go there for the sub-Reddit and discussion about it, and go here to vote on the idea."
Attention everyone: This post is currently broken for some unknown reason. Please use the new post at http://lesswrong.com/lw/212/announcing_the_less_wrong_subreddit_2/ if you want to discuss the sub-Reddit. The address of the sub-Reddit is http://www.reddit.com/r/LessWrong
Why do you say that? What do you mean?
What do you value?
Here are some alternate phrasings in an attempt to find the same or similar reasoning (it is not clear to me whether these are separate concepts):
Here's another article asking a similar question: Post Your Utility Function. I think people did a poor job answering it back then.
Well, User:Rain, that's about the story of my existence right there. What kinds of paperclips are the right ones? What tradeoffs should I make?
However, regarding the specific matters you bring up, they are mostly irrelevant. Yes, there could be some conceivable situation in which I have to trade off paperclips now against paperclips later. But the way it usually works is that once I obtain or make a paperclip, I move it to the safe zone, where I'll pretty much have it forever. Also, it's obviously the number of paperclips that matters, and the constraint on bizarre paperclips is obviously that they have to be able to (counterfactually) hold sheets of paper together.
If you want to get past this abstract philosophizing and on to some concrete problems, it would be better to talk about the dilemma that User:h-H posed to me, in which I must consider alternate models of paperclipping that don't have the shape of standard paperclips. Here's my recent progress on thinking about the issue.
My current difficulty is extrapolating my values to cover unexpected situations like this, starting from the simplest algorithm I can find which generates my current preference. The problem is that...
How to deal with a program that has become self aware? - April Fools on StackOverflow.
It isn't tautological. In fact, it's been my experience that this is simply not true. There seem to be times that I prefer to wallow in self-pity rather than feel happiness. Anger also seems to preclude happiness in the moment, but there are also times that I prefer to be...
I've become a connoisseur of hard paradoxes and riddles, because I've found that resolving them always teaches me something new about rationalism. Here's the toughest beast I've yet encountered, not as an exercise for solving but as an illustration of just how much brutal trickiness can be hidden in a simple-looking situation, especially when semantics, human knowledge, and time structure are at play (which happens to be the case with many common LW discussions).
Does anyone have suggestions for how to motivate sleep? I've hacked all the biological problems so that I can actually fall asleep when I order it, but me-Tuesday generally refuses to issue an order to sleep until it's late enough at night that me-Wednesday will sharply regret not having gone to bed earlier.
I've put a small effort into setting a routine, and another small effort into forcing me-Tuesday to think about what I want to accomplish on Wednesday and how sleep will be useful for that; neither seems to be immediately useful. If I reorganize my en...
Note that Bayesian probability is not absolute, so it's not appropriate to demand absolute morality in order to put probabilities on moral claims. You just need a meaningful (subjective) concept of morality. This holds for any concept one can consider: any statement can be assigned a subjective probability, and morality isn't an exceptional special case.
If morality is a fixed computation, you can place probabilities on possible outputs of that computation (or more concretely, on possible outputs of an extrapolation of your or humanity's volition).
It takes about an hour to familiarize yourself with all of the relevant information in the Knox case; I imagine it would take a lot longer in this case. It might still work, though, if enough people were willing to invest the time, especially since most people don't already have rigid, well-formed opinions on the issue.
I am not taking any other drugs or medication. The only thing that would qualify as a stimulant is caffeine - I have a coffee in the morning and a soda at lunch.
I recently got into some arguments with foodies I know on the merits (or lack thereof) of organic / local / free-range / etc. food, and this is a topic where I find it very difficult to find sources of information that I trust as reflective of some sort of expert consensus (insofar as one can be said to exist). Does anyone have any recommendations for books or articles on nutrition/health that hold up under critical scrutiny? I trust a lot of you as filters on these issues.
Having read the quantum physics sequence I am interested in simulating particles at the level of quantum mechanics (for my own experimentation and education). While the sequence didn't go into much technical detail, it seems that the state of a quantum system comprises an amplitude distribution in configuration space for each type of particle, and that the dynamics of the system are governed by the Schrödinger equation. The usual way to simulate something like this would be to approximate the particle fields as piecewise linear and update iteratively accor...
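One workable approach for small systems, sketched below under hbar = m = 1 with arbitrary illustrative grid sizes, is the split-step Fourier method: apply the potential term as a phase in position space and the kinetic term as a phase in momentum space, alternating each time step.

```python
import numpy as np

# 1D grid and the matching momentum-space wavenumbers
N, L = 256, 40.0
dx = L / N
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
dt, steps = 0.01, 100

# Gaussian wave packet moving to the right with momentum k0
k0 = 2.0
psi = np.exp(-x**2) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)  # normalize

V = np.zeros(N)  # free particle; substitute a potential here to experiment
for _ in range(steps):
    psi *= np.exp(-1j * V * dt / 2)  # half step of the potential phase
    psi = np.fft.ifft(np.exp(-1j * k**2 * dt / 2) * np.fft.fft(psi))  # kinetic
    psi *= np.exp(-1j * V * dt / 2)  # second half step of the potential

norm = np.sum(np.abs(psi) ** 2) * dx
print(round(norm, 6))  # phases have unit modulus, so the norm stays 1.0
```

Since each step multiplies by unit-modulus phases (and the FFT is unitary), the evolution conserves probability by construction, which makes norm conservation a handy sanity check. Note the FFT implies periodic boundary conditions, so the packet will wrap around if you run it long enough.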
The problem with choosing a day at random is, what if it turns out to be Friday? Friday would not be a surprise, since the test will be either Monday, Wednesday, or Friday, and so by Thursday the students would know by process of elimination that it had to be Friday.
You misunderstand me - I maintain that an obvious unstated condition in the announcement is that there will be a test next week. Under this condition, the student will be surprised by a Wednesday test but not a Friday test, and therefore
and, if I guess your algorithm correctly,
[edit: algebra corrected]
Arithmetic, Population, and Energy by Dr. Albert A. Bartlett, YouTube playlist. Part One. 8 parts, ~75 minutes.
Relatively trivial, but eloquent: Dr. Bartlett describes some properties of exponential functions and their policy implications when there are ultimate limiting factors. Most obvious policy implication: population growth will be disastrous unless halted.
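The "rule of 70" doubling-time arithmetic at the heart of the talk is easy to check directly; a quick sketch, assuming steady compound growth:

```python
import math

def doubling_time(percent_per_year: float) -> float:
    """Exact doubling time in years for steady compound growth."""
    return math.log(2) / math.log(1 + percent_per_year / 100)

# Exact value vs. the rule-of-70 approximation (70 / growth rate):
for p in (1, 2, 7):
    print(p, round(doubling_time(p), 1), round(70 / p, 1))
```

Bartlett's point is that even "modest" growth rates like 1-2% per year double a population in a human lifetime, which intuition badly underestimates.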
An extensive observation-based discussion of why people leave cults. Worth reading, not just for the details, but because it's made very clear that leaving has to make emotional sense to the person doing it. Logical argument is not enough!
People leave because they've been betrayed by leaders, they've been influenced by leaders who are on their own way out of the cult, they find the world is bigger and better than the cult has been telling them, the fears which drove a person into a cult get resolved, and /or life changes which show that the cult isn't working for them.
Does anyone know a popular science book about, how should I put it, statistical patterns and distributions in the universe? Like, what kinds of things follow normal distributions and why, why power laws emerge everywhere, why scale-free networks show up all over the place, etc.
Sorry for ranting instead of answering your question, but "power laws emerge everywhere" is mostly bullshit. Power laws are less ubiquitous than some experts want you to believe. And when you do see them, the underlying mechanisms are much more diverse than what these experts will suggest. They have an agenda: they want you to believe that they can solve your (biology, sociology, epidemiology, computer networks etc.) problem with their statistical mechanics toolbox. Usually they can't.
For some counterbalance, see Cosma Shalizi's work. He has many amusing rants, and a very good paper:
Gauss Is Not Mocked
So You Think You Have a Power Law — Well Isn't That Special?
Speaking Truth to Power About Weblogs, or, How Not to Draw a Straight Line
Power-law distributions in empirical data
Note that this is not a one-man crusade by Shalizi. Many experts of the fields invaded by power-law-wielding statistical physicists wrote debunking papers such as this:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.21.8169
Another very relevant and readable paper:
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.11.6305
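For concreteness, the main practical fix recommended in the "Power-law distributions in empirical data" paper linked above is short enough to sketch: estimate the exponent by maximum likelihood instead of regressing a log-log histogram. This assumes a continuous power law with known x_min; the synthetic data and alpha = 2.5 are illustrative.

```python
import math
import random

def mle_alpha(samples, x_min):
    """Maximum-likelihood exponent for a continuous power law
    p(x) ~ x^-alpha on [x_min, inf), rather than a log-log
    least-squares fit (which is biased and has no error theory)."""
    tail = [x for x in samples if x >= x_min]
    return 1 + len(tail) / sum(math.log(x / x_min) for x in tail)

# Sanity check on synthetic data drawn from a true power law
# via inverse-CDF sampling, with alpha = 2.5:
random.seed(0)
alpha, x_min = 2.5, 1.0
data = [x_min * random.random() ** (-1 / (alpha - 1)) for _ in range(100_000)]
print(round(mle_alpha(data, x_min), 1))  # recovers roughly 2.5
```

The paper's further point is that a good-looking exponent isn't enough: you also need a goodness-of-fit test and comparisons against alternatives like the lognormal, which often fit "power-law" data just as well.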
My reaction was: bad talk, wrong answers, not properly thought through.
Is there any chance that we (a) CAN'T restrict AI to be friendly per se, but (b) (conditional on this impossibility) CAN restrict it to keep it from blowing up in our faces? If friendly AI is in fact not possible, then first generation AI may recognize this fact and not want to build a successor that would destroy the first generation AI in an act of unfriendliness.
It seems to me like the worst case would be that Friendly AI is in fact possible...but that we aren't the first to discover it. In which case AI would happily perpetuate itself. But what are ...
Are you sure that is your real reason for valuing the latter? I doubt it.
Are there any Germans, preferably from around Stuttgart, who are interested in forming a society for the advancement of rational thought? Please PM me.
He discusses how science can answer factual questions, thus resolving uncertainty in moral dogma defined conditionally on those answers. This is different from figuring out the moral questions themselves.
I know I asked this yesterday, but I was hoping someone in the Bay Area (or otherwise familiar) could answer this:
Monica Anderson: Anyone familiar with her work? She apparently is involved with AI in the SF Bay Area, and is among the dime-a-dozen who have a Totally Different approach to AI that will work this time. She made this recent slashdot post (as "technofix") that linked a paper (PDF WARNING) that explains her ideas and also linked her introductory site and blog.
It all looks pretty flaky to me at this point, but I figure some of you must ...
A couple of physics questions, if anyone will indulge me:
Is quantum physics actually an improvement in the theory of how reality works? Or is it just building uncertainty into our model of reality? I was browsing A Brief History of Time at a bookstore, and the chapter on the Heisenberg uncertainty principle seem to suggest the latter - what I read of it, anyway.
If this is just a dumb question for some reason, feel free to let me know - I've only taken two classes in physics, and we never escaped the Newtonian world.
On a related note, I'm looking for a ...
I'm looking at the question of whether it's certainly the case that getting an FAI is a matter of zeroing in directly on a tiny percentage of AI-space.
It seems to me that an underlying premise is that there's no reason for a GAI to be Friendly, so Friendliness has to be carefully built into its goals. This isn't unreasonable, but there might be non-obvious pulls towards or away from Friendliness, and if they exist, they need to be considered. At the very least, there may be general moral considerations which incline towards Friendliness, and which would be...
I wonder how alarming people find this? I guess that if something fooms, this will provide the infrastructure for an instant world takeover. OTOH, the "if" remains as large as ever.
CFS: creative non-fiction about immortality
BOOK PROJECT: Immortality postmark deadline August 6, 2010
For a new book project to be published by Southern Methodist University Press, entitled "Immortality," we're seeking new essays from a variety of perspectives on recent scientific developments and the likelihood, merits and ramifications of biological immortality. We're looking for essays by writers, physicians, scientists, philosophers, clergy--anyone with an imagination, a vision of the future, and a dream (or fear) of living forever.
Essays must...
How does the notion of time consistency in decision theory deal with the possibility of changes to our brains/source code? For example, suppose I know that my brain is going to be forcibly re-written in 10 minutes, and that I cannot change this fact. Then decisions I make after that modification will differ from those I make now, in the presence of the same information (?).
If you were going to predict the emergence of AGI by looking at progress towards it over the past 40 years and extrapolate into the future, then what parameter(s) would you measure and extrapolate?
Kurzweil et al measure raw compute power in flops/$, but as has been much discussed on LessWrong there is more to AI than raw compute power. Another popular approach is to chart progress in terms of the animal kingdom, saying things like "X years ago computers were as smart as jellyfish, now they're as smart as a mouse, soon we'll be at human level", b...
How do you get that result while requiring that the test occur next week? It is that assumption that drives the 'paradox'.
In spite of the rather aggressive signaling here in favor of atheism, I'm still an agnostic on the grounds that it isn't likely that we know what the universe is ultimately made of.
I'm even willing to bet that there's something at least as weird as quantum physics waiting to be discovered.
Discussion here has led me to think that whatever the universe is made of, it isn't all that likely to lead to a conclusion there's a God as commonly conceived, though if we're living in a simulation, whoever is running it may well have something like God-like omnipotence...
I have a couple of questions about UDT if anyone's willing to bite. Thanks in advance.
Mass Driver's recent comment about developing the US Constitution being like the invention of a Friendly AI opens up the possibility of a mostly Friendly AI-- an AI which isn't perfectly Friendly, but which has the ability to self-correct.
Is it more possible to have an AI which never smiley-faces or paperclips or falls into errors we can't think of than to have an AI which starts to screw up, but can realize it and stops?
Is anybody interested in finding a study buddy for the material on Less Wrong? I think a lot of the material is really deep -- sometimes hard to internalize and apply to your own life even if you're articulate and intelligent -- and that we would benefit from having a partner to go over the material with, ask tough questions, build trust, and basically learn the art of rationality together. On the off chance that you find Jewish analogies interesting or helpful, I'm basically looking for a chevruta partner, although the sacredish text in question would be the Less Wrong sequences instead of the Bible.
I read the linked-to comment, but still don't know what reference class tennis is.
Only the benefit of the doubt.
If you actually value private property because you value individual responsibility then your core value system is based on confusion. Assuming you meant "I value personal responsibility, I value private property, these two beliefs are politically aligned and here is one way that one can work well with the other" puts your position at least relatively close to sane.
No more than Chewbacca is an Ewok. He just isn't, even if they both happen to be creatures from Star Wars.
The stuff Ferriss covers is normal enough. It's better to think of it as remedial reading techniques for people (most everyone) who don't read well than as speeding up past 'normal'. For example, if you're subvocalizing everything you read, You're Doing It Wrong. For your average LW reader, I'd suggest that anything below 300WPM is worth fixing.
We can't test for values -- we don't know what they are. A negative test might be possible ("this thing surely has wrong values"), as a precaution, but not a positive test.