Once again I am deeply impressed by how Yvain can explain things that I have vaguely felt for a long time but couldn't quite put into words.
Specifically the concept of "safe spaces" and whether only some groups deserve them and other groups don't. And more generally, whether only members of some groups have feelings and can be hurt (or perhaps whether only the feelings and pain of some groups matter), or whether we are all to some degree fragile and valuable.
And how the "safe space" of one group sometimes cannot be a "safe space" for another group, and it's okay to simply have both of them. And, as a consequence, how by insisting that every place must be a "safe space" for group X, we de facto say that group Y should have no "safe space", ever.
A few months ago, I re-read HPMOR in its entirety, and had an insight about the Hermione / feminism issue that I'd previously missed when I wrote this comment. I never got around to saying it anywhere, so I'm saying it here:
I'd previously written:
HPMOR kinda feels off because canonically, Hermione is unambiguously the most competent person in Harry's year, and has a good chance of growing up to be the most competent person in the 'verse. Harry is kept at the center of the story by his magical connection to Voldemort. In HPMOR, in contrast, Harry is kept at the center of the story by competence and drive. It's going to be very hard to do that without it feeling like Hermione is getting shafted.
But actually, HPMOR closely parallels canon on this point: Methods!Hermione got just as much of an intelligence upgrade as Methods!Harry did, so she's still unambiguously more competent than him, at least before repeated use of his mysterious dark side gave him a mental age-up. This is more or less explicitly pointed out in chapter 21:
...She'd done better than him in every single class they'd taken. (Except for broomstick riding which was like gym class, it didn't count.) She'd gotten real
I'm planning to meet with my local Department of Services for the Blind tomorrow; the stated purpose of the meeting is to discuss upcoming life changes/needs/etc. This appears to be exactly what I need at the moment, but I'm concerned that I'm not going to be optimally prepared, so I'd like to post some details here to increase the chances of useful feedback.
(For transparency's sake: I'm legally blind, unemployed, living with my parents until they take the necessary steps to get me moved into the place I own, with student loan payments outpacing my SSI benefits by over $200/month, and stuck in the bible belt.)
I was absurdly lucky: the counselor I spoke to is new and motivated to put in the necessary effort for everything, and went to high school with my stepmother; it also turns out that the in-state training center has a thirty-day trial period, during which commitment is a non-issue. They also offered to provide any required technology, be it laptops or note takers or whatever. It could start as early as the first week of February, which is early enough that I wouldn't need to worry about security at my property. So on the whole, a surprisingly good day.
If you have not dealt with something like the DSB before, you're probably drastically overestimating how much mental effort they are willing to expend to help you. (I dealt with a similar agency, the California Department of Rehabilitation, many years ago.)
Although it is of course good to try to estimate in real time during the interview how much mental effort they are willing to make, I suggest that the plan you go into the meeting with assume it is low. E.g. you might consider just asking for a notetaker over and over again.
Try to appear a little dumber than you actually are.
I would not risk alienating your parents to try for a deeper conversation with DSB staff.
After doing a large amount of research, I feel fairly confident saying that high-dose potassium supplementation was the initial trigger that pushed me into the two-year nightmare struggle with migraines which I am still dealing with. I didn't do anything beyond the recommendations that you can find on gwern's page, and gwern doesn't really recommend anything that is technically unsafe, but the fact is that (apparently!) some people are migraine-prone, and these people definitely should not do what I did. (To be clear, I'm not blaming gwern in any way; that's merely a "community reference" that a lot of folks refer to.)
I've found all the productivity posts on LW that I've read mildly disturbing. They all give a sense of excessive regimentation, as well as of giving up enjoyable activities - sacrificing a lot for a single goal (or a few goals). I'm sure that's good for getting work done, but there's more to life than work - there's actually enjoying life, having fun, etc.
I think you're talking about So8res's recent posts, but I think they're exceptional. Most productivity posts are about avoiding spending time web surfing, particularly during time that has been budgeted for work. They do this partly because fragmenting time is bad and partly because there are better ways to have fun.
My experience is the opposite; productivity generally feels awesome, sitting around doing nothing or wandering around the internet is generally depressing. (This is insufficient as a motivator for behavior.)
If you're expecting the singularity within a century, does it make sense to put any thought into eugenics except for efforts to make it easy to avoid the worst genetic disorders?
I don't see any discussion about this blog post by Mike Travers.
His point is that people trying to solve Friendly AI are doing so because it's an "easy", abstract problem set well into the future. He contends that we are already taking significant damage from artificially created human systems, like the financial system, which can be ascribed agency and whose goals are quite different from improving human life. These systems are quite akin to "Hostile AI". This, he contends, is the really hard problem.
Here is a quote from the blogpost (which is from a Facebook comment he made):
I am generally on the side of the critics of Singulitarianism, but now want to provide a bit of support to these so-called rationalists. At some very meta level, they have the right problem — how do we preserve human interests in a world of vast forces and systems that aren’t really all that interested in us? But they have chosen a fantasy version of the problem, when human interests are being fucked over by actual existing systems right now. All that brain-power is being wasted on silly hypotheticals, because those are fun to think about, whereas trying to fix industrial capitalism so it doesn’t wreck the human life-support system is hard, frustrating, and almost certainly doomed to failure.
It's a short post, so you can read it quickly. What do you think about his argument?
I think it's silly. I suspect MIRI and every other singularitarian organization, and every other individual working on the challenges of unfriendly AI, could fit comfortably in a 100-person auditorium.
In contrast, "trying to fix industrial capitalism" is one of the main topics of political dispute everywhere in the world. "How to make markets work better" is one of the main areas of research in economics. The American Economic Association has 18,000 members. We have half a dozen large government agencies, with budgets of hundreds of millions of dollars each, devoted to protecting people from hostile capitalism. (The SEC, the OCC, the FTC, etc. are all ultimately about trying to curb capitalist excess. Each of these organizations has a large enforcement bureaucracy, and also a number of full-time salaried researchers.)
The resources and human energy devoted to unfriendly AI are tiny compared to the amount expended on politics and economics. So it's strange to complain about the diversion of resources.
Excellent point. I'm surprised this did not occur to me. This reminds me of Scott Aaronson's reply when someone suggested that quantum computational complexity is quite unimportant compared to experimental approaches to quantum computing and therefore shouldn't get much funding:
I find your argument extremely persuasive—assuming, of course, that we’re both talking about Bizarro-World, the place where quantum complexity research commands megabillions and is regularly splashed across magazine covers, while Miley Cyrus’s twerking is studied mostly by a few dozen nerds who can all fit in a seminar room at Dagstuhl.
Finally have a core mechanic for my edugame about Bayesian networks. At least on paper.
This should hopefully be my last post before I actually have a playable prototype done, even if a very short one (like the tutorial level or something).
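(For anyone unfamiliar with Bayesian networks, here is a minimal sketch in Python of the sort of inference such a game would presumably teach. The Rain/WetGrass network and all the numbers are my own illustrative choices, not anything from the actual game.)

```python
# Minimal two-node Bayesian network: Rain -> WetGrass.
# All probabilities are made up for illustration.

p_rain = 0.2                                  # prior P(Rain = true)
p_wet_given_rain = {True: 0.9, False: 0.1}    # CPT: P(WetGrass = true | Rain)

# Joint probability P(Rain = r, WetGrass = true) for each value of r.
joint = {r: (p_rain if r else 1 - p_rain) * p_wet_given_rain[r]
         for r in (True, False)}

# Bayes' rule: condition on the evidence "the grass is wet".
posterior = joint[True] / sum(joint.values())
print(f"P(Rain | wet grass) = {posterior:.3f}")  # ~0.692
```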
I plan to quit my job and move to an Eastern European country with a low cost of living in March. Because of this I am looking for any job that I can do online for around 20 hours a week. I am looking for recommendations on where to look, where to ask, who to contact that might help me, etc. Any help will be appreciated.
In light of gwern's good experiences with one, I too now have an anonymous feedback form. You can use it to send me feedback on my personality, writing, personal or professional conduct, or anything else.
(My thoughts are still not sufficiently organized that I’m making a top level post about this, but I think it’s worth putting out for discussion.)
A couple of years ago, in a thread I can no longer find, someone argued that they valued the pleasure they got from defecation, and that they would not want to bioengineer away the need to do so. I thought this was ridiculous.
At the same time, I see many Lesswrongers view eating as a chore that they would like to do away with. And yet I also find this ridiculous.
So I was thinking about where the difference lay for me. My working hypothesis is that there are two elements of pleasure: relief and satisfaction. Defecation, or a drink of water when you're very thirsty, brings you relief, but not really satisfaction. Eating a gourmet meal, on the other hand, may or may not bring relief, depending on how hungry you are when you eat it, but it's very satisfying. The ultimate pleasure is sex, which culminates in a very intense sense of both relief and satisfaction. (Masturbation, at least from a male perspective, can provide the relief but only a tiny fraction of the satisfaction – hence the difference in pleasure from sex.)
I can understand...
I'm having some trouble keeping myself from browsing to timesink websites at work. (And I'm self-employed, so it's not like I'm even getting paid for it.) Anyone know of a good Chrome app for blocking websites?
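(Not a Chrome app, but if an extension turns out to be too easy to disable, a blunter fallback is to redirect the offending domains at the OS level through the hosts file. A minimal sketch, assuming you run it with admin rights; the domain list is just an example:)

```python
# Blunt OS-level blocker: point timesink domains at localhost via the
# hosts file. Must be run with admin/root rights; domains are examples.
import platform

SITES = ["reddit.com", "www.reddit.com", "news.ycombinator.com"]

HOSTS_PATH = (r"C:\Windows\System32\drivers\etc\hosts"
              if platform.system() == "Windows" else "/etc/hosts")

with open(HOSTS_PATH, "a") as f:
    for site in SITES:
        f.write(f"127.0.0.1 {site}\n")  # browser now resolves site to localhost
```

Undoing it means deleting those lines by hand, which is just enough friction to help.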
What are you supposed to do when you've nailed up a post that is generally disliked? I figured that once this got to -5 karma it would disappear from view and be forgotten. But it just keeps going down and it's now at -12. This must mean that someone saw the title of it at -11 karma and thought "Sounds promising! Reading this now will be a good use of my time." And then they read it and went: "Arrgh! This turned out to be a disappointing post. Less like this, please. I'd better downvote it to warn others."
What does etiquette suggest I do here? Am I supposed to delete the post to keep people from falling into the trap of reading it? But I like the discussion it spawned and I'd like to preserve it. I'm at a loss and I can't find relevant advice at the wiki.
If we don't have downvoted topics some of the time, it means we are being too conservative about what we judge will be useful to others. Only worry if too large a fraction of your stuff gets downvoted.
This must mean that someone saw the title of it at -11 karma and thought "Sounds promising! Reading this now will be a good use of my time." And then they read it and went: "Arrgh! This turned out to be a disappointing post. Less like this, please. I'd better downvote it to warn others."
Not necessarily. Seeing a heavily downvoted post seems to trigger some kind of group-norm-reinforcement instinct in me: I often end up wanting to read it in the hope that it's just as bad as the downvotes imply, so that I can join the others in downvoting it. And I actually get pleasure out of being able to downvote it.
I'm not very proud of acting on that impulse, especially since I'm not going to be able to objectively evaluate a post's merit if I start reading it while hoping that it's bad. But sometimes I do act on it regardless. (I didn't do that with your post, though.)
What are you supposed to do when you've nailed up a post that is generally disliked?
Grin and say "Fuck 'em!"
Eh, if someone clicks on an article at -11, then feels reading it was a waste of time, he should blame himself, not you.
I see people mention a 'rationalist house' from time to time as though it is somewhere they live, and everyone else seems to know what they're talking about. What are they talking about? Are there many of these? Are these in some way actually planned places, or just an inside joke of some kind?
Every single time the subject of overpopulation comes up and I offer my opinion (which is that in some respects the world is overpopulated and that it would benefit us to have a smaller or negative population growth rate), I seem to get one or two negative votes. The negative karma isn't nearly as important to me as the idea that I might be missing some fundamental idea and that those who downvote me are actually right.
Especially, this recent thread: http://lesswrong.com/r/discussion/lw/jgg/we_need_new_humans_please_help/ has highlighted this issue for me again.
So, I'm opening my mind, trying to set aside my biases, and hereby asking all those who disagree with me to give me a rational argument for why I'm wrong and why the world needs more people. If I stray from my objective and take a biased viewpoint, I deserve all the negative karma you can throw at me.
Well, let's try to be a bit more specific about this.
First, what does the claim that "the world is overpopulated" mean? It implies a metric of some sort to which we can point and say "this is too high", "this is too low", "this is just right". I am not sure what this metric might be.
The simplest metric used in biology is an imminent population crash -- if the current count of some critters in an ecosystem is pretty sure to rapidly contract soon we'd probably speak of overpopulation. That doesn't seem to be the case with respect to humans now.
Second, the overpopulation claim is necessarily conditional on a specific level of technology. It is pretty clear that twenty-first-century technology can successfully sustain more people than, say, pre-industrial technology. One implication is that future technological progress is likely to change whatever number we consider to be the sustainable carrying capacity of Earth now.
Third, and here things get a bit controversial, it all depends (as usual) on your terminal goals. If your wish is for peace and comfort of Mother Gaia, well, pretty much any number of humans is overpopulation. But let's take a common (thoug...
I don't recall downvoting you, but I think that there is a very high chance technology makes the problem moot - either by killing us or by alleviating scarcity until a superintelligence happens.
(Reposted from the bottom of the last open thread.)
Two unrelated things (should I make these in separate posts or...?):
1.) Given recent discussion on social justice advocates and their... I don't know the best way to describe this, sometimes poor epistemological habits? I thought I would post this
http://geekfeminism.wikia.com/wiki/Concern_troll
Is it just me, or is this, like, literally the worst concept ever? It literally just means "someone slightly to the right of me" or "someone who does anything that could be considered cheering for the other side", backed with a dubious claim tha...
If it's worth saying, but not worth its own thread, even in Discussion, it goes here.