Is Less Wrong dying?
Some observations...
About all I look at on LW anymore is the Open Discussion Thread, Rationality Quotes, and the link to Slate Star Codex. CFAR's and MIRI's websites, on the other hand, gave me the impression that they are getting more traction and perhaps making some money.
Has LW run its course?
I think it's a little early to predict the end, but there's less I'm interested in here, and I'm having trouble thinking of things to write about, though I can still find worthwhile links for open threads.
Is LW being hit by some sort of social problem, or have we simply run out of things to say?
I'd add "Metacontrarianism is on the rise" to your list. Many of the top posts now are contrary to at least the spirit of the sequences, if not the letter, or so it feels to me.
Has LW run its course?
It seems to be a common sentiment, actually. I mentioned this a few times on #lesswrong and the regulars there appear to agree. Whether this is some sort of confirmation bias, I am not sure. Fortunately, there is a way to measure it:
Look at the recent Main entries: http://lesswrong.com/recentposts/
Then look at the entries from about 1 year ago: http://lesswrong.com/recentposts/?count=250&after=t3_gnv
Count the interesting articles from each period and compare the numbers; a rough sketch of one way to compare the two tallies follows.
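Treating "interesting" as a yes/no judgment on each article, a two-proportion z-test can tell you whether the difference between the two periods is bigger than chance. A minimal Python sketch; the counts below are placeholders, not real tallies:

```python
import math

def two_proportion_z(interesting_now, total_now, interesting_then, total_then):
    """Two-proportion z-test: is the 'interesting' rate different between periods?"""
    p1 = interesting_now / total_now
    p2 = interesting_then / total_then
    # Pooled proportion under the null hypothesis of no difference
    p = (interesting_now + interesting_then) / (total_now + total_then)
    se = math.sqrt(p * (1 - p) * (1 / total_now + 1 / total_then))
    z = (p1 - p2) / se
    # Two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Placeholder counts -- substitute your own tallies from the two pages.
z, p = two_proportion_z(interesting_now=8, total_now=50,
                        interesting_then=15, total_then=50)
print(f"z = {z:.2f}, p = {p:.3f}")
```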
It's mostly that Eliezer has taken to disseminating his current work via open Facebook discussions. I can see how that choice makes sense, from his position, but it's still sad for the identity-paranoid and the nostalgic remnants still roaming these forgotten halls. Did I have a purpose once? It's been so long.
I'm not sure total participants is a good metric to use in making that determination. It depends on people's level of participation and engagement, I think.
As per issue #389, I've just pushed a change to meetups. All future meetup posts will be created in /r/meetups to un-clutter /r/discussion a little bit.
Below is an edited version of an email I prepared for someone about what CS researchers can do to improve our AGI outcomes in expectation. It was substantive enough I figured I might as well paste it somewhere online, too.
I'm currently building a list of what will eventually be short proposals for several hundred PhD theses / long papers that I think would help clarify our situation with respect to getting good outcomes from AGI, if I could persuade good researchers to research and write them. A couple dozen of these are in computer science broadly; the others are in economics, history, etc. I'll write out a few of the proposals as 3-5 page project summaries, and the rest I'll just leave as two-sentence descriptions until somebody promising contacts me and tells me they want to do it and want more detail. I think of these as "superintelligence strategy" research projects, similar to the kind of work FHI typically does on AGI. Most of these projects wouldn't only be interesting to people interested in superintelligence, e.g. a study building on these results on technological forecasting would be interesting to lots of people, not just those who want to use the results to g...
Continued...
Top-down research aimed at high assurance AGI tries to envision what we'll need a high assurance AGI to do, and starts playing with toy models to see if they can help us build up insights into the general problem, even if we don't know what an actual AGI implementation will look like. Past examples of top-down research of this sort in computer science more generally include:
Here's a comment that I posted in a discussion on Eliezer's FB wall a few days back but didn't receive much of a response there; maybe it'll prompt more discussion here:
--
So this reminds me, I've been thinking for a while that VNM utility might be a hopelessly flawed framework for thinking about human value, but I've had difficulties putting this intuition in words. I'm also pretty unfamiliar with the existing literature around VNM utility, so maybe there is already a standard answer to the problem that I've been thinking about. If so, I'd appreciate a pointer to it. But the theory described in the linked paper seems (based on a quick skim) like it's roughly in the same direction as my thoughts, so maybe there's something to them.
Here's my stab at describing what I've been thinking: VNM utility implicitly assumes an agent with "self-contained" preferences who is trying to maximize the satisfaction of those preferences. By self-contained, I mean that they are not a function of the environment, though they can and do take inputs from the environment. So an agent could certainly have a preference that made him e.g. want to acquire more money if he had less than ...
I also have lots of objections to using VNM utility to model human preferences. (A comment on your example: if you conceive of an agent as accruing value and making decisions over time, to meaningfully apply the VNM framework you need to think of their preferences as being over world-histories, not over world-states, and of their actions as being plans for the rest of time rather than point actions.) I might write a post about this if there's enough interest.
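To make the parenthetical point concrete, here is a minimal formal sketch (my own rendering of the standard VNM setup, not anything from the linked paper). Let Omega be the set of world-histories, and let a lottery L assign probability L(omega) to each history. The VNM theorem then says that if preferences over lotteries satisfy the axioms, there is a utility function u over histories such that:

```latex
\[
  L \succeq L' \iff
  \sum_{\omega \in \Omega} L(\omega)\, u(\omega)
  \;\ge\;
  \sum_{\omega \in \Omega} L'(\omega)\, u(\omega),
  \qquad u : \Omega \to \mathbb{R}.
\]
% An "action" is then a plan \pi for the rest of time, evaluated by the
% distribution over histories it induces:
\[
  \pi^{*} = \operatorname*{arg\,max}_{\pi}\;
  \mathbb{E}_{\omega \sim \pi}\!\left[ u(\omega) \right].
\]
```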
I've always thought of it as preferences over world-histories and I don't see any problem with that. I'd be interested in the post if it covers a problem with that formulation.
Robin Hanson writes about rank linear utility. This formalism asserts that we value options by their rank in a list of options available at any one time, making it impossible to construct a coherent classical utility function.
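A toy illustration of why menu-dependence breaks classical utility (my own sketch of the general idea, not Hanson's exact formalism): if an option's value depends on its rank within the currently available menu, the relative valuation of two fixed options can change when an irrelevant third option is added, so no single menu-independent utility function can reproduce the valuations.

```python
def rank_values(menu, quality, weights=(1.0, 0.6, 0.3, 0.1)):
    """Assign value by rank in the current menu: the best option gets the
    largest weight, the second-best the next, and so on (a toy rank rule)."""
    ranked = sorted(menu, key=lambda o: quality[o], reverse=True)
    return {opt: weights[rank] for rank, opt in enumerate(ranked)}

quality = {"A": 10, "B": 5, "C": 7}

print(rank_values({"A", "B"}, quality))       # A: 1.0, B: 0.6 -> A/B ratio ~1.67
print(rank_values({"A", "B", "C"}, quality))  # A: 1.0, C: 0.6, B: 0.3 -> A/B ratio ~3.33
# The A-to-B value ratio changes when C enters the menu, so no fixed
# utility function u(A), u(B) can generate these valuations.
```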
I recently saw an advertisement which was such a concentrated piece of antirationality I had to share it here. Imagine a poster showing a man's head and shoulders gazing inspiredly past the viewer into the distance, rendered in posterised red, white, and black with a sort of socialist realism flavour. The words: "No Odds Too Long. No Dream Too Great. The Believer."
If that was all, it would just be a piece of inspirational nonsense. But what was it advertising?
Ladbrokes. A UK chain of betting shops.
The United States green card lottery is one of the best lotteries in the world. The payoff is huge (green cards would probably sell for six figures if they were on the market), the cost of entry is minimal ($0 and 30 minutes), and the odds of winning are low, but not astronomically low. If you meet the eligibility criteria and are even a little interested in moving to America, you should enter the lottery this October.
The payoff is huge ..., the cost of entry is minimal
This reminds me of another pretty decent lottery that some U.S. residents can take advantage of. Many major cities, including NYC, have affordable housing programs in brand new buildings. The cost to apply is $0; the payoff is paying 20-25% of the market rate for housing in that area. No, it's not for poor people; there are other programs for that. The income requirements vary, but in general they are set to qualify the working residents of the city (maybe $50k-$95k).
Some of the most desirable and stunning locations in the city, where rents are $4k for 600 sq ft, can go for $700. Just Google the city you live in to see the specific requirements.
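Back-of-the-envelope numbers for both of these (the figures are my illustrative assumptions, not official statistics):

```python
# Green card (diversity visa) lottery: assume roughly 1-in-200 odds of
# winning and a payoff worth ~$100,000, for ~30 minutes of effort.
p_win, payoff, hours = 1 / 200, 100_000, 0.5
ev = p_win * payoff
print(f"Green card lottery EV: ${ev:,.0f} (~${ev / hours:,.0f}/hour of effort)")

# Affordable-housing lottery: paying $700 instead of $4,000/month rent.
savings = (4_000 - 700) * 12
print(f"Housing lottery payoff if you win: ${savings:,} per year in rent saved")
```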
Have you guys noticed that, while the notion of AI x-risk is gaining credibility thanks to some famous physicists, there is no mention of Eliezer and only a passing mention of MIRI? Yet Irving Good, who pointed out the possibility of recursive self-improvement without linking it to x-risk, is right there. Seems like a PR problem to me. Either raising the profile of the issue is not associated with EY/MIRI, or he is considered too low status to speak of publicly. Both possibilities are clearly detrimental to MIRI's fundraising efforts.
See also this old post where Robin Hanson basically predicted that this would happen.
The contrarian will have established some priority with these once-contrarian ideas, such as being the first to publish on or actively pursue related ideas. And he will be somewhat more familiar with those ideas, having spent years on them.
But the cautious person will be more familiar with standard topics and methods, and so be in a better position to communicate this new area to a standard audience, and to integrate it in with other standard areas. More important to the "powers that be" hoping to establish this new area, this standard person will bring more prestige and resources to this new area.
If the standard guy wins the first few such contests, his advantage can quickly snowball into an overwhelming one. People will prefer to cite his publications as they will be in more prestigious journals, even if they were not quite as creative. Reporters will prefer to quote him, students will prefer to study under him, firms will prefer to hire him as a consultant, and journals will prefer to publish him, as he will be affiliated with more prestigious institutions. And of course the contrarian may have a worse reputation as a "team player."
I think this is fine. Convincing people that this is a Real Thing and then specifically making them aware of Eliezer and MIRI should be done separately anyway. Doing the second thing too soon may make the first thing harder, while doing the second thing late makes the first thing easier (because then AI x-risk can be put in a mental category other than "that weird thing that those weird people care about").
There is a lot of interest in prediction markets in the Less Wrong community. However, the prediction markets we have are currently only available in meatspace, they have very low volume, and the rules are not ideal (you cannot leave positions by selling your shares, and only the column with the final outcome contributes to your score).
I was wondering if there would be interest in a prediction market linked to Less Wrong accounts. The idea is that we use essentially the same structure as Intrade / Ipredict. We use play money - this can either be Karma or a new "currency" where everyone is assigned the same starting value. If we use a currency other than Karma, your balance would be publicly linked to your account, as an indicator of your predictive skills.
Perhaps participants would have to reach a specified level of Karma before they are allowed to participate, to avoid users setting up puppet accounts to transfer points to their actual accounts.
I think such a prediction market would act as a tax on bullshit, it would help aggregate information, it would help us identify the best predictors in the community, and it would be a lot of fun.
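For the market mechanism itself, a play-money site wouldn't need an order book at all: Hanson's logarithmic market scoring rule (LMSR) gives an automated market maker under which exiting a position is just buying negative shares, which addresses the "cannot leave positions" complaint directly. A minimal sketch (the class name and liquidity value are illustrative, not from any existing LW code):

```python
import math

class LMSRMarket:
    """Hanson's logarithmic market scoring rule for a binary question.
    The liquidity parameter b controls how fast prices move with trades."""

    def __init__(self, b=100.0):
        self.b = b
        self.q = [0.0, 0.0]  # outstanding shares for [NO, YES]

    def _cost(self, q):
        return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

    def price(self, outcome):
        """Current price of one share of `outcome`, in (0, 1); this is
        also the market's implied probability."""
        exps = [math.exp(qi / self.b) for qi in self.q]
        return exps[outcome] / sum(exps)

    def trade(self, outcome, shares):
        """Buy `shares` of `outcome` (negative shares = sell out of a
        position). Returns the play-money cost (negative = you're paid)."""
        new_q = list(self.q)
        new_q[outcome] += shares
        cost = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return cost

market = LMSRMarket(b=100.0)
print(f"Initial P(YES) = {market.price(1):.2f}")       # 0.50
paid = market.trade(outcome=1, shares=50)              # buy 50 YES shares
print(f"Bought 50 YES for {paid:.1f} points; P(YES) = {market.price(1):.2f}")
refund = market.trade(outcome=1, shares=-50)           # exit the position
print(f"Sold back for {-refund:.1f} points")
```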
I don't think karma matters as much as people think it does, but if that's the only reason, LW could be programmed to look on PB.com for a matching username and increase karma based on the scores or something, much more easily than an entire prediction market could be written.
Recently I've been trying to catch up in math, with a goal of trying to get to calculus as soon as possible. (I want to study Data Science, and calculus / linear algebra seems to be necessary for that kind of study.) I found someone on LW who agreed to provide me with some deadlines, minor incentives, and help if I need it (similar to this proposal), although I'm not sure how well such a setup will end up working.
Originally the plan was that I'd study the Art of Problem Solving Intermediate Algebra book, but I found that many of the concepts were a little advanced for me, so I switched to the middle of the Introduction to Algebra book instead.
The Art of Problem Solving books deliberately make you think a lot, and a lot of the problems are quite difficult. That's great, but I've found that after 2-3 hours of heavy thinking my brain often feels completely shot and that ruins my studying for the rest of the day. It also doesn't help that my available study time usually runs from about 10am-2pm, but I often only start to really wake up around noon. (Yes, I get enough sleep usually. I also use a light box. But I still often only wake up around noon.)
One solution I've been thinking of wo...
...The five defined depression biotypes are:
“It’s not serotonin deficiency, but an inability to keep serotonin in the synapse long enough. Most of these patients report excellent response to SSRI antidepressants, although they may experience nasty side effects,” Walsh said.
Pyrrole Depression: This type was found in 17 percent of the patients studied, and most of these patients also said that SSRI antidepressants helped them. These patients exhibited a combination of impaired serotonin production and extreme oxidative stress.
Copper Overload: Accounting for 15 percent of cases in the study, these patients cannot properly metabolize metals. Most of these people say that SSRIs do not have much of an effect—positive or negative—on them, but they report benefits from normalizing their copper levels through nutrient therapy. Most of these patients are women who are also estrogen intolerant.
“For them, it’s not a serotonin issue, but extreme blood and brain levels of copper that result in dopamine deficiency and norepinephrine overload,” Walsh explained. “This may be the primary cause of postpartum depression.”
Low-Folate Depression: These patients account for 20 p
According to the principle of enlightened self-interest, you should help other people because this will help you in the long run. I've seen it argued that this is the reason why people have an instinct to help others. I don't think that this would mean helping people the way an Effective Altruist would. It would mean giving the way people instinctually do. You give gifts to friends, give to your community, give to children's hospitals, that sort of thing.
This makes me wonder about what I'm calling enlightened altruism. If you get power from helping people in that way, then you can use the power to help people effectively.
Imagine a planet with one billion people each of whom has $1000, except the 99,999,990 people who played the lottery and lost and now have $990 each and the 10 people who played the lottery and won and now have $1,000,990 each. 100% of the rich people played the lottery whereas only 10% of the poor people did so, but that doesn't mean playing the lottery was a good idea.
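The arithmetic behind the example, for concreteness (all numbers read off the scenario above):

```python
ticket_price = 10                 # each loser is down $10 ($1,000 -> $990)
winners, prize = 10, 1_000_000    # each winner is up ~$1,000,000
players = 99_999_990 + 10         # 100 million tickets sold

ev_per_ticket = winners * prize / players   # $0.10
print(f"EV per ${ticket_price} ticket: ${ev_per_ticket:.2f}")  # a terrible bet
# Yet conditioning on wealth reverses the picture: 100% of the rich
# played, vs ~10% of everyone else -- pure selection, not good strategy.
```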
Elsewhere in comments here it's suggested that one reason why LW (allegedly) has less interesting posts and discussions than it used to is that "Eliezer has taken to disseminating his current work via open Facebook discussions". I am curious about how the rest of the LW community feels about this.
Poll! The fact that Eliezer now tends to talk about his current work on Facebook rather than LW is ...
[pollid:697]
(For the avoidance of doubt, I am not suggesting that Eliezer has any obligation to do what anyone votes for here. Among many reasons there's this: If he's posting things on FB rather than LW because there are lots of people who want to read his stuff but for whatever reason will never read anything on LW then this poll can't possibly detect that other than weakly and indirectly.)
The main problem is that facebook encourages a drastically different quality of thought and expression than lesswrong does. The quality of thought in Eliezer's comments on facebook is sloppy. I chose to unfollow him on facebook because seeing Eliezer at his worst makes it rather a lot more difficult to appreciate Eliezer at his best (contempt is the mind killer). I assumed that any particularly interesting work he did (that is safe to share with the public) would end up finding its way into a less transient medium than facebook eventually...
...Have I been missing anything exciting?
I wonder what you think of the question of the origin of consciousness, i.e. "Why do we have internal experiences at all?" and "How can any physical process result in an internal/subjective experience?"
I've read some material on the subject before, and reading the quantum physics and identity sequence got me thinking about this again.
Links: Young blood reverses age-related impairments in cognitive function and synaptic plasticity in mice (press release)(paper)
I think the radial arm water maze experiment's results are particularly interesting; it measures learning and memory (see fig 2c which is visible even with the paywall). There's a day one and day two of training and the old mice (18 months) improve somewhat during the first day and then more or less start over on the second day in terms of the errors they are making. This is also true if the old mice are treated with 8 injections ...
Idea for a question for the next LW survey: Have you ever been diagnosed with a mental disorder? If so, what was it? [either a list of some common ones and an "other" box, or, ideally, a full drop-down of DSM-5 diagnoses. Plus a troll-bait non-disorder and a "prefer not to say", of course]
I've been wondering a lot about whether or not I'm acting rationally with regards to the fact that I will never again be as young as I am now.
So I've been trying to make a list of things I can only do while I'm young, so that I do not regret missing the opportunity later (or at least rationally decided to skip it). I'm 27 so I've already missed a lot of the cliche advice aimed at high school students about to enter college, and I'm already happily engaged so that cuts out some other things.
Any thoughts on opportunities only available at a certain age?
Is anyone familiar with any effective-altruist work on pushing humanity towards becoming a spacefaring species? Seems relevant given the likely difference in outcomes between a civilization that develops spaceflight and one that doesn't.
So I often find that interesting people live near me. Anyone have tips on asking random people to meet up? Ask them for coffee? I suppose a short email is better than a long one, which may come off as creepy? Anyone have friends they met via random emails?
Dear LW,
I've just this morning been offered funding for a research placement at a British university this summer (I'm 17). I have to contact researchers myself, and it generally has to be in a STEM subject area. I am looking very generally for recommendations of researchers to contact in the areas of Maths, Physics, and Computer Science. If you know of any people doing research that would be of interest to the average LessWronger, especially in the aforementioned fields, I would appreciate it greatly.
Looks like when my current job ends (May 31), I'll have the summer free before my next one starts (Sept). My June is pretty much booked with a big writing project with a looming deadline, but I get to decide how to fill July and August, and I'd appreciate crowdsourced suggestions.
I'm lucky enough to not need to find alternate work to cover living expenses for those two months, so I'm not particularly in the market for short-term work suggestions. I'll be based out of D.C. during this period. Not super interested in travel. I'm considering some self-stu...
What is the meaning and use of total PPP-adjusted GDP?
I cannot think of a single use for it (unlike nominal total GDP or PPP GDP per capita).
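For reference, a rough sketch of the distinction (my own gloss, not a quote from any source): PPP adjustment revalues a country's output at a common set of international prices rather than at market exchange rates, so total PPP GDP is a volume measure of output rather than a measure of purchasing power on world markets.

```latex
\[
  \text{GDP}^{\text{PPP}}_c \;=\;
  \frac{\text{GDP}^{\text{nominal}}_c}{\text{PPP}_c},
  \qquad
  \text{PPP}_c \;=\;
  \frac{\text{cost of a reference basket in country $c$'s currency}}
       {\text{cost of the same basket in USD}}.
\]
```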
Previous Open Thread
You know the drill: if it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one.
3. Open Threads should start on Monday, and end on Sunday.
4. Open Threads should be posted in Discussion, and not Main.