If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
This post by Eric Raymond should be interesting to LW :-) Extended quoting:... (read more)
Simple hypothesis relating to Why Don't Rationalists Win:
Everyone has some collection of skills and abilities, including things like charisma, luck, rationality, determination, networking ability, etc. Each person's success is limited by constraints related to these abilities, in the same way that an application's performance is limited by the CPU speed, RAM, disk speed, networking speed, etc of the machine(s) it runs on. But just as for many applications the performance bottleneck isn't CPU speed, for most people the success bottleneck isn't rationality.
It could be worse. Rationality essays could be attracting a self-selected group of people whose bottleneck isn't rationality. Actually I think that's true. Here's a three-step program that might help a "stereotypical LWer" more than reading LW:
1) Gym every day
2) Drink more alcohol
3) Watch more football
Only slightly tongue in cheek ;-)
Well, there's also the possibility that people who did successfully hack their determination, networking ability, and performance are now mostly not spending time on LW.
Probably everybody has seen it already, but EY wrote a long post on FB about AlphaGo which got 400 reposts. The post overestimates the power of AlphaGo, and in general it seems to me that EY drew too many conclusions from very little available information (a 3:0 score at the time of the post - 10 pages of conclusions). The post's comment section includes a contribution from Robin Hanson on the usual foom speed-and-type topic. EY later updated his predictions based on Sedol's win in game 4, and stated that even a superhuman AI could make dumb mistakes, which may result in new... (read more)
History of "That which can be destroyed by the truth, should be"
First said by Hodgell, Yudkowsky wrote a variant, Sagan didn't say it.
Ok, so Lenat has now rolled out his AI after 30 years of development: https://www.technologyreview.com/s/600984/an-ai-with-30-years-worth-of-knowledge-finally-goes-to-work/
The Russian Compreno system, which models language manually, has also launched its first service, Findo (after 20 years and 80 million USD): https://abbyy.technology/en:features:linguistic:semanitc-intro
Three days ago, I went through a traditional rite of passage for junior academics: I received my first rejection letter on a paper submitted for peer review. After I received the rejection letter, I forwarded the paper to two top professors in my field, who both confirmed that the basic arguments seem to be correct and important. Several top faculty members have told me they believe the paper will eventually be published in a top journal, so I am actually feeling more confident about the paper than before it got rejected.
I am also very frustrated with the ... (read more)
A while ago I was, for some reason, answering a few hundred questions with yes-or-no answers. I thought I would record my confidence in the answers in 5% intervals, to check my calibration. What I found was that for 60%+ confidence I am fairly well calibrated, but when I was 55% confident I was only right 45% of the time (n = 100)!
I think what happened is that sometimes I would think of a reason why the proposition X is true, and then think of some reasons why X is false, only I would now be anchored onto my original assessment that X is true. So instead of ch... (read more)
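The check described above is easy to automate. A minimal sketch (with made-up example data; the `calibration` helper and the records are hypothetical, not the original questions) of grouping answers by stated confidence and comparing against the actual hit rate:

```python
from collections import defaultdict

def calibration(records):
    """records: list of (stated_confidence, was_correct) pairs.
    Returns {confidence: fraction actually correct} per confidence bin."""
    bins = defaultdict(lambda: [0, 0])  # confidence -> [correct, total]
    for conf, correct in records:
        bins[conf][0] += int(correct)
        bins[conf][1] += 1
    return {c: correct / total for c, (correct, total) in sorted(bins.items())}

# Example mirroring the comment: 100 answers at 55% confidence, only 45 right
records = [(0.55, True)] * 45 + [(0.55, False)] * 55
records += [(0.70, True)] * 7 + [(0.70, False)] * 3
print(calibration(records))  # {0.55: 0.45, 0.7: 0.7}
```

A well-calibrated bin has its value close to its key; the 0.55 bin coming out at 0.45 is exactly the anchoring failure described.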
In The genie knows, but it doesn't care, RobbBB argues that even if an AI is intelligent enough to understand its creator's wishes in perfect detail, that doesn't mean that its creator's wishes are the same as its own values. By analogy, even though humans were optimized by evolution to have as many descendants as possible, we can understand this without caring about it. Very smart humans may have lots of detailed knowledge of evolution & what it means to have many descendants, but then turn around and use condoms & birth control in order to stym... (read more)
The recently posted Intelligence Squared video titled Don't Trust the Promise of Artificial Intelligence may be of interest to LW readers, if only because of IQ2's decently sized cultural reach and audience.
Replication crisis: does anyone know of a list of solid, replicated findings in the social sciences? (all I know is that there were 36 in the report by Open Science Collaboration, and those are the ones I can easily find)
Telling truth to any face -
Not a lie, with mortar hoary -
Go apace to any place,
To attend to any story.
Happy belated Pi Day, everyone!
I want to make a desktop map application of my city, kinda like Paradox Interactive's games. My city is 280 km^2, and I would like it at a street level detail. I want to be able to just overlay multiple layers of different maps. What I have in mind is displaying predicted tram locations, purchasing power maps, and pretty much any information I can find on one map, and combining these at will, with a reasonable speed (and I would much prefer it to be seamless, like in a game, and not displaying white spots at the edges while it is loading)
Does anyone know of a toolset for this?
Do you have a background in formal debate?
If you do, do you think it was worth the time?
If you don't, do you regret not having it?
I've always enjoyed Kurzweil's story about how the human genome project was "almost done" when they had decoded the first 1% of the genome, because the doubling rate of genomic science was so high at the time. (And he was right).
It makes me wonder if we're "almost done" with FAI.
I don't really know where we are with FAI. I don't know if our progress is even knowable, since we don't really know where we're going. There's certainly not a percentage associated with FAI Completion. However, there are a number of technologies that might sudd... (read more)
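Kurzweil's "1% is almost done" point is just exponential arithmetic: under sustained doubling, 1% completion sits only a handful of doublings away from 100%. A minimal sketch (assuming, hypothetically, one doubling per year as in the genome-project anecdote):

```python
import math

# Under repeated doubling, going from 1% to 100% is a factor of 100,
# i.e. log2(100) doublings.
doublings_needed = math.log2(100)
print(f"{doublings_needed:.1f} doublings from 1% to done")  # ~6.6

# At one doubling per year, that's about 7 years -- which is why
# "1% done" was, counterintuitively, "almost done".
years = math.ceil(doublings_needed)
print(years)  # 7
```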
Modest proposal for Friendly AI research:
Create a moral framework that incentivizes assholes to cooperate.
Specifically, create a set of laws for a "community", with the laws applying only to members, that would attract finance guys, successful "unicorn" startup owners, politicians, drug dealers at the "regional manager" level, and other assholes.
Win condition: a "trust app" that everyone uses, that tells users how trustworthy every single person they meet is.
Lose condition: startup fund assholes end up with majority... (read more)
Looking for advice with something it seems LW can help with.
I'm currently part of a program that trains highly intelligent people to be more effective, particularly with regard to scientific research and effecting change within large systems of people. I'm sorry to be vague, but I can't actually say more than that.
As part of our program, we organize seminars for ourselves on various interesting topics. The upcoming one is on self-improvement, and aims to explore the following questions: Who am I? What are my goals? How do I get there?
Naturally, I'm of the ... (read more)
Do you guys know how you can prevent sleep paralysis?
Does it make a difference if an organism reproduces in multiple smaller populations versus one larger, if the number of offspring at generation one is held constant? (score is determined by the number of offspring and their relatedness, so the standard game)
Smaller populations are more prone to genetic drift, but in both directions, right?
Does this change somehow if the populations are connected, with different rates of flow depending on the direction?
For example, in humans, migration to the capitals (and in general, urbanization) happens way more often t... (read more)
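The "smaller populations drift more, in both directions" claim in the question above can be checked with a toy Wright-Fisher simulation (a hypothetical sketch, not a model of the human case: pure drift, no selection or migration):

```python
import random

def wright_fisher(p0, n, generations, seed=None):
    """Allele frequency after pure genetic drift in a population of size n.
    Each generation, 2n gene copies are resampled binomially from frequency p."""
    rng = random.Random(seed)
    p = p0
    for _ in range(generations):
        k = sum(rng.random() < p for _ in range(2 * n))
        p = k / (2 * n)
    return p

# Many replicate runs: the mean stays near p0 for both sizes (drift is
# directionless), but the spread is much wider for the small population.
small = [wright_fisher(0.5, 10, 20, seed=s) for s in range(200)]
large = [wright_fisher(0.5, 1000, 20, seed=s) for s in range(200)]
var = lambda xs: sum((x - sum(xs) / len(xs)) ** 2 for x in xs) / len(xs)
print(var(small), var(large))  # small-population variance is far larger
```

Holding total offspring constant, splitting into small demes raises the variance of outcomes without shifting the expectation; whether that helps or hurts depends on what "score" is being maximized.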
I have a rationalist/rationalist-adjacent friend who would love a book recommendation on how to be good at dating and relationships. Their specific scenario is that they already have a stable relationship, but they're relatively new to having relationships in general, and are looking for lots of general advice.
Since the sanity waterline here is pretty high, I thought I'd ask if anyone had any recommendations. If not, I'll just point them to this LW post, though having a bit more material to read through might suit them well.
Isn't some sort of deism at least plausible and reasonable at this juncture? Is there a materialistic theory of what happened before the big bang that is worth putting any stock in? Or are we in an agnostic wait-and-see mode regarding pre-big bang events?
One major difference between left and right is the stance on personal responsibility.
Leftist intellectuals tend to think societal influence trumps individual capabilities, so people are not responsible for their misfortunes and deserve to be helped, whereas rightists hold the opposite view (related).
This seems trivial, especially in hindsight. But I hardly ever see it mentioned, and in most discussions the right treats the left as foolish and irrational, while the left thinks people on the right are self-interested and evil, rather than simply having a differen... (read more)