If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one.
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Economist Scott Sumner at Econlog heavily praised Yudkowsky and the quantum physics sequence, and applies lessons from it to economics. Excerpts:
and... (read more)
What sophisticated ideas did you come up with independently before encountering them in a more formal context?
I'm pretty sure that in my youth I independently came up with rudimentary versions of the anthropic principle and the Problem of Evil. Looking over my Livejournal archive, I was clearly not a fearsome philosophical mind in my late teens (or now, frankly), so it seems safe to say that these ideas aren't difficult to stumble across.
While discussing this at the most recent London Less Wrong meetup, another attendee claimed to have independently arrived at Pascal's Wager. I've seen a couple of different people speculate that cultural and ideological artefacts are subject to selection and evolutionary pressures without ever themselves having come across memetics as a concept.
I'm still thinking about ideas we come up with that stand to reason. Rather than prime you all with the hazy ideas I have about the sorts of ideas people converge on while armchair-theorising, I'd like to solicit some more examples. What ideas of this sort did you come up with independently, only to discover they were already "a thing"?
When I was a teenager, I imagined that if you had just a tiny, infinitesimally small piece of a curve, there would be only one moral way to extend it. Obviously, an extension would have to be connected to it, but also, you would want it to connect without any kinks. And just having straight lines connected to it wouldn't be right; it would have to be curved in the same sort of way, and so on, to higher and higher orders. Later I realized that this is essentially what a Taylor series is.
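That "matching to higher and higher orders" intuition can be checked numerically: the more derivatives the extension matches, the faster the error shrinks. A minimal sketch (the function name and test point are my own, and this only does sin about 0):

```python
import math

def taylor_sin(x, order):
    """Taylor polynomial of sin(x) about 0, keeping terms up to the given order."""
    total = 0.0
    for n in range(order + 1):
        if n % 2 == 1:  # sin has only odd-order terms
            total += ((-1) ** (n // 2)) * x ** n / math.factorial(n)
    return total

# The error at x = 0.5 drops sharply as we match more orders of "curviness".
for order in (1, 3, 5, 7):
    print(order, abs(taylor_sin(0.5, order) - math.sin(0.5)))
```

Each extra matched order roughly multiplies the local error by another factor of the (small) distance from the expansion point, which is the "one moral way to extend it" feeling made quantitative.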
I also had this idea when I was learning category theory that objects were points, morphisms were lines, composition was a triangle, and associativity was a tetrahedron. It's not especially sophisticated, but it turns out this idea is useful for n-categories.
Recently, I have been learning about neural networks. I was working on implementing a fairly basic one, and I had a few ideas for improving neural networks: making them more modular - so neurons in the next layer are only connected to a certain subset of neurons in the previous layer. I read about V1, and together, these led to the idea that you arrange things so they take into account the topology of the inputs - so for image processing, havi... (read more)
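The comment above is truncated, but the "connect only to a subset, respecting the topology of the inputs" idea reads like local receptive fields, as in V1 and in convolutional nets. A minimal 1-D sketch of what I take that to mean (names, shapes, and the untied-weights choice are illustrative assumptions, not anyone's actual implementation):

```python
def locally_connected(inputs, weights, window):
    """Each output unit sees only a small window of adjacent inputs,
    mirroring the input's 1-D topology (like a local receptive field)."""
    outputs = []
    for i in range(len(inputs) - window + 1):
        patch = inputs[i:i + window]       # only nearby inputs are connected
        w = weights[i]                     # one weight vector per output unit
        outputs.append(sum(p * wj for p, wj in zip(patch, w)))
    return outputs

signal = [1.0, 2.0, 3.0, 4.0]
weights = [[0.5, 0.5]] * 3  # window of size 2; here each output averages two neighbours
print(locally_connected(signal, weights, 2))
```

Tying the weight vectors together across positions (so `weights[i]` is the same for every `i`) would turn this into a plain convolution, which is the standard way image-processing networks exploit input topology.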
I came up with the idea of a Basic Income by myself, by chaining together some ideas:
Capitalism is the most efficient economic system for fulfilling the needs of people, provided they have money.
The problem is that if lots of people have no money, and no way to get money (or no way to get it without terrible costs to themselves), then the system does not fulfill their needs.
In the future, automation will both increase economic capacity and raise the barrier to having a 'valuable skill' that allows you to get money. Society will have improved capacity to fulfill the needs of people with money, yet the barrier to having useful skills and being able to get money will rise. This leads to a scenario where society could easily produce the items everyone needs, yet does not, because many of those people have no money to pay for them.
If X% of the benefits accrued from ownership of the capital were taken and redistributed evenly among all humans, then the problem is averted. Average people still have some source of money with which they can purchase the fulfillment of their needs, which are pretty easy to supply in this advanced future society.
X=100%, as in
Once a Christian friend asked me why I cared so much about what he believed. Without thinking, I came up with, "What you think determines what you choose. If your idea of the world is inaccurate, your choices will fail."
This was years before I found LW and learned about the connection between epistemic and instrumental rationality.
P.S. My friend deconverted himself some years afterwards.
This is not a direct answer: every time I come up with an idea in a field I am not very deeply involved in, sooner or later I realise that the phenomenon is either trivial, a misperception, or very well studied. Most recently this happened with pecuniary externalities.
Came up with the RNA-world hypothesis on my own when reading about the structure and function of ribosomes in middle school.
Decided long ago that there was a conflict between the age of the universe and the steady improvement of space travel, meaning that beings such as us must never actually reach self-replicating interstellar travel. Never came to the conclusion that it meant extinction, and am still quite confused by people who assume it's interstellar metastasis or bust.
In the last open thread Lumifer linked to a list by the American Statistical Association with points that need to be understood to be considered statistically literate. In the same open thread, in another comment, sixes_and_sevens asked for statements we know are true but the average lay person gets wrong. In response he mostly got examples from the natural sciences and mathematics. Which makes me wonder: can we make a general test of education across all of these fields of knowledge that can be automatically graded? This test would serve as a benchmark for traditional educational methods, and for autodidacts checking themselves.
I imagine having simple calculations for some things and multiple-choice tests for other scenarios where intuition suffices.
Edit: Please don't just upvote, try to point to similar ideas in your respective field or critique the idea.
What are some good paths toward good jobs, other than App Academy?
I've just finished the first draft of a series of posts on control theory, the book Behavior: The Control of Perception, and some commentary on its relevance to AI design. I'm looking for people willing to read the second draft next week and provide comments. Send me a PM or an email (I use the same username at gmail) if you're interested.
In particular, I'm looking for:
(Yes, that is basically a complete... (read more)
My son was asked what he'd wish for when he could wish for any one thing whatsoever.
He considered a while and then said: "I have so many small wishes that I'd wish for many wishes."
My ex-wife settled for "I want to be able to conjure magic", reasoning that she could then basically make anything come true.
For me it is obviously "I want a friendly artificial general intelligence" - seems like the safest bet.
So basically we all chose similar things.
The Unicorn Fallacy (warning, relates to politics)
Is there an existing name for that one? It's similar to the nirvana fallacy but looks sufficiently different to me...
How morally different are ISIS fighters from us? If we had a similar upbringing would we think it morally correct to kill Yazidi children for having the "wrong" religion? Or might genetics play a role in our differing moral views? I find it hard to think of ISIS members as human, or at least I don't want to belong to the same species as them. But yes I do realize that some of my direct ancestors almost certainly did horrible, horrible things by my current moral standards.
That's how the ISIS fighters feel about the Yazidi.
Beware of refusing to believe undeniable reality just because it's not nice.
A relevant factor which is (intentionally or not) ignored by American media is that, from the point of view of pious Muslims, Yazidis are satanists.
To quote Wikipedia (Taus Melek is basically the chief deity for Yazidis, God the Creator being passive and uninvolved with the world):
So, what's Christianity's historical record for attitude towards devil worshippers?
Any particular reason you feel this way about the Sunni armed groups, but not about, say, Russian communists, or Mao's Chinese, or Pol Pot's Cambodians, or Rwandans, or... it's a very long list, y'know?
First you might want to consider propaganda.
It's a little harder to say about the ISIS guys, but I think personality-wise many of us are a lot like the Al Qaeda leadership. Ideology, and jihad for it, appeals.
Most people don't take ideas too seriously. We do. And I think it's largely genetic.
Human, All Too Human.
Historically, massacring The Other is the rule, not the exception. You don't even need to be particularly ideological for that. People who just go with the flow of their community will set The Other on fire in a public square, and have a picnic watching. Bring their kids. Take grandma out for the big show.
I was talking to someone from Tennessee once, and he said something along the lines of: "When I'm in a bar in western Tennessee, I drink with the guy from western Tennessee and fight the guy from eastern Tennessee. When I'm in a bar in eastern Tennessee, I drink with the guy from Tennessee and fight the guy from Georgia. When I'm in a bar in Georgia, I drink with the guy from the South and fight the guy from New England."
They are also sweetness and light under the proper conditions.
You seem to be claiming that certain conditions -- those not producing brutal murderous monkeys -- are accidents of space and time, but certain others -- those producing brutal murderous monkeys -- are not. That "brutal murderous monkeys" is our essence and any deviation from that mere accident, in the philosophical sense. That the former is our fundamental nature and the latter mere superficial froth.
There is no actual observation that can be made to distinguish "proper conditions" from "parochial circumstance", "essence" from "accident", "fundamental" from "superficial".
I wonder why we don't see more family fortunes in the U.S. in kin groups that have lived here for generations. Estate taxes tend to inhibit the transmission of wealth down the line, but enough families have figured out how to game the system that they have held on to wealth for a century or more, notably including families which supply a disproportionate number of American politicians; they provide proof of concept of the durable family fortune. Otherwise most Americans seem to live in a futile cycle where their lifetime wealth trajectory starts from zero ... (read more)
This is not an attempt at an organised meetup, but the World Science Fiction Convention begins tomorrow in London. I'll be there. Anyone else from LessWrong?
I had intended to be at Nineworlds last weekend as well, but a clash came up with something else and I couldn't go. Was anyone else here there?
If any LWer is attending the Quantum Foundations of a Classical Universe workshop at the IBM Watson Research Center, feel free to report!
Several relatively famous experts are discussing anthropics, the Born rule, MWI, Subjective Bayesianism, quantum computers and qualia.
I've been getting the red envelope icon on the right side here, as if I had a message. But when I look, it's not for me. This has gone on for a few days now.
Have you ever clicked on the grey envelope icon found at the bottom right of every post and comment? If you do, then immediate replies to it show up in your inbox also. Look at the parent of one of these mysterious messages and see if its envelope is green. If it is, you can click it again to turn it off.
My brain spontaneously generated an argument for why killing all humans might be the best way to satisfy my values. As far as I know it's original; at any rate, I don't recall seeing it before. I don't think it actually works, and I'm not going to post it on the public internet. I'm happy to just never speak of it again, but is there something else I should do?
Find out how your brain went wrong, with a view to not going so wrong again.
Apart from the fact that they wouldn't say anything (because generally animals can't speak our languages ;)), nature can be pretty bloody brutal. There are plenty of situations in which our species' existence has made the lives of other animals much better than they would otherwise be. I'm thinking of veterinary clinics that often perform work on wild animals, pets that don't have to be worried about predation, that kind of thing. Also I think there are probably a lot of species that have done alright for themselves since humans showed up, animals like crows and the equivalents in their niche around the world seem to do quite well in urban environments.
As someone who cares about animal suffering, is sympathetic to vegetarianism and veganism, and even somewhat sympathetic to more radical ideas like eradicating the world's predators, I think that humanity represents a very real possibility to decrease suffering including animal suffering in the world, especially as we grow in our ability to shape the world in the way we choose. Certainly, I think that humanity's existence provides real hope in this direction, remembering that the alternative is for animals to continue to suffer on nature's whims perhaps indefinitely, rather than ours perhaps temporarily.
Hardcore negative utilitarianism?
(Pretty cute wind-up on Smart's part; grab Popper's argument that to avoid totalitarianism we should minimize pain, not maximize happiness, then turn it around on Popper by counterarguing that his argument obliges the obliteration of humanity whenever feasible!)
The ‘six universal’ facial expressions are not universal, cross-cultural study shows
For those of you using life insurance to pay your cryonics costs, what sort of policy do you use?
New open thread
I've been noticing a theme of utilitarianism on this site -- can anyone explain this? More specifically: how did you guys rationalize a utilitarian philosophy over an existential, nihilistic, or hedonistic one?
I posted this in the last open thread but I think it got buried:
I was at Otakon 2014, and there was a panel about philosophy and videogames. The description read like Less Wrongese. I couldn't get in (it was full) but I'm wondering if anyone here was responsible for it.
David Collingridge wouldn't have liked Nick Bostrom's "differential technological development" idea.
Is it easier for you to tell men or women apart?
Obvious hypothesis: whichever gender you are attracted to, you will find them easier to tell apart.
In a world without leap years, how many people should a company have to be reasonably certain that every day will be someone's birthday?
See Coupon collector's problem, particularly "tail estimates".
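For a concrete number, the coverage probability has an exact inclusion-exclusion form, and you can binary-search for the smallest headcount meeting a confidence threshold. A sketch (function names and the "reasonably certain = 95%" reading are my own assumptions):

```python
from fractions import Fraction
from math import comb

def prob_all_days_covered(n, days=365):
    """Exact P(n people's birthdays cover all `days` days), by inclusion-exclusion."""
    num = sum((-1) ** k * comb(days, k) * (days - k) ** n for k in range(days + 1))
    return float(Fraction(num, days ** n))

def people_needed(confidence, days=365):
    """Smallest n such that every day is covered with probability >= confidence.
    Assumes 10 * days is a large enough upper bound for moderate confidence levels."""
    lo, hi = days, 10 * days
    while lo < hi:
        mid = (lo + hi) // 2
        if prob_all_days_covered(mid, days) >= confidence:
            hi = mid
        else:
            lo = mid + 1
    return lo

print(people_needed(0.95))
```

Note the answer for 95% confidence is well above the *expected* number of people to cover all days (365 · H₃₆₅ ≈ 2364); by the usual Poisson tail approximation it comes out at roughly 3,200.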
If I post a response to someone, and someone replies to me, and their reply gets a single silent downvote before I read it, I find myself reflexively upvoting them just so they won't think I was the one who gave the single silent downvote. It seems plausible to me that if you have a single downvote and no responses, the most likely explanation is that the person you replied to downvoted you, and I don't want people to think that.
Except, then I seem to have gotten my opinion of the post hopelessly biased before even read... (read more)
The suggestion is correct.
30 day experiment with homemade soylent-- mostly positive outcome.