All of Skeptityke's Comments + Replies

Utility vs Probability: idea synthesis

"But the general result is that one can start with an AI with utility/probability estimate pair (u,P) and map it to an AI with pair (u',P) which behaves similarly to (u,P')"

Is this at all related to the Loudness metric mentioned in this paper? https://intelligence.org/files/LoudnessPriors.pdf It seems like the two are related... (in terms of probability and utility blending together into a generalized "importance" or "loudness" parameter)

1Stuart_Armstrong7y
Not really. They're only connected in that they both involve scaling of utilities (but in one case, scaling of whole utilities, in this case, scaling of portions of the utility).
New paper from MIRI: "Toward idealized decision theory"

What happens if it only considers the action when it both fails to find "PA+A()=x" inconsistent and finds a proof that PA+A()=x proves U()=x? That is, run an inconsistency check first, and only consider/compare the action if the check fails to turn up an inconsistency.

3So8res8y
Then you get spurious proofs of inconsistency :-) (If PA is strong enough to prove what the agent will do, then PA + "A()=x" is only consistent for one particular action x, and the specific action x for which it is consistent will be up for grabs, depending upon which proof the inconsistency checker finds.)
New paper from MIRI: "Toward idealized decision theory"

I had an idea, and was wondering what its fatal flaw was. For UDT, what happens if, instead of proving theorems of the form "A()=x --> U()=x", it proves theorems of the form "PA + A()=x |- U()=x"?

At a first glance, this seems to remove the problem of spurious counterfactuals implying any utility value, but there's probably something big I'm missing.

1So8res8y
PA + "A() = x" may be inconsistent (proving, e.g., that 0=1 etc.).
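To spell out the inconsistency So8res points at, here is a sketch (with a standing for whatever action PA actually proves the agent takes; this is my gloss, not part of the original exchange):

```latex
% Assume PA is strong enough to prove what the agent does, say PA |- A() = a.
\begin{align*}
&\text{For any } x \neq a:\quad
  \mathrm{PA} + \text{``}A()=x\text{''} \;\vdash\; (a = x) \wedge (a \neq x) \;\vdash\; \bot,\\
&\text{so by explosion, } \mathrm{PA} + \text{``}A()=x\text{''} \;\vdash\; U() = y
  \text{ for every } y.
\end{align*}
```

So "PA + A()=x |- U()=x" would hold for every utility value at once for each non-chosen action, which is the spurious-counterfactual problem reappearing one level up.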
Baysian conundrum

This is actually isomorphic to the absent-minded driver problem. If you precommit to going straight, there is a 50/50 chance of being at either one of the two indistinguishable points on the road. If you precommit to turning left, there is a nearly 100% chance of being at the first point on the road (Since you wouldn't continue on to the second road point with that strategy.) It seems like probability can be determined only after a strategy has been locked into place.

3gjm8y
The absent-minded driver problem [http://lesswrong.com/lw/182/the_absentminded_driver/].
Open thread, Sept. 29 - Oct.5, 2014

Question for AI people in the crowd: To implement Bayes' Theorem, the prior of something must be known, and the conditional likelihood must be known. I can see how to estimate the prior of something, but for real-life cases, how could accurate estimates of P(A|X) be obtained?

Also, we talk about world-models a lot here, but what exactly IS a world-model?

0khafra8y
Not quite the way I'd put it. If you know the exact prior for the unique event you're predicting, you already know the posterior. All you need is a non-pathologically-terrible prior, although better ones will get you to a good prediction with fewer observations.
0MrMind8y
In order of (decreasing) reliability: through science, through expert consensus, through crowd-sourcing, through personal estimates. As for a world-model: simply the set of sentences or events declared true. For a world-model to be useful, those sentences had better be relevant, that is, usable to derive probabilities of the questions at hand.
0D_Malik8y
Machine learning can sorta do this, with human guidance. For instance, if we want to predict whether an animal is a dog or an elephant given its weight and its height, we could find a training set (containing a bunch of dogs and a bunch of elephants) and then fit two bivariate lognormal distributions to this training set - one for the dogs, and one for the elephants. (Using some sort of gradient descent, say). Then P(weight=w, height=h | species=s) is just the probability density at the point (w, h) on the distribution for species s. Search term: "generative model". And in this context a world-model might be a joint distribution over, say, all triples (weight, height, label). Though IRL there's too much stuff in the world for us to just hold a joint distribution over everything in our heads, we have to make do with something between a Bayes net and a big ball of adhockery.
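D_Malik's recipe can be sketched in a few lines. All the data below is synthetic, and the lognormal parameters are invented for illustration; a closed-form MLE replaces the gradient descent he mentions, since for lognormals it exists:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: rows of (weight kg, height cm), log-space means
# chosen so e^3 ~ 20 kg / e^4 ~ 55 cm (dog) and ~3000 kg / ~300 cm (elephant).
dogs = rng.lognormal(mean=[3.0, 4.0], sigma=0.2, size=(100, 2))
elephants = rng.lognormal(mean=[8.0, 5.7], sigma=0.15, size=(100, 2))

def fit_lognormal(samples):
    """MLE for a bivariate lognormal: fit a Gaussian to the log-samples."""
    logs = np.log(samples)
    return logs.mean(axis=0), np.cov(logs, rowvar=False)

def log_density(x, mu, cov):
    """Log pdf of the lognormal at x (Gaussian in log-space, plus Jacobian)."""
    z = np.log(x) - mu
    _, logdet = np.linalg.slogdet(cov)
    quad = z @ np.linalg.solve(cov, z)
    return -0.5 * (quad + logdet + len(x) * np.log(2 * np.pi)) - np.log(x).sum()

params = {"dog": fit_lognormal(dogs), "elephant": fit_lognormal(elephants)}

# P(weight=w, height=h | species=s) for a 25 kg, 60 cm animal:
x = np.array([25.0, 60.0])
scores = {s: log_density(x, mu, cov) for s, (mu, cov) in params.items()}
print(max(scores, key=scores.get))  # the likelier species for this animal
```

Adding a prior P(species=s) and applying Bayes' theorem to these class-conditional densities turns this into a full classifier, which answers the original P(A|X) question for this toy case.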
1skeptical_lurker8y
Machine learning. More speculatively, approximations to solomonoff induction.
[LINK] Article in the Guardian about CSER, mentions MIRI and paperclip AI

I'd call it a net positive. Along the axis of "Accept all interviews, wind up in some spectacularly abysmal pieces of journalism" and "Only allow journalism that you've viewed and edited", the quantity vs quality tradeoff, I suspect the best place to be would be the one where the writers who know what they're going to say in advance are filtered, and where the ones who make an actual effort to understand and summarize your position (even if somewhat incompetent) are engaged.

I don't think the saying "any publicity is good publicity"... (read more)

9Sean_o_h8y
Thanks. Re: your last line, quite a bit of this is possible: we've been building up a list of "safe hands" journalists at FHI for the last couple of years, and as a result, our publicity has improved while the variance in quality has decreased. In this instance, we (CSER) were positively disposed towards the newspaper as a fairly progressive one with which some of our people had had a good set of previous interactions. I was further encouraged by the journalist's request for background reading material. I think there was just a bit of a mismatch: they sent a guy who was anti-technology in a "social media is destroying good society values" sort of way to talk to people who are concerned about catastrophic risks from technology (I can see how this might have made sense to an editor).
Persistent Idealism

I think there's an important distinction to be made between the different levels of earning to give. Really, there's a spectrum between "donate 5 percent of income" at one end, and "devote existence to resolving issue" at the other end. For humans trying to do the best they can, in fact, trying to scale up too fast can lead to severe burnout. So caring for yourself and having a good life and low stress is a good idea because it guards against burnout. It is better to donate a thousand dollars a month to resolve an issue than three thous... (read more)

1cameroncowan8y
Being a man, I'm very good at compartmentalizing things, so I view giving and charity in its own box of things that I do. I hold what I do around charity and giving in extremely high esteem, especially because I work within myself, within my soul, within subtle realms, and this kind of thing. However, while I guess that is a method of giving, I think living a good life and having sufficient resources to invest in oneself is a far safer bet than giving away as much money as possible. You really have to think of yourself in order not to become a charity case yourself. Doing that which is fulfilling and makes you a better person has a duplicative effect and amplifies all that you do, without the burnout or poverty-stricken state that I think this system leads to. I agree that to the person doing it it's not as hard as it might seem. I live on very little, and most people think that I'm crazy or that it's so hard, but it's not. However, there are moments, like when my car needs something done or there is an event I'd like to go to, that I really wish I had those resources to just pick up and go do that thing. I guess in that way I don't see this extreme-sacrifice method of giving as conducive to a good life. Does that make sense?
1Jiro8y
Define "burnout". It sounds like this means "has a psychological aversion to continuing to give"--but if that's what it means, then someone who gives ten dollars a year and is psychologically unwilling to increase this amount could be described as already burned out at higher values, which I'm pretty sure is not your intention.
2Lumifer8y
I am pretty sure "most people" value having a shiny new car more highly than the lives of ten unknowns somewhere far away. Revealed preferences are revealed.
Persistent Idealism

The reason to make lots of money to give it away is elaborated on here, in the paragraph about the lawyer who wants to clean up the beach.

Summary version: More charities are funding-limited than volunteer-limited, and if you are making a sufficient amount of money, working one extra hour and donating the proceeds from that hour gets more done, saves more people, than using that hour to volunteer. The important part is to actually save people.

Saving people is far more important than giving consistently (If the best way to save people is to give each month,... (read more)

1cameroncowan8y
This seems terribly inefficient and dependent on a great deal of personal sacrifice to achieve a goal. I guess I don't understand why someone would completely change their lifestyle just to help as many people as possible. If the goal is saving people in a large way, it seems to me that aligning oneself with the people and organizations doing that work, and regularly supplying them with the needed resources, is far better than this system. I would say that the best way to live in this manner would be to position oneself as a donor who can be called upon at a moment's notice to fill a gap at a certain level. Working just to give money isn't mindful of the commitment, the stress, and the need for self-care. In order for you to be at your best you have to be well yourself. If you are living in a small apartment eating ramen while the beach is pristine, that would not be optimal in my mind.
Avoiding doomsday: a "proof" of the self-indication assumption

Just taking a wild shot at this one, but I suspect that the mistake is between C and D. In C, you start with an even distribution over all the people in the experiment, and then condition on surviving. In D, your uncertainty gets allocated among the people who have survived the experiment. Once you know the rules, in C, the filter is in your future, and in D, the filter is in your past.

MIRI's 2014 Summer Matching Challenge

Just sent 40 bucks your way. Though I am a college student, I decided that I wanted to begin a donation habit so future me is less likely to go "All discretionary income will be used on me personally". Thus, this.

2lukeprog8y
Great, thanks!
MIRI's 2014 Summer Matching Challenge

When I last looked at the bar, it had 99 donors and ~80k dollars donated, and now it has 104 donors and ~190k dollars donated. From this, I can deduce that somebody donated a whole hell of a lot of money.

Positive reinforcement for the donor! Group approval for the benefactor! High fives and internet hugs all around!

2lukeprog8y
:)
Open thread, August 4 - 10, 2014

Physics puzzle: Being exposed to cold air while the wind is blowing causes more heat loss/feels colder than simply being exposed to still cold air.

So, if the ambient air temperature is above body temperature, and ignoring the effects of evaporation, would a high wind cause more heat gain/feel warmer than still hot air?

3bramflakes8y
Yes, it's how hair dryers work.
0tut8y
Yes. This happens sometimes in a really wet Sauna. But conditions in which you actually feel this also kill you in less than a day. You need to lose about 100 W of heat in order to keep a stable body temperature, and moving air only feels hotter than still air if you are gaining heat from the air.
2shminux8y
Yes. Your body would try to cool your face exposed to hot air by circulating more blood through it, creating a temperature gradient through the surface layer. Consequently, the air nearest your face would be colder than ambient. A wind would blow away the cooler air, resulting in the air with ambient temperature touching your skin. Of course, in reality humidity and sweating are major factors, negating the above analysis.
5Lumifer8y
Yes, though ignoring the effects of evaporation is ignoring a major factor.
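The physics behind all four answers is Newton's law of cooling: heat flow is proportional to the air-skin temperature difference, and wind raises the proportionality constant. A minimal sketch with illustrative numbers (the coefficients and areas are rough textbook-style values, not from the thread):

```python
# Newton's law of cooling: Q = h * A * (T_air - T_skin).
# Wind raises the convective coefficient h; the sign of (T_air - T_skin)
# decides whether a bigger h means faster heat loss or faster heat gain.
def heat_flow_watts(h, area_m2, t_air_c, t_skin_c):
    return h * area_m2 * (t_air_c - t_skin_c)

A = 1.8        # rough skin area of an adult, m^2
T_SKIN = 34.0  # typical skin temperature, C

# Illustrative coefficients: ~5 W/(m^2 K) in still air, ~25 W/(m^2 K) in wind.
for t_air in (10.0, 45.0):
    still = heat_flow_watts(5.0, A, t_air, T_SKIN)
    windy = heat_flow_watts(25.0, A, t_air, T_SKIN)
    print(t_air, round(still), round(windy))
```

At 10 C the windy flow is a much larger loss (more negative); at 45 C it is a much larger gain, which is the puzzle's "yes" answer in numbers.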
MIRI 2014 Summer Matching Challenge and one-off opportunity to donate *for free*

6450 stellar sent. For some reason it took several days to receive the stellar, and I did not receive the 1000 free stellar; instead I got 6500.

3lukeprog8y
Thanks!
August 2014 Media Thread

Further recommendations. Twice as many this time.

(remember, feedback on which songs were good and which ones sucked, possibly by PM, helps tailor recommendations to what you like.)

Setsugetsuka (Yukari) Another Yukari song since you seem to like those.

Lonesome Cat (Miku) Rock song about a cat. Funny.

Tori No Uta (IA) Cover of a song originally by the voice provider. Probably the best example of vocal tuning I've yet come across.

Cloud Rider (IA) Quite energetic, and one of the more prominent IA songs.

Smile Again (Miku, Gumi) I'm pretty sure this song needs to... (read more)

0ShardPhoenix8y
Thanks, I fixed the link. (Haven't got around to listening to the others yet - honestly a large number of suggestions tends to be a bit intimidating).
Fifty Shades of Self-Fulfilling Prophecy

This seems highly exploitable.

Anyone here want to try to use these bogus numbers to get a publisher to market their own fanfiction?

2alexanderwales8y
I think the big problem is the "filing the serial numbers off" part of it. I never read "Masters of the Universe", but it seems to me that it didn't actually involve all that much in the way of vampires or werewolves. Whereas if you had a fic about time-traveling robots, a human resistance from the future, and UFAI, it would be really hard to get people to believe that it wasn't Terminator. Or if you had a story about a superhero who works as a reporter and his evil-genius nemesis, people are going to see that it's Superman unless you file the serial numbers off so hard that you'd be better off rewriting it from scratch. The best way to go about it seems to be to just start with a story that doesn't rely too heavily on whatever canon you're working with, so that once you have the readership, you can make the jump without having to refactor too terribly much.
2arromdee8y
I wrote what could best be described as a proto-rationalist Sailor Moon fanfic. Bear in mind that it's really old--I last worked on it around 2000 and it predates even HPMOR. It doesn't try to sell rationalism, but it has Sailor Moon do things that make sense. I never finished it but I got to the Doom Tree story. http://www.rahul.net/arromdee/fanfic.html [http://www.rahul.net/arromdee/fanfic.html]
0ChristianKl8y
You could even tell the publisher how your fanfiction has as many readers as 50 Shades of Grey had before it was "found".
4ialdabaoth8y
Yes. What's the market for a Transformers / GI Joe / MASK / Robotech / G-Force / Star Blazers crossover?
5polymathwannabe8y
I'm interested, but who reads Captain Planet fanfiction these days?
Top-Down and Bottom-Up Logical Probabilities

I disagree strongly, but here is a prototype of one anyways.

There are top-down and bottom-up approaches to logical probabilities. Top-down approaches typically involve distributions selected to fit certain properties, and, while being elegant and easy to apply math to, are often quite uncomputable. Bottom-up approaches take an agent with some information, and ask what they should do to assign probabilities/find out more, leading to a more "hacky" probability distribution, but they also tend to be easier to compute. Interestingly enough, given lim... (read more)

0Manfred8y
This part is a bit misleading because there's nothing special about having a starting distribution and updating it (thought that's definitely a bottom-up trait). It's also okay to create the logical probability function all at once, or through Monte Carlo sampling, or other weird stuff we haven't thought up yet.
Politics is hard mode

ADBOC.

Yes, we need to shift emphasis from "boo politics" to "politics is a much more difficult topic to discuss rationally than others".

But "hard mode" doesn't have nearly the emotional kick needed to dissuade the omnipresent Dunning-Kruger effect in politics. Running with the video game metaphor, I'm thinking something more along the lines of the feeling of great apprehension induced before playing I Wanna Be The Guy, Kaizo Mario, or the Zero Mercy Minecraft maps. But all the phrases used to refer to that particular cluster o... (read more)

"It's quiet... too quiet"

"I'm confident... too confident"

July 2014 Media Thread

Due to the preposterous number of Vocaloid songs out there, "best" in practice often means "personal favorites of the limited subset the person you are talking to has heard of". Vocaloid seems to follow Sturgeon's Law, as does everything else with low barriers to entry (like fanfiction), but fortunately, it doesn't take much time to check whether a given song is good, so hunting for hidden gems is a fairly fruitful activity as far as Vocaloid songs go. A useful site for this task is VocaDB.

Endorsing Gwern's response below, here are five ... (read more)

0ShardPhoenix8y
Thanks for the suggestions.
Rationality Quotes July 2014

The part after it was about how bad guys tend to be like people who have overspecialized in a less useful skill. You will never be able to beat them at what they do, but you don't need to. Said in the context of a very under-powered protagonist. Time for the rest of the quote, though it makes less and less sense as time goes on.

Everyone who will ever oppose you in life is a crazy, burly dude with a spoon, and you will never be able to outspoon them. Even the powerful people, they’re just spooning harder and more vigorously than everyone else, like hungry orphan children eating soup. Except the soup is power. I’ll level with you here: I have completely lost track of this analogy.

Rationality Quotes July 2014

It wasn’t easier, the ghost explains, you just knew how to do it. Sometimes the easiest method you know is the hardest method there is.

It’s like… to someone who only knows how to dig with a spoon, the notion of digging something as large as a trench will terrify them. All they know are spoons, so as far as they’re concerned, digging is simply difficult. The only way they can imagine it getting any easier is if they change – digging with a spoon until they get stronger, faster, and tougher. And the dangerous people, they’ll actually try this.

-Aggy, from ... (read more)

5Sabiola8y
What is dangerous about that?
An Attempt at Logical Uncertainty

To add to the hail of links, you might want to inspect the big official MIRI progress report on the problem here.

Also, though i know quite a bit less about this topic than the other people here (correct me if I'm wrong somebody), I'm a little suspicious of this distribution because I don't see any way to approximate the length of the shortest proof. Given an unproven mathematical statement for which you aren't sure whether it is true or false, how could you establish even a rough estimate of how hard it is to prove in the absence of actually trying to prove it?

Open thread, 23-29 June 2014

Silly question for people who work at MIRI: If you had the choice between receiving one flash drive from the 5-year-future MIRI employees, and acquiring one year's supply of NZT-48, which would you pick?

I don't work at MIRI but: in the movie the guy cranks out a novel in like one night. That's years of work compressed into a few hours. He then proceeds to understand enough about markets to become extremely wealthy (thus negating the time travel informed betting angle), itself a many-year task. Most importantly: NZT-48 lets him figure out how to make MORE NZT-48 with fewer side effects, thus ensuring an indefinite supply. The NZT-48 is definitely the correct choice.

Approaching the question from the other direction: How much has MIRI really accomplished in ... (read more)

Open thread, 23-29 June 2014

What do the physicists on here think of Sean Carroll's attempt at deriving the Born rule here?

Is it correct, interesting but flawed, wrong, or what?

1TheMajor8y
I coincidentally read that paper today (confession: I am not a physicist yet, still a student), and I am really suspicious of his use of unitary transformations. A transformation is unitary if and only if it preserves the l^2-norm, which is precisely what the Born rule describes (i.e. that the l^2-norm is the correct norm on wavefunctions). I asked myself which step would break down if, rather than the Born rule, the actual probability were the amplitude to the power 4, and I haven't found it yet (provided we also redefine 'unitary'). But (hopefully) I'm just misunderstanding the problem...?
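TheMajor's premise, that unitarity and l^2-norm preservation coincide while the l^4 total is generically not preserved, is easy to check numerically. The state vector and dimension below are chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Build a random 3x3 unitary: QR-decompose a complex Gaussian matrix,
# then fix the phases from R's diagonal so the result is properly unitary.
m = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
q, r = np.linalg.qr(m)
u = q * (np.diag(r) / np.abs(np.diag(r)))

psi = np.array([0.6, 0.8j, 0.0])  # an arbitrary normalized state
phi = u @ psi

l2 = np.sum(np.abs(psi) ** 2), np.sum(np.abs(phi) ** 2)
l4 = np.sum(np.abs(psi) ** 4), np.sum(np.abs(phi) ** 4)

print(np.isclose(*l2))             # unitaries preserve the l^2 total
print(abs(l4[0] - l4[1]) > 1e-6)   # but a generic one changes the l^4 total
```

So a "power 4" Born rule would be incompatible with generic unitary evolution; only if 'unitary' were redefined to the (much smaller) class of l^4-preserving maps could it survive, which is the redefinition TheMajor flags.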
1MrMind8y
I think it is correct, but works only in a limited setup, and the author's claim is rather optimistic. I'm going to write an explanatory post before the weekend hits.
Utilitarianism and Relativity Realism

Instead of measuring "bad events per unit of time measured from the other person's point of view", wouldn't "bad events per unit of subjective time" be a much better metric which doesn't fall prey to this paradox?

And why are you bothering to distinguish between "there is no true preferred rest frame" and "there is a true rest frame which is perfectly indistinguishable from all the other moving ones"? They both make the exact same predictions, so why not just fold them into one hypothesis? What does that little epiphe... (read more)

8Viliam_Bur8y
Indeed it would. Otherwise, any simulated human universe could be significantly ethically improved by adding a simple code that would make it run slowly (from our point of view; inside the simulation it would be the same) when the humans inside the simulation are happy.
June 2014 Media Thread

The Universal Death Clock In short, this video displays a 43-part clock in Minecraft which pulses once every 1.3 googol years, and uses that to talk about universal heat death and deep time. It's somewhat chilling to watch, and provides a nice system of units to use for talking about really long times. 3 trillion years is 8 Death Clock Units (DCU's).

Also, figuring out how long energy could be generated in the universe is an interesting mental exercise. I think I figured out how to generate power until 15 DCU's.

Open thread, 9-15 June 2014

I was trying to figure out how big 3^^^3 was, which led to the following interesting math result. How high would a power tower of 3's have to be to surpass a googolplex raised to the googolplexth power? For what value of X is (3^^X)>(googolplex^^2)? I don't have the full answer, but an upper bound for X is 16. A power tower of 3's 16 high is guaranteed to be vastly larger than a googolplex raised to itself. And when you consider that 3^^^3 is a power tower 7.6 trillion 3's tall... it's way larger than I thought.

5gjm8y
In what follows, all logs are to base 3.

Definition: [a,b,c,d] := a^(b^(c^d)), etc.
Lemma: log [a,b,...,z] = log a^[b,...,z] = (log a) [b,...,z].
Definition: {n} := [3,...,3] with n 3's.
Lemma: log {n} = {n-1}.
Definition: G := [10,10,100].

OK. So we want to know when {n} > [G,G]. Taking logs, this is the same as {n-1} > G log G = [10,10,100] log [10,10,100]. Taking logs again, it's the same as {n-2} > log log [10,10,100] + log [10,10,100], which (unless it comes out amazingly close) is the same as {n-2} > log [10,10,100] = (log 10) [10,100]. Taking logs again, this is the same as {n-3} > log log 10 + log [10,100] = log log 10 + 100 log 10, which again is basically the same as {n-3} > 100 log 10. Now we're in the realm of small numbers, and we find that [3,3] is too small but [3,3,3] is way more than enough. So I'm pretty sure the answer is that you need six 3s in your tower.

Since it's easy to get this kind of thing wrong -- which is why I introduced notation and lemmas intended to make the manipulations as simple as possible -- I just did it again a different way (a couple of steps of algebra on paper, plus some ordinary floating-point arithmetic on a computer) and without the informal throwing away of much smaller bits. I can confirm that five 3s aren't nearly enough and six 3s are way more than you need.

[EDITED to remove a definition and a lemma that I never actually used, and to add a little clarification.]
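gjm's bookkeeping can be double-checked numerically by working at the third-log level, where everything fits in ordinary floats (this is my sketch of that check, following his steps of repeatedly discarding much smaller terms):

```python
import math

def log3(x):
    return math.log(x, 3)

# After taking log base 3 three times, googolplex^googolplex (= G^G with
# G = 10^(10^100)) reduces to approximately 100*log3(10) + log3(log3(10)).
target = 100 * log3(10) + log3(log3(10))  # roughly 210

# A tower of n 3's must satisfy {n-3} > target after three logs:
tower_2 = 3 ** 3    # {2} = 27: so a tower of five 3's falls short
tower_3 = 3 ** 27   # {3} ~ 7.6e12: so a tower of six 3's is far more than enough

assert tower_2 < target < tower_3
```

This confirms the answer of six, rather than the upper bound of 16 in the parent comment; 3^^^3, a tower 7.6 trillion 3's tall, dwarfs a googolplex raised to itself by an absurd margin.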
New organization - Future of Life Institute (FLI)

What would you say is the most effective organization to donate to to reduce artificial biology X-risks?

2Vika8y
No single organization comes to mind, though we have a long list [http://thefutureoflife.org/resources/biotechnology] of candidates - if any of them seem particularly effective, please let us know!
Group Rationality Diary, June 1-15

Actually, that gives me an idea. I've noticed that I have difficulty reducing goof-off internet time below about 90 min/day, so I'll only work on it to funge against internet time.

Group Rationality Diary, June 1-15

I just made this decision about 5 minutes ago, so I'm posting it here as some form of commitment to stick to it.

A while ago, I decided to make a Minecraft adventure map. An embarrassing amount of time was invested into making it, but sizable progress was made in that time. That's what kept me going. The feeling of making progress on a big personal project.

But taking an outside view on it...

How much time will it take to complete? A whole hell of a lot more.

What could I do in that time instead? Make more friends, learn a programming language or two, plenty o... (read more)

4Jan_Rzymkowski8y
It reminds me greatly of my making conlangs (artificial languages). While I find it creative, it takes enormous amounts of time to create even a simple draft, and arduous work to make satisfactory material. And all I'd get is just two or three people calling it cool and showing only a small interest. And I always know I'll get bored with the language in a few days and never get as far as translating simple texts. And yet every now and then I get an amazing idea and can't stop myself from "wasting" hours planning and writing about some conlang. And I end up being unsatisfied. I don't think it is about Sunk Cost. It's more about a form of addiction toward creative work. Some kind of vicious cycle, where the brain engages in an activity that just makes you want to do more of that activity. The more you work on it, the more you want to do it, until reaching saturation, when you just can't look at it anymore.

You could have also spent that time watching TV or surfing reddit. I'd rank a half-finish project that genuinely used your creative energy above those sorts of things.

I guarantee you most of the people on this site have a mountain of similar half-finished projects.

Dissolving the Thread of Personal Identity

Yes, the answer to that question is mind dependent. But this isn't really a big deal. If a person decides that there is an 80% probability that a banana will appear in front of them, their P(A banana will appear in front of me) is 0.8. If a Midwesterner decides that they are guaranteed not to be in Kansas after waking up from a coma, their P(I am in Kansas) is about 0. If I decide that I am definitely not a brain in a vat, my P(I am a vat brain) is about 0.

I suspect there is still some way to get non-stupid probabilities out of this mess from a series of o... (read more)

1Slider8y
To me, signing up for superpower surgery can raise "if there exists a me, it is superpowered" arbitrarily high, but it would at the same time lower "after the surgery there is a me" at the same rate. This leaves a funny edge case where a brain in a vat could correctly conclude that "I don't exist" if it finds evidence that nothing fitting its self-image exists in the world (i.e., beings with hands etc.). It would still be blatantly obvious that something is having the thought, and it would be really nice if "I" referred to that thing regardless of how you picture yourself. You could have a situation where you are a brain in a vat in your lap, with all your sensory inputs being conveyed by a traditional body. It would still be pretty challenging to determine whether you are your skull or the fishbowl in your hands. Maybe the multilayered use of "me" in the previous sentence points at the right way? So what happens to the thing you are now (your extended-you) is a different question from what you will become (your core-you). That way, the only way for core-you to terminate would be to not have thoughts. Breaking the extended-you would thus not terminate your thoughts, and core-you would still be core-you.
Dissolving the Thread of Personal Identity

Sure! I'll contribute some thoughts. Just send me a draft.

0plex8y
Cool. You've covered a lot of what I was planning on starting with (why initially intuitive models of "me" don't work), so I'll just link back and start on the later bits.
MIRI Donation Collaboration Station Redux: The Final Push (IMPORTANT)

MIRI did not win the 250k in upgrades. However, ~$110k was raised in total that day, with ~$50k worth of donations, and 21 of the 24 $2,000 golden tickets were picked up by them, which is pretty damn good. The group cooperation strategy was quite effective. Examine the sidebar on the left of this page. http://svgives.razoo.com/giving_events/svg14/home

Calling all MIRI supporters for unique May 6 giving opportunity!

Sorry to respond here, but it's a bit important. We are actually behind first place by about 8 donors, so recruiting an extra 8+ people total may allow us to win the grand prize.

3Vulture8y
Totally cool with me. I've hijacked my own comment over on SSC to make the same plea :-)
Calling all MIRI supporters for unique May 6 giving opportunity!

This is somewhat important:

We suspect that anonymous donations don't count towards the "unique donations" total, so if you are donating, please register your name to ensure that you are counted.

Thank you!

Calling all MIRI supporters for unique May 6 giving opportunity!

Only 40 donations have been made this hour, so would anybody else mind chipping in? They would probably be very high value donations, since it seems to be a bit below the threshold required to win an hour.

[This comment is no longer endorsed by its author]
Calling all MIRI supporters for unique May 6 giving opportunity!

Miscellaneous thoughts:

It's very heartening to see so many of these golden tickets so far having gone our way. Go team! Yay for group cooperation! \(n_n)/

I am a bit confused about what happened at 3 and 4 AM, though. Since the early morning hours were being targeted specifically, did the other charities cooperate and focus their efforts specifically on those time slots, or were there just not enough people awake to win at those times?

EDIT: Apparently, the people running the fundraiser initially accidentally had it set up so that each organization was only... (read more)

MIRI Donation Collaboration Station

I'm not sure, but I don't think we will have access to the number of people who donated for all the other charities. And I suspect that something may be wrong in the math, because that strategy of "donate every minute until 2000 donations occur total" would lead to badly overfilling that hour with donations if, say, 1800 donations were made on behalf of all the other charities.

That math looks like you are calculating the expected value of a raffle ticket randomly awarded to one donor, with a value of $2000.

But instead, the $2000 is awarded to th... (read more)

0TylerJay8y
Oh, I must have misread it. I thought it was essentially a raffle. I mixed it up with this part:
MIRI Donation Collaboration Station

It seems like one of the biggest issues will be making sure donors know what other donors are doing during this time period, to prevent overfilling of matching funds, to make sure the right number of people try to spike the donation box at 5, and to coordinate Tyler's "split up donations" idea during non-peak hours. Maybe there could be a single comment here that is being edited through the day so donors know what the best thing to do at that time is? (preliminary idea)

2Malo8y
Yeah, coordination will be very important. See this comment [http://lesswrong.com/r/discussion/lw/k5e/miri_donation_collaboration_station/av3x] for some of my thoughts.
Open Thread April 16 - April 22, 2014

What are the most effective charities working towards reducing biotech or pandemic x-risk? I see those mentioned here occasionally as the second most important x-risk behind AI risk, but I haven't seen much discussion on the most effective ways to fund their prevention. Have I missed something?

1ThisSpaceAvailable8y
Biotech x-risk is a tricky subject, since research into how to prevent it is also likely to provide more information on how to engineer biothreats. It's far from trivial to know which lines of research will decrease the risk, and which will increase it. One doesn't want a 28 Days Later type situation, where a lab doing research into viruses ends up being the source of a pandemic.
0Oscar_Cunningham8y
Note that Friendly AI (if it works) will defeat all (or at least a lot of) x-risk. So AI has a good claim to being the most effective at reducing x-risks, even the ones that aren't AI risk. If you anticipate an intelligence explosion but aren't worried about UFAI then your favourite charity is probably some non-MIRI AI research lab (Google?).
Making LessWrong notable enough for its own Wikipedia page

I'd be quite cautious about seeking greater media coverage without a plan to deal with an "Eternal September" on Less Wrong.

0khafra8y
Hacker News [http://news.ycombinator.com] had a semi-joking strategy, "everyone post articles on Haskell internals*" on days following media exposure. It actually seemed to work pretty well--but I don't know if we have enough posting volume, and enough un-posted articles on the mathematical side of decision theory and anthropics to use a similar strategy. *(edit: it was Erlang internals; gjm's memory is better than mine).
1Error8y
This. In particular, significant media coverage that portrays LW as high-status ("this is where all the smart people hang out!") is likely to end badly.
Habitual Productivity

One of my keys for productivity/unproductivity is that when I get interested in something, I become completely locked in on it. I have noticed that trying to stop a timewaster is rather futile while it is in progress, so I've developed a strong aversion towards taking on new distractions, obligations, or anything that will waste my time, because not starting in the first place is my point of greatest control over what I will be doing. This is one of the few areas where procrastination actually can help. If somebody tells me about some interesting thing I s... (read more)

December Monthly Bragging Thread

First thing that comes to mind that I just did tonight... I stumbled across a probability "paradox", noticed that it had an infinity in it, got suspicious, expressed it in a form with a finite population size, and took the limit as the population size went to infinity. And what do you know... the paradox vanished in a puff of canceling fractions.

Lotteries & MWI

This is another website that may be of use. Just fire it up for a while, pause the stream of numbers, and do what you will with them. It is guaranteed to be quantum-random.

Decoherence as Projection

Well, it isn't quite that, but I made an analogue of it prompted by that exact same thought. Movie 3-d glasses are polarized (the two slightly different images on the screen have orthogonal polarizations, so each image only goes through one lens), so if you can sneak two or more pairs of 3-d glasses out of a movie theater, you can pop the lenses out of one pair and tape them onto the other pair (rotated so that almost all light is canceled out). The resulting cross-polarized improvised glasses are so dark that, if you make them just right, it is possible to stare straight at the sun and see sunspots. However, this makes them quite useless for most other purposes.
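The darkness of the stacked, rotated lenses is just Malus's law at work; here is a minimal sketch in Python (standard optics, not something from the original comment, with an idealized lossless polarizer assumed):

```python
import math

def malus_transmission(intensity, theta_deg):
    # Malus's law: an ideal polarizer passes I * cos^2(theta) of already-polarized
    # light, where theta is the angle between the light's polarization axis
    # and the filter's transmission axis.
    return intensity * math.cos(math.radians(theta_deg)) ** 2

# Unpolarized sunlight loses half its intensity at the first (ideal) lens;
# the second lens, crossed at 90 degrees, then blocks nearly everything.
after_first = 0.5 * 1.0
after_second = malus_transmission(after_first, 90.0)
```

In practice the extinction is imperfect (real filters leak a little), which is why the angle has to be "just right" to get them dark enough.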

Group Rationality Diary, November 1-15

I've been keeping it up for about 2/3rds of a month by now, so it seems to have worked quite well: I have made my first major step towards being productive. I did it with several tricks.

One: At the end of every day, make a to-do list for tomorrow, and set it as my computer background. Now, every time I get on the computer, I am reminded of which task I am supposed to do.

Two: Logging where all my time goes. I can look at the weeks and identify productive and non-productive times, have ready outside view information available on how long something will tak... (read more)

Change the labels, undo infinitely good improvements

It doesn't work for all problems, but the provided problems become much more manageable when you look at the magnitude and number of utility changes, rather than the magnitude and number of utilities. I could be horribly wrong, but looking at the set of utility changes rather than the set of utilities seems like it could be a productive line of inquiry.

Living in Many Worlds

If you are ever interested in actually using quantum randomness to base a decision on, whether you are up against a highly accurate predictor, can't decide between two fun activities for the day, or are in some other situation where splitting yourself may be of use, then there is a very helpful quantum random number generator here. Simply precommit to one decision in case the ending digit is a 0, and another if the ending digit is a 1, and look at this webpage. Right Here.
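The precommitment protocol described above can be sketched in a few lines; this uses Python's `secrets` module as a stand-in source of randomness (cryptographically strong, but not quantum like the linked generator), and the two options are placeholders:

```python
import secrets

def precommitted_choice(option_if_even, option_if_odd, digit):
    # Map the ending digit of the random number to a precommitted decision:
    # even digit (including 0) -> first option, odd digit -> second option.
    return option_if_even if digit % 2 == 0 else option_if_odd

# Draw one digit and resolve the precommitment.
digit = secrets.randbelow(10)
choice = precommitted_choice("go hiking", "stay in and read", digit)
```

The important part is fixing the digit-to-decision mapping *before* looking at the number; otherwise nothing stops you from rationalizing whichever outcome you wanted anyway.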

0[anonymous]9y
Myself, I use this [http://www.random.org/coins/?num=1&cur=60-eur.italy-2euro].
Open thread, August 19-25, 2013

Um... In the HPMOR notes section, this little thing got mentioned.

"I am auctioning off A Day Of My Time, to do with as the buyer pleases – this could include delivering a talk at your company, advising on your fiction novel in progress, applying advanced rationality skillz to a problem which is tying your brain in knots, or confiding the secret answer to the hard problem of conscious experience (it’s not as exciting as it sounds). I retain the right to refuse bids which would violate my ethics or aesthetics. Disposition of funds as above."

That... (read more)

4ArisKatsaris9y
Well, keep in mind that Eliezer himself claims that "it's not as exciting as it sounds". And of course you always need to have in mind that what Eliezer considers to be "the secret answer to the hard problem of conscious experience" may not be as satisfying an answer to you as it is to him. After all, some people think that the non-secret answer to the hard problem of conscious experience is something like "consciousness is what an algorithm feels like from the inside" and this is quite non-satisfactory to me (and I think it was non-satisfactory to Eliezer too). (And also, I think the bidding started at something like $4000.)
0CAE_Jones9y
I got excited for the fraction of a second it took me to remember that everyone who could possibly want to bid could probably afford to spend more money than I have to my name on this without it cutting into their living expenses. Unless my plan was "Bid $900, hope no one outbids, ask Eliezer to get me a job as quickly as possible", which isn't really that exciting a category, however useful.
0Mitchell_Porter9y
I might have bid on that, but the auction is already over.