Comment 1: If anyone wants to comment or reply here, but can’t afford the karma hit, feel free to PM me and I’ll comment for you without listing your name. I have 169 karma to burn (97% positive!), from comments going back to Feb 2015. However, I’ve wanted to update to a different username, so I don’t mind destroying this one.
Comment 2: It might be wise not to discuss tactics where Eugine can read them. (Also, causing lots of discussion might be his goal, but so far we haven’t talked about it much and it’s just been a background annoyance.)
Is there interest in a Skype call or some other private forum to discuss possible solutions?
[pollid:1160]
I believe CronoDAS is referring to Algernon's Law. Gwern describes the issues pretty well here, including several classes of "loopholes" we might employ to escape the general rule.
The classification of the different types of loopholes is still pretty high-level, and I'd love to see some more concrete and actionable proposals. So, don't take this as saying "this is old hat", but only as a jumping-off point for further discussion.
This may not be a generalized solution, but it looks like you have rigorously defined a class of extremely common problems. I suspect deriving a solution from game theory would be the formalized version of John Stuart Mill trying to derive various principles of Liberty from Utilitarianism.
Meta: 4.5 hours to write, 30 minutes to take feedback and edit.
I always find this sort of info interesting. Same for epistemic status. It's nice to know whether someone is spit-balling a weird idea they aren't at all sure of, versus trying to defend a rigorous thesis. Tha...
I was surprised to see mention of MIRI and Existential Risk. That means that they did a little research. Without that, I'd be >99% sure it was a scam.
I wonder if this hints at their methodology. Assuming it is a scam, I'd guess they find small but successful charities, then find small tight-knit communities organized around them and target those communities. Broad, catch-all nets may catch a few gullible people, but if enough people have caught on then perhaps a more targeted approach is actually more lucrative?
Really, it's a shame to see this happen ev...
Although compressing a complex concept down to a short term obviously isn't lossless compression, I hadn't considered how confusing the illusion of transparency might be. I would have strongly preferred that "Thinking, Fast and Slow" continue to use the words "fast" and "slow". As such, these were quite novel points:
...
they don't immediately and easily seem like you already understand them if you haven't been exposed to that particular source
they don't overshadow people who do know them into assuming that the names contain th
I've always hated jargon, and this piece did a good job of convincing me of its necessity. I plan to add a lot of jargon to an Anki deck, to avoid hand-waving at big concepts quite so much.
However, there are still some pretty big drawbacks in certain circumstances. A recent Slate Star Codex comment expressed it better than I ever have:
...One cautionary note about “Use strong concept handles”: This leans very close to coining new terms, and that can cause problems.
Dr. K. Eric Drexler coined quite a few of them while arguing for the feasibility of atomically
Meta note before actual content: I've been noticing of late how many comments on LW, including my own, are nitpicks or small criticisms. Contrarianism is probably the root of why our kind can't cooperate, and maybe even the reason so many people lurk and don't post. So, let me preface this by thanking you for the post, and saying that I'm sharing this just as an FYI and not as a critique. This certainly isn't a knock-down argument against anything you've said. Just something I thought was interesting, and might be helpful to keep in mind. :)
Clearly it was ...
As I understand it, Eliezer Yudkowsky doesn't do much coding, but mostly does purely theoretical work. I think most of Superintelligence could have been written on a typewriter based on printed research. I also suspect that there are plenty of academic papers which could be written by hand.
However, as you point out, there are also clearly some cases where it would take much, much longer to do the same work by hand. I'd disagree that it would take infinite time or that it can't be done by hand, but that's just me being pedantic and doesn't get to the substanc...
I like this idea. I'd guess that a real economist would phrase this problem as trying to measure productivity. This isn't particularly useful though: productivity is the value of output (AI research) over input (time), so it just raises the question of how to measure the output half. (I mention it mainly just in case it's a useful search term.)
I'm no economist, but I do have an idea for measuring the output. It's very much a hacky KISS approach, but might suffice. I'd try to poll various researchers, and ask them to estimate how much longer it would take them to do t...
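To make the polling idea concrete, here's a toy sketch in Python (the poll responses are entirely made-up numbers, purely for illustration):

```python
# Toy sketch of the polling approach: estimate an "AI speedup factor"
# by asking researchers how long a piece of work actually took versus
# how long they estimate it would have taken by hand.

# Each tuple: (months with current tools, estimated months by hand).
# These numbers are invented purely for illustration.
poll_responses = [
    (2, 10),
    (6, 18),
    (3, 36),
]

# Per-respondent speedup = by-hand time / actual time.
speedups = [by_hand / actual for actual, by_hand in poll_responses]

# Aggregate into a single rough productivity multiplier.
average_speedup = sum(speedups) / len(speedups)
print(f"Average speedup factor: {average_speedup:.1f}x")  # -> 6.7x
```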
I could get behind most of the ideas discussed here, but I'm wary of the entire "Standards of Discourse and Policy on Mindkillers" section. It's refreshing to have a section of the internet not concerned with politics. Besides, I don't think the world is even Pareto optimized, so I don't think political discussions are all that useful, since acquiring better political views incurs opportunity costs. Why fight the other side to gain an inch of ground when we could do something less controversial but highly efficient at improving things? I'm all for dis...
By "manufactured values" I meant artificial values coming from nurture rather than innate human nature. Obviously there are things we give terminal value, and things we give instrumental value. I meant to refer to a subset of our terminal values which we were not born with. That may be a null set, if it is impossible to manufacture artificial values from scratch or from acquired tastes. Even if this is the case, that wouldn't imply that instrumental values could not be constructed from terminal values as we learn about the world. There are 4 poss...
My possibly stupid question is: "Are some/all of LessWrong's values manufactured?"
Robin Hanson brings up the plasticity of values. Humans exposed to spicy food and social conformity pressures rewire their brains to make the pain pleasurable. The jump from plastic qualia to plastic values is a big one, but it seems plausible. It seems likely that cultural prestige causes people to rewire things like research and studying to feel interesting/pleasurable. Perhaps intellectual values and highbrow culture are entirely manufactured values. This seems mild...
"well, you're also ultimately basing yourself on intuitions for things like logic, existence of mind-independent objects, Occamian priors, and all the other viewpoints that you view as intuitively plausible, so I can jolly well use whatever intuitions I feel like too."
It's true that, a priori, using intuition is about as good as using an intuitive tool like inductive reasoning. However, induction has a very, very strong track record. The entire history of science is one of humans starting out with certain intuitive priors, and huge numbers of the...
the idea of people buying our product because we are EAs makes me uncomfortable.
In retrospect, I think that would make me uncomfortable too. In your position, I'd probably feel like I'd delivered an ultimatum to someone else, even if they were the one who actually made the suggestion. On the other hand, maybe a deep feeling of obligation to charity isn't a bad thing?
Would you say that you are not interested in paying more for a healthier product, not convinced that MealSquares is better for you, or something else?
Based on my (fairly limited) understand...
Someone gave you a downvote. If it was on my behalf or on the behalf of Soylent, then for the record I thought it was funny. :)
Hmmm, that's worrying. I played with some numbers for a 5'6" male, and got this:
99 lbs yields "Your BMI is way too low to be living"
100 lbs yields 74 years
150 lbs yields 76 years
200 lbs yields 73 years
250 lbs yields 69 years
300 lbs yields 69 years
500 lbs yields 69 years
999 lbs yields 69 years
It looks to me like they are pulling data from a table, and the table maxes out at 250 lbs?
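If so, a minimal sketch of that failure mode might look like this (hypothetical Python; the cutoffs and values below are just the numbers from my test above, not the calculator's actual internals):

```python
# Hypothetical lookup table for a 5'6" male: (weight cap in lbs, years).
# Any weight past the last row silently clamps to that row, which would
# explain the plateau at 69 years from 250 lbs upward.
LIFE_EXPECTANCY_TABLE = [
    (100, 74),
    (150, 76),
    (200, 73),
    (250, 69),  # last row: everything heavier falls through to here
]

def life_expectancy(weight_lbs):
    """Return the first table row whose weight cap covers the input."""
    if weight_lbs < 100:
        return "Your BMI is way too low to be living"
    for weight_cap, years in LIFE_EXPECTANCY_TABLE:
        if weight_lbs <= weight_cap:
            return years
    # Off the end of the table: clamp to the last row, so 300, 500,
    # and 999 lbs all return the same 69 years.
    return LIFE_EXPECTANCY_TABLE[-1][1]

for w in (99, 100, 150, 200, 250, 300, 500, 999):
    print(w, life_expectancy(w))
```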
avoiding obesity
Not to be pedantic, but I thought this might be of interest: As I understand it, amount of exercise is a better predictor of lifespan than weight. That is, I would expect someone overweight but who exercises regularly to outlive someone skinny who never exercises.
For example, this life expectancy calculator outputs 70 years for a 5'6", 25-year-old male who weighs 300 lbs but exercises vigorously daily. Changing the weight to 150 lbs and putting in no exercise raised the life expectancy by only 1 year. (a bit less than I was expe...
you'll help us earn money for effective giving
I realize you are in the startup phase now, and so it probably makes sense for you to put any surplus funds into growth rather than donating now. However, 2 questions:
1. Once you finish with your growth phase, about what percent of your net proceeds do you expect to donate?
2. What sorts of EA charities are you interested in?
I've been using MealSquares regularly, without realizing that you guys were LWers or EAs. As such, I've been using mostly Soylent because of the cost difference. (A 400 Calorie Me...
Good point. It seems like we 1) value an incredibly diverse assortment of things, and 2) value our freedom to fixate on any particular one of those things. So, any future which lacks some option we now have will be lacking. Because at some point we have to choose one future over another, perhaps we will always have a tiny bit of nostalgia. (Assuming that the notion of removing that nostalgia from our minds is also abhorrent.)
I'll also note that after a bit more contemplation, I've shifted my views from what I expressed in the second paragraph of my comment...
That's my understanding as well. I was trying to say that, if you were to formalize all this mathematically and took the limit as the number of Bayesian updates n went to infinity, uncertainty would go to zero.
Since we don't have infinite time to do an infinite number of updates, in practice there is always some level of uncertainty > 0%.
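As a minimal sketch of what I mean (my own formalization, using the standard coin-flip example with a uniform prior):

```latex
% Coin-flip example: unknown bias theta, uniform Beta(1,1) prior,
% k heads observed in n flips. The posterior is Beta(k+1, n-k+1),
% and its variance vanishes only in the limit:
\[
\operatorname{Var}(\theta \mid n, k)
  = \frac{(k+1)(n-k+1)}{(n+2)^2 \, (n+3)}
  \;\longrightarrow\; 0
  \quad \text{as } n \to \infty,
\]
% so for any finite number of updates the posterior variance, and
% hence the uncertainty, stays strictly greater than zero.
```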