All of adamisom's Comments + Replies

I messaged Jim on a different platform and he promptly replied:

You can get a zipfile of card images from

Woot! I haven't done this before but my plan is to order cheap, fast card sleeves from Amazon, plus cheap playing cards, print the card images on regular paper, and put each card image into a sleeve on top of a playing card (for backing)

There's also this currently-defunct link to buy a nicer print version than that, maybe the link will be fixed when you read this, idk: (read more)

“I'd probably suggest writing a novel first.”

It blows my mind that nobody (?) has written a sci-fi novel on alignment yet.

The Number is kind of an alignment novel, but you only see that late in the book. Arguably the Crystal Trilogy is a mis-alignment novel. Oh, and of course, there's Friendship is Optimal. 
I thought I came across one a few years ago, though it might have been about a different x-risk. An alien civilization is discovered that turns out to be dead (one of their 'sciences' killed them off).
5Tomás B.1y
It's actually kinda hard, if you want it to not be nonsense. And then you have to make it exciting. I've thought about having alignment going on as a subplot in a conventional adventure story - woven in every few chapters - and emphasize at the end how meaningless the more-conventional story was in comparison to the alignment work.  In terms of time-loop stuff, I think a protagonist who is demonstrably not smart enough to do the alignment work himself, and must convince the world's geniuses to work on alignment every loop, might be grimly amusing. 

Working with others in a shared environment with scientific ground rules ensures that your biases and their biases form a non-intersecting set

I liked your first point but come on here.

How is that not the point of peer review, whether formal or informal?

Lack of curiosity made people lose money to Madoff. This you already know - people did not do their due diligence.

Here's what Bienes, a partner of Madoff's who passed clients to him, said to the PBS interviewer for The Madoff Affair (before the 10 minute mark) when asked how he thought Madoff could promise 20%:

Bienes: ‘How do I know? How do you split an atom, I know that you can split them, I don’t know how you do it. How does an airplane fly? I don’t ask.’ ‘Did you ask him?’ ’Never! Why would I ask him? I wouldn’t understand it if he explained it!’

And... (read more)

Interesting. Modern religious people tend to not believe in the devil much, probably because that is not a very reassuring thing to believe, and religion today is pretty much a feelings-based cafeteria. This sounds like an example where believing in the devil would have been useful. "Maybe god wants us to be lucky, or maybe the devil tempts us into financial doom."

Are you kidding me? I'm staring right now, beside me, at a textbook chapter filled with catalogings of human values, with a list of ten that seem universal, with theories on how to classify values, all with citations of dozens of studies: Chapter 7, Values, of Chris Peterson's A Primer In Positive Psychology.

LessWrong is so insular sometimes. Like lionhearted's post Flashes of Nondecisionmaking yesterday---as if neither he nor most of the commenters had heard that we are, indeed, driven much by habit (e.g. The Power of Habit; Self-Directed Behavior; The Pr... (read more)

If this comment was written by a bot that produces phrases maximizing the ratio of the number of usages of pleasant-dopamine-buzz-in-group LessWrong language to non-in-group language, it would produce something like this.

I say this even though I really appreciate the comment and think it has genuine insight.

When in Rome... But yes, you are right, of course :(

Agreeing that it should have been in Discussion.

If this gets upvoted highly, I will update in favor of LessWrong continuing to become more in-group-y, more cutesy, and less attached-to-actual-change-y. It's becoming so much delicious candy!

This is the comment a super villain would make if he wanted less competition.

I want to upvote your comment, but I can't bring myself to do so without hearing more about your reasoning. Without it, your comment reads like a mere personal attack.
I don't think it's productive to think this way. Yvain wrote a great post which I currently can't find where he points out, among other things, that it's generally a bad idea for your primary reaction to an event to be a reaction to how you think it fits into an overarching narrative (e.g. "this just goes to show you can't trust those dirty Greens"). The LW community doesn't strike me as homogeneous enough that it's productive to model it using in-groupiness, cutesiness, and attached-to-actual-changiness parameters that can be inferred from current posts and that determine the value of future posts. Evaluate posts separately, and if you want to model something, model individual users. And for what it's worth, this post isn't typical of the kind of post I want to write. Would your reaction have been substantially different if this had been posted in Discussion?

How can someone have such a good memory?

Maybe he has been keeping a list of them (eg. ) or posted regularly about them somewhere?

More like an exception handling routine that's just checking for out-of-bounds errors.

Oh God. I love this place.

And this is why I love LessWrong, folks--sometimes. In other rationality communities--ones that conceived of rationality as something other than "accomplishing goals well"--this kind of post would be hurrah'd.

Why? Because dying is painful? Beyond that, I see them equivalently.

The goal behind altruism is to improve the quality of life for the human race. The motivation for altruism may be due to evolutionary reasons such as propagation of the species etc., but that motivation is not the same as altruism itself. This post is, however, about the latter, as you have rightly pointed out. Nevertheless, and therefore, the way to go about maximizing is to first ensure that all people currently alive remain alive and well taken care of. After that there's plenty of time to go about having more babies :)
Among other reasons, if you die there will be people mourning you, whereas if you had never existed in the first place there won't.

Non-existing is not the same thing as ceasing to exist.

Hey guys, how about we debate who's being egoistic about saving the world and who isn't? That sounds like a really good way to use LessWrong and knowledge of world-saving.

We do seem to love accusing people of being altruistic only for signaling.

Which is why I'm still puzzled by a simplistic moral dilemma that just won't go away for me: are we morally obligated to have children, and as many as we can? Sans using that energy or money to more efficiently "save" lives, of course. It seems to me we should encourage people to have children, a common thing that many more people will actually do than donate philanthropically, in addition to other philanthropy encouragements.

A lot of people don't consider failure to exist the same as dying. Of course, we need some level of procreation as long as there is death, and humanity would probably continue to expand even then.

are we morally obligated to have children, and as many as we can?

Cost of a first-world child is... (checks random Google result) $180,000 to get them to age 18. Cost of saving a kid in Africa from dying of malaria is ~$1,000.

Right now having children is massively selfish, because there are options that are more than TWO orders of magnitude more effective. It'd be like blowing up the train in order to save the deaf kids from the original post :)

It seems to me like a pretty small probability that an AI not designed to self-improve will be the first AI that goes FOOM, when there are already many parties known to me who would like to deliberately cause such an event.

I know this is four years old, but this seems like a damn good time to "shut up and multiply" (thanks for that thoughtmeme by the way).

That is cute... no? More childish than evil. He should just be warned that that's trolling.

There really should be a comment edit history feature. Maybe it only activates once a comment reaches +10 karma.

I just wanted to tell everyone that it is great fun to read this in the voice of that voice actor for the Enzyte commercial :)

This is wrong.

If you discard the emotionally-laden word "agenda" (in my experience, its usage always indicates negative affect toward the thing with the "agenda"), what you're basically saying is this: Anyone or any organization that concludes that the evidence for something is strong and that it matters, and who consequently takes a stand---their conclusions should be thrown out a priori. You did say "effectively nullifies anything they say"--those are damn strong words. So what you're implying, AFAICT, is that you only liste... (read more)

That they have a bias is trivial to see. Every issue has multiple sides, and if someone only or predominantly presents just one side, they are clearly biased. This site fits this to a T. I challenge you to find any information on this site which details the benefits of porn. Compare this, for example, to the writings of Dan Savage, who frequently discusses porn but gives a much more balanced view. He also has an agenda, of course, but it's not related to porn. I don't listen to those who look for supporting arguments for the side they already picked, whether they post online or ring my doorbell in a hope of converting me. I advise that you do not, either, but it's your call.

I know this is old. What is really meant by "does not help their case, either" is "it hurts their case that they don't have formal training". I vehemently disagree. Not that I think formal training is bad. Just that I think giving emphasis to this indirect indicator of their competence is misleading, because there's plenty of direct evidence--if you read the site--that they 'know what they're talking about'.

They have an agenda (prewritten bottom line), which effectively nullifies anything they say.

It seems to me this could be a smartphone app. Whenever a person wants to make a prediction about a personal event, they click on the app and speak, with a pause between the thing and how likely you think it is. The app could just store verbatim text, separating question/answer, and timestamping recordings in case you want to update your prediction later. If you learn to specify when you think the outcome will occur, it can make a sound to remind you to check off whether it happened; otherwise it could remind you periodically, like at the end of every day. Why couldn't it have data analysis tools to let you visualize calibration, or find useful patterns and alert you? Seems a plausible app to me.
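The core of the app described above could be sketched in a few dozen lines. This is a hypothetical sketch, not any real app's code; the names (`PredictionLog`, `record`, `calibration`) and the 0.2-wide confidence buckets are my own illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Prediction:
    question: str
    probability: float           # stated confidence the event happens
    outcome: Optional[bool] = None  # set later, when the event resolves

class PredictionLog:
    """Stores timestamped-style predictions and reports calibration."""

    def __init__(self):
        self.predictions = []

    def record(self, question, probability):
        # Store the verbatim question with its stated probability.
        p = Prediction(question, probability)
        self.predictions.append(p)
        return p

    def calibration(self, buckets=(0.5, 0.7, 0.9)):
        """For each confidence bucket [low, low + 0.2), compare stated
        confidence against the observed frequency of the event happening."""
        report = {}
        for low in buckets:
            resolved = [p for p in self.predictions
                        if p.outcome is not None
                        and low <= p.probability < low + 0.2]
            if resolved:
                report[low] = sum(p.outcome for p in resolved) / len(resolved)
        return report
```

A well-calibrated user would see each bucket's observed frequency land near the bucket's stated confidence; a real app would add the speech capture, reminders, and visualization the comment mentions on top of a store like this.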

The only time I've ever read a vague four-word sentence that deserves an upvote. Such things tickle me.


And what if it is? I am not claiming this is so. It is rhetorical. What then?

Teach the best case that there is for each of several popular opinions. Give the students assignments about the interactions of these different opinions, and let/require the students to debate which ones are best, but don't give a one-sided approach.

Darn it.

Even though you are talking explicitly about signaling, I still couldn't help myself from liking it.

I also like chaosmis' comment. It expressed what I should have.... Though his comment might also be a sinister meta-signaling-signaling trolling :P

God, I hate signaling.

(Wait, am I doing it right now?)

(Oh shit, and now.)


You say signaling like it's a bad thing.

Wow. This is fascinating.

You, Ezekiel, are basically saying, "I'm aware that a behavior expressing pedantry like that is a signalling thing, that it specifically signals 'nerdiness', and that such a person is trying to 'cultivate an image'."

"Oh, and I just did that"

... Presumably you value signaling and cultivating an image with the aim of belonging in a nerdy LessWrong in-group.

facepalm What are we becoming?

P.S. On an unrelated topic, I think the site founder is wrong about some things. And I just thought you ought to know that I'm such a contrarian :)

In my experience, the people on this site don't perceive signalling as wrong or useless, even when it's superficial. I do not understand why that's so because I perceive most of signalling as a waste of resources and think that cultivating a community which tried to minimize unnecessary signalling would be good.

Unfortunately, AGI isn't a "risky technology" where "mostly" is going to cut it in any sense, including adhering to expectations for safety regulation.

All the more reason to use resources effectively. Relatively few safety campaigns have attempted to influence manufacturers. What you tend to see instead are F.U.D. campaigns and negative marketing - where organisations attempt to smear their competitors by spreading negative rumours about their products. For example, here is Apple's negative marketing machine at work.

I'm hoping that this survey reveals that it is incorrect to refer to us LessWrongers as "gentlemen"

... And yes, you may take that either way. After all, there were questions on both our gender and our # of sexual partners.

The best kind! (except not really)


Haha... Obviously it's not the target behavior, but I, at least, assume that almost everyone who has commented to that effect has actually done it.

Because you have something I aspire to (multiple casual sex partners), why else?

Largely a result of Salsa dancing.

I mean LW karma (plus I'm a Redditor too) -

from my study of human societies, I believe my remark is called a "joke" - though I admit some people are bad at making jokes :p

Also, I failed to answer quite a few questions when I got 110, thinking I'd be penalized for wrong answers... Apparently I failed at reading the directions which state you should answer all of them facepalm


No. Stop. The only reason necessary is because we want more of that behavior, right?

The behavior of posting comments to the effect that we have taken the survey?

You're entirely correct. And if you read that post, you'll see why your reply is funny. :)

But I absolutely believe in karma. I guess that makes me spiritual. The things you find out about yourself eh?

even if you interpret karma as reddit/lw karma, or social consequences, "absolutely" is too much. don't bet your house on it.

I took the survey.

As per ancient tradition (apparently) - give me karma

Hmm. I got 110. And then because that's ridiculous, and I have an ego, I took it a second--and third--time, subsequently scoring 126 and 140. (I reported 125 on the survey because I know 110 isn't right.)

And while I was trying harder on the second and third attempts (as a result of realizing 'oh, I guess most of these actually are easy to everyone else, not just me, so I shouldn't be so leisurely'), I wasn't superbly focused on any--for example, I became distracted on the third attempt with something in another tab for more than 10 minutes before remembering it.

All I'm saying is I'm dubious of this IQ test.

I reported my first-try answer (which also seemed unrealistically low to me). I think on balance it might be best for everyone to just report their first-try answer, accepting that the test is normed low; then, for macro analysis, it can be adjusted or compared with another test like SAT scores.
I'm pretty sure you're not supposed to do that -- IIRC tests are designed to give reasonably accurate results in absence of practice effects. I had taken this same test one year ago and I'm pretty sure I answered certain questions faster than I would have if I had never seen them before (though this effect was almost exclusively in the easy, early questions, which took a very small part of the 40 minutes anyway -- I did score 9 points higher than last year but I had a headache (and hadn't realized I could go back to previous questions) back then so that sounds about right).
Calibration for other people looking at this comment: I took the test and got 10 points higher than my self-reported IQ. I think it picks up on a different kind of reasoning than the usual type of IQ test!

Back in grade school, I took several real-life IQ tests and usually scored in the high 130's to low 140's. I'd heard of Raven's Progressive Matrices, but this was the first time I'd taken that type of test. It was quite humbling. I got 122 on it. From what I've heard in #lesswrong, most people score low on this test.

I opened the test again in a different browser, VPN'd from a different country. It gave the same questions. That means your subsequent tests aren't valid. You already knew many of the answers. Worse, you knew which questions had stumped you before. You were probably thinking about those questions before you started the test a second or third time.

It suffers the usual problems of tests, among which are that test-taking is itself a skill.

That said, I don't think re-taking the test produces a valid result - a lot of the time I spent on the test was figuring out the rules of the puzzles as much as solving them. The problematic nature of the initial result is a reflection of the weakness of the test, as you noted, but re-taking the test simply introduces a new suite of problems.

In other words, your meta-cogitation is: (1) do I trust my very certain intuition? or (2) do I trust the heuristic from formal/mathematical thinking (which I see as useful partly and specifically to compensate for inaccuracies in our intuition)?

Holy shit, even today only 1 in 10,000 articles are retracted for fraud.

I am assuming these retracted articles are a tiny fraction of the actual number/% of articles with fraud, and such a tiny fraction as to not give reliable evidence for the proportion increasing; so the graph's data isn't particularly useful.

Yes! In fact I was just reading the lbod a week ago!

... Which is fucking awesome. The dude's been my inspiration for at least two years and I remember reading the announcement on his blog a year ago. In fact, it's likely that reading his blog led me to other blogs which led me to LessWrong. (I don't remember exactly how I found LessWrong.)

His posts on learning and time-management are very useful. The little book of productivity, an ebook he released a few years ago, is superb.

Somebody sounds grouchy :/ In fact, it would be completely unsurprising if I had read the other comments. Oops.

Results: 4+16+2+16+1+27(last option) = 144? WTF?

The issue of the bug in question has been discussed already, in this very thread -- but even if it hadn't, I don't think the discovery of a bug should stun you this much.

The book will be written before he feels his work is done on FAI.

You know it's true because intuitively you need a non-work outlet for creativity and flow, and it makes sense to write about rationality in an entertaining way and to write a character (Harry) whom he can take as an inspiration.

Of course, you also know it's false because the man prioritizes and FAI matters far more.

... I find a lot of rationalizations are like that. One of the most useful quick-n-dirty rationality heuristics LessWrong has given me is to 'consider the opposite'

Cool. I should have specified 'I'm intrigued; can you move down a level of specificity (as to how)?'

Most people seem to believe that it's absurd to suppose we're living in the Matrix; I point out that theism is not significantly different from this.

Or both: isn't intelligence correlated with size of social circle?

If so it really could be that the average friend is smarter than the average person.

Perhaps the problem here is simply that we think average intelligence is dumber than it really is.

Everyone considers themselves to be "above average", and so merely "average" surely cannot be---gasp!---not so bad! (Obviously it depends on your perspective. To a LessWronger, pretty much everything else looks stupid.)

That's not just due to raw (perceived) intelligence.

I nearly did too, which makes me wonder if a few people did; the only difference is an 'a', and I guess I assumed atheism would be at the top.
