If it's worth saying, but not worth its own post (even in Discussion), then it goes here.

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.


I request the attention of a moderator to the wiki editing war that began a week ago between Gleb_Tsipursky and VoiceofRa, regarding the article on Intentional Insights. So far, VoiceofRa has deleted it twice, and Gleb_Tsipursky has restored it twice.

Due to the way the editing to remove the page was done, to see the full editing history it is necessary to look also at the pseudo-article titled Delete.

I do not care whether there is an article on Intentional Insights or not, but I do care about standards for editing the wiki.


Thank you for raising this.

I suggest Gleb not be permitted to edit the page as he is motivated to not be impartial. I also suggest Ra equally not edit the page and we leave it to others to modify. (I hate saying "others will do it" but at worst I will)

Perhaps also best to add that Intentional Insights is not officially affiliated with LW?

I think there surely should be an article on Intentional Insights but it should be as neutrally written as possible. Deleting it seems like mere vandalism to me.

I've put a new version of the page in place. It is considerably less promotional than what Gleb wrote.
... Er, but now that seems to be gone and I don't even see any record of my edits. Perhaps I needed to do something different on account of the page having been deleted. (I just visited the page with redirect=no or redirect=off or whatever it is, edited its contents to replace the magic #REDIRECT[[Delete]] or whatever it was, and saved -- was that wrong?) Anyway, it looks as if Gleb has now recreated the page again, so it exists but is rather one-sidedly promotional. [EDITED to add:] No, wait, I was seeing an old cached version. I think the wiki must be serving up pages with misleading cache-control headers or something. I think it's all OK.
This is rather strange. If I go to https://wiki.lesswrong.com/wiki/Intentional_Insights, I see Gleb's last version. If I go to https://wiki.lesswrong.com/index.php?title=Intentional_Insights, I see gjm's version. However, clicking on the history tab of the latter page lists no edits since Gleb's of 19 November. On the "Delete" page, the history at https://wiki.lesswrong.com/index.php?title=Delete&action=history shows a third attempt by VoiceofRa at 18:49 on 30 November 2015 (timezone unspecified but probably UTC) to delete the InIn material. There seems to be some sort of inconsistency in the wiki. VoiceofRa's misuse of redirection does not help.
Try doing control-f5 to force a full reload of the page. Until I did that, I found that I saw "my" version when logged in and Gleb's version when logged out. I think I saw weird behaviour in the histories too, but I forget exactly what.
I'm using Safari on a Mac, so ctrl-F5 isn't a meaningful keystroke. Trying a different browser with which I've never accessed these pages before gave the same behaviour. That browser (Firefox) has ctrl-shift-R for a forced reload, and that made the wiki page for the article itself show your version. However, the history and discussion pages weren't up to date until I did more forced reloads. Now even more weirdness is happening. In Firefox, if I force-reload https://wiki.lesswrong.com/wiki/Intentional_Insights, I get the latest version. If I do an ordinary reload, I get Gleb's old version. Force reload -- new. Ordinary reload -- old. I have never seen this behaviour before. Clearly something is wrong somewhere, but what? Back in Safari, I cleared all history and cookies. Same behaviour as before: one URL gets Gleb's old version, one gets your version. This happens whether I'm logged in or out. I see that the history has a few minor edits by Gleb around 06:38, 1 December 2015. UTC right now is 22:53, 30 November 2015. What timezone does the wiki run on?
Looks like this is what happened: renaming the old InIn page to "Delete" moved its history over to a page named "Delete". Gleb then presumably made a new InIn page, which started a second set of history and made a mess of the wiki in general. Nothing fancy; probably not done purposefully to hide the edit history, but rather to add permanence to the attempt to delete the page (with the side effect of fragmenting the edit history as well).
Thank you.
Thanks for figuring this out, all! I have little wiki editing experience, so this is quite helpful knowledge.
You need to stop editing that article; GJM had it in a good place.
I think GJM was too harsh.
I think the argument for you editing the wiki on InIn to correct gjm's version is much better than the argument for Gleb editing the wiki on InIn; there's a reason Wikipedia has a rule against autobiographies outside of user pages.
NancyLebovitz made a post about it here, so you can share your perspective there.
It's worth considering https://wiki.lesswrong.com/wiki/Help:User_Guide. Maybe we should keep the contents of that page to a minimum, link to InIn, and let it stand for itself. Worth noting as well: the following affiliates do not have their own pages: MealSquares, Beeminder, PredictionBook, Omnilibrium. While that does not necessarily mean they should not, InIn would do well to stand with its peers. Suggested text:

For those of you who always wanted to know what is it like to put your head in a particle accelerator when it's turned on...

On 13 July 1978, Anatoli Petrovich Bugorski was checking a malfunctioning piece of the largest Soviet particle accelerator, the U-70 synchrotron, when the safety mechanisms failed. Bugorski was leaning over the equipment when he stuck his head in the path of the 76 GeV proton beam. Reportedly, he saw a flash "brighter than a thousand suns" but did not feel any pain.

The left half of Bugorski's face swelled up beyond recognition and, over the next several days, started peeling off, revealing the path that the proton beam (moving near the speed of light) had burned through parts of his face, his bone and the brain tissue underneath. However, Bugorski survived and even completed his Ph.D. There was virtually no damage to his intellectual capacity, but the fatigue of mental work increased markedly. Bugorski completely lost hearing in the left ear and only a constant, unpleasant internal noise remained. The left half of his face was paralyzed due to the destruction of nerves. He was able to function well, except for the fact that he had occasional complex ...

A paper.


Although bullshit is common in everyday life and has attracted attention from philosophers, its reception (critical or ingenuous) has not, to our knowledge, been subject to empirical investigation. Here we focus on pseudo-profound bullshit, which consists of seemingly impressive assertions that are presented as true and meaningful but are actually vacuous. We presented participants with bullshit statements consisting of buzzwords randomly organized into statements with syntactic structure but no discernible meaning (e.g., “Wholeness quiets infinite phenomena”). Across multiple studies, the propensity to judge bullshit statements as profound was associated with a variety of conceptually relevant variables (e.g., intuitive cognitive style, supernatural belief). Parallel associations were less evident among profundity judgments for more conventionally profound (e.g., “A wet person does not fear the rain”) or mundane (e.g., “Newborn babies require constant attention”) statements. These results support the idea that some people are more receptive to this type of bullshit and that detecting it is not merely a matter of indiscriminate skepticism but rather a discernment of deceptive vagueness in otherwise impressive sounding claims. Our results also suggest that a bias toward accepting statements as true may be an important component of pseudo-profound bullshit receptivity.

I liked this part:

"Participants were also given an attention check. For this, participants were shown a list of activities (e.g., biking, reading) directly below the following instructions: “Below is a list of leisure activities. If you are reading this, please choose the “other” box below and type in ‘I read the instructions’”. This attention check proved rather difficult with 35.4% of the sample failing (N = 99). However, the results were similar if these participants were excluded. We therefore retained the full data set."

Nice paper. p. 558 (Study 4): It's strange not to say why the data will not be considered further. The data are available, the reduction is clean, but the keys look a bit too skeletal given that copies of the original surveys don't seem to be available (perhaps because Raven's APM and possibly some other scales are copyrighted). Still, it's great of the journal and the authors to provide the data. Anyway, I'll take a look. The supplement contains the statements and the corresponding descriptive statistics for their profundity ratings. It's an entertaining read. ETA: For additional doses of profundity, use Armok_GoB's profound LW wisdom generator.
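Generators like the one mentioned above work on exactly the principle the paper describes: buzzwords slotted at random into a grammatical template, producing syntax without meaning. A toy sketch (the word lists and templates below are my own invention, not the paper's actual materials):

```python
import random

# Hypothetical buzzword inventory, in the spirit of the paper's examples
BUZZWORDS = ["wholeness", "transcendence", "intention", "awareness",
             "potentiality", "stillness", "abundance"]
VERBS = ["quiets", "unfolds", "transforms", "illuminates", "transcends"]
TEMPLATES = ["{a} {v} infinite {b}",
             "Hidden {a} {v} the {b} of today"]

def pseudo_profound(rng=random):
    """Slot random buzzwords into a syntactic template: structure, no meaning."""
    a, b = rng.sample(BUZZWORDS, 2)
    return rng.choice(TEMPLATES).format(
        a=a.capitalize(), v=rng.choice(VERBS), b=b)
```

Each output is grammatical (subject, verb, object) but vacuous, which is the paper's operational definition of pseudo-profound bullshit.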

I'm not sure what the best way is to add new data to an old debate - going back to post in the original thread means that only one person will see it - so I thought I'd post it here.

Anyway, the new data pertains to my previous debates with VoiceOfRa over gay rights and fertility rate. I just found out that Singapore bans male homosexuality (but lesbianism is legal) but women have only 1.29 children each, while similar countries Hong Kong and Japan have legal homosexuality, and fertility rates of 1.3 and 1.41.

Now, obviously three countries are not statistically significant, and it could be that Singapore would have an even lower birth rate if it legalised homosexuality. But it still seems unlikely that legalisation would have much impact, if any; and for someone who cares a lot about increasing the birth rate, sexuality is a distraction from the real issue, which is that careers are higher status than raising children.

I'm not sure what the best way is to add new data to an old debate - going back to post in the original thread means that only one person will see it - so I thought I'd post it here.

It also shows up in 'Recent Comments.' In general, I think it's better to continue old conversations in the old spot, rather than disconnecting them.

On the other hand, a new comment is more likely to be seen if it's in the current open thread. I'm not sure whether keeping a conversation in one place is more important.
Maybe a very specific conversation (such as giving a specific person advice) should be kept in one place, whereas a conversation of more general interest is inevitably going to be discussed repeatedly in different threads.

Here's drawing your attention to this year's Effective Altruism Survey, which was recently released and which Peter Hurford linked to in LessWrong Main. As he says there:

This is a survey of all EAs to learn about the movement and how it can improve. The data collected in the survey is used to help EA groups improve and grow EA. Data is also used to populate the map of EAs, create new EA meetup groups, and create EA Profiles and the EA Donation Registry.

If you are an EA or otherwise familiar with the community, we hope you will take it using this link. All results will be anonymised and made publicly available to members of the EA community. As an added bonus, one random survey taker will be selected to win a $250 donation to their favorite charity.

Take the EA Survey

When will be the next LessWrong census and who will run it?

I will run it if no one else pipes up. Scheduled for February if no one else takes it on. I will post asking for questions in a month.
I was going to email Scott, i.e., Yvain, if he needed any help, and/or if he was planning on running it at all. I never bothered to carry that out. I will email him now, though, unless you've confirmed he won't be doing it for 2015, or early 2016. Anyway, if you'll ultimately be doing it, let me know if you need help, and I can pitch in.
Will give it a few days for other replies; I will PM you when I am ready.

I've occasionally seen lists of people's favorite Sequences articles or similar, but is there any inverse? Articles or parts of sequences on LessWrong which contain errors, or which are probably misleading or poorly written, that anyone would like to point to?

I understand that the quantum physics sequence is controversial even within LessWrong. Generally, though, all of the sequences could benefit from annotations.
Apparently the metaethics sequence confused everyone.
I definitely didn't get it the first time I read it, but currently I think it's quite good. Maybe it's written in a way that's confusing if you don't already know the punchline (or maybe metaethics confuses people).
I know the punchline -- CEV. To me, it seemed to belabour points that felt obvious to me, while skipping over, or treating as obvious, points that are really confusing. Regardless of whether CEV is the correct ethical system, it seems to me that CEV or CV is a reasonably good Schelling point, so that could be a good argument to accept it on pragmatic grounds.
How could it be a Schelling point when no one has any idea what it is?
I meant 'program the FAI to calculate CEV' might be a reasonably good Schelling point for FAI design. I wasn't suggesting that you or I could calculate it to inform everyday ethics.
Um, doesn't the same objection apply? How could programming the FAI to calculate CEV be a Schelling point when no one has any idea what CEV is? It is not the case that we only don't know how to calculate it -- we have no good idea what it is.
It's, you know, human values. My impression is that the optimistic idea is that people have broadly similar, or at least compatible, fundamental values, and that if people disagree strongly in the present, this is due to misunderstandings which would be extrapolated away. We all hold values like love, beauty and freedom, so the future would hold these values. I can think of various pessimistic outcomes, such as: one of the most fundamental values turning out to be the desire not to be ruled over by an AI, so the AI immediately turns itself off; or status games making it impossible to fulfil everyone's values. Anyway, since I've heard a lot about CEV (on LW) and empathic AI (when FAI is discussed outside LW), and little about any other idea for FAI, it seems that CEV is a Schelling point, regardless of whether or not it should be. Personally, I'm surprised I haven't heard more about a 'Libertarian FAI' that implements each person's volition separately, as long as it doesn't non-consensually affect anyone else. Admittedly, there are problems involving, for instance, what limits should be placed on people creating sentient beings, to prevent contrived infinite torture scenarios, but given the libertarian bent of transhumanists I would have thought someone would be advocating this sort of idea.
Schelling points are not a function of what one person knows, they are a function of what a group of people is likely to pick without coordination as the default answer. But even ignoring this, CEV is just too vague to be a Schelling point. It's essentially defined as "all of what's good and none of what's bad" which is suspiciously close to the definition of God in some theologies. Human values are simply not that consistent -- which is why there is an "E" that allows unlimited handwaving.
I realise that it's not a function of what I know, what I meant is that given that I have heard a lot about CEV, it seems that a lot of people support it. Still, I think I am using 'Schelling point' wrongly here - what I mean is that maybe CEV is something people could agree on with communication, like a point of compromise. Do you think that it is impossible for an FAI to implement CEV?
A Schelling point, as I understand it, is a choice that has value only because of the network effect. It is not "the best" by some criterion, it's not a compromise, in some sense it's an irrational choice from equal candidates -- it's just that people's minds are drawn to it. In particular, a Schelling point is not something you agree on -- in fact, it's something you do NOT agree on (beforehand) :-) I don't know what CEV is. I suspect it's an impossible construct. It came into being as a solution to a problem EY ran his face into, but I don't consider it satisfactory.
Hmm, that's not what I think is the punchline :P I think it's something like "your morality is an idealized version of the computation you use to make moral decisions."
Really? That seems almost tautological to me, and about as helpful as 'do what is right'.
Well, perhaps the controversy is that that's it. That it's okay that there's no external morality and no universally compelling moral arguments, and that we can and should act morally in what turns out to be a fairly ordinary way, even though what we mean by "should" and "morally" depends on ourselves.
It all adds up to normality, and don't worry about it. See, I can sum up an entire sequence in one sentence! This also doesn't seem like the most original idea, in fact I think this "you create your own values" is the central idea of existentialism.
http://lesswrong.com/lw/i5/bayesian_judo/ This is one where Eliezer seems to be bragging about using the Chewbacca defense.
That's not the Chewbacca defense. It's going on the offense against something he disagrees with by pointing out implications. The Aumann bit is just throwing his hands up in the air.
The Aumann bit is him quoting something which doesn't actually prove what he's quoting it to prove, but which he knows his opponent can't refute because he's never heard of it. It isn't him throwing his hands up in the air--it's an argument, just a fallacious one.
Both times he used it, he had given up on getting anywhere and was just screwing with the guy; it's not part of his main argument. The first time, he's trying to stop the man from weaseling out. Granted, Aumann's theorem doesn't mean that, taken in its literal form. But it applies indirectly, aspirationally: try to be rational, try to share relevant information, etc., so as to approximate the conditions under which it would apply. Indeed, the most reasonable interpretation of the other man's suggestion to agree to disagree is that they both stop trying to be more right than they are (because it's uncomfortable, a can of worms, etc.). That's the opposite of the rationalist approach, and going against it is exactly how he used the theorem: 'if they disagree, someone is doing something wrong' is not very wrong. The second time, it's just 'Screw this, I'm out of here.'
It's worded like an argument. And he and the bystanders would, when listening to it, believe that Eliezer had made an argument that nobody was able to refute. The impact of Eliezer's words depends on deceiving him and the bystanders into thinking it is, and was intended as, a valid argument. In one sense this is a matter of semantics. If you knowingly state something that sounds like an argument, but is fallacious, for the purposes of tricking someone, does that count as "making a bad argument" (in which case Eliezer is using the Chewbacca Defense) or "not making an argument at all" (in which case he isn't)?

There is a LW post about rational home buying. But how does one rationally buy a car?

I don't have the knowledge to give a full post, but I absolutely hate car repair. And if you buy a used car, there's a good chance that someone is selling it because it has maintenance issues. This happened to me, and no matter how many times I took the car to the mechanic it just kept having problems. On the other hand, new cars have a huge extra price tag just because they're new, so the classic advice is to never buy a new car, because the moment you drive it off the lot it loses a ton of value instantly. Here are a couple of ideas for how to handle this:

1. Buy a car that's just off a 2- or 3-year lease. It's probably in great shape and is less likely to be a lemon. There are companies that only sell off-lease cars.

2. Assume a lease that's in its final year (at http://www.swapalease.com/lease/search.aspx?maxmo=12 for example). Then you get a trial period of 4-12 months and will have the option to buy the car. This way you'll know whether you like the car and whether it has any issues. The important thing to check is that the "residual price" they charge for buying the car is reasonable. See this article for more info on that: http://www.edmunds.com/car-leasing/buying-your-leased-car.html

There are a ton of articles out there on how to negotiate a car deal, but one suggestion that might be worth trying is to negotiate, then leave and come back the next day to make the purchase. In the process of walking out you'll probably get the best deal they're going to offer. You can always just come back ten minutes later and make the purchase -- they're not going to mind and the deal isn't going to expire (even if they say it is).
If you really hate repairs, doesn't it make much more sense just to lease yourself?
My real solution was not to own a car at all. Feel free to discount my advice appropriately!
A lease will usually end up more expensive, but you pay by the month, so it can be affordable for some people. (That's how lease companies profit.)
I wonder if this will help: http://lesswrong.com/r/discussion/lw/mv8/general_buying_considerations/ I wrote it to be general; if you consider whether each point relates to your purchase or not, you will be a lot closer to a set of criteria.
In the usual way: figure out your budget, figure out your requirements, look at cars in the intersection of budget and requirements and, provided the subset is not null, pick one which you like the most.
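That procedure amounts to a simple intersection: filter the listings by budget, then by requirements, and pick from what survives. A trivial sketch (the listing fields here are made up for illustration):

```python
def shortlist(cars, budget, requirements):
    """Keep cars within budget that satisfy every requirement predicate."""
    return [car for car in cars
            if car["price"] <= budget
            and all(req(car) for req in requirements)]

# Example: under $10k and four doors.
listings = [
    {"model": "A", "price": 8000, "doors": 4},
    {"model": "B", "price": 15000, "doors": 2},
    {"model": "C", "price": 9500, "doors": 4},
]
picks = shortlist(listings, budget=10000,
                  requirements=[lambda c: c["doors"] == 4])
```

The "pick the one you like most" step is then a choice among `picks`; if that list is empty, relax the budget or the requirements and repeat.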
I suspect some of it; the rest, however, seems sound.
Its intent was to point out that purchasing cars is not qualitatively different from purchasing any other thing and that the usual heuristics one uses when buying, say, a computer, apply to buying cars as well.
Purchasing cars often requires haggling. Purchasing computers rarely does. Also, cars are often bought used, and it is in the interests of the salesman to conceal information, such as hidden problems, from you. Computers are more rarely bought used, rarely have hidden problems that can impair their long-term functioning without being obvious, and when bought used are often not bought for a price high enough that it's even worth the seller's effort to deceive you about the status of the computer. Furthermore, computers can be bought in pieces and cars cannot.

It seems to me lately that commute time is actually pretty comfortably spent thinking on problems which require 'holding off on proposing solutions'. (I don't drive.) I used to misspend it by going over stuff in circles, but now I actually look forward to it and compose lists of things I have to do/buy/wash etc. (Also, I spend far less of it belowground, which is still, years after I moved, a palpable relief.) I had tried listening to podcasts, but it made my ears hurt after a while, and simply 'disconnecting' during the 'stupid commute' made me disgruntled. Apparently thinking doesn't feel too bad! :)

Can somebody point out textbooks or other sources that lead to an increased understanding of how to influence more than one person (the books I know address only 1:1 interactions or presentations)? There are books on how to run successful businesses, etc., but is there overarching knowledge that covers successful states, parties, NGOs, religions, and other social groups (this would also be of interest for how best to spread rationality)? In the Yvain framework: taking Moloch as a given, what are good resources on how to optimally influence a system of many self-interested agents, with its inherent game-theoretic problems, for as long as AI is not up to the task?

I believe the usual term for this is "politics". This is one classic reference.
Thanks Lumifer. The Prince is worth reading. However, transferring his insights regarding princedoms to how to design and spread memeplexes in the 21st century does have its limits. Any more suggestions?
I suspect that at the more detailed level you have to be more specific -- do you want to run a party, or an NGO, or a cult, or design nerd-culture memes, or what? Things become different. Beyond the usual recommendation -- go into a large B&N and browse (or chase "similar" links on Amazon) -- I can point out an interesting book about building political movements from scratch in the 21st century.
There are actually a few others, such as Group Psychology, Marketing, Economics and Mechanism Design. In general, I see this as a big problem that requires many different frameworks to have an effect.
Can you point out your 3-5 favorite books/frameworks?

How to build a better PhD

There are too many PhD students for too few academic jobs — but with imagination, the problem could be solved.



This seems like a good list of productivity systems, several of which I know to be reasonable. Worth looking them over and considering them for yourself.


I am interested in an analysis of words/day or letters/day published in the LW forums over time (comments counted separately from posts). Can someone point me in the direction of accessing the full database of comments in an easily analysable format -- a CSV or something? Or can someone with that access make a graph of characters published per day?

My intention is to approximate a value for characters/time or words/time (i.e. typing speed) to get a sense of how much approximate "human-equivalent" time is spent on the forums, and how that has changed over the forum's existence.

How to gather like-minded enthusiasts together?

How do you go about finding people who share your goals and are willing to cooperate with you on working to attain those goals? I haven't been very successful with this so far. It seems that there should be thousands of people around the world who think like me yet I've only been able to find a few.

Try looking for people you are willing to cooperate with.
The question is quite general. Can you be more specific about what steps you have taken so far and exactly where you are getting stuck?
Depends very much on the goal. For some goals it's "start a company". For others it might be: "Start a local LW meetup" Given your LW presence, you don't use your real life name. You don't have the country and city in which you live in your profile. That makes it harder for people to find you. Finding other people is a lot easier if you are open about who you are instead of hiding.

For various reasons, I don't listen to podcasts. Is there any reasonable way to get a text version of a podcast when none has been provided? (Pretend I'm totally deaf.)

There are a number of speech-to-text (transcription) services out there. Kalzumeus uses CastingWords. The cheapest option runs a dollar a minute (price determines speed of transcription), so it'll be pricey for most podcasts -- but you could consider it a donation and send the finished transcript on to the podcast creator, ideally convincing them to start producing their own transcripts. There's also a lot of transcription software out there, but the quality might not be as good as you'd like.

'My father had one job in his life, I've had six in mine, my kids will have six at the same time'

In the ‘gig’ or ‘sharing’ economy, say the experts, we will do lots of different jobs as technology releases us from the nine to five. But it may also bring anxiety, insecurity and low wages

This is still industry dependent. The hospitality industry has a short time-span per job; health-industry workers don't change jobs.

User behaviour: Websites and apps are designed for compulsion, even addiction. Should the net be regulated like drugs or casinos?

When I go online, I feel like one of B F Skinner’s white Carneaux pigeons. Those pigeons spent the pivotal hours of their lives in boxes, obsessively pecking small pieces of Plexiglas. In doing so, they helped Skinner, a psychology researcher at Harvard, map certain behavioural principles that apply, with eerie precision, to the design of 21st‑century digital experiences.

I can't see any plausible scenario where regulating the internet like drugs or casinos would lead to a net positive outcome.

I have updated my list of common human goals article.

It now includes: improve the tools available -- sharpen the axe, write a new app that can do the thing you want, invent systems that work for you, prepare for when the rest of the work comes along (in the sense of making a good workplace; sharpening the tools; knolling) -- and more...

What people have done which has helped them when they had no hope. Check out the LiveJournal link -- it's also got good comments.

This is very specifically a discussion for personal accounts, not advice.

I'm willing to bet that a formal study would turn up much the same sort of thing -- what helped was very varied, even contradictory between one person and another, though with some overlap.

That was a lot less helpful than I expected it to be. TL;DR: anything and everything. Do small things. Do something with structure or regularity, or you will end up wanting it anyway (this worked for several people). Consider a terrible mantra about how everything is hopeless but you are still above zero -- "I lived through today" or something. Pets help some people; not others.
The takeaway might be that at least some people find things that help, and you need to find something that suits you. What were you hoping for from the links?
The posts were not very well organised. They would have been more helpful if they were ordered like: problem, feelings, what was tried, what worked. Instead they were mostly a mess of all of the above in any order, along with a lot of "hope things get better for you". Maybe it's all in my head, expecting content to be more ordered and rationally oriented.
Maybe it would make sense for you to ask here for the kind of content you want.
I didn't have a need right now, but strategies are always good to have on hand, in preparation for when things go drastically wrong (even for advising other people in similar circumstances). I was hoping for more, is all. Not a big deal.

Iterating Grace, a curious little art book.

Subtitle: Heartfelt Wisdom and Disruptive Truths from Silicon Valley's Top Venture Capitalists. But it's really about being trampled by llamas.

Who are the equivalents of olympic champions for soft/social skills? What occupations do they usually hold?

I am aware of a show format in America in which a host invites a guest to a news show and chats with them. I would assume that requires the host to be able to strike up a conversation with pretty much anyone.

Or the insurance salesman signing a deal in well over the majority of cases.

... or the cult leader.

What is the present day equivalent of the Byzantine courtier managing to turn friendships into grudges, and making lovers stab each other in the...

I no longer remember where I read this idea: There have always been people highly attuned to identifying whom they need to flatter to obtain favors. Before modern democracy, when sovereign used to equal king, the sycophants trained in the art of ingratiating themselves with the sovereign were the courtiers whom you could always see surrounding the monarch. Now that the sovereign in most Western countries is "We the People," you can find the same sycophants using the same arts to gain favors from the sovereign---they're the PR consultants, political strategists, and most candidates for public office.
There are many different social skills. The skills of a good coach are different from the skills of a good salesperson. Trump speaks about how he would put people like Carl Icahn in charge of negotiating deals; I think Icahn is at the top of the league. I would put Oprah into the Olympic champion category as well, but she's very different from Icahn.

Of course you can have conversations by drawing up possible conversation topics in advance. On the other hand, I doubt the quality of those conversations would be very high. Having high-quality conversations is a lot about opening up on an emotional level.

I went into my last 4-day personal development workshop with the expectation that while I was in emotional resonance with the workshop, people would be more likely to approach me in public. After the first day I was travelling home (15 minutes walking + 20 minutes on the train) and two people approached me for navigation advice. On the second day one person approached me. I thought to myself, "This is crazy." That was enough to close down, and nobody approached me the next two days. There's nothing I could have done based on reading a book that would put me into that state. It's all the emotional effect of opening up.

Does that mean I'm generally highly skilled at having conversations? No, I'm not top tier. Most of my days in the last year I was acting rather introverted. I think you make a mistake if you focus on mental work, such as mapping out conversation topics, instead of dealing with emotional and physical issues.
I would expect to find them among lawyers, marketers, and fixers/facilitators/deal-makers. Red-pill people probably want to be there, too :-/

A political operator, playing the usual games in the corridors of power. Steve Jobs was said to be very, very good at this.

Salesman is the obvious one. There are LOTS of books on how to Win Friends and Influence People :-) starting with the Carnegie classic. Check out your local B&N or browse Amazon.

I request the attention of a moderator to the wiki editing war that began a week ago between Gleb_Tsipursky and VoiceofRa, regarding the article on Intentional Insights. So far, VoiceofRa has deleted it twice, and Gleb_Tsipursky has restored it twice.

Due to the way the editing to remove the page was done, to see the full editing history it is necessary to look also at the pseudo-article titled Delete.

I do not care whether there is an article on Intentional Insights or not, but I do care about standards for editing the wiki.

[This comment is no longer endorsed by its author]

I just started gathering some intelligence on potential competitors before entering a 'new industry'. It looks like there is basically just one major competitor. I just discovered they got a multimillion-dollar government grant! It feels so unfair; I feel cheated. How can I compete, or be a value-adding collaborator at another stage of the value chain, now? Does anyone have experience applying for government grants as a startup, including for startups that aren't internet-based?

Write to the company and ask to be involved? Offer your services? Do you care about making money off the thing, or about getting the thing done? (This question might help guide your future thinking.)

Is there any correlation between facial recognition and computer-driven cars? Just a strange idea, inspired by this article, that got into my head, along with cached knowledge that software recognition performs roughly on par with human recognition; because it's cached knowledge, I'm not sure how reliable it is. Is anyone more familiar with this?

I'm making a comparison between facial recognition and recognition of everything else, and I'm not sure how good a comparison it is, although both are fundamentally doing the same thing.

tl;dr: if human recognition ≈ software recognition ... (read more)

Sure, there's some correlation, but a correlation can just mean that if one is getting better, the other probably is too; just knowing that a correlation exists doesn't help us much. The reason self-driving cars could be almost perfect even while face recognition still has problems is that self-driving cars don't need great detection rates on people: it's enough to know where the road is and where other stuff is, and the nature of that other stuff is only of secondary concern. To find out where stuff is, self-driving cars don't have to use still images of the environment; they can use things like multiple cameras, fancy laser range-finders, and motion parallax.
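To illustrate the "where, not what" point: with two cameras, a car can estimate an obstacle's distance from the disparity between the two views using the classic pinhole-stereo relation depth = f * B / d, without recognising what the obstacle is. A minimal sketch, with made-up camera numbers (focal length, baseline, and disparity values here are purely illustrative):

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Classic pinhole stereo relation: depth = f * B / d.

    focal_length_px: camera focal length in pixels
    baseline_m: distance between the two cameras in metres
    disparity_px: horizontal pixel offset of the object between views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Hypothetical numbers: a 700 px focal length and a 0.5 m baseline.
# An object with 20 px of disparity is 17.5 m away; a nearer object
# shows a larger disparity and hence a smaller depth.
print(depth_from_disparity(700, 0.5, 20))  # -> 17.5
print(depth_from_disparity(700, 0.5, 40))  # -> 8.75
```

Note that no classifier is involved anywhere: the geometry alone says how far away the "stuff" is, which is why localisation can outpace recognition.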