All of diegocaleiro's Comments + Replies

Copied from the Heterodox Effective Altruism Facebook group:

Giego Caleiro I've read the comments and now speak as me, not as Admin:
It seems to me that the Zurich people were right to exclude Roland from their events. Let me lay out the reasons I have, based on extremely partial information:

1) IF Roland brings back topics that are not EA, such as 9/11 and Thai prostitutes, it is his burden to both be clear and to justify why those topics deserve to be there.

2) The politeness of EAs is in great part... (read more)

(Moderator note: I banned Diego partially for a long history of inflammatory commenting, but mostly for various highly deceptive and manipulative actions he took in the Bay Area community. This comment doesn’t have much to do with that, but it reminded me that he was still around.)

If some would rather be kicked out without a reason than with one, and others rather with than without, then o̶b̶v̶i̶o̶u̶s̶l̶y̶ the simple way to satisfy everyone is to let the kicked-out choose whether they receive a reason.
Giego, I agree with your post in general.

> IF Roland brings back topics that are not EA, such as 9/11 and Thai prostitutes, it is his burden to both be clear and to justify why those topics deserve to be there.

This is just a strawman that has cropped up here. From the beginning I said I don't mind dropping any topic that is not wanted. This never was the issue.

Eric Weinstein argues strongly against returns being at 20th-century levels, and says they are now vector fields, not scalars. I concur (not that I matter).

The Girardian conclusion and general approach of this text make sense.
But it's worth emphasizing that the best strategy is the more forgiving tit for two tats, or something like that.
Also it seems you are putting some moral value on long-term mating that doesn't necessarily reflect our emotional systems or our evolutionary drives. Short-term mating is very common and seen in most societies where there's enough resources to go around and enough intersexual geographical proximity. Recently there are more and stronger arguments emerging against female short t

... (read more)
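The "tit for two tats" strategy mentioned above can be sketched in a few lines. This is an illustrative toy, not from the original comment; the payoff numbers are the standard prisoner's-dilemma values (temptation 5, reward 3, punishment 1, sucker 0), and the pairing against an always-defect opponent is just an example:

```python
# Tit for two tats: cooperate unless the opponent defected
# in BOTH of the last two rounds (a forgiving tit-for-tat variant).

def tit_for_two_tats(opponent_history):
    """Defect only after two consecutive opponent defections."""
    if len(opponent_history) >= 2 and opponent_history[-2:] == ["D", "D"]:
        return "D"
    return "C"

def always_defect(opponent_history):
    return "D"

# Standard prisoner's-dilemma payoffs (row player, column player).
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees the opponent's moves
        move_b = strategy_b(history_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b, history_a

if __name__ == "__main__":
    a, b, moves = play(tit_for_two_tats, always_defect)
    # TF2T forgives the first defection, then defects from round 3 on.
    print(moves)  # ['C', 'C', 'D', 'D', 'D', 'D', 'D', 'D', 'D', 'D']
```

The point of the forgiveness window is visible in the history: a single defection (or noise) does not trigger retaliation, only two in a row do.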

This sounds cool. Somehow it reminded me of an old, old essay by Russell on architecture.

It's not that relevant, so just if people are curious

I am now a person who moved during adulthood, and I can report past me was right except he did not account for rent.

It seems to me the far self is more orthogonal to your happiness. You can try to optimize for maximal long term happiness.

This doesn't seem very feasible to me given both the prediction horizon being short and my preferences changing in ways I would not have predicted. Option value seems like a much better framework for thinking about the future.

Interesting that I conveyed that. I agree with Owen Cotton-Barratt that we ought to focus efforts now on sooner paths (fast takeoff soon) and not on the other paths, because more resources will be allocated to FAI in the future, even if fast takeoff soon is a low probability.

I personally work on inserting concepts and moral concepts into AGI because for almost anything else I could do there are already people who will do it better, and this is an area that intersects with a lot of my knowledge areas while still being AGI-relevant. See the link in the comment above with my proposal.

Not my reading. My reading is that Musk thinks people should not consider the probability of succeeding as a spacecraft startup (0% historically) but instead should reason from first principles, such as thinking about what materials a rocket is made from, then building up the costs from the ground up.

First, I think we should separate two ideas. 1. Creating a reference class. 2. Thinking in probabilities. "Thinking in probabilities" is a consistent talking point for Musk - every interview where he's asked how he's able to do what he does, he mentions this. Here's an example I found with a quick Google search: [] So that covers probability. In terms of reference class, I think what Thiel and Musk are both saying is that previous startups are really bad to use as a reference class for new startups. I don't know if that means they generally reject the idea of reference classes, but it does give me pause in using them to figure out the chances of my company succeeding based on other similar companies.

You have correctly identified that I wrote this post while very unhappy. The comments, as you can see by their lighthearted tone, I wrote pretty happy.

Yes, I stand by those words even now (that I am happy).

I am more confident that we can produce software that can classify images, music and faces correctly than I am that we can integrate multimodal aspects of these modules into a coherent being that thinks it has a self, goals, identity, and that can reason about morality. That's what I tried to address in my FLI grant proposal, which was rejected (by the way, correctly so: it needed the latest improvements, and clearly - if they actually needed it - AI money should reach Nick, Paul and Stuart before our team). We'll be presenting it in Oxford, tomorrow?? Sh... (read more)

We have non-confirmed, simplified hypotheses with nice drawings for how microcircuits in the brain work. They ignore more than a million things (literally: they just have to ignore specific synapses, the multiplicity of synaptic connections, etc... if you sum those things up and look at the model, I would say it ignores about that many things). I'm fine with simplifying assumptions, but the cortical microcircuit models are a butterfly flying in a hurricane.

The only reason we understand V1 is because it is a retinotopic inverted map that has been through very... (read more)

All true points, but consider your V4 example. We have software that is gradually approaching mammalian-level ability for visual information processing (not human-level just yet, but our visual cortex is larger than most animals' entire cortices, so that's not surprising). So, as far as building AI is concerned, so what if we don't understand V4 yet, if we can produce software that is that good at image processing?

Oh, so boring..... It was actually me myself screwing up a link I think :(

Skill: being censored by people who hate censorship. Status: not yet accomplished.

Wow, that's so cool! My message was censored and altered.

Lesswrong is growing an intelligentsia of its own.

(To be fair to the censoring part, the message contained a link directly to my Patreon, which could count as advertising? Anyway, the alteration was interesting, it just made it more formal. Maybe I should write books here, and they'll sound as formal as the ones I read!)

Also fascinating that it was near instantaneous.

[This comment is no longer endorsed by its author]
What happened? That sounds very weird.

No, that's if you want to understand why a specific Lesswrong aficionado became wary of probabilistic thinking to the point of calling it a problem of the EA community. If you don't care about my opinions in general, you are welcome to take no action about it. He asked for my thoughts, I provided them.

But the reference class of Diego's thoughts contains more thoughts that are wrong than that are true. So on priors, you might want to ignore them :p

US Patent No. 4,136,359: "Microcomputer for use with video display", for which he was inducted into the National Inventors Hall of Fame.
US Patent No. 4,210,959: "Controller for magnetic disc, recorder, or the like"
US Patent No. 4,217,604: "Apparatus for digitally controlling PAL color display"
US Patent No. 4,278,972: "Digitally-controlled color signal generation means for use with display"

Yeah, as I said above US4136359 is doubtless an excellent patent but it really isn't in any useful sense a patent on the personal computer. It's a patent on a way of getting better raster graphics out of a microcomputer connected to a television.

Basically because I never cared much for cryonics, even with the movie about me being made about it. Trailer:

For me cryonics is like soap bubbles and contact improv. I like it, but you don't need to waste your time knowing about it.

But since you asked: I've tried to get rich people in contact with Robert McIntyre, because he is doing a great job and someone should throw money at him.

And me, for that matter. All my donors stopped earning to give, so I'm with no donor cashflow now, I might have to "retire... (read more)

I looked at the flowchart and saw the divergence between the two opinions into mostly separate ends: settling exoplanets and solving sociopolitical problems on Earth on the slow-takeoff path, vs focusing heavily on how to build FAI on the fast-takeoff path, but then I saw your name in the fast-takeoff bucket for conveying concepts to AI and was confused that your article was mostly about practically abandoning the fast-takeoff things and focusing on slow-takeoff things like EA. Or is the point that 2014!diego has significantly different beliefs about fast vs. slow than 2015!diego?

I think you mistook my claim for sarcasm. I actually think I don't know much about AI (not nearly enough to make a robust assessment).

Yes I am.

Step 1: Learn Bayes

Step 2: Learn reference class

Step 3: Read 0 to 1

Step 4: Read The Cook and the Chef

Step 5: Reason about why the billionaires are saying that the people who do it wrong are basically reasoning probabilistically

Step 6: Find the connection between that and reasoning from first principles, or the gear hypothesis, or whichever other term you have for when you use the inside view, and actually think technically about a problem, from scratch, without looking at how anyone else did it.

Step 7: Talk to Michael Valentine about it, who has been reaso... (read more)
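As a minimal illustration of Steps 1 and 2 (not from the original comment; all numbers are made-up assumptions), here is how a reference-class base rate combines with inside-view evidence via Bayes' rule:

```python
# Bayes' rule: P(H | E) = P(H) * P(E | H) / P(E), where the prior P(H)
# comes from a reference class ("outside view") and the likelihoods
# encode how strongly the observed evidence favors H.

def bayes_update(prior, likelihood_given_h, likelihood_given_not_h):
    """Posterior P(H | E) from a prior and the two likelihoods of E."""
    numerator = prior * likelihood_given_h
    evidence = numerator + (1 - prior) * likelihood_given_not_h
    return numerator / evidence

# Hypothetical numbers: a reference class gives a 10% base rate of
# startup success; a signal is observed that is 3x as likely given
# success (0.6) as given failure (0.2).
prior = 0.10
posterior = bayes_update(prior,
                         likelihood_given_h=0.6,
                         likelihood_given_not_h=0.2)
print(round(posterior, 3))  # 0.25
```

Even a 3:1 likelihood ratio only lifts a 10% base rate to 25%, which is the usual argument for taking the reference class seriously before reasoning from first principles.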

Note that the billionaires disagree on this. Thiel says that people should think more like calculus and less like probability, while Musk (the inspiration for the cook and the chef) says that people think in certainties while they should think in probabilities.
I model probabilistic thinking as something you build on top of all this. First you learn to model the world at all (your steps 3-8), then you learn the mathematical description of part of what your brain is doing when it does all this. There are many aspects of normative cognition that Bayes doesn't have anything to say about, but there are also places where you come to understand what your thinking is aiming at. It's a gears model of cognition rather than the object-level phenomenon. If you don't have gears models at all, then yes, it's just another way to spout nonsense. This isn't because it's useless, it's because people cargo-cult it.

Why do people cargo-cult Bayesianism so much? It's not the only thing in the sequences. The first post, The Simple Truth, big parts of Mysterious Answers to Mysterious Questions, and basically all of Reductionism are about the gears-model skill. Even the name rationalism evokes Descartes and Leibniz, who were all about this skill. My own guess is that Eliezer argued more forcefully for Bayesianism than for gears models in the sequences because, of the two, it is the skill that came less naturally to him, and that stuck.

What would cargo-cult gears models look like? Presumably, scientism, physics envy, building big complicated models with no grounding in reality. This too is a failure mode visible in our community.
So for us to understand what you're even trying to say, you want us to read a bunch of articles, talk to one of your friends, listen to a speech, and only then will we become EAs good enough for you? No thanks.

I am particularly skeptical of transhumanism when it is described as changing the human condition, and the human condition is considered to be the mental condition of humans as seen from the human's point of view.

We can make the rainbow, but we can't do physics yet. We can glimpse at where minds can go, but we have no idea how to precisely engineer them to get there.

We also know that happiness seems tightly connected to this area called the NAcc of the brain, but evolution doesn't want you to hack happiness, so it put the damn NAcc right in the medial sli... (read more)

Not really. My understanding of AI is far from grandiose; I know less about it than about my fields (Philo, BioAnthro) - I've merely read all of FHI, most of MIRI, half of AIMA, Paul's blog, maybe 4 popular and two technical books on related issues, max 60 papers on AGI per se; I don't code, and I only have a coarse-grained understanding of it. But in the little research and time I had to look into it, I saw no convincing evidence for a cap on the level of sophistication that a system's cognitive abilities can achieve. I have also not seen very robus... (read more)

Beware the Dunning–Kruger effect. Looking at the big picture, you could also say that there is no convincing evidence for a cap on the lifespan of a biological organism. Heck, some trees have been alive for over 10,000 years! Yet, once you look at the nitty-gritty details of biomedical research, it becomes clear that even adding just a few decades to the human lifespan is a very hard problem and researchers still largely don't know how to solve it. It's the same for AGI. Maybe truly super-human AGI is physically impossible due to complexity reasons, but even if it is possible, developing it is a very hard problem and researchers still largely don't know how to solve it.

EA is an intensional movement.

I concur with many other people that when you start off from a wide sample of aggregative consequentialist values and try to do the most good, you bump into AI pretty soon. As I told Stuart Russell a while ago to explain why a philosopher-anthropologist was auditing his course:

My PHD will likely be a book on altruism, and any respectable altruist these days is worried about AI at least 30% of his waking life.

That's how I see it anyway. Mos... (read more)

Not particularly disagreeing, I just found it odd in comparison to other EA writings. Thanks for the clarification.

Very sorry about that, I thought he held the patent for some aspect of computers that had become widespread, in the same way Wozniak holds the patent for personal computers. This was incorrect. I'll fix it.

What patent for personal computers does Wozniak hold?

I'm looking for a sidekick if someone feels that such would be an appropriate role for them. This is me for those who don't know me:

And this is my flowchart/life autobiography of the last few years:

Nice to meet you! :)

Polymathwannabe asked: What would be your sidekick's mission?

A: It feels to me like that would depend A LOT on the person, the personality, our physical distance, availability and interaction typ... (read more)

I read your documents. Please PM me to organise a conversation.

My take is that what matters in fun versus work is where the locus of control is situated. That is, where your subjective experience tells you the source of the activity comes from.

If it comes from within, then you count it as fun. If it comes from outside, you count it as work.

This explains your feeling, and explains the comments in this thread as well. When your past self sets goals for you, you are no longer the center of the locus of control. Then it feels like negatively connoted work.

That's how it is for me anyway.

I like this idea, it seems to ring true to me.
Personally, even when I'm the one assigning myself "work", it's still a negative experience.

That is false. Bostrom thought of FAI before Eliezer. Paul thought of the Crypto. Bostrom and Armstrong have done more work on orthogonality. Bostrom/Hanson came up with most of the relevant stuff in multipolar scenarios. Sandberg/EY were involved in the oracle/tool/sovereign distinction.

TDT, which is EY's work, does not show up prominently in Superintelligence. CEV, of course, does, and is EY's work. Lots of ideas in Superintelligence are causally connected to Yudkowsky, but no doubt there is more value from Bostrom there than from Yudkowsky.

Bostrom got 1.5... (read more)

> Bostrom thought of FAI before Eliezer.

To be completely fair, although Nick Bostrom realized the importance of the problem before Eliezer, Eliezer actually did more work on it, and published his work earlier. The earliest publication I can find from Nick on the topic is this short 2003 paper basically just describing the problem, at which time Eliezer had already published Creating Friendly AI 1.0 (which is cited by Nick).

Do you have the link for that or at least the keywords? I assume Bostrom called it something else.

My concern is that there is no centralized place where emerging and burgeoning new rationalists, strategists and thinkers can start to be seen and dinosaurs can come to post their new ideas.

My worry is about the lack of centrality, nothing to do with the central member being LW or not.

Well, from what I remember, LW has always been diverse. There is a core that has been interested in AI risk since EY's early writing and the SL4 mailing list. There is a newer group that HPMOR brought in, and so on. Some of these groups explicitly complained about too many posts in categories they were not interested in. One of the proposed solutions was to fork LW into subreddits. That's what reddit does after all, and it seems to work for them. What happened instead was the exodus - a fork into separate sites. The EA people have their own forum now. The rationality bloggers hang out on blogs/facebook. The MIRI AI risk people have their own forum as well.

How does centrality in particular help? I mean it helps a little to have fewer pages to load to get the content that you want, but on the other hand when posts are too frequent it's annoying to have to wade through a bunch of stuff you are not interested in. LW of course is now pretty low volume - but if you look back at the history, there was a time when people were complaining (numerous people at various times) that there was too much stuff they didn't like (at least this is how I remember it, but I'm not even bothering to search to find some example posts).

I just saw Jurassic World - so my mind is having some extra trouble interpreting your use of 'dinosaurs'. I guess you mean old high quality posters who no longer post?

If you have some ideas you want to write and communicate and get feedback on, then your best bet is probably to write them up first on your own blog, and then submit them for discussion on multiple sites, and then gently link the resulting discussions together. Also, directly emailing people and asking for comment is sometimes useful - totally depends on your goals. The other strategy is to just write stuff and let the internet figure it out. I don't blog so much recently, but when I used to blog more I just wrote articles and never bothered with promoting them or even telling anybody abou

Would you be willing to also run a survey on Discussion about basing Main on upvotes instead of a mix of self-selection and moderation? As well as on all the ideas people suggest here that seem interesting to you?

There could be a Research section, an Upvoted section and a Discussion section, where the Research section is also displayed within the upvoted, trending one.

On second thought, I'll risk it []. (I might post a comment to it with a compilation of my ideas and my favorites of others' ideas, but it might take me a while.)
I'd rather not expose myself to the potential downvotes of a full Discussion post, and I also don't know how to put polls in full posts, only in comments []. Nonetheless I am pretty pro-poll in general and I'll try to include more of them with my ideas.

The solutions were bad on purpose so other people would come up with better solutions on the spot. I edited to clarify :)

I just want to flag that despite being simple, I feel like writings such as this one are valuable both as introductions to concepts and so that new branches with more details are created by other researchers.

You can carry it on by posting it monthly, there is no structure determining who creates threads. Like all else that matters in this world, it is done by those who show up for the job. I've made some bragging threads in the past noticing others didn't. Do the same for this :)

Ah, I wasn't aware of that! Very well then, I'll begin doing just that.

Arrogance: I caution you not to take this as advice for your own life, because frankly, arrogance goes a long, long loooooong way. Most rationalists are less arrogant in person than they should be about their subject areas, and rationalist women who identify as female and are straight are even less frequently arrogant than the already low base rate. But some people are over-arrogant, and I am one of these. Over-arrogance isn't about the intensity of arrogance, it is about the non-selectivity. The problem I have always had and been told again and again... (read more)

You're joking, right? We're arrogant as all hell, most of us are. I know I am. And it needs to fucking stop, because arrogance is ugly even when you're knowledgeable.
Dunning-Kruger - learn it, fear it. So long as you are aware of that effect, and aware of your tendency to arrogance (hardly uncommon, especially among the educated), you are far less likely to have it be a significant issue. Just be vigilant.

I have similar issues - I find it helpful to dive deeply into things I am very inexperienced with, for a while; realizing there are huge branches of knowledge you may be no more educated in than a 6th grader is humbling, and freeing, and once you are comfortable saying "That? Oh, hell - I don't know much about that, and will never find the time to", you can let it go and relax a bit. Or - I have. (My favorites are microbiology, or advanced mathematics. I fancy myself smart, but it is super easy to be so totally over my head it may as well be mystic sorcery they're talking about. Humbles you right out.)

Big chunks of this board do that as well, FWIW.
Adam Zerner:
Would you mind sharing examples? It'd help me to understand/internalize the idea you're talking about. I too am sort of arrogant. Idk. I like to think that I assign appropriate confidence levels based on the domain/information/situation, but I probably don't.

A Big Fish in a Small Pond: for many years I assumed it was better to be a big fish in a small pond than to try to be a big fish in the ocean. This can be decomposed into a series of mistakes, only part of which I learned to overcome so far.

1) It is based on the premise that social rankings matter more than they actually do. Most of day-to-day life is determined by environment, and being in a better environment, surrounded by better and different people, is more valuable experientially and in terms of output than being a big fish in a small pond.

2) It enco... (read more)

I tend to think like that, and I tend to see how 1) is indeed a mistake. I would now prefer to be surrounded by brighter people than to be the locally brightest. I am not sure I understand 2). 3) is interesting. I am not sure I understand 4), but maybe I can add a 5): it all comes from school socializing us into classroom sizes. We are used to thinking that what matters is being the best of the local 30. Lesson: de-schooling, learning how comparisons over more than 30 people, say, eight billion, work. Our local school brightest would be nobody at MIRI.
On the topic of social rankings, it seems that physical attractiveness, which seems to cause higher social ranking, may actually be confounded by variables like confidence. Or alternatively, physical attractiveness confounds confidence. I'm sure there's a statistical term for this, I just don't know it. I've based this on personal experience. Here are some quotes from Quora answers from people who self-claim attractiveness to illustrate.

For men: 'What a lot of guys don't realize is that the power that comes from being attractive comes more from the internal things you develop by being attractive (confidence, charisma, boldness etc) and not physical attributes. I know guys with average looks who get with more girls than me, or are more captivating speakers, or more popular. I didn't have to work at any of those things, but guys who did and did it well can often do it better.'

For women it may be very different. In fact, looking into this, it seems like at least one woman differentiates men's interest into predatorial and admiratory: 'There is enough and more sexual and romantic attention. You start taking it for granted and sometimes are a little afraid of it. The truth is that you become a little cynical and jaded because most of these men aren't really interested in you as a person. You are either a conquest or a trophy. There are exceptions of course, but they are not so easy to come by. And some attentions that are forced upon you are intrusive and sometimes even violative (I have been stalked twice).'

Interestingly, she doesn't pass any value judgements, implying either doesn't change her social status... rather, she feels it lowers her social status, as if it devalues her non-physical traits. That being said, she goes on to say: 'A lot of women, including me, say that they would be valued more for their mind than their looks. I have been thinking about this lately about why intelligence should be valued over looks, although the more immediate reaction i

I think this depends on how exactly the big fish treat the small fish in the pond/ocean. For example, if you take a job where your colleagues are more skilled than you, which of the following scenarios is more likely?

a) You will have a lot of opportunity to learn from your colleagues: you will be able to watch them work, to see how they solve problems; if you make a mistake they will explain you what that was wrong and what you could have done instead. You will learn a lot, and a few years later you will be one of those experts.

b) You will be at the bo... (read more)

I much enjoyed your posts so far Kaj, thanks for creating them.

I'd like to draw attention, in this particular one, to

Viewed in this light, concepts are cognitive tools that are used for getting rewards.

to add a further caveat: though some concepts are related to rewards, and some conceptual clustering is done in a way that maps to the reward of the agent as a whole, much of what goes on in concept formation, simple or complex, is just the old 'fire together, wire together' saying. More specifically, if we are only calling "reward" what is a r... (read more)

If you are particularly interested in sexual status, I wrote about it before here, dispelling some of the myth.

Usually dominance is related to a power that is maintained by aggression, stress or fear.

The usual search route will lead you to some papers:

What I would do is find some 2014-2015 papers and check their bibliographies, or ask the principal investigator which papers on the topic are most interesting.

I have a standing interest in other primates and cetaceans as well, so I'd look for attempts to show that others have or don't have prestige.

The technical academic term for (1) is prestige and for (2) is dominance. Papers which distinguish the two are actually really interesting.

I second Creutzer's request for links to these papers.
Can you give us some citations? I would love to read academic papers in this domain, but somehow I've been very bad at finding stuff that relates to the thing we call "status".

Status isn't strictly zero sum. Some large subset of sexual status is. Also humans have many different concomitant status hierarchies.

Should the violin players on the Titanic have stopped playing the violin and tried to save more lives?

What if they could have saved thousands of Titanics each? What if there were already a technology that could play a deep, sad violin song in the background, and project holograms of violin players playing in deep sorrow as the ship sank?

At some point, it becomes obvious that doing the consequentialist thing is the right thing to do. The question is whether the reader believes 2015 humanity has already reached that point or not.

We already produce beauty, ... (read more)

Why not actual Fields medalists?

Tim Ferriss lays out a guide for how to learn anything really quickly, which involves contacting whoever was great at it ten years ago and asking them who is great now that shouldn't be.

Doing that for Fields medalists and other high achievers is plausibly extremely high value.

Engineers would be more useful if it really is crunch time.

This would cause me to read Slate Star Codex and to occasionally comment. It may do the same for others.

This may be a positive outcome, though I am not certain of it.

Hard-coded AI is less likely than ems, since ems which are copies or modified copies of other ems would instantly be aware that the race is happening, whereas most of the later stages of hard-coded AI could be concealed from strategic opponents for part of the period in which they would have made hasty decisions, if only they knew.

There is a gender difference in resource constraint satisfaction worth mentioning: males in most primate species are less resource constrained than females, including humans. The main reason why females require fewer resources to be emotionally satisfied is that the upper bound on how many resources are required to attract the males with the best genes, acquire their genes and parenting resources, and have nearly as many children as possible, as well as taking good care of these children and their children is limited. For males however, because there is ... (read more)

Since men are wired to mate diversely, then obviously the recipient must feel the same, not different. I mean, it takes two to tango. I've met women who wanted to ** with me; I once told the proposer that I had a lover and she said: so what? Lesson over.

None of Miles's arguments resonates with me, basically because one counterargument could erase the pragmatic relevance of his points in one fell swoop:

The vast majority of expected value is on changing policies where the incentives are not aligned with ours. Cases where the world would be destroyed no matter what happened, or cases where something is providing a helping hand - such as the incentives he suggests - don't change where our focus should be. Bostrom knows that, and focuses throughout on cases where more consequences derive from our actions. It's... (read more)

None of Miles's arguments resonates with me, basically because one counterargument could erase the pragmatic relevance of his points in one fell swoop:

The vast majority of expected value is on changing policies where the incentives are not aligned with ours. Cases where the world would be destroyed no matter what happened, or cases where something is providing a helping hand - such as the incentives he suggests - don't change where our focus should be. Bostrom knows that, and focuses throughout on cases where more consequences derive from our actions. It... (read more)

[This comment is no longer endorsed by its author]

What are some more recent papers or books on the topic of Strategy and Conflict that take a Schellingian approach to the dynamics of conflict?

I find it hard to believe that the best book on any topic of relevance was written in 1981.
