All of Quinn's Comments + Replies

For me the scary part was Meta's willingness to do things that are minimally/arguably torment-nexusy and then put it in PR language like "cooperation" and actually with a straight face sweep the deceptive capability under the rug.

This is different from believing that the deceptive capability in question is on its own dangerous or surprising.

My update from cicero is almost entirely on the social reality level: I now more strongly than before believe that in the social reality, rationalization for torment-nexus-ing will be extremely viable and accessible to... (read more)

I'm a bit puzzled by these reactions. In a sense yes, this is technically teaching an AI to deceive humans... but in a super-limited context that doesn't really generalize even to other versions of Diplomacy, let alone real life. To me, this is in principle teaching an AI to deceive, but only in a similar sense as having an AI in Civilization sometimes make a die roll to attack you despite having signed a peace treaty. (Analogously, Cicero sometimes attacks you despite having said that it won't.) It's a deception so removed from anything that's relevant fo... (read more)

preorders as the barest vocabulary for emergence

We can say "a monotonic map $\Phi : A \to B$ is a phenomenon of $A$ as observed by $B$"; then, emergence is simply the impreservation of joins.

Given preorders $A$ and $B$, we say a map $\Phi : A \to B$ "preserves" joins (which, recall, are least upper bounds) iff $\Phi(a \vee a') \cong \Phi(a) \vee \Phi(a')$, where by "$\cong$" we mean both $\leq$ and $\geq$.
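A worked micro-example of join impreservation, in the spirit of the Fong–Spivak "generative effects" material this vocabulary matches (the three-element set and the specific map are illustrative assumptions, not from the original comment):

```latex
% A: partitions of {a,b,c} ordered by refinement (finer <= coarser).
% B: the booleans, with false <= true.
% \Phi(p) = true iff a and c lie in the same block of p; \Phi is monotone.
% Take p_1 = {{a,b},{c}} and p_2 = {{a},{b,c}}. Then:
\Phi(p_1) \vee \Phi(p_2) = \mathrm{false} \vee \mathrm{false} = \mathrm{false},
\qquad
\Phi(p_1 \vee p_2) = \Phi(\{\{a,b,c\}\}) = \mathrm{true}.
% The join is not preserved: observing the parts separately misses what
% the combined system exhibits, which is the "emergence" described above.
```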

Suppose $\Phi$ is a measurement taken from a particle. We would like for our measurement system to be robust against emergence, which is literally operationalized by measuring one particle, measuring another, t... (read more)

Can someone explain to me how the Giry monad factors in? For some $m \in \Delta(\Delta(X))$, executing the join to get a $\Delta(X)$ would destroy information: what information, and why not destroy it? (Am I being too hasty comparing the probability monad to a Haskell monad?)

4Scott Garrabrant2mo
When a voter compares two lottery-lotteries, they take expected utilities with respect to the inner Δ, but they sample with respect to the outer Δ, and support whichever sampled thing they prefer. If we collapse and treat everything like the outer Δ, that just gives us the original maximal lotteries, which e.g. is bad because it chooses anarchy in the above example. If we collapse and treat everything like the inner Δ, then existence will fail, because there can be non-transitive cycles of majority preferences.
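A concrete finite sketch of what the join destroys (the `Dist`/`joinD` names and coin example are mine; finite distributions stand in for the full Giry monad):

```haskell
-- Finite distributions as weighted lists; a stand-in for the Giry monad.
type Dist a = [(a, Rational)]

-- Monadic join: flatten a distribution over distributions.
joinD :: Dist (Dist a) -> Dist a
joinD dd = [(x, p * q) | (d, p) <- dd, (x, q) <- d]

fairCoin :: Dist Bool
fairCoin = [(True, 1/2), (False, 1/2)]

-- Outer point-mass on an inner fair coin:
m1 :: Dist (Dist Bool)
m1 = [(fairCoin, 1)]

-- Outer fair mixture of two inner point-masses:
m2 :: Dist (Dist Bool)
m2 = [([(True, 1)], 1/2), ([(False, 1)], 1/2)]

-- joinD m1 and joinD m2 are both the fair coin, yet voters who sample the
-- outer layer and take expectations over the inner layer treat m1 and m2
-- differently. That distinction is exactly what the join throws away.
```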

Florian Brandl, Felix Brandt, and Hans Georg Seedig

hyperlink 404

3Scott Garrabrant2mo
Does it work now?

Why is a codomain of [0,1] more general than a preorder?

The function only uses the preorders on candidates implied by the utility functions.

The (implicit, as OP says "obvious/not matter") measurability of a compact set seems like more structure than a preorder, to me, and I'm not thinking of "generalization" as imposing more structure.

3Scott Garrabrant2mo
Yeah, preorder is misleading. I was only trying to say with as few characters as possible that they are only considering a ranking of candidates possibly with ties. (Which is a preorder, but is less general.)

lol, I filed the same market on manifold before scrolling down and seeing you already did.

Thanks!

  • 0.75 to 0.95 vs. 0.75 to 0.9 is strictly my transcription bug, not being careful enough.
  • In general I wasn't auditing the code from the Jonas Moss comment, I just stepped through looking at the functionality. I should've been more careful, if I was going to make a claim about the conversion factor.
  • You're kinda right about the question "if it's a constant number of lines written exactly once, does it really count as boilerplate?" I can see how it feels a little dishonest of me to imply that the ratio is really 15:1. The example I was thinking of w
... (read more)
2Sam Nolan4mo
Thanks for the flag! I might not be understanding correctly, but I don't think there's a problem here with the actual underlying code, just my explanation of it (we all hate magic numbers). Which is fair enough; the notebook is much too dense for my liking. It's still a work in progress! I agree: the Squiggle team is looking to create different quantiles for different distributions, and I've needed them on several occasions. You can check out the discussion on GitHub here [https://github.com/quantified-uncertainty/squiggle/discussions/284]. It's on my todo list.
1Mo Putera4mo
Just letting you know that you seem to have double-pasted the 3rd bullet point.
Answer by QuinnJul 14, 2022112

Yes, the problem is real. I'd try your solution if it existed.

Optimal for me would be emacs or vscode keybindings, not the 4-fingers of tablet computing.

2AllAmericanBreakfast5mo
I’ll let you know when I have a functional prototype, I’d love your feedback :)

Unlikely, see here (Rohin wrote a TLDR for alignment newsletter, see the comment).

Some of what follows is similar to something I wrote on EA Forum a month or so ago.

Returns on meatspace are counterfactually important to different people to different degrees. I think it's plausible that some people simply can't keep their eye on the ball if they're not getting consistent social rewards for trying to do the thing, or that the added bandwidth you get when you move from discord to meatspace actually provides game-changing information.

I have written that if you're not this type who super needs to be in meatspace with their tribe, who can cul... (read more)

2Evan_Gaensbauer5mo
Thank you for this detailed reply. It's valuable, so I appreciate the time and effort you've put into it. The thoughts I've got to respond with are EA-focused concerns that would be tangential to the rationality community, so I'll draft a top-level post for the EA Forum instead of replying here on LW. I'll also read your EA Forum post and the other links you've shared to incorporate into my later response. Please also send me a private message if you want to set up continuing the conversation over email, or over a call sometime.

missed opportunities to build a predictive track record and trump

I was reminiscing about my prediction market failures; the clearest "almost won a lot of mana dollars" (if Manifold Markets had existed back then) was this executive order. The campaign speeches made it fairly obvious, and I'm still salty about a few idiots telling me "stop being hysterical" when, pre-inauguration, I accused him of being exactly what it says on his tin, even though I recall that being a time when my epistemics were way worse than they are now.

However, there d... (read more)

Is there an EV monad? I'm inclined to think there is not, because EV(EV(X)) is a way simpler structure than a "flatmap" analogue.
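One way to make that intuition precise (a sketch with hypothetical names, not anything from a library): expectation behaves like an Eilenberg-Moore algebra of the distribution monad rather than a monad in its own right, which is why EV(EV(X)) collapses so flatly:

```haskell
-- Finite distributions as weighted lists.
type Dist a = [(a, Double)]

-- The monad's join; this is where "flatmap" structure lives.
joinD :: Dist (Dist a) -> Dist a
joinD dd = [(x, p * q) | (d, p) <- dd, (x, q) <- d]

-- Map a function over outcomes.
mapD :: (a -> b) -> Dist a -> Dist b
mapD f d = [(f x, p) | (x, p) <- d]

-- Expectation collapses a distribution of numbers to a single number.
ev :: Dist Double -> Double
ev d = sum [x * p | (x, p) <- d]

-- Algebra law: for any dd :: Dist (Dist Double),
--   ev (mapD ev dd) == ev (joinD dd)
-- Taking inner EVs first or flattening first gives the same number, so
-- iterated EV has no leftover structure for a "flatmap" analogue to use.
```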

I find myself, just as a random guy, deeply impressed at the operational competence of airports and hospitals. Any good books about that sort of thing?

1JBlack5mo
It is pretty impressive that they function as well as they do, but seeing how the sausage is made (at least in hospitals) does detract from it quite substantially. You get to see not only how an enormous number of battle-hardened processes prevent a lot of lethal screw-ups, but also how sometimes the very same processes cause serious, and very occasionally lethal, screw-ups. It doesn't help that hospitals seem to be universally run with about 90% of the resources they need to function reasonably effectively. This is possibly because there is relentless pressure to cut costs, but if you strip any more out of them then people start to die from obviously preventable failures. So it stabilizes at a point where everything is much more horrible than it could be, but not quite to an obviously lethal extent. As far as your direct question goes, I don't have any good books to recommend.

Stuart Russell, in the FLI podcast debate, outlined things like instrumental convergence and corrigibility (though they took a backseat to his own standard/nonstandard model approach), and challenged Pinker to publish his reasons for not being compelled to panic in a journal, while warning him that many people would emerge to tinker with and poke holes in his models.

The main thing I remember from that debate is that Pinker thinks the AI xrisk community is needlessly projecting "will to power" (as in the nietzschean term) onto software artifacts.

You may be interested: the NARS literature describes a system that encounters goals as atoms and uses them to shape the pops from a data structure they call a "bag", which is more or less a probabilistic priority queue (rough sketch below). It can do "competing priorities" reasoning as a natural first-class citizen, and supports mutation of goals.
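A toy rendering of the bag idea (my own sketch, not NARS's actual implementation; priorities are assumed positive):

```haskell
import System.Random (randomRIO)

-- A toy "bag": pop samples an item with probability proportional to its
-- priority, rather than always taking the maximum-priority item.
newtype Bag a = Bag [(a, Double)]  -- (item, priority > 0)

put :: a -> Double -> Bag a -> Bag a
put x p (Bag xs) = Bag ((x, p) : xs)

-- Sample proportionally to priority, removing the sampled item.
pop :: Bag a -> IO (Maybe (a, Bag a))
pop (Bag []) = pure Nothing
pop (Bag xs) = do
  r <- randomRIO (0, sum (map snd xs))
  pure (Just (go r xs []))
  where
    go _ [(x, _)] acc = (x, Bag acc)
    go r ((x, p) : rest) acc
      | r <= p    = (x, Bag (acc ++ rest))
      | otherwise = go (r - p) rest ((x, p) : acc)
```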

But overall your question is something I've always wondered about.

I made an attempt to write about it here; I refer to systems of fixed/axiomatic goals as "AIXI-like" and systems of driftable/computational goals as "AIXI-unlike".

I share your i... (read more)

Jotted down some notes about the law of mad science on the EA Forum. Looks like some pretty interesting open problems in the global priorities, xrisk strategy space. https://forum.effectivealtruism.org/posts/r5GbSZ7dcb6nbuWch/quinn-s-shortform?commentId=DqSh6ifdXpwHgXnCG

Ambition, romance, kids

Two premises of mine are that I'm more ambitious than nearly everyone I meet in meatspace, and that ambition is roughly normally distributed. Together, these imply that in any relationship, I should expect to be the more ambitious one.

I do aspire to be a nagging voice increasing the ambitions of all my friends. I literally break the ice with acquaintances by asking "how's your master plan going?" because I try to create vibes like we're having coffee in the hallway of a supervillain conference, and I like to also ask "what harder project is your current project a war... (read more)

Borlaug was a super absentee parent, his wife did everything herself and he (presumably) sent back cash while globetrotting. How many of these ambitious people with kids aren't super involved in their kids' lives?

does "you are what you can't stop yourself from doing" help you in this time? Querying your revealed preferences for behavior that is beyond effortless, that it would take effort to not do, can be very informative.

1Cui9mo
Bertrand Russell points out productive compulsion as well! Emotional drive to create, or stand up, etc.

Yesterday I quit my job for direct work on epistemic public goods! Day one of direct work trial offer is April 4th, and it'll take 6 weeks after that to know if I'm a fulltime hire.

I'm turning down

  • a raise to 200k/yr USD
  • building lots of skills and career capital that would give me immense job security in worlds where investment into one particular blockchain doesn't go entirely to zero
  • having fun on the technical challenges

for

  • confluence of my skillset and a theory of change that could pay huge dividends in the epistemic public goods space
  • 0.35x paycut
... (read more)

yeah the bet pressured me to post it a little early.

I'd be interested in elaboration of your view of comparative advantage shifting. You mean shifting more toward lucrative E2G opportunities? Shifting more away from capacity to make lucrative alignment contributions?

Do you have any recommendations for what would make it less rambly?

6NunoSempere10mo
An editor

Would there be a way of estimating how many people within the Amazon organization are fanatical about same-day delivery, as against how many are "just working a job"? Does anyone have a guess? My guess is that an organization of that size with a lot of cash only needs about 50 true fanatics; the rest can be "mere employees". What do y'all think?

3gwern10mo
I can't really think of any research bearing on this, and it's unclear how you'd measure it anyway. One way to go might be to note that there is a wide (and weird) variance between the efficiency of companies: market pressures are slack enough that two companies doing, as far as can be told, the exact same thing in the same geographic markets with the same inputs might be almost 100% different (I think that was the range in the example of concrete manufacturing in one paper I read); a lot of that difference appears to be explainable by the quality of the management, and you can do randomized experiments in management coaching or intensity of management and see substantial changes in the efficiency of a company [https://www.gwern.net/notes/Competence#bloom-et-al-2012] (Bloom - the other one - has a bunch of studies like this). Presumably you could try to extrapolate from the effects of individuals to company-wide effects, and define the goal of the 'fanatical' as something like 'maintaining top-10% industry-wide performance': if educating the CEO is worth X percentiles and hiring a good manager is worth 0.0Y percentiles and you have such and such a number of each, then multiply out to figure out what will bump you 40 percentiles from an imagined baseline of 50% to the 90% goal. Another argument might be a more Fermi-estimate-style argument from startups. A good startup CEO should be a fanatic about something, otherwise they probably aren't going to survive the job. So we can assume one fanatic at least. People generally talk about startups beginning to lose the special startup magic of agility, focus, and fanaticism at around Dunbar's-number levels of employees, like 300, or even less (e.g. Amazon's two-pizza rule, which is I guess 6 people?). In the 'worst' case that the founder has hired 0 fanatics, that implies 1 fanatic can ride herd over no more than ~300 people; in the 'best' case that he's hired dozens, then each fanatic can only cover for more like 2 or 3 non-fanatics. I'm
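A toy version of the multiply-out step described above (every number is a placeholder assumption, not data):

```haskell
-- Percentile-accounting Fermi sketch: how many fanatic-equivalents are
-- needed to lift a company from a baseline percentile to a target?
fanaticsNeeded :: Double -> Double -> Double -> Double
fanaticsNeeded baseline target liftPerFanatic =
  (target - baseline) / liftPerFanatic

-- e.g. moving from the 50th to the 90th percentile, assuming each
-- fanatic-equivalent is worth ~1 percentile:
--   fanaticsNeeded 50 90 1  ==  40
```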
2Dagon10mo
I'm not sure "fanatical" is well-defined enough to mean anything here. I doubt there are any who'd commit terrorist acts to further same-day delivery. There are probably quite a few who believe it's important to the business, and a big benefit for many customers. You're absolutely right that a lot of employees and contractors can be "mere employees", not particularly caring about long-term strategy, customer perception, or the like. That's kind of the nature of ALL organizations and group behaviors, including corporate, government, and social groupings. There's generally some amount of influencers/selectors/visionaries, some amount of strategists and implementers, and a large number of followers. Most organizations are multidimensional enough that the same people can play different roles on different topics as well.
1JBlack10mo
I don't think it needs any true fanatics. It just needs incentives. This isn't to say there won't be fanatics anyway. There probably aren't many things that nobody can get fanatical about. This is even more true if they're given incentives to act fanatical about it.

Obviously there are considerations downstream of articulating this; one is that when $P(A) > P(B)$ but $V(L_A \mid A) < V(L_B \mid B)$, it's reasonable to hedge on ending up in world B even though it's not strictly more probable than ending up in world A.

We need a name for the following heuristic, I think. I think of it as one of those "tribal knowledge" things that gets passed on like an oral tradition without being citeable, in the sense of being part of a literature. If you come up with a name I'll certainly credit you in a top-level post!

I heard it from Abram Demski at AISU'21.

Suppose you're either going to end up in world A or world B, and you're uncertain about which one it's going to be. Suppose you can pull lever $L_A$, which will be 100 valuable if you end up in world A, or you can pull lever $L_B$, whi... (read more)

1TLW1y
It is often the case that you are confident in the sign of an outcome but not the magnitude of the outcome. This heuristic is what happens if you are simultaneously very confident in the sign of positive results, and have very little confidence in the magnitude of negative results.
3Dagon1y
Why are you specifying 100 or 0 value, and using fuzzy language like "acceptably small" for disvalue? Is this based on "value" and "disvalue" being different dimensions, and thus incomparable? Wouldn't you just include both in your prediction, and run it through your (best guess of) utility function and pick highest expectation, weighted by your probability estimate of which universe you'll find yourself in?
1Measure1y
I'm not sure I understand. If the lever is +100 in world A and -90 in world B, it seems like a good bet if you don't know which world you're in. Or is that what you mean by "acceptably small amount of disvalue"?
1Quinn1y
Obviously there are considerations downstream of articulating this; one is that when $P(A) > P(B)$ but $V(L_A \mid A) < V(L_B \mid B)$, it's reasonable to hedge on ending up in world B even though it's not strictly more probable than ending up in world A.
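To make the hedge concrete with made-up numbers: take $P(A) = 0.6$, $P(B) = 0.4$, $V(L_A \mid A) = 50$, $V(L_B \mid B) = 100$, and each lever worth 0 in the other world. Then:

```latex
\mathbb{E}[L_A] = 0.6 \cdot 50 + 0.4 \cdot 0 = 30,
\qquad
\mathbb{E}[L_B] = 0.6 \cdot 0 + 0.4 \cdot 100 = 40,
```

so pulling $L_B$, i.e. hedging on the less probable world B, maximizes expected value.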

critiques and complaints

I think one of the most crucial meta-skills I've developed is honing my sense of who's criticizing me vs. who's complaining.

A criticism is actionable; implicitly, it's often from someone who wants you to win. A complaint is when you can't figure out how you'd actionably fix something or improve based on what you're being told.

This simple binary story is problematic. It can empower you to ignore criticism you don't like by providing a set of excuses, if you're not careful. Sometimes it's operationally impossible to parse out a critic... (read more)

It could be coincidental, but since then I think the rate of pondering founding/building ideas has increased. Perhaps my ability to see myself in a founder role has increased.

(Which isn't specifically about profitable business models, so could be orthogonal to the billionaire suggestion: most of the "buildy" ideas I ponder are grant-reliant / unprofitable)

2Linch1y
Congrats!

One time an EA just asked me "have you considered becoming a billionaire?", which I found very potent.

2Linch1y
How's that going? (Sincere question)
4MondSemmel1y
Here [https://forum.effectivealtruism.org/posts/m35ZkrW8QFrKfAueT/an-update-in-favor-of-trying-to-make-tens-of-billions-of] is a thread on the EA forum which makes this case explicitly.

I think of the 4 community reviews (5 if Yglesias is considered a community member), this is my favorite. I like that you carved right through wishing you were watching a different movie or expecting to watch a different movie, and just extracted the maximal value from the movie that you indeed actually watched. I feel like when I talk about ways the movie could've been better, I'm expressing dissatisfaction with the people who made it; when you do it, you're suggesting ways for the characters to improve.

I like that what others interpreted as cheap shots a... (read more)

handed a miracle

The missed opportunity I'm most annoyed about is around this: in my version, everything could go as planned via the first hypothesized miracle, then either 1. everyone could die anyway, or 2. they'd have to go back to the drawing board, get creative, decentralize (i.e. invest in other orgs or individuals), and try again. So much richer than the movie we actually watched.

hmu for a Haskell job in decentralized finance. Super fun zero-knowledge-proof stuff, great earning-to-give opportunity.

Are Schelling points the Occam's razor of mechanism design?

In game theory, a focal point (or Schelling point) is a solution that people tend to choose by default in the absence of communication. (wikipedia)

Intuitively I think simplicity is a good explanation for a solution being converged upon.

Does anyone have any crisp examples that violate the Schelling point / Occam's razor correspondence?

1acylhalide1y
The anchoring effect is enough for a Schelling point; it doesn't have to be a simple solution. For instance, a new nation that wants to move away from dictatorship is automatically going to build a democracy with multiple independent arms (legislature, judiciary, executive), a constitution, periodic elections of representatives, etc. They could choose to try a direct democracy, or change the term from 5 years to 1 year, or hold public elections for the judiciary too, or make any other deviation from how democracies usually run, but they won't. Fear of the unknown + no creativity or motivation will be sufficient for them to copy existing countries' democratic structure.

Major guilty pleasure of mine is Aaron Sorkin, who once did a show called Newsroom about a large news broadcast project that, against all odds and incentives, doubles down on the duty of media elites to inform the public and so on. It's either unbearably corny (insulting) or unbearably corny (affectionate) depending on who's watching.

After the first broadcast of their re-invigorated show, the producer says

in the old days of like 10 minutes ago, we did the news well. You know how? We decided to.

I was thinking about this post and I got my streams crossed... (read more)

3Bill Prada10mo
For an inspiring movie scene for the moment I’d go with Apollo 13. The nerdy engineers saving the mission by coming up with a kluge to fit the wrong shape and size charcoal CO2 scrubbers. A palpable payoff to the JFK inspirational speech. https://spacecenter.org/apollo-13-infographic-how-did-they-make-that-co2-scrubber/ [https://spacecenter.org/apollo-13-infographic-how-did-they-make-that-co2-scrubber/]
2Jon Garcia1y
Thanks for the link. That call-and-response was beautiful.

Disvalue via interpersonal expected value and probability

My deontologist friend just told me that treating people like investments is no way to live. The benefits of living by that take are that your commitments are more binding and you actually do factor out uncertainty, because when you treat people like investments you always think "well, someday I'll no longer be creating value for this person and they'll drop me from their life". It's hard to make long-term plans, living like that.

I've kept friends around out of loyalty to what we shared 5-10 years ago w... (read more)

1acylhalide1y
Imo choosing to disconnect from people who are no longer providing any value to you is just a healthy thing to do, even a deontologist should agree with that.
3Dagon1y
One thing to be careful about in such decisions - you don't know your own utility function very precisely, and your modeling of both future interactions and your value from such are EXTREMELY lossy. The best argument for deontological approaches is that you're running on very corrupt hardware, and rules that have evolved and been tested over a long period of time are far more trustworthy than your ad-hoc analysis which privileges obvious visible artifacts over more subtle (but often more important) considerations.

Many years ago a mentor told me that critics of abduction point out that induction can make it redundant by assigning credences to hypotheses about facts, and that this is in fact more aligned with the idea that you don't have a credence in the facts directly; you instead have a credence in some model of the facts. I haven't spent any time in the literature since then. Overall, do you think abduction is underrated? I do a lot of skimming of LessWrong posts about logic and probability and so on, and basically never see it.

1Darmani1y
I'm having a little trouble understanding the question. I think you may be thinking of either philosophical abduction/induction or logical abduction/induction. Abduction in this article is just computing P(y | x) when x is a causal descendant of y. It's not conceptually different from any other kind of conditioning. In a different context, I can say that I'm fond of Isil Dillig's thesis work on an abductive SAT solver and its application to program verification, but that's very unrelated.

I may refine this into a formal bounty at some point.

I'm curious whether censorship would actually work in the context of blocking deployment of superpowerful AI systems. Sometimes people will mention "matrix multiplication" as a sort of goofy edge case, which isn't very plausible, but that doesn't mean there couldn't be actual political pressure to censor it. A more plausible example would be attention. Say the government threatens soft power against arXiv if they don't pull "Attention Is All You Need", or threatens soft power against Harvard if their linguistic... (read more)

1acylhalide1y
You'll need a govt body full of people who are aligned in their thinking; no one should defect. Also, Yudkowsky's response to this would prolly be that it isn't enough to censor the idea the first time it's created; someone else will just discover another (or the same) path to AGI independently. See pivotal act [https://arbital.com/p/pivotal/].

I think there exists a generic risk-of-laundering problem. If you say "capitalism is suboptimal" or "we can do better", people are worried about trojan horses; people worry that you're just toning it down to garner mainstream support when behind closed doors you'd sound more like "my specific flavor of communism is definitely the solution". I'm not at all saying I got those vibes from the "transformation of capitalism" post, but that I think it's plausible someone could get those vibes from it. Notably, the book "Inadequate Equilibria" was explicitly ab... (read more)

4pde1y
Our first public communications probably over-emphasized one aspect of our thinking, which is that some types of bad (or bad on some people's preferences) outcomes from markets can be thought of as missing components of the objective function that those markets are systematically optimizing for. The corollary of that absolutely isn't that we should dismantle markets or capitalism, but that we should take an algorithmic approach to whether and how to add those missing incentives. A point that we probably under-emphasized at first is that intervening in market systems (whether through governmental mechanisms like taxes, subsidies or regulation, or through private sector mechanisms like ESG objectives or product labeling schemes) has a significant chance of creating bad and unintended consequences via Goodhart's law and other processes, and that these failures can be viewed as deeply analogous to AI safety failures. We think that people with left- and right-leaning perspectives on economic policy disagree in part because they hold different Bayesian priors about the relative likelihood of something going wrong in the world because markets fail to optimize for the right outcome, or because some bureaucracy tried to intervene in people's lives or in market processes in an unintentionally (or deliberately) harmful way. To us it seems very likely that both kinds of bad outcomes occur at some rate, and the goal of the AI Objectives Institute is to reduce rates of both market and regulatory failures. Of course there are also political disagreements about what goals should be pursued (which I'd call object-level politics, and which we're trying not to take strong organizational views on) and on how economic goals should be chosen (where we may be taking particular positions, but we'll try to do that carefully).

any literature on estimates of social impact of businesses divided by their valuations?

the idea that dollars are a proxy for social impact is neat, but it leaves a lot of room for Goodhart, and I think it's plausible that they diverge entirely in some cases. It would be useful to know, if possible to know, what's going on here.

1Josh Jacobson1y
there's paid tools that estimate this, probably poorly
1Quinn1y
thinking about this comment [https://www.lesswrong.com/posts/kuDKtwwbsksAW4BG2/zvi-s-thoughts-on-the-survival-and-flourishing-fund-sff?commentId=aCu7tC6LAqRiyACgv]

Cheers, thanks for writing. I was very anti-math high school student, almost got expelled for throwing a temper tantrum at my algebra 2 teacher cuz I thought it wasn't fair they were making me sit through it. That was 10th grade and they didn't make me take any other math courses. 7 or 8 years later I took the placement exam at a community college, placed into precalc I, retreated to khanacademy and retook the exam a few months later placing into calc I, took that and discrete and all of their sequels, ended up getting straight As and tutoring every single... (read more)

1Jan Christian Refsgaard1y
That is hard to believe; you seem so smart on the UoB Discord and your podcast :). Thanks for sharing!

In my (cited by OP) review I say "I think on net it has everything it needs to raise the discourse level", but something about your comment got me more pessimistic! I've been disappointed that I didn't love it as much as I wanted to since the moment I watched it, but it's possible I'll dislike it more over time.

I share OP's love of Big Short, and I could tell Vice was a regression from that accomplishment. DLU is also a regression from that accomplishment, not just from a filmmaking perspective but from a self-indulgent partisanship perspective.

1NicholasKross1y
Vice is a regression from Big Short w.r.t. focusing on systemic problems, but I think part of the message was about how much individual choices can matter. (I liked it prolly more than Big Short, but in case you didn't notice from parts of my review, my taste is not trustworthy/applicable to others)

Methods, famously, includes the line "I am a descendant of the line of Bacon", tracing empiricism to either Roger (13th century) or Francis (16th century) (unclear which).

Though a cursory wiki-ing shows an 11th-century figure providing precedents for empiricism! Alhazen, or Ibn al-Haytham, worked mostly on optics apparently, but had some meta-level writings about the scientific method itself. I found this shockingly excellent quote:

The duty of the man who investigates the writings of scientists, if learning the truth is his goal, is to make himself an enemy of a

... (read more)

you're most likely right about it being harder in the industry!

Are they in charge (of that)? Who chose them?

I don't think they need permission or an external mandate to do the right thing!

Why have I heard about Tyson investing into lab grown, but I haven't heard about big oil investing in renewable?

Tyson's basic insight here is not to identify as "an animal agriculture company". Instead, they identify as "a feeding people company". (Which happens to align with doing the right thing, conveniently!)

It seems like big oil is making a tremendous mistake here. Do you think oil execs go around saying "we're an oil company"? When they could instead be going around saying "we're a powering-stuff company". Being a powering-stuff company means you hav... (read more)

5ChristianKl1y
Yes, this is more about you not hearing about it. "Shell Has a Bigger Clean Energy Plan Than You Think" (CleanTechnica interview) [https://cleantechnica.com/2020/05/01/shell-has-a-bigger-clean-energy-plan-than-you-think-cleantechnica-interview/]; "BP Bets Future on Green Energy, but Investors Remain Wary" [https://www.wsj.com/articles/bp-bets-future-on-green-energy-but-investors-remain-wary-11601402304]. It seems that Tyson invested 150 million [https://futurism.com/lab-grown-meat-tyson-is-making-a-massive-investment-in-a-meatless-future] into a fund for new food solutions. In contrast to that, Exxon invested 600 million [https://www.scientificamerican.com/article/biofuels-algae-exxon-venter/] in algae biofuels back in 2009 and more afterward.
1JBlack1y
The main problem is that prior investment into the oil method of powering stuff doesn't translate into having a comparative advantage in a renewable way of powering stuff. They want a return on their existing massive investments. While this looks superficially like a sunk cost fallacy, it isn't. If a comparatively small investment (mere billions) can ensure continued returns on their trillions of sunk capital for another decade, it's worth it to them. Investment into renewable powering stuff would require substantially different skill sets in employees, in very different locations, and highly non-overlapping investment. At best, such an endeavour would constitute a wholly owned subsidiary that grows while the rest of the company withers. At worst, a parasite that hastens the demise of the parent while eventually failing in the face of competition anyway.
2Yoav Ravid1y
I do vaguely remember hearing of big oil doing that, though perhaps not as much as meat producers do with lab grown meat, try looking into it.
2Pattern1y
1. Might be a little bit harder in that industry. 2. Are they in charge (of that)? Who chose them?

I was probably assuming third party evaluator. I think the individuals should be free to do another project while they wait for the metrics to kick in / the numbers to come back. I think if the metrics come back and it turns out they had done a great job, then they should gain social capital to spend on their future projects, and maybe return to a project similar to the one they shuttered in the future.

You're right that this is a problem if the metrics are expected to be done in house!

4Zvi1y
Metrics are everywhere and always a problem. If the project doesn't continue and the metrics are used to judge the person's performance, it's even more of a Goodhart issue, so I'd be very cautious about judging via known metrics, unless a given situation provides a very good fit.

I deeply appreciate how you feel about EA becoming a self-perpetuating but misaligned engine. It's much stronger writing than what I've told people (usually on discord) when they bring up EA movement building as a cause area.

I think more can be said about TMM. One angle is patience: the idea that we can think of EA institutions as being an order of magnitude or several more wealthy in the future, instead of thinking of them as we currently think of them. Combine this insight with some moderate credence in "we are not at the hinge of history", and you could turn t... (read more)

2ChristianKl1y
If you shut down the org after 18 months, how will you evaluate them based on impact metrics in any meaningful way?

Rats and EAs should help with the sanity levels in other communities

Consider politics. You should take your political preferences/aesthetics, go to the tribes that are based on them, and help them be more sane. In the politics example, everyone's favorite tribe has failure modes, and it is sort of the responsibility of the clearest-headed members of that tribe to make sure that those failure modes don't become the dominant force of that tribe.

Speaking for myself, having been deeply in an activist tribe before I was a rat/EA, I regret I wasn't there to hel... (read more)

0Viliam1y
But what if that makes my tribe lose the political battle? I mean, if rationality actually helped win political fights, by the power of evolution we already would have been all born rational...

Positive and negative longtermism

I'm not aware of a literature or a dialogue on what I think is a very crucial divide in longtermism.

In this shortform, I'm going to take a polarity approach: I'm going to bring each pole to its extreme, probably beyond positions that are actually held, because I think median longtermism, or the longtermism described in The Precipice, is a kind of average of the two.

Negative longtermism is saying "let's not let some bad stuff happen", namely extinction. It wants to preserve. If nothing gets better for the poor or the an... (read more)
