Lukas Finnveden

Previously "Lanrian" on here. Research analyst at Redwood Research. Views are my own.

Feel free to DM me, email me at [my last name].[my first name]@gmail.com or send something anonymously to https://www.admonymous.co/lukas-finnveden 

Comments
Mikhail Samin's Shortform
Lukas Finnveden · 5d

Interesting, thanks. I think I had heard the rumor before and believed it. 

In the linked study, it looks like they asked people about regret very shortly after the suicide attempt. This could bias the results either towards less regret about having survived (little time to change their mind) or towards more regret about having survived (people might be scared to signal intent to retry suicide, for fear of being committed, which I think sometimes happens soon after failed attempts).

peterbarnett's Shortform
Lukas Finnveden · 7d

I read the claim as saying that "some people and institutions concerned with AI safety" could have had more than an order of magnitude more resources than they actually have, by investing. Not necessarily a claim about the aggregate where you combine their wealth with that of AI-safety-sympathetic AI company founders and early employees. (Though maybe Carl believes the claim about the aggregate as well.)

Over the last 10 years, NVDA has had ~100x returns after dividing out the average S&P 500 gains. So more than an OOM of gains were possible just investing in public companies.
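As a rough sketch of that arithmetic (the specific return multiples below are illustrative assumptions, not exact market data):

```python
# Illustrative round numbers only (assumptions, not exact figures):
nvda_multiple = 300   # assumed ~300x total return for NVDA over the decade
sp500_multiple = 3    # assumed ~3x total return for the S&P 500 over the same period

relative_multiple = nvda_multiple / sp500_multiple
print(relative_multiple)  # 100.0 -> roughly 100x after dividing out index gains
```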

(Also, I think it's slightly confusing to point to the fact that the current portfolio is so heavily weighted towards AI capability companies. That's because the AI investments grew so much faster than everything else. It's consistent with a small fraction of capital being in AI-related stuff 4-10y ago — which is the relevant question for determining how much larger gains were possible.)
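A minimal numerical sketch of that last point, with made-up portfolio splits and growth multiples:

```python
# Hypothetical portfolio 10 years ago: 10% AI-related, 90% everything else.
# The split and the growth multiples are assumptions for illustration only.
ai_then, other_then = 0.10, 0.90
ai_growth, other_growth = 100, 3

ai_now = ai_then * ai_growth           # 10.0
other_now = other_then * other_growth  # 2.7

ai_share_now = ai_now / (ai_now + other_now)
print(f"{ai_share_now:.0%}")  # ~79%: the portfolio ends up heavily AI-weighted
                              # even though only 10% of it started in AI
```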

peterbarnett's Shortform
Lukas Finnveden · 8d

My argument wouldn't start from "it's fully negligible". (Though I do think it's pretty negligible insofar as they're investing in big hardware & energy companies, which is most of what's visible from their public filings. Though private companies wouldn't be visible on their public filings.) Rather, it would be a quantitative argument that the value from donation opportunities is substantially larger than the harms from investing.

One intuition pump that I find helpful here: Would I think it'd be a highly cost-effective donation opportunity to donate [however much $ Carl Shulman is making] to reduce investment in AI by [however much $ Carl Shulman is counterfactually causing]? Intuitively, that seems way less cost-effective than normal, marginal donation opportunities in AI safety.

You say "I think that accelerating capabilities buildouts to use your cut of the profits to fund safety research is a bit like an arsonist donating to the fire station". I could say it's more analogous to "someone who wants to increase fire safety invests in the fireworks industry to get excess returns that they can donate to the fire station, which they estimate will reduce far more fire than their fireworks investments caused", which seems very reasonable to me. (I think the main difference is that a very small fraction of fires are caused by fireworks. An even better comparison might be for a climate change advocate to invest in fossil fuels when that appears to be extremely profitable.)

Insofar as your objection isn't swayed by the straightforward quantitative consequentialist case, but is more deontological-ish in nature, I'd be curious if it ultimately backs out to something consequentialist-ish (maybe something about signaling to enable coordination around opposing AI?). Or if it's more of a direct intuition.

An epistemic advantage of working as a moderate
Lukas Finnveden · 15d

"My read is that the cooperation he is against is with the narrative that AI-risk is not that important (because it's too far away or whatever). This indeed influences which sorts of agencies get funded, which is a key thing he is upset about here."

Hm, I still don't really understand what it means to be [against cooperation with the narrative that AI risk is not that important]. Beyond just believing that AI risk is important and acting accordingly. (A position that seems easy to state explicitly.)

Also: The people whose work is being derided definitely don't agree with the narrative that "AI risk is not that important". (They are and were working full-time to reduce AI risk because they think it's extremely important.) If the derisiveness is being read as a signal that "AI risk is important" is a point of contention, then the derisiveness is misinforming people. Or if the derisiveness was supposed to communicate especially strong disapproval of any (mistaken) views that would directionally suggest that AI risk is less important than the author thinks: then that would just seem like soldier mindset (more harshly criticizing views that push in directions you don't like, holding goodness-of-the-argument constant), which seems much more likely to muddy the epistemic waters than to send important signals.

An epistemic advantage of working as a moderate
Lukas Finnveden · 15d

Ok. If you think it's correct for Eliezer to be derisive, because he's communicating the valuable information that something shouldn't be "cooperated with", can you say more specifically what that means? "Not engage" was speculation on my part, because that seemed like a salient way to not be cooperative in an epistemic conflict.

An epistemic advantage of working as a moderate
Lukas Finnveden · 15d

Adele: "Long story short, derision is an important negative signal that something should not be cooperated with"

Lukas: "If that's a claim that Eliezer wants to make (I'm not sure if it is!) I think he should make it explicitly and ideally argue for it."

Habryka: "He has explicitly argued for it"

What version of the claim "something should not be cooperated with" is present + argued-for in that post? I thought that post was about the object level. (Which IMO seems like a better thing to argue about. I was just responding to Adele's comment.)

An epistemic advantage of working as a moderate
Lukas Finnveden · 15d

If that's a claim that Eliezer wants to make (I'm not sure if it is!) I think he should make it explicitly and ideally argue for it. Even just making it more explicit what the claim is would allow others to counter-argue the claim, rather than leaving it implicit and unargued.[1] I think it's dangerous for people to defer to Eliezer about whether or not it's worth engaging with people who disagree with him, which limits the usefulness of claims without arguments.

Also, an aside on the general dynamics here. (Not commenting on Eliezer in particular.) You say "derision is an important negative signal that something should not be cooperated with". That's in the passive voice; more accurate would be "derision is an important negative signal where the speaker warns the listener to not cooperate with the target of derision". That's consistent with "the speaker cares about the listener and warns the listener that the target isn't useful for the listener to cooperate with". But it's also consistent with e.g. "it would be in the speaker's interest for the listener to not cooperate with the target, and the speaker is warning the listener that the speaker might deride/punish/exclude the listener if they cooperate with the target". General derision mixes together all these signals, and some of them are decidedly anti-epistemic.

  1.

    For example, if the claim is "these people aren't worth engaging with", I think there are pretty good counter-arguments even before you start digging into the object-level: The people have a track record of being willing to publicly engage on the topics of debate, of being willing to publicly change their mind, of being open enough to differing views to give MIRI millions of dollars back when MIRI was more cash-constrained than they are now, and of understanding points that Eliezer thinks are important better than most people Eliezer actually spends time arguing with.

    To be clear, I don't particularly think that Eliezer does want to make this claim. It's just one possible way that "don't cooperate with" could cash out here, if your hypothesis is correct.

Wei Dai's Shortform
Lukas Finnveden · 19d

Thanks!

Yudkowsky on "Don't use p(doom)"
Lukas Finnveden · 20d

Thanks, I think I'm sympathetic to a good chunk of this (though I think I still put somewhat greater value on subjective credences than you do). In particular, I agree that there are lots of ways people can mess up when putting subjective credences on things, including "assuming they agree more than they do".

I think the best solution to this is mostly to teach people about the ways that numbers can mislead, and how to avoid that, so that they can get the benefits of assigning numbers without getting the downside. (E.g.: complementing numerical forecasts with scenario forecasts. I'm a big fan of scenario forecasts.)

My impression is that Eliezer holds a much stronger position than yours. In the bit I quoted above, I think Eliezer isn't only objecting to putting too much emphasis on subjective credences, but is objecting to putting subjective credences on things at all.

Thoughts on Gradual Disempowerment
Lukas Finnveden · 20d

"You're right that most AIs would probably also be barred from participating in most wealth creation events, but the ones that do (maybe by being hosted by, or part of, the new hot corporations) can scale / reproduce really quickly to double down on whatever advantage they have from being in the inner circle."

I still don't understand why the AIs that have access would be able to scale their influence more quickly than the AI-assisted humans who have the same access.

(Note that Tom never talked about index funds, just about humans investing their money with the help of AIs, which should allow them to stay competitive with AIs. You brought up one way in which some humans are restricted from investing their money, but IMO that constraint applies at least as strongly to AIs as to humans, so I just don't get how it gives AIs a relative competitive advantage.)

Sequences

Project ideas for making transformative AI go well, other than by working on alignment
Extrapolating GPT-N performance

Posts

Notes on cooperating with unaligned AIs (45 karma · Ω · 22d · 8 comments)
Being honest with AIs (63 karma · Ω · 25d · 6 comments)
AI-enabled coups: a small group could use AI to seize power (132 karma · Ω · 5mo · 23 comments)
What's important in "AI for epistemics"? (50 karma · 1y · 2 comments)
Project ideas: Backup plans & Cooperative AI (18 karma · 2y · 0 comments)
Project ideas: Sentience and rights of digital minds (20 karma · 2y · 0 comments)
Project ideas: Epistemics (43 karma · 2y · 4 comments)
Project ideas: Governance during explosive technological growth (20 karma · 2y · 0 comments)
Non-alignment project ideas for making transformative AI go well (44 karma · 2y · 1 comment)
Memo on some neglected topics (28 karma · 2y · 2 comments)

Wikitag Contributions

Inside/Outside View (4 years ago, +429/-68)
Conservation of Expected Evidence (5 years ago, +106)
Acausal Trade (5 years ago, +11/-39)