
landfish's Comments

landfish lab

Wellll, I just signed up for wt.social and so far the interface and experience look terrible. It seems designed around sharing news articles, and that's not very interesting or useful, or any better than Reddit. I wouldn't call it even Google Plus levels of good.

I agree that it might take a large amount of funding to get something off the ground that has a chance of competing.

Honestly, I'd be pretty happy to see LessWrong shortform evolve more features to rival Facebook's discussion space in some way. I'm not sure that's actually the right direction, but I am saying I'm interested in that direction.

Ikaxas' Shortform Feed

Nuclear arms control & anti-proliferation efforts are big ones here. Other forms of arms control are important too.

landfish lab

Our mixed-motive conflict with social media apps

Modern computers are trash. I'm ready for better interfaces and better AI capabilities that are more aligned with our interests.

I'm going to talk about my phone as a "computer" rather than a collection of (mostly social media) apps, because the thing I want to interface with is the computer, not just the apps.

Because that's exactly part of the problem. I don't have enough control over how I interact with the apps. The apps attempt to exert control over my attention. In some ways this is okay -- I do want apps to be high quality and useful, and I want my attention to be drawn to high quality useful things. However, the apps try to draw my attention using short term reward cycles that I often do not endorse upon reflection. This is a kind of superstimulus that we didn't evolve to handle. I want my phone's software to help me with this. I want this to be an Operating System feature and not an app feature, because I don't trust the apps. I have a mixed-motive conflict with the apps, and I want more leverage.

A mixed-motive conflict is one where many interests align but some do not. The term comes from Thomas Schelling's work, The Strategy of Conflict, and the concept applies in many domains: nuclear game theory, advertising, and of course social media apps.
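
To make the "mixed-motive" structure concrete, here is a minimal toy payoff table. The strategies and numbers are my own invention (not Schelling's): the app and I both do well when it surfaces genuinely useful content, but our interests diverge when it optimizes for engagement I don't endorse.

```python
# A toy payoff table for the user / app mixed-motive conflict. Strategies and
# numbers are invented purely for illustration. Some cells are win-win
# (interests align); some are win-lose (interests diverge).
payoffs = {
    # (app_strategy, user_strategy): (app_payoff, user_payoff)
    ("useful",    "engage"):    (3, 3),   # aligned: good content, happy user
    ("useful",    "disengage"): (0, 1),   # user opts out, app gets nothing
    ("clickbait", "engage"):    (4, -1),  # divergent: app gains, user regrets it
    ("clickbait", "disengage"): (0, 0),   # user opts out entirely
}

for (app, user), (app_payoff, user_payoff) in payoffs.items():
    print(f"app={app:9} user={user:9} -> app: {app_payoff:2}, user: {user_payoff:2}")
```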

Now, in an ideal world I shouldn't have to turn to an OS to give me greater control over the content apps provide me with. Ideally, the incentives would be aligned between me, the customer, and the app's designers and maintainers. If this were the case, I posit Facebook would look very different. There would be far more controls that would allow me to select the things I want to see and endorse as good, rather than just the ones that keep me maximally engaged. (I know Facebook has changed their algorithms to optimize for factors other than screen time, but I'm including that under 'engaged'.)

There really should be a data layer which Facebook presents via an API that my OS can control, allowing me to tweak things like the feed, events I see, etc. Facebook would prefer to control the interface, both because it's easier for them to develop and because it's more effective at keeping me engaged. Except, it may not be.
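
To be concrete about what I mean, here is a purely hypothetical sketch of the kind of user-owned feed-control layer an OS could apply on top of such a data API. None of these names correspond to any real Facebook or operating-system interface; it only illustrates the shape of control I'm asking for.

```python
# Hypothetical sketch of OS-level feed control over a social data layer.
# All names here are illustrative, not real APIs.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class FeedItem:
    author: str
    kind: str    # e.g. "friend_post", "event", "ad"
    text: str

@dataclass
class FeedPolicy:
    # Rules I define once, at the OS level, applied before any app renders my feed.
    filters: List[Callable[[FeedItem], bool]] = field(default_factory=list)

    def apply(self, items: List[FeedItem]) -> List[FeedItem]:
        return [item for item in items if all(f(item) for f in self.filters)]

# Example: keep friends' posts and events, drop ads.
policy = FeedPolicy(filters=[lambda item: item.kind != "ad"])
feed = [
    FeedItem("alice", "friend_post", "camping this weekend?"),
    FeedItem("brandco", "ad", "limited time offer"),
    FeedItem("makerspace", "event", "open house on Saturday"),
]
print([item.author for item in policy.apply(feed)])  # ['alice', 'makerspace']
```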

I pledge to look for social media platforms that allow me greater control over my own data. Initially, this may be limited to in-app control over my feed, but in the long term I want an OS that will interface with my social data feeds and give me options for control. I am aware that browser plugins exist to assist with this, but they rely on hacks that prevent a smooth experience, and Facebook deliberately prevents them from providing many features. I pledge to look for social media companies whose incentives align more closely with my own.

I'm not inherently anti-Facebook. If Facebook decides to give me far more control over my social data and interactions, I would consider paying for this service. However, I'm not optimistic about these prospects.

I'm not pledging to abandon Facebook in favor of Mastodon or equivalent, or even to become an active Mastodon user. My commitment is of a longer-term nature. The social media apps are social. They require network effects to be useful. They're about building communities. I want to let my community know that I'm unhappy with the equilibrium we find ourselves in and want something better. Not just with Facebook, but with the whole ecosystem of OSes and phones. The future of AI should be an enriching and enabling one, and that requires navigating the myriad challenges of mixed-motive conflicts and organizing together with our social networks to use the bargaining power we possess.

Absent coordination, future technology will cause human extinction

I haven't yet formed clear hypotheses around what is preventing effective coordination around climate change. My current approach is to examine what led to the fairly successful nuclear arms control treaties and what is causing them to fail now. I have found Thomas Schelling's work quite useful for thinking about international cooperation, but I'm missing a lot of models around internal state politics that enables or prevents those states from being able to negotiate effectively.

One area I'm quite interested in, with regard to climate coordination / conflict, is geoengineering. Several high-impact geoengineering methods seem economically feasible to carry out unilaterally at scale. This seems like a complicated mixed-motive conflict. I'm not clear where the Schelling points will be, but I am going to try to figure this out. I'd love to see other people do their own analyses here!

Absent coordination, future technology will cause human extinction
"First, I am not at all sure history shows international coordination has ever done anything about limiting war."

I think there's a decent case that the Peace of Westphalia is an example of this. It wasn't strong centralized coordination, but it was a case of major powers getting together and engineering a peace that lasted for a long time. I agree that both the League of Nations and the UN have not been successful at the large-scale peacekeeping their founders hoped for. I do think there are some arguments that the post-WWII US + allies prevented large-scale wars. Obviously nuclear deterrence was a big part of that, but it doesn't seem like the only part. I wouldn't call this a big win for explicit international cooperation, but it is an example of a kind of prevention. I recognize that the kind of coordination I'm calling for is unprecedented, and it's unclear whether it's possible.

What I like about the urn metaphor is the recognition that the process is ongoing and it's very hard to model the effects of technologies before we invent them. It's very simplified, but it illustrates that particular point well. We don't know what innovation might lead to an intelligence explosion. We don't know if existentially-threatening biotech is possible, and if so what that might look like. I think the metaphor doesn't capture the whole landscape of existential threats, but does illustrate one class of them.
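
For readers unfamiliar with it, the metaphor (Bostrom's "black ball" framing, as I understand it) treats each new technology as a ball drawn from an urn: most draws are beneficial, but drawing a single black ball ends the process. A toy simulation, with a made-up per-draw probability, shows the basic dynamic: as long as we keep drawing and can't tell which draws are dangerous in advance, we eventually pull one.

```python
# Toy simulation of the urn metaphor: each new technology is a draw from an
# urn; most draws are beneficial, but one "black ball" ends the process. The
# per-draw probability below is made up purely for illustration.
import random

def draws_until_black(p_black: float = 0.01, max_draws: int = 1_000_000) -> int:
    """Number of technologies drawn up to and including the first catastrophic one."""
    for n in range(1, max_draws + 1):
        if random.random() < p_black:
            return n
    return max_draws  # no black ball within the horizon

random.seed(0)
samples = sorted(draws_until_black() for _ in range(1000))
print("median draws before catastrophe:", samples[len(samples) // 2])
```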

Absent coordination, future technology will cause human extinction

This sounds roughly right to me. There is the FAI/UFAI threshold of technological development, and after humanity passes that threshold, it's unlikely that coordination will be a key bottleneck in humanity's future. Many people who think multi-polar worlds are more likely and that AGI systems may not cooperate well would disagree with this take, but I think the view is roughly correct.

The main things I'm pointing at in my post are 5) and the transition from 3) to 5). It seems quite possible to me that SAI will be out of reach for a while due to hardware development slowing, and that the application of other technologies could threaten humanity in the meantime.

Absent coordination, future technology will cause human extinction

I'd be surprised if a Chernobyl/Fukushima/Mayak-level disaster every fifty years led to human extinction over 500 years; that's only around ten such accidents. Why do you think that is the case?

Absent coordination, future technology will cause human extinction

Exchange from my Facebook between Robin Hanson and myself:

Robin Hanson: "Will" is WAY too strong a claim.

Jeffrey Ladish: The key assumption is that tech development will continue in key areas, like computing and biotech. I grant that if this assumption is false, the conclusion does not follow.

Jeffrey Ladish: On short-to-medium (<100-500 year) timescales, I could see scenarios where tech development does not reach "black marble" levels of dangerous. I'd be quite surprised if on long timescales (1k-100k years) we did not reach that level of development. This is why I feel okay making the strong claim, though I am also working on a post about why this might be wrong.

Robin Hanson: You are assuming something much stronger than merely that tech improves.

Jeffrey Ladish: However, I think we may have different cruxes here. I think you may believe that there can be fast tech development (i.e. Age of Em), without centralized coordination of some sort (I think of markets as kinds of decentralized coordination), and without extinction.

Jeffrey Ladish: I'm assuming that if tech improves, humans will discover some autopoietic process that will result in human extinction. This could be an intelligence explosion, it could be synthetic biotech ("green goo"), it could be some kind of vacuum decay, etc. I recognize this is a strong claim.

Robin Hanson: Jeffrey, a strong assumption quite out of line with our prior experience with tech.

Jeffrey Ladish: That's right.

Jeffrey Ladish: Not out of line with our prior experience of evolution, though.

Robin Hanson: Species tend to improve, but they don't tend to destroy themselves via one such improvement.

Jeffrey Ladish: They do tend to destroy themselves via many improvements. Specialists evolve and then go extinct. Though I think humans are different because we can engineer new species / technologies / processes. I'm pointing at reference classes like biotic replacement events: https://eukaryotewritesblog.com/2017/08/14/evolutionary-innovation/

Jeffrey Ladish: I'm working on a longform argument about this, and will look forward to your criticism / feedback on it.

Robin Hanson: The risk of increasing specialization creating more fragility is not at all what you are talking about in the above discussion.

Jeffrey Ladish: Yes, that was sort of a pedantic point. I do think it's related, but not very directly. But the second point, about the biotic replacement reference class, is the main one.

Absent coordination, future technology will cause human extinction

I didn't really write this in "LessWrong style", but I think it's still appropriate to put this here. There are a number of assumptions implicit in this post that I don't spell out, but plan to in future posts.

Does the US nuclear policy still target cities?

I do find the destruction of capital cities from "decapitation strikes" especially worrying, for three reasons.

1) they disrupt NC3 systems

2) they remove the highest levels of leadership and thus make command hierarchies less clear to both sides

3) as you note, they involve the destruction of cities. I would be very surprised if a US-Russia nuclear war broke out without Washington DC and Moscow being hit with multiple nuclear weapons.

The question becomes -- can the destruction of most cities be avoided even with a few being destroyed? It seems unclear. Airports are another very problematic target, as they're always located near cities and provide backup runways for military aircraft. That's a huge fallout problem for cities.


