Jackson Wagner

Engineer working on next-gen satellite navigation at Xona Space Systems. I write about effective-altruist and longtermist topics at nukazaria.substack.com, or you can read about puzzle videogames and other things at jacksonw.xyz

Comments

I will definitely check out that YouTube channel!  I'm pretty interested in mechanism design and public-goods stuff, and I agree there are a lot of good ideas there.  For instance, I am a huge fan of Georgism, so I definitely recognize that going all-in on the "libertarian individualist approach" is often not the right fit for the situation!  Honestly, even though charter cities are in some ways an intrinsically libertarian concept, part of the reason I like the charter city idea is indeed the potential for experimenting with new ways to manage the commons and provide public goods -- Telosa is explicitly Georgist, for example, and even hyper-libertarian Prospera has some pretty interesting concepts around things like crime liability insurance, which in the USA is considered a pretty left-wing (or maybe "far-liberal"?  idk...) idea for trying to reduce gun violence.

But yeah, to me, a lot of common leftist critiques of society/capitalism/etc can feel kind of... shallow, or overly formulaic, or confused about the incentives of a given situation.  So I'd like to get a better understanding of the best versions of the leftist worldview, in order to better appreciate what the common critiques are getting at.

Yup, there are definitely a lot of places (like 99+% of places, 99+% of the time!) which aren't interested in a given reform -- especially one as uniquely big and experimental as charter cities.  This is why in our video we tried to focus on political tractability as one of the biggest difficulties -- hopefully we don't come across as saying that the world will instantly be tiled over with charter cities tomorrow!  But some charter cities are happening, sometimes, in some places -- in addition to the examples in the video, Zambia is pretty friendly towards the idea, and is supportive of the new-city project Nkwashi. (I think the Charter Cities Institute considers Nkwashi to be their biggest current partnership?)  Democracy was achieved, after all, even if it still hasn't won a total victory even after 250+ years.

Thanks, this is exciting and inspiring stuff to learn about!

I guess another thing I'm wondering about is how we could tell apart genes that impact a trait via their ongoing metabolic activities (maybe "metabolic" is not the right term... what I mean is that the gene is being expressed, creating proteins, etc, on an ongoing basis), versus genes that impact a trait by being important for early embryonic / childhood development, but which aren't very relevant in adulthood.  Genes related to intelligence, for instance, seem like they might show up with positive scores in a GWAS even if their function is confined to helping unfold the proper neuron connection structures during fetal development, after which they turn off, so editing them now wouldn't do anything.  Other genes, like ones that affect what kinds of cholesterol the body produces, seem more likely to have a direct impact via their day-to-day operation (which could be changed using a CRISPR-like tool).

Do we have any way of distinguishing one type of gene from the other?  (Maybe we can just look at living tissue and examine which genes are expressed vs turned off?  This sounds hard to do for the entire genome...)  Or perhaps we have reason to believe something like "only 20% of genes are related to early development, 80% handle ongoing metabolism, so the GWAS --> gene therapy pipeline won't be affected too badly by the dilution of editing useless early-development genes"?
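
To make the "look at living tissue" idea concrete, here's a minimal sketch of the kind of triage I'm imagining, in Python.  Everything here is hypothetical for illustration: the GTEx-style input file, its column names, and the TPM cutoff are all made up, and this ignores real complications like splice isoforms, regulatory variants, and tissues that are hard to sample.

```python
# Toy triage of GWAS hits by adult expression.  Assumes a (hypothetical) CSV
# with columns: gene, tissue, median_tpm -- e.g. derived from GTEx-style data.
import csv

EXPRESSION_THRESHOLD_TPM = 1.0  # below this, treat the gene as "off" in adults


def load_adult_expression(path):
    """Map each gene to its max median TPM across all adult tissues in the table."""
    expression = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            gene = row["gene"]
            tpm = float(row["median_tpm"])
            expression[gene] = max(expression.get(gene, 0.0), tpm)
    return expression


def split_gwas_hits(gwas_genes, adult_expression):
    """Split GWAS hits into "expressed in adults" (plausibly editable now)
    vs "silent in every adult tissue sampled" (likely development-only)."""
    editable, development_only = [], []
    for gene in gwas_genes:
        if adult_expression.get(gene, 0.0) >= EXPRESSION_THRESHOLD_TPM:
            editable.append(gene)
        else:
            development_only.append(gene)
    return editable, development_only
```

The interesting empirical question would then be what fraction of the GWAS hits for a trait like intelligence land in the development-only bucket -- that fraction is basically the dilution factor in my "20% / 80%" guess above.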

Is there a plausible path towards gene therapies that edit dozens, hundreds, or thousands of different genes like this? I thought people were worried about off-target errors, etc? (Or at least problems like "you'll have to take 1000 different customized doses of CRISPR therapy, which will be expensive".) So my impression is that this kind of GWAS-inspired medicine would be most impactful with whole-genome synthesis? (Currently super-expensive?)
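
To put toy numbers on the off-target worry: if each edit independently carries some small chance of an off-target event, the chance of at least one such event grows quickly with the number of edits.  (The per-edit rate below is purely illustrative, not a measured figure for any real CRISPR system.)

```python
# If each edit independently has probability p of an off-target event, the
# chance of at least one such event across n edits is 1 - (1 - p)**n.
p = 0.001  # hypothetical per-edit off-target probability, for illustration only

for n in (10, 100, 1000):
    at_least_one = 1 - (1 - p) ** n
    print(f"{n:>4} edits -> {at_least_one:.1%} chance of >=1 off-target event")
# With p = 0.1%: 10 edits -> ~1.0%, 100 -> ~9.5%, 1000 -> ~63.2%
```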

To be clear, I agree with the main point this post is making about how we don't need animal models, etc, to do medicine if we have something that we know works!

(this comment is kind of an "I didn't have time to write you a short letter, so I wrote you a long one" situation)

re: Infowar between great powers -- the view that China+Russia+USA invest a lot of effort into infowar, but mostly "defensively" / mostly trying to shape domestic opinion, makes sense.  (After all, it must be easier to control the domestic media/information landscape!)  I would tend to expect that doing domestically-focused infowar stuff at a massive scale would be harder for the USA to pull off (wouldn't it be leaked? wouldn't it be illegal somehow, or at least something that public opinion would consider a huge scandal?), but on the other hand I'd expect the USA to have superior infowar technology (subtler, more effective, etc).  And logically it might also be harder to perceive the effects of USA infowar techniques, since I live in the USA, immersed in its culture.

Still, my overall view is that, although the great powers certainly expend substantial effort trying to shape culture, and have some success, they don't appear to have any next-gen technology qualitatively different from and superior to the rhetorical techniques deployed by ordinary successful politicians like Trump, social movements like EA or wokeism, advertising / PR agencies, media companies like the New York Times, etc.  (In the way that, eg, engineering marvels like the SR-71 Blackbird were generations ahead of competitors' capabilities.)  So I think the overall cultural landscape is mostly anarchic -- lots of different powers are trying to exert their own influence and none of them can really control or predict cultural changes in detail.


re: Social media companies' RL algorithms are powerful but also "they probably couldn't prevent algorithms from doing this if they tried, due to Goodhart's law".  -- Yeah, I guess my take on this is that the overt attempts at propaganda (aimed at placating the NYT) seem very weak and clumsy.  Meanwhile the underlying RL techniques seem potentially powerful, but poorly understood or not very steerable, since social media companies seem to be mostly optimizing for engagement (and not even always succeeding at that; here we are talking on LessWrong instead of tweeting / tiktoking), rather than deploying clever infowar superweapons.  If they have such power, why couldn't left-leaning Silicon Valley prevent the election of Trump using subtle social-media-RL trickery?
(Although I admit that the reaction to the 2016 election could certainly be interpreted as Silicon Valley suddenly realizing, "Holy shit, we should definitely try to develop social media infowar superweapons so we can maybe prevent this NEXT TIME."  But then the 2020 election was very close -- not what I'd have expected if info-superweapons were working well!)

With Twitter in particular, we've had such a transparent look at its operations during the handover to Elon Musk, and it just seems like both sides of that transaction have been pretty amateurish and lacked any kind of deep understanding of how to influence culture.  The whole fight seems to have been about where to tug one giant lever called "how harshly do we moderate the tweets of leftists vs rightists".  This lever is indeed influential on twitter culture, and thus culture generally -- but the level of sophistication here just seems pathetic.

Tiktok is maybe the one case where I'd be sympathetic to the idea that maybe a lot of what appears to be random insane trends/beliefs fueled by SGD algorithms and internet social dynamics, is actually the result of fairly fine-grained cultural influence by Chinese interests.  I don't think Tiktok is very world-changing right now (as we'd expect, it's targeting the craziest and lowest-IQ people first), but it's at least kinda world-changing, and maybe it's the first warning sign of what will soon be a much bigger threat?  (I don't know much about the details of Tiktok the company, or the culture of its users, so it's hard for me to judge how much fine-grained control China might or might not be exerting.)

Unrelated -- I love the kind of sci-fi concept of "people panic but eventually go back to using social media and then they feel fine (SGD does this automatically in order to retain users)".  But of course I think that the vast majority of users are in the "aren't panicking" / never-think-about-this-at-all category, and there are so few people in the "panic" category (panic specifically over subtle persuasion manipulation tech that isn't just trying to maximize engagement but instead achieve some specific ideological outcome, I mean) that there would be no impact on the social-media algorithms.  I think it is plausible that other effects like "try not to look SO clickbaity that users recognize the addictiveness and leave" do probably show up in algorithms via SGD.


More random thoughts about topics that the USA might historically have wanted to wage infowar campaigns about:

  • Anti-communism during the cold war, maybe continuing to a kind of generic pro-corporate / pro-growth attitude these days. (But lots of people were pro-communist back in the day, and remain anti-corporate/anti-growth today!  And even the Republican Party is less and less pro-business... their basic model isn't to mind-control everyone into becoming fiscal conservatives, but instead to gain power by exploiting the popularity of social conservatism and then use that power to implement fiscal conservatism.)
    • Maybe I am taking a too-narrow view of infowar as "the ability to change peoples' minds on individual issues", when actually I should be considering strategies like "get people hyped up about social issues in order to gain power that you can use for economic issues" as a successful example of infowar?  But even if I consider this infowar, then it reinforces my point that the most advanced stuff today all seems to be variations on normal smart political strategy and messaging, not some kind of brand-new AI-powered superweapon for changing people's minds (or redirecting their focus or whatever) in a radically new way.
  • Since WW2, and maybe continuing to today, the West has tried to ideologically immunize itself against Nazism.  This includes a lot of trying to teach people to reject charismatic dictators, to embrace counterintuitive elements of liberalism like tolerance/diversity, and even to deny inconvenient facts like racial group differences for the sake of social harmony.  In some ways this has gone so well that we're getting problems from going too far in this direction (wokeism), but in other ways it can often feel like liberalism is hanging on by a thread and people are still super-eager to embrace charismatic dictators, incite racial conflict, etc.

"Human brains are extremely predisposed to being hacked, governments would totally do this, and the AI safety community is unusually likely to be targeted."
-- yup, fully agree that the AI safety community faces a lot of peril navigating the whims of culture and trying to win battles in a bunch of diverse high-stakes environments (influencing superpower governments, huge corporations, etc) where they are up against a variety of elite actors with some very strong motivations.  And that there is peril both in the difficulty of navigating the "conventional" human-persuasion-transformed social landscape of today's world (already super-complex and difficult) and the potentially AI-persuasion-transformed world of tomorrow.  I would note, though, that these battles will (mostly?) play out in pretty elite spaces, whereas I'd expect the power of AI information superweapons to have the most powerful impact on the mass public.  So, I'd expect to have at least some warning in the form of seeing the world go crazy (in a way that seems different from and greater than today's anarchic internet-social-dynamics-driven craziness), before I myself went crazy.  (Unless there is an AI-infowar-superweapon-specific hard-takeoff where we suddenly get very powerful persuasion tech but still don't get the full ASI singularity??)


re: Dath Ilan -- this really deserves a whole separate comment, but basically I am also a big fan of the concept of Dath Ilan, and I would love to hear your thoughts on how you would go about trying to "build Dath Ilan" IRL.

  • What should an individual person, acting mostly alone, do to try and promote a more Dath-Ilani future?  Try to practice & spread LessWrong-style individual-level rationality, maybe (obviously Yudkowsky did this with LessWrong and other efforts).  Try to spread specific knowledge about the way society works and thereby build energy for / awareness of ways that society could be improved (Inadequate Equilibria kinda tries to do this? seems like there could be many approaches here).  Personally I am also always eager to talk to people about specific institutional / political tweaks that could lead to a better, more Dath-Ilani world: Georgism, approval voting, prediction markets, charter cities, etc.  Of those, some would seem to build on themselves while others wouldn't -- what ideas seem like the optimal, highest-impact things to work on?  (If the USA adopted Georgist land-value taxes, we'd have better land-use policy and faster economic growth, but culture/politics wouldn't hugely change in a broadly Dath-Ilani direction; meanwhile prediction markets or new ways of voting might have snowballing effects where you get the direct improvement but also you make culture more rational & cooperative over time.)
  • What should a group of people ideally do?  (Like, say, an EA-adjacent silicon valley billionaire funding a significant minority of the EA/rationalist movement to work on this problem together in a coordinated way.)  My head immediately jumps to "obviously they should build a rationalist charter city":
    • The city doesn't need truly nation-level sovereign autonomy; the goal would just be to coordinate enough people to move somewhere together a la the Free State Project, gaining enough influence over local government to be able to run our own policy experiments with things like prediction markets, Georgism, etc.  (Unfortunately some things, like medical research, are federally regulated, but I think you could do a lot with just local government powers + creating a critical mass of rationalist culture.)
    • Instead of moving to a random small town and trying to take over, it might be helpful to choose some existing new-city project to partner with -- like California Forever, Telosa, Prospera, whatever Zuzalu or Praxis turn into, or other charter cities that have amenable ideologies/goals.  (This would also be very helpful if you don't have enough people or money to create a reasonably-sized town all by yourself!)
    • The goal would be twofold: first, run a bunch of policy experiments and try to create Dath-Ilan-style institutions (where legal under federal law if you're still in the USA, etc).  And second, try to create a critical mass of rationalist / Dath Ilani culture that can grow and eventually influence... idk, lots of people, including eventually the leaders of other governments like Singapore or the UK or whatever.  Although it's up for debate whether "everyone move to a brand-new city somewhere else" is really a better plan for cultural influence than "everyone move to the Bay Area", which has been pretty successful at influencing culture in a rationalist direction IMO!  (Maybe the rationalist charter city should therefore be in Europe or at least on the East Coast or something, so that we mostly draw rationalists from areas other than the Bay Area.  Or maybe this is an argument for really preferring California Forever as an ally, over and above any other new-city project, since that's still in the Bay Area.  Or for just trying to take over Bay Area government somehow.)
  • ...but maybe a rationalist charter city is not the only or best way that a coordinated group of people could try to build Dath Ilan?

(Copied from the EA Forum for the benefit of lesswrongers following the discussion here)

Definitely agree that empathy and other social feelings provide indirect evidence for self-awareness (ie, "modeling stuff about yourself" in your brain) in a way that optimism/pessimism or pain-avoidance doesn't.  (Although wouldn't a sophisticated-enough RL circuit, interacting with other RL circuits in some kind of virtual evolutionary landscape, also develop social emotions like loyalty, empathy, etc?  Even tiny mammals like mice/rats display sophisticated social behaviors...)

I tend to assume that some kind of panpsychism is true, so you don't need extra "circuitry for experience" in order to turn visual-information-processing into an experience of vision.  What would such extra circuitry even do, if not the visual information processing itself?  (Seems like maybe you are a believer in what Daniel Dennett calls the "fallacy of the second transduction"?)
Consequently, I think it's likely that even simple "RL algorithms" might have very limited, very shallow, non-self-aware kinds of experience: an image-classifier is doing visual-information-processing, so it probably also produces isolated "experiences of vision"!  But of course it would not have any awareness of itself as being a thing-that-sees, nor would those isolated experiences of vision necessarily be tied together into a coherent visual field, etc.

So, I tend to think that fish and other primitive creatures probably have "qualia", including something like a subjective experience of suffering, but that they probably lack any sophisticated self-awareness / self-model, so it's kind of just "suffering happening nowhere" or "an experience of suffering not connected to anything else" -- the fish doesn't know it's a fish, doesn't know that it's suffering, etc, the fish is just generating some simple qualia that don't really refer to anything or tie into a larger system.  Whether you call such a disconnected & shallow experience "real qualia" or "real suffering" is a question of definitions.

I think this personal view of mine is fairly similar to Eliezer's from the Sequences: there are no "zombies" (among humans or animals), there is no "second transduction" from neuron activity into a mythical medium-of-consciousness (no "extra circuitry for experience" needed), rather the information-processing itself somehow directly produces (or is equivalent to, or etc) the qualia.  So, animals and even simpler systems probably have qualia in some sense.  But since animals aren't self-aware (and/or have less self-awareness than humans), their qualia don't matter (and/or matter less than humans' qualia).

...Anyways, I think our core disagreement is that you seem to be equating "has a self-model" with "has qualia", versus I think maybe qualia can and do exist even in very simple systems that lack a self-model.  But I still think that having a self-model is morally important (atomic units of "suffering" that are just floating in some kind of void, unconnected to a complex experience of selfhood, seem of questionable moral relevance to me), so we end up having similar opinions about how it's probably fine to eat fish.

I guess what I am objecting to is that you are acting like these philosophical problems of qualia / consciousness / etc are solved and other people are making an obvious mistake.  I agree that I see a lot of people being confused and making mistakes, but I don't think the problems are solved!

Why would showing that fish "feel empathy" prove that they have inner subjective experience?  It seems perfectly possible to build a totally mechanical, non-conscious system that nevertheless displays signs of empathy.  Couldn't fish just have some kind of built-in, not-necessarily-conscious instinct to protect other fish (for instance, by swimming together in a large school) in order to obtain some evolutionary benefit?

Conversely, isn't it possible for fish to have inner subjective experience but not feel empathy?  Fish are very simple creatures, while "empathy" is a complicated social emotion.  Especially in a solitary creature (like a shark, or an octopus), it seems plausible that you might have a rich inner world of qualia alongside a wide variety of problem-solving / world-modeling skills, but no social instincts like jealousy, empathy, loyalty, etc.  Fish-welfare advocates often cite studies that seem to show fish having an internal sense of pain vs pleasure (eg, preferring water that contains numbing medication), or that bees can have an internal sense of being optimistic/risky vs pessimistic/cautious -- if you think that empathy proves the existence of qualia, why are these similar studies not good enough for you?  What's special about the social emotion of empathy?

Personally, I am more sympathetic to the David Chalmers "hard problem of consciousness" perspective, so I don't think these studies about behaviors (whether social emotions like jealousy or more basic emotions like optimism/pessimism) can really tell us that much about qualia / inner subjective experience.  I do think that fish / bees / etc probably have some kind of inner subjective experience, but I'm not sure how "strong", or vivid, or complex, or self-aware, that experience is, so I am very uncertain about the moral status of animals.  (Personally, I also happily eat fish & shrimp all the time.)

In general, I think this post is talking about consciousness / qualia / etc in a very confused way -- if you think that empathy-behaviors are ironclad proof of empathy-qualia, you should also think that other (pain-related, etc) behaviors are ironclad proof of other qualia.

Hi Trevor!  I appreciate this thread of related ideas that you have been developing about intelligence agencies, AI-augmented persuasion techniques, social media, etc.

  • It seems important to "think ahead" about how the power-struggle over AI will play out as things escalate to increasingly intense levels, involving eg national governments and militaries and highly-polarized political movements and etc.
  • Obviously if some organization was hypercompetent and super-good at behind-the-scenes persuasion, we wouldn't really know about it!  So it is hard to 100% confidently dismiss the idea that maybe the CIA has next-gen persuasion tech, or whatever.
  • Obviously we are already, to a large extent, living in a world that is shaped by the "marketplace of ideas", where the truth often gets outcompeted by whatever sounds best / is most memetically fit.  Thinking about these dynamics (even without anything AI-related or any CIA conspiracies) is confusing, but seems very important.  Eg, I myself have been deeply shaped by the crazy memetic landscape in ways that I partially endorse and partially don't.  And everything I might try to do to achieve impact in the world needs to navigate the weird social landscape of human society, which in many respects is in a kind of memetic version of the "equilibrium of no free energy" that Yudkowsky talks about in Inadequate Equilibria (although there he is talking mostly about an individual-incentives landscape, rather than a memetic landscape).
  • AI super-persuasion does seem like something we might plausibly get before we get general ASI, which seems like it could be extremely weird / dangerous / destabilizing.


That said, I think this post is too conspiratorial in assuming that some combination of social media companies / national governments understand how to actually deploy effective persuasion techniques in a puppetmaster-like way which is way beyond everyone else.  I think that the current situation is more like "we are living in an anarchic world influenced by an out-of-control memetic marketplace of ideas being influenced by many different actors of varying levels of sophistication, none of whom have amazing next-level gameboard-flipping dominance".  Some scattered thoughts on this theme:

  • If the CIA (or other entities affiliated with the US government, including tech companies being pressured by the government) is so good at persuasion ops, why are there so many political movements that seem to go against the CIA's interests?  Why hasn't the government been able to use its persuasion jiujitsu to neutralize wokeism and Trump/MAGA-ism?  From an establishment perspective, both of these movements seem to be doing pretty serious damage to US culture/institutions.  Maybe these are both in the process of being taken down by "clown attacks" (although to my eye, this looks less like an "attack" from CIA saboteurs, and more like a lot of genuine ordinary people in the movement themselves just being dumb / memetic dynamics playing out deterministically via social dynamics like Yudkowsky's "evaporative cooling of group beliefs")?  Or maybe ALL of, eg, wokeism, is one GIANT psy-op to distract the American people from creating a left-wing movement that is actually smart and effective?  (I definitely believe something like this, but I don't believe it's a deliberate military psy-op... rather it's an emergent dynamic.  Consider how corporations are differentially friendlier to wokeism than they are to a more economically-focused, class-based Bernie-ism, so wokeism has an easier time spreading and looking successful, etc.  It also helps that wokeism is memetically optimized to appeal to people in various ways, whereas a genuinely smart-and-effective left-wing policy idea like Georgism comes off as boring, technocratic, and hard-to-explain.)
    • Basically, what I am saying is that our national politics/culture looks like the product of anarchic memetic optimization (recently turbocharged by social media dynamics, as described by folks like Slate Star Codex and the book "Revolt of the Public") much more so than the product of top-down manipulation.
  • If Google & Facebook & etc are so good at manipulating me, why do their efforts at influence often still seem so clumsy?  Yes, of course, I'm not going to notice the non-clumsy manipulations!  And yes, your "I didn't speak up, because I wasn't as predictable as the first 60%" argument certainly applies here -- I am indeed worried that as technology progresses, AI persuasion tech will become a bigger and bigger problem.  But still, in the here and now, YouTube is constantly showing me these ridiculous ideological banners about "how to spot misinformation" or "highlighting videos from Black creators" or etc... am I supposed to believe that these people are some kind of master manipulators?  (They are clearly just halfheartedly slapping the banners on there in a weak attempt to cover their ass and appease NYT-style complaints that YouTube's algorithm is unintentionally radicalizing people into Trumpism... they aren't even trying to be persuasive to the actual viewers, just hamfistedly trying to look good to regulators...)
  • Where is the evidence of super-persuasion techniques being used by other countries, or in geopolitical situations?  One of the most important targets here would be things like "convincing Taiwanese to identify mostly as ethnic Chinese, or mostly as an independent nation", or the same for trying to convince Ukrainians to align more with their Russian-like ethnicity and language or with the independent democracies of western Europe.  Ultimately, the cultural identification might be the #1 decisive factor in these countries' futures, and for sure there are lots of propaganda / political messaging attempts from all sides here.  But nobody seems like they have some kind of OP superweapon which can singlehandedly change the fate of nations by, eg, convincing Taiwanese people of something crazy, like embracing their history as a Japanese colony and deciding that actually they want to reunify with Japan instead of remaining independent or joining China!
    • Similarly, the Russian attempts to interfere in the 2016 election, although initially portrayed as some kind of spooky OP persuasion technique, ultimately ended up looking pretty clumsy and humdrum and small-scale, eg just creating facebook groups on themes designed to inflame American cultural divisions, making wacky anti-Hillary memes, etc.
    • China's attempts at cultural manipulation are probably more advanced, but they haven't been able to save themselves from sinking into a cultural atmosphere of intense malaise and pessimism, one of the lowest fertility rates in the world, etc.  If persuasion tech was so powerful, couldn't China use it to at least convince people to keep plowing more money into real estate?
  • Have there been any significant leaks that indicate the USA is focused on persuasion tech and has seen significant successes with it?  If I recall correctly, the Edward Snowden leaks (admittedly from the NSA which focuses on collecting information, and from 10 years ago) seemed to mostly indicate a strategy of "secretly collect all the data" --> "search through and analyze it to identify particular people / threats / etc".  There didn't seem to be any emphasis on trying to shape culture more broadly.
    • Intelligence agencies in the USA devote some effort to "deradicalization" of eg islamist terrorists, extreme right-wingers, etc.  But this stuff seems to be mostly focused on pretty narrow interventions targeting individual people or small groups, and seems mostly based on 20th-century-style basic psychological understanding... seems like a far cry from A/B testing the perfect social-media strategy to unleash on the entire population of some middle-eastern country to turn them all into cosmopolitan neoliberals.

Anyways, I guess my overall point is that it just doesn't seem true that the CIA, or Facebook, or China, or anyone else, currently has access to amazing next-gen persuasion tech.  So IMO you are thinking about this in the wrong way, with too much of a conspiratorial / Tom Clancy vibe.  But the reason I wrote such a long comment is because I think you should keep exploring these general topics, since I agree with you about most of the other assumptions you are making!

  • We are already living in a persuasion-transformed world in the sense that the world is full of a lot of crazy ideas which have been shaped by memetic dynamics
  • Social media in particular seems like a powerful lever to influence culture (see Slate Star Codex & Revolt of the Public)
  • It seems like you probably COULD influence culture a ton by changing the design of social media, so it's a little funny that nobody seems to be intentionally using this to build a persuasion superweapon
    • (Nevertheless I think nobody really understands the long-term cultural effects of social media well enough to make deliberate changes to achieve eventual intended results.  And I think there are limits to what you could do with current techniques -- changing the design & policies of a site like Twitter might change the broad cultural vibe, but I don't think we could create an especially persuasive superweapon that could be aimed at particular targets, like making Taiwanese people culturally identify with Japan)
  • It definitely seems like AI could be used for all kinds of censorship & persuasion-related tasks, and this seems scary because it might indeed allow the creation of persuasion superweapons.
  • Totally separately from all the above stuff about persuasion, the shadowier parts of governments (military & intelligence-agency bureaucracies) seem very important to think about when we are trying to think ahead about the future of AI technology and human civilization.