A lot of my recent research work focuses on:

1. building the case for why perfect surveillance is becoming increasingly hard to avoid in the future

2. thinking through the implications of this, if it happened

When I say perfect surveillance, imagine that everything your eyes see and your ears hear is being broadcast 24/7/365 to YouTube (or its equivalents in countries where YouTube is banned), and imagine this is true for all 8 billion people.

I'm unsure whether I should devote more of my research time to 1 or 2. If lots of people buy the hypothesis in 1, I'd rather devote time to 2. If people don't buy the hypothesis in 1, I might want to spend time on 1 instead.

 

Hence I wished to solicit opinions. Do you consider perfect surveillance inevitable? Why or why not?


6 Answers

Noosphere89


Yes, by default.

I'd drop the word "inevitable", but barring massive change, like a technology regression or very strong laws being passed (and surviving the AI era), near-perfect surveillance is very likely to happen, and the safe assumption is that nothing you do will be hidden or hideable by default.

I'm not addressing whether surveillance is good or bad.

Thanks for letting me know

Dagon


I avoid terms (and concepts) like "inevitable".  There are LOTS of unknowns, and many future paths that go through this, or not.  Scenario-based conditionality (what would lead to this, and what would lead elsewhere) seems more fruitful.

Perfect surveillance is the default for electronic intelligence - logging is pretty universal.  I think this is likely to continue in future universes where most people are electronic.

I think the answer is "Mu" for universes with no people in them.  

I think the likely path is "never perfect, but mostly increasing over time" for universes with no singularity or collapse.  

I'd love to hear more about the implications of the CURRENT level of observation: things that are just now feasible, and the things that are promoting or holding them back. For instance, individual facial recognition got a wave of reporting a few years ago, and I honestly don't know if it quietly became universal or if the handwringing and protests actually worked to keep it only in very controlled and visible places (like border crossings).

I'd love to hear more about the implications of the CURRENT level of observation

I have a feeling that the current bottleneck is data integration rather than data collection.

Dagon
I think both, by a long shot.  I estimate I spend over half my time outside of easy video surveillance (room without a webcam or phone pointed in a useful direction, or outdoors not in easy LOS of a traffic or business cam), and a slightly different half for audio.  For neither of these is high-fidelity POV data available at all, as described in the post. For those times when I AM under surveillance, the quality is low and the integration is minimal.  There are legal and technical challenges for anyone to use it against me.  And it's VERY easy to find times/place where I'm not being recorded when I choose to.  
samuelshadrach
Yes, I agree with this. As of 2025, data collection and integration both remain unsolved. Although "solving" it often just means grunt work (like the NSA improving their codebase to analyse stuff); it's not always a big research challenge.

Hi Dagon


Thanks for this reply.

Predicting and planning multiple scenarios is a good idea, maybe I should do this.

It’s difficult to pinpoint what “current level” implies. As long as there are missing pieces, it’s arguably still in the future, not the present. For example, at a technical level there may be no difficulty in hiring 10,000 drone pilots to surveil a city, using off-the-shelf drones and the existing workforce of drone pilots. But the reality is that this has not happened yet in any city, and there might be some reasons why.


If current level only means lite...

Dagon
Yes, I mean the currently deployed level. News hasn't really covered anything major on the topic in the last few years, and I don't know if it's stagnated or the reporting has just given up.
samuelshadrach
Oh okay, I think there are news publications and independent people covering these issues; they’re just not the most popular ones. If you follow the right people you’ll get the latest news. I understand there’s value in someone (maybe me, maybe AI) collecting and summarising all the news in one place. Thanks for the suggestion.

Stated without endorsement: websites, interviews, books, and twitter handles of:

- Freedom of the Press Foundation
- Signal
- Wikileaks
- Freedom House
- DEF CON conferences
- Edward Snowden, Julian Assange, Meredith Whittaker, Ross Ulbricht, Moxie Marlinspike, Matthew Green

There are also publications on foreign affairs and the health of democracy in various countries, from which you can extract the relevant articles. Lastly, Hacker News is also worth following.

Anon User


Hm, not sure about it being broadcast vs. consumed by a powerful AI that somebody else has at least partial control over.

To be clear, when you say powerful you still mean less powerful than ASI, right?

What are your thoughts on whether this organisation will be able to secure the data they collect? My post has some of my thoughts on why securing data may be difficult even if you're politically powerful.

Anon User
Yes, potentially less than ASI, and security is definitely an issue. But people breaching the security would hoard their access: there will be periodic high-profile spills (e.g. celebrities engaged in sexual activities, or politicians engaged in something inappropriate, would be obvious targets), but I'd expect that most of the time people would have at least an illusion of privacy.

whestler


I'd like to hear the arguments for why you think perfect surveillance would be more likely in the future. I definitely think we will reach a state where surveillance is very high, high enough to massively increase policing of crimes as well as empower authoritarian governments and the like, but I'm not sure why it would be perfect.

It seems to me that the implications of "perfect" surveillance are similar enough to the implications of very high levels of surveillance that number 2 is still the more interesting area of research. 

Thanks for the reply. 

You can read my linked post for more on how surveillance will increase. 

But yes good to know you’d rather I write more about 2.  

StartAtTheEnd


I have considered automated mass-surveillance likely to occur, and have tried to prevent it, for about 20 years now. It bothers me that so many people don't have enough self-respect to feel insulted by the infringement of their privacy, and that many people are so naive that they think surveillance is for the sake of their safety.

Privacy has already been harmed greatly, and surveillance is already excessive. And let me remind you that the safety we were promised in return didn't arrive.  

The last good argument against mass-surveillance was "they cannot keep an eye on all of us", but I think modern automation and data processing have defeated that argument (people have just forgotten to update their cached stance on the issue).

Enough ranting. The Unabomber argued for why increases in technology would necessarily lead to reduced freedom, and I think his argument is sound from a game-theory perspective. Looking at the world, it's also trivial to observe this effect, while it's difficult to find instances in which the number of laws has decreased, or in which privacy has been won back (the same applies to regulations and taxes; many things have a worrying one-way tendency). The end-game can be predicted with simple extrapolation, but if you need an argument, it's that technology is a power-modifier, and that there's an asymmetry between attack and defense (the ability to attack grows faster, which I believe caused the MAD stalemate).

I don't think it's difficult to make a case for "1", but I personally wouldn't bother much with "2". I don't want to prepare myself for something when I can help slow it down. Hopefully web 3.0 will make smaller communities possible, resisting the pathological urge to connect absolutely everything together. By then, we can get separation back, so that I can spend my time around like-minded people rather than being moderated to the extent that no group in existence is unhappy with my behaviour. This would work out well unless encryption gets banned.

The maximization of functions leads to the death of humanity (literally or figuratively), but so does minimization (I'm arguing that pro-surveillance arguments are moral in origin and that they make a virtue out of death).

Thanks for this reply. You do seem to be thinking along lines similar to mine, focusing on where the incentives lead in the long term, not just the short term.

Can you identify the specific arguments from ISAIF that you find persuasive on why future humans will have reduced freedom?

I agree that since the Enlightenment it has become easier and easier to destroy a region of matter (using some sort of weapon or explosive) and harder to defend against it being destroyed (except in a game-theory way, where you commit to destroying something else in return).

I’m curious ho...

StartAtTheEnd
Sorry in advance for the wordy reply. Here's my version (which might be the same; I take responsibility for any errors, but no credit for any overlap with Ted's argument):

1. New technologies seem good at first/on the surface.
2. Now that something good is available, you need to adopt it (or else you're putting yourself or others at a disadvantage, which social forces will punish you for).
3. Now that the new technology is starting to be common, people find a way to exploit/abuse it. This is because technology is neutral; it can always be used for both good and bad things, you cannot separate the two.
4. In order to stop abuse of said technology, you need to monitor its use, restrict access with proof of identity, regulate it, or create new and even stronger technology.
5. Now that you're able to regulate the new technology, you must do so. If you can read people's private emails, and you choose not to, you will be accused of aiding pedophiles and terrorists (since you could arguably have caught them if you did not respect their privacy).

This dynamic has a lot of really bad consequences, which Ted also writes about. For instance, once gene editing is possible, why would we not remove genes which result in "bad traits"? If you do not take actions which make society safer, you will be accused of making society worse. So we might be forced to sanitize even human nature, making everyone into inoffensive and lukewarm drones (as the traits which can result in great people and terrible people are the same, the good and the bad cannot be separated. This is why new games and movies are barely making any money, and it's why Reddit is dying. They removed the good together with the bad).

I can slow it down for myself by not engaging with these new technologies (IoT, subscription-based technology, modern social media, etc.) and using fringe privacy-based technologies, or simply not making noise (if nothing you say escapes the environment in which you said it, you'r...
samuelshadrach
Thanks for the reply. That is an impressive argument, tbh. Let me try rephrasing it a different way: technology gives people more ability to predict and control systems, often by removing variance in outcomes. When the system is basically the entire environment humans are interacting with, this also reduces variance in the outcomes of what humans do and what humanity does.

I guess my hope lies in being able to do things that reduce the variance of humanity’s future as a side effect (don’t go extinct or run experiments that risk extinction, for example) but don’t necessarily reduce the variance of individual human beings’ outcomes as strongly. For instance, democracy in some ways reduces the variance of outcomes for the society (no dictator can come to power, stable law and order, possibly stable economic growth) but increases the variance of outcomes for the individual (social mobility is possible, more freedom to pick careers, marry, travel, read, write, etc.).

Sorry, maybe my example on web3 wasn't clear. I mean you, as the owner of your machine, can audit what packets are entering or exiting it, even if you don’t trust the microprocessor not to have a hardware backdoor. I agree that as of today it’s easier to protect an in-person conversation than a digital one from ever being recorded. (Although I will say that even ideas created in another person’s mind are an information leak; you need to really think through who else that person is going to interact with, what their computer opsec is, etc.) Even one mistake blows up your entire secrecy in the presence of a smart and motivated adversary. I have this idea of a community that completely isolates geographically for decades; this ensures nobody ever comes into contact with people from the outside. I’m curious about your thoughts on that.

I get where you’re coming from, but again, how do we actually do this? I don’t mean at a technical level, I mean politically.
StartAtTheEnd
Predict and control... I'm not sure about that, actually. The world seems to be a complex system, which means that naive attempts at manipulating it often fail. I don't think we're using technology to control others in the manner that we can choose their actions for them, but we are decreasing the diversity of actions that one can take (for instance, anything which can be misunderstood seems to be a no-go now, as strangers will jump in to make sure that nothing bad is going on, as if it were their business to get involved in other people's affairs). So our range of motion is reduced, but it's not locked to a specific direction which results in virtue or something.

I don't think that the world can be controlled, but I also think that attempts at controlling it by force are mistaken, as there are more upstream factors which influence most of society. For instance, if your population is Buddhist, they will believe that treating others well is the best thing to do, which I think is a superior solution to placing CCTVs everywhere. The best solutions don't need force, and the ones which use force never seem optimal (consider the war on drugs, the taboo on sexuality, attempts at stopping piracy, etc.). I think the correct set of values is enough (but again, the receiver needs to agree that they're correct voluntarily). If everyone can agree on what's good, they will do what's good, even if you don't pressure them into doing so.

I'm also keeping extinction events in mind and trying to combat them, I just do so from a value perspective instead. I'm opposed to creating AGIs, and we wouldn't have them if everyone else were opposed as well. Some people naively believe that AGIs will solve all their problems, and many don't place any special value on humanity (meaning that they don't resist being replaced by robots). But there are also many people like me who enjoy humanity itself, even in its imperfection.

This is likely possible, yeah. But you can design things in such a way that they're s...

Benjy_Forstadt


I don’t think perfect surveillance is inevitable. 

I would prefer it, though. I don’t know any other way to prevent people from doing horrible things to minds running on their computers. It wouldn’t need to be publicly broadcast, though; just overseen by law enforcement. I think this is much more likely than a scenario where everything you see is shared with everyone else.

Unfortunately, my mainline prediction is that people will actually be given very strong privacy rights, and will be allowed to inflict as much torture on digital minds under their control as they want. I’m not too confident in this though.


 

One of our cruxes is probably likelihood of law enforcement actually securing the data they collect, versus it being leaked.

Thanks for the reply.

Sorry, I think I'm going to avoid discussing your point about digital minds in this post, it's best for a separate post. There's a number of considerations there (ASI timelines, unipolar versus multipolar post-ASI world) that would take time to discuss.

Assuming a pre-ASI world, do you have guesses for what our crux might be? I'm not convinced perfect surveillance is inevitable either, but I probably assign higher odds to it than you.

4 comments

I don't know about inevitable, but I imagine that it is such an attractive option to governments that if the technology gets there, it will be enacted before laws are passed preventing it, if any ever are. I would include a version of this where it is practically mandatory through incentives: greatly increased cost of insurance, near inability to defend yourself in a court or cross borders if you lacked it, or it just becoming the social norm to give up as much data about yourself as possible.

That said, I also think that if things go well we will have good space technology allowing relatively small communities to live in self sustaining habitats/colony ships which would kind of break any meaningful surveillance.  
 

This is a very off the cuff remark, I haven't given this topic a great deal of thought before reading this post so make of that what you will.

Hey. Thanks for the reply. 

“Self-sustaining” seems like the key phrase here. The colony would need an independent supply of food, water and energy, and it would need an independent military and government.

What time scale are you thinking around?

And do you expect space colonies to obtain this level of political freedom from existing nuclear powers? If yes why? 

Honestly, I'm not sure. I read about the Biosphere 2 experiments a while ago, and they pretty much failed to make a self-sustaining colony with only a few people and way more mass than we could practically get into space. I really want us as a species to keep working on that, so we can solve any potential problems in parallel with our development of rockets or other launch systems. I could see a space-race-esque push getting us there in under a decade, but there currently isn't any political or commercial motivation to do that.

I don't know if it would necessarily need a military. I could easily be very wrong, but there's so much space in space, and so much stuff on Earth, that trying to conquer a habitat with a few thousand people on it seems a little unnecessary. Italy won't take over Vatican City, not because they can't, but because there really isn't a good reason to.

As for political freedom, that's the most speculative of all, as I understand it less than the technology. My intuition is that they could, simply because a self-sustaining colony doesn't need to produce any surplus a government would be interested in taxing. If you set up an asteroid mining operation, I can see all the governments wanting to take a cut of the profits, but if all you wanted was to get away from an implicit surveillance state, it would have to be truly dystopian to keep you from leaving. As long as you don't launch anything dangerous toward Earth, you aren't growing exponentially to the point where you might rival the power of a country, and you aren't engaging in incredibly lucrative trade, the only motivation left to govern you would be control for control's sake, and I guess I'm just optimistic enough to think that there will always be at least one place on Earth with high tech that isn't that dystopian.

Got it.


I’m not sure, but I think building a colony (or hiding in an existing colony) in a remote rainforest or mountainous region may be easier to achieve if the goal is just security through obscurity. It's also easier to be self-sustaining, at least with today’s tech. There are many such groups of people that exist today, that are mostly self-sustaining yet don’t produce enough surplus that anyone else cares to find out what they’re doing.

My guess is it’ll be one of the nuclear powers who builds the first space colony to begin with, so it’ll be theirs by default, no conquering needed. Also, the US defence establishment in particular has a history of wanting ownership and soft power over emerging technologies long before it’s obvious what the commercial value will be, and I don’t see that as irrational from their point of view.