Yes, by default.
I'd drop the word "inevitable", but barring massive change, like a technology regression or very strong laws being passed (and surviving the AI era), near-perfect surveillance is very likely to happen, and the safe assumption is that nothing you do will be hidden, or even hidable, by default.
I'm not addressing whether surveillance is good or bad.
I avoid terms (and concepts) like "inevitable". There are LOTS of unknowns, and many future paths that go through this, or not. Scenario-based conditionality (what would lead to this, and what would lead elsewhere) seems more fruitful.
Perfect surveillance is the default for electronic intelligence - logging is pretty universal. I think this is likely to continue in future universes where most people are electronic.
I think the answer is "Mu" for universes with no people in them.
I think the likely path is "never perfect, but mostly increasing over time" for universes with no singularity or collapse.
I'd love to hear more about implications of the CURRENT level of observation: things that are just now feasible, and the things that are promoting or holding them back. For instance, individual facial recognition got a wave of reporting a few years ago, and I honestly don't know if it just quietly became universal or if the handwringing and protests actually worked to keep it only in very controlled and visible places (like border crossings).
I have a feeling that the current bottleneck is data integration rather than data collection.
Hi Dagon
Thanks for this reply.
Predicting and planning multiple scenarios is a good idea; maybe I should do this.
It’s difficult to pinpoint what “current level” implies. As long as there are missing pieces, it’s arguably still in the future, not the present. For example, at a technical level there may be no difficulty in hiring 10,000 drone pilots to surveil a city, using off-the-shelf drones and the existing workforce of drone pilots. But the reality is that this has not happened yet in any city, and there might be some reasons why.
If current level only means lite...
Hm, not sure about it being broadcast versus consumed by a powerful AI that somebody else has at least partial control over.
To be clear, when you say powerful you still mean less powerful than ASI, right?
What are your thoughts on whether this organisation will be able to secure the data they collect? My post has some of my thoughts on why securing data may be difficult even if you're politically powerful.
I'd like to hear the arguments why you think perfect surveillance would be more likely in the future. I definitely think we will reach a state where surveillance is very high, high enough to massively increase policing of crimes, as well as empower authoritarian governments and the like, but I'm not sure why it would be perfect.
It seems to me that the implications of "perfect" surveillance are similar enough to the implications of very high levels of surveillance that number 2 is still the more interesting area of research.
Thanks for the reply.
You can read my linked post for more on how surveillance will increase.
But yes, good to know you’d rather I write more about 2.
I have considered automated mass-surveillance likely to occur in the future, and have tried to prevent it, for about 20 years now. It bothers me that so many people don't have enough self-respect to feel insulted by the infringement of their privacy, and that many people are so naive that they think surveillance is for the sake of their safety.
Privacy has already been harmed greatly, and surveillance is already excessive. And let me remind you that the safety we were promised in return didn't arrive.
The last good argument against mass-surveillance was "They cannot keep an eye on all of us", but I think modern automation and data processing have defeated that argument (people have just forgotten to update their cached stance on the issue).
Enough ranting. The Unabomber argued for why increases in technology would necessarily lead to reduced freedom, and I think his argument is sound from a game theory perspective. Looking at the world, it's also trivial to observe this effect, while it's difficult to find instances in which the number of laws has decreased, or in which privacy has been won back (the same applies to regulations and taxes; many things have a worrying one-way tendency). The end-game can be predicted with simple extrapolation, but if you need an argument, it's that technology is a power-modifier, and that there's an asymmetry between attack and defense (the ability to attack grows faster, which I believe caused the MAD stalemate).
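One toy way to make that asymmetry concrete (an illustrative sketch only; the exponential growth rates are assumed purely for the example): let attack capability grow as $a(t) = a_0 e^{\alpha t}$ and defense capability as $d(t) = d_0 e^{\beta t}$, with $\alpha > \beta$. Then

$$\frac{a(t)}{d(t)} = \frac{a_0}{d_0}\, e^{(\alpha - \beta)t} \longrightarrow \infty,$$

so past some point defending stops being viable at any price, and the only stable strategy left is deterrence by threatened retaliation, i.e. the MAD stalemate.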
I don't think it's difficult to make a case for "1", but I personally wouldn't bother much with "2" - I don't want to prepare myself for something when I can help slow it down. Hopefully web 3.0 will make smaller communities possible, resisting the pathological urge to connect absolutely everything together. By then, we could get separation back, so that I can spend my time around like-minded people rather than being moderated to the extent that no groups in existence are unhappy with my behaviour. This would work out well unless encryption gets banned.
The maximization of functions leads to the death of humanity (literally or figuratively), but so does minimization (I'm arguing that pro-surveillance arguments are moral in origin, and that they make a virtue out of death).
Thanks for this reply. You do seem to be thinking along lines similar to mine, focusing on where the incentives lead in the long term, not just the short term.
Can you identify the specific arguments from ISAIF (Industrial Society and Its Future) that you find persuasive on why future humans will have reduced freedom?
I agree that since the Enlightenment it has become easier and easier to destroy a region of matter (using some sort of weapon or explosive) and harder to defend against it being destroyed (except in the game-theoretic sense where you commit to destroying something else in return).
I’m curious ho...
I don’t think perfect surveillance is inevitable.
I would prefer it, though. I don’t know any other way to prevent people from doing horrible things to minds running on their computers. It wouldn’t need to be publicly broadcast, just overseen by law enforcement. I think this is much more likely than a scenario where everything you see is shared with everyone else.
Unfortunately, my mainline prediction is that people will actually be given very strong privacy rights, and will be allowed to inflict as much torture on digital minds under their control as they want. I’m not too confident in this though.
One of our cruxes is probably the likelihood of law enforcement actually securing the data they collect, versus it being leaked.
Thanks for the reply.
Sorry, I think I'm going to avoid discussing your point about digital minds in this post; it's best for a separate post. There are a number of considerations there (ASI timelines, unipolar versus multipolar post-ASI world) that would take time to discuss.
Assuming a pre-ASI world, do you have guesses for what our crux might be? I'm not convinced perfect surveillance is inevitable either, but I probably assign higher odds to it than you.
I don't know about inevitable, but I imagine it is such an attractive option to governments that if the technology gets there it will be enacted before laws are passed preventing it, if any ever are. I would include a version of this where it is practically mandatory through incentives: a greatly increased cost of insurance, a near inability to defend yourself in court or cross borders if you lack it, or it simply becoming the social norm to give up as much data about yourself as possible.
That said, I also think that if things go well we will have good space technology allowing relatively small communities to live in self-sustaining habitats or colony ships, which would kind of break any meaningful surveillance.
This is a very off-the-cuff remark; I haven't given this topic a great deal of thought before reading this post, so make of that what you will.
Hey. Thanks for the reply.
“Self-sustaining” seems like the key phrase here. The colony would need an independent supply of food, water, and energy, and it would need an independent military and government.
What timescale are you thinking of?
And do you expect space colonies to obtain this level of political freedom from existing nuclear powers? If so, why?
Honestly, I'm not sure. I read about the Biosphere 2 experiments a while ago, and they pretty much failed to make a self-sustaining colony with only a few people and way more mass than we could practically get into space. I really want us as a species to keep working on that, so we can solve any potential problems in parallel with our development of rockets or other launch systems. I could see a space-race-esque push getting us there in under a decade, but there currently isn't any political or commercial motivation to do that.

I don't know if it would necessarily need a military. I could easily be very wrong, but there's so much space in space, and so much stuff on Earth, that trying to conquer a habitat with a few thousand people on it seems a little unnecessary. Italy won't take over Vatican City, not because they can't but because there really isn't a good reason to.

As for political freedom, that's the most speculative of all, as I understand it even less than the technology. My intuition is that they could get it, simply because a self-sustaining colony doesn't need to produce any surplus a government would be interested in taxing. If you set up an asteroid mining operation, I can see all the governments wanting to take a cut of the profits, but if all you wanted was to get away from an implicit surveillance state, it would have to be truly dystopian to keep you from leaving. As long as you don't launch anything dangerous toward Earth, you aren't growing exponentially to the point where you might rival the power of a country, and you aren't engaging in incredibly lucrative trade, the only motivation left to govern you would be control for control's sake, and I guess I'm just optimistic enough to think that there will always be at least one place on Earth with high tech that isn't that dystopian.
Got it.
I’m not sure, but I think building a colony (or hiding in an existing colony) in a remote rainforest or mountainous region may be easier to achieve if the goal is just security through obscurity. It’s also easier to be self-sustaining there, at least with today’s tech. There are many such groups of people today that are mostly self-sustaining, yet don’t produce enough surplus that anyone else cares to find out what they’re doing.
My guess is it’ll be one of the nuclear powers who builds the first space colony to begin with, so it’ll be theirs by default, no conquering needed. Also, the US defence establishment in particular has a history of wanting ownership and soft power over emerging technologies long before it’s obvious what their commercial value will be, and I don’t see that as irrational from their point of view.
A lot of my recent research work focusses on:
1. building the case for why perfect surveillance is becoming increasingly hard to avoid in the future
2. thinking through the implications of this, if it happened
When I say perfect surveillance, imagine everything your eyes see and your ears hear being broadcast 24x7x365 to YouTube (and its equivalents in countries where YouTube is banned), and imagine this is true for all 8 billion people.
I'm unsure whether I should devote more of my research time to 1 or 2. If lots of people buy the hypothesis in 1, I'd rather devote time to 2. If people don't buy the hypothesis in 1, I might want to spend time on 1 instead.
Hence I wanted to solicit opinions. Do you consider perfect surveillance inevitable? Why or why not?