Epistemic status: Initially, this was a practical advice post, because I thought privacy was obviously important. After some discussion, I realised that I cared about privacy as a terminal goal, and that this view wasn't universal. It's therefore hard for me to judge the instrumental use of privacy for other goals fairly.
As more and more of our lives play out online, the importance of deciding who gets access to what information about us is only going up. Privacy is not a new topic on LW, but I think there's still fertile ground here, particularly around privacy online.
I'm planning a follow-up post with pointers for practical advice if you decide you do care.
When we talk about "privacy" we mean a lot of different things. Words are complicated. I'll define privacy as the [social, technical, legal, practical] ability to control who has access to what information about you. Societies and people with greater [social, technical, legal, practical] ability to control access to their information have more privacy, even when they choose not to use this ability. A necessary prerequisite for this kind of privacy is knowledge about how your information is being used.
There are three important classes of privacy within this definition:
In all three cases, especially #1 and #3, your main defense is the practical fact that nobody cares about your data specifically. As we'll see, I think this still comes with significant downsides.
Related but distinct concepts:
Security: The technical ability to stop others accessing your data. A subskill of privacy as defined above.
Anonymity: If you're anonymous, your actions cannot be associated with your true identity or with any of your other actions.
Pseudonymity: If you're pseudonymous, your actions cannot be associated with other pseudonyms or your 'true' name, but can be associated with other actions under the same pseudonym.
I just care about privacy. I feel uncomfortable when somebody sits behind me and is looking at my screen, even when I'm doing utterly unimpeachable things online. The idea of Google recording every website I've ever visited sends a shiver down my spine even if they never used it for anything and stored it perfectly securely.
If this is true for you at all, you might want to see what information companies online have about you, and decide whether you're comfortable with it or not. In particular, 'off-Facebook activity' includes a log of when you've visited any website with a 'share to Facebook' button on it.
Since telling Google to stop serving me personalised ads, and using a VPN, I only see YouTube ads in Dutch (a language I don't speak at all). Needless to say, I now very rarely get caught up in an advert and forget to skip it after 5 seconds.
Excellent work on the overall downsides of ads and personalised content has already been done on LW, so I won't try to reinvent the wheel. "Most ads are now in a language you don't speak" may be second only to completely blocking adverts for reducing their effect. There are many other possible bad effects of personalisation of ads and content (differential mind manipulation, for instance). I've heard that looking into placing ads on Facebook, and discovering how narrow a category you can market to, is an excellent incentive to care about privacy.
..."millions of Americans were separated by an algorithm into one of eight categories, also described as ‘audiences’, so they could then be targeted with tailored ads on Facebook and other platforms.
One of the categories was named ‘Deterrence’, which was later described publicly by Trump’s chief data scientist as containing people that the campaign “hope don’t show up to vote”.
In Georgia, despite Black people constituting 32% of the population, they made up 61% of the ‘Deterrence’ category. In North Carolina, Black people are 22% of the population but were 46% of ‘Deterrence’. In Wisconsin, Black people constitute just 5.4% of the population but made up 17% of ‘Deterrence’."
— Channel 4 News
Whether this actually works or not is an open question in my mind; I haven't done enough research to know. It is clearer to me that the technology is getting better, and that it may eventually work even if it doesn't currently. Certainly the incentives are not changing (unless we as a society go out of our way to change them), and both the computing power and the amount of data available are headed upwards.
While privacy ≠ censorship, the latter is impossible without compromising the former. If you don't know what people are looking at, you can't block specific things. Supporting privacy technology stops you being censored, and helps provide the tools for others to avoid censorship.
This has positives and negatives depending on how you feel about the average thing that is currently censored. In the UK and the US, this is currently largely torrenting websites, terrorism, and child pornography, but other countries have much more extensive blocks (the Great Firewall of China, for example). One of the main problems is that "terrorism" is a nebulous enough concept that it can mean anything from "actively planning to invade the seat of government to prevent a transition of power" to "anybody who disagrees with me". The important point is that privacy technology will help all censored sites, including the ones you think should be censored.
Though this is one of the standard arguments for (and against) privacy, I think something is missing. Many S-risks require cultural lock-in. Though unlikely to be impactful, reducing the risk of lock-in by preventing censorship may be valuable given the magnitude of the consequences. Indeed, depending on your opinions about longtermism and X-risk, minuscule chances of decreasing S-risk could outweigh all other considerations for privacy.
We need a realm shielded from signaling and judgment. A place where what we do does not change what everyone thinks about us, or get us rewarded and punished. Where others don’t judge what we do based on the assumption that we are choosing what we do knowing that others will judge us based on what we do. Where we are free from others’ Bayesian updates and those of computers, from what is correlated with what, with how things look. A place to play. A place to experiment. To unwind. To celebrate. To learn. To vent. To be afraid. To mourn. To worry. To be yourself. To be real.
We need people there with us who won’t judge us. Who won’t use information against us.
— Zvi
Increasingly, the internet is a dangerous place for those with the wrong opinions. Thinking is therefore not something to do in public. This becomes dangerous when it's not clear where is and isn't public, and indeed if there is anywhere that isn't.
Given certain events of the past few years, it would be remiss of me not to at least touch on pseudonymity. Remaining truly anonymous is near-impossible with IP tracking, fingerprinting, and sign-up requirements to view much of the internet. Remaining pseudonymous is technically easier, but still hard for practical reasons.
Pseudonymity requires a commitment to using a different pseudonym for each online persona, and keeping nice clean gaps between them. If you, say, write erotic fiction, publish edgy poetry, or use dating apps, but don't want this associated with your LinkedIn account, you need a pseudonym. Ideally, you also need to make sure you haven't used the same email address or phone number to sign up, that you haven't used the same profile picture, etc.
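To see why reused signup details matter, here is a toy sketch of how trivially two personas can be linked when they share any identifier. All the account data below is invented for the example; real data brokers do essentially this join across much larger datasets:

```python
# Toy illustration: linking pseudonymous accounts via shared signup details.
# All handles, emails, and phone numbers here are invented.

accounts = [
    {"handle": "linkedin_me",  "email": "real.name@example.com", "phone": "555-0100"},
    {"handle": "edgy_poet_99", "email": "poetry.alt@example.com", "phone": "555-0100"},
    {"handle": "fic_writer",   "email": "fic.alt@example.com",   "phone": "555-0199"},
]

def linked_pairs(accounts, fields=("email", "phone")):
    """Return pairs of handles that share any identifying field."""
    pairs = set()
    for i, a in enumerate(accounts):
        for b in accounts[i + 1:]:
            if any(a[f] == b[f] for f in fields):
                pairs.add((a["handle"], b["handle"]))
    return pairs

# The LinkedIn account and the poetry pseudonym share a phone number,
# so anyone who can join the two datasets can connect them.
print(linked_pairs(accounts))
```

A single shared field is enough; this is why clean separation has to cover every identifier, not just the username.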
I think pseudonymity is achievable with a little careful thought. I think it also could be worth it for people who say things on Tinder they wouldn't say in a job interview, not just bloggers with thriving psychiatric practices.
This should be obvious, but if you didn't find the above reasons compelling, taking steps to improve your privacy online is probably not worth doing. Stop reading and go do something higher-value. I think this is a completely reasonable outcome, given the subjectivity of the reasons to care and the assumptions behind them.
Getting recommendations for things you like is nice. Sometimes it's too nice, but there are clear upsides to getting shown content you're likely to enjoy.
Increasing levels of privacy require more and more time, and will break more and more websites. Even with a comparatively moderate approach, I get captchas and/or email verification codes near-constantly, and have to use a different browser for video calls because they just don't work otherwise.
Privacy seems to be losing in an arms race. What was insanity 50 years ago is now normality, and what is normality now will be insanity in 50 years. In particular, digital fingerprinting, which uses a number of traits about your browser or your phone to uniquely identify you, seems to be nearly impossible to prevent, as it requires no cookies at all, and may identify you even with a different IP address.
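As a toy illustration of why no cookie is needed (the attribute names and values below are invented; real fingerprinting libraries use dozens more signals, such as canvas rendering, installed fonts, and the audio stack), a fingerprint is just a stable hash over traits your browser reveals anyway:

```python
import hashlib

# Invented example attributes; any trait the browser exposes will do.
traits = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/115.0",
    "screen": "1920x1080",
    "timezone": "Europe/Amsterdam",
    "language": "en-GB",
    "fonts": "Arial,DejaVu Sans,Noto Sans",
}

def fingerprint(traits: dict) -> str:
    """Hash the sorted traits into a stable identifier."""
    canonical = "|".join(f"{k}={v}" for k, v in sorted(traits.items()))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

print(fingerprint(traits))  # same traits -> same ID, on any IP, with no cookies
```

The uncomfortable property is that the identifier is recomputed from scratch on every visit, so there is nothing stored on your machine to delete.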
There are some techniques to avoid fingerprinting: 'budgets' of information permitted, frequently changing some features of your fingerprint, or spoofing many features to make all users look more similar. I'm not sure how effective any of these are; there is a risk that some of these approaches could themselves be highly identifying. I've not found anything that consistently beats the free online tools that show how fingerprintable you are.
I think the best defense against this claim is that "we're currently losing an arms race for something important" is (at least sometimes) an argument to do more, not less. Additionally, it may be possible to make it economically not-worth-it to track you (by blocking trackers and sending do-not-track signals), even if it's still technically possible. You can also minimise the information that's linked to your fingerprint by limiting access to e.g. location and voice services.
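At its core, tracker blocking is a simple idea: match each outgoing request against a blocklist of known tracking domains and drop the matches. A minimal sketch (the domains here are invented; real blockers like uBlock Origin use large community-maintained filter lists with much richer rules):

```python
from urllib.parse import urlparse

# Tiny sketch of a tracker blocklist; domains invented for the example.
BLOCKLIST = {"tracker.example", "ads.example"}

def is_blocked(url: str) -> bool:
    """Block a domain on the list, and any subdomain of it."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKLIST)

print(is_blocked("https://cdn.tracker.example/pixel.gif"))  # True
print(is_blocked("https://example.org/article"))            # False
```

Every blocked request is a data point the tracker paid to serve and got nothing back for, which is the economic lever mentioned above.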
Personally, I think privacy is important, at least enough to spend an hour getting the low-hanging fruit. I tried to make it an open question in my mind while writing the post, to avoid writing the bottom line. Stay tuned for a follow-up with some suggestions that I think are that low-hanging fruit.
Thanks to the LessWrong feedback team for their help with this post (which I can highly recommend to anybody writing a post).
Governmental Privacy: unless you have taken serious steps to guard your data, you have negligible privacy from the government.
The government is not a monolith. If, for example, you apply for a grant from the NIH, you likely have a decent amount of privacy from that agency; the NSA, on the other hand, is able to access a lot of information about you.