Context: As part of our efforts on Open Questions, the LessWrong team has been reaching out to various researchers we know and asking them about questions they would be interested in getting answered.

This question was given to us by Ryan Carey and some of the answers below are the result of us trying to answer the question for a day on a private LessWrong instance, copied over to allow other people to contribute and read what we wrote.


3 Answers

Here is a list of models of mine that overall paint the picture of "government has a lot of advantages over industry, enough that I expect government to be a lot better at keeping secrets". I feel overall relatively confident in this conclusion, though in the absence of hard empirical data I probably wouldn't go over 80% confidence.

Importance of obfuscation

The more I think about the difficulty of keeping secrets, the more important active obfuscation seems to me. From what I understand, in most military scenarios it is rarely the case that the real planned strategy failed to reach the enemy; instead, the correct strategy was only one of many reports the enemy received, and the enemy was often unable to distinguish the correct reports from the false ones.

Another example is the registration of patents. U.S. patents are public, but as I understand it many major companies register dozens of decoy patents to prevent others from predicting their next product. The difficulty is often in distinguishing the real patent from all the fake and useless patents a company registers.

The key difficulty of obfuscation is that lying is difficult. It is hard to manufacture some false statement about reality that isn't contradicted by some more readily available fact about reality, though this depends on the domain of the secret. I think there are two major categories of secrets, "strategic secrets" and "external secrets".

Strategic secrets are secrets about which strategy you are going to pursue. In this case, obfuscation is often easy, and bluffing is a very common occurrence. It's rare that someone has easily verifiable facts about your psychology or decision making process that help them rule out all but one of your potential strategies, and this kind of secret is the basis of most adversarial games and their resulting strategies.

External secrets are facts about external reality that you want to prevent from becoming known by someone else. These are much harder to keep secret, and coming up with good obfuscations is often difficult since to confuse your enemy you need to create a hypothesis that is plausible, but not easily falsifiable. Depending on the domain, this might be quite difficult.

An example of the difficulty of obfuscation or deception that I am familiar with is the manufacturing of fake videos for speedrunning purposes. The speedrunning community has repeated experience with people trying to splice together video segments of a speedrun into a run of the full game, or using emulators to make perfectly timed inputs at the correct moments. However, new ways to verify the veracity of a speedrun are constantly being developed, including audio analysis to notice sharp transitions that would not occur in natural video footage, and analyses of the input limitations of standard controllers. Since new ways of identifying fake speedruns are constantly being developed, someone who is trying to fake a video will find it almost impossible to account for all of them and produce a video that holds up as genuine for a long period of time.
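As a toy illustration of the audio-analysis idea (not a tool the speedrunning community actually uses; the signal, threshold, and function name here are all invented for the example), a crude splice leaves a sample-to-sample jump much larger than anything natural audio produces:

```python
import math

def detect_splices(samples, threshold=0.5):
    # Natural audio is locally smooth, so flag any index where
    # consecutive samples jump by more than `threshold`.
    return [i for i in range(1, len(samples))
            if abs(samples[i] - samples[i - 1]) > threshold]

# Two 440 Hz sine segments butted together with a phase mismatch,
# mimicking a crude splice at sample 1000.
seg_a = [math.sin(2 * math.pi * 440 * t / 44100) for t in range(1000)]
seg_b = [math.sin(2 * math.pi * 440 * t / 44100 + math.pi / 2)
         for t in range(1000)]
spliced = seg_a + seg_b

suspects = detect_splices(spliced)  # flags only the splice point
```

Real forensics is far more involved, but the asymmetry is the same: each new detector is cheap to run and expensive for a faker to anticipate.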

The difficulty of preventing information leaks

My current model of cybersecurity suggests that it is almost impossible to keep any information stored on any computer with public internet access private, and that even for air-gapped computers keeping information on them secure is still extremely difficult.

My current model is that in cybersecurity, offense is vastly easier than defense, and that you can likely acquire the relevant vulnerabilities to breach any specific machine for less than $10 million (based on the cost of a zero-day exploit for most operating systems being around $1 million). The primary remaining obstacle for an attacker is likely knowing precisely what information they are looking for, and identifying the precise targets that should be breached.

Even air-gapped computers are not safe from attacks, as Stuxnet demonstrated: it compromised Iranian uranium-enrichment facilities even though they were completely cut off from the internet.

My model is that for any group that is larger than 100 people, going through the necessary effort to keep digital information secret is likely impossible.

Cost of verification

Another major determinant of the difficulty of keeping a secret is the cost for others to verify that it is indeed true. This is particularly important in the context of active obfuscation. Here are some concrete examples:

You have a spy-plane with a maximum altitude of 10 km that you want to keep secret. An enemy nation-state receives that information, but also receives false reports, intentionally disseminated by you, claiming that your maximum altitude is actually 15 km, 8 km, or 5 km. The ability to make use of your secret is now determined by the enemy's ability to differentiate the correct hypothesis from the fictional ones. They basically have two ways of achieving this:

1. They have some knowledge of your strategy or psychology that allows them to infer which report is the correct one, e.g. if one of the reports was extracted from a highly guarded facility, and the other ones were suspiciously sent in via email from anonymous sources.

2. They can perform some experiment or inference that allows them to distinguish between the competing hypotheses. The cost of this can differ a lot between different secrets. If they have access to a file of yours that is encrypted with a cryptographic key, the cost of verifying that a single candidate key is the correct one is probably negligible, so finding the correct one among 5 keys is likely trivial.
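A minimal sketch of why key verification is so cheap (a toy construction invented for this example, not how real systems work; production code would use a vetted authenticated-encryption library): authenticated encryption rejects a wrong key immediately, so testing five candidates is just five fast decryption attempts.

```python
import hashlib
import hmac
import secrets

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy scheme: XOR with a SHA-256-derived keystream, then append
    # an HMAC tag so that a wrong key is detectable on decryption.
    block = hashlib.sha256(key).digest()
    keystream = (block * (len(plaintext) // 32 + 1))[:len(plaintext)]
    ciphertext = bytes(a ^ b for a, b in zip(plaintext, keystream))
    tag = hmac.new(key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def try_decrypt(key: bytes, blob: bytes):
    # Returns the plaintext if the key checks out, otherwise None.
    ciphertext, tag = blob[:-32], blob[-32:]
    expected = hmac.new(key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return None  # wrong key: rejected almost instantly
    block = hashlib.sha256(key).digest()
    keystream = (block * (len(ciphertext) // 32 + 1))[:len(ciphertext)]
    return bytes(a ^ b for a, b in zip(ciphertext, keystream))

real_key = secrets.token_bytes(32)
decoy_keys = [secrets.token_bytes(32) for _ in range(4)]
blob = encrypt(real_key, b"the secret file")

# Checking all five candidates is trivial; only the real key decrypts.
results = [try_decrypt(k, blob) for k in decoy_keys + [real_key]]
```

Contrast this with the spy-plane case below, where no comparably cheap "decryption check" exists for a candidate altitude.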

In the spy-plane example, your enemy might be able to leverage their models of your manufacturing process to rule out some hypotheses (maybe because they think it's currently impossible to build a spy-plane with a maximum altitude of 15 km). Or they might be able to combine other knowledge they have about your plane, like the shape of its wings, to rule out certain numbers. But they will almost certainly struggle a lot more with verifying or falsifying one of your maximum-altitude numbers than with verifying the correct cryptographic key for a file of yours they stole.
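This elimination process can be sketched as a Bayesian update (all numbers here are hypothetical, chosen only to illustrate the mechanics): each piece of side knowledge reweights the competing altitude reports, and the decoys only retain value while no cheap evidence separates them.

```python
# Uniform prior over the four altitude reports (km): one real, three fake.
prior = {15.0: 0.25, 10.0: 0.25, 8.0: 0.25, 5.0: 0.25}

# Hypothetical likelihoods from the enemy's side knowledge:
# their manufacturing model says 15 km is infeasible ...
manufacturing = {15.0: 0.0, 10.0: 1.0, 8.0: 1.0, 5.0: 1.0}
# ... and the observed wing shape fits high-altitude designs better.
wing_shape = {15.0: 0.9, 10.0: 0.6, 8.0: 0.3, 5.0: 0.1}

# Multiply prior by each likelihood, then renormalize.
posterior = {h: prior[h] * manufacturing[h] * wing_shape[h] for h in prior}
total = sum(posterior.values())
posterior = {h: p / total for h, p in posterior.items()}
```

With these made-up numbers, two cheap pieces of evidence already concentrate 60% of the probability on the true 10 km report; the value of the decoys depends entirely on how expensive such evidence is to obtain.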

Power theory

The ability of an actor to prevent the release of a secret is primarily determined by the negative consequences it can inflict on another actor caught trying to release the secret, weighed against the incentive that actor has to get access to the secret.

In this model, governments have a variety of significant advantages over corporations. In particular, the government has access to guns, the ability to threaten violence, and the threat of serious prison time for anyone who releases governmental secrets.

Corporations usually only have access to civil litigation. While this provides some protection, and likely prevents whole companies whose only purpose is to steal your technology from springing into existence in your country, it still leaves a lot of room for international competitors to steal your secrets with little fear of consequences, and allows companies to take strategic risks with industrial espionage by simply budgeting for the consequences of potential litigation.

In some situations corporations have more power over each other, which presumably allows them to keep better secrets. If you have a highly integrated industry in which many companies rely on each other's products, then the threat of severing those ties might deter adversarial actions like releasing secrets. An example here might be the graphics-card industry, which tends to have only two big players (Nvidia and AMD), where many industry actors rely on good relations with one of these companies to function. As such, they are less likely to steal and use secrets from Nvidia or AMD, for fear of retribution from those organizations.

However, because of the delicacy of international relations, one country's ability to punish another country for releasing secrets might actually be more limited than one international company's ability to punish another. An example might be two international companies competing in the same market that have coordinated on pricing or on splitting markets, and that could renege on those agreements if they notice a defection by the other.

Concrete example: Mutually assured destruction via patents.

I have the cached model that many of the world's biggest software companies are well aware that they are constantly infringing on each other's patents, and that they coordinate around this by agreeing not to pursue those violations.

This also enables a pretty natural form of punishment in case one party clearly violates the terms of the agreement: the injured party can threaten to enforce its patents if the other decides to release its secrets.

(Comment by Ryan)

My model is that for any group that is larger than 100 people, going through the necessary effort to keep digital information secret is likely impossible.

A strong claim...

My naive expectation is that government has been more successful. This expectation rests on three things:

1. Industry is only interested in commercially relevant secrets. Government is interested in commercially relevant secrets, and also a variety of non-commercial secrets like those with military applications. Therefore a government is more likely to try to keep any random technological secret than a company is, because many such secrets are not commercially viable.

2. Historically, powerful technological secrets have been developed explicitly under government authority. In the United States, these have been government laboratories or heavily regulated companies that yield the secrets to the government and don't share them with industry. Comparatively few such secrets are developed under the auspices of the private sector alone (unless it has been much more successful at keeping them secret than I expect).

3. Governments usually have capabilities that industries lack, like powers of investigation and violence. They can and do routinely use these capabilities in the protection of secrets. It is rare for a commercial entity to have anything like that capacity, and even if they do there is no presumption of legitimacy the way there is for governments.

So the government is interested in more kinds of powerful technological secrets, and originates most of them, while having and using additional tools for keeping them secret.

Following on assumption #1, it feels worth addressing the question of incentives. A corporation only has an incentive to invest in security in proportion to the profits it expects from the secret or secrets in question. Further, corporations always have an incentive to cut costs, and security is a notorious target for cuts because its relationship to profits is poorly understood, yet that relationship is how the judgments are made.

By contrast, the government tends to have security protocols first and then decide what to protect with them later. The United S...

The grandfather of a college friend had a technique for producing exceptionally smooth ball bearings that he preferred to keep secret rather than patent. He had two employees, one of whom was his son-in-law.

I suspect there are a lot of small cases like this, because it would be weird for me to know the only one.

3 Related Questions

I'm sorry, what? Please explain.
(Moved your comment to the top level.) We set up a separate server with the LessWrong code and used it to test out the related-question features that you now see, since adding related questions is the kind of thing you can't really try out on the live server, and the whole feature went through multiple iterations and schema changes while we were trying it out. We do this all the time to try out various features before we push them live, or before we decide to scrap them.
2 · Answer by habryka · 3y
TECHNOLOGY ARMS RACE DYNAMICS

If we have an AI arms race, then the ability of the leading company to keep secrets is highly relevant for the maximum lead the top company can have over competitors, which in turn determines the amount of resources they can invest into safety applications.

DIFFICULTY OF SELECTIVELY RELEASING SAFETY RESEARCH FROM AN INSTITUTION

Concrete scenario: Imagine that you are FHI and some government official approaches you and says [...] Should you respond with [...] or [...]?

BIOSECURITY AND POTENTIALLY DANGEROUS APPLICATIONS

In biosecurity, you have many applications of potentially dangerous technology, often for the development of better medicine. You often have both industry and government research labs doing research into those technologies. Which ones should you differentially encourage, given that a lot of the risk comes from leaking potentially highly dangerous technologies?
2 · Answer by habryka · 3y
(This comment was originally made by Ruby on a private instance of LessWrong.) Habryka and I have been working on this question and the parent question, "Has government or industry had greater past success in maintaining really powerful technological secrets?" After a half-hour conversation, this is the state of my thoughts. Some questions feel intrinsically interesting to us, but often there's a more practical motive, namely a decision to be made. Here the real question is: given the opportunity, should one prefer that the development of powerful (and dangerous) technologies take place in industry or in government spaces? We might expect that think tanks and research organizations are in a position to influence such choices.

REASONS KEEPING SECRETS MIGHT BE IMPORTANT

The track record of keeping secrets is relevant to our preference of government vs. industry under the assumption that keeping powerful technologies secret is important. I am not certain of this a priori, but let's count instances where keeping technologies secret might be important:

* Straightforwardly, there are actors who would use the technologies to cause harm, e.g. people who would create weaponized virus strains given the chance.
* You are in an AI arms race dynamic and your ability to maintain a lead on your competitors affords you the breathing room to do safety work. Suppose you have a 12-month lead: if you can keep your progress secret, then you have margin with which to do safety work while still staying ahead. If you can't reliably keep your advances secret (and the secrets are important), you can't make use of your lead for safety work.
* If your AI safety research involves a mix of capabilities work (which you want to keep secret) and safety work (which you want to keep public), then your ability to conduct positive-sum safety...
3 · Answer by Elizabeth · 3y
Post-WW2, the Allies sold Enigma machines to developing countries without mentioning they had broken the code. Note: some people speculate the purchasing countries knew this and accepted the risk.
3 · Answer by Elizabeth · 3y
The British kept radar secret during WW2, attributing their pilots' ability to shoot down planes in the dark to vitamin A.
2 · Answer by habryka · 3y
* Nuclear weapons
* Dangerous biotechnology
  * Cultures of dangerous pathogens (seems a bit weird to call this a secret, but it has a lot of attributes of secrets)
  * Tools and knowledge to build better gene-editing technology
* Concrete applications of existing technology that could be highly dangerous
  * Chemical weapons
  * Drone manufacturing technology?
  * Methods of terrorism
  * Potential critical weakpoints in infrastructure
  * Other terrorism ideas
(Originally posted by Ruby) Have "tools and knowledge to build better gene-editing technology" been kept secret? I had the impression developments were paraded out at conferences and in papers, etc.
My sense is that there is significant government research here that is being kept confidential, with parallel private sector research that is mostly public
(Originally posted by Ruby) "Drone manufacturing" might be unnecessary or something; or, more accurately, you can figure out how to manufacture a drone so long as you can obtain one. I believe many systems can be reverse-engineered. Though maybe less so algorithms and training data. Google's current search-engine algorithm might actually be secret? I've heard it is supposed to be, but I don't know.
Well, that's the reason why governments try really hard to prevent foreign powers from getting access to their drones, airplanes, and other technologies.
10 comments

(Originally posted by Ruby)

Evidence of both government and industry failing to keep secrets in the face of a concerted effort by China. Just some evidence that we shouldn't expect good secrecy to be the norm, and should expect much of both government and industry to be quite vulnerable. Perhaps elsewhere we can find exceptions.

The Department of Justice (DOJ) has charged two Chinese nationals with being part of a decade-long, government-sponsored global hacking campaign that included the alleged theft of information from 45 US tech companies and government agencies, including NASA’s Jet Propulsion Laboratory and Goddard Space Flight Center.

Excerpts from the Department of Justice announcement:

The Technology Theft Campaign
Over the course of the Technology Theft Campaign, which began in or about 2006, Zhu, Zhang, and their coconspirators in the APT10 Group successfully obtained unauthorized access to the computers of more than 45 technology companies and U.S. Government agencies based in at least 12 states, including Arizona, California, Connecticut, Florida, Maryland, New York, Ohio, Pennsylvania, Texas, Utah, Virginia and Wisconsin.  The APT10 Group stole hundreds of gigabytes of sensitive data and information from the victims’ computer systems, including from at least the following victims: seven companies involved in aviation, space and/or satellite technology; three companies involved in communications technology; three companies involved in manufacturing advanced electronic systems and/or laboratory analytical instruments; a company involved in maritime technology; a company involved in oil and gas drilling, production, and processing; and the NASA Goddard Space Center and Jet Propulsion Laboratory.  In addition to those victims who had information stolen, Zhu, Zhang, and their co-conspirators successfully obtained unauthorized access to computers belonging to more than 25 other technology-related companies involved in, among other things, industrial factory automation, radar technology, oil exploration, information technology services, pharmaceutical manufacturing, and computer processor technology, as well as the U.S. Department of Energy’s Lawrence Berkeley National Laboratory. 

Other links here:

^ Hewlett Packard Enterprise and IBM were among the Managed Service Providers hacked.

The methods used don't sound especially advanced: spear phishing (sending emails which look real but contain malware), installing Trojans and keyloggers, and some tricks with domain names to switch up IP addresses frequently. See "means and methods" in the unsealed indictment.

(Originally posted by Ruby) Relevant

Economic Espionage and Industrial Spying (Cambridge Studies in Criminology)

"Operation Brunnhilde"

Some of these activities were directed via the East German Stasi (Ministry for State Security). One such operation, "Operation Brunnhilde," operated from the mid-1950s until early 1966 and made use of spies from many Communist Bloc countries. Through at least 20 forays, many western European industrial secrets were compromised.[35] One member of the "Brunnhilde" ring was a Swiss chemical engineer, Dr. Jean Paul Soupert (also known as "Air Bubble"), living in Brussels. He was described by Peter Wright in Spycatcher as having been "doubled" by the Belgian Sûreté de l'État.[35][36] He revealed information about industrial espionage conducted by the ring, including the fact that Russian agents had obtained details of Concorde's advanced electronics system.[37] He testified against two Kodak employees, living and working in Britain, during a trial in which they were accused of passing information on industrial processes to him, though they were eventually acquitted.[35]

  • Google China being hacked by China. Google claims 20 other companies hacked.

According to Edward Snowden, the National Security Agency spies on foreign companies.[62] In June 2015, WikiLeaks published documents showing the National Security Agency had spied on French companies.[63]

^ I would expect they also spy on domestic companies.

The "Concerns of national governments" section is a pretty interesting read too.

(Originally posted by Ruby)

China stealing/copying weapons from the US:

In 2011, Dongfan “Greg” Chung, an aerospace engineer from Orange County, was sentenced to 24 years and 5 months in prison for spying for the Chinese regime and stealing more than 250,000 documents from Boeing and Rockwell. Included were designs for the C-17 Globemaster III, a Boeing freighter, which he snuck out in 2006.

(Originally posted by Ruby)

One possible argument in favor of industry (generally, not historically) is greater overall competence, which might also apply to keeping secrets. Though this could be a "local" problem more than an inherent one: you could reasonably expect there to be specific governments which are staffed by competent people and are overall competent. Right now government is low-status (at least on the Left) and pays poorly, but this is a contingent state.

(Originally posted by Ruby)

This article discusses the currently poor state of security-clearance infrastructure in the US. Short summary: due to security breaches by hackers, the Obama administration took away a contract from a company conducting 60% of federal background investigations. In the process of changing who does the investigations (Obama awarded contracts to four new outside providers), an enormous backlog built up. For a sense of scale:

A congressional hearing held just before the holidays in December provided encouraging evidence that the combination of Obama and Trump reforms now being implemented is starting to show results. The backlog of unclosed investigations fell from over 700,000 at the beginning of 2018 to about 600,000 at year’s end, with that number currently being whittled down at the rate of 3,000-4,000 per week. According to the head of the background investigations bureau, 55,000 requests for investigations are being received each week, but 59,000 are being completed.

Somehow I don't trust a system with that much volume, and that much pressure to get through things quickly, to maintain high standards. Nor do I trust the use of all these outside contractors.

Then again, maybe none of these people have truly powerful technological secrets to keep. Maybe there are better processes in those cases, and more thorough background checks, etc., are done.

Potential empirical questions we can answer

  • Is there a Wikipedia list of data breaches/espionage stuff?
    • Can we systematically go through that and create a spreadsheet?
  • Can we go through some list of major technologies and just see which ones people tried to keep secret and how long it took people to reproduce them?
  • What is the amount of financial resources that goes into defending from espionage?
    • What is the amount of resources that goes into espionage?
