You can’t expect anyone to “quickly prove they’re acting in good faith”; that’s an unreasonably high bar.
Okay. I think what I want is feedback on tactics, not strategy. I don't want to debate why an AI pause political movement is required, yet again. I don't want to debate why the US intelligence community will accelerate development of ASI, yet again. I can debate details of how gigapixel cameras work.
And I don't have time for actual "proof", so yes, I'm just gonna make a guess. That could involve me projecting based on the reference class of similar comments or similar people commenting.
I don't think you understand what a "pixel" is.
If you want to see a 1 km x 1 km area at a resolution of 0.1m, then you will need 10 000 x 10 000 = 100 000 000 points on the image, AKA 100 megapixels. This is (ideally) independent of technology. You can walk around and take fifty 2 MP pictures and stitch them together, you can fly a drone a few hundred meters up and take a wide-angle shot, or you can fly a satellite overhead and take a picture from space. The distance doesn't matter.
From that, it's a simple extrapolation that it'll (again, ideally) take 100 MP/km^2 * 778 km^2 = 77,800 megapixels = 77.8 gigapixels to surveil the land area of New York City at that level of detail. Again, that could be from an array of low-resolution cameras, a wide-angle camera nearby, or a telephoto camera far away.
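The pixel-count arithmetic above can be sketched directly (numbers taken from the comment, including the 778 km² figure for NYC's land area):

```python
# Pixels needed to image a ground area at a given resolution --
# independent of where the camera sits (walking, drone, or satellite).
def pixels_needed(area_km2: float, resolution_m: float) -> float:
    pixels_per_km2 = (1000.0 / resolution_m) ** 2
    return area_km2 * pixels_per_km2

# 1 km x 1 km at 0.1 m resolution -> 100 megapixels
print(pixels_needed(1, 0.1) / 1e6)    # ~100.0

# NYC land area (taken as 778 km^2) at 0.1 m -> ~77.8 gigapixels
print(pixels_needed(778, 0.1) / 1e9)  # ~77.8
```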
(Also, a brief search suggests facial recognition needs about 3mm (0.003m) resolution to identify individuals, not 100 mm (0.1m))
A 1 petapixel camera could cover an area of 3162 km x 3162 km at 0.1 m resolution, or roughly the entire United States in a single snapshot (ignoring practicalities like the curvature of the earth, of course). It could also be used to count someone's nosehairs if you set it up differently.
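A quick sanity check on that coverage figure (assuming a square 10^15-pixel sensor and the 0.1 m target from earlier in the thread):

```python
import math

PETAPIXEL = 1e15
resolution_m = 0.1

side_pixels = math.sqrt(PETAPIXEL)           # ~3.162e7 pixels per side
side_km = side_pixels * resolution_m / 1000  # ground distance covered per side
area_million_km2 = side_km ** 2 / 1e6

print(round(side_km))              # ~3162 km per side
print(round(area_million_km2, 1))  # ~10.0 million km^2 (US land area is ~9.8)
```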
I understand all this. I am assuming a constant angular field of view; let's just assume 120 degrees for now. I am assuming a single photo from a single camera placed at a single location, covering 10^15 pixels. I am not talking about multiple photos stitched together, moving the camera around, and so on.
(Yes, the camera will necessarily have multiple sensor arrays and internally stitch the data together anyway)
And yes, a petapixel camera with a 120-degree (or some other large) field of view could cover the United States at 3 mm resolution.
I am not sure if we are actually disagreeing.
I am saying someone should be able to place a camera outside US borders and still do facial recognition on people inside, from thousands of kilometres away.
I asked Opus this question:
Suppose it's a bright day and a 3x3 mm surface reflects 5% of the light. How many photons per second will it direct onto an area of (10*10 / 10^15) m^2, 40 kilometres away?
It works out to 1 photon per year.
This math assumes a raw pixel with no optics, which is an absurd way to build a camera. With a 1m lens at 40km, you could get ~10⁵ photons per second (13 OOM better).
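A sketch of the aperture arithmetic, taking the "1 photon per year" figure from the Opus estimate above as given. The bare-pixel area is one pixel's share of a 10 m x 10 m petapixel sensor, per the question above:

```python
import math

SECONDS_PER_YEAR = 3.156e7

# Bare-pixel collecting area: a 10 m x 10 m sensor split into 10^15 pixels
pixel_area_m2 = (10 * 10) / 1e15         # 1e-13 m^2 per pixel

# Collecting area of a 1 m diameter lens feeding that pixel
lens_area_m2 = math.pi * (1.0 / 2) ** 2  # ~0.785 m^2

ratio = lens_area_m2 / pixel_area_m2     # ~7.9e12, i.e. ~13 orders of magnitude
photons_per_sec = (1 / SECONDS_PER_YEAR) * ratio

print(f"{ratio:.1e}")            # ~7.9e12
print(f"{photons_per_sec:.1e}")  # ~2.5e5, consistent with ~10^5 photons/s
```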
The problem here is the diffraction limit. At the 2,500 km ranges discussed, 3mm resolution requires a single aperture of ~500m or a constellation of ~7,500 JWST-scale telescopes tiling the coverage. Optical interferometry could theoretically reduce the count, but requires maintaining satellite relative positions to within a wavelength of light.
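The ~500 m figure follows from the Rayleigh criterion; a minimal sketch, assuming green light (λ ≈ 550 nm):

```python
def aperture_for_resolution(wavelength_m: float, range_m: float,
                            resolution_m: float) -> float:
    """Rayleigh criterion: theta ~= 1.22 * lambda / D; the smallest
    resolvable ground feature at range R is x = theta * R, so the
    required aperture is D = 1.22 * lambda * R / x."""
    return 1.22 * wavelength_m * range_m / resolution_m

# 3 mm resolution from 2,500 km away, green light
d = aperture_for_resolution(550e-9, 2.5e6, 0.003)
print(round(d))  # ~559 m single aperture
```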
Thanks this comment is useful!
single aperture of ~500m
maintaining satellite relative positions to within a wavelength of light
There's no law of physics that prevents humanity from building either of these things. I'm just pessimistic about the engineering advancing to the point that we can build this in the next 10 years (without the help of superhuman intelligence, that is).
The constant angular field of view is the disagreement. A camera in the mid-gigapixel to low-terapixel range could cover one city by using an appropriate lens at an arbitrary distance (including space).
Any sensor finer than that would either cover substantial amounts of "boring" area (e.g. nature preserves, agricultural areas), or increase the resolution beyond your target.
I am still not clear where we are disagreeing, sorry.
What do you think is the bottleneck to building a petapixel camera that lets you do facial recognition from outside national borders? I don't think you can simply stitch a bunch of gigapixel cameras together and achieve this.
A camera that can do facial recognition from outside of national borders doesn't need to be a petapixel one. A mid-gigapixel camera with good optics can cover an entire city at once (or at least it could if it wasn't for all the buildings in the way).
The main barrier to petapixel cameras is that they don't serve your goal of full public monitoring (regardless of whether it's by the government or by everyone individually).
A camera that can do facial recognition from outside of national borders doesn't need to be a petapixel one. A mid-gigapixel camera with good optics can cover an entire city at once (or at least it could if it wasn't for all the buildings in the way).
This is technically true. But yes, if you had the tech to build this, it would also become trivial to build a petapixel camera (for someone who can afford it). The hard part is doing 0.1 metre resolution from 10,000 kilometres away.
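For comparison, plugging 0.1 m at 10,000 km into the same diffraction-limit arithmetic (Rayleigh criterion, λ ≈ 550 nm assumed):

```python
wavelength_m = 550e-9  # green light (assumed)
range_m = 10_000e3     # 10,000 km
resolution_m = 0.1     # 0.1 m ground resolution

# Rayleigh criterion: required aperture D = 1.22 * lambda * R / x
aperture_m = 1.22 * wavelength_m * range_m / resolution_m
print(round(aperture_m, 1))  # ~67.1 m
```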
Thanks for this exchange, by the way. I guess in future I could be more precise.
The main barrier to petapixel cameras is that they don't serve your goal of full public monitoring (regardless of whether it's by the government or by everyone individually).
Why?
Assume we had the tech to manufacture petapixel cameras, and individuals worldwide could purchase them (i.e. a govt couldn't just lock down the supply chain). Why does this not eventually lead to a world with zero privacy for everyone?
2026-03-01
Petapixel cameras won't exist soon