TLDR: Memory encryption alone is indeed not enough. Modifications and rollback must be prevented too.

  • memory encryption and authentication have come a long way
  • Unless there's a massive shift in ML architectures to doing lots of tiny reads/writes, overheads will be tiny. I'd guesstimate the following:
    • negligible performance drop / chip area increase
    • ~1% of DRAM and cache space[1] 

It's hard to build hardware or datacenters that resist sabotage if you don't do this. You end up having to trust that the maintenance people aren't messing with the equipment and that the factories haven't added any surprises to the PCBs. With the right security hardware, you trust TSMC and their immediate suppliers and no one else.

Not sure if we have the technical competence to pull it off. Apple is likely one of the few that's even close to secure, and it took them more than a decade of expensive lessons to get there. Still, we should put in the effort.

> in any case, overall I suspect the benefit-to-effort ratio is higher elsewhere. I would focus on making sure the AI isn't capable of reading its own RAM in the first place, and isn't trying to.

Agreed that alignment is going to be the harder problem. Considering the amount of fail when it comes to building correct security hardware that operates on known principles ... things aren't looking great.

</TLDR> The rest of this comment is just details.

Morphable Counters: Enabling Compact Integrity Trees For Low-Overhead Secure Memories

> Memory contents protected with MACs are still vulnerable to tampering through replay attacks. For example, an adversary can replace a tuple of { Data, MAC, Counter } in memory with older values without detection. Integrity-trees [7], [13], [20] prevent replay attacks using multiple levels of MACs in memory, with each level ensuring the integrity of the level below. Each level is smaller than the level below, with the root small enough to be securely stored on-chip.

[Improvement TLDR for this paper: they find a compression scheme for counters that increases the tree branching factor from 64 to 128 per level without increasing re-encryption on counter overflow.]
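To make the replay attack and the tree's fix concrete, here's a minimal two-level Python sketch (my illustrative structure, not the paper's actual design): per-block counters live in untrusted DRAM, but a MAC over them is anchored to an on-chip root counter, so restoring an old { Data, MAC, Counter } triple gets caught.

```python
# Minimal sketch of counter-based replay protection, assuming a
# two-level tree: per-block write counters, authenticated as a group
# against a root counter that lives on-chip (trusted).
import hmac, hashlib

KEY = b"\x00" * 32  # device key; in real hardware this never leaves the chip

def mac(*parts: bytes) -> bytes:
    return hmac.new(KEY, b"|".join(parts), hashlib.sha256).digest()[:8]

class ProtectedMemory:
    def __init__(self, n_blocks: int):
        self.data = [b""] * n_blocks        # in untrusted DRAM
        self.macs = [b""] * n_blocks        # in untrusted DRAM
        self.ctrs = [0] * n_blocks          # in untrusted DRAM
        self.root_ctr = 0                   # on-chip register (trusted)
        self.ctr_mac = self._ctr_mac()      # in untrusted DRAM

    def _ctr_mac(self) -> bytes:
        blob = b",".join(str(c).encode() for c in self.ctrs)
        return mac(b"ctrs", str(self.root_ctr).encode(), blob)

    def write(self, i: int, block: bytes):
        self.ctrs[i] += 1                   # bump leaf counter
        self.root_ctr += 1                  # bump trusted root
        self.data[i] = block
        self.macs[i] = mac(str(i).encode(), str(self.ctrs[i]).encode(), block)
        self.ctr_mac = self._ctr_mac()      # re-authenticate the counter array

    def read(self, i: int) -> bytes:
        # Rolling {data, MAC, ctr} back to an older triple fails here: the
        # restored counter array no longer matches the on-chip root_ctr.
        assert self.ctr_mac == self._ctr_mac(), "counter rollback detected"
        expect = mac(str(i).encode(), str(self.ctrs[i]).encode(), self.data[i])
        assert hmac.compare_digest(expect, self.macs[i]), "data tampered"
        return self.data[i]

m = ProtectedMemory(4)
m.write(0, b"model weights")
assert m.read(0) == b"model weights"
```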

Performance cost

Overheads are usually quite low for CPU workloads:

  • <1% extra DRAM required[1]
  • <<10% execution time increase

Executable code can be protected with negligible overhead by increasing the size of the rewritable authenticated block for a given counter to 4KB or more. Overhead is then comparable to that of the page table.
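Back-of-envelope check of that claim (my arithmetic): one 8-byte MAC per rewritable block, as a fraction of block size.

```python
# MAC storage overhead vs. authenticated block size.
for block in (64, 4096):
    print(f"{block:>5} B blocks: {8 / block:.3%} MAC overhead")
# 64 B blocks: 12.500%, 4096 B blocks: 0.195%
# ~0.2% is indeed comparable to page tables (one 8 B PTE per 4 KiB page).
```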


For typical ML workloads, the smallest data block is already 2x larger (GPU cache lines are 128 bytes vs. 64 bytes on CPUs), which halves the relative overhead. Access patterns should be friendly too: large contiguous reads/writes.

Only some unusual workloads (EG: large graph traversal/modification) see significant slowdown, but there it can be on the order of 3x.[2]

 

A real example (Intel SGX)

Use case: launch an application in a "secure enclave" so that the host operating system can't read or tamper with it.

It used an older memory protection scheme:

  • hash tree
  • Each 64-byte chunk of memory is protected by an 8-byte MAC
  • MACs are 8x smaller than the data they protect, so each tree level is 8x smaller
    • split-counter modes in the linked paper can do 128x per level (see the sketch after this list)
  • The memory encryption works.
    • If Intel SGX enclave memory is modified, this is detected and the whole CPU locks up.
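To see what the branching factor buys, a quick level-count comparison (my arithmetic; the 64 GiB protected region and 4 KB on-chip root are assumptions, not SGX's real parameters):

```python
# Each tree level shrinks by the arity until the root fits on-chip;
# fewer levels means fewer extra DRAM accesses on a counter-cache miss.
import math

def levels(mem_bytes: int, arity: int, on_chip: int = 4096) -> int:
    n, lv = mem_bytes, 0
    while n > on_chip:
        n = math.ceil(n / arity)
        lv += 1
    return lv

for arity in (8, 128):
    print(f"arity {arity:>3}: {levels(64 * 2**30, arity)} levels")
# arity   8: 8 levels (SGX-style MAC tree)
# arity 128: 4 levels (split-counter tree from the linked paper)
```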

SGX was not secure. The memory encryption/authentication is solid. The rest ... not so much. Wikipedia lists 8 separate vulnerabilities, including ones that allow leaking the remote attestation keys. That's before you get to attacks on other parts of the chip and its security software that allow dumping all the keys stored on-chip, allowing complete emulation.

AMD didn't do any better, of course: One Glitch to Rule Them All: Fault Injection Attacks Against AMD’s Secure Encrypted Virtualization

How low overheads are achieved

  • ECC (error correcting code) memory includes extra chips to store error-correction data. Repurposing this to store a MAC plus a 10-bit ECC gets rid of the extra data accesses. As long as we have the counters cached for that section of memory, there's no extra overhead. The tradeoff is going from correcting 2 flipped bits to correcting only 1.
  • DRAM internally reads/writes lots of data at once as part of a row. Standard DDR memory reads 8KB rows rather than just a 64B cache line. We can store the tree parts for a row inside that row to reduce read/write latency/cost, since row switches are expensive.
  • Memory that is rarely changed (EG: executable code) can be protected as a single large block. If we don't need to read/write individual 64-byte chunks, then a single 4KiB page can be the rewritable unit. Overhead is then negligible if you can store MACs in place of an ECC code.
  • Counters don't need to be encrypted, just authenticated. Verification can be parallelized if you have the memory bandwidth to do so.
    • We can also delay verification and assume data is fine; if verification later fails, the entire chip shuts down to prevent bad results from getting out (sketched below).
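A toy sketch of that last "use now, verify before egress" pattern (assumed structure; in real hardware this lives in the memory controller, and `dram`/`macs` here are illustrative stand-ins):

```python
# Lazy verification: reads return immediately, MAC checks happen in a
# batch, and a failure halts everything before any result leaves the chip.
import hmac, hashlib

KEY = b"\x01" * 32
dram = {0: b"weights", 1: b"activations"}  # untrusted memory
macs = {a: hmac.new(KEY, d, hashlib.sha256).digest()[:8] for a, d in dram.items()}

pending = []                               # used-but-unverified addresses

def speculative_read(addr: int) -> bytes:
    pending.append(addr)                   # don't stall on the tree walk
    return dram[addr]

def release_results(results):
    for addr in pending:                   # verify everything before egress
        tag = hmac.new(KEY, dram[addr], hashlib.sha256).digest()[:8]
        if not hmac.compare_digest(tag, macs[addr]):
            raise SystemExit("integrity failure: halt chip, emit nothing")
    pending.clear()
    return results                         # only now may data leave the chip

x = speculative_read(0)                    # compute proceeds immediately
print(release_results([x]))                # output is gated on verification
```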
  1. ^

    Technically we need +12.5% to store MAC tags. If we assume ECC (error correcting code) memory is in use, which already has +12.5% for ECC, we can store MAC tags + smaller ECC at the cost of 1 bit of error correction.

  2. ^

Random reads/writes bloat memory traffic by >3x since we need to deal with 2+ uncached tree levels. We can hide the latency by delaying verification of the higher tree levels and panicking if it fails before results can leave the chip (Intel SGX does exactly this). But if most traffic bloats, we bottleneck on memory bandwidth and performance drops a lot.

What you're describing above is how Bitlocker works on every modern Windows PC. The startup process involves a chain of trust, with each bootloader verifying the next thing to start and handing off keys until Windows starts. Crucially, the keys are different if you start something that's not Windows (IE: not signed by Microsoft). You can't just boot Linux and decrypt the drive, since different keys would be generated for Linux during boot, and those won't decrypt the drive.
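A minimal sketch of that chain of trust, assuming TPM-style measure-then-extend with a disk key derived from the final measurement (stage names are illustrative):

```python
# Each boot stage hashes the next before jumping to it; the disk key is
# derived from the accumulated measurement, so a different boot chain
# derives a different (useless) key.
import hashlib

def extend(measurement: bytes, component: bytes) -> bytes:
    # TPM PCR-extend style: order- and content-sensitive running hash
    return hashlib.sha256(measurement + hashlib.sha256(component).digest()).digest()

def boot_key(stages: list[bytes]) -> bytes:
    m = b"\x00" * 32
    for s in stages:
        m = extend(m, s)
    return hashlib.sha256(b"disk-key" + m).digest()  # KDF(final measurement)

windows = [b"firmware", b"ms-bootloader", b"winload"]
linux   = [b"firmware", b"grub", b"vmlinuz"]
assert boot_key(windows) != boot_key(linux)  # Linux can't derive the Bitlocker key
```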

Mobile devices and game consoles are even more locked down. If there's no bootloader unlock from your carrier or device manufacturer and no vulnerability (hardware or software) to be found, you're stuck with the stock OS. You can't boot something else because the chip will refuse to boot anything not signed by the OEM/carrier. You can't downgrade because fuses have been blown to enforce a minimum revision number. Nothing booted outside this chain will have access to the keys locked up inside the chip that are needed to decrypt things and do remote attestation.
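A sketch of those boot-ROM checks, with an HMAC standing in for the OEM's real asymmetric signature (all names and the fused version are illustrative):

```python
# Boot ROM only runs OEM-signed images whose version clears the fused
# minimum; fuses are one-way, so the minimum can only ever go up.
import hashlib, hmac

OEM_KEY = b"\x02" * 32      # stand-in; real chips verify an RSA/ECDSA signature
FUSED_MIN_VERSION = 7       # blown fuses: can only increase

def sign(image: bytes, version: int) -> bytes:
    return hmac.new(OEM_KEY, version.to_bytes(4, "big") + image,
                    hashlib.sha256).digest()

def boot_rom(image: bytes, version: int, sig: bytes):
    if not hmac.compare_digest(sig, sign(image, version)):
        raise SystemExit("not OEM-signed: refuse to boot")
    if version < FUSED_MIN_VERSION:
        raise SystemExit("downgrade attempt: refuse to boot")
    print("booting stock OS")

boot_rom(b"stock-os", 8, sign(b"stock-os", 8))    # boots
# boot_rom(b"custom-os", 8, b"\x00" * 32)         # refused: unsigned
# boot_rom(b"stock-os", 5, sign(b"stock-os", 5))  # refused: rollback
```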

 

Root isn't full control

Having root on a device isn't the silver bullet it once was. Security is still kind of terrible and people don't do enough to lock everything down properly, but the modern approach seems to be: 1) isolate security-critical properties/code; 2) put it in an armored, protected box somewhere inside the chip; 3) make sure you don't stuff so much crap inside the box that the attacker can compromise it via a bug anyway.

These boxes tend to fail in a few ways:

  1. Pointless isolation (EG: lock encryption keys inside a box but let the user ask it to encrypt/decrypt anything)
  2. the box isn't hack proof (examples below)
    1. they forget about a chip debug feature that lets you read/write security critical device memory
    2. you can mess with chip operating voltage/frequency to make security critical operations fail.
  3. They stuff too much stuff inside the box (EG: a full Java virtual machine and a webserver)

I expect ML accelerator security to be full of holes. They won't get it right, but it is possible in principle.

 

ML accelerator security wish-list

As for what we might want for an ML accelerator:

  • Separating out enforcement of security critical properties:
    • restrict communications
      • supervisory code configures/approves communication channels such that all data in/out must be encrypted
      • There's still steganography. With complete control over code that can read the weights or activations, an attacker can hide data in message timing, for example
      • Still, protecting weights/activations in transit is a good start.
    • can enforce "who can talk to who"
      • Example: ensure inference outputs must go through a supervision model that OKs them as safe.
        • Supervision inference chip can then send data to API server then to customer
        • Inference chip literally can't send data anywhere but the supervision model
    • can enforce network isolation for dangerous experimental system (though you really should be using airgaps for that)
  • Enforce code signing like Apple does. Current-gen GPUs support virtualisation and user/kernel modes. Control what code has read/write access to what data. Code should not have write access to itself; attackers would then have to find return-oriented-programming attacks or similar that work on GPU shader code. This makes life harder for the attacker.
    • Apple does this on newer SOCs to prevent execution of non-signed code.

Doing that would help a lot. Not sure how well it plays with InfiniBand/NVLink networking, but that can be encrypted too in principle. If a virtual memory system is implemented, it's not that hard to add a field to the page table for an encryption key index.
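A sketch of that page-table extension, assuming a per-PTE key-index field (hypothetical layout, not any real GPU's):

```python
# Each PTE carries a key index; the memory path encrypts/decrypts with
# that hardware-held key, so code mapped without a given index simply
# can't read those pages in the clear.
from dataclasses import dataclass

@dataclass
class PTE:
    frame: int
    writable: bool
    key_index: int   # new field: selects a hardware-held AES key

page_table = {
    0x1000: PTE(frame=7, writable=False, key_index=1),  # model weights
    0x2000: PTE(frame=9, writable=True,  key_index=2),  # activations
}

def translate(vaddr: int, write: bool, allowed_keys: set[int]) -> int:
    pte = page_table[vaddr & ~0xFFF]
    if write and not pte.writable:
        raise PermissionError("code has no write access to itself")
    if pte.key_index not in allowed_keys:
        raise PermissionError("no key: page decrypts to noise")
    return (pte.frame << 12) | (vaddr & 0xFFF)

translate(0x1234, write=False, allowed_keys={1})    # OK: weights readable
# translate(0x1234, write=True, allowed_keys={1})   # PermissionError
# translate(0x2400, write=False, allowed_keys={1})  # PermissionError: no key 2
```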

Boundaries of the trusted system

You'll need to manage keys and securely communicate with all the accelerator chips. This likely involves hardware security modules that are extra super duper secure. Decryption keys, both for the data a chip must work on and for inter-chip communication, are sent securely to individual accelerator chips, similar to how keys are sent to cable TV boxes to allow them to decrypt the programs they've paid for.

This is how you actually enforce access control.

If a message is not for you, or you aren't supposed to read/write that part of the distributed virtual memory space, you don't get the keys to decrypt it. Simple and effective.

"You" the running code never touch the keys. The supervisory code doesn't touch the keys. Specialised crypto hardware unwraps the key and then uses it for (en/de)cryption without any software in the chip ever having access to it.

Hardware encryption likely means dedicated on-chip hardware that handles keys and decrypts weights and activations on the fly.
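A sketch of that isolation boundary, assuming a fused device key and an unwrap-then-use-only API (the "ciphers" below are toy stand-ins, not real key wrap or a real cipher mode):

```python
# The crypto block holds the device key privately and exposes only
# unwrap-then-use operations; no API ever returns a raw key to software.
import hashlib, hmac

class CryptoEngine:
    def __init__(self):
        self.__device_key = b"\x03" * 32   # fused at manufacture; no getter

    def _unwrap(self, wrapped: bytes) -> bytes:
        # stand-in for AES key unwrap under the device key
        return hmac.new(self.__device_key, wrapped, hashlib.sha256).digest()

    def crypt(self, wrapped_key: bytes, nonce: bytes, data: bytes) -> bytes:
        key = self._unwrap(wrapped_key)    # plaintext key exists only here
        stream = hashlib.sha256(key + nonce).digest()
        # toy XOR stream (symmetric, messages up to 32 bytes only)
        return bytes(c ^ s for c, s in zip(data, stream))

engine = CryptoEngine()
ct = engine.crypt(b"opaque-wrapped-key-blob", b"nonce0", b"secret weights")
assert engine.crypt(b"opaque-wrapped-key-blob", b"nonce0", ct) == b"secret weights"
# software hands over wrapped keys and ciphertext and gets data back;
# the unwrapped key itself is never observable.
```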

The hardware/software divide here is likely a bit fuzzy but having dedicated hardware or a separate on-chip core makes it easier to isolate and accelerate the security critical operations. If security costs too much performance, people will be tempted to turn it off.

Encrypting data in motion and data at rest (in GPU memory) makes sense since this minimizes trust. An attacker with hardware access will have a hard time getting weights and activations unless they can get data directly off the chip.

Many-key signoff is nuclear-launch-style security where multiple keyholders must use their keys to approve an action. The idea is that a single rogue employee can't do something bad, like copy model weights to an internet server, change inference code to add a side channel that leaks model weights, or sabotage inference misuse prevention/monitoring.

This is commonly done in high-security fields like banking, where several employees hold key shares that must be used together to sign code for deployment on hardware security modules.
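A sketch of k-of-n key shares using textbook Shamir secret sharing (parameters illustrative; real deployments would sign with the recovered key inside an HSM, never in software):

```python
# Split a signing key into n shares so any k of them recover it,
# while k-1 shares reveal essentially nothing.
import random

P = 2**127 - 1  # a Mersenne prime; fine as a demo field

def split(secret: int, k: int, n: int):
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def combine(shares):  # Lagrange interpolation at x = 0
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

signing_key = random.randrange(P)
shares = split(signing_key, k=3, n=5)      # any 3 of 5 employees suffice
assert combine(shares[:3]) == signing_key  # quorum recovers the key
assert combine(shares[:2]) != signing_key  # (w.h.p.) two insiders get nothing
```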

Vulnerable world hypothesis (but takeover risk rather than destruction risk). That plus first-mover advantage could stop things pretty decisively, without requiring ASI alignment.

As an example, taking over most networked computing devices seems feasible in principle with thousands of +2SD AI programmers/security-researchers. That requires an AlphaGo-level breakthrough for RL as applied to LLM programmer-agents.

One especially low-risk/complexity option is a stealthy takeover of other AI labs' compute, then faking another AI winter. This might get you most of the compute and impact you care about without actively pissing off everyone.

If you're more confident in jailbreak prevention and software hardening, secrecy is less important.

First mover advantage depends on ability to fix vulnerabilities and harden infrastructure to prevent a second group from taking over. To the extent AI is required for management, jailbreak prevention/mitigation will also be needed.

Slower is better, obviously, but as to the inevitability of ASI, I think reaching 99th-percentile human capabilities in a handful of domains is enough to stop the current race. Getting there is probably not too dangerous.

Current ATGMs poke a hole in armor with a very fast jet of metal (1-10 km/s). Kinetic penetrators do something similar using a tank gun rather than specially shaped explosives.

"Poke hole through armor" is the approach used by almost every weapon. A small hole is the most efficient way to get to the squishy insides. Cutting a slot would take more energy. Blunt impact only works on flimsy squishy things. A solid shell of armor easily stopped thrown rocks in antiquity. Explosive over-pressure is similarly obsolete against armored targets.

TLDR:"poke hole then destroy squishy insides" is the only efficient strategy against armor.

Modern vehicles/military stuff are armored shells protecting air+critical_bits+people

Eliminate the people and the critical bits can be compacted. The same-sized vehicle can afford to split critical systems into smaller distributed modules.

Now the enemy has to make a lot more holes and doesn't know where to put them to hit anything important.

This massively changes the offense/defense balance, I'd guess by a factor of >10. Batteries have absurd power densities, so taking out 75% of a vehicle's batteries just reduces endurance. The only way to get a mobility kill is to take out the wheels.

There are still design challenges:

  • how to avoid ammo cook-off and chain reactions.
  • misdirection around wheel motors (improves tradeoffs)
  • efficient manufacturing
  • comms/radar (antenna has to be mounted externally)

Zerg rush

Quantity has a quality of its own. Military vehicles are created by the thousands, cars by the millions. Probably something similarly sized or a bit smaller, powered by an internal combustion engine and mass-produced, would be the best next-gen option.

EMP mostly affects the power grid because power lines act like big antennas. Small digital devices are built to keep internal RF-like signals from leaking out (thanks again, FCC), so EMP doesn't leak in very well either. DIY crud can be built badly enough to be vulnerable, but basically: run wires together in bundles out from the middle, avoid loops, and there are no problems.

The only semi-vulnerable point is communications, because radios are connected to antennas.

The best option for frying radios isn't EMP but sending a high-power radio signal at whatever frequency the antenna best receives.

An RF receiver can be damaged by high-power input, but circuitry can be added to block/shunt high-power signals. Antennas that both receive and transmit (especially high-power transmit) may already be protected for free by the "switch" that connects the RX and TX paths. Parts cost for a retrofit would be pretty minimal, though very high frequencies or tight integration can make retrofitting impractical: you can't add extra protection to a phased-array antenna like a Starlink dish, but it can definitely be built in.

Also, front-line units whose enemy-facing radios are being fried are likely soon to be scrap themselves (hopefully along with the thing doing the frying).


RF jamming, communication and other concerns

TLDR: Jamming is hard when the comms system is designed to resist it. Civilian stuff isn't, but military gear is and can be quite resistant. Frequency hopping makes jamming ineffective if you don't care about stealth. Phased-array antennas are getting cheaper and make things stealthier by increasing directivity (a Starlink terminal costs $1300 and has ~40 dBi gain). Very expensive comms systems on fighter jets, using mm-wave links and phased-array antennas, can do gigabit+ links undetected in the presence of jamming.

Civilian stuff is trivial to jam.

Fundamentals of Jamming radio signals (doesn't favor jamming)

  • A jammer fills a big chunk of radio spectrum with some number of watts/MHz of noise
  • EG: the Russian R-330ZH puts out 10 kW from 100 MHz to 2 GHz (approx. 5 kW/GHz or 5 W/MHz)
    • more than enough to drown out civvy comms like WiFi, which use a <<1 W signal spread over 10-100 MHz of bandwidth, even on a short link far away from the jammer
    • Comms designed to resist jamming can use 10 W+ and narrow the transmission bandwidth as much as needed, at the cost of fewer bits/second
      • a low-bandwidth link (100 kb/s) with a reasonable power budget is practically impossible to jam until the jammer is much, much closer to the receiver than the transmitter is (worked numbers after this list)
  • GPS and satcom signals are easy to jam because of the large distance to the satellite and satellite power limits
  • Jamming increases the power density required to get a signal through intelligibly. The transmitter has to increase power or use a narrower transmit spectrum. Fundamentally, signal-to-noise ratio decreases and joules/bit increases.
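Worked numbers for the list above, under free-space assumptions with isotropic antennas (distances and the mid-band frequency are illustrative choices, matching the R-330ZH-style figures):

```python
# Signal-to-jammer ratio for a narrowband link vs. a broadband noise jammer.
import math

def rx_power(tx_w: float, d_m: float, f_hz: float) -> float:
    # free-space path loss, isotropic antennas on both ends
    return tx_w / (4 * math.pi * d_m * f_hz / 3e8) ** 2

f = 450e6                              # mid-band frequency
link_bw = 100e3                        # narrowed 100 kb/s-class link
jam_in_band = (5.0 / 1e6) * link_bw    # 5 W/MHz of noise -> 0.5 W lands in-band

S = rx_power(10.0, 2_000, f)           # 10 W transmitter, 2 km away
J = rx_power(jam_in_band, 20_000, f)   # jammer 20 km away
print(f"S/J = {10 * math.log10(S / J):.1f} dB")
# ~ +33 dB: the narrowband link shrugs this off; the jammer only wins
# once it gets much closer to the receiver than the transmitter is.
```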

Communication Stealth

directional antennas/phased arrays

  • military planes use this to communicate stealthily
  • increases power sent/received to/from particular direction
  • bigger antenna with more sub-elements increases directionality/gain
  • Starlink terminals are big phased array antennas
    • this quora answer gives some good numbers on performance
      • a Starlink terminal gives approx. 3000x (35 dBi) more power in the chosen direction vs. an omnidirectional antenna (quick check after this list)
      • necessary to communicate with a satellite 500+ km away
    • Starlink terminals are pretty cheap
      • smaller phased arrays for drone-drone comms should be cheaper.
    • a drone that is just a big Yagi antenna is also possible and ludicrously cheap.
    • stealthy/jam-immune line-of-sight data links at km ranges seem quite practical.
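Quick sanity check of the quoted gain figure (dBi to linear):

```python
# 35 dBi should be roughly the "3000x" quoted above.
print(10 ** (35 / 10))   # ~3162x over an isotropic antenna
# The same directivity attenuates off-axis jammers and eavesdroppers by
# a comparable factor, which is where the stealth benefit comes from.
```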

development pressure for jam resistant comms and associated tech

  • little development pressure on the civvy side because the FCC and similar government orgs abroad shut down jammers
  • military and satcom will drive development more slowly
  • FCC limits on transmit power can also help
    • Phased array transmit/receive improves signal/noise
    • This is partly driving wifi to use more antennas to improve bandwidth/reliability
    • hobbyist drone scene could also help (directional antennas for ground to drone comms without requiring more power or gimbals)

Self-driving cars have to be (almost) perfectly reliable and never have an at-fault accident.

Meanwhile, cluster munitions are being banned because submunitions can have 2-30% failure rates, leaving unexploded ordnance everywhere.

In some cases avoiding civvy casualties may be a similar barrier, since reliably distinguishing civvy from enemy is hard, but militaries are pretty tolerant of collateral damage. Significant failure rates are tolerable as long as there are no exploitable weaknesses.

Distributed positioning systems

Time-of-flight distance determination is in some newer WiFi chips/standards for indoor positioning.

Time of flight across a swarm of drones gives drone-to-drone distances, which is enough to build a very robust distributed positioning system. Absolute positioning can come from other sensors: cameras, phased-array GPS receivers, ground drones, or whatever else is convenient.
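A sketch of recovering a position from those distances via linearized least squares against a few already-located nodes (illustrative setup, assuming NumPy):

```python
# Multilateration: solve for an unknown position from noisy ranges to
# nodes whose positions are already known.
import numpy as np

rng = np.random.default_rng(0)
anchors = np.array([[0., 0.], [100., 0.], [0., 100.], [100., 100.]])
true_pos = np.array([37., 61.])
d = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.3, 4)

# Subtract the first range equation from the others to linearize:
# 2*(a_i - a_0) . p = |a_i|^2 - |a_0|^2 - (d_i^2 - d_0^2)
A = 2 * (anchors[1:] - anchors[0])
b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2)
     - (d[1:] ** 2 - d[0] ** 2))
pos, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pos)   # ~[37, 61] despite ~0.3 m of range noise
```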

Overhead is negligible because the military would use symmetric cryptography. A message authentication code can be n bits for a 2^-n chance of forgery. 48-96 bits is likely the sweet spot, and that barely doubles the size of even tiny messages.

Elliptic-curve crypto is there if for some reason key distribution is a terrible burden. Typical ECC signatures are 64 bytes (512 bits), but 48 bytes is easy and 32 bytes is possible with pairing-based ECC. If signature size is an issue, use asymmetric crypto to negotiate a symmetric key, then use symmetric crypto for further messages with tight timing limits.
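A minimal sketch of the truncated-MAC tradeoff (tag length and message are illustrative):

```python
# An n-bit truncated MAC gives a ~2^-n forgery chance per attempt,
# so a 64-bit tag barely doubles even a tiny message's size.
import hmac, hashlib

def tag(key: bytes, msg: bytes, bits: int = 64) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()[: bits // 8]

key = b"\x04" * 32                    # pre-shared symmetric key
msg = b"goto 37,61"                   # a tiny 10-byte command
t = tag(key, msg)                     # 8-byte tag appended on the wire
assert hmac.compare_digest(t, tag(key, msg))
# forging without the key means guessing 64 bits: ~2^-64 per attempt
```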
