I was asked to respond to this comment by Eliezer Yudkowsky. This post is partly redundant with my previous post.


Why is flesh weaker than diamond?

When trying to resolve disagreements, I find that precision is important. Tensile strength, compressive strength, and impact strength are different. Material microstructure matters. Poorly-sintered diamond crystals could crumble like sand, and a large diamond crystal has lower impact strength than some materials made of proteins.

Even when the load-bearing forces holding large molecular systems together are locally covalent bonds, as in lignin (what makes wood strong), if you've got larger molecules only held together by covalent bonds at interspersed points along their edges, that's like having 10cm-diameter steel beams held together by 1cm welds.

lignin (what makes wood strong)

That's an odd way of putting things. The mechanical strength of wood is generally considered to come from it acting as a composite of cellulose fibers in a lignin matrix, though that's obviously a simplification.

If Yudkowsky meant "cellulose fibers" instead of "lignin", then yes, force transfer between cellulose fibers passes through non-covalent interactions, but because fibers have a large surface area relative to their cross-sectional area, those non-covalent interactions collectively provide enough strength. The same is true of modern composites, such as carbon fibers in an epoxy matrix. Also, there generally are some covalent bonds between cellulose, lignin, and hemicellulose.
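
To make the geometry concrete: for a long thin fiber, the side area available for weak matrix bonding is vastly larger than the cross-section that carries the tensile load. Here's a minimal sketch using assumed, order-of-magnitude dimensions for a cellulose microfibril (illustrative numbers only, not figures from any source quoted here):

```python
import math

# Assumed, order-of-magnitude dimensions for a cellulose microfibril.
d = 3e-9    # fiber diameter, m (~3 nm)
L = 2e-6    # fiber length, m (~2 microns)

cross_section = math.pi * d**2 / 4      # area that carries the tensile load
lateral_surface = math.pi * d * L       # area available for non-covalent bonding to the matrix

ratio = lateral_surface / cross_section  # simplifies to 4*L/d
print(f"bonding area / load-bearing area ~ {ratio:.0f}")  # ~2700

# Even an interface far weaker per unit area than the covalent backbone
# can therefore transfer the fiber's full tensile load.
```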

Bone is stronger than wood; it runs on a relatively stronger structure of ionic bonds

Bone has lower tensile strength than many woods, but has higher compressive strength than wood. Also, they're both partly air or water. Per dry mass, I'd say their strengths are similar.

Saying bone is stronger than wood because "it runs on a relatively stronger structure of ionic bonds" indicates to me that Yudkowsky has some fundamental misunderstandings about material science. It's a non sequitur that I don't know how to engage with. (What determines the mechanical strength of a bond is the derivative of energy with respect to bond length.)
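
As an illustration of that parenthetical, here's a minimal sketch using a Morse potential with assumed, roughly textbook C-C parameters: the force needed to break a bond is the steepest slope of its energy-vs-length curve (analytically D_e*a/2 for a Morse bond), not the depth of the energy well.

```python
import numpy as np

# Assumed, illustrative C-C parameters (roughly textbook values).
D_e = 348e3 / 6.022e23   # bond dissociation energy, J per bond (~348 kJ/mol)
a   = 2.0e10             # Morse width parameter, 1/m (~2 per angstrom)
r_e = 1.54e-10           # equilibrium bond length, m

r = np.linspace(r_e, r_e + 5e-10, 10_000)
E = D_e * (1.0 - np.exp(-a * (r - r_e)))**2   # Morse potential energy vs. length
F = np.gradient(E, r)                         # slope = force resisting stretching

# The mechanical "strength" of the bond is the maximum slope, not the well depth D_e.
print(f"max restoring force ~ {F.max() * 1e9:.1f} nN")  # ~5.8 nN (= D_e*a/2 analytically)
```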

But mainly, bone is so much weaker than diamond (on my understanding) because the carbon bonds in diamond have a regular crystal structure that locks the carbon atoms into relative angles, and in a solid diamond this crystal structure is tesselated globally.

This seems confused, conflating molecular strength and the strength of macroscopic materials. Yes, perfect diamond crystals have higher theoretical strength than perfect apatite crystals, but that's almost irrelevant. The theoretical ideal strength of most crystals is much greater than that of macroscopic materials. In practice, composites are used when high-strength materials are needed, with strong fibers embedded in a more-flexible matrix that distributes load between fibers. (Crystals also have low toughness, because they can fracture along smooth planes, which requires less energy than complex fractures.)

But then, why don't diamond bones exist already? Not just for the added strength; why make the organism look for calcium and phosphorus instead of just carbon?

The search process of evolutionary biology is not the search of engineering; natural selection can only access designs via pathways of incremental mutations that are locally advantageous, not intelligently designed simultaneous changes that compensate for each other.

Growth or removal of diamond requires highly-reactive intermediates. Production of those intermediates requires extreme conditions which require macroscopic containment, so they cannot be produced by microscopic systems. Calcium phosphate, unlike diamond, can be made from ions that dissolve in water and can be transported by proteins. That is why bones are made with calcium phosphate instead of diamond. The implication that lack of diamond bones in animals is a failure of the evolutionary search process is very wrong.

There were, last time I checked, only three known cases where evolutionary biology invented the freely rotating wheel. Two of those known cases are ATP synthase and the bacterial flagellum, which demonstrates that freely rotating wheels are in fact incredibly useful in biology, and are conserved when biology stumbles across them after a few hundred million years of search. But there's no use for a freely rotating wheel without a bearing and there's no use for a bearing without a freely rotating wheel, and a simultaneous dependency like that is a huge obstacle to biology, even though it's a hardly noticeable obstacle to intelligent engineering.

This conflates microscopic and macroscopic wheels, which should be considered separately.

Wikipedia has a whole page on this topic, which is a decent starting point.

For macroscopic rotation:

  • Blood vessels cannot rotate continuously, so nutrients cannot be provided to the rotating element to grow it.
  • Without smooth surfaces to roll on, rolling is not better than walking.

On a microscopic scale, rotating proteins are floating in cytosol, so connecting them to resources isn't a problem. Evolution seems perfectly able to make variants of them; there are several types of ATPase and flagellar motors. ATPase may:

  • use H+ ions, or Na+ ions
  • attach to different membranes
  • mainly produce or consume ATP

Flagellar motors can often reverse direction for run-and-tumble motion, and there are quite a few variants.

The reason you don't see many rotating elements in proteins is...they're just not normally very useful. Oscillating protein conformations are generally both smaller and easier to evolve, so you see lots of ping-pong mechanisms and few rotational ones. In the cases where rotation is useful, you see rotation:

  • Flagellar motors are used because rotation of flagella is useful for propulsion.
  • For ATP synthase, rotation is good because it allows coupling an ion gradient to a reaction with a variable ratio (of ion transports to ATP) as long as the overall free-energy change is negative (a numeric sketch follows this list).
  • Unwinding DNA involves rotation, so DNA helicases can rotate.
  • RNA polymerase has a rotational nanomotor.
  • TrwB uses rotation to move DNA across membranes.
  • The bacterial Rho factor is basically an RNA helicase that causes transcription termination.
  • RecA family proteins use rotary nanomotors for DNA repair and recombination.
  • Many viruses use molecular motors to package their genome into procapsids. It was previously believed that eg the phi29 motor and T4 DNA packaging motor rotate, but they're now understood to use DNA revolution around a (slightly larger) channel without rotation.
  • etc
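
To illustrate the variable-ratio point from the ATP synthase bullet above, here's a minimal sketch with assumed round numbers for the proton-motive force and the cost of ATP synthesis (illustrative values, not measurements from this post):

```python
# Assumed round numbers; the point is the sign of the total, not the exact values.
F_CONST = 96485.0                  # Faraday constant, C/mol
pmf = 0.18                         # proton-motive force, V (~180 mV)
dG_per_H = -F_CONST * pmf / 1000   # kJ/mol released per H+ crossing the membrane (~ -17)
dG_ATP = 50.0                      # kJ/mol required to make ATP under cellular conditions

for n in range(2, 6):              # H+ translocated per ATP (set by c-ring size / 3)
    total = n * dG_per_H + dG_ATP
    verdict = "synthesis runs" if total < 0 else "synthesis stalls"
    print(f"{n} H+/ATP: total dG = {total:+.0f} kJ/mol -> {verdict}")

# With these numbers, roughly 3 or more H+ per ATP are needed, which is in the
# range of observed c-ring stoichiometries.
```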

How much evolutionary advantage is there to stronger bone, if what fails first is torn muscle?

Bones with higher specific strength could have the same strength with less mass, which would be a nontrivial evolutionary advantage.

(Analogously, the collection of faults that add up to "old age" is large enough that a little more age resistance in one place is not much of an advantage if other aging systems or outward accidents will soon kill you anyways.)

I have the impression I disagree with Yudkowsky about the main causes of aging. See eg my post on Alzheimer's.

I don't even think we have much of a reason to believe that it'd be physically (rather than informationally) difficult to have a set of enzymes that synthesize diamond.

That is very wrong. Diamond is hard to make with enzymes because they can't stabilize intermediates for adding carbons to diamond.

It could just require 3 things to go right simultaneously, and so be much much harder to stumble across than tossing more hydroxyapatite to lock into place in a bone crystal. And then even if somehow evolution hit on the right set of 3 simultaneous mutations, sometime over the history of Earth, the resulting little isolated chunk of diamond

"3 mutations" is not particularly difficult relative to some things that have evolved. It's also not sufficient for making diamond.

Talking to the general public is hard.

I think I feel your pain right now.

Then why aren't these machines strong like human machines of steel are strong? Because iron atoms are stronger than carbon atoms? Actually no, diamond is made of carbon and that's still quite strong. The reason is that these tiny systems of machinery are held together (at the weakest joints, not the strongest joints!) by static cling.

Where strength was important, strong materials have evolved. Spider silk aligns covalent bonds along its length, and can have higher tensile strength per volume than steel. Cellulose fibers are strong; wood is fairly strong even though they're only a fraction of its volume. The actual structural elements of wood can have better tensile strength per mass than steel.

And then the deeper question: Why does evolution build that way? And the deeper answer: Because everything evolution builds is arrived at as an error, a mutation, from something else that it builds.

Per the above, evolution has done better than Yudkowsky seems to think.

If somebody says "Okay, fine, you've validly explained why flesh is weaker than diamond, but why is bone weaker than diamond?" I have to reply "Valid, iiuc that's legit more about irregularity and fault lines and interlaced weaker superstructure and local deformation resistance of the bonds, rather than the raw potential energy deltas of the load-bearing welds."

Again, Yudkowsky is conflating mechanical strength at different scales, and the strength of composites with pure materials. He keeps putting forth diamond as an ultimate material, but the measured tensile strength of 6 mm wide, 0.25 mm thick CVD diamond films ranged from 230 to 410 MPa. Larger composites of those would obviously be significantly weaker. Larger diamond crystals would also be weaker, closer in strength to the worst samples than the best ones. Carbon fiber has better, cheaper, and more-reliable strength than CVD diamond. Large diamonds are not even necessarily stronger than strong wood in terms of tensile strength per mass.
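
To put the per-mass comparison in rough numbers: the CVD diamond figures below are the ones quoted above, while the wood and carbon-fiber values are assumed, representative numbers, so treat this as an order-of-magnitude sketch only.

```python
# Tensile strength (MPa) and density (g/cm^3). The CVD film range comes from the
# figures quoted above; the wood and carbon-fiber rows are assumed, representative
# values for illustration.
materials = {
    "CVD diamond film (low)":   (230.0, 3.5),
    "CVD diamond film (high)":  (410.0, 3.5),
    "strong wood, along grain": (120.0, 0.7),   # assumed
    "carbon fiber composite":   (1500.0, 1.6),  # assumed
}

for name, (strength, density) in materials.items():
    specific = strength / density   # MPa per (g/cm^3), i.e. kN*m/kg
    print(f"{name:25s} ~ {specific:5.0f} MPa/(g/cm^3)")
```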


I've only skimmed the post, but I've strong-upvoted it for the civil tone. Some EY-critical posts here are written in such an inflammatory manner that they put me off reading them, and even make me suspicious of the epistemics that produced this criticism. In contrast, I really appreciate the ability to write about strong factual disagreements without devolving into name-calling.

Also strong upvoted, but mainly because it seems far more correct than the post by titotal.

In general, the factors that govern the macroscopic strength of materials can often have surprisingly little to do with the strength of the bonds holding them together. A big part of a material's tensile strength is down to whether it forms cracks and how those cracks propagate. I predict many LWers would enjoy reading The New Science of Strong Materials which is an excellent introduction to materials science and its history. (Cellulose is mentioned, and the most frequent complaint about it as an engineering material lies in its tendency to absorb water.)

It's actually not clear to me why Yudkowsky thinks that ridiculously high macroscopic physical strength is so important for establishing an independent nanotech economy. Is he imagining that trees will be out-competed by solar collectors rising up on stalks of diamond taller than the tallest tree trunk? But the trees themselves can be consumed for energy, and to achieve this, nanobots need only reach them on the ground. Once the forest has been eaten, a solar collector lying flat on the ground works just as well. One legitimate application for covalently bonded structures is operating at very high temperatures, which would cause ordinary proteins to denature. In those cases the actual strength of the individual bonds does matter more.

It's actually not clear to me why Yudkowsky thinks that ridiculously high macroscopic physical strength is so important for establishing an independent nanotech economy.

Why do you think Yudkowsky thinks this? To me this whole conversation about material strength is a tangent from the claim that Drexlerian nanotech designed by a superintelligence could do various things way more impressive than biology.

To me this whole conversation about material strength is a tangent from the claim that Drexlerian nanotech designed by a superintelligence could do various things way more impressive than biology.

I think this interpretation is incomplete. Being able to build a material that's much stronger than biological materials would be impressive in an absolute sense, but it doesn't imply that you can easily kill everyone. Humans can already build strong materials, but that doesn't mean we can presently build super-weapons in the sense Yudkowsky describes.

A technology being "way more impressive than biology" can either be interpreted weakly as "impressive because it does something interesting that biology can't do" or more strongly as "impressive because it completely dominates biology on the relevant axes that allow you to easily kill everyone in the world." I think the second interpretation is supported by his quote that,

It should not be very hard for a superintelligence to repurpose ribosomes to build better, more strongly bonded, more energy-dense tiny things that can then have a quite easy time killing everyone.

A single generation difference in military technology is an overwhelming advantage. The Lockheed Martin F-35 Lightning II (JSF) cannot be missile-locked by an adversary beyond 20-30 miles. Conversely, it can see and weapon-lock an opposing 4th-generation fighter from over 70 miles and fire a beyond-visual-range missile that is almost impossible for a manned fighter to evade.

In realistic scenarios with adequate preparation and competent deployment, a generation difference in aircraft can lead to 20:1 kill/death ratios. 5th generation fighters are much better than 4th generation fighters, which are much better than 3rd generation fighters, etc. Same for tanks, ships, artillery, etc. This difference is primarily technological.

It is not at all unlikely that a machine superintelligence could rapidly design new materials, artificial organisms, and military technologies vastly better than those constructed by humans today. These could indeed be said to form superweapons.

The idea that AI-designed nanomachines will outcompete bacteria and consume the world in a grey goo swarm may seem fanciful, but that's not at all evidence that it isn't in the cards. Now, there are goodish technical arguments that bacteria are already at various thermodynamic limits. As bhauth notes, it seems that Yudkowsky underrates the ability of evolution-by-natural-selection to find highly optimal structures.

However, I don't see this as enough evidence to rule out grey goo scenarios. Being somewhere at a Pareto optimum doesn't mean you can't be outcompeted. Evolution is much more efficient than it is sometimes given credit for, but it still seems to miss obvious improvements.

Of course, nanotech is likely a superweapon even without grey goo scenarios, so this is only a possible extreme. And finally, of course, (a) mechanical superintelligence(s) possess many advantages over biological humans, any of which may prove more relevant for a takeover scenario in the short term.

Yes! The New Science of Strong Materials is a marvelous book, highly recommended. It explains simply and in detail why most materials are at least an order of magnitude weaker than you'd expect if you calculated their strength theoretically from bond strengths: it's all about how cracks or dislocations propagate.

However, Eliezer is talking about nano-tech. Using nanotech, by adding the right microstructure at the right scale, you can make a composite material that actually approaches the theoretical strength (as evolution did for spider silk), and at that point, bond strength does become the limiting factor.

On why this matters, physical strength is pretty important for things like combat, or challenging pieces of engineering like flight or reaching orbit. Nano-engineered carbon-carbon composites with a good fraction of the naively-calculated strength of (and a lot more toughness than) diamond would be very impressive in military or aerospace applications. You'd have to ask Eliezer, but I suspect the point he's trying to make is that if a human soldier were fighting a nano-tech-engineered AI infantry bot made out of these sorts of materials, the bot would win easily.

EGI:

This post is very well written and addresses most of the misunderstandings in Yudkowsky's biomaterial post. Thanks for that.

There is one point where I would disagree with you but you seem to know more about the topic than I do so I'm going to ask: Why exactly do you think diamond is so hard to synthesise via enzyme? I mean it is obvious that an enzyme cannot use the same path we currently use for diamond synthesis, but the formation of C-C bonds is quite ubiquitous in enzyme catalysed reactions (E.g. fatty acid synthesis). So I could easily imagine repeated dehydrogenation and carbon addition leading to a growing diamond structure. Of course with functional groups remaining on the surface. What makes you think any such path must fail? (That this would not be very useful, very energy intensive and a difficult to evolve multi step process is quite clear to me and not my question.)

bhauth:

Controlled C-C bond formation in mild conditions is always enabled by nearby functional groups. In cells, the most important mechanism is the aldol reaction. Nearby functional groups can stabilize the intermediates involved.

Chemists consider C-C bond formation to be, in general, one of the most important and difficult reaction types, and have extensively considered every possible way of doing it. Here are some C-C coupling reactions. The options here are limited and it's unlikely they're just overlooking something very easy.

Also note that coupling a 4th non-hydrogen atom to carbon is especially hard. Many C-C coupling reactions involve H moving and temporary double bond formation, and if there are 3 bonds that can't move, you can't form a double bond. So diamond formation requires radicals or something similar, and those are always high-energy.

I'm not a chemist, but cross-dehydrogenative coupling (https://en.wikipedia.org/wiki/Cross_dehydrogenative_coupling) looks like the most plausible approach, given the shortage of space. But as you say, that tends to require very strong oxidizing agents.

EGI:

Just wanted to say the same. Though with the diamond occluding more than a hemisphere, getting all the machinery in place to provide both the substrate and oxidation at the same time will run into severe steric problems.

Strong oxidation per se is quite possible if you look at e.g. cytochrome P450.

EGI:

I failed to properly consider the 4th carbon problem. So you are right: between the steric problems I mentioned with Roger and the stabilisation of the intermediate, it is VERY hard to do with enzymes. I can think of a few routes that may be possible, but they all have problems. Besides the CDC approach, another good candidate might be oxidation of a C-H or C-OH to a temporary carbocation with subsequent addition of a nucleophilic substrate. Generating and stabilizing the carbocation will of course be very hard.

That is very wrong. Diamond is hard to make with enzymes because they can't stabilize intermediates for adding carbons to diamond.

 

As a biochemist, I agree.

roha:

Meta-questions: How relevant are nanotechnological considerations for x-risk from AI? How suited are scenarios involving nanotech for making a plausible argument for x-risk from AI, i.e. one that convinces people to take the risk seriously and to become active in attempting to reduce it?

gilch:

The AI x-risk thesis doesn't require nanotech. Dangerously competent AIs are not going to openly betray us until they think they can win, which means, at minimum, they don't need us to maintain the compute infrastructure they'd need to stay alive. Currently, AI chips do require our globalized economy to produce.

AI takeover is a highly disjunctive claim; there are a lot of different ways it could happen, but the thesis only requires one. We could imagine a future society that has become more and more dependent on AIs and has semiautonomous domestic and industrial robots in widespread use. (It's harder to imagine that not happening, unless some other doom happens first.) One could imagine a lot of them getting hacked by a stealthy rogue AI and then turning on us all at once. The AI doom thesis only needs this level. paulfchristiano describes loss-of-control scenarios in What failure looks like without mentioning bio or nano.

But I think Yudkowsky's point in bringing up bio/nano ("diamondoid bacteria") was that a superintelligence could build its own self-sustaining infrastructure faster than you might think using methods you might not have thought of, for example, through bioengineering instead of robots. Or, you know, something else we haven't thought of. He's said something to the effect that since a superintelligence is smarter than him, he doesn't know what it would actually be able to do and suggests nanotech as a lower bound of what might be possible, because he's read Nanosystems, and he thought of it, even if he's not smart enough to implement it now.

Scenarios like these could happen sooner and might be harder to see coming. That makes it relevant for planning interventions, like what regulations to ask for or what messages to spread. Unfortunately, I think we run into inferential gaps with our audience more from Yudkowsky's scenarios than Christiano's.

So I think a lot of confusion comes from people calling it things like "the AI x-risk thesis". As far as I can tell, there are very few people who think that there will not be significant new dangers arising as ML systems grow more capable and the scaffolding around them grows more elaborate, and that those dangers stand a nontrivial chance of leading to the extinction of biological humans. But when you try to come up with a plan more specific than "try to ban general-purpose computing", it turns out that the exact threat model matters. The set of things that would be helpful to prevent Yudkowsky's FOOM and the set of things that would be likely to prevent the gradual irrelevance and uncompetitiveness of humans in Christiano's Whimper are almost entirely disjoint (but are both referred to as "AI x-risk").

To the extent that there are cheap actions that help with one but not the other, it is helpful on the margin to take those actions. When it comes to actions that destroy a lot of value but might marginally help with only one of the threat models, I think you would do well to figure out if that threat model is realistic.

Dangerously competent AIs are not going to openly betray us until they think they can win

Nit: Would seeing an AGI trying to betray us and fail count as evidence against a future sharp left turn? Failing that, what future observations does your world model say are really unlikely? If there are no such future observations, consider whether you are putting a lot of weight on an unfalsifiable model. Yudkowsky boasts that he derived his world model using the null string as input. I think that to the extent that's true, it should be interpreted as a giant flashing red flag instead of something to brag about.

But I think Yudkowsky's point in bringing up bio/nano ("diamondoid bacteria") was that a superintelligence could build its own self-sustaining infrastructure faster than you might think using methods you might not have thought of, for example, through bioengineering instead of robots.

Not a nit: the "you can build a computational substrate that is similar to what our current chip fabs produce without needing to build up giant supply chains or do lots of slow expensive experiments or deal with Amdahl's Law, by using the One Weird Trick of using nanotech / biotech" is exactly the assertion people are asking for evidence of. It's kind of central in the whole FOOM story.

Nice comment again, as usual, faul_sname. I think you hit the nail on the head with 'Can AI get compute-infrastructure-independence with self-reproducing microfactory tech?'

If the answer to this is yes, then the danger is higher than it would otherwise be that the AI will move against us soon after going rogue.

Although, even if the answer is no, there are still a lot of dangers. Persuasion/threats (e.g. becoming a wealthy dictator), collaboration/organization (e.g. working with a dictator),  brain control via engineered viruses and/or BCIs, etc. So I think there's sort of multiple branching paths of possibility, with variations and gradations. I don't think it's quite a smooth continuum since I think certain combos of ability and speed are unlikely or unworkable. Here's something like a summary of my mental model of the main three paths:

Can the AI get smart super fast and get quick independence? (FOOM)

Can the AI get smart moderately fast and gain slower semi-independence (e.g. nation with aid from an independent-AI which is militarily unconquerable)? (Thump)

Can the AI get more competent gradually and slowly make humans irrelevant, until we're no longer in a position to turn all AI off? (Whimper)

Personally, I think Thump is more likely than FOOM or Whimper. I think if strongly agentic AGI gets out of hand there will be some sort of humans+rogue_AI versus humans+controlled_AI standoff, like a cold war. Or maybe things will devolve into a hot war. I hope not, but that's something to start preparing against by forming strong international treaties now based on agreeing to ally against any nation which can be proven to be working with Rogue AGI or something. I dunno. International politics is not my area of expertise. I just think it should be in people's mental models as a possibility.

[anonymous]:

Note how your optimal response changes a lot based on threat model.

Foom: stop the reaction from being able to begin. Foom is too fast to control. Optimal response: AI pauses (which add the duration of the pause to some living people's lives, who still may die after the foom). Political action: request AI pauses.

Thump: you need to keep up with the arms race. Dictator upgrading from technicals and AK-47s to hypersonic drone swarms? You need to be developing your own restricted models so you have your own equivalent weapons in greater numbers. The free world has vastly more resources, so they can afford less efficient (and more controllable) AI models to R&D and build equivalent weaponry. Political action: request a government-funded moonshot effort to develop AI.

Whimper: you need to be upgrading humans with neural implants or other methods over time so they remain above some intelligence floor needed to not be scammed out of power. Humans don't need to stay the smartest creatures around, but they need AI assistants they can trust and enough intelligence to double-check their work. This lets humans retain property rights over the solar system, refusing to give AI any rights at all in Sol, and they can enjoy utopia. Political action: request FDA overhaul.

Yeah, different regulatory strategies for different scenarios for sure. It's tricky, though, that we don't know which scenario will come to pass. I myself feel quite uncertain. There is an important distinction around FOOM scenarios: they are too fast to legislate while they are in progress. The others give humanity a chance to see what is happening and change the rules 'in flight'.

Preventative legislation for a scenario that has never yet happened and sounds like implausible science fiction is a particularly hard ask. I can see why, if someone thought FOOM was highly likely, they could be pessimistic about governance as a path to safety.

"The others give humanity a chance to see what is happening and change the rules 'in flight '."

This is possible in non-Foom scenarios, but not a given (e.g. super-human persuasion AIs).

Good point. Some specific narrow-domain superhuman skills, like persuasion, could also prevent in-flight regulation of slower scenarios. Another possible narrow domain would be one which enabled misuse on a scale that disrupted governments substantially, such as bioweapons.

[anonymous]:

I can see why, if someone thought FOOM was highly likely, they could be pessimistic about governance as a path to safety.

It's worse than that, because foom is so powerful that the difference between "no government restricts AI meaningfully" and "9 out of 10 power blocs able to build AI restrict it" is small. Foom for a 90-day takeover implies a doubling time under a week; if all power blocs were equal in starting resources, the "90 percent regulation" case vs the "no regulations" case differs by about 4 doublings, or about 4 weeks.
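
A minimal sketch of that arithmetic, with assumed illustrative numbers (a 10x starting-resource gap between the cases and a one-week doubling time):

```python
import math

# Assumed, illustrative numbers: a 10x starting-resource gap between the
# unregulated and "90 percent regulated" cases, and a one-week doubling time.
resource_gap = 10.0
doubling_days = 7.0

doublings_needed = math.log2(resource_gap)        # ~3.3 doublings
delay_days = doublings_needed * doubling_days     # ~23 days
print(f"regulation buys ~{doublings_needed:.1f} doublings, i.e. ~{delay_days:.0f} days")
```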

One governance solution proposed to handle this is "nuke em", but 7-day doubling times imply other things, like some method of building infrastructure that doesn't need humans' current cities and factories and specialists, because by definition humans are not that fast at building anything. Just shipping parts around takes days.

It would be like trying to stop machine cancer. Nukes just buy time.

I personally don't think the above is possible starting from current technology, I am just trying to take the scenario seriously. (If it's possible at all I think you would need to bootstrap there through many intermediate stages of technology that take unavoidable amounts of time)

That is a really good point that there are intermediate scenarios -- "thump" sounds pretty plausible to me as well, and the likely-to-be-effective mitigation measures are again different.

I also postulate "splat": one AI/human coalition comes to believe that they are militarily unconquerable, another coalition disagrees, and the resulting military conflict is sufficient to destroy supply chains and also drops us into an equilibrium where supply chains as complex as the ones we have can't re-form. Technically you don't need an AI for this one, but if you had an AI tuned to, for example, pander to an egotistical dictator without having to deal with silly constraints like "being unwilling to advocate for suicidal policies", I could see that AI making this failure mode a lot more likely.

But when you try to come up with a plan more specific than "try to ban general-purpose computing", it turns out that the exact threat model matters.

I think this is why I'm more partial to Holden's "playbook, not plan" way of thinking about this, even if I'm not sure what to think of his 4 key categories of interventions. 

For macroscopic rotation:

  • Blood vessels cannot rotate continuously, so nutrients cannot be provided to the rotating element to grow it.
  • Without smooth surfaces to roll on, rolling is not better than walking.

There are other uses for macroscopic rotation besides rolling on wheels, e.g. propellers, gears, flywheels, drills, and turbines. Also, how to provide nutrients to detached components, or build smooth surfaces to roll on so your wheels will be useful, seem like problems that intelligence is better at solving than evolution.

Propellers are not better than flapping wings or fins. Machines use them because they're easier to build and drive.

Are you saying it is possible to construct a flying machine utilizing wings that would compete with a jet fighter in speed and fuel-efficiency?

[I would doubt it]


Diamond is hard to make with enzymes because they can't stabilize intermediates for adding carbons to diamond.

This is a very strong claim. It puts severe limitations on biotech capabilities. Do you have any references to support it?

EGI:

This is not the kind of stuff it is easy to find references on, since nanoengineering is not a real field of study (yet). But if you look at my discussion with bhauth above, you will probably get a good idea of the reasoning involved.

No, it does not put severe limitations on biotech. Diamond is entirely unnecessary for most applications. Where it is necessary it can be manufactured conventionally and be integrated with the biosystems later.