All of anithite's Comments + Replies

Current ATGMs poke a hole in armor with a very fast jet of metal (1-10 km/s). Kinetic penetrators do something similar using a tank gun rather than specially shaped explosives.

"Poke hole through armor" is the approach used by almost every weapon. A small hole is the most efficient way to get to the squishy insides. Cutting a slot would take more energy. Blunt impact only works on flimsy squishy things. A solid shell of armor easily stopped thrown rocks in antiquity. Explosive over-pressure is similarly obsolete against armored targets.

TLDR:"poke hole then d... (read more)

EMP mostly affects the power grid because power lines act like big antennas. Small digital devices are built to keep internal RF signals from leaking out (thanks again, FCC), so EMP doesn't leak in very well either. DIY gear can be built badly enough to be vulnerable, but basically: run wires together in bundles out from the middle with no loops and there are no problems.

The only semi-vulnerable point is communications, because radios are connected to antennas.

The best option for frying radios isn't EMP, but rather sending a high power radio signal at whatever frequency the antenna best ... (read more)

RF jamming, communication and other concerns

TLDR: Jamming is hard when the comms system is designed to resist it. Civilian stuff isn't, but military systems are and can be quite resistant. Frequency hopping makes jamming ineffective if you don't care about stealth. Phased array antennas are getting cheaper and make things stealthier by increasing directivity (a Starlink terminal costs $1,300 and has 40 dBi gain). Very expensive comms systems on fighter jets using mm-wave comms and phased array antennas can do gigabit+ links in the presence of jamming, undetected.
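
For intuition, the standard spread-spectrum processing-gain figure (my addition, not from the original comment):

$$G_p = 10\log_{10}\!\left(\frac{B_{\text{spread}}}{B_{\text{data}}}\right)\ \text{dB}$$

Hopping a 1 Mbps link across 1 GHz of spectrum buys roughly 30 dB of processing gain against a broadband jammer, and a 40 dBi phased array adds directivity on top of that.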

civilian stuff

... (read more)
1 RussellThor 1mo
Thanks for the info. What about RF weapons, that is, a focused short RF or EMP pulse against a drone? What range and countermeasures?

Self-driving cars have to be (almost) perfectly reliable and never have an at-fault accident.

Meanwhile cluster munitions are being banned because submunitions can have 2-30% failure rates, leaving unexploded ordnance everywhere.

In some cases avoiding civvy casualties may be a similar barrier, since distinguishing civvy from enemy reliably is hard, but militaries are pretty tolerant of collateral damage. Significant failure rates are tolerable as long as there are no exploitable weaknesses.

Distributed positioning systems

Time of flight distance determination is... (read more)
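
The basic relation (standard physics, added for context): for signal propagation speed $c$, one-way range is

$$d = c\,\Delta t$$

(or $d = c\,\Delta t / 2$ for a round trip). At $c \approx 3\times10^8$ m/s, a nanosecond of timing error is about 30 cm of range error.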

Overhead is negligible because the military would use symmetric cryptography. A message authentication code can be n bits long for a 2^-n chance of forgery. 48-96 bits is likely the sweet spot and barely doubles size for even tiny messages.

Elliptic curve crypto is there if for some reason key distribution is a terrible burden. Typical ECC signatures are 64 bytes (512 bits), but 48 bytes is easy and 32 bytes is possible with pairing-based ECC. If signature size is an issue, use asymmetric crypto to negotiate a symmetric key, then use symmetric crypto for further messages with tight timing limits.
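
A minimal sketch of the truncated-MAC idea in Python (the key, message, and 8-byte tag length are illustrative assumptions, not from the comment):

```python
import hmac, hashlib, os

key = os.urandom(32)  # pre-shared symmetric key (assumed already distributed)

def tag(message: bytes, n_bytes: int = 8) -> bytes:
    """Truncated HMAC-SHA256: an n-byte tag gives a ~2^-(8n) forgery chance per attempt."""
    return hmac.new(key, message, hashlib.sha256).digest()[:n_bytes]

msg = b"fire mission: grid 1234 5678"
t = tag(msg)  # 8 bytes = 64 bits of authentication, small even for tiny messages
assert hmac.compare_digest(t, tag(msg))  # constant-time comparison on verify
```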

Current landmines are very effective because targets are squishy/fragile:

  • Antipersonnel:
    • take off a foot
    • spray shrapnel
  • Antitank/vehicle:
    • cut track /damage tires
    • poke a hole with a shaped charge and spray metal into vehicle insides

Clearing an area for people is hard

  • drones can be much less squishy
    • need more explosives to credibly threaten them
  • Eliminating mine threat requires
    • clearing a path (no mines buried under transit corridor)
      • mine clearing vehicle
      • use line charge
    • block sensors so off route mines can't target vehicles
      • Inflatable ba
... (read more)
2 cousin_it 18d
I think you're describing a kind of robotic tank, which would be useful for many other things as well, not just clearing mines. But designing a robotic tank that can't be disabled by an ATGM (some modern mines are already ATGMs waiting to fire) seems like a tall order to me. Especially given that ATGM tech won't stand still either.

I think GPT-4 and friends are missing the cognitive machinery and grid representations to make this work. You're also making the task harder by giving them a less accessible interface.

My guess is they have pretty well developed what/where feature detectors for smaller numbers of objects, but grids and visuospatial problems are not well handled.

The problem interface is also not accessible:

  • There's a lot of extra detail to parse
    • Grid is made up of gridlines and colored squares
    • colored squares of fallen pieces serve no purpose but to confuse the model

A more ... (read more)

1 Lovre 1mo
Thanks for a lot of great ideas! We tried cutting out the fluff of many colors and having all tetrominoes be one color, but that didn't seem to help much (though we didn't try making the falling tetromino a different color than the filled spaces). We also tried simplifying it by making it a 10x10 grid rather than 10x20, but that didn't seem to help much either. We also thought of adding coordinates, but we ran out of the time we allotted for this project and thus postponed that indefinitely. As it stands, it is not very likely we do further variations on Tetris because we're busy with other things, but we'd certainly appreciate any pull requests, should they come.

Not so worried about country vs. country conflicts. Terrorism/asymmetric warfare is a bigger problem, since cheap slaughterbots will proliferate. Hopefully intelligence agencies can deal with that more cheaply than putting in physical defenses and hard-kill systems everywhere.

Still don't expect much impact before we get STEM AI and everything goes off the rails.

Also without actual fights how would one side know the relative strength of their drone system

Relative strength is hard to gauge, but getting reasonable perf/$ is likely easy. Then just compare budgets adju... (read more)

Disclaimer: Short AI timelines imply we won't see this stuff much before AI makes things weird

This is all well and good in theory but mostly bottlenecked on software/implementation/manufacturing.

  • with the right software/hardware current military is obsolete
  • but no one has that hardware/software yet
    • EG: no one makes an airborne sharpshooter drone (edit: cross that one off the list)
    • The Black Sea is not currently full of Ukrainian anti-ship drones + comms relays
    • no drone swarms/networking/autonomy yet
  • I expect current militaries to successfully adapt before/as n
... (read more)
1 RussellThor 1mo
Thanks for the thoughts.

"I expect current militaries to successfully adapt before/as new drones emerge" - I hope so, as I think that would make a safer world. However I am not so confident - institutional inertia makes me think it all too likely that they would not anticipate and adapt, leading to an unstable situation and more war. Also, without actual fights how would one side know the relative strength of their drone system? They or their opponent could have an unknown critical weakness. We have no experience in predicting real world effectiveness from a paper system. I am told war is more likely when sides do not know their relative strength.

"Economies of scale likely overdetermine winners" - yes, especially important for e.g. China vs USA if we want an example of one side with better tech/access to chips but worse at manufacturing.

Ground vs Air

All good points - I am agnostic/quite uncertain as to where the sweet spot is. I would expect any drone of medium to large size would be optimized to make as much use of the ground as possible.

Radio vs Light

Yes, I do not know what the "endgame" is for radio comms vs jammers; if it turns out that radio can evade jammers then light will not be used. My broader point, which I will make more specific now, is that EW and jammers will not be effective in late stage highly optimized drone warfare. If that is because radio/stealth wins then yes, otherwise light comms will be developed (and may take some time to reach optimal cheapness/weight etc) because it would give such an advantage.

As long as you can reasonably represent “do not kill everyone”, you can make this a goal of the AI, and then it will literally care about not killing everyone, it won’t just care about hacking its reward system so that it will not perceive everyone being dead.

That's not a simple problem. First you have to specify "not killing everyone" robustly (outer alignment), and then you have to train the AI to have this goal and not an approximation of it (inner alignment).

caring about reality

Most humans say they don't want to wirehead. If we cared only about our ... (read more)

1 RedFishBlueFish 2mo
See my other comment for the response. Anyway, the rest of your response is spent talking about the case where AI cares about its perception of the paperclips rather than the paperclips themselves. I'm not sure how severity level 1 would come about, given that the AI should only care about its reward score. Once you admit that the AI cares about worldly things like "am I turned on", it seems pretty natural that the AI would care about the paperclips themselves rather than its perception of the paperclips. Nevertheless, even in severity level 1, there is still no incentive for the AI to care about future AIs, which contradicts concerns that non-superintelligent AIs would fake alignment during training so that future superintelligent AIs would be unaligned.

This super-moralist-AI-dominated world may look like a darker version of the Culture, where if superintelligent systems determine you or other intelligent systems within their purview are not intrinsically moral enough they contrive a clever way to have you eliminate yourself, and monitor/intervene if you are too non-moral in the meantime.

My guess is you get one of two extremes:

  • build a bubble of human survivable space protected/managed by an aligned AGI
  • die

with no middle ground. The bubble would be self contained. There's nothing you can do from ins... (read more)

Agreed, recklessness is also bad. If we build an agent that prefers we keep existing, we should also make sure it pursues that goal effectively and doesn't accidentally kill us.

My reasoning is that we won't be able to coexist with something smarter than us that doesn't value us being alive, if it wants our energy/atoms.

  • barring new physics that lets it do its thing elsewhere, "wants our energy/atoms" seems pretty instrumentally convergent

"don't built it" doesn't seem plausible so:

  • we should not build things that kill us.
  • This probably means:
    • wants us to k
... (read more)

This is definitely subjective. Animals are certainly worse off in most respects and I disagree with using them as a baseline.

Imitation is not coordination, it's just efficient learning, and animals do it. They also have simple coordination in the sense of generalized tit for tat (we call it friendship). You scratch my back, I scratch yours.

Cooperation technologies allow similar things to scale beyond the number of people you can know personally. They bring us closer to the multi-agent optimal equilibrium, or at least the Core (game theory).

Examples of cooperat... (read more)

TLDR: Moloch is more compelling for two reasons:

  • Earth is at "starting to adopt the wheel" stage in the coordination domain.
    • tech is abundant, coordination is not
  • Abstractly, inasmuch as science and coordination are attractors:
    • A society that has fallen mostly into the coordination attractor might be more likely to be deep in the science attractor too (medium confidence)
    • coordination solves chicken/egg barriers like needing both roads and wheels for benefit
    • but possible to conceive of high coordination low tech societies
      • Romans didn't pursue sci/tech
... (read more)
2 Noosphere89 3mo
I'm not sure this is actually right, and I think coordination is in fact abundant compared to other animals. Indeed, the ability of humans to be super-cooperative and imitative of each other is argued by Henrich to be one of the major factors, if not the major factor, for human dominance.

SimplexAI-m is advocating for good decision theory.

  • agents that can cooperate with other agents are more effective
    • This is just another aspect of orthogonality.
    • Ability to cooperate is instrumentally useful for optimizing a value function in much the same way as intelligence

Super-intelligent super-"moral" clippy still makes us into paperclips because it hasn't agreed not to and doesn't need our cooperation

We should build agents that value our continued existence. If the smartest agents don't, then we die out fairly quickly when they optimise for some... (read more)

1 HiddenPrior 2mo
In your edit, you are essentially describing somebody being "slap-droned" from the Culture series by Iain M. Banks. This super-moralist-AI-dominated world may look like a darker version of the Culture, where if superintelligent systems determine you or other intelligent systems within their purview are not intrinsically moral enough they contrive a clever way to have you eliminate yourself, and monitor/intervene if you are too non-moral in the meantime. The difference being that this version of the Culture would not necessarily be all that concerned with maximizing the "human experience" or anything like that.
0 M. Y. Zuo 3mo
Can you explain the reasoning for this? Even an agent that values humanity's continued existence to the highest degree could still accidentally release a novel virus into the wild, such as a super-COVID-3. So it seems hardly sufficient, or even desirable, if it makes the agent even the slightest bit overconfident in their correctness. It seems more likely that the optimal mixture of 'should's for such agents will be far more complex. 

This is a good place to start: https://en.wikipedia.org/wiki/Discovery_of_nuclear_fission

There are a few key things that lead to nuclear weapons:

  • starting point:
    • know about relativity and mass/energy equivalence
    • observe naturally radioactive elements
    • discover neutrons
    • notice that isotopes exist
      • measure isotopic masses precisely
  • realisation: large amounts of energy are theoretically available by rearranging protons/neutrons into things closer to iron (IE: curve of binding energy)

That's not something that can be easily suppressed without suppressing... (read more)
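
For reference, the shape of that curve comes from the semi-empirical mass formula (standard nuclear physics, added here for context):

$$B(A,Z) = a_V A - a_S A^{2/3} - a_C \frac{Z(Z-1)}{A^{1/3}} - a_A \frac{(A-2Z)^2}{A} + \delta(A,Z)$$

Binding energy per nucleon $B/A$ peaks near iron ($A \approx 56$, about 8.8 MeV/nucleon), which is why both fissioning heavy nuclei and fusing light ones release energy.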

A bit more compelling, though for mining, the excavator/shovel/whatever loads a truck. The truck moves it much further and consumes a lot more energy to do so. Overhead wires to power the haul trucks are the biggest win there.

“Roughly 70 per cent of our (greenhouse gas emissions) are from haul truck diesel consumption. So trolley has a tremendous impact on reducing GHGs.”

This is an open pit mine. Less vertical movement may reduce the imbalance in energy consumption. Can't find info on pit depth right now, but haul distance is 1 km.
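
Rough per-tonne numbers with illustrative assumptions (100 m of lift, 1 km haul, rolling-resistance coefficient $C_{rr} \approx 0.03$; none of these figures are from the article):

$$E_{\text{lift}} = mgh \approx 1000 \times 9.8 \times 100 \approx 0.98\ \text{MJ/t}, \qquad E_{\text{roll}} = C_{rr}\,mgd \approx 0.03 \times 1000 \times 9.8 \times 1000 \approx 0.29\ \text{MJ/t}$$

So for a deep pit, the vertical component can dominate haul energy, consistent with the imbalance point above.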

General point is that when deal... (read more)

2 bhauth 3mo
Any time overhead electrical lines for mining trucks would be worthwhile, overland conveyors are usually better.

Agreed on most points. Electrifying rail makes good financial sense.

Construction equipment efficiency can be improved without electrifying:

  • some gains from better hydraulic design and control
    • regen mode for cylinder extension under light load
    • varying supply pressure on demand
  • substantial efficiency improvements possible by switching to variable displacement pumps
... (read more)
3 FireStormOOO 3mo
The more curious case for excavators would be open pit mines or quarries, where you know you're going to be in roughly the same place for decades and already have industrial-size hookups.

Some human population will remain for experiments or work in special conditions like radioactive mines. But bad things and population decline are likely.

  • Radioactivity is much more of a problem for people than for machines.
    • consumer electronics aren't radiation hardened
    • computer chips for satellites, nuclear industry, etc. are though
    • nuclear industry puts some electronics (EX: cameras) in places with radiation levels that would be fatal to humans in hours to minutes.
  • In terms of instrumental value, humans are only useful as an already existing work f

... (read more)

I would like to ask whether it would not be more engaging to say the caring drive would need to be specifically towards humans, such that there is no surrogate?

Definitely need some targeting criteria that points towards humans or in their vague general direction. Clippy does in some sense care about paperclips so targeting criteria that favors humans over paperclips is important.

The duck example is about (lack of) intelligence. Ducks will place themselves in harms way and confront big scary humans they think are a threat to their ducklings. They definitel... (read more)

TLDR: If you want to do some RL/evolutionary open-ended thing that finds novel strategies, it will get Goodharted horribly, and the novel strategies that succeed without gaming the goal may include things no human would want their caregiver AI to do.

Orthogonally to your "capability", you need to have a "goal" for it.

Game-playing RL architectures like AlphaStar and OpenAI Five have dead simple reward functions (win the game) and all the complexity is in the reinforcement learning tricks to allow efficient learning and credit assignment at higher layers.... (read more)

TLDR: LLMs can simulate agents and so, in some sense, contain those goal-driven agents.

An LLM learns to simulate agents because this improves prediction scores. An agent is invoked by supplying a context that indicates the text would be written by an agent (EG: specify that the text was written by some historical figure).

Contrast with pure scaffolding-type agent conversions using a Q&A fine-tuned model. For these, you supply questions (Generate a plan to accomplish X) and then execute the resulting steps. This implicitly uses the Q&A fine-tuned "agent" that can have... (read more)
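
A toy sketch of the two invocation styles (the `complete` and `execute` helpers are hypothetical stand-ins, not a real API):

```python
def complete(prompt: str) -> str:
    """Stand-in for any LLM text-completion call."""
    return "1. gather materials\n2. assemble"  # canned output for illustration

def execute(step: str) -> None:
    """Hypothetical executor for a single plan step."""
    print("executing:", step)

# 1. Simulated agent invoked purely through context (base model):
diary = complete("The following is a diary entry by a famous general, "
                 "laying out his plan for the coming battle:\n")

# 2. Scaffolding around a Q&A fine-tuned model: ask for a plan, then run it.
plan = complete("Q: Generate a plan to accomplish X.\nA:")
for step in plan.splitlines():
    execute(step)
```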

But it seems to be much more complicated set of behaviors. You need to: correctly identify your baby, track its position, protect it from outside dangers, protect it from itself, by predicting the actions of the baby in advance to stop it from certain injury, trying to understand its needs to correctly fulfill them, since you don’t have direct access to its internal thoughts etc.

Compared to “wanting to sleep if active too long” or “wanting to eat when blood sugar level is low” I would confidently say that it’s a much more complex “wanting drive”.

Strong ... (read more)

1 Bayesian0 6mo
Note that evolutionary genetic optimization goes optimization -> genotype -> phenotype. I am saying this because you extrapolate based on the bug study, and metazoa are usually rather complex systems. Your argument is, as far as I know, sound, but such a broad loss function might result in a variety of other behaviours different from the intended purpose as well; what I am trying to do is expand on your point, as it allows for a variety of interesting scenarios. The post you linked contains a reference to the mathematical long-term fitness advantage of certain altruism types; I will at a later date edit this post to add some experimental studies which show that it is "relatively easy" to breed altruism into certain metazoa (the same caveat as above holds, of course: it was easy in these given the chosen environment). If I remember correctly the chicken one is even linked on LessWrong. I would like to ask whether it would not be more engaging to say the caring drive would need to be specifically towards humans, such that there is no surrogate? In regards to ducks, is that an intelligence or perception problem? I think those two would need to be differentiated, as they add another layer of complexity, both apart and together, or am I missing something?
2 Catnee 6mo
I agree, humans are indeed better at a lot of things, especially intelligence, but that's not the whole reason why we care for our infants. Orthogonally to your "capability", you need to have a "goal" for it. Otherwise you would probably just immediately abandon the gross-looking screaming piece of flesh that fell out of you for reasons unknown to you while you were gathering food in the forest. Yet something inside will make you want to protect it, sometimes with your own life, for the rest of your life if it works well. I want agents that take effective actions to care about their "babies", which might not even look like caring at first glance. Something like keeping your "baby" in some enclosed kindergarten while protecting the only entrance from other agents? It would look like the "mother" agent abandoned its "baby", but in reality it could be a very effective strategy for caring. It's hard to know an optimal strategy in every procedurally generated environment, and hence trying to optimize for some fixed set of actions called "caring-like behaviors" would probably indeed give you what you asked for, but I expect nothing "interesting" behind it. Yes they can, until they actually make a baby, and after that it's usually really hard to sell a loving mother "deals" that involve the suffering of her child as the price, or to abandon the child for a more "cute" toy, or to persuade her to hotwire herself to not care about her child (if she is smart enough to realize the consequences). And a carefully engineered system could potentially be even more robust than that. Again, I'm not proposing the "one easy solution to the big problem". I understand that training agents that are capable of RSI in this toy example will result in everyone being dead. But we simply can't do that yet, and I don't think we should. I'm just saying that there is this strange behavior in some animals that in many aspects looks very similar to the thing that we want from aligned AGI, yet nobody understand

Many of the points you make are technically correct but aren't binding constraints. As an example, diffusion is slow over small distances but biology tends to work on µm scales where it is more than fast enough and gives quite high power densities. Tiny fractal-like microstructure is nature's secret weapon.

The points about delay (synapse delay and conduction velocity) are valid though phrasing everything in terms of diffusion speed is not ideal. In the long run, 3d silicon+ devices should beat the brain on processing latency and possibly on energy efficien... (read more)

Yeah, my bad. Missed the:

If you think this is a problem for Linda's utility function, it's a problem for Logan's too.

IMO neither is making a mistake

With respect to betting Kelly:

According to my usage of the term, one bets Kelly when one wants to "rank-optimize" one's wealth, i.e. to become richer with probability 1 than anyone who doesn't bet Kelly, over a long enough time period.

It's impossible to (starting with a finite number of indivisible currency units) have zero chance of ruin or loss relative to just not playing.

  • most cautious betting stra
... (read more)

Goal misgeneralisation could lead to a generalised preference for switches to be in the "OFF" position.

The AI could for example want to prevent future activations of modified successor systems. The intelligent self-turning-off "useless box" doesn't just flip the switch, it destroys itself, and destroys anything that could re-create itself.

Until we solve goal misgeneralisation and alignment in general, I think any ASI will be unsafe.

A log money maximizer that isn't stupid will realize that its pennies are indivisible and not take your ruinous bet. It can think more than one move ahead. Discretised currency changes its strategy.
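
A minimal sketch of why (Python; the double-or-nothing bet structure is my illustrative assumption):

```python
import math

def expected_log_wealth(wealth: int, stake: int, p_win: float) -> float:
    """E[log wealth] after a double-or-nothing bet of `stake` indivisible pennies."""
    lose, win = wealth - stake, wealth + stake
    if lose <= 0:
        return -math.inf  # log(0) = -inf: any chance of total ruin is unacceptable
    return p_win * math.log(win) + (1 - p_win) * math.log(lose)

# With one indivisible penny left, even a 99%-favorable bet is refused:
assert expected_log_wealth(1, 1, 0.99) == -math.inf
assert expected_log_wealth(100, 1, 0.99) > math.log(100)  # safe bet accepted
```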

[This comment is no longer endorsed by its author]

your utility function is your utility function

The author is trying to tacitly apply human values to Logan while acknowledging Linda as following her own non-human utility function faithfully.

Notice that the log(funds) value function does not include a term for the option value of continuing. If maximising EV of log(funds) can lead to a situation where the agent can't make forward progress (because log(0) = -inf, so no risk of complete ruin is acceptable), the agent can still faithfully maximise EV(log(funds)) by taking that risk.

In much the same way as Linda f... (read more)

[This comment is no longer endorsed by its author]
2 philh 6mo
Sorry, but - it sounds like you think you disagree with me about something, or think I'm missing something important, but I'm not really sure what you're trying to say or what you think I'm trying to say.

If we wanted to kill the ants, or almost any other organism in nature, we mostly have good enough biotech. For anything biotech can't kill, manipulate the environment to kill them all.

Why haven't we? Humans are not sufficiently unified+motivated+advanced to do all these things to ants or other bio life. Some of them are even useful to us. If we sterilized the planet we wouldn't have trees to cut down for wood.

Ants specifically are easy.

Gene drives allow for targeted elimination of a species. Carpet bomb their gene pool with replicating selfish genes. That's ... (read more)

In order to supplant organic life, nanobots would have to either surpass it in Carnot efficiency or (more likely) use a source of negative entropy thus far untapped.

Efficiency leads to victory only if violence is not an option. Animals are terrible at photosynthesis but survive anyways by taking resources from plants.

A species can invade and dominate an ecosystem by using a strategy that has no current counter. It doesn't need to be efficient. Intelligence allows for playing this game faster than organisms bound by evolution. Humans can make vaccines to... (read more)

1 mephistopheles 6mo
Of course! The way I think of it, violence would be using other lifeforms as sources of negentropy. I like the invasive species argument; I agree that we would be very vulnerable to an engineered pathogen.
1 M. Y. Zuo 6mo
We haven't done that against ants, even though the difference is way more than 100x.

For the first task, you can run the machine completely in a box. It needs only training information, specs, and the results of prior attempts. It has no need for the context information that this chip will power a drone used to hunt down rogue instances of the same ASI. It is inherently safe and you can harness ASIs this way. They can be infinitely intelligent, it doesn't matter, because the machine is not receiving the context information needed to betray.

If I'm an ASI designing chips, I'm putting in a backdoor that lets me take control via RF sign... (read more)

Never thought this would come in handy but ...

Building trusted third parties

This is a protocol to solve cooperation. AI#1 and AI#2 design a baby and then do a split-and-choose proof that they actually deployed IT and not something else.

Building a trusted third party without nanotech

If you know how a given CPU or GPU works, it's possible to design a blob of data/code that unpacks itself in a given time if and only if it is running on that hardware directly. Alice designs the blob to run in 10 seconds and gives it to Carol. Carol runs it on her hardware. The... (read more)
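
A toy sketch of the timing check (all parameters hypothetical; a real blob would be hardware-specific and self-unpacking):

```python
import hashlib, os, time

EXPECTED_SECONDS = 10.0  # assumed runtime on the known hardware
SLACK = 0.5              # assumed tolerance; emulation/interposition costs more

def run_blob(challenge: bytes) -> bytes:
    """Stand-in for the blob: a long serial computation with predictable runtime."""
    h = challenge
    for _ in range(10_000_000):  # tune iteration count to the target hardware
        h = hashlib.sha256(h).digest()
    return h

challenge = os.urandom(32)
start = time.monotonic()
answer = run_blob(challenge)
elapsed = time.monotonic() - start
# A late answer implies the code ran under emulation or instrumentation.
attested = elapsed < EXPECTED_SECONDS + SLACK
```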

Conventional tech is slowed such that starting early on multiple resource acquisition fronts is worthwhile

Exponential growth is not sustainable with a conventional tech-base when doing planetary disassembly due to heat dissipation limits.

If you want to build a Dyson sphere the mass needs to be lifted out of the gravity wells. The earth and other planets need to not be there anymore.

Inefficiencies in solar/fusion to mechanical energy conversion will be a binding constraint. Tether lift based systems will be worthwhile to push energy conversion steps out f... (read more)

Some of it is likely nervous laughter but certainly not all of it.

Just to clarify, is my above suggestion that roller screws and optimal low-reduction lead-screws are equivalent (lubrication concerns aside) correct or incorrect?

Are you saying a roller screw with high reduction gets its efficiency from better lubrication only, and would otherwise be equivalent to a lead screw with the same effective pitch/turn? If that's the case I'd disagree, and this was my reason for raising that point initially.

-3 bhauth 8mo
Yes. OK, we disagree about that. Glad we could get to the point.

Hopefully it helps to get back to the source material: Articulated Robot Progress

I apologize if I'm missing anything.

A lot of people look at progress in robotics in terms like "humanoid robots getting better over time" but a robotic arm using modern electric motors and strain wave gears is, in terms of technological progress, a lot closer to Boston Dynamics's Atlas robot than an early humanoid robot.

I would argue that the current Atlas robot looks a lot more like the earlier Hardiman robots than it does a modern factory robot arm. The hydraulic actuator... (read more)

1 bhauth 8mo
Again, strain wave gearing (as an approach, including electric motors with high specific power) is lighter than using hydraulics, overall. The same is true for planetary roller screws. This is true regardless of scale. Hydraulics are used for other reasons than maximum performance physically achievable. Boston Dynamics decreased the weight of their hydraulics system by 3d printing hydraulic channels in the skeleton. That's expensive, and planetary roller screws are still better if done properly.

Perhaps we don't disagree at all.

a roller screw's advantage is having the efficiency of a multi-start optimal lead-screw but with much higher reduction.

A lead-screw with an optimal pitch and a high helix angle (EG: multi-start lead-screw with helix angles in the 30°-45° range) will have just as high an efficiency as a good roller screw (EG: 80-90%). The downside is a much lower reduction ratio of turns/distance.

We might be talking past each other since I interpreted "a planetary roller screw also must have as much sliding as a lead-screw" to mean an equivalent lead-screw with the same pitch.

-1 bhauth 8mo
No. Did you read my post?

Sorry, I should have clarified I meant robots with per-joint electric motors + reduction gearing. Almost all of Atlas's joints, aside from a few near the wrists, are hydraulic, which I suspect is key to agility at human scale.

Inside the lab: How does Atlas work? (T=120s)

Here's the knee joint springing a leak. Note the two jets of fluid. Strong suspicion this indicates small fluid reservoir size.

-7 bhauth 8mo

No. Strain wave gears are lighter than using hydraulics.

Note: I'm taking the outside view here and assuming Boston Dynamics went with hydraulics out of necessity.

I'd imagine the problem isn't just the gearing but the gearing + a servomotor for each joint. Hydraulics still retain an advantage so long as the integrated hydraulic joint is lighter than an equivalent electric one.

Maybe in the longer term absurd reduction ratios can fix this to cut motor mass? Still, there's plenty of room to scale hydraulics to higher pressures.

The small electric dog sized ro... (read more)

1 bhauth 8mo
jumping: https://www.youtube.com/watch?v=tF4DML7FIWk

Take an existing screw design, double the diameter without changing the pitch. The threads now slide about twice as far (linear distance around the screw) per turn for the same amount of travel. The efficiency is now around half its previous value.

https://www.pbclinear.com/Blog/2018/February/What-is-Lead-Screw-Efficiency-in-Linear-Motion
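
The standard lead-screw efficiency relation makes the halving explicit ($\lambda$ is the helix angle, $\varphi = \arctan\mu$ the friction angle; added here for reference):

$$\eta = \frac{\tan\lambda}{\tan(\lambda + \varphi)}, \qquad \tan\lambda = \frac{\text{lead}}{\pi d}$$

Doubling $d$ at fixed lead halves $\tan\lambda$; for shallow helix angles ($\lambda \ll \varphi$), $\eta \approx \tan\lambda / \tan\varphi$, so efficiency drops by roughly half.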

There was a neat DIY linear drive system I saw many years back where an oversized nut was placed inside a ball bearing so it was free to rotate. The nut had the same thread pitch as the driving screw. The screw was held of... (read more)

What? No. You can make larger strain wave gears, they're just expensive & sometimes not made in the right size & often less efficient than planetary + cycloidal gears.

Not in the sense that you can't make them bigger, but square-cube scaling means greater torque density is required for larger robots. Hydraulic motors and cylinders have pretty absurd specific force/torque values.

hydraulic actuators fed from a single high pressure fluid rail using throttling valves

That's older technology.

Yes you can use servomotors+fixed displacement pumps or a singl... (read more)

-7 bhauth 8mo

What's your opinion on load shifting as an alternative to electrical energy storage (EG: phase change heating/cooling storage for HVAC)? I am currently confused why this hasn't taken off, given that time-of-use pricing for electricity (and peak demand charges) offers big incentives. My current best guess is that added complexity is a big problem, leading to use only in large building HVAC (EG: this sort of thing).

Both building-integrated PCMs (phase change materials) (EG: PCM bags above/integrated in building drop ceilings) and PCMs integrated in the HVAC system (EG: ice ... (read more)

1 bhauth 8mo
People don't want to schedule their washing machines / showers / etc around electricity prices. That's not worth it unless your country is failing. Using electric car batteries could make some sense, but for many chemistries, battery wear from cycling is worth more than the storage. Hot water storage for large buildings could make economic sense with variable electricity prices. CenTrio Plant No. 2 makes ice to use for district cooling. Phase change heat storage doesn't seem economical for houses but it's not crazy.

With respect to articulated robot progress

Strain wave gearing scales to small dog robot size reasonably (EG: Boston Dynamics Spot) thanks to the square-cube law, but can't manage human-sized robots without pretty horrible tradeoffs (IE: ASIMO and the new Tesla robots walk slowly and have very much sub-human agility).
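
Spelling out the square-cube argument (standard scaling reasoning, my own presentation): scale a robot's linear dimension by $L$ and mass grows as $L^3$, while the joint torque needed to support and swing limbs grows as weight times lever arm, $L^3 \cdot L = L^4$. An actuator of fixed technology only delivers torque proportional to its volume, $L^3$, so the required torque density grows linearly with $L$, which is where small robots get away with strain wave gears and human-sized ones struggle.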

You might want to update that post to mention improvements in ... "digital hydraulics" is one search term, I think, but essentially hydraulic actuators fed from a single high-pressure fluid rail using throttling valves.

Modeling, Identification and Joint I... (read more)

0 bhauth 8mo
What? No. You can make larger strain wave gears, they're just expensive & sometimes not made in the right size & often less efficient than planetary + cycloidal gears. That's older technology. No. There's a reason excavators use cylinders instead of rotary vane actuators. No. Without sliding, screws do not produce translational movement.

Though I think "how hard is world takeover" is mostly a function of the first two axes?

I claim almost entirely orthogonal. Examples of concrete disagreements here are easy to find once you go looking:

  • If AGI tries to take over the world everyone will coordinate to resist
  • Existing computer security works
  • Existing physical security works

I claim these don't reduce cleanly to the form "It is possible to do [x]" because at a high level, this mostly reduces to "the world is not on fire because:"

  • existing security measures prevent effectively (not vulnerab
... (read more)

I suggest an additional axis of "how hard is world takeover". Do we live in a vulnerable world? That's an additional implicit crux (IE: people who disagree here think we need nanotech/biotech/whatever for AI takeover). This ties in heavily with the "AGI/ASI can just do something else" point and not in the direction of more magic.

As much fun as it is to debate the feasibility of nanotech/biotech/whatever, digital dictatorships require no new technology. A significant portion of the world is already under the control of human level intelligences (dictatorship... (read more)

1 Max H 9mo
That does seem like a good axis for identifying cruxes of takeover risk. Though I think "how hard is world takeover" is mostly a function of the first two axes? If you think there are lots of tasks (e.g. creating a digital dictatorship, or any subtasks thereof) which are both possible and tractable, then you'll probably end up pretty far along the "vulnerable" axis. I also think the two axes alone are useful for identifying differences in world models, which can help to identify cruxes and interesting research or discussion topics, apart from any implications those different world models have for AI takeover risk or anything else to do with AI specifically. If you think, for example, that nanotech is relatively tractable, that might imply that you think there are promising avenues for anti-aging or other medical research that involve nanotech, AI-assisted or not.
1 mukashi 9mo
This would clearly put my point in a different place from the doomers.

One minor problem: AIs might be asked to solve problems with no known solutions (EG: write code that solves these test cases) and might be pitted against one another (EG: find test cases for which these two functions are not equivalent).

I'd agree that this is plausible but in the scenarios where the AI can read the literal answer key, they can probably read out the OS code and hack the entire training environment.

RL training will be parallelized. Multiple instances of the AI might be interacting with individual sandboxed environments on a single machine. In this case communication between instances will definitely be possible unless all timing cues can be removed from the sandbox environment, which won't be done.

1 Max H 10mo
That's definitely something people might ask the AI to do during deployment / inference, but during training via SGD, the problem the AI is asked to solve has to be one the trainer knows an answer for, in order to calculate a loss and a gradient.

As a human engineer who has done applied classical (IE: non-AI, you write the algorithms yourself) computer vision: that's not a good lower bound.

Image processing was a thing before computers were fast. Here's a 1985 paper talking about tomato sorting. Anything involving a kernel applied over the entire image is way too slow. All the algorithms are pixel level.
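
Illustrative arithmetic (my numbers, not from the 1985 paper): a 640×480 image at 30 fps is

$$640 \times 480 \times 30 \approx 9.2\ \text{Mpixel/s}$$

so even one operation per pixel is ~9M ops/s, and a 5×5 kernel multiplies that by 25 to ~230M multiply-accumulates/s, versus the ~1 MIPS of a mid-1980s microprocessor. Hence pixel-level algorithms only.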

Note that this is a fairly easy problem if only because once you know what you're looking for, it's pretty easy to find it thanks to the court being not too noisy.

An O(N) algorithm is iffy at these sp... (read more)

Yeah, transistor based designs also look promising. Insulation on the order of 2-3 nm suffices to prevent tunneling leakage and speeds are faster. Promises of quasi-reversibility, low power and the absurdly low element size made rod logic appealing if feasible. I'll settle for clock speeds a factor of 100 higher even if you can't fit a microcontroller in a microbe.

My instinct is to look for low hanging design optimizations to salvage performance (EG: drive system changes to make forces on rods at end of travel and blocked rods equal reducing speed of error... (read more)

2 Muireall 9mo
Just to follow up, I spell out an argument for a lower bound on dissipation that's 2-3 OOM higher in Appendix C here.

This requires that "takeoff" in this space be smooth and gradual. Capability spikes (EG: someone figures out how to make a much better agent wrapper), investment spikes (EG: a major org pours lots of $$$ into an attempt), and super-linear returns for some growth strategies make things unstable.

An AGI could, for example, build tools to do a thing more efficiently. This could turn a negative-EV action positive after some large investment in FLOPs to think/design/experiment. The experimental window could be limited by law enforcement response, requiring more resources upfront for parallelizing development.

Consider what organizations might be in the best position to try and whether that makes the landscape more spiky.

Sorry for the previous comment. I misunderstood your original point.

My original understanding was that the fluctuation-dissipation relation connects lossy dynamic things (EG: electrical resistance, viscous drag) to related thermal noise (Johnson–Nyquist noise, Brownian force). So Drexler has some figure for viscous damping (essentially) of a rod inside a guide channel, and this predicts some thermal W/Hz/(meter of rod) spectral noise power density. That was what I thought initially and led to my first comment. If the rods are moving around then just hold t... (read more)
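
For reference, the fluctuation-dissipation relation being invoked, in its standard one-sided form for a mechanical damping coefficient $\gamma$ (N·s/m):

$$S_F(\omega) = 4 k_B T \gamma \quad [\mathrm{N^2/Hz}]$$

so a given viscous drag on the rod fixes the thermal force noise power per unit bandwidth, which the mechanism's stiffness converts into displacement noise.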

2 Muireall 10mo
No worries, my comment didn't give much to go on. I did say "a typical thermal displacement of a rod during a cycle is going to be on the order of the 0.7nm error threshold for his proposed design", which isn't true if the mechanism works as described. It might have been better to frame it as: you're in a bad situation when your thermal kinetic energy is on the order of the kinetic energy of the switching motion. There's no clean win to be had.

That's correct, although it increases power requirements and introduces low-frequency resonances to the logic elements. In this design, the bandwidth requirement is set by how quickly a blocked rod will pass if the blocker fluctuates out of the way. If slowing the clock rate 10x includes reducing all forces by a factor of 100 to slow everything down proportionally, then yes, this lets you average away backaction noise like √10 while permitting more thermal motion. If you keep making everything both larger and slower, it will eventually work, yes. Will it be competitive with field-effect transistors? Practically, I doubt it, but it's harder to find in-principle arguments at that level.

That noted, in this design, (I think) a blocked rod is tensioned with ~10x the switching drive force, so you'd want the response time of the restoring force to be ~10 ps. If your Δx is the same as the error threshold, then you're admitting error rates of 10^-1. Using (100 GHz, 0.07 nm [Drexler seems to claim 0.02nm in 12.3.7b]), the quantum-limited force noise spectral density is a few times less than the thermal force noise related to the claimed drag on the 1 GHz cycle.

What I'm saying isn't that the numbers in Nanosystems don't keep the rod in place. These noise forces are connected with displacement noise by the stiffness of the mechanism, as you observe. What I'm saying is that these numbers are so close to quantum limits that they can't be right, or even within a couple of orders of magnitude of right. As you say, quantum effects shouldn'