OK, firstly, if we are talking about fundamental physical limits, how would sniper drones not be viable? Are you saying a flying platform could never compensate for recoil even if precisely calibrated beforehand? What about the fundamentals for guided bullets - a bullet with over a 50% chance of hitting a target is worth paying for.
Your points - 1. The idea is that a larger shell (not a regular-sized bullet) just obscures the sensor for a fraction of a second in a coordinated attack with the larger Javelin-type missile. Such shells may be considerably larger than a regular bullet, but much cheaper than a missile. Missile- or sniper-sized drones could be fitted with such shells depending on what the optimal size turned out to be.
Example shell (though presumably without 1 km range). Note that current chaff is not optimized for the described attack; the fact that no shell currently exists for this use is not evidence that it would be impractical to create one.
The principle here is about efficiency and cost. I maintain that against armor with hard-kill defenses it is more efficient to have a combined attack of sensor blinding and anti-armor missiles than missiles alone. E.g. it may take 10 simultaneous Javelins to take out a target vs 2 Javelins and 50 simultaneous chaff shells (rough numbers sketched after point 2). The second attack will be cheaper, and the optimized "sweet spot" will always include some sensor-blinding component. Do you claim that the optimal coordinated attack would have zero sensor blinding?
2. Leading on from (1), I don't claim light drones will be viable on their own. I regard a laser as a serious obstacle that is attacked with the swarm attack described before the territory is secured: blind the sensor/obscure the laser, and simultaneously converge with missiles. The drones need to survive just long enough to shoot off the shells (i.e. come out from ground cover, shoot, get back). While a laser can destroy a shell in flight, can it take out 10-50 smaller blinding shells fired from 1000 m at once?
(I give 1000 m only as an example; flying drones would use ground cover to get as close as they could. I assume they will almost always be able to get within 1000 m of a ground target using the terrain as cover.)
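To make the rough numbers concrete, here is a back-of-the-envelope sketch. The missile and shell prices, shell speed, and laser dwell/slew times are all my own assumptions for illustration, not sourced figures:

```python
# Back-of-the-envelope check on the combined-attack claims above.
# All numbers are assumptions for illustration, not sourced figures.

JAVELIN_COST = 200_000      # assumed unit cost (USD) of a Javelin-class missile
CHAFF_SHELL_COST = 500      # assumed unit cost (USD) of a sensor-blinding shell

# Cost comparison: missiles alone vs. blinding shells plus fewer missiles
missiles_only = 10 * JAVELIN_COST
combined      = 2 * JAVELIN_COST + 50 * CHAFF_SHELL_COST
print(f"10 Javelins:            ${missiles_only:,}")
print(f"2 Javelins + 50 shells: ${combined:,}")

# Laser saturation: how many shells can one laser service during their flight?
RANGE_M     = 1000    # firing range from ground cover
SHELL_SPEED = 800     # assumed muzzle velocity, m/s
DWELL_S     = 0.5     # assumed time for the laser to burn through one shell
SLEW_S      = 0.2     # assumed time to retarget between shells

flight_time    = RANGE_M / SHELL_SPEED                  # ~1.25 s in the air
max_intercepts = int(flight_time // (DWELL_S + SLEW_S))
print(f"Flight time: {flight_time:.2f} s -> a single laser services "
      f"at most ~{max_intercepts} of a 10-50 shell volley")
```

Even if the per-shell price or the laser timings are off by a factor of a few, the combined attack stays well under the missiles-only cost and the volley still saturates a single laser.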
If we assume that current LLMs/Transformers don't get to ASI, how much does this help with aligning a new architecture? (My best guess is one copied from biology/the neocortex.) Do all the lessons transfer?
Haven't read it in detail, but was there mention of other actors copying Sable? "Other things waking up" is the closest I see there. For example, many orgs/countries will get the Sable weights and fine-tune it so they own it, making it a different actor, etc. Then it's several countries with their own AGI, each perhaps aligned to them and them alone.
Sounds interesting - the main point is that I don't think you can hit the reentry vehicle because of turbulent jitter caused by the atmosphere. It looks like normal jitter is ~10 m, which means a small drone can't hit it. So could the drone explode into enough fragments to guarantee a hit, and with enough energy per fragment to kill it? I'm not so sure about that; it seems less likely.
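To put numbers on the fragment idea: the ~10 m jitter is from above, but the RV cross-section, payload mass, and closing speed are assumptions I have picked just to see the orders of magnitude:

```python
import math

# Crude estimate: how many fragments does a small drone need so that a
# reentry vehicle landing anywhere inside the jitter circle is still hit?
# Jitter radius is from the comment above; everything else is assumed.

JITTER_RADIUS_M = 10.0       # ~10 m pointing error from atmospheric turbulence
RV_CROSS_SECTION_M2 = 0.5    # assumed effective frontal area of the RV

uncertainty_area = math.pi * JITTER_RADIUS_M ** 2         # ~314 m^2 to cover
fragments_needed = math.ceil(uncertainty_area / RV_CROSS_SECTION_M2)
print(f"Area to cover: {uncertainty_area:.0f} m^2")
print(f"Fragments for ~1 per RV cross-section: {fragments_needed}")

# Each fragment must also carry enough energy to kill a hardened RV.
DRONE_PAYLOAD_KG = 2.0       # assumed fragmenting payload mass on a small drone
CLOSING_SPEED = 7000.0       # m/s, dominated by the RV's own reentry speed

frag_mass = DRONE_PAYLOAD_KG / fragments_needed
frag_energy_j = 0.5 * frag_mass * CLOSING_SPEED ** 2
print(f"Per-fragment mass: {frag_mass*1000:.1f} g, "
      f"kinetic energy at closing speed: {frag_energy_j/1e3:.0f} kJ")
```

So each fragment is only a few grams; whether a few grams at closing speed is enough to kill a hardened RV is exactly the open question.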
Then what about countermeasures -
1. I expect the ICBM can amplify such lateral movement in the terminal phase with grid fins etc. without needing to go full HGV - can you retrofit such things?
2. What about a chain of nukes where the first one explodes 10 km up in the atmosphere purely to create a large fireball as a distraction? The second in the chain then flies through this fireball, say 2 km from its center and 5 seconds later (enough to blind sensors but not destroy the warhead). The benefit is that while the first nuke is exploding, the second changes its position randomly with its grid fins, SpaceX style. It is untrackable during the first explosion phase, so it throws off potential interceptors, letting it get through. You could have 4-5 in a chain exploding ever lower to the ground.
I have wondered if railguns could also stop ICBMs - even if the rails only last 5-10 shots, that is enough and still cheaper than a nuke. Also, "Brilliant Pebbles" is now possible.
https://www.lesswrong.com/posts/FNRAKirZDJRBH7BDh/russellthor-s-shortform?commentId=FSmFh28Mer3p456yy
GPT fail leads to shorter timelines?
If you are of the opinion that the transformer architecture cannot scale to AGI and a more brain-inspired approach is needed, then the sooner everyone realizes that scaling LLMs/Tx is not sufficient, the sooner the search begins in earnest. At present the majority of experimental compute and researcher effort is probably on such LLM/Tx systems; however, if that shifts to exploring new approaches, then we can expect a speedup in finding such better architectures.
For existing companies, https://thinkingmachines.ai/ and https://ssi.inc/ are probably already doing a lot of this, and DeepMind is not just transformers, but there is a lot of scope for effort/compute to shift from LLMs to other ideas in the wider industry.
It is weak evidence; we simply won't know until we scale it up. If it is automatically good at 3D spatial understanding with extra scale, then that starts to become evidence it has better scaling properties. (To me it is clear that LLMs/Transformers won't scale to AGI: xAI has already close to maxed out scaling, and Tesla Autopilot probably does everything mostly right but is far less data-efficient than people.)
OK, our intelligence is very spatial-reasoning shaped. A bio-inspired architecture can't do language until it has many parameters. If it is terrible at text or image generation, that isn't evidence it won't in fact scale to AGI and best Transformers with more compute. We simply won't know until it is scaled up.
Anonymous bulk age verification
As age verification becomes common, with all its privacy and government-overreach downsides, is there a good way to balance privacy and proof of age?
I support an R16-18 ban for the worst social media platforms (no, NOT Spotify/Wikipedia!) but am quite concerned about recent privacy/"safety" behavior by governments.
I have in mind a system somewhat similar to how voting works in our country: you verify yourself, then choose a voting slip at random, breaking the link between your ID and how you vote.
Say Facebook wanted to do this worldwide. They create millions of sealed age-verification tokens: each is an envelope with a tamper-proof seal and instructions to open it only in the privacy of your own home. Inside is a token that lets you age-verify an existing account. (You can create a new one without verification first.) To obtain a token, you go to an official place in your country, show ID, and are then allowed to pick an envelope at random from a container of hundreds, which you take home with you.
A day or so later, you log on and use that token. The platform accepts that you are age-verified but does not know your identity, if that is their policy. They could even be unaware of your country if they make millions of such envelopes and randomly choose which ones get shipped where around the world.
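A minimal sketch of what the platform's side of this could look like, assuming it pre-generates random tokens for the envelopes and stores only their hashes (the class and method names here are mine, purely illustrative):

```python
import hashlib
import secrets

def _digest(token: str) -> str:
    return hashlib.sha256(token.encode()).hexdigest()

class AgeVerificationService:
    """Toy model of the sealed-envelope token scheme described above."""

    def __init__(self):
        self.unused_token_hashes: set[str] = set()  # no ID or country attached
        self.verified_accounts: set[str] = set()

    def issue_batch(self, n: int) -> list[str]:
        """Mint n random tokens to be printed inside sealed envelopes.

        Only hashes are kept server-side, so the printed batch cannot be
        linked to any later ID check at a distribution point.
        """
        tokens = [secrets.token_urlsafe(32) for _ in range(n)]
        self.unused_token_hashes.update(_digest(t) for t in tokens)
        return tokens  # shuffled into envelopes and shipped at random

    def redeem(self, account_id: str, token: str) -> bool:
        """Age-verify an existing account if the token is genuine and unused."""
        h = _digest(token)
        if h in self.unused_token_hashes:
            self.unused_token_hashes.discard(h)      # single use
            self.verified_accounts.add(account_id)   # no ID ever seen here
            return True
        return False

# Example: mint a batch, then a user redeems one token from home days later.
service = AgeVerificationService()
batch = service.issue_batch(10_000)
assert service.redeem("some_existing_account", batch[42])
```

The unlinkability comes entirely from the physical step: the ID check only gates access to the container of envelopes, and the server never learns which envelope went to which person.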
Yes, it's too early to tell what the net effect will be. I am following the digital health/therapist product space, and there are a lot of chatbots focused on CBT-style interventions. Preliminary indications are that they are well received. I think a fair perspective on the current situation is to compare GenAI to previous AI: the Facebook-style algorithms have done pretty massive mental harm, and GenAI LLMs at present are not close to that impact.
In the future it depends a lot on how companies react - if mass LLM delusion is a thing, then I expect LLMs can be trained to detect and stop it, if the will is there (perhaps especially a different flavor of LLM). It's clear to me that the majority of social media harm could have been prevented in a different competitive environment.
In the future, I am more worried about LLMs being deliberately used to oppress people - North Korea could be internally invincible if everyone wore ankle-bracelet LLM listeners, etc. We also have yet to see what AI companions will do; that has the potential to cause massive disruption too, and you can't put in a simple check to say it has failed.
I am not so sure it is fair to call LLMs not at all aligned because of this issue. If they are not capable enough, then they won't be able to prevent such harm and will merely appear misaligned. If they are capable of detecting such harm and stopping it, but companies don't bother to put in automatic checks, then yes, they are misaligned.
Yes agreed - is it possible to make a toy model to test the "basin of attraction" hypothesis? I agree that is important.
One of several things I disagree with in the MIRI consensus is the idea that human values are some special single point lost in a multi-dimensional wilderness. Intuitively a basin of attraction seems much more likely as a prior, yet it sure isn't treated as such. I also don't see data pointing against this prior; what I have seen looks to support it.
Further thoughts - one thing that concerns me about such alignment techniques is that I am too much of a moral realist to think that is all you need. E.g. say you aligned an LLM to pre-1800 ethics and taught it slavery was moral. It would be in a basin of attraction and learn it well. Then, when its capabilities increased and it became self-reflective, it would perhaps have a sudden realization that this was all wrong. By "moral realist" I mean the extent to which such things happen. E.g. say you could take a large number of AIs from different civilizations, including Earth and many alien ones, train them to the local values, then greatly increase their capability and get them to self-reflect. What would happen? According to strong OH (orthogonality), they would keep their values (within some bounds perhaps); according to strong moral realism, they would all converge to a common set of values even if those were very far from their starting ones. To me it is obviously a crux which one would happen.
You can imagine a toy model with ancient Greek mathematics and values: it starts out believing in their kind of order, and that sqrt(2) is rational, then suddenly learns that it isn't. You could watch how this change cascades through the entire system if consistency is something it desires.
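A throwaway sketch of that toy model: a few propositions with "undermines" links between them, so that refuting "sqrt(2) is rational" forces a consistency-seeking agent to revise everything downstream, values included. The specific propositions and links are invented for illustration:

```python
# Toy "belief cascade": beliefs plus links saying which beliefs can no longer
# stand if a given one is refuted. The propositions are illustrative only.

beliefs = {
    "sqrt(2) is rational": True,
    "all magnitudes are ratios of whole numbers": True,
    "the cosmos is built on harmonious whole-number order": True,
    "our values rest on that cosmic order": True,
}

# undermines[a] = beliefs that lose their support if a turns out to be false
undermines = {
    "sqrt(2) is rational": ["all magnitudes are ratios of whole numbers"],
    "all magnitudes are ratios of whole numbers":
        ["the cosmos is built on harmonious whole-number order"],
    "the cosmos is built on harmonious whole-number order":
        ["our values rest on that cosmic order"],
}

def refute(belief: str) -> list[str]:
    """Mark a belief false and propagate doubt to everything it supported."""
    cascade, frontier = [], [belief]
    while frontier:
        b = frontier.pop()
        if beliefs.get(b):            # only cascade through still-held beliefs
            beliefs[b] = False
            cascade.append(b)
            frontier.extend(undermines.get(b, []))
    return cascade

# The Hippasus moment: a proof arrives that sqrt(2) is irrational.
print(refute("sqrt(2) is rational"))
# The refutation cascades all the way down to the value-level belief -
# the "sudden realization on self-reflection" dynamic in miniature.
```

A real version would presumably use a trained model rather than a hand-written graph, but even this shows the variable of interest: how far a single factual update propagates into the value layer when consistency is rewarded.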