OK, firstly, if we are talking fundamental physical limits, how would sniper drones not be viable? Are you saying a flying platform could never compensate for recoil, even if precisely calibrated beforehand? What about the fundamentals of guided bullets - a bullet with over a 50% chance of hitting a target is worth paying for.

Your points - 1. The idea is that a larger shell (not a regular-sized bullet) just obscures the sensor for a fraction of a second, in a coordinated attack with the larger Javelin-type missile. Such shells may be considerably larger than a regular bullet, but much cheaper than a missile. Missile- or sniper-sized drones could be fitted with such shells, depending on what the optimal size turned out to be.

Example shell (without 1 km range, I assume). Note, however, that current chaff is not optimized for the described attack; the fact that no shell suited to this use currently exists is not evidence that one would be impractical to create.

The principle here is about efficiency and cost. I maintain that against armor with hard-kill defenses, a combined attack of sensor blinding and anti-armor missiles is more efficient than missiles alone. E.g. it may take 10 simultaneous Javelins to take out a target, vs 2 Javelins plus 50 simultaneous chaff shells. The second attack will be cheaper, and the optimized "sweet spot" will always include some sensor-blinding component. Do you claim that the optimal coordinated attack would have zero sensor blinding?
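To make the efficiency claim concrete, here is a minimal sketch of the arithmetic. The unit costs are entirely made-up placeholders for illustration (real Javelin and shell prices will differ), as are the attack compositions:

```python
# Illustrative cost comparison for missiles-only vs a combined
# blinding + missile attack. All figures are assumed, not real.

JAVELIN_COST = 200_000    # assumed unit cost (USD)
CHAFF_SHELL_COST = 1_000  # assumed unit cost of a sensor-blinding shell (USD)

def attack_cost(n_javelins: int, n_shells: int) -> int:
    """Total munition cost of one coordinated attack."""
    return n_javelins * JAVELIN_COST + n_shells * CHAFF_SHELL_COST

missiles_only = attack_cost(10, 0)  # saturate hard-kill defense with missiles alone
combined = attack_cost(2, 50)       # blind the sensors, so 2 missiles suffice

print(missiles_only, combined)  # 2000000 450000 under these assumptions
```

Under these (assumed) numbers the combined attack costs well under a quarter of the missiles-only attack, which is the "sweet spot" argument in miniature.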

2. Leading on from (1), I don't claim light drones will be viable on their own. I regard a laser as a serious obstacle that is attacked with the swarm attack described, before the territory is secured. That is: blind the sensor/obscure the laser, and simultaneously converge with missiles. The drones need to survive just long enough to fire off the shells (i.e. come out from ground cover, shoot, get back). While a laser can destroy a shell in flight, can it take out 10-50 smaller blinding shells fired from 1000 m at once?

(I give 1000 m only as an example; flying drones would use ground cover to get as close as they could. I assume they will almost always be able to get within 1000 m of a ground target using the ground as cover.)

I have just used it for coding for 3+ hours and found it quite frustrating. Definitely faster than GPT-4.0, but less capable. More like an improvement over 3.5. To me it seems a lot like LLM progress is plateauing.

Anyway, in order to be significantly more useful, a coding assistant needs to be able to see debug output in mostly real time, have the ability to start/stop the program, automatically make changes, keep the user in the loop, and read/use the GUI, as that is often an important part of what we are doing. I haven't used any LLM that is even of low-average ability at debugging-style thought processes yet.

Not following - where could the 'low hanging fruit' possibly be hiding? We have many people with the "Other attributes conducive to breakthroughs" in our world of 8 billion. The data strongly suggest we are in diminishing returns. What qualities could an AI of Einstein-level intelligence realistically have that would let it make such progress where no person has? It would seem you would need to appeal to other, less well-defined qualities such as 'creativity' and argue that for some reason the AI would have much more of that. But that seems similar to just arguing that it in fact has greater-than-Einstein intelligence.

Capabilities are likely to cascade once you get to Einstein-level intelligence, not just because an AI will likely be able to form a good understanding of how it works and use this to optimize itself to become smarter[4][5], but also because it empirically seems to be the case that when you’re slightly better than all other humans at stuff like seeing deep connections between phenomena, this can enable you to solve hard tasks like particular research problems much much faster (as the example of Einstein suggests).

  1. Aka: Around Einstein-level, relatively small changes in intelligence can lead to large changes in what one is capable to accomplish.

OK, but if that were true, then there would have been many more Einstein-like breakthroughs since then. More likely, such low-hanging fruit has been plucked and a similar intellect is now well into diminishing returns. That is, given our current technological society and a >50-year history of smart people working on everything, if there are such breakthroughs still to be made, the IQ required is now higher than in Einstein's day.

No, I have not seen a detailed argument about this, just the claim that once centralization goes past a certain point there is no coming back. I would like to see such an argument/investigation, as I think it is quite important. Yuval Harari does say something similar in "Sapiens".

There is a belief among some people that our current tech level will lead to totalitarianism by default. The argument is that with 1970s tech the Soviet Union collapsed, but with 2020 computer tech (not even needing GenAI) it would not have. If a democracy goes bad, unlike before, there is no coming back. For example, Xinjiang - Stalin would have liked to do something like that but couldn't. When you add LLM AI on everyone's phone plus video/speech recognition, organized protest becomes impossible.

Not sure if Rudi C is making this exact argument. Anyway, if we get mass centralization/totalitarianism worldwide, then S-risk is pretty plausible. AI developed under such circumstances would be used to oppress 99% of the population - then it goes to 100%, with extinction being the better outcome.

I find it hard to know how likely this is. It is clear to me that tech has enabled totalitarianism, but it is hard to give odds etc.

Such optimizations are a reason I believe we are not in a simulation. Optimizations are essential for a large sim, and I expect them not to be consciousness-preserving.

But it could matter if it's digital vs continuous. <OK, longer post, and some thoughts perhaps a bit off topic>

Your A,B,C,D ... leads to some questions about what is conscious (C) and what isn't. 

Where exactly does the system stop being conscious?

1. Biological mind with neurons

2. Very high fidelity render in silicon with neurons modelled down to chemistry rather than just firing pulses

3. Classic spiking neural net approximation, done in discrete maths, that appears almost indistinguishable from 1 and 2, producing system states A,B,C,D

4. Same as (3), but states are saved/retrieved from memory rather than calculated.

5. States retrieved from memory many times  - A,B,C,D ... A,B,C,D ... does this count as 1 or many experiences?

6. States retrieved in mixed order A,D,C,B....

7. States D,D,D,D,A,A,A,A,B,B,B,B,C,C,C,C... does this count as 4x, or as nothing?

A possible cutoff is between 3 and 4: retrieving instead of calculating makes it non-conscious. But what about caching, with some states calculated and some retrieved?

As you probably know, this has been gone over before, e.g. by Scott Aaronson. I wonder what your position is?

https://scottaaronson.blog/?p=1951
with quote:

"Maybe my favorite thought experiment along these lines was invented by my former student Andy Drucker.  In the past five years, there’s been a revolution in theoretical cryptography, around something called Fully Homomorphic Encryption (FHE), which was first discovered by Craig Gentry.  What FHE lets you do is to perform arbitrary computations on encrypted data, without ever decrypting the data at any point.  So, to someone with the decryption key, you could be proving theorems, simulating planetary motions, etc.  But to someone without the key, it looks for all the world like you’re just shuffling random strings and producing other random strings as output.

You can probably see where this is going.  What if we homomorphically encrypted a simulation of your brain?  And what if we hid the only copy of the decryption key, let’s say in another galaxy?  Would this computation—which looks to anyone in our galaxy like a reshuffling of gobbledygook—be silently producing your consciousness?"
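The "compute on encrypted data" idea in that quote can be illustrated with a deliberately tiny toy: a one-time pad is homomorphic for XOR alone. Real FHE (Gentry's lattice-based construction and its successors) supports arbitrary circuits and is vastly more complex; this sketch only shows the flavor of operating on ciphertexts without the key:

```python
# Toy "homomorphic" example: a one-time pad preserves XOR structure.
# This is NOT FHE - it supports exactly one operation (XOR) - but it
# shows how a party without the key can compute on ciphertexts.
import secrets

def encrypt(m: int, key: int) -> int:
    return m ^ key

def decrypt(c: int, key: int) -> int:
    return c ^ key

k1, k2 = secrets.randbits(32), secrets.randbits(32)
a, b = 0b1010, 0b0110

# Someone without the keys XORs the two ciphertexts - to them it is
# just shuffling random-looking strings...
c = encrypt(a, k1) ^ encrypt(b, k2)

# ...yet the key-holder recovers the result of a XOR b.
assert decrypt(c, k1 ^ k2) == a ^ b
```

The point of the thought experiment survives even in the toy: the intermediate values carry the computation's meaning only relative to a key held elsewhere.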

and last but not least:

"But, in addition to performing complex computations, or passing the Turing Test, or other information-theoretic conditions that I don’t know (and don’t claim to know), there’s at least one crucial further thing that a chunk of matter has to do before we should consider it conscious.  Namely, it has to participate fully in the Arrow of Time. "

https://www.scottaaronson.com/papers/giqtm3.pdf
 

Sounds interesting. Always relevant, because arguably the "natural state" of humans is hunter-gatherer tribes. In my country, high-end retirement villages are becoming very popular for the Pro-type reasons you give. It seems some retirees, and gangs! lol, are most in tune with their roots.

I had half expected the communal living thing to go more mainstream by now (similar things appear in fiction, e.g. https://en.wikipedia.org/wiki/Too_Like_the_Lightning). It seems it needs a lot more critical mass: e.g. houses specifically designed to get the right balance between space and togetherness, a school right nearby, a gated suburb so it's child-safe, etc.

Longer term, I expect some interesting social arrangements to come from space colonies, as these kinds of experiments are forced on the inhabitants.

OK, but why would you need high res for the minds? If it's an ancestor sim and chatbots can already pass the Turing test etc., doesn't that mean you can get away with compression or lower res? The major arc of history won't be affected unless they are pivotal minds. If it's possible to compress the sims so they experience lesser consciousness than us but are still very close to the real thing (and haven't we almost already proven that can be done with our LLMs?), then an ancestor simulator would do that.
