

OK firstly, if we are talking about fundamental physical limits, how would sniper drones not be viable? Are you saying a flying platform could never compensate for recoil, even if precisely calibrated beforehand? What about the fundamentals of guided bullets - a bullet with over a 50% chance of hitting a target is worth paying for.

Your points - 1. The idea is that a larger shell (not a regular-sized bullet) just obscures the sensor for a fraction of a second, in a coordinated attack with the larger Javelin-type missile. Such shells may be considerably larger than a regular bullet, but much cheaper than a missile. Missile- or sniper-sized drones could be fitted with such shells, depending on what the optimal size was.

Example shell (without 1 km range, I assume). However, note that chaff is not currently optimized for the described attack; the fact that there is currently no shell suited to this use is not evidence that one would be impractical to create.

The principle here is about efficiency and cost. I maintain that against armor with hard-kill defenses, a combined attack of sensor blinding and anti-armor missiles is more efficient than missiles alone. E.g. it may take 10 simultaneous Javelins to take out a target, versus 2 Javelins plus 50 simultaneous chaff shells. The second attack will be cheaper, and the optimized "sweet spot" will always include some sensor-blinding component. Do you claim that the optimal coordinated attack would have zero sensor blinding?
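To make the cost claim concrete, here is a toy cost model. All prices and quantities are illustrative assumptions of mine, not real figures - the point is only that when blinding shells are orders of magnitude cheaper than missiles, the mixed salvo wins:

```python
# Toy cost comparison for the coordinated attack described above.
# JAVELIN_COST and CHAFF_COST are illustrative assumptions, not real prices.
JAVELIN_COST = 200_000   # assumed unit cost per anti-armor missile
CHAFF_COST = 500         # assumed unit cost per sensor-blinding shell

def attack_cost(javelins: int, chaff_shells: int) -> int:
    """Total cost of one coordinated salvo."""
    return javelins * JAVELIN_COST + chaff_shells * CHAFF_COST

missiles_only = attack_cost(10, 0)   # saturate the hard-kill defense with missiles alone
mixed = attack_cost(2, 50)           # blind the sensors, then strike with fewer missiles

print(missiles_only)  # 2000000
print(mixed)          # 425000
```

Under these assumed prices the mixed attack costs roughly a fifth as much, which is why the optimum is unlikely to sit at zero sensor blinding.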

2. Leading on from (1), I don't claim light drones will be able to survive a laser on their own. I regard a laser as a serious obstacle that is attacked with the swarm attack described, before the territory is secured. That is: blind the sensor / obscure the laser, and simultaneously converge with missiles. The drones need to survive just long enough to fire off the shells (i.e. come out from ground cover, shoot, get back). While a laser can destroy a shell in flight, can it take out 10-50 smaller blinding shells fired from 1000 m at once?

(I give 1000 m as an example too; flying drones would use ground cover to get as close as they could. I assume they will almost always be able to get within 1000 m of a ground target using the ground as cover.)

Such optimizations are a reason I believe we are not in a simulation. Optimizations are essential for a large sim, and I expect them not to be consciousness-preserving.

But it could matter if it's digital vs continuous. <OK, longer post, and some thoughts perhaps a bit off topic>

Your A, B, C, D ... leads to some questions about what is conscious and what isn't.

Where exactly does the system stop being conscious?

1. Biological mind with neurons

2. Very high-fidelity render in silicon, with neurons modelled down to the chemistry rather than just firing pulses

3. Classic spiking neural net approximation done in discrete maths, appearing almost indistinguishable from (1) and (2), producing system states A, B, C, D

4. Same as (3), but states are saved/retrieved from memory, not calculated.

5. States retrieved from memory many times - A,B,C,D ... A,B,C,D ... does this count as one experience or many?

6. States retrieved in mixed order: A, D, C, B ...

7. States D,D,D,D, A,A,A,A, B,B,B,B, C,C,C,C ... does this count as 4x, or as nothing?

A possible cutoff is between (3) and (4): retrieving instead of calculating makes it non-conscious. But what about caching - some states calculated, some retrieved?

As you probably know, this has been gone over before, e.g. by Scott Aaronson. I wonder what your position is? To quote:

"Maybe my favorite thought experiment along these lines was invented by my former student Andy Drucker.  In the past five years, there’s been a revolution in theoretical cryptography, around something called Fully Homomorphic Encryption (FHE), which was first discovered by Craig Gentry.  What FHE lets you do is to perform arbitrary computations on encrypted data, without ever decrypting the data at any point.  So, to someone with the decryption key, you could be proving theorems, simulating planetary motions, etc.  But to someone without the key, it looks for all the world like you’re just shuffling random strings and producing other random strings as output.

You can probably see where this is going.  What if we homomorphically encrypted a simulation of your brain?  And what if we hid the only copy of the decryption key, let’s say in another galaxy?  Would this computation—which looks to anyone in our galaxy like a reshuffling of gobbledygook—be silently producing your consciousness?"

and last but not least:

"But, in addition to performing complex computations, or passing the Turing Test, or other information-theoretic conditions that I don’t know (and don’t claim to know), there’s at least one crucial further thing that a chunk of matter has to do before we should consider it conscious.  Namely, it has to participate fully in the Arrow of Time. "

Sounds interesting. It's always relevant, because arguably the "natural state" of humans is hunter-gatherer tribes. In my country, high-end retirement villages are becoming very popular for the Pro-type reasons you give. It seems some retirees - and gangs! lol - are most in tune with their roots.

I had half expected the communal living thing to go more mainstream by now (there are similar things in fiction). It seems it needs a lot more critical mass: e.g. specifically designed houses to get the right balance between space and togetherness, a school right nearby, a gated suburb etc. so it's child-safe.

Longer term, I expect some interesting social arrangements to come from space colonies, as these kinds of experiments are forced on the inhabitants.

OK, but why would you need high res for the minds? If it's an ancestor sim and chatbots can already pass the Turing test etc., doesn't that mean you can get away with compression or lower res? The major arc of history won't be affected unless they are pivotal minds. If it's possible to compress the sims so they experience lesser consciousness than us but are still very close to the real thing (and haven't we almost already proven that can be done, with our LLMs), then an ancestor simulator would do that.

If that's right, and low-res sims are almost always sufficient, then that destroys the main ancestor-sim argument for our conscious experience being simulated. Low res is not conscious in the same way we are: a different reference class from base-reality bio-consciousness.

If Windows 95 was ever conscious (shock!) and existed at a time when VMs existed, it would be very sure it was in a virtual machine (i.e. something like being simulated). It would reason about Moore's law and resources going up exponentially, and be convinced it was in a VM. However, I am pretty sure it would be wrong most of the time: most Win95 instances in history were not run in a VM, and we have stopped bothering now. It's only an analogy of sorts, but it gives an interesting result.

Random ideas to expand on

Could this be cheaper than chips in an extreme silicon shortage? How did it learn? Can we map connections forming and make better learning algorithms?

Birds vs ants/bees:

A flock of birds can be dumber than the dumbest individual bird; a colony of bees/ants can be smarter than the individual, and smarter than a flock of birds! A bird avoiding a predator in a geometric pattern shows no intelligence - predictability, like a fluid, requires no processing. Contrast bees swarming the scout hornet, or ants building a bridge etc., even though there is no planning in the individual ant - just as there is no overall plan in individual neurons.

The more complex the pieces, the less well they fit together. Less intelligent units can form a better collective in this instance. Not like human orgs.

Progression from simple cell to mitochondria - the mitochondria have no say anymore but fit in perfectly. Multi-organism structures like hives are the next level up: simpler creatures can have more cohesion at the upper level. Humans have more effective institutions in spite of our complexity, because of consciousness, language etc.

RISC vs CISC, Intel vs NVIDIA, GPUs for supercomputers. I thought about this years ago; it led to the prediction that Intel, or any other business maxed out on CISC, would lose to cheaper alternatives.

Time to communicate a positive singularity/utopia 

Spheres of influence, like we already have: uncontacted tribes, the Amish etc. Taking that further: super AI must leave Earth, perhaps the solar system; enhanced people move off the Earth ecosystem, to space colonies, Mars etc.

Take the best/happiest parts of nature to expand; don't take suffering to a million-plus stars.

Humans can't do interstellar travel faster than AI anyway. Even if that were the goal, the AI would have to prepare it first, and it can travel faster. So no question: the majority of interstellar humanity will be AI. We need to keep Earth for people. What is max CEV? Keep the Earth ecosystem, so humans can progress and discover on their own?

Is the progression going outwards human, then posthuman/Neuralink, then WBE? It is in some sci-fi - Peter Hamilton / the Culture (human to WBE).

Long term, all moral systems don't know what to say on pleasure vs self-determination/achievement. Eventually we run out of things to invent - should it go asymptotically slower?

Explorers should be on the edge of civilization. Astronomers shouldn't celebrate JWST but complain about Starlink - that is inconsistent. The edge of civilization has expanded past low Earth orbit; that is why we get JWST. The obligation then is to put telescopes further out.

Go to WBE instead of super AI - then we know for sure it is conscious.

Is industry/tech about making stuff less conscious over time? E.g. mechanical things have zero consciousness, vs a lot when the work is done by people. Is that a principle for AI/robots? Then there are no slaves etc.

Can people get behind this? An implied contract with future AI? Acausal bargaining.

Turing test for WBE - how would you know?

Intelligence processing vs time

For search, exponential processing power gives a linear increase in rating (Chess, Go). However, these are small search spaces. For life, does the search space get bigger the further out you look?

E.g. 2 steps is 2^2, but 4 steps is 4^4. This makes sense if there are more things to consider the further ahead you look. E.g. for a house price over 1 month: the general market plus the economic trend. Over 10+ years: demographic trends, changing government policy, unexpected changes in transport patterns (a new rail line nearby or in a competing suburb etc.).
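A minimal sketch of the two regimes above. The growth law b(d) = d, where the branching factor equals the lookahead depth, is my illustrative assumption for "more things to consider the further ahead you look", not a measured fact:

```python
# Nodes explored by naive lookahead search under two regimes.
# Fixed branching factor: nodes = b**d, so each doubling of compute buys a
# constant extra depth (the Chess/Go regime: exponential compute -> linear rating).
# Growing branching factor b(d) = d: nodes = d**d, so extra compute buys
# less and less depth the further out you look.

def nodes_fixed(b: int, d: int) -> int:
    """Search tree size with a constant branching factor b."""
    return b ** d

def nodes_growing(d: int) -> int:
    """Search tree size when the branching factor equals the depth itself."""
    return d ** d

for depth in (2, 4, 8):
    print(depth, nodes_fixed(2, depth), nodes_growing(depth))
# 2 4 4
# 4 16 256
# 8 256 16777216
```

At depth 8 the growing-branching regime already needs ~65,000x the compute of the fixed regime, which is the sense in which long-horizon prediction might resist brute-force search.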

If this applies to tech, then regular experiments shrink the search space: you need physical experimentation to get ahead.

For AI, if it works like intuition plus search, then you need search to improve the intuition. It can only learn from the long term.


Long pause or not?

How long should we pause? 10 years? Even in a stable society there are diminishing returns - we have seen this with pure maths, physics, and philosophy: once we reach human limits, more time simply doesn't help. It's reasonable to assume the same for a CEV-like concept.

Does a pause carry danger? Is it like the clear pond before a rapid - and are we already in the rapid, where trying to stop is dangerous? Emmett Shear's "go fast / slow / stop / pause" framing: a pause before the Singularity seems ideal, but is it possible? Is WBE better than super AI - culturally, as an elder?

1984 quote “If you want a vision of the future, imagine a boot stamping on a human face--forever.”

"Heaven is high and the emperor is far away" is a Chinese proverb thought to have originated from Zhejiang during the Yuan dynasty.

Total control was not possible earlier, but is possible now. If democracies can go to dictatorship but not back, then a pause is bad. The best way to keep democracies is to leave, hence space colonies. Now, in Xinjiang, the emperor is in your pocket, and an LLM can understand anything. How far back would we have to go before this was not possible? 20 years? If going back is not possible, then we are already in the white water, and we need to paddle forwards; we can't stop.

Does deep time break all common ethics?

Utility monster, experience machine, moral realism, tiling the universe etc. Self-determination and achievement will be in the extreme minority over many years. What to do - fake it, forget it, and keep achieving again? Just keep options open until we actually experience it.

All our training is about intrinsic motivation and valuing achievement rather than pleasure for its own sake. There is a great asymmetry in common thought: "meaningless pleasure" makes sense and seems bad, or at least not good, but "meaningless pain" doesn't thereby become less bad. Why should that be the case? Has evolution biased us not to value pleasure, or not to experience it as much as we "should"? Should we learn to take pleasure, and regard the thought "meaningless pleasure" as itself a defective attitude? If you could change yourself, should you dial down the need to achieve if you lived in a solved world?

What is "should" in is-ought. Moral realism in the limit? "Should" is us not trusting our reason, as we shouldn't. If reason says one thing, then it could be flawed as it is in most cases. Especially as we evolved, then if we always trusted it, then mistakes are bigger than benefits, so the feeling "you don't do what you should" is two systems competing, intuition/history vs new rational.

"most likely story you can think of that would make it be wrong" - that can be the hard part. For investments its sometimes easy - just they fail to execute, their competitors get better, or their disruption is itself disrupted.
Before the debate I put Lab Leak at, say, 65-80%; now it's more like <10%. The most likely story/reason I had for natural origin being correct (before I saw the debate) was that the host had been found, and the suspicious circumstances were the result of an incompetent cover-up and general noise/official lies, mostly by the CCP.
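As a sanity check on how strong the debate's evidence had to be, a quick Bayes calculation in odds form. The 70% prior and 10% posterior are taken from the figures above (70% is within my stated 65-80% range); the likelihood ratio is derived from them:

```python
# How strong must evidence be to move a belief from ~70% to ~10%?
# Prior and posterior are the figures quoted above; the implied
# likelihood ratio follows from Bayes' rule in odds form:
#   posterior_odds = likelihood_ratio * prior_odds

def odds(p: float) -> float:
    """Convert a probability to odds."""
    return p / (1 - p)

prior, posterior = 0.70, 0.10
likelihood_ratio = odds(posterior) / odds(prior)

print(round(likelihood_ratio, 3))  # 0.048, i.e. roughly 21:1 evidence against
```

So the debate had to supply roughly 21:1 evidence against lab leak to produce that shift, which is a lot, but plausible for hours of detailed argument on a topic where my prior rested on a few suspicious circumstances.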

Well, I can't say for sure that LL was wrong, of course, but I changed my mind for a reason I didn't anticipate - i.e. a high-quality debate pitched sufficiently close to my level of understanding.

For some other things it's hard to come up with a credible story at all; e.g. I would really struggle to do this for AGW being wrong.

Some investing advice I heard: when committing to a purchase, write the story you think is most likely to make you lose your money. Perhaps you could identify your important beliefs - especially the controversial ones - and each year write down the most likely story you can think of that would make each one wrong. I also believe you can only fully learn from your own experience, so building up a track record is necessary.
