
Random ideas to expand on

https://www.theguardian.com/technology/2023/jul/21/australian-dishbrain-team-wins-600000-grant-to-develop-ai-that-can-learn-throughout-its-lifetime

https://newatlas.com/computers/human-brain-chip-ai/

https://newatlas.com/computers/cortical-labs-dishbrain-ethics/

Could this be cheaper than chips in an extreme silicon shortage? How did it learn? Can we map the connections as they form and use that to design better learning algorithms?
 

Birds vs ants/bees.

A flock of birds can be dumber than the dumbest individual bird, while a colony of bees or ants can be smarter than any individual, and smarter than a flock of birds! A bird avoiding a predator in a geometric pattern shows no intelligence: the motion is predictable, like a fluid, with no real processing. Contrast bees swarming a scout hornet, or ants building a bridge, etc. Even though there is no planning in individual ants, there is no overall plan in individual neurons either?

The more complex the pieces, the less well they fit together. Less intelligent units can form a better collective in this instance - not like human organizations.

Progression from simple cell to mitochondria - the mitochondria have no say anymore but fit in perfectly. Multi-organism systems like hives are the next level up: simpler creatures can have more cohesion at the upper level. Humans have more effective institutions in spite of their complexity because of consciousness, language, etc.

RISC vs CISC, Intel vs NVIDIA, GPUs for supercomputers. I thought about this years ago; it led to the prediction that Intel, or any other business committed to maximal CISC, would lose to cheaper alternatives.

Time to communicate a positive singularity/utopia 

Spheres of influence, like we already have: uncontacted tribes, the Amish, etc. Taking that further, super AI must leave Earth, perhaps the solar system; enhanced people move out of the Earth ecosystem to space colonies, Mars, etc.

Take the best and happiest parts of nature when expanding; don't take suffering to a million-plus stars.

Humans can't do interstellar travel faster than AI anyway, even if that were the goal: AI would have to prepare the way first, and it can travel faster. So there is no question that the majority of interstellar humanity will be AI. We need to keep Earth for people. What maximizes CEV? Keep the Earth ecosystem, and let humans progress and discover on their own?

Is the progression to go outwards: human, posthuman/Neuralink, WBE? It is in some sci-fi, e.g. Peter Hamilton and the Culture (human to WBE).

Long term, no moral system knows what to say about pleasure vs self-determination/achievement. Eventually we run out of things to invent - should progress slow asymptotically?

Explorers should be at the edge of civilization. Astronomers shouldn't celebrate JWST but complain about Starlink - that is inconsistent. The edge of civilization has expanded past low Earth orbit; that is why we get JWST. The obligation then is to put telescopes further out.

Go to WBE instead of super AI - then we know for sure it is conscious.

Is industry/tech about making stuff less conscious over time? E.g. mechanical things have zero consciousness, vs a lot when the same work is done by people. Is that a principle for AI/robots? Then there are no slaves, etc.

Can people get behind this - an implied contract with future AI? Acausal bargaining.

https://www.lesswrong.com/posts/qZJBighPrnv9bSqTZ/31-laws-of-fun

Turing test for WBE - how would you know?

Intelligence processing vs time

For search, exponential processing power gives a linear increase in rating (Chess, Go). However, these have small search spaces. For life, does the search space get bigger the further out you go?

e.g. 2 steps is 2^2 but 4 steps is 4^4. This makes sense if there are more things to consider the further ahead you look. E.g. for a house price over 1 month: the general market plus the economic trend. Over 10+ years: demographic trends, changing government policy, unexpected changes in transport patterns (new rail nearby or in a competing suburb, etc.).
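A minimal sketch of that arithmetic, assuming an illustrative fixed base of 2 and a few sample depths (these numbers are my own choices, not from the post): if the branching factor itself grows with lookahead depth (d^d) rather than staying fixed (b^d), the search space explodes far faster, so each extra step of foresight costs disproportionately more.

```python
def fixed_branching(b: int, d: int) -> int:
    """Search-space size with a constant branching factor b at depth d."""
    return b ** d


def growing_branching(d: int) -> int:
    """Search-space size if the number of relevant factors grows with depth."""
    return d ** d


# Compare how the two regimes blow up with lookahead depth.
for d in [2, 4, 8, 16]:
    print(f"depth {d:2}: fixed b=2 -> {fixed_branching(2, d):>10,}"
          f" | growing -> {growing_branching(d):>26,}")
```

At depth 2 the two regimes agree (4 vs 4), but by depth 16 the fixed case is 65,536 while the growing case is ~1.8e19, which is the intuition behind needing far more than exponential compute for long-horizon prediction.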

If this applies to tech, then regular experiments shrink the search space; you need physical experimentation to get ahead.

For AI, if it works like intuition plus search, then it needs search results to improve the intuition. It can only learn from the long term.

 

Long pause or not?

How long should we pause? 10 years? Even in a stable society there are diminishing returns - we have seen this with pure maths, physics, and philosophy: when we reach human limits, more time simply doesn't help. It is reasonable to assume the same for a CEV-like concept.

Does a pause carry danger? Is it like the clear pond before a rapid - are we already in the rapid? Then trying to stop is dangerous, possibly fatal. Of Emmett Shear's options of go fast, go slow, stop, or pause, the Singularity seems ideal, though is it possible? Is WBE better than super AI - culturally, as an elder?

1984 quote “If you want a vision of the future, imagine a boot stamping on a human face--forever.”

"Heaven is high and the emperor is far away" is a Chinese proverb thought to have originated from Zhejiang during the Yuan dynasty.

This was not possible earlier but is possible now. If democracies can go to dictatorship but not back, then a pause is bad. The best way to keep democracies is to leave, hence space colonies. Now in Xinjiang the emperor is in your pocket, and an LLM can understand anything - how far back would we have to go before this was not possible? 20 years? If that is not possible, then we are already in the white water and need to paddle forwards; we can't stop.

Deep time breaks all common ethics? 

Utility monster, experience machine, moral realism, tiling the universe, etc. Self-determination and achievement will be in the extreme minority over deep time. What to do - fake it, forget it, and keep achieving again? Just keep options open until we actually experience it.

All our training is about intrinsic motivation and valuing achievement rather than pleasure for its own sake. There is a great asymmetry in common thought: "meaningless pleasure" makes sense and seems bad, or at least not good, but "meaningless pain" doesn't become any less bad. Why should that be the case? Has evolution biased us not to value pleasure, or to experience it, as much as we "should"? Should we learn to take pleasure, and regard the thought "meaningless pleasure" as itself a defective attitude? If you could change yourself, should you dial down the need to achieve if you lived in a solved world?

What is "should" in is-ought. Moral realism in the limit? "Should" is us not trusting our reason, as we shouldn't. If reason says one thing, then it could be flawed as it is in most cases. Especially as we evolved, then if we always trusted it, then mistakes are bigger than benefits, so the feeling "you don't do what you should" is two systems competing, intuition/history vs new rational.

Rootclaim COVID origins debate:


This piece relates to this Manifold market and these videos.

I listened to most of the 17+ hours of the debate and found it mostly interesting, informative, and important for anyone either interested in COVID origins or practicing rationality.

I came into this debate at about 65-80% lab leak, and left feeling <10% is most likely.

Key takeaways

  • The big picture of the lab leak is easy to understand and sounds convincing; however, the details don't check out when put under scrutiny.
  • Both sides attempted Bayesian estimates and probabilities and arrived at absurdly different numbers.
  • Rootclaim failed to impress me. The takeaway I got is that they are well suited to, say, murder cases where there is history to go off, but when it comes to such a large, messy, one-off event as COVID origins, they didn't know what evidence to include, how to properly weight it, etc. They didn't present a coherent picture of why we should accept their worldview and estimates. An example: they asserted that even if zoonosis was the origin, the claimed market was not, because the details of infected animals and humans weren't what they expected. That seems an absurd claim to make with confidence given the data available. When forced to build models (rather than rely on multiplying probabilities) they were bad at it and overconfident in the conclusions drawn from those models.
  • More generally, this led me to distrust Bayesian-inference-type methods in complicated situations. Two smart, reasonably well-prepared positions could be off by >1e12 in their relative estimates (see the sketch after this list for how such gaps compound). Getting all the details right and building consistent models that are peer-reviewed by experts cannot be made up for by attaching uncertainties to things.
  • Regarding AI, I now have more sympathy for the claim that P(Doom) is a measure of how an individual feels, rather than a defensible position on what the odds actually are.
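A minimal sketch of how such divergence can arise, with made-up likelihood ratios (none of these numbers come from the actual debate): when the posterior odds are a product of many per-evidence likelihood ratios, even a consistent ~10x disagreement on each factor compounds multiplicatively into an astronomical gap between conclusions.

```python
import math

# Hypothetical likelihood ratios (lab leak : zoonosis) for ten pieces of
# evidence, as judged by two debaters who disagree by roughly 10x per factor.
# All values are invented for illustration only.
side_a = [10, 5, 8, 20, 4, 10, 6, 15, 8, 10]          # leans lab leak
side_b = [1, 0.5, 0.8, 2, 0.4, 1, 0.6, 1.5, 0.8, 1]   # leans zoonosis

odds_a = math.prod(side_a)  # product of likelihood ratios -> posterior odds
odds_b = math.prod(side_b)

print(f"Side A odds (lab leak : zoonosis): {odds_a:.3g}")
print(f"Side B odds (lab leak : zoonosis): {odds_b:.3g}")
# Ten ~10x per-factor disagreements compound to ~1e10; a dozen reach ~1e12.
print(f"Gap between the two conclusions: {odds_a / odds_b:.3g}x")
```

The point is that small, individually defensible judgment calls multiply, so "just be Bayesian" barely constrains the answer in messy one-off cases unless the underlying models are built and checked carefully.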