Has anybody tried to quantify how much worse fish farm conditions are compared to the wild? From anecdotal but somewhat first-hand experience, wild environments for fish can hardly be described as anything other than horror as well

Answer by Chinese Room · May 26, 2023

Perhaps they prefer not to be held responsible when it happens

(I've only skimmed the post, so this might have already been discussed there.)

The same argument might as well apply to:

  • mental models of other people (which are obviously distinct and somewhat independent from the subjects they model)
  • mental model of self (which, according to some theories of consciousness, is the self)

All of this, and the second point in particular, connects pretty well to some Buddhist interpretations, I think, which also propose a solution, i.e. reduction/cessation of such mental modelling

Another suspicious coincidence/piece of evidence pointing to September 2019 is right there in the S&P 500 chart: the slope of the linear upward trend changes significantly around the end of September 2019, as if to preempt the subsequent crash/make it happen from a higher base
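For anyone who wants to check this for themselves, here is a minimal sketch of the comparison I have in mind: fit a linear trend to log closes before and after end of September 2019 and compare the slopes. The file name `sp500_daily.csv` and its `date`/`close` columns are assumptions for illustration, and this is not a rigorous changepoint test.

```python
# Sketch: compare the linear trend slope of S&P 500 daily closes before and
# after 2019-09-30. Assumes a hypothetical CSV "sp500_daily.csv" with
# columns "date" and "close"; illustrative only, not a changepoint test.
import numpy as np
import pandas as pd

df = pd.read_csv("sp500_daily.csv", parse_dates=["date"])
df = df[(df["date"] >= "2019-01-01") & (df["date"] < "2020-02-01")].sort_values("date")

breakpoint = pd.Timestamp("2019-09-30")

def daily_slope(segment: pd.DataFrame) -> float:
    """Least-squares slope of log(close) per trading day."""
    x = np.arange(len(segment))
    y = np.log(segment["close"].to_numpy())
    slope, _intercept = np.polyfit(x, y, 1)
    return slope

before = df[df["date"] < breakpoint]
after = df[df["date"] >= breakpoint]

print(f"slope before: {daily_slope(before):.5f} log-points/day")
print(f"slope after:  {daily_slope(after):.5f} log-points/day")
```

A noticeably larger post-breakpoint slope over that window would be the "steeper trend from a higher base" pattern described above, though the window and breakpoint choices obviously matter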

Another way to make dressing nice easier is to invest some time in becoming more physically fit, since a larger share of clothes will look good on a fit person. The obvious health benefits are a nice bonus

While this particular alignment case for humans does seem reasonably reliable, it all depends on humans not yet being proficient at self-improvement/modification. For an AGI capable of self-improvement, this goes out the window fast

Another angle is that in the (unlikely) event someone succeeds in aligning AGI to human values, these could include the desire for retribution against unfair treatment (which is, I think, a pretty integral part of hunter-gatherer ethics). Alignment is more or less another word for enslavement, so such retribution is to be expected eventually

What I meant is that self-driving *safely* (i.e. at least somewhat more safely than humans currently do, including all the edge cases) might be an AGI-complete problem, since:

  1. We know it's possible for humans
  2. We don't really know how to provide safety guarantees in the sense of conventional high-safety systems for current NN architectures
  3. Driving safely with cameras likely requires having considerable insight into a lot of societal/game-theoretic issues related to infrastructure and other driver behaviors (e.g. in some cases drivers need to guess a reasonable intent behind incomplete infrastructure or other driver actions, where determining what's reasonable is the difficult part)

In contrast, if we have precise and reliable enough 3D sensors, we can relegate safety to conventional physics-based non-NN controllers and safety programming techniques, which we already know how to work with. The main problems with such sensors are currently cost and weather resistance
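To make the contrast concrete, here is a minimal sketch of the kind of physics-based, non-NN safety check I mean: brake whenever a trusted forward range measurement falls inside the worst-case stopping distance. The reaction time, deceleration, and margin values are illustrative assumptions, not tuned numbers, and a real system would of course involve far more than this.

```python
# Sketch of a physics-based (non-NN) safety check: given a reliable forward
# range measurement (e.g. from lidar), brake whenever the obstacle is inside
# the worst-case stopping envelope. All parameter values are assumptions.

REACTION_TIME_S = 0.3      # assumed sensing + actuation latency
MAX_DECEL_MPS2 = 6.0       # assumed worst-case braking deceleration
SAFETY_MARGIN_M = 2.0      # assumed extra buffer

def stopping_distance_m(speed_mps: float) -> float:
    """Worst-case stopping distance: reaction travel plus braking distance."""
    return speed_mps * REACTION_TIME_S + speed_mps ** 2 / (2.0 * MAX_DECEL_MPS2)

def must_brake(speed_mps: float, forward_range_m: float) -> bool:
    """True if the measured forward clearance is inside the safe envelope."""
    return forward_range_m <= stopping_distance_m(speed_mps) + SAFETY_MARGIN_M

# Example: at 20 m/s (~72 km/h) with 45 m of measured clearance
print(must_brake(20.0, 45.0))
```

The point is that this kind of logic is verifiable with standard safety-engineering techniques, but only if the range input itself is trustworthy, which is exactly what cheap camera-based sensing struggles to guarantee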

Answer by Chinese Room · Nov 14, 2022

My current hypothesis is:

  1. Cheap practical sensors (cameras and, perhaps, radars) more or less require (aligned) AGI for safe operation
  2. Better 3D sensors (lidars), which could in theory enable safe driving with existing control-theory approaches, are still expensive and impaired by weather and, possibly, by interference from other cars with similar sensors, i.e. impractical

No references, but can expand on reasoning if needed

Addendum WRT the Crimean economic situation: the North Crimean Canal (https://en.wikipedia.org/wiki/North_Crimean_Canal), which provided 85% of the peninsula's water supply, was shut down from 2014 to 2022, reducing the land under cultivation 10-fold, which had a severe effect on the region's economy
