Comments

I shared it because I thought it might be an interesting alternative view on a topic often discussed here. It was somewhat new to me, at least.

Sharing is not endorsement, if that's what you're asking. But it might be a discussion starter.

A disclaimer first, maybe:

  • I have no formal philosophical education. Nor do I have much exposure to the topic as an amateur.
  • Neither do I have any formal education in logic, but I have some exposure to the concepts from my professional endeavours.
  • These are pretty much unedited notes taken as I read the OP. At the moment I don't have enough left in me to turn them into a coherent argument, so you can treat them as random thoughts.

There might be something it is like to be a computer or robot at salamander levels of capability at least, or there might not. But it’s a well-posed possibility, at least if you start with Strong AI assumptions. My fellow AI-fear skeptics might feel I’m ceding way too much territory at this point, but yes, I want to grant the possibility of SIILTBness here, just to see if we get anywhere interesting.

But you know what does _not_, even at this thought-experiment level, possess SIILTBness? The AIs we are currently building and extrapolating, via lurid philosophical nonsense, into bogeymen that represent categorically unique risks. And the gap has nothing to do with enough transistors or computational power.

This is a weird statement. The AIs we currently build are basically stateless functions. That is because we don't let AIs change themselves. Once we invent an NN that can train itself and let it do so, we're f… doomed in any case where we messed up the error function (or where the NN can change it). SIILTBness implies continuity of experience. Saying that current AI doesn't have it is probably factually correct at the moment, but that doesn't mean it can't ever have it.
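To make the "stateless function" point concrete, here's a minimal sketch (toy model, made-up update rule, purely illustrative): the way we deploy NNs today is essentially a fixed input-to-output mapping with frozen weights, while a hypothetical self-training NN keeps folding whatever error function it is handed back into its own weights.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))  # frozen weights of a toy "deployed" model


def stateless_model(x):
    """Roughly how NNs are used today: same input -> same output, no memory, no self-change."""
    return np.tanh(W @ x)


class SelfTrainingModel:
    """Hypothetical NN that is allowed to keep training itself after deployment."""

    def __init__(self, dim=4, lr=0.01):
        self.W = rng.normal(size=(dim, dim))
        self.lr = lr

    def step(self, x, error_fn):
        y = np.tanh(self.W @ x)
        # The model folds the supplied error signal back into its own weights.
        # If we messed up error_fn (or the model could change it), the
        # trajectory of self.W is no longer under our control.
        grad = np.outer(error_fn(y) * (1 - y ** 2), x)
        self.W -= self.lr * grad
        return y


model = SelfTrainingModel()
for _ in range(3):
    model.step(rng.normal(size=4), error_fn=lambda y: y)  # toy error signal: push outputs toward zero
```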

In my opinion, there are two necessary conditions for hyperanthropomorphic projection to point to something coherent and well-posed associated with an AI or robot (which is a precondition for that something to potentially become something worth fearing in a unique way that doesn’t apply to other technologies):

  1. SIILTBness: There is something it is like to be it
  2. Hyperness: The associated quality of experience is an enhanced entanglement with reality relative to our own on one or more pseudo-trait dimensions

Almost all discussion and debate focuses on the second condition: various sorts of “super”-ness.

This is most often attached to the pseudo-trait dimension of “intelligence,” but occasionally to others like “intentionality” and “sentience” as well. Super-intentionality for example, might be construed as the holding of goals or objectives too complex to be scrutable to us.

In my experience, the word "intelligence" might be used to refer to the entity, but the actual pseudo-trait is a bit different. I'd call it "problem solving", followed, probably, by "intentionality" or "goal fixation". E.g. a paperclip maximiser has the "intention" to keep making paperclips and possesses the capacity to solve general problems, such as coming up with novel approaches to obtain raw material and neutralising interference from meatbags trying to avoid being turned into paperclips.
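A toy sketch of that distinction, with made-up actions and a deliberately silly objective: the "goal fixation" is the fixed objective, and the "problem solving" is the search over whatever actions happen to be available.

```python
from itertools import product


def paperclip_agent(world, actions, horizon=4):
    """Toy goal-fixated problem solver: the objective (paperclip count)
    never changes, but the plan search over available actions is general."""

    def run(plan, w):
        for act in plan:
            w = act(w)
        return w

    # Brute-force search over every action sequence of the given length,
    # keeping whichever plan ends with the most paperclips.
    best_plan = max(product(actions, repeat=horizon),
                    key=lambda plan: run(plan, dict(world))["paperclips"])
    return run(best_plan, dict(world))


# Purely illustrative actions.
actions = [
    lambda w: {**w, "paperclips": w["paperclips"] + w["wire"], "wire": 0},  # turn wire into clips
    lambda w: {**w, "wire": w["wire"] + 5},                                 # acquire more raw material
]
print(paperclip_agent({"paperclips": 0, "wire": 3}, actions))  # stockpiles wire first, then converts it all
```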

a bus schedule printed on paper arguably thinks 1000 moves ahead (before the moves start to repeat), and if you can’t memorize the whole thing, it is “smarter” than you.

Does it, though? Road construction is planned for next Monday and a diversion is introduced. Suddenly the schedule is very wrong, but it cannot correct its predictions. The author mentioned Gettier earlier, but here they seemingly forgot about that concept.
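A toy restatement of the objection (stop numbers and the diversion rule are made up): the printed timetable is a fixed lookup table, "smart" only in the sense of being long, and it cannot revise its predictions when Monday's roadworks arrive, whereas even a trivially adaptive predictor can.

```python
# The printed schedule: a fixed lookup, correct only while the world stays as it was at print time.
printed_schedule = {minute: f"stop {minute % 7}" for minute in range(1000)}


def printed_prediction(minute):
    return printed_schedule[minute]  # no way to react to a road closure


def adaptive_prediction(minute, closed_stops=frozenset()):
    stop = minute % 7
    while stop in closed_stops:      # Monday's road construction
        stop = (stop + 1) % 7        # reroute to the next open stop
    return f"stop {stop}"


print(printed_prediction(3))                     # "stop 3" -- wrong once stop 3 is closed
print(adaptive_prediction(3, closed_stops={3}))  # "stop 4"
```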

The sum of the scraped data of the internet isn’t about anything, the way an infant’s visual field is about the world. So anything trained on the text and images comprising the internet cannot bootstrap a worldlike experience. So conservatively, there is nothing it is like to be GPT-3 or Dalle2, because there is nothing the training data is about.

However, that is an experience. And it is world-like, but it cannot be experienced the way you're used to. You can't touch or smell it, but you can experience it. Likewise, you can manipulate it, just not the way you're used to. You cannot push anything a meter to the left, but you can write a post on Substack.

You can argue this implies the absence of AI SIILTBness, but it can also point to the non-transferability of SIILTBness, or rather to the inadequacy of the term as defined. You as a human can experience what it's like to be a salamander because there's sufficient overlap in the ability to experience the same things in a similar way, but you cannot experience what it's like to be an NN on the Internet because the experience is too alien.

This also means that an NN cannot experience what it's like to be a human (or a salamander), but it doesn't mean it cannot model humans accurately enough to be able to interact with them. And an NN on the Internet can probably gather enough information through its experiences of said Internet to build such a model.

Could you argue that the processes at work in a deep-learning framework are sufficiently close to the brain that you could argue the AI is still constructing an entirely fictional, dream-like surreal worldlike experience? One containing an unusual number of cat-like visual artifacts, and lacking a “distance” dimension, but coherent in other ways, a maya-world for itself? Perhaps it is evolving what Bruce Sterling has been calling “alt intelligence”?

Certainly. But to the extent the internet is not about the actual world in any coherent way, but just a random junk-heap of recorded sensations from it, and textual strings produced about it by entities (us) that it hasn’t yet modeled, a modern AI model cannot have a worldlike experience by overfitting the junk-heap. At best, it can have a surreal, dream-like experience. One that at best, encodes an experience of time (embodied by the order of training data presented to it), but no space, distance, relative arrangement, body envelope, materiality, physics, or anything else that might contribute to SIILTBness.

The Internet is not completely separated from the physical world (which we assume is the base reality), though. The Internet is full of webcams, sensors, IoT devices, space satellites, and such. In a sense, an NN on the Internet can experience the real world better than any human: it can see the tiniest things through an electron microscope and the biggest, furthest things through telescopes, it can see in virtually the entire EM spectrum, it can listen through thousands of microphones all over the planet at the same time in ranges wider than any ear can hear, it can register every vibration a seismograph detects (including those on Mars), it can feel atmospheric pressure all over the world, it can know how much water is in most rivers on the planet, and it knows the atmospheric composition everywhere at once.

It can likewise manipulate the world in some ways more than any particular human can. Thousands of robotic arms all over the world can assemble all sorts of machines. Traffic can be diverted by millions of traffic lights, rail switches, and instructions to navigation systems on planes and boats. How much economic influence could an NN on the Internet have by manipulating only the data on the Internet itself?

Incompleteness of experience does not mean absence or deficiency of SIILTBness. Until very recently, humans had no idea that there was a whole zoo of subatomic particles. Did that deny or inhibit human SIILTBness? It doesn't seem like it. Now we're looking at quantum physics and still coming to grips with the idea that actual physical reality can be completely different from what we seem to experience. That, however, hasn't made a single philosopher even blink.

Is this enough for it to pose a special kind of threat? Possibly. Psychotic humans, or ones tortured with weird, incoherent sensory streams might form fantastical models of reality and act in unpredictable and dangerous ways as a result. But this is closer to a psychotic homeless person on the street attacking you with a brick than a “super” anything. It is a “sub” SIILTBness, not “super” because the sensory experiences that drive the training do not admit an efficient and elegant fit to a worldlike experience that could fuel “super” agent-like behaviors of any sort.

Doesn't this echo the core concern of "the AI-fear view"? AGI might end up not human-like, yet still capable of influencing the world. Its "fantastical model of reality" could be just close enough that we end up "attacked with a brick".

So if you actually wanted to construct an AI capable of coherently evolving along trajectories that get to hyper-SIILTBness, and perhaps exhibiting super-traits of any sort, that’s where you’d start: by feeding it vast amounts of sensorily structured training data that admits a worldlike experience rather than a dreamlike one, induces an I for which SIILTBNess is at least a coherent unknown, cranking up various knobs to Super 11, and finally, waiting for it to turn superevil, or at least superindifferent to us.

This is a weird strategy to propose in a piece entitled "Beyond Hyperanthropomorphism". It basically suggests recreating an artificial human intelligence by putting it through the typical human experience and then, somehow, achieving hyper-SIILTBness by "cranking up dials". I don't believe anyone on the "AI-fear" side of the argument is actually worried about this specific scenario. After all, if the AI is human-like, there's not much of an alignment problem: the AI would already understand what humans want. We might still need to negotiate with it to convince it that it doesn't need to kill us all. Well, we have half of an alignment problem.

The other half is for the case that I believe is more likely: an NN on the Internet. In this case we would actually need to let it know what it is that we want, which is arguably harder because, to my knowledge, no one has ever fully stated it to any degree of accuracy. The OP dismisses this case on the basis of SIILTBness non-transferability.


Overall, I feel like this is not a good argument. I have vague reservations about the validity of the approach. I don't see justification for some of the claims, though the author openly admits that some claims come without justification:

I’m going to assume, without justification or argument, that we can attribute SIILTBness to all living animals above the salamander complexity point

I'm also not convinced that there's a solid logical progression from one claim to the next at every step, but I'm a little too tired to investigate further. Maybe it's just my lack of education rearing its ugly face.

In the end, I feel like the author doesn't engage fully in good faith. There are a lot of mentions of the central concept of SIILTBness, and even an OK introduction of the concept in the first two parts, but the core of the argument seems to be left out.

They do not get what it means to establish SIILTBness or why it matters.

And the author fully agrees that people don't understand why it matters, while also not actually trying to explain why it does.

For me it's an interesting new way of looking at AI, but I fail to see how it actually addresses "the AI-fear".

The books are unavailable everywhere. Can we expect more anytime soon?

Have you tried radare2? If you have, how does it stack up against IDA?

How does one uncover shadow values?

On format: a little bit of editing might improve the reading experience. Just joining some paragraphs would be a great improvement.

The technical difficulties of developing and maintaining one's own platform have been mentioned in other comments.

However, many self-owned platforms lack the revenue opportunities provided by centralized platforms. YouTube specifically has the huge benefit of built-in monetization. Most content creators on YouTube start earning money much earlier because YouTube manages ads for them. The general trend I see is that creators start getting sponsored videos somewhere between 500,000 and 1MM subscribers. Depending on the channel, that can take about a year of getting videos out at a regular pace. I hazard a guess that many would've given up much earlier if they had to think about monetization on their own instead of relying on the platform for that.