Gesild Muka

I would guess that the percentage of gay men who watch live music is roughly the same as the percentage who watch live sports (or pretty much any other leisure activity), but openly gay men have historically been more common at concerts. Gay men were considered dangerous deviants for a long time; maybe classical music and opera became go-to venues for openness because the audience is mostly adults, so you could be openly gay without being harassed or accused of ulterior motives. My main belief: the stereotype comes from association, not from anything intrinsic.

Historically, there have been few public places where you could be openly gay and not be harassed; concerts are one of them.

I can’t help but read this simply as a politician worrying about their future hold on power. (I’d be curious to know how leaders discuss AI behind closed doors.)

I mostly agree with the last part of your post, about some experts never agreeing on whether others (animal, artificial, etc.) are conscious. A possible solution would be to come up with new language or nomenclature to describe the different possible spectra and the different dimensions they fall under. So many disagreements on this topic seem to derive from different parties having different definitions of AGI or ASI.

Here's how I tried (I haven't 100% succeeded): I decided that what goes on in my head wasn't enough. For a long time it was enough; I'd think about the things that interested me, maybe discuss them with some people, and then move on to the next thing. This went on for years, and some of that time was spent thinking about how I might put all that mental discourse to use, but I never did. I worked day jobs producing things I didn't care about and spent my free time exploring. Eventually I quit my day job, since I was spending more energy on personal pursuits anyway, and found ways to apply all my meta-practice to object-level work. I landed in teaching, and while I still get stuck in meta-land, I find it useful to differentiate between understanding and intuition: meta-level exploration improves understanding, while object-level practice improves intuition. I decided that understanding wasn't enough.

With their sharper senses, I'd imagine dogs experience the world in a much richer way than humans do. Depending on your definition, you could say that makes dogs more 'conscious'. This opens the door to many other animals with bigger brains and more complex sensory organs than humans: are they also more conscious?

I really love movies. This year I’ve gone to more re-releases and anniversary showings than new releases, which I chalk up to the formulaic thinking behind newer movies. So we’re not running out of art; rather, existing niches are being filled in ever more clever ways, arguably faster than new niches emerge.

It could also have to do with the nature of film production. If a movie takes five years to make, the design and production team is predicting what viewers will want five years in the future. The result can be stale, overly commercialized movies.

That’s a rather extreme idea; even if humanity were on the brink of extinction, deceit would be hard to justify.

We haven’t even scratched the surface of possible practical solutions, and once those are exhausted there are many more possible paths.

Perhaps the standards and practices for who can, and should, teach AI safety (or related new fields) ought to be better defined.

There are many atoms out there and many planets to strip-mine, and a superintelligence has effectively infinite time. Interspecies competition makes sense depending on where you place the intelligence dial. I assume that any intelligence that’s 1,000,000 times more capable than the next one down the ladder will ignore its ‘competitors’ (again, there could be collateral damage, but likely not large-scale extinction). If you place the dial at lower orders of magnitude, then humans are a greater threat to AI, AI reasoning will be closer to human reasoning, and we should probably take greater precautions.

To address the first part of your comment: I agree that we’d be largely insignificant, and I think it would be more inconvenient to wipe us out than to just go somewhere else or wait a million years for us to die off, for example. The closer a superintelligence is to human intelligence, the more likely it is to act like a human (such as deciding to wipe out the competition). The more alien the intelligence, the more likely it is to leave us to our own devices. I’ll think more on where the cutoff might be between dangerous AI and largely oblivious AI.
