Dentin

Yep, that pretty much handles it. Thanks for the update!

I don't find it surprising; 0.1% is a fairly low bar here on LW. I'm not considered that unusual here, and my calibrated guess is that I'm in the 0.3% category. There's a million people in the USA alone at that level, and three hundred thousand at 0.1%. That's a wide pool to select from.

Personally I wouldn't be surprised if Musk was substantially above the top 0.1%. I've seen a number of technical interviews with him; he and I have similar backgrounds and technical field strengths; and we are approximately the same age. I feel able to properly evaluate his competence, and I do not find it lacking.

For a really good example of what I would consider a 'dumb' way for AGI misalignment to be problematic, I recommend "Accelerando" by Charles Stross. It's available in text/HTML form for free from his web site. Even now, after twenty years, it's still packed with ideas.

(FYI, sections are about ten years apart in the book, but the last time I read it, the dates seemed off by a factor of two or so. E.g., 2010 in the book corresponds loosely to 2020 in real life, 2020 in the book corresponds loosely to 2040, etc.)

In that book, the badness largely comes from increasingly competent / sentient corporate management and legal software.

I totally understand where you're coming from, and I apologize for straddling the line on Norm One. I saw that it was heavily downvoted without comment, and that struck me as unhelpful.

Regarding the post itself, it wasn't a matter of being unable to understand. It seemed likely to me that there were insights there, and that if I spent enough time on it I could pull them out. It was more about the other demands on my time, which I suspect isn't a unique circumstance.

Regarding probability of mistake, I think that's an unhelpful way of looking at it. IMO it's not so much mistake, as 'interface incompatibility'. A particular presentation style will be varying levels of compatible with different readers, with different message acceptance and error rates at different receivers. The total transmission of information is across the distribution. Minor changes to the presentation style sometimes have outsized effects on total information transfer. Words are hard.

The general point is that if your goal is information transfer, it's less about 'mistake' than getting the best integrated product across the distribution. If you have a different goal ("just getting the words out" is common for me), then optimizing for delivery may be irrelevant.

Real quick, minor readability concern: I was about a quarter of the way through the post, and fairly confused, before I figured out that the line in the pictures was facing backwards from my mental model.

IOW it wasn't intuitively obvious to me early in the post what 'in front of' or 'behind' meant. It might be worth indicating front/back on the images.

Not downvoting as I see some potential here, but also not upvoting. This post is very long, with little structure, and an unclear, tedious-to-derive takeaway. I'd recommend at a minimum splitting it into sections, such as the AstralCodexTen "I II III IV V VI ..." scheme with opening/closing summaries. I would also guess at least half of it could be removed without damaging what you intend to convey.

In other words, there might be good content / ideas in here, but it would take too much effort for me to extract them. There are a great many things competing for my time, and I must be choosy about what I spend that time on.

AIUI, you've got the definition of a p-zombie wrong in a way that's probably misleading you. Let me restate the above:

"something that is externally indistinguishable from an entity that experiences things, but internally does not actually experience things"

The whole p-zombie thing hinges on what it means to "experience something", not whether or not something "has experience".

Answer by Dentin, Jan 24, 2023

The problem here is that you're using undefined words all over the place. That's why this is confusing. Examples:

  1. "how would a compatibilist explain why the mentally insane (or hypnotized etc.) are not morally responsible?"

What is 'morally' in this context? What's the objective, "down at the quantum mechanical state function level" definition of 'moral'?

What exactly do you mean by 'responsible'?

  2. "would a compatibilist think that a computer programmed with a chess-playing algorithm has free will or is responsible for its decisions?"

What is a 'decision' here? Does that concept even apply to algorithms?

What does 'free will' mean here? Does 'free will' even make sense in this context?

  3. "how about animals? Do they have free will? Is my dog in any sense "responsible" for peeing on the carpet?"

Again, same questions: What do you mean by 'free will'? What do you mean by 'responsible'? The definitions you choose, are they objective, based on the territory, or are they labels on the map that we're free to reassign as we see fit?

The rest of the post continues in a similar vein. You're running into issues because you're confusing the words for being the reality, and saying "hey, these words don't match up". That's not a problem with reality; that's a problem with the words.

My advice would be to remember that ultimately, at the bottom of physics, there's only particles/forces/fields/probabilities - and nowhere in the rules governing physics is there a fundamental force for 'free will' or a particle for 'responsibility'.

To be fair, even if what you're referring to above is true (I don't believe it is - lookup table compression is a thing), it's an implementation detail. It doesn't matter that a naive implementation might not fit in our current observable universe; it need merely be able to exist in some universe for the argument to hold.

And in a way, this is my core problem with Searle's argument. I believe you can fully emulate a human with both sufficiently large lookup tables, and also with pretty small lookup tables combined with some table expansion/generation code running on an organic substrate. I don't challenge the argument based on the technical feasibility of the table implementation. I challenge it on the basis that the author mistakenly believes the implementation of any given table (static lookup table versus algorithmic lookup) somehow determines consciousness.
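To make the point concrete, here's a toy sketch of my own (not from the original discussion): the same input-to-output mapping implemented once as an algorithm and once as a static precomputed table. The function and domain are arbitrary illustrations; the point is that an external observer querying either one sees identical behavior, so the implementation choice alone can't be what matters.

```python
def algorithmic(n: int) -> int:
    """Compute the answer 'on the fly' at query time."""
    return n * n

# Precompute a static lookup table covering the same domain.
# No computation happens at query time; it's pure table lookup.
TABLE = {n: n * n for n in range(100)}

def table_lookup(n: int) -> int:
    """Answer by static lookup alone."""
    return TABLE[n]

# Externally indistinguishable over the shared domain.
assert all(algorithmic(n) == table_lookup(n) for n in range(100))
```

The table version trades memory for computation, and table compression or on-demand table generation blurs the line between the two further, which is exactly why drawing a consciousness boundary between them seems arbitrary to me.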

Please clarify/reword your statement; I can't figure out what you're trying to say. The word "that" is almost completely unspecified.
