Gordon Seidoh Worley

If you are going to read just one thing I wrote, read The Problem of the Criterion.

More AI-related stuff collected over at PAISRI

Sequences

Advice to My Younger Self
Fundamental Uncertainty: A Book
Zen and Rationality
Filk
Formal Alignment
Map and Territory Cross-Posts
Phenomenological AI Alignment

Comments

It'd be cool if this were interactive and highlighted the things you'd consider a lie based on where you fall on the chart, since any particular stance implies that things other stances consider lies would also be lies to you.

You can actually ask the LLM to give an answer as if it were some particular person. For example, just now, to test this, I did a chat with Claude about the phrase "wear a mask". I asked what it would do upon hearing this phrase from public health officials if it were a scientist, a conspiracy theorist, or a general member of the public, and in each case it gave a reasonably tailored response reflecting those differences. So if you know your message is going to a particularly unusual audience, or you want to know how different types of people will interpret the same message, you can get it to give you some information on this.
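(For anyone who wants to script this rather than use the chat interface, here's a minimal sketch of the same idea using the Anthropic Python SDK. The model name, personas, and prompt wording are illustrative placeholders, not the exact prompts I used.)

```python
# Sketch: ask Claude how different personas would react to the same message.
# Assumes the anthropic package is installed and ANTHROPIC_API_KEY is set.
import anthropic

client = anthropic.Anthropic()

personas = [
    "a public health scientist",
    "a conspiracy theorist",
    "a general member of the public",
]
message = "wear a mask"

for persona in personas:
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder; use whatever model you have access to
        max_tokens=300,
        system=f"Answer as if you were {persona}.",
        messages=[
            {
                "role": "user",
                "content": f'How would you react upon hearing "{message}" from public health officials?',
            }
        ],
    )
    print(f"--- {persona} ---")
    print(response.content[0].text)
```

Each call is independent, so the personas don't share any context, which matches the fresh-chat approach I describe below.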

None of your arguments land, and I think that's why you're getting downvoted: they are mere outlines of arguments that don't actually make their case by getting into the details. You seem to hope we'll intuit the rest of your arguments, but you've posted this in a place that's maximally unlikely to share the intuitions that lead you to think Eliezer is deeply irrational rather than merely mistaken on some technical points.

I think the average LessWrong reader would love to know if Eliezer is wrong about something he wrote in the sequences, but that requires both that he actually be wrong and that you clearly argue your case that he's wrong. Otherwise it's just noise.

One of the things we can do with five words is pick them precisely.

Let's use "you get about five words" as an example. Ray could have phrased this as something like "people only remember short phrases", but this is less precise. It leaves you wondering, how long is "short"? Also, maybe I don't think of myself as lumped in with "people", so I can ignore this. And "phrase" is a bit of a fancy word for some folks. "Five" is really specific, even with the "about" to soften it, and gives "you get about five words" a lot of inferential power.

Similarly, "wear a mask" was too vague. I think this was done on purpose at first because mask supplies were not ramped up, but it had the unfortunate effect that many people wore ineffective masks with poor ROI. We probably would have been better off with a message like "wear an N95 near people", but at first N95s were in short supply, and people might not have worn a mask at all if they didn't have an N95, rather than wearing a worse mask.

On the other side, "wear a mask" demanded too much. Adding nuance like "when near people inside" would have been really helpful. It would have avoided a lot of annoying masking policies that had little marginal impact on transmission but caused a lot of inconvenience, and that likely reduced how often people wore masks in situations where they really needed to, because they were tired of wearing them. For example, I saw plenty of people wear masks outside, but then take them off in their homes with people outside their bubble, because they treated masks like a raincoat: no need to keep it on when you're "safe" in your home, regardless of who's there.

Yep, this is actually how I used Claude in all the above experiments. I started new chats with it each time, which I believe don't share context with each other. Putting them in the same chat seemed likely to risk contamination from earlier context, so I wanted it to come at each task fresh.

I see most of the work you describe about ontology as extra abstractions for reasoning about ontologies, layered on top of the basic thing that ontologies are.

So what is ontology fundamentally? Simply the categorization of the world, telling apart one thing from another. Something as simple as a sensor that flips the voltage on an output wire high or low based on whether there's more than X lumens of light hitting the sensor is creating an ontology by establishing a relationship between the voltage on the output wire and the environment surrounding the sensor.
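(As a toy sketch of the sensor example, with a made-up threshold value, the sensor's whole "ontology" amounts to a single binary distinction:)

```python
# Toy sketch of the light-sensor example: the sensor's entire "ontology"
# is one binary distinction imposed on its environment.
LIGHT_THRESHOLD = 500.0  # arbitrary stand-in for "X lumens"

def output_voltage(measured_light: float) -> str:
    """Categorize the world into exactly two states: bright enough, or not."""
    return "HIGH" if measured_light > LIGHT_THRESHOLD else "LOW"

print(output_voltage(800.0))  # HIGH -> the sensor "sees" light
print(output_voltage(200.0))  # LOW  -> it doesn't
```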

Given that ontology can be a pretty simple thing, I don't know that folks are confused about ontology so much as perhaps sometimes confused about how complex an ontology they can claim a system has.

Rationalists would be better off if they were more spiritual/religious
