Benjy_Forstadt

Comments

This is a review of the reviews
Benjy_Forstadt1mo68

I understand that when a person feels a lot is on the line, it is often hard for them not to come across as sanctimonious. Maybe it’s unfair of me, but that is how this comes across to me, e.g. “people who allegedly care”.

Death with Dignity:


>Q2:  I have a clever scheme for saving the world!  I should act as if I believe it will work and save everyone, right, even if there's arguments that it's almost certainly misguided and doomed?  Because if those arguments are correct and my scheme can't work, we're all dead anyways, right?

>A:  No!  That's not dying with dignity!  That's stepping sideways out of a mentally uncomfortable world and finding an escape route from unpleasant thoughts!

This is a good insight about a possible reasoning mistake. Likewise, if more optimistic assumptions about AI are correct, you should not “step sideways” into an imaginary world where MIRI is right about everything “just to be safe”. Whatever problems come with AI need to be solved in the actual world, and in order to do that it is very, very important to form good object-level beliefs about the problems.

Reply
English writes numbers backwards
Benjy_Forstadt3mo175

The order of our numeral notation mirrors the order of our spoken numerals. I’m not sure if there are any languages that consistently order additive numerals from smallest to largest - “two and fifty and three hundred” instead of “three hundred and fifty two”. 

>💡Flipping the local ordering of pronunciation: If we're truly optimizing, we might as well say "twenty and hundred-three" while we're at it. The first words "and three-" don't tell you much until you know "three of what"? Whereas "and hundred-three" tells you the order of magnitude as soon as possible.

This suggestion is aesthetically in tension with your principle of ordering from smallest to largest. Why should we go with informativeness for multiplication and small-to-large order for addition? The larger number in a sum is more informative about the size of the value, which is probably why languages tend to pronounce additive numerals from larger to smaller.

Interestingly, many languages actually do use the “hundred-three” order. You may be interested in this paper: https://www.nature.com/articles/s41599-023-02506-z; such languages have a striking geographic distribution.
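To make the two orderings concrete, here is a minimal Python sketch (an illustration only, not from the post or the linked paper). It handles only three-digit numbers, ignores the irregular teens and hyphenation, and all of its names are invented:

```python
# Minimal illustration of the two additive orderings discussed above.
# Simplified: handles 1-999 only, ignores the teens (11-19) and
# hyphenation, and joins every part with "and".

UNITS = ["", "one", "two", "three", "four", "five",
         "six", "seven", "eight", "nine"]
TENS = ["", "ten", "twenty", "thirty", "forty", "fifty",
        "sixty", "seventy", "eighty", "ninety"]

def additive_parts(n):
    """Split n (1-999) into its spoken additive parts, largest first."""
    hundreds, rest = divmod(n, 100)
    tens, units = divmod(rest, 10)
    parts = []
    if hundreds:
        parts.append(f"{UNITS[hundreds]} hundred")
    if tens:
        parts.append(TENS[tens])
    if units:
        parts.append(UNITS[units])
    return parts

def spell(n, smallest_first=False):
    """Join the additive parts in either order."""
    parts = additive_parts(n)
    if smallest_first:
        parts = parts[::-1]
    return " and ".join(parts)

print(spell(352))                       # three hundred and fifty and two
print(spell(352, smallest_first=True))  # two and fifty and three hundred
```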

Reply1
Do you consider perfect surveillance inevitable?
Answer by Benjy_Forstadt · Jan 24, 2025 · 2 · -6

I don’t think perfect surveillance is inevitable. 

I would prefer it, though. I don’t know any other way to prevent people from doing horrible things to minds running on their computers. It wouldn’t need to be publicly broadcast, though; it could just be overseen by law enforcement. I think this is much more likely than a scenario where everything you see is shared with everyone else.

Unfortunately, my mainline prediction is that people will actually be given very strong privacy rights, and will be allowed to inflict as much torture on digital minds under their control as they want. I’m not too confident in this though.

Reply
Another argument against utility-centric alignment paradigms
Benjy_Forstadt1y10

Basically people tend to value stuff they perceive in the biophysical environment and stuff they learn about through the social environment.

So that reduces the complexity of the problem - it’s not a matter of designing a learning algorithm that both derives and comes to value human abstractions from observations of gas particles or whatever. That’s not what humans do either.

Okay then, why aren’t we star-maximizers or number-of-nation-states maximizers? Obviously it’s not just a matter of learning about the concept. The details of how we get values hooked up to an AGI’s motivations will depend on the particular AGI design, but will probably involve reward, prompting, scaffolding, or the like.

Reply
Another argument against utility-centric alignment paradigms
Benjy_Forstadt1y10

I don’t think the way you split things up into Alpha and Beta quite carves things at the joints. If you take an individual human as Beta, then stuff like “eudaimonia” is in Alpha - it’s a concept in the cultural environment that we get exposed to and sometimes come to value. The vast majority of an individual human’s values are not new abstractions that we develop over the course of our training process (for most people at least).

Reply
Another argument against utility-centric alignment paradigms
Benjy_Forstadt1y134

There is a difference between the claim that powerful agents are approximately well-described as being expected utility maximizers (which may or may not be true) and the claim that AGI systems will have an explicit utility function the moment they’re turned on, and maximize that function from that moment on.

I think this is the assumption OP is pointing out: “most of the book's discussion of AI risk frames the AI as having a certain set of goals from the moment it's turned on, and ruthlessly pursuing those to the best of its ability”. “From the moment it’s turned on” is pretty important, because it rules out value learning as a solution.

Reply
MIRI 2024 Communications Strategy
Benjy_Forstadt1y10

Edit: Retracted because some of my exegesis of the historical seed AI concept may not be accurate.

Reply1
MIRI 2024 Communications Strategy
Benjy_Forstadt1y10

There will be future superintelligent AIs that improve themselves. But they will be neural networks; they will, at the very least, start out as a compute-intensive project; and in the infant stages of their self-improvement cycles they will understand and be motivated by human concepts, rather than being dumb specialized systems that are only good for bootstrapping themselves to superintelligence.

[This comment is no longer endorsed by its author]
Reply
MIRI 2024 Communications Strategy
Benjy_Forstadt1y12

How does the question of whether AI outcomes are more predictable than AI trajectories reduce to the (vague) question of whether observations on current AIs generalize to future AIs?

Reply