LESSWRONG

GravitasGradient
-12150

Posts


Wikitag Contributions

Comments

autonomy: the missing AGI ingredient?
GravitasGradient · 3y · 10

Perhaps "agency" is a better term here? In the strict sense of an agent acting in an environment?

And yeah, it seems we have shifted focus away from that.

Thankfully, our natural play instincts have left us a wonderful collection of ready-made training environments. I think the field needs a new challenge: an agent playing video games, receiving its instructions only in natural language.
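A minimal sketch of what the proposed challenge's interface might look like. Everything here is illustrative, not an existing benchmark: the environment is a toy grid stand-in for a game, and the keyword agent is only a trivial baseline for "acting on natural-language instructions".

```python
# Hypothetical interface sketch: an agent acts in a game environment,
# and the *only* task specification it receives is a natural-language
# instruction string. All names are illustrative assumptions.
from dataclasses import dataclass
from typing import List


@dataclass
class Step:
    observation: List[int]  # e.g. a flattened screen or game state
    instruction: str        # the only task specification the agent sees
    reward: float
    done: bool


class GridGame:
    """Toy stand-in for a video game: the goal is to reach position 3."""

    def __init__(self, instruction: str):
        self.instruction = instruction
        self.pos = 0

    def reset(self) -> Step:
        self.pos = 0
        return Step([self.pos], self.instruction, 0.0, False)

    def step(self, action: str) -> Step:
        if action == "right":
            self.pos += 1
        done = self.pos >= 3
        return Step([self.pos], self.instruction, 1.0 if done else 0.0, done)


class KeywordAgent:
    """Trivial baseline: pick an action from keywords in the instruction."""

    def act(self, step: Step) -> str:
        return "right" if "right" in step.instruction else "noop"


env = GridGame("walk to the right until you reach the wall")
agent = KeywordAgent()
step = env.reset()
total = 0.0
while not step.done:
    step = env.step(agent.act(step))
    total += step.reward
print(total)  # reward collected on reaching the goal
```

The point of the interface is that swapping `KeywordAgent` for a language-model-driven policy requires no change to the environment: the instruction channel is just a string in each `Step`.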

Reply
Another (outer) alignment failure story
GravitasGradient · 4y · -30

These stories always assume that an AI would be dumb enough not to realise the difference between the measurement and the thing measured.

Every AGI is a drug addict, unaware that its high is a false one.

Why? Just for drama?

Reply
[AN #136]: How well will GPT-N perform on downstream tasks?
GravitasGradient · 5y · 10

Is the predicted cost of scaling GPT-N parameters based on the "classical Transformer" architecture? Recent variants like the Performer should require substantially less compute, and therefore cost less.

Reply
What do we *really* expect from a well-aligned AI?
GravitasGradient · 5y · 10

but indeed human utility functions will have to be aggregated in some manner

I do not see why that should be the case. Assuming virtual heavens, why couldn't each individual's personal preferences be fulfilled?

Reply
To what extent is GPT-3 capable of reasoning?
Answer by GravitasGradient · Jul 22, 2020 · 00

It seems pretty undeniable to me from these examples that GPT-3 can reason to an extent.

However, it can't seem to do it consistently.

Maybe analogous to people with mental and/or brain issues that have times of clarity and times of confusion?

If we can find a way to isolate the pattern of activity in GPT-3 that relates to reasoning, we might be able to enforce that state permanently?

Reply
No wikitag contributions to display.
-6 · Reducing the risk of catastrophically misaligned AI by avoiding the Singleton scenario: the Manyton Variant · 2y · 0