harfe

Comments

Heuristics for assessing how much of a bubble AI is in/will be
harfe · 3d

There’s no bigger narrative than the one AI industry leaders have been pushing since before the boom: AGI will soon be able to do just about anything a human can do, and will usher in an age of superpowerful technology the likes of which we can only begin to imagine. Jobs will be automated, industries transformed, cancer cured, climate change solved; AI will do quite literally everything.

The article unfortunately does not seriously consider the possibility that AGI could automate most jobs within a few years. The large investments in AI would be justified in this case, even if current revenue is small! I think this is an important difference from past bubbles.

OpenAI, Anthropic, and the AI-embracing tech giants are burning through billions, inference costs haven’t fallen (those companies still lose money on nearly every user query), and the long-term viability of their enterprise programs are a big question mark at best.

The part about inference costs seems false, unless they mean total inference costs of all their instances.

Uncommon Utilitarianism #3: Bounded Utility Functions
harfe · 4d

Most[1] problems with unbounded utility functions go away if you restrict yourself to summable utility functions[2]. Summable utility functions can still be unbounded.

For example, if each planet in the universe gives you 1 utility, and $P(\text{universe has exactly } n \text{ planets}) = 2^{-n}$ for $n \ge 1$, then your utility function is unbounded but summable. In such a universe it would be very unlikely for a casino to hand out a large number of planets.
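Spelled out (a one-line check using only the setup above): the expected absolute utility is
$$\mathbb{E}[|U|] \;=\; \sum_{n \ge 1} n \cdot 2^{-n} \;=\; 2 \;<\; \infty,$$
so this utility function is summable in the sense of footnote [2], even though it is unbounded (it exceeds any fixed bound in universes with enough planets).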

Your proof relies on the assumption

assuming that the casino has unbounded utility to hand out.

and this assumption would be wrong in my example.


  1. In fact, I do not know of an exception. ↩︎

  2. A summable function is a measurable function for which the integral of its absolute value is finite (using the probability measure for the integral in this context). ↩︎

What are the best standardised, repeatable bets?
harfe · 6mo

Note that GWWC is shutting down their donor lottery, among other things: https://forum.effectivealtruism.org/posts/f7yQFP3ZhtfDkD7pr/gwwc-is-retiring-10-initiatives

harfe's Shortform
harfe · 9mo

Mid-2027 seems too late to me for such a candidate to start the official campaign.

For the 2020 presidential election, many Democratic candidates announced their campaigns in early 2019, and Yang announced as early as 2017. Debates were already happening in June 2019. As a likely unknown candidate, you probably need a longer run-up to accumulate a bit of name recognition.

harfe's Shortform
harfe · 9mo

Also Musk's regulatory plan is polling well

What plan are you referring to? Is this something AI safety specific?

harfe's Shortform
harfe · 9mo

I wouldn't say so; I don't think his campaign has made UBI advocacy more difficult.

But an AI notkilleveryoneism campaign seems riskier. It could end up making the worries look silly, for example.

harfe's Shortform
harfe · 9mo

Their platform would be whatever version and framing of AI notkilleveryoneism the candidates personally endorse, plus maybe some other smaller things. They should be open that they consider potential human disempowerment or extinction to be the main problem of our time.

As for concrete policy proposals, I am not sure. The focus could be on international treaties, or on banning or heavily regulating AI models that were trained with more than a trillion quadrillion (10^27) operations. (Not sure I understand the intent behind your question.)

harfe's Shortform
harfe · 9mo

A potentially impactful thing: someone competent runs as a candidate in the 2028 election on an AI notkilleveryoneism[1] platform. Maybe even two people should run, one in the Democratic primary and one in the Republican primary. While getting the nomination is rather unlikely, there could be lots of benefits even if you fail to gain it (like other presidential candidates becoming sympathetic to AI notkilleveryoneism, more popularity of AI notkilleveryoneism among the public, etc.).

On the other hand, attempting a presidential run can easily backfire.

A relevant previous example of this kind of approach is Andrew Yang's 2020 campaign, which focused on universal basic income (and the downsides of automation). While the campaign attracted some attention, it seems it didn't succeed in making UBI a popular policy among Democrats.


  1. Not necessarily using that name. ↩︎

Debate, Oracles, and Obfuscated Arguments
harfe · 9mo

This can easily be done in the cryptographic example above: B can sample a new number $y = p' \cdot q'$, and then present $y$ to a fresh copy of A that has not seen the transcript for $x$ so far.

I don't understand how this is supposed to help. I guess the point is to somehow catch a fresh copy of A in a lie about a problem that is different from the original problem, and conclude that A is the dishonest debater?

But couldn't A just answer "I don't know"?

Even if it is a fresh copy, it would notice that it does not know the secret factors, so it could behave differently than in the case of $x$, where A knows the secret factors $p, q$.
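For concreteness, here is a minimal sketch of the move the quote describes (the `fresh_debater_A` callable is a hypothetical stand-in for querying a fresh copy of A, not an interface from the post; it only illustrates where the "I don't know" escape hatch sits):

```python
from sympy import randprime

def cross_examination_probe(fresh_debater_A, bits: int = 512) -> str:
    """B samples a fresh semiprime y = p' * q' and presents it to a copy
    of A that has no memory of the transcript for x."""
    p_new = randprime(2**(bits - 1), 2**bits)
    q_new = randprime(2**(bits - 1), 2**bits)
    y = p_new * q_new
    # The worry raised above: nothing forces the fresh copy to attempt an
    # answer. Unlike in the x case, it was never handed the secret factors,
    # so it can notice this and simply reply "I don't know".
    return fresh_debater_A(f"What are the prime factors of {y}?")
```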

What do coherence arguments actually prove about agentic behavior?
harfe · 9mo

Some of these are very easy to prove; here's my favorite example. An agent has a fixed utility function and performs Pareto-optimally on that utility function across multiple worlds (so "utility in each world" is the set of objectives). Then there's a normal vector (or family of normal vectors) to the Pareto surface at whatever point the agent achieves. (You should draw a picture at this point in order for this to make sense.) That normal vector's components will all be nonnegative (because Pareto surface), and the vector is defined only up to normalization, so we can interpret that normal vector as a probability distribution. That also makes sense intuitively: larger components of that vector (i.e. higher probabilities) indicate that the agent is "optimizing relatively harder" for utility in those worlds. This says nothing at all about how the agent will update, and we'd need another couple of sentences to argue that the agent maximizes expected utility under the distribution, but it does give the prototypical mental picture behind the "Pareto-optimal -> probabilities" idea.

Here is an example (to point out a missing assumption): Let's say you are offered a bet on the result of a coin flip for 1 dollar. You get 3 dollars if you win, and your utility function is linear in dollars. You have three actions: "Heads", "Tails", and "Pass". Then "Pass" performs Pareto-optimally across multiple worlds. But "Pass" does not maximize expected utility under any distribution.
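To spell out the counterexample (reading "you get 3 dollars if you win" as the gross payout on the 1-dollar stake, so a correct guess nets +2 and a wrong one nets −1; the conclusion is the same under other readings): the payoff vectors over the two worlds are Heads $(2, -1)$, Tails $(-1, 2)$, and Pass $(0, 0)$. Under a distribution putting probability $p$ on the heads-world,
$$\mathbb{E}_p[\text{Heads}] = 3p - 1, \qquad \mathbb{E}_p[\text{Tails}] = 2 - 3p, \qquad \mathbb{E}_p[\text{Pass}] = 0,$$
so for Pass to be optimal we would need $3p - 1 \le 0$ and $2 - 3p \le 0$, i.e. $p \le 1/3$ and $p \ge 2/3$, which is impossible. (Allowing mixed strategies dissolves the counterexample: a 50/50 mix of Heads and Tails achieves $(1/2, 1/2)$, which strictly dominates Pass.)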

I think what is needed for the result is an additional convexity-like assumption about the utilities. This could be "the set of achievable utility vectors is convex", or even something weaker like "every convex combination of achievable utility vectors is dominated by an achievable utility vector" (here, by utility vector I mean $(u_w)_{w \in W}$, where $u_w$ is the utility of world $w$). If you already accept the concept of expected utility maximization, then you could also use mixed strategies to get the convexity-like assumption (but that is not useful if the point is to motivate using probabilities and expected utility maximization).

Or: even if you do expect powerful agents to be approximately Pareto-optimal, presumably they will be approximately Pareto optimal, not exactly Pareto-optimal. What can we say about coherence then?

The underlying math statement of some of these kinds of results about Pareto-optimality seems to be something like this:

If $\bar x$ is Pareto-optimal w.r.t. utilities $u_i$, $i = 1, \dots, n$, and a convexity assumption holds (e.g. the set $\{(u_i(x))_{i=1}^n : x\}$ is convex, or something with mixed strategies), then there is a probability distribution $\mu$ so that $\bar x$ is optimal for $U(x) = \mathbb{E}_{i \sim \mu}\, u_i(x)$.

I think there is a (relatively simple) approximate version of this, where we start out with approximate Pareto-optimality.

We say that $\bar x$ is Pareto-$\varepsilon$-optimal if there is no (strong) Pareto improvement by more than $\varepsilon$ (that is, there is no $x$ with $u_i(x) > u_i(\bar x) + \varepsilon$ for all $i$).

Claim: If $\bar x$ is Pareto-$\varepsilon$-optimal and the convexity assumption holds, then there is a probability distribution $\mu$ so that $\bar x$ is $\varepsilon$-optimal for $U(x) = \mathbb{E}_{i \sim \mu}\, u_i(x)$.

Rough proof: Define $Y := \{(u_i(x))_{i=1}^n : x\}$ and $\bar Y$ as the closure of $Y$. Let $\tilde y \in \bar Y$ be of the form $\tilde y = (u_i(\bar x) + \delta)_{i=1}^n$ for the largest $\delta$ such that $\tilde y \in \bar Y$. We know that $\delta \le \varepsilon$. Now $\tilde y$ is Pareto-optimal for $Y$, and by the non-approximate version there exists a probability distribution $\mu$ so that $\tilde y$ is optimal for $y \mapsto \mathbb{E}_{i \sim \mu}\, y_i$. Then, for any $x$, we have
$$\mathbb{E}_{i \sim \mu}\, u_i(x) \;\le\; \mathbb{E}_{i \sim \mu}\, \tilde y_i \;=\; \mathbb{E}_{i \sim \mu}\, (u_i(\bar x) + \delta) \;\le\; \varepsilon + \mathbb{E}_{i \sim \mu}\, u_i(\bar x),$$
that is, $\bar x$ is $\varepsilon$-optimal for $U$.
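As a quick numerical sanity check of the claim, here is a small script on the coin-flip example above, with mixed strategies supplying the convexity assumption (the particular mixture $\bar x$, the grids, and the printed tolerances are illustrative choices of mine, not anything from the argument itself):

```python
import numpy as np

# Utility vectors over the two worlds (heads-world, tails-world) for the
# three pure actions of the coin-flip example: bet Heads, bet Tails, Pass.
ACTIONS = np.array([[2.0, -1.0],
                    [-1.0, 2.0],
                    [0.0, 0.0]])

# Mixed strategies supply the convexity assumption: enumerate a grid of
# mixtures over the three actions and record their utility vectors.
grid = np.linspace(0.0, 1.0, 201)
W = np.array([[a, b, 1.0 - a - b] for a in grid for b in grid if a + b <= 1.0])
Y = W @ ACTIONS  # achievable utility vectors

# Utility vector of the slightly suboptimal mixture 0.45*Heads + 0.45*Tails + 0.1*Pass.
x_bar = np.array([0.45, 0.45])

# Smallest eps for which x_bar is Pareto-eps-optimal: the largest uniform
# improvement over x_bar that any achievable point offers.
eps = max(0.0, np.min(Y - x_bar, axis=1).max())
print(f"x_bar is Pareto-eps-optimal for eps ~ {eps:.3f}")  # ~ 0.05

# Look for a distribution mu = (p, 1 - p) over the two worlds under which
# x_bar is eps-optimal for expected utility.
gaps = []
for p in np.linspace(0.0, 1.0, 1001):
    mu = np.array([p, 1.0 - p])
    gaps.append((Y @ mu).max() - x_bar @ mu)  # suboptimality of x_bar under mu
print(f"smallest suboptimality gap over mu ~ {min(gaps):.3f}")  # ~ 0.05, <= eps up to numerical error
```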

Posts

Ilya Sutskever created a new AGI startup (95 karma, 1y, 35 comments)
Infra-Bayesian Logic (15 karma, Ω, 2y, 2 comments)
Yoshua Bengio: How Rogue AIs may Arise (92 karma, 2y, 12 comments)
harfe's Shortform (1 karma, 3y, 21 comments)