LESSWRONG

S. Alex Bradt

Comments

Under what conditions should humans stop pursuing technical AI safety careers? (Question · 5mo · 6)
This is a review of the reviews
S. Alex Bradt · 1mo · 20

https://ifanyonebuildsit.com/5/why-dont-you-care-about-the-values-of-any-entities-other-than-humans

Four ways learning Econ makes people dumber re: future AI
S. Alex Bradt · 2mo · 42

> but even then there's handwaving around why we'll suddenly start producing stuff that nobody wants.

"Stuff that nobody wants"? Like what? If you're referring to AI itself... Well, a lot of people want AI to solve medicine. All of it. Quickly. Usually, this involves a cure for aging. Maybe that could be done by an AI that poses no threat... but there are also people who want a superintelligence to take over the world and micromanage it into a utopia, or who are at least okay with that outcome. So "stuff that nobody wants" doesn't refer to takeover-capable AI.

If you're referring to goods and services that AIs could provide for us... Is there an upper limit to the amount of stuff people would want, if it were cheap? If there is one, it's probably very high.

Alex_Altair's Shortform
S. Alex Bradt · 2mo* · 52

Yep, that all sounds right. In fact, a directed graph can be called transitive if... well, take a guess. And k-uniform hypergraphs (edit: not k-regular, that's different) correspond to k-ary relations.
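The hypergraph-relation correspondence is easy to demonstrate. Here's a minimal Python sketch (the function name is my own invention) that expands a k-uniform hypergraph into the symmetric k-ary relation it encodes:

```python
from itertools import permutations

def hyperedges_to_relation(edges):
    """Expand each k-element hyperedge into every ordering of its
    vertices, yielding the corresponding symmetric k-ary relation."""
    return {tup for e in edges for tup in permutations(e)}

# A 3-uniform hypergraph on vertices {1, 2, 3, 4} with two hyperedges.
triples = {frozenset({1, 2, 3}), frozenset({2, 3, 4})}
rel = hyperedges_to_relation(triples)
# Both (1, 2, 3) and (3, 2, 1) land in rel, since hyperedges are unordered.
```

Going the other way, any fully symmetric k-ary relation with distinct components collapses back to a set of k-element vertex sets, which is what makes the correspondence one-to-one.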

Here's another thought for you: Adjacency matrices. There's a one-to-one correspondence between matrices and edge-weighted directed graphs. So large chunks of graph theory could, in principle, be described using matrices alone. We only choose not to do that out of pragmatism.
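As a toy illustration of that correspondence (a sketch with hypothetical names, using plain nested lists and 0 for "no edge"): multiplying an adjacency matrix by itself computes the weighted two-step walks of the graph, so graph operations really are matrix operations.

```python
def matmul(a, b):
    """Multiply two square matrices given as nested lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Unweighted (0/1-weighted) directed 3-cycle: 0 -> 1 -> 2 -> 0.
cycle = [[0, 1, 0],
         [0, 0, 1],
         [1, 0, 0]]

# Entry [i][j] of the square counts walks i -> k -> j of length two.
two_step = matmul(cycle, cycle)
# two_step[0][2] == 1: the single two-step walk 0 -> 1 -> 2.
```

With min/+ in place of +/* you'd get shortest two-step path lengths instead of walk counts, which is one reason the matrix view is more than a bookkeeping trick.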

(I've also heard of something even more general called matroid theory. Sadly, I never took the time to learn about it.)

A philosophical kernel: biting analytic bullets
S. Alex Bradt · 2mo · 52

> such as the Continuum Hypothesis, which is conjectured to be independent of ZFC.

It's in fact known to be independent of ZFC: Gödel showed in 1940 that CH cannot be disproved from ZFC, and Cohen showed in 1963 that it cannot be proved. Sources: Devlin, The Joy of Sets; Folland, Real Analysis; Wikipedia.

AI #128: Four Hours Until Probably Not The Apocalypse
S. Alex Bradt · 3mo · 41

Not the way I'd use those words, nope. The first is a low bar; the second is extremely high and builds a specific emotional reaction into it. I haven't seen any plausible vision of 2040 that I'd enthusiastically endorse, whether it's business-as-usual or dismantling stars, but it's not hard to come up with futures that are preferable to the end of love in the universe.

AI #128: Four Hours Until Probably Not The Apocalypse
S. Alex Bradt · 3mo · 21

> If you can’t enthusiastically endorse that outcome, were it to happen, then you should be yelling at us to stop.

I don't think it's that simple. I'm not enthusiastic about transhumanism, so I can't enthusiastically endorse that outcome, but I can't bring myself to say, "Don't build AI because it'll make transhumanism possible sooner." If anything, I expect that having a friendly-to-everyone ASI would make it a lot easier to transition into a world where some people are Jupiter-brained.

I am quite willing to say, "Don't build AI until you can make sure it's friendly-to-everyone," of course.

Shortform
S. Alex Bradt · 3mo · 50

This comment has been tumbling around in my head for a few days now. It seems to be both true and bad. Is there any hope at all that the Singularity could be a pleasant event to live through?

Google and OpenAI Get 2025 IMO Gold
S. Alex Bradt · 3mo · 40

> now a bunch of robots can do it. as someone who has a lot of their identity and their actual life built around “is good at math,” it’s a gut punch. it’s a kind of dying. [...] multiply that grief out by *every* mathematician, by every coder, maybe every knowledge worker, every artist… over the next few years… it’s a slightly bigger story

Have there been any rationalist writings on this topic? This cluster of social dynamics, this cluster of emotions? Dealing with human obsolescence, the end of human ability to contribute, probably the end of humans being able to teach each other things, probably the end of humans thinking of each other as "cool"? I've read Amputation of Destiny. Any others?

LLMs Can't See Pixels or Characters
S. Alex Bradt · 3mo · 202

Related: LLMs struggle with perception, not reasoning, in ARC-AGI.

nikola's Shortform
S. Alex Bradt · 3mo · 30

Let's not forget that the AI action plan will be on the President's desk by Tuesday, if it isn't already.

I have to wake up to that every morning. Now you do, too.
