LESSWRONG

arisAlexis

Comments

Sorted by
Newest
1arisAlexis's Shortform
2y
9
No wikitag contributions to display.
arisAlexis's Shortform
arisAlexis · 9d

Are they an intelligent species with a will of their own?

arisAlexis's Shortform
arisAlexis · 10d

Don't overthink AI risk. People, including here, get lost in mental loops and complexity.

A simple guide, where every point is a fact:

  • We DO have evidence that scaling works and that models are getting better.
  • We do NOT have evidence that scaling will stall or reach a limit.
  • We DO have evidence that models are becoming smarter in all human ways.
  • We do NOT have evidence of a limit on the intelligence that can be reached.
  • We DO have evidence that smarter agents/beings can dominate other agents/beings in nature/history/evolution.
  • We do NOT have evidence that a smarter agent/being has ever been controlled by a less intelligent one.

Given these easy-to-understand data points, there is only one conclusion: AI risk is real, and AI risk is NOW.

Consider chilling out in 2028
arisAlexis · 13d

How can you know it's exaggerated? It's like an earthquake: the fact that it hasn't happened yet doesn't mean it won't be destructive when it does happen. The superintelligence slope doesn't stop somewhere for us to evaluate it, nor do we have any kind of signal that the more time passes, the less probable it becomes.

Consider chilling out in 2028
arisAlexis · 19d

Let's discuss for now, and then check in about it in 31 months.

 

I really don't like this kind of statement because it's a null bet: either the world has gone to hell and nobody cares about this article, or the author earns "I was right, told you so" rights. I don't think statements like this should be made in the context of existential risk.

A deep critique of AI 2027’s bad timeline models
arisAlexis · 19d

My criticism is that the article is written as an outsider categorically "correcting a flawed model." Yes, you can of course suggest corrections if there is a blatant mistake. But the assumptions are the most important part of these models, and assumptions are best made by people who have worked at and contributed to the top AI labs.

Consider chilling out in 2028
arisAlexis · 19d

Although I don't like comments starting with "your logic slipped", because it has passive-aggressive "you are stupid" vibes, I will reply.

So what you are saying is: yes, this time is different, just not today. It will definitely happen, and all the doomerism is correct, but not on a short timeline, because ____ (insert reasoning that differs from what the top AI minds are saying today).

This is actually, and very blatantly, a self-preservation mechanism called "normalcy bias", which is well documented in the human species.

Racial Dating Preferences and Sexual Racism
arisAlexis · 20d

Another data point: there are literally no marketing ads showing a white man with a black woman as a couple. Even when racial diversity needs to be shown, even in brochures for LGBT or racially inclusive groups, it's always a black man with a white woman and never vice versa. I guess it's a chicken-and-egg problem.

Consider chilling out in 2028
arisAlexis · 20d

But you need to frame this not like any other argument, but as: "for the first time in the history of life on Earth, a species has created a new, superior species." I think all these rebuttals are missing this specific point. This time is different.

A Slow Guide to Confronting Doom
arisAlexis · 3mo

I think having a huge p(doom) versus a much smaller one would change this article substantially. With a 20-30% or even 50% p(doom) you can still be positive; in any other case it sounds like a terminal illness. But since the number is subjective, living your life as if you know you are right is certainly wrong. So I take most of your article and apply it in my daily life (the closest thing to this is being a Stoic), but by no means do I believe it would take a miracle for our civilization to survive. It's more than that, and it's important.

Posts

To know or not to know · 5mo
Frida van Lisa, a short story about adversarial AI attacks on humans · 1y
How to be an amateur polyglot · 1y
arisAlexis's Shortform · 2y
Why kill everyone? · 2y
The burden of knowing · 2y