AntonTimmer

Comments

Nathan Young's Shortform
AntonTimmer · 8mo

Here is an example which I believe is directionally correct; it took me roughly 20 minutes to come up with. The prompt is "How do living systems create meaning?":

  1. My life feels like it has meaning (sensory-motor behavior and conceptual intentional aspects). Looking at it through an evolutionary perspective, it is highly likely that meaning assignment is the way living systems survived. Thus there has to be some base biological level at which meaning is created, through cell-cell communication, bioelectricity, biochemistry, biosensing, etc.
  2. Life is just made of atoms. Atoms are just automata. This implies there is no meaning at the atom level, and thus it cannot pop up at higher levels through emergence or some shit. You are delusional to believe there is any meaning assignment in life.
  3. Meaning is something defined through the language we speak. It is well known that different cultures have different words and conceptual framings, which implies that meaning differs across cultures. Meaning thus depends only on language.
  4. Meaning is just a social construct, and we can define anything to have meaning. Thus it doesn't matter what you find meaningful, since it is just something you inherited through society and parenting.

I believe points 1-3 are fine; point 4 is kinda shaky.

Nathan Young's Shortform
AntonTimmer · 9mo

Maybe a different framework for looking at it:

  1. The map tries to represent the territory faithfully.
  2. The map consciously misrepresents the territory. But you can still infer some things about the territory through the malevolent map.
  3. The map does not represent the territory at all but pretends to be 1. The difference from 2 is that 2 still takes the territory as its base and distorts it, while 3 is not trying to look at the territory at all.
  4. The map is the territory. Any reference on the map is just a reference to another part of the map. Claiming that the map might be connected to an external territory is dismissed as bullshit, because people are living in the map. In the optimal case the map is at least self-consistent.
Evolution provides no evidence for the sharp left turn
AntonTimmer · 2y

I wouldn't ascribe human morality to the process of evolution. Morality is a bunch of if-then statements. Morality seems to be more of a cultural thing and helps with coordination. Morality is obviously influenced by our emotions, such as disgust and love, but these can themselves be heavily influenced by culture, upbringing, and plain genetics. Now let's assume the AI is killed if it behaves "immorally": how can you be sure that it does not evolve to be deceptive?
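
To make the if-then framing concrete, here is a toy sketch (my own illustration; the actions, rule sets, and verdicts are invented) of morality as culture-specific lookup rules:

    # Toy model: morality as a set of if-then rules supplied by a culture.
    # Everything below is invented purely for illustration.

    def judge(action, rules):
        """Return a culture's verdict on an action; anything unlisted is permitted."""
        return rules.get(action, "permitted")

    # Two hypothetical cultures with different rule sets.
    culture_a = {"eat_pork": "forbidden", "lie_to_elder": "forbidden"}
    culture_b = {"eat_pork": "permitted", "lie_to_elder": "forbidden"}

    print(judge("eat_pork", culture_a))  # -> forbidden
    print(judge("eat_pork", culture_b))  # -> permitted

The point is just that the same action gets a different verdict depending on which culturally inherited rule set is consulted.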

Confusing the ideal for the necessary
AntonTimmer · 3y

This kinda reminds me of Failing with abandon.

AntonTimmer's Shortform
AntonTimmer · 3y

Today I thought about how weird it is that so many people go into the soft sciences (social sciences etc.) instead of STEM fields. I think one of the reasons may be that the feedback loops are much longer. In STEM fields you will, most of the time, be shown that you are wrong. In the soft sciences, however, you can go on without ever noticing that you made a wrong judgement (outside view). Maybe alignment should look more into how people came up with theories in the soft sciences, since the feedback loops in alignment seem similarly long.

DeepMind alignment team opinions on AGI ruin arguments
AntonTimmer · 3y

I misused the definition of a pivotal act, which makes it confusing. My bad!

I understood the phrase "pivotal act" more in the spirit of an out-of-distribution effort. To rephrase it more clearly: do "you" think an out-of-distribution effort is needed right now? For example, sacrificing the long term (20 years) for the short term (5 years), or going for high-risk, high-reward strategies.

Or should we stay on our current trajectory, since it maximizes our chances of winning? (Which, as far as I can tell, is "your" opinion.)

DeepMind alignment team opinions on AGI ruin arguments
AntonTimmer · 3y

As far as I can tell, the major disagreements are about us having a plan and taking a pivotal act. There seems to be general "consensus" (Unclear, Mostly Agree, Agree) about what the problems are and what an AGI might look like. Since no pivotal act is needed, either you think that we will be able to tackle this problem with the resources we have and will have, you have (way) longer timelines (let's assume Eliezer's timeline is 2032 for argument's sake), or you expect the world to make a major shift in priorities concerning AGI.

Am I correct in assuming this, or am I missing some alternatives?

All AGI safety questions welcome (especially basic ones) [July 2022]
AntonTimmer · 3y

This seems to boil down to the "AI in the box" problem. People are convinced that keeping an AI trapped is not possible. There is a tag you can look up (AI Boxing), or you can just read up here.

Church vs. Taskforce
AntonTimmer · 3y

Reading this 13 years later is quite interesting when you think about how far the LW and EA communities have come.

[$20K in Prizes] AI Safety Arguments Competition
AntonTimmer · 3y

"If AGI systems can become as smart as humans, imagine what one human/organization could do by just replicating this AGI."
