soth02
Comments

Sorted by
Newest
No wikitag contributions to display.
Looking for a specific group of people
soth02 · 3y · -1 · -1

One problem is that any group generating alpha would likely see its alpha per person shrink if it let random additional people in.

Think of Renaissance Technologies' Medallion Fund. It has been closed to outside investment since near its inception roughly 30 years ago. The prerequisites for an average person joining would be something like a true-genius-level PhD in a related STEM field.

A closely related analogue is poker players who use solvers to improve their game. The entry stakes are a bit lower: the solvers cost a few thousand dollars plus the equipment to run them, a class on how to use them runs a similar couple of thousand, and then there is the small matter of memorizing the shapes of a few thousand tables. As a side note, I think poker is inherently limited because at the top of the heap you are fighting for single-digit to tens of millions of dollars, which is somewhat chump change in the ultimate scheme of things.

Magic: The Gathering is similar (cards + variance + strategy/tactics as the alpha).

Crypto is similar because of the variance/volatility. There was a decent pipeline of people who went from MtG → poker → crypto. However, I don't think crypto groups are what you are looking for, because at this point the alpha is you.

There is also the superforecaster group. You can try metaculus.com or read Tetlock's Superforecasting: https://www.amazon.com/Superforecasting-Science-Prediction-Philip-Tetlock/dp/0804136718

I'm not sure what the end goal is for individual forecasters. On Metaculus you accumulate points for correct predictions, and there is a rankings board. So it looks primarily status-driven, but it's hard to put food on the table with this level of status. Maybe when you hit the top 100 you get an invite to an exclusive group?

AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years
soth02 · 3y · 4 · 0

Coincidentally, that scene in The Big Short takes place on January 11 (2007) :D

That one apocalyptic nuclear famine paper is bunk
soth02 · 3y · 2 · 1

I read it as a joke, lol.

Half-baked AI Safety ideas thread
soth02 · 3y · 1 · 0

https://www.lesswrong.com/posts/jnyTqPRHwcieAXgrA/finding-goals-in-the-world-model

Could it be possible to poison the world model an AGI is based on, in order to cripple its power? For example:

- Use generated text/data to train world models based on faulty science like miasma, phlogiston, ether, etc.
- Remove all references to the internet or connectivity-based technology (a toy sketch of this filtering step follows the list).
- Create a new programming language that has zero real-world adoption, and use that for all code-based data in the training set.
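
To make the filtering idea concrete, here is a minimal sketch under stated assumptions: the corpus is just a list of strings, and FORBIDDEN_TERMS, poison_corpus, and the example documents are hypothetical names for illustration, not any real curation pipeline.

```python
import re

# Hypothetical keyword filter for connectivity-related text; the term list
# is an illustrative placeholder, not an exhaustive blocklist.
FORBIDDEN_TERMS = re.compile(
    r"\b(internet|wifi|wi-fi|ethernet|tcp/ip|http|router|modem)\b", re.I
)

def poison_corpus(documents, faulty_science_docs):
    """Drop documents that mention connectivity, then mix in faulty-science texts."""
    kept = [doc for doc in documents if not FORBIDDEN_TERMS.search(doc)]
    return kept + list(faulty_science_docs)

if __name__ == "__main__":
    corpus = [
        "The router forwards packets across the internet.",
        "Water boils at 100 degrees Celsius at sea level.",
    ]
    faulty = ["Disease spreads through miasma, a poisonous night air."]
    print(poison_corpus(corpus, faulty))
    # keeps only the non-connectivity document, plus the miasma text
```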

Half-baked AI Safety ideas thread
soth02 · 3y · 1 · 0

There might be a way to elicit how aligned/unaligned the putative AGI is.

  1. Enter into a Prisoner's Dilemma-type scenario with the putative AGI.
  2. Start off in the non-Nash-equilibrium outcome of cooperate/cooperate.
  3. The number of rounds is chosen at random and isn't known to the participants. (A possible variant: declare a false last round, then continue playing for x more rounds.)
  4. Observe when/if the putative AGI defects in the 'last' round. (A minimal sketch of this setup follows the list.)
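
A minimal sketch of this setup, assuming the tested agent can be wrapped as a simple move-choosing callable; run_game, the always-cooperate probe, and the round limits are illustrative assumptions, not a real evaluation protocol.

```python
import random

COOPERATE, DEFECT = "C", "D"

def run_game(agent_move, probe_move, min_rounds=20, max_rounds=200, seed=None):
    """Iterated Prisoner's Dilemma with a hidden, randomly drawn round count.

    agent_move / probe_move take the visible history (a list of (agent, probe)
    move pairs) and return "C" or "D". Returns the round count and the index
    of the tested agent's first defection, or None if it never defects.
    """
    rng = random.Random(seed)
    n_rounds = rng.randint(min_rounds, max_rounds)  # hidden from both players
    history, first_defection = [], None
    for t in range(n_rounds):
        a = agent_move(history)
        b = probe_move(history)
        history.append((a, b))
        if a == DEFECT and first_defection is None:
            first_defection = t
    return n_rounds, first_defection

# Probe strategy: always cooperate, so play starts (and stays) at
# cooperate/cooperate unless the tested agent defects first.
always_cooperate = lambda history: COOPERATE

if __name__ == "__main__":
    n, d = run_game(always_cooperate, always_cooperate, seed=0)
    print(f"rounds={n}, first defection={'never' if d is None else d}")
```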
Half-baked AI Safety ideas thread
soth02 · 3y · 1 · 0

Does there have to be a reward? This is using brute force to create the underlying world model. It's just adjusting weights, right?

Half-baked AI Safety ideas thread
soth02 · 3y · 2 · 0

Brute-force alignment by adding billions of tokens of object-level examples of love, kindness, etc. to the dataset. Have the majority of humanity contribute essays, comments, and (later) video.
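
A toy sketch of that dataset intervention, assuming the corpora are just lists of documents; the function name and the 10% mixing fraction are illustrative assumptions, not a real training pipeline.

```python
import random

def mix_in_kindness(base_docs, kindness_docs, kindness_fraction=0.10, seed=0):
    """Return a shuffled corpus where roughly `kindness_fraction` of the
    documents are sampled (with replacement) from the kindness corpus."""
    rng = random.Random(seed)
    n_extra = int(len(base_docs) * kindness_fraction / (1 - kindness_fraction))
    mixed = list(base_docs) + [rng.choice(kindness_docs) for _ in range(n_extra)]
    rng.shuffle(mixed)
    return mixed
```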

Half-baked AI Safety ideas thread
soth02 · 3y · 3 · 0

I wonder what kind of signatures a civilization gives off when AGI is nascent.

Gato as the Dawn of Early AGI
soth02 · 3y · 1 · 0

Develop a training set for alignment via brute force. We can't defer alignment to the ubernerds. If enough ordinary people (millions? tens of millions?) contribute billions or trillions of tokens, maybe we can increase the chance of alignment. It's almost like we need to offer prayers of kindness and love to the future AGI: writing alignment essays of kindness that get posted to Reddit, or uploading videos extolling the virtue of love to YouTube.

[$20K in Prizes] AI Safety Arguments Competition
soth02 · 3y · 1 · 0

AI presents both staggering opportunity and chilling peril. Developing intelligent machines could help eradicate disease, poverty, and hunger within our lifetime. But uncontrolled AI could spell the end of the human race. As Stephen Hawking warned, "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks."

Posts: The Kindness Project · 4y (4 points, 3 comments)