Sinclair Chen

manifold.markets/Sinclair

Comments

Sorted by Newest
Sinclair Chen's Shortform
Sinclair Chen · 1mo

yeah

The Problem
Sinclair Chen · 1mo

More the latter.

It is clear that language models are not "recursively self-improving" in any fast sense. They improve with more data in a pretty predictable way, in S-curves that top out at a pretty disappointing peak. They are useful for AI research in a limited capacity, some of which feeds back into the growth rate (like better training design), but the loops run at long, human time-scales. I am not sure it's even fast enough to give us an industrial revolution.

I have an intuition that most naive ways of quickly tightening the loop just cause the machine to break and not be very powerful at all.

So okay, we have this promising technology that can do IMO math, write rap lyrics, moralize, assert consciousness, and make people fall in love with it -- but it can't run a McDonald's franchise or fly drones into tanks on the battlefield (yet?).
Is "general intelligence" a good model for this technology? It is very spiky "intelligence". It does not rush past all human capability. It has approached human capability gradually and unevenly.
It is good at the soft, feelsy stuff and bad at a lot of the hard power stuff. I think this is the best possible combination of alignment vs. power/agency that we could have hoped for back in 2015 to 2019. But people here are still freaking out like GPT-2 just came out.

A crux for me is, will language models win over a different paradigm? I do think it is "winning" right now, being more general and actually kind of economically useful. So it would have to be a new, exotic paradigm.

Another crux for me is, how good is it at new science? Not just helping AI researchers with their emails. How good will it be at improving the rate of AI research, as well as finding new drugs, better weapons, and other crazy new secrets, at least on the scale of the discovery of atomic power?
I think it is not good at this and will not be that good at this. It is best when there is a lot of high-quality data and already-fast iteration times (programming), but it suffers in most fields of science, especially new science, where that is not the case.
I grant that if language models do get to superweapons, then it makes sense to treat this like an issue of national/global security.

Intuitively I am more worried about language models accelerating memetic technology: new religions/spiritualities/movements, psychological operations, propaganda. This seems to be where they are most powerful. I can see a future where we fight culture wars forever, but also one where we genuinely raise humanity to a better state of being, as all information technologies have done before (ha).
This is not something that feeds back into the AI intelligence growth rate very much.

Besides tending the culture, I also think a promising direction for "alignment" (though maybe you want to call it a different name, since it's a different field) is paying attention to the relationships between individual humans and AIs, and the patterns of care and interdependence that arise. The closest analogue is raising children and managing other close human relationships.

The Problem
Sinclair Chen · 1mo

Why are we worried about ASI if current techniques will not lead to an intelligence explosion?

There's often a bait and switch in these communities, where I ask this and people say "even if takeoff is slow, there are still these other problems..." and then list a bunch of small problems, not too different from other tech, which can be dealt with in normal ways.

The Problem
Sinclair Chen · 1mo

I definitely think more psychologists should get into being model whisperers. Also teachers, parents, and other people who care for children.

Emotions Make Sense
Sinclair Chen · 1mo

Indeed, people often play low status -- making themselves small, conserving energy, lying down, curling up, frowning, crying -- in order to signal to other people for reassurance. This gets trained out of people like us who use screens too much: no one will come unless you give a positive and legible cry for help.

The reassurance, of course, is about status and reputation. We still like you. We're here for you. We're still cool. Consider status a measure of the health of your social ties, which many people terminally value, and which in present society still provides instrumental, material value (jobs, places to crash, mutual aid, marketing / audience building for your future startup, ...).

It makes sense to think of relationships as things that are built, that have their own health, instead of thinking purely in terms of material output. The future is uncertain. You can't model that far. You might get more returns later by investing now. More speculatively, I think the drive to relate to others is born from an ancient desire to form contracts with other agents and combine into (partial?) superagents, like bees in a hive.

Sinclair Chen's Shortform
Sinclair Chen · 1mo

In HPMOR, Harry has this big medical kit. But he doesn't exercise, and he has no qualms about messing up his sleep schedule with 4 hours of jet lag a day.

Not very Don't Die of him, if you ask me.

My Empathy Is Rarely Kind
Sinclair Chen · 1mo

I also look down on people I consider worse than me. I used to be more cynical and bitter. But now people receive me as warm and friendly -- even people outside of a rationalist "truth-first" type community.

I'm saying this because I see in you a desire to connect with people. You wish they were more like you. But you are afraid of becoming more like them.

The solution is to be upfront with them about your feelings instead of holding them in.

Most people care more about being understood than being admired. The kind of person who prioritizes their own comfort over productivity within an abstract system - they are probably less autistic than you. They are interested in you. If you are disgusted with their mindset, they'll want to know. If you explain it to them, and then listen to their side of where they are coming from, you will learn a more detailed model of them.

If you see a way they personally benefit (by their own values) by behaving differently - then telling them is a kindness.

Another thing is that a lot of people actually want you to be superior to them. They want to be the kitten. They want you to take care of them. They want higher status people around them. They want someone to follow. They want to be part of something bigger. They want a role model, something to aim towards. Many reasons.

Being upfront can also filter you into social bubbles that share your values.

Make More Grayspaces
Sinclair Chen · 2mo

I like this. I think there's some value in having elite communities that are not 101 spaces. But I am not sure how or when I would use one. I do think I improve my rationality by spending time with particular smart, thoughtful friends. But this doesn't really come from exclusion or from quarantining other people.

I enjoy the cognitive trashpit, and that's why I'm mostly on twitter now. I am happy to swim in soapy, grimy dishwater, not so much because I want to raise the sanity waterline (ha) but because the general public is bigger - more challenging, more important. Consider it aliveness practice, or like tsujigiri.

The twitter scene does have standards, but it's more diffuse, decentralized, informal.

I feel like you're trying to build a new science (great!), but I'm more interested in a new version of something like the Viennese coffee house scene.

adamzerner's Shortform
Sinclair Chen · 2mo

isn't this what toothpicks are traditionally for?

sometimes i just run my fingernail through my teeth, scrape all the outward surfaces and slide it in between the teeth

Raemon's Shortform
Sinclair Chen · 2mo

shortform video has some epistemic benefits. you get a chance to see the body language and emotional affect of people, which transfers much more information and makes it harder to just flat out lie.

more importantly, everpresent access to twitter allows me to quickly iterate on my ideas and get instant feedback on every insane thought that flows through my head. this is not a path i recommend for most people. but it is the path i've chosen.

Posts

11 · Prediction markets are consistently underconfident. Why? [Q] · 2y · 4
11 · Anki setup best practices? [Q] · 2y · 4
3 · Sinclair Chen's Shortform · 2y · 86
21 · View and bet in Manifold prediction markets on Lesswrong · 3y · 8