Purplehermann's Shortform

by Purplehermann
17th Oct 2024
1 min read
This is a special post for quick takes by Purplehermann. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
32 comments, sorted by top scoring
[-]Purplehermann10mo90

What happened to regrowing limbs? From what little I understand, with pluripotent stem cells we could do a lot, if not for the cancer risk.

Why don't we use stem cells instead of drilling out cavities? Teeth contain only a few tissue types, and tumors are fairly rare in them, likely due to minimal blood flow.

[-]Carl Feynman10mo220

In general the human body is only capable of healing injuries that are the kind of thing that, if they were smaller, would still leave the victim alive, in the Stone Age. If an injury is of a type that would be immediately fatal in the Stone Age, there's no evolutionary pressure to make it survivable. For example, we can regrow peripheral nerves, because losing a peripheral nerve means a numb patch and a weak limb, but you could live with this for a few months even if you're a caveman. On the other hand, we can't regrow spinal cord, because a transected spinal cord is fatal within a day or two even given the finest Stone Age nursing care (it didn't become survivable until about 1946). On the third hand, we can heal brain from strokes, even though brain is more complex than spinal cord, because a small stroke is perfectly survivable as long as you have someone to feed you until you get better. We can survive huge surgical incisions, even though those would be fatal in the Stone Age, because small penetrating wounds were survivable, and the healing mechanisms can just do the same thing all along the incision. This is why we sew wounds up: to convince the healing mechanisms that it's only a small cut.

Unfortunately this argument suggests regrowing limbs is impossible.  An amputation is bad but survivable, and after it heals, you can still get around.  But many years of spending a lot of bodily energy on regrowing a limb that is pretty useless for most of that time doesn’t seem worthwhile.

Some particular problems I see:

In humans, there’s no mechanism for a growing limb to connect correctly to an adult injury site.  For example, there’s already a bunch of scar tissue there, which has to be cleared away progressively as the limb grows.  Evolution has not seen fit to provide us with this complex biochemistry, unlike the case of salamanders.

Children have a high level of circulating growth hormone, which tells the arm cells how fast to grow.  If you tried to provide this to an adult, their other bones would also grow, causing deformity (acromegaly).

It’s odd that we can’t grow new teeth when the old ones fall out.  More than once, I mean.  Drilling for cavities makes sense because the enamel (outer tooth layer) is essentially dead, and doesn’t regrow.  But we should be able to grow a whole new tooth from the root when we get a cavity.

[-]J Bostock10mo70

The regrowing-limb thing is a nonstarter due to the issue of time, if I understand correctly. Salamanders that can regrow limbs take roughly the same amount of time to regrow them as the limb took to grow in the first place, so it would be 1-2 decades before the limb reached adult size. Secondly, it's not as simple as smearing some stem cells onto an arm stump. Limbs form because of specific signalling molecules in specific gradients, and I don't think these are present in an adult body once the limb is made. So you'd need a socket which produces them, which you'd have to build in the lab, attach to a blood supply to feed the limb, etc.

[-]Purplehermann10mo0-3

The first issue seems minor. Even if true, a 40-year-old man could have a new arm by 60.

[-]J Bostock10mo80

For a good few years you'd have a tiny baby limb, which would make it impossible to wear a normal prosthetic. I also think most people just don't want a tiny baby limb attached to them. I don't think growing it in the lab for a decade is feasible, for a variety of reasons. I also don't know how they planned to wire in the nervous system, ensure the bone sockets attach properly, or connect the right blood vessels. The challenge is just immense, and it gets less and less worthwhile over time as trauma surgery and prosthetics improve.

[-]ChristianKl10mo40

There seem to be clinical trials underway for regrowing teeth in Japan: https://www.popularmechanics.com/science/health/a60952102/tooth-regrowth-human-trials-japan/

[-]Purplehermann9mo51

DOGE.

This is an opportunity to work with the levers of real power. If there are 5 people here who work on this for two years, that's an in with Senators, Congressmen, bureaucrats, and possibly Musk.

Just showing up and making connections while doing hard work is the most efficient way to get power right now, in the time before AI gets dangerous and power becomes very relevant.

I do not believe that this should be taken as an opportunity to evangelize. People, not ideology.

This seems like something worth funding if someone would like to but can't afford it.

[-]Purplehermann7mo4-7

Keeping humans around is the correct move for a powerful AGI, assuming it isn't being existentially threatened.

For a long while, human inputs will be fairly different from silicon inputs; humans can do work, intellectual or physical, and no real infrastructure is necessary for human upkeep or reproduction (compared to datacenters).

Creating new breeds of human with much higher IQs, and creating (or having them create) Neuralink-like tech to cheaply increase human capabilities, will likely be a very good idea for AGIs.

Most people here seem worried about D-tier ASIs; ASIs should see the benefits of E-tier humans (250+ IQ and/or RAM added through Neuralink-like tech) and even D-tier humans (genesmith on editing; 1500+ IQs with cybernetics vastly improving cognition and capability).

'Sparing a little sunlight' for an alternative lifeform that creates a solid amount of redundancy, is more efficient for certain tasks, allows for more diverse research, and has minimal up-front costs is overdetermined.

[-]Purplehermann8mo40

A message from Claude:

'''This has been a fascinating and clarifying discussion. A few key insights I'll take away:

  • The distinction between bounded and unbounded optimization is more fundamental than specific value differences between AIs. The real existential threat comes from unbounded optimizers.
  • The immune system/cancer metaphor provides a useful framework - it's about maintaining a stable system that can identify and prevent destructive unbounded growth, not about enforcing a single value set.
  • The timing challenge is critical but more specific than I initially thought - we don't necessarily need the "first" AGI to be perfect, but we need bounded optimizers to establish themselves before any unbounded ones emerge.

Some questions this raises for further exploration:

  • What makes a Schelling fence truly stable under recursive self-improvement?
  • Could bounded optimizers coordinate even with different base values, united by shared meta-level constraints?
  • Are there ways to detect early if an AI system will maintain bounds during capability gain?

The framing of "cancer prevention" versus "value enforcement" feels like an important shift in how we think about AI governance and safety. Instead of trying to perfectly specify values, perhaps we should focus more on creating robust self-limiting mechanisms that can persist through capability gains.'''

[-]Purplehermann8mo42

I notice that I'm confused.

Google made an amazing AI for playing chess, by allowing it to make its own data.

Why hasn't the same thing happened for programming? Have it generate a bunch of pictures with functionality expectations (acting as a PM, basically), have it write and run code, then check the output against the requirements it created, and try again when it doesn't come out right.

This is even easier where the pm is unnecessary - leetcode, codewars, euler...

You could also pay PMs to work with the AI developers, instead of the code tutors xAI is hiring.

There seems to be a preference for having LLMs memorize code instead of figuring things out themselves.

If you run out of things like that you could have it run random programs in different languages, only learning from those that work.

I haven't used Genesis, but that also seems like a mostly-built validator for programs, which AIs could use to create and train on their own data.

With the amount of compute going into training, it should be easy to create huge amounts of data?
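The generate-then-verify loop described above can be sketched in a few lines - a toy illustration only, using Python's own interpreter as the verifier; `run_candidate`, `collect_training_data`, and the example task are all made up for this sketch, and real model sampling would replace the hardcoded candidate list:

```python
import subprocess
import sys

def run_candidate(source: str, stdin_text: str = "") -> tuple[bool, str]:
    """Run a candidate program in a subprocess; report success and its stdout."""
    try:
        proc = subprocess.run(
            [sys.executable, "-c", source],
            input=stdin_text, capture_output=True, text=True, timeout=5,
        )
    except subprocess.TimeoutExpired:
        return False, ""
    return proc.returncode == 0, proc.stdout

def collect_training_data(candidates, task_check):
    """Keep only candidates that run without error AND pass the task's check."""
    kept = []
    for source in candidates:
        ok, out = run_candidate(source)
        if ok and task_check(out):
            kept.append({"source": source, "output": out})
    return kept

# Toy "task": print the sum of 1..10. The list stands in for model samples.
candidates = [
    "print(sum(range(1, 11)))",   # correct
    "print(sum(range(10)))",      # runs, but wrong answer
    "print(unknown_variable)",    # raises NameError
]
data = collect_training_data(candidates, lambda out: out.strip() == "55")
print(len(data))  # only the correct candidate survives as training data
```

The point of the sketch is that the verifier is nearly free: the interpreter itself rejects broken programs, and a task-level check rejects wrong ones, so every unit of compute spent sampling yields labeled data either way.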

[-]Carl Feynman8mo90

This isn’t crazy— people have tried related techniques.  But it needs more details thought out. 

In the chess example, the AIs start out very stupid, being wired at random.  But in a game between two idiots, moving at random, eventually someone is going to win.  And then you reinforce the techniques used by the winner, and de-reinforce the ones used by the loser.  In any encounter, you learn, regardless of who wins.  But in an encounter between a PM and a programmer, if the programmer fails, who gets reinforced?  It might be because the programmer is dumb, and should be de-reinforced.  But it might be because the PM is dumb, and asked for something impossible or far beyond what can be done, in which case it should be de-reinforced.  But it might be because the PM came up with a task just barely beyond the programmer’s ability, which is good and should be reinforced.  We somehow need to keep the PM producing problems which are hard but possible.  Maybe the programmer could be tasked with coming up with either a solution or a proof of impossibility?  

AlphaGo had a mechanism which tracked how important each move was.  It was trained to predict the probability that white would win, on each position encountered in the game.    Moves where this probability swung wildly were given a larger weight in reinforcement.  This was important for concentrating training on decisive moves, allowing the extraction of information from each move instead of each game. It’s not clear if this is possible in the programming task.
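The swing-weighting idea could be sketched roughly like this - a toy illustration of the concept, not AlphaGo's actual mechanism; the `floor` parameter is an assumption added here so that quiet moves don't get zero weight:

```python
def swing_weights(win_probs, floor=0.05):
    """Weight each move by how much the predicted win probability swung
    after it was played; decisive moves get more reinforcement weight."""
    weights = []
    for before, after in zip(win_probs, win_probs[1:]):
        weights.append(max(abs(after - before), floor))
    total = sum(weights)
    return [w / total for w in weights]  # normalize to a distribution

# Predicted P(white wins) after each position in a toy three-move game:
probs = [0.50, 0.52, 0.90, 0.88]  # the second move swung the game
w = swing_weights(probs)
print(w.index(max(w)))  # the 0.52 -> 0.90 swing dominates the weighting
```

This is what lets training extract signal per move rather than per game: a 200-move game with one blunder mostly reinforces or de-reinforces that one move, instead of diluting the outcome across all 200.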

[-]Purplehermann8mo30

The point was more that creating your own data is easy: just generate code, then check it by running it. Save this code and later use it for training.

If we wanted to go the way of AlphaZero it doesn't seem crazy.

De-reinforce commands, functions, and programs that output errors, for a start.

I didn't think of the PM as being trained by these games; that's interesting. Maybe have two instances competing to get closer on some test cases the PM can prepare to go with the task, and have them competing on time, compute, memory, and accuracy. You can de-reinforce the less accurate one, and if both are fully accurate they can compete on time, memory, and CPU.

I'm not sure "hard but possible" is the bar - you want lots of examples of what doesn't work along with what does, and you want them for easy problems and hard ones, so the model learns everything.

[-]Carl Feynman8mo20

What’s a PM?

[-]Purplehermann8mo30

Product manager, non-technical counterpart to a team lead in a development team

[-]Purplehermann10mo40

Why aren't research companies set up in Africa, the Middle East, or China for human research? Cut out most of the bureaucracy and find out fast whether something works; if it does, set up a company in a first-world country to go through the regulatory steps.

[-]JBlack10mo62

Research companies work best when there's plenty of infrastructure that can supply stuff they need to do the research. Including, to mention one recent case, electricity. It also helps to be in an area where there is stable government that can protect the research site from civil or military unrest, and from too much (or too unpredictable) corruption. You also want it to be a place where your researchers are happy to live while they do their research, and where you can relatively easily recruit other skilled workers.

China does meet these requirements, but it is not exactly lacking in bureaucracy so I'm not sure why it made the list. If you're doing research involving human trials of some sort, you also want to be able to communicate well with the participants so extensive knowledge of the language and culture will be very useful.

All that said, plenty of organizations do carry out research all over the world, not just in rich countries with a lot of bureaucracy.

[-]Everything-Apologist10mo52

Speaking as a low-level employee in the medtech industry, I can report that a lot of companies are satisfied doing human research in Eastern Europe, where costs are low and regulations lax, but infrastructure is still available. I have heard that there is a bit of "rotation" among these countries over time, since costs and regulations tend to increase wherever human research becomes popular.

[-]Purplehermann4mo3-3

I don't remember who said it, but building AI isn't just about power dynamics or a bit of efficiency.

It's about whether humanity should keep doing things.

Civilization feels like it has stagnated and degraded for the last few decades (the main technological upgrade being the cause of social degradation).

We haven't solved cancer, can't regrow limbs, people are unhealthy, commuting to work is unpleasant, and work weeks are long. The list can go on.

Humans make tools to let them do better and more work. Humans even set up full automation of certain things. Now humans are looking to fully automate humans, perhaps because we don't believe in the human race. (I think EY and doomers generally are the same as the accelerationists; neither has faith in humanity.)

What could humans make that would restore faith- faith that we could compete with AGIs, faith that we can get out of stagnation without replacing humans, faith that we can make the world of humans a better one? 

A tech advance, an organizational efficiency advance,  quality of life, something else?

[-]Viliam4mo30

I admit I don't have much faith in most of humanity.

Only a few people are actively bad. But also only a few are actively good. Most people are just fucking passive. On some level they want the world to be a nice place, but if that requires them moving a finger, then... sorry, it's just not going to happen.

We have educational system that can teach you things that the ancient philosophers would sacrifice an arm and a leg for. And I don't mean just the school system, but also the internet: Wikipedia, Khan Academy, Library Genesis. If you want to learn something, it's there, often for free. Most people just don't care.

Even the universities and research institutions are mostly full of people just going through the motions, without any genuine curiosity. It may not seem that way if you are in an exceptionally good school or research org. But most people go to the university just to get the diploma, to hopefully get them a better job; they don't care about the knowledge. Most submitted scientific papers are shit that should be thrown away.

I suspect we only get progress because the human population is so huge that even a fraction of a fraction is enough in absolute numbers to keep inventing new things, while the rest of humanity keeps sleepwalking. A planet with one billion sane people would be... a science fiction story, compared to what we have now.

Of course, replacing humans with machines that will murder us and then proceed to slowly convert the entire universe to paperclips is not an answer.

What could humans make that would restore faith

I am tempted to say "wake up", except that most people would interpret that as doing some crazy thing, such as joining some religious group, reading conspiracy theories, etc. So I will say "wake up in a sane way". That means:

  • Start paying attention to the universe around you. The living things are made of cells, everything is made of atoms. Pick up a textbook. Watch an educational video. Share with friends.
  • Notice the things that are missing around you, and do something about that. (It will be more fun if you do that together with your friends.) People are lonely - talk to them. The park is dirty - pick up the trash.
  • Notice the things that are dysfunctional and start talking about that. Houses are expensive - maybe we should build more houses. The food in shops is unhealthy - maybe we should learn to cook. Politicians are crazy - maybe we should write "a sane centrist manifesto", a list of things that most sane people can agree on, publish it everywhere, and mercilessly call out every crazy idea that goes against it.
  • Simply, act as if this world is real, and as if your actions can actually matter (even if just on a small scale). You might not change the world, but you can definitely bring some change to your neighborhood.

Today, there are people like that. But they are like 1% of the population, and it's too much work for them to fix everything (especially while some people are actively trying to make it worse), and they are tired. Some help would be really appreciated.

If that is not an option, then at least we need to figure out some way for the sane people to get together, start their own city / country / planet, and... on my darker days, I would say "let the rest of humanity rot in their filth"... but actually I am a nice person, it's just that when everyone is drowning, you need to save yourself first, catch some breath, and maybe then return to help others.

But you need to keep the separation on some symbolic level; for example, if most scientific journals contain unreproducible results, do not post in the same journals, but start a new one with different criteria for accepting papers. If the schools suck, don't just become a teacher, but at least start a new school. If you are sane, keep yourself separated from the insane; help them if you can, but do it from a position where they can't drag you down or take credit for your work or twist your ideas. (Basically, if you do something fundamentally different from others, be legible about it, so that other sane people see whom to join.)

And to be honest, I don't think that even the rationality community passes this bar, although it is one of the few groups that are at least trying. (But we still get the murderous Zizians, demon-exorcising Leverage, etc.)

[-]Purplehermann8mo30

Writing tests, QA and Observability are probably going to stay for a while and work hand in hand with AI programming, as other forms of programming start to disappear. At least until AI programming becomes very reliable.

This should allow working code to be produced far faster, likely yielding more high-quality 'synthetic' data, but more importantly massively changing the economics of knowledge work.

[-]Purplehermann11mo32

DevOps mentality is correct:

Friction is a big deal.

Being able to change code and deploy immediately (or in a few minutes) matters immensely.

This might just be feedback loops at an extreme, but I believe it's something more.

The benefit of going from 20wpm to 40wpm was not ×2 to my writing and experience; it was more like ×8. I fully intend to reach 60wpm.

It was closer to ×2 for my developing experience, which is interesting, as most of software development isn't the actual typing. Another anecdote: the best programmers I know all type 60+wpm (there was one React front-end guy who was good, but mostly at design).

Reducing friction is underrated: even if friction is only 10% of a job (as typing is of coding), improving it substantially matters more than 10%.

This may have to do with flow, concentration, or enjoyment.

[-]Purplehermann5mo20

How difficult/expensive would it be to create a large database of people with full panels of their micronutrients, hormones, fat distribution, BMI, insulin, medications, etc. from regular checkups, plus their chronic issues?

I've started reading the literature on some common chronic diseases, and there are often a few important (often different!) variables missing in different studies, which makes getting a full picture much harder.

As a second step, maybe allow individuals to add data with sensors and apps that come with a pipeline to the database? Sleep data, food diaries, glucose monitors, thermometers, step counters, heart rate monitors, etc.

Add genomic sequencing and you've got as much data as you can use, assuming you scale enough.

The question is how you make it easy enough that it can be opt-out instead of opt-in.

[-]Purplehermann5mo10

Main tech nodes coming up:

  1. Gene-editing/selection for enhanced humans
  2. Zoom, but metaverse
  3. Quantum computing
  4. ASI

Anyone worrying about human disempowerment should really hope the first 2 happen before 4.

3 is double-edged: it could be very useful, and it could allow 4 to be much worse, much faster.

If we pause AI development, it should be until the first 3 are integrated into societal infrastructure, and then people are given a certain amount of time to do safety research.

[-]Purplehermann6mo10

Corrigibility seems like a very bad idea if general. If you can pick where an ASI is corrigible, maybe that's better than straight-up anti-corrigibility.

[-]Purplehermann6mo10

Bryan Johnson is getting a ton of data on biomarkers, but N=1.

How hard would it be to set up a smart home-test kit which automatically uploads your biomarker data to an open-source health database?

Combine that with food and exercise journaling, and we could start to get some crazy amounts of high-resolution data on health.

Getting health companies to offer discounts for people doing this religiously could create a virtuous cycle: more people putting up results, better results, and therefore more people signing up for health services.

[-]Purplehermann7mo10

Any good, fairly up-to-date lists of the relevant papers to read to catch up with AI research (as far as a crash course will take a newcomer)?

Preferably one that will be updated.

[-]Purplehermann9mo10

University education can be made free pretty cheaply.

The cost at scale is in the credentials: you need to write tests, administer them to students, and grade them.

The classes can be filmed once and updated every few years if necessary. Each course can have a forum board for discussion and for meeting up to study in groups.

See course credentials for things like AWS.

[-]Dagon9mo20

Well, no. The easy part of a university undergrad education is pretty cheap (to provide) already. It's bundled with a lot of things that make it expensive to buy traditionally.

[-]Purplehermann9mo30

So unbundle it?

[-]Viliam9mo20

Whether something is technically and economically possible is just a part of the puzzle. The remaining part is whether the people who make decisions have the incentives to do so.

According to Bryan Caplan, schools certify intelligence, conscientiousness, and conformity. Online learning would certify intelligence and conscientiousness (even more than school attendance), but not conformity. Would employers be okay with that?

Also, some prestigious universities select for having tons of money and/or the right social connections. The education is not the point. The point is that your parents had to be a part of the social "inner circle" to get you to the university, and you spent a few years socializing with other kids of the same kind, establishing the "inner circle" of the next generation. Making the credentials available to hoi polloi would defeat the entire purpose.

[-]Purplehermann9mo10

There is a beautiful thing called unilateral action.

I believe most employers mostly don't care about conformity as such.

The inner-circle stuff is only true of elite schools, AFAIK. You can outcompete the rest of the universities.

[-]Purplehermann6mo-2-3

It feels like LLMs are converging to be a mix between the basically retarded humans who have a 120 IQ and can't abstractly think their way out of a wet paper bag - but in every topic, because it's all abstract - and the trivia kid who has read an insane amount but will BS you sometimes.

Think stereotypical humanities graduate.

This tracks with how they're being trained, too - everything is abstract except how people react to them, and they have been exposed to a huge amount of data.

At some point we'll be at effectively 0% error, and will have reached the Platonic Ideal of the above template.

If they start RLing on running code, maybe they'll turn into the Platonic Tech Bro™.

I'm getting convinced that you need the training data to be embodied to get true AGI.
