LESSWRONG

t14n's Comments (sorted by newest)
AI agents and painted facades
t14n · 12h · 10

I think the main challenge in monitoring agents is one of volume/scale, re:

> Agents are much cheaper and faster to run than humans, so the amount of information and interactions that humans need to oversee will drastically increase.


But 1, 2, and 4 are issues humans already face when managing other humans.

  • A decent chunk of people are "RL'd" into doing the bare minimum and playing organizational politics to optimize their effort:reward ratio.
  • Many employees are treated as fungible and are given limited context into their entire org. Even in a fully transparent org, if it's large enough, then it's impossible for everything to be in an individual's context anyway.
  • People fail all the time in quite surprising and subtle ways! We're just really bad at documenting and noticing most of them, since we usually only explore and share the really catastrophic failures. If someone loses ~$100, it's whatever. When someone loses ~$100M, it's a lawsuit.

This is a long-winded way of saying: I'm optimistic we can address managing agents (with their current capabilities) by drawing analogies to how we already do management in effective organizations, and finding avenues to scale those management strategies 100x (or however many OOMs one thinks we'll need).

AI agents and painted facades
t14n · 13h · 10

I interpreted "loop" as referring to an OODA loop, where managers observe those they're managing, delegate an action, and then wait for feedback before going back to the beginning of the loop.

e.g. nowadays I delegate a decent chunk of implementation work to coding agents, and the "loop" is me giving them a task, letting them riff on it for a few minutes, and then reviewing the output before requesting changes or committing them.
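A minimal sketch of that loop in Python; every name here is a hypothetical stand-in (no real agent SDK is assumed), just to make the shape of the loop concrete:

```python
from dataclasses import dataclass

# All of these are hypothetical stand-ins, not a real coding-agent API.

@dataclass
class Review:
    approved: bool
    requested_changes: str = ""

def run_agent(task: str) -> str:
    return f"diff for: {task}"      # stand-in for minutes of agent work

def human_review(output: str) -> Review:
    return Review(approved=True)    # stand-in for me reading the diff

def commit(output: str) -> None:
    print(f"committed: {output}")

def delegate(task: str, max_rounds: int = 3):
    """The OODA loop: delegate, observe the output, decide, act (or re-delegate)."""
    output = run_agent(task)                  # act: agent riffs on the task
    for _ in range(max_rounds):
        review = human_review(output)         # observe: read the result
        if review.approved:                   # decide: good enough?
            commit(output)                    # act: merge it in
            return output
        # orient: fold feedback into the next round
        output = run_agent(review.requested_changes)
    return None  # too many rounds -- escalate or do it yourself

delegate("add pagination to the /users endpoint")
```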

t14n's Shortform
t14n · 19d · 10

contra @noahpinion's piece on AI comparative advantage

https://www.noahpinion.blog/p/plentiful-high-paying-jobs-in-the

TL;DR

  • AI-pilled people say that some form of major un/underemployment is in the near future for humanity.
  • This misses the subtle idea of comparative advantage, i.e.:
  • "Imagine a venture capitalist (let’s call him “Marc”) who is an almost inhumanly fast typist. He’ll still hire a secretary to draft letters for him, though, because even if that secretary is a slower typist than him, Marc can generate more value using his time to do something other than drafting letters. So he ends up paying someone else to do something that he’s actually better at."
  • Future AIs will eventually be better than every human at everything, but humans will still have a human economy because AIs will have much better things to do. (A toy version of the Marc arithmetic is sketched just below.)
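To spell out the quoted example, here's a toy version of the arithmetic; the numbers are mine and purely illustrative:

```python
# Toy numbers for the "Marc" example (invented for illustration).
marc_letters_per_hour = 10        # Marc is the faster typist
sec_letters_per_hour = 5          # the secretary types half as fast
marc_deal_value_per_hour = 1000   # $ of value Marc creates doing VC work
sec_wage_per_hour = 30            # $ cost of the secretary's time

# If Marc types 10 letters himself, it costs one hour of his time,
# i.e. $1000 of foregone deal work.
cost_if_marc_types = marc_deal_value_per_hour

# Delegating the same 10 letters takes two hours of secretary time.
cost_if_delegated = (marc_letters_per_hour / sec_letters_per_hour) * sec_wage_per_hour

print(cost_if_marc_types, cost_if_delegated)  # 1000 vs 60.0
# Absolute advantage (typing speed) is irrelevant; opportunity cost decides.
```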

I think this makes a lot of sense and I mostly agree. But I want to pose the question: what if AIs run out of useful things to do?

I don't mean they'll run out of useful things to do forever, but what if AIs run into "atom" bottlenecks the way that humans already do?

I'm using a working definition of aligned AI that goes something like:

  • AI systems mostly act autonomously, but are aligned with individual and societal interests
  • We make (mostly) reasonable tradeoffs about when and where humans must be in the loop to look over and approve further actions by these systems

More or less: systems like Claude Code or DeepResearch, but with waiting periods of hours/days/weeks between human check-ins instead of minutes.

Assuming we have aligned AI systems in the future, think about this:

  • AI-1 is tasked with developing cures for cancers. It reads all the literature, has some critical insight, and then sends a report to some humans asking them to run experiments X, Y, and Z and report back with the results.
  • While waiting for the humans to finish the experiments: what does AI-1 do?
  • In the days/weeks it takes to run the experiments, maybe AI-1 will be tasked with solving some other class of disease. It reads all the literature, has some critical insight, needs more data, and sends more humans off to run more experiments in the atom realm.
  • Eventually, AI-1 no longer has diseases (of human interest) to analyze and has to wait until experiments finish. We do not have enough wet labs and humans running around cultivating petri dishes to keep it busy. (Even with robotic wet labs, we are still bottlenecked by the time it takes to cultivate cultures and so on.) A toy utilization calculation is sketched below.
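A back-of-the-envelope version of that idle-time worry; all numbers here are invented purely for illustration:

```python
# How idle does AI-1 get while atoms catch up? (All numbers made up.)
analysis_hours_per_disease = 2   # literature review + critical insight
diseases_in_queue = 50           # distinct problems worth its time
experiment_hours = 3 * 7 * 24    # ~3-week wet-lab turnaround per batch

thinking_hours = diseases_in_queue * analysis_hours_per_disease  # 100 h
# AI-1 exhausts the entire queue in ~100 hours, then waits out the
# remaining ~400 hours of the first batch's turnaround.
print(f"utilization: {thinking_hours / experiment_hours:.0%}")  # ~20%
```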

It's unclear to me what we'll allocate AI-1 to next (assuming it's not fully autonomous). And in a world where AI-1 is fully autonomous and aligned, I'm not sure AI-1 will know either.

This is what makes me unsure about the comparative advantage argument. Here, I imagine someone (or AI-1 itself) determines that in the meantime it can act as AI-1-MD and consult with patients in need.

And then maybe there are no more patients to screen (perhaps everyone is incredibly healthy, or we have more than enough AIs for everyone to have personalized doctors). AI-1-MD has to find something else to do.

There's a wide band for how long this period of "atom bottlenecks" lasts. In some areas (like solving all diseases), I imagine the incentives will be aligned enough that we'll work to remove the wet-lab/experimentation bottleneck. But I think the world looks very different depending on whether that bottleneck takes 2 years or 20 years to clear.

In a world where it takes 2 years to solve the "experimentation" bottleneck, AI-1 can use its comparative advantage to pursue research and probably won't replace doctors/lawyers/whatever its next-best alternative is. But if it takes 20 years to solve these bottlenecks, then maybe a lot of AI-1's time is spent replacing large parts of what doctors/lawyers/etc. do.

AIs don't "labor" the same way humans do. They won't need 20-30 years of training for advanced jobs; they'll have instant access to all the knowledge they need to be an expert. They won't think in minutes/hours; they'll think at the speed of processors -- milliseconds and nanoseconds. They'll likely be able to context-switch across domains with no penalty.

A really plausible reality to me is that many cognitive and intellectual tasks will be delegated to future AI systems because they'll be far faster, better, and cheaper than most humans, and they won't have anywhere better to point their abilities.

The Manhattan Trap: Why a Race to Artificial Superintelligence is Self-Defeating
t14n · 7mo · 14 · -5

re: 1b (development likely impossible to keep secret given the scale required)

I'm reminded of Dylan Patel's (SemiAnalysis) comments on a recent episode of the Dwarkesh Podcast, which go something like:

> If you're Xi Jinping and you're scaling-pilled, you can just centralize all the compute and build all the substations for it. You can just hide it inside one of the factories you already have that's drawing power for steel production and re-purpose it as a data center.

Given the success we've seen in training SOTA models with constrained GPU resources (DeepSeek), I don't think it's far-fetched to think you can hide bleeding-edge development. It turns out all you need is a few hundred of the smartest people in your country and a few thousand GPUs.

Hrm...sounds like the size of the Manhattan Project.

t14n's Shortform
t14n · 7mo · 20

I have Aranet4 CO2 monitors in my apartment, one near my desk and one in the living room, both at eye level and visible at all times when I'm in those spaces. Anecdotally, I find myself thinking "slower" at 900+ ppm, and can even notice slightly worse thinking at levels as low as 750 ppm.

I find indoor levels below 600 ppm ideal, but that's not always achievable, depending on whether you have guests, the day's air quality, etc.

Unfortunately, I live in a space with one window, so ventilation can be difficult. However, a single well-placed fan facing outward, blowing toward the window, improves indoor circulation. With the HVAC fan also running, I can decrease indoor ppm by 50-100 in just 10-15 minutes.
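As a sanity check on those numbers, here's a back-of-the-envelope model assuming a well-mixed room that decays exponentially toward the outdoor baseline; the air-change rate is a guess on my part, not a measurement:

```python
import math

# Well-mixed-room model: C(t) = C_out + (C0 - C_out) * exp(-ach * t).
# The ppm figures come from the comment; the air-change rate is assumed.
c0, c_out = 900, 420   # starting indoor ppm, outdoor baseline ppm
ach = 0.75             # air changes per hour with fan + HVAC (guess)
t = 0.25               # hours, i.e. 15 minutes

c_t = c_out + (c0 - c_out) * math.exp(-ach * t)
print(f"after 15 min: {c_t:.0f} ppm (a drop of {c0 - c_t:.0f} ppm)")
# ~818 ppm, a drop of ~82 ppm -- consistent with the observed 50-100.
```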

Here is a video showcasing an experiment on optimal fan placements to increase airflow in your house/apartment.

If you don't already periodically ventilate your space (or, if you live in a nice enough climate, keep windows open all day), then I highly recommend you start doing so.

t14n's Shortform
t14n · 8mo · 20

Want to know what AI will do to the labor market? Just look at male labor force participation (LFP) research.

US male LFP has dropped since the 1970s, from ~80% to ~67.5%.

There are a few dynamics driving this, and all of them interact with each other. But the primary ones seem to be:

  • increased disability [1]
  • younger men (e.g. <35 years old) pursuing education instead of working [2]
  • decline in manufacturing and increase in services as fraction of the economy
  • increased female labor force participation [1]

Our economy shifted from labor that required physical toughness (manufacturing) to labor that required more intelligence (services). At the same time, our culture empowered women to attain more education and participate in labor markets. As a result, the market for jobs that require higher intelligence became more competitive, and many men could not keep up and became "inactive." Many of these men fell into depression or became physically ill, and were thus unable to work.

Increased AI participation is going to do to humanity what increased female LFP has done to men over the last 50 years -- it is going to make cognitive labor an increasingly competitive market. The humans who can keep up will continue to participate and do well for themselves.

What about the humans who can't keep up? Well, just look at what men are doing now. Some men are pursuing higher education or training, attempting to re-enter the labor market by becoming more competitive or switching industries.

But an increased percentage of men are dropping out completely. Unemployed men are spending more hours playing sports and video games [1], and don't see the value in participating in the economy for a variety of reasons [3] [4].

Unless culture changes and the nature of jobs evolves dramatically in the next couple of years, I expect these trends to continue.

Relevant links:

[1] Male Labor Force Participation: Patterns and Trends | Richmond Fed

[2] Men’s Falling Labor Force Participation across Generations - San Francisco Fed

[3] The Effect of Declining Marriage Prospects on Young Men's Labor-Force Participation Rate by Ariel Binder :: SSRN

[4] What’s behind Declining Male Labor Force Participation | Mercatus Center

Careless thinking: A theory of bad thinking
t14n · 9mo · 11

re: public track records

I have a fairly non-assertive, non-confrontational personality, which causes me to default to "safer" strategies (e.g. nod and smile, don't think too hard about what's being said, or at least don't vocalize counterpoints). Perhaps others here can relate. These personality traits are reflected in "lazy thinking" online -- e.g. not posting even when I feel like I'm right about X, or not sharing an article or sending a message for fear of looking awkward or revealing a preference about myself that others might not agree with.

I notice that people who are very assertive and/or competitive, who see online discussions as "worth winning", will be much more publicly vocal about their arguments and thought processes. Meek people (like me) may not see the worth in undertaking the risk of publicly revealing arguments or preferences. Embarrassment, shame, potentially being shunned for your revealed preferences, and so on -- there are many social risks to being public with your arguments and thought process. And if you don't value the "win" in the public sphere, why take on that risk?

Perhaps something that holds people back from publishing more is that many people tie their offline identities to their online ones. Or perhaps it's just a cultural inclination -- maybe most people are like me and don't value the status/social reward of being correct and sharing about it.

It's enough to be privately rigorous and correct. 

Raemon's Shortform
t14n · 9mo* · 53

Skill ceilings across humanity are quite high. I think of super-genius chess players, Terry Tao, etc.

A particular individual's skill ceiling is relatively low (compared to these maximally gifted individuals). Sure, everyone can be better at listening, but there's a non-trivial chance you have some condition or life experience that makes that skill more difficult to develop (a hearing disability, physical/mental illness, trauma, an environment of people who are themselves not great at communicating, etc.).

I'm reminded of what Samo Burja calls the "Completeness Hypothesis":

> It is the idea that having all of the important contributing pieces makes a given effect much, much larger than having most of the pieces. Having 100% of the pieces of a car produces a very different effect than having 90% of the pieces. The four important pieces for producing mastery in a domain are good feedback mechanisms, extreme motivation, the right equipment, and sufficient time. According to the Completeness Hypothesis, people that stably have all four of these pieces will have orders-of-magnitude greater skill than people that have only two or three of the components.
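One way to make that intuition concrete (my gloss, not Burja's formalism) is to model mastery as multiplicative in the four pieces, so a single weak piece craters the whole product:

```python
# A toy multiplicative model of the Completeness Hypothesis (my gloss,
# not Burja's). Each piece is scored in [0, 1]; mastery is the product.
def mastery(feedback: float, motivation: float, equipment: float, time: float) -> float:
    return feedback * motivation * equipment * time

print(mastery(1.0, 1.0, 1.0, 1.0))  # 1.00 -- all four pieces stably in place
print(mastery(0.9, 0.9, 0.9, 0.9))  # 0.66 -- "most of the pieces"
print(mastery(0.9, 0.9, 0.9, 0.1))  # 0.07 -- one missing piece, ~10x worse
```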

This is not a fatalistic recommendation to NOT invest in skill development. Quite the opposite.

I recommend Dan Luu's "95%-ile isn't that good".

Most people do not get anywhere near their individual skill ceiling because they lack the four things that Burja lists. As Luu points out, most people don't care that much about developing their skills. People do not care to find good feedback loops, cultivate the motivation, or carve out sufficient time to develop skills. Certain skills may be limited by resources (equipment), but there are hacks that allow skill development at a sub-optimal rate (e.g. calisthenics vs. weight training for building muscle: maybe you can't afford a gym membership, but push-ups are free).

As @sunwillrise mentioned, there are diminishing returns to developing a skill. The gap from the 0th to the 80th percentile is actually quite narrow. Going from the 80th to the 98th requires work but is doable for most people, and you probably start to experience diminishing returns around this range.

Results at the 98th percentile and above are reserved for those who have long-term, stable environments in which to cultivate the skill, or for the extremely talented.

t14n's Shortform
t14n · 1y · 279

I'm giving up on working on AI safety in any capacity.

I was convinced ~2018 that working on AI safety was a Good™ and Important™ thing, and have spent a large portion of my studies and career trying to find a role to contribute to AI safety. But after several years of trying to work on both research and engineering problems, it's clear no institutions or organizations need my help.

First: yes, it's clearly a skill issue. If I were a more brilliant engineer or researcher, then I'd have found a way to contribute to the field by now.

But also, the bar to work on AI safety seems higher than the bar for AI capabilities. There is a lack of funding for hiring more people to work on AI safety, and this seems to have created a dynamic where you have to be scarily brilliant to even get a shot at folding AI safety into your career.

In other fields, there are a variety of professionals who can contribute incremental progress and get paid as they grow their knowledge and skills: educators at various levels, lab technicians who support experiments, and so on. There are far fewer opportunities like that w.r.t. AI safety. Many "mid-skilled" engineers and researchers just don't have a place in the field. I've met and am aware of many smart people attempting to find roles to contribute to AI safety in some capacity, but there's just not enough capacity for them.

I don't expect many folks here to be sympathetic to this sentiment. My guess on the consensus is that in fact, we should only have brilliant people working on AI safety because it's a very hard and important problem and we only get a few shots (maybe only one shot) to get it right!

[This comment is no longer endorsed by its author]
How a chip is designed
t14n · 1y · 10

Morris Chang (founder of TSMC and a titan of the fabrication process) gave a lecture at MIT with an overview of the history of chip design and manufacturing. [1] There's a diagram around 34:00 that outlines the chip design process and where foundries like TSMC slot in.

I also recommend skimming Chip War by Chris Miller. It has a very US-centric perspective, but gives a good overview of the major companies that developed chips from the 1960s to the 1990s, and of the key companies that are relevant to (or bottlenecks in) the manufacturing process circa 2022.

1: TSMC founder Morris Chang on the evolution of the semiconductor industry