Wiki Contributions


Yeah, I agree with everything you say; it's just that I was trying to remind myself of enough of SLT to give a 'five minute pitch' for SLT to other people, and I didn't like the idea that I'm hanging it all off the ReLU.

I guess the intuition behind the hierarchical nature of the models leading to singularities is the permutation symmetry between the hidden channels, which is kind of an easy thing to understand.
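To make that symmetry concrete, here's a minimal sketch (a hypothetical toy network, not anything from the post): permuting the hidden units of a two-layer ReLU network, together with the matching rows and columns of the weight matrices, gives a different parameter vector that computes exactly the same function — the kind of degeneracy in the loss landscape that SLT is about.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: f(x) = W2 @ relu(W1 @ x)
W1 = rng.normal(size=(4, 3))   # input -> hidden weights
W2 = rng.normal(size=(2, 4))   # hidden -> output weights
x = rng.normal(size=3)

def f(W1, W2, x):
    return W2 @ np.maximum(W1 @ x, 0.0)

# Permute the hidden units: reorder the rows of W1 and,
# correspondingly, the columns of W2
perm = [2, 0, 3, 1]
W1_p = W1[perm, :]
W2_p = W2[:, perm]

# Different point in parameter space, identical function
assert np.allclose(f(W1, W2, x), f(W1_p, W2_p, x))
```

Since every permutation of the hidden units gives an equivalent parameter setting, the set of parameters realising a given function is not a single point, which is the easy-to-explain entry point to why these models are singular.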

I get and agree with your point about approximate equivalences, though I have to say that I think we should be careful! One reason I'm interested in SLT is that I spent a lot of time during my PhD on Bayesian approximations to NN posteriors. I think SLT is one reasonable explanation of why this never yielded great results, but I think hand-wavy intuitions along the lines of 'oh well, the posterior is probably sort-of Gaussian' played a big role in its longevity as an idea.

Yeah, it's not totally clear what this 'nearly singular' thing would mean. Intuitively, it might be that there's a kind of 'hidden singularity' in the space of this model that affects the behaviour, like the singularity in a dynamical model with a phase transition, but I'm just guessing.

I'm trying to read through this more carefully this time: how load-bearing is the use of ReLU nonlinearities in the proof? Intuitively, this doesn't seem like it should be that important (e.g. a sigmoid/GELU/tanh network feels like it is probably singular too, and it certainly has to be if SLT is going to tell us something important about NN behaviour, because changing the nonlinearity doesn't change how NNs behave that much imo), but it does seem to be an important part of the construction you use.

Maybe this is really naive (I just randomly thought of it), and you mention you do some obvious checks like looking at the singular vectors of activations which might rule it out, but could the low-frequency cluster be linked to something simple, like the fact that using ReLUs, GELUs, etc. biases the neuron activations towards the positive quadrant of the activation space in terms of magnitude (because negative components of any vector in the activation basis get cut off)? I wonder if the singular vectors would catch this.
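A quick toy illustration of the worry (random Gaussian pre-activations, not your actual model — just a sketch of the mechanism): after a ReLU, every coordinate has positive mean, so the activation cloud sits in the positive orthant on average, and the top singular vector of the *uncentred* activation matrix ends up pointing in an all-positive direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random pre-activations, zero-mean in every coordinate
pre = rng.normal(size=(10_000, 64))
post = np.maximum(pre, 0.0)  # ReLU clips the negative components

# Every coordinate of the post-ReLU activations now has positive mean
assert (post.mean(axis=0) > 0).all()

# The top right singular vector of the uncentred activation matrix
# is then (up to sign) an all-positive direction
_, _, vt = np.linalg.svd(post, full_matrices=False)
v0 = vt[0] * np.sign(vt[0].sum())  # fix the arbitrary sign
assert (v0 > 0).all()
```

Whether an SVD-based check catches this presumably depends on whether the activations are mean-centred first; centring `post` before the SVD removes this all-positive component, which might be one quick way to test whether the cluster is just this bias.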

I think that it's important to be careful with elaborately modelled reasoning about this kind of thing, because the second order political effects are very hard to predict but also likely to be extremely important, possibly even more important than the direct effect on timelines in some scenarios. For instance, you mention leading labs slowing down as bad (because the leading labs are 'safety conscious' and slowing down dilutes their lead). In my opinion, this is a very simplistic model of the likely effects of this intervention. There are a few reasons for this:

  • Taking drastic unilateral action creates new political possibilities. A good example is Hinton and Bengio 'defecting' to advocate strongly for AI safety in public; I think this has had a huge effect on how seriously ML researchers and governments take these issues, even though the direct effect on AI research is probably negligible. For instance, Hinton in particular made me personally take a much more serious look at AI safety arguments, and this has influenced me to try to re-orient my career in a more safety-focused direction. I find it implausible that a leading AI lab shutting itself down for safety reasons would have no second-order political effects along these lines, even if the direct impact were small: if there's one lesson I would draw from covid and the last year or so of AI discourse, it's that the Overton window is much more mobile than people often think. A dramatic intervention like this would obviously have uncertain outcomes, but could trigger unforeseen possibilities. Unilateral action that disadvantages the actor also makes a political message much more powerful. There's a lot of skepticism when labs like Anthropic talk loudly about AI risk because of the objection 'if it's so bad, why are you making it?'. While there are technical arguments that there are good reasons to work on safety and AI development simultaneously, this makes communicating the message much harder, and people will understandably have doubts about your motives.

  • 'We can't slow down because someone else will do it anyway' - I actually think this is probably wrong: in a counterfactual world where OpenAI didn't throw lots of resources and effort into language models, I'm not sure someone else would have bothered to continue scaling them, at least not for many years. Research is not a linear process, and a field becoming unfashionable can delay progress by a considerable amount; just look at the history of neural network research! I remember many people in academia being extremely skeptical of scaling laws around the time they were published; if OpenAI hadn't pushed on it, it could have taken years to decades for another lab to throw enough resources at that hypothesis if it had become unfashionable for whatever reason.

  • I'm not sure it's always true that other labs catch up if the leading ones stop: progress isn't a simple function of time either; without people trying to scale massive GPU clusters, you don't get practical experience with the kinds of problems such systems have, production lines don't re-orient themselves towards the needs of such systems, and so on. There are important feedback loops in this kind of process, such as attracting more talent and enthusiasm into the field, that the big labs shutting down could disrupt. It's also not true that all ML research is a monolithic line towards 'more AGI' - from my experience of academia, many researchers would quite happily have worked on small specialised systems in a variety of domains for the rest of time.

I think many of these arguments also apply to arguments against 'US moratorium now' - for instance, it's much easier to get other countries to listen to you if you take unilateral actions, as doing so is a costly signal that you are serious.

This isn't necessarily to say that I think a US moratorium or a leading lab shutting down would actually be a useful thing, just that I don't think it's cut and dried that it wouldn't. Consider what would happen if a leading lab actually did shut itself down - would there really be no political consequences with a serious effect on the development of AI? I think your argument makes a lot of sense if we are considering 'spherical AI labs in a vacuum', but I'm not sure that's how it would play out in reality.

Any post along the lines of yours needs a 'political compass' diagram lol.

I mean, it's hard to say what Altman would think in your hypothetical debate: assuming he has reasonable freedom of action at OpenAI, his revealed preference seems to be to devote <= 20% of the resources available to his org to 'the alignment problem'. If he wanted to assign more resources to 'solving alignment', he could probably do so. I think Altman thinks he's basically doing the right thing in terms of risk levels. Maybe that's a naive analysis, but I think it's probably reasonable to take him more or less at face value.

I also think it's worth saying that easily the most confusing argument for the general public is exactly the Anthropic/OpenAI argument that 'AI is really risky, but also we should build it really fast'. You can steelman this argument more than I've done here, and many smart people do, but there's no denying it sounds pretty weird, and I think that's why many people struggle to take it at face value when people like Altman talk about x-risk - it just sounds really insane!

In contrast, while people often think it's really difficult and technical, I think Yudkowsky's basic argument (building stuff smarter than you seems dangerous) is pretty easy for normal people to get, and many people agree with the general 'big tech bad' takes that the 'realists' like to make.

I think a lot of boosters who are skeptical of AI risk basically think 'AI risk is a load of horseshit' for various, not always very consistent, reasons. It's hard to overstate how deeply 'don't anthropomorphise' and 'thinking about AGI is distracting silliness by people who just want to sit around and talk all day' are baked into the souls of ML veterans like LeCun. But I think people who would argue 'no' to your proposed alignment debate would, for example, probably strongly disagree that 'the alignment problem' is even a coherent thing to be solved.

Maybe I shouldn't have used EY as an example; I don't have any special insight into how he thinks about AI and power imbalances. Generally I get the vibe from his public statements that he's pretty libertarian and thinks the pros outweigh the cons for most technology he doesn't consider x-risky. I'm moderately confident that he's more relaxed about, say, misinformation or big tech platforms' dominance than (say) Melanie Mitchell, but maybe I'm wrong about that.

I think there's some truth to this framing, but I'm not sure people's views cluster as neatly as this. In particular, I think there is a 'how dangerous is existential risk' axis and a 'how much should we worry about AI and power' axis. You rightly identify the 'booster' cluster (x-risk fake, AI + power nothing to worry about) and the 'realist' cluster (x-risk sci-fi, AI + power very concerning), but I think you are missing quite a lot of diversity in people's positions along other axes, which makes this arguably even more confusing. For example, I would characterise Bengio as fairly concerned about both x-risk and AI + power, whereas Yudkowsky is extremely concerned about x-risk and fairly relaxed about AI + power.

I also think it's misleading to group even 'doomers' as one cluster, because there's a lot of diversity in the policy asks of people who think x-risk is a real concern, from 'more research needed' to 'shut it all down'. One very important group you are missing are people who are simultaneously quite (publicly) concerned about x-risk but also quite enthusiastic about pursuing AI development and deployment. This group is important because it includes Sam Altman, Dario Amodei and Demis Hassabis (the leadership of the big AI labs), as well as quite a lot of people who work on AI development or AI safety. You might summarise this position as 'AI is risky, but if we get it right it will save us all'. As they often work at big tech companies, I think these people are mostly fairly unworried or neutral about AI + power. This group is obviously important because they work directly on the technology, but also because this gives them a loud voice in policy and the public sphere. You might think of this as a 'how hard is mitigating x-risk' axis, and it's another key source of disagreement: going from public statements alone, I think (say) Sam Altman and Eliezer Yudkowsky agree on the 'how dangerous' axis and are both fairly relaxed Silicon Valley libertarians on the 'AI + power' axis, and mainly disagree on how difficult it is to solve x-risk. Obviously, people's disagreements on this question have a big impact on their desired policy!

Well maybe you should read the book! I think that there are a few concrete points you can disagree on.

One thing I think about a lot is: are we sure this is unique, or did something else like luck or geography somehow play an important role in one (or a handful) of groups of sapiens happening to develop some strong (or "viral") positive-feedback cultural learning mechanisms that eventually dramatically outpaced other creatures?

I'm not an expert, but I'm not so sure this is right; I think anatomically modern humans already had significantly better abilities to learn and transmit culture than other animals, because they generally need to extensively prepare their food (cooking, grinding, etc.) in culturally transmitted ways. So by the time we get to sapiens, we are already well along this trajectory.

I think there's an element of luck: other animals do have cultural transmission (for example elephants and killer whales) but maybe aren't anatomically suited to discover fire and agriculture. Some quirks of group size likely also play a role. It's definitely a feedback loop though; once you are an animal with culture, then there is increased selection pressure to be better at culture, which creates more culture etc.

If Homo sapiens are believed to have originated around 200,000 years ago, but only developed agricultural techniques around 12,000 years ago, the earliest known city 9,000 years ago, and only developed a modern-style writing system maybe 5,000 years ago, are we sure that those humans who lived for 90%+ of human "pre-history" without agriculture, large groups, and writing systems would look substantially more intelligent to us than chimpanzees?

I'm gonna go with absolutely yes; see my above comment about anatomically modern humans and food prep. I think you are severely underestimating the sophistication of hunter-gatherer technology and culture!

The degree to which 'objective' measures of intelligence like IQ are culturally specific is an interesting question.

If people are reading this thread and want to read this argument in more detail: the (excellent) book 'The Secret of Our Success' by Joseph Henrich (Astral Codex Ten review/summary here) makes this argument in a very compelling way. There is a lot of support for the idea that the crucial 'rubicon' separating chimps from people is cultural transmission, which enables the gradual evolution of strategies over periods longer than an individual lifetime, rather than any 'raw' problem-solving intelligence. In fact, according to Henrich, there are many ways in which humans are actually worse than chimps on some measures of raw intelligence: chimps have better working memory and faster reactions for some complex tasks, and they are better than people at finding Nash equilibria that require randomising your strategy. But humans are uniquely able to learn behaviours from demonstration and to form larger groups, which enables the gradual accumulation of 'cultural technology', which in turn allowed a runway of cultural-genetic co-evolution (e.g. food processing technology -> smaller stomachs and bigger brains -> even more culture -> bigger brains even more of an advantage, etc.). It's hard to appreciate how much this kind of thing helps you think; for instance, most people can learn maths, but few would have invented Arabic numerals by themselves. Similarly, having a large brain by itself is actually not super useful without the cultural superstructure: most people alive today would quickly die if dropped into the ancestral environment without the support of modern culture, unless they could learn from hunter-gatherers (see Henrich for many examples of this happening to European explorers!). For instance, I like to think I'm a pretty smart guy, but I have no idea how to make e.g. bronze or stone tools, and it's not obvious that my physics degree would help me figure it out!
Henrich also makes the case for the importance of this with some slightly chilling examples of cultures that lost their ability to make complex technology (e.g boats) when they fell below a critical population size and became isolated.

It's interesting to consider the implications for AI; I'm not very sure about this. On the one hand, LLMs clearly have a superhuman ability to memorise facts, but I'm not sure this means they can learn new tasks or information particularly easily. On the other hand, it seems likely that LLMs are taking pretty heavy advantage of the 'culture overhang' of the internet! I don't know if it really makes sense to think of their abilities here as strongly superhuman: if you magically had the compute and code to train GPT-n in 1950, it's not obvious you could have got it to do very much without the internet for it to absorb.

I think 'dropout inhibits superposition' is pretty intuitive; there's a sense in which superposition seems like a 'delicate' phenomenon, and adding random noise obviously raises the noise floor of what the model can represent. It might be possible to make some more quantitative predictions about this in the ReLU output model, which would be cool, though maybe not that important relative to studying the effect of dropout on superposition in more realistic models.
