Since nobody else posted these:
Bay Area is Sat Dec 17th (Eventbrite) (Facebook)
South Florida (about an hour north of Miami) is Sat Dec 17th (Eventbrite) (Facebook)
On current hardware, sure.
It does look like scaling will hit a wall soon if hardware doesn't improve, see this paper: https://arxiv.org/abs/2007.05558
But Gwern has responded to this paper pointing out several flaws... (having trouble finding his response right now..ugh)
However, we have lots of reasons to think Moore's law will continue ... in particular future AI will be on custom ASICs / TPUs / neuromorphic chips, which is a very different story. I wrote about this long ago, in 2015. Such chips, especially asynchronous and analog ones, can be vastly more ...
I disagree, in fact I actually think you can argue this development points the opposite direction, when you look at what they had to do to achieve it and the architecture they use.
I suggest you read Ernest Davis' overview of Cicero. Cicero is a special-purpose system that took enormous work to produce -- a team of multiple people labored on it for three years. They had to assemble a massive dataset from 125,300 online human games. They also had to get expert annotations on thousands of preliminary outputs. Even that was not enough.. they ...
I've looked into these methods a lot, in 2020 (I'm not so much up to date on the latest literature). I wrote a review in my 2020 paper, "Self-explaining AI as an alternative to interpretable AI".
There are a lot of issues with saliency mapping techniques, as you are aware (I saw you link to the "sanity checks" paper below). Funnily enough though, the super simple technique of occlusion mapping does seem to work very well, though! It's kinda hilarious actually that there are so many complicated mathematical techniques for saliency mapping, but I have s...
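For reference, occlusion mapping is about as simple as saliency methods get: slide an occluding patch over the input and record how much the class score drops at each position. A minimal sketch (the patch size, stride, and toy score function here are just illustrative, not from any particular paper):

```python
import numpy as np

def occlusion_map(image, score_fn, patch=8, stride=8, fill=0.0):
    """Slide an occluding patch over the image; saliency at each
    position is how much the model's class score drops when that
    region is hidden."""
    h, w = image.shape[:2]
    base = score_fn(image)
    heat = np.zeros((h // stride, w // stride))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill
            heat[i, j] = base - score_fn(occluded)  # big drop = salient
    return heat

# toy "model": the score is just the mean of the top-left quadrant,
# so only patches overlapping that quadrant should light up
score = lambda img: img[:16, :16].mean()
img = np.ones((32, 32))
heat = occlusion_map(img, score)
```

No gradients, no architecture access, nothing model-specific, which is part of why it's hard to fool.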
A world simulator of some sort is probably going to be an important component in any AGI, at the very least for planning - Yann LeCun has talked about this a lot. There's also this work where they show a VAE type thing can be configured to run internal simulations of the environment it was trained on.
In brief, a few issues I see here:
Piperine (black pepper extract) can help make quercetin more bioavailable. They are co-administered in many studies on the neuroprotective effects of quercetin: https://scholar.google.com/scholar?hl=en&as_sdt=0,22&q=piperine+quercetin
I find slower take-off scenarios more plausible. I like the general thrust of Christiano's "What failure looks like". I wonder if anyone has written up a more narrative / concrete account of that sort of scenario.
The thing you are trying to study ("returns on cognitive reinvestment") is probably one of the hardest things in the world to understand scientifically. It requires understanding both the capabilities of specific self-modifying agents and the complexity of the world. It depends what problem you are focusing on too -- the shape of the curve may be very different for chess vs something like curing disease. Why? Because chess I can simulate on a computer, so throwing more compute at it leads to some returns. I can't simulate human biology in a computer - we h...
How familiar are you with Chollet's paper "On the Measure of Intelligence"? He disagrees a bit with the idea of "AGI" but if you operationalize it as "skill acquisition efficiency at the level of a human" then he has a test called ARC which purports to measure when AI has achieved human-like generality.
This seems to be a good direction, in my opinion. There is an ARC challenge on Kaggle, and so far AI is far below the human level. On the other hand, "being good at a lot of different things", i.e. task performance across one or many tasks, is obviously very important to understand, and Chollet's definition is independent from that.
Interesting, thanks. 10x reduction in cost every 4 years is roughly twice what I would have expected. But it sounds quite plausible especially considering AI accelerators and ASICs.
Thanks for sharing! That's a pretty sophisticated modeling function but it makes sense. I personally think Moore's law (the FLOPS/$ version) will continue, but I know there's a lot of skepticism about that.
Could you make another graph like Fig 4 but showing projected cost, using Moore's law to estimate cost? The cost is going to be a lot, right?
Networks with loops are much harder to train.. that was one of the motivations for going to transformers instead of RNNs. But yeah, sure, I agree. My objection is more that posts like this are so high level I have trouble following the argument, if that makes sense. The argument seems roughly plausible but not making contact with any real object level stuff makes it a lot weaker, at least to me. The argument seems to rely on "emergence of self-awareness / discovery of malevolence/deception during SGD" being likely which is unjustified in my view. I'm not s...
Has GPT-3 / large transformers actually led to anything with economic value? Not from what I can tell although anecdotal reports on Twitter are that many SWEs are finding Github Copilot extremely useful (it's still in private beta though). I think transformers are going to start providing actual value soon, but the fact they haven't so far despite almost two years of breathless hype is interesting to contemplate. I've learned to ignore hype, demos, cool cherry-picked sample outputs, and benchmark chasing and actually look at what is being deployed "in the ...
This is a shot in the dark, but I recall there was a blog post that made basically the same point visually, I believe using Gaussian distributions. I think the number they argued you should aim for was 3-4 instead of 6. Anyone know what I'm talking about?
Hi, I just wanted to say thanks for the comment / feedback. Yeah, I probably should have separated out the analysis of Grokking from the analysis of emergent behaviour during scaling. They are potentially related - at least for many tasks it seems Grokking becomes more likely as the model gets bigger. I'm guilty of actually conflating the two phenomena in some of my thinking, admittedly.
Your point about "fragile metrics" being more likely to show Grokking is great. I had a similar thought, too.
I think a bit too much mindshare is being spent on these sci-fi scenario discussions, although they are fun.
Honestly I have trouble following these arguments about deception evolving in RL. In particular I can't quite wrap my head around how the agent ends up optimizing for something else (not a proxy objective, but a possibly totally orthogonal objective like "please my human masters so I can later do X"). In any case, it seems self awareness is required for the type of deception that you're envisioning. Which brings up an interesting question - can a pu...
Zac says "Yes, over the course of training AlphaZero learns many concepts (and develops behaviours) which have clear correspondence with human concepts."
What's the evidence for this? If AlphaZero worked by learning concepts in a sort of step-wise manner, then we should expect jumps in performance when it comes to certain types of puzzles, right? I would guess that a beginning human would exhibit jumps from learning concepts like "control the center" or "castle early, not later".. for instance the principle "control the center", once followed, has implicati...
This is pretty interesting. There is a lot to quibble about here, but overall I think the information about bees here is quite valuable for people thinking about where AI is at right now and trying to extrapolate forward.
A different approach, perhaps more illuminating, would be to ask how much of a bee's behavior we could plausibly emulate today by glomming together a bunch of different ML algorithms into some sort of virtual bee cognitive architecture - if, say, we wanted to make a drone that behaved like a bee a la Black Mirror. Obviously that's a much more c...
Another point is that when you optimize relentlessly for one thing, you might have trouble exploring the space adequately (getting stuck at local maxima). That's why RL agents/algorithms often take random actions when they are training (this is the "exploration" side of the exploration/exploitation trade-off). Maybe random actions can be thought of as a form of slack? Micro-slacks?
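The standard way this shows up concretely is epsilon-greedy action selection (a sketch, not any particular library's API): with probability ε the agent ignores its value estimates entirely and acts randomly.

```python
import random

def epsilon_greedy(q_values, epsilon=0.1, rng=random):
    """With probability epsilon, take a random action (exploration);
    otherwise take the action with the highest estimated value
    (exploitation)."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

With epsilon=0 the agent purely exploits and can get stuck at a local maximum; the small random fraction is the kind of built-in "slack" I mean.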
Look at Kenneth Stanley's arguments about why objective functions are bad (video talk on it here). Basically he's saying we need a lot more random exploration. Humans are similar - we have an ope...
Bostrom talks about this in his book "Superintelligence" when he discusses the dangers of Oracle AI. It's a valid concern, we're just a long way from that with GPT-like models, I think.
I used to think a system trained on text only could never learn vision. So if it escaped onto the internet, it would be pretty limited in how it could interface with the outside world since it couldn't interpret streams from cameras. But then I realized that probably in its training data is text on how to program a CNN. So in theory a system trained on only text could build...
I just did some tests... it works if you go to settings and click "Activate Markdown Editor". Then convert to Markdown and re-save (note, you may want to back up before this, there's a chance footnotes and stuff could get messed up).
$stuff$ for inline math and double dollar signs for single line math work when in Markdown mode. When using the normal editor, inline math doesn't work, but $$ works (but puts the equation on a new line).
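To illustrate (in Markdown mode):

```
Inline: the identity $e^{i\pi} + 1 = 0$ renders within the sentence.
Display: $$\int_0^1 x^2 \, dx = \frac{1}{3}$$ gets its own line.
```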
I have mixed feelings on this. I have mentored ~5 undergraduates in the past 4 years and observed many others, and their research productivity varies enormously. How much of that is due to IQ vs other factors I really have no idea. My personal feeling was most of the variability was due to life factors like the social environment (family/friends) they were ensconced in and how much time that permitted them to focus on research.
My impression from TAing physics for life scientists for two years was that a large number felt they were intrinsically bad a...
I liked how in your AISS support talk you used history as a frame for thinking about this because it highlights the difficulty of achieving superhuman ethics. Human ethics (for instance as encoded in laws/rights/norms) is improving over time, but it's been a very slow process that involves a lot of stumbling around and having to run experiments to figure out what works and what doesn't. "The Moral Arc" by Michael Shermer is about the causes of moral progress... one of them is allowing free speech, free flow of ideas. Basically, it seems moral progres...
It's a mixed bag. A lot of near-term work is scientific, in that theories are proposed and experiments are run to test them. But from what I can tell that work is also incredibly myopic and specific to the details of present-day algorithms, and whether any of it will generalize to systems further down the road is exceedingly unclear.
The early writings of Bostrom and Yudkowsky I would classify as a mix of scientifically informed futurology and philosophy. As with science fiction, they are laying out what might happen. There is no science of psychohistory an...
The paper you cited does not show this.
Yeah, you're right I was being sloppy. I just crossed it out.
oo ok, thanks, I'll take a look. The point about generative models being better is something I've been wanting to learn about, in particular.
SGD is a form of efficient approximate Bayesian updating.
Yeah I saw you were arguing that in one of your posts. I'll take a closer look. I honestly have not heard of this before.
Regarding my statement - I agree, looking back at it, it is horribly sloppy and sounds absurd, but when I was writing I was just thinking about how all L1 and L2 regularization do is bias towards smaller weights - the models still take up the same amount of space on disk and require the same amount of compute to run in terms of FLOPs. But yes, you're right that they make the m...
By the way, if you look at Filan et al.'s paper "Clusterability in Neural Networks" there is a lot of variance in their results but generally speaking they find that L1 regularization leads to slightly more clusterability than L2 or dropout.
The idea that using dropout makes models simpler is not intuitive to me, because according to Hinton dropout essentially does the same thing as ensembling. If what you end up with is something equivalent to an ensemble of smaller networks, then it's not clear to me that would be easier to prune.
One of the papers you linked to appears to study dropout in the context of Bayesian modeling and they argue it encourages sparsity. I'm willing to buy that it does in fact reduce complexity/ compressibility but I'm also not sure any of this is 100% clear cut.
(responding to Jacob specifically here) A lot of things that were thought of as "obvious" were later found out to be false in the context of deep learning - for instance the bias-variance trade-off.
I think what you're saying makes sense at a high/rough level but I'm also worried you are not being rigorous enough. It is true and well known that L2 regularization can be derived from Bayesian neural nets with a Gaussian prior on the weights. However neural nets in deep learning are trained via SGD, not with Bayesian updating -- and it doesn't seem modern CNNs...
Hey, OK, fixed. Sorry there is no link to the comment -- I had a link in an earlier draft but then it got lost. It was a comment somewhere on LessWrong and now I can't find it -_-.
That's interesting it motivated you to join Anthropic - you are definitely not alone in that. My understanding is Anthropic was founded by a bunch of people who were all worried about the possible implications of the scaling laws.
To my knowledge the most used regularization method in deep learning, dropout, doesn't make models simpler in the sense of being more compressible.
A simple L1 regularization would make models more compressible insofar as it suppresses weights toward zero, so they can just be thrown out completely without affecting model performance much. I'm not sure about L2 regularization making things more compressible - does it lead to flatter minima, for instance? (GPT-3 uses L2 regularization, which they call "weight decay".)
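To make the compressibility point concrete, here's a toy magnitude-pruning step (the threshold and example weights are illustrative): after L1-regularized training, many weights sit near zero, and zeroing them out barely changes the output while making the weight array highly sparse and hence cheap to store.

```python
import numpy as np

def prune_small_weights(weights, threshold=1e-3):
    """Zero out weights whose magnitude falls below the threshold.
    After L1 regularization pushes many weights toward zero, this
    typically removes a large fraction of parameters with little
    effect on model performance."""
    mask = np.abs(weights) >= threshold
    return weights * mask, 1.0 - mask.mean()  # pruned weights, sparsity

w = np.array([0.5, 1e-5, -0.3, 2e-4, 0.0])
pruned, sparsity = prune_small_weights(w)
```

A sparse array like `pruned` compresses well (store only the nonzero entries and their indices), which is the sense of "compressible" I have in mind.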
But yes, you are right, Occam factors are...
I think this is a nice line of work. I wonder if you could add a simple/small constraint on weights that avoids the issue of multimodal neurons -- it seems doable.
I just wanted to say I don't think you did anything ethically wrong here. There was a great podcast with Diana Fleischman I listened to a while ago where she talked about how we manipulate other people all the time, especially in romantic relationships. I'm uncomfortable saying that any manipulation whatsoever is ethically wrong because I think that's demanding too much cognitive overhead for human relationships (and also makes it hard to raise kids) - I think you have to figure out a more nuanced view. For instance, having a high level rule on what ...
You sound very confident your device would have worked really well. I'm curious, how much testing did you do?
I have a Garmin Vivosmart 3 and it tries to detect when I'm either running, biking, or going up stairs. It works amazingly well considering the tiny amount of hardware and battery power it has, but it also fails sometimes, like randomly thinking I've been running for a while when I've been doing some other high heart rate thing. Maddeningly, I can't figure out how to turn off some of the alerts, like when I've met my "stair goal" for the day.
I think he's conditioning heavily on being fully vaxxed and boosted when making the comparison to the flu. Which makes sense to me. I also suspect long Covid-19 risk is much lower if you're vaxxed & boosted, based on the theory that Long Covid is caused by an inflammatory cascade that won't shut off (there's a lot of debate about what biomarkers to use but many long Covid patients have elevated markers of inflammation months later). If your symptoms are mild, you won't have that inflammatory cascade. Here's Zvi on one of the latest Long Covid papers : ...
"I think this is important as the speed prior was considered to be, and still is by many, a very good candidate for a way of not producing deceptive models." I'm curious who has professed a belief in this.
I don't have much direct experience with transformers (I was part of some research with BERT once where we found it was really hard to use without adding hard-coded rules on top, but I have no experience with the modern GPT stuff). However, what you are saying makes a lot of sense to me based on my experience with CNNs and the attempts I've seen to explain/justify CNN behaviour with side channels (for instance this medical image classification system that also generates text as a side output).
See also my comment on Facebook.
I think what you're saying makes a lot of sense. When assembling a good training data set, it's all about diversity.
(cross posting this comment from E. S. Yudkowsky's Facebook with some edits / elaboration)
Has anyone tried fine-tuning a transformer on small datasets of increasing size to get a sense of how large a dataset would be needed to do this well? I suspect it might have to be very large.
Note this is similar to the "self explaining AI" idea I explored in early 2020, which I threw together a paper on (I am hesitant to link to it because it's not that great of a paper and much of the discussion there is CNN specific, but here it is.). I can see how producing "thoug...
We're guessing 1000 steps per reasonably-completed run (more or less, doesn't have to be exact) and guessing maybe 300 words per step, mostly 'thought'. Where 'thoughts' can be relatively stream-of-consciousness once accustomed (we hope) and the dungeon run doesn't have to be Hugo quality in its plotting, so it's not like we're asking for a 300,000-word edited novel.
However I also could see the "thoughts" output misleading people - people might mistake the model's explanations as mapping onto the calculations going on inside the model to produce an output.
I think the key point on avoiding this is the intervening-on-the-thoughts part:
"An AI produces thoughts as visible intermediates on the way to story text, allowing us to watch the AI think about how to design its output, and to verify that we can get different sensible outputs by intervening on the thoughts".
So the idea is that you train things in such a way that the thoughts do map onto the calculations going on inside the model.
Note: Pfizer started a trial in September to try to answer this question. We may know the answer in a few months. In theory I don't see why it wouldn't work, but with limited supply there are probably better uses, at least in the next few months.
Also, note the initial EUA application is asking it be approved for high-risk patients only, probably because Pfizer was told by FDA it wouldn't be EUA'd otherwise.
Paxlovid must be taken with Ritonavir (otherwise Paxlovid breaks down too fast), which messes with liver enzymes and isn't a good choice for man...
Very cool, will take a look. This basically solves question 1. It seems the original Solomonoff work isn't published anywhere. By the way, the author, William H. Press, is a real polymath! I am curious if there is any extension of this work to agents with finite memory.. as an example, the same situation where you're screening a large number of people, but now you have a memory where you can store N results of prior screenings for reference. I'm going to look into it..
Here's another paper on small / non-robust features, but rather specific to patch-based vision transformers:
Understanding and Improving Robustness of Vision Transformers through Patch-based Negative Augmentation
^ This work is very specific to patch-based methods. Whether patches are here to stay, and for how long, is unclear to me, but right now they seem to be in the ascendancy (?).
For what it's worth - I see value in votes being public by default. It can be very useful to see who upvoted or downvoted your comment. Of course then people will use the upvote feature just to indicate they read a post, but that's OK (we are familiar with that system from Facebook, Twitter, etc).
I'm pretty apathetic about all the other proposals here. Reactions seem to me to be unnecessary distractions. [side note - emojis are very ambiguous so it's good you put words next to each one to explain what they are supposed to mean]. The way I woul...
I would modify the theory slightly by noting that the brain may become hypersensitive to sensations arising from the area that was originally damaged, even after it has healed. Sensations that are otherwise normal can then trigger pain. I went to the website about pain reprocessing therapy and stumbled upon an interview with Alan Gordon where he talked about this. I suspect that high level beliefs about tissue damage etc play a role here also in causing the brain to become hyper focused on sensations coming from a particular region and to interpret t...