Veedrac

Comments

DALL-E by OpenAI

I expect that getting a dataset an order of magnitude larger than The Pile without significantly compromising on quality will be hard, but not impractical. Two orders of magnitude (~100 TB) would be extremely difficult, if feasible at all. But it's not clear that this matters; per the Scaling Laws paper, dataset requirements grow more slowly than model size, and a 10 TB dataset would already be past the compute-data intersection point it describes.
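
As a very rough back-of-the-envelope sketch of what I mean (this assumes the approximate overfitting bound I recall from Kaplan et al., roughly D ≳ 5×10³·N^0.74 tokens, plus ~4 bytes per token of English text; both constants are order-of-magnitude guesses, not precise values):

```python
# Rough order-of-magnitude sketch only; the coefficient, exponent, and
# bytes-per-token figure are assumptions recalled from Kaplan et al. (2020).

def tokens_to_avoid_overfitting(n_params: float, coeff: float = 5e3, exp: float = 0.74) -> float:
    """Approximate token count at which data stops being the bottleneck for a model of n_params."""
    return coeff * n_params ** exp

def dataset_size_tb(n_tokens: float, bytes_per_token: float = 4) -> float:
    """Convert a token count to terabytes of raw text."""
    return n_tokens * bytes_per_token / 1e12

for n_params in (1e9, 1e10, 1e11, 1e12):
    tokens = tokens_to_avoid_overfitting(n_params)
    print(f"{n_params:.0e} params -> ~{tokens:.1e} tokens (~{dataset_size_tb(tokens):.1f} TB)")
```

Under those very rough numbers, even a trillion-parameter model lands around the 10 TB mark, which is the scale I have in mind above.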

Note also that 10 TB of text is an exorbitant amount. Even if there were a model that would hit AGI with, say, a PB of text, but not with 10 TB of text, it would probably also hit AGI with 10 TB of text plus some fairly natural adjustments to its training regime to inhibit overfitting. I wouldn't argue this all the way down to human levels of data, since the human brain has much more embedded structure than we assume for ANNs, but certainly huge models like GPT-3 start to learn new concepts in only a handful of updates, and I expect that trend of greater learning efficiency to continue.

I'm also skeptical that images, video, and such would substantially change the picture. Images are very information-sparse. Consider how much you can learn from 1 MB of text versus 1 MB of pixels.

Correlations among these senses gives rise to understanding causality. Moreover, human brains might have evolved innate structures for things like causality, agency, objecthood, etc which don't have to be learned.

Correlation is not causation ;). I think it's plausible that agenthood would help progress towards some of those ideas, but that doesn't much argue for multiple distinct senses. You can find mere correlations just fine with only one.

It's true that even a deafblind person will have mental structures that evolved for sight and hearing, but that's not much of an argument that those senses are needed for intelligence, and given the evidence (the lack of mental impairment in deafblind people), a strong argument seems necessary.

For sure I'll accept that you'll want to train multimodal agents anyway, to round out their capabilities. A deafblind person might still be intellectually capable, but that doesn't mean they can paint.

DALL-E by OpenAI

Audio, video, text, images

While other media would undoubtedly improve the model's understanding of concepts that are hard to express through text, I've never bought the idea that it would do much for AGI. Text has more than enough in it to capture intelligent thought; it is the relations and structure that matter, above all else. If this weren't true, one wouldn't expect competent deafblind people to exist, but they do. Their successes come even in spite of an evolutionary history with practically no surviving deafblind ancestors! Clearly the modules that make humans intelligent, in a way that other animals and things are not, do not depend on multisensory data.

Will OpenAI's work unintentionally increase existential risks related to AI?

To the question of how OpenAI's demonstrations of scaled-up versions of current models affect AI safety: I don't think much changes. It does seem that OpenAI is aiming to go beyond simple scaling, which seems much riskier.

As to the general question, certainly that news makes me more worried about the state of things. I know way too little about the decision to be more concrete than that.

Open & Welcome Thread - December 2020

Thanks, I figured this wouldn't be a new question. UDASSA seems quite unsatisfying (I have no formal argument for that claim) but the perspective is nice. I appreciate the pointer :).

Open & Welcome Thread - December 2020

Consider a fully deterministic conscious simulation of a person. There are two possible futures, one where that simulation is run once, and another where the simulation is run twice simultaneously in lockstep, with the exact same parameterization and environment. Do these worlds have different moral values?

I ask because...

initially I would have said no, probably not; these are identically the same person, so there is only one instance actually there, but...

Consider a fully deterministic conscious simulation of a person. There are two possible futures, one where that simulation is run once, and another where the simulation is also run once, but with the future having twice the probability mass. Do these worlds have different moral values?

to which the answer must surely be yes, or else it's really hard to have coherent moral values under quantum mechanics; hence the contradiction with my initial answer.
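
To make the second intuition concrete, here's a toy calculation assuming moral value aggregates as measure-weighted (expected) value; the specific numbers are arbitrary:

```python
# Toy illustration; assumes measure-weighted (expected-value) aggregation,
# with an arbitrary value assigned to the simulated life.
value_of_simulation = 1.0

p = 0.1  # probability mass of the future in the first scenario
contribution_once = p * value_of_simulation             # 0.1
contribution_double_mass = 2 * p * value_of_simulation  # 0.2

# If doubling the probability mass didn't double the contribution, shifting
# quantum measure between branches would be morally irrelevant, which seems
# incoherent under many-worlds.
print(contribution_once, contribution_double_mass)
```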

What technologies could cause world GDP doubling times to be <8 years?

Do you expect pre-takeoff AI to provide this? What sort of AI and production capabilities are you envisioning?

Or are you answering this question without reference to AI? If so, what would make this useful for estimating AI timelines?

AGI Predictions

This is only true if, for example, you think AI would cause GDP growth. My model assigns a lot of probability to ‘AI kills everyone before (human-relevant) GDP goes up that fast’, so questions #7 and #8 are conditional on me being wrong about that. If we can last even a small multiple of a year with AI smart enough to double GDP in that timeframe, then things probably aren't as bad as I thought.

AGI Predictions

To emphasize, the clash I'm perceiving is not about the chance assigned to these problems being tractable, but about the relative probability of ‘AI Alignment researchers’ solving the problems, as compared to everyone else and every other explanation. In particular, people building AI systems, even if completely unconvinced about the merits of AI risk, intrinsically spend some of their effort trying to make those systems aligned, just because that's a fundamental part of building a useful AI.

I could talk about the specific technical work, or the impact that things like the AI FOOM Debate had on Superintelligence had on OpenPhil, or CFAR on FLI on Musk on OpenAI. Or I could go into detail about the research being done on topics like Iterated Amplification and Agent Foundations and so on and ways that this seems to me to be clear progress on subproblems.

I have a sort of Yudkowskian pessimism towards most of these things (policy won't actually help; Iterated Amplification won't actually work), but I'll try to put that aside here for a bit. What I'm curious about is what makes these sorts of ideas only discoverable in this specific network of people, under these specific institutions, and what makes them particularly more promising than more classical sorts of alignment.

Isn't Iterated Amplification in the class of things you'd expect people to try just to get their early systems to work, at least with ≥20% probability? Not, to be clear, exactly that system, but just fundamentally RL systems that take extra steps to preserve the intentionality of the optimization process.

To rephrase a bit, it seems to me that a worldview in which AI alignment is sufficiently tractable that Iterated Amplification is a huge step towards a solution would also be a worldview in which AI alignment is sufficiently easy (though not necessarily trivial) that there should be a much larger prior belief that it gets solved anyway.

AGI Predictions

There is a huge difference in the responses to Q1 (“Will AGI cause an existential catastrophe?”) and Q2 (“...without additional intervention from the existing AI Alignment research community”), to a point that seems almost unjustifiable to me. To pick the first matching example I found (and not to purposefully pick on anybody in particular), Daniel Kokotajlo thinks there's a 93% chance of existential risk without the AI Alignment community's involvement, but only 53% with. This implies that there's a ~43% chance of the AI Alignment community solving the problem, conditional on it being real and unsolved otherwise, but only a ~7% chance of it not occurring for any other reason, including the possibility of it being solved by the researchers building the systems, or the concern being largely incorrect.
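
To spell out the arithmetic behind those implied numbers (this is my decomposition of Daniel's stated figures, not anything he wrote himself):

```python
# Decomposing the stated answers; this is my reading of the implication,
# not Daniel's own calculation.
p_catastrophe_without_community = 0.93  # Q2: without the AI Alignment community's involvement
p_catastrophe_with_community = 0.53     # Q1: with the community's involvement

# Chance catastrophe never happens for any other reason
# (builders solve it themselves, the concern is largely incorrect, etc.)
p_fine_otherwise = 1 - p_catastrophe_without_community  # 0.07

# Chance the community averts it, conditional on it otherwise occurring
p_community_averts = (
    (p_catastrophe_without_community - p_catastrophe_with_community)
    / p_catastrophe_without_community
)  # ~0.43

print(f"other reasons: ~{p_fine_otherwise:.0%}, community: ~{p_community_averts:.0%}")
```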

What makes people so confident in the AI Alignment research community solving this problem, far above that of any other alternative?

The Colliding Exponentials of AI

On the other hand, improvements on ImageNet (the dataset AlexNet excelled on at the time) itself are logarithmic rather than exponential and at this point seem to have reached a cap at around human level ability or a bit less (maybe people got bored of it?)

The best models are more accurate than the ground-truth labels.

Are we done with ImageNet?
https://arxiv.org/abs/2006.07159

Yes, and no. We ask whether recent progress on the ImageNet classification benchmark continues to represent meaningful generalization, or whether the community has started to overfit to the idiosyncrasies of its labeling procedure. We therefore develop a significantly more robust procedure for collecting human annotations of the ImageNet validation set. Using these new labels, we reassess the accuracy of recently proposed ImageNet classifiers, and find their gains to be substantially smaller than those reported on the original labels. Furthermore, we find the original ImageNet labels to no longer be the best predictors of this independently-collected set, indicating that their usefulness in evaluating vision models may be nearing an end. Nevertheless, we find our annotation procedure to have largely remedied the errors in the original labels, reinforcing ImageNet as a powerful benchmark for future research in visual recognition.

Figure 7 shows that model progress is much larger than the raw progression of ImageNet scores would indicate.
