We’ve taught AI how to speak, and it appears that OpenAI has taught their AI to produce as little offensive content as possible.
The problem is that the AI can (and does) lie. Right now, ChatGPT and its ilk are at less-than-superhuman levels of intelligence, so we can catch their lies. But when a superhuman AI starts lying to you, how does one correct for that? If a superhuman AI starts veering off in a direction that is unexpected, how does one bring it back on track?
@gwern's short story, Clippy, highlights many of the issues with naively training a superintelligent algorithm on human-generated data and expecting that algorithm to pick up human values as a result. Another post to consider is The Waluigi Effect, which raises the possibility that the more you train an agent to say correct, inoffensive things, the more you've also trained a shadow-agent to say incorrect, offensive things.
How would you measure the usage? If, for example, Google integrates Bard into its main search engine, as they are rumored to be doing, would that count as usage? If so, I would agree with your assessment.
However, I disagree that this would be a "drastic" impact. A better Google search is nice, but it's not life-changing in a way that would be noticed by someone who isn't deeply aware of and interested in technology. It's not like, e.g., Google Maps navigation suddenly allowing you to find your way around a strange city without having to buy any maps or decipher local road signs.
What I'm questioning is the implicit assumption in your post that AI safety research will inevitably take place in an academic environment, and therefore productivity practices derived from other academic settings will be helpful. Why should this be the case when, over the past few years, most of the AI capabilities research has occurred in corporate research labs?
Some of your suggestions, of course, work equally well in either environment. But not all, and even the ones which do work would require a shift in emphasis. For example, when you say professors should be acquainted with other professors, that's valid in academia, where roughly everyone who matters either has tenure or is on a tenure track. However, that is not true in a corporate environment, where many people may not even have PhDs. Furthermore, in a corporate environment, limiting one's networking to just researchers is probably ill-advised, given that there are many other people who have influence over the research. Knowing a senior executive with influence over product roadmaps could be just as valuable, even if that executive has no academic pedigree at all.
Prioritizing high value research and ignoring everything else is a skill that works in both corporate and academic environments. But 80/20-ing teaching? In a corporate research lab, one has no teaching responsibilities. One would be far better served learning some basic software engineering practices, in order to better interface with product engineers. Similarly, with regards to publishing, for a corporate research lab, having a working product is worth dozens of research papers. Research papers bring prestige, but they don't pay the bills. Therefore, I would argue that AI safety researchers should be keeping an eye on how their findings can be applied to existing AI systems. This kind of product-focused development is something that academia is notoriously bad at.
I also question your claim that academic bureaucracy doesn't slow good researchers down very much. That's very much not in line with what anecdotes I've heard. From what I've seen, writing grant proposals, dealing with university bureaucracy, and teaching responsibilities are a significant time suck. Maybe with practice and experience, it's possible for a good researcher to complete these tasks on "autopilot", and therefore not notice the time that's being spent. But the tasks are still costing time and mental energy that, ideally, would be devoted to research or writing.
I don't think it's inevitable that academia will take over AI safety research, given the trend in AI capabilities research, and I certainly don't think that academia taking over AI safety research would be a good thing. For this reason I question whether it's valuable for AI safety researchers to develop skills valuable for academic research, specifically, as opposed to general time management, software engineering and product development skills.
What is the purpose, beyond mere symbolism, of hiding this post to logged out users when the relevant data is available, in far more detail, on Google's official AI blog?
I am saying that successful professors are highly successful researchers
Are they? That's why I'm focusing on empirics. How do you know that these people are highly successful researchers? What impressive research findings have they developed, and how did e.g. networking and selling their work enable them to get to these findings? Similarly, with regards to bureaucracy, how did successfully navigating the bureaucracy of academia enable these researchers to improve their work?
The way it stands right now, what you're doing is pointing at some traits that correlate with academic success, and claiming that those traits caused the success, and that AI safety researchers should therefore emulate them.
This reasoning is flawed. First, why should AI safety research aspire to the same standards of "publish or perish" and the emphasis on finding positive results that gave us the replication crisis? It seems to me that, to the greatest extent possible, AI safety research should reject these standards, and focus on finding results that are true, rather than results that are publishable.
Secondly, correlation is not causation. The fact that many researchers from an anecdotal sample share certain attributes doesn't mean that those attributes are causative of those researchers' success. There are lots of researchers who do all of the things that you describe, managing their time, networking aggressively, and focusing on understanding grantmaking, who do not end up at prestigious institutions. There are lots of researchers who do all of those things who don't end up with tenure at all.
This is why I'm so skeptical of your post. I'm not sure that the steps you describe are actually causative of academic success, rather than merely correlated with it, and furthermore, I'm not sure that the standards of academic success are even something that AI safety research should aspire to.
Well, augmenting reality with an extra dimension containing the thing that previously didn’t exist is the same as “trying and seeing what would happen.” It worked swimmingly for the complex numbers.
No, it isn't. The difference between i and the values returned by 1/0 is that i can be used to prove further theorems and model phenomena, such as alternating current, that would be difficult, if not impossible, to model with just the real numbers. Whereas positing the existence of 1/0 is just like positing the existence of a finite value x that satisfies 0 · x = 1. We can posit values that satisfy all kinds of impossibilities, but if we cannot prove additional facts about the world with those values, they're useless.
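To spell out the disanalogy (the algebra below is my own illustration of the standard argument, not taken from the parent comment):

```latex
% Adjoining i is consistent: stipulate i^2 = -1 and the field axioms still
% hold, which is what lets complex numbers model oscillation, e.g. the
% phasor form used for alternating current:
e^{i\omega t} = \cos(\omega t) + i\sin(\omega t)

% Positing a finite value z = 1/0 is not consistent: it would require
% 0 \cdot z = 1, but in any field distributivity forces
0 \cdot z = (0 + 0) \cdot z = 0 \cdot z + 0 \cdot z
\;\implies\; 0 \cdot z = 0
% so we would conclude 1 = 0, a contradiction. Such a "value" proves no
% further facts, because from a contradiction everything follows.
```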
For what it's worth, I had a very similar reaction to yours. Insects and arthropods are a common source of disgust and revulsion, and so comparing anyone to an insect or an arthropod, to me, shows that you're trying to indicate that this person is either disgusting or repulsive.
Probabilities as credences can correspond to confidence in propositions unrelated to future observations, e.g., philosophical beliefs or practically-unobservable facts. You can unambiguously assign probabilities to ‘cosmopsychism’ and ‘Everett’s many-worlds interpretation’ without expecting to ever observe their truth or falsity.
You can, but why would you? Beliefs should pay rent in anticipated experiences. If two beliefs lead to the same anticipated experiences, then there's no particular reason to choose one belief over the other. Assigning probability to cosmopsychism or Everett's many-worlds interpretation only makes sense insofar as you think there will be some observations, at some point in the future, which will be different if one set of beliefs is true versus if the other set of beliefs is true.
One crude way of doing it is saying that a professor is successful if they are a professor at a top 10-ish university.
But why should that be the case? Academia is hypercompetitive, but it does not select solely on the quality of one's research. Choosing the trendiest fields has a huge impact. Perhaps the professors that are chosen by prestigious universities are the ones that those universities think are the best at drawing in grant money and getting publications into high-impact journals, such as Nature or Science.
Specifically I think professors are at least +2σ at “hedgehog-y” and “selling work” compared to similarly intelligent people who are not successful professors, and more like +σ at the other skills.
How does one determine this?
Overall, it seems like your argument is that AI safety researchers should behave more like traditional academia for a bunch of reasons that have mostly to do with social prestige. While I don't discount the role that social prestige plays in drawing people into a field and legitimizing it, it seems like, overall, the pursuit of prestige has been a net negative for science as a whole, leading to, for example, the replication crisis in medicine and biology, or the nonstop pursuit of string theory over alternate hypotheses in physics. Therefore, I'm not convinced that importing these prestige-oriented traits from traditional science would be a net positive for AI safety research.
Furthermore, I would note that traditional academia has been moving away from these practices, to a certain extent. During the early days of the COVID pandemic, quite a lot of information was exchanged not as formal peer-reviewed research papers, but as blog posts, Twitter threads, and preprints. In AI capabilities research, many new advances are announced as blog posts first, even if they might be formalized in a research paper later. Looking further back in the history of science, James Gleick, in Chaos, relates how the early researchers into chaos and complexity theories did their research by informally exchanging letters and draft papers. They were outside the normal categories that the bureaucracy of academia had established, so no journal would publish them.
It seems to me that the foundational, paradigm-shifting research always takes place this way. It takes place away from the formal rigors of academia, in informal exchanges between self-selected individuals. Only later, once the core paradigms of the new field have been laid down, does the field become incorporated into the bureaucracy of science, becoming legible enough for journals to routinely publish findings from the new field. I think AI safety research is at this early stage of maturity, and therefore it doesn't make sense for it to import the practices that would help practitioners survive and thrive in the bureaucracy of "Big Science".
Ah, but how do you make the artificial conscience value aligned with humanity? An "artificial conscience" that is capable of aligning a superhuman AI... would itself be an aligned superhuman AI.