Personal website: https://andrewtmckenzie.com/
Thanks so much for putting this together Mati! If people are interested in cryonics/brain preservation and would like to learn about (my perspective on) the field from a research perspective, please feel free to reach out to me: https://andrewtmckenzie.com/
I also have some external links/essays available here: https://brainpreservation.github.io/
It seems to me that your model may not sufficiently account for technical debt. https://neurobiology.substack.com/p/technical-debt-probably-the-main-roadblack-in-applying-machine-learning-to-medicine
This seems to me like the main thing that will slow down the extent to which foundation models can consistently beat newly trained specialized models.
Anecdotally, I know several people who don't like to use ChatGPT because its training data cuts off in 2021. This seems like a form of technical debt.
I guess it depends on how easily adaptable foundation models are.
Those sound good to me! I donated to your charity (the Animal Welfare Fund) to finalize it. Let me know if you want me to email you the receipt. Here's the Manifold market:
Bet
Andy will donate $50 to a charity of Daniel's choice now.
If, by January 2027, there is no report from a reputable source confirming that at least three companies that would previously have relied upon programmers, and that meet a defined level of success, are being run without the need for human programmers due to the independent capabilities of an AI developed by OpenAI or another AI organization, then Daniel will donate $100, adjusted for inflation as of June 2023, to a charity of Andy's choice.
Terms
Reputable Source: For the purpose of this bet, reputable sources include MIT Technology Review, Nature News, The Wall Street Journal, The New York Times, Wired, The Guardian, or TechCrunch, or similar publications of recognized journalistic professionalism. Personal blogs, social media sites, or tweets are excluded.
AI's Capabilities: The AI must be capable of independently performing the full range of tasks typically carried out by a programmer, including but not limited to writing, debugging, maintaining code, and designing system architecture.
Equivalent Roles: Roles that involve tasks requiring comparable technical skills and knowledge to a programmer, such as maintaining codebases, approving code produced by AI, or prompting the AI with specific instructions about what code to write.
Level of Success: The companies must be generating a minimum annual revenue of $10 million (or likely generating this amount of revenue if it is not public knowledge).
Report: A single, substantive article or claim in one of the defined reputable sources that verifies the defined conditions.
AI Organization: An institution or entity recognized for conducting research in AI or developing AI technologies. This could include academic institutions, commercial entities, or government agencies.
Inflation Adjustment: The donation will be an equivalent amount of money as $100 as of June 2023, adjusted for inflation based on https://www.bls.gov/data/inflation_calculator.htm.
Regulatory Impact: In January 2027, Andy will use his best judgment to decide whether the conditions of the bet would have been met in the absence of any government regulation restricting or banning the types of AI that would have otherwise replaced programmers.
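The inflation-adjustment term above is simple enough to sketch in code. This is a minimal illustration, not the official resolution method (that is the BLS calculator linked in the terms); the June 2023 CPI-U figure is approximate, and the January 2027 figure is a made-up placeholder.

```python
# Sketch of the bet's inflation-adjustment term.
# Actual resolution uses https://www.bls.gov/data/inflation_calculator.htm.

def adjust_for_inflation(amount, cpi_base, cpi_current):
    """Scale a dollar amount by the ratio of the CPI at resolution
    to the CPI at the base date (June 2023)."""
    return amount * cpi_current / cpi_base

CPI_JUNE_2023 = 305.1  # approximate BLS CPI-U for June 2023
CPI_JAN_2027 = 335.0   # hypothetical placeholder for January 2027

donation = adjust_for_inflation(100, CPI_JUNE_2023, CPI_JAN_2027)
print(round(donation, 2))
```

With these placeholder numbers, the $100 donation would scale to roughly $110; the real figure depends on actual CPI at resolution.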
Sounds good, I'm happy with that arrangement once we get these details figured out.
Regarding the human-programmer formality: it seems like business owners would have to be really incompetent for this to be a factor. Plenty of managers have coding experience. If the programmers aren't doing anything useful, then they will be let go, or new companies will start that don't have them; they are a huge expense. I'm inclined not to include this, since it's an ambiguity that seems implausible to me.
Regarding the potential ban by the government, I wasn't really thinking of that as a possible option. What kind of ban do you have in mind? I imagine that regulation of AI is very likely by then, so if the automation of all programmers hasn't happened by Jan 2027, it seems very easy to argue that it would have happened in the absence of the regulation.
Regarding these and a few of the other ambiguous things, one way we could do this is that you and I could just agree on it in Jan 2027. Otherwise, the bet resolves N/A and you don't donate anything. This could make it an interesting Manifold question because it's a bit adversarial. This way, we could also get rid of the requirement for it to be reported by a reputable source, which is going to be tricky to determine.
Understandable. How about this?
Bet
Andy will donate $50 to a charity of Daniel's choice now.
If, by January 2027, there is no report from a reputable source confirming that at least three companies that would previously have relied upon programmers, and that meet a defined level of success, are being run without the need for human programmers due to the independent capabilities of an AI developed by OpenAI or another AI organization, then Daniel will donate $100, adjusted for inflation as of June 2023, to a charity of Andy's choice.
Terms
Reputable Source: For the purpose of this bet, reputable sources include MIT Technology Review, Nature News, The Wall Street Journal, The New York Times, Wired, The Guardian, or TechCrunch, or similar publications of recognized journalistic professionalism. Personal blogs, social media sites, or tweets are excluded.
AI's Capabilities: The AI must be capable of independently performing the full range of tasks typically carried out by a programmer, including but not limited to writing, debugging, maintaining code, and designing system architecture.
Equivalent Roles: Roles that involve tasks requiring comparable technical skills and knowledge to a programmer, such as maintaining codebases, approving code produced by AI, or prompting the AI with specific instructions about what code to write.
Level of Success: The companies must be generating a minimum annual revenue of $10 million (or likely generating this amount of revenue if it is not public knowledge).
Report: A single, substantive article or claim in one of the defined reputable sources that verifies the defined conditions.
AI Organization: An institution or entity recognized for conducting research in AI or developing AI technologies. This could include academic institutions, commercial entities, or government agencies.
Inflation Adjustment: The donation will be an equivalent amount of money as $100 as of June 2023, adjusted for inflation based on https://www.bls.gov/data/inflation_calculator.htm.
I guess that there might be some disagreements in these terms, so I'd be curious to hear your suggested improvements.
Caveat: I don't have much disposable money right now, so it's not much money, but perhaps this is still interesting as a marker of our beliefs. Totally ok if it's not enough money to be worth it to you.
I'm wondering if we could make this into a bet. If by remote workers we include programmers, then I'd be willing to bet that GPT-5/6, depending on what that means (it might be easier to say the top LLMs or other models trained by anyone by 2026), will not be able to replace them.
These curves are due to temporary plateaus, not permanent ones. Moore's law is an example of a constraint that seems likely to plateau. I'm talking about takeoff speeds, not eventual capabilities with no resource limitations, which I agree would be quite high and which I have little idea how to estimate (there will probably still be some constraints, like within-system communication constraints).
Does anyone know of any AI-related predictions by Hinton?
Here's the only one I know of - "People should stop training radiologists now. It's just completely obvious within five years deep learning is going to do better than radiologists because it can get a lot more experience. And it might be ten years but we got plenty of radiologists already." - 2016, slightly paraphrased
This still seems like a testable prediction: by November 2026, radiologists should be completely replaceable by deep learning methods, at least setting aside regulatory requirements for trained physicians.
I agree with most of this post, but it doesn't seem to address the possibility of whole brain emulation. However, many (most?) would argue that this is unlikely to play a major role because AGI will come first.