
Comments

MrThink · 19d

I think it was in large part correlated with the general risk appetite of the market, primarily a reaction to interest rates.

MrThink · 20d

Nvidia is up 250% and Google is up about 11%, so the portfolio average would be considerably better than the market. So this was a great prediction after all; it just needed some time.
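As a rough sanity check, assuming an equal-weighted position in just those two stocks (an assumption; the original allocation isn't given here):

$$\frac{250\% + 11\%}{2} \approx 130\%$$

which would comfortably beat the broad market's return over the same period.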

MrThink · 7mo

I agree it is not clear whether it is net positive or negative that they open-source the models. Here are the main arguments for and against that I could think of:


Pros of open-sourcing models

- Gives AI alignment researchers access to smarter models to experiment on

- Decreases income for leading AI labs such as OpenAI and Google, since people can use open-source models instead.



Cons of open-sourcing models

- Capability researchers can run better experiments on how to improve capabilities.

- The open-source community could develop code to train and run inference on models faster, indirectly enhancing capability development.

- Better open-source models could lead to more AI startups succeeding, which might lead to more AI research funding. This seems like a stretch to me.

- If Meta were to share any meaningful improvements in how to train models, that would of course directly contribute to other labs' capabilities, but LLaMA doesn't seem that innovative to me. I'm happy to be corrected if I am wrong on this point.

MrThink · 10mo

I think one reason for the low number of upvotes was that it was not clear to me, until the second time I briefly checked this article, why it mattered.

I did not know what DoD was short for (U.S. Department of Defense), or why I should care about what they were funding.

Because overall, I do think it is interesting information.

Interesting read.

While I have also experienced that GPT-4 can't solve the more challenging problems I throw at it, I also recognize that most humans probably wouldn't be able to solve many of those problems either within a reasonable amount of time.

One possibility is that the ability to solve novel problems might follow an S-curve: it took a long time for AI to become better at a novel task than 10% of people, but it might go quickly from there to outperforming 90%, and then improve very slowly after that.

However, I fail to see why that must necessarily be true (or false), so if anyone has arguments for or against, they are more than welcome.
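To make the shape concrete, here is a minimal sketch of the S-curve idea in Python. The logistic form and the midpoint/steepness parameters are my own illustrative assumptions, not anything claimed in the post:

```python
import math

# Hypothetical S-curve (logistic) for the fraction of people an AI
# outperforms on novel problems as capability improves over time.
# t0 (midpoint) and k (steepness) are made-up illustrative parameters.
def fraction_outperformed(t: float, t0: float = 10.0, k: float = 0.8) -> float:
    """Logistic curve: slow start, rapid middle, slow saturation."""
    return 1.0 / (1.0 + math.exp(-k * (t - t0)))

for t in range(0, 21, 4):
    print(f"t={t:2d}: outperforms {fraction_outperformed(t):.0%} of people")
```

On this toy curve, crossing from the 10% level to the 90% level takes only a few time steps around the midpoint, while the tails move slowly, which is the pattern described above.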

Lastly, I would like to ask the author whether they can give an example of a problem such that, if solved by AI, they would be worried about "imminent" doom. "New and complex" programming problems are mentioned, so if any such example could be provided, it might contribute to the discussion.

Answer by MrThink · Mar 24, 2023

I found this article useful:

"Lessons learned from talking to >100 academics about AI safety" states that "Most people really dislike alarmist attitudes" and "Often people are much more concerned with intentional bad effects of AI" so
