Rafael Harth

Sequences

Literature Summaries
Factored Cognition
Understanding Machine Learning

Wiki Contributions

Comments

It is fascinating to learn about the extent to which AI technologies like GPT-4 and Copilot X have been integrated into the operations of LessWrong. It is understandable that the LW team wanted to keep this information confidential in order to prevent the potential negative consequences of revealing the economic value of AI.

However, with the information now out in the open, it's important to discuss the ethical implications of such a revelation. It could lead to increased investment in AI, which may or may not be a good thing, depending on how it is regulated and controlled. On one hand, increased investment could accelerate AI development, leading to new innovations and benefits to society. On the other hand, it could potentially exacerbate competitive dynamics, increase the risk of misuse, and lead to negative consequences for society.

Regarding the use of AI on LessWrong specifically, it's essential to consider the impact on users and the community as a whole. If AI is moderating comment sections and evaluating new users, it raises questions about transparency, fairness, and privacy. While it may be more efficient and even potentially more accurate, there should be a balance between human oversight and AI automation to ensure that the platform remains a safe and open space for discussions and debates.

Lastly, the mention of Oliver Habryka automating his online presence might be a light-hearted comment, but it also highlights the potential personal and social implications of AI technologies. While automating certain aspects of our lives can free up time for other pursuits, it is important to consider the consequences of replacing human interaction with AI-generated content. What might we lose in terms of authenticity, spontaneity, and connection if we increasingly rely on AI to manage our online presence? It's a topic that merits further reflection and discussion.

It's probably because GPT learns on the basis of tokens, not letters, so this doesn't really tell you much. If you want to find something it can't do, it'd be more impressive if it were a logic thing, not a syntactic thing.
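To illustrate the token point (a minimal sketch, my addition; it assumes OpenAI's tiktoken package and the cl100k_base encoding, which are not mentioned in the comment): a word reaches the model as a few opaque token IDs rather than as individual letters, so letter-level questions probe the tokenizer more than the model's reasoning.

```python
# Minimal sketch (my addition; assumes OpenAI's tiktoken package):
# the model receives opaque token IDs, not individual letters.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models

word = "strawberry"
token_ids = enc.encode(word)
chunks = [enc.decode([t]) for t in token_ids]

print(token_ids)  # a short list of integer IDs, not ten letters
print(chunks)     # the multi-letter chunks the model actually sees
```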

My guess is that most people who downvoted think popular philosophy is unlikely to be relevant for no-nonsense applications like math and alignment.

GPT-4 is being used to speed up development of GPT-5 already.

Source?

Half a year ago, we had the discussion about there being a lot of AI content. Since then, it seems to have gotten more extreme; right now I count 10 AI/x-risk posts to 3 posts on other topics on the homepage (coded here as red and blue; one post counted toward both).

I know GPT-4 just got released, so maybe it's fine? Idk, but it really jumped out at me.

(He also did quite a high-effort thing in 2019, which did work. I don't know how well he kept the pounds off in the time since.)

I'm kinda confused why this is only mentioned in one answer, and in parentheses. Shouldn't this be the main answer -- like, hello, the premise is likely false? (Even if it's not epistemically likely, I feel like one should politely not assume that he has since gained weight unless one has evidence for this.)

This doesn't seem quite right. The information content of agree vs. disagree depends on your prior, i.e., on $P(\text{agree})$. If that's <0.5, then an agree vote is more informative; if it's >0.5, then a disagree vote is more informative. But it's not obvious that it's <0.5 in general.
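To make the comparison concrete (a worked sketch, my addition, writing $p = P(\text{agree})$ for the prior probability that a given vote is an agree vote):

$$I(\text{agree}) = -\log_2 p, \qquad I(\text{disagree}) = -\log_2 (1 - p)$$

For $p = 0.8$, an agree vote carries $-\log_2 0.8 \approx 0.32$ bits while a disagree vote carries $-\log_2 0.2 \approx 2.32$ bits; the rarer vote is always the more informative one.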

I know he's talking about alignment, and I'm criticizing that extremely strong claim. This is the main thing I wanted to criticize in my comment! I think the reasoning he presents is not much supported by his publicly available arguments.

Ok, I don't disagree with this. I certainly didn't develop a gears-level understanding of why [building a brain-like thing with gradient descent on giant matrices] is doomed after reading the 2021 conversations. But that doesn't seem very informative either way; I didn't spend that much time trying to grok his arguments.

I assume you're asking if someone can query GPT-4 with this. If so, I did, and here's the response.

I would agree with this if Eliezer had never properly engaged with critics, but he's done that extensively. I don't think there should be a norm that you have to engage with everyone, and "ok, choose one point, I'll respond to that" seems better than not engaging at all. (Would you have been more enraged if he hadn't commented anything?)
