

I agree it is not clear whether open sourcing the models is net positive or negative. Here are the main arguments for and against that I could think of:

Pros of open sourcing models

- Gives AI alignment researchers access to smarter models to experiment on

- Decreases income for leading AI labs such as OpenAI and Google, since people can use open source models instead.

Cons of open sourcing models

- Capability researchers can run better experiments on how to improve capabilities.

- The open source community could develop code to train and run inference on models faster, indirectly enhancing capability development.

- Better open source models could lead to more AI startups succeeding, which might lead to more AI research funding. This seems like a stretch to me.

- If Meta shares any meaningful improvements in how to train models, that of course directly contributes to other labs' capabilities, but LLaMA doesn't seem that innovative to me. I'm happy to be corrected if I'm wrong on this point.

I think one reason for the low number of upvotes is that it was not clear to me why this mattered until the second time I briefly checked the article.

I did not know what DoD was short for (U.S. Department of Defense), or why I should care about what they were funding.

Because overall I do think it is interesting information.

Interesting read.

While I have also experienced that GPT-4 can't solve the more challenging problems I throw at it, I also recognize that most humans probably wouldn't be able to solve many of those problems either within a reasonable amount of time.

One possibility is that the ability to solve novel problems follows an S curve: it took a long time for AI to become better at novel tasks than 10% of people, it might go quickly from there to outperforming 90%, and then improve very slowly after that.
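That S-curve intuition can be made concrete with a logistic function. A minimal sketch, where the time axis, midpoint, and steepness are purely illustrative assumptions, not estimates of anything real:

```python
import math

def fraction_outperformed(t, midpoint=0.0, steepness=1.0):
    """Logistic curve: hypothetical fraction of people an AI outperforms
    on novel tasks at time t. All parameters are illustrative."""
    return 1 / (1 + math.exp(-steepness * (t - midpoint)))

# Slow start, fast middle, slow saturation:
for t in (-4, -2, 0, 2, 4):
    print(t, round(fraction_outperformed(t), 2))
# -4 0.02   (long time below 10% of people)
# -2 0.12
#  0 0.5    (rapid transition through the middle)
#  2 0.88
#  4 0.98   (slow climb past 90%)
```

The point of the sketch is just that the same curve is flat at both ends and steep in the middle, which matches the "slow, then fast, then slow again" pattern described above.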

However, I fail to see why that must necessarily be true (or false), so if anyone has arguments for or against, they are more than welcome.

Lastly, I would like to ask the author whether they can give an example of a problem such that, if solved by AI, they would be worried about "imminent" doom. "New and complex" programming problems are mentioned, so if any such example could be provided it might contribute to the discussion.

I found this article useful:

Lessons learned from talking to >100 academics about AI safety states that "Most people really dislike alarmist attitudes" and "Often people are much more concerned with intentional bad effects of AI" so

Oh, I didn't actually notice that the banana overlaps with the book at the start. I tried changing that, but GPT-4 still makes them collide:

(5,5) Initial position of the claw.
(4,5) Moving left to get closer to the banana.
(4,4) Moving down to align with the banana's Y coordinate.
(4,3) Moving down to ensure a good grip on the banana.
Close grip # Gripping the banana with at least 3 cm of overlapping area on the Y axis.
(5,3) Moving right to avoid any collision with the banana's edge.
(6,3) Moving right to clear the edge of the banana.
(7,3) Moving right to position the banana above the book.
(8,3) Moving right to align the banana with the left edge of the book.
(9,3) Moving right to center the banana over the book.
(10,3) Moving right to ensure the banana is on the book.
Open grip # Releasing the banana onto the book.

At this point, the robotic arm has successfully placed the banana on top of the book.

Yes, all other attempts with ChatGPT were similar.

GPT-4 got it almost correct on the first attempt:

(5,5) Initial position.
(4,5) Moving left to get closer to the banana.
(4,4) Moving down to align with the banana's top edge.
(4,3) Moving down to be within the required overlapping area of the banana.
Close grip. Gripping the banana.

(4,4) Lifting the banana upwards.
(5,4) Moving right to clear the initial banana position.
(6,4) Continuing to move right towards the book.
(7,4) Moving further right to avoid collision with the book's edges.
(8,4) Positioning the banana above the book.
(9,4) Moving right to ensure proper alignment with the book.
(10,4) Positioning the banana above the center of the book.
(10,3) Lowering the banana onto the book.
Open grip. Releasing the banana.

(10,4) Moving the claw up after releasing the banana.

But it didn't have 3 cm of overlap with the banana, and the claw actually also collides with the book (which is a trickier problem).

I pointed out the first error:

Does the grip have 3 cm overlapping areas with the banana when the grip is closed in your suggested solution?

And it corrected itself about the banana but still collided with the book.
