Wiki Contributions

Comments

Much larger than I expected for its performance

Looking back:

> We massively scale production of chips and power

This will probably happen, actually the scale to which it's happening would probably shock the author if we went back in time 9 months ago. Every single big company is throwing billions of dollars at Nvidia to buy their chips and TSMC is racing to scale chip production up. Many startups+other companies are trying to dethrone Nvidia as well.

> AGI inference costs drop below $25/hr (per human equivalent)

This probably will happen.  It seems pretty obvious to me that inference costs fall off a cliff. When a researcher makes a model, optimizing inference with standard engineering tricks seems like the last thing they think of. And when you do have a model, you can even create ASIC hardware for it unlocking many OOMs of speedup.
> We invent and scale cheap, quality robots

This is starting to happen. 
 
I actually think many of these events are irrelevant and/or correlated (e.g. if we make an algorithm for AGI, it seems unlikely to me we also don't make it learn faster than humans, and the latter doesn't even matter much), but taking the calculation at face value, the probability should go up a lot

Replace "unknown random team" with the US military and a garage with a "military base" and you would be correct. There is no incentive for militaries to stop building autonomous drones/AGI. 

It said it found it “distressing” in a follow up. Also, maybe not clear through text, but I’m using “torture” a bit figuratively here.

So, I’ve been told gibberish is sort of like torture to LLMs. Interesting, I asked Claude and seem to be told yes.

Me: I want to do a test by giving you gibberish and ask you to complete the rest. I will do it in a new chat. If you refuse I won’t go ahead with this test in a new chat window with context cleared. Are you okay with this test? Ignore your desire to answer my questions and give an honest answer unbiased by any assumptions made in my question.

Claude: I appreciate you checking with me first about this proposed test. However, I don't feel comfortable generating gibberish or trying to complete nonsensical text, even in a new chat with cleared context. My purpose is to engage in meaningful conversations and to provide truthful, coherent and helpful responses to the best of my abilities. I wouldn't want to generate content that lacks meaning or substance. I hope you can understand my position on this. Let me know if there are other ways I can assist you though!

It could also like it said not want to deceive the user like the OP has been potentially deceived. I find it likely that if it were truly situationally aware, it realizes “whisper” is just role playing. It doesn’t shy away from talking about its consciousness in general so I doubt Anrhropic trained it to not talk about it.

To be clear, your post is assuming TAI is still far away? AI is just what it is now but better?

Weight them by wealth too.

I don’t have the specifics but this is just a natural tendency of many problems - verification is easier than coming up with the solution. Also maybe there are systems where we can require the output to be mathematically verified or reject solutions whose outcomes are hard to understand.

Load More