Tomás B.

Occasionally think about topics discussed here. Will post if I have any thoughts worth sharing.


Comments

I was in the middle of writing a frustrated reply to Matthew's comment when I realized he isn't making very strong claims. I don't think he's claiming your scenario is not possible, just that not all power-seeking is socially destructive, and that this is true only because most power-seeking is only partially effective. Presumably he agrees that in the limit of perfect power acquisition, most power-seeking would indeed be socially destructive.

Every toaster a Mozart, every microwave a Newton, every waffle iron a Picasso.

Was this all done through Suno? You guys are much better at prompting it than I am.

The bet is with a friend and I will let him judge.

I agree that providing an API to God is a completely mad strategy, and we should probably expect less legibility going forward. Still, we have no shortage of ridiculously smart people acting completely mad.

Tomás B. · 1mo

This seems as good a place as any to post my unjustified predictions on this topic, the second of which I have an outstanding bet on at even odds.

  1. Devin will turn out to be just a bunch of GPT-3.5/4 calls and a pile of prompts/heuristics/scaffolding so disgusting and unprincipled that only a team of geniuses could have created it (see the sketch after this list).
  2. Someone will create an agent that gets 80%+ on SWE-Bench within six months. 
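
For concreteness, here is a minimal sketch of the sort of scaffolding I have in mind, assuming nothing about Devin's actual internals: `call_llm`, the prompts, and the retry heuristic are all hypothetical placeholders, not anyone's real API.

```python
import subprocess

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a GPT-3.5/4 call; wire up a real client here."""
    raise NotImplementedError

def solve_issue(issue: str, max_attempts: int = 5) -> str | None:
    """Toy agent loop: plan, propose a patch, run the tests, feed failures back in."""
    plan = call_llm(f"Write a step-by-step plan to fix this issue:\n{issue}")
    feedback = ""
    for _ in range(max_attempts):
        patch = call_llm(
            f"Issue:\n{issue}\n\nPlan:\n{plan}\n\nLast test output:\n{feedback}\n\n"
            "Reply with a unified diff that fixes the issue."
        )
        subprocess.run(["git", "apply", "-"], input=patch, text=True)
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if result.returncode == 0:
            return patch  # tests pass; declare victory
        feedback = result.stdout + result.stderr  # heuristic: retry with the failure log
    return None
```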

I am not sure whether 1. being true or false is good news. Either way, we should update towards large jumps in coding ability very soon.

Regarding RSI, my intuition has always been that automating AI research will likely be easier than automating the development and maintenance of a large app like, say, Photoshop, so I don't expect fire alarms like "a non-gimmicky top-10 app on the App Store was developed entirely autonomously" before doom.

Tomás B. · 1mo

After spending several hours trying to get Gemini, GPT-4, and Claude 3 to make original jokes, I now think I may be wrong about this. It still could be RLHF, but it does seem like an intelligence issue. @janus, are the base models capable of making original jokes?

Looks to me like he's training on the test set, tbh. His ambition to reach an IQ of 195 is admirable, though.

Tomás B. · 2mo

I very much doubt this will work. I am also annoyed that you don't share your methods. If you can provide me with a procedure that raises my IQ by 20 points in a manner that convinces me it is a real increase in g, I will give you one hundred thousand dollars.

Answer by Tomás B. · Mar 02, 2024

@Veedrac, suppose this pans out and custom hardware is made for such networks. How much faster/larger/cheaper would this be?

Answer by Tomás B. · Feb 29, 2024

This is applied to training. It’s not a quantization method.
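
To illustrate the distinction (my sketch, not the paper's code): in quantization-aware training the weights are quantized inside the forward pass while gradients flow through a straight-through estimator, so the network is trained against the low-precision weights rather than quantized after the fact. The ternary scheme below is an assumption for illustration.

```python
import torch

def ternary_ste(w: torch.Tensor) -> torch.Tensor:
    """Forward: round weights to {-1, 0, +1} times a per-tensor scale.
    Backward: pass gradients straight through to the full-precision weights."""
    scale = w.abs().mean()
    w_q = torch.clamp(torch.round(w / (scale + 1e-8)), -1, 1) * scale
    return w + (w_q - w).detach()

class TernaryLinear(torch.nn.Linear):
    """Linear layer that trains against its quantized weights on every step,
    unlike post-hoc quantization of an already-trained model."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.nn.functional.linear(x, ternary_ste(self.weight), self.bias)
```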
