2. I think non-x-risk-focused messages are a good idea because:

  • It is much easier to reach a wide audience this way.
  • It is clear that there are significant and important risks even if we completely exclude x-risk. We should have this discussion even in a world where for some reason we could be certain that humanity will survive for the next 100 years.
  • It widens the Overton window. X-risk is still mostly considered a fringe position among the general public, although the situation has improved somewhat.

3. There have been cases where it worked well. For example, the Letter of three hundred.

4. I don't know much about EA's concerns about Elon. Intuitively, he seems fine. But I think that, in general, people err on the side of too much distancing, which often hinders coordination.

5. I think more signatures cannot make things worse if the authors handle them properly. Rough sorting by credentials (as FLI does) may already be good enough, but it's possible and easy to be more aggressive here.


I agree that it's unlikely this letter will be net bad, and that it may well have a significant positive impact. However, I don't think people argued that it would be bad; they argued it could be better. It's clearly not possible to do something like this every month, so it's better to pay close attention to the details and think really carefully about content and timing.

I think it is only getting started. I expect there will likely be more attention in 6 months, and very likely in 1 year.

OpenAI has barely rolled out its first limited version of GPT-4 (only 2 weeks have passed!). It is growing very fast, but it has A LOT of room to grow. Also, text-to-video is not here in any significant sense, but it will be very soon.

When it was published, it felt like a pretty short timeline. But now we are in early 2023, and it feels like we are already at the late-2023 point of this scenario.

I wonder if the general public will soon freak out on a large scale (Covid-like). I won't be surprised if it happens in 2024, and only slightly surprised if it happens this year. If it does happen, I am also not sure whether it will be good or bad.

OpenAI just dropped ChatGPT plugins yesterday. It seems like an ideal platform for this? It would probably be even easier to implement than before, with better quality. But more importantly, ChatGPT plugins look likely to quickly shape up into the new app store, and it would be easier to get attention on this platform than through more traditional distribution channels. Quite speculative, I know, but it seems very possible.
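For context, shipping a tool as a ChatGPT plugin at launch was essentially a JSON manifest served at /.well-known/ai-plugin.json plus an OpenAPI spec for your API. Here is a minimal sketch; the field names follow OpenAI's launch docs as I understand them, and every concrete value is hypothetical:

```python
# Minimal sketch of a ChatGPT plugin manifest, written out as JSON from
# Python. Field names follow OpenAI's plugin docs at launch (to the best of
# my knowledge); all concrete values here are hypothetical placeholders.
import json

manifest = {
    "schema_version": "v1",
    "name_for_human": "Example Tool",        # shown to users
    "name_for_model": "example_tool",        # how the model refers to the tool
    "description_for_human": "Does a useful thing.",
    "description_for_model": "Call this API when the user asks for X.",
    "auth": {"type": "none"},
    "api": {"type": "openapi", "url": "https://example.com/openapi.yaml"},
    "logo_url": "https://example.com/logo.png",
    "contact_email": "dev@example.com",
    "legal_info_url": "https://example.com/legal",
}

# This file would be served at https://example.com/.well-known/ai-plugin.json
with open("ai-plugin.json", "w") as f:
    json.dump(manifest, f, indent=2)
```

The actual work is then just a normal web API described by the OpenAPI spec, which is why I expect implementation to be easier than building a standalone product.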

If somebody starts such a project, please contact me. I am an ex-Google SWE with decent knowledge of ML and experience running a software startup (as co-founder and CTO in the recent past).

I would also be interested to hear why it could be a bad idea.

Good point. It's a bit weird that performance on easy Codeforces questions is so bad (0/10) though. 

https://twitter.com/cHHillee/status/1635790330854526981

I think you misinterpreted hindsight neglect. It got to 100% accuracy, so it got better, not worse.

Also, a couple of images are not shown correctly; search for "<img" in the text.

Really helpful for learning new frameworks and stuff like that. I had a very good experience using it for Kaggle competitions (I am at a semi-intermediate level; it is probably much less useful at the expert level).

Also, I found it quite useful for research on obscure topics like "how to potentiate this little-known drug". Usually, such research involves reading through tons of forums, subreddits, etc., and the signal-to-noise ratio is quite low. GPT-4 is very useful for distilling the signal because it has basically already read all of it.

Btw, I tried to make it solve competitive programming problems. I think it's not a matter of prompt engineering: it is genuinely bad at them. The following pattern is common:

  • GPT-4 proposes some solutions, usually wrong at first glance.
  • I point out the mistakes.
  • GPT-4 says "yeah, you're right", and claims it is now fixed.
  • It goes on like this for ~4 iterations until I give up on the particular problem or, more interestingly, GPT-4 starts to claim that it's impossible to solve.

In such moments it really feels like a low-IQ (but very eloquent) human; it just cannot think abstractly.
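For concreteness, here is a minimal sketch of the loop described above (not an exact transcript of my sessions; it assumes the openai Python package as it existed in early 2023, and the file name and prompt are just placeholders):

```python
# Minimal sketch of the iterate-and-correct loop described above.
# Assumes the openai package (pre-1.0 API, early 2023) and OPENAI_API_KEY
# set in the environment; "problem.txt" is a placeholder file.
import openai

problem = open("problem.txt").read()  # placeholder: a Codeforces problem statement
messages = [{"role": "user",
             "content": "Solve this competitive programming problem:\n" + problem}]

for attempt in range(4):  # roughly 4 rounds before giving up
    reply = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    solution = reply["choices"][0]["message"]["content"]
    print(f"--- attempt {attempt + 1} ---\n{solution}")
    messages.append({"role": "assistant", "content": solution})

    feedback = input("Describe a mistake (empty line if correct): ")
    if not feedback:
        break
    messages.append({"role": "user", "content": feedback})
```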

Well, I do not have anything like this, but it is very clear that China is way above the GPT-3 level. Even the open-source community is significantly above it. Take a look at LLaMA/Alpaca: people run them on consumer PCs at around GPT-3.5 level, and the largest 65B model is even better (it cannot be run on a consumer PC, but it can be run on a small ~$10k server or cheaply in the cloud). It can also be fine-tuned in 5 hours on an RTX 4090 using LoRA: https://github.com/tloen/alpaca-lora
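To illustrate what that fine-tuning setup looks like, here is a minimal sketch in the style of alpaca-lora, assuming the Hugging Face transformers and peft libraries; the checkpoint name and hyperparameters are illustrative, and the actual alpaca-lora training script differs in detail:

```python
# Minimal LoRA setup sketch in the style of alpaca-lora (illustrative, not
# the repo's actual script). Assumes transformers + peft are installed.
from transformers import LlamaForCausalLM
from peft import LoraConfig, get_peft_model

# Illustrative checkpoint name; alpaca-lora targeted LLaMA-7B weights.
model = LlamaForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")

lora_config = LoraConfig(
    r=8,                                  # rank of the low-rank update matrices
    lora_alpha=16,                        # scaling factor for the updates
    target_modules=["q_proj", "v_proj"],  # attach adapters to attention projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a tiny fraction of the 7B weights trains
```

Because only the small adapter matrices are trained, the memory and compute budget fits on a single consumer GPU, which is what makes the 5-hour RTX 4090 figure plausible.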

Chinese AI researchers contribute significantly to AI progress, although, of course, they are behind the USA.

My best guess would be that China is at most 1 year away from GPT-4. Maybe less.

Btw, an example of a recent model: ChatGLM-6B.
