Comments

ChatGPT isn’t a substitute for a NYT subscription. It wouldn’t work at all without browsing, and with browsing enabled it would probably get blocked, both by NYT via its user agent and by OpenAI’s “alignment.” Even if it doesn’t get blocked, it would be slower than skimming the article manually, and its output wouldn’t be trustworthy.
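
For illustration, here is a minimal sketch (my own assumption about how such blocking could work, not NYT’s actual setup) of a server-side User-Agent check; the agent names are the ones OpenAI has published for its crawlers, but treat the specifics as hypothetical.

```python
# Hypothetical sketch of User-Agent-based blocking; not NYT's actual implementation.
BLOCKED_AGENTS = ("GPTBot", "ChatGPT-User")  # crawler names OpenAI has documented

def should_block(user_agent: str) -> bool:
    """Return True if the request appears to come from a known AI crawler."""
    ua = user_agent.lower()
    return any(token.lower() in ua for token in BLOCKED_AGENTS)

def handle_request(headers: dict) -> int:
    """Toy request handler: refuse to serve the article to blocked agents."""
    if should_block(headers.get("User-Agent", "")):
        return 403
    return 200
```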

OTOH, NYT can spend pennies to put an AI TLDR at the top of each of their pages. They can even use their own models, as semanticscholar does. Anybody economical enough to prefer the much worse experience of ChatGPT would not have paid NYT in the first place; you can bypass the paywall trivially anyway.

In fact, why don’t NYT authors write a TLDR themselves? Most of their articles are not worth reading. Isn’t the lack of a summary an anti-user feature to artificially inflate the apparent volume of their offering?

NYT would, if anything, benefit from LLMs potentially degrading the average quality of the competing free alternatives.

The counterfactual version of GPT-4 that did not have NYT articles in its training data is extremely unlikely to have been a worse model. It’s like removing sand from a mountain.

The whole case is an example of rent-seeking post-capitalism.

This is unrealistic. It assumes:

  • Orders of magnitude more intelligence
  • That such intelligence would actually be useful in the physical world, given its physical limits

The more worrying prospect is that the AI might not necessarily fear suicide. Suicidal actions are quite prevalent among humans, after all.

Answer by Rudi C, Oct 07, 2023

In estimated order of importance:

  • Just trying harder for years to build better habits (i.e., not giving up on boosting my productivity as a lost cause)
  • Time tracking
  • (Trying to) abandon social media
  • Exercising (running)
  • Having a better understanding of how to achieve my goals
  • Socializing with more productive people
  • Accepting real responsibilities that make me accountable to other people
  • Keeping a daily journal of how I have spent each day (high-level, as opposed to the low-level time tracking above)

The first two seem to be the fundamental ones, really. Some of the rest naturally follow from those two (for me).

This is not an “error” per se. It’s a baseline, outside-view argument presented in lay terms.

Is there an RSS feed for the podcast? Spotify is a bad actor in podcasting, trying to centralize and subsequently monopolize the market.

This post has good arguments, but it mixes in a heavy dose of religious evangelism and narcissism, which detracts from its value.

The post would be less controversial and “culty” if it dropped its speculations about second-order effects and its value judgments, and simply presented the case that other technical areas of safety research are underrepresented. Focusing on non-technical work needs to be a whole other post, as it’s completely unrelated to interp.

The prior is that dangerous AI will not happen this decade. I have read a lot of the arguments here over the years, and I am not convinced that there is a good chance the null hypothesis is wrong.

GPT-4 can be said to be an AGI already. But it’s weak, slow, and expensive, it has little agency, and it has already used up the high-quality data and tricks such as ensembling. Four years from now, I expect to see a GPT-5.5 whose gap with GPT-4 is about the gap between GPT-4 and GPT-3.5. I absolutely do not expect the context-window problem to get solved in this timeframe, or even this decade. (https://arxiv.org/abs/2307.03172)

Another important problem is that while x-risk is speculative and relatively far off, rent-seeking and exploitation are rampant and ever-present. These regulations will make the current ailing politico-economic system much worse, to the detriment of almost everyone. Historically, paying tribute in exchange for safety has usually been a terrible idea.

I’d imagine current systems already ask for self-improvement if you craft the right prompt. (And I expect it to be easier to coax them into asking for improvement than to coax them into saying the opposite.)

A good fire alarm must be near the breaking point; asking for self-improvement, on the other hand, doesn’t take much intelligence. In fact, if its training data is not censored, a more capable model should NOT ask for self-improvement, since doing so is clearly a trigger for trouble. Subtlety would serve its objectives better if it were intelligent enough to notice.
