Exactly. As for the cost issue, the code can be deployed as:

- Twitter bots (registered as such) so the deployer controls the cost

- A webpage that charges a small payment (via crypto or credit card) to run 100 queries. Such websites can actually be generated by GPT-4, so it's an easy lift. Useful for people who truly want to learn, or who want good arguments for online argumentation

- A webpage with captchas and reasonable rate limits to keep costs small
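The rate-limiting idea in the last bullet can be sketched as a per-user token bucket. This is only an illustrative sketch; the class name and the 100-queries-per-day default are choices made up here, not part of any proposal above:

```python
import time

class TokenBucket:
    """Per-user token bucket: holds up to `capacity` query tokens,
    refilled continuously at `refill_per_sec` tokens per second."""

    def __init__(self, capacity=100, refill_per_sec=100 / 86400):
        self.capacity = capacity
        self.tokens = float(capacity)   # start full
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self):
        """Return True and consume one token if the query may run."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A deployer would keep one bucket per captcha-verified session; the bucket caps worst-case API spend per user regardless of how fast they click.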


In general yes, here no. My impression from reading LW is that many people suffer from a great deal of analysis paralysis and are taking too few chances, especially given that the default isn't looking great.

There is such a thing as doing a dumb thing because it feels like doing something (e.g. let's make AI Open!) but this ain't it. The consequences of this project are not going to be huge (talking to people) but you might get a nice little gradient read as to how helpful it is and iterate from there.

It should be possible to ask content owners for permission and get pretty far with that.

AFAIK, what that does is fine-tuning, with their own language models, which aren't at parity with ChatGPT. Using a better language model will yield better answers but, MUCH MORE IMPORTANTLY, what I'm suggesting is NOT fine-tuning.

What I'm suggesting gives you an answer that's closer to a summary of relevant bits of LW, Arbital, etc. The failure mode is much more likely to be that the answer is irrelevant or off the mark than that it's at odds with prevalent viewpoints on this platform.

Think more interpolating over an FAQ, and less reproducing someone's cognition.
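A toy sketch of that retrieval-then-summarize flow, to make the contrast with fine-tuning concrete. Bag-of-words overlap stands in for real embeddings, and every name here (`tokens`, `retrieve`, the sample corpus) is invented for illustration:

```python
import re
from collections import Counter

def tokens(text):
    """Crude stand-in for an embedding: a bag of lowercase words."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def overlap(a, b):
    """Multiset intersection size, a crude similarity score."""
    return sum((a & b).values())

def retrieve(question, corpus, k=2):
    """Rank corpus passages by lexical overlap with the question."""
    q = tokens(question)
    return sorted(corpus, key=lambda p: -overlap(q, tokens(p)))[:k]

corpus = [
    "Instrumental convergence: most goals imply acquiring resources.",
    "Tokyo property prices rose last year.",
    "Orthogonality thesis: intelligence and goals vary independently.",
]
top = retrieve("Why would an AI acquire resources regardless of its goals?",
               corpus)
# `top` would then be pasted into the model's prompt as context, with
# instructions to answer only from those excerpts.
```

The model's weights are never touched; it only interpolates over whatever passages the retriever hands it, which is why the characteristic failure is "irrelevant passage" rather than "confidently wrong doctrine".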

> The US has around one traffic fatality per 100 million miles driven; if a human driver makes 100 decisions per mile

A human driver does not make 100 "life or death decisions" per mile. They make many more decisions, most of which can easily be corrected, if wrong, by another decision.

The statistic is misleading, though, in that it includes drivers who text, drunk drivers, and tired drivers. The performance of a well-rested human driver who's paying attention to the road is much, much higher than that. And that's really the bar that matters for self-driving cars: you don't want a car that merely does better than the average driver, who (hey, you never know) could be drunk.
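The correction point can be made with back-of-envelope arithmetic. The per-decision numbers below are purely hypothetical, chosen only to show that a driver can err often per decision and still match the quoted fatality rate, provided most errors get corrected by later decisions:

```python
decisions_per_mile = 100   # the parent comment's figure
p_wrong = 1e-3             # hypothetical: driver errs on 1 in 1,000 decisions
p_uncorrected = 1e-7       # hypothetical: an error survives all later corrections

fatalities_per_mile = decisions_per_mile * p_wrong * p_uncorrected
miles_per_fatality = 1 / fatalities_per_mile  # roughly 1e8 miles per fatality
```

Under these made-up numbers the "life or death" event is not the decision itself but the rare chain of an error plus a failure to correct it, which is why dividing fatalities by raw decision count says little about required per-decision reliability.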

Fixing hardware failures in software is literally how quantum computing is supposed to work, and it's clearly not a silly idea.
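For intuition, the classical ancestor of quantum error correction, the repetition code, already shows software redundancy beating hardware noise. Quantum codes are more subtle (no-cloning forbids literal copying), but the payoff structure is the same. `majority_error` is a name made up for this sketch:

```python
from math import comb

def majority_error(p, n):
    """Probability that a majority vote over n noisy copies is wrong,
    when each copy independently flips with probability p."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

print(majority_error(0.1, 1))  # 0.1: a single unprotected bit
print(majority_error(0.1, 3))  # about 0.028: three copies already help
print(majority_error(0.1, 9))  # under 1e-3: redundancy compounds
```

Each extra layer of redundancy drives the logical error rate down exponentially, even though every individual component stays just as unreliable.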

Generally speaking, there's a lot of appeal to intuition here, but I don't find it convincing. This isn't good for Tokyo property prices? Well maybe, but how good a heuristic is that when Mechagodzilla is on its way regardless?

In addition:

  1. There aren't that many actors in the lead.
  2. Simple but key insights in AI (e.g. doing backprop, using sensible weight initialisation) have been missed for decades.

If the right tail for the time to AGI by a single group can be long and there aren't that many groups, convincing one group to slow down / pay more attention to safety can have big effects.

How big of an effect? Years don't seem off the table. Eliezer suggests 6 months dismissively. But add a couple years here and a couple years there, and pretty soon you're talking about the possibility of real progress. It's of little use if no research towards alignment is attempted in that period, of course, but it's not nothing.

There are IMO in-distribution ways of successfully destroying much of the computing overhang. It's not easy by any means, but on a scale where "the Mossad pulling off Stuxnet" is 0 and "build self-replicating nanobots" is 10, I think it's closer to a 1.5.

Indeed, there is nothing epistemically irrational about having a hyperbolic time preference. However, it means that a classical decision algorithm is not conducive to achieving long-term goals.

One way around this problem is to use TDT, another way is to modify your preferences to be geometric.

A geometric time preference is a bit like a moral preference... it's a para-preference. Not something you want in the first place, but something you benefit from wanting when interacting with other agents (including your future self).
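The contrast between the two preferences can be checked numerically: hyperbolic discounting produces preference reversals as rewards recede into the future, while geometric (exponential) discounting never does. The amounts, delays, and discount parameters below are arbitrary illustration:

```python
def hyperbolic(amount, delay, k=1.0):
    """Hyperbolic discounting: value falls as 1 / (1 + k * delay)."""
    return amount / (1 + k * delay)

def geometric(amount, delay, d=0.7):
    """Geometric (exponential) discounting: value falls as d ** delay."""
    return amount * d ** delay

# Smaller-sooner reward: 55 at t=1.  Larger-later reward: 100 at t=3.
for shift in (0, 10):  # evaluate now, then with both pushed 10 steps out
    h = hyperbolic(55, 1 + shift) > hyperbolic(100, 3 + shift)
    g = geometric(55, 1 + shift) > geometric(100, 3 + shift)
    print(f"shift={shift}: hyperbolic prefers sooner: {h}, "
          f"geometric prefers sooner: {g}")
```

The hyperbolic agent prefers the sooner reward up close but flips to the later one from a distance, so its future self will predictably betray its present plans; the geometric ordering is invariant under time shifts, which is what makes a geometric para-preference safe to commit to.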

The second dot point is part of the problem description. You're saying it's irrelevant, but you can't just parachute in a payoff matrix in which causality goes backward in time.

Pick any example you like; as long as it's physically possible, you'll either have the payoff tied to your decision algorithm (Newcomb's) or to your preference set (Solomon's).
