skybrian

Comments

Exposure or Contacts?

One aspect that might be worth thinking about is the speed of spread. Seeing someone once a week delays any transmission between the two of you by 3 1/2 days on average, while seeing them once a month delays it by about 15 days. It also seems like they are more likely to find out they have it before they spread it to you?
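
To spell out the arithmetic (a minimal sketch, assuming contacts happen at a fixed interval and infection starts at a uniformly random moment between them): the expected wait until the next meeting is half the interval.

```python
import random

def average_wait(interval_days, trials=100_000):
    """Average wait from a uniformly random infection time to the next scheduled contact."""
    total = 0.0
    for _ in range(trials):
        time_since_last_meeting = random.uniform(0, interval_days)
        total += interval_days - time_since_last_meeting  # wait until the next meeting
    return total / trials

print(average_wait(7))   # ~3.5 days for weekly contact
print(average_wait(30))  # ~15 days for monthly contact
```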

GPT-3, belief, and consistency

Yes, sometimes we don't notice. We miss a lot. But there are also ordinary clarifications like "did I hear you correctly?" and "what did you mean by that?" Noticing that you didn't understand something isn't rare. If we didn't notice when something seemed absurd, jokes wouldn't work.

GPT-3, belief, and consistency

It's not quite the same, because if you're confused and you notice you're confused, you can ask. "Is this in American or European date format?" For GPT-3 to do the same, you might need to give it some specific examples of resolving ambiguity this way, and it might only do so when imitating certain styles.
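
For example (a sketch only; the prompt wording is made up and `complete` is a hypothetical stand-in for whatever completion call you're using):

```python
# A hypothetical few-shot prompt that shows GPT-3 examples of resolving ambiguity by asking.
prompt = """\
Q: The meeting is on 04/05. When is it?
A: That date is ambiguous. Is it in American (April 5) or European (4 May) format?

Q: Can you bring the bat?
A: Which one do you mean, the baseball bat or the animal?

Q: The report is due 03/09. When is it due?
A:"""

def complete(text):
    """Stand-in for an actual GPT-3 completion call; not a real API."""
    raise NotImplementedError("plug in your completion API of choice")

# print(complete(prompt))  # ideally it continues with another clarifying question
```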

It doesn't seem as good as a more built-in preference for noticing and wanting to resolve inconsistency? Choosing based on context is built in using attention, and choosing randomly is built in as part of the text generator.

It's also worth noticing that the GPT-3 world is the corpus, and a web corpus is an inconsistent place.

10/50/90% chance of GPT-N Transformative AI?

Having demoable technology is very different from having reliable technology. Take the history of driverless cars. Five teams completed the second DARPA Grand Challenge in 2005. Google started development secretly in 2009 and announced the project in October 2010. Waymo started testing without a safety driver on public roads in 2017. So we've had driverless cars for a decade, sort of, but we are much more cautious about allowing them on public roads.

Unreliable technologies can be widely used. GPT-3 is a successor to autocomplete, which everyone already has on their cell phones. Search engines don't guarantee results and neither does Google Translate, but they are widely used. Machine learning also works well for optimization, where safety is guaranteed by the design but you want to improve efficiency.

I think when people talk about a "revolution" it goes beyond the unreliable use cases, though?

Where do people discuss doing things with GPT-3?

In that case, I'm looking for people sharing interesting prompts to use on AI Dungeon.

Where do people discuss doing things with GPT-3?

Where is this? Is it open to people who don't have access to the API?

GPT-3 Gems

I'm suggesting something a little more complex than copying. GPT-3 can give you a random remix of several different clichés found on the Internet, and the patchwork isn't necessarily at the surface level where it would come up in a search. Readers can be inspired by evocative nonsense. A new form of randomness can be part of a creative process. It's a generate-and-test algorithm where the user does some of the testing. Or, alternatively, an exploration of Internet-adjacent story-space.
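
A minimal sketch of that generate-and-test loop with the user doing the testing (`generate` is a hypothetical stand-in for a GPT-3 call, not a real API):

```python
def generate(prompt):
    """Hypothetical stand-in for a single GPT-3 completion."""
    raise NotImplementedError

def generate_and_test(prompt, tries=5):
    """Produce several random remixes; the user keeps whichever ones spark something."""
    kept = []
    for _ in range(tries):
        text = generate(prompt)
        if input(f"Keep this one?\n{text}\n[y/N] ").lower().startswith("y"):
            kept.append(text)
    return kept
```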

It's an unreliable narrator and I suspect it will be an unreliable search engine, but yeah, that too.

Replicating the replication crisis with GPT-3?

I was making a different point, which is that if you use "best of" ranking, you're testing a different algorithm than if you aren't. Similarly for other settings. It shouldn't be surprising that we see different results if we're doing different things.

It seems like a better UI would help us casual explorers share results in a way that makes it easier to try the same settings again; you could hit a "share" button to create a linkable output page with all relevant settings.

It could also save the alternate responses that either the user or the "best-of" ranking chose not to use. Generate-and-test is a legitimate approach, if you do it consistently, but saving the alternate takes would give us a better idea of how good the generator alone is.
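
Something along these lines is what I mean (a sketch, with hypothetical `generate` and `score` functions rather than any particular API): keep the top-ranked completion, but record the settings and the alternates too, so we can later see what the generator produced before ranking.

```python
def best_of_with_alternates(prompt, n, settings, generate, score):
    """Generate n completions, rank them, and keep the losers for later inspection."""
    completions = [generate(prompt, **settings) for _ in range(n)]
    ranked = sorted(completions, key=score, reverse=True)
    return {
        "prompt": prompt,
        "settings": settings,
        "best": ranked[0],
        "alternates": ranked[1:],  # the takes that ranking would normally throw away
    }
```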

Replicating the replication crisis with GPT-3?

I don't see documentation for the GPT-3 API on OpenAI's website. Is it available to the public? Are they doing their own ranking or are you doing it yourself? What do you know about the ranking algorithm?

It seems like another source of confusion might be people investigating the performance of different algorithms and calling them all GPT-3?

Replicating the replication crisis with GPT-3?

How do you do ranking? I'm guessing this is because you have access to the actual API, while most of us don't?

On the bright side, this could be a fun project where many of us amateurs learn how to do science better, but the knowledge of how to do that isn't well distributed yet.
