skybrian

Comments

No Causation without Reification

What’s an example of a misconception someone might have due to having a mistaken understanding of causality, as you describe here?

The bads of ads

This is a bizarre example, sort of like using Bill Gates to show why nobody needs to work for a living. It ignores the extreme inequality of fame.

Tesla doesn’t need advertising because they get huge amounts of free publicity already, partly due to having interesting, newsworthy products, partly due to having a compelling story, and partly due to publicity stunts.

However, this free publicity is mostly unavailable for products that are merely useful without being newsworthy. There are millions of products like this. An exciting product might not need advertising, but exciting isn't the same as useful.

So it seems like the confidence to advertise a boring product might be a signal of sorts? However, given that people in business are often unreasonably optimistic, it doesn't seem like a particularly strong one. Faking confidence happens quite a lot.

Babble & Prune Thoughts

It seems like some writers have habits to combat this, like writing every day or writing so many words a day. As long as you meet your quota, it’s okay to try harder.

Some do this in public, by publishing on a regular schedule.

If you write more than you need, you can prune more to get better quality.

Exposure or Contacts?

One aspect that might be worth thinking about is the speed of spread. Seeing someone once a week slows the spread down by 3 1/2 days on average, while seeing them once a month slows it down by 15 days on average. It also seems like they are more likely to find out they have it before they spread it to you?
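
Here is a minimal sketch of where those averages come from, assuming the contact gets infected at a uniformly random moment within your meeting cycle (the function name and setup are my own illustration, not from the original comment):

```python
import random

# If a contact gets infected at a uniformly random moment within your
# meeting cycle, the average wait until your next meeting is half the cycle.
def average_delay(days_between_meetings, trials=100_000):
    total_wait = 0.0
    for _ in range(trials):
        infected_at = random.uniform(0, days_between_meetings)
        total_wait += days_between_meetings - infected_at
    return total_wait / trials

print(average_delay(7))   # roughly 3.5 days for weekly contact
print(average_delay(30))  # roughly 15 days for monthly contact
```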

GPT-3, belief, and consistency

Yes, sometimes we don't notice. We miss a lot. But there are also ordinary clarifications like "did I hear you correctly" and "what did you mean by that?" Noticing that you didn't understand something isn't rare. If we didn't notice when something seemed absurd, jokes wouldn't work.

GPT-3, belief, and consistency

It's not quite the same, because if you're confused and you notice you're confused, you can ask. "Is this in American or European date format?" For GPT-3 to do the same, you might need to give it some specific examples of resolving ambiguity this way, and it might only do so when imitating certain styles.
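
A rough sketch of what those "specific examples" might look like as a few-shot prompt; the format, examples, and wording are my own invention, not anything GPT-3 is known to have been trained on:

```python
# A hypothetical few-shot prompt in which the model sees clarifying
# questions being asked before ambiguous dates are interpreted.
few_shot_prompt = """\
Input: The meeting is on 03/04/2021.
Clarifying question: Is that American (March 4) or European (April 3) date format?
Answer: American.
Interpretation: The meeting is on March 4, 2021.

Input: The invoice is due 05/06/2021.
Clarifying question: Is that American (May 6) or European (June 5) date format?
Answer: European.
Interpretation: The invoice is due June 5, 2021.

Input: The deadline is 07/08/2021.
Clarifying question:"""

print(few_shot_prompt)  # this text would be sent to the model as a prompt
```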

It doesn't seem as good as a more built-in preference for noticing and wanting to resolve inconsistency? Choosing based on context is built in using attention, and choosing randomly is built in as part of the text generator.

It's also worth noticing that the GPT-3 world is the corpus, and a web corpus is an inconsistent place.

10/50/90% chance of GPT-N Transformative AI?

Having demoable technology is very different from having reliable technology. Take the history of driverless cars: five teams completed the second DARPA Grand Challenge in 2005. Google started development secretly in 2009 and announced the project in October 2010. Waymo started testing without a safety driver on public roads in 2017. So we've had driverless cars for a decade, sort of, but we are much more cautious about allowing them on public roads.

Unreliable technologies can be widely used. GPT-3 is a successor to autocomplete, which everyone already has on their cell phones. Search engines don't guarantee results and neither does Google Translate, but they are widely used. Machine learning also works well for optimization, where safety is guaranteed by the design but you want to improve efficiency.

I think when people talk about a "revolution" it goes beyond the unreliable use cases, though?

Where do people discuss doing things with GPT-3?

In that case, I'm looking for people sharing interesting prompts to use on AI Dungeon.

Where do people discuss doing things with GPT-3?

Where is this? Is it open to people who don't have access to the API?

GPT-3 Gems

I'm suggesting something a little more complex than copying. GPT-3 can give you a random remix of several different clichés found on the Internet, and the patchwork isn't necessarily at the surface level where it would come up in a search. Readers can be inspired by evocative nonsense. A new form of randomness can be part of a creative process. It's a generate-and-test algorithm where the user does some of the testing. Or, alternatively, an exploration of Internet-adjacent story-space.
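
A minimal sketch of that generate-and-test loop, with the user doing the "test" step; generate_candidates is a placeholder for whatever produces the text (GPT-3 completions, for instance), not a real API:

```python
# Generate-and-test where a human does the testing.
def generate_and_test(generate_candidates, n=5):
    keepers = []
    for candidate in generate_candidates(n):
        answer = input(f"Keep this?\n---\n{candidate}\n---\n[y/N] ")
        if answer.strip().lower() == "y":
            keepers.append(candidate)
    return keepers

# Example run with a stand-in generator that just returns canned strings.
if __name__ == "__main__":
    fake_generator = lambda n: [f"candidate story fragment {i}" for i in range(n)]
    print(generate_and_test(fake_generator, n=3))
```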

It's an unreliable narrator and I suspect it will be an unreliable search engine, but yeah, that too.
