Comments

Speaking for myself, I would have confidently predicted the opposite result for the largest models.

My understanding is that LLMs work by building something like a world-model during training by compressing the data into abstractions. I would have expected something like "Tom Cruise's mother is Mary Lee Pfeiffer" to be represented in the model as an abstract association between the names that could then be "decompressed" back into language in a lot of different ways.

The fact that it's apparently represented in the model only as that exact phrase (or maybe as some kind of very alien abstraction?) leads me to think that LLMs are either a bit more like "stochastic parrots" than I would have expected, or that their world-models are a lot more alien.

I'm not sure I agree. Consider the reaction of the audience to this talk- uncomfortable laughter, but also a pretty enthusiastic standing ovation. I'd guess the latter happened because the audience saw Eliezer as genuine- he displayed raw emotion, spoke bluntly, and at no point came across as someone making a play for status. He fit neatly into the "scientist warning of disaster" archetype, which isn't a figure that's expected to be particularly skilled at public communication.

A more experienced public speaker would certainly be able to present the ideas in a more high-status way- and I'm sure there would be a lot of value in that.  But the goal of increasing the status of the ideas might to some degree trade off against communicating their seriousness- a person skillfully arguing a high-status idea has a potential ulterior motive that someone like Eliezer clearly doesn't.  To get the same sort of reception from an audience that Eliezer got in this talk, a more experienced speaker might need to intentionally present themselves as lacking polish, which wouldn't necessarily be the best way to use their talents.

Better, maybe, to platform both talented PR people and unpolished experts.

Note that, while the linked post on the TEDx YouTube channel was taken down, there's a mirror available at: https://files.catbox.moe/qdwops.mp4.

Here are a few images generated by DALL-E 2 using the tokens:
https://i.imgur.com/kObEkKj.png

Nothing too interesting, unfortunately.

I assume you're not a fan of the LRNZ deep learning-focused ETF, since it includes both NVDA and a lot of datacenters (not to mention the terrible 2022 performance). Are there any other ETFs focused on this sort of thing that look better?

There are a lot of interesting ideas in this RP thread.  Unfortunately, I've always found it a bit hard to enjoy roleplaying threads that I'm not participating in myself.  Approached as works of fiction rather than games, RP threads tend to have some very serious structural problems that can make them difficult to read.

Because players aren't sure where a story is going and can't edit previous sections, the stories tend to be plagued by pacing problems- scenes that could be a paragraph are dragged out over pages, important plot beats are glossed over, and so on. It's also very rare that players are able to pull off the kind of coordination necessary for satisfying narrative buildup and payoff, and the focus on player character interaction tends to leave a lot of necessary story scaffolding like scene setting and NPC interaction badly lacking.

If your goal in writing this was in part to promote or socially explore these utopian ideas rather than just to enjoy a forum game, it may be worth considering ways to mitigate these issues- to modify the Glowfic formula to better accommodate an audience.

The roleplaying threads over at RPG.net may provide some inspiration.  A skilled DM running the game can help mitigate pacing issues and ensure that interactions have emotional stakes.  Of course, forum games run with TTRPG rules can also get badly bogged down in mechanics.  Maybe some sort of minimalist diceless system would be worth exploring?

It could also help to treat the RP thread more like an actual author collaboration- planning out plot beats and character development in an OOC thread, being willing to delete and edit large sections that don't work in hindsight, and so on.  It might even be worth going through a short fantasy writing course like the one from Brandon Sanderson with the other RP participants, so that everyone is on the same page when it comes to plot structure.

Of course, that would all be a much larger commitment, and probably less fun for the players- but you do have a large potential audience who are willing to trade a ton of attention for good long-form fiction, so figuring out ways of modifying this hobby to better make that trade might be valuable.

Thanks!

I'm not sure the repetitions helped much with accuracy for this prompt- it's still sort of randomizing traits between the two subjects.  Though with a prompt this complex, the token limit may be an issue- it might be interesting to test at some point whether very simple prompts get more accurate with repetitions.

That said, the second set is pretty awesome- asking for a scene may have helped encourage some more interesting compositions.  One benefit of repetition may just be that you're more likely to include phrases that more accurately describe what you're looking for.

When OpenAI released the first DALL-E, didn't they mention that prompts which repeated the same description several times with slight rephrasing produced improved results?

I wonder how a prompt like:

"A post-singularity tribesman with a pet steampunk panther robot. Illustration by James Gurney."

-would compare with something like:

"A post-singularity tribesman with a pet steampunk panther robot. Illustration by James Gurney.  A painting of an ornate robotic feline made of brass and a man wearing futuristic tribal clothing.  A steampunk scene by James Gurney featuring a robot shaped like a panther and a high-tech shaman."

I think this argument can and should be expanded on.  Historically, very smart people making confident predictions about the medium-term future of civilization have had a pretty abysmal track record.  Can we pin down exactly why- what specific kind of error futurists have been falling prey to- and then see if that applies here?

Take, for example, traditional Marxist thought.  In the early twentieth century, an intellectual Marxist's prediction of a stateless post-property utopia may have seemed to arise from a wonderfully complex yet self-consistent model which yielded many true predictions and which was refined by decades of rigorous debate and dense works of theory.  Most intelligent non-Marxists offering counter-arguments would only have been able to produce some well-known point, maybe one for which the standard rebuttals made up a foundational part of the Marxist model.

So, what went wrong?  I doubt there was some fundamental self-contradiction that the Marxists missed in all of their theory-crafting.  If you could go back in time and give them a complete history of 20th-century economics labelled as speculative fiction, I don't think their models would have updated much- so the problem wasn't just a failure to imagine the true outcome.  I think it may have been, in part, a miscalibration of deductive reasoning.

Reading the old Sherlock Holmes stories recently, I found it kind of funny how irrational the hero could be.  He'd make six observations, deduce W, X, and Y, and then rather than saying "I give W, X, and Y each a 70% chance of being true, and if they're all true then I give Z an 80% chance, therefore the probability of Z is about 27%", he'd just go "W, X, and Y; therefore Z!".  This seems like a pretty common error.

Inductive reasoning can't take you very far into the future with something changing as fast as civilization- the error bars blow up after a year or two.  But deductive reasoning promises much more.  So long as you carefully ensure that each step is high-probability, the thinking seems to go, a chain of necessary implications can take you as far into the future as you want.  Except that, like Holmes, people forget to multiply the probabilities- and a model complex enough to pierce that inductive barrier is likely to have a lot of probabilities to multiply.

The AI doom prediction comes from a complex model- one founded on a lot of arguments that seem very likely to be true, but any one of which, if false, would sink the entire thing.  That motivations converge on power-seeking; that superintelligence could rapidly render human civilization helpless; that a real understanding of the algorithm that spawns AGI wouldn't offer any clear solutions; that we're actually close to AGI; etc.  If we take our uncertainty about each of the supporting arguments seriously- small as each uncertainty may be- and multiply them together, what does the final uncertainty really look like?
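To make that multiplication concrete, here's a toy version in Python- the premise probabilities below are invented purely for illustration, not estimates I'd actually defend:

```python
# A minimal sketch of the "forgot to multiply" point, using made-up numbers.
# Each premise is a load-bearing step in the doom argument; the probabilities
# are placeholders for illustration, not anyone's actual estimates.
premises = {
    "motivations converge on power-seeking": 0.90,
    "superintelligence rapidly renders civilization helpless": 0.85,
    "understanding the AGI algorithm offers no clear solutions": 0.80,
    "we're actually close to AGI": 0.75,
}

joint = 1.0
for claim, p in premises.items():
    joint *= p
    print(f"{claim}: {p:.2f} (running joint probability: {joint:.2f})")

# Four premises that each look ~75-90% likely leave the conclusion at
# roughly 46% -- far from the near-certainty the chain seems to promise,
# and that's before counting any premises the argument leaves implicit.
```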
