Comments

Well, props for offering a fresh outside perspective- this site could certainly use more of that.  Unfortunately, I don't think you've made a very convincing argument. (Was that intentional, since you don't seem to believe ideological arguments can be convincing?)

We can never hope to glimpse pure empirical noumenon, but we certainly can build models that more or less accurately predict what we will experience in the future. We rely on those models to promote whatever we value, and it's important to try to improve how well they work. Colloquially, we call that empiricism.

Cults, ideologies and such are models that have evolved to self-propagate- to convince people to spread them.  Sometimes they do this by latching on to peoples' fears, as you've mentioned.  Sometimes, they do it by providing people with things they value, like community or a feeling of status.  Other times, they're horribly emotionally abusive, making people feel pain at the thought of questioning dogma, engineering harmful social collective action problems and turning people into fanatics.

Our reality is built from models, but not all models are ideologies.  Ideologies are parasitic- they optimize our models for propagation rather than accurate prediction.  That's a bad thing because the main things we want to propagate are ourselves and the people we care about, and we need to be able to make accurate predictions to do that.

Is this site dangerously ideological?  That's definitely a question that deserves more attention, but it's an empirical question, not one of whether the ideology is stronger than rivals.

Can people be convinced to change or abandon ideologies by empirical arguments? Absolutely. I was raised to be a fanatic Evangelical, and was convinced to abandon it mostly by reading about science and philosophy.  I've also held strong beliefs about AI risk that I've changed my mind about after reading empirical arguments.  My lived experience strongly suggests that beliefs can be changed by things other than lived experience.

Do you find any of this convincing? I'm guessing not. From the tone of your post, it looks like you're viewing this kind of exchange as a social competition where changing your mind means losing. But I'd suggest asking yourself whether that frame is really helpful- whether it actually promotes the things you value.  Our models of reality are deeply flawed, and some of those flaws will cause us and other people pain and heartbreak.  We need to be trying to minimize that- to build models that are more accurately predictive of experience- both for ourselves and in order to be good people.  But if communication outside of the confines of an ideology can only ever be a status game, how can we hope to do that?

ProjectLawful.com: Eliezer's latest story, past 1M words

There are a lot of interesting ideas in this RP thread.  Unfortunately, I've always found it a bit hard to enjoy roleplaying threads that I'm not participating in myself.  Approached as works of fiction rather than games, RP threads tend to have some very serious structural problems that can make them difficult to read.

Because players aren't sure where a story is going and can't edit previous sections, the stories tend to be plagued by pacing problems- scenes that could be a paragraph are dragged out over pages, important plot beats are glossed over, and so on. It's also very rare that players are able to pull off the kind of coordination necessary for satisfying narrative buildup and payoff, and the focus on player character interaction tends to leave a lot of necessary story scaffolding like scene setting and NPC interaction badly lacking.

If your goal in writing this was in part to promote or socially explore these utopian ideas rather than just to enjoy a forum game, it may be worth considering ways to mitigate these issues- to modify the Glowfic formula to better accommodate an audience.

The roleplaying threads over at RPG.net may provide some inspiration.  A skilled DM running the game can help mitigate pacing issues and ensure that interactions have emotional stakes.  Of course, forum games run with TTRPG rules can also get badly bogged down in mechanics.  Maybe some sort of minimalist diceless system would be worth exploring?

It could also help to treat the RP thread more like an actual author collaboration- planning out plot beats and character development in an OOC thread, being willing to delete and edit large sections that don't work in hindsight, and so on.  Maybe even going through a short fantasy writing course like the one from Brandon Sanderson with the other RP participants, so that everyone is on the same page when it comes to plot structure.

Of course, that would all be a much larger commitment, and probably less fun for the players- but you do have a large potential audience who are willing to trade a ton of attention for good long-form fiction, so figuring out ways of modifying this hobby to better make that trade might be valuable.

What DALL-E 2 can and cannot do

Thanks!

I'm not sure the repetitions helped much with accuracy for this prompt- it's still sort of randomizing traits between the two subjects.  Though with a prompt this complex, the token limit may be an issue- it might be interesting to test at some point whether very simple prompts get more accurate with repetitions.

That said, the second set is pretty awesome- asking for a scene may have helped encourage some more interesting compositions.  One benefit of repetition may just be that you're more likely to include phrases that more accurately describe what you're looking for.

What DALL-E 2 can and cannot do

When they released the first Dall-E, didn't OpenAI mention that prompts which repeated the same description several times with slight re-phrasing produced improved results?

I wonder how a prompt like:

"A post-singularity tribesman with a pet steampunk panther robot. Illustration by James Gurney."

-would compare with something like:

"A post-singularity tribesman with a pet steampunk panther robot. Illustration by James Gurney.  A painting of an ornate robotic feline made of brass and a man wearing futuristic tribal clothing.  A steampunk scene by James Gurney featuring a robot shaped like a panther and a high-tech shaman."

Convince me that humanity *isn’t* doomed by AGI

I think this argument can and should be expanded on.  Historically, very smart people making confident predictions about the medium-term future of civilization have had a pretty abysmal track record.  Can we pin down exactly why- what specific kind of error futurists have been falling prey to- and then see if that applies here?

Take, for example, traditional Marxist thought.  In the early twentieth century, an intellectual Marxist's prediction of a stateless post-property utopia may have seemed to arise from a wonderfully complex yet self-consistent model which yielded many true predictions and which was refined by decades of rigorous debate and dense works of theory.  Most intelligent non-Marxists offering counter-arguments would only have been able to produce some well-known point, maybe one for which the standard rebuttals made up a foundational part of the Marxist model.

So, what went wrong?  I doubt there was some fundamental self-contradiction that the Marxists missed in all of their theory-crafting.  If you could go back in time and give them a complete history of 20th-century economics labelled as speculative fiction, I don't think many of their models would update much- so the problem wasn't just a failure to imagine the true outcome.  I think it may have been, in part, a miscalibration of deductive reasoning.

Reading the old Sherlock Holmes stories recently, I found it kind of funny how irrational the hero could be.  He'd make six observations, deduce W, X, and Y, and then rather than saying "I give W, X, and Y each a 70% chance of being true, and if they're all true then I give Z an 80% chance, therefore the probability of Z is about 27%", he'd just go "W, X, and Y; therefore Z!".  This seems like a pretty common error.

Inductive reasoning can't take you very far into the future with something as fast-moving as civilization- the error bars blow up past a year or two.  But deductive reasoning promises much more.  So long as you carefully ensure that each step is high-probability, the thinking seems to go, a chain of necessary implications can take you as far into the future as you want.  Except that, like Holmes, people forget to multiply the probabilities- and a model complex enough to pierce that inductive barrier is likely to have a lot of probabilities.

The AI doom prediction comes from a complex model- one founded on a lot of arguments that seem very likely to be true, but which if false would sink the entire thing.  That motivations converge on power-seeking; that super-intelligence could rapidly render human civilization helpless; that a real understanding of the algorithm that spawns AGI wouldn't offer any clear solutions; that we're actually close to AGI; etc.  If we take our uncertainty about each one of the supporting arguments- small as they may be- seriously, and multiply them together, what does the final uncertainty really look like?
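
To make that concrete with purely illustrative numbers: suppose the case rests on five independent supporting arguments, and we're generous enough to give each one a 90% chance of being right.  The conjunction then only gets 0.9^5 ≈ 0.59, and at 80% per premise it drops to 0.8^5 ≈ 0.33- a level of confidence that looks very different from the individual premises, even before asking whether they're really as likely as they feel.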

Playing with DALL·E 2

Thanks for posting these.

It's odd that mentioning Dall-E by name in the prompt would be a content policy violation.  Do you know if they've mentioned why?

If you're still taking suggestions:
A beautiful, detailed illustration by James Gurney of a steampunk cheetah robot stalking through the ruins of a post-singularity city.  A painting of an ornate brass automaton shaped like a big cat.  A 4K image of a robotic cheetah in a strange, high-tech landscape.

I think OpenAI mentioned that including the same information several times with different phrasing helped for more complicated prompts in the first DALL-E, so I'm curious to see if that would help here- assuming that wouldn't be over the length limit.

What's the easiest way to currently generate images with machine learning?

For text-to-image synthesis, the Disco Diffusion notebook is pretty popular right now.  Like other notebooks that use CLIP, it produces results that aren't very coherent, but which are interesting in the sense that they will reliably combine all of the elements described in a prompt in surprising and semi-sensible ways, even when those elements never occurred together in the models' training sets.
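
To make the mechanism concrete, here's a minimal sketch (not the notebooks' actual code) of the CLIP scoring signal they build on, assuming OpenAI's CLIP package is installed; the candidate filenames are hypothetical stand-ins for a generator's output.  The guided notebooks essentially optimize the image against a score like this to push it toward the prompt.

```python
# Minimal sketch: score candidate images against a text prompt with CLIP.
# Assumes OpenAI's CLIP package (pip install git+https://github.com/openai/CLIP.git);
# the candidate image files are hypothetical stand-ins for generator output.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

prompt = "a steampunk cheetah robot stalking through the ruins of a post-singularity city"
candidates = ["candidate_1.png", "candidate_2.png"]  # hypothetical generator outputs

with torch.no_grad():
    text_features = model.encode_text(clip.tokenize([prompt]).to(device))
    for path in candidates:
        image = preprocess(Image.open(path)).unsqueeze(0).to(device)
        image_features = model.encode_image(image)
        # Higher cosine similarity means CLIP judges the image a better match for the
        # prompt; CLIP-guided notebooks steer generation using a score like this.
        score = torch.cosine_similarity(image_features, text_features).item()
        print(path, round(score, 3))
```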

The Glide notebook from OpenAI is also worth looking at.  It produces results that are much more coherent but also much less interesting than the CLIP notebooks. Currently, only the smallest version of the model is publicly available, so the results are unfortunately less impressive than those in the paper.

Also of note are the Chinese and Russian efforts to replicate DALL-E.  Like Glide, the results from those are coherent but not very interesting.  They can produce some very believable results for certain prompts, but struggle to generalize much outside of their training sets.

DALL-E itself still isn't available to the public, though I'm personally still holding out hope that OpenAI will offer a paid API at some point.

Postmortem on DIY Recombinant Covid Vaccine

Has your experience with this project given you any insights into bioterrorism risk?

Suppose that, rather than synthesizing a vaccine, you'd wanted to synthesize a new pandemic.  Would that have been remotely possible?  Do you think the current safeguards will be enough to prevent that sort of thing as the technology develops over the next decade or so?

Thoughts on Moral Philosophy

Do you think it's plausible that the whole deontology/consequentialism/virtue ethics confusion might arise from our idea of morality actually being a conflation of several different things that serve separate purposes?

Like, say there's a social technology that evolved to solve intractable coordination problems by getting people to rationally pre-commit to acting against their individual interests in the future, and additionally a lot of people have started to extend our instinctive compassion and tribal loyalties to the entirety of humanity, and also people have a lot of ideas about which sorts of behaviors take us closer to some sort of Pareto frontier- and maybe additionally there's some sort of acausal bargain that a lot of different terminal values converge toward or something.

If you tried to maximize just one of those, you'd obviously run into conflicts with the others- and then if you used the same word to describe all of them, that might look like a paradox.  How can something be clearly good and not good at the same time, you might wonder, not realizing that you've used the word to mean different things each time.

If I'm right about that, it could mean that when encountering the question of "what is most moral" in situations where different moral systems provide different answers, the best answer might not be so much "I can't tell, since each option would commit me to things I think are immoral," but rather "'Morality' isn't a very well defined word; could you be more specific?"

Going Out With Dignity

When people talk about "human values" in this context, I think they usually mean something like "goals that are Pareto optimal for the values of individual humans"- and the things you listed definitely aren't that.
