

I'm thinking a good techno remix, right?

I mean, the Spokesperson is being dumb and the Scientist is confused. Most AI researchers aren't even being Scientists; they have different theoretical models than EY does. But some of them don't publicly and immediately discount the Spokesperson's false-empiricism argument, much as the Scientist tries not to. I think that latter pattern is what has annoyed EY and what he writes against here.

However, a large number of current AI experts have recently been boldly claiming that LLMs will never be sufficient even for AGI, let alone ASI. So maybe it's also aimed at them a bit.

I think the simplest distinction is that monogamy doesn't entertain the possibility of a monogamous sexual/romantic partner ethically having other sexual/romantic partners at the same time.

If it's not monogamy, it can be something else, but it doesn't have to be polyamory (swingers exist, and in practice the overlap seems small). Ethical non-monogamy is a superset of most definitions of polyamory, but not all: there are polyamorous people who "cheat" (break relationship agreements), and that doesn't stop them from being considered polyamorous, just as monogamous people who cheat don't become polyamorous (although I'd argue they become non-monogamous for the duration).

It's probably more information to learn that someone is monogamous than to learn that they are polyamorous; learning that they are ethically non-monogamous is somewhere in the middle.

I've also found that dance weekends have a strange ability to increase my skill and my intuition/understanding of dance more than lessons do. I think a big part of learning dance is learning by doing. For me at least, a big part is training my proprioception to understand more about the world than it did before. Learning both leading and following also helps tremendously, through a process something like "my mirror neurons learn to understand my partner's experience by my being in their shoes in my own experience."

The most hilarious thing I witness is the different language everyone comes up with to describe the interaction of tone and proprioception. A bit more than half of the instructors I've listened to just call it Energy, and talk about directing it from certain places to other places. Some people call it leading from X, or follower voice, or a number of other terms. Very few people have a mechanistic explanation of which muscle groups engage to communicate a lead into a turn, or a change in angular momentum by a follow, and ultimately it probably wouldn't help most people anyway, because there appears to be an unconscious layer of learning that we all do between intentions and muscle activations.

tl;dr: I find that after thinking about wanting to do a particular thing and then trying it for a while with several different people, as both lead and follow, I slowly (or sometimes suddenly; it was fun re-learning how to dance Lindy after the pandemic by following one dance) find that it becomes both easier to achieve and easier to understand/feel through proprioception. It feels anti-rationalist as a process, but performing the process is a pretty rational thing to do.

Contains/element-of are the complementary formal verbs from set theory, but I've definitely seen contains/is-a used as equivalent in practice ("cats contains Garfield" because Garfield is a cat).

Similarly, in programming, "cat is Garfield's type" makes sense although it's verbose, or "cat is implemented by Garfield" for the traits folks, which is far more natural.

So where linguistically necessary, humans have had no trouble complementing is-a in natural language. I think it's a matter of where emphasis is desired: usually the subject (Garfield) is where the emphasis is, and usually the element, rather than the class, is the subject. Formally we often want the class/set/type to be the subject, since it's the thing we are emphasizing.
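The three relations above can be sketched in a few lines of Python (the `Cat` class, `garfield`, and the `cats` set are my own illustrative names, not from any real codebase):

```python
# Sketch of the contains / element-of / is-a relations discussed above.

class Cat:
    """Hypothetical type standing in for the class 'cat'."""
    def __init__(self, name: str):
        self.name = name

garfield = Cat("Garfield")
cats = {garfield}  # the set of all cats we know about

# Element-of: the instance is the subject ("Garfield is in cats").
print(garfield in cats)           # True

# Contains: the set is the subject ("cats contains Garfield").
print(cats.__contains__(garfield))  # True -- same relation, class as subject

# Is-a: the programming analogue of class membership ("Garfield is a cat").
print(isinstance(garfield, Cat))  # True
```

Note that `garfield in cats` and `cats.__contains__(garfield)` are literally the same operation; Python's syntax just lets you pick which side reads as the subject, which mirrors the emphasis point above.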

What happens if the exam is given either on Saturday at midnight minus epsilon or on Sunday at 00:00? It seems surprising in general, and also surprising in different ways across reasoners of different abilities and precisions, given the choice of epsilon.

EDIT: I think it's also just as surprising if given at midnight minus epsilon on any day before Sunday, and therefore surprising at any time. If days are discrete and there's no time during the day for consideration, then it falls back on the original paradox, although that raises the question of when the logical inference takes place. I think this could be extended to an N-discrete-days paradox for any non-oracle agent that has to spend some amount of time during the day reasoning.

Another dumb but plausible way that an AGI gets access to advanced chemicals, biotech, and machinery: someone asks "how do I make a lot of street drug X" and it snowballs from there.

It's okay because mathematical realism can keep modeling them long after we're gone.

We also routinely create real-life physical models who can become people, en masse, and most of those (~93%) who became people have died so far, many by killing.

I'm all for solving the dying part comprehensively, but a lot of book/movie/story characters are sort of immortalized. We even literally say that about them, and it's possible the popular ones are actually better off.

Some direct (I think) evidence that alignment is harder than capabilities: OpenAI basically released GPT-2 immediately, with basic warnings that it might produce biased, wrong, and offensive answers. It did, but they were relatively mild; GPT-2 mostly just did what it was prompted to do, if it could manage it, or failed obviously. GPT-3 came with more caveats, OpenAI didn't release the model, and it has poured significant effort into improving GPT-3's iterations over the last ~2 years. GPT-4 wasn't released for months after pre-training, OpenAI won't even say how big it is, Bing's Sydney (an early form of GPT-4) was incredibly misaligned, showing that significantly more alignment work was necessary compared to early GPT-3, and even the RLHF'd/fine-tuned GPT-4 is still pretty much as vulnerable to DAN and similar prompt engineering.
