Comments

Nnotm

In principle, I prefer sentient AI over non-sentient bugs. But the concern is that if non-sentient superintelligent AI is developed, it's an attractor state that is hard or impossible to get out of. Bugs certainly aren't bound to evolve into sentient species, but at least there's a chance.

Nnotm

Bugs could potentially result in a new sentient species many millions of years down the line. With super-AI that happens to be non-sentient, there is no such hope.

Nnotm

Thank you for this! I had listened to the LessWrong audio of the last one just before seeing your comment about making your own version, and this time I waited before listening, hoping you would post one.

Nnotm

Missed opportunity to replace "if the box contains a diamond" with the more thematically appropriate "if the chest contains a treasure", though.

Nnotm

FWIW, the AI audio seems not to take that into account.

Nnotm

Thanks, I've found this pretty insightful. In particular, I hadn't considered that even fully understanding static GPT doesn't necessarily bring you close to understanding dynamic GPT - this makes me update towards mechinterp being slightly less promising than I was thinking.

Quick note:
> a page-state can be entirely specified by 9628 digits or a 31 kB file.
I think that's a 31 kb (kilobit) file, but a 4 kB (kilobyte) file? (9628 decimal digits carry about 9628 × log₂(10) ≈ 32,000 bits, which is roughly 4,000 bytes.)
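The unit check above can be sketched as a quick back-of-envelope calculation (the 9628-digit figure comes from the post being quoted; the ≈3.32 bits-per-digit factor is just log₂ 10):

```python
import math

# Information content of 9628 decimal digits:
# each decimal digit carries log2(10) ≈ 3.32 bits.
digits = 9628
bits = digits * math.log2(10)

print(round(bits))      # ≈ 31984 bits, i.e. about 31 kb (kilobits)
print(round(bits / 8))  # ≈ 3998 bytes, i.e. about 4 kB (kilobytes)
```

So the "31" figure only works if the unit is kilobits; in kilobytes it comes out to about 4.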

Nnotm

I think an important difference between humans and these Go AIs is memory: If we find a strategy that reliably beats human experts, they will either remember losing to it or hear about it and it won't work the next time someone tries it. If we find a strategy that reliably beats an AI, that will keep happening until it's retrained in some way.

Nnotm

Are you familiar with Aubrey de Grey's thinking on this?

To summarize, from memory, cancers can be broadly divided into two classes:

  • about 85% of cancers rely on lengthening telomeres via telomerase
  • the other 15% of cancers rely on some alternative lengthening of telomeres mechanism ("ALT")

The first, big class could be solved if we can prevent cancers from using telomerase. In his 2007 book "Ending Aging", de Grey and his co-author Michael Rae wrote about "Whole-body Interdiction of Lengthening of Telomeres" (WILT), which proposed using gene therapy to remove the telomerase genes from all somatic cells. Stem cell therapy would then supply fresh cells with full-length telomeres for the tissues where the body normally makes legitimate use of telomerase.

But research has moved on since then, and this might not be necessary. One promising approach is Maia Biotech's THIO, a small-molecule drug that is incorporated into the telomeres of cancer cells, compromises the telomeres' structure, and results in rapid cell death. They are currently preparing several phase 2 clinical trials.

For the other 15% of cancers, the ALT mechanism is, as far as I know, not as well understood, and I haven't heard of a promising general approach to curing it. But it seems plausible that it could end up being a similar story.

Nnotm

Thanks, I will read that! Though just after you commented I found this in my history, which is the post I meant: https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality-is-the-tiger-and-agents-are-its-teeth

Nnotm

I think there was a post/short story on LessWrong a few months ago about a future language model becoming an ASI because someone asked it to pretend it was an ASI agent and it correctly predicted the next tokens, or something like that. Does anyone know what that post was?
