
Comments

mishka

Thanks, that's very useful.

If one decides to use galantamine, is it known whether one should take it right before bedtime, at any time during the preceding day, or in some other fashion?

mishka

I think it's a good idea to include links to the originals:

https://arxiv.org/abs/2408.08152 - "DeepSeek-Prover-V1.5: Harnessing Proof Assistant Feedback for Reinforcement Learning and Monte-Carlo Tree Search"

https://github.com/deepseek-ai/DeepSeek-Prover-V1.5

mishka

Scott Alexander wrote a very interesting post covering the details of the political fight around SB 1047 a few days ago: https://www.astralcodexten.com/p/sb-1047-our-side-of-the-story

I learned a lot that was new to me reading it (which is remarkable, given how much SB 1047-related material I had seen before).

mishka

the potential of focusing on chemotherapy treatment timing

More concretely (this is someone else's old idea), here is what I think has still not been done. Chemo kills dividing cells; this is why rapidly renewing tissues and cell populations are particularly vulnerable.

If one wants to spare one of those cell types (say, a particular population of immune cells), one should take the typical renewal period of that population and use it as the period between chemo sessions (a "resonance" of sorts between the treatment schedule and the renewal cycle of the selected cell type). One should then expect to spare most of that population, and might potentially be able to use higher doses for better effect if the spared population is the most critical one. This does require precision, unlike today's typical "relaxed logistics" approach, where shifting the schedule by a few days either way is considered nothing to worry about.
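A minimal toy simulation of this "resonance" effect (all numbers are hypothetical, and the model is deliberately crude: each cell is assumed to be vulnerable only during a short division window that recurs with period T):

```python
import numpy as np

rng = np.random.default_rng(0)

T = 14.0          # hypothetical renewal period of the cell population, days
window = 1.0      # hypothetical vulnerable window per cycle (cell is dividing)
n_cells = 100_000
n_pulses = 6

# Each cell divides around times phase + k*T, with a random phase in [0, T).
phases = rng.uniform(0.0, T, n_cells)

def surviving_fraction(pulse_interval):
    """Fraction of the population still alive after n_pulses chemo pulses."""
    alive = np.ones(n_cells, dtype=bool)
    for k in range(n_pulses):
        t = k * pulse_interval
        # A pulse kills a cell iff it lands inside that cell's division window.
        in_window = (t - phases) % T < window
        alive &= ~in_window
    return alive.mean()

print(surviving_fraction(T))        # interval matched to T: ~93% survive
print(surviving_fraction(T + 2.3))  # mismatched interval: ~57% survive
```

With the matched interval, every pulse hits the same ~7% slice of the population, so most of it survives all six pulses; with a mismatched interval, each pulse hits a fresh slice of the renewal cycle and a much larger share of the population is killed.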

I don't know if that ever progressed beyond the initial idea...

(That's just one example, of course; there are a lot of things that can be considered and, perhaps, tried.)

Answer by mishka

This depends on many things: one's skills, one's circumstances, and one's preferences and inclinations (the efficiency of one's contributions depends greatly on the latter).

I have stage 4 cancer, so statistically, my time may be more limited than most. I’m a PhD student in Computer Science with a strong background in math (Master's).

In your case, there are several strong arguments for focusing on research efforts that could improve your chances of curing it (or at least of keeping the situation stable for a long time), and a couple of (medium-strength?) arguments against this choice.

For:

  • If you succeed, you'll have more time to make an impact (so if your chance of success is not too small, this choice will contribute to maximizing your overall impact, statistically speaking).

  • Of course, any success here will also have a lot of publicly valuable impact (plenty of people are in a similar position health-wise, and they badly need progress to occur ASAP).

  • The rapid development of applied AI models (both general-purpose and biology-specific) creates new opportunities to datamine and juxtapose a variety of potentially relevant information and to uncover new connections that might lead to effective solutions. Our tools progress so fast that people are slow to adapt their thinking and methods, so new people with a fresh outlook have a reasonable shot (they should, of course, aim for collaborations). In this sense, your CS PhD studies and your strong math background are very helpful: many of the relevant models are dynamical systems; the timing of interventions is typically not managed correctly as far as I know (there are plenty of ways to be gentle to particularly vulnerable tissues by timing the chemo right, and thus to make it more effective, but as far as I know this is not part of the standard of care yet); and so on.

  • You are likely to be strongly motivated and able to maintain that motivation. At the same time, you'll know that it is the result that counts here, not the effort, so you will be likely to approach this in a smart way rather than through brute-force effort.

Possibly against:

(Of course, there are plenty of other interesting things one can do with this background (a CS PhD and strong math). For example, one might decide to set the health situation aside and dive into the technical aspects of AI development and AI existential safety, especially if one's estimate of AI timelines yields really short timelines.)

mishka

Thanks for the references.

Yes, the first two of those do mention co-occurring anxiety in their titles.

The third study suggests that it might work as an effective antidepressant as well. (I hope there will be further studies like that; yes, this might be a sufficient reason to try it for depression even if one does not have anxiety. It might work, but it's clearly not common knowledge yet.)

mishka

Your consideration seems to assume that the AI is an individual, not a phenomenon of "distributed intelligence":

The first argument is that AI thinks it may be in a testing simulation, and if it harms humans, it will be turned off.

etc. That is, indeed, the only case we are at least starting to understand well (unfortunately, our understanding of situations where AIs are not individuals seems to be extremely rudimentary).

If the AI is an individual, one can then consider a "singleton" case or a "multipolar" case.

In some sense, for a self-improving ecosystem of AIs, a complicated multipolar scenario seems more natural, as new AIs are getting created and tested quite often in realistic self-improvement scenarios. In any case, a "singleton" only looks "monolithic" from the outside; from the inside, it is still likely to be a "society of mind" of some sort.

If there are many such AI individuals with uncertain personal futures (individuals who can't predict their future trajectories or their future relative strength in society, and who care about their future and self-preservation), then those AI individuals might be interested in a "world order based on individual rights", and the rights of all individuals (including humans) might be covered by such a "world order".

This consideration is my main reason for guarded optimism, although there are many uncertainties.

In some sense, my main reason for guarded optimism is the hope that the AI ecosystem will manage to act rationally and avoid chaotic destructive developments. As you say:

It is not rational to destroy a potentially valuable thing.

And my main reason for pessimism is the fear that the future will resemble an uncontrolled, super-fast, chaotic, accelerating "natural evolution" (in this kind of scenario, AIs seem likely to destroy everything, including themselves; they have an existential safety problem of their own, as they can easily destroy the "fabric of reality" if they don't exercise collaboration and self-control).

mishka

One might consider that some people have strong preferences about the outcome of an election and some have weak preferences; that there is usually no way to express the strength of one's preferences in a vote; and that the probability that one actually goes ahead and votes in a race does correlate with the strength of one's preferences.

So, perhaps, this is indeed working as intended. People who have stronger preferences are more likely to vote, and so their preferences are more likely to be taken into account in a statistical sense.

It seems that the strength of one's preferences is (automatically, but imperfectly) taken into account via this statistical mechanism.
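As a toy illustration of that mechanism (all numbers hypothetical): once turnout is factored in, a minority with intense preferences can outweigh a lukewarm majority.

```python
# Toy model: turnout probability acts as an implicit intensity weight.
groups = [
    # (share of electorate, preferred option, turnout probability)
    (0.30, "X", 0.9),  # strong preferences -> high turnout
    (0.70, "Y", 0.3),  # weak preferences  -> low turnout
]

expected_votes = {"X": 0.0, "Y": 0.0}
for share, option, turnout in groups:
    expected_votes[option] += share * turnout  # expected votes per capita

print({k: round(v, 2) for k, v in expected_votes.items()})
# -> {'X': 0.27, 'Y': 0.21}: X wins despite having minority support
```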

mishka

Thanks for the great post!

Also it’s California, so there’s some chance this happens, seriously please don’t do it, nothing is so bad that you have to resort to a ballot proposition, choose life

Why are you saying this? In what sense "nothing is so bad"?

The reason people with libertarian sensibilities and distrust of the government's track record in general, and of its track record in tech regulation in particular, are making an exception in this case is AI's strong potential for future catastrophic and existential risks.

So why shouldn't people who generally dislike the mechanism and track record of California ballot propositions make an exception here as well?

The whole point of all this effort around SB 1047 is that "nothing is so bad" is an incorrect statement.

And especially given that you yourself correctly say:

Thus I reiterate the warning: SB 1047 was probably the most well-written, most well-considered and most light touch bill that we were ever going to get. Those who opposed it, and are now embracing the use-case regulatory path as an alternative thinking it will be better for industry and innovation, are going to regret that. If we don’t get back on the compute and frontier model based path, it’s going to get ugly.

There is still time to steer things back in a good direction. In theory, we might even be able to come back with a superior version of the model-based approach, if we all can work together to solve this problem before something far worse fills the void.

But we’ll need to work together, and we’ll need to move fast.

Sure, there is still a bit of time for a normal legislative effort (this time in close coordination with Newsom; otherwise he will just veto it again). But if you really think that, should the normal route fail, the ballot route would still be counterproductive, you need to make a much stronger case for that.

Especially given that the ballot measure would probably pass with a large margin and flying colors...

mishka

Silexan

For anxiety treatment only, if I understand it correctly.

There is no claim that it works as an antidepressant, as far as I know.
