Dacyn

Comments

Dacyn

I don't agree that I am making unwarranted assumptions; I think what you call "assumptions" are merely observations about the meanings of words. I agree that it is hard to program an AI to determine who the "he"s refer to, but I think that, as a matter of fact, the meanings of those words don't allow for any other possible interpretation. It's just hard to explain to an AI what the meanings of words are. Anyway, I'm not sure it is productive to argue this any further, as we seem to be repeating ourselves.

Dacyn

> No, because John could be speaking about himself administering the medication.

If it's about John administering the medication, then you'd have to say "... he refused to let him".

> It’s also possible to refuse to do something you’ve already acknowledged you should do, so the 3rd he could still be John regardless of who is being told what.

But the sentence did not claim that John merely acknowledged that he should administer the medication; it claimed that John was the originator of that statement. Is John supposed to be refusing his own requests?

Dacyn

> John told Mark that he should administer the medication immediately because he was in critical condition, but he refused.
>
> Wait, who is in critical condition? Which one refused? Who’s supposed to be administering the meds? And administer to whom? Impossible to answer without additional context.

I don't think the sentence is actually as ambiguous as you're saying. The first and third "he"s both have to refer to Mark, because you can only refuse to do something after being told you should do it. Only the second "he" could be either John or Mark.

Dacyn

> Early discussion of AI risk often focused on debating the viability of various elaborate safety schemes humanity might someday devise—designing AI systems to be more like “tools” than “agents,” for example, or as purely question-answering oracles locked within some kryptonite-style box. These debates feel a bit quaint now, as AI companies race to release agentic models they barely understand directly onto the internet.

Why do you call current AI models "agentic"? It seems to me they are more like tool AI or oracle AI...

Dacyn

I am still seeing "succomb".

Dacyn

In the long scale a trillion is 10^18, not 10^24.
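
A quick check of the arithmetic, assuming the standard long-scale convention in which the n-th -illion is 10^{6n}:

$$\text{million} = 10^{6}, \quad \text{billion} = 10^{12}, \quad \text{trillion} = 10^{18}, \quad \text{quadrillion} = 10^{24}.$$

In the short scale the n-th -illion is instead 10^{3n+3}, so a short-scale trillion is 10^{12}.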

Dacyn

I say "zero" when reciting phone numbers. Harder to miss that way.

Dacyn

I think you want to define to be true if is true when we restrict to some neighbourhood such that is nonempty. Otherwise your later example doesn't make sense.

Dacyn

I noticed all the political ones were phrased to support the left-wing position.

Dacyn

This doesn't completely explain the trick, though. In the step where you write f = (1-I)^{-1}·0, if you interpret I as an operator then you get f = 0 as the result. To get f = Ce^x you need to have f = (1-I)^{-1}C in that step instead. You can get this by replacing ∫f by If + C at the beginning.
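
For completeness, here is a sketch of the full derivation with that constant in place, assuming the equation being solved is f' = f (which the Ce^x answer suggests) and taking I to be the integrate-from-zero operator:

$$f = \int f = If + C \;\Longrightarrow\; (1-I)f = C \;\Longrightarrow\; f = (1-I)^{-1}C = (1 + I + I^2 + \cdots)\,C = C\left(1 + x + \tfrac{x^2}{2!} + \tfrac{x^3}{3!} + \cdots\right) = Ce^x.$$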
