notfnofn

Comments

At my local Barnes & Noble, I cannot access slatestarcodex.com or putanumonit.com. I have never had any issue accessing other websites there (not that I've tried any genuinely sketchy ones). The wifi there is named Bartleby, likely related to Bartleby.com, whereas many other Barnes & Noble locations have wifi named something like "BNWifi". I have not yet tried to access these websites at other Barnes & Noble locations.

The hot hand fallacy: seeing data that is typical for independent coin flips as evidence for correlation between adjacent flips.

The hot hand fallacy fallacy (Miller & Sanjurjo, 2018): not correcting for the fact that, among random length-k (k > 2) sequences of independent tosses of a fair coin with at least one heads before toss k, the expected proportion (heads after heads) / (tosses after heads) is less than 1/2.

The hot hand fallacy fallacy fallacy: misinterpreting the above observation as a claim that, under some weird conditioning, the probability of heads given that you have just seen heads is less than 1/2 for independent tosses of a fair coin.
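To make the middle claim concrete, here is a small enumeration (my own illustration, not from the paper; the function name is mine) that computes the expected within-sequence proportion of heads-after-heads for a fair coin, conditioned on at least one heads before the final toss:

```python
from itertools import product

def mean_prop_heads_after_heads(k: int) -> float:
    """Average, over all length-k sequences of fair coin flips that have at
    least one heads before the final toss, of the within-sequence proportion
    (heads after heads) / (tosses after heads)."""
    props = []
    for seq in product("HT", repeat=k):
        # tosses that immediately follow a heads (only positions 1..k-1 can be followed)
        followers = [seq[i + 1] for i in range(k - 1) if seq[i] == "H"]
        if followers:  # conditioning: at least one heads before toss k
            props.append(followers.count("H") / len(followers))
    # all 2^k sequences are equally likely, so this average is the conditional expectation
    return sum(props) / len(props)

for k in (3, 4, 5):
    print(k, round(mean_prop_heads_after_heads(k), 4))
# k = 3 gives 5/12 ≈ 0.4167, well below 1/2, even though every flip is fair.
```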

It has to be Python code; allowing arbitrary, possibly non-computable natural-language descriptions gets hairy fast.

Random thought after reading "A model of UDT with a halting oracle": imagine two super-intelligent AIs, A and B, suitably modified to have access to their own and each other's source code. They are competing to submit a Python program of length at most N that prints the larger number and then halts (where N is orders of magnitude larger than the code lengths of A and B). A can try to "cheat" by submitting something like exec(run B on the query "submit a program of length N that prints a large number, then halts") followed by print(0), but B can do this as well. Supposing they must submit to a halting oracle that will punish any AI whose program does not halt, what might A and B do?
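A toy sketch of the two kinds of submission (my own illustration; `honest_submission`, `cheating_submission`, and `opponent_source` are invented names, and `opponent_source` stands in for "whatever B would submit when asked the same question", which of course is not something one can actually call):

```python
def honest_submission(N: int) -> str:
    """A straightforward entry (assumes N >= 8): spend the length budget on a
    tower of exponents.  Always halts, always prints a huge number."""
    k = max(0, (N - len("print(9)")) // len("**9"))  # how many "**9" factors fit
    return "print(9" + "**9" * k + ")"

def cheating_submission(opponent_source: str) -> str:
    """The 'cheat' described in the post: run the opponent's submission, let it
    do the printing, then print 0.  If both players submit this wrapper, neither
    program ever names a number of its own -- the circularity (and potential
    non-halting) that the halting oracle has to police."""
    return f"exec({opponent_source!r})\nprint(0)"

print(honest_submission(20))                          # print(9**9**9**9**9)
print(cheating_submission(honest_submission(20)))     # exec('print(9**9**9**9**9)') then print(0)
```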

The intended question, I think, is: if you were to find a dictionary for some alien language (not a translator's dictionary, but a dictionary for speakers of that language to look up definitions of words), can you translate most of the dictionary into English? What if you additionally had access to large amounts of conversation in that language, without any indication of what the aliens were looking at or doing at the time?

Predictive clustering: whenever your writing is predictable (for example, when responding to something, or after the first few sentences of a new post), an LLM could roughly predict the points you might make. It could cluster these points and let you point and click on the relevant cluster. For instance, in a political piece, you might first click "I [Agree | Disagree | Raise interesting other point | Joke]". You then select "Raise interesting other point", and it presents you with 5-20 points you might want to raise, along with a text box to add your own. Once you add your point, you can choose a length.
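A minimal sketch of what that flow might look like (everything here is my own invention: the cluster labels, `suggest_points`, and `compose_reply` are illustrative, and the canned suggestions stand in for what would actually be an LLM call):

```python
def suggest_points(draft_so_far: str, cluster: str) -> list[str]:
    """Stand-in for an LLM call: given the text written so far and the chosen
    response cluster, return a handful of candidate points the writer might make.
    Here it just returns canned placeholders."""
    return [f"[{cluster} point {i} predicted from: {draft_so_far[:30]!r}...]"
            for i in range(1, 6)]

def compose_reply(draft_so_far: str) -> str:
    """Point-and-click composition: pick a cluster, then pick a predicted point
    (or type your own)."""
    clusters = ["Agree", "Disagree", "Raise interesting other point", "Joke"]
    print("Pick a cluster:", ", ".join(f"{i}: {c}" for i, c in enumerate(clusters)))
    cluster = clusters[int(input("> "))]

    options = suggest_points(draft_so_far, cluster)
    print("Pick a point (or type your own):")
    for i, opt in enumerate(options):
        print(f"{i}: {opt}")
    choice = input("> ")
    return options[int(choice)] if choice.isdigit() else choice

# Example: compose_reply("The new policy will mostly affect renters because ...")
```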

This seems very likely to come into existence in the near future, but I hope it does not. Not only does it rob people of the incredibly useful practice of crafting their own arguments, but putting better words in the user's mouth than the user planned to say can also influence the way the user actually thinks.

Frequentist and Bayesian reasoning are two ways to handle Knightian uncertainty. Frequentism gives you statements that are outright true in the face of this uncertainty, which is fantastic. But this sets an incredibly high bar that is very difficult to work with.

For a classic example, say you have a possibly biased coin in front of you and want to say something about its rate of heads. From frequentism, you can lock in a method of producing a confidence interval from, say, 100 flips and say: "I'm about to flip this coin 100 times and give you a confidence interval for p_heads. The chance that the interval will contain p_heads is at least 99%, regardless of the true value of p_heads." There is no Bayesian analogue.
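As one concrete instance of that kind of guarantee (my own sketch, using a Hoeffding-style interval rather than whichever interval the comment had in mind), the interval below contains the true p_heads with probability at least 99%, whatever that true value is:

```python
import math, random

def hoeffding_interval(flips: list[int], alpha: float = 0.01) -> tuple[float, float]:
    """Confidence interval for p_heads with coverage >= 1 - alpha for every
    possible true p_heads, via Hoeffding's inequality.  flips is a list of 0/1."""
    n = len(flips)
    p_hat = sum(flips) / n
    half_width = math.sqrt(math.log(2 / alpha) / (2 * n))
    return max(0.0, p_hat - half_width), min(1.0, p_hat + half_width)

# Empirical check of the coverage guarantee for one arbitrary true bias:
p_true, hits, trials = 0.37, 0, 10_000
for _ in range(trials):
    flips = [random.random() < p_true for _ in range(100)]
    lo, hi = hoeffding_interval(flips)
    hits += lo <= p_true <= hi
print(hits / trials)  # >= 0.99, typically much higher (Hoeffding is conservative)
```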

Now suppose I have a complex network of conditional probability distributions with many parameters, each subject to Knightian uncertainty. Getting confidence regions would be extremely expensive, and they'd probably be far too large to be useful. So we put a convenient prior on the parameters and go.
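A toy version of "put on a convenient prior and go" (my own sketch, not anything from the comment): a two-node network A -> B with independent Beta(1, 1) priors on its three parameters, updated in closed form from simulated data.

```python
import random

# Toy network A -> B with parameters p(A=1), p(B=1|A=0), p(B=1|A=1),
# each given an independent Beta(1, 1) prior (conjugate, so updates are closed-form).
random.seed(0)
true_params = {"pA": 0.7, "pB_given_A0": 0.2, "pB_given_A1": 0.9}
data = []
for _ in range(200):
    a = random.random() < true_params["pA"]
    b = random.random() < (true_params["pB_given_A1"] if a else true_params["pB_given_A0"])
    data.append((a, b))

def beta_posterior(successes: int, failures: int) -> tuple[float, float]:
    """Beta(1, 1) prior + binomial likelihood => Beta(1 + successes, 1 + failures)."""
    return 1 + successes, 1 + failures

nA1 = sum(a for a, _ in data)
nB1_A0 = sum(b for a, b in data if not a)
nB1_A1 = sum(b for a, b in data if a)
posteriors = {
    "pA": beta_posterior(nA1, len(data) - nA1),
    "pB_given_A0": beta_posterior(nB1_A0, (len(data) - nA1) - nB1_A0),
    "pB_given_A1": beta_posterior(nB1_A1, nA1 - nB1_A1),
}
for name, (alpha, beta) in posteriors.items():
    print(name, "posterior mean:", round(alpha / (alpha + beta), 3))
```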
 

ETA: Randomized complexity classes also feel fundamentally frequentist.

[This comment is no longer endorsed by its author]

Not fully, unfortunately, although a baseline would be asking an LLM to convert my LaTeX file into Markdown that allows MathJax.

Someone recently tried to sell me on the Ontological Argument for God, which begins with "God is that for which nothing greater can be conceived." For the reasons you described, this is completely nonsensical, but it was taken seriously for a long time (even by Bertrand Russell!), which made me realize how much I took modern logic for granted.

I didn't think much of your comment at the time, but I think it's extremely central to the whole thing now. We go from unconscious to conscious almost all at once.
