β-redex

Comments

I barely registered the difference between small talk and big talk

I am still confused about what "small talk" is after reading this post.

Sure, talking about the weather is definitely small talk. But if I want to get to know somebody, weather talk can't possibly last for more than 30 seconds. After that, both parties have demonstrated the necessary conversational skills to move on to more interesting topics. And the "getting to know each other" phase is really just a spectrum between surface-level stuff and your deepest personal secrets, so I don't really see where you would draw the line between small and deep talk.

One situation I do struggle with, on the other hand, is when I would rather avoid talking to a person at all, and so I want to maintain the shallowest possible level of small talk. (Ideally I could tell them "sorry, I would rather just not talk to you right now", but that's not really socially acceptable.)

It was actually this post about nootropics that got me curious about this. Apparently (based on self-reported data) weightlifting is just straight-up better than most other nootropics?

Anyway, thank you for referencing some opposing evidence on the topic as well; I might try to look into it more at some point.

(Unfortunately, the thing that I actually care about - whether it has cognitive benefits for me - seems hard to test, since you can't blind yourself to whether you exercised.)

I think this post, along with your other one about exercise, is a good practical example of a situation where rational thinking makes you worse off (at least for a while).

If you had shown this post to me as a kid, my youth would probably have been better. Unfortunately no one around me was able to make a sufficiently compelling argument for caring about physical appearance. It wasn't until much later that I was able to deduce the arguments for myself. If I had just blindly "tried to fit in with the cool kids, and do what is trendy", I would have been better off.

I wonder what similar blind spots I could have right now where the argument in favor of doing something is quite complicated, but most people in society just do it because they blindly copy others, and I am worse off as a result.

This alone trumps any other argument mentioned in the post. None of the other arguments seems universal; each can be argued with on an individual basis.

I actually like doing things with my body. I like hiking and kayaking and mountain climbing and dancing.

As some other commenters noted, what if you just don't?

I think it would be valuable if someone made a post just focused on collecting all the evidence for the positive cognitive effects of exercise. If the evidence is indeed strong, no other argument in favor of exercise should really matter.

FWIW I don't think that matters; in my experience interactions like this arise naturally as well, and humans usually perform similarly to how Friend did here.

In particular, it seems that ChatGPT completely fails at tracking its interlocutor's competence in the domain at hand. If you asked a human with no context, at first they might give you the complete recipe just like ChatGPT tried, but any follow-up question would immediately indicate to them that more hand-holding is necessary. (And ChatGPT was asked to "walk me through one step at a time", which should make this blatantly obvious; no human would just repeat the instructions again in answer to that.)

Cool! (Nitpick: You should probably mention that you are deviating from the naming in the HoTT book. AFAIK usually Π and Σ types are called Pi and Sigma types respectively, while the words "product" and "sum" (or "coproduct" in the HoTT book) are reserved for × and +.)
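For concreteness, here is how the four constructions look in Lean 4 (toy definitions of my own, just to contrast the namings):

```lean
-- Π-type (dependent function type), usually called a Pi type
def piExample (B : Nat → Type) : Type := (n : Nat) → B n

-- Σ-type (dependent pair type), usually called a Sigma type
def sigmaExample (B : Nat → Type) : Type := (n : Nat) × B n

-- Non-dependent product and sum ("coproduct" in the HoTT book)
def prodExample : Type := Nat × Bool
def sumExample : Type := Nat ⊕ Bool
```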

I am especially looking forward to discussion on how MLTT relates to alignment research and how it can be used for informal reasoning, as the Alignment Research Field Guide mentions.

I always get confused when the term "type signature" is used in text unrelated to type theory. Like what do people mean when they say things like "What’s the type signature of an agent?" or "the type of agency is […]"?

This argument seems a bit circular: nondeterminism is indeed a necessary condition for exfiltrating outside information, so obviously, if you prevent all nondeterminism, you prevent exfiltration.

You are also completely right that removing access to obviously nondeterministic APIs would massively reduce the attack surface. (AFAIK most known CPU side-channels require timing information.)

But I am not confident that this kind of attack would be "robustly impossible". All it takes is finding some kind of nondeterminism that can be used as a janky timer, and suddenly all Spectre-class vulnerabilities are accessible again.

For instance, I am pretty sure that rowhammer depends on the frequency of the writes. If you insert some instruction between the writes to RAM, you can suddenly measure the execution time of said instruction by looking at how many iterations of the rowhammer loop it took to flip a bit. (I am not saying that this particular attack would work, I am just saying that I am not confident you couldn't construct something similar that would.)
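This is the same reason browsers had to disable SharedArrayBuffer after Spectre: a counter incremented by a concurrent thread makes a perfectly serviceable janky timer once the real clock APIs are gone. A minimal Python sketch of that idea (purely illustrative; Python's resolution is far too coarse for a real attack, and spawning a thread is of course exactly the kind of nondeterminism such a sandbox would try to forbid):

```python
import threading

# A background thread increments a shared counter as fast as it can.
# Sandboxed code with no clock APIs can still read this counter as a
# crude clock.
counter = 0

def tick():
    global counter
    while True:
        counter += 1

threading.Thread(target=tick, daemon=True).start()

def measure(workload):
    """Estimate how long `workload` runs, in ticks, without any clock API."""
    start = counter
    workload()
    return counter - start

# Toy workloads standing in for the memory accesses whose latency
# (cache hit vs. miss) a real side-channel attack needs to distinguish.
fast_ticks = measure(lambda: sum(range(10_000)))
slow_ticks = measure(lambda: sum(range(10_000_000)))
print(fast_ticks, slow_ticks)  # the slower workload accumulates more ticks
```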

I am confident that this direction of exfiltration would be robustly impossible.

If you have some deeper reason for believing this, it would probably be worth its own post. I am not saying that it's impossible to construct some clever sandbox environment that ensures determinism even on a buggy CPU with unknown classes of bugs; I am just saying that I don't know of any existing solutions.

(Also, in my opinion it would be much easier to just make a non-buggy CPU than to try to prove the correctness of something executing on a buggy one. (Though proving your RAM correct seems quite hard, e.g. deriving the absence of rowhammer-like attacks from Maxwell's equations or something.))

Yes, CPUs leak information: that is the output kind of side-channel, where an attacker can transfer information about the computation into the outside world. That is not the kind I am saying one can rule out with merely diligent pursuit of determinism.

I think you are misunderstanding this part; input side channels absolutely exist as well. Take Spectre, for instance:

On most processors, the speculative execution resulting from a branch misprediction may leave observable side effects that may reveal private data to attackers.

Note that the attacker in this case is the computation that is being sandboxed.

This implies that we could use relatively elementary sandboxing (no clock access, no networking APIs, no randomness, none of these sources of nondeterminism, and that’s about it) to prevent a task-specific AI from learning any particular facts

It's probably very hard to create such a sandbox though; your list is definitely not exhaustive. Modern CPUs leak information like a sieve. (The known leaks are mostly patched, of course, but with this track record plenty more unknown vulnerabilities should exist.)

Maybe if you built the purest lambda calculus interpreter, with absolutely no JIT and a deterministic memory allocator, you could prove some security properties even when running on a buggy CPU? This seems like a bit of a stretch though. (And while running it like this on a single thread might prevent the computation from measuring time, any current practical AI needs massive parallelism to execute. With that, all hopes of determinism and of keeping timing information from leaking in probably go out the window.)
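For what it's worth, the core of such an interpreter really is tiny. A minimal Python sketch with a toy term encoding of my own, just to illustrate the point: the guest program only ever sees variables, abstractions, and applications, so it gets no handle on clocks, randomness, or I/O (which of course says nothing about what the underlying hardware leaks into it):

```python
# Terms: ("var", name) | ("lam", name, body) | ("app", func, arg)

def evaluate(term, env):
    """Evaluate a pure lambda term; the guest code can perform no effects."""
    kind = term[0]
    if kind == "var":
        return env[term[1]]
    if kind == "lam":
        _, name, body = term
        # Capture the defining environment; nothing is ever mutated.
        return lambda arg: evaluate(body, {**env, name: arg})
    if kind == "app":
        _, func, arg = term
        return evaluate(func, env)(evaluate(arg, env))
    raise ValueError(f"unknown term: {term!r}")

# Church numeral two = λf. λx. f (f x), applied to a successor and zero.
two = ("lam", "f", ("lam", "x",
       ("app", ("var", "f"), ("app", ("var", "f"), ("var", "x")))))
print(evaluate(two, {})(lambda n: n + 1)(0))  # prints 2
```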

Also I just found that you already argued this in an earlier post, so I guess my point is a bit redundant.

Anyway, I like that this article comes with an actual example; we could probably use more examples/case studies for both sides of the argument.
