One thing you could do is give users more voting power when they vote without seeing the author of the post. I.e., you could enable a mode that hides post authors until you've voted on the anonymized content; after that, you can vote as normal.
Obviously there are ways author identity can leak through this, but it seems better than nothing.
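A minimal sketch of how the weighting could work (the multiplier and function names here are hypothetical, just to make the idea concrete):

```python
# Hypothetical weights: votes cast while the author was hidden count double.
BLIND_WEIGHT = 2.0
NORMAL_WEIGHT = 1.0

def vote_weight(author_hidden_at_vote_time: bool) -> float:
    return BLIND_WEIGHT if author_hidden_at_vote_time else NORMAL_WEIGHT

def post_score(votes) -> float:
    # votes: iterable of (direction, author_hidden) pairs,
    # where direction is +1 for an upvote and -1 for a downvote.
    return sum(d * vote_weight(hidden) for d, hidden in votes)

# e.g. two author-blind upvotes outweigh three ordinary downvotes:
print(post_score([(+1, True), (+1, True), (-1, False), (-1, False), (-1, False)]))  # 1.0
```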
It has been updated now -- I didn't get around to it that afternoon and then forgot until now.
On the flight and while booking your ticket, the airline will try to sell you their branded credit card. If you take the deal and use the card, they give you a bunch of miles, and they get a cut of every purchase you make.
This is correct and a big oversight on my part, thanks! I’ll update the post later today.
EDIT: It's been updated.
This seems plausible given that virtually every knowledge worker I know fantasizes to some extent about working with their hands.
I suspect that this method will only work well on tasks where the model needs to reason explicitly in order to cheat. So, e.g., if the model needs to reason out some trait of the user in order to flatter them, the prompt will likely kick in and get it to self-report its cheating; but if the model can learn to flatter the user on the fly, without reasoning, the prompt probably won't do anything. By analogy, if I instruct a human to tell me whenever they use hand gestures to communicate something, they will have difficulty, because their hand gestures are automatic and not normally promoted to conscious attention.
There's the atom transformer in AlphaFold-like architectures, although the embeddings it operates on do encode 3D positioning from earlier parts of the model, so maybe that doesn't count.
Transformers do not natively operate on sequences.
This was a big misconception of mine, because so much of the discussion around transformers is oriented around predicting sequences. However, it's more accurate to think of a general transformer as operating on an unordered set of tokens. An understanding of sequence order only comes in if you add a positional embedding to tell the transformer how the tokens are ordered, and possibly a causal mask to force attention to flow in only one direction.
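Here's a minimal numpy sketch of this point: plain self-attention with no positional embedding and no mask is permutation-equivariant, i.e., shuffling the input tokens just shuffles the output rows.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Single-head, unmasked self-attention with no positional embedding.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
d = 8
X = rng.normal(size=(5, d))                      # five "tokens" with no order info
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

perm = rng.permutation(5)
out = self_attention(X, Wq, Wk, Wv)
out_perm = self_attention(X[perm], Wq, Wk, Wv)

# Permuting the inputs permutes the outputs identically:
# the layer sees an unordered set, not a sequence.
assert np.allclose(out[perm], out_perm)
```

Add a positional embedding to X before the attention and the equality breaks, which is exactly where the notion of order enters.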
The Money Stuff column mentioned AI alignment, rationality, and the UK AISI today:
Here is a post from the UK AI Security Institute looking for economists to “find incentives and mechanisms to direct strategic AI agents to desirable equilibria.” One model that you can have is that superhuman AI will be terrifying in various ways, but extremely rational. Scary AI will not be an unpredictable lunatic; it will be a sort of psychotic pursuing its own aims with crushing instrumental rationality. And arguably that’s where you need economists! The complaint people have about economics is that it tries to model human behavior based on oversimplified assumptions of rationality. But if super AI is super-rational, economists will be perfectly suited to model it. Anyway if you want to design incentives for AI here’s your chance.
It seems to me like buying an investment property is almost always a bad decision, because 1) single properties are very volatile, 2) you generally have to put a very large chunk of your net worth (sometimes even >100%!) into a single, completely undiversified asset, and 3) renting out a property is work, and you could likely earn a better hourly rate elsewhere.
The only advantages I see are that there's far more cheap leverage available to retail investors in real estate than in other asset classes, and that mortgages can act as a savings commitment device. Are there other reasons I'm missing that explain the apparent popularity of these investments?
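To make the leverage point concrete, here's a toy calculation (assumed 20% down payment; ignores mortgage interest, rent, taxes, and transaction costs):

```python
# Toy illustration of mortgage leverage: a 20% down payment is a
# 5x-levered position, so property price moves hit your equity 5x.
# (Assumed numbers; ignores interest, rent, taxes, and fees.)
down_payment = 0.20
leverage = 1 / down_payment              # 5x

for price_move in (-0.10, 0.00, 0.10):   # property falls / flat / rises 10%
    equity_return = price_move * leverage
    print(f"price {price_move:+.0%} -> equity {equity_return:+.0%}")
```

The same mechanism that makes the returns attractive is what makes point 1) bite: a 10% price drop wipes out half the equity.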