Telling the Difference Between Memories & Logical Guesses
Alex A · 24d

Yes, I think it’s chunking to compress information. You group similar events together, and only remember a distinct instance if something extraordinary happens (for instance, a spider crawls onto your toothbrush). 

From my observation, memories, even of recent events, are not that linear. They focus primarily on the novel information, and the gaps are filled in by either the fungible/chunked memories or the reasoning you were referring to. The upshot is that if novel events are separated by a lot of mundanity, you may remember them out of sequence (and prioritized by importance or novelty). I notice this often when trying to recall my dreams.

Telling the Difference Between Memories & Logical Guesses
Alex A · 24d

After doing track-back meditation a few times, I noticed that my memories of habitual activities have a different, vaguer feeling than those of unique events. It seems that, in addition to logically filling in the gaps (which I noticed as well), memories of repeated, unchanging actions are stored as essentially fungible.

OpenAI: Facts from a Weekend
Alex A · 2y

RE: the board’s vague language in their initial statement

Smart people whose objective is to accumulate and keep control, and who are skilled at persuasion and manipulation, will often leave little trace of wrongdoing. They're optimizing for alibis and plausible deniability. Being around them and trying to collaborate with them is frustrating. If you're self-aware enough, you can recognize that your contributions are being twisted, that your voice is going unheard, and that critical information is being withheld from you, but it isn't easy. And when you try to raise concerns, they are very good at convincing you that those concerns are actually your fault.

I can see a world where the board recognized that Sam's behavior did not align with OpenAI's mission, while not having a smoking-gun example to pin on him. Being unskilled politicians with only a single lever to pull (and probably morally opposed to other political tactics), the board did the only thing they could think of after trying to get Sam to listen to their concerns. Did it play out well? No.

It's clear that EA has a problem with placing politically immature people in key political positions. I also suspect a misalignment of objectives between the politically skilled members of EA and the rest of us: politically skilled members may be withholding political advice and training from others out of fear of being outmaneuvered by those they advise. This works against the movement as a whole.

Lying to chess players for alignment
Answer by Alex A · Oct 27, 2023

I'm about 1000 Elo on chess.com and would be interested in playing as A. I play regularly but haven't had formal training or studied seriously. I'd be free on weekdays after 7 pm ET.

Deception Chess: Game #1 · 2y · 113 karma · 22 comments