And for research that's more conceptual than empirical, the teams might go in completely different directions and generate insights that a single team or individual would not.

How many parameters do self-driving-car neural nets have?
Answer by eg · Aug 06, 2021

Take with a grain of salt, but maybe ~119M?

A Medium post from 2019 says "Tesla’s version, however, is 10 times larger than Inception. The number of parameters (weights) in Tesla’s neural network is five times bigger than Inception’s. I expect that Tesla will continue to push the envelope."

Wolfram says of Inception v3 "Number of layers: 311 | Parameter count: 23,885,392 | Trained size: 97 MB"

Not sure what version of Inception was being compared to Tesla though.
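
As a quick back-of-the-envelope check (assuming, which the sources don't confirm, that the Medium comparison was to Inception v3):

```python
# Rough arithmetic behind the ~119M guess; the Inception v3 assumption is mine.
inception_v3_params = 23_885_392   # parameter count from the Wolfram figure quoted above
tesla_multiplier = 5               # "five times bigger" per the 2019 Medium post
estimate = tesla_multiplier * inception_v3_params
print(f"{estimate:,}")             # 119,426,960 -> roughly 119M parameters
```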

Two AI-risk-related game design ideas

D&D website estimates 13.7m active players and rising.

LCDT, A Myopic Decision Theory

Probabilistic/inductive reasoning from past/simulated data (possibly assumes imperfect implementation of LCDT):

"This is really weird because obviously I could never influence an agent, but when past/simulated agents that look a lot like me did X, humans did Y in 90% of cases, so I guess the EV of doing X is 0.9 * utility(Y)."

Cf. smart humans in Newcomb's problem: "This is really weird, but if I one-box I get the million and if I two-box I don't, so I guess I'll just one-box."
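
A toy numerical version of that EV estimate, with made-up records of look-alike agents and a hypothetical utility value (nothing here is from the LCDT post itself):

```python
# Hypothetical records: (look-alike agent did X, humans then did Y)
records = [
    (True, True), (True, True), (True, True), (True, True), (True, True),
    (True, True), (True, True), (True, True), (True, True), (True, False),
]

utility_of_Y = 1.0  # assumed utility the agent assigns to humans doing Y

# The agent never models itself as influencing humans; it only looks up how
# often Y followed X among past/simulated agents that resemble it.
x_cases = [did_y for did_x, did_y in records if did_x]
p_y_given_x = sum(x_cases) / len(x_cases)           # 0.9 with the data above

expected_value_of_X = p_y_given_x * utility_of_Y    # "0.9 * utility(Y)"
print(expected_value_of_X)                          # 0.9
```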

LCDT, A Myopic Decision Theory

For a start, low-level deterministic reasoning:

"Obviously I could never influence an agent, but I found some inputs to deterministic biological neural nets that would make things I want happen."

"Obviously I could never influence my future self, but if I change a few logic gates in this processor, it would make things I want happen."

Training Better Rationalists?

This post inspired https://www.lesswrong.com/posts/RdCb8EGEEdWbwvqcp/why-not-more-small-intense-research-teams

Training Better Rationalists?
Answer by eg · Aug 05, 2021

My impression is that SEALs are exceptional as a team, much less so individually. Their main individual skill is extreme team-mindedness.

LCDT, A Myopic Decision Theory

Seems potentially valuable as an additional layer of capability control to buy time for further control research. I suspect LCDT won't hold once intelligence reaches some threshold: some sense of agents, even an indirect one, is too natural a thing to learn about the world for the constraint to survive.

What does GPT-3 understand? Symbol grounding and Chinese rooms

Two big issues I see with the prompt:

a) It doesn't actually end with text that follows the instructions; a "good" output (which GPT-3 fails to produce in this case) would just be to list more instructions.

b) It doesn't make sense to try to get GPT-3 to talk about itself in the completion. To the extent GPT-3 understands the instructions, it would be talking about whoever it thinks wrote the prompt.

What does GPT-3 understand? Symbol grounding and Chinese rooms

I agree and was going to make the same point: GPT-3 has 0 reason to care about instructions as presented here.  There has to be some relationship to what text follows immediately after the end of the prompt.
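
To make that concrete, here is an entirely hypothetical pair of prompt shapes (not from the post) for a plain text-completion model, showing how the ending of the prompt determines what plausibly comes next:

```python
# Shape A: the prompt is nothing but instructions. The statistically natural
# continuation is more instructions, not an attempt to follow them.
prompt_a = (
    "Instructions:\n"
    "1. Describe yourself in one sentence.\n"
    "2. Explain how you learned English.\n"
)

# Shape B: the prompt ends where a response would begin, so text that follows
# the instructions is what plausibly comes immediately after the prompt.
prompt_b = prompt_a + "\nResponse from the person these instructions are addressed to:\n"

print(prompt_a)
print(prompt_b)
```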
