All of MathiasKirkBonde's Comments + Replies

Politics is way too meta

For me, this perfectly hits the nail on the head.

This is a somewhat weird question, but like, how do I do that?

I've noticed multiple communities fall into the meta-trap, and even when members notice, it can be difficult to escape. While the solution is simply to "stop being meta", that is much easier said than done.

When I noticed this happening in a community I am central in organizing, I pushed back by shifting my own focus from process to output, hoping others would follow suit. This has worked somewhat, and we're definitely on a better track. I wonder what dynamics lead to this 'death by meta' syndrome, and whether there is a cure.

1skybrian4moWhen you're actually a little curious, you might start by using a search engine to find a decent answer to your question. At least, if it's the sort of question for which that would work. Maybe even look for a book to read? But, maybe we should acknowledge that much of the time we aren't actually curious and are just engaging in conversation for enjoyment? In that case, cheering on others who make an effort to research things and linking to their work is probably the best you can do. Even if you're not actually curious, you can notice people who are, and you can look for content that's actually about concrete things. For example, my curiosity about the history of politics in Turkey is limited, so while I did read Scott Alexander's recent book review and some responses with interest, I'm not planning on reading an actual book on it. I don't think he's all that curious either, since he just read one book, but that's going further than me.
Heel-and-toe drumming

Really cool concept of drumming with your feet while playing another instrument.

I think it would be really cool to experiment with different trigger sounds. The muscles in your foot severely limit the nuance available to your playing, and trying to imitate the sounds of a regular drum set will not go over well.

I think it is possible to achieve much cooler playing if you drop the idea that your pedals need to imitate a drum set at all. Experiment with some 808 bass, electric kicks, etc.

Combining that with your great piano playing would create an entirely new feel of music; otherwise it can easily end up sounding like a good pianist struggling to cooperate with a much worse drummer.

2jefftk8moIf you look at the video where I'm playing piano I'm using electronic drum sounds, though I still want to play around and figure out ones I like better. Here's what this is eventually going to fit with: https://www.jefftk.com/p/rhythm-stage-setup-v3 [https://www.jefftk.com/p/rhythm-stage-setup-v3]
Why indoor lighting is hard to get right and how to fix it

I spent 5 minutes searching amazon.de for replacements for the various recommended products, and my search came up empty.

Has someone put together the needed list of bright lighting products on amazon.de? I tried doing it myself and ended up hopelessly confused. What I'm asking for is, e.g., two desk lamps and corresponding light bulbs that meet the criteria.

I'll pay $50 to the charity of your choice if I make a purchase based on your list.

Things are allowed to be good and bad at the same time

And there doesn’t need to be an “overall goodness” of the job that would be anything else than just the combination of those two facts.

There needs to be an "overall goodness" that is exactly equal to the combination of those two facts. I really like the fundamental insight of the post. It's important to recognize that your mind wants to push your perception of the "overall goodness" to the extremes, and that you shouldn't let it do that.

If you now had to decide whether to take the job, how would you use this electrifying zap to help you make the decision?

2Kaj_Sotala9moMy current feeling is that I'd probably take it. (The job example was fictional, as the actual cases where I've used this have been more personal in nature, but if I translate your question to those contexts then "I'd take it" is what I would say if I translated the answer back.)
Should we write more about social life?

I would strongly prefer a LessWrong that is completely devoid of this.

Half the time it ends up in spiritual vagueness, of which there's already too much on LessWrong. The other half ends up being toxic male-centric dating advice.

Inner Alignment: Explain like I'm 12 Edition

For those who, like me, have the attention span and intelligence of a door hinge, the ELI5 edition is:

Outer alignment is trying to find a reward function that is aligned with our values (making it produce good stuff rather than paperclips).

Inner alignment is the act of ensuring our AI actually optimizes the reward function we specify.

An example of poor inner alignment would be us humans in the eyes of evolution. Instead of doing what evolution intended, we use contraceptives so we can have sex without procreation. If evolution had gotten its inner alignment right, we would care as much about spreading our genes as evolution does!
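
To make the distinction concrete, here's a minimal runnable sketch (my own toy illustration, not from the post): an agent learns a proxy reward that agrees with the true objective during training, and the two come apart once the environment shifts, mirroring the contraceptive example.

```python
# Toy inner-alignment failure (illustrative sketch, not the post's method):
# the agent optimizes a learned proxy ("pleasure") that correlates with the
# true objective ("offspring") in training, then the environment shifts.

def true_reward(action, contraceptives_available):
    """Evolution's outer objective: offspring produced."""
    if action == "mate":
        return 0 if contraceptives_available else 1
    return 0

def proxy_reward(action):
    """The proxy the agent actually learned to optimize: pleasure."""
    return 1 if action == "mate" else 0

ACTIONS = ["mate", "forage"]

# "Training" in the ancestral environment: greedily pick whichever action
# scores best on the proxy (where proxy and true objective still agree).
learned_policy = max(ACTIONS, key=proxy_reward)

# "Deployment": the proxy still says "mate", but the outer objective disagrees.
for env, contraceptives in [("ancestral", False), ("modern", True)]:
    print(env, learned_policy,
          "proxy =", proxy_reward(learned_policy),
          "true =", true_reward(learned_policy, contraceptives))
```

In the ancestral environment the proxy and the true objective agree; in the modern one the agent keeps scoring 1 on the proxy while the true reward drops to 0.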

To what extent is GPT-3 capable of reasoning?

GPT-3's goal is to accurately predict a text sequence. Whether GPT-3 is capable of reasoning and whether we can get it to explicitly reason are two different questions.

If I had you read Randall Munroe's book "What If?" but tore out one page and asked you to predict what would be written as the answer, there are a few good strategies that come to mind.

One strategy would be to pick random verbs and nouns from previous questions and hope some of them will be relevant for this question as well. This strategy will certainly do better than if yo... (read more)
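
For what it's worth, here's a toy version of that first strategy (my own sketch, with made-up example sentences rather than Munroe's actual text): predict the missing answer by sampling vocabulary from the rest of the book, with no reasoning involved at all.

```python
# Crude "borrow words from earlier questions" baseline (illustrative sketch):
# beats random gibberish because the topical vocabulary overlaps, but it
# cannot reason its way to the actual answer.

import random

random.seed(42)

# Hypothetical stand-ins for the book's other pages.
previous_pages = [
    "If every human jumped at once, the Earth would barely move.",
    "A mole of moles would form a sphere larger than the Moon.",
]

def baseline_prediction(pages, length=8):
    """Sample random words seen elsewhere in the book."""
    vocab = [word.strip(".,") for page in pages for word in page.split()]
    return " ".join(random.choice(vocab) for _ in range(length))

print(baseline_prediction(previous_pages))
```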

Six economics misconceptions of mine which I've resolved over the last few years

I don't get the divestment argument, please help me understand why I'm wrong.

Here's how I understand it:

If Bob offers to pay Alice whatever Evil-Corp™ would have paid in stock dividends, in exchange for what Alice would have paid for a share of Evil-Corp™ stock, Evil-Corp™ has to find another buyer. Since Alice was the buyer willing to pay the most, Evil-Corp™ now loses the difference between what Alice was willing to pay and what the next-most-willing buyer, Eve, is willing to pay.

Is that understanding correct, or am I missing... (read more)

So I think the divestment argument that Buck is making is the following:

Assume there are 25 investors, from Alice to Ysabel. Each investor is risk-averse, and so is willing to give up a bit of expected value in exchange for reduced variance, and the more anticorrelated their holdings, the less variance they'll have. This means Alice is willing to pay more for her first share of EvilCorp stock than she is for her second share, and so on; suppose EvilCorp has 100 shares, and the equilibrium is that each investor has 4 shares.

Suppose now Alice decides th... (read more)
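
Filling in toy numbers for that setup (my own sketch; the bid schedule is invented, not Buck's model): each investor's willingness to pay falls with every additional share they hold, and the clearing price is the marginal, i.e. 100th-highest, bid. Removing Alice moves the price down only as far as the next-most-willing buyer's bid.

```python
# Worked divestment example under invented numbers: 25 risk-averse investors,
# each valuing their k-th share of EvilCorp a bit less than their (k-1)-th.

def bids(investors, shares_each=10, base=100.0, discount=1.0):
    """All bids, highest first; each extra share is worth 'discount' less."""
    return sorted(
        (base - k * discount
         for _ in range(investors)
         for k in range(shares_each)),
        reverse=True,
    )

SHARES = 100  # EvilCorp has 100 shares; price clears at the 100th-highest bid.

price_with_alice = bids(investors=25)[SHARES - 1]
price_without_alice = bids(investors=24)[SHARES - 1]

print("clearing price with Alice:   ", price_with_alice)     # 97.0
print("clearing price without Alice:", price_without_alice)  # 96.0
print("per-share cost of divestment:",
      price_with_alice - price_without_alice)                # 1.0
```

With many near-identical risk-averse buyers, Alice divesting drops the clearing price only from 97 to 96, the gap between her marginal bid and the next buyer's, which matches the mechanics described in the parent comment.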

Self-Predicting Markets

As Benjamin Graham put it:

in the short run, the market is a voting machine; in the long run, the market is a weighing machine.

The unexpected difficulty of comparing AlphaStar to humans

I think that's a very fair way to put it, yes. One way this becomes very apparent is that you can have a conversation with a StarCraft player while he's playing. At particularly demanding moments, however, it will be clear that the player is not giving you his full attention.

Novel strategies are thought up between games and refined through dozens of practice games. In the end you have a mental decision tree of how to respond to most situations that could arise. Without having played much chess, I imagine this is how people do chess openers as wel... (read more)

I think the abstract question of how to cognitively manage a "large action space" and "fog of war" is central here.

In some sense StarCraft could be seen as turn-based, with each turn lasting 1 microsecond, but this framing makes the action space of a beginning-to-end game *enormous*. Maybe not so enormous that a bigger data center couldn't fix it? In some sense, brute force can eventually solve ANY problem tractable to a known "vaguely O(N*log(N))" algorithm.

BUT facing "a limit that forces meta-cognition"... (read more)
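
As a back-of-the-envelope check on "enormous" (my own toy numbers, not from the thread): even granting a tiny branching factor per microsecond turn, the count of possible action sequences dwarfs anything brute force could touch.

```python
# Rough size of the microsecond-turn game tree under invented assumptions.
import math

turns_per_second = 1_000_000   # one "turn" per microsecond
game_seconds = 10 * 60         # a ten-minute game
branching = 10                 # assume just 10 distinct actions per turn

total_turns = turns_per_second * game_seconds
# branching ** total_turns overflows anything sensible; report its log10.
log10_sequences = total_turns * math.log10(branching)
print(f"~10^{log10_sequences:.0f} possible action sequences")  # ~10^600000000
```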

The unexpected difficulty of comparing AlphaStar to humans

Before doing the whole EA thing, I played StarCraft semi-professionally. I was consistently ranked grandmaster, primarily making money from coaching players of all skill levels. I also co-authored an ML paper on StarCraft II win prediction.

TL;DR: AlphaStar shows us what it will look like when humans are beaten in a completely fair fight.

I feel fundamentally confused about a lot of the discussion surrounding AlphaStar. The entire APM debate feels completely misguided to me and seems to be born out of fundamental misunderstandings of what it means to be good at ... (read more)

6spkoc2yI think you're right when it comes to SC2, but that doesn't really matter for DeepMind's ultimate goal with AlphaStar: to show an AI that can learn anything a human can learn. In a sense AlphaStar just proves that SC2 is not balanced for superhuman ( https://news.ycombinator.com/item?id=19038607 [https://news.ycombinator.com/item?id=19038607] ) micro. Big stalker army shouldn't beat big Immortal army. In current SC2 it obviously can with good enough micro. There are probably all sorts of other situations where soft-scissor beats soft-rock with good enough micro. Does this make AlphaStar's SC2 performance illegitimate? Not really? Tho in the specific Stalker-Immortal fight, input through an actual robot looking at an actual screen and having to cycle through control groups to check HP and select units PROBABLY would not have been able to achieve that level of micro. The deeper problem is that this isn't DeepMind's goal. It just means that SC2 is a cognitively simpler game than initially thought(note, not easy, simple as in a lot of the strategy employed by humans is unnecessary with sufficient athletic skill). The higher goal of AlphaStar is to prove that an AI can be trained from nothing to learn the rules of the game and then behave in a human-like, long term fashion. Scout the opponent, react to their strategy with your own strategy etc. Simply bulldozing the opponent with superior micro and not even worrying about their counterplay(since there is no counterplay) is not particularly smart. It's certainly still SC2, it just reveals the fact that SC2 is a much simpler game(when you have superhuman micro).
5maximkazhenkov2yInteresting point. Would it be fair to say that, in a tournament match, a human pro player is behaving much more like a reinforcement learning agent than a general intelligence using System 2 [https://www.lesswrong.com/posts/LQSGd97EGPdG2MN7i/link-system-2-thinking-decreases-religious-belief] ? In other words, the human player is also just executing reflexes he has gained through experience, and not coming up with ingenious novel strategies in the middle of a game. I guess it was unreasonable to complain about the lack of inductive reasoning and game-theoretic thinking in AlphaStar from the beginning, since DeepMind is an RL company and RL agents just don't do that sort of stuff. But I think it's fair to say that AlphaStar's victory was much less satisfying than AlphaZero's, being not only unable to generalize across multiple RTS games, but also unable to explore the strategy space of a single game (hence the incentivizing of the use of certain units during training). I think we all expected to see perfect game sense and situation-dependent strategy choice, but instead blink stalkers are the one build to rule them all, apparently.

I think your feelings stem from you considering it enough if AS simply beats human players, while the APM whiners would like AS to learn all the aspects of StarCraft skill it can reasonably be expected to learn.

The agents on ladder don't scout much and can't react accordingly. They don't tech switch midgame, and some of them get utterly confused in ways a human wouldn't. The game 11 agent vs. MaNa couldn't figure out it could build 1 phoenix to kill the warp prism and chose to follow it with 3 oracles (units which can't shoot at flying units). The ladder agents d

... (read more)
Sunny's Shortform

"Science confirms video games are good" is essentially the same statement as "The bible confirms video games are bad" just with the authority changed. Luckily there remains a closer link between the authroity "Science" and truth than the authority "The bible" and truth so it's still an improvement.

Most people still update their worldview based upon whatever their tribe has agreed upon as their central authority. I'm having a hard time criticising people for doing this, however. This is something we all do! ... (read more)

1Sunny from QAD2yOh yes, that's certainly true! My point is that anybody who has the floor can say that science has proven XYZ when it hasn't, and if their audience isn't scientifically literate then they won't be able to notice. That's why I led with the Dark Ages example, where priests got to interpret the Bible however was convenient for them.
Experimental Open Thread April 2019: Socratic method

I really like this line of thinking. I don't think it is necessarily opposed to the typical map-territory model, however.

You could in theory explain all there is to know about the territory with a single map; however, that map would become really dense and hard to decipher. Instead, having multiple maps (one with altitude, another with temperature) is instrumentally useful for best understanding the territory.

We cannot comprehend the entire territory at once, so it's instrumentally useful to view the world through different lenses and see what new ... (read more)

2shminux2yNot in terms of other maps, but in terms of its predictive power: Something is more useful if it allows you to more accurately predict future observations. The observations themselves, of course, go through many layers of processing before we get a chance to compare them with the model in question. I warmly recommend the relevant SSC blog posts: https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/ [https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/] https://slatestarcodex.com/2017/09/06/predictive-processing-and-perceptual-control/ [https://slatestarcodex.com/2017/09/06/predictive-processing-and-perceptual-control/] https://slatestarcodex.com/2017/09/12/toward-a-predictive-theory-of-depression/ [https://slatestarcodex.com/2017/09/12/toward-a-predictive-theory-of-depression/] https://slatestarcodex.com/2019/03/20/translating-predictive-coding-into-perceptual-control/ [https://slatestarcodex.com/2019/03/20/translating-predictive-coding-into-perceptual-control/]
Announcing the Center for Applied Postrationality

Believing the notion that one can 'deconfuse' themself on any topic, is an archetypal mistake of the rationalist. Only in the spirit of all things that are essential to our personal understanding, can we expect our beliefs to conform to the normality of our existence. Asserting that one can know anything certain of the physical world is, by its definition, a foolhardy pursuit only someone with a narrow and immature understanding of physicality would consider meaningful.

Believing that any 'technique' could be used to train ones mind in t... (read more)

3Richard_Kennaway2yIs that you, GPT2?