Is starting capital really a bottleneck for entrepreneurs? Don't you just get money from investors? Elon Musk and Bill Gates only needed a laptop to start their businesses. Or, from Warren Buffett's biography: "In 1945, as a high school sophomore, Buffett and a friend spent $25 to purchase a used pinball machine, which they placed in the local barber shop. Within months, they owned several machines in three different barber shops across Omaha. They sold the business later in the year for $1,200 to a war veteran. ... In high school, he invested in a business owned by his father and bought a 40-acre farm worked by a tenant farmer. He bought the land when he was 14 years old with $1,200 of his savings. By the time he finished college, Buffett had accumulated $9,800 in savings".
Have long-standing research problems actually ever been presented at the IMO? AFAIK, the problems featured there already have known solutions.
I feel like I'm missing context. Why did this community come to care about blackmail laws in the first place?
I'm so confused. How did Luna survive the Killing Curse?
Great observation! I was struggling with the same issue when I moved from studying math to graduate ML research. Depth-first search is the right approach to reading a math textbook. Say you start learning homology theory and realize you don't know what an Abelian group is. You should stop and go read about Abelian groups, or you won't understand what comes next.
However, the same approach got me in trouble when I tried to understand the state of the art in voice processing. I would start reading an article in the morning, and by the evening I'd have finished no articles and find myself stuck in the middle of a textbook, reading about some method 50 years out of date and irrelevant to modern research. So I worked out that the effective approach to this task is breadth-first search: read the article from beginning to end, write down all unknown terms, then go look up the most important one.
I think another reason people might default to depth-first search is that BFS requires storing a list of unexplored nodes, and people don't have much working memory. So a note-taking system like Zettelkasten really helps with applying this approach more broadly in one's life.
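The contrast between the two strategies can be sketched in code. This is a toy illustration, not anything from the original comments: the dependency graph of terms is made up, and the point is just that BFS keeps an explicit queue of unexplored terms (the thing a note-taking system externalizes), while DFS keeps its state on the call stack (i.e., in your head).

```python
# Toy sketch of the two reading strategies. The "term -> terms it depends on"
# graph below is hypothetical, loosely inspired by the homology example.
from collections import deque

deps = {
    "homology": ["chain complex", "abelian group"],
    "chain complex": ["abelian group"],
    "abelian group": [],
}

def dfs_reading(start, deps):
    """Depth-first: chase every unknown term before finishing the current one."""
    order = []
    def visit(term):
        if term in order:
            return
        for d in deps.get(term, []):
            visit(d)  # recursion = the stack of half-read things in your head
        order.append(term)
    visit(start)
    return order

def bfs_reading(start, deps):
    """Breadth-first: finish the current item, queue unknown terms for later.
    The queue is exactly the 'list of unexplored nodes' that note-taking
    offloads from working memory."""
    order, queue, seen = [], deque([start]), {start}
    while queue:
        term = queue.popleft()
        order.append(term)
        for d in deps.get(term, []):
            if d not in seen:
                seen.add(d)
                queue.append(d)
    return order
```

Note the difference in the resulting reading order: DFS reaches "homology" only after reading everything it depends on, whereas BFS finishes "homology" first and looks prerequisites up afterwards.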
Do you know anyone who wants such a forum?
Cheers to simon, ericf, and myself for offering an optimal solution! And cheers to abstractapplic for organizing the challenge.
The leaderboard (you're not listed if I couldn't figure out what your final decision was, or if you added more than 10 points):
simon, ericf 0.9375
[('CHA', 8), ('CON', 15), ('DEX', 13), ('INT', 13), ('STR', 8), ('WIS', 15)]
[('CHA', 8), ('CON', 14), ('DEX', 13), ('INT', 13), ('STR', 8), ('WIS', 16)]
Samuel Clamons 0.8095
[('CHA', 8), ('CON', 17), ('DEX', 13), ('INT', 13), ('STR', 7), ('WIS', 14)]
[('CHA', 9), ('CON', 16), ('DEX', 14), ('INT', 13), ('STR', 8), ('WIS', 12)]
[('CHA', 8), ('CON', 14), ('DEX', 13), ('INT', 13), ('STR', 6), ('WIS', 18)]
[('CHA', 7), ('CON', 15), ('DEX', 13), ('INT', 13), ('STR', 6), ('WIS', 18)]
[('CHA', 11), ('CON', 14), ('DEX', 13), ('INT', 13), ('STR', 6), ('WIS', 15)]
[('CHA', 11), ('CON', 16), ('DEX', 13), ('INT', 13), ('STR', 7), ('WIS', 12)]
AABoyles, Pongo, GuySrinivasan 0.6389
[('CHA', 6), ('CON', 14), ('DEX', 13), ('INT', 13), ('STR', 6), ('WIS', 20)]
[('CHA', 5), ('CON', 14), ('DEX', 13), ('INT', 20), ('STR', 8), ('WIS', 12)]
Deccludor 0.6098
[('CHA', 5), ('CON', 20), ('DEX', 13), ('INT', 13), ('STR', 6), ('WIS', 15)]
Randomini 0.4688
[('CHA', 4), ('CON', 14), ('DEX', 13), ('INT', 13), ('STR', 16), ('WIS', 12)]
From plotting the data, I saw that:
I fit a regularized logistic regression and a neural net, but couldn't get validation accuracy greater than 70%, which was only a little better than the 65% baseline of random guessing. I realized that the data is not very informative and I don't know how results are calculated, so I'd better stick with a conservative model like a Nearest Neighbors classifier, and try a few different models. I fit a KNN classifier, gradient boosting on decision trees, and regularized logistic regression (all with validation accuracy 70%), and chose a point which scored near the top under all three classifiers. (It had all stats >= 8, too.)
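The "pick a point all three models like" step can be sketched roughly like this. Everything here is a stand-in: the challenge data isn't shown, so the stat blocks and labels below are synthetic, and the exact models/hyperparameters I used may have differed, but the scikit-learn pattern is the same.

```python
# Sketch of the approach: fit three classifiers, then keep the candidate
# stat block whose WORST predicted success probability across models is
# highest. Data below is synthetic (the real challenge data isn't shown).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.integers(3, 19, size=(500, 6))  # stat blocks: CHA, CON, DEX, INT, STR, WIS
# Stand-in labels: success loosely depends on total stats, plus noise.
y = (X.sum(axis=1) + rng.normal(0, 5, 500) > 75).astype(int)

models = [
    KNeighborsClassifier(n_neighbors=15),
    GradientBoostingClassifier(random_state=0),
    LogisticRegression(C=0.1, max_iter=1000),  # C small = more regularization
]
for m in models:
    m.fit(X, y)

# Score candidate stat allocations under every model; a conservative pick
# maximizes the minimum predicted probability across the three models.
candidates = rng.integers(6, 19, size=(200, 6))
worst = np.min([m.predict_proba(candidates)[:, 1] for m in models], axis=0)
best = candidates[np.argmax(worst)]
```

Taking the minimum across models is one way to encode "near the top scores for all three classifiers"; averaging the probabilities would be a reasonable alternative.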
I wonder if it has something to do with age-related hearing loss. I remember when I was a teen, rock music sounded like antimelodic screeching to me too, but I listened because my friends liked it, and I liked the lyrics. Now the same songs sound like legit music to me. Maybe it's because our hearing range shrinks with age, so the high-pitched sounds become quieter and the song doesn't sound like screeching anymore.
CHA+4, STR+2, WIS+4
Is it a secret / part of the puzzle where this data came from?