Karolina

Comments

Why learn to code?
  • It’s fun
  • It’s useful even if you aren’t a professional software engineer. I taught myself some basic programming a few years ago and used it to automate tedious reports and processes for my job, which allowed me to spend time on less urgent but more interesting projects (a rough sketch of that kind of script is below).
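For concreteness, here is a minimal sketch of the kind of report automation described above. The file name and column names are made up for illustration; the real scripts would depend on the specific reports involved.

```python
# Minimal sketch: roll up a raw CSV report into per-team totals.
# "weekly_hours.csv", "team", and "hours" are hypothetical names used for illustration.
import csv
from collections import defaultdict

def summarize(input_path: str, output_path: str) -> None:
    totals = defaultdict(float)
    with open(input_path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["team"]] += float(row["hours"])  # sum hours per team

    with open(output_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["team", "total_hours"])
        for team, hours in sorted(totals.items()):
            writer.writerow([team, hours])

if __name__ == "__main__":
    summarize("weekly_hours.csv", "weekly_summary.csv")
```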
The Case for Frequentism: Why Bayesian Probability is Fundamentally Unsound and What Science Does Instead

“What would a frequentist analysis of the developing war look like?”

Why is it necessary to assign a probability to the outcome of a war?

For example, someone paying attention to the news on 2/21/22 would have seen Putin’s speech recognizing the independence of the Donbas regions and laying out motivations for an invasion, and thought to themselves: “Russia will probably invade Ukraine in the near future. Satellite images show nearly 200k Russian troops massed along Ukraine’s eastern border. Russia’s military forces vastly outnumber Ukraine’s on land and in the air, so Russia is in a good position to succeed. However, the outcome of a war also depends on a number of intangible factors such as command and control, logistics, and morale, so Russia’s prospects for success remain to be seen.” Personally, I find this kind of analysis more reasonable than “P(Russia success) = .75. Here’s the math.”

Good Heart Week: Extending the Experiment

Thanks! Do I still need to enter an email?

Good Heart Week: Extending the Experiment

“Today I'm here to tell you: this is actually happening and it will last a week. You will get a payout if you give us a PayPal/ETH address or name a charity of your choosing.”

How do we give you the name of a charity? I only see fields to enter a PayPal and email address on the payment info page.

MIRI announces new "Death With Dignity" strategy

The spread of opinions here seems narrow compared to what I would expect. OP makes some bold predictions in this post, and I see more debate over far less controversial claims all the time.

Sorry, but what do aliens have to do with AI?

MIRI announces new "Death With Dignity" strategy

Thanks! Your explanation was helpful, and I appreciate you acknowledging the amount of time and effort it takes for a newcomer to read these posts! :)

MIRI announces new "Death With Dignity" strategy

Thanks for responding to my comment—I appreciate you clarifying OP’s points. I’ve been working through some of the sequences on AI risk, and I think I have a better understanding of why people are concerned. I also took a moment to look up MIRI—first of all, wow, it must be interesting to work in an area like AI alignment which (if I understand correctly) is at the intersection of so many fields like math, psychology, programming, and philosophy!

I had a couple of questions after browsing MIRI’s website—I hope you don’t mind me asking.

  • What kinds of tools are you guys working on to make general AI systems safer?
  • Since general AI has yet to be developed, how does MIRI test its tools and research?
  • Is MIRI working to develop any AI systems?

A funny coincidence: I think my fiancé has a friend who works at an AI research lab owned by Google. Maybe I’ll ask him about his expectations for the future of AI.

MIRI announces new "Death With Dignity" strategy

So AI will destroy the planet and there’s no hope for survival?

Why is everyone here in agreement that AI will inevitably kill off humanity and destroy the planet?

Sorry, I’m new to LessWrong and clicked on this post because I recognized the author’s name from the series on rationality.
