The scientific paper I mentioned in the first comment used different questions; here is an example:
“The questionnaires asked for interval estimates of birth years for five famous characters from world history (Mohammed, Newton, Mozart, Napoleon, and Einstein), and the years of death for five other famous persons (Nero, Copernicus, Galileo, Shakespeare, and Lincoln).”
I tried answering these questions myself with 90% confidence intervals, and I was correct on 7/10 questions, so it seems I am still overconfident even though I had just read about the effect. To be fair, though, 10 questions are far from enough for statistical significance.
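Out of curiosity, the significance question can be checked directly. If my 90% intervals were truly calibrated, the number of hits out of 10 would follow a Binomial(10, 0.9) distribution, and we can ask how likely 7 or fewer hits would be by chance alone (a rough sketch; the exact hit count of 7/10 is from my own informal test above):

```python
from math import comb

# Under perfect calibration, hits ~ Binomial(n=10, p=0.9).
# One-sided p-value: probability of observing 7 or fewer hits by chance.
n, p, hits = 10, 0.9, 7
p_value = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(hits + 1))
print(round(p_value, 3))  # ~0.07
```

So 7/10 lands just above the conventional 0.05 threshold, which matches the intuition that 10 questions are too few to conclude much either way.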
Wow, really interesting article.
It is really interesting that the median result was negative, although strategic overconfidence, as some have pointed out, explains some of it.
Found a very interesting paper on the subject of overconfidence: https://www.researchgate.net/publication/227867941_When_90_confidence_intervals_are_50_certain_On_the_credibility_of_credible_intervals
“Estimated confidence intervals for general knowledge items are usually too narrow. We report five experiments showing that people have much less confidence in these intervals than dictated by the assigned level of confidence. For instance, 90% intervals can be associated with an estimated confidence of 50% or less (and still lower hit rates). Moreover, interval width appears to remain stable over a wide range of instructions (high and low numeric and verbal confidence levels).”
Yes, I agree that governments will likely "defend" their local fiat currencies, since they have both the incentives (such as control of the currency and the ability to produce more of it, which they often rely on to fund budget deficits) and the means to do so.
I personally would really like such a bank account, one that automatically invested the money in it the way I want, provided the fees were low enough.
Yes, and also usually the currency becomes safer (harder to "hack") with more miners.
Thank you for the feedback, I changed it to commodities in the post.
Yes, it seems logical that the better we can predict the future, the better our decision will be today.
I find it interesting that some of the people who have had the biggest impact on the world, like Jeff Bezos and Elon Musk, say they have been heavily inspired by sci-fi. This might indicate that imagining the future is highly useful for making big changes.
Great thoughts, it was very interesting to read. Some thoughts occurred to me that might be of interest to others, and I would also find others' input on them interesting.
Imagine an AI trained as an oracle, trained on a variety of questions and selected based on how “truthful” its answers were. Assuming this approach was possible and could create an AGI, might that be a viable way to “create selection pressures towards agents which think in the ways we want”? In other words, might this create an aligned AI regardless of extreme optima?
Another thought that occurred to me: suppose an AI is “let loose”, spreads to new hardware, encounters the “real world”, and is exposed to massive amounts of new data. The range of circumstances it faces would then of course be very broad. With the oracle, by contrast, potentially everything could be the same during training and after it, except for the questions asked. Could this increase safety, since the range of circumstances in which it would need to have desirable motivations would be comparatively narrow?

Lastly, I'm new to LessWrong, so I'm extra grateful for any input on how I can improve my reasoning and commenting skills.
It does seem like a reasonable analogy that a Neuralink could act as a "sixth sense" or an extra (very complex) muscle.
Elon Musk has argued that humans can take in a lot of information through vision (by looking at a picture for one second, you absorb a great deal), whereas text and speech are not very information dense. Since we use keyboards or speech to communicate information outwards, he argues, our output is slow.
One possibility is that AI could help interpret the uploaded data and fill in details to make the uploaded information more useful. For example, you could "send" an image of something through the Neuralink, an AI would interpret it and fill in the unclear details, and you would end up with an image very close to what you imagined, containing several hundred or even thousands of kilobytes of information.
A Neuralink would only need to increase the productivity of an occupation by a few percent to be worth the 3,000-4,000 USD that Elon Musk believes the price will drop to.
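A rough back-of-the-envelope sketch of that claim, using an assumed salary and productivity gain (both numbers are illustrative, not from the source):

```python
# Hypothetical payback calculation for the implant, assuming the
# productivity gain translates directly into extra economic value.
salary = 50_000          # assumed annual salary in USD (illustrative)
gain = 0.02              # assumed productivity gain of 2%
cost = 3_500             # midpoint of the projected 3000-4000 USD price
years_to_recoup = cost / (salary * gain)
print(round(years_to_recoup, 1))  # 3.5 years under these assumptions
```

Under those assumptions the device pays for itself in a few years, and a higher salary or larger gain shortens that further.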
That does sound like a rational approach, especially since the complexity of the problem makes it near impossible to promote a single approach.