That might be too quick a dismissal, given the importance typically assigned to trust for well-functioning economies and economic development. But I think the view that there is some top three, regardless of what the three are, is difficult to accept as an unqualified statement.
It seems like we're talking about a very complex and complicated area that will not distill down to some simple map of that territory. I think we will find that the map needs a larger number of layers than just three.
Which layers one needs, or finds most informative, will depend a good bit on the focus, specific question, or framing one starts with. I thought that type of view was implied in your conclusion, so I was a bit surprised to see that parenthetical statement.
Thanks. I'm surprised there are not more obvious/visible efforts, and results/findings, along that line of approach.
I would say a sandbox is probably not the environment I would choose. I would suggest, at least once someone thinks they might actually be testing a true AGI, a physically isolated system: 100% self-contained and disconnected from all power and communications networks in the real world.
Many so-called "logical fallacies" are correct Bayesian inferences.
I find this a very interesting claim and am wondering if anyone has applied it to some list of logical fallacies, such as one might find in an Intro to Logic textbook.
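As a toy illustration of the claim (all numbers here are hypothetical, chosen only for illustration), a classic "fallacy" like appeal to authority can be read as an ordinary Bayesian update:

```python
# Sketch: "appeal to authority" as a valid Bayesian update.
# H: the claim is true.  E: a domain expert endorses the claim.

def posterior(prior, p_evidence_given_true, p_evidence_given_false):
    """Bayes' rule: P(H | E) = P(E | H) P(H) / P(E)."""
    p_evidence = (p_evidence_given_true * prior
                  + p_evidence_given_false * (1 - prior))
    return p_evidence_given_true * prior / p_evidence

prior = 0.5                # no opinion beforehand (hypothetical)
p_endorse_if_true = 0.9    # experts usually endorse true claims (hypothetical)
p_endorse_if_false = 0.2   # and rarely endorse false ones (hypothetical)

print(posterior(prior, p_endorse_if_true, p_endorse_if_false))
```

With these made-up numbers the expert's endorsement lifts the probability well above the prior, which is the sense in which the "fallacy" is correct inference rather than a logical error.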
I'm assuming that one could get all that from reading through all the Sequences, but it seems to me a cheat-sheet-type document would be much more helpful.
Perhaps I'm missing some obvious, well-known failing, but wouldn't an isolated VR environment allow failed first tries without putting the world at risk? We probably don't have sufficiently advanced environments currently, and we don't have any guarantee that everyone developing AGI would actually limit their efforts to such environments.
But I don't think I've ever seen such an approach suggested. Is there some failure point I'm missing?
I suppose you're getting the 5000 number from the 5% claim, but Hanson doesn't actually claim 5000 as a number; rather, he says, "I’d guess there are at least a thousand such strong dramatic reported events."
So here you drop from a 5% claim to one of about 1%.
As for where, it doesn't take too much to start getting some leads. Most are news stories that probably don't meet your criteria, but this might at least offer some basis for thinking something is going on. I think the question then becomes: why are the government and military taking these steps -- they are clearly not costless, and many other efforts are competing for funds -- if there is really nothing but smoke and mirrors?
small omission:
that we will struggle to address if don't understand fundamental uncertainty.
Also, I was initially confused by your shift from "truth" to "relative truth" and started to wonder if you were going to slip in a concept that was not really truth but continue as if you were still talking about truth as I suspect most understand the word -- that is, something absolute and unrelated to usefulness or practicality. If that was intentional, that's fine. If not, you might consider a bit more of an introduction to that shift, as your following text does clarify the difference and why you used the term. That might be less jarring for other readers -- assuming you were not intentionally attempting to "jar" the reader's mind at that point.
I'm not sure if this will be a good comment, but if you've never heard of Terry Pratchett's counter-culture Christmastime story, Hogfather, you might find it interesting. In a sense it's a mirror image of your position: basically, we need to believe the little lies in order to believe the big lies (like morality, ethics, truth, right/wrong).
If in fact most futures play out in ways that lead to human extinction, then a high estimate of extinction is correct or "rational"; if most futures don't lead to doom, then a low estimate of doom is correct. This is a fact independent of the public / consensus epistemic state of any relevant scientific fields.
This seems wrong, or at least incomplete.
Give all the doom outcomes a combined probability p of, say, 1/10^10000000000000000000000, and the bliss outcome 1-p. Even with a lot more ways doom can occur, it seems we might not worry much about doom actually happening. It's true you might weight the disvalue of doom much higher than the value of bliss, so some expected-value calculation might work toward your view. But then we need to consider the timing of doom and existential risks unrelated to AI. If someone were to work through all the AI dooms and the timing of that doom and come to (for the sake of argument, clearly) 50 billion years, then we have much more to worry about from our Sun than from AI.
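The expected-value arithmetic above can be sketched directly; the probability and the values assigned to doom and bliss below are made up purely for illustration:

```python
# Sketch of the expected-value point: a vanishingly small doom probability
# dominates unless doom's disvalue is weighted astronomically.
# All numbers are hypothetical.

p_doom = 1e-22        # stand-in for the comment's 1/10^(huge) probability
v_doom = -1e6         # disvalue of doom, weighted a million-fold (hypothetical)
v_bliss = 1.0         # value of the bliss outcome (hypothetical)

expected_value = p_doom * v_doom + (1 - p_doom) * v_bliss
print(expected_value)
```

Even with doom weighted a million times more heavily than bliss, the tiny probability leaves the expected value essentially equal to the bliss value, which is the point the paragraph is making.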
While I share your ignorance about how things are done in Australia, and agree with your general description of Social Security, I think a couple of points might be worth considering. First, Social Security is not a UBI, and is not even supposed to be sufficient to support someone in retirement. It is a supplemental income program that people pay into. I agree that there is a rather large disconnect between what you pay in and what you can expect to take out based on your personal situation.
That said, it also seems to share some of the same concerns that the OP raises. Many question whether those paying in now will actually be able to collect -- solvency issues. While I don't think anyone talks about this (though I don't look, so it could be well known and discussed in some circles), there seems to be a very clear bias toward the "haves" actually being able to pull the most out, compared to those most needing it. Look at the payout schedule for delaying your payment until 70. Those in need cannot wait. My supposition is that this incentive to delay for a few years is largely about cash-flow issues related to the whole question of solvency. But clearly it introduces some arguably undesirable distributional effects.
I would also point out that while you can start collecting at 62, you will not be collecting what is considered your full monthly supplemental income. You get penalized for taking payment early (full retirement age, and thus full payment, depends on when you were born -- for me it's 66 1/2), just as you get a premium for waiting.
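A rough sketch of the early/late claiming schedule described above; the per-month reduction and credit rates follow the commonly published Social Security figures, but treat them as assumptions here rather than authoritative values:

```python
# Hedged illustration of the early-claiming penalty and delay premium.
# Assumed rates: early claiming reduces the benefit by 5/9 of 1% per month
# for the first 36 months and 5/12 of 1% per month beyond that; delaying
# past full retirement age (FRA) adds 2/3 of 1% per month (8%/year).

def benefit_fraction(months_from_fra):
    """Fraction of the full benefit, given months before (negative)
    or after (positive) full retirement age."""
    if months_from_fra >= 0:
        # delayed retirement credit
        return 1 + months_from_fra * (2 / 3) / 100
    early = -months_from_fra
    first = min(early, 36) * (5 / 9) / 100     # first 36 early months
    extra = max(early - 36, 0) * (5 / 12) / 100  # months beyond 36
    return 1 - first - extra

print(benefit_fraction(-54))  # claiming at 62 with an FRA of 66 1/2
print(benefit_fraction(42))   # waiting until 70 with an FRA of 66 1/2
```

Under these assumed rates, claiming at 62 with an FRA of 66 1/2 yields roughly 72.5% of the full benefit, while waiting until 70 yields about 128%, which is the penalty/premium asymmetry the comment points to.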
I think one can find plenty of points of contention related to Social Security similar to those raised in the OP.
Edit to add a small pointer to Alaska. That state, unless things have changed, has something of a UBI-type payout, based on the royalties from mineral and oil leases on state land. All Alaskan citizens get their share (not sure if it's uniform or proportional to some factor). Perhaps some make similar complaints about that program as are raised in the OP, but if not, it might be something of a compare-and-contrast option.
It seems a case could be made that upbringing of the young is also a case of "fucking with the brain," in that the goal is clearly to change neural pathways: to shift from whatever was producing the child's unwanted behavior into pathways consistent with the desired behavior(s).
Is that really enslavement? Or perhaps, at what level is that the case?