This is totally misguided. If heuristics worked 100% of the time they wouldn't be rules of thumb, they'd be rules of nature. We only have to be wrong once for AI to kill us.
I invest in US assets myself, but not because of any faith in the US; in fact the opposite. First, it's like a fund manager investing in a known bubble: you know it's going to burst, but if it doesn't burst in the next year or so you cannot afford the short/medium-term loss relative to your competitors. Second, if the US crashes it takes down the rest of the world with it and is probably the first to recover, so you might as well stick with it. None of this translates into faith in the US, AI, or governance. Your mention of positive-sum deals is particularly strange since, if the world has learned one thing about Trump, it is that he sees the world almost exclusively in zero-sum terms.
Stating the obvious here, but Trump has ensured that the USG cannot credibly guarantee anything at all, and hence this is a non-starter for foreign governments.
I think it does. Certainly the way that I would do it would be to create a world map from memory, then overlay the coordinate grid, then just answer by looking it up. Your answers will be as good as your map is. I believe that the LLMs most likely work from Wikipedia articles: there are a lot of location pages with coordinates in Wikipedia.
Humans would draw a map of the world from memory, overlay the grid, and look up the reference. I doubt that the LLMs do this. It would be interesting to see whether they can actually relate the images to the coordinates. I suspect not, i.e. I expect that they could draw a good map with gridlines from the training data, but would be unable to relate the visual to the question. I expect that they are working from coordinates in Wikipedia articles and the CIA website. Another suggestion would be to ask the LLM to draw a map of the world with non-standard grid lines, e.g. every 7 degrees.
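For concreteness, here is a minimal sketch of that probe using the OpenAI Python client. The model name, prompt wording, and coordinate choices are illustrative assumptions, not a claim about any particular system:

```python
# Rough sketch of the probe suggested above: ask a model to name the place at
# round-number vs. off-grid coordinates and compare the answers. If it is
# doing table lookup from Wikipedia-style coordinate pairs rather than
# consulting an internal "map", the off-grid points should come out
# noticeably worse.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# (lat, lon) pairs: the first two sit on common grid lines, the second two
# are deliberately off-grid (the "every 7 degrees" idea).
PROBES = [
    (40.0, -100.0),   # round numbers, central USA
    (50.0, 0.0),      # round numbers, English Channel
    (41.3, -98.7),    # off-grid, still central USA (Nebraska)
    (49.6, 0.4),      # off-grid, Channel coast of Normandy
]

for lat, lon in PROBES:
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"Which country or sea is at latitude {lat}, "
                       f"longitude {lon}? Answer in a few words.",
        }],
    )
    print(lat, lon, "->", reply.choices[0].message.content)
```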
This is interesting but, in some ways, it should have been obvious: everything we say says something about who we are, and what we say is influenced by what we know in ways that we are not conscious of. Magicians use subconscious forcing all the time, along the lines of "Think of a number between 1 and 10".
It's worse than that: (1) is just the big problem for philosophy, hiding behind circular definitions and multiple undefined words to obscure the big issue. We have "progress" and "values" and "good" used as if they are independent when even a cursory examination shows that they are not; they are, in fact, "defined" using circular reasoning. We have made progress because our values are better (more good) now than they were in the past. How do we know that our values are better now than in the past? Because we have made progress.

We believe that we are better now than we were in the past because, for example, we do not discriminate against homosexuals. But the people in the past would argue that they were better than us for exactly the same reason. I believe that the root cause of the illusion of moral progress is no more, and no less, than the obvious observation that winners always get to write the history and always paint themselves in the best light. We are the winners. We defeated the "us" of the past, and now "we" get to say that we are morally superior, because the people of the past are not here to argue with us and, even if they were, we would destroy them with our superior technology.
Complacency! Try visiting a country that hasn't had generations of peaceful democracy: they take these issues much more seriously. The optics of this are heavily skewed by the US, which has had essentially the same religion and politics for centuries, and so Americans believe that none of the serious consequences could ever happen to them.
You are arguing that it is tractable to have predictable, positive long-term effects using something that is known to be imperfect (heuristic ethics). For that to make sense, you would have to justify why small imperfections cannot possibly grow into large problems. It's like saying that, because you believe you have only a small flaw in your computer security, nobody could ever break in and steal all of your data. That wouldn't be true even if you knew what the flaw was, and, with heuristic ethics, you don't even know that.