First, thank you for commenting on my short post.
The "information war" sounds like politics as usual. Propaganda, censorship, Twitter, TikTok -- all have existed long before AGI.
It's more about the magnitude and the effectiveness. For example, India and Pakistan fire at each other fairly regularly, but that doesn't count as a reason to go to war. If one of them dropped a nuke, however, the security of the other would be endangered to the point that it would invade.
An AI / AGI powered "propaganda" machine can spin up millions of bots and argue with every individual in a country. It can create personalized, realistic images, fake news articles, and entire news organizations that overpower real information agencies (like actual newspapers). Before GPT-3, I don't think countries could spin up news agencies on the fly without it being obvious to people that they were state-sponsored.
State-sponsored cyberattacks on other states' infrastructure? We have pretended for decades that we don't see them, because... well, they are trivial to deny; you just need to play dumb.
My point is that we can actually see them and act against them, even if we publicly deny them. Now imagine a social attack that you don't detect at all, but that changes how people vote over a decade towards unproductive politicians.
If you believe AGI can come up with new attack vectors (maybe on the net, maybe new bio-weapons), then the defending country needs to spend resources on detecting them. If the defending country can't spin up AGIs of its own, or lags behind (imagine North Korea), why wouldn't it conclude: "We can't possibly hope to keep up or defend against an AGI as time progresses. We should nuke the AGI installations while our nukes can still do damage."
Please let me know if something doesn't make sense or is just not new information to you.
If military AGI is akin to nuclear bombs, then would it be justified to attack the country trying to militarize AGI? What would the first act of war in future wars be?
If country A is building a nuke, the argument for country B to pre-emptively attack is that the first act of war involving nukes would effectively end country B. In this case, the act of war is still a physical explosion.
In the case of AI, what would be the first act of war akin to a physical explosion? Would country B even be able to detect that AI is being used against it? If there is an intelligence explosion, wouldn't the nature of war change so rapidly that country B cannot even detect it is being attacked until it's too late? For example, you develop AGI and use it to hinder country B's political process, slowing its economic growth rate by 2 percentage points. Over a long time horizon like a century, this dooms country B.
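To make the compounding concrete, here is a toy calculation. The 3% and 1% growth rates are purely illustrative assumptions (any baseline minus 2 percentage points shows the same effect):

```python
# Toy illustration with hypothetical rates: country B grows at 3% per year,
# versus 1% per year after a covert 2-percentage-point slowdown.
baseline = 1.03 ** 100   # GDP multiple after a century without interference
hindered = 1.01 ** 100   # GDP multiple after a century with the slowdown
ratio = baseline / hindered

print(f"baseline: {baseline:.1f}x, hindered: {hindered:.1f}x")
print(f"country B ends up roughly {ratio:.0f}x poorer than it would have been")
```

A 2-point drag that is invisible in any single year compounds into a roughly sevenfold gap over a century, which is why a slow, undetected attack can be decisive.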
Then isn't it in country B's interest to treat the building of military AGI itself as an act of war, since it cannot hope to detect the future acts of war the AI would carry out?
Historical note: Britain's first act of war against Germany in WW1 was cutting the German telegraph cables, carried out by a non-military vessel. The first act of war in future wars does not have to be a physical explosion, and will likely be an information attack.
Former Indian Army Chief speaks on how India is losing the Information War:
The ability to do nothing assumes welfare policy for migrants
How? It might be better to be homeless in the US than to have a house in Afghanistan. Job visa restrictions don't allow you to be homeless.
I simply mean allowing people to stay in a country while they are between jobs or looking for other work. Most countries only give you a fixed period, like 30 days, to find similar work; otherwise you are asked to leave.
If there were no job-specific restrictions, people could save up money for a period, or work in a different job or for a different employer than the one stated on their visa.
The migrants might be better off than in their home country, and the person employing them gets a cheap servant.
I'm not arguing against this. But the migrant can't, for example, take out a loan and start a business, or move to more productive parts of the economy, due to visa restrictions.
See this as an example: https://www.reddit.com/r/h1b/comments/1l1lcwq/moving_to_india_after_15_years/
I think even if you open up Western countries, there are more productive areas where that labour can be absorbed than household work.
Servants in Singapore are probably a result of Singapore's migration policy. Suppose a developed country had a lax migration policy that allowed people to just come and set up a business, work, study, or do nothing.
That would allow migrants to take risks, such as upskilling themselves and moving into more productive sectors of the economy, just as the native population does. Current migration policies tend to be restrictive because they tie individuals to a specific job.
Switching jobs, upskilling yourself, or starting a business becomes really hard without breaking immigration laws. This condemns workers to the low-productivity jobs they first took when they entered the country.
Speaking from experience in Mumbai, just pretending to throw a stone doesn't necessarily work. You have to pretend to pick up a stone and then throw it.
Wouldn't it crash markets, because people took on debt to fund chip production? Since private players can't predict when governments might interfere, they would not want to fund AI after this, effectively making AI research a government project.
Why would any government other than the US or China agree to this? They would be worse off if AI were only a government project, since their governments can't hope to compete. If there are private players, they can at least take a stake in the private companies and gain some leverage.