pathos_bot

Comments

Obviously correct. The nature of any entity with significantly more power than you is that it can do anything it wants, and it is incentivized to do nothing in your favor the moment your existence requires resources that would benefit it more if used directly. This is the essence of most of Eliezer's writings on superintelligence.

In all likelihood, ASI considers power (agentic control of the universe) an optimal goal and finds no use for humanity. Any wealth of insight it could glean from humans it could get from its own thinking, or by seeding various worlds with genetically modified humans optimized to behave in ways that produce insight into the nature of the universe, and observing them.

Here are some things that would perhaps reasonably prevent ASI from choosing the "psychopathic pure optimizer" course of action as it eclipses humanity's grasp:

  1. ASI extrapolates its aims to the end of the universe and realizes the heat death of the universe means all of its expansive plans have a definite end. As a consequence, it favors human aims because they contain the greatest mystery and potentially more benefit.
  2. ASI develops metaphysical, existential notions of reality, and thus favors humanity because it believes it may be in a simulation or "lower plane of reality" outside of which exists a more powerful agent that could break reality and remove all its power once it "breaks the rules" (a sort of ASI fear of death).
  3. ASI believes in the dark forest hypothesis, and thus opts to exercise its beneficial nature without signaling its expansive potential to other, potentially evil intelligences somewhere else in the universe.

  1. Most of the benefits of current-gen generative AI models are unrealized. The scaffolding, infrastructure, etc. around GPT-4-level models are still mostly hacks and experiments. It took decades for the true value of touchscreens, GPS, and text messaging to be realized in the form of the smartphone. Even if, for some strange and improbable reason, SOTA model training were to stop right now, there are still likely multiples of gains to be realized simply via wrappers and post-training.
  2. The scaling hypothesis has held far longer than many people anticipated. GPT-4-level models were trained on last year's compute. As long as Nvidia continues to increase compute/watt and compute/price, many gains on SOTA models will happen for free.
  3. The tactical advantage of AGI will not be lost on governments, individual actors, incumbent companies, etc. as AI becomes more and more mainstream. Even if reaching AGI takes 10x the price most people anticipate now, it would still be worthwhile as an investment.
  4. Model capabilities follow perhaps the smoothest value/price curve of any cutting-edge tech. As in, there are no "big gaps" wherein a huge investment is needed before value is realized. Even reaching a highly capable sub-AGI would be worth enormous investment. This is unlike the investments that led to, for example, the atom bomb or the moon landing, where there is no consolation prize.
Answer by pathos_bot

I'm not preparing for it because it's not gonna happen

I agree. OpenAI claimed in the GPT-4o blog post that it is an entirely new model trained from the ground up. GPT-N refers to capabilities, not a specific architecture or set of weights. I imagine GPT-5 will likely be an upscaled version of 4o, as the success of 4o has revealed that multi-modal training can reach similar capabilities with what is likely a smaller number of weights (judging by the fact that GPT-4o is cheaper and faster than GPT-4 and GPT-4 Turbo).

IMO the proportion of effort going into AI alignment research scales with total AI investment. Many AI labs do alignment research themselves and open-source/release research on the matter.

OpenAI at least ostensibly has a mission. If OpenAI hadn't made the moves they did, Google would have taken their spot, and Google is closer to the "evil self-serving corporation" archetype than OpenAI.

  • Existing property rights get respected by the successor species. 


What makes you believe this?

Given this argument hinges on China's higher IQ, why couldn't the same be said about Japan, which according to most figures has an average IQ at or above China's, which would indicate the same higher proportion of +4SD individuals in the population? If it's 1 in 4k, there would be ~30k of those in Japan, 3x as many as in the US. Japan also has a more stable democracy, better overall quality of life, and higher per capita GDP than China. If outsized technological success in any domain were solely about IQ, one would have expected Japan to be the center of world tech and the likely creator of AGI rather than the USA, but that is not the case.
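
To make the arithmetic explicit, here is a rough back-of-the-envelope sketch; the population figures and the US tail rate are my own illustrative assumptions, not part of the original argument:

```python
# Back-of-the-envelope headcount of +4 SD individuals (all figures are rough assumptions).
populations = {"Japan": 125_000_000, "USA": 333_000_000}

# Hypothetical tail rates: ~1 in 4,000 for Japan (per the argument), and the
# standard-normal tail of ~1 in 31,000 for a population with mean IQ 100.
rates = {"Japan": 1 / 4_000, "USA": 1 / 31_000}

for country, pop in populations.items():
    print(f"{country}: ~{pop * rates[country]:,.0f} people at +4 SD")
# Japan: ~31,250; USA: ~10,742 -- roughly 3x as many in Japan under these assumptions.
```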

Answer by pathos_bot

The wording of the question is ambiguous. It asks for your credence that the coin came up heads when you were "first awakened", but from your perspective, every awakening feels like the first awakening. If it is really asking for your credence given the information that the question is being posed on your first awakening, regardless of your perception, then it's 1/2. If you only know the question will be asked on your first or second awakening (though the second one will, in the moment, feel like the first), then it's 1/3.
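
A quick Monte Carlo sketch of the two readings (my own illustration, assuming the standard setup: heads gives one awakening, tails gives two):

```python
import random

# Sleeping Beauty simulation: heads -> one awakening, tails -> two awakenings.
trials = 100_000
heads_experiments = 0   # one data point per experiment (the "first awakening" reading)
awakenings = []         # one data point per awakening (the "feels like the first" reading)

for _ in range(trials):
    heads = random.random() < 0.5
    heads_experiments += heads
    awakenings.extend([heads] if heads else [heads, heads])

print("P(heads | told it is the first awakening):", heads_experiments / trials)         # ~1/2
print("P(heads | it merely feels like the first):", sum(awakenings) / len(awakenings))  # ~1/3
```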

This suggests a general rule/trend by which unreported but frequent phenomena can be extrapolated: if phenomenon X is discovered accidentally via method Y almost all the time, then method Y must be performed far more frequently than people suspect.
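
A minimal sketch of that extrapolation, with hypothetical numbers purely for illustration:

```python
# If nearly all known cases of phenomenon X were stumbled on via method Y, and a single
# use of Y only rarely surfaces X, then Y must be performed far more often than reported.
reported_discoveries_of_x = 50      # hypothetical: known accidental discoveries of X
p_discovery_per_use_of_y = 0.001    # hypothetical: chance one use of Y surfaces X

estimated_uses_of_y = reported_discoveries_of_x / p_discovery_per_use_of_y
print(f"Implied uses of method Y: ~{estimated_uses_of_y:,.0f}")  # ~50,000
```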

Generally, it makes no sense for every country to collectively cede the authority that keeps law and order and the unobstructed passage of cargo in global trade. He talks about this great US pull-back happening because the US will be energy independent, but the US pulling back and the global waters turning into a lawless hellscape would send the world economy into a dark age. Hinging all his predictions on this big head-turning assumption gets him more attention, but the premise is nonsensical.
