Mitchell_Porter

Comments

Non-poisonous cake: anthropic updates are normal

Steven Weinberg argued anthropically for a small nonzero cosmological constant, about a decade before dark energy became part of standard cosmology. 

Open and Welcome Thread - May 2021

Turns out it was a post at Steve Hsu's blog about Francois Chollet

How counting neutrons explains nuclear waste

Does the shell model explain why a nucleus becomes unusually unstable once there are two neutrons past the shell (and not when there are two protons past the shell)?

For alpha decay, a cluster of two protons and two neutrons needs to detach. The two protons have a greater intrinsic chance of breaking away, because of charge repulsion from the other protons. So detaching the neutrons is the hardest part. 

So, if you are comparing various nuclei with two nucleons outside the filled shells, and asking when alpha emission faces the lowest energy barrier, the easiest case may be the one in which the two protons come from a filled shell (and can use charge repulsion to escape), while the two loose nucleons are the neutrons. 

And also, why does the decay mode suddenly change to alpha particles?

The proton shell closure at Z=82 seems to be the threshold beyond which electrostatic repulsion between protons wins out over strong-force cohesion among nucleons. Although it can take a while... the half-life of bismuth-209 is about 10^19 years!
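As a back-of-the-envelope check on how marginal that instability is, here is a minimal Python sketch of the Q-value for bismuth-209's alpha decay. The atomic masses are approximate values of the kind found in standard mass tables; treat them as assumptions for illustration.

```python
# Energy released (Q-value) by the alpha decay Bi-209 -> Tl-205 + He-4.
# Approximate atomic masses in unified atomic mass units (u); using
# neutral-atom masses is fine here because the electron masses cancel.
M_BI209 = 208.980399   # parent (as a neutral atom)
M_TL205 = 204.974428   # daughter
M_HE4 = 4.002602       # alpha particle (as a neutral He-4 atom)

U_TO_MEV = 931.494     # energy equivalent of 1 u, in MeV

q = (M_BI209 - M_TL205 - M_HE4) * U_TO_MEV
print(f"Q = {q:.2f} MeV")
# ~3.14 MeV: positive, so the decay is energetically allowed, but the
# alpha must tunnel through a tall Coulomb barrier at this low energy,
# which is what stretches the half-life to ~10^19 years.
```

The decay is allowed, but only barely; that tiny energy surplus above zero, set against the Coulomb barrier, is the whole story of bismuth-209's near-stability.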

Against intelligence

reading someone that "understood AI" 10 years ago and doesn't own a company valued at a few hundred million is like reading someone that "gets how trading works" but works at Walmart and lives with his mom 

Such an interesting statement. Do you mean this literally? You believe that everyone on Earth who "understood AI" ten years ago became a highly successful founder?

Power dynamics as a blind spot or blurry spot in our collective world-modeling, especially around AI

Trying to get the gist of this post... There's the broad sweep of AI research across the decades, up to our contemporary era of deep learning, AlphaGo, and GPT-3. In the 2000s, the school of thought associated with Eliezer, MIRI, and Less Wrong came into being. It was a pioneer in AI safety, but its core philosophy of preparing for the first comprehensively superhuman AI remains idiosyncratic in a world focused on more specialized forms of AI. 

There is a quote from Eliezer talking about "AI alignment" research, which would be the part of AI safety concerned with AI (whether general or specialized) acquiring the right goals. Apparently the alignment research community was more collaborative before OpenAI and truly big money got involved, but now it's competitive and factional. 

Part of the older culture of alignment research was a reluctance to talk about competitive scenarios. The fear was that research into alignment per se would be derailed by a focus on Western values vs. non-Western values, one company rather than another, and so on. But this came to pass anyway, and the thesis of this post is that there should now be more attention given to politics and how to foster cooperation. 

My thoughts... I don't know how much attention those topics should be given. But I do think it essential that someone keep trying to solve the problem of human-friendly general AI in a first-principles way... As I see it, the MIRI school of thought was born into a world that, at the level of civilizational trends, was already headed towards superhuman AI in an uncontrolled and unsafe way, and that has never stopped being the case. 

In a world where numerous projects and research programs existed that might theoretically cross the AI threshold unprepared, MIRI was a voice for planning ahead and doing it right, by first figuring out how to do it right. For a while that was its distinctive quality, its "brand", in the AI world... Now it's a different time: AI and its applications are everywhere, and AI safety is an academic subdiscipline. 

But for me, the big picture and the endgame are still the same. Technical progress occurs in an out-of-control way, the threshold of superhuman AI is still being approached on multiple fronts, and so while one can try to moderate the risks at a political or cultural level, the ultimate outcome still depends on whether or not the first project across the threshold is "safe" or "aligned" or "friendly". 

[Prediction] What war between the USA and China would look like in 2050

Even without a singularity, 2050 is unimaginably far away. 2050 is as far from 2021 as 2021 is from 1992: a time when there was no mass Internet, no smartphones, no 9/11; when Japan was America's big economic rival, China was still debating whether to continue economic reform, the Soviet Union had just ceased to exist, and the European Union had just begun to exist. Half the world population of 2021 wasn't even alive in 1992. 

Estimating COVID cases & deaths in India over the coming months

Some headlines today about case numbers going down in big cities... If anyone still wants to wrestle with the Indian second wave, some information sources: 

https://covid19india.org 

https://twitter.com/BhramarBioStat 

https://www.reddit.com/r/IndiaSpeaks 

Agency in Conway’s Game of Life

Seems like there's a difference between the viability of AI, and the ability of AI to shape a randomized environment. To have AI, you just need stable circuits, but to have an AI that can shape its world, you need a physics that allows observation and manipulation... It's remarkable that googling "thermodynamics of the game of life" turns up zero results. 
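For concreteness, the entire "physics" here is a single local update rule. A minimal Python/NumPy sketch of one tick follows; the wrap-around (toroidal) boundary is my simplifying assumption, not part of the original infinite-grid game.

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One tick of Conway's Game of Life on a boolean grid."""
    # Count each cell's 8 neighbors by summing shifted copies of the grid.
    # np.roll wraps around the edges, so this universe is a torus.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return (neighbors == 3) | (grid & (neighbors == 2))

# A glider, the classic "stable circuit": it propagates diagonally unchanged.
grid = np.zeros((8, 8), dtype=bool)
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[y, x] = True
grid = life_step(grid)
```

Any observer or manipulator in Life would have to be a configuration whose evolution under this one rule implements those functions, which is what makes the question hard.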

Biological Holism: A New Paradigm?

The Santa Fe Institute was founded in 1984, the first Macy conferences were in the 1940s, Smuts wrote Holism and Evolution in 1926, Aristotle had three types of soul... what's new about this? 

Open and Welcome Thread - May 2021

Was there a recent post where some expert claimed that deep learning can't deal with... some kind of discreteness? 
