How deep is your skepticism? In the context of consciousness, valence basically means the qualia of value. Are we denying a particular theory of valence, or proposing that valence is a wrong way to think about the phenomenology of value, or denying that there is any phenomenology of value at all?
Hameroff's work is a precious contribution to expanding the scientific imagination, and I include even this latest twist involving time crystals. (Ryan Kidd has studied Floquet dynamics, which underpins the discrete time crystals Hameroff is talking about.) There are Ising-type models of microtubule dynamics, and you can get time crystals in Ising systems... However, I am extremely skeptical of the Bandyopadhyay group's interpretations of its data.
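To make that last claim concrete, here is a minimal numerical sketch, with the caveats flagged up front: this is the standard kicked-Ising toy model of a discrete time crystal (a Floquet system), nothing specific to microtubules, and every parameter in it (N, J, h, eps) is an illustrative choice of mine. The idea is that you evolve a small disordered Ising chain for one period, apply a slightly imperfect global spin flip, and repeat; the site magnetization then oscillates at twice the drive period, which is the subharmonic response that defines a discrete time crystal.

```python
# Hedged sketch: kicked, disordered Ising chain as a discrete time crystal.
# Parameters and disorder ranges are illustrative, not taken from any paper.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N = 8  # number of spins; Hilbert space dimension 2**N = 256

# Single-site Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def op_at(op, site):
    """Embed a single-site operator at `site` in the N-spin chain."""
    mats = [I2] * N
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

SZ = [op_at(sz, j) for j in range(N)]
SX = [op_at(sx, j) for j in range(N)]

# H1: Ising couplings plus random longitudinal fields (disorder stabilizes the phase)
J = rng.uniform(0.5, 1.5, N - 1)
h = rng.uniform(0.0, 1.0, N)
H1 = sum(J[j] * SZ[j] @ SZ[j + 1] for j in range(N - 1)) \
   + sum(h[j] * SZ[j] for j in range(N))

# One Floquet period: free Ising evolution for time t1, then an imperfect
# global pi-pulse (rotation by pi*(1 - eps) about x on every spin).
t1, eps = 1.0, 0.05
U1 = expm(-1j * t1 * H1)
U2 = expm(-1j * (np.pi / 2) * (1 - eps) * sum(SX))
UF = U2 @ U1

# Start fully polarized |00...0> and track <sz> on the middle site.
psi = np.zeros(2 ** N, dtype=complex)
psi[0] = 1.0
mid = SZ[N // 2]
for t in range(8):
    m = np.real(psi.conj() @ (mid @ psi))
    # Expect the sign to alternate each period: a period-2 (subharmonic) response
    print(f"period {t}: <sz_mid> = {m:+.3f}")
    psi = UF @ psi
```

The point of the imperfect pulse (eps > 0) is that the period-doubled response persists anyway; a trivial perfect flip would alternate by construction, but here the rigidity against the perturbation is what earns the "time crystal" label.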
Wanting to destroy all computers and wanting to wirehead everyone is a new combination...
What would it mean to decelerate Less Wrong?
this model strongly believes it is in 2024... This goes away sometimes when you turn search on (which adds the date to the system prompt)
I believe I saw this back in March from Gemini 2.5 Pro, when run from within Google AI Studio (which didn't allow search).
Nonetheless, one certainly needs to be able to say how much cooling is too much, or otherwise characterize the point at which cooling introduces its own form of cognitive degradation...
When you're creating something with the potential to replace the entire human race, you don't do it at all until you are as sure as you can be that you are doing it right. That's the adult attitude, in my opinion.
Unfortunately we are no longer in that world. The precursors of superintelligence have been commercialized and multiple factions are racing to make it more and more powerful, trusting that they can figure out what needs to be done along the way. The question now is, what is the adult thing to do, in a world that is creating superintelligence in this reckless fashion?
But I'm sure that thinking about what would have been the adult way to do it remains valuable.
OK, so you're talking about the conjunction of two things. One is the social and political milieu of Bay Area rationalism. That milieu contains anti-democratic ideologies, and it is adjacent to the actual power elite of American tech, who are implicated in all kinds of nefarious practices. The other has to do with the epistemology, methodology, and community practices of that rationalism per se, which you say render it capable of being coopted by the power philosophy of that amoral elite.
These questions interest me, but I live in Australia and have zero experience of the 21st century Bay Area (and of power elites in general), so I'm at a disadvantage in thinking about the social milieu. If I think about how it's evolved:
Peter Thiel was one of the early sponsors of MIRI (when it was SIAI). At that time, politically, he and Eliezer were known simply as libertarians. This was the world before social media, so politics was more palpably about ideas...
Less Wrong itself was launched during the Obama years, and was designed to be apolitical, but surveys always indicated a progressive majority among the users, with other political identities also represented. At the same time, this was the era in which e.g. Curtis Yarvin's neoreaction began to attract interest and win adherents in the blogosphere, and there were a few early adopters in the rationalist world: SIAI spokesperson Michael Anissimov left to follow the proverbial pipeline from libertarianism to white nationalism, and another group founded "More Right" specifically to discuss political topics banned from Less Wrong, in a way combining rationalist methods with reactionary views.
Here we're approaching the start of the Trump years. Thiel has become Trump's first champion in Silicon Valley, and David Gerard and the Reddit enemies of Less Wrong (/r/sneerclub) have made alleged adjacency to Trump, Yarvin, and "human biodiversity" (e.g. belief in racial IQ differences) central to their critique. At the same time, I would think that the mainstream politics actually suffusing the rationalist milieu at this time is that of Effective Altruism, e.g. the views of Democrat-affiliated Internet billionaires like Dustin Moskovitz and Sam Bankman-Fried.
Then we have the Covid interlude, in which rationalists claim epistemological vindication for having been ahead of the curve, and then before you know it, it's the Biden years and the true era of AI begins with ChatGPT. The complex cultural tapestry of reactions to AI that we now inhabit starts to take shape. Out of these views, those of the "AI safety" world (heavily identified with effective altruism, and definitely adjacent to rationalism) have some influence on the Biden policy response, while the more radical side of progressive opinion will often show affinity with the "anti-TESCREAL" framing coming from Emile Torres et al.
Meanwhile, as Eliezer turned doomer, Thiel has long since distanced himself, to the point that in 2025 he called Eliezer a legionnaire of the Antichrist alongside Greta Thunberg. Newly influential EA gets its nemesis in the form of e/acc, Musk and Andreessen back Trump 2.0, and the new accelerationist "tech right" gets to be a pillar of the new regime, alongside right-wing populism.
In this new landscape, rationalism and Less Wrong still matter, but they are very much not in charge. At this point, the philosophies which matter are those of the companies racing to build AI, and the governments that could shape this process. As far as the companies are concerned, I identify two historic crossroads: Google DeepMind and the old OpenAI. There was a time when DeepMind was the only visible contender to create AI. They had some kind of interaction with MIRI, but I guess you'd have to look to Demis Hassabis and Larry Page to know what the "in-house" philosophy at Google AI was. Then you had the OpenAI project, which continues, but which also involved Musk and spawned Anthropic.
Of all these, Anthropic is evidently the one which (even if they deny it now) is closest to embodying the archetypal views of Effective Altruism and AI safety. You can see this in the way that David Sacks singles them out for particularly vituperative attention, and emphasizes that all Biden's AI people went to work there. OpenAI these days seems to contain a plurality of views that would range from EA to e/acc, while xAI I guess is governed autocratically by Musk's own views, which are an idiosyncratic mix of anti-woke accelerationism and "safety via truth-seeking".
Returning to the rationalist scene events where you see reactionary ideologues, billionaire minions, deep-state specters, and so on, on the guest list... I would guess that what you're seeing is a cross-section of the views among those working on frontier AI. Now that we are in a timeline where superintelligence is being aggressively and competitively pursued, I think it's probably for the best that all factions are represented at these events; it means there's a chance they might listen. At the same time, perhaps something would be gained by also having a purist clique who reject all such associations, and also by the development of defenses against philosophical cooptation, which seems to be part of what you're talking about.
it's hard not to feel some hopelessness that all of these problems can be made legible to the relevant people, even with a maximum plausible effort
A successful book or paper that covered all of these problems should reach a lot of the relevant people.
Any comment on the idea that transformers are purely feed-forward networks, and that this makes introspection impossible?