Giving unsolicited advice and criticism is a strong, credible signal of respect
I have often heard it claimed that giving advice is a bad idea because most people don't take it well and won't actually learn from it.
Giving unsolicited advice/criticism risks:
People benefit from others liking them and not thinking they are stupid, so these are real costs. Some people also don't like offending others.
So clearly it's only worth giving someone advice or criticism if you think at least some of the following are true:
No doubt sycophancy and the fear of expressing potentially friendship-damaging truths allow negative patterns of behavior to continue unimpeded, but I think you've missed the two most important factors in determining whether advice - solicited or unsolicited - is a net benefit to the recipient:
1. you sufficiently understand and have the expertise to comment on their situation, and
2. you can offer new understanding they aren't already privy to.
Perhaps the situations where I envision advice being given are different from yours?
The problem I notice with most unsolicited advice is that it's often something the recipient is already aware of (the classic sitcom example: someone touches a hot dish and is told after the fact, "careful, that pan is hot" - is it good advice? In the sense that it is truthful, maybe, but with the burn already having happened it is no longer useful). This is why it annoys people, and why it is taken as an insult to their intelligence.
A lot of people have already heard the generic or obvious advice and there may be many reasons why they aren't following it,[1] and most of the time hearing this generic advice being repeated will not be of a benefit ev...
On people's arguments against embryo selection
A recent NYT article about Orchid's embryo selection program triggered a backlash on X that surprised me, with people expressing disgust and moral disapproval at the idea of embryo selection. The arguments generally fell into two categories:
(1) "The murder argument" Embryo selection is bad because it involves creating and then discarding embryos, which is like murdering whole humans. This argument also implies regular IVF, without selection, is also bad. Most proponents of this argument believe that the point of fertilization marks a key point when the entity starts to have moral value, i.e. they don't ascribe the same value to sperm and eggs.
(2) "The egalitarian argument" Embryo selection is bad because the embryos are not granted the equal chance of being born they deserve. "Equal chance" here is probably not quite the correct phrase/is a bit of a strawman (because of course fitter embryos have a naturally higher chance of being born). Proponents of this argument believe that intervening on the natural probability of any particular embryo being born is anti-egalitarian and this is bad. By selecting for certain traits we are saying peopl...
People talk about meditation/mindfulness practices making them more aware of physical sensations. In general, having "heightened awareness" is often associated with processing more raw sense data but in a simple way. I'd like to propose an alternative version of "heightened awareness" that results from consciously knowing more information. The idea is that the more you know, the more you notice. You spot more patterns, make more connections, see more detail and structure in the world.
Compare two guys walking through the forest: one is a classically "mindful" type, he is very aware of the smells and sounds and sensations, but the awareness is raw, it doesn't come with a great deal of conscious thought. The second is an expert in botany and birdwatching. Every plant and bird in the forest has interest and meaning to him. The forest smells help him predict what grows around the corner, the sounds connect to his mental map of birds' migratory routes.
Sometimes people imply that AI is making general knowledge obsolete, but they miss this angle—knowledge enables heightened conscious awareness of what is happening around you. The fact that you can look stuff up on Google, or ask an AI assistant, does not actually lodge that information in your brain in a way that lets you see richer structure in the world. Only actually knowing does that.
The risk of incorrectly believing in moral realism
(Status: not fully fleshed out, philosophically unrigorous)
A common talking point is that if you have even some credence in moral realism being correct, you should act as if it's correct. The idea is something like: if moral realism is true and you act as if it's false, you're making a genuine mistake (i.e. by doing something bad), whereas if it's false and you act as if it's true, it doesn't matter (i.e. because nothing is good or bad in this case).
I think this way of thinking is flawed, and in fact, the opposite argument can be made (albeit less strongly): if there's some credence in moral realism being false, acting as if it's true could be very risky.
The "act as if moral realism is true if unsure" principle contrasts moral realism, (i.e. that there is an objective moral truth, independent of any particular mind) with nihilism (i.e. nothing matters). But these are not the only two perspectives you could have. Moral subjectivism is a to-me intuitively compelling anti-realist view, which says that the truth value of moral propositions is mind-dependent (i.e. based on an individual's beliefs about what is right and wrong).
From...
I think people who predict significant AI progress and automation often underestimate how human domain experts will continue to be useful for oversight, auditing, accountability, keeping things robustly on track, and setting high-level strategy.
Having "humans in the loop" will be critical for ensuring alignment and robustness, and I think people will realize this, creating demand for skilled human experts who can supervise and direct AIs.
(I may be responding to a strawman here, but my impression is that many people talk as if in the future most cognitive/white-collar work will be automated and there'll be basically no demand for human domain experts in any technical field, for example.)
Inspired by a number of posts discussing owning capital + AI, I'll share my own simplistic prediction on this topic:
Unless there is a hostile AI takeover, humans will be able to continue having and enforcing laws, including the law that only humans can own and collect rent from resources. Things like energy sources, raw materials, and land have inherent limits on their availability - no matter how fast AI progresses we won't be able to create more square feet of land area on earth. By owning these resources, you'll be able to profit from AI-enabled economic growth as this growth will only increase demand for the physical goods that are key bottlenecks for basically all productive endeavors.
To elaborate further/rephrase: sure, you can replace human programmers with vastly more efficient AI programmers, decreasing the human programmers' value. In a similar fashion you can replace a lot of human labor. But an equivalent replacement for physical space or raw materials for manufacturing does not exist. With an increase in demand for goods caused by a growing economy, these things will become key bottlenecks and scarcity will increase their price. Whoever owns them (some humans) will be collecting a lot of rent.
Even simpler version of the above: economics traditionally divides the factors of production into land, labor, capital, and entrepreneurship. If labor costs go toward zero, you can still hodl some land.
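To make that intuition concrete, here is a minimal toy sketch of my own (not from the original post) using a Cobb-Douglas production function with land in fixed supply; all parameter values are made up. As AI multiplies the effective labor supply, output grows while land does not, so the competitive rent accruing to each unit of land rises:

```python
# Toy Cobb-Douglas model (illustrative only; all numbers are made up).
# Output Y = A * labor^alpha * land^(1 - alpha). Land is in fixed supply,
# and under competitive pricing its total rent is (1 - alpha) * Y, so
# cheap, abundant AI labor drives up the rent per unit of land.

def land_rent_per_unit(effective_labor, land=100.0, tfp=1.0, alpha=0.7):
    output = tfp * effective_labor**alpha * land**(1 - alpha)
    total_land_rent = (1 - alpha) * output   # land's competitive income share
    return total_land_rent / land

for labor in [100, 1_000, 10_000]:  # AI multiplies effective labor supply
    print(f"effective labor {labor:>6}: rent per unit of land = "
          f"{land_rent_per_unit(labor):.2f}")
```

The point is only the qualitative direction: a fixed factor captures growing absolute rent when its complementary inputs become cheap and abundant, regardless of whether Cobb-Douglas is the right functional form.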
Besides the hostile AI takeover scenario, why could this be wrong (/missing the point)?
Space has resources people don't own. The earth's mantle a couple thousand feet down potentially has resources people don't own. More to the point maybe, I don't think humans will be able to continue enforcing laws barring a hostile takeover in the way you seem to think.
Imagine we find out that aliens are headed for earth and will arrive in a few years. Just from the light emissions their probes and expanding civilisation give off, we can infer that they're obviously more technologically mature than us, probably already engineered themselves to be much smarter than us, and can basically do whatever they want with the atoms that make up our solar system and there's nothing we can do about it. We don't know what they want yet though. Maybe they're friendly?
I think guessing that the aliens will be friendly and will share human morality to some extent is a pretty specific guess about their minds to be making, and is more likely false than not. But guessing that they don't care about human preferences or well-being but do care about human legal structures, that they won't at all help you or gift you things, also won't disassemble you and your property for its atoms[1], but will t...
Ownership is enforced by physical interactions, and only exists to the degree the interactions which enforce it do. Those interactions can change.
As Lucius said, resources in space are unprotected.
Organizations which hand more of their decision-making to sufficiently strong AIs "win" by making technically legal moves, at the cost of probably also attacking their owners. Money is a general power coupon accepted by many interactions; ownership deeds are a more specific, narrow one. If the AI systems which enforce these mechanisms don't systematically reinforce toward outcomes where the things available to buy actually satisfy the preferences of the remaining humans who own AI stock or land, then the owners can end up with a lot of money and no non-deadly food, while datacenters grow and grow, taking up energy and land with (semi?-)autonomously self-replicating factories or the like. If money-like exchange continues to be how the physical economy is managed in AI-to-AI interactions, these self-replicating factories might end up adapted to make products that the market will buy. But if the majority of the buying power is AI-controlled corporations, then figuring out how to best manipulate ...
Whenever I read yet another paper or discussion of activation steering to modify model behavior, my instinctive reaction is to slightly cringe at the naiveté of the idea. Training a model to do some task only to then manually tweak some of the activations or weights using a heuristic-guided process seems quite un-bitter-lesson-pilled. Why not just directly train for the final behavior you want—find better data, tweak the reward function, etc.?
But actually there may be a good reason to continue working on model-internals control (i.e. ways of influencing model behavior outside of modifying the text input or training process, by directly changing internal state). For some applications, you may want to express something in terms of the model’s own abstractions, something that you won’t know a priori how to do in text or via training data in fine-tuning. Throughout the training process, a model naturally learns a rich semantic activation space. And in some cases, the “cleanest” way to modify its behavior is by expressing the change in terms of its learned concepts, whose representations are sculpted by exaflops of compute.
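As a concrete illustration of what "model-internals control" can look like in practice, here is a minimal activation-steering sketch in PyTorch (my own illustration, not something from the original discussion). It assumes a GPT-2-style HuggingFace model where `model.transformer.h[layer]` is a transformer block whose output tuple has the hidden states first; the steering vector is just the difference of mean activations on two small prompt sets, which is one common heuristic rather than the only option.

```python
import torch

# Sketch: add a "steering vector" to the residual stream of one transformer
# block at inference time. Assumes a GPT-2-style HuggingFace model where
# model.transformer.h[LAYER] is a block returning (hidden_states, ...).
# Layer index, scale, and prompt sets are all illustrative choices.

LAYER, SCALE = 8, 4.0

def mean_activation(model, tokenizer, prompts, layer):
    """Mean residual-stream activation at `layer` over the given prompts."""
    acts = []
    def hook(_module, _inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        acts.append(hidden.mean(dim=(0, 1)))  # average over batch and positions
    handle = model.transformer.h[layer].register_forward_hook(hook)
    with torch.no_grad():
        for p in prompts:
            model(**tokenizer(p, return_tensors="pt"))
    handle.remove()
    return torch.stack(acts).mean(dim=0)

def add_steering_hook(model, vector, layer, scale):
    """Shift the layer's output along `vector` on every forward pass."""
    def hook(_module, _inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + scale * vector
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden
    return model.transformer.h[layer].register_forward_hook(hook)

# Usage sketch (model/tokenizer loading omitted):
# v = mean_activation(model, tok, positive_prompts, LAYER) \
#     - mean_activation(model, tok, negative_prompts, LAYER)
# handle = add_steering_hook(model, v, LAYER, SCALE)
# ... model.generate(...) now runs with the steered residual stream ...
# handle.remove()
```

The interesting part is the interface: behavior is modified by writing directly into the model's own activation space rather than by changing the prompt or the training data.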
The motte and bailey of transhumanism
Most people on LW, and even most people in the US, are in favor of disease eradication, radical life extension, reduction of pain and suffering. A significant proportion (although likely a minority) are in favor of embryo selection or gene editing to increase intelligence and other desirable traits. I am also in favor of all these things. However, endorsing this form of generally popular transhumanism does not imply that one should endorse humanity’s succession by non-biological entities. Human “uploads” are much riskier than any of the aforementioned interventions—how do we know if we’ve gotten the upload right, how do we make the environment good enough without having to simulate all of physics? Successors that are not based on human emulation are even worse. Deep learning based AIs are detached from the lineage of humanity in a clear way and are unlikely to resemble us internally at all. If you want your descendants to exist (or to continue existing yourself), deep learning based AI is no equivalent.
Succession by non-biological entities is not a natural extension of “regular” transhumanism. It carries altogether new risks and in my opinion would almost certainly go wrong by most current people’s preferences.
The term “posthumanism” is usually used to describe “succession by non-biological entities”, for precisely the reason that it’s a distinct concept, and a distinct philosophy, from “mere” transhumanism.
(For instance, I endorse transhumanism, but am not at all enthusiastic about posthumanism. I don't really have any interest in being "succeeded" by anything.)
I find this position on ems bizarre. If the upload acts like a human brain, and the uploads also seem normal-ish after interacting with them a bunch, I feel totally fine with them.
I also am more optimistic than you about creating AIs that have very different internals but that I think are good successors, though I don't have a strong opinion.
Criticism quality-valence bias
Something I've noticed from posting more of my thoughts online:
People who disagree with your conclusion to begin with are more likely to carefully read and point out errors in your reasoning/argumentation, or instances where you've made incorrect factual claims. Whereas people who agree with your conclusion before reading are more likely to consciously or subconsciously gloss over any flaws in your writing because they are onboard with the "broad strokes".
So your best criticism ends up coming with a negative valence, i.e. from people who disagree with your conclusion to begin with.
(LessWrong has much less of this bias than other places, though I still see some of it.)
Could HGH supplementation in children improve IQ?
I think there's some weak evidence that it could. In some studies where HGH is given for other reasons (a variety of developmental disorders, as well as cases where the child is unusually small or short), an IQ increase or other improved cognitive outcomes are observed. The fact that this occurs in a wide variety of situations suggests it could be a general effect that might apply to healthy children.
Examples of studies (caveat: produced with the help of ChatGPT, I'm including null results also). Left colum...
On optimizing for intelligibility to humans (copied from substack)
...One risk of “vibe-coding” a piece of software with an LLM is that it gets you 90% of the way there, but then you’re stuck—the last 10% of bug fixes, performance improvements, or additional features is really hard to figure out because the AI has written messy, verbose code that both of you struggle to work with. Nevertheless, delegating software engineering to AI tools is more tempting than ever. Frontier models can spit out almost-perfect complex React apps in just a minute, something that
Think clearly about the current AI training approach trajectory
If you start by discussing what you expect to be the outcome of pretraining + light RLHF then you're not talking about AGI or superintelligence or even the current frontier of how AI models are trained. Powerful, general AI requires serious RL on a diverse range of realistic environments, and the era of this has just begun. Many startups are working on building increasingly complex, diverse, and realistic training environments.
It's kind of funny that so much LessWrong arguing has been around wh...
What, concretely, is being analogized when we compare AI training to evolution?
People (myself included) often handwave what is being analogized when it comes to comparing evolution to modern ML. Here's my attempt to make it concrete:
One implication of this is that we...