Dwarkesh asked a very interesting question in his Sutton interview, which Sutton wasn't really interested in replying to.
Dwarkesh notes that one idea for why the bitter lesson was true is that general methods got to ride the wave of exponentially growing computing power while knowledge engineering could not, as labour was comparatively fixed in supply. He then notices that post-AGI, the labour supply will increase at a very rapid pace. And so he wonders: once the labour constraint is relaxed post-AGI, will GOFAI make a comeback? For we will then be able to afford the proverbial three billion philosophers writing Lisp predicates, or whatever other high-labour AI techniques turn out to be possible.
Of course, the same consideration applies to theoretical agent-foundations-style alignment research.
I don't actually buy this argument, but I think it's a very important argument for someone to make, and for people to consider carefully. So thank you to Dwarkesh for proposing it, and to you for mentioning it!
I've been writing up a long-form argument for why "Good Old Fashioned AI" (GOFAI) is a hopeless pipe dream. I don't know if that would actually remain true for enormous numbers of superintelligent programmers! But if I had to sketch out the rough form of the argument, it would go something like this:
As a former robotics developer, I feel the bitter lesson in my bones. This is actually one of the points I plan to focus on when I write up the longer version of my argument.
High-quality manual dexterity (and real-time visual processing) in a cluttered environment is a heartbreakingly hard problem, using any version of GOFAI techniques I knew at the time. And even the most basic of the viable algorithms quickly turned into a big steaming pile of linear algebra mixed with calculus.
As someone who has done robotics demos (and who knows all the things an engineer can do to make sure the demos go smoothly), the Figure AI groceries demo still blows my mind. This demo is well into the "6 impossible things before breakfast" territory for me, and I am sure as hell feeling the imminent AGI when I watch it. And I think this version of Figure was an 8B VLLM connected to an 80M specialized motor control model running at 200 Hz? Even if I assume that this is a very carefully run demo showing Figure under ideal circumstances, it's still black magic fuckery for me.
But it's really hard to communicate this intuitive reaction to someone who hasn't spent years working on GOFAI robotics. Some things se...
Edit: this comment is no longer relevant because the text it talks about was removed.
What do you mean by "I am not sure OpenPhil should have funded these guys"? Edit for context: OpenPhil funded Epoch where they previously worked, but hasn't funded Mechanize where they currently work. Are you joking? Do you think it's bad to fund organizations that do useful work (or that you think will do useful work) but which employ people who have beliefs that might make them do work that you think is net negative? Do you have some more narrow belief about pressure OpenPhil should be applying to organizations that are trying to be neutral/trusted?
I think it's probably bad to say stuff (at least on LessWrong) like "I am not sure OpenPhil should have funded these guys" (the image is fine satire I guess) because this seems like the sort of thing which yields tribal dynamics and negative polarization. When criticizing people, I think it's good to be clear and specific. I think "humor which criticizes people" is maybe fine, but I feel like this can easily erode epistemics because it is hard to respond to. I think "ambiguity about whether this criticism is humor / meant literally etc" is much worse (and common on e.g. X/twitter).
Tbh, I just needed some text before the image. But I have negative sentiment toward both Epoch and OpenPhil. From my perspective, creating benchmarks to measure things adjacent to RSI is likely net negative, and teetering on the edge of gain of function. Such measures so often become a target. And it should not come as a surprise, I think, that key Epoch people just went off and actively started working on this stuff.
I mean, I'm not super impressed with their relationship with OpenAI re: FrontierMath. The org has a bad smell to me, but I won't claim to know the whole of what they've done.
I don't necessarily disagree with what you literally wrote. But also, at a more pre-theoretic level, IMO the sequence of events here should be really disturbing (if you haven't already been disturbed by other similar sequences of events). And I don't know what to do with that disturbedness, but "just feel disturbed and don't do anything" also doesn't seem right. (Not that you said that.)
(For clarity: Open Phil funded those guys in the sense of funding Epoch, where they previously worked and where they probably developed a lot of useful context and connections, but AFAIK hasn't funded Mechanize.)
Hat in hand and on bended knee,
to Fate, I beg, "Ask not of me!"
to do that thing (so horrid, true)
In my secret heart I long to do.
I never thought I was anxious, but my sister pointed out that I have structured my life to an unusual degree to avoid stimuli that could create anxiety: I don't drive; I worked from home my entire career, even before it was cool, at a dev job at a company of the ideal size to let me avoid office politics or excessive meetings; I spend most of my time reading alone; my life is highly routine; I often eat the same foods day after day until I tire of them and move on to the next thing; and I travel rarely.
I have no idea if she is right or not. And if she is, I am unsure whether it matters, so long as such things grant me some peace. But the idea that the shape of one's life may be, in part, an unconscious treatment for mental flaws is a disquieting one. And it may be worth asking yourself: looking at the structure of my life, what symptoms might I be unconsciously treating?
To me, this doesn’t sound related to “anxiety” per se, instead it sounds like you react very strongly to negative situations (especially negative social situations) and thus go out of your way to avoid even a small chance of encountering such a situation. I’m definitely like that (to some extent). I sometimes call it “neuroticism”, although the term “neuroticism” is not great either, it encompasses lots of different things, not all of which describe me.
Like, imagine there’s an Activity X (say, inviting an acquaintance to dinner), and it involves “carrots” (something can go well and that feels rewarding), and also “sticks” (something can go badly and that feels unpleasant). For some people, their psychological makeup is such that the sticks are always especially painful (they have sharp thorns, so to speak). Those people will (quite reasonably) choose not to partake in Activity X, even if most other people would, at least on the margin. This is very sensible, it’s just cost-benefit analysis. It needn’t have anything to do with “anxiety”. It can feel like “no thanks, I don’t like Activity X so I choose not to do it”.
(Sorry if I’m way off-base, you can tell me if this doesn’t resonate with your experience.)
Before Allied victory, one might have guessed that the peoples of Japan and Germany would be difficult to pacify and would not integrate well with a liberal regime. For the populations of both showed every sign of virulent loyalty to their governments. It's commonly pointed out that it is exactly this seemingly virulent loyalty that implied their populations would be easily pacified once their governments fell, as indeed they were. To put it in crude terms: having been domesticated by one government, they were easily domesticated by another.
I have been thinking a bit about why I was so wrong about Trump. Though of course if I had a vote I would have voted for Kamala Harris and said as much at the time, I assumed things would be like his first term where (though a clown show) it seemed relatively normal given the circumstances. And I wasn't particularly worried. I figured norm violations would be difficult with hostile institutions, especially given the number of stupid people who would be involved in any attempt at norm violations.
Likely most of me being wrong here was my ignorance, as a non-citizen and someone generally not interested in politics, of American civics and how the sit...
Reading AI 2027, I can't help but laugh at the importance of the president in the scenario. I am sure it has been commented before but one should probably look at the actual material one is working with.
I expect this to backfire with most people because it seems that their concept of the authors hasn't updated in sync with the authors, and so they will feel that when their concept of the authors finally updates, it will seem very intensely like changing predictions to match evidence post-hoc. So I think they should make more noise about that, eg by loudly renaming AI 2027 to, eg, "If AI was 2027" or something. Many people (possibly even important ones) seem to me to judge public figures' claims based on the perceiver's conception of the public figure rather than fully treating their knowledge of a person and the actual person as separate. This is especially relevant for people who are not yet convinced and are using the boldness of AI 2027 as reason to update against it, and for those people, making noise to indicate you're staying in sync with the evidence would be useful. It'll likely be overblown into "wow, they backed out of their prediction! see? ai doesn't work!" by some, but I think the longer term effect is to establish more credibility with normal people, eg by saying "nearly unchanged: 2028 not 2027" as your five words to make the announcement.
There is the classic error of conflating the normative with the descriptive, presuming that what is good is also true. But the inverse is also a mistake I see people make all the time: conflating the descriptive with the normative, presuming that what is true is also good. The descriptive is subject to change by human action, so maybe the latter is the worse of the two mistakes. Crudely, the stereotypical liberal makes the former mistake and the stereotypical reactionary makes the latter.
You can just not go bald. Finasteride works as long as you start early. The risk of ED is not as high as people think: at worst, it doubles the risk compared to placebo. If you have bad side effects, quitting resolves them, though it can take about a month for DHT levels to return to normal. Some men even have an increased sex drive, due to the slight bump in testosterone it gives you.
I think society has weird memes about balding and male beauty in general. Stoically accepting a disfigurement isn't particularly noble. You could "just shave it bro" or you could just take a pill every day, which is easier than shaving your head. Hair is nice. It's perfectly valid to want to keep your hair. Consider doing this if you like having hair.
Finasteride prevents balding but provides only modest regrowth. If you think you will need to start, start as soon as possible for the best results.
Note that there have been many reports of persistent physiological changes caused by 5-AR inhibitors such as finasteride (see: Post Finasteride Syndrome), some of which sound pretty horrifying, like permanent brain fog and anhedonia.
I've spent a lot of time reading through both the scientific literature and personal anecdotes and it seems like such adverse effects are exceedingly rare, but I have high confidence (>80%) that they are not completely made up or psychosomatic. My current best guess is that all such permanent effects are caused by some sort of rare genetic variants, which is why I'm particularly interested in the genetic study being funded by the PFS network.
The whole situation is pretty complex and there's a lot of irrational argumentation on both sides. I'd recommend this Reddit post as a good introduction – I plan on posting my own detailed analysis on LW sometime in the future.
"I think society has weird memes about balding and male beauty in general. Stoically accepting a disfigurement isn't particularly noble"
I think calling natural balding "disfigurement" is in line with the weird memes around male beauty.
Not having hair isn't harmful.
Disclaimer: I may go bald.
I was watching the PirateSoftware drama. There is this psychiatrist, Dr. K, who interviewed him after the internet hated him, and everyone praised Dr. K for calling him out or whatever. But much more fascinating is Dr. K's interview with PirateSoftware from a year before, in which PirateSoftware expertly manipulates Dr. K into thinking he is an enlightened being and was likely an accomplished yogi in a past life. If you listen to the interview, he starts picking up on Dr. K's spiritual beliefs and playing into them subtly.
I figured PirateSoftware must be stupider than I estimated given his weird coding decisions, but I bet he is legit very smart. His old job was doing white hat phishing and social engineering and I imagine he was very good at it.
Warning: spoilers for my last two stories.
There is one thing I wanted to mention about my The Origami Men story, which may be useful only to those who write fiction, if at all. One of the things I was attempting to do was to write a story with all the trappings of satire and irony but with zero actual satire and irony. I felt bad for pissing on Wallace's E Unibus Pluram in The Company Man (which I was not expecting to be as popular as it was) and I thought about his project to reach "the modern reader," which he thought difficult to reach because everyone was so irony poisoned even in the 1990s. I think Wallace's approach was to sort of like say, "yes we both know we both know what's going on here, shall we try to feel something anyway?" And I had an idea that another maybe-more-deceptive tack would be to entirely remove these dimensions while keeping all the trappings, the theory being that "the modern reader" would feel themselves comfortable and open and protected by an irony blanket that wasn't actually there and then be reachable under whatever Wallace's definition of "reachable" was. I think he maybe expected too much from fiction sometimes but hopefully The Origami Men will,...
Inadequate Equilibria lists the example of bright lights to cure SAD. I have a similar idea, though I have no clue if it would work. Can we treat blindness in children by just creating a device that gives them sonar? I think it would be a worthy experiment to create a device that makes inaudible chirps and then translates their echoes into the audible range, transmitting them to headphones the child wears. Maybe their brains will just figure it out? Alternatively, an audio interface to a lidar or a depth-estimation model might do, too.
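To make the idea concrete, here is a minimal sketch of one way the chirp-and-downshift loop could work. This is purely illustrative: the library choices (sounddevice, scipy) and every parameter are my own assumptions, and a real device would need speakers and microphones that actually operate at ultrasonic frequencies.

```python
# Minimal sketch: emit an inaudible (ultrasonic) chirp, record the echoes,
# then shift them into the audible range by replaying the recording at a
# lower sample rate (pitch drops by the same ratio the playback is slowed).
# Assumes audio hardware capable of 96 kHz playback and capture.
import numpy as np
import sounddevice as sd
from scipy.signal import chirp

FS = 96_000          # sample rate for playback and capture
PING_LEN = 0.02      # 20 ms chirp
LISTEN_LEN = 0.2     # record 200 ms to catch the echoes

# Inaudible sweep from 25 kHz to 45 kHz.
t = np.linspace(0, PING_LEN, int(FS * PING_LEN), endpoint=False)
ping = 0.5 * chirp(t, f0=25_000, f1=45_000, t1=PING_LEN)

# Pad with silence so recording continues after the chirp ends,
# then play and record simultaneously.
out = np.concatenate([ping, np.zeros(int(FS * (LISTEN_LEN - PING_LEN)))])
echo = sd.playrec(out.astype(np.float32), samplerate=FS, channels=1)
sd.wait()

# Replay the captured echoes 4x slower: the 25-45 kHz sweep lands at
# roughly 6-11 kHz, well inside the audible range (time is stretched too).
sd.play(echo, samplerate=FS // 4)
sd.wait()
```

A wearable version would presumably run this loop continuously and in stereo, so the child gets direction cues as well as distance, but that is beyond a sketch.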
I am a strong believer that nanotechnology is possible, which seems to be a sort of antimeme. And tons of people who should really know better seem to treat acknowledging the physical possibility of Drexlerish nanotech as evidence that someone is crazy - it is amusing to look at the AGI takes of these same people five years ago. They are mostly using the exact same idiotic intuitions in exactly the same way for the exact same reasons.
But maybe this being an antimeme is good? Perhaps it's best that people are holding the idiot ball on the topic? On one hand, I don't think lying is good, even by omission. And to the extent that denying nanotech is load-bearing in their claims that takeoff will be slow (by the Christiano definition), getting them to see through the antimeme is useful. As an aside, I think people do forget that we have seen little evidence so far, at least in terms of economic growth, that we are living in Christiano's predicted world; I get the impression, sometimes, that some people think we have.
But also, we are getting very powerful tools that make a Drexlerian project more and more plausible, which has its own risks, including the indirect risk of increasing available compute. So perhaps we are fortunate nanotechnology is so incredibly low status? Sama would probably just try to do it if it were not.
There is a journal called Nanotechnology. It reports a steady stream of developments pertaining to nanoscale and single-molecule design and synthesis. So that keeps happening.
What has not happened, is the convergence of all these capabilities in the kind of universal nanosynthesis device that Drexler called an assembler, and the consequent construction of devices that only it could make, such as various "nanobots".
It is similar to the fate of one of his mentors, Gerard O'Neill, who in the 1970s led all kinds of research into the construction of space colonies and the logistics of space industrialization. Engineering calculations were done, supply chains were proposed; one presumes that some version of all that is physically possible, but no version of it was ever actually carried out.
In that case, one reason why is the enormous budgets involved. But another reason is political and cultural. American civilization was visionary enough to conceive of such projects, but not visionary enough to carry them out. Space remained the domain of science, comsats, spysats, and a token human presence at the International Space Station, but even returning to the moon...
Here Casey Muratori talks about computer programming being automated. Ignoring the larger concerns of AI for a minute, which he doesn't touch, I just thought this was a beautiful, high-integrity meditation on the prospect of the career he loves becoming unremunerative: https://youtu.be/apREl0KmTdQ?si=So1CtsKxedImBScS&t=5251
I have had the Daylight Tablet for a couple of months. I really like it. It is very overpriced, but the screen is great and the battery life is good. People who read a lot of PDFs or manga, in particular, might like it.
At risk of sharing slop, Suno 4.5 Beta is amazing: https://suno.com/song/6b6ffd85-9cd2-4792-b234-40db368f6d6c?sh=utBip8t6wKsYiUE7