I agree with some of your premises.
But the discussions about class struggle typically imply more things, such as:
There is a war, and to choose to ignore it is to roll over and die.
By a similar logic, one should join an organized crime group, because there is crime, and to choose to ignore it is to roll over and die.
(Or, imagine the same argument, except for a race war.)
Has anyone tried talking to GPT in a Slavic language? My experience is that, in general, it can talk in Slovak, but sometimes it uses words that seem to come from other Slavic languages. I think either it depends on how much input it had from each language (and there are relatively few Slovak texts online compared to other languages), or the Slavic languages are simply so similar to each other (some words are the same in multiple languages) that GPT has a problem remembering the exact boundary between them. Does anyone know more about this?
I get especially silly results when I ask (in Slovak) "Could you please write me a few Slovak proverbs?" In GPT-3.5, only one out of ten examples is correct. (I suspect that some of the "proverbs" are mis-translations from other languages, and some are pure hallucinations.)
editing the whole thing is so much extra work after I already did all the work figuring out what I think.
typically I don't want to spend the marginal time.
Yeah. Similar here, only I am aware of this in advance, so I often simply write nothing, because I am a bit of a perfectionist here, don't want to publish something unfinished, and know that finishing just isn't worth it.
I wonder whether AI editors could help us with this.
Human reasoning about mathematics can be implemented in physics, yes.
All of mathematics can be encoded as just taking some premises (observations) which use ELEMENT OF, AND, NOT, and FORALL, and figuring out which other observations are guaranteed if we have observed all the premises.
This sounds like first-order logic. Which, I think, cannot even define natural numbers unambiguously.
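For reference, here is the standard way to see that limitation (my own summary, not something from the quoted argument): the induction axiom that pins down the natural numbers up to isomorphism quantifies over all properties, which makes it second-order:

```latex
% Second-order induction: P ranges over arbitrary properties of numbers.
\forall P \,\bigl[\, P(0) \,\land\, \forall n\,(P(n) \rightarrow P(n+1)) \;\rightarrow\; \forall n\, P(n) \,\bigr]
```

First-order Peano arithmetic can only approximate this with an axiom schema, one instance per definable formula, and by the compactness theorem it still has nonstandard models containing elements larger than every numeral.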
Also, I think you stretched the meaning of "observation" beyond its usual limits.
Maybe; I am not familiar with the details. But if something similar worked before, it might work again; that's good news.
I am imagining something similar to the EA Hotel, but for scientists. Selected scientists (not sure by what criteria) could get free accommodation and some unconditional basic income on top of that, for 5 or 10 years, with the possibility of an extension determined by a review 1 year before the end. They would live next to other scientists, so they would have opportunities to debate. In an ideal case, there would be a university nearby, where they could offer lectures to students.
Depending on the type of science, research can be expensive (a physicist needing a particle accelerator) or cheap (a mathematician needing a pen and paper). I am imagining the cheap side here, where UBI would solve most of the problem, and remove the need to ask for grants constantly.
That said, grant money is nice; the only downside is that it takes the scientists' time and attention. So maybe the science accelerator could hire people who would handle the bureaucracy as a full-time job. I imagine, probably too naively, that a scientist would just provide an abstract of their work, and the specialist would keep spamming the institutions. Similarly, a scientist would provide a paper, and the specialist would keep spamming the journals. (Or, if the project gets large enough, perhaps it could have a journal of its own.)
The general principle, yes. Not sure if there was an article specifically about the rationalist community.
AI content for specialists
There is a lot of AI content recently, and it is sometimes of the kind that requires specialized technical knowledge, which I (an ordinary software developer) do not have. Similarly, articles on decision theories are often written in a way that assumes a lot of background knowledge that I don't have. As a result there are many articles I don't even click on, and if I accidentally do, I just sigh and close them.
This is not necessarily a bad thing. As something develops, inferential distances increase. So maybe, as a community, we are developing a new science, and I simply cannot keep up with it. -- Or maybe it is all crackpottery; I wouldn't know. (Would you? Are some of us upvoting content we are not sure about, just because we assume it must be important? This could go horribly wrong.) Which is a bit of a problem for me, because now I can no longer recommend Less Wrong in good faith as a source of rational thinking. Not because I see obviously wrong things, but because there are many things where I have no idea whether they are right or wrong.
We have had some AI content and decision theory here since the beginning. But those articles written back then by Eliezer were quite easy to understand, at least for me. For example, "How An Algorithm Feels From Inside" doesn't require anything beyond high-school knowledge. Compare it to "Hypothesis: gradient descent prefers general circuits". Probably something important, but I simply do not understand it.
Just like historically MIRI and CFAR split into two organizations, maybe Less Wrong should too.
Feeling of losing momentum
I miss the feeling that something important is happening right now (and I can be a part of it). Perhaps it was just an illusion, but in the first years of Less Wrong it felt like we were doing something important -- building the rationalist community, inventing the art of everyday rationality, with the prospect of raising the general sanity waterline.
It seems to me that we gave up on the sanity waterline first. The AI is near; we need to focus on the people who will make a difference (whom we could recruit for AI research); there is no time to care about the general population.
Although recently, this baton was taken over by the Rational Animations team!
Is the rationalist community still growing? Offline, I guess it depends on the country. In Bratislava, where I live, it seems that ~ no one cares about rationality. Or effective altruism. Or Astral Codex Ten. Having five people at a meetup is a big success. Nearby Vienna is doing better, but it is merely climbing back to pre-COVID levels, not growing. Perhaps it is better in some other parts of the world.
Online, new people are still coming. Good.
Also, big thanks to all the people who keep this website running.
But it no longer feels to me like I am here to change the world. It is just another form of procrastination, albeit a very pleasant one. (Maybe because I do not understand the latest AI and decision theory articles; maybe all the exciting things are there.)
Etc.
Some dialogues were interesting, but most were meh.
My greatest personal pet peeve was solved: people no longer talk uncritically about Buddhism and meditation. (Instead of talking more critically, they just stopped talking about it at all. Works for me, although I hoped for some rational conclusion.)
It is difficult for me to disentangle what happens in the rationalist community from what happens in my personal life. Since I have kids, I have less free time. If I had more free time, I would probably be recruiting for the local rationality (+adjacent) community, spending more time with other rationalists, maybe even writing some articles... so it is possible that my overall impression would be quite different.
(Probably forgot something; I may add some points later.)
(Duncan Sabien has announced that he likely won't post on LessWrong anymore. [...] I feel like LessWrong is losing a lot here: Sabien is clearly a top rationality writer.)
I think that Duncan writing on his own blog, and us linking to the good posts from LW, may be the best solution for both sides. (Duncan approves of being linked.)
This reminded me of a bug I spent weeks figuring out, at the beginning of my career. Not sure if something like this would qualify, and I do not have the code anyway.
I wrote a relatively simple piece of code that called a C library produced at the same company. Other people had used the same library for years without any issues. My code worked correctly on my local machine; worked correctly during testing; and when deployed to the production server, it worked correctly... for about an hour... and then it stopped working.
I had no idea what to do about this. I was an inexperienced junior programmer; I didn't have direct access to the production machine and there was nothing in the logs; and I could not reproduce the bug locally, and neither could the tester. No one else had any problem using the library, and I couldn't see anything wrong in my code.
About a month later, I figured out...
...that at some point, the library generated a temporary file in the system temporary directory...
...the temporary file had a name generated randomly...
...the library even checked for the (astronomically unlikely) possibility that a file with the given name might already exist, in which case it would generate another random name and check again (up to 100 times, and then it would give up, because potentially infinite loops were not allowed by our strict security policy).
Can you guess the problem now?
The random number generator was initialized in the library's C code during some (but not all) of the API calls. My application happened to be the first one in the company's existence that needed only the subset of API calls which did not initialize it. Thus during the first 100 calls the temporary files were generated deterministically, and from the 101st call onward the application crashed. On my local computer and on the tester's computer, the system temporary directory was cleared at each reboot, so only production actually ran out of the 100 deterministically generated file names.
If anyone wants to reproduce this in Python and collect the reward, feel free to do so.
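For what it's worth, here is a minimal Python sketch of the mechanism (not the original C code, which I no longer have; the file name prefix and the assumption that each call runs with fresh process state are mine):

```python
import os
import random
import tempfile

MAX_ATTEMPTS = 100

def library_call():
    # Unseeded C rand() starts from the same internal state in every fresh
    # process, so the "random" candidate names come out in the same fixed
    # order every time. A fixed seed stands in for the never-called srand().
    rng = random.Random(1)
    for _ in range(MAX_ATTEMPTS):
        name = os.path.join(tempfile.gettempdir(),
                            f"repro_{rng.randrange(10**9):09d}.tmp")
        if not os.path.exists(name):
            with open(name, "w") as f:
                f.write("temporary data")  # the file is never cleaned up
            return name
    raise RuntimeError("no free temporary file name after 100 attempts")

# Each iteration stands in for one run of the application on the server.
# Runs 1..100 each grab the next free name from the fixed sequence;
# run 101 finds all hundred names taken and "crashes".
for run in range(1, 102):
    print(run, library_call())
```

Running it a second time fails immediately on the first call, and it keeps failing until the repro_*.tmp files are deleted from the temporary directory, which mirrors how the application kept failing once the hundred names had accumulated on the production server.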