" (...) the term technical is a red flag for me, as it is many times used not for the routine business of implementing ideas but for the parts, ideas and all, which are just hard to understand and many times contain the main novelties."
- Saharon Shelah
"A little learning is a dangerous thing ;
Drink deep, or taste not the Pierian spring" - Alexander Pope
As a true-born Dutchman I endorse Crocker's rules.
For most of my writing see my short-forms (new shortform, old shortform)
Twitter: @FellowHominid
Personal website: https://sites.google.com/view/afdago/home
One takeaway for me is that the American presidency is extremely powerful - especially when you don't care about passing legislation or popularity.
Unlimited pardons and vetoes are powers that have been used only sporadically in the past, limited mostly by convention. Just reading the constitutional text as written, the presidency is wildly powerful, especially with a Supreme Court following a unitary-executive interpretation and a lame-duck Congress that does not care to insist on its war-declaration prerogative.
I'm amused that the lightcone may have been lost in the 1790s, when the US constitutional framework was designed.
A friend of mine visited the recent 'eugenics'* conference in the Bay. It had all the prominent people in this area attending IIRC, e.g. Steve Hsu. My friend asked around about how realistic these numbers were; he told me that the majority of serious people he spoke with were skeptical of IQ gains >~3 points.
*Sorry, I don't remember what it was called.
I know you know this, but I thought it important to emphasize that your first point plausibly understates the problem with pragmatic/black-box methods. In the worst case, an AI may simply encrypt its thoughts.
It's not even an oversight problem; there is simply nothing to 'oversee'. It will think its evil thoughts in private. The AI will comply with all the evals you can cook up until it's too late.
This looks exciting. As Jeremy said, the length raises an eyebrow.
>I mean, you can put all your writing into collapsible sections, but I highly doubt you would get much traction that way. If you mark non-AI writing as AI content that's also against the moderation rules.
>Simply keep the two apart, and try to add prose to explain the connection between them. Feel free to extensively make use of AI, just make sure it's clear which part is AI, and which part is not. Yes, this means you can't use AI straightforwardly to write your prose. Such is life. The costs aren't worth it for LW.
I'm surprised you are taking such a hardline stance on this point. Or perhaps I'm misunderstanding what you are saying.
The primary use-case of AI is not just to post some output with minor context [though this can be useful]; the primary use-case is to create an AI draft, iterate on it several times, and hand-edit at the end.
Using AI to draft writing is increasingly the default all around the world. Is LessWrong going to be a holdout on allowing this? That seems to be what's implied.
Apart from the present post, I am betting a large fraction of LessWrong posts are already written with AI assistance. Some authors may spend significant time excising the tell-tale marks of LLM prose, which, man... feels super silly? But many posts explicitly acknowledge AI assistance. For myself, I assume everybody is of course using AI assistance while writing, so I don't even consider it worth mentioning. It amuses me when commenters excitedly point out that I've used AI to assist my writing, as if they've caught me in some sort of shameful crime.
It seems that this ubiquitous practice violates one or the other of
>> You have to mark AI writing as such.
>> If you mark non-AI writing as AI content that's also against the moderation rules.
unless one adopts a 'one-drop' rule for AI assistance.
P.S. I didn't use AI to write these comments, but I would if I could. The reason I don't is not even to refrain from angering king habryka; it's simply that there isn't a clean in-comment AI interface I can use [1]. But I'm sure that when there is one, I'll be using it all the time, saving significant time and improving my prose at the same time. My native prose is oft clunky, grammatically questionable, overwrought and undercooked.
I would probably play around with system prompts to get a style more distinct from standard LLMese, because admittedly the "It's not just X, it's a whole Y" tic can be rather annoying.
[1] Maybe such an application already exists. That would be amazing. It can't be too hard to code. Please let me know if you know of any such application.
I'm happy to signal that I'm a low-class individual, a mouthpiece of AI slop, if that helps.
Bio-supremacists such as yourself can then be sure to sneer appropriately.
Hi Artemy. Welcome to LessWrong!
Agree completely with what Zach is saying here.
We need two facts:
(1) the world has a specific inductive bias
(2) neural networks have the same specific inductive bias
Indeed, no-free-lunch arguments suggest that any good learner needs a good inductive bias. In a sense, learning is 'mostly' about having the right inductive bias.
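For concreteness, one standard formulation (Wolpert's no-free-lunch theorem for supervised learning): averaged uniformly over all target functions $f$ on a finite domain, any two learners $A$ and $B$ have the same expected off-training-set error for any training set $d$,

$$\frac{1}{|\mathcal{F}|}\sum_{f\in\mathcal{F}} \mathbb{E}\!\left[\mathrm{err}_{\mathrm{OTS}}(A \mid f, d)\right] \;=\; \frac{1}{|\mathcal{F}|}\sum_{f\in\mathcal{F}} \mathbb{E}\!\left[\mathrm{err}_{\mathrm{OTS}}(B \mid f, d)\right],$$

so any advantage a learner has must come from a match between its inductive bias and the actual, non-uniform distribution of targets.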
We call this specific inductive bias a simplicity bias. Informally, it agrees with our intuitive notion of low complexity.
Rk. Conceptually this is a little tricky, since simplicity is in the eye of the beholder - by changing the background language we can make anything with high algorithmic complexity have low complexity. People have been working on this problem for a while, but at the moment it seems radically difficult.
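To make the 'eye of the beholder' point precise: the invariance theorem says that for any two universal machines $U$ and $V$ there is a constant $c_{UV}$ with

$$|K_U(x) - K_V(x)| \le c_{UV} \quad \text{for all strings } x,$$

but the constant depends on the pair of machines, and for any fixed string $x$ one can pick a machine $U_x$ that hard-codes $x$, giving $K_{U_x}(x) = O(1)$. So complexity is machine-independent only up to an additive constant, and that constant can dwarf the complexity of any particular object you care about.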
IIRC Aram Ebtekar has a proposed solution that John Wentworth likes; I haven't understood it myself yet. I think what one wants to say is that the [algorithmic] mutual information between the observer and the observed is low, where the observer implicitly encodes the universal Turing machine used. In other words, the world is such that observers within it observe it to have low complexity with regard to their implicit reference machine.
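My attempt to put that last sentence in symbols (my gloss, not necessarily Ebtekar's actual proposal): the world $w$ and the observers $O$ it contains are such that

$$K_{M_O}(w) \ll K_U(w),$$

where $M_O$ is the reference machine implicitly encoded by the observer $O$ and $U$ is some 'neutral' universal machine - with the caveat from the remark above that 'neutral' is itself doing a lot of work.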
Regardless, the fact that the real world satisfies a simplicity bias is, to my mind, difficult to explain without anthropics. I am afraid we may end up having to resort to some form of UDASSA, but others may have other theological commitments.
That's the bird's-eye view of simplicity bias. If you ignore the above issue and accept some formally-tricky-to-define but informally "reasonable" notion of simplicity, then the question becomes: why do neural networks have a bias towards simplicity? Well, they have a bias towards degeneracy - and degeneracy and simplicity are intimately connected, see e.g.:
https://www.lesswrong.com/posts/tDkYdyJSqe3DddtK4/alexander-gietelink-oldenziel-s-shortform?commentId=zH42TS7KDZo9JimTF
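As a toy illustration of the degeneracy/simplicity connection, here is a minimal sketch in the spirit of the parameter-function-map experiments (my own construction, not code from the linked shortform; the architecture, sample count, and zlib compression proxy are all arbitrary choices). It samples random small MLPs on 5-bit inputs and checks whether the boolean functions that occur most often under random parameters are also the ones whose truth tables compress well:

```python
import zlib
from collections import Counter

import numpy as np

rng = np.random.default_rng(0)
N_BITS = 5  # boolean functions on 5 bits; each truth table has 2^5 = 32 entries

# All 32 input bit-vectors, as a (32, 5) float array.
X = np.array([[(i >> k) & 1 for k in range(N_BITS)]
              for i in range(2 ** N_BITS)], dtype=float)

def random_mlp_truth_table(width=16):
    """Sample a random 2-layer ReLU MLP and return its truth table as a bit string."""
    W1 = rng.normal(size=(N_BITS, width))
    b1 = rng.normal(size=width)
    W2 = rng.normal(size=width)
    hidden = np.maximum(X @ W1 + b1, 0.0)
    out = hidden @ W2
    return "".join("1" if v > 0 else "0" for v in out)

def lz_proxy(table):
    """Crude complexity proxy: zlib-compressed length of the truth table."""
    return len(zlib.compress(table.encode()))

N_SAMPLES = 100_000
counts = Counter(random_mlp_truth_table() for _ in range(N_SAMPLES))

# Simplicity-bias prediction: high-frequency functions have low complexity.
print("most common functions under random parameters:")
for table, count in counts.most_common(5):
    print(f"  freq={count / N_SAMPLES:.4f}  lz={lz_proxy(table)}  f={table}")
table, count = min(counts.items(), key=lambda kv: kv[1])
print(f"a rarest sampled function: freq={count / N_SAMPLES:.5f}  lz={lz_proxy(table)}")
```

If the simplicity-bias story is right, the most common entries should be things like the constant functions, whose truth tables compress to almost nothing, while the rarely-hit functions should have noticeably longer compressed tables.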