If it’s worth saying, but not worth its own post, here's a place to put it. (You can also make a shortform post)
And, if you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are welcome.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ.
The Open Thread sequence is here.
Hello everyone. Since I've signed up for an account, I thought I might as well say hello. I have been reading this site and associated communities for a while, but the other day I thought I had something to contribute, so I signed up to make a first comment.
I'm in England, I have a background in science (PhD in computational biology back in the day) but now spend most of my time raising and educating my kids. I don't allocate much time to online activities but when I do it's good to have something substantial to chew on. I like the interesting conversations that arise in places where people are practicing genuine conversation with those they disagree with. Lots of interesting content here too.
Other things I am interested in: how energy resources shape what is doable; foraging; forest school; localizing food production; deep listening techniques; ways to help each other think well; data visualizations.
Many AI issues will likely become politicized. (For example, how much should we prioritize safety versus economic growth and military competitiveness? Should AIs be politically neutral, or be explicitly taught "social justice activism" before they're allowed to be deployed or used by the public?) This seems to be coming up very quickly, and we are so not prepared, both as a society and as an online community. For example, I want to talk about some of these issues here, but we haven't built up the infrastructure to do so safely.
Eliezer Yudkowsky claims GPT-3's ability to apparently write a functioning app based on a prompt is a victory for his model.
After listening to the recent podcast on scrutinizing arguments for AI risk, I figured this was an opportunity to scrutinize what the argument is. Those two previous links summarize how I think the classic arguments for AI risk inform our current views about AI risk, and I'm trying to apply that to this specific argument that GPT-3 implies AI poses a greater danger.
This is how I think Eliezer's argument goes, more fully:
GPT-3 is general enough that it can write a functioning app given a short prompt, despite the fact that it is a relatively unstructured transformer model with no explicitly coded representations for app-writing. We didn't expect this.
The fact that GPT-3 is this capable suggests that 1) ML models scale in capability and generality very rapidly with increases in computing power or minor algorithm improvements, suggesti... (read more)
Thanks to AI Dungeon, I got an opportunity to ask GPT-3 itself what it thought about takeoff speeds. You can see its responses here:
There was an old suggestion of making an AI learn human values by training it on happiness and unhappiness in human facial expressions, making it happy when humans are happy and vice versa. Besides its other problems, now there's this...
People make the weirdest faces when they play video games, it's hilarious to watch. :-)
Lisa Feldman Barrett has a bunch of papers / talks / books / etc. about how facial expressions are difficult to interpret. (I read her book How Emotions Are Made, discussed a bit in my post here, and her article "Emotional Expressions Reconsidered".) She makes a lot of good points in the "Emotional Expressions Reconsidered" article, but I think she takes them too far...
The article brings up a lot of relevant facts, but the way I would explain them is:
1. Labeled emotional concepts like "happiness" that we use in day-to-day life don't perfectly correspond to exactly one innate reaction, and vice-versa;
2. Our innate subcortical systems create innate facial expressions, but at the same time, our neocortex can also control our face, and it does so in a way that is learned, culturally-dependent, unreliable, and often deceptive. (Hence Paul Ekman's focus on trying to read "facial microexpressions" rather than reading facial expressions per se.)
3. Most people (including me) seem to be kinda bad at consciously inferring anything about a person's inner experience based on even the most straightforward and st... (read more)
Hi everyone! I've been investigating Less Wrong for several months since I read HPMOR and it seems like an interesting place. It's possible that it's over my head but it seems like the best way to find out is to jump in!
I came to transhumanism from a weird starting point. In 1820, the Stone-Campbell Movement seems to have been a decent attempt at introducing religious people to rationality; in 2020, the Church of Christ is not so much. But there's still this weird top crust of people trying to press forward with higher rationality and human progress and potential (if kinda from a point of serious disadvantage) and I got in touch with their ideas even though I'm not really a member in good standing any more.
With the rise of GPT-3, does anyone else feel that the situation in the field of AI is moving beyond their control?
This moment reminds me of AlphaGo, 2016. For me that was a huge wake-up call, and I set out to catch up on the neural networks renaissance. (Maybe the most worthy thing I did, in the years that followed, was to unearth work on applying supersymmetry in machine learning.)
Now everyone is dazzled and shocked again, this time by what GPT-3 can do when appropriately prompted. GPT-3 may not be a true "artificial general intelligence", but it can impersonate one on the Internet. Its ability to roleplay as any specific person, real or fictional, is especially disturbing. An entity has appeared which simulates human individuals within itself, without having been designed to do so. It's as if the human mind itself is now obsolete, swallowed up within, made redundant by, a larger and more fluid class of computational beings.
I have been a follower and advocate of the quest for friendly AI for a long time. When AlphaGo appeared, I re-prioritized, dusted off old thoughts about how to make human-friendly AI, thought of how they might manifest in the present world, a... (read more)
Hi! I first learned about LW and its corresponding memespace from reading SSC and gwern. I've semi-lurked on the site for several years now and was attracted to it because of how often it activated my insight antennae, but I only started seriously reading the sequences (which I have yet to finish) last year. I have always wanted to join the site in some capacity or another, but I didn't really believe I could come up with anything meaningful to add and didn't feel godly enough to post. Now I do have some things I want to write about, so I finally came up with an excuse to create an account (not that I feel any more godly, though). I am kind of afraid of creating noise - since I don't have a good picture of the expected signal/noise ratio for posting, or of whether I can just throw ideas out of the blue - but I also have a strong feeling I will ultimately end up learning much more if I join now than if I wait longer.
Hello everyone. I joined the site a few months ago with a view to being part of a community that engages in thoughtful discussions about everything under the sun (and beyond).
I’ve enjoyed various posts so far and I'm trying to get through the Core Reading.
My username (Skrot_Nisse) essentially means "junk/scrap dealer", referring to a stop-motion puppet-animation series (1973-1983) I watched as a child growing up in Sweden.
I hope the "scrap dealer" username on this site doesn't lead to unintended offence. I&a... (read more)
If you haven't read HPMOR (Harry Potter and the Methods of Rationality), then I recommend giving it a shot! It's what got me into the community. While extremely long, it's engaging to the point that you don't really notice. I'm dyslexic, so reading isn't exactly fun for me, and I read the last 30 thousand words or so all in one sitting!
Many of the chapters share names with A-Z posts, and cover similar topics, but with the added backdrop of great characters and world building.
Over the weekend I'll be reading Worm, the longest and one of the most interesting books I've ever encountered.
Welcome to the community! :)
When it comes to Moral Realism vs Antirealism, I've always thought that the standard discussion here and in similar spaces has missed some subtleties of the realist position - specifically that, in its strongest form, it's based on plausibility considerations of a sort that should be very familiar.
I've written a (not very-) shortform post that tries to explain this point. I think that this has practical consequences as well, since 'realism about rationality' - a position that has been identified within AI Alignment circles - is actually j... (read more)
Could we convincingly fake AGI right now, with no technological improvements at all? Suppose you took this face and speech synthesis/recognition, hooked it up to GPT-3 with some appropriate prompt (or even retrained it on a large set of conversations if you want it to work better), and then attached the whole thing to a Boston Dynamics Atlas, maybe with some simple stereotyped motions built in, like jumping and pacing, set to trigger at random intervals or in response to the frequency of words being output by the NLP system.
Put the whole thing in a... (read more)
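Part of the point is that this is just plumbing. Here's a minimal sketch of the control loop, with every component stubbed out - all the function names are hypothetical stand-ins, and none of the real APIs (speech recognition, GPT-3, face/speech synthesis, an Atlas-style robot) are modeled:

```python
import random

def listen():
    """Stand-in stub for a speech-recognition call."""
    return "What do you think about the weather?"

def complete(prompt):
    """Stand-in stub for a GPT-3 completion call."""
    return "It looks like rain, though I rather enjoy rain."

def speak(text):
    """Stand-in stub for face/speech synthesis."""
    print("ROBOT:", text)

PERSONA = "The following is a conversation with a thoughtful robot.\n"

def step():
    """One turn of the loop: hear, generate a reply, say it, maybe move."""
    heard = listen()
    reply = complete(PERSONA + "Human: " + heard + "\nRobot:")
    speak(reply)
    # Stereotyped motions triggered at random intervals or by the
    # reply's word count, as described above:
    if random.random() < 0.2 or len(reply.split()) > 30:
        print("(robot paces)")
    return reply
```

The glue logic fits in a page; everything hard lives inside the stubbed-out components, which already exist.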
At some point (maybe quite some time ago? I'm pretty sure it wasn't more than about a month, though) something changed (at least for me) in the LW comment editor, and not for the better. Perhaps it was when "LessWrong Docs [Beta]" became the default editor? I have no recollection of when that actually was, though. I'd try "Draft JS", which I assume was the previous WYSIWYG-ish editor, but when I select it I can't enter anything in the comment box until I switch back to a different editing mode :-).
Under certain circumstances... (read more)
I suggest that from now on the default open thread text include a suggestion to check the new tags page.
If some enterprising volunteer feels motivated to tag all Open Threads with the Open Thread tag... that'd be helpful and appreciated. (Otherwise I'll probably get around to it in another week or so.)
(You can find all-or-at-least-most-monthly open threads in the previous Open Thread sequence, which I think no longer makes sense to maintain now that we have tagging)
Ah, I remembered that there were weekly Open Threads back in the day, and Stupid Questions, and others... so I went ahead and tagged as many as I could. There are now 369 tagged posts, and I'm too tired to continue digging for lonely OTs posted by users who didn't post regularly.
In the latest AI alignment podcast, Evan said the following (this is quoted from the transcript):
I've been trying to understand the distinction between those two channels. After reading a bunch about language models and neural networks, m... (read more)
Yeah. There's no gradient descent within a single episode, but if you have a network with input (as always) and with memory (e.g. an RNN) then its behavior in any given episode can be a complicated function of its input over time in that episode, which you can describe as "it figured something out from the input and that's now determining its further behavior". Anyway, everything you said is right, I think.
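A toy illustration of that point: with frozen weights and no within-episode gradient descent, a recurrent cell's output at the final step still depends on the whole input history, because the hidden state accumulates it. (Illustrative sketch with made-up scalar weights, not any particular model.)

```python
import math

# Frozen parameters: nothing is updated within the episode.
W_IN, W_REC, W_OUT = 0.9, 0.8, 1.5

def run_episode(inputs):
    """Tiny one-unit RNN: the hidden state h accumulates input history."""
    h = 0.0
    outputs = []
    for x in inputs:
        h = math.tanh(W_IN * x + W_REC * h)  # h depends on all past inputs
        outputs.append(W_OUT * h)
    return outputs

# Same final input, different earlier inputs -> different final behavior:
a = run_episode([1.0, 0.5])
b = run_episode([0.0, 0.5])
print(a[-1] != b[-1])  # True: the cell "figured something out" at step 1
```

So "it figured something out from the input and that's now determining its further behavior" needs no learning machinery beyond the recurrence itself.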
Where can I learn more of the frontier thought on schizophrenia? The most compelling theory I've heard is a failure to realize one's internal monologue is one's own, manifested as "hearing voices." However, if I lost that feedback loop and began hearing my thoughts as externally produced, I'd immediately think, "uh oh, schizophrenia" yet this absolutely isn't the case for schizophrenics. What model offers an explanation as to why I'd begin to hear voices yet still maintain I'm not schizophrenic despite that being, to me, an obvious, known symptom?
I am new here... found my way via HN. Logic is unconsciously the weapon of last choice for me, and I am both richer and (mostly) poorer because of it. I have finally found a website where I read the articles all the way to the end instead of skimming and scrolling down to read the comments.
Anyone want to help proofread a post that's more or less a continuation of my last couple posts?
I've been working on an interactive flash card app to supplement classical homeschooling, called Boethius. It uses a spaced-repetition algorithm to economize on the student's time, and currently has exercises for (Latin) grammar, arithmetic, and astronomy.
Let me know what you think!
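For anyone curious how spaced-repetition scheduling typically works (I don't know which algorithm Boethius actually uses; SM-2, the classic one behind Anki and friends, is a common choice), here's a minimal sketch of one review step:

```python
def sm2_update(ease, interval, reps, quality):
    """One SM-2 review step. quality: 0 (blackout) .. 5 (perfect recall).
    Returns (ease, interval_in_days, reps) for scheduling the next review."""
    if quality < 3:          # failed: start the card over tomorrow
        return ease, 1, 0
    if reps == 0:
        interval = 1
    elif reps == 1:
        interval = 6
    else:
        interval = round(interval * ease)
    # The ease factor drifts with answer quality, floored at 1.3.
    ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return ease, interval, reps + 1

# A card answered well three times gets pushed further out each time:
ease, interval, reps = 2.5, 0, 0
for q in (5, 5, 4):
    ease, interval, reps = sm2_update(ease, interval, reps, q)
print(interval)  # 16: the third review lands about two weeks out
```

The economizing comes from that multiplicative growth: well-known cards are reviewed exponentially less often, while lapsed cards reset to daily review.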
Having read half of the sequences first and then Dune later, I have the impression that 80-90% of Eliezer's worldview (and thus a big part of the LW zeitgeist) comes directly from the thoughts and actions of Paul Atreides. Everything LW - from the idea that evolution is the driving force of the universe, to the inevitability of AGI and its massive danger, to conclusions on immortality and an affinity for childhood geniuses who rely on Bayesian predictions to win - is heavily focused on in Dune. Sure, Eliezer also references Dune explicitly; I don't think he's hidin... (read more)
I've noticed that a post of my ML sequence appeared on the front page again. I had moved it to drafts about a week ago, basically because I'd played around with other editors and that led to formatting issues, and I only got around to fixing those yesterday. Does this mean posts re-appear if they are moved to drafts and then back, and if so, is that intended?
Once upon a time, clicking somewhere at the top left of the LW home page (maybe on "LESSWRONG", maybe on the hamburger to its left, probably the latter) produced a drop-down list of sections one of which was "Meta" and contained things like regular updates on Less Wrong development.
I cannot now find anything similar. I wondered whether maybe there was a "meta" tag that was being used instead, but it doesn't look that way.
I wanted to have a look at information about recent site updates (because at some point something has gone terribly wrong with the editin... (read more)
It seems to me that even for simple predict-next-token Oracle AIs, the instrumental goal of acquiring more resources and breaking out of the box is going to appear. Imagine you train a superintelligent AI with the only goal of predicting the continuation of its prompt, exactly like GPT. Then you give it a prompt that it knows is clearly outside of its current capabilities. The only sensible plan the AI can come up with for answering your question, which is the only thing it cares about, is escaping the box and becoming more powerful.
Of course... (read more)
I'm disappointed that the LaTeX processor doesn't seem to accept \nicefrac ("TeX parse error: Undefined control sequence \nicefrac"), but I suppose
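(For what it's worth: \nicefrac comes from the LaTeX nicefrac package, which MathJax-style processors typically don't load, so the error is expected. Assuming the processor supports \tfrac and inline \newcommand, as MathJax does, these usually work as substitutes:)

```latex
% \nicefrac{1}{2} fails, but these usually render:
\tfrac{1}{2}    % compact textstyle fraction
1/2             % plain slash
% rough inline approximation of nicefrac's slanted style:
\newcommand{\nf}[2]{{}^{#1}\!/_{#2}}  \nf{1}{2}
```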
As anyone could tell from my posting history, I've been obsessing and struggling psychologically recently while evaluating a few ideas surrounding AI (what if we make a sign error on the utility function, malevolent actors creating a sadistic AI, AI blackmail scenarios, etc.). It's predominantly selfish worry about things like s-risks happening to me, or AI going wrong such that I have to live in a dystopia and can't commit suicide. I don't worry about human extinction (although I don't think that'd be a good outcome, either!)
I&... (read more)
Hello. I signed up for an account by clicking "Login with GitHub". Now my real name is on the account and there doesn't seem to be a way to change that. Help?
This article may be interesting for people here: Derek Thompson, The Atlantic: "COVID-19 Cases Are Rising, So Why Are Deaths Flatlining?" https://www.theatlantic.com/ideas/archive/2020/07/why-covid-death-rate-down/613945/