Richard Ngo's case for why artificial general intelligence may pose an existential threat. Written with the aim of incorporating modern advances in machine learning, without taking any previous claims about AGI risk for granted.
I've been posting and commenting pretty frequently over the last few months, and I was curious about some stats. What started as a few GraphQL queries and some Python scripting turned into an interactive web app:
Enter a username, and it will give you some stats and a graph, broken down by post and comment karma. You can use the slider to adjust the date range, and the stats are automatically recalculated for the selected time period.
Another feature is the list of "Gems" - comments with at least a few votes that have the highest net karma score, i.e. comments that received strong upvotes from high-karma users and few downvotes. I found that this often gives a better sense of a user's best comments than just looking at...
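For illustration, here is a minimal sketch of how a "Gems" selection might be computed. The field names (voteCount, baseScore) are my assumptions for the sake of the example, not necessarily the app's actual GraphQL fields:

```python
# Hypothetical sketch of a "Gems" selection: among comments with at least
# a few votes, pick those with the highest net karma. A high score from
# few votes suggests strong upvotes (from high-karma users) and few
# downvotes. Field names are illustrative assumptions.
MIN_VOTES = 5

def top_gems(comments: list[dict], n: int = 10) -> list[dict]:
    eligible = [c for c in comments if c["voteCount"] >= MIN_VOTES]
    return sorted(eligible, key=lambda c: c["baseScore"], reverse=True)[:n]
```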
I've written up a rationality game which we played several times at our local LW chapter and had a lot of fun with. The idea is to put Aumann's agreement theorem into practice as a multi-player calibration game, in which players react to the probabilities which other players give (each holding some privileged evidence). If you get very involved, this implies reasoning not only about how well your friends are calibrated, but also how much your friends trust each other's calibration, and how much they trust each other's trust in each other.
You'll need a set of trivia questions to play. We used these.
The write-up includes a helpful scoring table, which we have not play-tested yet. When we played, we used a plain Bayes loss rather than an adjusted Bayes loss, and calculated scores on our phone calculators. The table version should feel a lot better, because the numbers are easier to interpret and you get your score right away rather than calculating it at the end.
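For concreteness, here's a minimal sketch of what a plain Bayes loss looks like, assuming "Bayes loss" means the standard logarithmic scoring rule (my reading, not necessarily the write-up's exact formula):

```python
import math

def bayes_loss(p_correct: float) -> float:
    """Plain Bayes (log) loss for the probability you assigned to the
    answer that turned out to be correct. Lower is better; confident
    wrong answers (p near 0) are punished severely."""
    return -math.log2(p_correct)

# Example: 80% on the right answer costs ~0.32 bits,
# while only 20% on the right answer costs ~2.32 bits.
print(round(bayes_loss(0.8), 2), round(bayes_loss(0.2), 2))
```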
For the last couple of years, the Russian-speaking LW community has been running the AAG online, using this Google Sheets template: https://docs.google.com/spreadsheets/d/1tm4AYBMs8N-ZkdJJeNezG6H6tIPkQ5tHJmiFx8n3_Xo/edit
It supports 2-6 players and calculates their points.
Participants add and update their probabilities, and can see the history and the points they'll get.
Feel free to use it!
The game gets better if multiple teams compete for the largest total number of points, instead of individual players competing with each other for individual points (use mult...
I'm excited to share a special opportunity to create systemic impact: a statewide approval voting ballot initiative in Missouri. This would affect all elections throughout the state, including federal and presidential races. Approval voting favors consensus candidates and gives a more accurate picture of the public's support. This is critical if we want a government that acts in our interests on policies that concern our well-being.
The organization leading this charge is Show Me Integrity, where I'm currently doing a fellowship and assisting with fundraising efforts. Show Me Integrity has successfully passed a ballot initiative before, showing their ability to succeed on this kind of scale. They also successfully ran the ballot initiative for approval voting in St. Louis.
Why is this important?
Approval voting is a method that allows voters...
If I understand their draft language correctly, it looks problematic. It seems like they designed it so that people who lose primaries generally have no chance to appear on the ballot. I don't see a good reason to give political parties that much power over ballot access.
Having a system where an incumbent who loses a primary can't appear on the ballot means that the benefit of protecting incumbents from extremist primary challenges disappears.
Another alternative would be to allow the top two candidates from each party primary ballot access for the general election.
Let's say you have a few million tabs open in your mobile Chrome browser, because you never close anything, but now your browser is getting slow and laggy. You want to stick the URLs of those tabs somewhere for safekeeping so that you can close them all.
There's a lot of advice on doing this on the Internet, most of which doesn't work.
Here's a method that does work. It's a bit of a hack, but gives good results:
FYI, in the answer you linked to, there is another, way easier way of doing it (& it worked for me):
tl;dr:
- Have the Android command line tools installed on a development machine, and USB debugging enabled on your device. The device does not need to be rooted.
adb forward tcp:9222 localabstract:chrome_devtools_remote
wget -O tabs.json http://localhost:9222/json/list
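The resulting tabs.json is a list of DevTools targets; each open tab appears as an entry with "type": "page" plus "title" and "url" fields. A few lines of Python (my addition, not part of the original comment) will pull out just the URLs:

```python
import json

# tabs.json is the DevTools target list fetched above; each open tab
# appears as an entry with type "page".
with open("tabs.json") as f:
    targets = json.load(f)

for t in targets:
    if t.get("type") == "page":
        print(t.get("title", ""), t["url"], sep="\t")
```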
For the purposes of this post, the anthropic shadow is the type of inference found in How Many LHC Failures Is Too Many?.
"Anthropic principle! If the LHC had worked, it would have produced a black hole or strangelet or vacuum failure, and we wouldn't be here!"
In other words, since we are "blind" to situations in which we don't exist, we must adjust how we do Bayesian updating. Although it has many bizarre conclusions, it is more intuitive than you might think, and quite useful!
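To see how the adjustment works, here's a toy calculation with made-up numbers (mine, not from the linked post): let H be "a successful LHC run would have destroyed us", and suppose each run independently fails for mundane reasons with probability 0.1.

```python
# Toy anthropic-shadow update; all numbers are hypothetical.
prior_h = 0.01          # prior that a working LHC would destroy us
p_mundane = 0.1         # chance a run fails for boring reasons
n_failures = 5          # observed consecutive failures

# Naive update: failures are equally likely under H and not-H,
# so the posterior equals the prior -- failures tell us nothing.
lik = p_mundane ** n_failures
naive = prior_h * lik / (prior_h * lik + (1 - prior_h) * lik)

# Anthropic update: if H is true, any successful run leaves no
# observers, so conditional on our existing, every run failed.
anthropic = prior_h * 1.0 / (prior_h * 1.0 + (1 - prior_h) * lik)

print(f"naive: {naive:.3f}, anthropic: {anthropic:.3f}")
# naive stays at 0.010; anthropic jumps to ~0.999
```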
There are many similar applications of anthropics, such as Nuclear close calls and Anthropic signature: strange anti-correlations.
This actually has implications for effective altruism. Since we are so early in humanity's existence, we can infer from the anthropic shadow that humans will probably soon die out....
Yes, that's what I'd expect to happen too, unless I'm misunderstanding something. It would seem far more probable for *just* your consciousness to somehow still exist, defying entropy, than for the same thing to happen to an entire civilization (the same argument for why nearly all Boltzmann brains would be just a bare "brain").
A podcast interview (posted 2023-06-29) with noted AI researcher Douglas Hofstadter discusses his career and current views on AI.
Hofstadter has previously energetically criticized GPT-2/3 models (and deep learning and compute-heavy GOFAI). These criticisms were widely circulated & cited, and apparently many people found Hofstadter a convincing & trustworthy authority when he was negative on deep learning capabilities & prospects, and so I found his comments in this most recent discussion of considerable interest (via Edward Kmett).
Below I excerpt from the second half where he discusses DL progress & AI risk:
...
Q: ...Which ideas from GEB are most relevant today?
Douglas Hofstadter: ...In my book, I Am a Strange Loop, I tried to set forth what it is that really makes a self or a soul. I like to use the word "soul", not in the religious
Yeah, there's obviously SOME recursion there, but it's still surprising that such relatively low-bandwidth recursion can work so well. It's more akin to me writing down my thoughts and then rereading them to gather my ideas than to the kind of loops I imagine our neurons might have.
That said, who knows, maybe the loops in our brain are superfluous, or only useful for learning feedback purposes, and so a neural network trained by an external system doesn't need them.
This is my fifth attempt at writing this post. I’m starting to think that I’ve already spent way too much time on this topic, which I’m convinced is valuable, but maybe not so valuable as to spend 20 hours perpetually rewriting a post about it. So obviously my solution is to rewrite it again, but this time in bullet points.
Here's a tl;dr: There are some habits people can pick up that are very cheap and may have positive effects, but these effects are too small to reliably notice consciously. Hence these habits are often neglected. In this post I argue for taking some of these habits more seriously and, if they're low-cost enough for you to implement, sticking to them even absent any feeling that they're useful.
One of the main ways I've managed to instill good habits in myself is to both open up easy paths to good habits and close off easy paths to sub-optimal ones. The trick is to make a good habit easier than it is annoying, and a bad habit more annoying than it is appealing.
Examples:
Hydration: I simply place a 2L water bottle by the apartment door every evening. It becomes impossible for me to leave the house without picking it up, and once it is in my hand, I'm much more likely to drink from it and take it with me than to forget it.
Exercise: I bought dumbbells ...
When trying to improve the world via philanthropy, there are compelling reasons to focus on nurturing individual talent rather than supporting larger organizations, especially those with nebulous and unquantifiable goals.
Tyler Cowen's Emergent Ventures is a prime example of this approach, providing grants to individual entrepreneurs and thinkers who aim to make a significant societal impact. When asked how his approach to philanthropy differs from the Effective Altruist approach, Cowen answers:
I’m much more “person first.” I’m willing to consider, not any area—it ought to feel important—but I view it as more an investment in the person, and I have, I think, more faith that the person’s own understanding of what’s important will very often be better than mine. That would be the difference.
This model has been effective in...
I would expect that one of the key reasons many people don't do this is that it's socially weird, and they're uncertain how to handle the way it changes their social relationships with the people around them.
Especially given that many programmers are on the shy side, writing a check to a GiveWell-recommended charity is easier. I think it would be valuable if someone who does act this way wrote more about their experience, so that others have an easier model to copy.
[Thanks to Charlie Steiner, Richard Kennaway, and Said Achmiz for helpful discussion.]
[Epistemic status: my best guess after having read a lot about the topic, including all LW posts and comment sections with the consciousness tag]
There's a common pattern in online debates about consciousness. It looks something like this:
One person will try to communicate a belief or idea to someone else, but they cannot get through no matter how hard they try. Here's a made-up example:
"It's obvious that consciousness exists."
-Yes, it sure looks like the brain is doing a lot of non-parallel processing that involves several spatially distributed brain areas at once, so-
"I'm not just talking about the computational process. I mean qualia obviously exists."
-Define qualia.
"You can't define qualia; it's a primitive. But you know what I mean."
-I...
I tend to think that, regardless of which camp is correct, it's unlikely that the difference is due to different experiences, and more likely that one of the two sides is making a philosophical error. Reason being that experience itself is a low-level property, whereas judgments about experience are a high-level property, and it generally seems to be the case that the variance in high-level properties is way way higher.
E.g., it'd be pretty surprising if someone claimed that red is more similar to green than to orange, but less surprising if they had a stra...
Neat! I found it interesting that 8/10 of my top comments by karma are from pre-LW 2.0. At least some of that is because the rationality quotes threads were good for karma farming, but apparently there were also just way more votes being cast.
Not important, but I guess there'll also be some inaccuracies to do with vote strength changing. (Out of interest, do you calculate vote strength based on current karma, or their fuzzily-back-computed karma at the time they made the comment/post?)