In this post, I proclaim/endorse forum participation (aka commenting) as a productive research strategy that I've managed to stumble upon, and recommend that others at least try it. Note that this is different from saying that forum/blog posts are a good way for a research community to communicate. It's about individually doing better as researchers.
previously: https://www.lesswrong.com/posts/h6kChrecznGD4ikqv/increasing-iq-is-trivial
I don't know to what degree this will wind up being a constraint. But given that many of the things that help in this domain have independent lines of evidence of benefit, it seems worth collecting them.
Food:
Dark chocolate, beets, blueberries, fish, eggs. I've had good effects with strong hibiscus and mint tea (both vasodilators).
Exercise:
Regular cardio, stretching/yoga, going for daily walks.
Learning:
Meditation, math, music, enjoyable hobbies with a learning component.
Light therapy:
Unknown effect size, but increasingly cheap to test over the last few years. I was able to get Too Many lumens for under $50. Sun exposure has a larger effect size here, so exercising outside is helpful.
Cold exposure:
This might mostly just be exercise for the circulatory system, but cold showers might also have some unique effects.
Chewing on things:
Increasing blood...
The subtext is that I'd like to have them if the author has them available. It sounded like these are things the author actually applies/uses. Also, it's a frontpage post, and the LW standard of scholarship is typically higher than this.
I'm fine with romeostevensit's reply that it's from a shallow google dive, but would have preferred this to be a QuickTake, or at least to include an indication that it's shallow.
The following is an example of how, if one assumes that an AI (in this case an autoregressive LLM) has "feelings", "qualia", "emotions", whatever, it can be unclear whether it is experiencing something more like pain or something more like pleasure in some settings, even quite simple settings that already come up a lot with existing LLMs. This dilemma is part of the reason why I think the philosophy of AI suffering/happiness is very hard and we most probably won't be able to solve it.
Consider the two following scenarios:
Scenario A: An LLM is asked a complicated question and answers it eagerly.
Scenario B: A user insults an LLM and it responds.
For the sake of simplicity, let's say that the LLM is an autoregressive transformer with no RLHF (I personally think that the...
Granting that LLMs in inference mode experience qualia, and even granting that those qualia correspond to human qualia in any meaningful way:
I find both arguments invalid. Either conclusion could be correct, or neither, or the question might not even be well formed. At the very least, the situation is a great deal more complicated than just having two arguments to decide between!
For example in scenario (A), what does it mean for an LLM to answer a question "eagerly"? My first impression is that it's presupposing the answer to the question, since the main meaning o...
There's a particular kind of widespread human behavior that is kind on the surface, but upon closer inspection reveals quite the opposite. This post is about four such patterns.
One of the most useful ideas I got out of Algorithms to Live By is that of computational kindness. I was quite surprised to only find a single mention of the term on lesswrong. So now there's two.
Computational kindness is the antidote to a common situation: imagine a friend from a different country is visiting and will stay with you for a while. You're exchanging some text messages beforehand in order to figure out how to spend your time together. You want to show your friend the city, and you want to be very accommodating and make sure...
What you say doesn't matter as much as what the other person hears. If I were the other person, I would probably wonder why you would add epicycles, and kindness would be just one possible explanation.
I call "alignment strategy" the high-level approach to solving the technical problem[1]. For example, value learning is one strategy, while delegating alignment research to AI is another. I call "alignment metastrategy" the high-level approach to converging on solving the technical problem in a manner which is timely and effective. (Examples will follow.)
In a previous article, I summarized my criticism of prosaic alignment. However, my analysis of the associated metastrategy was too sloppy. I will attempt to somewhat remedy that here, and also briefly discuss other metastrategies, to serve as points of contrast and comparison.
The conservative metastrategy consists of the following algorithm:
For people who (like me immediately after reading this reply) are still confused about the meaning of "humane/acc", the header photo of Critch's X profile is reasonably informative.
This is a linkpost for an essay I wrote on substack. Links lead to other essays and articles on substack and elsewhere, so don't click these if you don't want to be directed away from lesswrong. Any and all critique and feedback is appreciated. There are some terms I use in this post that I provide a (vague) definition for here at the outset (I have also linked to the essays where these were first used):
Particularism - The dominant worldview in industrialized/"Western" culture, founded on reductionism, materialism/physicalism, and realism.
The Epistemic - “By the epistemic I will mean all discourse, language, mathematics and science, anything and all that we order and structure, all our frameworks, all our knowledge.” The epistemic is the sayable, it is structure, reductive,...
If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.
I have the mild impression that Jacqueline Carey's Kushiel trilogy is somewhat popular in the community?[1] Is that true, and if so, why?
E.g. Scott Alexander references Elua in Meditations on Moloch, and I know of at least one prominent LWer who was a big enough fan of it to reference Elua in their Discord handle.
This is the ninth post in my series on Anthropics. The previous one is The Solution to Sleeping Beauty.
There are some quite pervasive misconceptions about betting with regard to the Sleeping Beauty problem.
One is that you need to switch between halfer and thirder stances based on the betting scheme proposed. As if learning about a betting scheme is supposed to affect your credence in an event.
Another is that halfers should bet at thirders odds and, therefore, thirdism is vindicated on the grounds of betting. What do halfers even mean by probability of Heads being 1/2 if they bet as if it's 1/3?
In this post we are going to correct them. We will understand how to arrive at correct betting odds from both thirdist and halfist positions, and...
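Not from the post itself, but a minimal sketch of the standard betting claim above, assuming the usual setup (one awakening on Heads, two on Tails) and a bet on Heads that pays 1. The function and variable names here are illustrative, not the author's. The simulation shows that a bet repeated at every awakening breaks even at a price of 1/3, while a bet settled once per experiment breaks even at a price of 1/2:

```python
import random

def simulate(n_experiments=100_000, seed=0):
    """Monte Carlo of Sleeping Beauty betting under two payoff schemes."""
    rng = random.Random(seed)
    price_awakening = 1 / 3    # price of the bet offered at every awakening
    price_experiment = 1 / 2   # price of the bet offered once per experiment
    per_awakening_profit = 0.0
    per_experiment_profit = 0.0

    for _ in range(n_experiments):
        heads = rng.random() < 0.5
        awakenings = 1 if heads else 2  # Beauty wakes once on Heads, twice on Tails
        payoff = 1.0 if heads else 0.0  # the bet pays 1 iff the coin landed Heads

        # Scheme 1: the bet is placed at every awakening, so the stake
        # scales with the number of awakenings.
        per_awakening_profit += awakenings * (payoff - price_awakening)

        # Scheme 2: the bet is settled once per experiment, regardless of
        # how many times Beauty is awakened.
        per_experiment_profit += payoff - price_experiment

    return (per_awakening_profit / n_experiments,
            per_experiment_profit / n_experiments)

if __name__ == "__main__":
    a, e = simulate()
    print(f"avg profit, per-awakening bet at price 1/3:  {a:+.4f}")  # ~0
    print(f"avg profit, per-experiment bet at price 1/2: {e:+.4f}")  # ~0
```

Both average profits come out near zero, which is the usual sense in which "bet at 1/3" and "bet at 1/2" can each be correct depending on the payoff structure.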
I read the beginning and skimmed through the rest of the linked post. It is what I expected it to be.
We are talking about "probability" - a mathematical concept with a quite precise definition. How come we still have ambiguity about it?
Reading E. T. Jaynes might help.
Probability is what you get as a result of some natural desiderata related to payoff structures. When anthropics are involved, there are multiple ways to extend the desiderata that produce different numbers you should say, depending on what you get paid for/what you care about, and a...
The way the auditing works in the UK is as follows:
Students will be given an assignment, with a strict grading rubric. This grading rubric is open, and students are allowed to read it. The rubric will detail exactly what needs to be done to gain each mark. Interestingly, even students who read the rubric often fail to get these marks.
Teachers then grade the coursework against the rubric. Usually two from each school are randomly selected for review. If the external grader finds the marks more than 2 points off, all of the coursework will be re-marked extern...
Intelligence varies more than it may appear. I tend to live and work with people near my own intelligence level, and so―probably―do you. I know there are at least two tiers above me. But there are even more tiers below me.
A Gallup poll of 1,016 Americans asked whether the Earth revolves around the Sun or the Sun revolves around the Earth. 18% got it wrong. This isn't an isolated result. An NSF poll found a slightly worse number.
Ironically, Gallup's own news report draws an incorrect conclusion. The subtitle of their report is "Four-fifths know earth revolves around sun". Did you spot the problem? If 18% of respondents got this wrong, then an estimated 18% got it right just by guessing, since guessers should split roughly evenly between the two answers. 3% said they don't know. If this was an...