Shortform Content [Beta]

ryan_b's Shortform

Is spaced repetition software a good tool for skill development or good practice reinforcement?

I was recently considering using an Anki prompt to do a mental move rather than to test my recall, like tensing your muscles as though you were performing a deadlift. I don't actually have access to a gym right now, so I didn't get to put it into action immediately. Visualizing the movement as vividly as possible, and tensing the muscles as though the movement were being performed (even when not doing it), are common tricks reported by famous weightlifters.

But I happened acro…

ryan_b: Could you talk a bit more about this? My initial reaction is that I am almost exactly proposing additional value from using Anki to engage the skill sans context (in addition to whatever actual practice is happening with context). I review Gwern's post pretty much every time I resume the habit; it doesn't look like it has been evaluated in connection with physical skills. I suspect the likeliest difference is that the recall curve is going to be different from the practice curve for physical skills, and the curve for mental review of physical skills will probably be different again. These should be trivial to adjust if we knew what they were, but alas, I do not. Maybe I could pillage the sports performance research? Surely they do something like this.
mr-hire: It is hard to find, but it's covered here: [] My take is pretty similar to cognitive skills: it works well for simple motor skills but not as well for complex skills. My experience is basically that this doesn't work. This seems to track with the research on skill transfer (which is almost always non-existent or has such a small effect that it can't be measured).

Ah, the humiliation of using the wrong ctrl-f inputs! But of course it would be lower level.

Well that's reason enough to cap my investment in the notion; we'll stick to cheap experiments if the muse descends.
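If anyone wants a cheap experiment along these lines, the scheduling side is easy to prototype. Here is a minimal sketch of SM-2-style interval growth (the update constants follow the published SM-2 description, but this is a simplification, not Anki's exact implementation, and the grading loop below is just an illustration):

```python
def next_interval(interval_days, ease, quality):
    """One step of a simplified SM-2-style scheduler.

    interval_days: current gap between reviews, in days
    ease: multiplier controlling how fast intervals grow
    quality: self-graded recall from 0 (forgot) to 5 (perfect)
    """
    if quality < 3:  # failed recall: reset the card to a one-day interval
        return 1, ease
    # successful recall: nudge the ease factor, then grow the interval
    new_ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return round(interval_days * new_ease), new_ease

# four successful reviews of one hypothetical "tense like a deadlift" card
interval, ease = 1, 2.5
for quality in [5, 5, 4, 5]:
    interval, ease = next_interval(interval, ease, quality)
# intervals grow 1 -> 3 -> 8 -> 22 -> 62 days
```

Swapping in a different, flatter growth curve for physical-practice cards would only mean changing the ease update, which is what makes the "different curve" question above cheap to test.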

NunoSempere's Shortform

While researching past technological discontinuities, I came across some interesting anecdotes. Some follow. I also looked at technology readiness levels, but this didn’t prove fruitful.

Anecdotes and patterns

Watt's safety concerns

As the 18th century advanced, the call was for higher pressures; this was strongly resisted by Watt who used the monopoly his patent gave him to prevent others from building high-pressure engines and using them in vehicles. He mistrusted the boiler technology of the day, the way they were constructed and the strength of the ma

…
habryka: Ahm, this comment looks completely broken to me, like you accidentally copy-pasted your whole frontpage into it.

Fixed, thanks

Mark Xu's Shortform

Lesswrong posts that I want someone to write:

  1. Description of pair debugging
  2. Description of value handshakes

Maybe I'll think of more later.

I found a reference to "value handshakes" here:

Your AI doesn't figure out how to do a reasonable "values handshake" with a competitor (where two agents agree to both pursue some appropriate compromise values in order to be Pareto efficient)...

I think it refers to something like this: Imagine that a superintelligent human-friendly AI meets a superintelligent paperclip maximizer, and they both realize their powers are approximately balanced. What should they do?

For humans, "let's fight, and to the victor go the spoils" is the intuitive answer, but the superi…
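The compromise idea can be made concrete with a toy Nash-bargaining calculation (entirely my own illustration; the outcomes and utility numbers are invented): each agent scores each possible outcome, and the handshake picks the outcome maximizing the product of gains over the "fight" baseline, which is Pareto efficient.

```python
# Hypothetical outcomes scored as (friendly_AI_utility, paperclipper_utility)
outcomes = {
    "fight":        (0.0, 0.0),  # disagreement point: mutually costly war
    "split_matter": (0.6, 0.6),  # divide the universe's resources evenly
    "human_zoo":    (0.9, 0.3),  # mostly human-friendly, some paperclips
    "clip_zoo":     (0.3, 0.9),  # mostly paperclips, a protected human enclave
}

disagreement = outcomes["fight"]

def nash_product(u):
    """Product of each agent's gain over the disagreement point."""
    return (u[0] - disagreement[0]) * (u[1] - disagreement[1])

# the values handshake: commit to the outcome with the largest Nash product
handshake = max(outcomes, key=lambda name: nash_product(outcomes[name]))
```

With these made-up numbers the even split wins (0.36 versus 0.27 for either lopsided outcome), which matches the intuition that roughly balanced powers settle near the middle.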

Matt Goldenberg's Short Form Feed

I can't wrap my brain around the computational theory of consciousness.

Who decides how to interpret the computations?  If I have a beach, are the lighter grains 0 and darker grains 1?  What about the smaller and bigger grains? What if I decide to use the motion of the planets to switch between these 4 interpretations?

Surely under infinite definitions of computation, there are infinite consciousnesses experiencing infinite states at any given time, just from pure chance.

interstice: What hypothesis would you be "testing"? What I'm proposing is an idealized version of a sampling procedure that could be used to run tests, namely, sampling mind-like things according to their description complexity. If you mean that we should check if the minds we usually see in the world have low complexity, I think that already seems to be the case, in that we're the end-result of a low-complexity process starting from simple conditions, and can be pinpointed in the world relatively simply.
mr-hire: I mean, I'm saying get minds with many different complexities, figure out a way to communicate with them, and ask them about their experience. That would help to figure out if complexity is indeed correlated with observer moments. But how you test this feels different from the question of whether or not it's true.

I think we're talking about different things. I'm talking about how you would locate minds in an arbitrary computational structure(and how to count them), you're talking about determining what's valuable about a mind once we've found it.

throwaway_time's Shortform

Warning: This is a personal vent on my personal situation. I doubt it will be worth your time.


High school days were the peak of my life. School was easy, and I was fortunate enough to find interest and some talent in math olympiad. I climbed my way through before getting stopped just before the TST.

I remember them giving out big π-shaped chocolates for the IMO contestants that year, and getting really bitter because it looked so cool.

After I hit that wall I parted ways with math to become a CS major because startups were a thing and I believed I could…


If you can't effectively work on your own projects, it might be worth getting a job so that you have a structure.

mr-hire: Happy to just chat if you'd like. I've battled with similar problems of lack of focus, and done a lot of work myself. Happy to listen.
Slider: Check that you are not missing any neurodiversity diagnoses such as Asperger's or ADHD. Your problems sound a lot like my problems. Internalized ableism can be really disempowering. One wouldn't run a marathon with a heart condition. It is true that these kinds of things can have a lot of blurry lines, and skeptics can have an easy time not believing it even exists, but most of the time correctly taking a problem into account is far better than ignoring it completely.

Hate9's Shortform

I've been wanting to set LessWrong as my home page for a while but kept avoiding it because the site is so visibly bright.

I looked up ways of viewing the site without hurting my eyes and found one, but didn't really like it.

Then I found lessbright, which looked right, but wasn't technically the real LessWrong site (which meant it couldn't tell I was logged in, among other things).

But I already use Stylus to apply stylesheet modifications to webpages, so I looked at lessbright's source code, found the CSS …

habryka: Great! For people who are bothered by the brightness, this seems like a decent solution for now. We are thinking about creating styling for a proper dark mode, but really unclear what the timeline on that is atm.

I'm glad to hear there's a proper solution planned at some point since mine is somewhat hacky, but I'm not surprised there's no clear timeline.

AllAmericanBreakfast's Shortform

We do things so that we can talk about it later.

I was having a bad day today. Unlikely to have time this weekend for something I'd wanted to do. Crappy teaching in a class I'm taking. Ever increasing and complicating responsibilities piling up.

So what did I do? I went out and bought half a cherry pie.

Will that cherry pie make me happy? No. I knew this in advance. Consciously and unconsciously: I had the thought, and no emotion compelled me to do it.

In fact, it seemed like the least-efficacious action: spending some of my limited money, to buy a pie I don't…

Viliam: So the "stupid solutions to problems of life" are not really about improving the life, but about signaling to yourself that... you still have some things under control? (My life may suck, but I can have a cherry pie whenever I want to!)

This would be even more important if the cherry pie somehow actively made your life worse. For example, if you are trying to lose weight, but at the same time keep eating cherry pie every day in order to improve the story of your day. Or if instead of cherry pie it were cherry liqueur.

Just guessing, but it would probably help to choose the story in advance. "If I am doing X, my life is great, and nothing else matters" -- and then make X something useful that doesn't take much time. Even better, have multiple alternatives X, Y, Z, such that doing any of them is a "proof" of life being great.

I do chalk a lot of dysfunction up to this story-centric approach to life. I just suspect it’s something we need to learn to work with, rather than against (or to deny/ignore it entirely).

My sense is that storytelling - to yourself or others - is an art. To get the reaction you want - from self or others - takes some aesthetic sensitivity.

My guess is there’s some low hanging fruit here. People often talk about doing things “for the story,” which they resort to when they're trying to justify doing something dumb/wasteful/dangerous/futile. Perversely, it oft…

TurnTrout's shortform feed

Epistemic status: not an expert

Understanding Newton's second law, F = dp/dt.

Consider the vector-valued velocity as a function of time, v(t). Scale this by the object's mass m and you get the momentum function over time, p(t) = m·v(t). Imagine this momentum function wiggling around over time, the vector from the origin rotating and growing and shrinking.

The second law says that force is the derivative of this rescaled vector function, F = dp/dt - if an object is more massive, then the same displacement of this rescaled arrow is a proportionally smaller velocity modification, because o…
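The derivative picture can be sanity-checked numerically. A minimal sketch (the mass and velocity function below are made up purely for illustration): estimate F = dp/dt with a finite difference and compare against the closed-form derivative.

```python
import math

m = 2.0                      # mass in kg, arbitrary for illustration
def v(t): return math.sin(t)   # a made-up velocity function v(t)
def p(t): return m * v(t)      # momentum p(t) = m * v(t)

def force(t, h=1e-6):
    """F = dp/dt, estimated with a central difference."""
    return (p(t + h) - p(t - h)) / (2 * h)

# analytically F(t) = m * cos(t); the numeric estimate should agree closely
assert abs(force(0.5) - m * math.cos(0.5)) < 1e-6
```

Doubling m doubles p(t) everywhere, so the same velocity change costs twice the force, which is the "rescaled arrow" intuition above.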

niplav's Shortform

Two-by-two for possibly important aspects of reality and related end-states:

|  | Coordination is hard | Coordination is easy |
| --- | --- | --- |
| Defense is easier | Universe fractured into many parties, mostly stasis | Singleton controlling everything |
| Attack is easier | Pure Replicator Hell | Few warring factions? Or descent into Pure Replicator Hell? |

I think there are multiple kinds of attacks, which matter in this matrix.  Destruction/stability is a different continuum from take control of resources/preserve control of resources.   

I also think there's no stasis - the state of accessible resources (heat gradients, in the end) in the universe will always be shifting until it's truly over.  There may be multi-millennia equilibria, where it feels like stasis on a sufficiently-abstract level, but there's still lots of change.  As a test of this intuition, the Earth has been static …

Daniel Kokotajlo: Shouldn't the singleton outcome be in the bottom right quadrant? If attack is easy but so is coordination, the only stable solution is one where there is only one entity (and thus no one to attack or be attacked). If by contrast defense is easier, we could end up in a stable multipolar outcome... at least until coordination between those parties happens. Maybe the singleton outcome happens in both coordination-is-easy scenarios.

jacobjacob's Shortform Feed

Testing whether images work in spoiler tags

mr-hire: Alternative hypothesis: his smile lights up even the darkest spoiler tag.
Gurkenglas: then it would be

Take my upvote.

DanielFilan's Shortform Feed

Ted Kaczynski as a relatively apolitical test case for cancellation norms:

Ted Kaczynski was a mathematics professor who decided that industrial society was terrible, and waged a terroristic bombing campaign to foment a revolution against technology. As part of this campaign, he wrote a manifesto titled "Industrial Society and Its Future" and said that if a major newspaper printed it verbatim he would desist from terrorism. He is currently serving eight life sentences in a "super-max" security prison in Colorado.

My understanding is that his manifesto (which…


Generally speaking, if someone commits heinous and unambiguous crimes in service of an objective like "getting people to read X", and it doesn't look like they're doing a tricky reverse-psychology thing or anything like that, then we should not cooperate with that objective. If Kaczynski had posted his manifesto on LessWrong, I would feel comfortable deleting it and any links to it, and I would encourage the moderator of any other forum to do the same under those circumstances.

But this is a specific and unusual circumstance. When people try to cancel each …

Dagon: There's no need to cancel anyone who's failing to have influence already. I suspect there are no apolitical test cases: cancellation (in the form of verbally attacking and de-legitimizing someone as a person, rather than arguing against specific portions of their work) is primarily politically motivated. It's pretty pure ad-hominem argument: "don't listen to or respect this person, regardless of what they're saying". In this case, I'm not listening because I think it's low-value on its own, regardless of authorship. The manifesto is pretty easy to find in PDF form for free. I wasn't able to get very far - way too many crackpot signals and it didn't seem worth my time.

To your bullet points:

1. I can read this two ways: "should anybody" meaning "do you recommend any specific person read it" or "do you object to people reading it". My answers are "yes, but not many people", and "no". Anybody who is interested, either from a direct curiosity on the topic (which I predict won't be rewarded) or from wanting to understand this kind of epistemic pathology (which might be worthwhile), should read it.
2. It's absolutely acceptable. I wouldn't enjoy it, but I'm not a member of the group, so no harm there. To decide whether YOUR group should do it, try to identify what you'd hope to get out of it, and what likely consequences there are from pursuing that direction. If your group is visible and sensitive to public perception (aka politically influenced), then certainly you should consider those effects.
DanielFilan: To be explicit, here are some reasons that the EA community should cancel Kaczynski. Note that I do not necessarily think that they are sound or decisive.

* EAs are known as utilitarians who are concerned about the impact of AI technology. Associating with him could give people the false impression that EAs are in favour of terroristic bombing campaigns to retard technological development, which would damage the EA community.
* His threat to bomb more people and buildings if the Washington Post (WaPo) didn't publish his manifesto damaged good discourse norms by inducing the WaPo to talk about something it wasn't otherwise inclined to talk about, and good discourse norms are important for effective altruism.
* It seems to me (not having read the manifesto) that the policies he advocates would cause large amounts of harm. For instance, without modern medical technology, I and many others would not have survived to the age of one year.
* His bombing campaign is evidence of very poor character.
niplav's Shortform

I just updated my cryonics cost-benefit analysis, along with some small fixes and additions.

The basic result has not changed, though. It's still worth it.

MikkW's Shortform

Riemannian geometry belongs on the list of fundamental concepts that are taught and known far less than they should be in any competent society.

steve2152's Shortform

Quick comments on "The case against economic values in the brain" by Benjamin Hayden & Yael Niv:

(I really only skimmed the paper, these are just impressions off the top of my head.)

I agree that "eating this sandwich" doesn't have a reward prediction per se, because there are lots of different ways to think about eating this sandwich, especially what aspects are salient, what associations are salient, what your hormones and mood are, etc. If neuroeconomics is premised on reward predictions being attached to events and objects rather than thoughts, then…

otto.barten's Shortform

Tune AGI intelligence by easy goals

If an AGI is provided an easily solvable utility function ("fetch a coffee"), it will lack the incentive to self-improve indefinitely. The fetch-a-coffee-AGI will only need to become as smart as a hypothetical simple-minded waiter. By creating a certain easiness for a utility function, we can therefore tune the intelligence level we want an AGI to achieve using self-improvement. The only way to achieve an indefinite intelligence explosion (until e.g. material boundaries) would be to program a utility function ma…

Charlie Steiner: Suppose I get hit by a meteor before I can hear your "2" - will you then have failed to tell me what 1+1 is? If so, suddenly this simple goal implies being able to save the audience from meteors. Or suppose your screen has a difficult-to-detect short circuit - your expected utility would be higher if you could check your screen and repair it if necessary. Because a utility maximizer treats a 0.09% improvement over a 99.9% baseline just as seriously as it treats a 90% improvement over a 0% baseline, it doesn't see these small improvements as trivial, or in any way not worth its best effort. If your goal actually has some chance of failure, and there are capabilities that might help mitigate that failure, it will incentivize capability gain. And because the real world is complicated, this seems like it's true for basically all goals that care about the state of the world.

If we have a reinforcement learner rather than a utility maximizer with a pre-specified model of the world, this story is a bit different, because of course there will be no meteors in the training data. Now, you might think that this means that the RL agent cannot care about meteors, but this is actually somewhat undefined behavior, because the AI still gets to see observations of the world. If it is vanilla RL with no "curiosity," it won't ever start to care about the world until the world actually affects its reward (which for meteors, will take much too long to matter, but does become important when the reward is more informative about the real world), but if it's more along the lines of DeepMind's game-playing agents, then it will try to find out about the world, which will increase its rate of approaching optimal play.

There are definitely ideas in the literature that relate to this problem, particularly trying to formalize the notion that the AI shouldn't "try too hard" on easy goals. I think these attempts mostly fall under two umbrellas - other-izers (that is, not maximizers) and impact r…
otto.barten: Thanks again for your reply. I see your point that the world is complicated and a utility maximizer would be dangerous, even if the maximization is supposedly trivial. However, I don't see how an achievable goal has the same problem. If my AI finds the answer of 2 before a meteor hits it, I would say it has solidly landed at 100% and stops doing anything. Your argument would be true if it decides to rule out all possible risks first, before actually starting to look for the answer of the question, which would otherwise quickly be found. But since ruling out those risks would be much harder to achieve than finding the answer, I can't see my little agent doing that. I think my easy goals come closest to what you call other-izers. Any more pointers for me to find that literature? Thanks for your help, it helps me to calibrate my thoughts for sure!

I think actually 1+1 = ? is not really an easy enough goal, since it's not 100% sure that the answer is 2. Getting to 100% certainty (including about what I actually meant by that question) could still be nontrivial. But let's say the goal is 'delete filename.txt'? Could be the trick is in the language…
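The maximizer-versus-achievable-goal distinction in this thread can be caricatured in a few lines (a toy sketch of my own; the actions, costs, and success probabilities are invented): a pure maximizer always buys the last sliver of probability, however expensive, while a satisficer stops at the cheapest action that clears a threshold.

```python
# Hypothetical actions as (name, cost, probability the goal succeeds)
actions = [
    ("just answer",         1,    0.999),
    ("verify the screen",   10,   0.9999),
    ("shield from meteors", 1000, 0.99999),
]

def maximizer(actions):
    """A pure success-probability maximizer ignores cost entirely."""
    return max(actions, key=lambda a: a[2])[0]

def satisficer(actions, threshold=0.99):
    """Take the cheapest action clearing the threshold, then stop."""
    good_enough = [a for a in actions if a[2] >= threshold]
    return min(good_enough, key=lambda a: a[1])[0]
```

Under these made-up numbers the maximizer pays 1000 to shield from meteors for a 0.009% gain, while the satisficer just answers - which is roughly the behavior the "easy goal" proposal is reaching for, and why Charlie points to other-izers rather than maximizers.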

Daniel Kokotajlo's Shortform

When I first read the now-classic arguments for slow takeoff -- e.g. from Paul and Katja -- I was excited; I thought they described a serious alternative scenario to the classic FOOM scenarios. However I never thought, and still do not think, that the classic FOOM scenarios were very unlikely; I feel that the slow takeoff and fast takeoff scenarios are probably within a factor of 2 of each other in probability.

Yet more and more nowadays I get the impression that people think slow takeoff is the only serious possibility. For example, Ajeya and Rohin seem ve…

Roko's Shortform

One weird trick for estimating the expectation of Lognormally distributed random variables:

If you have a variable X that you think is somewhere between 1 and 100 and is Lognormally distributed, you can model it as being a random variable with distribution ~ Lognormal(1,1) - that is, the base-10 logarithm of X has a distribution ~ Normal(1,1).

What is the expectation of X?

Naively, you might say that since the expectation of log10(X) is 1, the expectation of X is 10^1, or 10. That seems to make sense: 10 is the midpoint of 1 and 100 on a log scale.

This is wrong though. The chanc…
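The gap can be computed directly. If log10(X) ~ Normal(1, 1), then X = exp(Z·ln 10) with Z ~ Normal(1, 1), and the lognormal mean formula E[exp(tZ)] = exp(tμ + t²σ²/2) gives roughly 142, not 10 - the heavy right tail dominates the expectation. A quick sketch checking the closed form against Monte Carlo:

```python
import math
import random

mu, sigma = 1.0, 1.0        # parameters of log10(X)
t = math.log(10)            # X = exp(t * Z) where Z ~ Normal(mu, sigma)

# closed-form lognormal mean: E[exp(tZ)] = exp(t*mu + (t*sigma)**2 / 2)
analytic = math.exp(t * mu + (t * sigma) ** 2 / 2)   # roughly 142

# Monte Carlo check: average many samples of 10**Z
random.seed(0)
samples = (10 ** random.gauss(mu, sigma) for _ in range(200_000))
monte_carlo = sum(samples) / 200_000

# the naive answer 10**mu == 10 underestimates the mean ~14-fold
```

The median of X really is 10; it's the expectation that the rare large draws drag up by an order of magnitude.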

johnswentworth's Shortform

Neat problem of the week: researchers just announced roughly-room-temperature superconductivity at pressures around 270 GPa. That's stupidly high pressure - a friend tells me "they're probably breaking a diamond each time they do a measurement". That said, pressures in single-digit GPa do show up in structural problems occasionally, so achieving hundreds of GPa scalably/cheaply isn't that many orders of magnitude away from reasonable, it's just not something that there's historically been much demand for. This problem plays with one idea for generating suc…

Dony's Shortform Feed

I am very interested in practicing steelmanning/Ideological Turing Test with people of any skill level. I have only done it once conversationally and it felt great. I'm sure we can find things to disagree about. You can book a call here.

jp's Shortform

What to do if you suddenly need to rest your hands

On Monday I went from "computer work seems kind of uncomfortable, I wonder if I should be worried" to "oh crap oh crap, that's actually painful". Everything I've ever heard says not to work through RSI pain, so what now? I decided to spend a week learning hands free input. I wanted to a) get some serious rest and b) still be productive. And guess what? Learning hands free input is like the one activity that does not suffer a productivity penalty from not being able to use your hands.

When I started it was re…

Have you seen Serenade?
