Shortform Content [Beta]

TruetoThis's Shortform

There is a theory of "the path of least resistance" that implies that one should go with the flow. With that in mind, how do you continue to nurture the growth resulting from challenges? Does the rationale of the path of least resistance conflict with the challenges of life that are required for change?

Pattern (12h, 3 points): "Does the rationale of the path of least resistance conflict with the challenges of life that are required for change?"

No: "Life provides enough problems without us making more."

Yes:
• If you're up to your neck in water, (maybe) it's time to stop walking and start swimming.
• If you don't have enough challenges, you can make what you're doing more difficult, or go find something better (harder) to do.
• Imagine you are at the beach. If you swim out far enough, you can surf back in. If you get caught in a rip tide/rip current, the current may be too strong to fight.*
• Metaphorically, just because you're going in the direction of the current doesn't mean you have to just stay afloat**; you can swim.

* Instead, swim parallel to the beach until you're out of the current, then go back in.
** This behavior also seems characteristic of something having gone wrong. (If this is the case, identifying and addressing the problem may be as important as trying to change tack, which is not a 1d move in the literal world.)
TruetoThis (15h, 1 point): Would you say the same of "the path" in emotions and relationships?

I am not sure what that is.

It still takes effort to travel along a path. And there are many paths to choose from.

Draconarius's Shortform

Hilbert's Hotel improvement

This hotel is 2 stars at best: imagine having to pack up your stuff every time the hotel receives a new guest. I've decided to fix that. The hotel still has infinite rooms and guests, but this time every other room is unoccupied, which prepares the hotel for an infinite number of new visitors without inconveniencing the current residents.
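As a quick formal sketch of the proposed arrangement (my own notation, not from the post): existing guests take the even-numbered rooms, leaving every odd-numbered room free for newcomers, so nobody ever has to move.

```latex
% Sketch (my notation): current guests occupy even rooms, newcomers get odd rooms.
\[
  \text{current guest } g_n \mapsto \text{room } 2n,
  \qquad
  \text{new visitor } v_m \mapsto \text{room } 2m - 1,
  \qquad n, m \in \mathbb{N}^{+}.
\]
```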

Ariel Kwiatkowski (3d, 8 points): But isn't the whole point that the hotel is full initially, and yet can accept more guests?
mr-hire (3d, 2 points): Yeah, the hotel being always half full no matter how many guests it has doesn't seem as cool.

As soon as one more guest shows up it's more than half full.

Bob Jacobs's Shortform

With climate change getting worse by the day, we need to switch to sustainable energy sources sooner rather than later. The new molten salt reactors are small, clean, and safe, but still carry the stigma of nuclear energy. Since these reactors (like others) can use old nuclear waste as a fuel source, I suggest we rebrand them as "Nuclear Waste Eaters" and give them (or a company that makes them) a logo in the vein of this quick sketch I made: https://postimg.cc/jWy3PtjJ

Hopefully a rebranding to "thing getting rid of the thing you hate, also di... (read more)

AABoyles's Shortform

Anything sufficiently far away from you is causally isolated from you. Because of the fundamental constraints of physics, information from there can never reach here, and vice versa. You may as well be in separate universes.

The performance of AlphaGo got me thinking about algorithms we can't access. In the case of AlphaGo, we implemented the algorithm (AlphaGo) which discovered some strategies we could never have created. (Go Master Ke Jie famously said "I would go as far as to say not a single human has touched the edge of the truth of Go.")

Perhaps

... (read more)
Pattern (7d, 4 points): 2 things are necessary for an algorithm to be useful:
• If it's not fast enough, it doesn't matter how good it is.
• If we don't know what it's good for, it doesn't matter how good it is (until we figure that out).

Part of the issue with this might be programs that don't work or don't do anything. (Beyond the trivial, it's not clear how to select for this, outside of something like AlphaGo.)
AABoyles (7d, 1 point): Sure! My brute-force bitwise algorithm generator won't be fast enough to generate any algorithm of length 300 bits, and our universe probably can't support a representation of any algorithm longer than (the number of atoms in the observable universe) ~ 10^82 bits. (I don't know much about physics, so this could be very wrong, but think of it as a useful bound. If there's a better one (e.g. the number of Planck volumes [https://en.wikipedia.org/wiki/Planck_units] in the observable universe), substitute that and carry on, and also please let me know!)

Another class of algorithms that cause problems are those that don't do anything useful for some number of computations, after which they begin to output something useful. We don't really get to know whether they will halt [https://en.wikipedia.org/wiki/Halting_problem], so even if the useful structure emerges after some number of steps, we may not be committed to, or able to, run them that long.
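A toy sketch of what such a bounded brute-force search could look like (entirely my own illustration; the "interpreter" is a made-up stand-in for a real program encoding). It caps both the program length in bits and the number of steps any candidate may run, which sidesteps the halting problem by fiat:

```python
# Toy sketch (my illustration, not the commenter's actual generator): enumerate
# all bitstrings up to MAX_BITS and run each under a hard step budget.
from itertools import product

MAX_BITS = 12        # program-length cap (the comment's physical bound is ~10^82 bits)
MAX_STEPS = 1_000    # hard cutoff, since we can't decide halting in general

def run(program_bits, max_steps=MAX_STEPS):
    """Run a bitstring under a hypothetical interpreter with a step budget."""
    acc = 0
    for steps, bit in enumerate(program_bits, start=1):
        if steps > max_steps:
            return None          # treated as "didn't halt within budget"
        acc = acc * 2 + bit      # stand-in semantics; a real VM would go here
    return acc

def search(max_bits=MAX_BITS):
    """Yield every (program, output) pair that halts within the budget."""
    for length in range(1, max_bits + 1):
        for bits in product((0, 1), repeat=length):
            result = run(bits)
            if result is not None:
                yield bits, result
```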

I'm not a physicist either, but quantum mechanics might change the limits. (If it scales, though this might leave input and output limits; if the quantum computer can't store the output in classical mode, then its ability to run the program probably doesn't matter. This might make less efficient crypto systems more secure, by virtue of size.*)

* Want your messages to be more secure? Padding.

Want your key more secure? Length.

Sherrinford's Shortform

You would hope that people actually saw steelmanning as an ideal to follow. If that was ever true, the corona pandemic and the policy response seem to have killed the demand for it. It seems to have become acceptable to attribute just about any kind of seemingly-wrong behavior to either incredible stupidity or incredible malice, both proving that all institutions are completely broken.

Dagon (1d, 4 points): I like the word "institurions". Some mix of institutions, intuitions, and centurions, and I agree that they're completely broken.

:-) Thanks. But I corrected it.

Paul Crowley's Shortform

For the foreseeable future, it seems that anything I might try to say to my UK friends about anything to do with LW-style thinking is going to be met with "but Dominic Cummings". Three separate instances of this in just the last few days.

Kaj_Sotala (4d, 10 points): Can you give some examples of "LW-style thinking" that they now associate with Cummings?
Paul Crowley (1d, 7 points): On Twitter I linked to this [https://www.lesswrong.com/posts/ecyYjptcE34qAT8Mm/job-ad-lead-an-ambitious-covid-19-forecasting-project], saying: [...] Response: [...]
lsusr's Shortform

[Book Review] Surfing Uncertainty

Surfing Uncertainty is about predictive coding, the theory in neuroscience that each part of your brain attempts to predict its own inputs. Predictive coding has lots of potential consequences. It could resolve the problem of top-down vs bottom-up processing. It cleanly unifies lots of ideas in psychology. It even has implications for the continuum with autism on one end and schizophrenia on the other.

The most promising thing about predictive coding is how it could provide a mathematical formulation for how the human brain

... (read more)
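The post is cut off above, but for flavor, here is a minimal sketch of the standard prediction-error-minimization formulation usually associated with predictive coding (my own gloss, not necessarily the book's exact equations):

```latex
% Standard predictive-coding gloss (my sketch): a level predicts its input as
% \hat{x} = g(\phi) and adjusts \phi to reduce the precision-weighted error.
\[
  \varepsilon = x - g(\phi),
  \qquad
  \phi \leftarrow \phi + \alpha \,\Pi\, \varepsilon \,\frac{\partial g}{\partial \phi}
\]
```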
Ariel Kwiatkowski's Shortform

Has anyone tried to work with neural networks predicting the weights of other neural networks? I'm thinking about that in the context of something like subsystem alignment, e.g. in an RL setting where an agent first learns about the environment, and then creates a subagent (by outputting the weights, or some embedding of its policy) that actually obtains some reward.
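For what it's worth, this sounds like a hypernetwork-style setup. A minimal sketch of what I take the idea to be (PyTorch; all names, sizes, and the embedding source are hypothetical): a network maps the agent's learned environment embedding to the flattened weights of a small sub-agent policy.

```python
# Minimal sketch (hypothetical setup, PyTorch): a hypernetwork maps an
# environment embedding to the weights of a small two-layer sub-agent policy.
import torch
import torch.nn as nn

TARGET_SHAPES = [(16, 4), (16,), (2, 16), (2,)]  # w1, b1, w2, b2 of the sub-agent

class HyperNet(nn.Module):
    def __init__(self, embed_dim=32):
        super().__init__()
        n_params = sum(torch.Size(s).numel() for s in TARGET_SHAPES)
        self.net = nn.Sequential(nn.Linear(embed_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_params))

    def forward(self, env_embedding):
        flat = self.net(env_embedding)            # predicted weights, flattened
        weights, i = [], 0
        for shape in TARGET_SHAPES:
            n = torch.Size(shape).numel()
            weights.append(flat[i:i + n].reshape(shape))
            i += n
        return weights

def subagent_policy(weights, obs):
    """Run the generated weights as the sub-agent's policy network."""
    w1, b1, w2, b2 = weights
    h = torch.relu(obs @ w1.T + b1)
    return h @ w2.T + b2                           # action logits

hyper = HyperNet()
weights = hyper(torch.randn(32))                   # embedding learned elsewhere
print(subagent_policy(weights, torch.randn(4)))    # sub-agent acts in the env
```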

__nobody's Shortform

Observation: It should generally be safe to forbid non-termination when searching for programs/algorithms.

In practice, all useful algorithms terminate: if you know that you're dealing with a semi-decidable thing and doing serious work, you'll either (a) add a hard cutoff, or (b) structure the algorithm into a bounded step function and a controller that decides whether or not to run for another step. That transformation doesn't add significant size overhead, so you're bound to find a terminating algorithm "near" a non-terminating one!

Sure, that sligh

... (read more)
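A minimal sketch of the transformation described above (my own toy example, with a made-up search predicate): the possibly-unbounded loop becomes a bounded step function plus a controller that enforces a hard cutoff, so the search always terminates.

```python
# Toy sketch (my illustration): bounded step function + controller with a hard cutoff.

def step(state):
    """One bounded unit of work. Returns (new_state, result_or_None)."""
    n = state["n"]
    state["n"] = n + 1
    found = n > 10 and n % 7 == 0      # hypothetical "is this what we're searching for?"
    return state, (n if found else None)

def controller(max_steps=10_000):
    """Decides whether to run another step; guarantees termination."""
    state = {"n": 0}
    for _ in range(max_steps):
        state, result = step(state)
        if result is not None:
            return result
    return None                        # budget exhausted; treated as non-terminating

print(controller())                    # -> 14
```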
Raemon's Scratchpad

I had a very useful conversation with someone about how and why I am rambly (I rambled a lot in the conversation!).

Disclaimer: I am not making much effort to not ramble in this post.

A couple takeaways:

1. Working Memory Limits

One key problem is that I introduce so many points, subpoints, and subthreads that I overwhelm people's working memory (the human working memory limit is roughly 4-7 chunks).

It's sort of embarrassing that I didn't concretely think about this before, because I've spent the past year SPECIFICALLY thinking about working memory limi

... (read more)

Re working memory: I never thought of it during conversations; interesting. It seems that we sometimes hold the nodes of the conversation tree in order to go back to them afterward. And maybe if you're introducing new concepts while you're talking, people need to hold those definitions in working memory as well.

Tetraspace Grouping's Shortform

Thoughts on Ryan Carey's Incorrigibility in the CIRL Framework (I am going to try to post these semi-regularly).

  • This specific situation looks unrealistic. But it's not really trying to be too realistic, it's trying to be a counterexample. In that spirit, you could also just use , which is a reward function parametrized by that gives the same behavior but stops me from saying "Why Not Just set ", which isn't the point.
    • How something like this might actually happen: you try to have your b
... (read more)
Tetraspace Grouping (1mo, 5 points): Thoughts on Dylan Hadfield-Menell et al.'s The Off-Switch Game [https://arxiv.org/abs/1611.08219].
• I don't think it's quite right to call this an off-switch: the model is fully general to the situation where the AI is choosing between two alternatives A and B (normalized in the paper so that U(B) = 0), and to me an off-switch is a hardware override that the AI need not want you to press.
• The wisdom to take away from the paper: an AI will voluntarily defer to a human (in the sense that the AI thinks it can get a better outcome by its own standards if it does what the human says) if it's uncertain about the utilities, or if the human is rational.
• This whole setup seems to be somewhat superseded by CIRL [https://arxiv.org/abs/1606.03137], which has the AI, uh, causally find U_A by learning its value from the human's actions, instead of evidentially(?) doing it by taking decisions that happen to land it on action A when U_A is high, because it's acting in a weird environment where a human is present as a side-constraint.
• Could some wisdom to gain be that the high-variance, high-human-rationality regime is something of an explanation as to why CIRL works? I should read more about CIRL to see if this is needed or helpful, and to compare and contrast, etc.
• Why does the reward gained drop when uncertainty is too high? Because the prior that the AI gets from estimating the human reward is more accurate than the human's decisions, so in too-high-uncertainty situations it keeps mistakenly deferring to the flawed human, who tells it to take the worse action more often?
• The verbal description, that the human just types in a noisily sampled value of U_A, is somewhat strange: if the human has explicit access to their own utility function, they can just take the best actions directly! In practice, though, the AI would learn this by looking at many past human actions (there's some CIRL!) which does seem like it
Tetraspace Grouping (7d, 11 points): Thoughts on Abram Demski's Partial Agency [https://www.lesswrong.com/s/HeYtBkNbEe7wpjc6X/p/4hdHto3uHejhY2F3Q]:

When I read Partial Agency, I was struck with a desire to try formalizing this partial agency thing. Defining Myopia [https://www.lesswrong.com/s/HeYtBkNbEe7wpjc6X/p/qpZTWb2wvgSt5WQ4H] seems like it might have a definition of myopia; one day I might look at it. Anyway,

Formalization of Partial Agency: Try One

A myopic agent is optimizing a reward function R(x_0, y(x_0)), where x is the vector of parameters it's thinking about and y is the vector of parameters it isn't thinking about. The gradient descent step picks the δx in the direction that maximizes R(x_0 + δx, y(x_0)) (it is myopic, so it can't consider the effects on y), and then moves the agent to the point (x_0 + δx, y(x_0 + δx)).

This is dual to a stop-gradient agent, which picks the δx in the direction that maximizes f(x_0 + δx, y(x_0 + δx)) but then moves the agent to the point (x_0 + δx, y(x_0)) (the gradient through y is stopped).

For example:
• Nash equilibria: x are the parameters defining the agent's behavior. y(x_0) are the parameters of the other agents if they go up against the agent parametrized by x_0. R is the reward given for an agent x going up against a set of agents y.
• Image recognition with a neural network: x is the parameters defining the network, y(x_0) are the image classifications for every image in the dataset for the network with parameters x_0, and R is the loss function plus the loss of the network described by x on classifying the current training example.
• Episodic agent: x are parameters describing the agent's behavior. y(x_0) are the performances of the agent x_0 in future episodes. R is the sum of y, plus the reward obtained in the current episode.

Partial Agency due to Uncertainty?

Is it possible to cast partial agency in terms of uncertainty over reward functions? One reason I'd be myopic is if I didn't believe that I could, in expectation, improve some pa
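(The comment is cut off above, but here's a tiny numerical sketch of the two gradients in the formalization; my own toy example in PyTorch, with R, y, and the starting point made up. The "myopic" gradient holds y fixed at y(x_0) via detach, while the full gradient lets effects flow through y.)

```python
# Toy numerical sketch (my example; R, y, x0 are made up): the gradient a
# myopic agent follows (y held fixed via detach) vs. the full gradient that
# includes effects flowing through y(x).
import torch

def y(x):                       # the parameters the myopic agent ignores
    return x ** 2

def R(x, y_val):                # reward depends on x directly and through y
    return -(x - 3) ** 2 - y_val

x0 = torch.tensor(1.0, requires_grad=True)

R(x0, y(x0)).backward()                 # full gradient: d/dx [-(x-3)^2 - x^2]
print("full gradient:  ", x0.grad)      # 6 - 4x -> 2.0 at x0 = 1

x0.grad = None
R(x0, y(x0).detach()).backward()        # myopic gradient: y frozen at y(x0)
print("myopic gradient:", x0.grad)      # -2(x-3) -> 4.0 at x0 = 1
```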

So the definition of myopia given in Defining Myopia was quite similar to my expansion in the But Wait There's More section; you can roughly match the two up, where γ_i is a real number corresponding to the amount that the agent cares about rewards obtained in episode i, and r_i is the reward obtained in episode i. Putting both of these into the sum gives ∑_i γ_i r_i, the undiscounted, non-myopic reward that the agent eventually obtains.

In terms of the definition that I give in t... (read more)

Mary Chernyshenko's Shortform

Some other people who play to win

It's a crowd I came into contact with as a manager of an online bookshop (and most of the reason I quit). Usually I can pretend they don't exist, but... we all know how it goes... and now that they don't make my blood boil every weekend, I can afford to speak about them.

"Some other people" will play to win - say, a facebook lottery with a book for a prize, and they will mean it. If they don't win, they will say the lottery was rigged. Public righteous indignation on every player's behalf is a weapon (and for the manag

... (read more)
Pattern (6d, 2 points): What's the book?

Not any particular book, but rather some frequent conditions of game theory problems I have seen here and elsewhere (my Facebook friend keeps posting such pieces). "The players care only about winning", etc. Well, some people actually do.

ryan wong's Shortform

There are two kinds of pleasurable feelings. The first one is a self-reinforcing loop, where the in-the-moment pleasure leads to craving for more pleasure, such as mindlessly scrolling through social media, or eating highly-processed, highly-palatable food. The second is pleasure gained through either thoughtfully consuming good content, like listening to good music or reading good books, or the fulfillment of a task that's meaningful, such as getting good grades or getting a promotion for sustained conscientious effort.

The first is pleasure for its o... (read more)

ryan wong (6d, 1 point): As an add-on, I found this LW article [https://www.lesswrong.com/posts/KwdcMts8P8hacqwrX/noticing-the-taste-of-lotus] today that captures the essence of the "first pleasure".

I'm not quite sure I got this part, could you please elaborate on it?

Here, I would argue that the feeling of the second pleasure is essential to meeting long-term goals. Feeling good about accomplishing sub-tasks will keep someone working towards an important goal, especially if it requires a long period of sustained effort. Thus: meeting a short-term goal -> success -> second pleasure -> working on the next short-term goal; with enough iterations, that will lead to the meeting of long-term goals, and thus success.
Dagon (6d, 3 points): https://wiki.lesswrong.com/wiki/Paperclip_maximizer is the canonical example of over-simplified goal optimization. I bring it up mostly as a reminder that getting your motivational model wrong can lead to undesirable actions and results. Which leads to my main point: you're recommending one type of pleasure over another, based on it being more aligned with your non-pleasure-measured goals. I'm wondering why you are arguing for this, as opposed to just pursuing the goals directly, without consideration of pleasure.

Ah, now I get what you mean. Thanks for referring me to that thought experiment; I don't have much prior knowledge in the field of AI, so that was definitely a new insight for me.

I see now that my original shortform did not explicitly state that my terminal value was indeed the fulfillment of important goals. I was reflecting more on the distinction between pleasurable feelings that led to distraction & bad habits, vs ones that led to the actual fulfillment of goals. It was a personal reminder to experience the latter in place of the forme... (read more)

TurnTrout's shortform feed

Sentences spoken aloud are a latent space embedding of our thoughts; when trying to move a thought from our mind to another's, our thoughts are encoded with the aim of minimizing the other person's decoder error.

Raemon's Scratchpad

There's a problem at parties where there'll be a good, high-context conversation happening, and then one too many people join, and then the conversation suddenly dies.

Sometimes this is fine, but other times it's quite sad.

Things I think might help:

  • If you're an existing conversation participant:
    • Actively try to keep the conversation small. The upper limit is 5, 3-4 is better. If someone looks like they want to join, smile warmly and say "hey, sorry we're kinda in a high context conversation right now. Listening is fine but probably don't join."
    • If you do want
... (read more)
mr-hire (8d, 2 points): I hosted an online party using Zoom breakout rooms a few weeks ago and ran into similar problems. Halfway through the party I noticed people were clustering in suboptimally sized conversations and bringing high-context conversations to a stop, so I actually brought everybody back to the lobby, then randomly assigned them to groups of 2 or 3; when I checked 10 minutes later, everyone was in the same two rooms again, in groups of 8-10 people.

AFAICT this was status/feelings driven: there were a few people at the party who were either existing high-status to the participants, or who were very charismatic, and everyone wanted to be in the same conversation as them. I think norm-setting around this is very hard, because it's natural to want to be around high-status and charismatic people, and it's also natural to want to participate in a conversation you're listening to.

I'm going to try adding your suggestions to the top of the shared Google Doc next time I host one of these and see how it goes.

Agreed with the status/feelings cause. And I'm not 100% sure the solution is "prevent people from doing the thing they instinctively want to do" (especially "all the time.")

My current guess is "let people crowd around the charismatic and/or interesting people, but treat it more like a panel discussion or fireside chat, like you might have at a conference, where mostly 2-3 people are talking and everyone else is more formally 'audience.'"

But doing that all the time would also be kinda bad in different ways.

In this case... you might actually be able to fix t

... (read more)
Raemon (9d, 2 points): FYI, the actual motivating example here was at a party in gather.town [https://gather.town/] (formerly online.town, formerly town.siempre), which has much more typical "party" dynamics (i.e. people can wander around an online world and video chat with people nearby).

In this case there were actually some additional complexities: I had joined a conversation relatively late, I did lurk for quite a while, and I waited for the current set of topics to die down completely before introducing a new one. And then the conversation took a turn that I was really excited by, and at least 1-2 other people were interested in, but it wasn't obvious to me that it was interesting to everyone else (I think ~5 people involved total?)

And then a new person came in, and asked what we were talking about, and someone filled them in...

...and then immediately the conversation ended. And in this case I don't know if the issue was more like "the newcomer killed the conversation" or "the convo had actually roughly reached its natural end, and/or other people weren't that interested in the first place."

But, from my own perspective, the conversation had just finished covering all the obvious background concepts that would be required for the "real" conversation to begin, and I was hoping to actually Make Real Progress on a complex concept. So, I dunno if this counted as "an interesting conversation" yet, and unfortunately the act of asking the question "hey, do we want to continue diving deep into this, or wrap up and transition into some other convo?" also kinda kills the conversation. Conversations are so god damn fragile.

What I really wished was that everyone already had common knowledge of the meta-concept, wherein:
• Party conversations are particularly fragile.
• Bringing a newcomer up to speed is usually costly if the conversation is doing anything deep.
• We might or might not want to continue delving into the current convo (but we don't currently have common knowledge of th
TurnTrout's shortform feed

Virtue ethics seems like model-free consequentialism to me.

I was thinking along similar lines!

From my notes from 2019-11-24: "Deontology is like the learned policy of bounded rationality of consequentialism"

ESRogs's Shortform

I'm looking for an old post where Eliezer makes the basic point that we should be able to do better than intellectual figures of the past, because we have the "unfair" advantage of knowing all the scientific results that have been discovered since then.

I think he cites in particular the heuristics and biases literature as something that thinkers wouldn't have known about 100 years ago.

I don't remember if this was the main point of the post it was in, or just an aside, but I'm pretty confident he made a point like this at least... (read more)

Pattern (10d, 10 points):
1. https://www.lesswrong.com/posts/96TBXaHwLbFyeAxrg/guardians-of-ayn-rand (Not sure if the material is accurate, but I think it's the post you're looking for. There could have been more than one on that, though.)
2. https://www.lesswrong.com/posts/7s5gYi7EagfkzvLp8/in-defense-of-ayn-rand

Thanks!

NaiveTortoise's Short Form Feed

Weird thing I wish existed: I wish there were more videos of what I think of as 'math/programming speedruns'. For those familiar with speedrunning video games, this would be similar, except the idea would be to do the same thing for a math proof or programming problem. While it might seem like this would be quite boring since the solution to the problem/proof is known, I still think there's an element of skill to it, and I would enjoy watching someone do everything they can to get to a solution, proof, etc. as quickly as possible (in an editor, on paper, LaTeX, e

... (read more)
riceissa (10d, 4 points): Somewhat related: https://xenaproject.wordpress.com/2020/05/23/the-complex-number-game/

This is awesome! I've been thinking I should try out the natural number game for a while because I feel like formal theorem proving will scratch my coding / video game itch in a way normal math doesn't.

NaiveTortoise (1mo, 1 point): Cool!
mingyuan's Shortform

When I was in high school, I once had a conversation with a classmate that went something like this (except that it was longer and I was less eloquent):

Him: "German is a Scandinavian language."

Me: "No, it's not. German and the Scandinavian languages both fall under the umbrella of Germanic languages, but 'Scandinavian languages' refers to a narrower category that doesn't include German."

Him: "Well that's your opinion."

Me: "No??? That's not what an opinion is???"

Him: "Look, it's your opinion that German isn't a Scandinavian language, and it's my opinion tha

... (read more)
William_Darwin's Shortform

I've been thinking about people's mindsets as they relate to spending their free time. Specifically, when you go to do something 'productive', like learning about a new topic, working through exercises in a textbook, or going through an online course, do you feel that you have to intentionally decide not to play video games, watch Netflix, etc., and forgo short-term happiness? Or do you feel that this decision is straightforward because that's what you would prefer to be doing, and you don't feel like you're sacrificing anything?
