Shortform Content [Beta]

capybaralet's Shortform

Whelp... that's scary: 
Chip Huyen (@chipro), in a thread:
4. You won’t need to update your models as much One mindboggling fact about DevOps: Etsy deploys 50 times/day. Netflix 1000s times/day. AWS every 11.7 seconds. MLOps isn’t an exemption. For online ML systems, you want to update them as fast as humanly possible. (5/6)
https://twitter.com/chipro/status/1310952553459462146

John_Maxwell's Shortform

Someone wanted to know about the outcome of my hair loss research so I thought I would quickly write up what I'm planning to try for the next year or so. No word on how well it works yet.

Most of the ideas are from this review: https://www.karger.com/Article/FullText/492035

... (read more)
Jimrandomh's Shortform

According to Fedex tracking, on Thursday, I will have a Biovyzr. I plan to immediately start testing it, and write a review.

What tests would people like me to perform?

Tests that I'm already planning to perform:

To test its protectiveness, the main test I plan to perform is a modified Bitrex fit test. This is where you create a bitter-tasting aerosol, and confirm that you can't taste it. The normal test procedure won't work as-is because the Biovyzr is too large to use with a plastic hood, so I plan to go into a small room, and have someone (wearing a respirator themselve... (read more)

AllAmericanBreakfast's Shortform

Explanation for why displeasure would be associated with meaningfulness, even though in fact meaning comes from pleasure:

Meaningful experiences involve great pleasure. They also may come with small pains. Part of how you quantify your great pleasure is the size of the small pain that it superseded.

Pain does not cause meaning. It is a test for the magnitude of the pleasure. But only pleasure is a causal factor for meaning.

Viliam (14h): In a perfect situation, it would be possible to achieve meaningful experiences without pain, but usually it is not possible. A person who optimizes for short-term pain avoidance will not reach the meaningful experience. Because optimizing for short-term pain avoidance is natural, we have to remind ourselves to overcome this instinct.

This fits with the idea that meaning comes from pleasure, and that great pleasure can be worth a fair amount of pain to achieve. The pain drains meaning away, but the redeeming factor is that it can serve as a test of the magnitude of pleasure, and generate pleasurable stories in the future.

An important counterargument to my hypothesis is that we may find a privileged “high road” to success and pleasure to be less meaningful. This at first might seem to suggest that we do inherently value pain.

In fact, though, what frustrates people about people born with ... (read more)

AllAmericanBreakfast (15h): I think we can consider pleasure, along with altruism, consistency, rationality, fitting the categorical imperative, and so forth, as moral goods. People have different preferences for how they trade off one against the other when they're in conflict. But they of course prefer them not to be in conflict.

What I'm interested in is not what weights people assign to these values - I agree with you that they are diverse - but what causes people to adopt any set of preferences at all. My hypothesis is that it's pleasure. Or more specifically, whatever moral argument most effectively hijacks an individual person's psychological reward system. So if you wanted to understand why another person considers some strange action or belief to be moral, you'd need to understand why the belief system that they hold gives them pleasure.

Some predictions from that hypothesis:

* People who find a complex moral argument unpleasant to think about won't adopt it.
* People who find a moral community pleasant to be in will adopt its values.
* A moral argument might be very pleasant to understand, rehearse, and think about, and unpleasant to abandon. It might also be unpleasant in the actions it motivates its subscriber to undertake. It will continue to exist in their mind if the balance of pleasure in belief to displeasure in action is favorable.
* Deprogramming somebody from a belief system you find abhorrent is best done by giving them alternative sources of "moral pleasure." Examples of this include the ways people have deprogrammed people from cults and the KKK, by including them in their social gatherings, including Jewish religious dinners, and making them feel welcome. Eventually, the pleasure of adopting the moral system of that shared community displaces whatever pleasure they were deriving from their former belief system.
* Paying somebody in money and status to uphold a given belief system is a great way to keep them doing it, no matt
MikkW's Shortform

I'm quite baffled by the lack of response to my recent question asking which AI-researching companies are good to invest in (as in, would have good impact, not necessarily be most profitable). It indicates either A) most LW'ers aren't investing in stocks (which is a stupid thing not to be doing), or B) they are investing in stocks, but aren't trying to think carefully about what impact their actions have on the world and their own future happiness (which indicates a massive failure of rationality).

Even putting this aside, the fact that nobody jumped at the c... (read more)

MikkW (17h): Wow, that video makes me really hate Peter Thiel (I don't necessarily disagree with any of the points he makes, but that communication style is really uncool).

On the contrary, I aspire to the clarity and honesty of Thiel's style. Schmidt seems somewhat unable to speak directly. Of the two of them, Thiel was able to give specifics about how the companies were excelling and how they were failing, and Schmidt could give neither.

mr-hire (2d): This seems to be the common rationalist position, but it does seem to be at odds with:

1. The common rationalist position to vote on UDT grounds.
2. The common rationalist position to eschew contextualizing because it ruins the commons.

I don't see much difference between voting because you want others to also vote the same way and choosing stocks because you want others to choose stocks the same way. I also think it's pretty orthogonal to talk about telling the truth for long-term gains in culture, and only giving money to companies with your values for long-term gains in culture.
supposedlyfun's Shortform

I'm grateful for MIRI etc and their work on what is probably as world-endy as nuclear war was (and look at all the intellectual work that went into THAT).

The thing that's been eating me lately, almost certainly mainly triggered by the political situation in the U.S., is how to manage the transition from 2020 to what I suspect is the only way forward for the species--genetic editing to reduce or eliminate the genetically determined cognitive biases we inherited from the savannah.  My objectives for the transition would be

  1. Minimize death
  2. Minimize physical
... (read more)
mr-hire (3d): Do you think genetic editing could remove biases? My suspicion is that they're probably baked pretty deeply into our brains and society, and you can't just tweak a few genes to get rid of them.
supposedlyfun (3d): I figure that at some point in the next ~300 years, computers will become powerful enough to do the necessary math/modeling to figure this out based on advances in understanding genetics.

It just feels like "biases" are such a high level of abstraction, grounded in basic brain architecture, that getting rid of them would be like creating a totally different design.

AllAmericanBreakfast's Shortform

SlateStarCodex, EA, and LW helped me get out of the psychological, spiritual, political nonsense in which I was mired for a decade or more.

I started out feeling a lot smarter. I think it was community validation + the promise of mystical knowledge.

Now I've started to feel dumber. Probably because the lessons have sunk in enough that I catch my own bad ideas and notice just how many of them there are. Worst of all, it's given me ambition to do original research. That's a demanding task, one where you have to accept feeling stupid all the time.

But I still look down that old road and I'm glad I'm not walking down it anymore.

I started out feeling a lot smarter. I think it was community validation + the promise of mystical knowledge.

Too smart for your own good. You were supposed to believe it was about rationality. Now we have to ban you and erase your comment before other people can see it. :D

Now I've started to feel dumber. Probably because the lessons have sunk in enough that I catch my own bad ideas and notice just how many of them there are. [...] you have to accept feeling stupid all the time. But I still look down that old road and I'm glad I'm not walking down it anymore.

Yeah, same here.

Vanessa Kosoy's Shortform

There is a formal analogy between infra-Bayesian decision theory (IBDT) and modal updateless decision theory (MUDT).

Consider a one-shot decision theory setting. There is a set of unobservable states $S$, a set of actions $A$, and a reward function $r : A \times S \to [0,1]$. An IBDT agent has some belief $\beta$ over $S$[1], and it chooses the action $a^* := \arg\max_{a \in A} \mathbb{E}_{s \sim \beta}[r(a, s)]$.

We can construct an equivalent scenario, by augmenting this one with a perfect predictor of the agent (Omega). To do so, define $S' := A \times S$, where the semantics of $(a, s)$ is "the unobservable state is $s$ and Omega predic... (read more)
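
For concreteness, here is a toy numerical sketch of that decision rule, reading the infra-Bayesian expectation as a worst case (minimum) over a finite credal set of distributions over $S$. All numbers and names below are illustrative, not from the original post:

```python
import numpy as np

# Toy maximin (infra-Bayesian style) decision rule: the belief is a
# credal set -- a finite set of candidate distributions over unobservable
# states -- and the agent picks the action maximizing its worst-case
# expected reward over that set.

states = ["s0", "s1"]
actions = ["a0", "a1"]

# reward[a][s]: reward for taking action a when the true state is s
reward = np.array([
    [1.0, 0.0],   # a0
    [0.4, 0.6],   # a1
])

# Credal set: each row is one candidate probability distribution over states.
credal_set = np.array([
    [0.9, 0.1],
    [0.2, 0.8],
])

expected = reward @ credal_set.T      # shape: (actions, distributions)
worst_case = expected.min(axis=1)     # minimum over the credal set
best_action = actions[int(worst_case.argmax())]
print(best_action, worst_case)        # a1 is the maximin choice here
```

Here the agent picks a1, whose worst-case expected reward (0.42) beats a0's (0.2), even though a0 is better under the first distribution alone.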

Viliam's Shortform

1) There was this famous marshmallow experiment, where the kids had an option to eat one marshmallow (physically present on the table) right now, or two of them later, if they waited for 15 minutes. The scientists found out that the kids who waited for the two marshmallows were later more successful in life. The standard conclusion was that if you want to live well, you should learn some strategy to delay gratification.

(A lesser-known result is that the optimal strategy to get two marshmallows was to stop thinking about marshmallows at all. Kids who focused ... (read more)

This seems likely to me, although I'm not sure "superstimulus" is the right word for this observation.

It certainly does make sense that people who are inclined to notice the general level of incompetence in our society will be less inclined to trust it and rely on it for the future.

ryan_b's Shortform

Is there a reason warfare isn't modeled as the production of negative value?

The only economic analyses I have seen are of the estimating-cost-of-lost-production type, which I can only assume reflects the convention of converting everything to a positive value.

But it is so damned counterintuitive!


But the bottom line is that the value of weapons is destruction

The bottom line is protection, expansion, and/or survival; destruction is only an intermediate goal.

Dagon (6d): Go a little further, and I'll absolutely agree. Economic models that only consider accounting entities (currency and reportable valuation) are pretty limited in understanding most human decisions. I think war is just one case of this. You could say the same for, say, having children - it's a pure expense for the parents, from an economic standpoint. But for many, it's the primary joy in life and motivation for all the economic activity they partake in.

Not at all. The vast majority of weapons and military (or hobby/self-defense) spending is never used to harm an enemy. The value is the perception of strength, and relatedly, the threat of destruction. Actual destruction is minor.

That Congress (and voters) are economically naïve is a distinct problem. It probably doesn't get fixed by additional naivete of forcing negative-value concepts into the wrong framework. If it can be fixed, it's probably by making the broken windows fallacy (https://en.wikipedia.org/wiki/Parable_of_the_broken_window) less common among the populace.
ryan_b (6d): The map is not independent of the territory, here. Few cities were destroyed by nuclear weapons, but no one would have cared about them if they couldn't destroy cities. Destruction is the baseline reality upon which perceptions of strength operate. The whole value of the perception of strength is avoiding actual destructive exchanges; destruction remains the true concern for the overwhelming majority of such spending.

The problem I see is that war is not distinct from economics except as an abstraction; they are in reality describing the same system. What this means is we have a partial model of one perspective on the system, and total negligence of another perspective on the system. Normally we might say not to let the perfect be the enemy of the good, but we're at the other end of the spectrum, so it is more like recruiting the really bad to be an enemy of the irredeemably awful. Which is to say that economic-adjacent arguments are something the public at large is familiar with, and their right-or-wrong beliefs are part of the lens through which they will view any new information and judge any new frameworks.

Quite separately, I would find economics much more comprehensible if it included negatives throughout; as far as I can tell there is no conceptual motivation for avoiding them, it is mostly a matter of computational convenience. I would be happy to be wrong; if I could figure out the motivation for that, it would probably help me follow the logic better.
ofer's Shortform

[researcher positions at FHI] 

(I'm not affiliated with FHI.)

FHI recently announced: "We have opened researcher positions across all our research strands and levels of seniority. Our big picture research focuses on the long-term consequences of our actions today and the complicated dynamics that are bound to shape our future in significant ways. These positions offer talented researchers freedom to think about the most important issues of our era in an environment with other brilliant minds willing to constructively engage with a broad range of ideas. Applications close 19th October 2020, noon BST."

I'm thinking "project [/product] announcement". I encourage you to add a tag you think works; if anyone comes up with a better name, we can always change it later.

TurnTrout's shortform feed

What is "real"? I think about myself as a computation embedded in some other computation (i.e. a universe-history). I think "real" describes hypotheses about the environment where my computation lives. What should I think is real? That which an "ideal embedded reasoner" would assign high credence. However that works.

This sensibly suggests that Gimli-in-actual-Ea (LOTR) should believe he lives in Ea, and that Ea is real, even though it isn't our universe's Earth. Also, the notion accounts for indexical uncertainty by punting it to how embedded reasoning sho... (read more)

Mati_Roy's Shortform

I remember someone in the LessWrong community (I think Eliezer Yudkowsky, but maybe Robin Hanson or someone else, or maybe someone only Rationalist-adjacent; maybe in an article or a podcast) saying that people believing in "UFOs" (or people believing in unproven conspiracy theories) would stop being so enthusiastic about those if they became actually known to be true, with good evidence for them. Does anyone know what I'm referring to?

ah, someone found it:

"If You Demand Magic, Magic Won't Help", where he says at one point: "The worst catastrophe you could visit upon the New Age community would be for their rituals to start working reliably, and for UFOs to actually appear in the skies. What would be the point of believing in aliens, if they were just there, and everyone else could see them too? In a world where psychic powers were merely real, New Agers wouldn't believe in psychic powers, any more than anyone cares enough about gravity to believe in it." https://www.lesswrong.com/s/6BFkm

... (read more)
Ruby (3d): Eliezer talks about how dragons wouldn't be exciting if they were real, I recall. I'm not sure that's correct.
Vanessa Kosoy's Shortform

An AI progress scenario which seems possible and which I haven't seen discussed: an imitation plateau.

The key observation is, imitation learning algorithms[1] might produce close-to-human-level intelligence even if they are missing important ingredients of general intelligence that humans have. That's because imitation might be a qualitatively easier task than general RL. For example, given enough computing power, a human mind becomes realizable from the perspective of the learning algorithm, while the world-at-large is still far from realizable. So, an al... (read more)

Showing 3 of 9 replies (Click to show all)
Vanessa Kosoy (3d): The imitation plateau can definitely be rather short. I also agree that computational overhang is the major factor here. However, failing to capture some of the ingredients can be a cause of low computational overhang, whereas capturing all of the ingredients is a cause of high computational overhang, because the compute necessary to reach superintelligence might be very different in those two cases. Using sideloads to accelerate progress might still require years, whereas an "intrinsic" AGI might lead to the classical "foom" scenario. EDIT: Although, since training is typically much more computationally expensive than deployment, it is likely that the first human-level imitators will already be significantly sped up compared to humans, implying that accelerating progress will be relatively easy. It might still take some time from the first prototype until such an accelerate-the-progress project, but probably not much longer than deploying lots of automation.
Vladimir_Nesov (3d): I agree. But GPT-3 seems to me like a good estimate for how much compute it takes to run stream-of-consciousness imitation learning sideloads (assuming that learning is done in batches on datasets carefully prepared by non-learning sideloads, so the cost of learning is less important). And with that estimate we already have enough compute overhang to accelerate technological progress as soon as the first amplified babbler AGIs are developed, which, as I argued above, should happen shortly after babblers actually useful for automation of human jobs are developed (because generation of stream-of-consciousness datasets is a special case of such a job). So the key things to make the imitation plateau last for years are either sideloads requiring more compute than it looks like (to me) they require, or amplification of competent babblers into similarly competent AGIs being a hard problem that takes a long time to solve.

Another thing that might happen is a data bottleneck.

Maybe there will be a good enough dataset to produce a sideload that simulates an "average" person, and that will be enough to automate many jobs, but for a simulation of a competent AI researcher you would need a more specialized dataset that will take more time to produce (since there are a lot fewer competent AI researchers than people in general).

Moreover, it might be that the sample complexity grows with the duration of coherent thought that you require. That's because, unless you're training directl... (read more)

niplav's Shortform

Adblockers have positive externalities: they remove much of the incentive to make websites addictive.

If the adblockers become too popular, websites will update to circumvent them. It will be a lot of work at the beginning, but probably possible.

Currently, most ads are injected by JavaScript that downloads them from a different domain. That allows adblockers to block anything coming from a different domain, so the ads are blocked relatively easily.

The straightforward solution would be to move ad injection to the server side. The PHP (or whatever language) code generating the page would contact the ad server, download the ad, and inject it into the generat... (read more)
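
A minimal sketch of that server-side approach, in Python/Flask rather than PHP purely for illustration (the ad-server URL and markup are hypothetical):

```python
# Sketch of server-side ad injection (hypothetical ad server URL).
# The ad HTML is fetched by the server and served from the site's own
# domain, so a domain-based adblocker sees no cross-origin request to block.
import requests
from flask import Flask

app = Flask(__name__)
AD_SERVER = "https://ads.example.com/slot/sidebar"  # hypothetical

@app.route("/")
def page():
    try:
        ad_html = requests.get(AD_SERVER, timeout=1).text
    except requests.RequestException:
        ad_html = ""  # degrade gracefully if the ad server is unreachable
    return f"<html><body><p>Article text...</p>{ad_html}</body></html>"
```

Since the ad markup now arrives from the site's own domain, domain-based filtering has nothing to catch; blockers would have to fall back on analyzing the page content itself.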

mr-hire (5d): They also have negative externalities, moving websites from price-discrimination models that are available to everyone to direct-pay models that are only available to people who can afford them.
benwr's unpolished thoughts

If I got to pick the moral of today's Petrov day incident, it would be something like "being trustworthy requires that you be more difficult to trick than it would be worth", and I think very few people reliably live up to this standard.

TurnTrout's shortform feed

Reasoning about learned policies via formal theorems on the power-seeking incentives of optimal policies

One way instrumental subgoals might arise in actual learned policies: we train a proto-AGI reinforcement learning agent with a curriculum including a variety of small subtasks. The current theorems show sufficient conditions for power-seeking tending to be optimal in fully-observable environments; many environments meet these sufficient conditions; optimal policies aren't hard to compute for the subtasks. One highly transferable heuristic would therefore... (read more)

Matt Goldenberg's Short Form Feed

Mods are asleep, post pictures of mushroom clouds.
