Second sentence:
Not quite sure what you mean, but all data is linked at the end of https://www.lesswrong.com/posts/3Rtvo6qhFde6TnDng/positly-covid-survey-2-controlled-productivity-data
n probably too small to read much into it, but yes: https://www.lesswrong.com/posts/3Rtvo6qhFde6TnDng/positly-covid-survey-2-controlled-productivity-data
I did ask about it, data here (note that n is small): https://www.lesswrong.com/posts/iTH6gizyXFxxthkDa/positly-covid-survey-long-covid
Yeah, I meant that early on in the vaccinations, officialish-seeming articles said or implied that breakthrough cases were very rare (even the term 'breakthrough cases', to my ear, makes them sound more unexpected than they should be, though perhaps that's just what such things are always called). That seemed false even at the time, before later iterations of covid made it more blatantly so. I think it was probably motivated partly by a desire to convince people that the vaccine was very good, rather than just error, which I think is questionable behavior.
I agree that I'm more likely to be concerned about in-fact-psychosomatic things than average, and on the outside view, thus probably biased in that direction in interpreting evidence. Sorry if that colors the set of considerations that seem interesting to me. (I didn't mean to claim that this was an unbiased list, sorry if I implied it.)
Some points regarding the object level:
Good points. Some responses:
I thought rapid tests were generally considered to have a much lower false negative rate for detecting contagiousness, though they often miss people who are infected but not yet contagious. I forget why I think this, and haven't been following possible updates on this story, but is that different from your impression? (Here's one place I think saying this, for instance: https://www.rapidtests.org/blog/antigen-tests-as-contagiousness-tests) On this story, rapid tests immediately before an event would reduce overall risk by a lot.
Agree the difference between actors and real companions is very important! I think you misread me (see my response to AllAmericanBreakfast's comment above.)
Your current model appears to be wrong (supposing people should respond to fire alarms quickly).
From the paper:
"Subjects in the three naive bystander condition were markedly inhibited from reporting the smoke. Since 75% of the alone subjects reported the smoke, we would expect over 98% of the three-person groups to contain at least one reporter. In fact, in only 38% of the eight groups in this condit...
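The "over 98%" figure follows from simple independence arithmetic: if each person alone reports with probability 0.75, a group of three naive bystanders should contain at least one reporter with probability 1 − 0.25³ ≈ 98.4%. A minimal sketch of that check:

```python
# Probability that at least one of n independent bystanders reports,
# given each would report alone with probability p.
def p_at_least_one_reporter(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# 75% of alone subjects reported, so for three naive bystanders:
print(f"{p_at_least_one_reporter(0.75, 3):.1%}")  # → 98.4%
```

The gap between this independence prediction (98%) and the observed 38% is the measured inhibition effect.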
Sorry for being unclear. The first video shows a rerun of the original experiment, which I think is interesting because it is nice to actually see how people behave, though it is missing footage of the (I agree crucial) three group case. The original experiment itself definitely included groups of entirely innocent participants, and I agree that if it didn't it wouldn't be very interesting. (According to the researcher in the footage, via private conversation, he recalls that the filmed rerun also included at least one trial with all innocent people,...
I think I would have agreed that answering honestly is a social gaffe a few years ago, and in my even younger years I found it embarrassing to ask such things when we both knew I wasn't trying to learn the answer, but now I feel like it's very natural to elaborate a bit, and it usually doesn't feel like an error. e.g. 'Alright - somewhat regretting signing up for this thing, but it's reminding me that I'm interested in the topic' or 'eh, seen better days, but making crepes - want one?' I wonder if I've become oblivious in my old age, or socially chill, or ...
To check I have this: in the two-level adaptive system, one level is the program adjusting its plan toward the target configuration of being a good plan, and the other level is the car (for instance) adjusting its behavior (due to following the plan) toward getting to a particular place without crashing?
Fwiw I'm not aware of using or understanding 'outside view' to mean something other than basically reference class forecasting (or trend extrapolation, which I'd say is the same). In your initial example, it seems like the other person is using it fine - yes, if you had more examples of an AGI takeoff, you could do better reference class forecasting, but their point is that in the absence of any examples of the specific thing, you also lack other non-reference-class-forecasting methods (e.g. a model), and you lack them even more than you lack relevant refe...
I too thought the one cruise I've been on was a pretty good type of holiday! A giant moving building full of nice things is so much more convenient a vehicle than the usual series of planes and cabs and subways and hauling bags along the road and stationary buildings etc.
I wrote an AI Impacts page summary of the situation as I understand it. If anyone feels like looking, I'm interested in corrections/suggestions (either here or in the AI Impacts feedback box).
A few quick thoughts on reasons for confusion:
I think maybe one thing going on is that I already took the coherence arguments to apply only in getting you from weakly having goals to strongly having goals, so since you were arguing against their applicability, I thought you were talking about the step from weaker to stronger goal direction. (I’m not sure what arguments people use to get from 1 to 2 though, so maybe you are right that it is also something to do with coherence, at least implicitly.)
It also seems natural to think of ‘weakly has goals’ as some...
Thanks. Let me check if I understand you correctly:
You think I take the original argument to be arguing from ‘has goals' to ‘has goals’, essentially, and agree that that holds, but don’t find it very interesting/relevant.
What you disagree with is an argument from ‘anything smart’ to ‘has goals’, which seems to be what is needed for the AI risk argument to apply to any superintelligent agent.
Is that right?
If so, I think it’s helpful to distinguish between ‘weakly has goals’ and ‘strongly has goals’:
Yes, that's basically right.
>You think I take the original argument to be arguing from ‘has goals' to ‘has goals’, essentially, and agree that that holds, but don’t find it very interesting/relevant.
Well, I do think it is an interesting/relevant argument (because as you say it explains how you get from "weakly has goals" to "strongly has goals"). I just wanted to correct the misconception about what I was arguing against, and I wanted to highlight the "intelligent" --> "weakly has goals" step as a relatively weak step in our current arguments. (In my ori...
Good points. Though I claim that I do hold the same facial expression for long periods sometimes, if that's what you mean by 'not moving'. In particular, sometimes it is very hard for me not to screw up my face in a kind of disgusted frown, especially if it is morning. And sometimes I grin for so long that my face hurts, and I still can't stop.
It doesn't seem that hard to wash your hands after putting away groceries, say. If I recall, I was not imagining getting many touches during such a trip. I'm mostly imagining that you put many of the groceries you purchase in your fridge or eat them within a couple of days, such that they are still fairly contaminated if they started out contaminated, and it is harder to not touch your face whenever you are eating recently acquired or cold food.
Yes - I like 'application' over 'potentially useful product' and 'my more refined writing skills' over 'my more honed writing', in its first one, for instance.
I grab the string and/or some beads I don't want to move together between my thumb and finger on one hand, and push the bead I do want to move with my thumb and finger of the other hand. (I don't need to see it because I can feel it and the beads don't move with my touching it.) I can also do it more awkwardly with one hand.
Thanks for further varieties! I hadn't seen the ring, and have had such a clicker but have not got the hang of using it non-awkwardly (where do you put it? With your keys? Who knows where those are? In your pocket? Who reliably has a pocket that fits things in? In your bag? Then you have to dig it out..)
Good point regarding wanting to know what number you have reached. I only want to know the exact number very occasionally, like with a bank account, but I agree that's not true of many use cases.
I haven't read Zvi's post, but would have thought that the good of slack can be cashed out in efficiency, if you are optimizing for the right goals (e.g. if you have a bunch of tasks in life which contribute to various things, it will turn out that you contribute to those things better overall if you have spare time between the tasks).
If you aren't in the business of optimizing for the ultimately right goals though, I'd think you could also include slack as one of your instrumental goals, and thus mostly avoid serious conflict e.g. instead of turnin...
It is irrigation actually, not moisture sensors. Or rather, I think it irrigates based on the level of moisture, using a combination of tiny tubes and clay spikes that I admittedly don't fully understand. (It seems to be much better at watering my plants than I am, even ignoring time costs!) I do have to fill up the water container sometimes.
I meant: conditional on it growing faster, why expect this to be attributable to a small number of technologies, given that when it accelerated previously it was not like that (if I understand correctly)?
If throughout most of history growth rates have been gradually increasing, I don't follow why you would expect one technology to cause it to grow much faster, if it goes back to accelerating.
Making them tastier, though I'm not confident about this: it was originally motivated by not having normal flour, and then I've done some of each and thought the gluten-free ones were better, but there's much randomness at play.
I did mean 'white' by 'wheat'; sorry (I am a foreigner). I haven't tried anything other than the gluten free one mentioned and white wheat flour.
>Someone's cognitive labor went into making the rabbit mold, and everything from there on out is eliminating the need to repeat that labor, and to reduce the number of people who need to have that knowledge.
Yeah, that's the kind of thing I had in mind in the last paragraph.
In such a case, you might get many of the benefits without the covid risks from driving to very close to the ER, then hanging out there and not going in and risking infection unless worse symptoms develop, but being able to act very fast if they do.
1) Even if it counts as a DSA, I claim that it is not very interesting in the context of AI. DSAs of something already almost as large as the world are commonplace. For instance, in the extreme, the world minus any particular person could take over the world if they wanted to. The concern with AI is that an initially tiny entity might take over the world.
2) My important point is rather that your '30 year' number is specific to the starting size of the thing, and not just a general number for getting a DSA. In particular, it does not apply to sma...
The time it takes to get a DSA by growing bigger depends on how big you are to begin with. If I understand, you take your 30 years from considering the largest countries, which are not far from being the size of the world, and then use it when talking about AI projects that are much smaller (e.g. a billion dollars a year suggests about 1/100,000 of the world). If you start from a situation of an AI project being three doublings from taking over the world say, then most of the question of how it came to have a DSA seems to be the question of how it grew the...
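For concreteness, the size gap here can be checked with rough numbers (the ~$100 trillion gross world product figure is my assumed round number, not from the comment):

```python
import math

# Rough check of the "1/100,000 of the world" figure (assumed round numbers).
project_budget = 1e9           # $1B/year AI project
gross_world_product = 1e14     # ~$100T/year, an assumed figure
fraction = project_budget / gross_world_product
print(f"1/{1 / fraction:,.0f}")  # → 1/100,000

# Doublings needed for such a project to reach world scale, versus
# the ~3 doublings for an entity already near world scale:
print(f"{math.log2(1 / fraction):.1f} doublings")  # → 16.6 doublings
```

So a project that small needs roughly seventeen doublings rather than three, which is why a '30 year' figure derived from near-world-scale countries doesn't transfer.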
This sounds great to me, and I think I would be likely to sign up for it if I could, but I haven't thought about it for more than a few minutes, am particularly unsure about the implications for culture, and am maybe too enthusiastic in general for things being 'well organized'.
Oh yeah, I think I get something similar when my sleep schedule gets very out of whack, or for some reason when I moved into my new house in January, though it went back to normal with time. (Potentially relevant features there: bedroom didn't seem very separated from common areas, at first was sleeping on a pile of yoga mats instead of a bed, didn't get out much.)
I think random objects might work in a similar way. e.g. if talking in a restaurant, you grab the ketchup bottle and the salt to represent your point. I've only experimented with this once, with ultimately quite an elaborate set of condiments, tableware and fries involved. It seemed to make things more memorable and followable, but I wasn't much inclined to do it more for some reason. Possibly at that scale it was a lot of effort beyond the conversation.
Things I see around me sometimes get involved in my thoughts in a way that seems related. For ...
I'm pretty unsure how much variation in experience there is—'not much' seems plausible to me, but why do you find it so probable?
I also thought that at first, and wanted to focus on why people join groups that are already large. But yeah, lack of very small groups to join would entirely explain that. Leaving a group signaling not liking the conversation seems like a big factor from my perspective, but I'd guess I'm unusually bothered by that.
Another random friction:
Aw, thanks. However, I claim that this was a party with a very high density of interesting people, and that the most obvious difference between me and others was that I ever sat alone.
I share something like this experience (food desirability varies a lot based on unknown factors and something is desirable for maybe a week and then not desirable for months) but haven't checked carefully that it is about nutrient levels in particular. If you have, I'd be curious to hear more about how.
(My main alternative hypothesis regarding my own experience is that it is basically imaginary, so you might just have a better sense than me of which things are imaginary..)
A page number or something for the 'more seasoned' link might be useful. The document is very long and doesn't appear to contain 'season-'.
The 'blander' link doesn't look like it supports the claim much, though I am only looking at the abstract. It says that 'in many instances' there have been reductions in crop flavor, but even this appears to be background that the author is assuming, rather than a claim that the paper is about. If the rest of the paper does contain more evidence on this, could you quote it or something, since the paper is expensive to see?
>I am somewhat hesitant to share simple intuition pumps about important topics, in case those intuition pumps are misleading.
This sounds wrong to me. Do you expect considering such things freely to be misleading on net? I expect some intuition pumps to be misleading, but for considering all of the intuitions that we can find about a situation to be better than avoiding them.
Thanks for your thoughts!
I don't quite follow you on the intelligence explosion issue. For instance, why does a strong argument against the intelligence explosion hypothesis need to show that a feedback loop is unlikely? Couldn't we believe that it is likely, but not likely to be very rapid for a while? For instance, there is probably a feedback loop in intelligence already, where humans with better thoughts and equipment are effectively smarter, and can then devise better thoughts and equipment. But this has been true for a while, and is a fairly slow process (at least for now, relative to our ability to deal with things).
Do you mean that the half-day projects have to be in sequence relative to the other half-day projects, or within a particular half-day project, its contents have to be in sequence (so you can't for instance miss the first step then give up and skip to the second step)?
In general if things have to be done in sequence, often I make the tasks non-specific, e.g. let's say I want to read a set of chapters in order, then I might make the tasks 'read a chapter' rather than 'read the first chapter' etc. Then if I were to fail at the first one, I would keep reading ...