All of Mo Putera's Comments + Replies

Curious, what do you think now that GPT-4 is out?

When I first saw this post it was at -1 karma, which didn't make much sense to me, so I upvoted it back to zero. Can anyone who downvoted it share their reasoning?

if there is any way of fixing this mess, it's going to involve clarifying conflicts rather than obfuscating them

This immediately brought to mind John Nerst's erisology. I've been paying attention to it for a while, but I don't see it much here (speaking as a decade-long lurker); I wonder why.

1Frederic Janssens1mo
Thanks for the pointer. John Nerst's approach is similar to mine. The way I would formulate it here: De facto, people have different priors. If there is a debate/discussion, the most fruitful result would come by constructing, in common if possible, a more encompassing reference frame, where both sets of priors can be expressed to their respective satisfaction. It is not easy. Some priors will be incompatible as such. A real dialogue supposes a readiness to examine one's priors and eventually adjust them to be less restrictive. A static defense of one's priors is mostly a waste of time (or a show). Caveat: bad faith exists; people, and groups, have vulnerabilities they will protect. So a real dialogue is not always possible, or only very partially. The idea is to at least try.

Human/Machine Intelligence Parity by 2040? on Metaculus has a pretty high bar for human-level intelligence:

Assume that prior to 2040, a generalized intelligence test will be administered as follows. A team of three expert interviewers will interact with a candidate machine system (MS) and three humans (3H). The humans will be graduate students in each of physics, mathematics and computer science from one of the top 25 research universities (per some recognized list), chosen independently of the interviewers. The interviewers will electronically communicate

... (read more)
1Sherrinford1mo
The Metaculus definition is very interesting as it is quite different from what M. Y. Zuo suggested [https://www.lesswrong.com/posts/izsSEG9MRQjtgv5hh/taboo-human-level-intelligence?commentId=YQfJPor4peAMZC3ug] to be the natural interpretation of "human-level intelligence". I like the PASTA suggestion, thanks for quoting that! However, I wonder whether that bar is a bit too high.

I feel like he was falling into a kind of fallacy: he observed that a concept isn't entirely coherent, and rejected the concept.

My go-to writeup on this is the "Imprecise definitions can still be useful" section of Luke Muehlhauser's 2013 MIRI essay What is Intelligence?, which discusses the question of operationalizing the concept of a “self-driving car”:

...consider the concept of a “self-driving car,” which has been given a variety of vague definitions since the 1930s. Would a car guided by a buried cable qualify? What about a modified 1955 Studebaker

... (read more)
1Johannes C. Mayer1mo
I basically agree with this. But if you apply what is described in the post, it reveals a lot about why we are not there yet. If you pit a human driver against any of the described autonomous cars, there will just be lots of situations where the human performs better. And I don't need to run this experiment in order to cash out its implications. I think when people talk about fully autonomous cars, they implicitly have something in mind where the autonomous car is at least as good as a human. Thinking about an experiment that you could run here makes this implicit assumption explicit, which I think can be useful. It's one of the tools you can use to make your definition more precise along the way.

I agree with this comment, and I'm confused why it's so disagreed with (-6 agreement karma vs +11 overall). Can anyone who disagreed explain their reasoning?

Apparently Jeff Bezos used to do something like this with his regular "question mark emails", which struck me as interesting in the context of an organization as large and complex as Amazon. Here's what it's like from the perspective of one recipient (partial quote, more at the link):

About a month after I started at Amazon I got an email from my boss that was a forward of an email Jeff sent him. The email that Jeff had sent read as follows:

“?”

That was it.

Attached below the “?” was an email from a customer to Jeff telling him he (the customer) takes a long

... (read more)

Where are you going with this line of questioning?

0M. Y. Zuo6mo
Well my last comment was over 4 months ago, so I'm not sure exactly. Do you expect folks to remember their every intention within the last 4 months? Just seeing from the comment chain I can offer some guesses, such as:

If it's high-quality distillation you're interested in, you don't necessarily need a PhD. I'm thinking of e.g. David Roodman, now a senior advisor at Open Philanthropy. He majored in math, then did a year-long independent study in economics and public policy, and has basically been self-taught ever since. Holden Karnofsky considers what he does extremely valuable:

David Roodman, who is basically the person that I consider the gold standard of a critical evidence reviewer, someone who can really dig on a complicated literature and come up with the answers, h

... (read more)

Yeah, I agree that's a weird way to define "high-dimensional". I'm more partial to defining it as "when the curse of dimensionality becomes a concern", which is less precise but more useful.
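
To make "becomes a concern" concrete, here's a tiny sketch (my own illustration, not from any of the posts under discussion) of one standard symptom, distance concentration: as dimension grows, the nearest and farthest points in a random sample end up almost equally far from a query point, which is roughly when nearest-neighbour-style reasoning stops being useful.

```python
import math
import random

def contrast(dim, n=500, seed=0):
    """Sample n points uniformly in [0, 1]^dim and return the ratio of the
    nearest to the farthest distance from a random query point. A ratio near 1
    means distances have "concentrated": everything is about equally far away,
    which is when the curse of dimensionality starts to bite."""
    rng = random.Random(seed)
    query = [rng.random() for _ in range(dim)]
    dists = [math.dist(query, [rng.random() for _ in range(dim)]) for _ in range(n)]
    return min(dists) / max(dists)

for dim in (2, 10, 100, 1000):
    print(f"dim={dim:>4}: nearest/farthest distance ratio ~ {contrast(dim):.2f}")
```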

in the minds of people like Eliezer Yudkowsky or Paul Christiano, we're more likely doomed than not

My impression for Paul is the opposite – he guesses "~15% on singularity by 2030 and ~40% on singularity by 2040", and has said "quantitatively my risk of losing control of the universe through this channel [Eliezer's list of lethalities] is more like 20% than 99.99%, and I think extinction is a bit less likely still". (That said I think he'd probably agree with all the reasons you stated under "I personally lean towards those latter views".) Curious to k... (read more)

0Aorou6mo
My view of PC's P(Doom) came from (IIRC) Scott Alexander's posts on Christiano vs Yudkowsky, where I remember a Christiano quote saying that although he imagines there'll be multiple AIs competing as opposed to one emerging through a singularity, this would possibly be a worse outcome because it'd be much harder to control. From that, I concluded "Christiano thinks P(doom) > 50%", which I realize is pretty sloppy reasoning. I will go back to those articles to check whether I misrepresented his views. For now I'll remove his name from the post 👌🏻

Is there something similar for the EA Forum? 

I think your 'Towards a coherent process for metric design' section alone is worth its weight in gold. Since most LW readers aren't going to click on your linked paper (click-through rates being as low in general as they are, from my experience in marketing analytics), let me quote that section wholesale:

Given the various strategies and considerations discussed in the paper, as well as failure modes and limitations, it is useful to lay out a simple and coherent outline of a process for metric design. While this will by necessity be far from complete, and w

... (read more)
2Davidmanheim6mo
Thanks!

IL's comment has a BOTEC arguing that video data isn't that unbounded either (I think the 1% usefulness assumption is way too low but even bumping it up to 100% doesn't really change the conclusion that much).

There's a tangentially related comment by Scott Alexander from over a decade ago, on the subject of writing advice, which I still think about from time to time:

The best way to improve the natural flow of ideas, and your writing in general, is to read really good writers so much that you unconsciously pick up their turns of phrase and don't even realize when you're using them. The best time to do that is when you're eight years old; the second best time is now.

Your role models here should be those vampires who hunt down the talented, suck out their souls, a

... (read more)
1Henrik Karlsson6mo
That's a great quote!

What do you think about deep work (here's a semi-arbitrarily-chosen explainer)? I suppose the Monday time block after the meeting lets you do that, but that's maybe <10% of the workweek; you also did mention "If people want to focus deeply for a while, they can put on headphones". That said, many of your points aren't conducive to deep work (e.g. "If you need to be unblocked by someone, the fastest way is to just go to their desk and ask them in person" interrupts the other person's deep work block, same with "use a real-time chat platform like Slack to... (read more)

3Ruby6mo
I lead the LessWrong team within Lightcone (the "other" team) and I care a lot about protecting deep work time from interruptions. Though in practice, due to the structure of the team (3 of us, usually 2 out of 3 pairing), there's not that much blocking arising. As team lead, I'm the one most likely to be a blocker, and I end up kind of in the pattern you describe of doing deep work outside of regular hours so I'm available during them. I'm not sure if there's a better way if you're trying to get a lot done.

The lame answer: yeah, it does mess with deep work, and I'm not super sure how to balance them. 

The spicy answer: I have an unwritten polemic entitled "Against Deep Work". I can't share it though since I have not written it. Fortunately, much of what I hope to say in that post is captured in Chapter 9 of the Lean Startup, which has a section that resonates a lot with my experience. I'll just go ahead and quote it because it's so damn good (starting on page 191 in the linked PDF).

Imagine you’re a product designer overseeing a new product and you need t

... (read more)

At least speaking from my experience, one of the default ways the Lightcone campus team gets deep-work done is by working in pairs. I also think we would structure things probably somewhat differently if we were doing more engineering or math work (e.g. the LessWrong team tends to be somewhat less interrupt driven).

I've found that by working in pairs with someone, I end up with a lot more robustness to losing context for a minute or two, and often get to expand my metacognition, while still getting a lot of the benefits of deep work. It doesn't work for ev... (read more)

I'm curious if Eliezer endorses this, especially the first paragraph. 

I'm curious how you think your views here cash out differently from (your model of) most commenters here, especially as pertains to alignment work (timelines, strategy, prioritization, whatever else), but also more generally. If I'm interpreting you correctly, your pessimism on the usefulness-in-practice of quantitative progress probably cashes out in some sort of bet against scaling (i.e. maybe you think the "blessings of scale" will dry up faster than others think)? 

1DragonGod6mo
Oh, I think superintelligences will be much less powerful than others seem to think. Less human vs ant, and more "human vs very smart human that can think faster, has much larger working memory, longer attention spans, better recall and parallelisation ability".

+1 for "quantity has a quality all its own". "More is different" pops up everywhere.

1Noosphere896mo
This is because in real life, speed and resources matter: they're both finite. Unlike a Turing machine, which can assume both arbitrarily large memory and unbounded time, we don't have such things.

Carbon dating

You're gesturing in the right direction, but if it's the age of the universe you're looking for, you really want something like uranium-lead dating instead, which is routinely used to date rocks up to 4.5 billion years old with precision in the ~1% range. Carbon dating can't reliably measure dates more than ~50,000 years ago except in special circumstances, since the half-life of 14C is 5,730 years. 
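
To spell out the arithmetic behind that ~50,000-year ceiling (a quick back-of-the-envelope of my own, using only the half-life quoted above): the surviving fraction of ¹⁴C after time $t$ is

$$\frac{N(t)}{N_0} = \left(\frac{1}{2}\right)^{t/5730\ \mathrm{yr}},$$

so at $t = 50{,}000$ years that's $2^{-8.7} \approx 0.2\%$ of the original ¹⁴C, already down in the range where contamination and background swamp the signal; at 4.5 billion years it would be $2^{-785{,}000}$, i.e. nothing left to measure, which is why you need a long-half-life system like uranium-lead for ages on that scale.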

A while back johnswentworth wrote What Do GDP Growth Curves Really Mean? noting how real GDP (as we actually calculate it) is a wildly misleading measure of growth because it effectively ignores major technological breakthroughs – quoting the post, real GDP growth mostly tracks production of goods which aren’t revolutionized; goods whose prices drop dramatically are downweighted to near-zero, so that when we see slow, mostly-steady real GDP growth curves, that mostly tells us about the slow and steady increase in production of things which haven’t been revo... (read more)

3NoUsernameSelected7mo
Is there any GDP-like measure that does do a better job of capturing growth from major tech breakthroughs?
2lc7mo
this

This post is great, I suspect I will be referencing it from time to time.

I don't know if you meant to include the footnotes as well, since they aren't present in this post. For instance, I tried clicking on

After a week, you'll likely remember why you started, but it may be hard to bring yourself to really care[2]

and it just doesn't lead anywhere, although I did find it on your blog.

1Pablo Repetto7mo
I'm glad you like it! Fixed the footnotes. They were there at the end, but unlinked. Some mixup when switching between LW's Markdown and Docs-style editor, most likely.
3Johannes C. Mayer7mo
I have not read this post, just looked at it for 30 seconds. It seems you can apply the babble and prune framework at different levels. What the author there talks about seems to be about the actual idea generation process. In that sense, the content here is already pruned, in the sense that I thought the idea was worth writing about and finished my exploratory writing. This post did not cause me to come up with this scheme, so what the post talks about is probably at least slightly different.

Much ink has been spilled on the difficulty of trying to solve a problem ahead of time and without any feedback loop; I won't rehash those arguments at length.

Can you point me to some readings, especially alignment-related stuff? (No need to rehash anything.) I've been reading LW on and off since ~2013 and have somehow missed every post related to this, which is kind of embarrassing.

1RobertM7mo
Unfortunately, this feels like a subject that's often discussed in asides. I have a feeling this came up more than once during the 2021 MIRI Conversations [https://www.lesswrong.com/s/n945eovrA3oDueqtq], but I could be misremembering.
6adamShimi7mo
Here are some of mine:
* https://www.alignmentforum.org/posts/72scWeZRta2ApsKja/epistemological-vigilance-for-alignment
* https://www.alignmentforum.org/posts/FQqcejhNWGG8vHDch/on-solving-problems-before-they-appear-the-weird

Just letting you know that you seem to have double-pasted the 3rd bullet point.

1Quinn8mo
oof, good catch, fixed.

I think TurnTrout's Reward is not the optimization target addresses this, but I'm not entirely sure (feel free to correct me).

1Big Tony8mo
Thank you! That post then led me to https://www.lesswrong.com/posts/3RdvPS5LawYxLuHLH/hackable-rewards-as-a-safety-valve, which appears to be talking about exactly the same thing.

for instance, one might try recruiting John Carmack to work on AI safety [this strikes me as a good idea, hindsight notwithstanding], only to get him interested enough that he starts up an AGI company a few years later

Is this a reference to his current personal project to work on AGI? 

Edit: reading a bit more about him, I suspect if he ever got interested in alignment work he'd likely prefer working on Christiano-style stuff than MIRI-style stuff. For instance (re: metaverse):

The idea of the metaverse, Carmack says, can be "a honeypot trap for 'archit

... (read more)
1Joe_Collman8mo
My source on this is his recent appearance on the Lex Fridman podcast [https://youtu.be/I845O57ZSy4?t=14568]. He's moving beyond the personal project stage. He does seem well-informed (and no doubt he's a very smart guy), so I do still hope that he might update pretty radically given suitable evidence. Nonetheless, if he stays on his present course greater awareness seems to have been negative (this could absolutely change).

The tl;dr of his current position is:
1. Doesn't expect a fast takeoff.
    a. Doesn't specify a probability or timescale - unclear whether he's saying e.g. a 2-year takeoff seems implausible; pretty clear he finds a 2-week takeoff implausible.
2. We should work on ethics/safety of AGI once we have a clearer idea what it looks like. (expects we'll have time, due to 1)
3. Not really dismissive of current AGI safety efforts, but wouldn't put his money there at present, since it's unclear what can be achieved.

My take:
* On 1a, his argument seems to miss the possibility that an AGI doesn't need to copy itself to other hardware (he has reasonable points that suggest this would be impractical), but might write/train systems that can operate with very different hardware.
    * If we expect smooth progress, we wouldn't expect the first system to have the capability to do this - though it may be able to bide its time (potentially requiring gradient-hacking).
    * However, he expects that we need a small number of key insights for AGI (seems right to me). This is hard to square with an expectation of smooth progress.
* On 2, he seems overconfident on the ease of solving the safety issues once we have a clearer idea what it looks like. I guess he's thinking that this looks broadly similar to engineering problems we've handled in the past, which seems wrong to me. Everything we've built is narrow, so we've never needed to solve the "point this at exactly the thing we mean" problem (cap

It is hoped that this will allow for solutions to some of the problems which are inherent to the prevailing conception of physics while opening up new avenues of investigation and allowing us to talk about concepts like information. In future posts, I'll explain how it does this in more detail.

Could you at least give a "teaser preview" of what are the "problems which are inherent to the prevailing conception of physics" you mention here? Perhaps the Applications page regarding hybrid systems, and the remark in Q2 of the FAQ about how constructor theory let... (read more)

1A.H.8mo
Hi, thanks for your question. I have a big piece covering all of this in more detail which I plan to post in a couple of days once I've finished writing it. In the meantime, please accept this 'teaser' of a few problems in the prevailing conception (PC):
1. Dealing with hybrid systems. If we are operating in a regime where there are two contradictory sets of dynamical laws, we do not know what kind of evolution the system will follow. An example of such a system is one where both gravity (as governed by general relativity) and quantum mechanics are relevant. In such cases, under the PC, it is difficult to make any predictions of what kind of behaviour systems will exhibit, since we lack the dynamical laws governing the system. However, by appealing to general counterfactual principles (the interoperability principle and the principle of locality), which cannot be stated in the PC, we can make predictions about such systems, even if we don't know the form of the dynamical laws.
2. The 2nd Law of Thermodynamics. Under the PC, the 2nd law is difficult to express precisely, since all dynamical laws are reversible in time, but the 2nd law implies irreversible dynamics. This is normally dealt with by introducing some degree of imprecision or anthropocentrism (e.g. through averaging or coarse graining, or describing the 2nd law in terms of our state of knowledge of the system). However, the 2nd law can be stated precisely as a counterfactual statement along the lines of 'it is impossible to engineer a cyclic process which converts heat entirely into work'.
3. The initial state problem. Under the PC, the state of a system can be explained in terms of its evolution, according to dynamical laws, from a previous state at an earlier time. This makes it difficult to explain early states of the universe: if a state can only be explained in terms of earlier states, then either the universe has an init

This is a really useful framing, it crystallized a lot of messy personal moral intuitions. Thanks for writing it.

Something something akrasia maybe? Or some of the other stuff in that wiki's "see also" section?

Maybe John Nerst's erisology is the "dual" to your essay here, since it's basically the study of disagreement. There's also a writeup in The Atlantic, and a podcast episode with Julia Galef. Quoting Nerst:

By “disagreement” I don’t mean the behavior of disagreeing. I mean the plain fact that people have different beliefs, different tastes, and react differently to things.

I find this endlessly interesting. A person that disagrees with me must have a different mind in some way. Can that difference be described? Explained? What do such differences say about th

... (read more)
1ravedon9mo
Excellent, and thank you! I'd somehow forgotten about Nerst and would have linked to his work directly. I think the additional value Hahn's ontology brings to erisology [https://everythingstudies.com/what-is-erisology/] is an explicitly positive gradient, in a hill-climbing sense. For any disagreement, Hahn's ontology allows the parties to accept some level of agreement (Where are we on the agreement landscape?) and have an objective target for improvement, assuming good faith on everyone's part. I'm inclined to try to communicate it to Nerst based upon your linking the two!

The 'Resources' section lists How to Talk So Kids Will Listen and Listen So Kids Will Talk [book] -- I also enjoyed weft's Book Review: How To Talk So Little Kids Will Listen, written by Julie King and Joanna Faber, daughter of Adele Faber, who co-wrote the former with Elaine Mazlish. Quoting weft:

The core principles are the same, but the update stands on its own. Where the original "Kids" acts more like a workbook, asking the reader to self-generate responses, "Little Kids" feels more like it's trying to download a response system into your head via model

... (read more)
2Ruby9mo
Added!
2Elizabeth9mo
I found Crucial Conversations to be the adult version of How To Talk So... and it seriously levelled up my interpersonal skills at the time.

I'm guessing you're referring to Brian Potter's post Where Are The Robotic Bricklayers?, which to me is a great example of reality being surprisingly detailed. Quoting Brian:

Masonry seemed like the perfect candidate for mechanization, but a hundred years of limited success suggests there’s some aspect to it that prevents a machine from easily doing it. This makes it an interesting case study, as it helps define exactly where mechanization becomes difficult - what makes laying a brick so different than, say, hammering a nail, such that the latter is almost

... (read more)
2Gunnar_Zarncke9mo
Yes, that one! Thanks for finding and quoting.

This reminds me of Eliezer's short story That Alien Message, which is told from the other side of the speed divide. There's also Freitas' "sentience quotient" idea upper-bounding information-processing rate per unit mass at SQ +50 (it's a log scale -- for reference, human brains are +13, all neuronal brains fall within several points of that, vegetative SQ is -2, etc).
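
For readers who haven't seen it, the definition (as I understand Freitas' formulation; treat this as a paraphrase rather than a quote) is just a log ratio,

$$\mathrm{SQ} \equiv \log_{10}\!\left(\frac{I}{M}\right),$$

with $I$ the information-processing rate in bits/s and $M$ the processor's mass in kg. So the quoted human +13 corresponds to roughly $10^{13}$ bits/s per kilogram of brain, and the +50 ceiling corresponds to the quantum (Bremermann-type) bound of roughly $10^{50}$ bits/s per kilogram of matter.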

Perhaps I'm missing something (I don't work in AI research), but isn't the obvious first stop Amodei et al.'s Concrete Problems in AI Safety (which Christiano co-authored)? Apologies if you already know about this paper and meant something else.

I concur with your last paragraph, and see it as a special case of rationalist taboo (taboo "AGI"). I'd personally like to see a set of AGI timeline questions on Metaculus where only the definitions differ. I think it would be useful for the same forecasters to see how their timeline predictions vary by definition; I suspect there would be a lot of personal updating to resolve emergent inconsistencies (extrapolating from my own experience, and also from ACX prediction market posts IIRC), and it would be interesting to see how those personal updates behave in the aggregate. 

I'm reminded of Sarah Constantin's Humans Who Are Not Concentrating Are Not General Intelligences. A quote that resonates with my own experience:

I’ve noticed that I cannot tell, from casual conversation, whether someone is intelligent in the IQ sense.

I’ve interviewed job applicants, and perceived them all as “bright and impressive”, but found that the vast majority of them could not solve a simple math problem. The ones who could solve the problem didn’t appear any “brighter” in conversation than the ones who couldn’t.

I’ve taught public school teachers, wh

... (read more)

Just wondering -- did you ever get around to writing this post? I've bounced off many Yoneda explainers before, but I have a high enough opinion of your expository ability that I'm hopeful yours might do it for me.

2johnswentworth1y
Still haven't gotten around to it.

You may be interested in Kevin Simler's essay A Nihilist's Guide to Meaning, which is a sort of graph-theory flavored take on meaning and purpose. I was pleasantly surprised to see how much mileage he got out of his working definition, how many examples of meaningful vs not-meaningful things it explains:

A thing X will be perceived as meaningful in context C to the extent that it's connected to other meaningful things in C.
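
Simler's definition is recursive (a thing is meaningful to the extent that it's connected to other meaningful things), which suggests one toy way to make it computable: look for a self-consistent assignment of "meaning" over the graph, i.e. eigenvector centrality via power iteration. The sketch below is my own illustration, not code from the essay, and the example graph and node names are made up.

```python
def meaning_scores(edges, iterations=100):
    """Assign each node a "meaning" score proportional to the sum of its
    neighbours' scores, iterated to a fixed point (power iteration on the
    adjacency matrix, normalized each step). Assumes an undirected graph
    given as a list of (node, node) edges."""
    nodes = sorted({n for edge in edges for n in edge})
    score = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        new = {n: 0.0 for n in nodes}
        for a, b in edges:
            new[a] += score[b]
            new[b] += score[a]
        total = sum(new.values()) or 1.0
        score = {n: v / total for n, v in new.items()}
    return score

# Hypothetical example: "career" sits inside a well-connected cluster of
# already-meaningful things, while "sudoku" hangs off a single leaf.
edges = [("family", "community"), ("family", "career"), ("community", "career"),
         ("leisure", "family"), ("sudoku", "leisure")]
print(meaning_scores(edges))
```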

Applied Divinity Studies wrote a related post that might be of interest: How Long Should you Take to Decide on a Career? They consider a modified version of the secretary problem that accounts for the 2 problematic assumptions you noted (binary payoff and ignorance of opportunity cost); you can play with the Colab notebook if you're curious. Interestingly, varying the parameters tends to pull the optimal starting point earlier (contra my initial intuition), sometimes by a lot. The optimal solution is so parameter-dependent that it made me instinctively wan... (read more)
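
To give a flavour of the kind of experiment that notebook runs, here's a toy sketch of my own (not ADS's code): it assumes "offer values" drawn uniformly from [0, 1], scores the cardinal value you end up with rather than the binary did-you-get-the-best payoff, and only varies the cutoff k of the usual reject-the-first-k-then-take-the-first-better strategy.

```python
import random

def run_trial(n, k, rng):
    """One round: see n uniform(0, 1) offers in sequence, reject the first k
    outright, then take the first offer that beats everything seen so far
    (or the last offer if none does). Returns the value actually obtained."""
    values = [rng.random() for _ in range(n)]
    best_seen = max(values[:k]) if k > 0 else float("-inf")
    for v in values[k:]:
        if v > best_seen:
            return v
    return values[-1]

def expected_payoff(n, k, trials=20000, seed=0):
    rng = random.Random(seed)
    return sum(run_trial(n, k, rng) for _ in range(trials)) / trials

n = 100
for k in (1, 5, 10, 15, 20, 37, 50):  # 37 ~ n/e, the classic binary-payoff cutoff
    print(f"reject first {k:>2}: average value obtained ~ {expected_payoff(n, k):.3f}")
```

In this toy version the best cutoff lands noticeably earlier than the classic n/e ≈ 37, consistent with the "optimal starting point moves earlier" takeaway, though (as the post notes) exactly where it lands is quite parameter-dependent.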

I'm confused about your pushback to AllAmericanBreakfast's (great) feedback on your style, which I find antagonistic to the point that (like AAB) I'm not comfortable sharing it with anyone, despite broadly agreeing with your conclusions and thinking it's important. 

I'm curious if you can explicate the thought process behind such a high estimate.

9Tomás B.1y
Not really, just an intuition that it will be easier than most think, and what we have seen so far is the thin edge of a wedge that will be hammered in pretty quickly.

This feels like you're avoiding the least convenient possible world, even if that wasn't your intention.

It is worth remarking though, that even a nuclear rocket might learn something useful from practicing the gravity turn maneuver. Just because you have an easy time leaving Earth’s atmosphere and have no need of finesse, doesn’t mean your travels won’t land you on Venus someday.

I'm reminded of the career advice page on Terry Tao's blog. When I first found it many years ago as a student, I wondered why someone like Tao would bother to write about stuff like "work hard" and "write down what you've done" and "be patient" and "learn and relearn your field". Was... (read more)

Great post. I don't have much to add, but here are some related reads:

  • On compositionality by Jules Hedges, where he claims (among other things) that "the opposite of compositionality is emergent effects" and "interfaces are synonymous with compositionality"
  • The epic story of container shipping by Venkatesh Rao, which expands upon your last example awesomely
  • This quote from Brad Stone's book The Everything Store on the inspiration behind Amazon's AWS is a nice example of your last paragraph on advice for system designers:

At the same time, Bezos became enamored with

... (read more)
2Kinrany2y
Seven sketches in compositionality [https://arxiv.org/abs/1803.05316] explores compositionality (category theory, really) with examples:
* Dish recipes
* Chemistry, resource markets and manufacturing
* Relational database schemas and data migrations
* Projects and teams with conflicting design trade-offs
* Cyber-physical systems, signal flow graphs, circuits