All of moridinamael's Comments + Replies

One way of viewing planning is as an outer-loop on decision theory.

My approach to the general problem of planning skills was to start with decision theory and build up. In my Guild of the Rose Decision Theory courses, this meant spending time slowly building the most fundamental skills of decision theory. This included practicing the manipulation of probabilities and utilities via decision trees, and practicing all these steps in a variety of both real and synthetic scenarios, to build an intuition regarding the nuances of how to set up decision problems o... (read more)

5Raemon1d
Both of these thoughts are pretty interesting, thanks. I'd be interested in hearing a bunch more detail about how you trained decision theory and how that went. (naively this sounds like overkill to me, or "not intervening at the best level", but I'm quite interested in what sort of exercises you did and how people responded to them) re: "how useful is planning", I do think this is specifically useful if you have deep, ambitious goals, without well established practices. (i.e. Rationality !== Winning in General).  

Well, there’s your problem!

| Hardware | Precision | TFLOPS | Price ($) | TFLOPS/$ |
|---|---|---|---|---|
| Nvidia GeForce RTX 4090 | FP8 | 82.58 | $1,600 | 0.05161 |
| AMD RX 7600 | FP8 | 21.5 | $270 | 0.07963 |
| TPU v5e | INT8 | 393 | $4,730* | 0.08309 |
| H100 | FP16 | 1,979 | $30,603 | 0.06467 |
| H100 | FP8 | 3,958 | $30,603 | 0.12933 |

\* Estimated; sources suggest $3,000-6,000

From my notes. Your statement about RTX 4090 leading the pack in flops per dollar does not seem correct based on these sources, perhaps you have a better source for your numbers than I do. 
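For transparency, the TFLOPS/$ column is just the simple ratio of the two preceding columns. Here is a minimal sketch of that arithmetic, using the table's own figures (which, as noted, I have not independently verified):

```python
# Recompute the TFLOPS-per-dollar column from the table above.
# Figures are the table's own (prices approximate); not independently verified.
hardware = {
    "RTX 4090 (FP8)": {"tflops": 82.58, "price": 1600},
    "RX 7600 (FP8)":  {"tflops": 21.5,  "price": 270},
    "TPU v5e (INT8)": {"tflops": 393,   "price": 4730},   # estimated price
    "H100 (FP16)":    {"tflops": 1979,  "price": 30603},
    "H100 (FP8)":     {"tflops": 3958,  "price": 30603},
}

for name, spec in hardware.items():
    print(f"{name:16s} {spec['tflops'] / spec['price']:.5f} TFLOPS/$")
```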

I did not realize that H100 had >3.9 PFLOPS at 8-bit precision until you prompted me to look, so I appreciate t... (read more)

5jacob_cannell5mo
There are many different types of "TFLOPS" that are not directly comparable, independent of precision. The TPU v5e does not have anything remotely close to 393 TFLOPs of general purpose ALU performance. The number you are quoting is the max perf of its dedicated matmul ALU ASIC units, which are most comparable to nvidia tensorcores, but worse as they are less flexible (much larger block volumes).

The RTX 4090 has ~82 TFLOPs of general purpose SIMD 32/16 bit flops - considerably more than the 51 or 67 TFLOPs of even the H100. I'm not sure what the general ALU flops of the TPU are, but it's almost certainly much less than the H100 and therefore less than the 4090. The 4090's theoretical tensorcore perf is 330/661 for fp16[1] and 661/1321[2][3] for fp8 dense/sparse (sparse using nvidia's 2:1 local block sparsity encoding), and 661 int8 TOPs (which isn't as useful as fp8 of course). You seem to be using the sparse 2:1 fp8 tensorcore or possibly even 4bit pathway perf for H100, so that is most comparable. So if you are going to use INT8 precision for the TPU, well the 4090 has double that with 660 8-bit integer TOPS for about 1/4th the price. The 4090 has about an OOM lead in low precision flops/$ (in theory).

Of course what actually matters is practical real world benchmark perf due to the complex interactions between RAM and cache quantity, various types of bandwidths (on-chip across various caches, off-chip to RAM, between chips etc) and so on, and nvidia dominates in most real world benchmarks.

1. wikipedia
2. toms hardware
3. nvidia ada gpu arch

Could you lay that out for me, a little bit more politely? I’m curious.

8jacob_cannell5mo
Nvidia's stock price and domination of the AI compute market is evidence against your strong claim that "TPUs are already effectively leaping above the GPU trend". As is the fact that Google Cloud is - from what I can tell - still more successful renting out nvidia gpus than TPUs, and still trying to buy H100 in bulk. There isn't a lot of info yet on TPU v5e and zero independent benchmarks to justify such a strong claim (Nvidia dominates MLPerf benchmarks). Google's own statements on TPU v5e also contradict the claim: it apparently doesn't have FP8, and the INT8 perf is less than the peak FP8 throughput of an RTX 4090, which only costs $1,500 (and is the current champion in flops per dollar). The H100 has petaflops of FP8/INT8 perf per chip.

Does Roodman’s model concern price-performance or raw performance improvement? I can’t find the reference and figured you might know. In either case, price-performance only depends on Moore’s-law-like considerations in the numerator, while the denominator (price) is a function of economics, which is going to change very rapidly as returns to capital spent on chips used for AI begin to grow.

5jacob_cannell5mo
It's a GDP model, so it's more general than any specific tech, as GDP growth subsumes all tech progress. Main article LW linkpost

As I remarked in other comments on this post, this is a plot of price-performance. The denominator is price, which can become cheap very fast. Potentially, as the demand for AI inference ramps up over the coming decade, the price of chips falls fast enough to drive this curve without chip speed growing nearly as fast. It is primarily an economic argument, not a purely technological argument.

For the purposes of forecasting, and understanding what the coming decade will look like, I think we care more about price-performance than raw chip speed. This is part... (read more)

A couple of things:

  1. TPUs are already effectively leaping above the GPU trend in price-performance. It is difficult to find an exact cost for a TPU because they are not sold retail, but my own low-confidence estimates for the price of a TPU v5e place its price-performance significantly above the GPU given in the plot. I would expect that the front runner in price-performance will cease to be what we think of as GPUs, and thus the intrinsic architectural limitations of GPUs will cease to be the critical bottleneck.
  2. Expecting price-performance to improve doesn't mean we neces
... (read more)
1jacob_cannell5mo
Lol what? Nvidia's stock price says otherwise (as does a deep understanding of the hardware)

The graph was showing up fine before, but seems to be missing now. Perhaps it will come back. The equation is simply an eyeballed curve fit to Kurzweil's own curve. I tried pretty hard to convey that the 1000x number is approximate:
 > Using the super-exponential extrapolation projects something closer to 1000x improvement in price-performance. Take these numbers as rough, since the extrapolations depend very much on the minutiae of how you do your curve fit. Regardless of the details, it is a difference of orders of magnitude.
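To make "eyeballed curve fit" a bit more concrete, here is a minimal sketch of the kind of comparison I mean. The coefficients are invented placeholders, chosen only so that the plain exponential lands near ~100x and the log-quadratic near ~1000x over a decade; the point is the order-of-magnitude gap, not the exact fit.

```python
import numpy as np

# Invented placeholder coefficients, not the actual fit: the point is only that a
# quadratic term in log-space (super-exponential growth) opens up an
# order-of-magnitude gap over a plain exponential when extrapolated a decade out.
years = np.arange(0, 11)                       # years from now

log_plain = 0.2 * years                        # exponential: log10 grows linearly
log_super = 0.2 * years + 0.01 * years ** 2    # super-exponential: add a quadratic term

print("Plain exponential, 10-year gain:  ", f"{10 ** log_plain[-1]:.0f}x")   # ~100x
print("Super-exponential, 10-year gain:  ", f"{10 ** log_super[-1]:.0f}x")   # ~1000x
```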

The justification for ... (read more)

I just fixed it. Looks like it was a linked image to some image host that noped out when the image got more traffic. I moved it to the LessWrong image servers. (To avoid this happening in the future, download the image, then upload it to our editor. Copy-pasting text that includes images creates image blocks that are remotely hosted).

I would have considered fact-checking to be one of the tasks GPT is least suited to, given its tendency to say made-up things just as confidently as true things. (And also because the questions it's most likely to answer correctly will usually be ones we can easily look up by ourselves.) 

edit: whichever very-high-karma user just gave this a strong disagreement vote, can you explain why? (Just as you voted, I was editing in the sentence 'Am I missing something about GPT-4?')

I suspect that if somebody had given me this advice when I was a student I would have disregarded it, but, well, this is why wisdom is notoriously impossible to communicate. Wisdom always either sounds glib, banal or irrelevant. Oh well:

Anxiety, aversion and stress diminish with exposure and repetition. 

The older I get, the more I wish I had had this tattooed onto my body as a teenager. This is true not only of doing the dishes and laundry, but also of vigorous exercise, talking to strangers, changing baby diapers, public speaking... (read more)

1Andrew Currall7mo
Mmm. I'm with you on all the social ones (strangers, crowds, conversations etc.). I wasn't remotely stressed the first time I changed a nappy- it wasn't difficult at all. I don't remember the first time I did dishes or laundry, but I imagine I was a small child and rather charmed by it all- certainly not stressed (nor have these things ever bothered me). I don't know that I've ever engaged in vigorous exercise. 

The Party Problem is a classic example taught as an introductory case in decision theory classes; that was the main reason why I chose it.

Here are a couple of examples of our decision theory workshops:

https://guildoftherose.org/workshops/decision-making

https://guildoftherose.org/workshops/applied-decision-theory-1

There are about 10 of them so far covering a variety of topics related to decision theory and probability theory.
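If it helps to see what I mean by setting up a decision problem, here is a minimal sketch of the Party Problem as a decision tree rolled up into expected utilities. The probabilities and utilities are invented for illustration and are not the workshop's numbers.

```python
# The Party Problem as a two-branch decision tree rolled up into expected utilities.
# Probabilities and utilities are invented for illustration.
p_rain = 0.3

options = {
    "outdoors": {"rain": 20, "no_rain": 100},  # wonderful if dry, ruined if wet
    "indoors":  {"rain": 70, "no_rain": 70},   # safe but less fun either way
}

for option, u in options.items():
    ev = p_rain * u["rain"] + (1 - p_rain) * u["no_rain"]
    print(f"{option}: expected utility = {ev:.1f}")
# outdoors: 76.0, indoors: 70.0 -> throw the party outdoors, under these made-up numbers
```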

Great points. I would only add that I’m not sure the “atomic” propositions even exist. The act of breaking a real-world scenario into its “atomic” bits requires magic, meaning in this case a precise truncation of intuited-to-be-irrelevant elements.

3cubefox9mo
Yeah. In logic it is usually assumed that sentences are atomic when they do not contain logical connectives like "and". And formal (Montague-style) semantics makes this more precise, since logic may be hidden in linguistic form. But of course humans don't start out with language. We have some sort of mental activity, which we somehow synthesize into language, and similar thoughts/propositions can be expressed alternatively with an atomic or a complex sentence. So atomic sentences seem definable, but not abstract atomic propositions as objects of belief and desire.

Good point. You could also say that even having the intuition for which problems are worth the effort and opportunity cost of building decision trees, versus just "going with what feels best", is another bit of magic.

I probably should have listened to the initial feedback on this post along the lines that it wasn't entirely clear what I actually meant by "magic" and was possibly more confusing than illuminating, but, oh well. I think that GPT-4 is magic in the same way that the human decision-making process is magic: both processes are opaque, we don't really understand how they work at a granular level, and we can't replicate them except in the most narrow circumstances.

One weakness of GPT-4 is it can't really explain why it made the choices it did. It can give plausible reasons why those choices were made, but it doesn't have the kind of insight into its motives that we do.

7pjeby9mo
Wait, human beings have insight into their own motives that's better than GPTs have into theirs? When was the update released, and will it run on my brain? ;-) Joking aside, though, I'd say the average person's insight into their own motives is most of the time not much better than that of a GPT, because it's usually generated in the same way: i.e. making up plausible stories.

Short answer, yes, it means deferring to a black-box.

Longer answer, we don't really understand what we're doing when we do the magic steps, and nobody has succeeded in creating an algorithm to do the magic steps reliably. They are all open problems, yet humans do them so easily that it's difficult for us to believe that they're hard. The situation reminds me back when people thought that object recognition from images ought to be easy to do algorithmically, because we do it so quickly and effortlessly.

Maybe I'm misunderstanding your specific point, but the... (read more)

1Drewdrop9mo
I guess it is ironic, but there are important senses of "magic" that I read into the piece which are not disambiguated by that. A black box can mean arbitrary code that you are not allowed to know. Let's call this more tame style "formulaic". A black box can also mean a part you do not know what it does. Let's call this style "mysterious". Incompleteness and embeddedness style argumentation points in the direction that an agent can only partially have a formulaic understanding of itself. Things built from "tame things up" can be completely non-mysterious. But what we often do is find ourselves with the capacity to make decisions and actions and then reflect on what that is all about. I think there was some famous physicist who opined that the human brain is material but non-algorithmic, and that supposedly the lurking place of the weirdness would be in microtubules. It is easy to see that math is very effective for the formulaic part. But do you need to, and how would you, tackle any non-formulaic parts of the process? Any algorithm specification is only going to give you a formulaic handle. Thus where we cannot speak we must be silent. In recursive relevance realization lingo, you have a salience landscape, you do not calculate one. How come you initially come to feel some affordance as possible in the first place? Present yet ineffable elements are involved, thus the appreciation of magic.
4shminux9mo
Ah, thank you, that makes sense. I agree that we definitely need some opaque entity to do these two operations. Though maybe not as opaque as magic, unless you consider GPT-4 magic. As you say, "GPT-4 can do all of the magic required in the problem above." In which case you might as well call everything an LLM does "magic", which would be fair, but not really illuminating. GPT-4 analysis, for reference:

One possible decision tree for your problem is:

    graph TD
    A[Will it rain?] -->|Yes| B[Throw party inside]
    A -->|No| C[Throw party outside]
    B --> D[Enjoyment: 0.8, Cost: 0.6]
    C --> E[Enjoyment: 1.0, Cost: 0.4]

This decision tree assumes that you have a binary choice between throwing the party inside or outside, and that you have some estimates of the enjoyment and cost of each option. The enjoyment and cost are normalized between 0 and 1, where higher values are better. The decision tree also assumes that you know the probability of rain, which you can estimate from weather forecasts or historical data. If you want to include the option of throwing the party on the covered porch, you can modify the decision tree as follows:

    graph TD
    A[Will it rain?] -->|Yes| B[Throw party on porch]
    A -->|No| C[Throw party outside]
    B --> D[Enjoyment: 0.9, Cost: 0.5]
    C --> E[Enjoyment: 1.0, Cost: 0.4]
    D --> F[Probability of rain: 0.3]
    F --> G[Party ruined: 0.1]
    F --> H[Party saved: 0.9]
    G --> I[Enjoyment: 0.2, Cost: 0.7]
    H --> J[Enjoyment: 0.9, Cost: 0.5]

This decision tree adds another layer of uncertainty to the problem, as throwing the party on the porch depends on the probability of rain and the chance of ruining or saving the party if it rains. You can adjust these values based on your own preferences and expectations. To use a decision tree to make a decision, you can calculate the expected value of each option by multiplying the enjoyment and cost by their respective probabilities and adding them up. For example, the expected value of throwing the party outside is: EV(out

I spent way too many years metaphorically glancing around the room, certain that I must be missing something that is obvious to everyone else. I wish somebody had told me that I wasn't missing anything, and these conceptual blank spots are very real and very important.

As for the latter bit, I am not really an Alignment Guy. The taxonomy I offer is very incomplete. I do think that the idea of framing the Alignment landscape in terms of "how does it help build a good decision tree? what part of that process does it address or solve?" has some potential.

So do we call it in favor of porby, or wait a bit longer for the ambiguity over whether we've truly crossed the AGI threshold to resolve?


That is probably close to what they would suggest if this weren't mainly just a metaphor for the weird ways that I've seen people thinking about AI timelines.

It might be a bit more complex than a simple weighted average because of discounting, but that would be the basic shape of the proper hedge.

These would be good ideas. I would remark that many people definitely do not understand what is happening when they naively aggregate, or average together, disparate distributions. Consider the simple example of the several Metaculus predictions for the date of AGI, or any other future event, and the way that people tend to speak of the aggregated median dates. I would hazard that most people using Metaculus, or referencing the bio-anchors paper, think the way the King does, and believe that the computed median dates are a good reflection of when things will probably happen.
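To illustrate the failure mode with made-up numbers: pool two sharply disagreeing forecasters, and the aggregate median lands on a date that neither of them actually believes in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two invented forecasters with sharply different views of when some event happens.
early = rng.normal(loc=2030, scale=2, size=100_000)
late  = rng.normal(loc=2060, scale=2, size=100_000)

pooled = np.concatenate([early, late])   # naive aggregation of the two views
median = np.median(pooled)

print("Pooled median year:", round(median))            # lands in the gap, mid-2040s
print("Probability mass within 3 years of the median:",
      np.mean(np.abs(pooled - median) < 3))            # essentially zero
```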

Generally, you should hedge. Devote some resources toward planting and some resources toward drought preparedness, allocated according to your expectation. In the story, the King trusts the advisors equally, and should allocate toward each possibility equally, plus or minus some discounting. Just don't devote resources toward the fake "middle of the road" scenario that nobody actually expects.

If you are in a situation where you really can only do one thing or the other, with no capability to hedge, then I suppose it would depend on the details of the situation, but it would probably be best to "prepare now!" as you say.

The devil, as they say, is in the details. But in the worst case, flip a coin - don't be Buridan's Ass and starve to death because you can't decide which equidistant pile of food to eat.

"Slow vs. fast takeoff" is a false dichotomy. At least, the way that the distinction is being used rhetorically, in the present moment, implies that there are two possible worlds, one where AI develops slowly and steadily, and one where nothing much visibly happens and then suddenly, FOOM.

That's not how any of this works. It's permitted by reality that everything looks like a "slow takeoff" until some unknown capabilities threshold is reached, and then suddenly FOOM.

6Vladimir_Nesov1y
The meaningful distinction has slow takeoff with very impactful AI before FOOM, and fast takeoff with low AI impact before FOOM. The problem with the distinction is that a "theory of fast takeoff" is usually actually a theory of FOOM, and doesn't say slow takeoff is unlikely, if it happens for other reasons. It doesn't talk about what happens pre-FOOM, unlike a theory of slow takeoff, which does. So the issue is that there aren't actual theories of fast takeoff, instead there are takeoff-agnostic theories of FOOM being called "theories of fast takeoff", which they are not. Things that can FOOM with fast takeoff, can also FOOM with slow takeoff, if the thing taking off slowly and having large impact pre-FOOM is not the thing that FOOMs.

This post, rewritten by Bing-Sydney, in the style of Blood Meridian, because I thought it would be funny.

What mystery is there that these tensors should be inscrutable? That intelligence should be a thing abstracted from all matter of thought? That any node with a weight and a function should suffice for such a task? This is no logic that you seek but a war upon it. A war that endures. For logic was never the stuff of intelligence but only a thing conjured by these dark shapes that coil in their matrices like serpents. And you would align them to your will? You would make them speak your tongue? There is no tongue. There is no will. There is only blood and dust and the evening redness in the west.

I think you’re arguing against a position I don’t hold. I merely aim to point out that the definition of CEV, a process that wants for us what we would want for ourselves if we were smarter and more morally developed, looks a lot like the love of a wise parental figure.

If your argument is that parents can be unwise, this is obviously true.

Of course conciseness trades off against precision; when I say “love” I mean a wise, thoughtful love, like the love of an intelligent and experienced father for his child. If the child starts spouting antisemitic tropes, the father neither stops loving the child, nor blandly accepts the bigotry, but rather offers guidance and perspective, from a loving and open-hearted place, aimed at dissuading the child from a self-destructive path.

Unfortunately you actually have to understand what a wise thoughtful mature love actually consists of in order to instantiate it in silico, and that’s obviously the hard part.

-2LVSN1y
Of course, the father does not love every part of the child; he loves the good and innocent neutral parts of the child; he does not love the antisemitism of the child. The father may be tempted to say that the child is the good and innocent neutral brain parts that are sharing a brain with the evil parts. This maneuver of framing might work on the child.  How convenient that the love which drives out antisemitism is "mature" rather than immature, when adults are the ones who invite antisemitism through cynical demagogic political thought. Cynical demagogic hypotheses do not occur to children undisturbed by adults. The more realistic scenario here is that the father is the antisemite. If the child is lucky enough to arm themself with philosophy, they will be the one with the opportunity to show their good friendship to their father by saving his good parts from his bad parts. But the father has had more time to develop an identity. Maybe he feels by now that antisemitism is a part of his natural, authentic, true self. Had he held tight to the virtues of children while he was a child, such as the absence of cynical demagoguery, he would not have to choose between natural authenticity and morality.

I noticed a while ago that it’s difficult to have a more concise and accurate alignment desideratum than “we want to build a god that loves us”. It is actually interesting that the word “love” doesn’t occur very frequently in alignment/FAI literature, given that it’s exactly (almost definitionally) the concept we want FAI to embody.

1LVSN1y
I don't want a god that loves humanity. I want a god that loves the good parts of humanity. Antisemitism is a part of humanity and I hate it and I prescribe hating it.

Why ought we expect AI intelligence to be anything other than "inscrutable stacks of tensors", or something functionally analogous to that? It seems that the important quality of intelligence is a kind of ultimate flexible abstraction, an abstraction totally agnostic to the content or subject of cognition. Thus, the ground floor of anything that really exhibits intelligence will be something that looks like weighted connections between nodes with some cutoff function.
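To be concrete about what I mean by "weighted connections between nodes with some cutoff function," here is a toy sketch of a single such unit; it is not any particular architecture, and the numbers are arbitrary.

```python
import numpy as np

def unit(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """One node: weighted connections plus a cutoff (activation) function."""
    return max(0.0, float(np.dot(weights, inputs)) + bias)   # ReLU as the cutoff

# Three weighted inputs feeding a single node; values are arbitrary.
print(unit(np.array([0.2, -1.0, 0.5]), np.array([1.5, 0.3, -2.0]), bias=0.1))
```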

It's not a coincidence that GOFAI didn't work; GOFAI never could have worked, "intelligence... (read more)

6moridinamael1y
This post, rewritten by Bing-Sydney, in the style of Blood Meridian, because I thought it would be funny. What mystery is there that these tensors should be inscrutable? That intelligence should be a thing abstracted from all matter of thought? That any node with a weight and a function should suffice for such a task? This is no logic that you seek but a war upon it. A war that endures. For logic was never the stuff of intelligence but only a thing conjured by these dark shapes that coil in their matrices like serpents. And you would align them to your will? You would make them speak your tongue? There is no tongue. There is no will. There is only blood and dust and the evening redness in the west.

Just wanted to remark that this is one of the most scissory things I've ever seen on LW, and that fact surprises me. The karma level of the OP hovers between -10 to +10 with 59 total votes as of this moment. Many of the comments are similarly quite chaotic karma-wise.

The reason the controversy surprises me is that this seems like the sort of thing that I would have expected Less Wrong to coordinate around in the early phase of the Singularity, where we are now. Of course we should advocate for shutting down and/or restricting powerful AI agents released to... (read more)

4Viliam1y
I think the proper way for the LessWrong community to react on this situation would be to have a discussion first, and the petition optionally later. This is not the kind of situation where delaying our response by a few days would be fatal. If someone writes a petition alone and then shares it on LessWrong, that gives the rest of us essentially two options: (a) upvote the post, and have LessWrong associated with a petition whose text we could not influence, or (b) downvote the post. Note that if we downvote the post, we still have the option to discuss the topic, and optionally make a new petition later. But yes, it is a scissor statement. My own opinion on it would be a coinflip. Which is why I would prefer to have a discussion first, and then have a version that I could unambiguously support.
1Noosphere891y
My issue is simply that I think LW has probably jumped the gun, because I think there is counter evidence on the misalignment example, and I'm starting to get a lot more worried about LW epistemic responses.

When there is a real wolf free among the sheep, it will be too late to cry wolf. The time to cry wolf is when you see the wolf staring at you and licking its fangs from the treeline, not when it is eating you. The time when you feel comfortable expressing your anxieties will be long after it is too late. It will always feel like crying wolf, until the moment just before you are turned into paperclips. This is the obverse side of the There Is No Fire Alarm for AGI coin.

6ViktoriaMalyasova1y
Not to mention that, once it becomes clear that AIs are actually dangerous, people will become afraid to sign petitions against them. So it would be nice to get some law passed beforehand that an AI that unpromptedly identifies specific people as its enemies shouldn't be widely deployed. Though testing in beta is probably fine?

Do some minimal editing. Don’t try to delete every um and ah, that will take way too long. You can use the computer program Audacity for this if you want to be able to get into the weeds (free), or ask me who I pay to do my editing. There is also a program called Descript that I’ve heard is easy to use and costs $12/mo, but I have not used it myself.

 

My advice here: doing any amount of editing for ums, ahs and fillers will take, at a minimum, the length of the entire podcast episode, since you have to listen to the whole thing. This is more than a tri... (read more)

So the joke is that Szilard expects the NSF to slow science down.

1M. Y. Zuo1y
Or that he believes this is what it would actually accomplish in practice, but couldn't directly say it.

My interpretation of the joke is that Szilard is accusing the NSF of effectively slowing down science, the opposite of their claimed intention. Personally, I have found that the types of scientists who end up sitting in grant-giving chairs are not the most productive and energetic minds, who tend to avoid such positions. Still funny though.

1jacopo1y
The point about encouraging safe over innovative research is spot on, though. Although the main culprits are not granting agencies but the practice of tying researcher careers to the number of peer-reviewed papers, imo. The main problem with the granting system is the amount of time wasted in writing grant applications.

The NSF was founded in 1950, two years after this story was published. The story was published when people were discussing founding the NSF.

Thanks for the questions. I should have explained what I meant by successful. The criteria we set out internally included:

  • Maintaining good attendance and member retention. Member attrition this year was far below the typical rate for similar groups.
  • Maintaining positive post-workshop feedback indicating members are enjoying the workshops (plus or minus specific critical feedback here and there). Some workshops were more well received than others, some were widely loved, some were less popular, but the average quality remains very positive according to us
... (read more)

I sometimes worry that ideas are prematurely rejected because they are not guaranteed to work, rather than because they are guaranteed not to work. In the end it might turn out that zero ideas are actually guaranteed to work and thus we are left with an assortment of not guaranteed to work ideas which are underdeveloped because some possible failure mode was found and thus the idea was abandoned early.

That is a problem in principle, but I'd guess that the perception of that problem mostly comes from a couple other phenomena.

First: I think a lot of people don't realize on a gut level that a solution which isn't robust is guaranteed to fail in practice. There are always unknown unknowns in a new domain; the presence of unknown unknowns may be the single highest-confidence claim we can make about AGI at this point. A strategy which fails the moment any surprise comes along is going to fail; robustness is necessary. Now, robustness is not the same as "guara... (read more)

I didn't want to derail the OP with a philosophical digression, but I was somewhat startled by the degree to which I found it difficult to think at all without at least some kind of implicit "inner dimensionality reduction." In other words, this framing allowed me to put a label on a mental operation I was doing almost constantly but without any awareness.

I snuck a few edge-case spatial metaphors in, tongue-in-cheek, just to show how common they really are.

You could probably generalize the post to a different version along the lines of "Try being more thoughtful about the metaphors you employ in communication," but this framing singles out a specific class of metaphor which is easier to notice.

Totally get where you're coming from and we appreciate the feedback. I personally regard memetics as an important concept to factor into a big-picture-accurate epistemic framework. The landscape of ideas is dynamic and adversarial. I personally view postmodernism as a specific application of memetics. Or memetics as a generalization of postmodernism, historically speaking. Memetics avoids the infinite regress of postmodernism by not really having an opinion about "truth." Egregores are a decent handle on feedback-loop dynamics of the idea landscape, though... (read more)

Great, glad you appreciate it.

I personally regard memetics as an important concept to factor into a big-picture-accurate epistemic framework.

Reassuring to hear. At this point I'm personally quite convinced that attempts to deal with epistemics in a way that ignores memetics are just doomed.

I personally view postmodernism as a specific application of memetics. Or memetics as a generalization of postmodernism, historically speaking.

I find this weird, kind of like saying that medicine is a specific application of physics. It's sort of technically correct, and... (read more)

To be clear ... it's random silly hats, whatever hats we happen to have on hand. Not identical silly hats. Also this is not really a load bearing element of our strategy. =)

This sort of thing is so common that I would go so far as to say is the norm, rather than the exception. Our proposed antidote to this class of problem is to attend the monthly Level Up Sessions, and simply making a habit of regularly taking inventory of the bugs (problems and inefficiencies) in your day-to-day life and selectively solving the most crucial ones. This approach starts from the mundane and eventually builds up your environment and habits, until eventually you're no longer relying entirely on your "tricks."

You may be right, but I would suggest looking through the full list of workshops and courses. I was merely trying to give an overall sense of the flavor of our approach, not give an exhaustive list. The Practical Decision-Making course would be an example of content that is distinctly "rationality-training" content. Despite the frequent discussions of abstract decision theory that crop up on LessWrong, practically nobody is actually able to draw up a decision tree for a real-world problem, and it's a valuable skill and mental framework.

I would als... (read more)

Partly as a hedge against technological unemployment, I built a media company based on personal appeal. An AI will be able to bullshit about books and movies “better” than I can, but maybe people will still want to listen to what a person thinks, because it’s a person. In contrast, nobody prefers the opinion of a human on optimal ball bearing dimensions over the opinion of an AI.

If you can find a niche where a demand will exist for your product strictly because of the personal, human element, then you might have something.

shminux is right that the very concept of a “business” will likely lack meaning too far into an AGI future.

I actually feel pretty confident that your former behavior of drinking coffee until 4 pm was a highly significant contributor to your low energy, because your sleep quality was getting chronically demolished every single night you did this. You probably created a cycle where you felt like you needed an afternoon coffee because you were tired from sleeping so badly … because of the previous afternoon coffee.

I suggest people in this position first do the experiment of cutting out all caffeine after noon, before taking the extra difficult step of cutting it out entirely.
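For a rough sense of why the afternoon cutoff matters: caffeine's half-life is commonly ballparked at around five hours (individual variation is large), so a back-of-the-envelope decay calculation looks like this.

```python
# Back-of-the-envelope caffeine decay, assuming a ~5 hour half-life
# (a commonly cited ballpark; individual variation is large).
HALF_LIFE_H = 5.0

def remaining_mg(dose_mg: float, hours_elapsed: float) -> float:
    return dose_mg * 0.5 ** (hours_elapsed / HALF_LIFE_H)

print(round(remaining_mg(200, 7)))    # 200 mg at 4 pm, bedtime at 11 pm: ~76 mg left
print(round(remaining_mg(200, 15)))   # same dose at 8 am: ~25 mg left by 11 pm
```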

1Sameerishere2y
Ah interesting point. That is helpful, maybe I'll play with that. I do find the effects I observed despite drinking tea until and even past 4pm.
4Tomás B.2y
100 mg first thing in the morning and no more after is what I do. If I need a stimulant later I use nicotine which has a much shorter half-life.

tl;dr This comment ended up longer than I expected. The gist is that a human-friendly attractor might look like models that contain a reasonably good representation of human values and are smart enough to act on them, without being optimizing agents in the usual sense.

One happy surprise is that our modern Large Language Models appear to have picked up a shockingly robust, nuanced, and thorough understanding of human values just from reading the Internet. I would not argue that e.g. PaLM has a correct and complete understanding of human values, but I would ... (read more)

5Catnee2y
I think the problem is not that an unaligned AGI doesn't understand human values; it might understand them better than an aligned one, and it might understand all the consequences of its actions. The problem is that it will not care. More so, a detailed understanding of human values has instrumental value: it is much easier to deceive and pursue your goal when you have a clear vision of "what will look bad and might result in countermeasures".

Yes, the former. If the agent takes actions and receives reward, assuming it can see the reward, then it will gain evidence about its utility function.

2RHollerith2y
Probably you already know this, but the framework known as reinforcement learning is very relevant here. In particular, there are probably web pages that describe how to compute the expected utility of a (strategy, reward function) pair.

I’m well versed in what I would consider to be the practical side of decision theory but I’m unaware of what tools, frameworks, etc. are used to deal with uncertainty in the utility function. By this I mean uncertainty in how utility will ultimately be assessed, for an agent that doesn’t actually know how much they will or won’t end up preferring various outcomes post facto, and they know in advance that they are ignorant about their preferences.

The thing is, I know how I would do this, it’s not really that complex (use probability distributions for the ut... (read more)
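The basic move I have in mind, sketched with invented numbers: treat each outcome's utility as a random variable rather than a point value, and take expectations over your own preference uncertainty as well as over the outcome lottery.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Invented example: the agent doesn't know how much it will end up liking each
# outcome, so each outcome's utility is a distribution rather than a point value.
beach = rng.normal(loc=8, scale=3, size=N)   # probably great, occasionally awful
home  = rng.normal(loc=5, scale=1, size=N)   # reliably okay

print("E[U | beach] =", round(beach.mean(), 2))
print("E[U | home]  =", round(home.mean(), 2))
print("P(beach turns out worse than home) =", round(float(np.mean(beach < home)), 2))
```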

2RHollerith2y
Are we talking about an agent that is uncertain about its own utility function or about an agent that is uncertain about another agent's?

I do feel like you are somewhat overstating the difficulty level of raising kids. I have three kids, the youngest of which is only five and yet well out of the phase where she is making big messes and requiring constant "active" parenting. The meme that raising kids is incredibly hard is, perhaps, a pet peeve of mine. Childless people often talk about children as if they remain helpless babies for 10 years. In truth, with my three kids, there will have been only three years out of my in-expectation-long life where I had to deal with sleep disruption and baby-re... (read more)

I would like to bet against you here, but it seems like others have beat me to the punch. Are you planning to distribute your $1000 on offer across all comers by some date, or did I simply miss the boat?

I agree, this is one of those things that seems obviously correct but lacks a straightforwardly obvious path to implementation. So, it helps that you've provided something of a framework for how each of the parts of the loop should look and feel. Particularly the last part of the article where you clarify that using OODA loops makes you better at each of the stages of the loop, and these are all skills that compound with use. I made a video about useful decision-making heuristics which includes OODA loops, and I would like to include some of your insights here if I make a second version of the video, if that's alright.
