Ideally, a competitive market would drive the price of goods close to the cost of production, rather than the revenue-maximizing price. Unfortunately, some mechanisms prevent this.
One is the exploitation of the network effect, where a good is more valuable simply because more people use it. For example, a well-designed social media platform is useless if it has no users, while a terrible, addictive platform can still be useful if it has many users (Twitter).
This makes it difficult to break into a market and gives popular services the chance to charge what people will pay rather than what the service costs to provide.
Most of the "mechanisms which prevent competitive pricing" are forms of monopoly. The network effect is "just" a natural monopoly, where the first success gains so much ground that competitors can't really get a start. Another curiosity is the difference between average cost and marginal cost. One more user does not cost $40. But, especially in growth mode, the average cost per user (of your demographic) is probably higher than you think - these sites are profitable, but not amazingly so.
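A toy illustration of that average-vs-marginal gap (my own sketch; every number below is invented for illustration, not taken from any actual site's books):

```python
# Hypothetical cost structure for a subscription site in growth mode.
# The marginal cost of one more user is tiny, but the average cost per user
# (fixed costs spread over the whole user base) can be surprisingly high.

fixed_costs_per_year = 30_000_000   # engineering, moderation, marketing (invented figure)
variable_cost_per_user = 2.00       # servers, support, payment fees per user-year (invented figure)
users = 1_000_000

marginal_cost = variable_cost_per_user
average_cost = fixed_costs_per_year / users + variable_cost_per_user

print(f"marginal cost per user: ${marginal_cost:.2f}/year")  # $2.00
print(f"average cost per user:  ${average_cost:.2f}/year")   # $32.00
```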
None of this invalidates your anger at the inadequacy of the modern dating equilibrium. I sympathize that you don't have parents willing to arrange your marriage and save you the hassle.
Three related concepts.
+1 for these forum logos to be added to FontAwesome:
https://github.com/FortAwesome/Font-Awesome/issues/19536
(random shower thoughts written with basically no editing)
Sometimes arguments have a beat that looks like "there is extreme position X, and opposing extreme position Y. What about a moderate 'Combination' position?" (I've noticed this in both my own and others' arguments.)
I think this move sometimes runs into problems.
related take: "things are more nuanced than they seem" is valuable only as the summary of a detailed exploration of the nuance that engages heavily with object level cruxes; the heavy lifting is done by the exploration, not the summary
I'd appreciate it if you could take a look at it and let me know what you think!
I'm so very proud of the review.
I think it's an excellent review and a significant contribution to the Selection Theorems literature (e.g. I'd include it if I were curating a Selection Theorems sequence).
I'm impatient to post it as a top-level post but feel it's prudent to wait till the review period ends.
I have an intuition that any system that can be modeled as a committee of subagents can also be modeled as an agent with Knightian uncertainty over its utility function. This goal uncertainty might even arise from uncertainty about the world.
This is similar to how in Infrabayesianism an agent with Knightian uncertainty over parts of the world is modeled as having a set of probability distributions with an infimum aggregation rule.
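A minimal sketch of the correspondence I have in mind (my own toy illustration; the names, numbers, and decision rules are invented, not taken from any particular paper):

```python
# A "committee" of subagents, each with its own utility function, aggregated
# by unanimity / infimum. The resulting agent behaves like one with Knightian
# uncertainty over which utility function is really its own.

from typing import Callable, List

Utility = Callable[[str], float]

def committee_prefers(a: str, b: str, utilities: List[Utility]) -> bool:
    """Bewley-style (unanimous) preference: a beats b iff every subagent agrees.
    Otherwise the options are incomparable; the preference order is incomplete."""
    return all(u(a) > u(b) for u in utilities)

def maximin_choice(options: List[str], utilities: List[Utility]) -> str:
    """Infimum-style completion: pick the option whose worst-case value
    over the set of candidate utility functions is highest."""
    return max(options, key=lambda o: min(u(o) for u in utilities))

# Hypothetical two-member committee that disagrees about two options.
subagents: List[Utility] = [
    lambda o: {"explore": 1.0, "exploit": 0.2}[o],
    lambda o: {"explore": 0.3, "exploit": 0.8}[o],
]
print(committee_prefers("explore", "exploit", subagents))  # False: no unanimity
print(maximin_choice(["explore", "exploit"], subagents))   # "explore" (worst case 0.3 vs 0.2)
```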
These might be of interest, if you haven't seen them already:
Bewley, T. F. (2002). Knightian decision theory. Part I. Decisions in Economics and Finance, 25, 79-110.
Aumann, R. J. (1962). Utility theory without the completeness axiom. Econometrica, 30(3), 445-462.
...The lack of willpower is a heuristic which doesn’t require the brain to explicitly track & prioritize & schedule all possible tasks, by forcing it to regularly halt tasks—“like a timer that says, ‘Okay you’re done now.’”
If one could override fatigue at will, the consequences could be bad. Users of dopaminergic drugs like amphetamines often note issues with channeling the reduced fatigue into useful tasks rather than alphabetizing one’s bookcase.
In more extreme cases, if one could ignore fatigue entirely, then, analogous to the lack of pain, the consequences could be severe.
I would no longer do many of these projects.
Does anyone have a good piece on hedging investments for AI risk? Would love a read, thanks!
things upvotes conflate:
(list written by my own thumb, no autocomplete)
these things and their inversions sometimes have multiple components, and ma...
LessWrong is a garden of memes, and the upvote button is a watering can.
Some discussion on whether alignment should draw more influence from AGI labs or from academia. I use the same argument in favor of strongly decoupling alignment progress from both: alignment progress needs to go faster than capability progress. If we use the same methods and cultural technology as AGI labs or academia, we more or less guarantee alignment progress that is slower than capability progress. At best, if those institutions worked as well for alignment as they do for capabilities, alignment would only move as fast as capabilities. And given that they are driven by capabilities progress rather than alignment progress, they will probably work far better for capabilities.
Maybe a bigger deal is that by the nature of a paper, you can't get too many inferential steps away from the field.
Making the rounds.
User: When should we expect AI to take over?
ChatGPT: 10
User: 10? 10 what?
ChatGPT: 9
ChatGPT: 8
...
So I was like,
If the neuroscience of human hedonics is such that we experience pleasure at about a 1 valence and suffering at about a 2.5 valence,
And therefore an AI building a glorious transhuman utopia would get us to 1 gigapleasure, and an endless S-risk hellscape would get us to 2.5 gigapain,
And we don’t know what our future holds,
And, although the most likely AI outcome is still overwhelmingly “paperclips”,
If our odds are 1:1 between ending u...
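(For concreteness, with those valences, a 1:1 gamble between the two outcomes has expected valence 0.5 × (+1) + 0.5 × (−2.5) = −0.75, i.e. negative in expectation.)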
[Draft] Note: this was written in a mere 20 minutes.
Hypothesis: I, and people in general, seem to really underestimate the rather trivial statement that people don't really learn about something when they don't spend time doing it or thinking about it. I mean this to include improving yourself and modeling other humans. Here is a list of related concepts. I was inspired by the first two; the rest are connections to slightly different preexisting ideas in my mind. I am on the lookout for more instances of this hypothesis.
There are at least a few different dimensions to "learning", and this idea applies more to some than to others. Sometimes a brief summary is enough to change some weights of your beliefs, and that will impact future thinking to a surprising degree. There's also a lot of non-legible thinking going on when just daydreaming or reading fiction.
I fully agree that this isn't enough, and both directed study and intentional reflection are also necessary to have clear models. But I wouldn't discount "lightweight thinking" entirely.
Quick prediction so I can say "I told you so" as we all die later: I think all current attempts at mechanistic interpretability do far more for capabilities than alignment, and I am not persuaded by arguments of the form "there are far more capabilities researchers than mechanistic interpretability researchers, so we should expect MI people to have ~0 impact on the field". Ditto for modern scalable oversight projects, and anything having to do with chain of thought.
Very strong upvote. This also deeply concerns me.
The Research Community As An Arrogant Boxer
***
Ding.
Two pugilists circle in the warehouse ring. That's my man there. Blue Shorts.
There is a pause to the beat of violence and both men seem to freeze glistening under the cheap lamps. An explosion of movement from Blue. Watch closely, this is a textbook One-Two.
One. The jab. Blue snaps his left arm forward.
Two. Blue twists his body around and then throws a cross. A solid connection that is audible over the crowd.
His adversary drops like a doll.
Ding.
Another warehouse, another match...
The past is a foreign country. Look upon its works and despair.
From the perspective of human civilization of, say, three centuries ago, present-day humanity is clearly a superintelligence.
In any domain they would have considered important, we retain none of the values of that time. They tried to align our values to theirs, and failed abysmally.
With so few reasonable candidates for past superintelligences, reference-class forecasting the success of AI alignment looks bleak.
PDF versions of A Compute-Centric Framework of Takeoff Speeds (Davidson, 2023) and Forecasting TAI With Biological Anchors (Cotra, 2020), because some people (like me) like to have it all in one document for offline reading (and trivial inconveniences have so far prevented anyone else from making this).
A model I picked up from Eric Schwitzgebel.
The humanities used to be highest-status in the intellectual world!
But then, scientists quite visibly exploded fission weapons and put someone on the moon. It's easy to coordinate to ignore some unwelcome evidence, but not evidence that blatant. So, begrudgingly, science has been steadily accorded more and more status, from the postwar period on.
Many methods to "align" ChatGPT seem to make it less willing to do things its operator wants it to do, which seems spiritually against the notion of having a corrigible AI.
I think this is a more general phenomenon when aiming to minimize misuse risks. You will need to end up doing some form of ambitious value learning, which I anticipate will be especially susceptible to being broken by alignment hacks produced by RLHF and its successors.