On reflection, I suspect that I'm struggling with the is-ought problem in the entire project. Physics is "is" and ethics is "ought", and I'm very skeptical that "ethicophysics" is actually either, let alone a bridge between the two.
I understand (but do not agree with) the idea of preserving someone's clickstream. I do not want pure linkposts without any information on LessWrong. The equation is:
V = mc^2
(Violence) = (Mass of animals) x (Degree of Confinement)^2
and the solution is
Quite simply: fight violence with kindness.
I suspect we have a disagreement about whether the "worked out theoretical equations" suffer from is-ought any less than the plain language version. And if they are that fundamentally different, why should anyone think the equations CAN be explained in plain language?
I am currently unwilling to put in the work to figure out what the equations are actually describing. If it's not the same (though with more rigor) as the plain language claims, that seriously devalues the work.
As others have pointed out, there's an ambiguity in the word "you". We don't have intuitions about branching or discontinuous memory paths, so you'll get different answers if you mean "a person with the memories, personality, and capabilities that are the same as the one who went into the copier" vs "a singular identity experiencing something right now".
Q1: 100%. A person who feels like me experiences planet A and a different person who is me experiences planet B.
Q2: Still 100%. One of me experiences A, one C and one D.
Q3: Co...
Do you actually want discussion on LW, or is this just substack spam? If you want discussion, you probably should put something to discuss in the post itself, rather than a link to a link to a PDF in academic language that isn't broken out or presented in a way that can be commented upon.
From a very light skim, it seems like your "mathematically rigorous treatment" isn't. It includes some equations, but not much of a tie between the math and the topics you seem to want to analyze. It deeply suffers from is-ought confusion.
[ note: I am not a libertarian, and haven't been for many years. But I am sympathetic. ]
Like many libertarian ideas, this mixes "ought" and "can" in ways that are a bit hard to follow. It's pretty well-understood that all rights, including the right to redress of harm, are enforced by violence. In smaller groups, it's usually social violence and shared beliefs about status. In larger groups, it's a mix of that, and multi-layered resolution procedures, with violence only when things go very wrong.
When you say you'd "prefer a world c...
I'm very skeptical of fairly limited experiences being used to make universal pronouncements.
I'm sure this was the experience for many individuals and teams. I know for certain it was pretty normal and not worried about for others. I knew a lot of MS employees in that era, though I worked at a different giant tech firm with vaguely-similar procedures. I was senior enough, though an IC rather than a manager, to have a fair bit of input into evaluations of my team and division, and I saw firsthand the implementation and effects of thi...
Voting is one example. Who gets "human rights" is another. A third is "who is included, with what weight, in the sum over well-being in a utility function". A fourth is "we're learning human values to optimize them: who or what counts as human?" A fifth is economic fairness.
I think voting is the only one with fairly simple observable implementations. The others (well, and voting, too) are all messy enough that it's pretty tenuous to draw conclusions about, especially without noting all the exceptions and historical violence that led to the current st...
[ epistemic status: I don't agree with all the premises and some of the modeling, or the conclusions. But it's hard to find one single crux. If this comment isn't helpful, I'll back off - feel free to rebut or disagree, but I may not comment further. ]
This seems to be mostly about voting, which is an extremely tiny part of group decision-making. It's not used for anything really important (or if it is, the voting options are limited to a tiny subset of the potential behavior space). Even on that narrow topic, it switches from a fair...
Sorry, I kind of bounced off part 1 - didn't agree, but couldn't find the handle to frame my disagreement or work toward a crux. Which makes it somewhat unfair (but still unfortunately the case) to disagree now.
I like the focus on power (to sabotage or defect) as a reason to give wider voice to the populace. I wonder if this applies to uploads. It seems likely that the troublemakers can just be powered down, or at least copied less often.
I suspect your modeling of “the fairness instinct” is insufficient. Historically, there were many periods of time where slaves or mostly-powerless individuals were the significant majority. Even today, there are very limited questions where one-person-one-vote applies. Even in the few cases where that mechanism holds, ZERO allow any human (not even any embodied human) to vote. There are always pretty restrictive criteria of membership and accidents of birth that limit the eligible voting population.
Without examples, I have trouble understanding "censorship of independent-minded people". It's probably not formal censorship (but maybe it is - most common media disallows some words and ideas). There's a big difference between "negative reactions to beliefs that many/most find unpleasant, even if partially true" and "negative reactions to ideas that contradict common values, with no real truth value". They're not the same motives, and not the same mechanisms for the idea-haver to refine their beliefs.
In many groups, especially public o...
Downvoted. This states an overgeneral concept far more forcefully than it deserves, and doesn't give enough examples to know what kind of exceptions to look for. I'm also unsure what "censure" means specifically in this model of things - is my comment a censure?
I also dislike the framing of "conventional-minded" vs "independent-minded" as attributes of people, rather than as descriptions of topics that bring criticism. This could be intentional, if you're arguing that the kind of censure you're talking about tends to be directed at people rather than ideas, but it's not clear if so.
Not really an answer, but a few modeling considerations:
By a simple calculation, $19B for 118M expected signatures (of different types) is $161 per signature. This contradicts the article, which says 1.5 Francs or 2-4 Francs. However, it's also "2 to 17" Billion Francs, depending on actual usage. Still doesn't add up.
I have no clue what's actually included in the price - digitization and indexing/retrieval of documents can cost a lot more than just the identity verification. And legally-binding identity verification ain't cheap in the first place.
It does seem high to me, but I can say that about almost all government spending, for any country for any program.
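For concreteness, a quick sanity check of the arithmetic (using the figures quoted above; the $19B total and 118M signature count are the post's numbers, not independently verified):

```python
# Cost per signature implied by the quoted totals.
total_usd = 19e9        # $19B program cost (as quoted)
signatures = 118e6      # 118M expected signatures (as quoted)
print(total_usd / signatures)  # ~161 USD per signature

# The article's own per-signature figures imply a much smaller total:
for chf_per_sig in (1.5, 2.0, 4.0):
    total_chf = chf_per_sig * signatures
    print(f"{chf_per_sig} CHF/sig -> {total_chf / 1e9:.2f} billion CHF")
# 0.18 to 0.47 billion CHF - well below both $19B and "2 to 17" billion Francs.
```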
I can't tell if you're saying "this is completely and horribly incorrect in approach and model", or if you're saying "yeah, there are cases where imposed rapid change is harmful, but there's nuance I'd like to point out". I disagree with the former, and don't see the latter very clearly in the text.
The title of Scott's post (I gave up 70 percent of the way through) seems about right to me, and skimming over the post, it seems he's mostly talking about extreme, rapid, politically-motivated changes. I agree with him that it's concerning, and the vi...
We absolutely agree that incentives matter. Where I think we disagree is on how much they matter and how controllable they are. Especially for orgs whose goals are orthogonal to, or even contradictory with, the common cultural and environmental incentives outside of the org.
I'm mostly reacting to your topic sentence:
EAs are, and I thought this even before the recent Altman situation, strikingly bad at setting up good organizational incentives.
And wondering if 'strikingly bad' is relative to some EA or non-profit-driven org that does it well, or if 'strikingly bad' is just acknowledgement that it may not be possible to do well.
I'm confused. NVidia (like most profit-seeking corporations) is reasonably aligned WRT incentives, because those are the incentives of the world around it.
I'm looking for examples of things like EA orgs, which have goals very different from standard capitalist structures, and how they can set up "good incentives" within this overall framework.
If there are no such examples, your complaint about "strikingly bad at setting up good organizational incentives" is hard to understand. It may be more that the ENVIRONMENT in which they exist has competing incentives, and orgs have no choice but to work within that.
Can you give some examples of organizations larger than a few dozen people, needing significant resources, with goals not aligned with wealth and power, which have good organizational incentives?
I don't disagree that incentives matter, but I don't see that there's any way to radically change incentives without pretty structural changes across large swaths of society.
A few aspects of my model of university education (in the US):
I mean, testing with a production account is not generally best practice, but it seems to show things are operational. What aspect of things are you testing?
I (a real human, not a test system) saw the post, upvoted but disagreed, and made this reply comment.
I think "R&D" is a misleading category - it comprises a LOT of activities with different uncertainty, type, scope, and timeframe of impact. For tax and reporting purposes, a whole lot of not-very-research-ey software and other engineering is classified as "R&D", though it's more reasonably thought of as "implementation and construction".
Nordquist's "Innovation" measure is very different from economic reporting of R&D spending. This makes the denominator very questionable in your thesis.
Perhaps more important, returns are NOT uniform...
I'm not sure the connection between martial arts training/competition and rationalist discussion is all that strong. Also, I'm not sure if this is meant to apply to "casual discussion in most contexts" or "discussion about rationalist topics among people who share a LOT of context and norms", or "comment threads on LessWrong".
The primary difference I see is that in martial arts, the goal is generally self-improvement, where in rationalist discussions the goal is finding and agreeing on external truths. Martial arts isn't about disagreement or m...
Agreed with the main point of your comment: even mildly-rare events can be distributed in such a way that some of us literally never experience them, and others of us see it so often it appears near-universal. This is both a true variance in distribution AND a filter effect of what gets highlighted and what downplayed in different social groups. See also https://www.lesswrong.com/tag/typical-mind-fallacy .
For myself, in Seattle (San-Francisco-Lite), I'd only very rarely noticed that someone was trans until the early '00s, when a friend transiti...
In addition to measurement problems, and definitional problems (is p-hacking "fraud" or just bad methodology?), I think "academia" is too broad to meaningfully answer this question.
Different disciplines, and even different topics within a discipline will have a very different distribution of quality of research, including multiple components - specificity of topic, design of mechanism, data collection, and application of testing methodology. AND in clarity and transparency, for whether others can easily replicate the results, AND agree or disagree wi...
Thanks for this - it's an important part of modeling the world and understanding the competitive and cooperative symbiosis of commerce (and generally, human interaction).
I think application of this model requires extending the idea of "monopoly" to include partial substitutability (most non-government-supported monopolies aren't all or nothing, they're hard-to-quantify-but-generally-small differences in desirability). And also some amount of human herding and status-quo bias that makes a temporary advantage much more long-lived if you can make it habitual or accepted standard.
I mean, there are some parallels between any two topics. Whether those parallels are important, and whether they help model either thing varies pretty widely.
In this case, I don't see many useful parallels. The difference between guns (individual small-scale rights, with demonstrably real power to harm a very few individuals) and the somewhat theoretical future large-scale degradation or destruction of civilization makes this a completely different dimension of disagreement.
One parallel MIGHT be the general distrust of government restriction on private activity, but from people I've talked with on both topics, that's present but not controlling for beliefs about these topics.
Upvoted for interesting ideas and personal experience on the topic. If I could strong-disagree, I would. I do not recommend this to anyone.
Mostly my reasoning is "not safe". You're correct that historically, the IRS doesn't come at small non-payers very hard. You're incorrect to extend that to "never" or to "that won't change without warning due to technology or the legal/political environment". You're also correct that, at current interest rates, it's about double at ten years. You're incorrect, though, to think that's the...
It gets tried every so often, but there are HUGE differences between companies and geographical/political governance.
The primary difference, in my mind, is filtering and voluntary association. People choose where to work, and companies choose who works for them, independently (mostly) of where they live, what kind of lifestyle they like, whether they have children or relatives nearby, etc. Cities and countries can sometimes turn away some immigrants, but they universally accept children born there and they can't fire citizens who aren't productive.
Umm, I think you're putting too much weight on idiomatic shorthand that's evolved for communicating some common things very easily, and less-common ideas less easily. "Garfield is a cat" is a very reasonable and common thing to try to communicate - a specific not-well-known thing (Garfield) being described in terms of nearly-universal knowledge ("cat"). The reverse might be "Cats are things like Garfield", which is a bit odd because the necessity of communicating it is a bit odd.
It tends to track specific to general, not because they're specific or general concepts, but because specifics more commonly need to be described than generalities.
If you think evolution has a utility function, and that it's the SAME function that an agent formed by an evolutionary process has, you're not likely to get me to follow you down any experimental or reasoning path. And if you think this utility function is "perfectly selfish", you've got EVEN MORE work cut out in defining terms, because those just don't mean what I think you want them to.
Empathy as a heuristic to enable cooperation is easy to understand, but when normatively modeling things, you have to deconstruct the heuristics to actual goals and strategies.
I think you're using the wrong model for what "have a purpose" means. Purpose isn't an attribute of a thing. Purpose is a relation between an agent and a thing. An agent infers (or creates) a purpose for things (including themselves). This purpose-for-me is temporary, mutable, and relative. Different agents may have different (or no) purposes for the same thing.
[epistemic status: mostly priors about fantastic quantities being bullshit. No clue what evidence would update me in any direction. ]
I don't believe the universe is infinite. It has a beginning, an end, and a finite (but large and perhaps growing) extent. I further do not believe the term "exist" can apply to other universes.
I see. So the experiment is to see if you can find a frequency that is comfortable/helpful, and then figure out if it's likely to match your alpha waves? From what I can tell, alpha waves are typically between 8 and 12 Hz, but I don't know if it varies over time (nor how quickly) for individuals.
Unfortunately, the linked paper notes that the pulse is timed with the "trough" of the alpha wave, which is unlikely to be found with at-home experimentation. That implies that it'd need to use an EEG to synchronize, rather than ANY fixed frequency.
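To illustrate why any fixed frequency drifts off the trough, here's a minimal sketch (the 10.3 Hz alpha frequency and 10 Hz pulse rate are invented numbers for illustration):

```python
import numpy as np

alpha_hz = 10.3   # hypothetical individual alpha frequency
pulse_hz = 10.0   # fixed at-home stimulation frequency
t = np.arange(0, 30, 1 / pulse_hz)   # pulse times over 30 seconds
phase = (t * alpha_hz) % 1.0         # alpha phase (in cycles) at each pulse
print(phase[:10])
# Even a 0.3 Hz mismatch drifts 0.3 cycles/second, so the pulses sweep
# through every phase (trough included) every ~3.3 seconds.
```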
Do you have a hypothesis you're collecting data for, or is this just fun for you? I'm a little put off by the imperative in the title, without justification in the post.
For some screen size/shape, for some browser positioning, for some readers, this is probably true. It's fucking stupid to believe that's anywhere close to a majority. If that's YOUR reading area, why not just make your browser that size?
It should be pretty easy to write a tampermonkey or browser extension to make it work that way. Now that you point it out, I'm kind of surprised this doesn't seem to exist.
The VAST majority of matter and energy in the universe is in the non-purpose category - it often has activity and reaction, and effects over time, but it doesn't strategically change its mechanisms in order to achieve something, it just executes.
Humans (and arguably other animals and groups distinct from individuals) may have purpose, and may infer purpose on things that don't have it intrinsically. Even then, there are usually multiple simultaneous purposes (and non-purpose mechanisms) that interact, sometimes amplifying, sometimes dampening one another.
I think you're using a different sense of the word "possible". In a simplified physics model, where mass and energy are easily transformed as needed, you can just wave your hands and say "there's plenty of mass to use for computronium". That's not the same as saying "there is an achievable causal path from what we experience now to the world described".
It's also assuming:
3 and 4 are, I think, the point of the post. To the extent that we work on immortality rather than alignment, we narrow the window of #2, and risk getting neither.
Honestly, I haven’t seen much about individual biological immortality, or even significant life-extension, in the last few years.
I suspect progress on computational consciousness-like mechanisms has fully eclipsed the idea that biological brains in the current iteration are the way of the future. And there’s been roughly no progress on upload, so the topic of immortality for currently-existing humans has mostly fallen away.
Also, if/when AI is vastly more effective than biological intelligence, it takes a lot of the ego-drive away for the losers.
Note that in adversarial (or potentially adversarial) situations, error is not independent and identically distributed. If your acceptance spec for gold coins is "25g +/- 0.5g", you should expect your suppliers to mostly give you coins near 24.5g. Network errors are also correlated, either because they ARE an attack, or because some specific component or configuration is causing it.
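A toy simulation of the gold-coin example (all numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def accept(w):
    # Acceptance spec: 25g +/- 0.5g
    return (w >= 24.5) & (w <= 25.5)

honest = rng.normal(25.0, 0.15, 10_000)    # iid error centered on spec
shaved = rng.normal(24.55, 0.03, 10_000)   # adversary hugging the low edge
print(honest[accept(honest)].mean())  # ~25.0g per accepted coin
print(shaved[accept(shaved)].mean())  # ~24.55g: nearly every coin passes,
                                      # and you lose ~0.45g on each one
```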
Hmm. I've not seen any research about that possibility, which is obvious enough that I'd expect to see it if it were actually promising. And naively, it's not clear that you'd get more powerful results from using 1M times the compute this way, compared to more direct scaling.
I'd put that in the exact same bucket as "not known if it's even possible".
An important sub-topic within "open source vs regulatory capture" is "there does not exist an authority that can legibly and correctly regulate AI".
I always like seeing interesting ideas, but this one doesn't resonate much for me. I have two concerns:
I ... think that line of thinking almost never applies to me. If the topic interests me and/or there's something about the post that piques my desire to discuss, it almost always turns out that there are others with similar willingness. At the very least, the OP usually engages to some extent.
There are very few, and perhaps zero, cases where crafting or even evaluating an existing contract is less effort than just reading and responding, AND where I see enough potential to expend the contract effort but not the read/reply effort.
In addition...
A lot depends on whether this is a high-bandwidth discussion/debate, or an anonymous post/read of public statements (or, on messages boards, somewhere in between). In the interactive case, Alice and Bob could focus on cruxes and specific points of agreement/disagreement. In the public/semi-public case, it's rare that either side puts that much effort in.
I'll also note that a lot of topics on which such disagreements persist are massively multidimensional and hard to quantify degree of closeness, so "agreement" is very hard to define. No t...
I mean, the universal dispute resolution is violence, or the threat thereof. Typically this is encapsulated in governments, courts, and authorities, in order to make an escalation path that rarely comes down to actual violence.
For low-value wagers/markets, a less powerful authority generally suffices - a company or even individual running the market/site. The predictions can be written such that they're unlikely to be disputed, and to specify a dispute-resolution mechanism, but in the end the enforcement is by whoever is holding the money. ...
Yup, like so many thought experiments, it's intended to restrict all the real-world options in order to focus on the intuition conflict between "once" and "commonly". One of the reasons I'm not a Utilitarian is that I don't think most values are anywhere near linear, and simple scaling (shut up and multiply) just doesn't resonate with me.
If the "hero for hire" is a lifeguard or swimming instructor, we have LOTS of examples of communities or occasionally rich individuals deciding to provide that. The difference that the thought experiment fails to make clear is one of timeframe and (as you point out) uniqueness of YOUR ability to help.
Upvoted, and thanks for writing this. I disagree on multiple dimensions - on the object level, I don't think ANY research topic can be stopped for very long, and I don't think AI specifically gets much safer with any achievable finite pause, compared to a slowdown and standard of care for roughly the same duration. On the strategy level, I wonder what other topics you'd use as support for your thesis (if you feel extreme measures are correct, advocate for them). US Gun Control? Drug legalization or enforcement? Private capital...
Mostly agree, but also caution about being too confident in one's skepticism. Almost all innovation is stupid until it works, and it's VERY hard to know in advance which problems end up being solvable, or what new applications come up when something is stupid for its obvious purpose but a good fit for something else.
I honestly don't know which direction this should move your opinion of Hyundai's research agenda. Even if (as seems likely), it's not useful in car manufacturing, it may be useful elsewhere, and the project and measurement mechanisms may teach them/us something about the range of problems to address in drivetrain design.