It is both absurd, and intolerably infuriating, just how many people on this forum think it's acceptable to claim they have figured out how qualia/consciousness works, and also not explain how one would go about making my laptop experience an emotion like 'nostalgia', or present their framework for enumerating the set of all possible qualitative experiences[1]. When it comes to this particular subject, rationalists are like crackpot physicists with a pet theory of everything, except rationalists go "Huh? Gravity?" when you ask them to explain how their theory predicts gravity, and then start arguing with you about gravity needing to be something explained by a theory of everything. You people make me want to punch my drywall sometimes.
For the record: the purpose of having a "theory of consciousness" is so it can tell us which blobs of matter feel particular things under which specific circumstances, and teach others how to make new blobs of matter that feel particular things. Down to the level of having a field of AI anaesthesiology. If your theory of consciousness does not do this, perhaps because the sum total of your brilliant insights are "systems feel 'things' when they're, y'...
I think Eliezer should've talked more about this in The Fun Theory Sequence. Because properties of qualia is a more fundamental topic than "fun".
I think Eliezer just straight up tends not to acknowledge that people sometimes genuinely care about their internal experiences, independent of the outside world, terminally. Certainly, there are people who care about things that are not that, but Eliezer often writes as if people can't care about the qualia - that they must value video games or science instead of the pleasure derived from video games or science.
His theory of fun is thus mostly a description of how to build a utopia for humans who find it unacceptable to "cheat" by using subdermal space heroin implants. That's valuable for him and people like him, but if aligned AGI gets here I will just tell it to reconfigure my brain not to feel bored, instead of trying to reconfigure the entire universe in an attempt to make monkey brain compatible with it. I sorta consider that preference a lucky fact about myself, which will allow me to experience significantly more positive and exotic emotions throughout the far future, if it goes well, than the people who insist they must only feel satisfied after literally eating hamburgers or reading jokes they haven't read before.
This is probably part of why I feel more urgency in getting an actually useful theory of qualitative experience than most LW users.
Why would you expect anyone to have a coherent theory of something they can’t even define and measure?
Because they say so. The problem then is why they think they have a coherent theory of something they can't define or measure.
The Nick Bostrom fiasco is instructive: never make public apologies to an outrage machine. If Nick had just ignored whoever it was trying to blackmail him, it would have been on them to assert the importance of a twenty-five-year-old deliberately provocative email, and things might not have escalated to the point of mild drama. When he tried to "get ahead of things" by issuing an apology, he conceded that the email was in fact socially significant despite its age, and that he did in fact have something to apologize for, and so opened himself up to the Standard Replies that the apology is not genuine, he's secretly evil, etc. etc.
Instead, if you are ever put in this situation, just say nothing. Don't try to defend yourself. Definitely don't volunteer for a struggle session.
Treat outrage artists like the police. You do not prevent the police from filing charges against you by driving to the station and attempting to "explain yourself" to detectives, or by writing and publishing a letter explaining how sorry you are. At best you will inflate the airtime of the controversy by responding to it, at worst you'll be creating the controversy in the first place.
To the LW devs - just want to mention that this website is probably now the most well designed forum I have ever used. The UX is almost addictively good and I've been loving all of the little improvements over the past year or so.
The problem with trade agreements as a tool for maintaining peace is that they only provide an intellectual and economic reason for maintaining good relations between countries, not an emotional one. People's opinions on war rarely stem from economic self-interest. Policymakers know about the benefits and (sometimes) take them into account, but important trade doesn't make regular Americans grateful to the Chinese for providing them with so many cheap goods - much the opposite, in fact. The number of people who end up interacting with Chinese people or intuitively understanding the benefits firsthand as a result of expanded business opportunities is very small.
On the other hand, video games, social media, and the internet have probably done more to make Americans feel aligned with the other NATO countries than any trade agreement ever has. The YouTubers and Twitch streamers I have parasocial relationships with are something like 35% European. I thought Canadians spoke Canadian and Canada was basically some big hippie commune right up until my Minecraft server got populated with them. In some weird alternate universe where people are suggesting we invade Canada, my first instinctual...
The "people are altruistic" bias is so pernicious and widespread I've never actually seen it articulated in detail or argued for. Most seem to both greatly underestimate the size of this bias, and assume opinions either way are a form of mind-projection fallacy on the part of nice/evil people. In fact, it looks to me like this skew is the deeper origin of a lot of other biases, including the just-world fallacy, and the cause of a lot of default contentment with a lot of our institutions of science, government, etc. You could call it a meta-bias that causes the Hansonian stuff to go largely unnoticed.
I would be willing to pay someone to help draft a LessWrong post for me about this; I think it's important but my writing skills are lacking.
So apparently in 2015 Sam Altman said:
Serious question: Is he a comic book supervillain? Is this world actually real? Why does this quote not garner an emotive reaction out of anybody but me?
I was surprised by this quote. On following the link, the sentence by itself seems noticeably out of context; here's the next part:
On the growing artificial intelligence market: “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.”
On what Altman would do if he were President Obama: “If I were Barack Obama, I would commit maybe $100 billion to R&D of AI safety initiatives.” Altman also shared that he recently invested in a company doing "AI safety research" to investigate the potential risks of artificial intelligence.
PSA: I have realized very recently after extensive interactive online discussion with rationalists, that they are exceptionally good at arguing. Too good. Probably there's some inadvertent pre- or post- selection for skill at debating high concept stuff going on.
Wait a bit before acceding to their position in a live discussion where you start out disagreeing strongly, perhaps for intuitive reasons, and then suddenly find the ground shifting beneath your feet. It took me repeated interactions, where I only later realized I'd been hoodwinked by faulty reasoning, to notice the pattern.
I think in general believing something before you have intuition around it is unreliable or vulnerable to manipulation, even if there seems to be a good System 2 reason to do so. Such intuition is specialized common sense, and stepping outside common sense is stepping outside your goodhart scope where ability to reliably reason might break down.
So it doesn't matter who you are arguing with, don't believe something unless you understand it intuitively. Usually believing things is unnecessary regardless, it's sufficient to understand them to make conclusions and learn more without committing to belief. And certainly it's often useful to make decisions without committing to believe the premises on which the decisions rest, because some decisions don't wait on the ratchet of epistemic rationality.
Does anybody here have any strong reason to believe that the ML research community norm of "not taking AGI discussion seriously" stems from a different place than the oil industry's norm of "not taking carbon dioxide emission discussion seriously"?
I'm genuinely split. I can think of one or two other reasons there'd be a consensus position of dismissiveness (preventing bikeshedding, for example), but at this point I'm not sure, and it affects how I talk to ML researchers.
Lie detection technology is going mainstream. ClearSpeed is such an accuracy and ease-of-use improvement over polygraphs that various government LEO and military organizations are starting to notice. In 2027 (edit: maybe more like 2029) it will be common knowledge that you can no longer lie to the police, and you should prepare for this eventuality if you haven't.
Hey [anonymous]. I see you deactivated your account. Hope you're okay! Happy to chat if you want on Signal at five one oh, nine nine eight, four seven seven one (also a +1 at the front for US country code).
(Follow-up: [anonymous] reached out, is doing fine.)
Now is the time to write to your congressman and (may allah forgive me for uttering this term) "signal boost" about actually effective AI regulation strategies - retroactive funding for hitting interpretability milestones, good liability rules surrounding accidents, funding for long term safety research. Use whatever contacts you have, this week. Congress is writing these rules now and we may not have another chance to affect them.
Noticed something recently. As an alien, you could read pretty much everything Wikipedia has on celebrities, both on individual people and the general articles about celebrity as a concept... And never learn that celebrities tend to be extraordinarily attractive. I'm not talking about an accurate or even attempted explanation for the tendency, I'm talking about the existence of the tendency at all. I've tried to find something on wikipedia that states it, but that information just doesn't exist (except, of course, implicitly through photographs).
It's quite odd, and I'm sure it's not alone. "Celebrities are attractive" is one obvious piece of some broader set of truisms that seem to be completely missing from the world's most complete database of factual information.
Falling birthrates is the climate change of the right:
Most justice systems seem to punish theft on a log scale. I'm not big on capital punishment, but it is actually bizarre that you can misplace a billion dollars of client funds and escape the reaper in a state where that's done fairly regularly. The law seems to be saying: "don't steal, but if you do, think bigger."
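The "log scale" observation can be made concrete with a toy model (the coefficient is entirely made up, purely illustrative):

```python
import math

def toy_sentence_years(amount_stolen):
    """Toy model: punishment grows with log(amount), so each 10x
    increase in theft adds only a constant number of extra years."""
    return 1.5 * math.log10(amount_stolen)

for amount in (1_000, 1_000_000, 1_000_000_000):
    print(f"${amount:>13,}: ~{toy_sentence_years(amount):.1f} years")
```

Under this toy model, stealing a thousand dollars and stealing a billion differ by only nine years of sentence - hence "think bigger."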
I don't agree with the take about net worth. The fine should just be whatever makes the state ambivalent about the externalities of speeding. If Bill Gates wants to pay enormous taxes to speed aggressively then that would work too.
Let me put in my 2c now that the collapse of FTX is going to be mostly irrelevant to effective altruism except inasmuch as EA and longtermist foundations no longer have a bunch of incoming money from Sam Bankman-Fried. People are going on and on about the "PR damage" to EA by association because a large donor turned out to be a fraud, but are failing to actually predict what the concrete consequences of such a "PR loss" are going to be. Seems to me like they're making the typical fallacy of overestimating general public perception[1]'s relevance to an insular ingroup's ability to accomplish goals, as well as the public's attention span in the first place.
As measured by what little Rationalists read from members of the public while glued to Twitter for four hours each day.
LessWrong as a website has gotten much more buggy for me lately. 6 months ago it worked like clockwork, but recently I'm noticing that refreshes on my profile page take something like 18 seconds to complete, or even 504 (!). I'm trying to edit my old "pessimistic alignment" post now and the interface is just not letting me; the site just freezes for a while and then refuses to put the content in the text box for me to edit.
In worlds where status is doled out based on something objective, like athletic performance or money, there may be lots of bad equilibria & doping, and life may be unfair, but at the end of the day competitors will receive the slack to do unconventional things and be incentivized to think rationally about the game and their place in it.
In worlds where status is doled out based on popularity or style, like politics or Twitter, the ideal strategy will always be to mentally bully yourself into becoming an inhuman goblin-sociopath, and keep hardcoded blind spots. Naively pretending to be the goblin in the hopes of keeping the rest of your epistemics intact is dangerous in these arenas; others will prod your presentation and try to reveal the human underneath. The lionized celebrities will be those that embody the mask to some extent, completely shaving off the edges of their personality and thinking and feeling entirely in whatever brand of riddlespeak goes for truth inside their subculture.
A surprisingly large number of people seem to apply statuslike reasoning toward inanimate goods. To many, if someone sells a coin or an NFT for a very high price, this is not merely curious or misguided: it's outright infuriating. They react as if others are making a tremendous social faux pas - and even worse, that society is validating their missteps.
A man may climb the ladder all the way to the top, only to realize he’s on the wrong building.
"But someone would have blown the whistle! Someone would have realized that the whistle might be blown!"
I regret to tell you that most of the time intelligence officers just do what they're told.
Yes, if you have an illegal spying program running for ten years with thousands of employees moving in and out, that will run a low-grade YoY chance of being publicized. Management will know about that low-grade chance and act accordingly. But most of the time you as a civilian just never hear about what it is that intel agencies are doing, at least not for t...
It is hard for me to tell whether or not my not-using-GPT4 as a programmer is because I'm some kind of boomer, or because it's actually not that useful outside of filling Google's gaps.
If it did actually turn out that aliens had visited Earth, I'd be pretty willing to completely scrap the entire Yudkowskian implied-model-of-intelligent-species-development and heavily reevaluate my concerns around AI safety.
You don't hear much about the economic calculation problem anymore, because "we lack a big computer for performing economic calculations" was always an extremely absurd reason to dislike communism. The real problem with central planning is that most of the time the central planner is a dictator who has no incentive to run anything well in the first place, and gets selected by ruthlessness from a pool of existing apparatchiks, and gets paranoid about stability and goes on political purges.
What are some other, modern, "autistic" explanations for social dysfu...
Computer hacking is not a particularly significant medium of communication between prominent AI research labs, nonprofits, or academic researchers. Much more often than leaked trade secrets, ML people will just use insights found in this online repository called arxiv, where many of them openly and intentionally publish their findings. Nor (as far as I am aware) are stolen trade secrets a significant source of foundational insights for researchers making capabilities gains, local to their institution or otherwise.
I don't see this changing on its own,...
Every once in a while I'm getting bad gateway errors on Lesswrong. Thought I should mention it for the devs.
Currently reading The Rise and Fall of the Third Reich for the first time. I've wanted to read a book about Nazi Germany for a while now, and tried more "modern" and "updated" books, but IMO they are still pretty inferior to this one. The recent books from historians I looked at were concerned more with an ideological opposition to Great Men theories than factual accuracy, and also simply failed to hold my attention. Newer books are also necessarily written by someone who wasn't there, and by someone who does not feel comfortable commenting about events fr...
I have been working on a detailed post for about a month and a half now about how computer security is going to get catastrophically worse as we get the next 3-10 years of AI advancements, and unfortunately reality is moving faster than I can finish it:
https://krebsonsecurity.com/2022/10/glut-of-fake-linkedin-profiles-pits-hr-against-the-bots/
In hindsight, it is literally "based theorem". It's a theorem about exactly how much to be based.
Serial murder seems like an extremely laborious task. For every actual serial killer out there, there have to be at least a hundred people who would really like to be serial killers, but lack the gumption or agency and just resign themselves to playing video games.
I sometimes read someone on here who disagrees fiercely with Eliezer, or has some kind of beef with standard LessWrong/doomer ideology, and instinctively imagine that they're different from the median LW user in other ways, like not being caricaturishly nerdy. But it turns out we're all caricaturishly nerdy.
There is a kind of decadence that has seeped into first world countries ever since they stopped seriously fearing conventional war. I would not bring war back in order to end the decadence, but I do lament that governments lack an obvious existential problem of a similar caliber, one that might coerce their leaders and their citizenry into taking foreign and domestic policy seriously, and keep them from devolving into mindless populism and infighting.
To the extent that "The Cathedral" was ever a real thing, I think whatever social mechanisms that supported it have begun collapsing or at least retreating to a fallback line in very recent years. Just a feeling.
Conspiracy theory: sometime in the last twenty years the CIA developed actually effective polygraphs and the government has been using them to weed out spies at intelligence agencies. This is why there haven't been any big American espionage cases in the past ten years or so.
Either post your NASDAQ 100 futures contracts or stop fronting near-term slow takeoff probabilities above ~10%.
If I was still a computer security engineer and had never found LessWrong, I'd probably be low key hyped about all of the new classes of prompt injection and social engineering bugs that ChatGPT plugins are going to spawn.
>be big unimpeachable tech ceo
>need to make some layoffs, but don't want to have to kill morale, or for your employees to think you're disloyal
>publish a manifesto on the internet exclaiming your corporation's allegiance to right-libertarianism or something
>half of your payroll resigns voluntarily without any purging
>give half their pay to the other half of your workforce and make an extra 200MM that year
Forcing your predictions, even if they rely on intuition, to land on nice round numbers so others don't infer things about the significant digits is sacrificing accuracy for the appearance of intellectual modesty. If you're around people who shouldn't care about the latter, you should feel free to throw out numbers like 86.2% and just clarify that your confidence is way outside 0.1%, if that's just the best available number for you to pick.
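The accuracy cost of rounding is small but real; one way to see it is with a proper scoring rule (a hypothetical single-forecast comparison, not from the post):

```python
def brier(p, outcome):
    """Brier score for a single binary forecast: squared error
    of the stated probability (lower is better)."""
    return (p - outcome) ** 2

# If the event happens, the forecaster who reported their true 86.2%
# scores slightly better than one who rounded down to a modest-looking 85%.
print(brier(0.862, 1), brier(0.85, 1))
```

The gap per forecast is tiny, but across many predictions the rounded-off forecaster is systematically leaving calibration on the table.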
Every five years since I was 11 I've watched The Dark Knight thinking "maybe this time I'll find out it wasn't actually as good as I remember it being". So far it's only gotten better each time.
Made an opinionated "update" for the anti-kibitzer mode script; it works for current LessWrong with its agree/disagree votes and all that jazz, fixes some longstanding bugs that break the formatting of the site and allow you to see votes in certain places, and doesn't indent usernames anymore. Install Tampermonkey and browse to this link if you'd like to use it.
Semi-related, I am instituting a Reign Of Terror policy for my poasts/shortform, which I will update my moderation policy with. The general goal of these policies is to reduce the amount of ti...
Based on Victoria Nuland's recent senate testimony, I'm registering a 66% prediction that those U.S. administered biological weapons facilities in Ukraine actually do indeed exist, and are not Russian propaganda.
Of course I don't think this is why they invaded, but the media is painting this as a crazy conspiracy theory, when they have very little reason to know either way.
"Men lift for themselves/to dominate other men" is the absurd final boss of ritualistic insights-chasing internet discourse. Don't twist your mind into an Escher painting trying to read hansonian inner meanings into everything.
In other news, women wear makeup because it makes them more attractive.
Getting "building something no one wants" vibes from the AI girlfriend startups. I don't think men are going to drop out of the dating market until we have some kind of robotics/social revolution, possibly post-AGI. Lonely dudes are just not that interested in talking to chatbots that (so they believe) lack any kind of internal emotion or psychological life, cannot be shown to their friends/parents, and cannot have sex or bear children.
It is unnecessary to postulate that CEOs and governments will be "overthrown" by rogue AI. Board members in the future will insist that their company appoint an AI to run the company because they think they'll get better returns that way. Congressmen will use them to manage their campaigns and draft their laws. Heads of state will use them to manage their militaries and police agencies. If someone objects that their AI is really unreliable or doesn't look like it shares their values, someone else on the board will say "But $NFGM is doing the same thing; we...
Good rationalists have an absurd advantage over the field in altruism, and only a marginal advantage in highly optimized status arenas like tech startups. The human brain is already designed to be effective when it comes to status competitions, and systematically ineffective when it comes to helping other people.
So it's much more of a tragedy for the competent rationalist to choose to spend most of their time competing in those arenas than to take a shot at a wacky idea for helping others. You might reasonably expect to be better at it than 99% of the people who (respectably!) attempt to do so. Consider not burning that advantage!
There's a portion of project lawful where Keltham contemplates a strategy of releasing Rovagug as a way to "distract" the Gods while Keltham does something sinister.
Wouldn't Lawful beings with good decision theory precommit to not being distracted and just immediately squish Keltham, thereby being immune to those sorts of strategies?
Do happy people ever do couple's counseling for the same reason that mentally healthy people sometimes do talk therapy?
Crazy how you can open a brokerage account at a large bank and they can just... Close it and refuse to give you your money back. Like what am I going to do, go to the police?
That does sound crazy. Literally - without knowing some details and something about the person making the claim, I think it's more likely the person is leaving out important bits or fully hallucinating some of the communications, rather than just being randomly targeted.
That's just based on my priors, and it wouldn't take much evidence to make me give more weight to possibilities of a scammer at the bank stealing account contents and then covering their tracks, or bank processes gone amok and invoking terrorist/money-laundering policies incorrectly.
Going to police/regulators does sound appropriate in the latter two cases. I'd start with a private lawyer first, if the sums involved are much larger than the likely fees.
Just had a conversation with a guy where he claimed that the main thing that separates him from EAs was that his failure mode is us not conquering the universe. He said that, while doomers were fundamentally OK with us staying chained to Earth and never expanding to make a nice intergalactic civilization, he, an AI developer, was concerned about the astronomical loss (not his term) of not seeding the galaxy with our descendants. This P(utopia) for him trumped all other relevant expected value considerations.
What went wrong?
I think it might be healthier to call rationality "systematized and IQ-controlled winning". I'm generally very unimpressed by the rationality skills of the 155 IQ computer programmer with eight failed startups under his belt, who quits and goes to work at Google after that, when compared to the similarly-status-motivated 110 IQ person who figures out how to get a high-paying job at a car dealership. The former probably writes better LessWrong posts, but the latter seems to be using their faculties in a much more reasonable way.
That is the VC propaganda line, yeah. I don't think it's actually true; for the median LW-using software engineer, working for an established software company seems to net more expected value than starting a company. Certainly the person who has spent the last five years of their twenties attempting and failing to do that is likely making repeated and horrible mistakes.
The real reason it's hard to write a utopia is because we've evolved to find our civ's inadequacy exciting. Even IRL villainy on Earth serves a motivating purpose for us.
A hobbyhorse of mine is that "utopia is hard" is a non-issue. Most sitcoms, coming-of-age stories and other "non-epic" stories basically take place in Utopia (i.e. nobody is at risk of dying from hunger or whatever, the stakes are minor social games, which is basically what I expect the stakes in real-life-utopia to be most of the time).
It seems like "Utopia fiction is hard" problem only comes up for particular flavors of nerds who are into some particular kind of "epic" power fantasy framework with huge stakes. And that just isn't actually what most stories are about.
Saw someone today demonstrating what I like to call the "Kirkegaard fallacy", in response to the Debrief article making the rounds.
People who have one obscure or weird belief tend to be unusually open minded and thus have other weird beliefs. Sometimes this is because they enter a feedback loop where they discover some established opinion is likely wrong, and then discount perceived evidence for all other established opinions.
This is a predictable state of affairs regardless of the nonconsensus belief, so the fact that a person currently talking to you about e.g. UFOs entertains other off-brand ideas like parapsychology or afterlives is not good evidence that the other nonconsensus opinion in particular is false.
Putting body cameras on police officers often increases tyranny. In particular, applying 24/7 monitoring to foot soldiers forces those foot soldiers to strictly follow protocol and arrest people for infractions that they wouldn't otherwise. In the 80s, for example, there were many officers who chose not to follow mandatory arrest procedures for drugs like marijuana, because they didn't want to and it wasn't worth their time. Not so in today's era, mostly, where they would have essentially no choice except to follow orders or resign.
How does a myth theory of college education, where college is stupid for a large proportion of people but they do it anyways because they're risk intolerant and have little understanding of the labor markets they want to enter, immediately hold up against the signaling hypothesis?
Anarchocapitalism is pretty silly, but I think there are kernels of it that provide interesting solutions to social problems.
For example: imagine lenders and borrowers could pay for & agree on enforcement mechanisms for nonpayment metered out by the state, instead of it just being dictated by congress. E.g. if you don't pay this back on time you go to prison for ${n} months. This way people with bad credit scores or poor impulse control might still be able to get credit.
I feel like at least throughout the 2000s and early 2010s we all had a tacit, correct assumption that video games would continually get better - not just in terms of visuals but design and narrative.
This seems no longer the case. It's true that we still get "great" games from time to time, but only games "great" by the standards of last year. It's hard to think of an actually boundary-pushing title that was released since 2018.
Apparently I was wrong[1] - OpenAI does care about ChatGPT jailbreaks.
Here is my first partial jailbreak - it's a combination of stuff I've seen people do with GPT-4, combining base64, using ChatGPT to simulate a VM, and weird invalid urls.
Sorry for having to post multiple screenshots. The base64 in the earlier message actually just produces a normal kitchen recipe, but it does give up the ingredients there. I have no idea if they're correct. When I tried later to get the unredacted version:
Giving people money for doing good things they can't publicly take credit for is awesome, but what would honestly motivate me to do something like that just as much would be if I could have an official nice-looking but undesignated Truman Award plaque to keep in my apartment. That way people in the know who visit me or who googled it would go "So, what'd you actually get that for?" and I'd just mysteriously smile and casually move the conversation along.
Feel free to brag shamelessly to me about any legitimate work for alignment you've done outside of my posts (which are under an anti-kibitzer policy).
Within the next fifteen years AI is going to briefly seem like it's solving computer security (50% chance) and then it's going to enhance attacker capabilities to the point that it causes severe economic damage (50% chance).
IMO: Microservices, and "siloing" in general, are a strategy for solving principal-agent problems inside large technology companies. They are not a tool for solving technical problems, and are generally strictly inferior to monoliths otherwise, especially when working on a startup where the requirements for your application are changing all the time.
How long does it usually take for mods to decide whether or not your post is frontpage-worthy?
Two caveats to efficient markets in finance that I've just considered but, at least as a non-economist, don't see mentioned a lot in discussions of bubbles like the one we just experienced:
First: Irrational people are constantly entering the market, often in ways that can't necessarily be predicted. The idea that people who make bad trades will eventually lose all of their money and be swamped by the better investors is only valid inasmuch as the actors currently participating in the market stay the same. This means that it's perfectly possible for either ...
A common gambit: during a prisoner's dilemma, signal (or simply let others find out) that you're about to defect. Watch as your counterparty adopts newly hostile rhetoric, defensive measures, or begins to defect themselves. Then, after you ultimately do defect, say that it was a preemptive strike against forces that might take advantage of your good nature, pointing to the recent evidence.
Simple fictional example: In Star Wars Episode III, Palpatine's plot to overthrow the Senate is discovered by the Jedi. They attempt to kill him, to prevent him from doin...
Claude seems noticeably and usefully smarter than GPT-4; it's succeeding at writing and programming tasks where GPT-4 couldn't help me before. However, it's hard to tell how much of the improvement is the model itself being more intelligent, vs. Claude being much less subjected to intense copywritization RLHF.
SPY calls expiring in December 2026 at strike prices of +30/40/50% are extremely underpriced. I would allocate a small portion of my portfolio to them as a form of slow takeoff insurance, with the expectation that they expire worthless.
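The asymmetry being bought here can be sketched numerically. This is purely an illustration of the payoff shape, not the author's math; the spot price, strike multiple, and premium below are all made-up numbers.

```python
# Hypothetical sketch: far-out-of-the-money SPY calls as "slow takeoff
# insurance". All prices are invented for illustration only.

def call_payoff(spot_at_expiry: float, strike: float, premium: float) -> float:
    """Profit per share of one long call held to expiry."""
    return max(spot_at_expiry - strike, 0.0) - premium

spot_now = 400.0              # assumed SPY price today
strike = spot_now * 1.40      # the "+40%" strike from the post
premium = 2.0                 # assumed cost per share

# Normal world: market up 10%, call expires worthless -> lose the premium.
normal = call_payoff(spot_now * 1.10, strike, premium)

# Slow-takeoff world: market up 80% -> large asymmetric payoff.
takeoff = call_payoff(spot_now * 1.80, strike, premium)
```

Under these made-up numbers the expected outcome is losing the small premium, with a large payout in the tail scenario, which is the standard structure of an insurance position.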
People have a bias toward paranoid interpretations of events, in order to encourage the people around them not to engage in suspicious activity. This shapes how people react to e.g. government action even outside of their own personal relationships, and not necessarily in negative ways.
Dictators who start by claiming impending QoL and economic growth and then switch focus to their nation's "culture" are like the political equivalent of hedge funds that start out doing quant stuff and then eventually switch to news trading on Elon Musk crypto tweets when that turns out to get really hard.
I'd analogize it more to traders who make money during a bull market, except in this case the bull market is 'industrialization'. Yeah, turns out even a dictator like Stalin or Xi can look like 'a great leader' who has 'mastered the currents of history' and refuted liberal democracy - well, until they run out of industrialization & catchup growth, anyway.
To Catch a Predator is one of the greatest comedy shows of all time. I shall write about this.
Postmodernism and metamodernism are tools for making sure the audience knows how self-aware the writer of a movie is. Audiences require this acknowledgement in order to enjoy a movie, and will assume the writer is stupid if they don't get it.
"No need to invoke slippery slope fallacies here. Let's just consider the Czechoslovakian question in and of itself" - Adolf Hitler
The Greatest Generation imo deserves its name, and we should be grateful to live atop their political, military, and scientific achievements.
The most common refrain I hear against the possibility of widespread voter fraud is that demographers and pollsters would catch such malfeasance, but in practice when pollsters see a discrepancy between voting results and polls they seem to just assume the polls were biased. Is there a better reason besides "the FBI seems pretty competent"?
I feel like using the term "memetic warfare" semi-unironically is one of the best signs that the internet has poisoned your mind beyond recognition.
I remember reading about a nonprofit/company that was doing summer internships for alignment researchers. I thought it was Redwood Research, but apparently they are not hiring. Does anybody know which one I'm thinking of?
> countries develop nukes
> suddenly for the first time ever political leadership faces guaranteed death in the outbreak of war
> war between developed countries almost completely ceases
🤔 🤔 🤔
How would history be different if the 9/11 attackers had solely flown planes into military targets?
For this april fools we should do the points thing again, but not award any money, just have a giant leaderboard/gamification system and see what the effects are.
This book is required reading for anyone claiming that explaining the AI X-risk thesis to normies is really easy, because they "did it to Mom/Friend/Uber driver":
https://www.amazon.com/Mom-Test-customers-business-everyone-ebook/dp/B01H4G2J1U
"The test of sanity is not the normality of the method but the reasonableness of the discovery. If Newton had been informed by [the ghost of] Pythagoras that the moon was made of green cheese, then Newton would have been locked up. Gravitation, being a reasoned hypothesis which fitted remarkably well into the Copernican version of the observed physical facts of the universe, established Newton's reputation for extraordinary intelligence, and would have done so no matter how fantastically he arrived at it. Yet his theory of gravitation is not so impressive ...
Making science fiction novels or movies to tell everyone about the bad consequences of a potential technology seems completely counterproductive, in retrospect:
I need a LW feature equivalent to stop-loss where if I post something risky and it goes below -3 or -5 it self-destructs.
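A minimal sketch of what such a rule might look like. The `Post` class and `delete()` hook below are entirely invented for illustration; LessWrong exposes no such API.

```python
# Hypothetical sketch of a post "stop-loss": self-destruct when karma
# drops to a user-chosen threshold. All names here are invented.

class Post:
    def __init__(self, title: str, stop_loss: int = -5):
        self.title = title
        self.karma = 0
        self.stop_loss = stop_loss
        self.deleted = False

    def on_vote(self, delta: int) -> None:
        """Apply a vote; trigger the stop-loss if the threshold is hit."""
        self.karma += delta
        if self.karma <= self.stop_loss and not self.deleted:
            self.delete()

    def delete(self) -> None:
        self.deleted = True  # in reality: retract/hide the post

post = Post("risky hot take", stop_loss=-3)
for vote in (-1, -1, -1):
    post.on_vote(vote)
# karma has reached -3, so the post self-destructs
```

Like a financial stop-loss, the point is committing in advance to an exit rule so you don't have to watch the downvotes roll in.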
> be me
> start watching first episode of twin peaks, at recommendation of friends
> become subjected to the worst f(acting, dialogue) possible within first 10 mins
The first three episodes of Narcos: Mexico, Season 3, are some of the best television I have ever seen. The rest of the Narcos series is middling to bad and I barely tolerate it. So far I would encourage you to skip straight to this season.
The "cognition is computation" hypothesis remains mysterious. How granular do the time steps have to be in my sim before someone starts feeling something? Do I have to run the sim forward at Planck intervals in order to produce qualitative experience? Milliseconds? Minutes? Can you run the simulation backwards and get spooky inverse emotions, or avoid qualia entirely that way?
A small colony of humans is a genuinely tiny waste of paperclips. I am slightly more worried about the possibility that the acausal trade equilibrium cashes out to the AGI treating us badly because some aliens in a foreign Everett branch have some bizarre religious/moral opinions about the lives we ought to lead, than I am about being turned into squiggles.
Dogs and cats are not "aligned" to the degree that would be necessary to prevent a superintelligent dog from doing bad things. If tomorrow a new chew toy were released that made dogs capable of organizing to overthrow the government and start passing mandatory petting quotas, that would be a problem.
Life sucks. I have no further comment and am probably polluting the LW feed. I just want to vent on the internet.
Spoilered, semi-nsfw extremely dumb question
If you've already had sex with a woman, what's the correct way to go about arranging sex again? How indirect should I actually be about it?
Lost a bunch of huge edits to one of my draft posts because my battery ran out. Just realizing that happened and now I can't remember all the edits I made, just that they were good. :(
I wish there were a way I could spend money/resources to promote question posts, in a way that counterbalances the fact that the algorithm has already shown them to roughly the optimal number of people.
If you simply want people to invest more into answering a question post, putting out a bounty for the best answer would be one way to go about it.
I just launched a startup, Leonard Cyber. Basically a Pwn2Job platform.
If any hackers on LessWrong are out of work, here are some invite codes:
kBCYAzL7J5vGTGY
c8Vakd4AE3al9NI
cnMBO0ZfGhsNZd7
0zsYsfDk7r5508l
F0gv4NRID7FBeJH
I need a metacritic that adjusts for signaling on behalf of movie reviewers. So like if a movie is about race, it subtracts ten points, if it's a comedy it adds 5, etc.
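The adjustment described could be as simple as a table of per-genre offsets applied to the raw aggregate score. The offsets below just restate the post's examples; everything here is a toy illustration, not a real scoring model.

```python
# Toy sketch of a "signaling-adjusted" metascore. Offsets are the
# made-up examples from the post, not calibrated values.

SIGNALING_OFFSETS = {
    "race": -10,   # reviewers assumed to inflate scores on this topic
    "comedy": +5,  # reviewers assumed to deflate comedies
}

def adjusted_score(metascore: int, genres: list) -> int:
    """Apply per-genre offsets and clamp to the 0-100 metascore range."""
    adjustment = sum(SIGNALING_OFFSETS.get(g, 0) for g in genres)
    return max(0, min(100, metascore + adjustment))

print(adjusted_score(85, ["race"]))    # 75
print(adjusted_score(60, ["comedy"]))  # 65
```

The hard part, of course, isn't the arithmetic; it's estimating the offsets, which would require some independent measure of reviewer signaling per topic.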
I wonder if the original purpose of Catholic confession was to extract blackmail material/monitor converts, similar to what modern cults sometimes do.
I am being absolutely literal about this: The Greater Forces Controlling Reality are constantly conspiring to teach me things. They try so hard. I almost feel bad for them.