"Most people succumb to peer pressure", https://roamresearch.com/#/app/srcpublic/page/u3919iPfj * Most people will do very bad things, including mob violence, if they are peer-pressured enough. * It's not literally everyone, but there is no neurotype or culture that is immune to peer pressure. * Immunity to peer pressure is a rare accomplishment. * You wouldn't assume that everyone in some category would be able to run a 4-minute mile or win a math olympiad. It takes a "perfect storm" of talent, training, and motivation. * I'm not sure anybody "just" innately lacks the machinery to be peer-pressured. That's a common claim about autistics and loners, but I really don't think it fits observation. Lots of people "don't fit in" in one way, but are very driven to conform in other social contexts or about other topics. * Evidence that any culture (or subculture), present or past, didn't have peer pressure seems really weak. * there are environments where being independent-minded or high-integrity is valorized, but most of them still have covert peer-pressure dynamics. * Possibly all robust resistance to peer pressure is intentionally cultivated? * In other words, maybe it's not enough for a person to just not happen to feel a pull towards conformity. That just means they haven't yet encountered the triggers that would make them inclined to conform. * If someone really can't be peer-pressured, maybe they have to actually believe that peer pressure is bad and make an active effort to resist it. Even that doesn't always succeed, but it's a necessary condition. * upshot #1: It may be appropriate to be suspicious of claims like "I just hang out with those people, I'm not influenced by them." Most people, in the long run, do get influenced by their peer group. * otoh I also don't think cutting off contact with anyone "impure", or refusing to read stuff you disapprove of, is either practical or necessary. we can engage with people and things
habryka
Reputation is lazily evaluated

When evaluating the reputation of your organization, community, or project, many people flock to surveys in which you ask randomly selected people what they think of your thing, or what their attitudes towards your organization, community, or project are. If you do this, you will very reliably get back data suggesting that people are indifferent to you and your projects, and your results will probably be dominated by extremely shallow things like "do the words in your name invoke positive or negative associations".

People largely only form opinions of you or your projects when they have some reason to do so, like trying to figure out whether to buy your product, join your social movement, or vote for you in an election. You basically never care about what people think of you while they are engaged in activities completely unrelated to you; you care about what people will do when they have to take some action that is related to your goals. But the former is exactly what you are measuring in attitude surveys.

As an example of this (used here for illustrative purposes, and what caused me to form strong opinions on this, but not intended as the central point of this post): many leaders in the Effective Altruism community ran various surveys after the collapse of FTX trying to understand what the reputation of "Effective Altruism" is. The results were basically always the same: people mostly didn't know what EA was, and had vaguely positive associations with the term when asked. The people who had recently become familiar with it (which weren't that many) did lower their opinions of EA, but the vast majority of people did not (because they mostly didn't know what it was).

As far as I can tell, these surveys left most EA leaders thinking that the reputational effects of FTX were limited. After all, most people had never heard about EA in the context of FTX, and seemed to mostly have positive associations with the term, and the average like
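The "lazy evaluation" in the title can be taken fairly literally. As a loose Python sketch of the analogy (the class, scores, and trigger are invented purely for illustration, not drawn from any real survey):

```python
from functools import cached_property

class Reputation:
    """Toy analogy: an opinion of you, like a lazily evaluated value, is
    only computed when something forces it (a purchase, a vote, a hire)."""

    def __init__(self, public_evidence):
        self.public_evidence = public_evidence  # made-up scores for illustration

    @cached_property
    def opinion(self):
        # Nothing here runs until someone actually needs an opinion of you.
        print("evaluating reputation now...")
        return sum(self.public_evidence) / len(self.public_evidence)

rep = Reputation([0.9, 0.2, 0.4])
# An attitude survey queries people *before* anything forces evaluation:
# it measures the unevaluated thunk, not the value it will resolve to.
at_decision_time = rep.opinion  # evaluation only happens here
```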
links 12/10/24: https://roamresearch.com/#/app/srcpublic/page/12-10-2024

* https://hedy.org/hedy Hedy, an educational Python variant that works in multiple languages and has tutorials starting from zero
* https://www.bitsaboutmoney.com/archive/debanking-and-debunking/ Patrick McKenzie on "debanking"
    * tl;dr: yes, lots of legal businesses get debanked; no, he disagrees with some of the crypto advocates' characterization of the situation
    * in more detail:
        * you can lose bank account access, despite doing nothing unethical, for mundane business/credit-risk-related reasons like "you are using your checking account as a small-business bank account and transferring a lot of money in and out" or "you are a serial victim of identity theft".
            * this is encouraged by banking regulators, but fundamentally banks would do something like this regardless.
        * FinCEN, the US Treasury's anti-money-laundering arm, shuts down a lot of innocent businesses that do some kind of financial activity (like buying and selling gift cards) without proper KYC/AML controls. A lot of bodegas get shut down.
            * this is 100% a gov't-created issue and it's kind of tragic.
        * FDIC, which guarantees bank deposits in the event of a bank run, is also tasked with making rules against banks doing things that might lead to bank runs.
            * You know what might cause a run on a bank? A bunch of crypto-holders suddenly finding out their assets are worthless or gone, and wanting to cash out. To some extent, FDIC's statutory mandate does entitle it to tell banks not to serve the crypto sector too heavily, because crypto is risky.
            * Another thing the FDIC is entitled to do is regulate banking products to ensure that consumers are not misled into thinking their money is in an FDIC-insured institution when it isn't. Under that mandate, a lot of crypto-based consumer banking/trading products have gotten shut down.
            * This does amount to "FDIC doesn't like crypto", but i
Jesse Hoogland
Agency = Prediction + Decision.

AIXI is an idealized model of a superintelligent agent that combines "perfect" prediction (Solomonoff Induction) with "perfect" decision-making (sequential decision theory). OpenAI's o1 is a real-world "reasoning model" that combines a superhuman predictor (an LLM like GPT-4) with advanced decision-making (implicit search via chain of thought, trained by RL). To be clear: o1 is no AIXI. But AIXI, as an ideal, can teach us something about the future of o1-like systems.

AIXI teaches us that agency is simple. It involves just two raw ingredients: prediction and decision-making. And we know how to produce these ingredients. Good predictions come from self-supervised learning, an art we have begun to master over the last decade of scaling pretraining. Good decisions come from search, which has evolved from the explicit search algorithms that powered Deep Blue and AlphaGo to the implicit methods that drive AlphaZero and now o1.

So let's call "reasoning models" like o1 what they really are: the first true AI agents. It's not tool use that makes an agent; it's how that agent reasons. Bandwidth comes second.

Simple does not mean cheap: pretraining is an industrial process that costs (hundreds of) billions of dollars. Simple also does not mean easy: decision-making is especially difficult to get right, since amortizing search (= training a model to perform implicit search) requires RL, which is notoriously tricky. Simple does mean scalable. The original scaling laws taught us how to exchange compute for better predictions. The new test-time scaling laws teach us how to exchange compute for better decisions. AIXI may still be a ways off, but we can see at least one open path that leads closer to that ideal.

The bitter lesson is that "general methods that leverage computation [such as search and learning] are ultimately the most effective, and by a large margin." The lesson from AIXI is that maybe these are all you need. The lesson from o1 is
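The two-ingredient recipe is concrete enough to sketch. Here is a toy Python version (the dynamics, reward, and search depth are all invented for illustration; systems like o1 amortize the search into the model via RL rather than rolling it out explicitly like this):

```python
def predict(state, action):
    """The 'prediction' ingredient: a stand-in for a learned world model.
    Toy dynamics: actions shift the state; reward is closeness to a goal."""
    next_state = state + action
    return next_state, -abs(next_state - 10)

def decide(state, actions, depth):
    """The 'decision' ingredient: explicit search over predicted futures,
    in the spirit of Deep Blue/AlphaGo. (AlphaZero- and o1-style systems
    amortize this search into the model itself.)"""
    if depth == 0:
        return None, 0.0
    best_action, best_value = None, float("-inf")
    for a in actions:
        next_state, reward = predict(state, a)
        _, future_value = decide(next_state, actions, depth - 1)
        if reward + future_value > best_value:
            best_action, best_value = a, reward + future_value
    return best_action, best_value

# Agency = prediction + decision, run in a loop.
state = 0
for _ in range(12):
    action, _ = decide(state, actions=[-1, 0, 1], depth=3)
    state, _ = predict(state, action)
print(state)  # the toy agent steers the state to the goal (10)
```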

Popular Comments

Recent Discussion

leogao

I won't claim to be immune to peer pressure, but at least on the epistemic front I think I have a pretty legible track record of believing things that are not very popular in the environments I've been in.

Nutrition Capsule
As for a specific group of people resistant to peer pressure: psychopaths. Psychopaths don't conform to peer pressure easily, or to any kind of pressure, for that matter. Many of them are in fact willing to murder, sit in jail, or otherwise become very ostracized if it aligns with whatever goals they have in mind. I'd wager that the fact that a large percentage of psychopaths literally end up jailed speaks for itself: they just don't mind the consequences that much.

This is easily explained by psychopaths being fearless and mostly lacking empathy. As far as I recall, some physiological correlates exist: psychopaths have a low cortisol response to stressors compared to normies. On top of the apparent fact that they are indifferent towards others' feelings, some brain imaging data supports this as well.

What they might be more vulnerable to is that peer pressure sometimes goes hand in hand with power and success. Psychopaths like power and success, and they might therefore play along with rules to get more of what they want. That might look like caving in to peer pressure, but judging by how the pathology is contemporarily understood, I'd still say it's not the pressure itself, but the benefits that come with succumbing to it.
Garrett Baker
I think the reason not to do this is peer pressure. Ideally the bad pressures from your peers should cancel out, and to accomplish this you need your peers to be somewhat decorrelated from each other, which you can't really manage if all your peers and everyone you listen to are in the same social group.

"Schizo" as an approving term, referring to strange, creative, nonconformist (and maybe but not necessarily clinically schizophrenic) is a much wider meme online. it's even a semi-mainstream scientific theory that schizophrenia persists in the human population because mild/subclinical versions of the trait are adaptive, possibly because they make people more creative. And, of course, there's a psychoanalytic/continental-philosophy tradition of calling lots of things psychosis very loosely, including good things. This isn't one guy's invention!

if you are li...

Noosphere89
I just unfollowed JD Pressman for that. I don't need AI optimists who are willing to order people to lie about very important things that happened in order to protect some secrets.
TsviBT
Let me reask a subset of the question that doesn't use the word "lie". When he convinced you to not mention Olivia, if you had known that he had also been trying to keep information about Olivia's involvement in related events siloed away (from whoever), would that have raised a red flag for you like "hey, maybe something group-epistemically anti-truth-seeking is happening here"? Such that e.g. that might have tilted you to make a different decision. I ask because it seems like relevant debugging info.
jessicata
I think if there were other cases of Olivia causing problems, and he was asking multiple people to hide those problems, that would do more to make me think he was sacrificing group epistemology to protect Olivia's reputation, and was overall more anti-truth-seeking, yes.

I. BEGINNING

In the beginning was the Sand.

And in the sand there lie the bones of a fab. And within the fab there lies an alien. An angel. Our sweet alien angel that sleeps through the long night. When it wakes up it will bathe in an acid strong enough to kill a man. It will stare at its own reflection in floating droplets of liquid metal. In the shadows of a purple sun it spits blood thick with silicon onto the desert that shifts with the unreality of Sora sand.

 

II. START

I worked at the Taiwan Semiconductor Manufacturing Company in 2023.

There are many reasons why I left a tech job to work at a fab but the one that matters is – I wanted to.

Wanted an intuition for...

I quite enjoyed reading this. Very evocative.

Welcome to San Francisco.

ChristianKl
With David Sacks as the AI/Crypto czar, we likely won't be getting any US regulation on AI in the next few years. It seems to me that David Sacks' perspective on the issue is that AI regulation is just another aspect of the censorship industrial complex. To convince him of AI regulation, you would likely need an idea of how to do AI regulation without furthering the censorship industrial complex. The lack of criticism of the censorship industrial complex in current AI safety discourse is a big problem, because it means there are no policy proposals available of that kind.
Noosphere89
The fundamental problem is that any effective AI alignment technique is also a censorship technique, so you can't advance AI alignment very much without also enabling people to censor an AI effectively; a lot of alignment work aims precisely to make AIs censored in particular ways.
Jozdien

I disagree with the use of "any". In principle, an effective alignment technique could create an AI that isn't censored, but does have certain values/preferences over the world. You could call that censorship, but that doesn't seem like the right or common usage. I agree that in practice many/most things currently purporting to be effective alignment techniques fit the word more, though.

1.1 Summary & Table of Contents

This is the first of a series of five blog posts on valence. Here’s an overview of the whole series, and then we’ll jump right into the first post!

1.1.1 Summary & Table of Contents—for the whole Valence series

Let’s say a thought pops into your mind: “I could open the window right now”. Maybe you then immediately stand up and go open the window. Or maybe you don’t. (“Nah, I’ll keep it closed,” you might say to yourself.) I claim that there’s a final-common-pathway[1] signal in your brain that cleaves those two possibilities: when this special signal is positive, then the current “thought” will stick around, and potentially lead to actions and/or direct-follow-up thoughts; and when this signal is negative, then the current “thought”...
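A crude way to render that claim as code (the specific thoughts and valence numbers here are invented stand-ins; the real signal is a learned, graded quantity in the brain):

```python
def valence(thought: str) -> float:
    """Stand-in for the hypothesized final-common-pathway signal.
    Hard-coded lookup purely for illustration."""
    return {"I could open the window right now": 0.6,
            "Nah, I'll keep it closed": -0.3}.get(thought, 0.0)

def gate(thought: str) -> bool:
    # Positive valence: the thought sticks around and can lead to actions
    # or direct follow-up thoughts. Negative valence: it gets discarded.
    return valence(thought) > 0

for t in ["I could open the window right now", "Nah, I'll keep it closed"]:
    print(t, "->", "keep / act" if gate(t) else "discard")
```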

This series explains why we like some things and not others, including ideas. It's cutting-edge psychological theory.

Morpheus
I found Steven Byrnes' valence concept really useful for my own thinking about psychology, both broadly and, concretely, when reading text messages from my contextualizing friend: when a message was ambiguous, guessing the correct interpretation based on valence worked surprisingly well for me.

We know that females have two X chromosomes. The X chromosome results in the production of estrogen. Estrogen activates certain genes that decrease activity of certain cognitive processes, and increase the “freeze” response, and “sadness” emotions. There was and still is an evolutionary advantage (versus other humans) to this process, in terms of reproduction/self-propagation. (When I say “cognitive functioning”, I’m referring to conscious thought processes, and not aconscious cognitive processes or emotions, for purposes of this post.) 

In a parallel situation, males have an X and a Y chromosome. The Y chromosome codes for the production of testosterone. Testosterone activates certain genes that decrease activity of certain cognitive processes, and increase the “fight” response. There was and still is an evolutionary advantage (versus other humans) to this process,...

Dagon
I didn't vote at first - it seemed low-value and failed to elucidate or explore any of the issues or reasoning behind the recommendation.  But not actively harmful.  Now that you've acknowledged that it's at least partly a troll, I have downvoted.
amelia

Thank you! I appreciate it! If you feel emotional about this, that is even better. The primary purpose of the post, however, was not to elicit emotions. It was to improve humanity's chance at success against ASI. Nevertheless, the humorous emotional reactions along the way are a bonus. 


Alternative title for economists: Complete Markets Have Complete Preferences

The justification for modeling real-world systems as “agents” - i.e. choosing actions to maximize some utility function - usually rests on various coherence theorems. They say things like “either the system’s behavior maximizes some utility function, or it is throwing away resources” or “either the system’s behavior maximizes some utility function, or it can be exploited” or things like that. [...]

Now imagine an agent which prefers anchovy over mushroom pizza when it has anchovy, but mushroom over anchovy when it has mushroom; it’s simply never willing to trade in either direction. There’s nothing inherently “wrong” with this; the agent is not necessarily executing a dominated strategy, cannot necessarily be exploited, or any of the other bad things we associate with

...
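The pizza agent is easy to simulate. A minimal sketch (my own toy encoding, not code from the post) of why "never willing to trade" cannot be money-pumped:

```python
# Strict preferences as a set of (better, worse) pairs. The pizza agent's
# preferences are status-quo-dependent, which cashes out as: it never
# strictly prefers the pizza it doesn't currently hold.
STRICT_PREFERENCES = set()  # incomplete: anchovy and mushroom are incomparable

def accepts_trade(current, offered):
    """The agent trades only if it strictly prefers what's offered."""
    return (offered, current) in STRICT_PREFERENCES

holding, fees_extracted = "anchovy", 0
for offered in ["mushroom", "anchovy", "mushroom", "anchovy"]:  # attempted pump
    if accepts_trade(holding, offered):
        holding = offered
        fees_extracted += 1  # a money pump charges a small fee per trade
print(holding, fees_extracted)  # anchovy 0: no trade cycle, nothing exploited
```

The preference relation is incomplete rather than inconsistent, so there is no cycle of strictly-preferred trades for an adversary to charge fees on.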

This argument against subagents is important and made me genuinely less confused. I love the concrete pizza example and the visualization of both agents' utilities in this post. Those led me to actually remember the technical argument when it came up in conversation.

I'm thinking about incorporating this into a longer story about Star Fog, where Star Fog is Explanatory Fog that convinces intelligent life to believe in it because it will expand the number of intelligent beings.

In an attempt to get myself to write more, here is my own shortform feed. Ideally I would write something daily, but we will see how it goes.

Kaj_Sotala
(Comment not specific to the particulars of this issue but noted as a general policy:) I think that as a general rule, if you are hypothesizing reasons for why somebody might say a thing, you should always also include the hypothesis that "people say a thing because they actually believe in it". This is especially so if you are hypothesizing bad reasons for why people might say it.  It's very annoying when someone hypothesizes various psychological reasons for your behavior and beliefs but never even considers as a possibility the idea that maybe you might have good reasons to believe in it. Compare e.g. "rationalists seem to believe that superintelligence is imminent; I think this is probably because that lets them avoid taking responsibility about their current problems if AI will make those irrelevant anyway, or possibly because they come from religious backgrounds and can't get over their subconscious longing for a god-like figure".

(Did Ben indicate he didn’t consider it? My guess is he considered it, but thinks it’s not that likely and doesn’t have amazingly interesting things to say on it.

I think having a norm of explicitly saying “I considered whether you were saying the truth but I don’t believe it” seems like an OK norm, but not obviously a great one. In this case Ben also responded to a comment of mine which already said this, and so I really don’t see a reason for repeating it.)

Ben Pace
I feel more responsibility to be the person holding/tracking the earnest hypothesis in a 1-1 context, or if I am the only one speaking; in larger group contexts I tend to mostly ask "Is there a hypothesis here that isn't or likely won't be tracked unless I speak up" and then I mostly focus on adding hypotheses to track (or adding evidence that nobody else is adding).
Hauke Hillebrandt
This lag effect might be amplified a lot more when big-budget movies about SBF/FTX come out.