FactorialCode

Comments

FactorialCode's Shortform

That said, you can hide it in your user-settings.

This solves my problem, thank you. It also looks just like the screenshot; no problems other than what I brought up when you click on it.

FactorialCode's Shortform

This might just be me, but I really hate the floating action button on LW. It's an eyesore on what is otherwise a very clean website. The floating action button was designed to "Represent the primary action on a screen" and draw the user's attention to itself. It does a great job of that, but since "ask us anything, or share your feedback" is not the primary thing you'd want to do, it's distracting.

Not only does it do that, but it also gives the impression that this is another cobbled-together Meteor app, and so my brain instantly associates it with all the other crappy Meteor apps.

The other thing is that when you click on it, it doesn't fit in with the rest of the site theme. LW has this great black-grey-green color scheme, but if you click on the FAB, you are greeted with a yellow waving hand, and when you close it, you get this ugly red (1) in the corner of your screen.

It's also kind of pointless, since the devs and mods on this website are all very responsive and seem to be aware of everything that gets posted.

I could understand it at the start of LW 2.0 when everything was still on fire, but does anyone use it now?

/rant

Signalling & Simulacra Level 3

I bet this is a side effect of having a large pool of bounded rational agents that all need to communicate with each other, but not necessarily frequently. When two agents only interact briefly, neither agent has enough data to work out the "meaning" of the other's words. Each word could mean too many different things. So you can probably show that under the right circumstances, it's beneficial for agents in a pool to have a protocol that maps speech-acts to inferences the other party should make about reality (amongst other things, such as other actions). For instance, if all agents have shared interests, but only interact briefly with limited bandwidth, both agents would have an incentive to implement either side of the protocol. Furthermore, it makes sense for this protocol to be standardized, because the more standard the protocol, the less bandwidth and resources the agents will need to spend working out the quirks of each other's protocols.

This is my model of what languages are.

Now that you have a well-defined map from speech-acts to inferences, the notion of lying becomes meaningful. Lying is just using speech-acts and the current protocol to shift another agent's map of reality in a direction that does not correspond to your own map of reality.
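The model above can be sketched as a toy program. Everything here (the protocol table, the predicate names, the example utterances) is my own illustration, not anything from the original comment: a shared protocol maps speech-acts to licensed inferences, and a lie is an utterance whose protocol-implied inference contradicts the speaker's own map.

```python
# Toy model of language-as-protocol. The shared, standardized protocol
# maps each speech-act to the inference a listener should make about
# reality. All names and utterances here are purely illustrative.
PROTOCOL = {
    "the bridge is safe": {"bridge_safe": True},
    "the bridge is out": {"bridge_safe": False},
}

def interpret(speech_act):
    """What the protocol licenses a listener to infer about reality."""
    return PROTOCOL[speech_act]

def is_lie(speech_act, speaker_belief):
    """A lie: the speech-act, via the protocol, shifts the listener's
    map in a direction that contradicts the speaker's own map."""
    implied = interpret(speech_act)
    return any(speaker_belief.get(k) != v for k, v in implied.items())

belief = {"bridge_safe": False}  # the speaker's actual map of reality
print(is_lie("the bridge is safe", belief))  # True: asserts safe, believes unsafe
print(is_lie("the bridge is out", belief))   # False: honest report
```

The point of the sketch is that "lying" is only defined relative to the shared `PROTOCOL` table: without an agreed mapping from speech-acts to inferences, there is nothing for an utterance to misrepresent.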

It's hard to use utility maximization to justify creating new sentient beings

I personally think that something more akin to minimum utilitarianism is more in line with my intuitions. That is, to a first-order approximation, define utility as (soft)min(U(a),U(b),U(c),U(d)...) where a,b,c,d... are the sentients in the universe. This utility function mostly captures my intuitions as long as we have reasonable control over everyone's outcomes, utilities are comparable, and the number of people involved isn't too crazy.
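One standard way to realize the "(soft)min" above is a temperature-scaled logsumexp of negated utilities; the particular formula is my choice of illustration, not something specified in the comment. As the temperature goes to zero it approaches the hard minimum, and note it is always a lower bound on the true min.

```python
import math

def softmin(utilities, temperature=1.0):
    """Smooth approximation to min(): -T * log(sum(exp(-u / T))).
    As temperature -> 0 this converges to the hard minimum; it is
    always <= min(utilities), loosening as temperature grows."""
    t = temperature
    return -t * math.log(sum(math.exp(-u / t) for u in utilities))

us = [10.0, 2.0, 5.0]          # utilities of the sentients a, b, c
print(min(us))                  # 2.0: the hard minimum
print(softmin(us, 0.1))         # very close to 2.0 at low temperature
print(softmin(us, 10.0))        # much looser at high temperature
```

The smoothness matters for the "reasonable control" caveat: unlike a hard min, a softmin still gives a small gradient toward improving everyone else's utility, not just the single worst-off sentient's.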

As a Washed Up Former Data Scientist and Machine Learning Researcher What Direction Should I Go In Now?

Money makes the world turn and it enables research, be it academic or independent. I would just focus on getting a bunch of that. Send out 10x to 20x more resumes than you already have, expand your horizons to the entire planet, and put serious effort into prepping for interviews.

You could also try getting a position at CHAI or some other org that supports AI alignment PhDs, but it's my impression that those centres are currently funding-constrained and already have a long list of very high-quality applicants, so your presence or absence might not make that much of a difference.

Other than that, you could also just talk directly with the people working on alignment. Send them emails, and ask them about their opinion on what kind of experiments they'd like to know the result of but don't have time to run. Then turn those experiments into papers. Once you've gotten a taste for it, you can go and do your own thing.

Is Stupidity Expanding? Some Hypotheses.

I'd put my money on lowered barriers to entry on the internet and eternal September effects as the primary driver of this. In my experience the people I interact with IRL haven't really gotten any stupider. People can still code or solve business problems just as well as they used to. The massive spike in stupidity seems to have occurred mostly on the internet.

I think this is because of two effects that reinforce each other in a vicious cycle.

  1. Barriers to entry on the internet have been reduced. A long time ago you needed technical know-how to even operate a computer; then things got easier, but you still needed a PC, and spending any amount of time on the internet was still the domain of nerds. Now anyone with a mobile phone can jump on Twitter and participate.

  2. Social media platforms are evolving to promote ever dumber means of communication. If they don't, they're outcompeted by the ones that do. For example, compare a screenshot of the Reddit UI back when it started vs. now. As another example, the forums of old made it fairly easy to write essays going back and forth arguing with people. Then you'd have things like Facebook, where you can still have a discussion, but it's more difficult. Now you have TikTok and Instagram, where the highest form of discourse comes down to a tie between a girl dancing with small text popups and an unusually verbose sign meme. You can forget about rational discussion entirely.

So I hypothesize that you end up with this death spiral, where technology lowers barriers to entry, causing people who would otherwise have been too dumb to participate effectively to join, causing social media companies to further modify their platforms to appeal to the lowest common denominator, causing more idiots to join... and so on and so forth. To top it off, I've found myself and other people I would call "smart" disconnecting from the larger public internet. So you end up with evaporative cooling on top of all the other aforementioned effects.

The end result is what you see today. I'm sure the process is continuing, but I checked out of the greater public internet long ago and started hanging out in the cozyweb or outside.

The Solomonoff Prior is Malign

At its core, this is the main argument why the Solomonoff prior is malign: a lot of the programs will contain agents with preferences, these agents will seek to influence the Solomonoff prior, and they will be able to do so effectively.

Am I the only one who sees this much less as a statement that the Solomonoff prior is malign, and much more as a statement that reality itself is malign? I think the proper reaction is not to use a different prior, but to build agents that are robust to the possibility that we live in a simulation run by influence-seeking malign agents, so that they don't end up like this.

The Treacherous Path to Rationality

Hmm, at this point it might be just a difference of personalities, but to me what you're saying sounds like "if you don't eat, you can't get food poisoning". "Dual identity" doesn't work for me; I feel that social connections are meaningless if I can't be upfront about myself.

That's probably a good part of it. I have no problem hiding a good chunk of my thoughts and views from people I don't completely trust, and for most practical intents and purposes I'm quite a bit more "myself" online than IRL.

But in any case, there will be many subnetworks in the network. Even if everyone adopts the "village" model, there will be many such villages.

I think that's easier said than done, and that a great effort needs to be made to deal with the effects that come with having redundancy amongst villages/networks. Off the top of my head, you need to ward against having one of the communities implode after its best members leave for another:

Many of our best and brightest leave, hollowing out and devastating their local communities, to move to Berkeley, to join what they think of as The Rationalist Community.

Likewise, even if you do keep redundancy in rationalist communities, you need to ensure that there's a mechanism that prevents them from seeing each other as out-groups, or from attacking each other when they do. This is especially important since one group viewing the other as its out-group, but not vice versa, can lead to the group with the larger in-group getting exploited.

The Treacherous Path to Rationality

So first of all, I think the dynamics surrounding offense are tripartite. You have the party who said something offensive, the party who gets offended, and the party who judges the others involved based on the remark. Furthermore, the reason why simulacra=bad in general is that the underlying truth is irrelevant. Without extra social machinery, there's no way to distinguish between valid criticism and slander. Offense and slander are both symmetric weapons.

This might be another difference of personalities...you can try to come up with a different set of norms that solves the problem. But that can't be Crocker's rules, at least it can't be only Crocker's rules.

I think that's a big part of it. Especially IRL, I've taken quite a few steps over the course of years to mitigate the trust issues you bring up in the first place, and I rely on social circles with norms that mitigate the downsides of Crocker's rules. A good combination of integrity+documentation+choice of allies makes it difficult to criticize someone legitimately. To an extent, I try to make my actions align with the values of the people I associate myself with, I keep good records of what I do, and I check that the people I need either put effort into forming accurate beliefs or won't judge me regardless of how they see me. Then when criticism is levelled against me and/or my group, I can usually challenge it by encouraging relevant third parties to look more closely at the underlying reality, usually by directly arguing against what was stated. That way I can ward off a lot of criticism without compromising as much on truth-seeking, provided there isn't a sea change in the values of my peers. This has the added benefit that it allows me and my peers to hold each other accountable for taking actions that promote each other's values.

The other thing I'm doing that is both far easier to pull off and way more effective, is just to be anonymous. When the judging party can't retaliate because they don't know you IRL and the people calling the shots on the site respect privacy and have very permissive posting norms, who cares what people say about you? You can take and dish out all the criticism you want and the only consequence is that you'll need to sort through the crap to find the constructive/actionable/accurate stuff. (Although crap criticism can easily be a serious problem in and of itself.)
