FactorialCode


Comments


Another alternative is to use a 440 nm light source and a frequency-doubling crystal (https://phoseon.com/wp-content/uploads/2019/04/Stable-high-efficiency-low-cost-UV-C-laser-light-source-for-HPLC.pdf), although the efficiency is questionable. There are also other variations based on frequency quadrupling: https://opg.optica.org/oe/fulltext.cfm?uri=oe-29-26-42485&id=465709.
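For a rough sense of the wavelength arithmetic (a sketch only; the 440 nm figure comes from the first link, while the 880 nm pump in the quadrupling case is just an illustrative assumption):

```latex
% Second-harmonic generation halves the pump wavelength:
\lambda_{\mathrm{SHG}} = \frac{\lambda_{\mathrm{pump}}}{2}
  = \frac{440\ \mathrm{nm}}{2} = 220\ \mathrm{nm}
  \quad \text{(inside the UV-C band, roughly 200--280 nm)}

% Frequency quadrupling is two doubling stages, so a near-IR pump works too:
\lambda_{4\omega} = \frac{\lambda_{\mathrm{pump}}}{4},
  \qquad \text{e.g. } \frac{880\ \mathrm{nm}}{4} = 220\ \mathrm{nm}
```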

That said, you can hide it in your user-settings.

This solves my problem, thank you. Also, it does look just like the screenshot; no problems other than the ones I brought up about what you see when you click on it.

This might just be me, but I really hate the floating action button on LW. It's an eyesore on what is otherwise a very clean website. The floating action button was designed to "Represent the primary action on a screen" and draw the user's attention to itself. It does a great job at it, but since "ask us anything, or share your feedback" is not the primary thing you'd want to do, it's distracting.

Not only does it do that, but it also gives the impression that this is another cobbled-together Meteor app, so my brain instantly associates it with all the other crappy Meteor apps.

The other thing is that when you click on it, it doesn't fit in with the rest of the site theme. LW has this great black-grey-green color scheme, but if you click on the FAB, you are greeted with a yellow waving hand, and when you close it, you get this ugly red (1) in the corner of your screen.

It's also kinda pointless, since the devs and mods on this website are all very responsive and seem to be aware of everything that gets posted.

I could understand it at the start of LW 2.0 when everything was still on fire, but does anyone use it now?

/rant

I bet this is a side effect of having a large pool of bounded rational agents that all need to communicate with each other, but not necessarily frequently. When two agents only interact briefly, neither has enough data to work out what the other's words "mean"; each word could mean too many different things. So you can probably show that under the right circumstances, it's beneficial for agents in a pool to have a protocol that maps speech-acts to inferences the other party should make about reality (amongst other things, such as other actions). For instance, if all agents have shared interests but only interact briefly with limited bandwidth, both agents have an incentive to implement either side of the protocol. Furthermore, it makes sense for this protocol to be standardized, because the more standard the protocol, the less bandwidth and resources the agents need to spend working out the quirks of each other's protocols.

This is my model of what languages are.

Now that you have a well-defined map from speech-acts to inferences, the notion of lying becomes meaningful. Lying is just when you use speech-acts and the current protocol to shift another agent's map of reality in a direction that does not correspond to your own map of reality.
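As a toy sketch of that model (everything here, the "facts" and utterances included, is invented purely for illustration): the protocol is just a lookup from speech-acts to inferences, and a lie is an utterance whose induced inference contradicts the speaker's own map.

```python
# Toy sketch of a shared protocol: a map from speech-acts to the inference the
# listener should make about reality. All names and "facts" here are invented.

PROTOCOL = {
    "the berries are safe": ("berries_poisonous", False),
    "the berries are poisonous": ("berries_poisonous", True),
}

def interpret(speech_act, listener_map):
    """Listener updates their map of reality according to the shared protocol."""
    fact, value = PROTOCOL[speech_act]
    listener_map[fact] = value

def is_lie(speech_act, speaker_map):
    """A lie: the speech-act induces an inference that contradicts the speaker's own map."""
    fact, implied_value = PROTOCOL[speech_act]
    return fact in speaker_map and speaker_map[fact] != implied_value

speaker = {"berries_poisonous": True}   # the speaker's actual map of reality
listener = {}

utterance = "the berries are safe"
interpret(utterance, listener)
print(listener)                    # {'berries_poisonous': False}
print(is_lie(utterance, speaker))  # True: the induced inference doesn't match the speaker's map
```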

I personally think that something more akin to minimum utilitarianism is more in line with my intuitions. That is, to a first-order approximation, define utility as (soft)min(U(a), U(b), U(c), U(d), ...) where a, b, c, d, ... are the sentients in the universe. This utility function mostly captures my intuitions as long as we have reasonable control over everyone's outcomes, utilities are comparable, and the number of people involved isn't too crazy.
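A minimal sketch of what that aggregation could look like (the particular softmin form and the temperature parameter are my own assumptions about how to make "(soft)min" concrete):

```python
import numpy as np

def soft_min(utilities, temperature=1.0):
    """Smooth stand-in for min(): as temperature -> 0 this approaches the hard
    minimum, while larger temperatures blend in the better-off agents' utilities."""
    u = np.asarray(utilities, dtype=float)
    # softmin(u) = -T * log(mean(exp(-u / T))); equals u when all entries are equal
    return -temperature * np.log(np.mean(np.exp(-u / temperature)))

population = [10.0, 4.0, 7.5, 3.0]  # U(a), U(b), U(c), U(d) for the sentients
print(soft_min(population, temperature=0.1))  # ~3.1, dominated by the worst-off agent
print(soft_min(population, temperature=5.0))  # ~5.4, trades off against the better-off
```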

Answer by FactorialCode

Money makes the world turn and it enables research, be it academic or independent. I would just focus on getting a bunch of that. Send out 10x to 20x more resumes than you already have, expand your horizons to the entire planet, and put serious effort into prepping for interviews.

You could also try getting a position at CHAI or some other org that supports AI alignment PhDs, but it's my impression that those centres are currently funding-constrained and already have a big list of very high-quality applicants, so your presence or absence might not make that much of a difference.

Other than that, you could also just talk directly with the people working on alignment. Send them emails, and ask them about their opinion on what kind of experiments they'd like to know the result of but don't have time to run. Then turn those experiments into papers. Once you've gotten a taste for it, you can go and do your own thing.

Answer by FactorialCode

I'd put my money on lowered barriers to entry on the internet and eternal September effects as the primary driver of this. In my experience the people I interact with IRL haven't really gotten any stupider. People can still code or solve business problems just as well as they used to. The massive spike in stupidity seems to have occurred mostly on the internet.

I think this is because of two effects that reinforce each other in a vicious cycle.

  1. Barriers to entry on the internet have been reduced. A long time ago you needed technical know-how to even operate a computer; then things got easier, but you still needed a PC, and spending any amount of time on the internet was still the domain of nerds. Now anyone with a mobile phone can jump on Twitter and participate.

  2. Social media platforms are evolving to promote ever dumber means of communication. If they don't, they're outcompeted by the ones that do. For example, compare a screenshot of the Reddit UI back when it started vs now. As another example, the forums of old made it fairly easy to write essays going back and forth arguing with people. Then you'd have things like Facebook, where you can still have a discussion, but it's more difficult. Now you have TikTok and Instagram, where the highest form of discourse comes down to a tie between a girl dancing with small text popups and an unusually verbose sign meme. You can forget about rational discussion entirely.

So I hypothesize that you end up with this death spiral, where technology lowers barriers to entry, letting in people who would otherwise have been too dumb to participate effectively, causing social media companies to further modify their platforms to appeal to the lowest common denominator, causing more idiots to join... and so on and so forth. To top it off, I've found myself and other people I would call "smart" disconnecting from the larger public internet. So you end up with evaporative cooling on top of all the other aforementioned effects.

The end result is what you see today. I'm sure the process is continuing, but I checked out of the greater public internet long ago and started hanging out in the cozyweb or outside.

At its core, this is the main argument why the Solomonoff prior is malign: a lot of the programs will contain agents with preferences, these agents will seek to influence the Solomonoff prior, and they will be able to do so effectively.
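For reference, the prior in question is usually written as a weight over all programs for a universal prefix machine U (the point of the quoted argument being that short programs simulating rich universes, agents included, contribute a lot of this mass):

```latex
% Solomonoff prior: the probability of a string x is the total weight of
% (prefix-free) programs p whose output on U begins with x.
M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-|p|}
```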

Am I the only one who sees this much less as a statement that the Solomonoff prior is malign, and much more as a statement that reality itself is malign? I think the proper reaction is not to use a different prior, but to build agents that are robust to the possibility that we live in a simulation run by influence-seeking malign agents, so that they don't end up like this.

Hmm, at this point it might just be a difference of personalities, but to me what you're saying sounds like "if you don't eat, you can't get food poisoning". "Dual identity" doesn't work for me; I feel that social connections are meaningless if I can't be upfront about myself.

That's probably a good part of it. I have no problem hiding a good chunk of my thoughts and views from people I don't completely trust, and for most practical intents and purposes I'm quite a bit more "myself" online than IRL.

But in any case there will be many subnetworks in the network. Even if everyone adopts the "village" model, there will be many such villages.

I think that's easier said than done, and that a great effort needs to be made to deal with the effects that come with having redundancy amongst villages/networks. Off the top of my head, you need to ward against having one of the communities implode after its best members leave for another:

Many of our best and brightest leave, hollowing out and devastating their local communities, to move to Berkeley, to join what they think of as The Rationalist Community.

Likewise, even if you do keep redundancy in rationalist communities, you need to ensure that there's a mechanism that prevents them from seeing each other as out-groups, or attacking each other when they do. This is especially important since one group viewing the other as its out-group, but not vice versa, can lead to the group with the larger (more inclusive) in-group getting exploited.
