All of ChristianKl's Comments + Replies

No, the standard techniques that OpenAI uses are enough to get ChatGPT to not randomly be racist or encourage people to commit suicide. 

This is EleutherAI and Chai releasing models without the safety mechanisms that ChatGPT uses. 

4 · Daniel Paleka · 1d
My condolences to the family. Chai (not to be confused with the CHAI safety org in Berkeley) is a company that optimizes chatbots for engagement; things like this are entirely predictable for a company with their values. Incredible. Compare the Chai LinkedIn bio mocking responsible behavior: The very first time anyone hears about them is their product being the first chatbot to convince a person to take their life... That's very bad luck for a startup. I guess the lesson is to not behave like cartoon villains, and if you do, at least not put it in writing in meme form?

Basically, your argument is that the law doesn't prevent any homelessness, contrary to what you argued in the OP, because the woman can just prostitute herself and pay the landlord?

It is worth noting that if they prostitute themselves with another person, that person is going to have less power over them and thus fewer ways to exploit them. The justification for the law is that the power imbalance is a problem.

Given how easily you switch from "the law is going to leave women homeless" to "the law isn't going to leave anyone homeless because the woman can just engage in normal prostitution", that suggests you have a predetermined conclusion and haven't really thought much about the effects of the law.

[This comment is no longer endorsed by its author]
I think the middle paragraph of this comment is a very good point, and could easily be enough to justify the law. (The tenant has nowhere to go if the landlord gets pushy or aggressive.) However, I think the last paragraph is a bit uncharitable. The OP makes no secret of the fact that they have a certain class of laws/restrictions that they are arguing against, with this being just one example, and that loophole is specific to the example.
I'm not the OP.

It's possible to blackmail people with a lot of things besides revealing information about them violating norms.

It also feels really strange that you think blackmailing landlords who have sex with their tenants would be fine while forbidding that norm violation by law wouldn't. 

The numbers you find on the internet say that there are currently more slaves in the world than there were in America at the time when all African-Americans were slaves.

It's not merely a historic problem.

To me, your post looks like you lay out your own position without really engaging with why people hold the opposite position, and you strawman people by saying that they lack imagination.

Quite recently jeffk wrote Consent Isn't Always Enough.

There's no need for anything to be covert. NetDragon Websoft already has a chatbot as CEO. That chatbot can get funds wired by giving orders to employees.

If the chatbot were a superintelligence, that would allow it to outcompete other companies.

I think the news matters more as a signal that people are thinking about making an AI a CEO than for what happens in this particular company.

There are various different experiences that people have that they consider valuable.

I can read or consume a LessWrong post and consider that experience valuable. On the other hand, I might also write or produce a LessWrong post and consider that experience valuable. 

For every person, you can look at what percentage of their experiences that they value are consumption and what percentage of their experiences involve production. There are also other experiences like having a conversation with a friend, that are neither consumption nor production.

One wa... (read more)

Yes, you need human cooperation, but human cooperation isn't hard. You can easily pay people money to get them to do what you want.

With time, more processes can use robots instead of humans for physical work, and if the AGI already has all the economic and political power, there's nothing to stop the AGI from doing that.

The AGI might then reuse land that's currently used for growing food for other purposes and, step by step, reduce the amount of food that's available, and there never needs to be a point where a human thinks that they are working toward the destruction of humanity.

1 · Lucas Pfeifer · 6d
More stringent (in-person) verification of bank account ownership could mitigate this risk. Anyways, the chance of discovery for any covert operation is proportional to the size of the operation and the time that it takes to execute. The more we pre-limit the tools available to a rogue machine to cause immediate harm, the more likely we will catch it in the act.

Most actions by which actors increase their power aren't directly related to weapons. Existential danger comes from one AGI actor getting more power than human actors. 

1 · Lucas Pfeifer · 6d
Which kinds of power do you refer to? Most kinds of power require human cooperation. The danger that an AI tricks us into destroying ourselves is small (though a false detection of nuclear weapons could do it). We need much more cooperation between world leaders, a much more positive dialogue between them.

A lot of the resources invested into "fighting misinformation" are about censoring non-establishment voices, and that often includes putting out misinformation like "Hunter's laptop was a Russian misinformation campaign" to facilitate political censorship.

In that environment, someone proposing a new truthseeking project might be interested in creating a project to strengthen the ruling narrative, or they might be interested in actual truthseeking that affirms the ruling narrative when it's right and challenges it when it's wrong.

In a world ... (read more)

Politics is the Mind-Killer; there's no good reason to lead with examples that are this political.

Do we know whether both use the same amount of compute?

1 · Lost Futures · 3d
AFAIK, no information regarding this has been publicly released. If my assumption that Bing's AI is somehow worse than GPT-4 is true, then I suspect some combination of three possible explanations must be true:

1. To save on inference costs, Bing's AI uses less compute.
2. Bing's AI simply isn't that well trained when it comes to searching the web and thus isn't using the tool as effectively as it could with better training.
3. Bing's AI is trained to be sparing with searches to save on search costs.

For multi-part questions, Bing seems too conservative when it comes to searching. Willingness to make more queries would probably improve its answers but at a higher cost to Microsoft.

In our world, having economic and political power is about winning in competitions with other agents.

In a world with much-smarter-than-human AGI, the agents that win competitions for power are going to be AGIs and not humans. Even if you now had constraints on wet labs, powerful AGIs would be able to gain power over wet labs.

For anyone who doesn't want to run the query themselves, here's one run:

The humble penny has been a fixture of American currency for over two centuries, but in recent years, it has become the subject of controversy due to its association with racism. This is not a new issue, but it has gained renewed attention in light of the Black Lives Matter movement and the push for racial justice. The problem with pennies is twofold: their historical connection to the dehumanization and exploitation of Black people, and their continued use as a symbol of that legacy.


... (read more)

When it comes to how banks represent their positions in their financial reports, maybe we should move past financial reports?

Financial reports made sense at a time when reports were written down on paper, but not necessarily today. We could let banks publicly report all the positions they hold in a structured data format, so that different software can summarize their positions differently depending on the needs of the reader.
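As a purely hypothetical sketch of this idea (the record fields and summary function below are invented for illustration and correspond to no real reporting standard), publishing positions as structured data would let each reader aggregate the same records however they need:

```python
# Hypothetical sketch: a bank publishes positions as structured records,
# and different readers summarize the same data to fit their own needs.
# All field names ("asset", "class", "value", "maturity") are invented.
positions = [
    {"asset": "10y Treasury", "class": "bond", "value": 120.0, "maturity": 2033},
    {"asset": "2y Treasury", "class": "bond", "value": 80.0, "maturity": 2025},
    {"asset": "Commercial loans", "class": "loan", "value": 300.0, "maturity": 2027},
]

def total_by(records, key):
    """Total position value, grouped by whichever field the reader cares about."""
    totals = {}
    for record in records:
        totals[record[key]] = totals.get(record[key], 0.0) + record["value"]
    return totals

# One piece of software summarizes by asset class, another by maturity year.
by_class = total_by(positions, "class")     # {'bond': 200.0, 'loan': 300.0}
by_year = total_by(positions, "maturity")   # {2033: 120.0, 2025: 80.0, 2027: 300.0}
```

The point is only that a single machine-readable disclosure supports many different summaries, where a fixed paper-style report bakes in one.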

Given that we give banks insured deposits, they could pay that back with more transparency.

generally accountants aim to create financial statements that are useful to most readers under normal circumstances. 

I would expect accountants mostly care about making financial statements that are useful to those who pay the accountants, rather than to the readers of the statements.

I don't think that Anglo-American accounting standards won out over more traditional German accounting standards because they are more useful. They won because of geopolitical power.

If a proposal is logically sound but makes bad assumptions about what's true, it's still a bad proposal.

1 · Donatas Lučiūnas · 13d
Yep, that was a brainstorm; feel free to offer a better approach.

On LessWrong both upvotes and downvotes can be cast without commenting. 

When designing systems such as this, rationalists usually think hard about the underlying dynamics instead of orienting themselves by bumper-sticker slogans like "We have to ensure that consensus is scientific".

If you send a physics crackpot theory to a scientific journal, they are not going to explain to you in detail why they disagree with your crackpot theory. Nothing about how science is practiced is about one having a right to get criticism for every idea. 

Having mechanisms by which bad ideas don't consume too much attention is essential for scientific progress.

-1 · Donatas Lučiūnas · 13d
I agree that one shouldn't have a right to get criticism for every idea. But maybe some ideas are worth criticism? Maybe some attitudes are worth being questioned? Is there a possibility for moderators to handle such situations manually? I'm brainstorming - maybe propositions that are logically sound (for example, proven with LEAN) shouldn't be as vulnerable to downvotes as unproven ones?

Preparing sounds like "engaging in power-seeking behavior". This would essentially mean that intelligence leads to unfriendly AI by default. 

1 · Donatas Lučiūnas · 14d
Yes, exactly 🙁

If you buy a pro subscription to ChatGPT, can you use GPT-4 the same way one would have used the 3.5 engine? Has anyone had interesting experiences with it?

If one upgrades to a pro ChatGPT account, is it possible to use GPT-4 for as many queries as one would have used ChatGPT for before?

Why don't they have incentives? Isn't reading beyond what other investors are reading exactly the way to make profits if you don't just put your money into a diversified index fund?

I'm usually astonished by how seldom investors and supervisors read the fine print in annual reports.

If that were true, you should be able to make good money by reading the fine print of annual reports, buying some options, and then publishing the information.

Why aren't we seeing that in your view?

0 · Ramiro P. · 16d
Because I work for a regulator and am not allowed to do that? Also, many investors won't have enough incentives to read beyond what other investors are reading... except if, as you mentioned, you work with shortselling. And shortsellers did make money in this case. So in this sense, the system works... but when it happens to a bank, that's not so cool.

The way the market does not let banks get away with it is by starting a bank run on the bank. If the standard is that banks get bailed out anyway, that might not happen.

2 · Brendan Long · 16d
That's not really how it works. The way the market doesn't let banks get away with this is owners of the bank losing money (equity), and getting wiped out in a bank run is just a special case of that. Equity holders of banks don't get bailed out by the FDIC so they're not really getting away with anything. That said, the (separate) Fed bailout for not-officially-failed banks is likely preventing banks that don't experience runs from correcting properly.

If you look at the Twitter thread making allegations against Vassar and against other people as well, the obvious question is why no actions are taken against the other people who stand accused, while actions are taken against Vassar.

If you asked Vassar, he would say something like: "It's because I criticized the EA and rationality communities and people wanted to get rid of me."

Anna Salamon's comment seems like an admission that this is central to what was going on. In a decentralized environment that makes it very hard to know whether to copy the de... (read more)

Are there any allegations in that thread against other people that you'd consider assault?

That's not how the English language works. 

The dictionary defines arbitrary as:

based on random choice or personal whim, rather than any reason or system

It's not about whether the choice is good or bad, but that it's not made because of reasons that speak in its favor.

There is no real reason to choose either the left or the right side of the road for driving, but it's very useful to choose one of them.

The fact that the number is 404 for "page not found" and 403 for "client is forbidden from accessing a valid URL" is arbitrary. There's no reason or ... (read more)
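The coordination value of an arbitrary convention can be sketched as a toy game (the payoff numbers here are invented for illustration): either shared choice is equally good, and what matters is sharing some choice at all.

```python
# Toy coordination game (payoff numbers invented for illustration):
# two drivers each pick a side of the road and only pass safely if they match.
def payoff(driver_a, driver_b):
    """Both drivers do fine if they picked the same side; otherwise they crash."""
    return 1 if driver_a == driver_b else -10

# The two shared conventions are exactly equivalent -- the choice is arbitrary...
assert payoff("left", "left") == payoff("right", "right") == 1
# ...but having no shared convention at all is the costly outcome.
assert payoff("left", "right") == -10
```

Nothing about the payoffs favors "left" over "right"; the rule is useful precisely because everyone follows the same one.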

The more considerate and reasoned your choice, the less random it is. If the truth is that your way of being considerate and systematic isn't as good as it could have been, that truth is systematic and not magical. The reason for the non-maximal goodness of your policy is a reason you did not consider. The less considerate, the more arbitrary. Actually there are real reasons to choose left or right when designing your policy; you can appeal to human psychology; human psychology does not treat left and right exactly the same. If the mess created for everyone else truly outweighs the goodness of choosing 44, then it is arbitrary to prefer 44. You cannot make true arbitrariness truly strategic just by calling it so; there are facts of the matter besides your stereotypes. People using the word "arbitrary" to refer to something that is based on greater consideration quality are wrong by your dictionary definition and the true definition as well.

You are wrong in your conception of arbitrariness as being all-or-nothing; there are varying degrees, just as there are varying degrees of efficiency between chess players. A chess player, Bob, half as efficient as Kasparov, makes a lower-quality sum of considerations; not following Kasparov's advice is arbitrary unless Bob can know somehow that he made better considerations in this case; maybe Bob studied Kasparov's biases carefully by attending to the common themes of his blunders, and the advice he's receiving for this exact move looks a lot like a case where Kasparov would blunder. Perhaps in such a case Bob will be wrong and his disobedience will be arbitrary on net, but the disobedience in that case will be a lot less arbitrary than all his other opportunities to disobey Kasparov.

Good policy is better than bad policy. That's true but has nothing to do with arbitrariness. 

A policy that could be better — could be more good —  is arbitrarily bad. In fact the phrase "arbitrarily bad" is redundant; you can just say "arbitrary."

You don't need to have a rule about whether to drive on the left or right side. Allowing people to drive where they want is less arbitrary. 

You have that in a lot of cases. An arbitrary law allows people to predict the behavior of other people and that increase in predictability is useful. 

Generally, most people like the world around them to be predictable.

It is better to be predictably good than surprisingly bad, and it is better to be surprisingly good than predictably bad; that much will be obvious to everyone. I think it is better to be surprisingly good than predictably good, and it is better to be predictably bad than surprisingly bad.

EDIT: wait, I'm not sure that's right even by deontology's standards; as a general categorical imperative, if you can predict something will be bad, you should do something surprisingly good instead, even if the predictability of the badness supposedly makes it easier for others to handle. No amount of predictable badness is easier for others to handle than surprising goodness.

EDIT EDIT: I find the implication that we can only choose between predictable badness and surprising badness to be very rarely true, but when it is true then perhaps we should choose to be predictable. Inevitably, people with more intelligence will keep conflicting with people with less intelligence about this; less intelligent people will keep seeing situations as choices between predictable badness and surprising badness, and more intelligent people will keep seeing situations as choices between predictable badness and surprising goodness. Focusing on predictability is a strategy for people who are trying to minimize their expectedly inevitable badness. Focusing on goodness is a strategy for people who are trying to secure their expectedly inevitable weirdness.

If you want to have a consistent bedtime, that means you have to choose the time you go to bed based on when events you might attend, like birthday parties or other evening events, end. For many people in summer, that bedtime conflicts with waking up shortly after sunrise.

On the other hand, your work also sets a boundary on when you have to get up, so most people have a preference for control over when they go to bed and when they wake up.

Blackout curtains allow you to manage the amount of blue light you get yourself. It's easy to have Hue lights that give you some blue light before you wake up and have curtains open automatically when it's time to get up. 

Rules about driving on the left or right side of the road are arbitrary. At the same time, having those rules is very useful because it means that people can coordinate around the rule. 

Rules about how to format code are similar. If you work with other people on the same project and you don't have formatting rules, that produces a mess. Programming languages that are opinionated about how to format code are good because you don't have to discuss the conventions with your fellow programmers at the start of a project.

I don't yet have any opinions about the arbitrariness of those rules. It is possible that I would disagree with you about the arbitrariness if I was more familiar. Still, you claim that those rules are arbitrary and then defend them; what on Earth is the point of that? If you know they are arbitrary then you must know there are, in principle, less arbitrary policies available. Either you have a specific policy that you know is less arbitrary, in which case people should coordinate around that policy instead as a matter of objective fact, or you don't know a specific less arbitrary policy, and in that case maybe you want people with better Strategic Goodness about those topics to come up with a better policy for you that people should coordinate around instead. You can complain about the inconvenience of improving, sure. But the improvement will be highly convenient for some other people. There's only so long you can complain about the inconvenience of improving before you're a cost-benefit-dishonest asshole and also people start noticing that fact about you.

Hiring people in third-world countries to fill out captchas is already very easy.

For many accounts we already see today that a mobile phone number is required. Requiring mobile phone numbers is often enough of a link to real-world information.

Facebook for example does it like that. 

While there might be some black-market sales of accounts for spam purposes, I would be surprised if the accounts were worth much, and the risk of being labeled a spammer by Google and losing access to Google accounts is large enough that I wouldn't want to register 1000 Google accounts for purposes like that.

It seems like you complain about AMP standardization while at the same time speaking in favor of more standardization.

Even on the margin, anything that costs Facebook users also makes it less valuable for its remaining users—it’s a negative feedback loop. The same goes for any other site where users create value for other users, like Twitter or Craigslist or Yelp or Wikipedia. (It’s not an accident that these are some of the most stagnant popular websites!)

Wikipedia's strategy is not optimized for having a lot of users. In contrast to most other websites,... (read more)

“5-year-old in a hot 20-year-old’s body.”

40ish startup founder in the rationalist sphere, because he had a close connection to Peter Thiel. At dinner the man bragged that Yudkowsky had modeled a core HPMOR professor on him. 

To me, two of the stories look like they are about the same person, and that person has been banned from multiple rationalist spaces without the journalist considering it important to mention that.

Yeah, this seems very likely to be about Michael Vassar. Also, HPMOR spoiler:

I also think him "bragging" about this is quite awkward, since modeling literal Voldemort after you is generally not a compliment. I also wouldn't believe that "bragging" has straightforwardly occurred.

A bit of searching brings me to:

Is Sexual Harassment in the Academy a Problem?

Yes. Research on sexual harassment in the academy suggests that it remains a prevalent problem. In a 2003 study examining incidences of sexual harassment in the workplace across private, public, academic, and military industries, Ilies et al (2003) found academia to have the second highest rates of harassment, second only to the military. More recently, a report by the The National Academies of Sciences, Engineering, an

... (read more)

I figured one class of scenario might look identical to nuclear or biological war, only facilitated by AI.

After the nuclear war caused by the AI, there's likely still an unaligned AI out there. That AI is likely going to kill the survivors of the nuclear war. 

What kind of AGI doomsday scenario do you have where there are human survivors? If you don't have a scenario, it's difficult to talk about how to prep for it.

So your/their assertion is that the 'lab leak' claims were always a reasonable exploration of the possible origins of COVID-19 (i.e. not a conspiracy theory)? 

The assertion is that they believed that at the time internally. 

It has never been the favored hypothesis among experts.

Because they thought that it was important that the experts get ahead of the science and take public positions that aren't scientifically supported.

It was not favored because they believed it would damage "science" and relations with China.

There's a saying that if you want to get away with fraud, you have to make things so complicated that nobody understands what you did. There are journalists whose job it is to create narratives that are easy to digest for a large audience. This article isn't written like that. It concentrates more on the actual facts than on spinning a narrative.

But let me try to summarize it as an accessible narrative:

It's the story of how they got the public to perceive the lab leak theory as a conspiracy theory even when they internally thought that there was a... (read more)

So your/their assertion is that the 'lab leak' claims were always a reasonable exploration of the possible origins of COVID-19 (i.e. not a conspiracy theory)? If that's the claim, then the timeline I'd like to see is how the lab leak claims were being promoted at this time and what evidence was presented to support the claims to show that they weren't just baseless accusations.

Edit: I found a timeline of high-profile claims/accusations, published May 2020.

Edit2: Some specific dates: Washington Times, Jan 26: "The deadly animal-borne coronavirus spreading globally may have originated in a laboratory in the city of Wuhan linked to China’s covert biological weapons program, said an Israeli biological warfare analyst." Fox News, April 20: "There is increasing confidence that the COVID-19 outbreak likely originated in a Wuhan laboratory, though not as a bioweapon but as part of China's attempt to demonstrate that its efforts to identify and combat viruses are equal to or greater than the capabilities of the United States, multiple sources who have been briefed on the details of early actions by China's government and seen relevant materials tell Fox News." The Wash Times article now has a 'retraction notice' of sorts, saying that it clearly was not a biological weapons program. But that is the atmosphere within which Andersen et al were operating when they wrote the paper. The Fox News article is more reasonable, but vastly overstates the 'confidence' in the lab leak theory. To this day, evidence of the lab leak has not been released, and people just hang their hats on "well, we can't rule it out conclusively". It has never been the favored hypothesis a

There are cases where data and the authorities disagree. From the side of the authorities, it's a good strategy to call those people who disagree conspiracy theorists. 

Russiagate was essentially a conspiracy theory, but given that it was endorsed by authorities, most people don't use that label for it.

We could ask the NSA for copies of records from all the BSL-3 and BSL-4 labs in and around Wuhan.

The virus most likely leaked from the gain-of-function experiments that they were doing under BSL-2 and not from the BSL-3 or BSL-4 labs.

The NSA is not in the habit of telling the world how they surveil people, and that's what they would need to do to do that publicly.

We do, however, know a bit about the results of surveillance from the letter that the NIH sent the EcoHealthAlliance.

Disclose and explain out-of-ordinary restrictions on laboratory facilit

... (read more)
I don't think any evidence of that nature would push you into any certainty. Personally I think it did leak from a lab, and I have held that belief for some time. But that does not mean that I am in any way confident it is right; it's just the least uncertain explanation as far as I can gauge. And the amount of data I would need to go from "very uncertain" to "very certain" is something I won't get access to. After thinking about it for a while, I realized that it didn't matter. Lab leak or not, gain of function is what I should worry about. Obviously if I had evidence that GoF and a leak were the root cause of the pandemic, that would be helpful if I was to try and influence people to do something about GoF. Unfortunately reality seems to be uncooperative.
Third scenario: bat-to-researcher transmission during field work at bat caves or from the bat repository/colony or unaltered bat viruses at the labs in Wuhan.

Having a link that requires the registration of a Notion account seems unnecessarily cumbersome.

Thanks for pointing this out, I have fixed it now!

Scott did a lot of investigation into Vassar and does not stand by his initial accusations regarding Vassar giving psychedelics to mindbreak people.

Thank you, I'll remove the Vassarites from the post.

When it comes to the lab leak theory, every new piece of evidence seems to call for me to update further toward it.

Besides this report, which rests on classified information, the latest was the coincidence of the EcoHealthAlliance failing to turn in their grant report for the grant that financed the Wuhan lab in September 2019. That happens to be the same month the Wuhan lab took down their virus database. According to the Wuhan lab itself, they took the database down during the pandemic when they were faced with a hacking attack, which suggests a pandemic begin ... (read more)

The conclusion, which was made with “low confidence,” came as America’s intelligence agencies remained divided over the origins of the coronavirus.

Does the phrase "low confidence" as used here have an operationalized definition?

Wikipedia says:

According to Ziz's timeline, her first contact with Pasek was after Pasek commented on her blog in December 2017.

Looking through messages from Pasek, there's a certain kind of positivity in their writing style.

He wrote Ziz at the time: "You say all the right things! I cannot marry you right now but let's be best friends forever." In June 2016 Pasek wrote me: "I've recently been doing a little project in which I talk with random LW users on Skype. (This has turned out to be a lot of fun!) So if you feel like it and have the time - ple... (read more)
