No, the standard techniques that OpenAI uses are enough to get ChatGPT to not randomly be racist or encourage people to commit suicide.
This is EleutherAI and Chai releasing models without the safety mechanisms that ChatGPT uses.
Sorry. My mistake.
Basically, your argument is that the law doesn't prevent any homelessness, contrary to what you argued in the OP, because the women can just prostitute themselves and pay the landlord?
It is worth noting that if they prostitute themselves with another person, that person is going to have less power over them and thus fewer ways to exploit them. The justification for the law is that the power imbalance is a problem.
Given how easily you switch from "the law is going to leave women homeless" to "the law isn't going to leave anyone homeless because the women can just engage in normal prostitution", that suggests you have a predetermined conclusion and haven't really thought much about the effect of the law.
It's possible to blackmail people with a lot of things besides revealing information about them violating norms.
It also feels really strange that you think blackmailing landlords who have sex with their tenants would be fine while forbidding that norm violation by law wouldn't.
The numbers you find on the internet suggest that there are currently more slaves in the world than there were slaves in America at the time when African-Americans were enslaved.
It's not merely a historic problem.
To me, your post looks like you lay out your own position without really engaging with why people hold the opposite position, and you strawman people by saying that they lack imagination.
Quite recently jeffk wrote Consent Isn't Always Enough.
There's no need for anything to be covert. NetDragon Websoft already has a chatbot as CEO. That chatbot can get funds wired by giving orders to employees.
If the chatbot were a superintelligence, that would allow it to outcompete other companies.
I think the news matters more as a signal that people are thinking about making an AI a CEO than for what happens in this particular company.
There are various different experiences that people have that they consider valuable.
I can read or consume a LessWrong post and consider that experience valuable. On the other hand, I might also write or produce a LessWrong post and consider that experience valuable.
For every person, you can look at what percentage of the experiences they value are consumption and what percentage involve production. There are also other experiences, like having a conversation with a friend, that are neither consumption nor production.
One wa...
Yes, you need human cooperation but human cooperation isn't hard. You can easily pay people money to get them to do what you want.
With time, more processes can use robots instead of humans for physical work, and if the AGI already has all the economic and political power, there's nothing to stop it from doing that.
The AGI might then reuse land that's currently used for growing food for other purposes, reducing step by step the amount of food that's available, and there never needs to be a point where a human thinks they are working toward the destruction of humanity.
Most actions by which actors increase their power aren't directly related to weapons. Existential danger comes from one AGI actor getting more power than human actors.
A lot of the resources invested into "fighting misinformation" are about censoring non-establishment voices, and that often includes putting out misinformation like "Hunter's laptop was a Russian misinformation campaign" to facilitate political censorship.
In that environment, someone who proposes a new truthseeking project might really be interested in creating a project to strengthen the ruling narrative, or they might be interested in actual truthseeking that affirms the ruling narrative when it's right and challenges it when it's wrong.
In a world ...
Do we know whether both use the same amount of compute?
In our world, having economic and political power is about winning in competitions with other agents.
In a world with much-smarter-than-human AGI, the agents that win competitions for power are going to be AGIs and not humans. Even if you had constraints on wet labs now, powerful AGIs would be able to exert power over wet labs.
For anyone who doesn't want to run the query themselves, here's one run:
...The humble penny has been a fixture of American currency for over two centuries, but in recent years, it has become the subject of controversy due to its association with racism. This is not a new issue, but it has gained renewed attention in light of the Black Lives Matter movement and the push for racial justice. The problem with pennies is twofold: their historical connection to the dehumanization and exploitation of Black people, and their continued use as a symbol of that legacy.
T
When it comes to how banks represent their positions in their financial reports, maybe we should move past financial reports?
Financial reports made sense at a time when the report was written down on paper, but not necessarily today. We could let banks publicly report all the positions they hold in a structured data format, so that different software can summarize their positions differently depending on the needs of the reader.
Given that we give banks insured deposits, they could pay that back with more transparency.
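To make the proposal concrete, here's a minimal sketch of what a machine-readable position disclosure could look like. The schema, field names, and figures are all hypothetical illustrations, not an existing reporting standard:

```python
import json

# Hypothetical machine-readable position disclosure; every field name
# and number here is illustrative, not part of any real regulatory schema.
positions = [
    {"instrument": "US-Treasury-10Y", "book_value": 5_000_000_000,
     "accounting": "held-to-maturity", "market_value": 4_300_000_000},
    {"instrument": "US-Treasury-2Y", "book_value": 1_000_000_000,
     "accounting": "available-for-sale", "market_value": 980_000_000},
]

# Different consumers of the same feed can summarize it differently;
# one reader might care about unrealized losses, another about duration.
unrealized_losses = sum(p["book_value"] - p["market_value"] for p in positions)
print(json.dumps(positions, indent=2))
print(f"Unrealized losses across all positions: {unrealized_losses:,}")
```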
generally accountants aim to create financial statements that are useful to most readers under normal circumstances.
I would expect accountants mostly care about making financial statements that are useful to those who pay them, rather than to the readers of the statements.
I don't think that Anglo-American accounting standards won out over more traditional German accounting standards because they are more useful. They won because of geopolitical power.
If a proposal makes bad assumptions about what's true, it's still a bad proposal even if it's logically sound.
On LessWrong both upvotes and downvotes can be cast without commenting.
When designing systems such as this, rationalists usually think hard about the underlying dynamics instead of orienting themselves around bumper-sticker slogans like "We have to ensure that consensus is scientific."
If you send a crackpot physics theory to a scientific journal, they are not going to explain in detail why they disagree with it. Nothing about how science is practiced gives anyone the right to receive criticism for every idea.
Having mechanisms that keep bad ideas from consuming too much attention is essential for scientific progress.
Preparing sounds like "engaging in power-seeking behavior". This would essentially mean that intelligence leads to unfriendly AI by default.
If you buy a Pro subscription to ChatGPT, can you use GPT-4 the same way one would have used the 3.5 engine? Has anyone had interesting experiences with it?
If one upgrades to a ChatGPT Pro account, is it possible to use GPT-4 for as many queries as one would previously have made with ChatGPT?
Why don't they have incentives? Isn't reading beyond what other investors are reading exactly the way to make profits if you don't just put your money into a diversified index fund?
I'm usually astonished at how seldom investors and supervisors read the fine print in annual reports.
If that were true, you should be able to make good money by reading the fine print of annual reports, buying some options, and then publishing the information.
Why aren't we seeing that in your view?
The way the market doesn't let banks get away with it is by starting a bank run on the bank. If the standard is that banks get bailed out anyway, that might not happen.
If you take the Twitter thread making allegations against Vassar and against other people as well, the obvious question is why actions are taken against Vassar but not against the other people who stand accused.
If you asked Vassar, he would say something like: "It's because I criticized the EA and rationality communities and people wanted to get rid of me."
Anna Salamon's comment seems like an admission that this is central to what was going on. In a decentralized environment that makes it very hard to know whether to copy the de...
That's not how the English language works.
The dictionary defines arbitrary as:
based on random choice or personal whim, rather than any reason or system
It's not about whether the choice is good or bad, but about the choice not being made for reasons that speak in favor of it.
There is no real reason to choose either the left or the right side of the road for driving, but it's very useful to choose one of them.
The fact that the number 404 stands for "page not found" and 403 for "client is forbidden from accessing a valid URL" is arbitrary. There's no reason or ...
Good policy is better than bad policy. That's true but has nothing to do with arbitrariness.
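To make the status-code point concrete, here's a minimal sketch using Python's requests library (the URL is hypothetical). The client can react correctly only because everyone coordinates around the same arbitrary number assignments:

```python
import requests

# Hypothetical URL, purely for illustration.
response = requests.get("https://example.com/some/page")

# 403 and 404 were arbitrary picks, but because every server follows the
# same convention, every client can branch on them without negotiation.
if response.status_code == 404:
    print("Page not found")
elif response.status_code == 403:
    print("Access forbidden")
```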
You don't need to have a rule about whether to drive on the left or right side. Allowing people to drive where they want is less arbitrary.
You have that in a lot of cases. An arbitrary law allows people to predict the behavior of other people, and that increase in predictability is useful.
Generally, most people like the world around them to be predictable.
If you want a consistent time of going to bed, you have to choose it based on when events you might attend, like birthday parties or other evening events, end. For many people in summer, that bedtime conflicts with waking up shortly after sunrise.
On the other hand, your work also sets a boundary on when you have to get up, so most people prefer to have control over when they go to bed and when they wake up.
Blackout curtains let you manage the amount of blue light you get yourself. It's easy to have Hue lights that give you some blue light before you wake up and curtains that open automatically when it's time to get up.
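For the Hue part, here's a minimal sketch using the third-party phue library; the bridge IP and light ID are hypothetical, and in practice you'd trigger this from a morning scheduler such as cron:

```python
from phue import Bridge

# Hypothetical bridge IP and light ID; adjust for your own setup.
bridge = Bridge("192.168.1.50")
bridge.connect()  # press the bridge's link button the first time you run this

# Fade the light up to full brightness at a cool (blue-ish) color
# temperature over 10 minutes (transitiontime counts in 100 ms steps).
bridge.set_light(1, {
    "on": True,
    "bri": 254,              # maximum brightness
    "ct": 153,               # coolest color temperature, i.e. the most blue
    "transitiontime": 6000,  # 6000 * 100 ms = 10 minutes
})
```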
Rules about driving on the left or right side of the road are arbitrary. At the same time, having those rules is very useful because it means that people can coordinate around the rule.
Rules about how to format code are similar. If you work with other people on the same project and you don't have formatting rules, that produces a mess. Programming languages that are opinionated about how to format code are good because that means you don't have to negotiate conventions with your fellow programmers at the start of a project.
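As a small illustration, Python's black formatter (one opinionated tool among many; gofmt plays the same role for Go) collapses stylistic variation into a single canonical form, so there's nothing left to argue about:

```python
# Before running an opinionated formatter such as black:
def totals( xs ):
    return {  'sum' : sum( xs ),'max':max( xs )}

# After "black file.py" rewrites it, everyone's version looks the same:
def totals(xs):
    return {"sum": sum(xs), "max": max(xs)}
```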
Hiring people in third-world countries to fill out captchas is already very easy.
For many accounts, we already see today that a mobile phone number is required. Requiring mobile phone numbers is often enough of a link to real-world information.
Facebook for example does it like that.
While there might be some black-market sales of accounts for spam purposes, I would be surprised if the accounts were worth much, and the risk of being labeled a spammer by Google and losing access to Google accounts is large enough that I wouldn't want to register 1000 Google accounts for purposes like that.
It seems like you complain about AMP standardization while at the same time speaking in favor of more standardization.
Even on the margin, anything that costs Facebook users also makes it less valuable for its remaining users—it’s a negative feedback loop. The same goes for any other site where users create value for other users, like Twitter or Craigslist or Yelp or Wikipedia. (It’s not an accident that these are some of the most stagnant popular websites!)
Wikipedia's strategy is not optimized for having a lot of users. In contrast to most other websites,...
“5-year-old in a hot 20-year-old’s body.”
40ish startup founder in the rationalist sphere, because he had a close connection to Peter Thiel. At dinner the man bragged that Yudkowsky had modeled a core HPMOR professor on him.
To me, two of the stories look like they are about the same person, and that person has been banned from multiple rationalist spaces without the journalist considering it important to mention that.
Yeah, this seems very likely to be about Michael Vassar. Also, HPMOR spoiler:
I also think him "bragging" about this is quite awkward, since modeling literal Voldemort after you is generally not a compliment. I also doubt that the "bragging" straightforwardly occurred.
A bit of searching brings me to https://elephantinthelab.org/sexual-harassment-in-academia/ :
...Is Sexual Harassment in the Academy a Problem?
Yes. Research on sexual harassment in the academy suggests that it remains a prevalent problem. In a 2003 study examining incidences of sexual harassment in the workplace across private, public, academic, and military industries, Ilies et al (2003) found academia to have the second highest rates of harassment, second only to the military. More recently, a report by The National Academies of Sciences, Engineering, an
I figured one class of scenario might look identical to nuclear or biological war, only facilitated by AI.
After the nuclear war caused by the AI, there's likely still an unaligned AI out there. That AI is likely going to kill the survivors of the nuclear war.
What kind of AGI doomsday scenario do you have where there are human survivors? If you don't have a scenario, it's difficult to talk about how to prep for it.
So your/their assertion is that the 'lab leak' claims were always a reasonable exploration of the possible origins of COVID-19 (i.e. not a conspiracy theory)?
The assertion is that they believed that at the time internally.
It has never been the favored hypothesis among experts.
Because they thought that it was important that the experts get ahead of the science and take public positions that aren't scientifically supported.
It was not favored because they believed it would damage "science" and relations with China.
There's a saying that if you want to get away with fraud, you have to make things so complicated that nobody understands what you did. There are journalists whose job it is to create narratives that are easy for a large audience to digest. This article isn't written like that. It concentrates more on the actual facts than on spinning a narrative.
But let me try to summarize it as an accessible narrative:
It's the story of how they made the lab leak theory perceived by the public as a conspiracy theory even when they internally thought that there was a...
There are cases where data and the authorities disagree. From the side of the authorities, it's a good strategy to call those people who disagree conspiracy theorists.
Russiagate was essentially a conspiracy theory, but given that it was endorsed by authorities, most people don't use that label for it.
We could ask the NSA for record copies from all the BSL-3 and 4 labs in and around Wuhan.
The virus most likely leaked from the gain-of-function experiments that were being done under BSL-2 conditions, and not from the BSL-3 or BSL-4 labs.
The NSA is not in the habit of telling the world how they surveil people, and that's what they would need to do to release those records publicly.
We do, however, know a bit about the results of surveillance data from the letter that the NIH sent the EcoHealthAlliance.
...Disclose and explain out-of-ordinary restrictions on laboratory facilit
Having a link that requires registering a Notion account seems unnecessarily cumbersome.
Scott did a lot of investigation into Vassar and does not stand by his initial accusations regarding Vassar giving people psychedelics to break their minds.
When it comes to the lab leak theory, every new piece of evidence seems to push me to update further toward it.
Besides this report, which rests on classified information, the latest was the coincidence of the EcoHealthAlliance failing to turn in their grant report for the grant that financed the Wuhan lab in September 2019. That happens to be the same month the Wuhan lab took down their virus database. According to the Wuhan lab itself, they took the database down during the pandemic when they were faced with a hacking attack, which suggests a pandemic begin ...
The conclusion, which was made with “low confidence,” came as America’s intelligence agencies remained divided over the origins of the coronavirus.
Does the phrase "low confidence" as used here have an operationalized definition?
According to Ziz's timeline, her first contact with Pasek was after Pasek commented on her blog in December 2017.
Looking through messages from Pasek, there's a certain kind of positivity in their writing style.
He wrote Ziz at the time: "You say all the right things! I cannot marry you right now but let's be best friends forever." In June 2016, Pasek wrote me: "I've recently been doing a little project in which I talk with random LW users on Skype. (This has turned out to be a lot of fun!) So if you feel like it and have the time - ple...
Unfortunately, not really.