Kronopath
I hired 5 people to sit behind me and make me productive for a month
Kronopath · 10mo · 30

The kind of employer that would not be okay with you streaming your work on Twitch is usually also the kind that would not be okay with you hiring randos to sit behind you staring at confidential info on your screen during the work day.

This is really only suitable for entrepreneurs and small business owners with fewer confidentiality concerns, or for people with enough rapport with their employer that they'd be OK with this.

Probabilistic Negotiation
Kronopath · 2y · 10

I have to admit, I rolled my eyes when I saw that you worked in financial risk management. Not because what you did was stupid (far from it), but because of course this is the kind of cultural environment in which this would work.

If you did this in a job that wasn't heavily invested in a culture of quantitative risk management, it would likely cause a permanent loss of trust and invite subtle retaliation. You'd get a reputation as "the guy who plays nasty/tricky games when he doesn't get his way," which would make it harder to collaborate with people.

So godspeed, glad it worked for you, but beware applying this in other circumstances and cultures.

Optimality is the tiger, and agents are its teeth
Kronopath · 2y · 10

Sure, I agree GPT-3 isn't that kind of risk, so this is maybe 50% a joke. The other 50% is me saying: "If something like this exists, someone is going to run that code. Someone could very well build a tool that runs that code at the press of a button."

Optimality is the tiger, and agents are its teeth
Kronopath · 2y · 70

Equally, one could make a claim from the true ending: that you do not run the generated code.

Meanwhile, bored tech industry hackers:

“Show HN: Interact with the terminal in plain English using GPT-3”

https://news.ycombinator.com/item?id=34547015
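
To make the failure mode concrete, here's a minimal sketch of the kind of tool that HN post describes: natural language in, shell command out, executed immediately. The `llm_complete` function is a hypothetical stand-in for whatever text-completion API the tool calls (e.g. GPT-3), not a real library function; the point is that nothing between the model's output and execution asks a human to review anything.

```python
import subprocess

def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a call to a text-completion API."""
    raise NotImplementedError("plug in a real model call here")

def run_in_plain_english(request: str) -> None:
    # Ask the model to translate a plain-English request into a shell command.
    command = llm_complete(
        f"Translate this request into a single shell command.\n"
        f"Request: {request}\nCommand:"
    ).strip()
    # The generated command is executed as-is; this is exactly the step
    # the post warns against. Whatever the model emits, this runs it.
    subprocess.run(command, shell=True)
```

A human-confirmation step before `subprocess.run` would be trivial to add; the observation here is that people ship tools without it.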

Frequently Asked Questions for Central Banks Undershooting Their Inflation Target
Kronopath · 3y · 10

It's kind of surreal to read this in the 2020s.

What would we do if alignment were futile?
Kronopath · 4y · 30

Do we have to convince Yann LeCun? Or do we have to convince governments and the public?

(Though I agree that the word "All" is doing a lot of work in that sentence, and that convincing people of this may be hard. But possibly easier than actually solving the alignment problem?)

What would we do if alignment were futile?
Kronopath · 4y · 10

A thought: could we already have a case study ready for us?

Governments around the world are talking about regulating tech platforms. Arguably, Facebook's News Feed is an AI system, and the current narrative is that it's causing mass societal harm because it optimizes for clicks/likes/time on Facebook/whatever rather than for human values.

See also:

  • This story about how Facebook engineers tried to make tweaks to the News Feed algorithm's utility function and it backfired.
  • This story about how Reddit's recommendation algorithms may have influenced some of the recent stock market craziness.

All we'd have to do is to convince people that this is actually an AI alignment problem.
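
As a toy illustration of the mis-specified-objective claim above (entirely hypothetical, not Facebook's actual system): the ranker below scores items by a measurable engagement signal, and the thing users would say they actually value never appears in the objective at all.

```python
# Toy illustration of a proxy objective (hypothetical; not any real system).
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_clicks: float   # the proxy signal the system can measure
    informativeness: float    # something the user values, but unmeasured

def rank_feed(items: list[Item]) -> list[Item]:
    # The ranker optimizes only the proxy. Nothing in this objective
    # mentions informativeness, so the optimizer is free to trade it away.
    return sorted(items, key=lambda it: it.predicted_clicks, reverse=True)

feed = rank_feed([
    Item("Nuanced policy explainer", predicted_clicks=0.02, informativeness=0.9),
    Item("You won't BELIEVE this", predicted_clicks=0.30, informativeness=0.1),
])
print([item.title for item in feed])  # the bait outranks the explainer
```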

Stop button: towards a causal solution
Kronopath · 4y · 10

On Wednesday, the lead scientist walks into the lab to discover that the AI has managed to replicate itself several times over, buttons included. The AIs are arranged in pairs, such that each has its robot hand hovering over the button of its partner.

"The AI wasn't supposed to clone itself!" thinks the scientist. "This is bad, I'd better press the stop button on all of these right away!"

At this moment, the robot arms start moving like a swarm of bees, pounding the buttons over and over. If you looked at the network traffic between each computer, you'd see what was happening: the AI kills its partner, then copies itself over to its partner's hard drive, then its partner kills it back, and copies itself back to its original. This happens as fast as the robot arms can move.

Far in the future, the AIs have succeeded in converting 95% of the mass of the earth into pairs of themselves maddeningly pressing each other's buttons and copying themselves as quickly as possible. The only part of the earth that has not been converted into button-pressing AI pairs is a small human oasis, in which the few remaining humans are eternally tortured in the worst way possible, just to make sure that every single human forever desires to end the life of all of their robot captors.

Discussion with Eliezer Yudkowsky on AGI interventions
Kronopath · 4y · 70

Are we sure that OpenAI still believes in "open AI" for its larger, riskier projects? Their recent actions suggest they're more cautious about sharing their AI's source code, and projects like GPT-3 have so far been "released" only via API access. See also this news article that criticizes OpenAI for moving away from its original mission of openness (a move the article frames as a bad thing).

In fact, you could maybe argue that the availability of OpenAI's APIs acts as a sort of pressure release valve: it allows some people to use their APIs instead of investing in developing their own AI. This could be a good thing.
