Comments

I think the reacts being semantic, rather than arbitrary emojis, is what makes this so much better.

I wish other platforms experimented with semantic reacts as well, instead of letting people react with any emoji of their choosing and making you guess whether a "thumbs up" means agreement, acknowledgement, or endorsement.

This was my first time taking this; looking forward to the results!

I know of Robert Miles, and of Writer, who does Rational Animations. (In fact, Robert Miles' channel is the primary reason I discovered LessWrong :) )

Don't leave me hanging like this: does the movie you are describing exist? (Though I guess your description is a major spoiler; you would need to go in without knowing whether there will be anything supernatural.)

  1. The Thing: classic
  2. Eden Lake
  3. Misery
  4. 10 Cloverfield Lane
  5. Gone Girl: not horror, but I specifically like it because of how agentic the protagonist is

2, 3, and 4 have in common that some sort of abusive relationship develops, and I think this adds another layer of horror. (A person or group of people gains some power over the protagonist(s), and they slowly grow more abusive with that power.)

Somewhat related: does anyone else strongly dislike supernatural elements in horror movies?

It's not that I have anything against a movie exploring the idea of "what if we suddenly discovered that we live in a universe where supernatural thing X exists"; my problem is that the characters just accept this without much evidence at all.

I would love a movie, though, where they explore the more likely alternative hypotheses first (mental issues, some weird optical/acoustic phenomenon, or just someone playing a super elaborate prank), but then the evidence starts mounting, and eventually they are forced to accept that "supernatural thing X actually exists" is really the most likely hypothesis.

> These examples show that, at least in this lower-stakes setting, OpenAI’s current cybersecurity measures on an already-deployed model are insufficient to stop a moderately determined red-teamer.

I... don't actually see any non-trivial vulnerabilities here? Like, this is all stuff you can do on any cloud VM you rent?

Cool exploration though, and it's certainly interesting that OpenAI gives you such a powerful VM for free (well, not exactly free, since you are already paying for GPT-4, I guess?). But I have to agree with the assessment you found, that "it's expected that you can see and modify files on this system".
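For context, here is a minimal sketch of the kind of benign inspection being discussed, roughly what any user could do on a rented Linux box; the environment details are my assumptions, not taken from the post:

```python
# Minimal sketch of benign sandbox inspection (illustrative only;
# the environment details are assumed, not from the original post).
import os
import platform
import shutil

print(platform.platform())       # kernel/distro string of the VM
print(shutil.disk_usage("/"))    # total/used/free disk space
print(os.listdir("/"))           # top-level filesystem contents

# Writing a file, i.e. the "you can modify files" part:
with open("/tmp/hello.txt", "w") as f:
    f.write("any user of the VM can do this\n")
```

None of this crosses a security boundary; it is the expected capability set of a sandboxed VM.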

> The malware is embedded in multiple mods, some of which were added to highly popular modpacks.

Any info on how this happened? This seems like a fairly serious supply chain attack. I have heard of incidents with individual malicious packages on npm or PyPI, but not of one where multiple high-profile packages in a software repository were infected in a coordinated manner.

Uhh, this first happening in 2023 was exactly the prediction Gary Marcus made last year: https://www.wired.co.uk/article/artificial-intelligence-language

Not sure whether this instance is a capability or an alignment issue, though. Is the LLM just too unreliable, as Gary Marcus says? Or is it perfectly capable, just misaligned?

I don't see why communicating with an AI through a BCI is necessarily better than through a keyboard and screen. Just because a BCI is more ergonomic and the AI might feel more like "a part of you" doesn't mean it will magically be better aligned.

In fact, the BCI option seems far scarier to me. An AI that can read my thoughts at any time and stimulate arbitrary neurons in my brain at will? No, thanks. This scenario feels like handing the AI the "breaking out of the box" option on a silver platter.
