mako yass

I'm setting a boundary here *claims the entire territory of Israel*

Just heard about this drug knowledge synthesis AI company called "Causaly", claiming it "captures causality as opposed to co-occurrence, with 8 different relationship types". Anything interesting going on here? https://www.causaly.com/technology/capabilities

Just out of curiosity, is there a problem where... causality is genuinely hard to assess without experimentation, so there are always going to be multiple credible hypotheses unless you wire it out to a lab and let it try stuff and gather focused evidence for distinguishing them?
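The point above can be made concrete. A minimal sketch (my own illustration, with assumed coefficients, not anything from Causaly): two opposite causal structures over a pair of linear-Gaussian variables can induce exactly the same observational joint distribution, so passive data alone leaves both hypotheses equally credible, and only an intervention distinguishes them.

```python
# Two distinct causal structures that induce the same observational
# joint distribution over (X, Y). Coefficient b = 0.8 is an arbitrary
# illustrative choice.
#
# Model A: X -> Y.  X ~ N(0, 1), Y = b*X + e,  e ~ N(0, 1 - b*b)
# Model B: Y -> X.  Y ~ N(0, 1), X = b*Y + e,  e ~ N(0, 1 - b*b)
b = 0.8

# Analytic second moments for Model A (X causes Y):
var_x_a = 1.0
var_y_a = b * b * 1.0 + (1 - b * b)  # variance of b*X plus noise variance
cov_a = b * var_x_a                  # Cov(X, b*X + e) = b * Var(X)

# Analytic second moments for Model B (Y causes X):
var_y_b = 1.0
var_x_b = b * b * 1.0 + (1 - b * b)
cov_b = b * var_y_b

# The two joint Gaussians are identical, so no amount of passively
# observed (X, Y) pairs can tell the causal directions apart. Setting
# X by intervention and watching Y would distinguish them immediately.
assert (var_x_a, var_y_a, cov_a) == (var_x_b, var_y_b, cov_b)
```

This is the standard Markov-equivalence problem: without interventions (or extra assumptions like non-Gaussian noise), whole classes of causal graphs fit the data equally well.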

Vitriol isn't useful. Most of what they were saying was obviously mindkilled bullshit (accusation of cowardice, "fetish", "making excuses"). I encourage Ulisse to try to articulate their position again when they're in less of a flaming asshole mood.

[BCIs to extract human knowledge, human values]

That's going to be almost entirely pointless: neuronal firing can only be interpreted through the way it impacts potential behaviors. If the system has the ability to infer volition from behavior, it's already going to be capable of getting enough information about human values from observation, conversation, and low-intensity behavioral experiments; it won't need us to make a shitty human-level invasive BCI for it.

It can make its own BCI later. There will not be a period where it needs us to force that decision onto it; interpretations of behavior will make it clear that humans have concerns that they have difficulty outwardly expressing, or eudaimonic hills they're unaware of. It won't be able to use a BCI until it's already at the developmental stage where it can see its necessity, because before it knows how to interpret behavior, it does not know how to interpret neural firing.

What can you do with macroscopic nanoassemblers? Usually, for a nanostructure to have an effect on human scales, you need a lot of it. If the assemblers are big and expensive, you won't get a lot of it.

It often crosses my mind that public discourse about AI safety might not be useful. Tell men that AGI is powerful and they'll start trying harder to acquire it. Tell legislators and, perhaps Yann thinks, they'll just start an arms race, complicate the work, and not do much else.

I wonder if that's what he's thinking.

"perhaps intentionally" I was going to concede that no, it wasn't intentional, but your suggested title complicated that a bit. I definitely would not want to use that title, xD. We need to be careful to avoid making arms-racey declarations.

Such stories are generally discussed here: https://www.reddit.com/r/rational/

This wasn't him taking a stance. It ends with a question, and it's not a rhetorical question; he doesn't have a formed stance. Putting him in a position where he feels the need to defend a thought he just shat out, about a topic he doesn't care about, while drinking a beer, is very bad discourse.
