Algon

Algon20

Man, that's depressing. Gives too low an estimate for a good outcome though IMO. Very cool tool, though.

Algon20

A key question is if the typical goal-directed superintelligence would assign any significant value to humans. If it does, that greatly reduces the threat from superintelligence. We have a somewhat relevant article earlier in the sequence: AI's goals may not match ours.

BTW, if you're up for helping us improve the article, would you mind answering some questions? Like: do you feel like our article was "epistemically co-operative"? That is, do you think it helps readers orient themselves in the discussion on AI safety, makes its assumptions clear, and generally tries to explain rather than persuade? What's your general level of familiarity with AI safety?

Algon40

Personally, I liked "it's always been the floor". Feels real. I've certainly said/heard people say things like that in strained relationships. Perhaps "it's always the floor" would have been better. Or "it always is". Yes, that sounds right. 

Algon40

I think I got decent results from Gemini 2.5 Pro (AIStudio version). 

Her fork scraped the plate. It was a sound he’d noticed three nights ago. Not the tine-on-ceramic screech, but the specific resonance of the plate on the tabletop. A low, almost sub-audible hum that vibrated up through his own chair.

He put his water glass down, gently. The wobble was minute, a tiny seesaw over an invisible fulcrum. He watched the surface of the water shiver, the reflection of the overhead light distorting and re-forming.

“It’s the floor,” she said. She was watching his eyes, not the glass. “It’s always been the floor.”

He pushed his chair back and knelt, his cheek almost to the cool laminate. He sighted along the gap under the table’s far leg. A sliver of light. He rocked the table with his fingertips. The contact point shifted.

“It’s not the floor,” he said, standing up. “The load isn’t distributed evenly. One of the leg screws has probably settled.”

She took a bite of her salad. “We could put a coaster under it.”

He looked at the small pile of mail on the counter. A folded piece of junk mail, a cardboard insert from a package. A temporary, imperfect patch. He felt a familiar, low-grade resistance in his chest.

He went to the hall closet and came back with his toolkit. He knelt again, the cool metal of a hex key in his hand. He found the screw head on the offending leg. It was tight. He tried the adjacent one. Also tight. He lay on his back, the ceiling light bright in his eyes, and began systematically checking them all, a quarter-turn to test, then back.

From his new vantage point, the apartment was an unfamiliar landscape. The underside of her chair, the dust bunnies gathered around the baseboard, the scuff mark on the wall where the vacuum cleaner had hit it. He heard her stand up and carry her plate to the sink. The scrape of the fork was gone. The water ran.

He found it on the fourth leg. A fractional looseness. He gave the screw a half-turn, then another. The wood groaned slightly as the tension equalized. He slid out from under the table and stood, brushing dust from his shirt.

He placed his palms flat on the tabletop and leaned his weight onto it. Nothing. Rock solid. He looked toward the sink.

She was standing there, scrolling on her phone, her back to him. The TV was on, muted. A city street at night, the headlights and taillights rendered as slow, continuous ribbons of red and white light. He watched her thumb move up the screen, fast and smooth. He waited for her to turn around.

Could be tightened a fair bit. Since that is my biggest criticism, it feels pretty promising. Getting this took your prompt, a free-association mash of words for the system prompt, and telling Gemini that the first story it produced was terrible. 

Algon30

This was pretty combative. I was thinking of saying "sorry for saying this", but that would have been kind of dishonest: I thought it was better to post this as-is than to not have something like this comment exist, and those were the only realistic options. I will, however, acknowledge that this is a skill issue on my part, and I would prefer to be better at communicating non-violently. I also acknowledge that I'm being somewhat mean here, which isn't virtuous. It would make sense if you thought somewhat less of me for that.

Algon40

Looking at your map, you notice there's a long ridge that leaves your mountain at roughly the same height. Does this ridge to reach your friend's mountain? possible to follow this ridge to reach your friend's mountain?

"ridge to" -> "ridge",
"possible" to "Is it possible". 

However, when the class of models is sufficiently overparametrized (i.e., large-scale neural net architecture), we suspect that eventually have just one connected ridge of local minima: the ridge of global minima. The dimension of this single ridge is smaller than the dimension  of the whole weight space, but still quite high-dimensional. Intuitively, the large number of linearly independent zero-loss directions at a global minimum allow for many opportunities to path-connect towards a local minimum while staying within the ridge, making all local minima globally minimal.
 

"that eventually have" -> "they eventually have",

"path-connect towards" - > "path connect to".

Consider getting a linter for your browser, like https://languagetool.org, to avoid these sorts of errors in the future.

One potential choice of a subset of interpretable models  which is geometrically "nice'' is the submanifold of models which is prunable in a certain way. For example, the submanifold defined by the system of equations , where  are the output connections of a given neuron , is comprised of models which can be pruned by removing neuron number . Thus, we may be able to maximally prune a neural net (with respect to the given dataset) by using an algorithm of the following form:

  1. Find in the Rashomon manifold an optimal model that can be pruned in some way.
  2. Prune the neural net in that way.
  3. Repeat Steps 1 and 2 until there are no simultaneously prunable and optimal models anymore.  

This is clever, and also seems like the core idea of the post. You should put this info at the start. 
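For concreteness, the quoted loop can be sketched in a few lines of numpy. Everything here is my own illustration, not the post's implementation: the toy net, the tolerance, and the names are made up, "prunable" is taken to mean zeroing a neuron's output connections, and "optimal" means loss within a tolerance of the minimum.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer ReLU net; some output weights are zeroed so that
# a few neurons are genuinely redundant for the targets below.
X = rng.normal(size=(64, 3))
W1 = rng.normal(size=(3, 8))
W2 = rng.normal(size=(8, 1))
W2[5:] = 0.0                       # neurons 5-7 contribute nothing
y = np.maximum(X @ W1, 0.0) @ W2   # targets realizable by the net, so min loss is 0

def loss(W2_candidate):
    pred = np.maximum(X @ W1, 0.0) @ W2_candidate
    return float(np.mean((pred - y) ** 2))

TOL = 1e-12
active = list(range(W2.shape[0]))
pruned = True
while pruned:                      # Steps 1-3: repeat until nothing is prunable
    pruned = False
    for n in list(active):
        trial = W2.copy()
        trial[n] = 0.0             # candidate point on the "prunable" submanifold
        if loss(trial) <= TOL:     # still (near-)optimal after pruning neuron n
            W2 = trial
            active.remove(n)
            pruned = True
```

In this toy case the loop removes exactly the redundant neurons; the interesting part of the post's proposal is Step 1, where the search is over the whole Rashomon manifold rather than just zeroing weights of the current model, which this sketch does not attempt.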

Algon1116

Nah. Past-you must have had many opportunities to realize you were wrong. But the way your blog post is written, as if you were defending your past self for being reasonable, sure makes it seem like you're not even trying to analyse in detail how your thinking went wrong, because you haven't identified how you could have got the correct answer.

And you could have. The world is coherent, and five seconds of actual thinking would have revealed that "women are as physically strong as men" just makes no sense. For instance:
1) Remember the square cube law? Strength depends on size. Men are bigger than women. QED. Even if you don't remember the exact form of the law, the general pattern that "size correlates with strength" is one you recognized, yes? 
2) Even if you weren't interested in sports records, you said you were "thinking for WEEKS" about how surely men and women are equally capable at soccer. Did you never even try to resolve your curiosity here, or argue with people about this and cite statistics or ...? For that matter, did you, like, never get curious about why people don't talk about the world's fastest woman, or the world's strongest woman or so on? 

3) You were in your 30s before you realized your error. But since you were a teenager, you would have been stronger than the women in your life. Did no woman in your life ask you to open stuff for them, or lift something heavy or carry the heavy stuff when moving/setting up stuff? Did you never once test your strength against theirs, even accidentally? Like, I don't know, fight with foam swords or play shove against each other? 

4) Sexism! That's another thing. Where did this come from? What allowed for the horrible treatment of women, if not for martial might? All that rape and abuse, that terrible evil, you think women just accepted it? That we could have slave revolts, caste revolts, ethnic revolts, religious revolts, but never gender revolts because ???

5) ALL OF MILITARY HISTORY. Like. All the wars, the conquests, the mass enslavement of women and slaughter of defeated males occurred because men were irrationally viewed as more dangerous and women less so? That it was terribly rare for someone to try doubling their pool of potential fighters by arming women, because ??? That there were all these military experiments, like using horses, chariots, phalanxes, archery, guns, cannons, etc., but none of them tried (and succeeded!) with using lots of female soldiers?

A version of this post where you seriously grappled with how you did have opportunities to get things right, tried to find thoughts that would have led you down the right path, and integrated that into your analysis of how you were so badly wrong would have been much better. 

Algon20

Fair enough. If you ever get round to figuring out how this all works, it would be nice to know. 

Algon30

I see. I was confused because e.g. in a fight this certainly doesn't seem true. If your tank's plating is suddenly 2^10 times stronger, that's a huge deal and requires 2^10 times stronger offense. Realistically, of course, it would take less than that, as you'd invest in cheaper ways of disabling the tank than raw firepower. But probably not logarithmically less!

Algon20

but exponentially better defenses only require linearly better offense
 

QRD?
