Comments

Cedar · 1y

Got it. WORTH.

Not wearing glasses has huge social-signaling benefits: people somehow treat me nicer and listen to me more. As usual, this is based on my perceptions and may be a placebo effect.

If you roughly break even per hour, you should definitely get it.

Cedar · 1y

You can get cold-water, pressure-driven bidets for like $60:

https://www.amazon.com/Veken-Ultra-Slim-Non-Electric-Adjustable-Attachment/dp/B082HFS8KT/ref=sr_1_6?keywords=bidet&qid=1678545307&sr=8-6

Actually, never mind, this one is only $29.

No clue about the quality though. Might be better to go for something around $20.

Installing mine cost me around 2 hours. You could use it for a year.

Which means the cost per hour actually comes down to something like $7.5, which is way below minimum wage!
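A minimal sketch of this kind of back-of-the-envelope comparison, assuming the only dollar cost is the purchase price and the only time cost is the install (the comment's figure presumably nets in other factors, like going for a cheaper model):

```python
# Back-of-the-envelope: what hourly "wage" does a DIY install imply?
# Assumptions (mine, for illustration): purchase price is the only dollar
# cost, and installation time is the only time cost.

def implied_hourly_cost(price_usd: float, install_hours: float) -> float:
    """Purchase price spread over the hours of installation work."""
    return price_usd / install_hours

# With a $29 bidet and a 2-hour install:
print(implied_hourly_cost(29, 2))  # 14.5
```

Swapping in a cheaper bidet or a longer install changes the implied rate proportionally, which is the whole lever in the "is it worth your time" comparison.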

Cedar · 1y

Checked out Talon. Looks amazing for people with disabilities.

Now, as a person with no obvious disabilities, I wonder if it's still worth learning for:

  1. just in case
  2. maybe eye tracking etc. would make it easier to do things on my computer vs. mouse & keyboard.

any opinions?

Cedar · 1y

Ty for the catch. Used ChatGPT to generate the HTML, and I think when I asked it to add the CSS, it ran out of output length before it could give me everything.

Cedar · 1y

somewhat far-fetched guess:

internet -> everybody does astrology now -> zebra gets confused with Libra -> replacement with Zulu

Cedar · 1y

Oh! This is really good to know. Thank you so much for speaking up!

Answer by Cedar · Mar 07, 2023

Friend of mine: "people listed seem cool; prolly easy to meet without spending money tho"

Cedar · 1y

Wup time to edit that : D

Got a case of "my experiences are universal" going on in here.

Cedar · 1y

Don’t think about big words or small words—think about which particular word you mean.

I think this is good advice for most people who are used to being forced to use big words by school systems, but I personally follow a different version of this.

I see compression as an important feature of communication, and there are always tradeoffs to be made between

  1. Making my sentence short
  2. Conveying exactly what I want to say.

And sometimes I settle for transferring a "good enough" version of my idea, because communicating all the hairy details takes too much time, energy, or social credit. I'm always scared of taking up too much of people's attention or overrunning their working memory.

Cedar · 1y
  • AI is pretty safe: unaligned AGI has a mere 7% chance of causing doom, plus a further 7% chance of causing short term lock-in of something mediocre
  • Your opponent risks bad lock-in: If there’s a ‘lock-in’ of something mediocre, your opponent has a 5% chance of locking in something actively terrible, whereas you’ll always pick the good mediocre lock-in world (and mediocre lock-ins are either 5% as good as utopia, or -5% as good)
  • Your opponent risks messing up utopia: In the event of aligned AGI, you will reliably achieve the best outcome, whereas your opponent has a 5% chance of ending up in a ‘mediocre bad’ scenario then too.
  • Safety investment obliterates your chance of getting to AGI first: moving from no safety at all to full safety means you go from a 50% chance of being first to a 0% chance
  • Your opponent is racing: Your opponent is investing everything in capabilities and nothing in safety
  • Safety work helps others at a steep discount:  your safety work contributes 50% to the other player’s safety 
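One illustrative way to turn the quoted parameters into numbers (this is my own reading of the bullet points, not the original model's equations; all values are expressed as fractions of utopia's value):

```python
# Illustrative reading of the quoted parameters (my assumptions, not the
# original model). All payoffs are fractions of utopia's value.

P_DOOM_UNALIGNED = 0.07       # unaligned AGI causes doom
P_LOCKIN_UNALIGNED = 0.07     # unaligned AGI causes mediocre lock-in
MEDIOCRE_GOOD = 0.05          # good mediocre lock-in: 5% as good as utopia
MEDIOCRE_BAD = -0.05          # bad mediocre lock-in: -5% as good
P_OPP_TERRIBLE_LOCKIN = 0.05  # opponent locks in something actively terrible
P_OPP_FUMBLES_UTOPIA = 0.05   # opponent lands in mediocre-bad even if aligned

# Expected value of the lock-in branch, by who gets there first:
ev_lockin_you = MEDIOCRE_GOOD  # you always pick the good mediocre world
ev_lockin_opp = ((1 - P_OPP_TERRIBLE_LOCKIN) * MEDIOCRE_GOOD
                 + P_OPP_TERRIBLE_LOCKIN * MEDIOCRE_BAD)

# Expected value of the aligned-AGI branch, by who gets there first:
ev_aligned_you = 1.0  # you reliably achieve the best outcome
ev_aligned_opp = ((1 - P_OPP_FUMBLES_UTOPIA) * 1.0
                  + P_OPP_FUMBLES_UTOPIA * MEDIOCRE_BAD)

print(ev_lockin_you, round(ev_lockin_opp, 4))    # 0.05 0.045
print(ev_aligned_you, round(ev_aligned_opp, 4))  # 1.0 0.9475
```

On these numbers the gap between you and your opponent winning is small in every branch, which is what makes the racing tradeoff (safety investment vs. chance of being first) the dominant term.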

This is more a personal note / call for somebody to examine my thinking processes, but I've been thinking really hard about putting hardware security methods to work. Specifically, spreading knowledge far and wide about how to:

  1. allow hardware designers / manufacturers to have easy, total control over who uses their product, for what, and for how much, throughout the supply chain
  2. make it easy to secure AI related data (including e.g. model weights and architecture) and difficult to steal.

This sounds like it would improve every aspect of the racey-environment conditions, except:

Your opponent is racing: Your opponent is investing everything in capabilities and nothing in safety

The exact effect of this is unclear. On the one hand, if racey, zero-sum thinking actors learn that you're trying to "restrict" or "control" AI hardware supply, they'll totally amp up their efforts. On the other hand, you've also given them one more thing to worry about (their hardware supply).

I would love to get some frames on how to think about this.
