Robert Cousineau

I've donated $5k. LessWrong (and the people it brings together) deserves credit for the majority of my intellectual growth over the last 6 years. I cannot think of a higher signal:noise place to learn, nor of a more enjoyable and growth-inducing community than the one that has grown around it.

Thank you both to those who work on it directly and to those who contribute to it!

 

Lighthaven's wonder is self-evident.

I'm honestly really skeptical of the cost-effectiveness of pedestrian tunnels as a form of transportation. Asking Claude for estimates on tunnel construction costs gets me the following:

A 1-mile pedestrian tunnel would likely cost $15M-$30M for basic construction ($3,000-$6,000 per foot based on utility tunnel costs), plus 30% for ventilation, lighting, and safety systems ($4.5M-$9M), and ongoing maintenance of ~$500K/year.

To put this in perspective: Converting Portland's 400 miles of bike lanes to tunnels would cost $7.8B-$15.6B upfront (1.1-2.3× Portland's entire annual budget) plus $200M/year in maintenance. For that same $15.6B, you could:

  • Build ~7,800 miles of protected surface bike lanes ($2M/mile)
  • Fund Portland's bike infrastructure maintenance for 31 years
  • Give every Portland resident an e-bike and still have $14B left over

Even for a modest 5-mile grid serving 10,000 daily users (optimistic for suburbs), that's $10K-$20K per user in construction costs alone.

Alternative: A comprehensive street-level mural program might cost $100K-$200K per mile, achieving similar visual variety at ~1% of the tunnel cost.
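
For anyone who wants to poke at the numbers, here's a quick back-of-the-envelope sketch of the arithmetic above. The per-mile and maintenance figures are just Claude's estimates quoted above, not independent data:

```python
# Back-of-the-envelope check of the tunnel-vs-bike-lane numbers above.
# All dollar figures are the assumptions from Claude's estimate, not measured data.

TUNNEL_BASE_PER_MILE = (15e6, 30e6)   # $15M-$30M basic construction per mile
SYSTEMS_MULTIPLIER = 1.30             # +30% for ventilation, lighting, safety
TUNNEL_MAINT_PER_MILE = 500e3         # ~$500K/year maintenance per mile
BIKE_LANE_PER_MILE = 2e6              # protected surface bike lane
PORTLAND_BIKE_MILES = 400

# Converting all of Portland's bike lanes to tunnels
low, high = (c * SYSTEMS_MULTIPLIER * PORTLAND_BIKE_MILES for c in TUNNEL_BASE_PER_MILE)
print(f"Tunnel conversion: ${low/1e9:.1f}B-${high/1e9:.1f}B")                 # $7.8B-$15.6B
print(f"Annual maintenance: ${TUNNEL_MAINT_PER_MILE*PORTLAND_BIKE_MILES/1e6:.0f}M/year")  # $200M
print(f"Same money in surface lanes: {high/BIKE_LANE_PER_MILE:,.0f} miles")    # 7,800

# Per-user construction cost for a modest 5-mile grid, 10,000 daily users
grid_low, grid_high = (5 * c * SYSTEMS_MULTIPLIER for c in TUNNEL_BASE_PER_MILE)
print(f"Per daily user: ${grid_low/10_000:,.0f}-${grid_high/10_000:,.0f}")     # ~$10K-$20K
```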

I'll preface this by saying that what follows is low confidence - I'm not well versed in the topics in question (reality fluid, consciousness, quantum mechanics, etc.).

Nevertheless, I don't see how the prison example is applicable.  In the prison scenario there's an external truth (which prisoner was picked) that exists independent of memory/consciousness. The memory wipe just makes the prisoner uncertain about this external truth.

But this post is talking about a scenario where your memories/consciousness are the only thing that determines which universes count as 'you'. 

There is no external truth about which universe you're really in - your consciousness itself defines (encompasses?) which universes contain you. So, when your memories become more coarse, you're not just becoming uncertain about which universe you're in - you're changing which universes count as containing you, since your consciousness is the only arbiter of this.

A cool way to measure dishonesty: how many people claim to have completed an impossible five-minute task.

This has since been Community Noted, fairly in my understanding.

This graph is not about how many people falsely reported completing a task within 5 minutes; it shows how many people completed the whole task even though it took them more than 5 minutes (which was all the time they were being paid for).
 

Derek Lowe, I believe, writes the closest thing to a Matt Levine for pharma (and chemistry): https://www.science.org/blogs/pipeline

 

He has a really fun-to-read series titled "Things I Won't Work With" where he writes about all manner of dangerous chemicals: https://www.science.org/topic/blog-category/things-i-wont-work-with

In the limit (what might be considered the ‘best imaginable case’), we might imagine researchers discovering an alignment technique that (A) was guaranteed to eliminate x-risk and (B) improve capabilities so clearly that they become competitively necessary for anyone attempting to build AGI. 

I feel like throughout this post, you are ignoring that agents, "in the limit", are (likely) provably taxed by having to be aligned to goals other than their own. An agent with utility function "A" is definitely going to be less capable at achieving "A" if it is also required to satisfy utility function "B". I respect that current LLMs are not best described as having a singular consistent goal function; however, "in the limit", that is how they will be best described.
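
To make that intuition concrete (my framing, not the original post's): write $\Pi$ for the set of all policies and $\Pi_B \subseteq \Pi$ for the subset that also keeps utility function $B$ above some alignment threshold. Then:

$$\max_{\pi \in \Pi_B} \mathbb{E}_\pi[A] \;\le\; \max_{\pi \in \Pi} \mathbb{E}_\pi[A],$$

since a maximum over a subset can never exceed the maximum over the full set. The gap between the two sides is the alignment tax, and it is zero only in the special case where some $A$-optimal policy already satisfies the $B$ constraint.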

Answer by Robert Cousineau

I stopped paying for ChatGPT earlier this week, while thinking about the departures of Jan and Daniel.

Before they left, I was able to say to myself, "Well, there are people smarter than me, with worldviews similar to mine, who have far more information about OpenAI than I do, and they think it is not a horrible place, so 20 bucks a month is probably fine." I am no longer able to do that.

They have explicitly sounded the best alarm they reasonably can right now. I should listen!

Market odds are currently at 54% that 2024 is hotter than 2023: https://manifold.markets/SteveRabin/will-the-average-global-temperature?r=Um9iZXJ0Q291c2luZWF1

I have some substantial limit orders ±8% from there if anyone strongly disagrees.

I like the writeup, but recommend posting it directly to LessWrong. The writeup is of much higher quality than your summary, and would be well suited to inline comments and the other features of the site.
