Timothy Underwood

Comments
Which investments for aligned-AI outcomes?
Timothy Underwood · 1y

You might capture value from that, relative to broad equities, if the world ends up both severely deflationary due to falling costs and in a situation where current publicly traded companies are mostly unable to compete in the new context.

Would you have a baby in 2024?
Timothy Underwood · 2y

Yeah, but assuming your p(doom) isn't really high, this needs to be balanced against the chance that AI goes well and your kid has a really, really, really good life.

I don't expect my daughter to ever have a job, but I think that in more than half of the worlds that seem possible to me right now, she has a very satisfying life -- one that is better than it would be otherwise in part because she never has a job.

The Offense-Defense Balance Rarely Changes
Timothy Underwood · 2y

I'd note that acoup's model of the primacy of fires making defence untenable between high-tech nations, while not completely disproven by the Ukraine war, is a hypothesis that seems much less likely to be true, or less true, than it did in early 2022. The Ukraine war has in most cases shown a strong advantage for a prepared defender, and the difficulty of taking urban environments.

The current Israel-Hamas war shows a similar tendency, where Israel is moving very slowly into the core urban concentrations (i.e. it has surrounded Gaza City so far, but not really entered it), even though its superiority in resources relative to its opponent is vastly greater than Russia's advantage over Ukraine was.

The Offense-Defense Balance Rarely Changes
Timothy Underwood · 2y

I'd expect per capita war deaths to have nothing to do with the offence/defence balance as such (unless the defence gets so strong that wars simply don't happen, in which case it goes to zero).

Per capita war deaths in this context are about the ability of states to mobilize populations, and about how much damage the warfare does to the civilian population that the battle occurs over. I don't think there is any uncomplicated connection between that and something like 'how much bigger does your army need to be to successfully win against a defender who has had time to get ready'.

How do you feel about LessWrong these days? [Open feedback thread]
Timothy Underwood · 2y

This matches my sense of how a lot of people seem to have... noticed that GPT-4 is fairly well aligned to what the OpenAI team wants it to be, in ways that Yudkowsky et al. said would be very hard, and yet still don't view this as, at a minimum, a positive sign?

I.e. problems of the class 'I told the intelligence to get my mother out of the burning building and it blew her up so the dead body flew out the window, because I wasn't actually specific enough' just don't seem like they're a major worry anymore?

Usually, when GPT-4 doesn't understand what I'm asking, I wouldn't be surprised if a human were confused as well.

What Is Childhood Supposed To Be?
Timothy Underwood · 2y

Weirdly, reading the last paragraphs here made me rather nostalgic for the two or three weeks I spent doing practice SAT tests. I think this is because my childhood definitely was not optimized for getting into a good university (I was homeschooled, and ended up transferring to Berkeley based on perfect grades from two years at a community college).

The world where LLMs are possible
Timothy Underwood · 2y

I mean, it kind of does fine at arithmetic? 

I just gave GPT-3.5 three random x-plus-y questions, and it managed one that I didn't want to bother doing in my head.
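
(If anyone wants to repeat the spot check, here is a minimal sketch using the openai Python client, v1 or later; the model name, prompt wording, and operand range are just illustrative, and it assumes an API key in your environment.)

    # Minimal sketch: quiz a chat model on a few random "x plus y" questions.
    # Assumes the openai package (>= 1.0) and OPENAI_API_KEY set in the environment.
    import random
    from openai import OpenAI

    client = OpenAI()
    for _ in range(3):
        x, y = random.randint(100, 9999), random.randint(100, 9999)
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative model name
            messages=[{"role": "user",
                       "content": f"What is {x} + {y}? Answer with just the number."}],
        )
        answer = reply.choices[0].message.content.strip()
        print(f"{x} + {y} = {x + y}; model said: {answer}")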

The Case for Overconfidence is Overstated
Timothy Underwood · 2y

I think the issue is that creating an incentive system where people are rewarded for being good at an artificial game that has very little connection to their real-world circumstances isn't going to tell us anything very interesting about how rational people are in the real world, under their real constraints.

I have a friend who for a while was very enthused about calibration training, and at one point he even got a group of us from the local meetup, plus Phil Hazeldon, to do a group exercise using a program he wrote to score our calibration on numeric questions drawn from Wikipedia. The thing is that while I learned from this to be way less confident about my guesses (which improves rationality), it is actually, for the reasons specified, useless to create 90% confidence intervals when making important real-world decisions.

Should I try training for a new career? The true 90% confidence interval on any difficult-to-pursue idea that I am seriously considering almost certainly includes 'you won't succeed, and the time you spend will be a complete waste' and 'you'll do really well, and it will seem like an awesome decision in retrospect'.
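
(For concreteness, the kind of scoring that exercise used is easy to sketch. This is a hypothetical reimplementation in Python, not my friend's actual program: it just checks how often the true value landed inside the stated 90% intervals.)

    # Hypothetical sketch of a 90%-interval calibration scorer, not the actual program.
    # Each answer is (low guess, high guess, true value) for one numeric question.
    def calibration_score(answers, target=0.90):
        hits = sum(1 for low, high, truth in answers if low <= truth <= high)
        rate = hits / len(answers)
        print(f"{hits}/{len(answers)} intervals contained the truth "
              f"({rate:.0%}; target {target:.0%})")
        if rate < target:
            print("Overconfident: the intervals were too narrow.")
        elif rate > target:
            print("Underconfident: the intervals were wider than needed.")

    # Made-up example answers; true values: Everest is 8,849 m tall,
    # Canadian Confederation was 1867, the Moon averages ~384,000 km away.
    calibration_score([
        (5000, 12000, 8849),
        (1800, 1880, 1867),
        (50000, 120000, 384000),
    ])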

Why is violence against AI labs a taboo?
Timothy Underwood · 2y

If you think P(doom) is 1, you probably don't believe that a terrorist bombing of anything will do enough damage to be useful. That is probably one of EY's cruxes on violence.

Why is violence against AI labs a taboo?
Timothy Underwood · 2y

You don't become generally viewed by society as a defector when you file a lawsuit. Private violence does define you that way, and thus marks you as an enemy of ethical cooperators, which is unlikely to be a good long-term strategy.

Posts

EA novel published on Amazon · 2y
I've started publishing the novel I wrote to promote EA · 3y
Request for feedback on sample blurbs for the EA fantasy novel I wrote · 3y
I wrote a fantasy novel to promote EA: More Chapters · 3y
I’ve written a Fantasy Novel to Promote Effective Altruism · 3y
I'm writing a novel to promote Effective Altruism · 3y
Welcome to ACX/Less Wrong Budapest · 4y
The Accord · 4y
When can Fiction Change the World? · 5y
I'm looking for research looking at the influence of fiction on changing elite/public behaviors and opinions (question) · 5y