Brendan Long

Answer by Brendan Long

This economist thinks the reason is that imports were up in January and the calculation treats that as lower domestic production rather than as increased inventories:

OK, so what can we say about the current forecast of -2.8% for Q1 of 2025? First, almost all of the data in the model right now are for January 2025 only. We still have 2 full months in the quarter to go (in terms of data collection). Second, the biggest contributor to the negative reading is a massive increase in imports in January 2025.

[...]

The Atlanta Fed GDPNow model is doing exactly that, subtracting imports. However, it’s likely they are doing it incorrectly. Those imports have to show up elsewhere in the GDP equation. They will either be current consumption, or added to business inventories (to be consumed in the future). My guess, without knowing the details of their model, is that it’s not picking up the change in either inventories or consumption that must result from the increased imports.

https://economistwritingeveryday.com/2025/03/05/understanding-the-projected-gdp-decline/
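The accounting identity makes the suspected error concrete: in GDP = C + I + G + (X − M), imports are subtracted only because they were already counted in consumption or inventory investment. A minimal sketch with made-up numbers (all figures hypothetical, not real data):

```python
# GDP expenditure identity: GDP = C + I + G + (X - M)
def gdp(c, i, g, x, m):
    return c + i + g + x - m

# Hypothetical baseline, in $billions (illustrative only).
base = gdp(c=100, i=20, g=30, x=10, m=15)

# Firms import an extra 5 and put it straight into inventories:
# imports rise by 5, but inventory investment (part of I) rises
# by 5 too, so measured GDP is unchanged.
correct = gdp(c=100, i=25, g=30, x=10, m=20)

# A model that registers the import surge but misses the matching
# inventory build wrongly infers a GDP decline of 5.
mistaken = gdp(c=100, i=20, g=30, x=10, m=20)

print(base, correct, mistaken)  # 145 145 140
```

This is the blog post's point: the subtraction of imports is correct in itself, but it must be matched by a pickup elsewhere in the identity, and the worry is that the model is missing that offsetting term.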

I updated this after some more experimentation. I now bake them uncovered for 50 minutes rather than doing anything more complicated, and I added some explicit notes about additional seasonings. I also usually do a step where I salt and drain the potatoes, so I mentioned that in the variations.

During our evaluations we noticed that Claude 3.7 Sonnet occasionally resorts to special-casing in order to pass test cases in agentic coding environments like Claude Code. Most often this takes the form of directly returning expected test values rather than implementing general solutions, but also includes modifying the problematic tests themselves to match the code’s output.

Claude officially passes the junior engineer Turing Test?

But if we are merely mathematical objects, from whence arises the feelings of pleasure and pain that are so fundamental?

My understanding is that these feelings are physical things that exist in your brain (chemical, electrical, structural features, whatever). I think of this like how bits (in a computer sense) are an abstract thing, but if you ask "How does the computer know this bit is a 1?", the answer is that it's a structural feature of a hard drive or an electrical signal in a memory chip.

Allowing for charitable donations as an alternative to simple taxation does shift the needle a bit but not enough to substantially alter the argument IMO.

Not to mention that allowing for charitable donations as an alternative would likely lead to everyone setting up charities for their parents to donate to.

The resistance to such a policy is largely about ideology rather than about feasibility. It is about the quiet but pervasive belief that those born into privilege should remain there.

I don't think this is true at all. There is an ideological argument for inheritance, but it's not the one you're giving.

The ideological argument is that in a system with private property, people should be able to spend the money they earn in the ways they want, and one of the things people most want is to spend money on their children. The important person served by inheritance law is the person who made the money, not their inheritors (who you rightly point out didn't do anything).

Answer by Brendan Long

Sam Altman is almost certainly aware of the arguments and just doesn't agree with them. The OpenAI emails are helpful background here: at least back when OpenAI was founded, Elon Musk seemed to take AI safety relatively seriously.

Elon Musk to Sam Teller - Apr 27, 2016 12:24 PM

History unequivocally illustrates that a powerful technology is a double-edged sword. It would be foolish to assume that AI, arguably the most powerful of all technologies, only has a single edge.

The recent example of Microsoft's AI chatbot shows how quickly it can turn incredibly negative. The wise course of action is to approach the advent of AI with caution and ensure that its power is widely distributed and not controlled by any one company or person.

That is why we created OpenAI.

They also had a specific AI safety team relatively early on, and the emails explicitly state the reasons:

  • Put increasing effort into the safety/control problem, rather than the fig leaf you've noted in other institutions. It doesn't matter who wins if everyone dies. Related to this, we need to communicate a "better red than dead" outlook — we're trying to build safe AGI, and we're not willing to destroy the world in a down-to-the-wire race to do so.

They also explicitly reference this Slate Star Codex article, and I think Elon Musk follows Eliezer's Twitter.

I don't understand why perfect substitution matters. If I'm considering two products, I only care which one provides what I want most cheaply, not the exact factor between them.

For example, if I want to buy a power source for my car and have two options:

Engine: 100x horsepower, 100x torque
Horse: 1x horsepower, 10x torque

If I care most about horsepower, I'll buy the engine, and if I care most about torque, I'll also buy the engine. The engine isn't a "perfect substitute" for the horse, but I still won't buy any horses.

Maybe this has something to do with prices, but it seems like that just makes things worse, since engines are cheaper than horses (and AIs are likely to be cheaper than humans).
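The decision rule being described can be sketched directly: for whatever attribute the buyer cares about, pick the option that delivers the most of it per dollar. The prices below are made up purely for illustration:

```python
# Hypothetical options: attribute levels and prices (numbers illustrative only).
options = {
    "engine": {"horsepower": 100, "torque": 100, "price": 5000},
    "horse":  {"horsepower": 1,   "torque": 10,  "price": 8000},
}

def best_buy(attribute):
    # Choose the option that delivers the most of the attribute per dollar spent.
    return max(options, key=lambda name: options[name][attribute] / options[name]["price"])

print(best_buy("horsepower"))  # engine
print(best_buy("torque"))      # engine
```

Nothing in this rule requires the two options to be perfect substitutes, or requires knowing the exact substitution ratio between them; one option can simply dominate for every attribute a buyer cares about.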

Location: Remote. Timaeus will likely be located in either Berkeley or London in the next 6 months, and we intend to sponsor visas for these roles in the future.

Will all employees be required to move to Berkeley or London, or will they have the option to continue working remotely?

I think the biggest tech companies collude to fix wages so that they are sufficiently higher than every other company's salaries to stifle competition

The NYT article you cite says the exact opposite, that Big Tech companies were sued for colluding to fix wages downward, not upward. Why would engineers sue if they were being overpaid?
