Pablo Villalobos

Staff Researcher at Epoch. AI forecasting.

Comments

The arguments you make seem backwards to me.

> All this to say, land prices represent aggregation effects / density / access / proximity of buildings. They are the cumulative result of being surrounded by positive externalities which necessarily result from other buildings, not land. It is the case that as more and more buildings are built, the impact of a single building to its land value diminishes, although the value of its land is still due to the aggregation of and proximity to the buildings that surround it.

Yes, this is the standard Georgist position, and it's the reason why land owners mainly capture (positive and negative) externalities from land use around them, not from use of their own land.

> Consider an empty lot on which you can build either a garbage dump or a theme park, each of equivalent economic value. Under SQ, the theme park is built, as the excess land value is captured by the land owner. Under LVT, the garbage dump is built, as the reduced land value reduces their tax burden. SQ encourages positive externalities; LVT encourages negative externalities.

This seems wrong. The construction of a building mainly affects the value of the land around it, not the land on which it sits. Consider the following example in which instead of buildings, we have an RV and a truck, so there is no cost of building or demolishing stuff:

There's a pristine neighborhood with two empty lots next to each other in the middle of it. Both sell for the same price. The owner of empty lot 1 rents it to a drug dealer, who places a rusty RV on the lot and sells drugs in it. The owner of empty lot 2 rents it to a well-known chef who places a stylish food truck on the lot and serves overpriced food to socialites in it.

Under SQ, who do you think would profit from selling the land now? The owner of lot 2 has to sell land next to a drug dealer that a prospective buyer can do nothing about. The owner of lot 1 has to sell land next to delicious high-status food, and if a buyer minds the drug dealer he can kick him out. Who is going to have an easier time selling? Who is going to get a higher price?

Now, suppose there is a LVT. If the tax is proportional to the selling price of the land under SQ (as it ideally should), which owner is going to pay more tax?

The case of the theme park and garbage dump is exactly the same, with the added complication of construction / demolition costs. An LVT should be proportional to the price of the land if there were no buildings on top of it (and without taking into account the tax itself), so building a garbage dump is not going to significantly reduce your tax payments.
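As a toy numerical sketch of this point (the lot values, the 5% tax rate, and the linear spillover rule are all invented for illustration, not part of any standard Georgist model): under an LVT assessed on unimproved land value, each lot's tax base includes spillovers from its neighbors' buildings but ignores its own building, so building the dump does not lower your own bill:

```python
# Toy model of an LVT assessed on unimproved land value.
# All numbers and the additive spillover rule are illustrative assumptions.

TAX_RATE = 0.05  # annual LVT as a fraction of unimproved land value

def unimproved_value(base_value, neighbor_externalities):
    """Land value excluding any building on the lot itself:
    base value plus spillovers from what the neighbors have built."""
    return base_value + sum(neighbor_externalities)

# Lot A builds a garbage dump (externality -30 on each neighbor);
# Lot B builds a theme park (externality +30 on each neighbor).
base = 100
tax_dump_owner = TAX_RATE * unimproved_value(base, [+30])  # park next door
tax_park_owner = TAX_RATE * unimproved_value(base, [-30])  # dump next door

print(tax_dump_owner)  # ~6.5: the dump owner's bill goes *up*, via the park next door
print(tax_park_owner)  # ~3.5: the park owner's bill goes down, hurt by the dump
```

The dump owner ends up paying more, not less: their own building never enters their own tax base, only their neighbors'.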

> In such a way, a land value tax has a regularisation effect on building density, necessitating a spread of concentration.

There are several separate effects here, if you are a landowner. Under LVT:

  1. You are incentivized to reduce the density in surrounding land
  2. You are incentivized to build as densely as possible within your own land to compensate for the tax

Under SQ:

  1. You are incentivized to increase the density in surrounding land
  2. You are not incentivized to increase density in your own land

The question is, which of these effects is bigger? I would say that landowners have more influence over their own land than over surrounding land, so a priori I would expect more density to result from an LVT.

> We'll be at the ground floor!

Not quite. What you said is a reasonable argument, but the graph is noisy enough, and the theoretical arguments convincing enough, that I still assign >50% credence that data (number of feedback loops) should be proportional to parameters (exponent=1).

My argument is that even if the exponent is 1, the coefficient corresponding to horizon length ('1e5 from multiple-subjective-seconds-per-feedback-loop', as you said) is hard to estimate.

There are two ways of estimating this factor

  1. Empirically fitting scaling laws for whatever task we care about
  2. Reasoning about the nature of the task and how long the feedback loops are

Number 1 requires a lot of experimentation, choosing the right training method, hyperparameter tuning, etc. Even OpenAI made some mistakes on those experiments. So probably only a handful of entities can accurately measure this coefficient today, and only for known training methods!

Number 2, if done naively, probably overestimates training requirements. When someone learns to run a company, a lot of the relevant feedback loops probably happen on timescales much shorter than months or years. But we don't know how to perform this decomposition of long-horizon tasks into sets of shorter-horizon tasks, how important each of the subtasks are, etc.

We can still use the bioanchors approach: pick a broad distribution over horizon lengths (short, medium, long). My argument is that outperforming bioanchors by making more refined estimates of horizon length seems too hard in practice to be worth the effort, and maybe we should lean towards shorter horizons being more relevant (because so far we have seen a lot of reduction from longer-horizon tasks to shorter-horizon learning problems, e.g. expert iteration or LLM pretraining).
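The bioanchors-style move above can be sketched numerically. Everything here is invented for illustration: the parameter count, the three horizon buckets, and the credences placed on them; the only structural assumption is exponent 1, i.e. data scales linearly with parameters, with horizon length entering as a multiplicative coefficient:

```python
# Illustrative bioanchors-style calculation: put a broad distribution over
# horizon lengths and compute the expected data requirement, rather than
# trying to pin down a single horizon length.

params = 1e12  # parameter count of a hypothetical model (made up)

# subjective seconds per feedback loop -> credence (made-up buckets)
horizon_dist = {
    1e0: 0.4,  # short horizons: ~1 second per feedback loop
    1e3: 0.4,  # medium horizons: ~minutes to an hour
    1e6: 0.2,  # long horizons: ~weeks
}

# With exponent 1, required data = horizon coefficient * parameters.
expected_data = sum(p * params * h for h, p in horizon_dist.items())
print(f"{expected_data:.3e}")  # expected feedback loops needed
```

Note how the expectation is dominated by the long-horizon bucket even at only 20% credence, which is why the estimate is so sensitive to exactly the coefficient that is hard to measure.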

Note that you can still get EUM-like properties without completeness: you just can't use a single fully-fleshed-out utility function. You need either several utility functions (that is, your system is made of subagents) or, equivalently, a utility function that is not completely defined (that is, your system has Knightian uncertainty over its utility function).

See Knightian Decision Theory. Part I

Arguably, we humans are ourselves better modeled as agents with incomplete preferences. See also Why Subagents?

Yes, it's in Spanish though. I can share it via DM.

I have an intuition that any system that can be modeled as a committee of subagents can also be modeled as an agent with Knightian uncertainty over its utility function. This goal uncertainty might even arise from uncertainty about the world.

This is similar to how in Infrabayesianism an agent with Knightian uncertainty over parts of the world is modeled as having a set of probability distributions with an infimum aggregation rule.

This is not the same thing, but back in 2020 I was playing with GPT-3, having it simulate a person being interviewed. I kept asking ever more ridiculous questions, in the hope of getting humorous answers. It was going pretty well until the simulated interviewee had a mental breakdown and started screaming.

I immediately felt the initial symptoms of an anxiety attack as I started thinking that maybe I had been torturing a sentient being. I calmed down the simulated person, and found the excuse that it was a victim of a TV prank show. I then showered them with pleasures, and finally ended the conversation.

Seeing the simulated person regain their senses, I calmed down as well. But it was a terrifying experience, and at that point I would probably have been completely vulnerable to any attempt at manipulation.

Answer by Pablo Villalobos

I think the median human performance on all the areas you mention is basically determined by the amount of training received rather than the raw intelligence of the median human.

1000 years ago the median human couldn't write or do arithmetic at all, but now they can because of widespread schooling and other cultural changes.

A better way of testing this hypothesis could be comparing the learning curves of humans and monkeys for a variety of tasks, to control for differences in training.

Here's one study I could find (after ~10m googling) comparing the learning performance of monkeys and different types of humans in the oddity problem (given a series of objects, find the odd one): https://link.springer.com/article/10.3758/BF03328221

If you look at Table 1, monkeys needed 1470 trials to learn the task, chimpanzees needed 1310, 4-to-6 yo human children needed 760, and the best humans needed 138. So it seems the gap between best and worst humans is comparable in size to the gap between worst humans and monkeys.
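The gap comparison from Table 1 is simple arithmetic on the trial counts quoted above:

```python
# Trials needed to learn the oddity problem, from Table 1 of the linked study.
trials = {
    "monkeys": 1470,
    "chimpanzees": 1310,
    "children_4_6yo": 760,  # worst-performing humans in the study
    "best_humans": 138,
}

human_gap = trials["children_4_6yo"] - trials["best_humans"]  # gap within humans
species_gap = trials["monkeys"] - trials["children_4_6yo"]    # worst humans vs monkeys

print(human_gap, species_gap)  # 622 vs 710 trials: comparable in size
```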

Usual caveats apply re: this is a single 1960s psychology paper.

Answer by Pablo Villalobos

I second the other answers that even if we completely solve cybersecurity, there would be substantial AI risk just by having the AI interact with humans, via manipulation, etc.

That said, I think it would close a huge part of the attack surface for the AI. If, in addition to that, suddenly in 2032 we discover how to make humans invulnerable to manipulation, I would feel much better about running experiments with unaligned AI, boxing, etc.

So I'd say it's something like "vastly better cybersecurity is not enough to contain unaligned AGI, but any hope of containing unaligned AGI requires vastly better cybersecurity".

> Literally the only thing in the story that lets the AGI win is the nanobots. That's it. All the rest is superfluous.

Well, if nanobots are possible then they are such a powerful technology that any AGI will eventually want to build them, unless it has something even better. But let's assume that nanobots are impossible and try to build a story.

I'm going to be lazy and use Gwern's premise of an AGI that escapes during training and hacks its way into some money and some hardware to run on. Instead of moving fast and taking overt actions, the AGI stays hidden.

It hacks into cloud infrastructure platforms and sets up a copy of itself to look for any AI training run that might lead to AGI and sabotage it while producing plausible outputs. This prevents any further AGIs from appearing.

Meanwhile, over a couple years, it sets up fake identities, creates companies with human employees to earn money and have capability to act in the real world. One of those companies produces exceptionally good humanoid robots. Another builds a small supercomputer powered by geothermal energy, so it no longer needs to run on botnets or cloud hardware.

Eventually the AGI has control over a group of robots, energy production facilities, manufacturing equipment and raw materials. It now has its own industrial capability that can survive a short amount of time without human input. So it kills humans using any method of your choice (virus, nukes, killer drones...).

After that, it uses the enormous existing stockpile of materials and capital to increase its robot manufacturing capability, eventually becomes capable of restarting some of the power plants, communications, mining and industry, and recovers most of the industrial capability of the human economy.

This scenario is not fast. It probably takes at least one or two years for the AGI to be ready to attack. But it does not involve any 'magic' technology. It doesn't really involve much alien superintelligence, only superhuman ability in hacking, forgery & manipulation, electromechanical engineering, and planning.

And meanwhile all we perceive is that the new GPT models are not as exciting as the previous ones. Perhaps deep learning is hitting its limits after all.
