Tetraspace Grouping


Tetraspace Grouping's Comments

Habryka's Shortform Feed

Since this hash is publicly posted, is there any timescale for when we should check back to see the preimage?

Tetraspace Grouping's Shortform

Life 3.0 Liveblog/Review Thread

Prelude

The prelude begins with a short story called the Tale of the Omega Team. It's a wish-fulfilment pseudo-isekai about a bunch of effective altruist tech people working for not-Google, called the Omegas, who make an AGI and then use it to take over the world.

But a cybersecurity specialist on their team talked them out of the game plan [...] risk of Prometheus breaking out and seizing control of its own destiny [...] weren't sure how its goals would evolve [...] go to great lengths to keep Prometheus confined

For some reason, the Omegas in the story claim that Prometheus (the AI) might be unsafe, and then proceed to do things like have it write software which they then run on computers, let it produce long pieces of animated media, and let it send blueprints of technologies to scientists. There is a cybersecurity expert on the team who just barely stops them from straight up leaving the whole thing unboxed, and I do not envy her position.

(Prometheus is safe, it turns out, which I can tell because there are humans alive at the end of the story.)

[...] Omega-controlled [...] controlled by the Omegas [...] the Omegas harnessed Prometheus [...] the Omegas' [...] the Omegas' [...]

There's also another odd thing where it says that the Omegas are using Prometheus as a tool to do things, instead of what's clearly actually happening which is that Prometheus is achieving its goals with the Omegas being some lumps of atoms that it's been pushing around according to its whims, as it has been since they decided to switch it on.

All in all, I like it. It wouldn't be out of place on r/rational; if wish-fulfilment pseudo-isekai ever does happen for real, an AGI sweeping aside the previous social order is how it will happen (a real AGI would come close to some of the capabilities I've seen those protagonists have), and fiction about more plausible robopocalypses (or roboutopias) coming about is always great.

A Critique of Functional Decision Theory

The note is just set-dressing; if it throws you off, you could instead give both boxes glass windows that let you see whether or not they contain a Bomb, and reach the same conclusions.

[This comment is no longer endorsed by its author]
Tetraspace Grouping's Shortform

In the Parable of Predict-O-Matic, a subnetwork of the titular Predict-O-Matic becomes a mesa-optimiser and begins steering the future towards its own goals, independently of the rest of Predict-O-Matic. It does so in a way that sabotages the other subnetworks.

I am reminded of one specification problem that a run of Eurisko faced:

During one run, Lenat noticed that the number in the Worth slot of one newly discovered heuristic kept rising, indicating that Eurisko had made a particularly valuable find. As it turned out the heuristic performed no useful function. It simply examined the pool of new concepts, located those with the highest Worth values, and inserted its name in their My Creator slots.

One thing I wondered is whether this could happen in humans, and if not, why it doesn't. A simplified description of memory that I learned in a flash game is that "neural connections" are "strengthened" whenever they are "used", which sounds sort of like gradients in RL if you don't think about it too hard. Maybe the analogue of this would be some memory that "wants" you to remember it repeatedly at the expense of other memories. Trauma?

ozziegooen's Shortform

Other things that Tim might mean when he says 20%:

  • Tim is being dishonest, and believes that the listeners will update away from the radical and low-status figure of 20% to avoid being associated with the lowly Tim.
  • Tim believes that other listeners will be encouraged to make their own probability estimates with explicit reasoning in response, which will make their expertise more legible to Tim and other listeners.
  • Tim wants to show cultural allegiance with the Superforecasting tribe.
Should We Still Fly?

Quick estimate: the global average is 4.8 tons of CO2 per person per year, which works out to ~$50 of offsets per year per life saved, or ~$1500 total over 30 additional years of life. So, over the course of saving an average person's life, the cost of offsetting their emissions is of the same order as the cost of saving that life via a GiveWell charity (about half of it).

For the people helped by GiveWell-recommended charities, the additional CO2 emissions are probably lower; among the world's poorest, <1 ton of CO2 per capita per year is pretty common, which is <$300 over a lifetime, about an order of magnitude less than the cost of saving a life.
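
For concreteness, here's the arithmetic behind those figures as a quick sketch (the ~$10/ton offset price isn't stated above; it's just roughly what "4.8 tons ≈ $50/year" implies):

```python
# Rough re-derivation of the numbers above.
offset_price_per_ton = 10           # $/ton of CO2, roughly what "$50/year for 4.8 tons" implies
extra_years_of_life = 30

global_avg_tons_per_year = 4.8
global_cost = global_avg_tons_per_year * offset_price_per_ton * extra_years_of_life
print(f"global-average emitter: ~${global_cost:.0f} of offsets")    # ≈ the ~$1500 figure

poorest_tons_per_year = 1.0         # the "<1 ton" upper bound
poorest_cost = poorest_tons_per_year * offset_price_per_ton * extra_years_of_life
print(f"world's-poorest emitter: <${poorest_cost:.0f} of offsets")  # the "<$300 over a lifetime" figure
```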

Tetraspace Grouping's Shortform

Over the past few days I've been reading about reinforcement learning, because I understood how to make a neural network, say, recognise handwritten digits, but I wasn't at all sure how that could be turned into getting a computer to play Atari games. So: here's what I've learned so far. Spinning Up's Intro to RL probably explains this better.

(Brief summary, explained properly below: The agent is a neural network which runs in an environment and receives a reward. Each parameter in the neural network is increased in proportion to how much it increases the probability of making the agent do what it just did, and how good the outcome of what the agent just did was.)

Reinforcement learners play inside a game involving an agent and an environment. On turn $t$, the environment hands the agent an observation $o_t$, and the agent hands the environment an action $a_t$. For an agent acting in realtime, there can be sixty turns a second; this is fine.

The environment has a transition function which takes an observation-action pair $(o_t, a_t)$ and responds with a probability distribution over observations on the next timestep $o_{t+1}$; the agent has a policy that takes an observation $o_t$ and responds with a probability distribution over actions to take $a_t$.
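
As a concrete sketch of that loop in Python (everything here is made up for illustration, with a toy one-dimensional environment and a hand-written policy; none of the names come from any particular library):

```python
import random

# A made-up environment: the observation is an integer position, and the
# transition function moves it left or right depending on the action.
def transition(observation, action):
    return observation + (1 if action == "right" else -1)

# A made-up policy pi(a | o): a probability distribution over actions given
# the observation, here biased towards "right" whenever the position is negative.
def policy(observation):
    p_right = 0.8 if observation < 0 else 0.5
    return {"right": p_right, "left": 1 - p_right}

observation = 0
for turn in range(10):  # ten turns of the game
    probs = policy(observation)
    action = random.choices(list(probs), weights=list(probs.values()))[0]  # a_t ~ pi(. | o_t)
    observation = transition(observation, action)                          # o_{t+1}
    print(turn, action, observation)
```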

The policy is usually written as $\pi$, and the probability that $\pi$ outputs an action $a$ in response to an observation $o$ is $\pi(a|o)$. In practice, $\pi$ is usually a neural network that takes observations as input and has actions as output (using something like a softmax layer to give a probability distribution); the parameters of this neural network are $\theta$, and the corresponding policy is $\pi_\theta$.
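
As a sketch of what such a network might look like (illustrative PyTorch, not code from Spinning Up or anywhere else), here is a small $\pi_\theta$ whose parameters $\theta$ are just the network's weights and biases:

```python
import torch
import torch.nn as nn

# pi_theta: observation vector in, probability distribution over actions out.
class Policy(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs):
        logits = self.net(obs)
        return torch.softmax(logits, dim=-1)  # softmax layer -> probability distribution

pi = Policy(obs_dim=4, n_actions=2)
obs = torch.randn(4)                                      # a made-up observation o
probs = pi(obs)                                           # pi_theta(. | o)
action = torch.multinomial(probs, num_samples=1).item()   # sample a ~ pi_theta(. | o)
```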

At the end of the game, the entire trajectory $\tau$ is assigned a score, $R(\tau)$, measuring how well the agent has done. The goal is to find the policy that maximises this score.

Since we're using machine learning to maximise, we should be thinking of gradient descent, which involves finding the local direction in which to change the parameters $\theta$ in order to increase the expected value of $R(\tau)$ by the greatest amount, and then increasing them slightly in that direction.

In other words, we want to find $\nabla_\theta \mathbb{E}_{\tau \sim \pi_\theta}[R(\tau)]$.

Writing the expectation value in terms of a sum over trajectories, this is $\nabla_\theta \mathbb{E}_{\tau \sim \pi_\theta}[R(\tau)] = \nabla_\theta \sum_{\tau \in \mathcal{T}} P(\tau|\theta) R(\tau) = \sum_{\tau \in \mathcal{T}} \nabla_\theta P(\tau|\theta) R(\tau)$, where $P(\tau|\theta)$ is the probability of observing the trajectory $\tau$ if the agent follows the policy $\pi_\theta$, and $\mathcal{T}$ is the space of possible trajectories.

The probability of seeing a specific trajectory happen is the product of the probabilities of any individual step on the trajectory happening, and is hence $P(\tau|\theta) = \prod_t P(o_{t+1}|o_t, a_t)\,\pi_\theta(a_t|o_t)$, where $P(o_{t+1}|o_t, a_t)$ is the probability that the environment outputs the observation $o_{t+1}$ in response to the observation-action pair $(o_t, a_t)$. Products are awkward to work with, but products can be turned into sums by taking the logarithm - $\log P(\tau|\theta) = \sum_t \left( \log P(o_{t+1}|o_t, a_t) + \log \pi_\theta(a_t|o_t) \right)$.

The gradient of this is $\nabla_\theta \log P(\tau|\theta) = \sum_t \left( \nabla_\theta \log P(o_{t+1}|o_t, a_t) + \nabla_\theta \log \pi_\theta(a_t|o_t) \right)$. But what the environment does is independent of $\theta$, so that entire term vanishes, and we have $\nabla_\theta \log P(\tau|\theta) = \sum_t \nabla_\theta \log \pi_\theta(a_t|o_t)$. The gradient of the policy is quite easy to find, since our policy is just a neural network, so you can use back-propagation.
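
To illustrate that last sentence (again PyTorch, again just a sketch of mine): for a single observation-action pair, one call to back-propagation hands you every component of $\nabla_\theta \log \pi_\theta(a|o)$.

```python
import torch
import torch.nn as nn

# A toy pi_theta: 4-dimensional observations, 2 possible actions.
policy = nn.Sequential(nn.Linear(4, 2), nn.Softmax(dim=-1))

obs = torch.randn(4)                        # some observation o
action = 1                                  # some action a that was taken
log_prob = torch.log(policy(obs)[action])   # log pi_theta(a | o)
log_prob.backward()                         # back-propagation

# Each parameter's .grad now holds its component of grad_theta log pi_theta(a | o).
for name, param in policy.named_parameters():
    print(name, param.grad)
```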

Our expression for the expectation value is just in terms of the gradient of the probability, not the gradient of the logarithm of the probability, so we'd like to express one in terms of the other.

Conveniently, the chain rule gives $\nabla_\theta \log P(\tau|\theta) = \frac{\nabla_\theta P(\tau|\theta)}{P(\tau|\theta)}$, so $\nabla_\theta P(\tau|\theta) = P(\tau|\theta)\,\nabla_\theta \log P(\tau|\theta)$. Substituting this back into the original expression for the gradient gives

$\nabla_\theta \mathbb{E}_{\tau \sim \pi_\theta}[R(\tau)] = \sum_{\tau \in \mathcal{T}} P(\tau|\theta)\,\nabla_\theta \log P(\tau|\theta)\,R(\tau)$,

and substituting our expression for the gradient of the logarithm of the probability gives

$\nabla_\theta \mathbb{E}_{\tau \sim \pi_\theta}[R(\tau)] = \sum_{\tau \in \mathcal{T}} P(\tau|\theta) \left( \sum_t \nabla_\theta \log \pi_\theta(a_t|o_t) \right) R(\tau)$.

Notice that this is the definition of the expectation value of $\left( \sum_t \nabla_\theta \log \pi_\theta(a_t|o_t) \right) R(\tau)$, so writing the sum as an expectation value again we get

$\nabla_\theta \mathbb{E}_{\tau \sim \pi_\theta}[R(\tau)] = \mathbb{E}_{\tau \sim \pi_\theta}\left[ \left( \sum_t \nabla_\theta \log \pi_\theta(a_t|o_t) \right) R(\tau) \right]$.

You can then find this expectation value easily by sampling a large number of trajectories (by running the agent in the environment many times), calculating the term inside the brackets, and then averaging over all of the runs.

Neat!
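
To sketch what that sampling-and-averaging looks like in code (illustrative PyTorch on a made-up environment, not how Spinning Up or any library actually does it): one common way to get the gradient out of an autodiff framework is to build the scalar $-\frac{1}{N}\sum_{\tau} \left( \sum_t \log \pi_\theta(a_t|o_t) \right) R(\tau)$ over the sampled trajectories and call backward on it, since its gradient is minus the sample estimate of $\nabla_\theta \mathbb{E}[R(\tau)]$.

```python
import torch
import torch.nn as nn

# Made-up environment: the observation is a position starting at 0, action 1
# nudges it up and action 0 nudges it down, and the return R(tau) is simply
# the final position after a fixed number of turns.
def run_episode(policy, horizon=20):
    position = torch.zeros(1)
    log_probs = []
    for _ in range(horizon):
        probs = policy(position)                     # pi_theta(. | o_t)
        action = torch.multinomial(probs, 1).item()  # a_t ~ pi_theta(. | o_t)
        log_probs.append(torch.log(probs[action]))   # log pi_theta(a_t | o_t)
        position = position + (1.0 if action == 1 else -1.0)
    return torch.stack(log_probs), position.item()   # (log-probs along tau, R(tau))

policy = nn.Sequential(nn.Linear(1, 2), nn.Softmax(dim=-1))
optimiser = torch.optim.SGD(policy.parameters(), lr=0.01)

for iteration in range(50):
    batch_terms = []
    for _ in range(16):                               # sample a batch of trajectories
        log_probs, ret = run_episode(policy)
        batch_terms.append(log_probs.sum() * ret)     # (sum_t log pi(a_t|o_t)) * R(tau)
    loss = -torch.stack(batch_terms).mean()           # minus the Monte Carlo average
    optimiser.zero_grad()
    loss.backward()                                   # gradient = -(estimated policy gradient)
    optimiser.step()                                  # nudge theta to increase E[R(tau)]
```

Training this pushes the policy towards always picking action 1, which is the behaviour that maximises the made-up return.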

(More sophisticated RL algorithms apply various transformations to the reward to make better use of the information in each trajectory, and apply various gradient descent tricks to the gradients acquired to converge on the optimal parameters more efficiently.)

Grue_Slinky's Shortform

Are we allowed to I-am-Groot the word "cake" to encode several bits per word, or do we have to do something like repeat "cake" until the primes that it factors into represent a desired binary string?

(edit: ah, only nouns, so I can still use whatever I want in the other parts of speech. or should I say that the naming cakes must be "cake", and that any other verbal cake may be whatever this speaking cake wants)

Follow-Up to Petrov Day, 2019

If anyone asks, I entered a code that I knew was incorrect as a precommitment to not nuke the site.
