Pattern

Interested in math, Game Theory, etc.

Pattern's Comments

Algorithms vs Compute
Which of the two would perform better?

Will the experiment be run?


What is the experiment? What is the question?

I take a vision or language model which was cutting edge in 2000, and run it with a similar amount of compute/data to what's typically used today.

Guess A. Is the difference (between 2000 and today) modern compute?

I take a modern vision or language model, calculate how much money it costs to train, estimate the amount of compute I could have bought for that much money in 2000, then train it with that much compute.

Guess B. Is the difference (between 2000 and today) modern compute costs?

But the experiment doesn't seem to be about A or B. More likely it's about both:

Which is more important (to modern ML performance (in what domain?*)):

  • Typical compute (today versus then)?
  • Or typical compute cost (today versus then)?

(Minor technical note - if you're comparing results from the past to results today: while it might be impossible to go back in time and run these tests on a control group, rather than taking 'things weren't as good back then' for granted, the earlier results should be tested as well, for comparison. (Replicate earlier results.**)

This does admit other hypotheses.

For example, 'the difference between 2020 and 2000 is that training took a long time, so if people set things up wrong, they didn't get feedback for a long time. Perhaps modern compute enables researchers to iterate fast enough to set ML programs up correctly, despite the code not being written right the first time.')


A and B can be rephrased as:

  • Do we use more compute today, but spend 'the same amount'?
  • Do we spend 'more' on compute today?

*This might be intended as a more general question, but the post asks about:

vision or language model[s].

**The most extreme version would be getting/recreating old machines and then re-running old ML stuff on them.
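
To make the Guess B arithmetic concrete, here's a minimal sketch in Python. Every figure in it (the budget, FLOPs per dollar, and the doubling time of price-performance) is a hypothetical placeholder of mine, not a number from the post; the point is the shape of the calculation, not the values.

    # Sketch of Guess B: how much compute would a modern training budget
    # have bought in 2000? All constants are hypothetical placeholders.

    def flops_per_dollar_in_past(flops_per_dollar_now: float,
                                 years_back: float,
                                 doubling_time_years: float = 1.5) -> float:
        # Assumes price-performance doubles every `doubling_time_years`
        # (a rough Moore's-law-style assumption, not a measured trend).
        return flops_per_dollar_now / 2 ** (years_back / doubling_time_years)

    BUDGET_USD = 1_000_000   # hypothetical modern training budget
    FPD_NOW = 1e12           # hypothetical FLOPs per dollar today

    fpd_2000 = flops_per_dollar_in_past(FPD_NOW, years_back=20)
    print(f"Compute today:   {BUDGET_USD * FPD_NOW:.2e} FLOPs")
    print(f"Compute in 2000: {BUDGET_USD * fpd_2000:.2e} FLOPs")
    # With a 1.5-year doubling time, 20 years is ~13 doublings:
    # roughly a 10,000x compute gap for the same dollars.

Under those placeholder assumptions, 'the same amount of money' buys about four orders of magnitude less compute in 2000 - which is why A and B are genuinely different experiments.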

Hedonic asymmetries

Perhaps hobbies are areas where people understand this about themselves, albeit narrowly.

Hedonic asymmetries

But there are behaviors we always seek out. Trivially: eating and sleeping.

What research has been done on the altruistic impact of the usual good actions?

There might be research on hedonic impact. (You can also see/test what makes you feel good.)

What research has been done on the altruistic impact of the usual good actions?
or your co-founder's continued stream of horrible ideas are plenty of reward

This sounds weird.

("the dumbest thing I've heard this year". This sounds more like someone snapping at someone else than 'the ideas are part of a horrible stream'.

If the people in question are co-founders, then perhaps they think some of each other's ideas are good.)

Technology Changes Constraints

It's also worth noting that if you want to print a book in Chinese with a printing press, you'll need far more than 26 blocks of type.

DanielFilan's Shortform Feed
A bunch of my friends are very skeptical of the schooling system and promote homeschooling or unschooling as an alternative.
As such, I tentatively wish that home/unschooling advocates would focus on more institutional ways of supervising children, e.g. Sudbury schools, community childcare, child labour.

So you're a proponent of improving institutional ways of supervising children?

What happens if we reverse Newcomb's Paradox and replace it with two negative sums? Doesn't it kinda maybe affirm Roko's Basilisk?
Can you please refute this for me?

"Reversing things" is complicated. The other way the situation is reversed is that it the basilisk has to be made, instead of Omega already having made a prediction.


My thoughts on NP:

Also, Newcomb's paradox is accompanied by the assumption that there are only 2 choices:

  • Taking the box with a lot of $$$
  • Taking the box with a lot of $$$, and the box with a little $.

However, if one can either take or not take each box, that's 4 (mutually exclusive) choices.

The problem itself addresses taking both boxes (and says that the box with a lot of money will be empty if you do that). And this is where things get complicated ("If you would (take both boxes) if (they were both filled with money), then (only one box will have money in it).").

But what if you take only the box with a little $? Instead of one answer, here are several (one of which is made concrete in the sketch after this list):

1. The box not taken magically disappears.

2. The predictor only puts $$$ in the million dollar box if they predict you will take that box, and only that box.

3. The box not taken does not magically disappear. The choice doesn't end. The prediction is made about your entire life.

4. This scenario is specifically undefined (perhaps because it doesn't need to be - the game was made by a perfect predictor after all, which chose the players...), or something else weird happens.
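
As a minimal sketch, assuming hypothesis 2 above and the standard amounts ($1,000,000 in 'box A', $1,000 in 'box B' - the names and figures are the usual conventions, not anything from this thread), the four take/leave choices enumerate like so in Python:

    # Enumerate the four take/leave choices under hypothesis 2: the predictor
    # fills box A ($1,000,000) only if it predicts you take A and nothing else.
    # Box B always contains $1,000.
    BIG, SMALL = 1_000_000, 1_000

    def payoff(take_a: bool, take_b: bool) -> int:
        a_filled = take_a and not take_b   # perfect predictor, hypothesis 2
        return (BIG if take_a and a_filled else 0) + (SMALL if take_b else 0)

    for take_a in (True, False):
        for take_b in (True, False):
            print(f"take A: {take_a}, take B: {take_b} -> ${payoff(take_a, take_b):,}")
    # take A: True, take B: False -> $1,000,000
    # take A: True, take B: True -> $1,000
    # take A: False, take B: True -> $1,000
    # take A: False, take B: False -> $0

The 'take only the box with a little $' row is exactly the case the four hypotheses disagree on; hypothesis 2 is just one way of filling it in.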


What should be done in each situation?

1. Let's suppose we reason in such a fashion that we take 'box A' (the box with a lot of $$$).


2. The same as 1.

3. Likely the same as 1. Exceptions include "unless other people can pick up the $1,000, and we dislike that more than we like getting the $1,000,000", and "by means of some other predictor game, we receive a message from our future self that the good we'll do with the $1,000,000 will be less than the evil done with the $1,000".

4. Intentionally left unhandled. (A writing prompt.)


My thoughts on Roko's basilisk:

It doesn't sound like there is a predictor in this scenario, but the solution is the same either way:

Kill the basilisk.

(You might enjoy an episode of Sherlock called "A Study in Pink.")

How Doomed are Large Organizations?

It wasn't removed; it was moved.

In the process of measuring the deaths due to a pesticide, you need a complex model of causality.

Suppose we jettisoned causality. What exactly do you think can, and cannot, be measured?

[Link] "Doing being rational: polymerase chain reaction" by David Chapman
Isn't there evidence that human brains/minds have what is effectively a dedicated 'causal reasoning' unit/module? It probably also relies on the 'thing memory' unit(s)/module(s) too tho.

I'm not an expert on neuroscience. I'm not sure where those things fall on the spectrum between modular and integrated. (I suspect they're both.)
