Mis-Understandings

whether

Is that supposed to be weather?

No, because you need different strategies to prove the loop case (which you can do with just a finite sequence of transitions), the halt case (the same), and the uses-an-infinite-amount-of-memory case.

There is no proof procedure that will catch 100% of case 3 (because then you would have a halting oracle). But you can create a program that halts iff another program halts, or one that halts iff the other program either halts or loops within a finite set of states. (I could write it, but it would just run really slowly.) You cannot write the program that halts iff another program uses an infinite amount of memory (since then you could build a halting oracle). There are NO halting oracles, not just no efficient halting oracles.
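A minimal sketch of what that slow program could look like, assuming programs are handed to it as an explicit step function over hashable states (the function names here are made up for illustration):

```python
# Illustrative sketch: halts iff the simulated program halts or revisits a
# configuration (i.e. loops within finitely many states). It records every
# configuration seen, so it runs slowly and uses a lot of memory. For case 3
# (unbounded memory use) it never returns, which is why it is not a halting
# oracle.

def halts_or_loops(step, initial_state, is_halted):
    seen = set()
    state = initial_state
    while True:
        if is_halted(state):
            return "halts"
        if state in seen:
            return "loops"
        seen.add(state)
        state = step(state)

# Toy usage: a counter mod 5 loops; a counter that stops at 3 halts;
# a counter that only ever increments would make this run forever (case 3).
print(halts_or_loops(lambda s: (s + 1) % 5, 0, lambda s: False))  # loops
print(halts_or_loops(lambda s: s + 1, 0, lambda s: s == 3))       # halts
```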

Random thought: Automated Corp has a real advantage, namely that inference and training can run on the same GPUs (to a first approximation). So for Slow Corp, if they spend a day deciding and don't commit to runs, they are wasting GPU time, while the other corps don't have this problem. It is a big problem. There is something there; the real question is how much thinking about the results of your test queue improves the informational value of the tests you run.

A program must either loop, halt, or use an infinite amount of memory. 

Halting is almost never the highest-EV action for any goal.

Looping might be high-EV for some cases and some goals, but I would not expect many of them, and definitely not short loops (the longer the loop, the more it looks like case 3).

So the policy "always take the highest-EV action for a particular goal" (which is the danger model) is a program that will use an infinite amount of memory, since it neither halts nor loops.
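A toy sketch of why that policy lands in case 3, with placeholder functions (`expected_value`, `observe`, `act`) standing in for the agent's internals:

```python
# Illustrative sketch: an agent that always takes the highest-EV action for
# its goal. It never halts (acting beats halting), and because it appends a
# new observation to its history every step, no configuration ever repeats,
# so it never loops either. By the trichotomy above, that leaves unbounded
# memory use.

def ev_maximizing_agent(actions, expected_value, observe, act):
    history = []                                        # grows every step
    while True:                                         # never halts
        best = max(actions, key=lambda a: expected_value(a, history))
        act(best)
        history.append(observe())                       # state never repeats
```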

Thoughts about HEPA as a standard: a lower percentage removed can happen in two ways. One is probabilistic, where the same particle might or might not get trapped on a given pass. Trading off probabilistic capture, total circulation, and fan characteristics (airflow vs. static pressure) is probably a good idea. Introducing deterministic non-capture (where there is a class of particles that never gets captured) can be a problem, as those particles will not be affected by the purifier at all. But avoiding that is engineering that requires only diligence.

Another way of putting it is that it makes more sense to use higher-airflow, lower-static-pressure fans, and the filters should be designed to work with that.
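A rough illustration of that trade-off under a well-mixed-room assumption (the numbers and the `steady_state` helper are made up for illustration, not measurements):

```python
# Illustrative sketch: the purifier's effect is roughly its clean air delivery
# rate, CADR = airflow * single-pass efficiency, and steady-state concentration
# from a constant source scales as source_rate / CADR. A slightly less
# efficient filter on a higher-airflow, lower-static-pressure fan can come out
# ahead, provided the misses are probabilistic (the same particle gets more
# chances on later passes).

source_rate = 1_000_000  # particles emitted per hour (made-up number)

def steady_state(airflow_m3h, single_pass_eff):
    cadr = airflow_m3h * single_pass_eff   # m^3/h of effectively clean air
    return source_rate / cadr              # particles per m^3 at equilibrium

true_hepa_low_flow = steady_state(airflow_m3h=150, single_pass_eff=0.9997)
lesser_filter_high_flow = steady_state(airflow_m3h=400, single_pass_eff=0.95)

print(true_hepa_low_flow, lesser_filter_high_flow)  # high-flow ends up cleaner

# Deterministic non-capture is different: if some particle class is never
# caught, its level is set by other losses (ventilation, settling), not by the
# purifier, and more recirculation cannot fix that.
```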
 

I agree that the correct measure is particulates where people breathe, not simply exhaust particulates. 

A contextualization for people touting big personal speedup numbers.

People sometimes get way more productive by rethinking their workflow, especially in research. Not all the time, but it was not an unprecedented story even in 2015.

Do you remember when people were talking about 10x engineers in the 2010s?

Discovering that, in a new workflow, you are the 10x engineer is not unprecedented.

The question is whether the rate of (try new thing) → (it clicks with your workflow and output jumps) is higher now.

Sometimes people got 10x more productive from some change before any of this, so understand that any change in workflow has a noise floor, even at these productivity leaps.

No, because we want some of that behaviour. It is necessary for being able to split research tasks across multiple models (in research settings), so that we can get integrated work out of forks, which requires some amount of communication.

Additionally, cross-inference communication is likely a goal for practical applications, since it is what allows "I send my customer-service-bothering agent to talk to the company's customer-service agent," which is a predicted pattern.

Basically, steganography concerns mean that cross-inference communication can come through any shared environment element, so ruling it out means blocking all communication and additionally hiding all shared environment, and therefore we can never guarantee that there is not a bypass.

But siloing is still a useful tool under the principle of least privilege from regular cybersecurity.

The basic theory of action is that signs of big problems (red hands) will generate pauses and drastic actions.

 

Which is a governance claim. 

Also note that there are people who can tutor you in GeoGuessr, but not in interpreting pixel art.

If even one blog post that goes through that process step by step ends up in the training data, and it is routinely a useful subtask in image tasks (what and where are correlated), then the subcapacity can be directly elicited.

This probably works better for the drone units than for the infantry (for instance).

Specifically, the policy of sending drones to the units that confirm the most kills, in a way that is really hard to fake (the video, plus the fact that lying here would obviously result in punishment), is a regular logistics policy.

This is just doing three things. The first is making the requisition game more legible to individual soldiers (for NATO-style militaries this is very good), because a policy of prioritizing supply and flexibility toward the most successful units is not a new system (it is built into the actual practice of professional militaries). Second, it probably results in better data, because now they have a data pipeline for it. Third, it affects morale, because all military communication also does that.
