Comments

Description of (network, dataset) for LLMs ?= a model that takes as input the index of a prompt in the dataset, and is then equivalent to the original model conditioned on that prompt.

There exist inexpensive real CO2 sensors, e.g. https://www.sparkfun.com/products/22396 . The datasheet says it only updates every 5 seconds & has a 60 s response time "for achieving 63% of a respective step function", which I guess is what the parent comment means by "They’ll likely be extremely slow".

Probably worth searching e.g. digikey for sensors with faster response time.

What about specialized algorithms for problems (e.g. planning algorithms)?


IANAL, but I think that this is currently impossible due to anti-trust regulations.

I don't know anything about anti-trust enforcement, but it seems to me that this might be a case where labs should do it anyways & delay hypothetical anti-trust enforcement by fighting in court.

blueiris's posts read to me as a combination of good concepts & poor quality attacks/attempts to defend leverage (or something?). Personally I'd mind the attacks more if they were more successful and/or less obvious I think? As-is they're annoying but don't seem very dangerous epistemically.

Trying to reduce the amount of compute risks increasing hardware overhang once that compute is rebuilt. I think trying to slow down capabilities research (e.g. by getting a job at an AI lab and being obstructive) is probably better.

edit: meh, idk. Whether or not this improves things depends on how much compute you can destroy & for how long, ML scaling, politics, etc. But the current world of "only big labs with large compute budgets can achieve SOTA" (arguable, but possibly more true in the future) and having fewer easy ways to get better performance (i.e. scaling) both seem good.

I personally think work on reduced precision inference (e.g. 4 bit!) is probably useful, as circuits should be easier to analyze than floats.
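
To make "reduced precision" concrete, here is a minimal sketch of symmetric per-tensor 4-bit weight quantization (an illustrative toy scheme with made-up shapes, not any particular lab's method):

```python
import numpy as np

def quantize_4bit(w: np.ndarray):
    """Symmetric per-tensor quantization of float weights to signed 4-bit ints.

    Toy sketch only: real schemes are usually per-channel or per-group
    and pack two 4-bit values into each byte.
    """
    scale = np.max(np.abs(w)) / 7.0  # signed 4-bit range is [-8, 7]
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Inference kernels can operate on the small integer grid directly;
    # dequantizing just recovers an approximation of the original floats.
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_4bit(w)
print(np.abs(w - dequantize(q, scale)).max())  # max quantization error
```

With each weight restricted to 16 integer levels times one scale, enumerating or reasoning about a circuit's possible behaviors seems more tractable than with arbitrary floats.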

Answer by acertain, Jul 28, 2021

How to convert simple predictions/probability distributions (e.g. $stock will go down with x% probability, at a date distributed around day Y, by an amount normally distributed around Z) into positions (one toy approach sketched at the end of this answer).

How much should the average person worry about tail risk? The average EA?

Less naive portfolio construction.

What tools from quantitative finance might be useful outside of finance? Econometrics & probabilistic modeling as used in finance (or as used 8 years ago or whatever)? Risk modeling?
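
For the first item above, a minimal sketch of one naive approach (made-up numbers, assuming the forecast is turned into a sampled return distribution and sized with a fractional-Kelly rule):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up forecast: 60% chance the stock drops by ~N(-5%, 2%) over the
# horizon, otherwise it drifts up by ~N(+1%, 2%).
p_down = 0.60
n = 100_000
down = rng.normal(-0.05, 0.02, n)
up = rng.normal(0.01, 0.02, n)
returns = np.where(rng.random(n) < p_down, down, up)

def expected_log_wealth(f: float) -> float:
    # Growth rate of wealth if a fraction f of it is in the stock
    # (negative f = short position).
    return np.mean(np.log1p(f * returns))

# Grid-search the Kelly-optimal fraction, capped at +/-100% of wealth
# (no leverage), then scale it down ("fractional Kelly") because the
# forecast itself is uncertain.
fs = np.linspace(-1.0, 1.0, 201)
f_star = fs[np.argmax([expected_log_wealth(f) for f in fs])]
position = 0.25 * f_star  # quarter-Kelly
print(f"capped Kelly: {f_star:.2f}, quarter Kelly: {position:.2f}")
```

Whether anything this simple survives transaction costs, estimation error in the forecast, and correlation with the rest of a portfolio is exactly the kind of thing I'd want an actual quant's view on.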