The 'new user' flag being applied to old users with low karma is condescending
as fuck.
I'm not a new user. I'm an old user who has spent most of my recent time on LW
telling people things they don't want to hear.
Well, most of the time I've actually spent posting weekly meetups, but other
than that.
5 · Garrett Baker · 8d
Last night I had a horrible dream: that I had posted to LessWrong a post filled
with useless & meaningless jargon without noticing what I was doing, then I went
to sleep, and when I woke up I found the post had below −60 karma. When I read
the post myself I noticed how meaningless the jargon was, and I myself couldn't
resist giving it a strong-downvote.
5 · DirectedEvolution · 8d
Over the last six months, I've grown more comfortable writing posts that I know
will be downvoted. It's still frustrating. But I used to feel intensely anxious
when it happened, and now, it's mostly just a mild annoyance.
The more you're able to publish your independent observations, without worrying
about whether others will disagree, the better it is for community epistemics.
3 · jacquesthibs · 8d
AI labs should be dedicating a lot more effort to using AI for cybersecurity
as a way to prevent weights or insights from being stolen. It would be good for
safety, and it seems like it could be a pretty big cash cow too.
If they have access to the best models (or specialized ones), it may be highly
beneficial for them to plug those in immediately to help with cybersecurity
(perhaps even including noticing suspicious activity from employees).
I don’t know much about cybersecurity so I’d be curious to hear from someone who
does.
3 · Quinn · 8d
messy, jotting down notes:
* I saw this thread https://twitter.com/alexschbrt/status/1666114027305725953,
  which my housemate had been warning me about for years.
* the failure mode can be understood as trying to "aristotle" the problem:
  reasoning it out with no experimentation
* thinking about the nanotech ASI threat model, where it solves nanotech
  overnight and deploys adversarial proteins into the bloodstreams of all
  lifeforms.
* These are sometimes justified by Drexler's inside view of boundary conditions
and physical limits.
* But to dodge the aristotle problem, there would have to be some amount of
  bandwidth passing between sensors and actuators (which may roughly
  correspond to the number of do applications, in Pearl's sense)
* Can you use something like communication complexity
  https://en.wikipedia.org/wiki/Communication_complexity (between a system
  and an environment) to think about a "lower bound on the number of
  sensor-actuator actions", mixed with sample complexity (statistical learning
  theory)?
* Like, ok, if you're simulating all of physics you can aristotle nanotech,
  but for a sufficient definition of "all" you would run up against
  realizability problems and it would cost way more than you actually need to
  spend.
Like, I'm thinking: if there's a kind of complexity theory of Pearl (number of
do applications needed to reach some level of "loss"), then you could direct it
at something like "nanotech projects" to Fermi-estimate how AIs might trade off
between applying aristotelian effort (observation and induction with no
experiment) and spending sensor-actuator interactions (with the world).
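To make the do-application idea concrete, here's a toy sketch (mine, not from the post; all names and parameters are hypothetical) of what a single do application buys you: a linear-Gaussian X→Y model and its reverse can look the same observationally, but one intervention on X separates them, because clamping X only moves Y in the world where X is actually upstream.

```python
import random


def make_world(direction):
    """Toy linear-Gaussian SCM; direction is 'X->Y' or 'Y->X'.
    Both worlds yield similar-looking observational (x, y) samples."""
    def observe():
        if direction == 'X->Y':
            x = random.gauss(0, 1)
            y = 2 * x + random.gauss(0, 0.1)
        else:
            y = random.gauss(0, 2)
            x = 0.5 * y + random.gauss(0, 0.05)
        return x, y

    def do_x(value):
        """Pearl-style intervention do(X=value): clamp X, resample downstream."""
        if direction == 'X->Y':
            return value, 2 * value + random.gauss(0, 0.1)
        # Under Y->X, clamping X leaves Y's mechanism untouched.
        return value, random.gauss(0, 2)

    return observe, do_x


def infer_direction(do_x, n=200):
    """One 'experiment': does the mean of Y move when we clamp X at -3 vs +3?"""
    lo = sum(do_x(-3.0)[1] for _ in range(n)) / n
    hi = sum(do_x(+3.0)[1] for _ in range(n)) / n
    return 'X->Y' if abs(hi - lo) > 1.0 else 'Y->X'
```

With matched covariances the two worlds are literally indistinguishable from observational samples alone, so the interesting quantity is exactly the one gestured at above: how many interventions (here, one) buy you the answer that no amount of passive observation could.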
There's a scenario in the Sequences, if I recall correctly, about which physics
an AI infers from 3 frames of a video of an apple falling, and something about
how security mindset suggests you shouldn't expect your information-theoret