the gears to ascension

I continue to expect that I will prefer to control my computer with formal grammars - I have spent significant time using Caster speech recognition with Dragon, and I'm sure I'll keep doing so with OpenAI Whisper. nobody has ever beaten the CLI and nobody ever will.

now, if an AI could automatically generate interfaces that standardize messy UIs into coherent formal grammars that fit comfortably in a CLI workflow, that would be amazing...
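the "messy UI → formal grammar" idea can be sketched in a few lines. here's a toy phrase-to-command mapper - the spoken patterns and command templates are invented for illustration, and real tools like Caster/Dragonfly use much richer grammar objects than a flat dict:

```python
# A toy sketch (not Caster's actual API) of a formal voice-command grammar
# that maps spoken phrases onto CLI invocations. Patterns and templates
# below are made up for illustration; <n> marks a numeric slot.

GRAMMAR = {
    "git status": "git status",
    "git log <n>": "git log -n {n}",
    "open line <n>": "vim +{n}",
}

def parse(phrase):
    """Match a spoken phrase against the grammar; fill numeric slots."""
    words = phrase.split()
    for pattern, template in GRAMMAR.items():
        pwords = pattern.split()
        if len(pwords) != len(words):
            continue
        slots = {}
        for p, w in zip(pwords, words):
            if p == "<n>" and w.isdigit():
                slots["n"] = w
            elif p != w:
                break  # token mismatch, try the next pattern
        else:
            return template.format(**slots)
    return None  # phrase not in the grammar
```

the appeal is exactly that the grammar is closed and explicit: every utterance either maps to a well-defined command or fails loudly, instead of fuzzily clicking around a UI.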

also, I want a CLI that shows me what commands are available at any step.
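a minimal sketch of that discoverability, assuming a hand-written command tree (the commands here are made up); a real version would plug a function like this into readline tab completion:

```python
# Sketch of "show me what's available at any step": walk a command tree
# with the tokens typed so far and list the valid next tokens.
# The command tree below is invented for illustration.

COMMANDS = {
    "git": {"status": {}, "log": {}, "diff": {}},
    "docker": {"ps": {}, "images": {}},
}

def available(tokens):
    """Return the commands available after the tokens typed so far."""
    node = COMMANDS
    for tok in tokens:
        node = node.get(tok)
        if node is None:
            return []  # typed something outside the tree
    return sorted(node)
```
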

yeah, eventually keyboard CLIs will be beaten, but even with Whisper I expect to sometimes prefer the keyboard. it's just that hard to beat CLIs.

I'm curious what examples of highly authoritarian threats you see besides Trump, and how Trump ranks among them.

oh this is great, agreed on all points. (but what if they just embed Logseq?)

representation theory is a related topic - representing other math using linear algebra. the Wikipedia page is an okay intro.
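a tiny concrete instance of the idea, assuming numpy: the cyclic group Z/3 represented as 3x3 permutation matrices, so that addition mod 3 becomes matrix multiplication:

```python
import numpy as np

# A representation of the cyclic group Z/3: send each element k to the
# permutation matrix that cyclically shifts the three basis vectors by k.
# Group addition mod 3 then corresponds to matrix multiplication.

def rho(k):
    """Permutation matrix cyclically shifting basis vectors by k."""
    return np.roll(np.eye(3, dtype=int), k % 3, axis=0)

# check the homomorphism property: rho(a + b) == rho(a) @ rho(b)
for a in range(3):
    for b in range(3):
        assert np.array_equal(rho((a + b) % 3), rho(a) @ rho(b))
```

that homomorphism check is the whole point: the abstract group structure has been faithfully re-encoded as linear algebra, where we have strong general tools.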

the question isn't what class of problems can be understood, it's how efficiently you can jump to correct conclusions, check them, and build on them. almost any human can understand almost any topic, given enough interest, and enough willingness to admit error that they actually fail enough times to see how to correct themselves. but for some people it might take an unreasonably long time to learn some fields, and they're likely to get bored before perseverance compensates for efficiency at jumping to correct conclusions.

in the same way, a sufficiently strong AI is likely to be able to find cleaner representations of the same part of the universe's manifold of implications, and potentially render the implications in parts of possibility space much further away than a human brain could, given the same context, actions, and outcomes.

as for why we expect it to be stronger: because we expect someone to be able to find algorithms that can model the same parts of the universe that advanced physics folks study, with the same or better accuracy in-distribution and/or out-of-distribution, given the same order of magnitude of energy as it takes to run a human brain. once the model is found it may in fact be explainable to humans! the energy constraint seems to push it to be, though not perfectly. and the stuff too complex for humans to figure out at all is likely pretty rare - it would have to be pseudo-laws about a fairly large system, and would probably require seeing a huge amount of training data to figure out.

semi-chaotic fluid systems will be the last thing intelligence finds exact equations for.

If even professional researchers can't easily understand the papers, it means they don't have high-level ideas about "learning"[1]. So it's strange to encounter a rare high-level idea and say that it's not worth anyone's time if it's not math. Maybe it's worth your time because it's not math. Maybe you just rejected thinking about a single high-level idea you know about abstract learning.

This will be my last comment on this post, but for what it's worth, math vs not-math is primarily a question of vagueness. Your english description is too vague to turn into useful math. Precise math can describe reality incredibly well, if it's actually the correct model. Being able to understand the fuzzy version of precise math is in fact useful, you aren't wrong, and I don't think your sense that intuitive reasoning can be useful is wrong. Your idea here, however, seems to underspecify which math it describes, and to the degree I can see ways to convert it into math, it appears to describe math which is false. The difficulty of understanding papers isn't that the authors don't understand learning; it's simply that writing understandable scientific papers is really hard and most papers do a bad job explaining themselves. (it's fair to say they don't understand it as well as they ideally would, of course.)

I agree that good use of vague ideas is important, but someone else here recently made the point that a lot of what needs to be done to use vague ideas well is to be good at figuring out which vague ideas are not promising and skip focusing on them. Unfortunately, vagueness makes it hard to avoid accidentally paying too much attention to less-promising ideas, and it makes it hard to avoid accidentally paying too little attention to highly-promising ideas.

In machine learning, it is very often the case that someone tried an idea before you thought of it, but tried it poorly, and their version can be improved. If you want to make an impact on the field, I'd strongly suggest finding ways to rephrase this idea so that it is more precise; again, my problem with it is that it severely underspecifies the math, and in order to make use of your idea I would have to go read, myself, those papers I'm suggesting you look at.
