NicholasKross

Software engineer, blogging and editing at /nickai/. PM me your fluid-g-increasing ideas.

I don't have much energy. I have prescribed medication for my ADHD, but it both relies on, and can cause problems with, my getting good sleep. (Tolerance may also be a factor, but that's confounded with the sleep thing.) I think I might do "best" with a >24-hour sleep-wake cycle.

I have a decent tech job, but it neither makes enough money to quickly save runway, nor leaves me with enough energy after work to do much of anything productive. Also, it starts with a required morning voice-call, and I'm on the US East Coast, so not even a state/timezone switch can help me with this unless I leave the country. (Plus I have to be online overlapping at least a few hours with some coworkers, in order to actually do my job.)

I want to do technical AI alignment work, but I'm right on the edge of energy/intelligence/working-memory/creativity/rationality, where I legitimately don't know if I'm "cut out" for this work or not. The field's incentive as a whole is to never discourage me from the work, while also not helping me much (grants, jobs, ongoing support) without me having to signal my abilities.

Doing anything to signal my abilities is hard, due to the energy thing. So I need slack to signal much of anything useful about myself, but I need these signals to already exist for me to "earn" this slack. (This also applies to getting a better tech job, with the signaling being things like at-home coding projects, night classes, certifications...)

Perhaps I could simply save money, get runway, and then that's my slack to do the other stuff? Five problems with this:

  1. My AI timelines are mostly shorter than 15 years, and my wages don't make me enough money to save for runway at anything other than a glacially slow pace.
  2. I could maybe lower the required amount of money for runway by buying some cheap land and a shed... but I'm not sure where/how to do this in a way that meets the criteria of "cheap enough to save for in <1 year, is <30 min away from groceries, is actually legal to live in, and can accommodate decent Internet access". At minimum, I'd have to leave New York state, and also setting up a shed might involve lots of work (not sure how much).
  3. In any case, the runway would only work well if I do manage to get myself to a sustainable life-setup. Unless I'm in the fully-living-off-investments shed route described above, this requires me to be smart/talented/etc enough for AI alignment and/or some other "gig" that doesn't require me to wake up early. As noted above, I'm legitimately uncertain about this. One round of AISS video coaching (as well as talking to many people at EAG SF 2022) did not solve this.
  4. All this, and I'm 23. Math-heavy work (of the kind I think technical AI alignment requires) is notorious for requiring flexibility and energy of the kind associated with youth and possibly irrecoverable after age 25.
  5. The poor-sleep-energy cycle seems difficult to break, and it also impacts my real quality-of-life and mental health. So I really don't want to rely on a solution like "suck it up for 5 years and save money". The other reasons above tie into this "I want to get my life together quickly" desire, but I also just... desire it.

Lurking behind all this, is a suspicion: If I always have excuses for not getting my life together, then it's more likely that some of those excuses are fake in some way. But like... which ones?

My current methods of action are:

  • investigate cheap, barely-livable land in other US states.
  • post my ramblings as LW shortforms, in the hopes that they both (a) get my good ideas out to people who can execute them quicker, and (b) eventually signal something good about me... when they're not just complaining or asking for help.
  • sublimate my more bizarre personality traits (which don't help with most of the above problems!) into low-effort creative works in my spare time, mainly YouTube videos under pseudonyms. This is also a long shot at making more money, though obviously it can't be relied on when planning for the long run.
  • save money for the shed/land plan, which relies the least on my competence.
  • maybe occasionally enter AI-related contests with cash prizes. (I've done this three times and won money twice, though I did poorly in the most technical/actually-useful-for-alignment one.) This is the hardest to do, for the energy/time reasons noted above, so I'm not likely to do it often.

If anyone knows how to fix my life, for <$1,000 upfront and/or <$100/month ongoing, that'd be helpful. I mean anti-inductive advice (no "try losing weight" (too vague, even if true) or "try BetterHelp" (I already tried it)), that's personally tailored to my exact ouroboros of problems described above.

(If an idea seems risky, to you or to me, DM me about it instead of posting it publicly.)

Yep, agreed. I'm just glad that (allegedly?) the LTFF is still doing specifically the upskilling-grant thing.

In the bad case, I end up having to work on harebrained side-business ideas as well, while jobless-and-not-yet-funded (or even while funded, but not for a long runway).

Forgot to mention this in the post proper, but: Pages would be organized in a multi-examples-per-subsection way, where each subsection corresponds to something like a part of an extended "ADEPT Method".

Agreed; most "fraudulent" listed public companies (on exchanges like the NYSE, where they actually check things) meet weird conditions like:

  • being really old, to where their corporate history stretches back before modern high-standards checks.
  • being acquired/SPAC'd to allow a fraudulent private company to kinda list itself.
  • being based on a larger fraud that probably isn't accounting- or insider-related.

(Disclaimer: not an expert, not financial advice.)

(Sources: Discord chats on public servers.)

Why do I believe X?

What information do I already have, that could be relevant here?

What would have to be true such that X would be a good idea?

If I woke up tomorrow and found a textbook explaining how this problem was solved, what's paragraph 1?

What is the process by which X was selected?

I was asking around in multiple Discord servers for a certain type of mental "heuristic", or "mental motion", or "badass thing that Planecrash!Keltham or HPMOR!Harry would think to themselves, in order to guide their own thoughts in fruitful and creative and smart directions". Someone commented that this could also be reframed as "language-model prompts to your own mind" or "language-model simulations of other people in your own mind".

I've decided to clarify what I meant, and why even smart people could benefit from seemingly hokey tricks like this.

Heuristics/LMs-of-other-people are something like a mental hack to trick our godshatter human brains into behaving smarter than we reasonably would if left unprompted, given our computational limitations (in particular, our "recall" memory).

Like, yes, optimal Bayesian reasoners (plus a few Competent People who might exist, like Yudkowsky or Wentworth or mumblemumble) can do this unprompted, presumably because they have years of practice with the "mental motions" and/or better recall ability (combined, of course, with the rare achievement of having read the Sequences at all, which makes a lot of this stuff explicit). Thus, the heuristics help those of us who don't consciously remember every Bayes-approximating tip we've ever heard about a given mental situation.

Yep, we can easily have multiple hypotheses of the form "something we don't (yet) understand has caused this". My odds are more on "weather/camera/light/experimental aircraft we don't understand" than "aliens we don't understand".

One problem with "using a simpler example" is that there's a lower bound. Prime numbers are not-too-hard to explain, at some levels of thoroughness.

Like, some part of my subconscious basically thinks (despite evidence to the contrary): "There is Easy Math and Hard Math. All intuitive explanations have been done only about Easy Math. Hard Math is literally impossible to explain if you don't already understand it."

Part of the point of Mathopedia is to explicitly go after hard, advanced, graduate-level and research-level mathematics. To make it intelligible enough that someone can learn it just from browsing the site and maybe doing a few exercises.

Even if they need to go down a TVTropes-style rabbit-hole (still within the site) to find all the background knowledge they're missing.

Even if we add increasingly-unrealistic constraints like "any non-mentally-disabled teen should be able to do this".

Even if it requires laborious features like "there should be a toggle switch / separate page-subsection that replaces all the jargon on a page with [parentheses of (increasingly recursive (definitions))], so the whole page is full of run-on sentences while also in-principle being explainable to an elementary schooler".

Even if we have to use some incredibly hokey diagrams.

Pretty tangential, feel free to remove:

The YouTuber "BarelySociable" made a two-part video a while back, trying to figure out who Satoshi Nakamoto was. He gave pretty decent evidence that it was a British guy who isn't any of the three candidates usually mentioned.

Yep! My main hope is that it works in a niche of people who needed specifically-it (or who find it more "intrinsically fun" to read and/or contribute to than the other options).
