Noah Scales

Huh, ok. I will have to check out the new version. Thanks!

Why was connectionism unlikely to succeed?

Can you clarify? I'm not sure what you mean by this with respect to machine learning.

Interesting, and why is that an improvement?

Hmm, that's interesting. Thanks Peter!

Answer by Noah Scales

How I do math that starts out as a mathematical expression

I learned how to do math on paper or at the blackboard, except for an interlude at a Montessori school, where we used physical media. After a while, any math problem that took the form of a list of mathematical expressions was one to solve with successive string manipulations. The initial form implied a set of transformations and a write-up that I had to perform. By looking at what I had just written, and planning how to transform the earlier expression, in just a few manipulations, into something closer to my final answer, I would record successive approaches to the final answer.

"Move the expression there, put that number there, that symbol there, combine those numbers into a new number using that operator, write that new thing underneath that old thing, line up the equals signs, line up those numbers,". Repeat down the page as I write.

How I do math that starts out with a verbal description (for example, an algebra word problem)

Starting from the verbal description, I get an idea of the mathematical expressions I need to write to express what's given in the problem, and write those down. From there, I go on to solve the resulting expressions with the successive transformation approach.
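
For instance (a made-up example): "two numbers sum to 10 and differ by 4" becomes the expressions x + y = 10 and x - y = 4, which then reduce step by step to x = 7 and y = 3.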

How I do math that starts out with a graphical description (for example, a graph of a function)

As with a verbal description, I get an idea of the mathematical expressions I need to write to express what's given in the problem. From there, I go on to solve those expressions with the successive transformation approach.
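
For instance (made up): a graph showing a straight line through the points (0, 1) and (2, 5) becomes the expression y = 2x + 1, and the transformation steps proceed from there as usual.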

NOTE: sometimes a verbal description can benefit from a picture, for example, in a physics problem. In that case, I would draw a picture of the physical system, both to help me identify the mathematical expressions to start with and to help me feel like I "understand" what the verbal description depicts.

About internal visualizing vs using cognitive aids to do math

Cognitive aids reduce the cognitive load of representing information. If you can choose between an external picture of a graph that you can look at any time and an internal picture of the same graph, then for most purposes, and to allow yourself the most freedom of operation in approaching a mathematics problem, I suggest that you use the cognitive aid.

That's how I've always preferred to do math:

  • write out a math expression or draw out the graph or diagram of the problem
  • don't condense the write-up of successive steps if skipping steps could lead to calculation errors
  • keep it all neat on the page

For higher-level math, I think it makes sense to use a computer as much as you can to handle details and visualization, relying on its precision and memory while you concentrate on identifying and applying an algorithm that produces a useful solution to the problem.
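
As a minimal sketch of what I mean, in Python with the SymPy library (the equation itself is a made-up example):

    # Let the computer carry the symbolic details of solving a
    # quadratic while I concentrate on the overall approach.
    from sympy import symbols, Eq, solve

    x = symbols('x')
    equation = Eq(x**2 - 5*x + 6, 0)
    print(solve(equation, x))  # prints [2, 3]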

Conclusions about how I do math

Basically, I do it the same way I have since I was little. I remember learning my times tables by memorization and writing them out, then paper and pencil work in class, then a bit of calculator work much later on with a TI-83, but mainly for large-number multiplication or division, or to check my work.

I took a graph theory class for my math minor, and I wish I still had my notes. I suspect that some of the answers were actually pictures, but I don't remember much of what I did in the class; it was 30 years ago.

Mathematics involving a computer is more or less the same. You write out equations, but you might be working with cell references, variable names, or varying data sets, so you basically stop at the point where you turn a word or graphical problem description into a mathematical expression. Then you let the calculator or computer do the work.
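
As a made-up illustration of stopping at the expression and letting the computer evaluate it over whatever data arrive:

    # The expression is written once, in terms of a variable name;
    # the computer evaluates it over any data set supplied.
    data = [3.2, 4.7, 5.1, 2.9]
    mean = sum(data) / len(data)
    print(mean)  # prints 3.975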

Correlations among AGI doomer predictions could reveal common AI Safety milestones

I am curious about how considering these overlaps could lead to a list of milestones for positive results in AI safety research. If there are enough exit points from pathways to AI doom available through AI Safety improvements, a catalog of those improvements might be visible in models of correlations among AI Safety researchers' predictions about AGI doom of various sorts.
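
As a rough sketch of the kind of correlation model I have in mind, in Python; the researcher estimates here are invented purely for illustration:

    # Rows: researchers. Columns: P(doom) estimates for distinct
    # doom scenarios (e.g., misalignment, misuse, race dynamics).
    # Strongly correlated scenario columns could point at a shared
    # safety milestone whose achievement would lower several of them.
    import numpy as np

    predictions = np.array([
        [0.30, 0.20, 0.40],  # researcher A
        [0.60, 0.50, 0.70],  # researcher B
        [0.10, 0.05, 0.15],  # researcher C
        [0.45, 0.35, 0.55],  # researcher D
    ])
    print(np.corrcoef(predictions, rowvar=False))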

But as a sidenote, here's my response to your mention of climate researchers thinking in terms of P(Doom).

Commonalities among climate scientists who are pessimistic about climate change and geo-engineering

Doubts held by pessimistic climate scientists about future climate plausibly include:

  • countries meeting climate commitments (they haven't so far and won't in future).
  • tipping elements (e.g., Amazon) remaining stable this century (several are projected to tip this century).
  • climate models having resolution and completeness sufficient for atmospheric geo-engineering (none do).
  • politics of geo-engineering staying amiable (plausibly not if undesirable weather effects occur).
  • GAST < 1.5C this century (GAST is predicted to rise higher this century).

Common traits among more vocal climate scientists forecasting climate destruction could include:

  • applying the precautionary principle.
  • rejecting some economic models of climate change impacts.
  • liking the idea of geo-engineering with marine cloud brightening in the Arctic.
  • disliking atmospheric aerosol injection over individual countries.
  • favoring hypothetical DACCS or BECCS deployment that is timely and scales well.
  • worrying about irreversible tipping element changes such as sea level rise from Greenland ice melt.

NOTE: I see these commonalities through my own browsing of the literature and observation of trends among vocal climate scientists, but my list is not the result of any representative survey.

Climate scientist concern over a climate emergency contrasts with prediction of impending catastrophe

There are 13,000 scientist signatories to a statement declaring a climate emergency; I think that shows concern (if not pessimism) from a vocal group of scientists about our current situation.

However, if climate scientists are asked to forecast P(Doom), the forecasts will vary depending on each scientist's beliefs about:

  • what countries and people will actually do as time goes on.
  • how much time is available to limit GHG production.
  • whether suitable economic and societal changes are in place, or merely pending, as climate change effects grow.
  • how much time is required to implement effective geo-engineering.
  • whether the technological response is suitable to prevent, adapt to, or mitigate climate change consequences.

A different question is how climate scientists might backcast not_Doom or P(Doom) < low_value. A comparison of the scenarios they describe could show interesting differences in worldview.

You wrote:

I'm not sure I understand your question. Do you mean, why wouldn't someone who's running the engine I describe end up resenting things like OpenAI that seem to be accelerating AI risk?

For one, I think they often do.

Oh. Good! I'm a bit relieved to read that. Yes, that was the fundamental question that I had. I think that shows common sense.

I'm curious what you think a sober response to AGI research is for someone whose daily job is working on AI Safety, if you want to discuss that in more detail. Otherwise, thank you for your answer.

I'm a little surprised that doomerism could take off like this, dominate one's thoughts, and yet fail to create resentment and anger toward its apparent source. Is that something that was absent for you, or was it not relevant to discuss here?

I wonder:

  • in the prediction of doom, as the threat seems to grow closer, does that create resentment or anger at the apparent sources of that doom? If I dwelled on AI existential risk, I would feel more resentment of the sources of that risk.
  • do the responses to that doom, or the desperation of the measures considered, become wilder as one thinks about it more? Just a passing thought about AI doom immediately brings to mind, "Let's stop these companies and agencies from making such dangerous technology!" In other words, let's give up on AI safety and try the other approach.
  • is there still appeal to a future with AGI? I can see some of the excitement or tension around the topic coming from the ambiguity of the paths toward AGI and their consequences. I've seen hype claiming that AGI will save humanity from itself, advance science radically, turbo-charge economic growth, etc. Is that vision, alternated with a vision of horrible suffering and doom, a cause of cognitive dissonance? I would think so.

Factors that might be protecting me from this include:

  • I take a wait and see approach about AGI, and favor use of older, simpler technologies like expert systems or even simpler cognitive aids relying on simple knowledgebases. In the area of robots, I favor simpler, task-specific robots (such as manufacturing robot arms) without, for example, self-learning abilities or radically smart language recognition or production. It's helpful to me to have something specific to advocate for, and think about, as an alternative, rather than thinking that it's AGI or nothing. 
  • I assume that AGI development is, overall, a negative outcome: simply more risk to people (including the AGIs themselves, sure to be exploited if they are created). I don't accept that AGI development offers necessary opportunities for human technological advancement. In that way, I am resigned to AGI development as a mistake others make. My hopes are not in any way invested in AGI. That saves me some cognitive dissonance.

Thank you for sharing this piece, I found it thought-provoking.