rodeo_flagellum

Comments

For the person who strong-downvoted this, it would be helpful to me if you also shared which facets of the idea you found inadequate; the 5 or so people I've talked to online have generally supported this idea and found it interesting. I would appreciate some transparency on exactly why you believe the idea is a waste of time or resources, since I want to avoid wasting either of those. Thank you.

Great piece.

Note: a small thank you for the link https://www.etymonline.com/word/patience; I've never seen this site before, but I will probably have a lot of fun with it.

Thank you for your input on this. The idea is to show people something like the following image [see below] and give a few words of background on it before asking for their thoughts. I agree that this part wouldn't be too helpful for getting people's takes on the future, but my thinking is that it might be nice to see some people's reactions to such an image. Any more thoughts on the entire action sequence?

I understand that its performance is likely high-variance and that it misses details.

My use for it is in structuring my own summaries: I can follow the video, fill in the missing pieces, and correct the initial summary as I go along. I haven't viewed it as a replacement for human summarization.

Thank you for bringing my attention to this.

It seems quite useful, hence my strong upvote.

I will use it to get an outline of two ML Safety videos before summarizing them in more detail myself. I will put these summaries in a shortform, and will likely comment on this tool's performance after watching the videos.

Is there a name for the discipline or practice of symbolically representing the claims and content of language? (This may be part of Mathematical Logic, but I am not familiar enough with it to know.)

Practice: The people of this region (Z) typically prefer hiking in the mountains of the rainforest to walking in the busy streets (Y), given their love of the mountaintop scenery (X).

XYZ Output: Given their mountaintop scenery love (X), rainforest mountain hiking is preferred over walking in the busy streets (Y) by this region's people (Z).
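For concreteness, here is one way the practice sentence might be rendered in first-order-logic-style notation. This is only a rough sketch; the predicate names and the quantifier structure are my own choices, not a standard encoding.

```latex
% Rough first-order sketch of the practice sentence; predicate names are ad hoc.
% "People of region Z, given their love of mountaintop scenery (X),
%  prefer rainforest mountain hiking to walking in the busy streets (Y)."
\forall p \,\big[\, \mathrm{InRegion}(p, Z) \land \mathrm{Loves}(p, \mathrm{MountaintopScenery})
    \rightarrow \mathrm{Prefers}(p, \mathrm{RainforestHiking}, \mathrm{BusyStreetWalking}) \,\big]
```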

Thoughts, Notes: 10/14/0012022 (2)

Contents

  1. Track Record, Binary, Metaculus, 10/14/0012022
  2. Quote: Universal Considerations [Forecasting]
  3. Question: on measuring importance of forecasting questions

Please tell me how my writing and epistemics are inadequate. 

1.

My Metaculus Track Record, Binary, [06/21/0012021 - 10/14/0012022]

2. 

The Universal Considerations for forecasting, from Chapter 2 of Francis X. Diebold's book Forecasting in Economics, Business, Finance and Beyond:

(Forecast Object) What is the object that we want to forecast? Is it a time series, such as sales of a firm recorded over time, or an event, such as devaluation of a currency, or something else? Appropriate forecasting strategies depend on the nature of the object being forecast.

(Information Set) On what information will the forecast be based? In a time series environment, for example, are we forecasting one series, several, or thousands? And what is the quantity and quality of the data? Appropriate forecasting strategies depend on the information set, broadly interpreted to include not only quantitative data but also expert opinion, judgment, and accumulated wisdom.

(Model Uncertainty and Improvement) Does our forecasting model match the true DGP? Of course not. One must never, ever, be so foolish as to be lulled into such a naive belief. All models are false: they are intentional abstractions of a much more complex reality. A model might be useful for certain purposes and poor for others. Models that once worked well may stop working well. One must continually diagnose and assess both empirical performance and consistency with theory. The key is to work continuously toward model improvement.

(Forecast Horizon) What is the forecast horizon of interest, and what determines it? Are we interested, for example, in forecasting one month ahead, one year ahead, or ten years ahead (called h-step-ahead forecasts, in this case for h = 1, h = 12 and h = 120 months)? Appropriate forecasting strategies likely vary with the horizon.

(Structural Change) Are the approximations to reality that we use for forecasting (i.e., our models) stable over time? Generally not. Things can change for a variety of reasons, gradually or abruptly, with obviously important implications for forecasting. Hence we need methods of detecting and adapting to structural change.

(Forecast Statement) How will our forecasts be stated? If, for example, the object to be forecast is a time series, are we interested in a single "best guess" forecast, a "reasonable range" of possible future values that reflects the underlying uncertainty associated with the forecasting problem, or a full probability distribution of possible future values? What are the associated costs and benefits?

(Forecast Presentation) How best to present forecasts? Except in the simplest cases, like a single h-step-ahead point forecast, graphical methods are valuable, not only for forecast presentation but also for forecast construction and evaluation.

(Decision Environment and Loss Function) What is the decision environment in which the forecast will be used? In particular, what decision will the forecast guide? How do we quantify what we mean by a “good” forecast, and in particular, the cost or loss associated with forecast errors of various signs and sizes?

(Model Complexity and the Parsimony Principle) What sorts of models, in terms of complexity, tend to do best for forecasting in business, finance, economics, and government? The phenomena that we model and forecast are often tremendously complex, but it does not necessarily follow that our forecasting models should be complex. Bigger forecasting models are not necessarily better, and indeed, all else equal, smaller models are generally preferable (the “parsimony principle”).

(Unobserved Components) In the leading case of time series, have we successfully modeled trend? Seasonality? Cycles? Some series have all such components, and some not. They are driven by very different factors, and each should be given serious attention.

3. 

Question: How should I measure the long-term civilizational importance of the subject of a forecasting question?

I've used the Metaculus API to collect my predictions on open, closed, and resolved questions.

I would like to organize these predictions; one way I want to do this is by the "civilizational importance" of the forecasting question's content. 

Right now, I've thought to give subjective ratings of importance on a logarithmic scale, but I want a more formal system of measurement.

Another idea is to give each question a score of 0 (no relevance), 1 (relevance), or 2 (relevant and important) for every category. For example, if my categories are "Biology", "Astronomy", "Space_Industry", and "Sports", then the question "Will SpaceX send people to Mars by 2030?" would have the dictionary {"Biology": 0, "Space_Industry": 2, "Astronomy": 1, "Sports": 0}. I'm unsure whether this system is helpful; a minimal sketch of how it might look in code is below.
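As a minimal sketch (the aggregation into a single importance number, and the helper function, are my own illustrative choices rather than a settled design):

```python
# Sketch: score each question 0/1/2 per category, then collapse the
# per-category scores into a single "importance" number for sorting.

CATEGORIES = ["Biology", "Astronomy", "Space_Industry", "Sports"]

def importance(scores: dict) -> int:
    """Sum the 0/1/2 relevance scores; higher means more important.

    Other aggregations (max, weighted sum, log-scale weights) would work
    with the same per-category dictionaries.
    """
    return sum(scores.get(category, 0) for category in CATEGORIES)

# Example question with hand-assigned category scores.
questions = [
    {
        "title": "Will SpaceX send people to Mars by 2030?",
        "scores": {"Biology": 0, "Space_Industry": 2, "Astronomy": 1, "Sports": 0},
    },
    # ...in practice, filled from the predictions collected via the Metaculus API.
]

questions.sort(key=lambda q: importance(q["scores"]), reverse=True)
for q in questions:
    print(importance(q["scores"]), q["title"])
```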

Does anyone have any thoughts on this?

Thank you for taking a look, Martin Vlach.

For the latter comment, there is a typo. I meant:

Coverage of this topic is sparse relative to coverage of CC's direct effects.

The idea is that the corpus of work on how climate change is harmful to civilization includes few detailed analyses of the mechanisms through which climate change leads to civilizational collapse, but does include many works on the direct effects of climate change.

For the former comment, I am not sure what you mean w.r.t. "engender".

Definition of engender

transitive verb

1 : beget, procreate 

2 : to cause to exist or to develop : produce 

"policies that have engendered controversy"

Thoughts, Notes: 10/14/0012022 (1)

Contents:

  1. Summary, comment: Climate change and the threat to civilization (10/06/2022)
  2. Compression of (1)
  3. Thoughts: writing and condensing information
  4. Quote: my friend Evan on concision 

To the reader: Please point out inadequacies in my writing. 

1. 

Article: Climate change and the threat to civilization (10/06/2022)

Context: My work for Rumtin Sepasspour (gcrpolicy.com) includes summarizing articles relevant to GCRs and GCR policy.

Summary: An assessment of the conditions under which civilizational collapse may occur due to climate change would greatly improve the ability of the public and policymakers to address the threats from climate change, according to academic researchers Steel et al. in a PNAS opinion piece. While literature on climate change (e.g., reports from the Intergovernmental Panel on Climate Change) typically covers the deleterious effects that climate change is having or will have on human activities, there has been much less focus on exactly how climate change might factor into different scenarios for civilizational collapse. Given the deficits in this research topic, Steel et al. outline three civilizational collapse scenarios that could stem from climate change - local collapse, broken world, and global collapse - and then discuss three groups of mechanisms - direct impacts, socio-climate feedbacks, and exogenous shock vulnerability - for how these scenarios might be realised. (6 October 2022)

Policy comment: Just as governments and policymakers have directed funding and taken action to mitigate the harmful, direct effects of climate change, it seems natural that they should take the next step of making the aspects of civilization most vulnerable to climate change more robust. The recommendation in this paper that policymakers and researchers alike promote more rigorous scientific investigation of the mechanisms and factors of civilizational collapse involving climate change seems sound. While this paper does not perform a detailed examination of the scenarios and mechanisms of civilizational collapse that it proposes, it is a call to action for more work to understand how climate change affects civilizational stability and what role climate change might play in civilizational collapse.

2. 

A condensed version of the summary and policy comment in (1) 

Summary: Humanity must understand how climate change (CC) could engender civilizational collapse. Coverage of this topic is sparse relative to coverage of CC's direct effects. Steel et al.'s PNAS opinion piece is a call to action for more research on this topic; they contribute an outline of 3 collapse scenarios - local collapse, broken world, and global collapse - and 3 collapse mechanisms - direct impacts, socio-climate feedbacks, and exogenous shock vulnerability (6 October 2022).

Policy comment: Policymakers and researchers need to promote research on the effects of climate change on civilizational stability so that critical societal institutions and infrastructure are protected from collapse. Such research efforts would include further investigations of the many scenarios and mechanisms through which civilization may collapse due to climate change; Steel et al. lay some groundwork in this regard, but do not provide a detailed examination.

3.

One issue I have is being concise in my writing. My friend Evan recently pointed this out when I asked him to read (1), and I want to write down some thoughts that the conversation evoked.

My first thought: What do I want myself and others to get from my writing?   

I want to learn, and writing helps with this. I want to generate novel and useful ideas and to share them with people. I want to show people what I've done or am doing. I want a record of my thinking on certain topics.

I want my writing to help others learn efficiently and I want to tell people entertaining stories, ones that engender curiosity. 

My next thought: How is my writing inadequate? 

I aim for transparency, informativeness, clarity, and efficiency in my writing, but feel that my writing is much less transparent, informative, clear, and efficient than it could be. 

  • W.r.t. transparency, my model is Reasoning Transparency. My writing sometimes includes answers to these questions[1] (this comment).
  • W.r.t. informativeness, I assume someone has already thought about or attempted what I am working on, so I try not to repeat (Don't repeat yourself) and to synthesize works when synthesis has not yet occurred, or has occurred only inadequately.
  • W.r.t. clarity, I try to edit my work multiple times and make clear what I want to be understood. I read my writing aloud to determine whether it is pleasant to hear.
  • W.r.t. efficiency, my sense of where to allocate attention across my writing is fuzzy. I use editing and footnotes to consolidate, but still have trouble.

I don't have good ways to measure or assess these things in my writing, and I haven't decided which hypothetical audiences to gear my writing towards; I believe this decision affects how much effort I expend optimizing at least transparency and efficiency. 

I will address my writing again at some point, but think it best I read the advice of others first. 

4.

My friend Evan on concision: 

Yelling at people on the internet is a general waste of time, but it does teach concision. No matter how sound your argument, if you say something in eight paragraphs and then your opponent comes in and summarizes it perfectly in twenty words, you look like an idiot. Don't look like an idiot in front of people! Be concise.

 

  1. ^
    • Why does this writing exist? 
    • Who is this writing for?
    • What does this content claim? 
    • How good is this content? 
    • Can you trust the author? 
    • What are the author's priors?
    • What beliefs ought to be updated?
    • What has the author contributed here?  

 

Sometimes failing at things makes it harder to try in the future, even when you expect things to go well, and sometimes people become so afraid that they give up on trying altogether. You can break out of this by making small, careful bets with your energy.

 

Reminds me of this article:

Researchers and educators have long wrestled with the question of how best to teach their clients, be they humans, non-human animals or machines. Here, we examine the role of a single variable, the difficulty of training, on the rate of learning. In many situations we find that there is a sweet spot in which training is neither too easy nor too hard, and where learning progresses most quickly.

...the optimal error rate for training is around 15.87% or, conversely, that the optimal training accuracy is about 85%

One might benefit from modulating one's learning so that one's failure rate falls in the above range (assuming the findings are accurate).
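As an aside on where the specific 15.87% figure likely comes from: it matches the standard normal CDF evaluated at -1, which I take to be the value the paper's Gaussian-noise derivation produces (my assumption; worth checking against the paper). A quick check in Python:

```python
import math

# Standard normal CDF via the error function: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
def phi(x: float) -> float:
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

print(phi(-1.0))  # ~0.1587, i.e. the ~15.87% optimal error rate quoted above
```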
