NunoSempere

I'm an independent researcher, hobbyist forecaster, programmer, and aspiring effective altruist.

In the past, I've studied Maths and Philosophy, dropped out in exasperation at the inefficiency; picked up some development economics; helped implement the European Summer Program on Rationality during 2017, 2018 and 2019, and SPARC during 2020; worked as a contractor for various forecasting and programming projects; volunteered for various Effective Altruism organizations, and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and remain keenly interested in Spanish poetry.

I like to spend my time acquiring deeper models of the world, and a good fraction of my research is available on nunosempere.github.io.

With regard to forecasting, I am LokiOdinevich on GoodJudgementOpen and Loki on CSET-Foretell, and I have been running a Forecasting Newsletter since April 2020. I also quite enjoy winning bets against people who are too confident in their beliefs.

I was a Future of Humanity Institute 2020 Summer Research Fellow, and I'm working on a grant from the Long Term Future Fund to do "independent research on forecasting and optimal paths to improve the long-term." You can share feedback anonymously with me here.

Sequences

Forecasting Newsletter
Inner and Outer Alignment Failures in current forecasting systems

Comments

Survey on cortical uniformity - an expert amplification exercise

EDIT: rephrased the estimations so they match the probability one would enter in the Elicit questions 

Oof, that means I get to change my predictions. 

Survey on cortical uniformity - an expert amplification exercise

I made three quick predictions, though I'm not very sure of them. Someone should do the Bayesian calculation with a reasonable prior to determine how likely it is that more than half of experts would answer some way, given the answers of the 6 experts who did answer.
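As a rough sketch of the kind of calculation I mean (assuming a uniform Beta(1,1) prior over the proportion of experts who would answer "true", treating the 6 answers as independent draws, and reading "more than half of experts" as "the underlying proportion is above 1/2"; the function below and its name are just for illustration):

// Rough sketch: posterior probability that the underlying proportion of experts
// who would answer "true" exceeds 1/2, given k "true" answers out of n,
// under a uniform Beta(1,1) prior. For integer a, b,
// P(Beta(a,b) > 1/2) = sum_{j < a} C(a+b-1, j) / 2^(a+b-1).
function probabilityMajorityAgrees(k, n) {
  let a = 1 + k          // posterior alpha
  let b = 1 + (n - k)    // posterior beta
  let m = a + b - 1
  let choose = (N, r) => {
    let result = 1
    for (let i = 1; i <= r; i++) result *= (N - r + i) / i
    return result
  }
  let sum = 0
  for (let j = 0; j < a; j++) sum += choose(m, j)
  return sum / 2 ** m
}

// e.g., if 5 of the 6 experts answered "true":
// probabilityMajorityAgrees(5, 6) ≈ 0.94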

For some of these questions, I'd expect experts to care more about the specific details than I would. E.g., maybe for “The entire cortical network could be modeled as the repetition of a few relatively simple neural structures, arranged in a similar pattern even across different cortical areas” someone who spends a lot of time researching the minutiae of cortical regions is more likely to consider the sentence false.

Real-Life Examples of Prediction Systems Interfering with the Real World (Predict-O-Matic Problems)

Two other examples:

  • YouTube's recommender system changes the habits of YouTube video producers (e.g., putting keywords at the beginning of titles, and at the beginning of the video itself now that YouTube can parse speech)
  • Andrew Yang apparently received death threats over a prediction market on the number of tweets. 
GraphQL tutorial for LessWrong and Effective Altruism Forum

I've come back to this occasionally, thanks. Here are two more snippets:

To get one post:

{
  post(input: {
    selector: {
      _id: "Here goes the id"
    }
  }) {
    result {
      _id
      title
      slug
      pageUrl
      postedAt
      baseScore
      voteCount
      commentCount
      meta
      question
      url
      user {
        username
        slug
        karma
        maxPostCount
        commentCount
      }
    }
  }
}

or, as a JavaScript/node function:

let graphQLendpoint = 'https://forum.effectivealtruism.org/graphql' // or https://www.lesswrong.com/graphql. Note that this is not the same as the graph*i*ql visual interface talked about in the post. 

async function fetchPost(id){
  // note the async; if your Node version doesn't provide a global fetch,
  // you may need a package such as node-fetch
  let response = await fetch(graphQLendpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: `
      {
        post(input: {
          selector: {
            _id: "${id}"
          }
        }) {
          result {
            _id
            title
            slug
            pageUrl
            postedAt
            baseScore
            voteCount
            commentCount
            meta
            question
            url
            user {
              username
              slug
              karma
              maxPostCount
              commentCount
            }
          }
        }
      }`
    }),
  })
    .then(res => res.json())
    .then(res => res.data.post ? res.data.post.result : undefined)
  return response
}
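
For what it's worth, a quick way to try it out (the id below is just a placeholder, not a real post id):

fetchPost("someRealPostId").then(post => console.log(post ? post.title : 'not found'))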

 

To get a user:

{
  user(input: {
    selector: {
      slug: "heregoestheslug"
    }
  }) {
    result {
      username
      pageUrl
      karma
      maxPostCount
      commentCount
    }
  }
}

Or, as a JavaScript function:

let graphQLendpoint = 'https://forum.effectivealtruism.org/graphql' // or https://www.lesswrong.com/graphql. Note that this is not the same as the graph*i*ql visual interface talked about in the post. 

async function fetchAuthor(slug){
  // note the async; if your Node version doesn't provide a global fetch,
  // you may need a package such as node-fetch
  let response = await fetch(graphQLendpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: `
      {
        user(input: {
          selector: {
            slug: "${slug}"
          }
        }) {
          result {
            username
            pageUrl
            karma
            maxPostCount
            commentCount
          }
        }
      }`
    }),
  })
    .then(res => res.json())
    .then(res => res.data.user ? res.data.user.result : undefined)
  return response
}
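
And correspondingly for users (the slug below is again just a placeholder):

fetchAuthor("some-user-slug").then(user => console.log(user ? user.karma : 'not found'))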