Jozdien


Comments

Challenge: know everything that the best go bot knows about go

That's evidence for it being harder to know what a Go bot knows than to know what a chess bot does, right?  And if I'm understanding Go correctly, those years were in significant part due to computational constraints, which would imply that better transparency tools or making the bots more human-understandable still wouldn't go near letting a human know what they know, right?

Challenge: know everything that the best go bot knows about go

I'm not clear on your usage of the word "know" here, but if it's in a context where knowing and level of play have a significant correlation, I think GMs not knowing would be evidence against it being possible for a human to know everything that game bots do.  GMs don't just spend most of their time and effort on it, they're also prodigies in the sport.

Challenge: know everything that the best go bot knows about go

How comparable are Go bots to chess bots in this?  Chess GMs at the highest level have been using engines to prepare for decades; I think if they're similar enough, that would be a good sample to look at for viable approaches.

Open and Welcome Thread - May 2021

80,000 Hours' data suggests that people are the bottleneck, not funding.  Could you tell me why you think otherwise?  It's possible that there's even more available funding in AI research and similar fields that are likely sources for FAI researchers.

Open and Welcome Thread - May 2021

Thanks!  2006 is what I remember, and what my older brother says too.  I was 5 though, so the most I got out of it was learning how to torrent movies and Pokemon ROMs until like 2008, when I joined Facebook (at the time to play an old game called FarmVille).

Thoughts on Re-reading Brave New World

I'm far from calling Brave New World a utopia, but I also couldn't easily describe it as a dystopia.  People are happy with their lives for the most part, but there's no drive to push average levels of happiness up, and death still exists.  The best dystopian argument I can see is that there's no upward trend of good associated with scientific advancement, but even this needn't be true, because of the islands where only the most unorthodox thinkers are sent, presumably without having to worry about their actions there.  I think something approximating a utopia by our standards would likely involve mass genetic equalization (but you know, in an upward direction), controlled environments, and easy access to hedons.

Open and Welcome Thread - May 2021

I’m Jose.  I’m 20.  This is a comment many years in the making.

I grew up in India, in a school that (almost) made up for the flaws in Indian academia, as a kid with some talent in math and debate.  I largely never tried to learn math or science outside what was taught at school back then.  I started using the internet in 2006, and eventually started to feel very strongly about what I thought was wrong with the institutions of the world, from schools to religion.  I spent a lot of time then trying to make these thoughts coherent.  I didn’t really think about what I wanted to do, or about the future, in anything more than abstract terms until I was 12 and a senior at my school recommended HPMOR.

I don't remember what I thought the first time I read it, up to where it had reached at the time (I think chapter 95).  I do remember that on my second read, by the time it had reached chapter 101, I stayed up the night before one of my finals to read it.  That was around the time I started to actually believe I could do something to change the world (there may have been a long phase where I phrased it as wanting to rule the universe).  But apart from an increased tendency in my thoughts at the time toward refining my belief systems, nothing changed much, and Rationality: From AI to Zombies remained on my TBR until early 2017, which is when I first lurked LessWrong.

I had promised myself at the time that I would read all the Sequences properly regardless of how long it took, and so it wasn't until late 2017 that I finally finished them.  That was a long and arduous process, much of which came from inner conflicts I noticed for the first time.  Some of the ideas were ones I had tried to express long ago, far less coherently.  It was epiphany and turmoil at every turn.  I graduated school in 2018; I'd eventually realize this wasn't nearly enough, and it was pure luck that I chose a computer science undergrad because of vague thoughts about AI, despite not yet having decided on what I really wanted to do.

Over my first two years in college, I tried to actually think about that question.  By this point, I had read enough about FAI to know it was the most important thing to work on, and that anything I did would have to come back to it in some way.  Despite that, I still stuck to some old wish to do something I could call mine, and shoved the idea of direct work in AI Safety into the pile where things you consciously know but still ignore in your real life go.  Instead, I thought I'd learned the right lesson and held off on answering direct career questions until I knew more, because I had a long history of overconfidence in those answers (not that that's a misguided principle, but there was more I could have seen at that point with what I knew).

Fast forward to late 2020.  I had still been lurking on LW, reading about AI Safety, and generally immersing myself in the whole shindig for years.  I had even applied to the MIRIx program early that year, but held off on starting operations after March.  I don't remember what exactly made me start to rethink my priors, but one day I was shaken by the realization that I wasn't doing anything the way I should have been if my priorities were actually what I claimed they were: to help the most people.  I thought of myself as very driven by my ideals, and being wrong at the level where you don't even notice the difficult questions wasn't comforting.  I went into existential panic mode, trying to seriously recalibrate everything about my real priorities.

In early 2021, I was still confused about a lot of things, not least because being from my country limits the options one has to directly work in AI Alignment, or at least makes them more difficult.  That was a couple of months ago.  I found that after I took a complete break from everything for a month to study for subjects I hadn't touched in a year, most of the cached thoughts that had bred my earlier inner conflicts had disappeared.  I'm not entirely settled yet though; it's been a weird few months.  I'm trying to catch up on a lot of lost time and learn math (I'm working through MIRI's research guide), focus my attention a lot more on specific areas of ML (I lucked out again there, having spent a lot of time studying it broadly earlier), and generally get better at things.  I'll hopefully post here, if infrequently.  I really hope this comment doesn't feel like four years.

Best empirical evidence on better than SP500 investment returns?

What would your advice be on other cryptocurrencies, like Ethereum or minor coins that aren't as fad-prone and presumably cheaper to mine?

Covid 4/22: Crisis in India

I haven’t been tracking India, but I don’t have any reason to think there was a large behavioral change since February that could take us from static to doubling every week. What could this be other than the variant?

I'm putting it at about 85% that the primary cause of India's surge is the B.1.617 variant being far more infectious than the previously dominant variants.

I don't have much better data on how much of the surge to attribute to the variant, because as far as I've seen there isn't any.  But in the weeks before the surge began, there was a sizeable contingent of people predicting that cases would go up very badly in April even before news of the variant, because of religious festivals (Kumbh Mela in mid-April drew millions of people into crowds without masks, even after many of the priests tested positive) and regional elections (it's standard practice to have huge crowds surrounding candidates on the road as they pass by, and most places didn't stop this year) happening at the same time.

This article gives a bit of credence to the possibility that some countries had populations with higher prior immunity than others.  I can't say whether this is true or not, but if so, it's possible that's where the new variant differs.  And because India was hit far less hard than people expected, many weren't following mask and distancing protocols by April, which would have given a new, more infectious variant very opportune conditions to spread.

Why We Launched LessWrong.SubStack

According to the post The Present State of Bitcoin, the value of 1 BTC is about $13.2.  Since the title indicates that this is, in fact, the present value, I'm inclined to conclude that those two websites you linked to are colluding to artificially inflate the currency.  Or they're just wrong, but the law of razors indicates that the world is too complicated for the simplest solution to be the correct one.
