All of ozziegooen's Comments + Replies

Happy to see experimentation here. Some quick thoughts:

  • The "Column" looked a lot like a garbage can to me at first. I like the "+" in Slack for this purpose; that could work well here.
  • Checkmark makes me think "agree", not "verified". Maybe a badge or something?
  • "Support" and "Agreement" seem very similar to me?
  • While it's a different theme, I'm in favor of using popular icons where possible. My guess is that these will make it more accessible. I like the eyes you use, in part because they are close to the standard icon. I also like:
    • 🚀 or 🎉 -> This is a big accomplishment.
... (read more)
2 · the gears to ascension · 23d
I agree with the :tada: icon, 🎉, but I don't agree that 🙏 means "thanks" or that 😮 is actually surprised. 😮 looks sorta {horrified, frightened, shocked} to me.

I liked this a lot, thanks for sharing.

Here's one disagreement/uncertainty I have on some of it:

Both of the "What failure looks like" posts (yours and Paul's) present failures that essentially seem like coordination, intelligence, and oversight failures. I think it's very possible (maybe 30-46%+?) that pre-TAI AI systems will effectively solve the required coordination and intelligence issues.

For example, I could easily imagine worlds where an AI-enhanced epistemic environment makes low-risk solutions crystal clear to key decision-makers.

In general,... (read more)

Quick update: 


Immersed now supports a BETA for "USB Mode". I just tried it with one cable, and it worked really well, until it cut out a few minutes in. I'm getting a different USB-C cable that they recommend. In general I'm optimistic.

(That said, there are of course better headsets/setups that are coming out, too)

https://immersed.zendesk.com/hc/en-us/articles/14823473330957-USB-C-Mode-BETA-

Happy to see discussion like this. I've previously written a small bit defending AI friends on Facebook. There were some related comments there.

I think my main takeaway is "AI friends/romantic partners" are some seriously powerful shit. I expect we'll see some really positive uses and also some really detrimental ones. I'd naively assume that, like with other innovations, some communities/groups will be much better at dealing with them than others.

Relatedly, research to help encourage the positive sides seems pretty interesting to me.

Maybe we can refer to these systems as cybernetic or cyborg rubber ducking? :)

2 · Richard_Kennaway · 4mo
Silicon ducking? Cyberducking?

Yea; that's not a feature that exists yet. 

Thanks for the feedback!

Not yet. There are a few different ways of specifying the distribution, but we don't yet have options for doing so from the 25th & 75th percentiles. It would be nice to add eventually. (Might be very doable in a PR, for a fairly motivated person.)
https://www.squiggle-language.com/docs/Api/Dist#normal

You can type in normal({p5: 10, p95: 30}). It should later be possible to write normal({p25: 10, p75: 30}).

Separately: when you say "25, 50, 75 percentiles", do you mean all at once? That would be an overspecification; you only need two points. Also, wo... (read more)
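
To illustrate the math behind percentile-style specification, here's a rough sketch in Python (not Squiggle, and not how Squiggle actually implements it) of recovering a normal distribution's parameters from two percentile points. The function name is mine; the only fact used is that for a normal, x_p = mean + stdev · z_p:

```python
from statistics import NormalDist

def normal_from_percentiles(p_lo, x_lo, p_hi, x_hi):
    """Recover (mean, stdev) of a normal distribution from two percentiles.

    For a normal, x_p = mean + stdev * z_p, where z_p is the standard
    normal quantile at probability p. Two percentile points therefore
    give two linear equations in (mean, stdev).
    """
    z_lo = NormalDist().inv_cdf(p_lo)
    z_hi = NormalDist().inv_cdf(p_hi)
    stdev = (x_hi - x_lo) / (z_hi - z_lo)
    mean = x_lo - stdev * z_lo
    return mean, stdev

# Rough equivalent of a hypothetical normal({p25: 10, p75: 30}):
mean, stdev = normal_from_percentiles(0.25, 10, 0.75, 30)
print(round(mean, 2), round(stdev, 2))  # 20.0 14.83
```

This also shows why three percentiles would overspecify a normal: two points already pin down both free parameters.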

1 · Cedar · 10mo
25,50,75: I'm thinking, just like how it can infer whether it's normal or lognormal, we could use one of the bell-curve-shaped distributions that gives a sort of closest approximation. More generally, it'd be awesome if there were a way to get the max-entropy distribution given a bunch of statistics like quantiles, or n samples with a min and max.

Mostly. The core math bits of Guesstimate were a fairly thin layer on Math.js. Squiggle has replaced much of the Math.js reliance with custom code (a custom interpreter + parser, plus extra distribution functionality).

If things go well, I think it would make sense to later bring Squiggle in as the main language for Guesstimate models. This would be a breaking change, and quite a bit of work, but would make Guesstimate much more powerful. 

Really nice to see this. I broadly agree. I've been concerned with boards for a while.

I think that "mediocre boards" are one of the greatest weaknesses of EA right now. We have tons of small organizations, and I suspect that most of these have mediocre or fairly ineffective boards. This is one of the main reasons I don't like the pattern of us making lots of tiny orgs: we have to set up yet another board for each one, and good board members are in short supply.

I'd like to see more thinking here. Maybe we could really come up with alternative struc... (read more)

For what it's worth, I think I prefer the phrase,
"Failing with style"

Minor point:

I suggest people experiment with holiday ideas and report back before we announce anything "official". Experimentation seems really valuable on this topic; that seems like the first step.

In theory we could have a list of holiday ideas, and people randomly choose a few of them, try them out, then report back.

5 · hamnox · 1y
This... is that experiment?

The more sophisticated system is Squiggle. It's basically a prototype. I haven't updated it since the posts I made about it last year.
https://www.lesswrong.com/posts/i5BWqSzuLbpTSoTc4/squiggle-an-overview 

Update: 
I think some of the graphs could be better represented with upfront fixed costs.

When you buy a book, you pay for it via your time to read it, but you also have the fixed initial fee of the book.

This fee isn't that big of a deal for most books that you have a >20% chance of reading, but it definitely is for academic articles or similar.
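
A minimal sketch of that amortization logic, with made-up numbers (prices, hours, and the hourly rate are all hypothetical illustrations, not figures from the post):

```python
def expected_cost_per_read(price, p_read, hours_to_read, hourly_value):
    """Expected total cost of one completed read.

    The fixed price is paid whether or not you end up reading, so it
    is amortized over the probability of reading; the time cost is
    only paid when you actually read.
    """
    return price / p_read + hours_to_read * hourly_value

# A $20 book with a 20% chance of being read, ~8 hours at $30/hr:
book = expected_cost_per_read(20, 0.20, 8, 30)     # 20/0.20 + 8*30, about $340
# A $40 paywalled article with a 5% chance of being read, ~1 hour:
article = expected_cost_per_read(40, 0.05, 1, 30)  # 40/0.05 + 1*30, about $830
print(book, article)
```

Note how the fixed fee dominates for the rarely-read article but is a minor term for the likely-read book, which is the point above.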

(Also want to say I've been reading them all and am very thankful)

I enjoyed writing this post, but think it was one of my lesser posts. It's pretty ranty and doesn't bring much real factual evidence. I think people liked it because it was very straightforward, but I personally think it was a bit overrated (compared to other posts of mine, and many posts of others).

I think it fills a niche (quick takes have their place), and some of the discussion was good. 

Good point! I feel like I have to squint a bit to see it, but that's how exponentials sometimes look early on. 

2 · jasoncrawford · 2y
Yeah, I had to pull the data into a spreadsheet and look at the annual absolute and percent increase

To be clear, I care about clean energy. However, if energy production can be done without net-costly negative externalities, then it seems quite great. 

I found Matthew Yglesias's take, and Jason's writings, interesting.

https://www.slowboring.com/p/energy-abundance

All that said, if energy on the net leads to AGI doom, that could be enough to offset any gain, but my guess is that clean energy growth is still a net positive. 

but I think this is actually a decline in coal usage.

Ah, my bad, thanks!

They estimate ~35% increase over the next 30 years

That's pretty interesting. I'm somewhat sorry to see it's linear (I would have hoped solar/battery tech would improve more, leading to much faster scaling, 10-30 years out), but it's at least better than some alternatives.

I found this last chart really interesting, so did some hunting. It looks like electricity generation in the US grew linearly until around ~2000. In the last 10 years, though, there's been a very large decline in "petroleum and other", along with a strong increase in natural gas, and a smaller, but significant, increase in renewables.

I'd naively guess things will continue to be flat for a while as petroleum use decreases further, but at some point I'd expect energy use to increase again.

That said, I'd of course like for it to increase much, much faster (more like... (read more)

6 · Bucky · 2y
The colours are a bit confusing, but I think this is actually a decline in coal usage. The EIA have a projection here: https://www.eia.gov/outlooks/aeo/pdf/04%20AEO2021%20Electricity.pdf. They estimate a ~35% increase over the next 30 years, roughly linearly, with renewables doubling over that period and natural gas staying more-or-less the same.
7 · Dave Orr · 2y
It's interesting that you (and OP) think that growing electricity use per capita is good. I think it's a reasonable proxy for other kinds of growth in many situations, but what we care about is those other kinds of growth -- and given that there are negative externalities associated with electricity (pollution and climate change) and so there's been a concerted effort to be more efficient, using it as a proxy for growth is a lot more suspect over the past couple of decades. I would probably want to measure something we care about directly rather than electricity use per capita, at least until something like fusion comes along.
2 · jasoncrawford · 2y
If you look at the second-to-last chart, energy production was growing exponentially 1950–73, then about linearly 1973–2005, and then not growing at all after that.

I liked this post a lot, though of course, I didn't agree with absolutely everything. 

These seemed deeply terrible. If you think the best use of funds, in a world in which we already have billions available, is to go trying to convince others to give away their money in the future, and then hoping it can be steered to the right places, I almost don’t know where to start. My expectation is that these people are seeking money and power,

I'm hesitant about this for a few reasons.

  1. Sure, we have a few billion available, and we're having trouble donating that
... (read more)

I did explicitly note that there are things one can do with higher OOM funding that EA can't do now, even if they wouldn't be as efficient per dollar spent, and that this is the counterargument to there being TMM. 

In general, I notice I'm more optimistic that someone capable of being a founder (or even early employee) will do more good directly by going out and creating new companies that provide value like PayPal or Tesla, rather than entering a charitable ecosystem (EA or otherwise), whether or not their main motivation is the money. There are of co... (read more)

7 · Zvi · 2y
I don't have any posts outlining techniques. I don't use any unusual software; I currently compose posts in a combination of the Substack editor and Google Docs. I don't believe speech-to-text (or someone taking dictation) would do anything but slow me down; even if it were 100% accurate, typing speed isn't a limiting factor. Mostly I've had a ton of (at least somewhat deliberate) practice writing a ton of stuff, including Magic articles twice a week for many years. To me, the key bottleneck is the 'figure out what to say' step; then the writing mostly flows quickly; then, if something is worth the effort, editing is a huge time sink with or without help. But every piece of writing advice has the same thing at the top: Write. A lot.

Thanks! 
Just checking; I think you might have sent the wrong link though?

9 · Richard Korzekwa · 1y
Yes, sorry, I somehow missed your reply until now. I probably meant this: https://store.yujiintl.com/collections/high-cri-white-led-strips/products/yujileds-high-cri-95-dynamic-tunable-white-multirow-led-flexible-strip (That link I did share is pretty interesting, BTW. It describes some stuff during the Wild West days of medical research, including the use of a frighteningly primitive laser to kill a tumor in a kid's eye.)

Quick question: 
When you say, "Yuji adjustable-color-temperature LED strips/panels"

Do you mean these guys?
https://store.yujiintl.com/products/yujileds-high-cri-95-dim-to-warm-led-flexible-strip-1800k-to-3000k-168-leds-m-pack-5m-reel

It looks kind of intimidating to set up, and is pricey, but maybe it's worth it.

1 · Richard Korzekwa · 2y
Yeah, those or these: https://vdoc.pub/documents/lasers-in-opthalmology-basics-diagnostics-and-surgical-aspects-a-review-3ha5mu7ureog Or the long 2700K/6500K ribbons. They're not as bad to set up as I'd feared, though they are a bit of a hassle. I'm experimenting with them now, and I will write about it if I come up with a good way to build a light fixture with them.

Just want to say; I'm really excited to see this.

I might suggest starting with an "other" list that can be pretty long. With Slack, different subcommunities focus heavily on different emojis for different functional things. Users sometimes figure out neat innovations and those proliferate. So if it's all designed by the LW team, you might be missing out.

That said, I'd imagine 80% of the benefit is just having anything like this, so I'm happy to see that happen.

I just (loosely) coined "disagreeables" and "assessors" literally two days ago.

I suggest coming up with any name you think is a good fit.

I wouldn't read too much into my choice of word there.

It's also important to point out that I was trying to have a model that assumed interestingness. The "disagreeables" I mention are the good ones, not the bad ones. The ones worth paying attention to are, I think, pretty decent here; really, that's the one thing that justifies paying attention to them.

A few quick thoughts:

1) This seems great, and I'm impressed by the agency and speed.

2) From reading the comments, it seems like several people were actively afraid of how Leverage could retaliate. I imagine similar for accusations/whistleblowing for other organizations. I think this is both very, very bad, and unnecessary; as a whole, the community is much more powerful than individual groups, so it seems poorly managed when the community is scared of a specific group. Resources should be spent to cancel this out.

In light of this, if more money were availa... (read more)

The latter option is more of what I was going for.

I’d agree that the armor/epistemics people often aren’t great at coming up with new truths in complicated areas. I’d also agree that they are extremely unbiased and resistant both to bad-faith arguments and to good-faith but systematically misleading arguments (these are many of the demons the armor protects against, if that wasn’t clear).

When I said that they were soft-spoken and poor at arguing, I’m assuming that they have great calibration and are likely arguing against people who are very overconfident,... (read more)

When I hear the words "intelligence" and "wisdom", I think of things that are necessarily properties of individual humans, not groups of humans. Yet some of the specifics you list seem to be clearly about groups.

I tried to make it clear that I was referring to groups with the phrase, "of humanity", as in, "as a whole", but I could see how that could be confusing. 

the wisdom and intelligence[1] of humanity

 

For those interested in increasing humanity’s long-term wisdom and intelligence[1]


I also suspect that work on optimizing group decision making

... (read more)

That's an interesting perspective. It does already assume some prioritization though. Such experimentation can only really be done in a very few of the intervention areas. 

I like the idea, but am not convinced of the benefit of this path forward, compared to other approaches. We already have had a lot of experiments in this area, many of which cost a lot more than $15,000; marginal exciting ones aren't obvious to me.

But I'd be up for more research to decide if things like that are the best way forward :)

2 · Matt Goldenberg · 2y
And I'd be up for more experiments to see if this is a better way forward.

The first few chapters of "The Existential Pleasures of Engineering" detail some optimism, then pessimism, of technocracy in the US at least. 

I think the basic story there was that after WW2, in the US, people were still pretty excited about tech. But in the 70s (I think), with environmental issues, military innovations, and general malaise, people became disheartened.

https://www.amazon.com/Existential-Pleasures-Engineering-Thomas-Dunne-ebook/dp/B00CBFXLWQ

I'm sure I'm missing details, but I found the argument interesting. It is true that in the US at... (read more)

Thanks for the opinion, and I find the take interesting.

I'm not a fan of the line, "How about a policy that if you use illegal drugs you are presumptively considered not yet good enough to be in the community?", in large part because of the phrase "not yet good enough". This is a really thorny topic that seems to have several assumptions baked into it that I'm uncomfortable with.

I also think that many here like at least some drugs that are "technically illegal", in part, because the FDA/federal rules move slowly. Different issue though.

I like points 2 and 3, I imagine if you had a post just with those two it would have gotten way more upvotes.

5 · James_Miller · 2y
Thanks for the positive comment on (2) and (3); I probably should have written them in a separate comment from (1). While I'm far from an expert on drugs or the California rationalist community, the comments on this post seem to scream "huge drug problem." I hope leaders in the community at least consider evaluating the drug situation in the community. I agree with you about the FDA.

There's an "EA Mental Health Navigator" now to help people connect to the right care.
https://eamentalhealth.wixsite.com/navigator

I don't know how good it is yet. I just emailed them last week, and we set up an appointment for this upcoming Wednesday. I might report back later, as things progress.

I really like things like this. I think it's possible we could do a "decent enough" job, though it's impossible to have a solution without risk.

One thing I've been thinking about is a browser extension. People would keep a list of things, like, "User XYZ is Greg Hitchenson", and then whenever the extension sees XYZ, it adds an annotation.

Lots of people are semi-anonymous already. They have pseudonyms that most people don't know, but "those in the know" do. This sort of works, but isn't formalized, and can be a pain. (Lots of asking around: "Who is X?")

4 · Gunnar_Zarncke · 2y
I think Yair Halberstadt [https://www.lesswrong.com/users/yair-halberstadt] is aware that it is not safe - but that doesn't seem to be his goal anyway. I like this too. I did something a bit like this a while ago in the LW wiki. Pages there aren't found via the author search either - but can be traced back by checking who edited the page.

That's good to know. 

I imagine grantmakers would be skeptical of people who say "yes" to an optional form. Like, they say they're okay with the information being public, but when it actually goes out, some of them will complain about it, creating a lot of extra work.

However, some of our community seems unusually reasonable, so perhaps there's some way to make it viable.

I agree that it would have been really nice for grantmakers to communicate with the EA Hotel more, and other orgs more, about their issues. This is often a really challenging conversation to have ("we think your org isn't that great, for these reasons"), and we currently have very few grantmaker hours for the scope of the work, so I think grantmakers don't have much time now to spend on this. However, there does seem to be a real gap here to me. I represent a small org and have been around other small orgs, and the lack of communication with small grantmak... (read more)

Fewer people would apply and many would complain a whole lot when it happens. The LTFF already gets flack for writing somewhat-candid information on the groups they do fund. 

I think it would be very interesting to have a fund with that policy. Yes, that might result in fewer people applying, but people applying anyway might itself be a signal that their project is worth funding.

"If you apply to this grant, and get turned down, we'll write about why we don't like it publicly, for everyone to see."

I feel confident that Greg of EA Hotel would very much prefer this in the case of EA Hotel. It can be optional, maybe.

Thanks for the review here. I found this book highly interesting and relevant. I've been surprised at how much it seems to have been basically ignored. 

6 · Kaj_Sotala · 2y
I did a few posts on it back in the day [https://www.lesswrong.com/s/uPjHAiXAKrMzvTFyt], but yeah it hasn't been noticed much.

I was just thinking of the far right-wing and left-wing in the US; radical news organizations and communities. Q-anon, some of the radical environmentalists, conspiracy groups of all types. Many intense religious communities. 

I'm not making a normative claim about the value of being "moral" and/or "intense", just saying that I'd expect moral/intense groups to have some of the same characteristics and challenges.

Agreed, though I think that the existence of many groups makes it a more obvious problem, and a more complicated problem.

To put it bluntly, the EA/rationalist community kinda selects for people who are easy to abuse in some ways. Willing to donate, willing to work to improve the world, willing to consider weird ideas seriously -- from the perspective of a potential abuser, this is ripe fruit ready to be taken; it is even obvious what sales pitch you should use on them.

For what it’s worth, I think this is true for basically all intense and moral communities out there. The EA/rationalist groups generally seem better than many religious and intense political groups in these areas, to me. However, even “better” is probably not at all good enough.

-2 · Said Achmiz · 2y
What are “intense” and/or “moral” communities? And why is it (or is it?) a good thing for a community to be “moral” and/or “intense”?

I very much agree about the worry. My original comment was to make the easiest case quickly, but I think more extensive cases apply too. For example, I’m sure there have been substantial problems even in the other notable orgs, and in expectation we should expect there to continue to be. (I’m not saying this based on particular evidence about these orgs; more that the base rate for similar projects seems bad, and these orgs don’t strike me as absolutely above these issues.)

One solution (of a few) that I’m in favor of is to just have more public knowledge... (read more)

[1] I don’t particularly blame them, consider the alternative.

I think the alternative is actually much better than silence!

For example I think the EA Hotel is great and that many "in the know" think it is not so great. I think that the little those in the know have surfaced about their beliefs has been very valuable information to the EA Hotel and to the community. I wish that more would be surfaced. 

Simply put, if you are actually trying to make a good org, being silently blackballed by those "in the know" is actually not so fun. Of course there are ... (read more)

Sorry, edited. I meant that it was a mistake for me to keep away before, not now.

(That said, this post is still quite safe. It's not like I have scandalous information, more that, technically I (or others) could do more investigation to figure out things better.)

6 · Evan_Gaensbauer · 2y
Yeah, at this point, everyone coming together to sort this out, as a way of building a virtuous spiral where speaking up feels safe enough that it doesn't even need to be courageous, is the kind of thing I think your comment also represents, and what I was getting at.

As someone who's been close to these, some had a few related issues, but Leverage seemed much more extreme in many of these dimensions to me.

However, now there are like 50 small EA/rationalist groups out there, and I am legitimately worried about quality control.

4 · ChristianKl · 2y
It seems to me that quality control has always been an issue with some groups, no matter how many groups there were.

I generally worry about all kinds of potential bad actors associating themselves with EA/rationalists.

There seems to be a general pattern where new people come to an EA/LW/ACX/whatever meetup or seminar, trusting the community, and there they meet someone who abuses this trust and tries to extract free work / recruit them for their org / abuse them sexually, and the new person trusts them as representatives of the EA/rationalist community (they can easily pretend to be), while the actual representatives of EA/rationalist community probably don't even notic... (read more)

As someone part of the social communities, I can confirm that Leverage was definitely a topic of discussion for a long time around Rationalists and Effective Altruists. That said, often the discussion went something like, "What's up with Leverage? They seem so confident, and take in a bunch of employees, but we have very little visibility." I think I experienced basically the same exact conversation about them around 10 times, along these lines.

As people from Leverage have said, several Rationalists/EAs were very hostile around the topic of Leverage, parti... (read more)

3 · Evan_Gaensbauer · 2y
For what it's worth, my opinion is that you sharing your perspective is the opposite of making a mistake.

+1 for the detail. Right now there's very little like this explained publicly (or accessible in other ways to people like myself). I found this really helpful.

I agree that the public discussion on the topic has been quite poor.

Some updates:

  1. I'm now using it a bit here and there, but I changed rooms and the connection isn't as good, so it's much more painful to use.
  2. There's a new VR headset being made specifically for linux, which looks very neat. https://simulavr.com/
  3. Here's a much more in-depth blog by someone who's been doing this for many hours: https://blog.immersed.team/working-from-orbit-39bf95a6d385