Comments

To be honest, I am pretty confused by your argument, and I tried to express one of those confusions in my reply. I think you probably understood what I wanted to express but chose to ignore the content in favor of patronizing me. Since I don't want to continue down this road, here is a more elaborate comment that explains where I am coming from:

First, you again make a sweeping claim that you do not really justify: "Many (perhaps most) famous 'highly recognized' philosophical arguments are nonsensical". What are your grounds for this claim? Do you mean that it is self-evident that much (perhaps most) of philosophy is bullshit? Or do you have a more nuanced understanding of "nonsensical"? Are you referring to Wittgenstein here?

Then you position this unjustified claim as a general prior to argue that your own position in a particular situation is much more likely to be valid than the alternative. Doesn't that seem a little bit like cherry-picking to you?

My critique of the post and your comments boils down to the fact that both are very quick to dismiss other positions as nonsensical and, by doing so, claim their own perspective to be superior. This is problematic because positions that seem nonsensical to you may make perfect sense from another angle. While this problem cannot be solved in principle, in practice it calls for investing at least some effort and resources into recognizing potentially interesting or valid perspectives and, in particular, staying open-minded to the possibility that one may not have considered all relevant aspects and needs to reorient accordingly. I will list a couple of resources that you can check out if you are interested in a more elaborate argument on this matter.

* Stegmaier, W. (2019). What Is Orientation? A Philosophical Investigation. De Gruyter.
* Ulrich, W. (2000). Reflective Practice in the Civil Society: The contribution of critically systemic thinking. Reflective Practice, 1(2), 247–268. https://doi.org/10.1080/713693151
* Ulrich, W., & Reynolds, M. (2010). Critical Systems Heuristics. In M. Reynolds & S. Holwell (Eds.), Systems Approaches to Managing Change: A Practical Guide (pp. 243–292). Springer London. https://doi.org/10.1007/978-1-84882-809-4_6

> Since a lot of arguments on internet forums are nonsensical, the fact that your comment doesn't make sense to me means that it is far more likely that it doesn't make sense at all than it is that I am missing something.

That’s pretty ironic.

alex.herwix · 2mo

I downvoted this post because the whole setup strawmans Rawls's work. To claim that a highly recognized philosophical treatment of justice, one that has inspired countless discussions and professional philosophers, doesn't "make any sense" is an extraordinary claim that should ideally be backed by a detailed argument and evidence. However, to me the post seems hand-wavy, more like armchair philosophizing than detailed engagement. Don't get me wrong, feel free to do that, but please make clear that this is what you are doing.

Regarding your claim that the veil of ignorance doesn't map to decision-making in reality: that's obvious, but it's also not the point of the thought experiment. It's about how to approach the ideal of justice, not how to ultimately implement it in our non-ideal world. One can debate the merits of talking and thinking about ideals, but calling it "senseless" without some deeper engagement seems pretty harsh.

Hey Kenneth, 

Thanks for sharing your thoughts. I don't have much to say about the specifics of your post because I find it difficult to understand how exactly you want an AI (what kind of AI?) to internalize ethical reflection and what benefit the concept of the ideal speech situation (ISS) offers here.

What I do know is that the ISS has often been characterized as an "impractical" concept that cannot be put into practice because the ideal it seeks simply cannot be realized (e.g., Ulrich, 1987, 2003). This may be something to consider or dive deeper into, to see whether it affects your proposal. I personally like the work of Werner Ulrich on this matter, which heavily inspired my PhD thesis on a related topic. I have put one of the papers from the thesis in the reference section. Feel free to reach out via PM if you want to discuss this further.

References

Herwix, A. (2023). Threading the Needle in the Digital Age: Four Paradigmatic Challenges for Responsible Design Science Research. SocArXiv. https://doi.org/10.31235/osf.io/xd423

Ulrich, W. (1987). Critical heuristics of social systems design. European Journal of Operational Research, 31(3), 276–283.

Ulrich, W. (1994). Can We Secure Future-Responsive Management Through Systems Thinking and Design? Interfaces, 24(4), 26–37. https://doi.org/10.1287/inte.24.4.26

Ulrich, W. (2003). Beyond methodology choice: Critical systems thinking as critically systemic discourse. Journal of the Operational Research Society, 54(4), 325–342. https://doi.org/10.1057/palgrave.jors.2601518

Ulrich, W. (2007). Philosophy for professionals: Towards critical pragmatism. Journal of the Operational Research Society, 58(8), 1109–1113. https://doi.org/10.1057/palgrave.jors.2602336

I see your point that results differ depending on the order in which people see the post, but that's also true the other way around. Given the assumption that fewer people are likely to view a post that has negative karma, people who might actually like the post and upvote it never do so because of the preexisting negative votes.

In fact, I think that’s the whole point of this scheme, isn’t it?

So either way you never capture an "accurate" picture, because the signal itself is distorting the outcome. The key question is then which outcome one prefers; neither is objectively "right" or in all respects "better".
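To make this path-dependence concrete, here is a minimal toy simulation. It is my own illustrative sketch, not a description of how any real karma system works; the viewer count, skip probability, and voting probabilities are all made-up assumptions. It shows that an identical audience with identical preferences can produce very different final karma depending only on the first few votes, because negative karma suppresses later engagement:

```python
import random

def final_karma(first_votes, n_viewers=500, like_prob=0.55,
                skip_if_negative=0.9, rng=None):
    # One run of a toy karma model. `first_votes` are the seed votes
    # (+1/-1) cast before the visibility effect kicks in. Each later
    # viewer skips a negative-karma post with probability
    # `skip_if_negative`; if they engage, they upvote with probability
    # `like_prob` and downvote otherwise. All parameters are assumptions.
    rng = rng or random.Random()
    karma = sum(first_votes)
    for _ in range(n_viewers):
        if karma < 0 and rng.random() < skip_if_negative:
            continue  # negative signal: this viewer never engages
        karma += 1 if rng.random() < like_prob else -1
    return karma

def mean_karma(first_votes, runs=1000, seed=0):
    # Average over many runs to smooth out random-walk noise.
    rng = random.Random(seed)
    return sum(final_karma(first_votes, rng=rng) for _ in range(runs)) / runs

# Identical audience and preferences; only the first three votes differ.
print(mean_karma([+1, +1, +1]))  # early upvotes: engagement compounds
print(mean_karma([-1, -1, -1]))  # early downvotes: engagement is suppressed
```

Under these assumed parameters, the early-downvoted run ends with noticeably lower mean karma even though every subsequent viewer is drawn from the same distribution, which is exactly the distortion described above.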

I personally think that downvoting into negative karma is an unproductive practice, in particular with new posts, because it stifles debate about potentially interesting topics. If you are bothered enough to downvote, there is often something controversial about the post that is worth discussing.

Take this post as an example. When I found it a couple of hours after posting, it had already been downvoted into negative karma, though there is no obvious reason why this should be so. It's well written and makes a clear point that's worth discussing, as exemplified by our engagement. Because of its negative karma, however, fewer people are likely to weigh in on the debate, because the signal tells them not to bother engaging.

In general, my suggestion would be to only downvote into negative karma if you can be bothered to explain and defend your downvote in a comment and are willing to take it back if the author of the post gives a reasonable reply.

But as I said, this is just one way of looking at it. I value discourse and critical debate as essential pieces of sense- and meaning-making, and I believe I have made a reasonable argument for how current practice stifles them.

Thanks to the author of the post for his thoughtful invitation to critical reflection!

I think this is a very contextual question that really depends on the design of the mechanisms involved. For example, if we are talking about high-risk use cases, the military could be involved as part of the regulatory regime. It's really a question of how you set this up; the possible design space is huge if we look at it with an open mind. This is why I am advocating for engaging more deeply with the options we have here.

I just wanted to highlight that there also seems to be an opportunity to combine the best traits of the open and closed source licensing models in the form of a new regulatory regime that one could call "regulated source".

I tried to start a discussion about this possibility, but so far the uptake has been limited. I think that's a shame; there seems to be so much that could be gained by "outside the box" thinking on this issue, since both alternatives seem pretty bleak.

That seems to downplay the fact that we will never be able to internalize all externalities, simply because we cannot reliably anticipate all of them. So you are always playing catch-up to some degree.

Also, simply declaring an issue "generally" resolved when the current state of the world demonstrates that it is actually not resolved seems premature in my book. Breaking out of established paradigms is generally the best way to make rapid progress on vexing issues. Why would you want to close the door on this?

Answer by alex.herwix · Sep 09, 2023

I ask myself the same question. I recently posted an idea about AI regulation to address such issues and start a conversation but there was almost no reaction and mostly just pushback. See: https://www.lesswrong.com/posts/8xN5KYB9xAgSSi494/against-the-open-source-closed-source-dichotomy-regulated

My take is that many people here are very worried about AI doom and think that for-profit work is necessary to get the best minds working on the issue. Governments in general also seem to be perceived as incompetent, so the fear is that more regulation will screw things up rather than make them better.

Needless to say, I think this is a false dichotomy, and we should consider how we (as a society involving diverse actors and positions, in a transparent process) can develop regulation that actually creates a playing field where the best minds can responsibly work on societal and AI alignment. That is difficult, of course, but it is the better option compared to letting things develop as they are. The last couple of years have demonstrated clearly enough that this will not work out. Let's not just bury our heads in the sand and hope for the best.

Thanks for engaging with the post and acknowledging that regulation may be a possibility we should consider and not reject out of hand. 

> I don't share your optimistic view that transnational agencies such as the IAEA will be all that effective. The history of the nuclear arms race is that those countries that could develop weapons did, leading to extremes such as the Tsar Bomba, a 50-megaton monster that was more of a dick-waving demonstration than a real weapon. The only thing that ended the unstable MAD doctrine was the internal collapse of the Soviet Union. So, while countries have agreed to allow limited inspection of their nuclear facilities and stockpiles, it's nothing like the level of complete sharing that you envision in your description.

My position is actually not that optimistic. I don't believe that such transnational agencies are very likely to work, or that they are a safe bet to ensure a good future. It is more that it seems to be in our best interest to really consider all the options we can put on the table, to try to learn from what has more or less worked in the past, and to also look for creative new approaches and solutions, because the alternative is dystopia or catastrophe.

A key difference between AI and nuclear weapons is that the AI labs are not as sovereign as nation states. If the US, UK, and EU were to impose strong regulation on their companies and "force them to cooperate" along the lines I outlined, this would seem (at least theoretically) possible and would already be a big win to me. For example, more resources could be allocated to alignment work relative to capabilities work. China seems much more concerned about regulation and control of companies anyway, so I see a chance that it would follow suit in approaching AI carefully.

> However, it seems likely that the major commercial players will fight tooth and nail to avoid that situation, and you'll have to figure out how to apply similar restrictions worldwide.

To be honest, it's overdue that we find the guts to face up to them and put them in their place. Of course that's easier said than done, but the first step is to not be intimidated before we have even tried. Similarly, the call for worldwide regulation often seems to me to be a case of "don't let the perfect be the enemy of the good". Of course, worldwide regulation would be desirable, but if we only got the US, UK, and EU, or even the US or EU alone, to make some moves here, we would be in a far better position. It's a bogeyman that companies will simply turn around and set up shop in the Bahamas to pursue AGI development: they would not be able to (a) secure the necessary compute to run development or (b) sell their products in the largest markets. We do have some leverage here.

> So, I think this is an excellent discussion to have, but I'm not convinced that the regulated source model you describe is workable.

Thanks for acknowledging the issue that I am pointing to here. I see the regulated source model mostly as a general outline of a class of potential solutions, some of which could be workable and others not. Getting to specifics that are workable is certainly the hard part. For me, the important step was to start discussing these ideas more openly, to build momentum for the people who are interested in taking them forward. If more of us started to openly acknowledge and advocate that there should be room for discussing stronger regulation, our position would already be somewhat improved.
