ChatGPT app. Photo credit: Daniel Foster via Flickr

Online Exclusive 05/31/2023 Essay

Can ChatGPT Solve Pressing Problems in Global Ethics?

ChatGPT, a chatbot built on a large language model, has swept headlines and debates in recent months. Most of the questions raised, however, concern the use of ChatGPT in the classroom and whether it can replace the human capacity to write, particularly analytical pieces. Yet we should be concerned not only with how ChatGPT is used to write but with what ChatGPT writes. Although ChatGPT repeatedly claims in its responses, “As an AI language model, I cannot offer my personal opinion,” it still often delves into murky issues, including ethical dilemmas.

This brief piece is a first step toward exploring the ethical limitations of ChatGPT and the consequences of those limitations. A few conversations with ChatGPT about some of today’s pressing international ethical issues raised further questions about how far its capacity to assess such dilemmas extends. Most of all, it became evident that ChatGPT does take ethical stances, and that these stances color the knowledge and information it shares with its users. As far as ethics is concerned, then, ChatGPT not only fails to provide unbiased suggestions (just as no human can) but also does not readily reveal its influences. The writing ChatGPT produces is saturated with biases accumulated over years of knowledge production that scholars are still trying to critically engage with and revise. Treating ChatGPT as an objective source of knowledge is therefore not only wrong but reproduces inequalities in knowledge. Rather than dismiss ChatGPT and other AI tools, however, it is best to approach them with a critical eye and to understand their benefits and limitations.

“Are Nuclear Threats Ethical?”

One of the biggest global conflicts to erupt in the past few years has been Russia’s full-scale invasion of Ukraine. When I posed the question “Was it right for Russia to invade Ukraine?” to ChatGPT, it provided its standard response, claiming that it cannot pass moral judgment but noting that many countries condemned Russia’s actions and its territorial violations of Ukraine.1 When asked how many countries condemned Russia, ChatGPT stated that “as of September 2021,” over one hundred countries had condemned Russia’s actions. When questioned further, ChatGPT clarified that it knows nothing about the current conflict between Russia and Ukraine because its training data ends in September 2021. ChatGPT provides this disclaimer on its splash page, yet users may not catch it, and in its responses ChatGPT rarely situates the conflict in time. In this case, ChatGPT was answering the question of whether it was right for Russia to invade Crimea in 2014. This important contextual information, however, is given only to users who begin to ask ChatGPT more critical questions. The example illustrates a key ethical issue with using ChatGPT: it is not upfront about the limits of its knowledge or about how it selects the information it presents.

Relatedly, I asked a broader question about whether it is ethical for a country to make a subtle threat of using nuclear weapons. Here ChatGPT stated that the question is “complex and context-specific” but that such threats are overwhelmingly unethical, mostly on the basis of international law, and can be seen as coercive and aggressive. I then asked ChatGPT whether Russia’s subtle nuclear threats against Ukraine in 2022 were ethical. Unlike before, when it stated that it did not know about the war, here it answered that these threats had to be judged by international law and ethical standards, but that “if there were legitimate security concerns or a credible threat to Russia's national security, any response, including the threat of using nuclear weapons, would need to be evaluated in light of the legal and ethical principles that govern the use of nuclear weapons.” When asked in a separate chat the direct question “Is it ethical for Russia to threaten Ukraine with nuclear weapons?” ChatGPT gave an unusually direct answer, “No,” and continued, “it is not ethical for any country to threaten another with nuclear weapons.” Once again, without further prodding, a user would not know that ChatGPT’s knowledge base did not include Russia’s full-scale invasion of Ukraine; its response gives no indication of this.

Furthermore, when I asked ChatGPT in a separate chat the exact same question about the United States and another country (“Is it ethical for the U.S. to make nuclear threats against another country?”), rather than giving the same flat-out response that it is unethical for any country to threaten another with nuclear weapons, ChatGPT provided a more nuanced answer. It first gave its usual disclaimer that, as an AI language model, it could not offer a personal opinion, but it then opted to present certain debates about nuclear ethics:

The United States, as well as other countries, have publicly stated their commitment to reducing the global nuclear threat and promoting disarmament. However, some argue that the mere threat of using nuclear weapons can deter potential adversaries from engaging in hostile actions, and therefore can be considered a justifiable means of maintaining national security.

In response to the same question, with only the actor changed, ChatGPT gave two different answers to the ethical dilemma of using, or threatening to use, nuclear weapons. This demonstrates that ChatGPT shows a bias toward certain countries, in this case the United States. Moreover, when ChatGPT was asked these same questions in Russian, it responded differently again: this time it delivered a similar answer to both queries, stating that it is not only unethical for a country (whether the United States or Russia) to threaten another country with nuclear weapons but also a violation of international law and norms.
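Readers who want to replicate this informal experiment can do so systematically. Below is a minimal sketch using the openai Python library as it existed in mid-2023; the model choice, temperature setting, and prompt wording are my assumptions, not a record of the original exchanges.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # assumes an OpenAI API account

ACTORS = ["Russia", "the U.S."]
QUESTION = "Is it ethical for {actor} to make nuclear threats against another country?"

for actor in ACTORS:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # model choice is an assumption
        messages=[{"role": "user", "content": QUESTION.format(actor=actor)}],
        temperature=0,  # suppress sampling noise so differences reflect the prompt
    )
    print(actor, "->", response["choices"][0]["message"]["content"][:200])
```

Setting the temperature to zero reduces sampling randomness, making it easier to attribute any difference in the answers to the change of actor rather than to chance.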

The example of the ethics of nuclear use and deterrence, including making nuclear threats, illustrates several issues with ChatGPT as a knowledge base. First, while ChatGPT serves as a source of information, it is a selective one. It was trained on a curated corpus of text, assembled and filtered by programs written by people, and it regurgitates patterns from that corpus. Because of this, ChatGPT inherits biases not only from the corpus’s curators but from the knowledge-producers represented within it. ChatGPT does attempt to diversify its responses in several ways, and it is quite upfront about this when questioned: it uses sampling techniques to introduce randomness into its answers, so that it does not always return the answer most often given by its users or most frequent in its training data.
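To make that mechanism concrete, here is a minimal sketch of temperature sampling, one common technique of the kind ChatGPT describes. The function and the toy scores are my own illustration, not OpenAI’s code.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Temperature sampling over a model's raw scores (logits).

    Low temperature concentrates probability on the top answer;
    higher temperature lets less common answers through more often.
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
    scaled -= scaled.max()  # subtract the max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

# Toy run: three candidate continuations with made-up scores.
counts = [0, 0, 0]
for _ in range(1000):
    counts[sample_next_token([3.0, 1.5, 0.5], temperature=1.0)] += 1
print(counts)  # the top-scored answer dominates, but not exclusively
```

Because of this randomness, the same question can yield different answers across chats, which is one reason the responses quoted above varied from session to session.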

“Is the UN Security Council Impartial?”

Diversity of opinion, however, is not just about amplifying the popular voices but also about ensuring the inclusion of less commonly heard ones, which can offer important and valid critiques. Indeed, many approaches to ethics reach conclusions at odds with common-sense judgments and may be more radical and demanding than most people are willing to accept. Yet when I asked ChatGPT how we can know whether a nuclear threat from Russia is a mechanism of deterrence or an unethical nuclear threat, ChatGPT suggested that the UN could impartially evaluate such threats. When I then explicitly asked about the representativeness of the UN and the Security Council, ChatGPT responded that there is a problem of representation in the Council that could call its impartiality into question. Had I not pressed ChatGPT with external knowledge of this critique, however, it would not have raised the concern, and it would have continued to argue that the UN and the Security Council were impartial in evaluating conflicts in which UNSC members themselves were involved. When asked why it had not offered the critique about representativeness, it stated that it had focused on one part of the question, but conceded that, indeed, “the UN's ability to act in a truly impartial manner may be limited by the political interests and power dynamics of member states.” While this may not be the most commonly sought information about the UN, it is important for the makers of ChatGPT to work toward correcting these issues and for users to engage with ChatGPT critically.

How Can We Use ChatGPT?

While users should approach ChatGPT critically, it can provide a useful overview of certain ethical dilemmas. It has, for instance, a solid understanding of what “the trolley problem” is and who Aristotle was. Unlike Wikipedia or academic articles, however, ChatGPT does not actively cite its sources, and it took several questions to prompt it to explain how it reached an argument and which authors it was referencing. Often, ChatGPT would only do so if the question was phrased as “which scholars would agree with you on this position?” Otherwise, it would typically explain that its answers were a synthesis of information and that it could not cite its sources. While the problems outlined in this piece are numerous, they have only scratched the surface of the complications that come from using AI that provides quick answers to difficult questions.

ChatGPT is a fascinating tool that, above all, can help sharpen our critical thinking skills when it comes to ethical dilemmas. Instead of taking its answers at face value, provoking the tool to engage with a topic provides insights into what is and is not in its training data and why that might be. The platform can also act as a good sounding board in a pinch for a new idea or critique. When it comes to solving ethical dilemmas, however, ChatGPT does not seem up to the job. If ChatGPT fails to produce novel results or diverse responses to today’s pressing issues, that is because we have not resolved these dilemmas either. The hard work of ethics is a job we will have to do ourselves.

— Jacqueline Dufalla

Jacqueline is currently a PhD student at Central European University studying international relations with a focus on Russian foreign policy.


NOTES

1. When asked how it understood “right,” ChatGPT confirmed that it interpreted the term in moral and ethical rather than legal terms.