What can we do when AI damages human interactions?
We live in an era where the boundaries between human action and technological processes are becoming increasingly blurred. Artificial intelligence systems, once mere tools of automation, have evolved into powerful agents capable of generating content, making decisions, and, in some cases, causing harm. Deepfakes, algorithmic discrimination, AI-generated misinformation, and the misuse of autonomous decision-making tools are only a few examples of how these technologies can impact individuals and communities in deeply harmful ways. This raises a pressing question: can restorative justice, a practice rooted in human dialogue and accountability, be adapted to respond to the unique harms produced by autonomous systems?
To begin unpacking this question, it is important to recognize the nature of harm in artificial intelligence-related cases. Traditional restorative justice processes focus on interpersonal conflicts where identifiable victims and offenders can engage in dialogue to address harm and its consequences. With AI-related harms, however, the equation becomes more complex. Who is the offender? Is it the developer, the company, the user, or the algorithm itself? And how can victims find meaningful redress when the harm is caused by autonomous processes or synthetic content that lacks a clear human perpetrator?
For example, when an AI-generated deepfake is used to damage someone’s reputation or cause psychological harm, the individual victim may struggle to identify who is responsible. Is it the person who uploaded the deepfake? The platform that hosted it? Or the creator of the artificial intelligence tool that enabled its production? These complex chains of agency make conventional accountability mechanisms—and by extension, restorative processes—difficult to apply.
Moreover, the emotional and psychological dimensions of harm caused by artificial intelligence tools can mirror those of traditional offenses. Victims often report feelings of violation, helplessness, and humiliation, much like victims of interpersonal offenses. However, the impersonal and automated nature of the harm can compound these feelings by adding a layer of ambiguity and dehumanization.
In light of these challenges, the question is not whether restorative justice in its traditional form can address artificial intelligence-generated harms, but rather how it might be adapted to do so. Some scholars and practitioners are beginning to explore the possibility of expanding the restorative framework to include corporate actors, designers, and system developers. In this approach, dialogue processes could involve not just the victim and a direct offender, but also representatives from companies, regulators, or civil society, creating a space where harm is acknowledged and pathways for repair and prevention can be explored.
One key strength of restorative justice is its flexibility and adaptability to different contexts. In cases of AI-generated harm, restorative practices could focus on acknowledgment, validation, and the repair of dignity. Even in the absence of a direct individual offender, victims can be given a voice to share their experiences and have them recognized by responsible institutions or representatives of the community. This can be particularly powerful in addressing the invisibility and isolation that often accompany victimization by technological systems.
Additionally, restorative processes could serve as platforms for collective dialogue about the ethical use of artificial intelligence, giving voice to those impacted and fostering shared responsibility among various stakeholders. By bringing together victims, developers, users, and policymakers, restorative circles could open avenues for systemic change, including commitments to ethical standards, improved transparency, or modifications in platform policies and algorithms.
A particularly compelling example of this approach can be found in the emerging practices of restorative environmental justice, where the harm is often diffuse and systemic rather than individual and direct. Here, restorative processes have been adapted to engage communities, corporations, and governments in dialogue about environmental degradation, acknowledging harm, and co-creating solutions. Similarly, restorative practices in the AI field could evolve to address harms at both the personal and systemic levels.
Of course, such adaptations are not without challenges. Questions of scale, participation, and enforcement are significant when dealing with global platforms, anonymous users, and complex artificial intelligence systems. Moreover, there is the philosophical question of whether restorative justice, which is deeply rooted in human empathy and acknowledgment, can be meaningfully applied in cases where the harm is mediated by non-human agents. These debates are ongoing and highlight the need for cautious, reflective, and innovative approaches.
Despite these obstacles, integrating restorative justice principles into the conversation about artificial intelligence-generated harms offers a much-needed human dimension. While traditional regulatory, legal, and technical responses remain essential, they often overlook the emotional, relational, and community aspects of harm. Restorative practices can fill this gap by creating spaces where those affected can be heard, where harm is not only punished but also repaired, and where dialogue fosters understanding and collective accountability.
At Restorativ, we believe that these emerging challenges demand equally innovative responses. Our digital tools and platforms are designed precisely to enable safe, structured, and accessible restorative processes, even in complex and non-traditional contexts. Restorativ could play a pivotal role in facilitating these new forms of dialogue, offering victims of AI-related harm spaces to share their experiences, and helping communities and organizations develop ethical guidelines and commitments for responsible AI use. Our commitment is to continue exploring how restorative justice can evolve alongside technology, ensuring that the values of dignity, accountability, and repair remain at the heart of our increasingly digital societies.