AI in Conservation: Communities’ control over the decision-making process

Imagine a tropical forest region, a seascape, or mangroves, where big data on society and ecology, on biodiversity and on the behavior of people as individuals and as a community, are collected through data sensing and other methods and fed into a larger Artificial Intelligence project. The machine, the computers and so on, will of course learn in the process. Still, at the beginning, what information to acquire and how to use it is decided by specific (human) stakeholders. Gradually, machine learning will take its course and carry AI processes forward. AI will acquire data and set rules for data use, deciding how communities may access nature's commons. AI will determine what counts as nature conservation and what does not; it will choose where, when, and how to intervene for conservation.

In recent years, several non-governmental organizations based in North America and Europe have embraced AI in nature conservation. The plans and actions of these conservation NGOs matter to communities all across the world, because the narratives these big NGOs promote, and the work they do, heavily influence policies and resource allocation outside North America and Europe. Unfortunately, it appears that the conservation groups with international influence have yet to recognize that AI is an automated decision-making process. None of these groups are addressing the question of communities' participation in and control over AI. Yet the success of these NGOs will mean that, in the coming decades, AI will increasingly determine the extent of control over nature's commons enjoyed by local and indigenous communities across the world.

For instance, the largest association of nature conservation groups, the International Union for Conservation of Nature (IUCN), is currently drafting its program for the 2021-2024 period. The IUCN has identified Artificial Intelligence as one of the main enablers for achieving its goals across its core program areas. It seems that the use of big data, machine learning, and AI is considered the most critical enabler in the IUCN's future programs. But there is no word about safeguarding against AI's autonomous power to harm; nothing is said about whether there will be efforts to ensure communities' participation in AI and communities' control over big data.

If you take a serious look at the current state of the AI field, you will see that the basic premises of any discussion of AI in the governance of nature conservation should at least include the following:

  1. AI is a simulation of human intelligence processes, owned and run by big data monopolies; it also simulates human biases, aggravates violations of rights, and accelerates injustices.
  2. AI is an autonomous decision-making process with independent power to harm individuals and communities by violating privacy and other rights, and with inherent features that aggravate global inequality through the unequal distribution of resources.
  3. To date, AI innovations and applications are primarily owned and run by a few big data monopolies. If communities do not own the big data, AI processes and tools can readily be used to disempower people and to hinder equitable governance of nature's commons.

Unfortunately, while conservation groups are embracing AI, none of these discussions are taking place. After decades of community work to secure environmental rights and justice, inclusion, and participation, and to establish the concept of free, prior, and informed consent, why is this happening all over again with AI? I see three main reasons. Firstly, conservation groups treat AI as a merely technological tool, one that is innovative and can tremendously enhance the operation of nature conservation governance. Secondly, conservation groups fail to recognize that AI processes are still business products owned by a very few giant corporations with a total monopoly on the powerhouse of AI: big data. Lastly, conservation groups do not recognize that AI is resource-expensive, and that the absence of AI is not necessarily the main obstacle many communities face in conserving nature's commons.

These limitations in big conservation groups' position on AI should be addressed seriously. Members, supporters, and patrons of conservation NGOs should understand that AI is not just an innovative technological tool that state or non-state actors can use to implement nature conservation interventions; it is much more than that. AI brings a very high degree of automation to the decision-making process. It will determine who gets to decide what interventions are necessary, and when and how to intervene.

To date, the main powerhouse of AI, big data, is owned by invasive, non-transparent, and unaccountable corporations that have established a monopoly in the business. AI therefore carries all the inherent biases against marginalized communities in every nation, and an innate capacity to be used against marginalized communities (e.g., indigenous nations, artisanal fishers, and vulnerable gender groups) whose livelihood practices protect nature against unsustainable extractive industries. Without democratization of AI, it will be dangerous for vulnerable communities to welcome it into the management of the environmental commons to which their lives, livelihoods, and cultures are deeply connected. Deploying AI without securing communities' direct control over the data can undo decades of effort toward environmental justice and toward participatory and inclusive governance of nature's commons.

AI is resource-expensive. Nature conservation management is doable with far less, and it would be counter-productive to welcome such a resource-expensive process indiscriminately. The efficiency in nature conservation governance promised by Artificial Intelligence is helpful to indigenous and local communities only if they have the political power, the opportunity for direct participation, and the authority to control such an automated decision-making process. Imagine artisanal fishers or indigenous communities who are not allowed to participate directly in governance, while outside actors bring AI into the scene without democratizing ownership of the big data. In that case, AI will be used to justify injustices against those communities.

Firstly, conservation groups should make it very clear that when they talk about Artificial Intelligence, big data, data sensing, and machine learning, they recognize AI as a highly automated decision-making process with inherent biases and an inherent power to harm communities. Secondly, conservation groups should prioritize democratizing such processes before deploying AI in nature conservation. And lastly, conservation groups should recognize that democratization of AI does not only mean that communities have the right to know or see (access) what is going on. It means that communities own the big data and have total control over the processes related to AI.

Featured Photo: Fishers and honey collectors in the Sundarbans— the largest continuous mangrove forest in the world. Photo by the author.