What are the impacts of Artificial Intelligence (AI) on human labour and the environment? How do legislative proposals for regulating AI in Europe and Brazil respond to these impacts beyond discussions on surveillance and automated decision-making bias?
When we think of artificial intelligence systems, it is common that sci-fi images of self-driving vehicles and humanoid robots pop into our minds. These ideas are frequently presented to us as futuristic visions, something beyond the imagination that will push forward automation and improve quality of life. However, what this narrative usually omits are the impacts on labour and the environment resulting from AI’s development.
It ignores, for instance, what US scholar Lilly Irani calls the “hidden [human] faces of automation”, which calibrate algorithms to fit our society. It also rarely mentions how nature is affected by the race for AI development, as its demands for resources grow in terms of energy consumption and the extraction of minerals.
A perpetuation of Modern Times
In Charlie Chaplin’s “Modern Times”, the relationship between human worker and machine is presented in a comic yet tragic way, capturing how the Second Industrial Revolution was transforming individuals’ relationship with their work. In one memorable scene, Chaplin’s character develops a nervous tic after making the same repetitive movement over and over again. In another, he is swallowed by a machine, basically being “integrated with it”.
Decades later, now that we are already in a so-called Fourth Industrial Revolution, the relationship between human workers and machines can still be extremely dire.
AI systems are mostly developed by feeding algorithms with large pools of data. However, they frequently fail to make accurate predictions because they cannot assess specific nuances in data that require deeper contextual knowledge of the world. AI thus depends on humans both for labelling data sets and for reviewing its outputs.
Historically, most of these activities have been done by low-wage workers, who compensate for the shortcomings of AI systems by feeding in information that helps them make better classifications. One example is the hiring of content moderators by social media companies to identify nuances in speech and the reliability of information before flagging content as misinformation. Horrific stories about these workers’ working conditions have filled the media in the last few years.
Another remarkable instance of this new Modern Times dynamic is the use of crowdsourcing platforms to have groups of people tag objects in images that will serve as training data for image recognition systems. The best example here is probably Amazon’s crowdsourcing platform, the Mechanical Turk, where AI developers from the most diverse fields can find freelancers to execute tasks either as volunteers or in exchange for money. Developers impose their own rules, and contractors have no right to a minimum wage or social benefits. All of this in exchange for an average of only two US dollars per hour. Of course, individuals watching toxic social media content for hours will still “earn” plenty of vicarious trauma and other mental health issues.
Lastly, we cannot avoid mentioning how AI systems are replacing many jobs. While in the past one might have guessed that only manual labour would be at risk, AI development has proven this inaccurate: a 2019 OECD report estimates that middle-skilled jobs are increasingly exposed to risk, with 14% of existing jobs likely to disappear in the next 15-20 years due to automation, and an additional 32% on the verge of radical change. At the same time, six out of 10 adults lack the right skills for the emerging jobs that might be created in the coming years.
The carbon footprint of AI
Apart from the exploitation of human labour for further developing AI and exerting automated control over our bodies, we should not disregard the environmental effects that this industry has been leaving behind.
One notable effect relates to the carbon footprint of training AI systems. To track this impact, researchers from the University of Massachusetts Amherst assessed the levels of energy consumption required to develop different natural language processing (NLP) systems, which are embedded in applications such as Google Translate and DeepL to interpret spoken and written language and translate texts.
The scholars found that in the process of training and developing one single NLP model, approximately 660,000 pounds of CO2 were emitted into the atmosphere. This is roughly the same amount produced by five cars over their lifetime.
Another environmental effect of AI systems refers to the mining of metals necessary for building the devices in which AI operates. Gadgets from smartphones and notebooks to AI-powered autonomous vehicles all need chips, covers and batteries to make the applications inside them work.
An illustrative example can be found in the manufacturing process of Tesla vehicles. To build the rechargeable battery pack for each Tesla Model S electric car, the company needs about 63 kg of lithium. Not surprisingly, Tesla is the largest lithium consumer in the world, estimated to use more than 28,000 tons of lithium hydroxide annually, half of the planet’s total consumption. Although fully autonomous cars are still far from being deployed in the streets, Tesla is considered to hold a leading position among its competitors, poised to bring its vehicles to market at scale as soon as its self-driving systems are deemed safe enough for use in real-world scenarios.
Tesla’s CEO has also been advertising his company’s development of robots as if it were close to creating superintelligent machines - something which experts regard as absurd. Nevertheless, the spread of such claims shows how the environmental impacts of these AI systems are being veiled by narratives of futurism and superpowers.
It is also worth noting that, besides lithium, rare metals such as tantalum, dysprosium and neodymium are also among the elements needed to build smartphones and electric vehicle motors. Major world suppliers of these metals include countries that have been involved in recent armed conflicts, such as the Democratic Republic of the Congo, where mineral extraction is commonly linked to modern slavery and the funding of warfare.
These environmental footprints left by AI developers and users serve as a warning about how narratives around these technologies are constantly built without a critical perspective on how they affect the environment and human beings. The next section addresses how legislative proposals for regulating AI are responding to this.
Are regulators worried about AI prospects for labour and the environment?
When confronted with such significant human and environmental effects, one might guess that legislators debating AI regulation are already addressing these topics. Unfortunately, the reality is somewhat different.
In the European Union, where one of the world’s most robust AI regulations is being drafted (the AI Act), not much attention seems to be drawn toward these themes. Recital 28 states that “workers’ rights” and the “right to a high level of environmental protection” are enshrined in the EU Charter of Fundamental Rights, and that a risk-based approach to AI regulation should therefore include them. However, this is presented in generalised language that does not concretely explain how to do so.
In Recital 81 and Article 69, environmental sustainability is suggested for consideration in voluntary codes of conduct. Nevertheless, the mere non-binding mention of sustainability, with no clues as to how particular issues should be addressed, offers little guidance on dealing with such a complex topic.
Recently, the EU Parliament’s CULT Committee issued a Draft Report that adds a new layer to the debate. It recommends amendments that include potential environmental impacts among the issues to be assessed when establishing whether a system should be considered high risk. However, not much detail is provided on, for instance, how this assessment should be made, or what should be considered an environmental risk.
The situation in other regions of the globe does not seem better. Take, for example, the case of Brazil. As a country recognised for its protective labour rights regime and with a fundamental role in the world’s environmental debate, one could expect that the country’s regulatory debate would take AI’s impact in these fields into consideration.
However, an AI Bill approved by the Chamber of Deputies, and now being discussed by a committee of jurists commissioned by the Senate, moves in a different direction. The text approved in the Chamber presented a principle-based approach, providing that the deployment of AI systems should aim to, among other things, protect and preserve the environment (Article 3, VI) and should have as one of its fundamentals the “respect for labour rights” (Article 4, III). Such generalised provisions, again, do not properly address the labour and environmental impacts posed by AI, and show how Brazilian workers and natural resources may be left vulnerable to the social transformations brought by these technologies without proper regulatory oversight.
AI’s social impacts are slowly being recognised by regulators worldwide. Fairness, accountability and transparency are among the topics most discussed with regard to AI, and they pose serious challenges to legislators and policymakers. However, at a time when humanity, in the words of UN Secretary-General António Guterres, is “sleepwalking to climate catastrophe” and human beings are being seriously affected by AI in their work, stakeholders should be paying more attention to how AI affects human labour and natural resources. It is time to cease sleepwalking and finally wake up if we are to avoid a bitter ending.
This article first appeared here: eu.boell.org