Data governance, privacy and trust – A sweet spot for ESG?

Trust is an essential part of any relationship. Companies’ relationships with their customers and consumers are no different. But the concept of trust in a business context is fragile.

At its core, corporate “trust” seems to boil down to a mix of compliance, reputation and ethics issues. Over the past decade, phrases such as “driving to purpose”, “sustainable business” and “environmental social governance (ESG)” have become part of the corporate lexicon as companies attempt to demonstrate that they are listening to changes in public sentiment and values.

However, the rapid digitalization of the global economy has given rise to some of the most difficult trust issues the consumer–business relationship has faced to date. Concerns raised by the academic community about the use of unintelligible and opaque AI systems and the commoditization of human data pose a real risk to a company’s reputation. The privacy and data conversation has moved beyond the legal and compliance spheres to the question of whether certain technologies should be used in a given context at all, even where their use is legally permitted.

In the absence of a comprehensive set of international rules on the ethical sourcing of data and the use of AI, ESG principles can offer companies a lens through which many of the risks associated with these technological issues can be better understood and managed, in order to foster sustainable growth and value.


Controversial ethical issues can arise when data is used to make automated determinations or predictions that affect individuals, even if done within the letter of the law.

Over the past few years, we have seen examples where algorithms have made faulty or biased decisions that have negatively impacted human lives, as a result of training on faulty or biased data sets.1 This is rooted in the practices of classifying and labeling training data, often by the machines themselves or by low-paid human workers. This process frequently involves taking data out of context and giving it a singular, reductive meaning that not only limits how AI interprets the world, but also imposes a particular, narrow worldview on it.

In the quest to acquire ever more data to power AI, we have also seen a move away from consent-based data collection. In Australia, there has recently been regulatory pushback against non-consensual data acquisition practices. In November 2021, the Office of the Australian Information Commissioner (OAIC) found that Clearview AI, Inc. had breached the privacy of Australians by scraping their biometric information from the web and disclosing it through a facial recognition tool. The “lack of transparency around Clearview AI’s collection practices, the monetization of individuals’ data for a purpose entirely outside of reasonable expectations, and the risk of adversity to individuals whose images are included in their database”2 all contributed to the finding. In addition, Commissioner Falk explicitly noted that the law needs to catch up with technological developments, saying the case “reinforces the need for stronger protections through the current Privacy Act review.”3

Sometimes the technological innovation or the product itself creates the ethical dilemma. Recently, Facebook announced that it was shutting down its photo tagging feature (a controversial facial recognition system). Mr. Pesenti, a vice president at Facebook’s parent company Meta, said the company was trying to weigh the technology’s positive use cases “against growing societal concerns, especially as regulators have not yet provided clear rules”.4


Given the widespread use of AI and data, AI ethics and data compliance are issues that cut across all businesses and areas of business. AI ethics and data management considerations also appear in each of the three core ESG pillars.

The relationship is most evident for governance. Good internal data governance (i.e. the ability to demonstrate that appropriate oversight and controls are in place to show compliance with privacy commitments), together with genuinely consent-based acquisition of data for AI training, makes complying with changing privacy and data protection regulations worldwide more manageable. Such transparency also leads to accountability, and Australian regulators and courts are increasingly insisting on greater accountability in the use of AI by businesses and government agencies.

The impact of technology and data processing on individuals falls under the social pillar of ESG. Advances in technology and science allow for ever deeper analysis of how we live our lives. Algorithms determine creditworthiness and are used to diagnose medical conditions. The technology is also capable of intrusive levels of surveillance, often without the knowledge of the individual, whether at work, at home or in public spaces. Failure to consider the ethical implications of these activities undermines trust in AI, and in the companies and governments that use it. It is important to recognize that data collected today may be used in the future in ways that are not currently contemplated. This creates additional risks for both companies and individuals.

Finally, there are significant environmental considerations related to the use of data and the lifecycle of AI. These include the intensive energy consumption of data centers and the enormous demands on the earth’s resources for the minerals, such as lithium, that form the building blocks of computing. Advanced computing is rarely considered in terms of its carbon footprint, fossil fuel consumption, human labor and pollution. Companies that promote their environmental merits need to be aware of the impact of technology on the environment and how this is communicated to stakeholders.


The rationale for considering AI ethics and data compliance in ESG mirrors that of the ESG movement more broadly: taking a holistic view of the impact of AI and the use of data can lead to better risk management and ultimately create longer-term value for the business and investors.

Companies that encounter data-related compliance issues face the potential for reputational damage, significant remediation costs, reduced valuations, and substantial regulatory penalties. Developing forward-thinking data governance policies based on principles of good data stewardship, adopting ethical frameworks for the design and use of AI, and implementing transparent targets against which progress can be measured will help mitigate risk in an area that is changing at a colossal pace.




3. Supra.


The content of this article is intended to provide a general guide on the subject. Specialist advice should be sought regarding your particular situation.

Helen D. Jessen