The risk of basing decisions on biased data is increasingly relevant to the investment community
As we gradually emerge from the pandemic with technology more deeply ingrained in our daily lives, issues related to the scope and use of the vast amounts of data collected, such as data security and the use of predictive technologies and artificial intelligence (AI), are becoming increasingly important.
Awareness of the “darker” side of AI is also on the rise, highlighted by mainstream hits on Netflix such as Coded Bias, which examines how machine learning algorithms have driven greater discrimination and inequality.
The technology itself and the data we collect daily are not inherently “good” or “bad”. However, as humans, we can program our biases into the technology we create. Existing racial, social and gender biases can be built into the algorithms we develop, often unconsciously, or the data used to train those algorithms can itself be biased, inevitably skewing the results. In the US health care system, for example, an algorithm for determining health care risk and the need for additional medical care was found to be racially biased, favoring additional care for white patients over black patients. This stemmed from the fact that the algorithm was trained on previous patients’ healthcare expenditures, a very poor indicator of actual healthcare needs in the United States, given the privatized healthcare system, unequal distribution of financial wealth and structural racism.
The concept of fair AI reflects the notion that AI systems should be designed so that human or social biases do not translate into algorithms. This ethical approach to the design and implementation of AI systems is essential because of the significant risks of leaving biased AI algorithms unchecked. The AI market is expected to witness significant growth over the next few years. According to research by Fortune Business Insights, the industry is expected to grow at a compound annual growth rate of 33%, from $47.47 billion in 2021 to $360 billion in 2028. In addition, recent McKinsey research indicates that two-thirds of companies plan to increase their investment in AI over the next three years. As such, greater attention to equality in data analysis is essential to prevent bias from being further embedded and amplified.
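As a quick sanity check on the cited projection, the implied growth rate can be recomputed from the two market-size figures quoted above; the short sketch below uses only those numbers and standard compound-growth arithmetic.

```python
# Recompute the compound annual growth rate (CAGR) implied by the cited
# market-size figures: $47.47B in 2021 growing to $360B in 2028.
start_value = 47.47   # billions of USD, 2021
end_value = 360.0     # billions of USD, 2028 (projected)
years = 2028 - 2021   # 7-year horizon

# CAGR = (end / start)^(1 / years) - 1
cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~33.6%, consistent with the cited ~33%
```

The recomputed figure lands close to the 33% rate reported by Fortune Business Insights, so the quoted numbers are internally consistent.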
“We can work to develop fairer and more ethical machine learning algorithms and ensure that the historical biases and prejudices we seek to eradicate in society do not continue to be reproduced by AI”
The risk of basing decisions on biased data is also increasingly relevant to the investment community. Responsible investors are expected to have an intimate understanding of the impacts of portfolio companies at all levels of the supply chain, with the integration of complex social issues being a crucial part of this. At the same time, increased scrutiny means that to avoid claims of “impact-washing” or “greenwashing”, due diligence processes must be watertight in their assessment of the impact of company activities on a diverse selection of stakeholders. Additionally, for impact investors, determining the true impact of their investment decisions is critical to achieving their performance goals and ensuring their long-term success.
While AI is often seen as critical to business success and scale, what is less talked about in the tech and AI industry is how we can leverage data and machine learning to drive and accelerate social change, justice and equity.
We recognize that there is enormous untapped potential in the vast amounts of textual data related to social change and its impact in the world. At ImpactMapper, we have developed analysis software tools to surface trends from evaluation and research reports, beneficiary or investment reports, change stories, interviews, notes and corporate social responsibility information, creating aggregate-level quantitative indicators that measure not only social change but also progress on climate change, sustainability, social justice and human rights.
We find that many biases are built into AI, with algorithms trained on datasets that do not represent the diversity of perspectives in our communities. We therefore work to align data collection with the principles of social justice, which will inevitably lead to alternative and fairer outcomes than traditional data analysis.
Imagine a world where the voices of underrepresented groups were prioritized: human rights activists, social justice activists, people of color, girls, young teens, members of the LGBTQI+ community and people with disabilities, among many others. By building pro-social, pro-equality, and pro-diversity databases, we can shift power imbalances and harness the power of machine learning and AI to mobilize social good and equity. When we train our models on the vast amounts of social change data that exist in the form of assessments, research reports and progress reports from nonprofits and social movements around the world, we begin to harness the power of data for social change and good in ways we’ve never seen before. By doing so, we can work to develop fairer and more ethical machine learning algorithms and ensure that the historical biases and prejudices we seek to eradicate in society do not continue to be reproduced by AI.
ImpactMapper partners with like-minded foundations, nonprofits and social justice activists, UN agencies, networks and businesses that have deep commitments to equity and rights around the world. And there are many other exciting emerging initiatives by researchers and data scientists around the world to bring more equitable insights into the AI space, including the A+ Alliance (Alliance for Inclusive Algorithms), AI for Neurodiversity and the Data + Feminism Lab at MIT. These will be important places to watch and fund.
Reviewing our relationship with data and aligning it with principles of social justice will be essential as we move forward. Taking this approach could be transformative not only for the development sector, which represents many largely unheard voices, but could also have much broader implications for the future of AI and how it affects all aspects of investment and financing.
Alexandra Pittman, PhD, is the founder and CEO of ImpactMapper, a software tool that helps businesses, donors, and nonprofits track, visualize, and optimize the real effects of their social impact activities.