2020 is forcing us to confront some hard truths about the world we live in. It is worth pausing, just for a moment, to ask whether the exponential technological progress we have been experiencing, progress that aims to create a better future for humanity, is also amplifying some of the very challenges we are trying to overcome as a global society.
As we strive to meet the unmet and unarticulated needs of customers, we continuously look to technology to fulfil the promise of ushering in a new era of human advancement. We see leading companies globally investing heavily in technologies such as Cloud Computing, Internet of Things, Advanced Analytics, Edge Computing, Virtual and Augmented Reality, 3D printing and, of course, Artificial Intelligence. And it is AI that many experts tout as one of the most transformational technologies of our time, perhaps even more transformational than electricity or fire in its sheer impact on humanity.
Global use of AI has ballooned by 270% over the past five years, with estimated revenues of more than $118 billion by 2025. AI-powered technology solutions have become so pervasive that a recent Gallup poll found nearly 9 in 10 Americans use AI-based solutions in their everyday lives. Phones, apps, search engines, social media, email, cars and even appliances in our homes are all powered by AI-infused technologies today.
And yet, as AI ingrains itself in our daily lives, a dark side is surfacing with alarming frequency. The questions increasingly being posed, and which must be addressed, are whether AI algorithms are perpetuating forms of bias to the detriment of under-represented communities and minorities, and to what extent AI-imbued solutions discriminate against vulnerable classes of society because of embedded bias.
Bias in the machine
There are ample examples of algorithms displaying forms of bias.
In 2018, reports emerged of Gmail’s predictive text tool automatically assigning “investor” as “male”. When a research scientist typed “I am meeting an investor next week”, Gmail’s Smart Compose tool thought they would want to follow up with the question “Do you want to meet him?”.
That same year, Amazon had to decommission its AI-powered talent acquisition system after it appeared to favour male candidates. The software seemingly downgraded female candidates whose resumes included phrases containing the word “women’s”, for example “women’s hockey club captain”.
Many of the large tech firms battle with diversity, with men far better represented than women at most major tech companies. Having gender bias embedded in algorithms designed to support hiring presents a significant risk to efforts at achieving greater diversity: Mercer’s Global Talent Trends report for 2019 highlights that 88% of companies globally already use AI-powered solutions in some way for HR, including 100% of Chinese firms and 83% of US employers.
For Amazon, the episode forced a rethink of how it recruits globally, no small feat for a company that employs more than 575,000 workers.
Persecuted by an algorithm
Errant algorithms can be responsible for greater harm than just a few missed employment opportunities.
In June 2020, the New York Times reported on an African American man wrongfully arrested for a crime he didn’t commit after a flawed match from a facial recognition algorithm. Experts at the time believed it was the first such case to be tested in US courts. I’d wager it won’t be the last.
Recent MIT studies found that facial recognition software, used by US police departments for decades, works relatively well on certain demographics but is far less effective on others, mainly due to a lack of diversity in the data used to train these algorithms.
Microsoft and Amazon have halted sales of their facial recognition software until its impact on vulnerable and minority communities is better understood and mitigated. IBM has gone as far as to stop offering, developing or researching facial recognition technology altogether.
But how does this happen in the first place?
How bias enters our algorithms
McKinsey supports the view that it is the underlying data, more than the algorithm itself, that is the culprit in perpetuating bias. In a 2019 paper, the firm argued that algorithms trained on data containing human decisions have a natural tendency toward bias. Using news articles for natural language processing, for example, could instil the common gender stereotypes we find in society simply through the nature of the language used.
Many of the early algorithms were also trained on web data, which is often rife with our raw, unfiltered thoughts and prejudices. A person commenting anonymously on an online forum arguably has more freedom to display prejudice without much consequence. It’s not socially acceptable to admit to being racist or sexist, but the anonymity offered by the web means many of these views proliferate across mainstream and niche websites. Any algorithm trained on this data is likely to assimilate the embedded biases.
As Princeton researcher Olga Russakovsky observes: “Debiasing humans is a lot harder than debiasing AI systems.”
One example of this is Microsoft’s well-intentioned experiment with their chatbot, Tay. Tay was plugged directly into Twitter, where users across the world could interact with it. Users of the popular social media platform promptly got to work teaching the bot racist, misogynist phrases. Within one day, the bot started praising Hitler, forcing Microsoft researchers to pull the experiment.
The lesson: algorithms learn precisely what you teach them, consciously or unconsciously. And because algorithms learn from data, data matters.
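The point can be made concrete with a toy sketch. The snippet below builds a naive bigram “smart compose” from a small, deliberately skewed corpus (the sentences are hypothetical, invented purely for illustration): because “he” follows “said” more often than “she” in the training data, the model dutifully reproduces that skew in its suggestions.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus, skewed the way web text often is:
# male pronouns follow "investor ... said" more often than female ones.
corpus = [
    "i am meeting an investor next week do you want to meet him",
    "the investor said he would invest",
    "the investor said he was interested",
    "the investor said she would invest",
]

# Build a simple bigram model: for each word, count what follows it.
bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        bigrams[current][nxt] += 1

def suggest(word):
    # A naive "smart compose" suggests the most frequent continuation.
    return bigrams[word].most_common(1)[0][0]

print(suggest("said"))  # prints "he": the model has learned the skew
```

No malicious line of code is needed; the skew in the data alone is enough to produce a biased suggestion.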
Web data is also not fairly representative of society at large: issues with access to connectivity and the cost of smartphones and data can exclude many, especially minorities, from engaging with online content. This means that data collected from the web is naturally skewed towards the demographics that make the most use of websites and social media.
Combating bias in our AI solutions
One of the biggest challenges for the creators of AI algorithms trying to eliminate bias is knowing what should replace it. If fairness is the opposite of bias, how do you define fairness?
Princeton computer scientist Arvind Narayanan argues there are at least 21 different definitions of fairness, ranging across notions of individual fairness, group fairness, process fairness, diversity and representation. Our individual and collective life experiences will largely determine what type of fairness we favour, but the problem this creates is that one person’s fairness could be another’s discrimination.
For example, what is the fair demographic representation of “Global CEO” when you enter that into an image search bar? Is it a 50/50 split between male and female? Is it an equal split between White, Black, Hispanic and Asian CEOs? Or should its results simply be proportional to real-world data: if there are only four black CEOs of Fortune 500 companies, should only 0.8% of search results be of black CEOs?
There is arguably a need for greater diversity in the development rooms where AI algorithms are created. A cursory glance at the demographics of the big tech firms shows a disproportionate gender and demographic skew. More must be done to bring diverse and inclusive perspectives into the AI creation process, so that algorithms embody a broad range of viewpoints and drive better outcomes for everyone represented in society.
What can we do to mitigate bias in the AI solutions we increasingly use to make potentially life-changing decisions, such as whether to arrest or hire someone? Greater awareness of bias can help developers see the contexts in which AI could amplify embedded bias and guide them to put corrective measures in place. Testing processes should also be developed with bias in mind: AI creators should deliberately create processes and practices that test for and correct bias. Design should always keep bias in mind.
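One simple, widely used bias test is inspired by the “four-fifths rule” from US employment guidelines: a group’s selection rate should be at least 80% of the highest group’s rate. The sketch below, using made-up toy hiring decisions, shows how such a check could be automated as part of a testing process.

```python
# A minimal disparate-impact check. The decision lists are toy data,
# invented for illustration; 1 = hired, 0 = not hired.

def selection_rates(decisions):
    """decisions: dict mapping group name -> list of 0/1 hire outcomes."""
    return {g: sum(d) / len(d) for g, d in decisions.items()}

def flag_disparate_impact(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below the four-fifths rule."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return [g for g, r in rates.items() if r / best < threshold]

decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 = 0.75 selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],  # 2/8 = 0.25 selected
}

print(flag_disparate_impact(decisions))  # ['group_b'] fails the 80% test
```

A check like this does not fix bias on its own, but running it routinely over a model’s outputs makes disparities visible early, before the system is deployed.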
It’s probably impossible to have a completely unbiased human, but having more diverse voices and greater awareness of the various forms of embedded bias within our societies can help AI creators build greater fairness into their algorithms. Diversity is important and can lead to more meaningful and helpful discussions around potential bias in human decisions.
Finally, AI firms need to invest in bias research and share the learnings broadly, to ensure the algorithms we all use can operate alongside humans in a responsible and helpful manner.
Fixing bias is not something we can do overnight. It’s a process, just like addressing discrimination in any other part of society. However, with greater awareness and a purposeful approach to combating bias, AI algorithm creators have a hugely influential role to play in helping establish a fairer and more just society for everyone.
This could be one silver lining in the ominous cloud that is 2020.
ENDS
Visit the SAP News Center. Follow SAP on Twitter at @SAPNews.
About SAP
As the Experience Company powered by the Intelligent Enterprise, SAP is the market leader in enterprise application software, helping companies of all sizes and in all industries run at their best: 77% of the world’s transaction revenue touches an SAP® system. Our machine learning, Internet of Things (IoT), and advanced analytics technologies help turn customers’ businesses into intelligent enterprises. SAP helps give people and organizations deep business insight and fosters collaboration that helps them stay ahead of their competition. We simplify technology for companies so they can consume our software the way they want – without disruption. Our end-to-end suite of applications and services enables more than 440,000 business and public customers to operate profitably, adapt continuously, and make a difference. With a global network of customers, partners, employees, and thought leaders, SAP helps the world run better and improve people’s lives. For more information, visit www.sap.com.