Can You Rely on the Authenticity of Generative AI?
Is Generative AI Trustworthy?
by Andrew Pery, AI Ethics Evangelist & Maxime Vermeir, Senior Director of AI Strategy
The recent buzz around ChatGPT raises legitimate questions about its trustworthiness. OpenAI's CEO Sam Altman has himself acknowledged that ChatGPT has "shortcomings around bias."
A recent Forbes article, "Uncovering the Different Types of ChatGPT Bias," went even further: "Problematic forms of bias make ChatGPT's output untrustworthy." The article cites five categories of ChatGPT bias, which broadly represent risks to be aware of with any generative artificial intelligence (AI) technology:
- Sample bias
- Programmatic morality bias
- Ignorance bias
- Overton window bias
- Deference bias
The first is sample bias. Keep in mind that 60 percent of ChatGPT's training data was scraped from the internet, with a knowledge cutoff in 2021. The model can generate convincing but entirely inaccurate results, and its filters are not yet effective at recognizing inappropriate content.
Second is what Forbes refers to as "programmatic morality bias": software developers' subjective opinions of what counts as a socially acceptable response, injecting their own norms into the model.
Third is "ignorance bias": the model is inherently designed to generate natural-language responses that read like human conversation, but without the ability to truly understand the meaning behind the content.
Fourth is the "Overton window bias," whereby ChatGPT tries to generate responses deemed socially acceptable based on its training data, which may in fact amplify bias in the absence of rigorous data governance strategies.
Fifth is "deference bias," a tendency to trust the technology, given the "workload of most knowledge workers and the immense promise of this shiny new toy."
The urgent need for AI regulation
To address these biases, it's prudent to adhere to ethical principles and values in the development and regulation of artificial intelligence (AI) technologies. There are four key reasons the need for AI regulation must be addressed immediately:
- AI is becoming pervasive, impacting virtually every facet of our lives. AI is forecast to contribute $15.7 trillion to the global gross domestic product (GDP) by the end of the decade. Moreover, a Goldman Sachs study projects that 300 million jobs may be subsumed by AI technologies such as generative AI. AI is simply too big to be governed by self-regulation.
- Innovators of disruptive technologies have a propensity to release products with a "ship first, fix later" mentality in order to gain first-mover advantage. For example, while OpenAI is somewhat transparent about the potential risks of ChatGPT, it has released the product for broad commercial use, its harmful impacts notwithstanding. Placing the burden on users and consumers who may be adversely impacted by AI results is unfair.
- In many instances, AI harms may be uncontestable because consumers lack visibility into how AI systems work. AI regulation is needed to impose on developers much higher standards of accountability, transparency, and disclosure that ensure AI systems are safe and protect fundamental privacy and economic rights.
- AI tends to amplify bias. As David Weinberger of Harvard put it, "bias is machine learning's original sin." AI has been shown to amplify bias in facial recognition, employment, credit, and criminal justice, profoundly impacting and marginalizing disadvantaged groups.
Achieving the promise of AI technology requires a harmonized approach that encompasses human values of dignity, equality, and fairness.
It's not just about technology or conformance with specific laws. As Dr. Stephen Cave, executive director of the Leverhulme Centre for the Future of Intelligence at Cambridge, said: "AI is only as good as the data behind it, and as such, this data must be fair and representative of all people and cultures in the world."
It is for these reasons that there is increased urgency in developing AI regulations. Chief among them is the European Union Artificial Intelligence Act, the compromise text of which was approved by the European Parliament on June 14, 2023. It is likely to become the first comprehensive AI regulation, imposing prescriptive obligations on AI providers to safeguard human rights and the safety of AI systems while promoting innovation. In the US, Senate Majority Leader Chuck Schumer recently urged implementation of AI regulation.
It can't be ignored, however, that artificial intelligence systems are man-made, including the data on which they are trained. The bias these systems demonstrate mirrors the bias that exists in humanity itself. To improve this, the responsibility lies not only with developers but with the end users of these technologies. Everyone must be held to a higher standard in order to correct the problem we have created.
What can be done to optimize ChatGPT output accuracy?
Regulations that mitigate the most obvious potential harms of generative AI are just one dimension of harnessing its potential. There are also best practices and technological approaches that can improve the accuracy and reliability of generative AI-based applications.
One such approach is to leverage already proven narrow AI applications such as intelligent document processing (IDP). Narrow AI applications are proven to perform certain tasks exceptionally well, sometimes better than humans. They use advanced machine learning techniques such as convolutional neural networks and supervised learning to recognize and extract text and data from images and documents with a high degree of accuracy.
It is important to contextualize applications of generative AI to specific use cases. For example, using intelligent document processing to classify and extract content from complex processes such as loan applications, insurance claims, and patient onboarding will improve the generative AI’s knowledge base, thereby improving the accuracy and reliability of the generated content. Intelligent document processing can be a valuable tool to improve the quality and accuracy of the ChatGPT training data knowledge base.
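To illustrate the idea, here is a minimal sketch of how IDP output could ground a generative model. The `build_grounded_prompt` function and the example field names are hypothetical, not ABBYY's or OpenAI's APIs; the point is that fields verified by a narrow AI extraction step constrain the generative model to document facts rather than its training data alone.

```python
# Hypothetical sketch: grounding a generative AI prompt with fields
# extracted by an IDP step (e.g., from a loan application). The function
# name and field names are illustrative, not a real vendor API.

def build_grounded_prompt(extracted_fields: dict, question: str) -> str:
    """Embed IDP-extracted fields as verified context for a generative model."""
    context = "\n".join(
        f"- {name}: {value}" for name, value in extracted_fields.items()
    )
    return (
        "Answer using only the verified document fields below.\n"
        "If the answer is not in the fields, say so.\n\n"
        f"Verified fields:\n{context}\n\n"
        f"Question: {question}"
    )

# Fields a narrow AI (IDP) engine might extract from a loan application
fields = {
    "document_type": "Loan application",
    "applicant_name": "Jane Doe",
    "loan_amount": "250,000 USD",
}
prompt = build_grounded_prompt(fields, "What loan amount is requested?")
print(prompt)
```

In a real pipeline, the prompt would then be sent to a generative model; the accuracy gain comes from the extraction step, whose output is verifiable, rather than from the language model itself.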
We at ABBYY can add value to foundation models, particularly in the area of training data accuracy. ABBYY's globally deployed IDP portfolio leverages advanced AI technologies such as convolutional neural networks and natural language processing to increase document classification and recognition accuracy, which can mitigate potentially inaccurate outputs generated by foundational AI.
Ultimately, the utility of foundational AI systems will depend on contextualizing use cases and applying rigorous data governance strategies to mitigate bias, inaccurate results, copyright and privacy infringement, and harmful content. This may become even more important as legal issues relating to copyright and privacy are raised around the training of generative AI, as evidenced by recent class action suits initiated against Google and OpenAI.
Continue reading on this particular topic in Techopedia: RDS and Trust Aware Process Mining: Keys to Trustworthy AI?
Would you like to stay up to date on the latest thought leadership from ABBYY, exploring topics from the latest AI regulations to developments in applied AI? We invite you to subscribe to The Intelligent Enterprise today.
Andrew Pery
Digital transformation expert and AI Ethics Evangelist for ABBYY
Andrew Pery is an AI Ethics Evangelist at intelligent automation company ABBYY. His expertise is in artificial intelligence (AI) technologies, application software, data privacy, and AI ethics. He has written and presented several papers on the ethical use of AI and is currently co-authoring a book for the American Bar Association. He holds a Master of Laws degree with Distinction from Northwestern University Pritzker School of Law and is a Certified Information Privacy Professional (CIPP/C, CIPP/E) and a Certified Information Professional (CIP/AIIM).
Connect with Andrew on LinkedIn.
Maxime Vermeir
Senior Director of AI Strategy
With a decade of experience in product and technology, Maxime Vermeir is an entrepreneurial professional with a passion for creating exceptional customer experiences. As a leader, he has managed global teams of innovation consultants and led large enterprises’ transformation initiatives. Creating insights into new technologies and how they can drive higher customer value is a key point in Maxime’s array of Subject Matter Expertise. He is a trusted advisor and thought leader in his field, guiding market awareness for ABBYY’s technologies.
Connect with Max on LinkedIn.
- Title: Can You Rely on the Authenticity of Generative AI?
- Author: Mark
- Created at : 2024-08-21 17:44:52
- Updated at : 2024-08-22 17:44:52
- Link: https://some-guidance.techidaily.com/can-you-rely-on-the-authenticity-of-generative-ai/
- License: This work is licensed under CC BY-NC-SA 4.0.