
ALGORITHMS OF RECONSTRUCTION: “Practicing Accountability and Punitive Measures for Racist AI Devices”

  • clgpage
  • Dec 30, 2021
  • 4 min read

HARMONIC ONE COMMUNITIES - COMMENTARY

January 2022 Edition




Article Dedication: U.S. Special Warfare Operator, David Goggins


“ … Don’t be the person who wants to turn the water on full blast and fill the glass quickly. While that may get you a full glass of water faster, you failed to gain the mental endurance in that process, and therein lies the major tool in accomplishing all of your goals … “


David Goggins 12/25/2021


See the whole statement on Goggins’s Instagram:



Article:


Publicly unleashing AI programs that disrespectfully miscategorize Black men, women, and children is a reflection of the pungent racist stench that already exists in all areas of society. Hence it is not surprising that these abhorrent miscategorizations, as documented by leading researchers, show increasing error rates for people with darker skin. These denigrating mishaps also appear to mirror the robbery and miscategorization of ancient and advanced African civilizations that predated some of the world’s oldest religions.


The recent AI incidents are disgusting, off-the-chart disrespectful, and extremely dangerous. These include the miscategorization of pictures of Oprah Winfrey and First Lady Michelle Obama as “men” (“76% probable man”); the misuse of a facial recognition system that falsely imprisoned a Black man; and a different Black man on the other side of the earth being miscategorized as “unemployable,” with a person at the same company stating there is “nothing they can do about” his employment miscategorization … all happening at name-brand companies.


Since most programmers and AI developers are not Black, it is not clear whether there is a disposition of respect for those in the African Diaspora as these applications are being developed. The “anti-diversity” manifesto discovered circulating at one large tech company does not make it clear that respect for Black people exists there (officially AND unofficially). Additionally, it is not clear how many ignorant people like the author of that toilet vomit of a manifesto secretly work at the same tech company where it was discovered.


This problem and the concern for accountability are not new; several researchers, companies, and institutions have been discussing such measures for some time. For example, the U.S. Government Accountability Office recently developed the federal government’s first framework to help assure accountability and responsible use of AI systems. It defines the basic conditions for accountability throughout the entire AI life cycle, from design and development to deployment and monitoring, and lays out specific questions for leaders and organizations to ask, and audit procedures to use, when assessing AI systems.
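
To make the framework’s intent concrete, here is a minimal Python sketch of a lifecycle audit checklist. This is an illustration only: the stage names and questions are paraphrases in the spirit of the GAO framework, not text taken from it.

```python
# Minimal sketch: representing lifecycle audit questions as data so that
# unanswered items can be flagged before deployment. Stage and question
# names here are illustrative, not quoted from the GAO framework.
from dataclasses import dataclass

@dataclass
class AuditItem:
    stage: str        # e.g. "design", "development", "deployment", "monitoring"
    question: str     # the accountability question leaders should ask
    answer: str = ""  # empty until the organization documents a response

def unanswered(checklist: list[AuditItem]) -> list[AuditItem]:
    """Return every audit item still lacking a documented answer."""
    return [item for item in checklist if not item.answer.strip()]

checklist = [
    AuditItem("design", "Who is accountable for harms caused by this system?"),
    AuditItem("development", "Does the training data represent darker-skinned people?"),
    AuditItem("deployment", "How are misclassifications reported and corrected?"),
    AuditItem("monitoring", "Are error rates tracked per demographic group?"),
]

for item in unanswered(checklist):
    print(f"[{item.stage}] UNANSWERED: {item.question}")
```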


Additionally, the Harvard Business Review has outlined steps an organization can take to build accountability into its AI. This view is characterized by understanding the AI life cycle, including community stakeholders, implementing the dimensions of AI accountability, and “thinking like an auditor” at all times.


Another valuable effort is the “Datasheets for Datasets” paper proposed by Dr. Timnit Gebru (founder of DAIR, https://www.dair-institute.) and other researchers. As noted in that paper, datasheets for datasets will facilitate better communication between dataset creators and users, and encourage the machine learning community to prioritize transparency and accountability.
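
As a rough illustration, a datasheet can even be made machine-readable so it travels with the dataset. The sketch below is a minimal Python rendering; the field names loosely paraphrase the paper’s section headings, and the example values are invented.

```python
# Minimal sketch of a machine-readable datasheet, loosely following the
# section headings in "Datasheets for Datasets" (Gebru et al.).
# Field names are paraphrased; consult the paper for the full question set.
from dataclasses import dataclass

@dataclass
class Datasheet:
    motivation: str          # why was the dataset created, and by whom?
    composition: str         # what do the instances represent? who is included?
    collection_process: str  # how was the data gathered and consented to?
    recommended_uses: str    # tasks the dataset is appropriate for
    known_limitations: str   # populations or conditions it under-represents

faces = Datasheet(
    motivation="Benchmark facial analysis across skin tones and genders.",
    composition="Portrait photos balanced across six skin-tone groups.",
    collection_process="Publicly available images with documented consent.",
    recommended_uses="Auditing classifier error rates per demographic group.",
    known_limitations="Small sample sizes for some intersectional groups.",
)

print(faces.known_limitations)
```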


Algorithms of Reconstruction


These atrocities strongly dictate the need for a systematic effort to reconstruct what is acceptable in publicly released AI applications and to follow up compliance failures with tough monetary and criminal penalties. Such measures would add an extra incentive for AI organizations to audit their products prior to release.


Just as the health industry started with literally no universal standards for handling patient health information (and now has them), the time has come for practicing accountability standards and punitive measures in Artificial Intelligence.


A punitive framework for AI accountability violations should include a tiered structure applied to varying degrees of violation. For each instance of a violation, this should include consideration of prison time for repeat violators and for those who willfully engage in such violations. The tiers would cover civil and criminal penalties for violations attributed to ignorance, violations committed despite reasonable vigilance, willful neglect not corrected within a certain amount of time, and violations occurring under false pretenses and/or committed for personal gain.
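
To show how such a tiered structure might be expressed, here is a minimal Python sketch modeled loosely on HIPAA’s culpability tiers. The tier names and dollar amounts are placeholders for illustration, not proposed statutory figures.

```python
# Minimal sketch of the tiered penalty idea, modeled on HIPAA's culpability
# tiers. Dollar amounts and tier names are placeholders for illustration.
from enum import Enum

class Tier(Enum):
    IGNORANCE = 1            # violator did not and could not reasonably know
    REASONABLE_CAUSE = 2     # known, but not due to willful neglect
    NEGLECT_CORRECTED = 3    # willful neglect, corrected within the window
    NEGLECT_UNCORRECTED = 4  # willful neglect, left uncorrected

BASE_PENALTY = {
    Tier.IGNORANCE: 1_000,
    Tier.REASONABLE_CAUSE: 10_000,
    Tier.NEGLECT_CORRECTED: 50_000,
    Tier.NEGLECT_UNCORRECTED: 250_000,
}

def penalty(tier: Tier, prior_violations: int, for_personal_gain: bool) -> int:
    """Escalate the base fine for repeat offenders; personal gain doubles it
    and, in this proposal, would also trigger criminal referral."""
    amount = BASE_PENALTY[tier] * (1 + prior_violations)
    return amount * 2 if for_personal_gain else amount

print(penalty(Tier.NEGLECT_UNCORRECTED, prior_violations=2, for_personal_gain=True))
```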

Other punitive measures for AI accountability violations could also include:


  • AI Violation Public Notification Rule

Similar to breaches of protected health information (PHI) in healthcare, organizations that commit AI accountability violations affecting a large number of individuals should be required to notify those directly affected and to report to major news outlets (local, state, national, and international).


  • Required Consultation and Access to Audit Logs

Assigned AI professionals and audit agencies should be given access to audit logs, procedures, practices, and other auxiliary information, such as datasheets describing the datasets involved in the AI violation. This will help advise the violator on how to prevent the incident from recurring and help determine the level of punishment in a court of law. (A minimal sketch of tamper-evident audit logging appears after this list.)


  • Required Diversity Training for Each Instance of an Incident

The training should be led by an outside agency or representative pre-approved by AI regulatory and legal agencies. It would include instructing AI violators on the importance of diversity in AI and why avoiding bias at the roots of these systems is required.


  • Donating a Percentage of Penalty Monetary Proceeds to Nonprofits/Schools

Implement funding activities such as donating portions of civil monetary penalty proceeds to nonprofits, research institutions, and K-12 schools that have programs dedicated to diversity and to eradicating bias from AI. The percentage breakdown of the donation and its implementation should be determined by local community groups and regulatory agencies.


  • Public Reward Money

In addition to providing an incentive for compliance, posting a reward announcement is a good way to encourage members of the public to bring verifiable AI violations to the appropriate stakeholders and into public view. Regulatory agencies could construct the reward wording in conjunction with industry professionals, law enforcement, and/or legal counsel.
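
As noted under the audit-log measure above, here is a minimal Python sketch of one way audit logs could be made tamper-evident before they are handed to outside reviewers: hash chaining, where each entry commits to the one before it. The entry fields and example event are hypothetical; a real system would also sign entries and store them append-only.

```python
# Minimal sketch of a hash-chained audit log, so that entries handed to an
# outside auditor can be checked for tampering. Only the chaining idea is
# shown; fields and the example event are hypothetical.
import hashlib, json, time

def append_entry(log: list[dict], event: str, details: dict) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"time": time.time(), "event": event,
            "details": details, "prev": prev_hash}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)

def verify(log: list[dict]) -> bool:
    """Recompute each hash; any edited or deleted entry breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, "prediction", {"model": "face-v2", "label": "man", "p": 0.76})
print(verify(log))  # True unless an entry has been altered or removed
```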


Implementing such measures would require concerted cooperation among government, legal, and business sectors, similar in fashion to the efforts behind the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and the Health Information Technology for Economic and Clinical Health (HITECH) Act, both enforced with incentives and stiff penalties.

Enacting measures that encourage compliance will ensure that AI organizations and their business associates actively and systematically implement safeguards. This can help keep the AI algorithms they release to the public safe, ethical, and appropriate for all members of society.

