“How Racist AI Devices Expose the Limits of Programming Human-like Awareness and Consciousness”
- clgpage
- Feb 5, 2022
- 9 min read
HARMONIC ONE – COMMENTARY | February 2022 Edition

Article Dedication: African American Inventors - Cell Phone and Voice Over IP
The ingenuity of African Americans and others in the African Diaspora in all facets of civilization has been one aspect of world history that has been hidden, under-appreciated and belittled.
The invention of the digital cell phone by African American inventor Jesse Eugene Russell is one such hidden truth about the worldwide excellence of Black people.
Before Russell's invention, mobile phones were mainly installed in cars or other vehicles, because transmitting a signal to a cell tower required more power than a handheld device could supply.
It was Russell's ingenuity that made mobile phones as handy and affordable as they are today. His innovation made it possible for the handsets we use today to transmit signals to and from cell phone towers.
Similarly, an African American woman, Dr. Marian Rogers Croak, invented the VoIP system for making calls over the internet. They are just a few of the Black innovators in the internet and telecommunications industry.
Let this article pay tribute and sing their names, and the names of others, for Black History Month.
Article:
The explosive use of AI in virtually all areas of society has led to the release of racist AI applications in economics, education, healthcare, entertainment, labor, law, politics, religion, sex and war. These devices, which consistently and outrageously insult Black men, women and children, expose several limits on the possibilities and practices of programming machine “agents” with “intelligence” or “human-like awareness”.
Some AI scientists claim they have created machines which can “truly think” by reverse engineering the way the human cortex processes information. But studying the human cortex may overlook limits on human capabilities and human bias. The over-application of AI could thus become an existential threat if no form of standardization ensures “objectivity” as the machine “learns” its environment. This threat has already materialized in several racist AI incidents. These insulting incidents often revealed the application's subjectivity toward its creator, not the objectivity needed for fair deployment in a public setting.
Hence, the potential for discrepancies between perceptions programmed into the machine and qualities of the perceived world raises serious questions about whether AI “fairness” is even possible. A simple example of these discrepancies arises when an autonomous AI machine “views” optical illusions.
We know that optical illusions play tricks on the human eye: the brain uses clues about depth, shading, lighting and position to interpret what you see. The picture below illustrates the dilemma of human vs. machine interpretation.

If a machine were asked to identify which side of the rectangular bar in the center is lighter (and which is darker), it is not immediately clear whether the machine would recognize that the entire bar is actually one uniform shade, or follow the illusory trick of shading, lighting and depth. It is also not clear how to interpret the machine's “answer” if it disagrees with what the human eye observes in the first moments of viewing the picture. If the “truth” is that the center bar is one shade, then why, at some point during your observation, do your eyes tell you differently? This dilemma becomes more serious if an AI is forced to interpret what is “fair” along similar lines of “illusion”.
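A minimal sketch (using NumPy, and a simplified version of this gradient illusion) shows what a machine that compares raw pixel values would report; the image array, dimensions and gray levels here are illustrative assumptions, not taken from the actual picture:

```python
import numpy as np

# Build a simplified version of the illusion: a background that brightens
# left-to-right, with a center bar of a single uniform gray value.
height, width = 100, 300
background = np.tile(np.linspace(50, 200, width), (height, 1))  # gradient
image = background.copy()
image[40:60, :] = 128.0  # the center bar: one constant shade everywhere

# A "machine" comparing raw pixel values finds the two ends of the bar
# identical, even though human perception, biased by the surrounding
# gradient, reports one end as lighter than the other.
left_end = image[50, 10]
right_end = image[50, -10]
print(left_end == right_end)  # prints True: numerically the same shade
```

The machine's pixel-level answer (“the same”) is correct by one standard and wrong by another (what the eye reports), which is exactly the interpretation dilemma described above.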
Furthermore, at what point in the development of an AI device/application would it be objective enough to be deployed to the public without any error during its interactions with people? It makes no sense to create “Artificial Objectivity” by saying “well, the application is fair to 99% of people, so it's now objective and fair to all”, because that 1% does matter, and objectivity will not have been achieved.
Many researchers know that in the world of statistics, there is no such thing as certainty at 100% (you can get asymptotically close, but never equal). Statistics helps us mathematically describe and make inferences about phenomena in the universe which are not perfect. We report our statistical “findings” with some margin of “error”, and scoff at people who claim their research was 100% perfect and correct.
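The “asymptotically close, but never equal” point can be made concrete with the statistical “rule of three”: even a model that makes zero errors in n test trials can only be bounded, never certified, at a 0% error rate. A small sketch (the function name and trial counts are illustrative assumptions):

```python
# "Rule of three": if a model makes zero errors in n independent trials,
# the 95% upper confidence bound on its true error rate is roughly 3/n.
# The exact binomial bound solves (1 - p)^n = 1 - confidence for p.
def error_upper_bound(n_trials: int, confidence: float = 0.95) -> float:
    return 1.0 - (1.0 - confidence) ** (1.0 / n_trials)

for n in (100, 10_000, 1_000_000):
    # The bound shrinks as testing grows, but it is always positive:
    # no finite amount of testing ever proves 100% fairness or accuracy.
    print(n, error_upper_bound(n))
```

The bound approaches zero as trials increase but never reaches it, which is precisely why claiming a system is “fair to all” from finite testing is a statistical overreach.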
But the irony of that scoffing is that “unified machine ethics” appears to require one hundred percent. Leaving anyone out of the “fairness box” and chalking them up as a “margin of error” is not true fairness or objectivity. (If error is allowed, who gets to decide which people are part of the “standard error box”?) Hence it's not immediately clear whether an AI machine can use “quantum qubit strangeness” or imperfect statistical models to “learn fairness” across every single person, across all cultures, levels of consciousness and races on this planet.
Racist AI incidents show that it is already difficult for humans to be consistently objective about ourselves (and others), and even more so about the things that we make. If human beings struggle to develop unified definitions of “intelligence” and “consciousness”, and to PRACTICE objectivity on a consistent basis, it's not clear how one can expect an autonomous machine to do so.
Furthermore, it's not clear whether non-Black AI developers know anything about the history and extraordinary contributions of Africans or those in the African Diaspora, or whether they even care. Hence it's not clear how their “AI creations” will be 100% consistently fair to every single dark-skinned person on this planet.
The discourse over the objectivity of human awareness is yet to be resolved and has shaped debates in science, philosophy and religion for centuries. Choosing a word to fit this type of “objective awareness” is extremely difficult, since most of us are consciously “blind” compared to the ideas of this phenomenon. But a popular term, with its roots in ancient civilizations of Africa, South America, Asia and India, is “objective consciousness”.
Searching history for beings who claimed to have achieved the type of consciousness that could be used to “build 100% fairness” into a machine leads to many names, but one of the most famous is Buddha. We learn via many ancient texts, temple gatherings, sutras and priests about Buddha's “nirvana” enlightenment; but has any AI researcher concisely defined Buddha's consciousness, consistently experienced it (and held on to it for a long period of time), and been able to discretize that experience into an AI algorithm?
Briefly reflecting on human attempts to achieve the ideas of this “nirvana” is like a blind person being told about the features of an object by someone who claims they can see it. The blind person “listens” as the person with “vision” describes the object's features and even encourages the blind person to “touch” the object. Next, the blind person tells other blind people what he or she learned from the person who claims to be able to see.
Passing the description of the object among blind people for several hours (let alone thousands of years, which is the age of many religions) will most often change the original story. In the meantime, the only person who probably knows the truth is the person who can actually see.
Bringing AI Back Down To Earth
Careful thought about the “blind person” analogy shows that for an AI application to be completely fair to all people all the time requires a level of awareness and understanding that has only been described to the human masses by the few beings who claimed to have achieved it. The saying “experience is the best teacher” carries particular weight when one is trying to program something many have yet to understand; this calls into question the wisdom of even attempting to do so.
These intense debates over intelligence, “objective fairness” and consciousness suggest that maybe AI needs to be, as Dr. Timnit Gebru was quoted in one article, “… brought back down to earth”. Despite the advancement of AI applications across various industries, the hype may have exceeded the basic function of AI, which is:
“Tools which introduce automation that operates with minimal human supervision, further increasing the productivity of human labor.”
While humans have used many tools throughout the ages, computers are basically information and data manipulation tools. Since the idea of what constitutes “information” has invaded and transformed almost every branch of science, computers have been used as tools to explore the inner workings of human beings and the universe. This is often achieved by using information theory to bridge mathematics with electrical engineering and computer science (e.g. Boolean algebra). This basic function has not changed much since the 1945 ENIAC, including through the industry-wide transition from vacuum tubes to transistors and integrated circuits.
But in 2022, the instant processing and conversion of vast amounts of data and logic into autonomous movements fascinates the average human observer, who then attributes the results of this “magic” to “intelligence (artificial)”. This “magic” is not much different from the G-code used to run a CNC machine or 3D printer. It echoes Arthur C. Clarke's famous observation: “Any sufficiently advanced technology is indistinguishable from magic”. But without data (or fed the wrong data), these AI creations become heaps of expensive scrap metal, broken machines or a waste of screen space; not boxes of “fairness to all” or “objective consciousness”.
This basic function of AI as a data collection and processing tool has been central to studying the human brain. But because humans are creative, inventive and imaginative, people have applied this basic function to situations across nearly every industry, including situations which, in certain instances, have over-inflated the decision power of AI machines over humans. Expecting a data processing tool to be fair when it is simply replicating human bias through massive data collection and processing therefore needs further pondering.
Since AI is a tool that assists humans with the productivity of human labor, a framework needs to be developed around how AI tools should be regarded and treated. Some views which support thinking of AI as tools include the following:
AI tools are not human
There should be some consideration of changing the view of AI as follows:
FROM “digital beings which we hope will become human and fair to all”
TO “just a data tool to augment or support human work”.
AI machines are tools that should have their proper place in society according to each one's unique function, operational role and purpose in specific situations. They cannot function optimally without relying on attributes like the intuition found in humans and animals (i.e. no “human cortex” taught a female sperm whale all the things she must do to nourish and protect her young in the vast ocean).
The AI Tool Industry Is No Different Than Others – It Has Ups and Downs
AI started as a research topic in the 1950s and has since experienced ups and downs such as the “AI winters”. These were periods when, due to loss of interest, many research organizations, higher learning institutions and companies either lost funding or could not find funding to continue their work. In the last 20+ years, the use of AI across many industries has exploded. But there is debate over whether media hype has created an “AI bubble” that could eventually pop like the “dotcom” bubble of the early 2000s.
Adopt OSHA-like Laws to Regulate AI Devices and Applications
The Occupational Safety and Health Administration was created to ensure the health and safety of people in the workplace. Similarly, AI regulatory, legal and industry experts should develop, maintain and enforce international laws which assure the safety and healthy living conditions of humans working or living around AI machines. This might include training, outreach, education and assistance where needed, such as protecting whistleblowers who report rogue AI applications. As stated in part one of this article, violations should be swiftly addressed according to their degree, with penalties including monetary fines and jail time.
Use Remote ID Technologies to Track AI Machines and Applications
Since AI applications and devices are being developed and deployed at an explosive rate across multiple industries, there should be consideration of assigning ID information and using it to track these devices. Just as cars have license plates, airplanes have ID numbers on their fuselages, and people carry documents to identify themselves, it may be critically important to create laws that enforce monitoring the location, role, function and purpose of AI devices.
One technology which could serve as a model is a drone tracking technology called “Remote ID”. This FAA-initiated technology will track the large number of drones flown today by hobbyists and companies, enhancing the safety and security of flight by allowing the FAA, law enforcement, and federal security agencies to identify drones flying in their jurisdiction. Creating a parallel tracking infrastructure for AI devices could enhance safety and streamline the searching, tracking and reporting of AI violators of the law.
Conclusion
While there are many ways to fight racist AI applications, these incidents provide opportunities to learn from and review the limitations of AI, and to reconsider expectations of its “decision” abilities. These machines apply cognition and reasoning as we model it, not as we can necessarily and universally prove it. This includes applications associated with reasoning, knowledge representation, planning, learning, natural language processing, perception, and the ability to move and manipulate objects. More importantly, they lack the abstract phenomena that enable humans to discover things in themselves that have yet to be discovered.