
New technologies and racism


Some facts survive time and space because their essential vectors remain immutable. As long as humans think in terms of the visual and genetic differences between them, all the facts of life on earth will be tinted by it. New information technologies are not immune to this phenomenon; they are even becoming the essential vehicle of a racism that is now more sophisticated, a sophistication that trivializes racism by camouflaging it, making it more digestible under high-tech tones. The pamphleteer Gaston Méry, writing for La Libre Parole, Edouard Drumont's newspaper, was the first person known to have used the word "racist," in 1894, according to the Provençal writer Charles Maurras. However, the first author who seems to have used the word in its modern sense is Leon Trotsky, who in 1930, in his History of the Russian Revolution, applied it to a group of traditionalist Slavs who defended their culture and their national way of life.

Racism, based on the postulate that races exist within the human species, holds that certain categories of people are inherently superior to others. Supremacist ideas are the foundation of racism, because claiming to be different from others is generally a way of affirming difference through superlative speculation, with views and biases far removed from modesty. The machines and algorithms written by a certain category of scientists can appear racist, because they are defined through the prism of individuals who see the world according to the education they have received: a series of biased assertions that shape the ideological profile of the people who build programs now central to the functioning of the world, programs that greatly affect the lives of laymen. In an article published in April 2017 by the journal Science, Joanna Bryson, Aylin Caliskan and Arvind Narayanan, researchers from Princeton, New Jersey, and Bath, UK, demonstrated how machine-learning technology works. Machine learning is a subset of artificial intelligence in the field of computing that uses statistical techniques to give computers the ability to "learn" from data without being explicitly programmed. The software chosen for the experiment, GloVe, developed at Stanford University (California), was built on this system, and the scientists observing its behavior concluded that GloVe reproduced human biases with cynical acumen. GloVe calculates the associations between words; to correlate words with each other, the software relies on countless examples of data from which it detects the most logical associations.
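The association measurement the researchers used can be sketched in a few lines. This is a minimal illustration, not the actual GloVe code: the word vectors below are invented toy values standing in for the real embeddings trained on web text, and the pleasant/unpleasant poles stand in for the full word lists the study used.

```python
import math

# Toy word vectors standing in for GloVe embeddings (invented values,
# not the real high-dimensional vectors trained on web text).
vectors = {
    "flower":     [0.9, 0.8, 0.1],
    "insect":     [0.1, 0.2, 0.9],
    "pleasant":   [0.8, 0.9, 0.2],
    "unpleasant": [0.2, 0.1, 0.8],
}

def cosine(a, b):
    """Cosine similarity, the standard association measure between embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def association(word):
    """Positive score: the word sits closer to the 'pleasant' pole;
    negative score: closer to the 'unpleasant' pole."""
    return (cosine(vectors[word], vectors["pleasant"])
            - cosine(vectors[word], vectors["unpleasant"]))

print(association("flower"))  # positive: flowers pattern with pleasant terms
print(association("insect"))  # negative: insects pattern with unpleasant terms
```

Because the vectors are learned purely from co-occurrence in human-written text, whatever stereotypes that text carries show up as exactly this kind of score difference.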

Deep learning (also known as deep structured learning or hierarchical learning), a more targeted method observed in several software programs, is part of a larger family of machine-learning methods based on learning representations of data, as opposed to task-specific algorithms. The learning can be supervised, semi-supervised or unsupervised. Using deep learning, a program can, for example, recognize the content of an image or understand spoken language, tackling complex challenges on which the artificial-intelligence research community had been tinkering for years with little success. For a program to learn to recognize a car, for example, it is "fed" tens of thousands of images of cars, labeled as such, a "training" that can take hours or even days. Once trained, it can recognize cars in new images. Beyond voice recognition with Siri, Cortana and Google Now, deep learning is primarily used to recognize the content of images. Google Maps uses it to decipher text in landscapes, such as street numbers. Facebook uses it to detect images that violate its terms of use, and to recognize and tag users in published photos. The result is that the technology faithfully reproduces the coloring of human profiling. Words in the lexical field of flowers are associated with terms related to happiness and pleasure (freedom, love, peace, joy, paradise, etc.), while words relating to insects sit close to negative terms (death, hatred, ugliness, illness, pain, etc.). Racist and sexist stereotypes are reproduced in the same way: statistically, the characteristics most often attributed to black Americans belong to a more negative lexical field than those attributed to whites.
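The "fed labeled examples, then generalize" loop described above can be sketched with a deliberately tiny stand-in for a neural network: a nearest-centroid classifier. Everything here is hypothetical, the two-dimensional features and labels are invented for illustration; a real deep-learning system learns millions of weights from raw pixels, but the principle is the same, and so is the consequence: the model can only reflect the labels it was fed.

```python
# Minimal sketch of supervised learning: train on (features, label)
# pairs, then classify new examples by the nearest learned centroid.

def train(examples):
    """examples: list of (feature_vector, label). Returns per-label centroids."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is closest to the new example."""
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: sq_dist(centroids[label]))

# Hypothetical 2-D features (say, "wheel-ness" and "leg-ness").
training_data = [
    ([0.9, 0.1], "car"), ([0.8, 0.2], "car"),
    ([0.1, 0.9], "dog"), ([0.2, 0.8], "dog"),
]
model = train(training_data)
print(predict(model, [0.85, 0.15]))  # car
```

If the training pairs carry a biased labeling, the trained model reproduces that bias mechanically; nothing in the loop questions the labels.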

The scandal of the Google search engine in its US version illustrates the point. Google does not, or would not, take responsibility for the plethora of information about black women distilled on the North American web. Yet typing the words "black women" into the search engine, more than 60 percent of the results suggested an automatic association between the terms "black woman" and "whore," since the majority of results showed porn actresses in skimpy outfits, while typing "white women" produced on the screen a galaxy of movie actresses and supermodels celebrating the purity and success of white beauty. The reality behind this is that search engines are designed to give results faithful not only to what is on the internet but to the ideology of their designers on a given issue. If the algorithm was written with data that portrays black women as sex objects, or if most of the data about black women on the internet is tied to sexist terms, it is normal for search results to be tinted by this characteristic. At this level it is not only the designer of the software who is to blame, but society itself, which builds retrograde stereotypes about certain groups of individuals, stereotypes that are inevitably transcribed into the technology. In the same vein, personalized search engines give results that conform to the personality or the field of work of the person using them: a journalist typing "women" into a personalized search engine will see a considerable number of results about women working in the media trades. Facial recognition technology, which addresses the physical characteristics of individuals, is also cited in the spread of racism in technology: it is less effective at recognizing the faces of people of African type.
Facial recognition software, usually designed by white engineers, has a hard time recognizing black people, and it has become clear that the same kind of software, when designed in Asia, is more successful at recognizing Asian faces. It is therefore undeniable that technology obeys its environment and the philosophy of life of those who design it. A ProPublica survey found that justice systems in the United States used AI to predict the likelihood of re-offending, and that the software rated blacks as more likely to become repeat offenders. AI is also used to determine to which prison an inmate should go or, as The Atlantic has revealed, what access rights he or she might have. AI is likewise used to determine eligibility for credit and other financial products, often in a discriminatory way: a program may offer a higher interest rate to a black or Hispanic applicant, and a lower one to an Asian or white applicant.

In fact, technology is never neutral. The sexist interpretation of images is only the tip of the iceberg. Since machine learning and AI work through the collection, filtering, learning and analysis of existing data, they will replicate existing structural biases unless they are explicitly designed to ignore them. Man creates machines not only to meet his needs but also in his own image, and the resurgence of racism in technological tools simply proves that technology remains the prerogative of a certain white and racist elite that controls most internet traffic, as well as the production of advanced tools such as those using artificial intelligence. In 2016, Tay, an AI from Microsoft supposed to play a teenager on Twitter, was disconnected after a few hours of exchanges on the web. It had been given the freedom to learn from its exchanges with humans and to formulate its own vocabulary. Tay, conceived by white American engineers in a racist country, began making racist and negationist statements before being hastily suspended by Microsoft. It was a showcase of the quality of life of a Western society undermined by racism and prejudice. Today, if out of chronic narcissism some seem to wallow in pride at their technological prowess, it is no less urgent to denounce the pernicious modus operandi that hides behind technological progress, which, besides strengthening the principle of a society of exclusion through the ever-increasing exploitation of the natural resources of countries impoverished by a system of predation, promotes racism through an elitist industry that has made racial profiling a norm, forcing the world into an antagonistic order prone to chaos. Prejudice and its corollaries, sexism, racism and xenophobia, have always been vectors of war. New technologies, beyond their snobbery, contribute to the frustration of the have-nots and to the egocentric formatting of an important fringe of the world's population.
It is a situation that sooner or later results in clashes between classes, genders and races, the divisions that humans have imposed on themselves, ostensibly to understand their environment in a logical and fair way, but increasingly also to control it by the most perverse methods. Such control can be accomplished only by taking the freedom of others, whom one defines according to one's own aspirations and ego, emboldened by the pride conferred by a technological evolution that some have decided to confiscate out of egoism and racism.

Read more at www.flashmag.net

Hubert Marlin

Journalist




© 2018  Flashmag The Vanguard Webzine is a Creation of Medianet LLC - All rights reserved