
A.I. & Social Networks – Has an inequality process started?

Over the past several years, artificial intelligence and machine learning have radically transformed society. Social networks are lauded for creating connections and bringing people together. However, a study published in Scientific Reports indicates that ranking algorithms can worsen existing inequalities and discriminate against specific groups of people. Sociologists have long acknowledged that inequalities exist in every domain of society, and they equally recognize that the benefits and harms of technology are not evenly distributed. The most pertinent questions are now directed at the developers of new algorithmic technologies.

Artificial intelligence and algorithmic systems are criticized for perpetuating biases, unjust discrimination, and worsening inequality. Today’s dominant forms of AI and machine learning are trained on datasets that reflect human judgment, priorities, and conceptual categories. When a dataset is biased, those inequalities are encoded and reproduced in the algorithms trained on it. A growing body of work on the social issues linked to AI therefore evaluates how these undesirable characteristics creep in and how they can be removed from AI systems.

Sociologists and other experts are concerned that these biases are deeply rooted in pre-existing social inequalities. Data about patterns in society serves as the input on which these systems are trained, and the resulting automated decisions (the output) reflect and perpetuate those inequalities. Further proliferation of algorithmic systems would create unequal consequences in education, employment, government benefits, and criminal justice. The inequalities reproduced and reshaped through algorithmic technologies can also play out on a global scale, in areas such as international labor and the flow of capital through colonial and extractive processes.

Even algorithmic systems built to be objective and free of bias can discriminate along the most familiar human lines, amplifying social differences and inequalities. The tendencies that humans and societies exhibit are routinely reproduced in automated systems, as is evident in today’s dominant AI and machine learning algorithms.

The study sought to investigate how social mechanisms influence the rank distributions produced by two of the most popular algorithms: PageRank, the algorithm on which Google’s search engine is built, and Who-to-Follow, the algorithm Twitter uses to suggest people you may find interesting and want to follow. These ranking algorithms have been shown to increase the popularity of already popular users and may deprive specific groups of people of opportunities.
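To make the first of these concrete, the sketch below is a minimal power-iteration implementation of PageRank in Python with NumPy. It is illustrative only: the damping factor of 0.85 is the conventional default, and the toy graph is an assumption, not data from the study.

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-9, max_iter=200):
    """Minimal PageRank via power iteration.
    adj[i, j] = 1 if node i links to node j; higher rank means more visible.
    Illustrative sketch only, not the study's implementation."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Row-normalize links; nodes with no out-links jump uniformly at random.
    trans = np.where(out_deg[:, None] > 0,
                     adj / np.maximum(out_deg, 1)[:, None],
                     1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new_rank = (1 - damping) / n + damping * trans.T @ rank
        if np.abs(new_rank - rank).sum() < tol:
            return new_rank
        rank = new_rank
    return rank

# Toy graph: nodes 0 and 1 both link to node 2, which dominates the ranking.
adj = np.array([[0, 0, 1],
                [0, 0, 1],
                [0, 1, 0]])
print(pagerank(adj).round(3))
```

Even in this toy example the rich-get-richer dynamic is visible: rank flows toward whoever already attracts links, which is precisely the feedback loop the study examines.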

The researchers sought to understand how these algorithms go wrong depending on their structure and the characteristics of the network. They simulated networks of 2,000 individuals and adjusted the social mechanisms governing relationships in each network. The variations included changing the size of the minority group, how active users connected with other users, and how people generally associated in the network.

The researchers were keen to evaluate whether people associated more with already popular individuals and whether people were more likely to link with individuals similar to themselves. The preference to connect with people similar to oneself is a principle referred to as homophily, which essentially means that birds of a feather flock together.
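These two mechanisms, popularity and homophily, can be combined in a simple growth model. The sketch below is a simplified, hypothetical version of such a simulation: new nodes attach preferentially to popular targets, while a homophily parameter h biases acceptance toward same-group links. The parameter names and defaults are illustrative assumptions, not the study’s exact model.

```python
import random

def homophilic_network(n=2000, minority_frac=0.2, h=0.8, m=2, seed=42):
    """Grow a network via preferential attachment biased by homophily.
    h is the probability of accepting a link to a same-group node
    (h > 0.5 is homophilic, h = 0.5 is group-blind). Illustrative only."""
    rng = random.Random(seed)
    group = [1 if rng.random() < minority_frac else 0 for _ in range(n)]
    edges = []
    targets = list(range(m))              # seed nodes
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:
            cand = rng.choice(targets)    # popular nodes appear more often
            accept = h if group[cand] == group[new] else 1 - h
            if cand not in chosen and rng.random() < accept:
                chosen.add(cand)
        for t in chosen:
            edges.append((new, t))
            targets.extend([new, t])      # reinforces preferential attachment
    return group, edges

group, edges = homophilic_network()
print(f"{len(edges)} edges; minority share: {sum(group) / len(group):.1%}")
```

Sweeping h and minority_frac across such simulated networks, then ranking the nodes, is the kind of experiment that shows how minority visibility shifts with each mechanism.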

The researchers found that homophily, together with the relative size of the minority group, was the principal social mechanism distorting the visibility of minorities in rankings. When members of the majority associate mostly with other members of the majority, minority groups end up underrepresented in the top ranks.

Minorities can overcome under-representation by taking a strategic approach when connecting with popular users. Such strategic connections can help minorities achieve statistical parity in the top rankings. Statistical parity means that if minorities make up 20% of a population, they should also make up roughly 20% of the network, and in particular of its top ranks. The onus is on minorities to create more connections with members of the majority and to become more active, increasing their visibility in the network. The majority, in turn, can diversify their connections toward minority groups to raise that visibility further.
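Statistical parity in this sense is straightforward to check numerically. The helper below is a hypothetical illustration: it compares the minority’s share of the top-k ranked nodes against its share of the whole population, using simulated scores in which minority nodes are mildly penalized.

```python
import random

def top_k_parity(ranks, group, k=100):
    """Minority share of the top-k ranks vs. its share of the population.
    ranks: score per node (higher is better); group: 1 = minority, 0 = majority."""
    order = sorted(range(len(ranks)), key=lambda i: ranks[i], reverse=True)
    top_share = sum(group[i] for i in order[:k]) / k
    pop_share = sum(group) / len(group)
    return top_share, pop_share

rng = random.Random(0)
group = [1 if rng.random() < 0.2 else 0 for _ in range(2000)]
# Hypothetical scores in which minority nodes are mildly penalized.
ranks = [rng.random() - 0.05 * g for g in group]
top_share, pop_share = top_k_parity(ranks, group)
print(f"top-100 minority share: {top_share:.0%} vs population share: {pop_share:.0%}")
```

A top-k share well below the population share is exactly the under-representation the study describes.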

Using realistic social network scenarios, the study makes it evident that ranking and social recommender algorithms on platforms such as Twitter can distort the visibility of minority groups in unexpected ways.

It is important that algorithms and other AI systems work effectively and fairly, since we are becoming increasingly dependent on them. Social inequalities should not be amplified or entrenched any further, and public policy should address the social dynamics of AI technologies. Sociologists are helping create positive visions for AI that all people can work towards, supported by improved governance of algorithms.

Among AI developers, the response to this problem has been to find ways to remove or reduce bias in datasets and algorithmic decisions. However, algorithmic bias is increasingly recognized as a complex, multi-dimensional challenge that cannot be solved through technological fixes alone. The problem requires input from experts in the social sciences, data science, and engineering. AI and algorithmic systems can also be studied sociologically, which raises the question of how sociology and other disciplines can contribute to current debates about these technologies.

The three leading contributions that sociologists, AI developers, and other experts can advance through interdisciplinary collaboration and policy influence are:

  • Critique and the politics of refusal – analysis can help unpack the politics of algorithmic technologies, drawing on existing social theories, skills, and methods. Where necessary, society can refuse algorithmic technologies outright in order to dismantle unjust systems and institutions.
  • Unsettling established systems – new technologies can destabilize entrenched systems and institutions, which otherwise tend to be resistant to change.
  • Improved algorithmic governance – social inequality issues are matters of public interest and are therefore addressed through institutions mandated to safeguard the public good. Governments have a role to play in setting policies and regulations that promote robust algorithmic systems. We can already see governments worldwide showing a strong determination to rein in the tech giants.

Algorithms and AI systems are now widely used and play a role in determining outcomes and distributing goods. At the same time, they are central to the reproduction and perpetuation of bias and inequality. Combining the three measures above will help society respond constructively to the disruption caused by algorithmic systems and the rampant reproduction of inequalities by algorithms.

Blockchain Intellectual Property Protection by LutinX.com

Author: Alessandro Civati

Blockchain ID: https://x88.life/R1LtGlEMgw