
Humanity & A.I. – Artificial Intelligence Benefits & Risks

Technological advances reshape the people who use them; technology is never a neutral tool for achieving human ends. Both sophisticated and rudimentary innovations alter how people behave and work as they use them to control their environment. Artificial intelligence is a new and powerful tool, and it is fast altering humanity. Beyond the familiar fears that the mention of AI provokes, the technology is slowly atrophying our native capacity to orient ourselves in different situations and is gradually reshaping the human experience. Let’s explore how AI is changing what it means to be human: the ability to make choices and the moral weight of our judgments.

There are two broad types of AI systems. The first is rule-based AI, which borrows heavily from expert-system development and encodes the knowledge of human experts; these systems resolve complex problems by reasoning over explicit bodies of knowledge. The second is the machine learning approach, in which the system derives its own rules from patterns identified in data sets. Machine learning systems evolve and adapt continuously as training data streams in, and they rely heavily on statistical models. In general, machine learning models require far more data than rule-based models do.
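To make the contrast concrete, here is a minimal, hypothetical sketch (not from the article) of the same screening task handled both ways: a hand-coded expert rule versus a model that infers its own decision boundary from historical data. The feature names, thresholds, and figures are invented, and the example assumes scikit-learn is available.

```python
# Minimal sketch: the same loan-screening task approached two ways.
# All feature names, thresholds, and figures are hypothetical.

from sklearn.linear_model import LogisticRegression

# 1) Rule-based system: the "knowledge" is hand-coded by a human expert.
def rule_based_screen(income, existing_debt):
    """Approve if income is high enough relative to debt (an expert rule)."""
    return income > 30_000 and existing_debt / income < 0.4

# 2) Machine-learning system: the rule is inferred from historical data.
# Each row is [income, existing_debt]; labels are past repayment outcomes.
past_applicants = [[25_000, 5_000], [60_000, 10_000],
                   [40_000, 30_000], [80_000, 15_000]]
repaid = [0, 1, 0, 1]

model = LogisticRegression().fit(past_applicants, repaid)

applicant = [[45_000, 12_000]]
print("rule-based:", rule_based_screen(45_000, 12_000))
print("learned   :", bool(model.predict(applicant)[0]))
```

The rule-based screen never changes unless a person rewrites it; the learned model changes whenever the historical data it is trained on changes, which is exactly why the quality of that data matters so much in what follows.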

Is AI Taking Over Our Lives?

Artificial intelligence is finding widespread application across different sectors. AI is used for simple tasks such as predicting which movies or television shows an individual will want to watch based on past preferences. It is used to decide who qualifies for a loan, based on past performance and other factors that predict the likelihood of repayment. AI also underpins technologies that detect fraudulent commercial transactions, inform hiring and firing decisions, identify malignant tumors, assess the risk of recidivism in policing work, and perform facial identification of criminal suspects. If the algorithms behind loan approvals, hiring, or facial recognition are trained on biased or incomplete data, the resulting models will perpetuate existing prejudices and inequalities. Researchers argue that rigorous modeling and careful cleaning of data can reduce, and perhaps eliminate, algorithmic bias; in such cases AI can make fairer, less biased predictions than humans do. Algorithmic bias is largely a technical issue that can, in theory, be solved. The elephant in the room, however, is how AI changes the abilities that define human beings.
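As a rough illustration of the kind of data check those researchers describe, the sketch below inspects whether historical approval labels are skewed across a demographic group before any model is trained. The groups, records, and rates are all hypothetical, and this is only one small step in a much larger auditing process.

```python
# Minimal sketch of one data-auditing step alluded to above: checking
# whether training labels are skewed across a (hypothetical) group
# before a model is ever fit. The records below are invented toy data.

from collections import Counter

# (group, historical_approval) pairs
records = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
           ("B", 0), ("B", 0), ("B", 1), ("B", 0)]

approvals = Counter()
totals = Counter()
for group, approved in records:
    totals[group] += 1
    approvals[group] += approved

for group in sorted(totals):
    rate = approvals[group] / totals[group]
    print(f"group {group}: historical approval rate {rate:.0%}")

# A large gap between groups signals that a model trained on these labels
# is likely to reproduce the historical disparity rather than correct it.
```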

Are We Losing the Ability to Choose?

Aristotle pointed out that the ability to make practical judgments depends on habit and practice: we become good at judging by judging regularly. With the emergence of AI and machine learning, machines increasingly substitute for that judgment in everyday settings, and this substitution affects how people learn to judge at all. A bank manager, for example, used to decide regularly whom to hire for different positions, which loans to approve, and a host of other administrative questions. Where algorithms have replaced that human judgment, the people in managerial seats no longer have to develop practical judgment of their own.

Machine learning and AI have been used to build recommendation engines that have become prevalent intermediaries in the consumption of culture. These engines constrain choice and significantly reduce serendipity by presenting consumers with algorithmically curated options of what to watch, read, or stream and which websites to visit next. Human taste is slowly being replaced by machine taste. One advantage is that machines can scan a far broader range of options than any person has the time or energy to review. But the curation is optimized around what people have preferred in the past, and the great fear is that people’s choices will be constrained by their own history in new and unanticipated ways. We are already seeing a generalization of the echo chamber that people experience on social media.
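A bare-bones, hypothetical sketch of such a recommender is shown below: it suggests whatever the most similar past user liked, which is precisely how history comes to constrain future choice. The user names, titles, and ratings are invented, and real systems are vastly more sophisticated.

```python
# Toy recommender driven purely by past preferences (all data invented).
# Each user's past ratings, keyed by title (1 = liked, 0 = disliked).
ratings = {
    "alice": {"Film A": 1, "Film B": 1, "Film C": 0},
    "bob":   {"Film A": 1, "Film B": 1, "Film D": 1},
    "carol": {"Film C": 1, "Film D": 1},
}

def similarity(u, v):
    """Count of titles both users liked (a crude overlap measure)."""
    return sum(1 for t in ratings[u]
               if ratings[u][t] == 1 and ratings[v].get(t) == 1)

def recommend(user):
    """Suggest titles liked by the most similar other user, not yet seen."""
    others = [u for u in ratings if u != user]
    nearest = max(others, key=lambda other: similarity(user, other))
    seen = set(ratings[user])
    return [t for t, liked in ratings[nearest].items()
            if liked == 1 and t not in seen]

print(recommend("alice"))  # -> ['Film D'], inherited from the most similar user
```

Even in this toy version, nothing outside the overlap of past preferences can ever be recommended, which is the echo-chamber dynamic described above.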

The widespread emergence of powerful predictive technologies is already disrupting primary political institutions. These technologies strain the idea on which human rights rest: that human beings are majestic, unpredictable, self-governing agents whose freedoms must be guaranteed by the state. Algorithms first created with good intentions, such as regulating speech on online platforms, have been used to censor everything from religious content to expressions of sexual diversity. AI-based systems built to monitor illegal activity have been used to track and target human rights defenders. In the medical and security fields, algorithms with noble aims have discriminated against Black people, whether in detecting cancers or in assessing the flight risk of people accused of crimes. The question is whether political institutions can continue to protect human rights in the same way when predictive technologies reshape decision-making and humanity itself. Self-regulation has done little to help; instead, it has delayed the development and implementation of the laws needed to regulate AI’s uses and protect human rights.

A Predictable Life

As machine learning algorithms continue to train and improve on extensive data sets, we can expect large parts of daily life to become predictable. Predictions will keep getting better, and common experiences will become more efficient and more pleasant. Algorithms will soon have a better idea of which movie or show you want to watch next and which candidate is best suited to a specific position.

In the not-too-distant future, AI and machine learning may well make better decisions, free of many of the biases that humans typically carry. Yet humanity faces the prospect of losing something vital. Unpredictability is a significant part of how people understand themselves and of what they like about themselves. The fear is that as an AI-mediated world becomes increasingly predictable, human beings will lose a significant part of what defines them and become quite unlike previous generations. Companies and governments need to commit to designing AI tools, technologies, and services that respect human rights, and by extension human choice, by default.

Blockchain Intellectual Property Protection by LutinX.com

Author: Alessandro Civati

Blockchain ID: https://x88.life/Hk2Xtqax4n