AI Creators and Policymakers – A complex but needed way for our future

A wide gulf separates AI creators from policymakers, as recent events have shown. It points to a need for consensus building, trust building, and the resolution of accountability issues around AI technology. AI developers hold the requisite information and understanding, but the same cannot be said of policymakers and regulators. Because AI can impact every sector of society and humankind, accountability and trust building are essential. AI creators and policymakers: it's a complex but needed way for our future.

Sound governance mechanisms will foster a comprehensive understanding of the AI development and deployment cycle. Governance should run concurrently with the development process and draw on multi-stakeholder methodologies and skills. In short, AI developers and policymakers must be able to speak the same language.

Only a handful of policymakers fully understand the AI technology cycle. The problem is further compounded by the fact that technology providers show little to no interest in shaping AI policy, especially concerning ethics in their technological designs. 

The primary ethical considerations are AI bias, whether racial or gender-based, and algorithmic transparency, that is, clarifying the rules and methods AI-powered systems use to make decisions. These ethical issues have already had a negative impact on society and daily life.

Incidents of unethical AI practice, such as inherent biases built into systems, are increasing. Significant players in the sector have owned up to and apologized for such failures. MIT, for example, took offline a dataset used to train AI models after it was found to contain misogynistic and racist labels, and Google has acknowledged errors in YouTube moderation.

Use of artificial intelligence in law enforcement has also been faulted. A forthcoming paper claiming that AI could predict criminality through automated facial recognition was challenged in an open letter signed by experts, academics, and researchers. In another case, Detroit's chief of police admitted that AI-powered facial recognition technology failed in most cases.

Recent happenings at Google have highlighted the need for ethical AI development. As a result, all AI developers and providers should invest in ethics literacy and a stronger commitment to multi-disciplinary research.

The greatest challenge in deploying AI-powered technologies is that technical teams are rarely educated in the complexities of human social systems. Without knowing how to embed ethics in their designs and applications, they risk building products that harm society.

According to experts, understanding and acknowledging the social and cultural context in which AI technologies are deployed requires both time and patience, just as previous innovations gave the general population time to grasp their underlying principles, techniques, and impacts. Policymakers, however, must climb a steep learning curve to keep abreast of the transformations and advancements in AI technologies being deployed across the board.

AI creators are encouraged to identify ethical considerations that touch on their products and to ensure transparency in implementing their solutions. Policymakers and regulators, however, have not been spared the spotlight and need to step up.

The first step is familiarizing themselves with AI and its benefits and risks. Policymakers may lack the answers or expertise required to make good regulatory decisions, but asking pertinent questions helps. Sound knowledge of AI will let policymakers draft sensible regulations that balance legal and ethical limits in AI development. Without it, they risk becoming overbearing or failing to do enough to protect society.

Governments should invest heavily in recruiting technical talent and in relevant training to stay abreast of developments. Reasonable and sensible regulation of AI will follow from familiarity with the technology, its benefits, and its risks, and will help industries and individuals leverage its huge potential within well-defined boundaries.

AI literacy will further enhance policymakers' work and help them reap the benefits of the technology. As policymakers become users of AI, the technology will support their agendas and goals, enhance constructive dialogue with industry stakeholders, and lay the ground for a comprehensive framework of norms and ethics under which innovation can thrive. Indeed, public-private discussion will help build trustworthy AI.

Building this knowledge repertoire across the AI industry serves a dual role: developing smarter regulations and facilitating dialogue so that all stakeholders stand on equal footing, allowing AI innovation to happen within established standards.

The AI industry's focus has been on innovations that address algorithmic bias so that developers can build systems whose algorithms improve rather than worsen decision-making. Increased investment in developing and deploying AI obliges IT companies to identify and evaluate the ethical considerations surrounding their products. By implementing solutions transparently, AI creators kill two birds with one stone: they embed a sound risk-mitigation strategy and ensure that financial gain after deployment does not come at the cost of society's economic and social wellbeing. Again, AI creators and policymakers: a complex but needed way for our future.

Stakeholders in the AI sector are responsible for ensuring ethical literacy among their staff and encouraging dialogue and collaboration with policymakers. That way, AI creators have a say in designing the regulatory and ethical frameworks that guide the creation, deployment, and scaling of AI solutions. AI integration across industry and society will inevitably affect human lives, hence the need for ethical and legal frameworks that ensure effective governance, enhance AI's social opportunities, and minimize the risks associated with the technology.

Blockchain Intellectual Property Protection

Author: Alessandro Civati


Blockchain ID: