Artificial Intelligence now touches almost every walk of human life. In many areas, what was once thought fictional is today commonplace, and companies and governments routinely deploy AI.

Widespread use of AI, which is essentially machine intelligence replacing or aiding human intelligence, will naturally create new risks. Unsurprisingly, there has been much debate about regulating its use in several activities, including law enforcement, where it is perceived as a risk to privacy and fundamental rights.

The European Union (EU) proposes to prohibit the use of facial recognition technologies by law enforcement for surveillance. Live facial recognition will be banned in public spaces, unless the “situations involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack.”

Other applications that may manipulate people into harming themselves or others will be banned outright. A few months ago, a news report described a chatbot built on GPT-3 (Generative Pre-trained Transformer 3, a language model that uses deep learning to produce human-like text) advising a mock patient to kill himself after he reported suicidal tendencies.

Heavy fines will apply even to those dabbling in AI-generated videos that look remarkably real, unless the videos are clearly labelled as computer-generated.

The EU has thus become the first body to outline draft rules for regulating AI, and before long many others will follow suit. India currently has no laws in force relating to AI or ML, as the Government’s present intent is to promote AI and its applications. But even as existing policy encourages rapid development of AI for economic growth and social good, policy makers will surely come to weigh the limitations and risks of data-driven decisions and the societal and ethical concerns in AI deployment.

The Human Element

The ultimate aim of AI research, as with any technological advance, is to improve lives. However, fortunately or unfortunately, AI will never be a substitute for human philosophy and intellect. Machines are unlikely ever to gain an understanding of humanity, or of our innate emotions and motives. The human touch will always be missing: empathy, love or any other emotion. Instilling AI with human-compatible values will be a major challenge.

It is widely expected that, within a decade, automation will replace a variety of current jobs. We may also assume that this new industrial revolution will engender a new workforce able to navigate and take control of this data-dominated world. Nevertheless, socio-economic disruptions are bound to follow.

Steve Shwartz, author of the book “Evil Robots, Killer Computers, and Other Myths: The Truth About AI and the Future of Humanity”, says that the notion of AI taking jobs is a myth. “Today’s AI systems are only capable of learning functions that relate a set of inputs to a set of outputs,” he says. “Rather than replace jobs, AI is replacing tasks — especially repetitive, data-oriented analyses are candidates for automation by AI systems.”
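Shwartz’s point, that today’s AI learns a mapping from inputs to outputs rather than anything resembling general understanding, can be sketched with a toy supervised-learning example. The data and fitting routine below are purely illustrative (not from the article or Shwartz’s book): a model is shown example (input, output) pairs and recovers the rule that relates them.

```python
# Toy illustration of "learning a function that relates a set of inputs
# to a set of outputs": fit a straight line y = a*x + b to example pairs
# using closed-form least squares, the simplest case of supervised learning.

def fit_line(xs, ys):
    """Return slope a and intercept b minimising squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Training examples generated by a rule the model never sees: y = 2x + 1
inputs = [1.0, 2.0, 3.0, 4.0]
outputs = [3.0, 5.0, 7.0, 9.0]

a, b = fit_line(inputs, outputs)

def predict(x):
    """Apply the learned input-to-output mapping to a new input."""
    return a * x + b
```

The model “learns” only the statistical relationship in its training pairs; ask it anything outside that narrow mapping and it has nothing to say, which is exactly why such systems automate tasks rather than whole jobs.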

AI will be beneficial only if it is developed with sustainable economic development and human security in mind, rather than centred on perfectionism and maximum productivity. How far AI should be regulated to favour ethics and human security over institutional efficiency is a vital question at this juncture.

The debate rages on!