Public reaction to Artificial Intelligence (AI) today ranges from extreme positivity to extreme fear of the unknown: the fear that AI could lead to more social harm than good.
On the one hand is the gung-ho conviction, growing stronger by the day, that AI is essential for solving many societal challenges, from treating diseases and creating smart cities to minimising the environmental impact of farming and predicting natural disasters. On the other hand is the concern that high-level machine intelligence would impinge on privacy, threaten jobs, and perhaps even enable robots to take over the world in the near future.
Research firm Gartner expects the global AI economy to increase from about $1.2 trillion last year to about $3.9 trillion by 2022, while McKinsey sees it delivering global economic activity of around $13 trillion by 2030. It is partly this gigantic projected growth that has led to the belief that AI could trigger an "intelligence explosion" that might take an unsuspecting human race by surprise.
There are already examples, such as AlphaZero (a computer program developed by AI research company DeepMind to master the game of chess), which show that AI systems can sometimes progress from narrow human-level ability to narrow superhuman ability extremely rapidly.
While people generally talk about AI in a positive frame of mind, there is no doubt that greater public understanding of it is needed. People are naturally influenced by futuristic movies dominated by powerful robots, and so perceive AI as a domineering technology that creates robotic monsters, lethal weaponry and driverless cars. The many more mundane instances of AI benefiting society tend to be overlooked. There is also a perceptible lack of confidence that the corporate giants of the AI world will ensure the best interests of the public are not compromised.
Perhaps the most pressing issues at the moment are everyday ones such as internet search engines and fake news. There is broad agreement that the IT industry, research communities and governments must come together to support public education on the issues raised by the technology.
It is critical that the public understand both the positive and negative potential of AI technology, and also have confidence that well-informed regulatory frameworks are in place to govern it and promote responsible innovation. This means developing public-sector policies and laws that both promote and regulate AI, focusing on its technical and economic implications as well as on trustworthy, human-centred AI systems.
One of the most astonishing developments in recent times has been Elon Musk’s call for regulation of AI development. The Tesla CEO was reported to be “clearly not thrilled” to be advocating government scrutiny that could affect his own industry, but believed the risks of proceeding entirely without norms and laws were too high. There could not have been a more potent message from the head of a company in the vanguard of the AI dream. The broad takeaway was that AI has become an area of such strategic importance, and such a key driver of economic development, that its socio-economic, legal and ethical impacts have to be carefully addressed.
Regulation is considered necessary both to encourage AI and to manage its associated risks. Both the European Union and the World Economic Forum have confirmed their intention to develop global rules for AI and to build common ground between nations on the potential of AI and other emerging technologies.
Regulatory compliance can also be incredibly complicated, especially with new technologies being folded into business models almost daily. Because the technology evolves constantly, many regulations quickly become obsolete, which means rules must be drafted so that future iterations of a technology remain covered.
Applications vs Technology
AI will continue to have a profound impact on society. At the opposite end of the spectrum lie the risks: algorithmic bias, hacking, AI-enabled terrorism, and the larger harms that can arise from misuse of the technology. It would therefore make more sense to regulate the applications themselves rather than the technology as such. The requirements for an AI application in healthcare, for example, differ from those in banking, and so do the ethical, legal and economic issues around each.
Rather than drawing up blanket regulation for the whole technology, policymakers will need to work closely with professional bodies from each industry on what the technology is needed for, how it will be made to work, how it may affect the workforce and retraining, and what support is needed from the government.
Looking ahead into the 2020s
Smart regulation is what is called for: a framework that enables further innovation in AI technologies while regulating their applications, ensuring a future where AI is ethical and useful, and where workers displaced by automation have been retrained for other, more valuable roles.