How Vision Intelligence can improve Business Outcomes

From deep reinforcement learning to wavelet-powered deep networks, explore how Cogniphi’s AIVI (Artificial Intelligence Vision) is taking this challenge head-on and transforming businesses with new levels of efficiency.

AIVI is a cutting-edge hybrid system that brings computer vision and artificial intelligence together into one powerful tool. It is propelled by data-driven learning, feedback-based supervised learning and advanced computer vision algorithms.

Harnessing these outcomes, AIVI is able to enhance processes across a range of industries, including:

Healthcare – Next-generation tech-enabled solutions, redefining health systems and hospital operations through AIVI. From prognostic prediction to disease detection to patient experience, reinvent medical technology and healthcare to face the new challenges posed by COVID. Automate pathogen detection, enable vision-based tracking of nursing care for critical patients, monitor crucial assets, and derive new insights into pathogen and patient behaviour.

Retail – Cutting-edge insights and SMART data points around customers’ retail behaviour. Digitise operations, in-store learning and customer perception patterns using technology that can influence margins and keep shoppers coming back for a great experience.

Manufacturing – Technology that enables predictability, improves intelligent design and reduces wastage. In factory operations, AIVI has delivered an immediate 18% improvement in efficiency and up to 23% greater loss reduction compared with traditional MES systems.

Surveillance – Where a sensitive digital eye meets an efficient digital brain. Transform surveillance systems to the next level by garnering insights to predict, prevent and protect valuable assets. Amplify the effectiveness of home security systems, office security systems, theft prevention and more through smart surveillance.

AIVI relies on complex spatial and temporal patterns to detect anomalies. It filters out approved behaviours, and not only provides robust pattern detection but also exposes the capability to build predictive systems from the metadata (colour, feature, contour, texture) of the detected objects and their features.
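As a rough illustration of the idea above, detected objects can be summarised by their metadata features and compared against a baseline of approved observations, flagging anything that deviates strongly. The feature names, values and threshold below are illustrative assumptions, not AIVI's actual API or algorithm:

```python
from statistics import mean, stdev

# Each detected object summarised by hypothetical metadata features:
# (colour intensity, contour complexity, texture score).
approved = [  # baseline observations of "approved" behaviour
    (0.61, 12, 0.33), (0.58, 11, 0.35), (0.63, 13, 0.31),
    (0.60, 12, 0.34), (0.59, 11, 0.32),
]

def zscores(sample, baseline):
    """Per-feature z-scores of a sample against the baseline."""
    cols = list(zip(*baseline))
    return [abs(v - mean(c)) / stdev(c) for v, c in zip(sample, cols)]

def is_anomaly(sample, baseline, threshold=3.0):
    """Flag a detection if any feature deviates strongly from baseline."""
    return any(z > threshold for z in zscores(sample, baseline))

print(is_anomaly((0.60, 12, 0.33), approved))  # → False (within baseline)
print(is_anomaly((0.10, 45, 0.90), approved))  # → True (far outside it)
```

A production system would use richer features and learned models rather than simple z-scores, but the principle is the same: approved behaviour defines the baseline, and deviation from it is the signal.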

Cogniphi’s AI engines, with self-learning and contextual computing capabilities, enable quick prototyping, testing, and product optimisation and development to deliver transformational outcomes that will delight the user. The best part is that solutions deployed can be taught to learn new patterns and anomalies on-the-go to adapt to varying needs.

Join Cogniphi on Facebook at www.facebook.com/Cogniphi for more conversations about Vision AI and how you can use it in your business.

How important is it to Regulate Innovation in AI?

Public reaction to Artificial Intelligence (AI) today ranges from extreme positivity to extreme fear of the unknown: the fear that AI could lead to more social harm than good.

On the one hand is the gung-ho feeling, growing stronger by the day, that programming computers to perform tasks is essential for solving many societal challenges, from treating diseases and creating smart cities to minimising the environmental impact of farming and predicting natural disasters. On the other hand is the concern that the development of high-level machine intelligence would impinge on privacy, threaten jobs, and even that robots could take over the world in the near future.

Explosive Growth

Research firm Gartner expects the global AI economy to increase from about $1.2 trillion last year to about $3.9 trillion by 2022, while McKinsey sees it delivering global economic activity of around $13 trillion by 2030. It is partly this gigantic projected growth that has led to beliefs that AI could trigger an “intelligence explosion” that might take an unsuspecting human race by surprise.

There are already examples, as in the case of AlphaZero (a computer program developed by AI research company DeepMind to master the games of chess, shogi and Go), which show that AI systems can sometimes progress from narrow human-level ability to narrow superhuman ability extremely rapidly.

Perception

While people generally talk about AI with a positive frame of mind, there is no doubt that greater public understanding of AI is needed. Naturally, people tend to be influenced by futuristic movies dominated by powerful forms of robotics, and perceive AI as a domineering technology that creates robotic monsters, lethal weaponry and driverless cars. The many more mundane instances of AI benefiting society tend to be overlooked. Often there is a perceptible lack of confidence that the corporate giants of the AI world will ensure the best interests of the public are not compromised.

Public education

Perhaps the most important issues to be worried about at the moment are more everyday things like internet search engines and fake news. Everybody agrees that the IT industry, research communities and governments must get together to support public education around the issues raised by the technology.

It is critical that the public understand both the positive and negative potential of AI technology, and gain confidence that well-informed, regulated frameworks are in place to govern and promote responsible innovation in AI. This would mean developing public sector policies and laws that promote and regulate AI, focusing on technical and economic implications as well as on trustworthy and human-centred AI systems.

Regulation

One of the most astonishing pieces of news in recent times has been Elon Musk’s call for regulation of AI development. The Tesla CEO was reported to be “clearly not thrilled” to be advocating for government scrutiny that could impact his own industry, but believed the risks of operating entirely without norms and laws are too high. There could not have been a more potent message from the head of a company in the vanguard of the AI dream. The broad takeaway from Musk’s message was that AI has become an area of such strategic importance, and such a key driver of economic development, that its socio-economic, legal and ethical impacts have to be carefully addressed.

Regulation is considered necessary to both encourage AI and manage associated risks. Both the European Union and the World Economic Forum have confirmed their intention to develop global rules for AI and create common ground on policy between nations on the potential of AI and other emerging technologies.

Regulatory compliance can also be incredibly complicated, especially with new technologies being implemented into business models almost daily. Because technology constantly evolves, many regulations quickly become obsolete, which means steps must be taken to ensure that each new iteration of a technology is also covered.

Applications vs Technology

AI will continue to have a profound impact on society. At the opposite end of the spectrum are bias, hacking and AI-enabled terrorism, and the larger issues that can arise from misuse of the technology. Hence, it would make more sense for the applications themselves to be regulated, rather than the technology as such. The requirements for an AI application in healthcare, for example, differ from those in banking, and so do the ethical, legal and economic issues around each.

Rather than drawing up a blanket regulation for the whole technology, policymakers will need to work closely with professional bodies from each industry on what the technology is needed for, how they will make it work, how it may impact their workforce and retraining, and what support they need from the government.

Looking ahead into the 2020s

Smart regulation is what is called for. That would mean providing the framework for further innovation in AI technologies, while at the same time regulating applications and ensuring a future where AI is ethical and useful, in a world where workers displaced by automation have been re-trained for other, more valuable roles.

Leaving the supervised unsupervised: Cogniphi CEO, Rohith Raveendranath shares his experiences

Disclaimer 1 – I am an engineer by profession and I am trained to judge by measurements. I am taught to be process-oriented and focused on repeatability. Again, I am an engineer by profession and I take pride in solving everyday problems.

Disclaimer 2 – Whilst hardcore AI evangelists will raise their eyebrows when I say AI (as most of what the world is hailing as miracle systems are predominantly deep learning and machine learning systems), I will still go with that terminology in the interest of the ‘Oh, I get it’ mortals (like me).

With these disclaimers out of the way, let me start. I can’t hold myself back amidst this discussion on the adoption of AI and the new prefixes that are unfolding for AI (by the hour), many of which are being perceived, marketed and projected as new paradigms. Don’t believe me, or don’t get it (on the prefixes)? Look at the recently trending AI prefixes – ‘Repeatable AI’, ‘Explainable AI’, ‘Un-Biased AI’, ‘Fair AI’, ‘Trustworthy AI’, ‘Ethical AI’ and so on. The debate on AI and DL was already a handful; these new dimensions (oh yes, I know them – if you want to read on, google dimensions in DL :)) are adding more hidden layers in the minds of potential adopters. Now there are even more things to confuse, to scuttle project implementations, and many more open threads being created in the minds of early adopters.

Wait a second, am I saying these are not important questions that need to be answered? The judgement you have reached by now (from reading the first paragraph, about which camp I am in) is a classic example of how years of learning have made you predict a plausible outcome that I am hinting at, though I have not explicitly spelt it out. And what has helped you do that is years of supervised learning.

The systems we are talking about are taught, or fed information, in a supervised manner. Their learnings, inferences and predictions are, no doubt, a function of what they have seen. The challenge arises when we expose such a ‘heavily supervised’ system to ‘un-supervised’ jobs (hence the heading of this article – read un-supervised as the final decision, outcome or action being executed without any manual intervention). By common sense, in the real world we would not do this with a rookie who has merely been trained, until the rookie has been heavily tested and exposed to real-world systems. Rather, there are clear milestones at which the transition from active to passive to nil supervision takes place. Working with machines should not be much different; they should be given logical milestones to improve against.
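The active-to-passive-to-nil supervision transition can be sketched as a simple routing rule: at each deployment stage, a model decision either executes automatically or escalates to a human, depending on its confidence. The stage names and thresholds below are illustrative assumptions, not a description of any real deployment:

```python
# Hypothetical supervision stages: the auto-approval bar drops as the
# system earns trust through its milestones.
STAGES = {
    "active":  {"auto_threshold": 1.01},  # nothing auto-approved; a human decides everything
    "passive": {"auto_threshold": 0.95},  # only high-confidence decisions auto-execute
    "nil":     {"auto_threshold": 0.70},  # most decisions auto-execute
}

def route(prediction_confidence, stage):
    """Decide whether a model decision executes or goes to a human."""
    if prediction_confidence >= STAGES[stage]["auto_threshold"]:
        return "execute"
    return "escalate_to_human"

print(route(0.97, "active"))   # → escalate_to_human
print(route(0.97, "passive"))  # → execute
print(route(0.80, "nil"))      # → execute
```

The same prediction is handled differently at each stage; promotion between stages is the milestone decision, made by people looking at accumulated measurements, not by the model itself.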

More importantly, there are plenty of business cases which do not warrant an un-supervised job at all. For example, I wouldn’t dare call an AI (OK, DL) system that can detect cancer an un-supervised activity, since such a prediction would be overseen by a doctor before any treatment is given. Such a prediction or detection could ease the doctor’s job, and can also help consolidate the learnings of many experts into one single system (think of it as a super-observant technician). The users of such a system are not expected to suspend their common sense and say, ‘But the system said so’.

In the next few articles in this series I would like to share my experience as an engineer of working with AI, implementing systems with AI and reaping benefits from such implementations. Whilst the quest for the ‘AI Holy Grail’ is on, we as an engineering community should also be concerned with aligning the various roles needed to reap success even in its current form (for example, how does a traditional BA differ from a BA working on an AI system, and how can he set aside many of the concerns raised above? How should a traditional QA function versus an AI-system QA? How can a functional test case even be written for AI systems?). The human brain is one of the most complex mechanisms whose workings we still have little clue about, yet we trust it and function with it every day. AI systems, by contrast, are loaded with measurements that can be harnessed to calm the nerves to a great extent.
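On the question of functional test cases for AI systems, one common answer is to test aggregate behaviour rather than exact per-input outputs: a traditional test asserts the result for each input, while an AI-system test asserts a measured metric against an agreed floor. The toy classifier and labelled set below are stand-ins invented for illustration, not any real Cogniphi component:

```python
def model(x):
    """Hypothetical stand-in for a trained classifier."""
    return "defect" if x > 0.5 else "ok"

labelled_set = [
    (0.9, "defect"), (0.8, "defect"), (0.2, "ok"),
    (0.1, "ok"), (0.6, "defect"), (0.4, "defect"),  # last one is a hard case
]

def accuracy(model, samples):
    """Fraction of labelled samples the model gets right."""
    hits = sum(model(x) == y for x, y in samples)
    return hits / len(samples)

# Traditional QA would assert model(x) == y for every single input.
# AI-system QA instead accepts a measured error rate above an agreed floor:
acc = accuracy(model, labelled_set)
assert acc >= 0.8, f"accuracy {acc:.0%} below the agreed floor"
print(f"accuracy: {acc:.0%}")  # → accuracy: 83%
```

The test still fails deterministically when the system degrades, which is exactly the kind of measurement-based reassurance the paragraph above argues for.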

As I said at the start, I am an engineer by profession and I am taught to work with measurements, and in these AI measurements I trust.