Automated Pathogen Detection is set to transform pathology – here is what you should know about it

52-year-old Rakesh Singh walks into a primary health care center in rural Rajasthan with a suspected case of tuberculosis. After hours of waiting in line, his sputum sample is collected and sent to a lab for sputum smear microscopy. Days go by, but Rakesh’s results turn out to be inconclusive, and his diagnosis and line of treatment are subsequently incorrect. Although Rakesh’s case is fictional, the reality of TB in India isn’t. According to the WHO, India recorded about 2.69 million cases of TB in 2018, the highest caseload in the world.

What if there were a way to make TB diagnosis better, faster and error-free? What if there were a way to fine-tune the process of sputum smear microscopy? This is where Artificial Intelligence (AI) and Automated Pathogen Detection hold some answers.

Automated pathogen detection might sound like a futuristic term out of a medical lexicon, but thanks to advances in AI, it is fast becoming a reality in laboratories across the world. In essence, Automated Pathogen Detection combines the power of AI and automation to test samples of human tissue, sputum and the like faster and more accurately by reducing the need for manual labour. Take, for example, Sputum Smear Microscopy (SSM), which is still the primary method for diagnosing pulmonary tuberculosis in developing countries like India. For an SSM, sputum coughed up from a patient’s lungs is placed on a slide and stained to highlight the bacteria, which are then counted by hand. Counting thousands of tiny stained bacilli is extremely tedious, manual and time-consuming.

Automated pathogen detection aided by advanced artificial intelligence offers a revolutionary solution to this. Vision-enabled AI software can analyze microscope output fed from digital cameras as video. The video is converted into a series of images, and the bacterial load is identified and counted from these images. With AI neural networks and a workflow augmented by vision intelligence, the scope for human error is drastically reduced and samples can be tested 24×7 at a much faster rate. Tuberculosis is just one of myriad examples; AI-augmented workflows are also beginning to play a pivotal role in cancer diagnosis, where tissue biopsy samples can be analyzed more thoroughly and consistently, leading to potentially life-saving diagnoses. And in the years to come, AI-aided workflows will find many more applications in diagnostic pathology.
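
To make this workflow concrete, here is a minimal, purely illustrative Python sketch using OpenCV: frames are sampled from a microscope video, thresholded, and blobs of plausible bacillus size are counted. The video file name, sampling rate and size limits are assumptions for illustration only; a production system would rely on a trained neural-network detector rather than simple thresholding.

```python
# Illustrative sketch only: sample frames from a microscope video feed
# and count stained, bacillus-sized blobs in each sampled frame.
import cv2

def count_blobs(frame, min_area=20, max_area=500):
    """Count connected components of plausible bacillus size in one frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding splits the frame into two intensity classes (simplified).
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    areas = stats[1:, cv2.CC_STAT_AREA]          # label 0 is the background
    return int(((areas >= min_area) & (areas <= max_area)).sum())

cap = cv2.VideoCapture("smear_microscope.mp4")   # hypothetical recording
frame_idx, counts = 0, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 30 == 0:                      # sample roughly one frame per second
        counts.append(count_blobs(frame))
    frame_idx += 1
cap.release()
print("Estimated object count per sampled field:", counts)
```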

One of the main reasons technologies like Automated Pathogen Detection are finding a stronger foothold in medicine is that they help tackle the decades-long challenges posed by traditional pathology. Medical professionals have been sounding the alarm for years about the scarcity of pathologists and the problems of physically storing slides in diagnostic pathology. India, for example, collects nearly 40 million sputum samples annually, and the volume is only set to increase year on year. The proliferation of AI-based technology could mean that slides no longer need to be stored physically; their images can instead be digitally archived and even included in reports. An AI-augmented workflow is also largely operator-independent and requires very little human intervention. This could mean that low- and middle-income countries like India, which have a shortage of skilled pathologists, need not lose out on high-quality, accurate diagnosis. AI-augmented workflows empowered by vision intelligence have the potential to address these and a host of other challenges in medicine.

Apart from being a game changer in diagnostic medicine, Automated Pathogen Detection is also the perfect embodiment of the promise that new-age tech, AI and automation hold. An article published by the World Economic Forum envisioned that in the Fifth Industrial Revolution, humans and machines will dance together! This is, of course, metaphorical, but it perfectly encapsulates the essence of technology such as Automated Pathogen Detection, which is not meant to replace pathologists but to support them and help them make rapid, accurate decisions that can save lives.

SLIP, TRIP, FALL – Four-letter problems with a four-letter solution

One of the leading causes of workplace injuries is STF – an abbreviation for three dreaded words: Slip, Trip, and Fall. In Australia, there are more deaths from slips, trips, and falls than from fires. In the USA, more than 2,000 people need emergency medical care after a slip and fall accident every day, and the medical bills can often run to astronomical figures of around USD 30,000 per case. On average, 11 working days are also lost as a result of slip and fall injuries. It is no wonder, then, that insurance claims for incidents involving STF run into billions.

Most STF cases are caused by a lack of active monitoring and shortcomings in safety practices. In fact, negligence is identified as the main cause of STFs, and proving it is the easiest route for an accident victim to claim compensation. Slip and fall accidents are now acknowledged as a public health problem because they are so common and so costly.

Many slip and fall accidents are preventable, and several nations have guidelines for employers on keeping workplaces safe and minimizing the chances of accidents. Businesses and individuals that take the initiative to keep their property safe for customers, guests, and employees can act preemptively and prevent these accidents before they happen.

This is where AIVI (Vision-enabled Artificial Intelligence) can play a leading role. AIVI is a technology platform developed by AI experts Cogniphi; it offers an easy and practical solution that continuously tracks and monitors workplaces and sends out real-time alerts whenever there are shortcomings in safety practices. Be it a poorly lit corner, a slippery surface, a badly maintained walkway or badly stacked goods, AIVI technology can detect these problems and flag them before disaster strikes.

The AIVI Artificial Intelligence software, which harnesses the power of Computer Vision and data-driven learning, works with existing or newly installed camera hardware to detect anomalies in the conditions and practices at retail outlets, factory floors, gas stations, hospitals, nightclubs, or any other workplace. Through its machine learning capability, AIVI filters out approved conditions and keeps updating itself, fine-tuning its pattern-recognition algorithms to become a virtual third eye that warns you of inadequacies in real time. Deployed solutions can also be taught to learn new patterns and anomalies, adapt to varying needs and build predictive systems.
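
As a toy illustration of the kind of rule such a system might apply (this is not AIVI's actual implementation), the Python sketch below splits a camera frame into a grid and flags cells whose average brightness falls below a threshold – the "poorly lit corner" case mentioned above. The image path, grid size and brightness threshold are assumptions.

```python
# Toy illustration (not AIVI itself): flag poorly lit regions in a camera
# frame by checking the mean brightness of each cell in a coarse grid.
import cv2

def dark_regions(frame, rows=4, cols=4, min_brightness=60):
    """Return (row, col) grid cells whose mean brightness is below the threshold."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    flagged = []
    for r in range(rows):
        for c in range(cols):
            cell = gray[r * h // rows:(r + 1) * h // rows,
                        c * w // cols:(c + 1) * w // cols]
            if cell.mean() < min_brightness:
                flagged.append((r, c))
    return flagged

frame = cv2.imread("warehouse_aisle.jpg")        # hypothetical camera snapshot
if frame is None:
    raise FileNotFoundError("warehouse_aisle.jpg not found")
for r, c in dark_regions(frame):
    print(f"ALERT: poorly lit zone at grid cell ({r}, {c}) - check lighting")
```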

Even when a slip, trip, or fall does happen in a situation monitored by AI-enabled video, the instant detection of the fall can be rapidly relayed to the authorities concerned and elicit a quick response instead of delayed medical care. Timely handling of an STF injury means less harm to the person and less damage to the company.

Talk to Cogniphi to get a better feel for how Vision Intelligence can predict and prevent STF accidents and save your business the immense losses caused by negligence.

Industry 5.0: Vision Intelligence in the new-age Smart Factory

Our world today is in the midst of its fifth industrial revolution – an era that is pushing the boundaries of science and technology to harness their full potential for the benefit of mankind. To understand what Industry 5.0 is all about and how it is transforming our lives, we need to look at its predecessor, Industry 4.0. The fourth industrial revolution was all about introducing the basics of automation to the world and applying them heavily in manufacturing. Industry 4.0 essentially brought together robots and other interconnected devices to execute the repetitive, routine tasks that are best done by machines. Like most industrial revolutions, it was a giant leap for human innovation, but it also brought to the fore fears about machines replacing humans, giving rise to negative sentiment that cast robots and technology as the enemy. Industry 5.0 is dispelling such notions and showing us that man and robot are not rivals and can in fact work together as partners.

Industry 5.0 takes the founding pillars of 4.0 – automation and efficiency – and adds a human touch via artificial intelligence and smart machines. If Industry 4.0 was all about automation, Industry 5.0 is about synergy and harmony between humans and machines. It keeps demonstrating that pairing humans and machines to make fuller use of human brainpower and creativity is the way forward. Take, for example, cobots – collaborative robots specially designed to share space with humans. They are among the best examples of Industry 5.0 because they are built to integrate with people; surgical cobots and co-pilot cobots, which assist humans with highly specialized tasks during surgery and flying respectively, illustrate this well.

Another fascinating leap made by Industry 5.0 is vision intelligence. At its core, vision intelligence is a subset of artificial intelligence that works towards making computers and machines visually enabled – quite literally, the process of giving machines the human ability to see. Through vision intelligence, machines can be given the ability to see and process visuals much the way humans do. Computers don’t subjectively react to visuals the way humans do and hence lack decision-making capabilities on their own. However, by programming image-recognition software, cobots and robots, machines can be taught to mimic human perception and thus help us live enhanced lives.

Vision Intelligence and the Smart Factory

A pertinent example of vision intelligence’s uses is its application on the factory floor. Manufacturers today can run smart factory floors thanks to the vast applications of vision intelligence technology. CCTV cameras can be programmed to do much more than capture grainy moving images; they can perform cognitive functions, for example segregating damaged goods from good produce. Picture hundreds of ears of corn moving along a conveyor belt as workers sort the good ones from the bad as fast as humanly possible. Now imagine a vision-enabled machine helping those workers spot the poor-quality ears through AI-enabled vision technology. Aiding quality checks of corn produce is just one of myriad examples of vision intelligence applications in factories.

CCTV infrastructure can be further adapted to build intelligence into factory designs. A surveillance system at a chemical factory, for example, can be taught to gauge the distance between a worker and a vat of dangerous chemicals and sound a real-time warning alarm, reducing the risk of industrial accidents. Similarly, vision AI can be useful at construction sites, where every process can be monitored in real time, reducing the chances of mishaps.
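
For illustration, here is a hypothetical Python sketch of the distance check described above: given person bounding boxes from an upstream detector, it alerts when a worker's estimated ground position comes within a safety margin of a marked hazard zone. The zone coordinates, detections and margin are invented values, not a description of any deployed system.

```python
# Hypothetical sketch: alert when a detected worker gets too close to a
# marked hazard zone (e.g. a vat of chemicals). All numbers are made up
# and are in pixel coordinates of the camera image.
from math import hypot

HAZARD_ZONE = (800, 400, 1000, 600)      # x1, y1, x2, y2 of the vat region
SAFE_DISTANCE_PX = 150                   # calibration-dependent safety margin

def distance_to_zone(point, zone):
    """Shortest distance from a point to an axis-aligned rectangle."""
    x, y = point
    x1, y1, x2, y2 = zone
    dx = max(x1 - x, 0, x - x2)
    dy = max(y1 - y, 0, y - y2)
    return hypot(dx, dy)

# Example detections from a person detector: (worker_id, x1, y1, x2, y2).
detections = [("worker-7", 700, 420, 760, 580), ("worker-3", 100, 100, 160, 260)]

for worker_id, x1, y1, x2, y2 in detections:
    feet = ((x1 + x2) / 2, y2)           # approximate ground-contact point
    d = distance_to_zone(feet, HAZARD_ZONE)
    if d < SAFE_DISTANCE_PX:
        print(f"ALERT: {worker_id} is only {d:.0f}px from the hazard zone")
```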

The human decision-making process is steeped in context and analysis. Our brains interpret visuals, contextualize the situation and make a prediction or decision based on a number of variables. Up until now, machines only had the capacity to perform repetitive pre-programmed tasks because they lacked the ability to see and process visuals. However, with vision intelligence, machines can now observe human patterns and make predictive decisions by learning from the big data they collect, thereby becoming almost-apprentices to workers in factories.

In a factory setup, vision intelligence is thus a game-changing development that can be used to streamline complex processes and aid human beings to perform better.

Ushering in a new era of Healthcare with Vision AI

The global pandemic has forced us to rethink our existing healthcare system and created a need to harness advances in technology. Vision-enabled Artificial Intelligence (AI), which combines Computer Vision and Machine Learning (ML), has the proven technology and the potential to improve patient care and hospital efficiency.

With increasing disease complexity, rising expenses and shortcomings in infrastructure, the healthcare sector needs a catalyst for development and growth. By deploying Vision AI, with little addition to existing infrastructure, hospitals and clinics can bring about a system of continuous quality improvement and make healthcare more accessible and inclusive.

The areas of healthcare leveraging cutting-edge advances like Vision AI faster than any other are research, diagnostics, health monitoring, treatment, patient outcomes, Covid protocol monitoring, and facilities management. Here’s a brief look at how.

Research, Diagnostics, Health Monitoring and Treatment

Vision-enabled AI, by uncovering patterns and correlations in events and data, paves the way for research discoveries that can be life-saving, and also helps deliver error-free, speedy diagnosis that leads to precise, enhanced treatment.

By identifying critical patterns and signals that the human mind might miss, ML gives the doctor an extended arm for fine-tuning their interpretation of the available medical data. Further, advanced video analytics – through facial analysis and subtle clues about a patient’s behaviour – can enhance the physician’s own expertise, providing an accurate understanding of what a person is actually experiencing and ensuring that nothing goes unnoticed.

Elevated Patient Satisfaction

AI-driven innovations hold great potential for connecting better with patients by delivering more personalized care and streamlined services. By tracking nursing care for needy patients, patient mobility, the tendency to wander from the bed zone, discomfort, injury-prone situations and unusual behavior, Vision AI is already playing a critical remote-monitoring role in the vital areas of patient safety and patient satisfaction. On the advanced technology front, the day is not far off when vision-based patterns and insights into patient distress (read from facial expressions) will help detect instances of shock or cardiac events so that critical medical attention can reach the patient in time.

Adherence to Safety protocols

The pandemic is driving changes in hospital safety, and this is where Vision AI can be implemented effectively straight away. Compliance monitoring in health centres can now be automated and managed remotely through practical applications that recognize the importance of touch-free, contactless in-patient care.

Vision AI, for instance, helps track glove, gown, and mask utilization, and analyse hand-sanitizer use and hand hygiene. These applications can continuously check patients, hospital staff, vendors and visitors against contamination protocols to ensure compliance throughout critical areas. They can also detect patient flow and crowding in waiting rooms and corridors and send real-time alerts so that compliance protocols are not violated.
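
A simplified Python sketch of how such compliance checks might be expressed downstream of a detector is shown below. The detection format, zone names and occupancy limits are assumptions for illustration, not a real hospital configuration.

```python
# Hypothetical sketch: rule checks applied to the output of an upstream
# vision detector. Each detection records a person, the monitored zone,
# and whether PPE (here, a mask) was detected on that person.
from collections import Counter

MAX_OCCUPANCY = {"waiting_room": 10, "corridor_b": 6}   # assumed zone limits

detections = [
    {"person": "p1", "zone": "waiting_room", "mask": True},
    {"person": "p2", "zone": "waiting_room", "mask": False},
    {"person": "p3", "zone": "corridor_b", "mask": True},
]

# 1. PPE compliance: alert on anyone detected without a mask.
for det in detections:
    if not det["mask"]:
        print(f"ALERT: {det['person']} in {det['zone']} without a mask")

# 2. Crowding: alert when a zone exceeds its assumed occupancy limit.
occupancy = Counter(det["zone"] for det in detections)
for zone, count in occupancy.items():
    limit = MAX_OCCUPANCY.get(zone)
    if limit is not None and count > limit:
        print(f"ALERT: {zone} has {count} people (limit {limit})")
```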

Quicker Turnaround

At a time when hospital occupancy is abnormally high, making room for more patients has become a top priority. In large facilities, operational efficiency jumps manifold by automating room assignment, tracking room turnover step by step, and detecting the true causes of delays. Vision Intelligence can easily be integrated with existing facilities management systems to remotely monitor patient discharge, room cleaning and readiness, reducing turnaround time to a minimum and optimizing patient flow.

The global COVID-19 pandemic has opened our eyes to the need for better support for our hospitals and the essential frontline workers who risk their lives to keep us healthy and safe. As AI increasingly becomes a part of our daily lives, it is time we harness it to build a smarter, more connected healthcare system that benefits all of us, every day.

A working technology for Loss Prevention in Retail

Inventory loss, also known as shrink or shrinkage, is a big problem in the retail industry. Usually caused by shoplifting, employee theft, and neglect, it accounts for 2 to 2.5% of sales, which means a lot of potential revenue disappearing into thin air. Add the coronavirus pandemic to that and health safety joins the problem mix too.

Which is why more retailers are developing and implementing strategies for their stores and earmarking budgets for in-store security measures to track and deter inventory losses, improve performance and support safety.

Digital technologies are constantly projected as the answer – as future-proof options for retailers. One of the most widely discussed technologies is artificial intelligence (AI), and one of the forms of AI most readily applicable to the retail environment is vision.

How does Vision Intelligence work?
Artificial intelligence in vision is an emerging technology that enables retailers to harness the power of video to automate the identification of threats and the alerting on them in real time. It attempts to enable computers to “see” and understand in much the same way as the human eye and mind. Computer algorithms use deep learning models to process visual content received from cameras and to identify and classify objects. They further analyse distinctions such as shapes, colors, borders, spacing, and other patterns to build a profile, so that the software can use this learned data to find other images that match that profile.
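
To illustrate the "process visual content and classify objects" step, here is a small Python sketch that runs a single image through an off-the-shelf ImageNet classifier (assuming PyTorch with torchvision 0.13 or later). The image path is a placeholder, and a real retail deployment would use a model trained on store-specific classes rather than generic ImageNet categories.

```python
# Illustrative only: classify one camera frame with a pretrained model to
# show the "process visual content -> classify objects" step.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.IMAGENET1K_V1
model = models.resnet18(weights=weights)
model.eval()

preprocess = weights.transforms()                           # resize, crop, normalize
image = Image.open("shelf_camera_frame.jpg").convert("RGB") # placeholder path
batch = preprocess(image).unsqueeze(0)                      # add a batch dimension

with torch.no_grad():
    probabilities = model(batch).softmax(dim=1)[0]

top5 = probabilities.topk(5)
labels = weights.meta["categories"]
for prob, idx in zip(top5.values.tolist(), top5.indices.tolist()):
    print(f"{labels[idx]}: {prob:.2%}")
```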

Solutions for Shrinkage already exist
Vision Intelligence is already used to provide traffic and behavior analytics through real-time, accurate visitor counts and classification, so retailers can understand customer traffic: a customer’s path through the store, where they linger, and how long they spend there. Its deep-learning features also provide insights into behaviors and demographics, which can help in optimising marketing, sales, and rewards programs.

Facial recognition is another form of the technology that has been tested and proven in retail. It is particularly useful in helping retailers detect shoplifters and raise an alert when known offenders are in the parking zone or about to enter the store.

Advanced solutions have also been implemented that detect, in real time, potential billing losses caused by BoB (unbilled items left at the bottom of the basket or trolley), “sweethearting” or “buddy billing” (neglecting to scan all of a friend or family member’s items), and non-billing at self-checkout counters. The software can also be taught to identify the telltale patterns of habitual shoplifters, like loitering in parking lots, and to recognize actions such as putting objects into a pocket or handbag.
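
Conceptually, the billing-loss check reduces to reconciling what the camera recognised in the basket with what was actually scanned at the till. The Python sketch below is a hypothetical, simplified illustration of that reconciliation step; the item codes and lane number are invented.

```python
# Hypothetical sketch: flag items the vision system saw in the basket but
# which never appeared on the receipt (bottom-of-basket or unscanned items).
from collections import Counter

def unbilled_items(seen_in_basket, scanned_at_till):
    """Return items detected by the camera but missing from the receipt."""
    missing = Counter(seen_in_basket) - Counter(scanned_at_till)
    return list(missing.elements())

seen = ["water-24pack", "detergent-5l", "chocolate-bar", "chocolate-bar"]
scanned = ["detergent-5l", "chocolate-bar"]

missing = unbilled_items(seen, scanned)
if missing:
    print("ALERT: possible billing loss at lane 4:", missing)
```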

Pandemic Challenge
Existing computer vision technology can easily be adapted to address challenges caused by the Covid pandemic, such as temperature screening, mask compliance, and social distancing. Thermal imaging, originally intended to detect intense heat as an early indication of fire, can be used to screen for elevated body temperature in individuals entering a facility. Mask detection, once used to flag a person as a robbery threat, can be adapted to check for face masks for health compliance. Facial recognition that helps determine unique customer counts can also help retailers stay within social distancing guidelines.

Investing small

Vision AI has the great advantage of being a flexible technology. Data with a visual context already exists; it is up to you to decide what to do with it. Investing in solutions on a small scale to begin with makes absolute sense, and it won’t prevent you from expanding their use seamlessly in the future – in other words, it is future-proof. You can adopt a particular solution and integrate it with your retail loss prevention programme in any way you see fit.

How Vision Intelligence can improve Business Outcomes

From deep reinforcement learning to wavelet-powered deep networks, explore how Cogniphi’s AIVI (Artificial Intelligence Vision) is taking business challenges head-on and transforming operations with new levels of efficiency.

AIVI is a cutting-edge hybrid system that brings computer vision and artificial intelligence together into one powerful tool. It is propelled by data-driven learning, feedback-based supervised learning and advanced computer vision algorithms.

Harnessing these capabilities, AIVI is able to enhance processes across a range of industries, such as these:

Healthcare – Next-generation tech-enabled solutions that redefine health systems and hospital operations through AIVI. From prognostic prediction to disease detection to patient experience, reinvent medical technology and healthcare to face the new challenges posed by COVID. Automate pathogen detection, enable vision-based tracking of nursing care for critical patients, monitor crucial assets, and derive new insights into pathogen and patient behavior.

Retail – Cutting-edge insights and smart data points about customers’ retail behaviour. Digitise operations, in-store learning and customer perception patterns using technology that can protect margins and keep bringing shoppers back for a great experience.

Manufacturing – Technology that enables predictability, improves intelligent design and reduces wastage. In factory operations, AIVI has shown an immediate 18% improvement in efficiency and up to 23% greater loss reduction compared with traditional MES systems.

Surveillance – Where a sensitive digital eye meets an efficient digital brain. Take surveillance systems to the next level by garnering insights to predict, prevent and protect valuable assets. Amplify the effectiveness of home security systems, office security systems, theft prevention and more through smart surveillance.

AIVI relies on complex spatial and temporal patterns to detect anomalies. It filters out approved behaviours and not only provides robust pattern detection but also exposes the capability to build predictive systems from the metadata (colour, contour, texture and other features) of the detected objects.
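
As a toy illustration of filtering against approved patterns (not AIVI's internal method), the Python sketch below builds a baseline colour signature from a few approved detections and flags a new detection whose colour histogram deviates too far from that baseline. The file names and similarity threshold are assumptions.

```python
# Toy illustration: compare the colour "metadata" of a new detection
# against a baseline built from approved examples and flag large deviations.
import cv2
import numpy as np

def colour_signature(image_path):
    """Hue-saturation histogram used as a simple colour feature."""
    img = cv2.imread(image_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

# Baseline: average signature of known-good (approved) detections.
approved = [colour_signature(p) for p in ["ok_1.jpg", "ok_2.jpg", "ok_3.jpg"]]
baseline = np.mean(approved, axis=0).astype("float32")

candidate = colour_signature("new_detection.jpg").astype("float32")
similarity = cv2.compareHist(baseline, candidate, cv2.HISTCMP_CORREL)
if similarity < 0.8:                               # assumed threshold
    print(f"Anomaly flagged: colour pattern similarity {similarity:.2f}")
```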

Cogniphi’s AI engines, with self-learning and contextual computing capabilities, enable quick prototyping, testing, and product optimisation and development to deliver transformational outcomes that will delight the user. The best part is that solutions deployed can be taught to learn new patterns and anomalies on-the-go to adapt to varying needs.

Join Cogniphi on Facebook at www.facebook.com/Cogniphi for more conversations about Vision AI and how you can use it in your business.

How important is it to Regulate Innovation in AI?

Public reaction to Artificial Intelligence (AI) today probably ranges from extreme positivity to extreme fear of the unknown – the fear that AI would lead to more social harm than good.

On the one hand is the gung-ho feeling, growing stronger by the day, that programming computers to perform tasks is essential for solving many societal challenges, from treating diseases and creating smart cities to minimising the environmental impact of farming and predicting natural disasters. On the other hand is the concern that the development of high-level machine intelligence would impinge on privacy and threaten jobs – and even the fear that robots could take over the world in the near future.

Explosive Growth

Research firm Gartner expects the global AI economy to increase from about $1.2 trillion last year to about $3.9 trillion by 2022, while McKinsey sees it delivering global economic activity of around $13 trillion by 2030. It is partly this gigantic projected growth that has led to beliefs that AI could trigger an “intelligence explosion” that might take an unsuspecting human race by surprise.

There are already examples, as in the case of AlphaZero (a computer program developed by AI research company DeepMind to master chess, shogi and Go), which show that AI systems can sometimes progress from narrow human-level ability to narrow superhuman ability extremely rapidly.

Perception

While people generally talk about AI with a positive bent of mind, there is also no doubt that greater public understanding of AI is needed. Naturally, people tend to be influenced by futuristic movies dominated by powerful forms of robotics, and perceive AI as a domineering technology that creates robotic monsters, lethal weaponry and driverless cars. The numerous more mundane instances of AI benefiting society tend to be overlooked. Often there is a perceptible lack of confidence that the corporate giants of the AI world will ensure the best interests of the public are not compromised.

Public education

Perhaps the most important issues to be worried about at the moment are more everyday things like internet search engines and fake news. Everybody agrees that the IT industry, research communities and governments must get together to support public education around the issues raised by the technology.

It is critical that the public understand both the positive and negative potential of AI technology, and also gain confidence that well-informed regulatory frameworks are in place to govern and promote responsible innovation in AI. This would mean developing public-sector policies and laws to promote and regulate AI, focusing on the technical and economic implications as well as on trustworthy, human-centered AI systems.

Regulation

One of the most astonishing pieces of news in recent times has been Elon Musk’s call for regulation of AI development. The Tesla CEO was reportedly “clearly not thrilled” to be advocating government scrutiny that could impact his own industry, but believed the risks of going entirely without norms and laws were too high. There could not have been a more potent message from the head of a company at the vanguard of the AI dream. The broad takeaway from Musk’s message was that AI has become an area of such strategic importance, and such a key driver of economic development, that its socio-economic, legal and ethical impacts have to be carefully addressed.

Regulation is considered necessary to both encourage AI and manage associated risks. Both the European Union and the World Economic Forum have confirmed their intention to develop global rules for AI and create common ground on policy between nations on the potential of AI and other emerging technologies.

Regulatory compliance can also be incredibly complicated, especially with new technologies being built into business models almost daily. Because the technology evolves constantly, many regulations quickly become obsolete, which means steps must be taken to ensure that future iterations of the technology are also covered.

Applications vs Technology

AI will continue to have a profound impact on society. At the opposite end of the spectrum are the risks of bias, hacking and AI-enabled terrorism, and the larger issues that can arise from misuse of the technology. Hence, it would make more sense to regulate the applications themselves rather than the technology as such. The requirements for AI applications in healthcare, for example, are different from those in banking, and so are the ethical, legal and economic issues around them.

Policymakers will need to work closely with professional bodies from each industry – on what the technology is needed for, how they will make it work, how it may impact their workforce and retraining, and what support they need from the government – rather than drawing up blanket regulation for the whole technology.

Looking ahead into the 2020s

Smart regulation is what is called for. That would mean providing the framework for further innovation in AI technologies while regulating applications and ensuring a future in which AI is ethical and useful, and in which workers displaced by automation have been retrained for other, more important roles.

Leaving the supervised unsupervised: Cogniphi CEO, Rohith Raveendranath shares his experiences

Disclaimer 1 – I am an engineer by profession and I am trained to judge by measurements. I am taught to be process-oriented and to focus on repeatability. Again, I am an engineer by profession and I take pride in solving everyday problems.

Disclaimer 2 – While the hardcore AI evangelists will raise their eyebrows when I say AI (as most of what the world talks about as miracle systems is predominantly deep learning and machine learning), I will still go with that terminology in the interest of the ‘Oh, I get it’ mortals (like me).

With these disclaimers, let me start. I can’t hold myself back amidst this discussion on the adoption of AI and the new prefixes unfolding for AI (by the hour), many of which are perceived, marketed and projected as new paradigms. Don’t believe me, or don’t get it (on the prefixes)? Look at the recently trending AI prefixes – ‘Repeatable AI’, ‘Explainable AI’, ‘Unbiased AI’, ‘Fair AI’, ‘Trustworthy AI’, ‘Ethical AI’ and so on. As if the debate on AI and DL wasn’t already a handful, these new dimensions (oh yes, I know them; if you want to read on, google dimensions in DL :)) are adding more hidden layers in the minds of potential adopters – more things to confuse, to scuttle project implementations, and to open yet more threads in the minds of early adopters.

Wait a second – am I saying these are not important questions that need to be answered? The judgement you have reached by now (from the first paragraph, about which camp I am in) is a classic example of how years of learning let you predict a plausible outcome that I am only hinting at, even though I have not explicitly spelt it out. And what has helped you do that is years of supervised learning.

The systems we are talking about are taught, or fed information, in a supervised manner. Their learnings, inferences and predictions are no doubt a function of what they have seen. The challenge arises when we expose such ‘heavily supervised’ systems to ‘unsupervised’ jobs (hence the title of this article – read ‘unsupervised’ as the final decision, outcome or action being executed without any manual intervention). By common sense, in the real world we would not do this with a freshly trained rookie until the rookie has been heavily tested and exposed to real-world systems. Rather, there are clear milestones at which the transition from active to passive to no supervision takes place. Working with machines should not be much different: they too should be given logical milestones to improve.

More importantly, there are plenty of business cases that do not warrant an unsupervised job at all. For example, I wouldn’t dare call an AI (OK, DL) system that can detect cancer an unsupervised activity, since such a prediction would be reviewed by a doctor before any treatment is given. The prediction or detection merely eases the doctor’s job, and it can also help consolidate learnings from many experts into one single system (think of it as a super-observant technician). The users of such a system are not expected to set aside their common sense and say, ‘But the system said so.’

In the next few articles in this series, I would like to share my experience as an engineer working with AI, implementing AI systems and reaping the benefits of such implementations. While the quest for the ‘AI Holy Grail’ is on, we as an engineering community should also be concerned with aligning the various roles needed to reap success even in AI’s current form (for example, how a traditional BA differs from a BA working on an AI system, and how they can set aside many of the concerns raised above; how a traditional QA should function versus an AI-system QA; or even how a functional test case can be written for AI systems). One of the largest mechanisms whose workings we still have little clue about is the human brain, yet we trust it and function with it every day. AI systems, on the other hand, are loaded with measurements that can be harnessed and can help calm the nerves to a great extent.

As I said at the start, I am an engineer by profession and I am taught to work with measurements – and in these AI measurements I trust.