We all get nervous about the privacy implications of visual surveillance. But digital camera technology can be used for good, not just for evil.
In purely technical terms, the intersection of high-resolution cameras, image processing, and big data makes it possible to build systems that understand and interpret images without human help (sometimes with an assist from high-performance computing), then hand off the results to robotic systems that act on the analysis.
The result is amazing opportunities for groundbreaking digital services. Here are a few examples of cameras improving healthcare, agriculture, and business.
My initial interest in this subject was generated when, four years ago, my spouse had a medical emergency that required a lot of poking and prodding. Among the then-new technologies he experienced—once the danger was over—was capsule endoscopy, “a procedure that uses a tiny wireless camera to take pictures of your digestive tract.” In other words, he swallowed a vitamin-size capsule that contained a tiny camera. For the next 24 hours, he wore a Bluetooth-equipped hard disk recorder around his waist, which captured images taken every 10 seconds; the doctor could look at the imagery in detail and assure us all that the problem was resolved. It was a lot more fun than a colonoscopy. (Then again, nearly anything is.)
But that’s just one example of cameras being developed for health and pharmaceutical purposes. Others include ClaroNav's NaviENT system, which uses optical tracking to guide surgeons during sinus and skull-base endoscopic surgery. The company recently received clearance from the U.S. Food and Drug Administration to market and sell its product, which dynamically determines the position of the tip of any surgical instrument in 3D using an optical triangulation technique.
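ClaroNav hasn't published the details of its algorithm, but the core idea behind optical triangulation is standard: if two calibrated cameras each see the same marker, the marker lies where the two viewing rays (nearly) intersect. A minimal sketch, assuming known camera positions and ray directions (the function name and setup are hypothetical, not ClaroNav's API):

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Locate a point seen along two rays (camera centers p1 and p2,
    direction vectors d1 and d2) as the midpoint of the segment of
    closest approach between the two rays."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b          # zero only if the rays are parallel
    t1 = (b * e - c * d) / denom   # distance along ray 1
    t2 = (a * e - b * d) / denom   # distance along ray 2
    return ((p1 + t1 * d1) + (p2 + t2 * d2)) / 2.0

# Two cameras 1 m apart both sight a marker at (0.5, 0.5, 2.0)
p = triangulate(np.array([0.0, 0.0, 0.0]), np.array([0.5, 0.5, 2.0]),
                np.array([1.0, 0.0, 0.0]), np.array([-0.5, 0.5, 2.0]))
print(p)  # ≈ [0.5, 0.5, 2.0]
```

In a real system, the rays come from tracking reflective markers on the instrument across calibrated camera views at video rate.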
Another example is a medical camera that could one day flag people at risk of stroke or heart attack by providing a better view of potential problem areas. The camera goes inside a blood vessel, showing its surface and any lesions, such as a ruptured plaque, that could cause a stroke. It’s now being tested for a new application: acquiring high-quality images of possible stroke-causing regions of the carotid artery that may not be detected with conventional radiological techniques. A paper in Nature Biomedical Engineering reports proof-of-concept results for this new imaging platform for atherosclerosis.
Many of the cool camera-equipped examples I found were agriculture- and food-related (but then again, it was lunchtime). Here are a few that appealed to me:
In Singapore, Sushi Express worked with Hewlett Packard Enterprise to install cameras that track the dishes on a sushi belt, gauge which are most popular, and advise chefs when to prepare fresh dishes, reducing waste on less popular items.
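The Sushi Express system's internals aren't public, but the downstream logic is easy to picture: once a detector has labeled plates as they pass the camera and as they're picked up, deciding what to restock is simple counting. A toy sketch (the function, labels, and threshold are all hypothetical):

```python
from collections import Counter

def restock_advice(offered, taken, threshold=0.5):
    """Given labels for plates offered on the belt and plates picked up
    by customers, suggest which dishes to prepare fresh: any dish whose
    take-up rate meets the threshold is a restock candidate."""
    offered_n = Counter(offered)
    taken_n = Counter(taken)
    return sorted(
        dish for dish, n in offered_n.items()
        if taken_n[dish] / n >= threshold
    )

# Toy data: labels a vision model might emit as plates pass the camera.
offered = ["salmon", "salmon", "salmon", "tuna", "tuna", "egg", "egg", "egg"]
taken = ["salmon", "salmon", "tuna", "egg"]
print(restock_advice(offered, taken))  # salmon 2/3 and tuna 1/2 clear the bar
```

The real value comes from doing this continuously, so chefs see demand shifting over the course of a service rather than after it.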
Researcher Guo Feng and his colleagues at Shanghai Jiao Tong University developed a mobile fruit-grading robot to harvest and classify strawberries. One camera, mounted on top of the robot frame, captures images of eight to 10 strawberries at a time; another, installed on the robot's end effector, images a few berries at higher resolution for closer "Is it ripe?" analysis, aided by a ring-shaped fluorescent lamp that provides stable lighting for locating the fruit.
Then there’s the vision-guided robot that trims tomato plants. Traditionally, on commercially grown tomatoes, someone has to manually cut older leaves from the lower part of the stems to promote ripening. A consortium of commercial tomato growers in the Netherlands joined forces with automation specialists to create a robot that de-leafs tomato crops grown in greenhouses. The vision system is built around a pair of stereoscopic cameras mounted on a moveable platform, together with a telescopic cutting arm. The custom-built stereo cameras capture a wide field of view from the left and right side of each tomato plant.
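The consortium hasn't detailed its algorithms, but every rectified stereo rig like this one relies on the same pinhole-camera relation: a feature's depth is the focal length times the camera baseline, divided by the disparity (how far the feature shifts between the left and right images). A minimal sketch with made-up numbers:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic rectified-stereo relation: a feature appearing
    disparity_px pixels apart in the left and right images lies at
    depth Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("feature must be visible in both views")
    return focal_px * baseline_m / disparity_px

# Example: 800 px focal length, 10 cm baseline, 40 px disparity
print(depth_from_disparity(800, 0.10, 40))  # 2.0 metres
```

That depth map is what lets the cutting arm know how far to reach before snipping a leaf rather than a stem.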
Another fruit-and-vision research project aims to let consumers use affordable camera technology to tell if a piece of fruit is perfectly ripe or identify what’s rotting in the fridge. A 2015 paper describes a hyperspectral camera that uses both visible and invisible near-infrared light to “see” beneath surfaces and capture unseen details. For instance, the team took hyperspectral images of 10 different fruits, from strawberries to mangoes to avocados, over the course of a week. “The HyperCam images predicted the relative ripeness of the fruits with 94 percent accuracy, compared with only 62 percent for a typical camera,” the university reported.
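The paper doesn't spell out HyperCam's classifier here, but a common stand-in for this kind of analysis (not necessarily the team's actual method) is a normalized ratio of near-infrared to visible reflectance, NDVI-style, since chlorophyll content changes as fruit ripens. A sketch with hypothetical per-pixel band averages:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index: compares near-infrared
    and red reflectance, a common proxy for chlorophyll content."""
    return (nir - red) / (nir + red)

# Hypothetical band averages from hyperspectral images of two fruits:
unripe = ndvi(nir=0.60, red=0.10)  # chlorophyll-rich skin: high NIR, low red
ripe = ndvi(nir=0.45, red=0.30)    # index drops as the fruit ripens
print(round(unripe, 3), round(ripe, 3))  # 0.714 0.2
```

A real ripeness model would use many spectral bands and a trained classifier, but the principle is the same: the invisible bands carry the signal an RGB camera misses.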
Cameras and machine vision are being used to optimize the creation and distribution of products across supply chains, whether it's to detect gear defects on the assembly line or to drive sales and manage processes when products reach retail customers. That quickly gets tied into Internet of Things (IoT) and sensor applications, which is another topic entirely but underlines the fact that a camera is, ultimately, a specialized type of sensor.
Austrian automation solution provider digMAR developed a 3D image processing system for carpet- and textile-cutting machines. The multi-camera machine vision system scans textile materials and then calculates the optimal cutting coordinates. That data guides and controls cutting equipment.
When we think of cameras, however, it's mainly in terms of images readable by the human eye. But image processing can capture all sorts of data, with a huge range of sensor technology and features employed in manufacturing. Ultraviolet illumination, for example, can aid inspection applications. For another, infrared camera-equipped drones can now collect gas samples from inside Italy's Mount Etna, flying right into the volcano's craters to help researchers build scientific understanding and develop evacuation plans in case of an eruption.
I meant to give machine vision examples from a wide range of industries, such as automated inspection systems, transportation (avoiding the too-obvious-to-mention self-driving cars), and retail. But I find I’ve barely scratched the surface.
It’s clear that machine vision will affect entire production processes, encourage automation and robotics, and change how consumers accomplish things. Idealistically, this can increase efficiency and product quality—eventually. Some of the technologies required have plenty of room to improve before this vision can emerge. To our credit, most technologists are considering the consequences as well as the benefits, whether it’s careful attention to privacy issues or examining the ethical decisions in automated vehicles. Here’s hoping that the emphasis can be on the things that make us say, “Oh, cool!” and not “Oh, no!”
This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.