Computer vision is a cutting-edge technology that’s being used more and more in the business world each year.
Here, we’re going to take a look at everything you need to know about it, if you’re interested in learning more about what it could do for you.
So, what is computer vision?
Computer vision is the ability of computers to gain high-level understanding from digital images or videos.
Essentially, it attempts to do the same for computers that our eyes do for us. Needless to say, it’s a remarkably complex task.
Attempts at achieving effective computer vision have been taking place since at least the 1970s. For example, traditional scanners are a primitive form of computer vision.
In recent years, however, increases in processing power have seen computer vision technology reach a level where it's now capable of saving businesses millions of dollars.
Which industries can benefit from computer vision, and how?
Automotive

One of the main pioneers of computer vision, the automotive sector is ahead of the curve, with companies like Tesla having used driver-assistance features as far back as 2014.
Fully self-driving cars are being tested now, and several prototypes have been completed and tried out on public roads. There are still some issues that need ironing out before fully self-driving cars are released commercially, but it's likely that by 2020 at the latest, computer vision will be used every day on our roads.
Retail

Just as it has done with other technologies, Amazon is leading the way in the retail sector.
On January 22nd, 2018, it opened the Amazon Go store, a partially automated shop with no checkouts or cashiers.
How is this possible? A combination of deep learning, sensors and computer vision means that the store itself is able to see which items the customers leave with, and to charge their Amazon account accordingly.
It’s a truly remarkable jump forward, and given Amazon’s resources, you can bet that more Amazon Go stores will open in future. It’s also very likely that other retailers will follow Amazon’s lead sooner rather than later.
Financial services

Computer vision is still at the trial stage within financial services, which is understandable given the scrutiny the industry has to place on security.
However, the potential for positive disruption is huge, and a couple of banks have already started to use computer vision as part of their document check services.
For instance, Spanish banking group BBVA gave customers the chance to open a new bank account quickly by uploading their ID and a selfie. The computer vision tech then analyzed the shots to confirm identity.
There’s little doubt that more banks will begin to use the tech in this way soon, simply because it will save a remarkable amount of time and money. Once they’re satisfied that the technology’s security is at a sufficient level, expect an explosion in use.
Marketing and sales
Marketing is an industry that’s always relied substantially on data. Needless to say, the amount of information computer vision can generate far outstrips pure online analytics.
For instance, websites have long conducted usability 'tests' – recording the faces of people using the site to measure how happy they are. Computer vision could automate this process, making it a lot faster.
In a crossover with the retail industry, meanwhile, some startups have started using computer vision to anonymously analyze customer behavior to look at factors like:
- Where shoppers go within a store and the direction in which they move around
- Which areas of the shop engage them most, and which don’t
- How long they stay in store in general
And other similar factors. This area of marketing analytics alone could completely revolutionize the flagging brick-and-mortar retail sector.
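The in-store analysis described above boils down to aggregating tracked positions into a grid and seeing which areas draw the most traffic. Here's a minimal sketch of that idea in Python; the position data and the `footfall_heatmap` helper are hypothetical, purely for illustration.

```python
from collections import Counter

def footfall_heatmap(positions, cell_size=1.0):
    """Aggregate (x, y) shopper positions into per-grid-cell counts."""
    counts = Counter()
    for x, y in positions:
        cell = (int(x // cell_size), int(y // cell_size))
        counts[cell] += 1
    return counts

# Hypothetical tracked positions (metres from the store entrance).
positions = [(0.2, 0.3), (0.8, 0.4), (3.1, 2.2), (3.5, 2.9), (3.6, 2.4)]
heatmap = footfall_heatmap(positions)

# The cell with the most visits is the most engaging part of the shop.
busiest_cell, visits = max(heatmap.items(), key=lambda kv: kv[1])
```

In a real deployment the positions would come from a tracking model running over camera footage, but the aggregation step looks much the same.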
Healthcare

Digital technology has already improved the world of healthcare substantially through the use of apps. However, computer vision has the potential to improve things even further.
One company, Gauss Surgical, is working on a real-time blood monitor, capable of countering the issue of inaccurate blood-loss measurement during surgery and in cases of severe injury. Unnecessary blood transfusions cost upwards of ten billion dollars each year, so the industry savings could be huge.
Microsoft is also working on a project called InnerEye, designed to analyze 3D radiology images – when complete, it could speed up the process by around forty-fold, which could be a genuine life-saver.
What are the main challenges inherent in computer vision?
Lighting

Lighting is a primary issue, as anyone who's taken a digital photo in poor light will tell you!
Remember that computer image sensors can't adapt to light in the same way that our eyes can, and computer vision relies solely on images – we can use smells and sounds to help complete the picture!
One counter to poor lighting is traditional flash technology, like you'd get in any camera, but other options include infra-red lenses or laser technology.
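There are also software-side counters: a dark image can be brightened before analysis. A common approach is gamma correction, which lifts dark mid-tones without blowing out highlights. A minimal sketch on a row of grayscale pixel values (the sample values are illustrative):

```python
def gamma_correct(pixels, gamma=0.5):
    """Brighten grayscale values (0-255) with gamma correction.
    gamma < 1 lifts dark mid-tones without clipping highlights."""
    return [round(255 * (p / 255) ** gamma) for p in pixels]

# A row of pixels from an underexposed photo, and the brightened result.
dark_row = [10, 40, 90, 160]
brightened = gamma_correct(dark_row)  # [50, 101, 151, 202]
```

Real pipelines apply this (or adaptive histogram equalization) across the whole frame, but the per-pixel arithmetic is exactly this.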
Shape

Many vision algorithms use a basic shape as their starting outline.
So, for instance, Facebook's ability to tag your friends in your photos (yes, it can now do this!) depends heavily on first being able to spot where the faces are in a photo. Then the algorithm uses other data to fill in the picture.
If it can’t get a grasp on the initial shape, it may struggle.
So, let’s say that a computer vision lens is shown a photo of a deflated soccer ball: without the circular shape to start as its base, it may struggle to work out what it’s looking at.
The same principle applies to any object that's out of shape: a car wreck, for instance, or even an oddly shaped building!
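One way to make the deflated-ball example concrete is a circularity score: for a closed outline, 4πA/P² is close to 1 for a circle and drops sharply as the shape gets squashed, which is why a shape-first algorithm loses its anchor. A rough sketch, assuming the outline comes as a list of (x, y) points (the `ellipse` helper just generates test outlines):

```python
import math

def circularity(points):
    """4*pi*area / perimeter**2 for a closed polygon:
    roughly 1.0 for a circle, much lower for squashed shapes."""
    area, perim = 0.0, 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1            # shoelace formula (doubled)
        perim += math.hypot(x2 - x1, y2 - y1)
    return 4 * math.pi * (abs(area) / 2) / perim ** 2

def ellipse(a, b, n=100):
    """Sample n outline points of an ellipse with semi-axes a, b."""
    return [(a * math.cos(2 * math.pi * i / n),
             b * math.sin(2 * math.pi * i / n)) for i in range(n)]

round_ball = circularity(ellipse(1.0, 1.0))  # near 1.0: clearly a ball
flat_ball = circularity(ellipse(1.0, 0.2))   # well below: shape cue gone
```

A detector keyed to high-circularity outlines would accept the first and reject the second, even though both are the same ball.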
Background

The background has a huge impact on how easy an object is to recognize.
In an ideal world, of course, the computer would always have a blank background: in reality, that’s unlikely.
One example of a potential issue is an object photographed against a background of exactly the same color – or even against another example of the same object.
Edge detection technology is, of course, becoming more and more effective, but this is an area that's not quite been mastered.
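Why same-color backgrounds defeat edge detection is easy to see in miniature: edges are just large jumps in neighbouring pixel values, so if the object matches the background there is no jump to find. A toy sketch on single-row grayscale "images" (the values are made up for illustration):

```python
def horizontal_edges(image):
    """Per-pixel horizontal gradient: big values mark vertical edges."""
    return [[abs(row[x + 1] - row[x]) for x in range(len(row) - 1)]
            for row in image]

# A bright object (200) on a dark background (20): the edge stands out.
contrast = [[20, 20, 200, 200]]
# The same object on a background of the same shade: nothing to detect.
camouflage = [[200, 200, 200, 200]]

strong = max(max(r) for r in horizontal_edges(contrast))    # 180
weak = max(max(r) for r in horizontal_edges(camouflage))    # 0
```

Production edge detectors (Sobel, Canny and friends) are far more sophisticated, but they ultimately rely on the same intensity differences, which is why a blank, contrasting background is the ideal case.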
Objects being partially covered up
Again, using Facebook as an example, imagine you’ve taken a photo of a few friends in a busy bar.
Usually, this photo will show your friends in full, but it will also catch other people half turned away, facing the back of the room, obscured by someone's arm – that kind of thing.
As a rule, Facebook will often struggle to identify these as people at all, let alone work out who they are. (Though it's getting better at it.)
This sums up a broader challenge with computer vision as a whole: identifying partially visible objects is still very much a work in progress.
Distance

Computer vision relies heavily on pixel detail to produce excellent results.
As anyone who’s ever taken a photo of an object from far away will tell you, you’re always going to struggle to get the same pixel detail.
The natural result of this is that computer vision will struggle to process images as effectively if they’re further away. This won’t necessarily be an issue for many uses, but in something like self-driving cars, for instance, it presents an obvious concern.
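The loss of detail with distance follows directly from the geometry of a camera: under the standard pinhole model, an object's apparent width in pixels shrinks in proportion to 1/distance. A back-of-the-envelope sketch (the focal length in pixels is a made-up but plausible value):

```python
def pixels_on_object(object_width_m, distance_m, focal_px=1000):
    """Pinhole-camera estimate of how many pixels wide an object
    appears: apparent width scales as 1/distance."""
    return object_width_m * focal_px / distance_m

near = pixels_on_object(0.5, 5.0)    # 100.0 px across: plenty of detail
far = pixels_on_object(0.5, 50.0)    # 10.0 px across: barely a blob
```

Ten times the distance means a tenth of the pixels across the object (and a hundredth of the pixel area), which is exactly why a self-driving car needs high-resolution sensors to read distant road signs.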
More and more industries are cottoning on to the potential benefits of using computer vision, and you can expect to see the tech used a lot more in the next couple of years.
If you’re interested in finding out more about how computer vision could assist your business, get in touch with Iconic Solutions today. We’d love to help.