Photos are changing the way we communicate. Popular social apps like Snapchat, Facebook, and Instagram are all focused on using photos as a primary way to communicate. Recently, social ‘stories’—a collection of visual media recounting various parts of people’s lives—were predicted to be the most used form of communication. Photos provide an information-rich medium and can bridge language barriers, and with nearly everyone having access to a camera on their mobile phone, communicating with pictures has become ubiquitous.
Now, companies are looking at how to leverage digital images and video to automate customer service. With artificial intelligence and trained models, computers can take over activities previously done manually by humans. This ability of computers to acquire, process, analyze, and understand images is called computer vision. Let’s look at a few ways computer vision can be applied to customer service.
Smart City
A city is a large area to manage, and city employees cannot adequately cover such an extensive area for safety and maintenance purposes. As a result, municipalities rely heavily on citizens to report the issues they see. Empowering citizens to report problems increases awareness, ensures that critical items receive priority before they fail outright, and helps save lives and money.
Leveraging computer vision, the AI engine can recognize, classify, and prioritize items from information and photos that citizens send in. For example, computer vision can identify and determine the size and severity of potholes in the roads. With geolocation tagged on the photo, the AI can apply higher fix priority to large potholes and those that are on busy streets.
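To make the idea concrete, here is a minimal Python sketch of how a city system might score a reported pothole for repair priority once the vision model has estimated its severity and the photo’s geotag has been matched to a street type. The labels, traffic tiers, and scoring scheme are hypothetical, not taken from any particular municipal system.

```python
# Hypothetical priority scoring for citizen-reported potholes.
# Inputs: a severity label from the vision model and a traffic tier
# looked up from the photo's geolocation. All values are illustrative.

SEVERITY_SCORE = {"minor": 1, "moderate": 2, "severe": 3}
TRAFFIC_SCORE = {"residential": 1, "collector": 2, "arterial": 3}

def repair_priority(severity: str, traffic_tier: str) -> int:
    """Return a 1-9 priority; higher means fix sooner."""
    return SEVERITY_SCORE[severity] * TRAFFIC_SCORE[traffic_tier]

# A large pothole on a busy street outranks a small one on a side street.
assert repair_priority("severe", "arterial") > repair_priority("minor", "residential")
print(repair_priority("severe", "arterial"))  # 9
```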
Other examples include classifying graffiti, downed trees blocking a road, and storm damage. After a severe storm, citizens can report damage with pictures so the city can gauge the cleanup effort required and prioritize what needs attention first.
Computer vision makes it easy for citizens to report a problem. With a few simple steps, the citizen can open a smart city app and take a photo. Citizens don’t have to find the number to call, wait on the phone, describe the issue, determine the street address, etc. With this information, the AI now has the issue type, location, and contact information—everything necessary to evaluate and prioritize a resolution. This frees up time for agents to focus on scheduling and dispatching a repair.
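As an illustration of how little the citizen has to supply, the sketch below shows a hypothetical report record assembled from a single photo; the field names are invented for the example and stand in for whatever a real smart city app would send.

```python
# Hypothetical shape of a smart-city report built from one photo:
# the vision model supplies the issue type, the phone supplies the
# location and contact details, so an agent only has to schedule a repair.
from dataclasses import dataclass

@dataclass
class CitizenReport:
    issue_type: str      # e.g. "pothole", predicted by the vision model
    confidence: float    # model confidence for the predicted label
    latitude: float      # from the photo's geotag
    longitude: float
    reporter_phone: str  # pulled from the app profile, not typed by the citizen

report = CitizenReport("pothole", 0.93, 37.7749, -122.4194, "+1-555-0100")
print(f"New {report.issue_type} report at ({report.latitude}, {report.longitude})")
```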
Smart Manufacturing
There is a wide variety of applications for computer vision in manufacturing, but one of the most pervasive is industrial 3D vision for inspection and quality assurance.
Pointing a camera at a production line can help detect deformities and anomalies in the product, its components, and even the manufacturing process itself. This task has become more difficult in some industries as robots have become prevalent on the production line and printed circuit boards have grown smaller and smaller. Quality assurance was previously a manual function, often performed by inspecting a random sample, so the process could not catch every defect. That may not be a big problem if a garment is deformed, but could you imagine receiving a laptop or TV that doesn’t work?
In addition to catching faulty products, computer vision can protect the production line itself, which is very costly when it malfunctions. Computer vision can support predictive maintenance by identifying potential problems in the production line before they occur. For example, it might detect that a robot arm is not moving at the desired speed or at the correct angle and trigger preventative care. Predictive maintenance helps predict when a failure is likely to occur and allows maintenance to be scheduled when it is most cost-effective.
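For example, a simple deviation check on measurements derived from the vision system might look like the sketch below; the nominal values and tolerances are made up, and a real system would derive them from the line’s engineering specs and historical data.

```python
# Illustrative check for a robot arm drifting out of tolerance.

NOMINAL_SPEED_MM_S = 250.0
NOMINAL_ANGLE_DEG = 90.0
SPEED_TOLERANCE = 0.05     # 5% deviation allowed
ANGLE_TOLERANCE_DEG = 1.5  # degrees of drift allowed

def needs_maintenance(measured_speed: float, measured_angle: float) -> bool:
    """Flag the arm for preventative care before it causes a fault."""
    speed_drift = abs(measured_speed - NOMINAL_SPEED_MM_S) / NOMINAL_SPEED_MM_S
    angle_drift = abs(measured_angle - NOMINAL_ANGLE_DEG)
    return speed_drift > SPEED_TOLERANCE or angle_drift > ANGLE_TOLERANCE_DEG

print(needs_maintenance(236.0, 90.4))  # True: speed has drifted about 5.6%
```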
With computer vision automation, faults can be detected at rates nearing 100%. And depending on the probability threshold, items can be escalated to a technician for further evaluation and action.
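A minimal sketch of that kind of threshold-based routing could look like this; the threshold values are illustrative, and tuning them trades false alarms against missed defects.

```python
# Sketch of threshold-based routing for an inspection camera's output.

AUTO_REJECT_THRESHOLD = 0.90  # defect probability above this: pull the unit
REVIEW_THRESHOLD = 0.60       # in between: escalate to a technician

def route_inspection(defect_probability: float) -> str:
    if defect_probability >= AUTO_REJECT_THRESHOLD:
        return "reject"    # confident defect: remove from the line
    if defect_probability >= REVIEW_THRESHOLD:
        return "escalate"  # uncertain: a human technician takes a look
    return "pass"          # confident pass: continue down the line

print(route_inspection(0.72))  # "escalate"
```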
Smart Field Technician
Industry trends show a significant attrition rate as older, experienced employees retire. The inability to hire and ramp up a younger workforce directly impacts a company’s ability to meet customer demand.
Computer vision can have a tremendous impact on field service technicians by shortening the ramp to proficiency, reducing service duration, and increasing first-time fix rates. Computer vision is also well suited to supporting field technicians with parts identification. Once a part is identified, the system can tell the technician whether a replacement is in the warehouse and provide information about how to fix or replace it. Computer vision can even help verify that the installation was done accurately and to specification.
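As a rough sketch, once the vision model has predicted a part number, the follow-up lookups might be wired together like this; the part IDs, stock table, and knowledge article references are placeholders standing in for real ERP and knowledge-base calls.

```python
# Hypothetical flow after a vision model identifies a part from a photo:
# check stock, then surface repair guidance to the technician.

WAREHOUSE_STOCK = {"PART-4411": 3, "PART-7802": 0}                      # placeholder inventory
REPAIR_GUIDES = {"PART-4411": "KB-1029: replacing the 4411 valve assembly"}  # placeholder articles

def assist_technician(predicted_part: str) -> str:
    in_stock = WAREHOUSE_STOCK.get(predicted_part, 0) > 0
    guide = REPAIR_GUIDES.get(predicted_part, "no guide on file")
    availability = "in stock" if in_stock else "backordered"
    return f"{predicted_part}: {availability}; see {guide}"

print(assist_technician("PART-4411"))
```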
Leveraging devices such as Google Glass or Microsoft HoloLens, computer vision can even provide guided help through augmented reality. This is gaining traction in industrial machinery services, such as generators and oil platforms, where a bad fix has severe financial and safety implications. With AR tools, technicians don’t have to be subject-matter experts on every issue.
Another industry adopting computer vision is drone inspection, where AI processes large volumes of imagery. When inspecting structures like bridges, ship hulls, and pipelines, AI can detect changes in form and cracks that could indicate impending failure.
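A deliberately simplified sketch of that idea is shown below: two passes over the same structure are compared pixel by pixel, and a frame is flagged when enough pixels change. Real pipelines add image registration and trained crack detectors; the thresholds and synthetic frames here are purely illustrative.

```python
# Toy change detection between two drone passes over the same structure.
import numpy as np

def frame_changed(baseline: np.ndarray, current: np.ndarray,
                  pixel_delta: int = 40, changed_fraction: float = 0.02) -> bool:
    """Flag the frame for an inspector if enough pixels differ strongly."""
    diff = np.abs(current.astype(int) - baseline.astype(int))
    return np.mean(diff > pixel_delta) > changed_fraction

baseline = np.full((480, 640), 120, dtype=np.uint8)  # last month's pass (synthetic)
current = baseline.copy()
current[200:240, 100:500] = 30                       # a dark crack-like streak appears
print(frame_changed(baseline, current))              # True
```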
These are only a few examples, but computer vision is helping everywhere across industries: high tech, manufacturing, biotech, automotive, chemicals, utilities, healthcare, and more.
So, how does computer vision help in the contact center?
Salesforce Service Cloud and Einstein Vision
How can computer vision improve customer satisfaction (CSAT), Net Promoter Score (NPS), average handle time (AHT), first-contact resolution (FCR), and other key metrics in the contact center?
Salesforce Service Cloud with Einstein Vision enables the customer and service agent to communicate more effectively and solve issues more accurately and quickly. Computer vision and AI can identify, classify, prioritize, and route issues to agents as needed. With the right training, computer vision can be used for self-service as well.
For agents in the contact center, Einstein Vision can help identify devices and configurations. As a customer, how many times have you had to crawl, climb, or contort just to read the part or model number needed for your car? This takes up your valuable time and the support agent’s time as well. Now imagine the same experience if you could simply point your mobile phone camera and let Einstein Vision identify all the information the support agent needs. Salesforce could then automatically provide the agent with the right knowledge article for the specific model number, which would certainly speed up case resolution.
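As a rough sketch of the mechanics, a prediction request to Einstein Vision’s REST endpoint might look like the following; the access token, model ID, and image path are placeholders, and the exact request format should be verified against the current Einstein Vision documentation.

```python
# Rough sketch of classifying a customer's photo with an Einstein Vision model.
import base64
import requests

ACCESS_TOKEN = "<your Einstein Platform access token>"  # placeholder
MODEL_ID = "<your trained model id>"                     # placeholder

def classify_photo(image_path: str) -> dict:
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    response = requests.post(
        "https://api.einstein.ai/v2/vision/predict",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        # Multipart form fields, per the Einstein Vision prediction docs.
        files={"modelId": (None, MODEL_ID),
               "sampleBase64Content": (None, encoded)},
    )
    response.raise_for_status()
    return response.json()  # predicted labels with probabilities

# print(classify_photo("car_part.jpg"))  # hypothetical image file
```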
Another example could be filing an insurance claim. What if you could point the camera at the damage to your car and Einstein Vision could determine whether the repair is likely to exceed your deductible? If it is, the system could escalate the call directly to an insurance agent to process the claim, opening a case with the appropriate fields and criteria already prefilled.
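A minimal sketch of that decision logic might look like this; the damage categories, cost estimates, and deductible are invented for the example.

```python
# Illustrative routing once a vision model has labeled the damage in a claim photo.

ESTIMATED_COST = {"scratch": 300, "dented_panel": 1200, "broken_glass": 600}  # placeholder estimates

def claim_route(damage_label: str, deductible: int) -> str:
    cost = ESTIMATED_COST.get(damage_label, 0)
    if cost == 0:
        return "escalate"          # unrecognized damage: an adjuster reviews it
    if cost <= deductible:
        return "below_deductible"  # inform the customer, no claim opened
    return "open_claim"            # pre-fill the case and route to an agent

print(claim_route("dented_panel", deductible=500))  # "open_claim"
```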
However, for computer vision to work, the AI engine needs to be trained, and the right business processes need to be built into Service Cloud. Training the AI engine means feeding it lots and lots of labeled photos covering a wide variety of conditions. You then need business rules that pull in a human to identify the image whenever the AI’s confidence falls below a probability threshold. Lastly, you have to build business processes in the CRM to escalate to an agent when assisted support is needed.
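At the API level, the training side might look roughly like the sketch below, which uploads a zip of labeled example images and starts a training run. The endpoints and field names are based on the Einstein Vision REST documentation, but the token and dataset URL are placeholders, and the exact fields should be checked against the current docs.

```python
# Rough sketch of creating and training an Einstein Vision dataset.
import requests

ACCESS_TOKEN = "<your Einstein Platform access token>"  # placeholder
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# 1. Create a dataset from a zip with one folder of images per label.
upload = requests.post(
    "https://api.einstein.ai/v2/vision/datasets/upload/sync",
    headers=HEADERS,
    files={"path": (None, "https://example.com/device_photos.zip"),  # placeholder URL
           "type": (None, "image")},
)
upload.raise_for_status()
dataset = upload.json()

# 2. Train a model on that dataset; a real workflow would poll its status.
train = requests.post(
    "https://api.einstein.ai/v2/vision/train",
    headers=HEADERS,
    files={"name": (None, "Device identifier"),
           "datasetId": (None, str(dataset["id"]))},
)
train.raise_for_status()
training = train.json()
print(training.get("modelId"), training.get("status"))
```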
It’s important to recognize that AI learning will be ongoing. New images will need to be fed into the system, and environmental conditions will influence how items are identified and classified. This will have an impact on the contact center: while AI will improve key metrics overall, standard operating procedures, business processes, skills, and staffing will have to be updated to realize the benefits and get it right.