The advent of AI voice technology has revolutionised various aspects of business operations, particularly in lead qualification. Tools like Synthflow AI and OpenAI's GPT-4o have demonstrated significant advancements in real-time conversational capabilities, raising the question: will AI voice calls effectively qualify leads, or will they unsettle potential customers?
It’s something I’ve been pondering over the last year as AI has rapidly evolved. While I think unwanted AI sales calls should be a hard no, I’ve wondered whether there could be a place for AI callers making callbacks to leads, especially out of hours, to gather more details and further qualify them so that new-client teams start their day with warmer leads.
The Promise of AI Voice Calls
AI voice assistants, such as those offered by Synthflow AI, provide a no-code platform for creating and deploying advanced conversational AI voice assistants. These tools can handle tasks ranging from customer service to lead qualification, offering features like text-to-speech, customisable voice assistants, and real-time voice interactions.
The recent demonstrations of OpenAI's GPT-4o further highlight the potential of AI in real-time voice and video calling, showcasing its ability to interact naturally with users, recognise their surroundings, and respond contextually.
Benefits of AI Voice Calls for Lead Qualification
- Efficiency and Scalability: AI voice calls can handle a high volume of calls simultaneously, significantly increasing the reach and efficiency of lead qualification processes. This automation allows businesses to engage with more leads in a shorter time, improving the chances of finding qualified prospects quickly.
- Cost-Effectiveness: Employing AI-powered phone calls can be more cost-effective than hiring and training additional sales representatives. AI technology can operate 24/7 without needing breaks, salaries, or benefits, reducing labour costs while maintaining high efficiency.
- Consistency and Accuracy: AI voice assistants provide consistent and accurate responses, ensuring that all leads receive the same level of attention and information. This consistency can enhance the overall customer experience and improve the quality of lead qualification.
- Data-Driven Insights: AI voice calls can collect valuable data during conversations, which can be analysed to identify patterns, trends, and common objections. These insights can help businesses refine their lead qualification strategies and improve their sales processes.
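To make the data-driven insights point concrete, here is a minimal sketch of how call transcripts might be mined for common objections. It assumes transcripts are available as plain text; the objection phrases are illustrative assumptions from a hypothetical sales playbook, not from any particular AI voice platform.

```python
from collections import Counter

# Hypothetical objection phrases to track -- in practice these would come
# from your own sales playbook, not from any specific vendor's tooling.
OBJECTION_PHRASES = ["too expensive", "not the right time", "already have a provider"]

def count_objections(transcripts):
    """Tally how often each known objection phrase appears across transcripts."""
    counts = Counter()
    for transcript in transcripts:
        text = transcript.lower()
        for phrase in OBJECTION_PHRASES:
            if phrase in text:
                counts[phrase] += 1
    return counts

transcripts = [
    "Honestly it's too expensive for us right now.",
    "We already have a provider, but thanks.",
    "It's not the right time, and frankly too expensive.",
]
print(count_objections(transcripts))
```

Even a simple tally like this can surface which objections come up most often, which is the kind of feedback loop a human-only call team rarely captures systematically.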
Potential Drawbacks and Ethical Concerns
Despite the benefits, there are several potential drawbacks and ethical concerns associated with AI voice calls for lead qualification.
Customer Reception
- Unsettling Experience: Some customers may find AI voice calls unsettling or intrusive, especially if they are not aware that they are speaking to an AI. Because AI voices can be indistinguishable from human voices, the discovery that a caller was not human can lead to discomfort and mistrust.
- Lack of Human Touch: While AI can handle many tasks efficiently, it may lack the empathy and personal touch that human interactions provide. This can be a significant drawback in scenarios where building a personal connection is crucial for converting leads.
Ethical and Legal Considerations
- Transparency and Consent: There is an ethical responsibility to inform customers that they are interacting with an AI. Transparency is crucial to maintaining trust and ensuring that customers are not misled.
- Regulations: Different countries have varying regulations regarding the use of AI in voice calls. In Australia, the USA, and the UK, there are laws and guidelines that businesses must follow to ensure compliance. These regulations often focus on privacy, consent, and the ethical use of AI technology.
Risks of AI Hallucination and Manipulation
AI systems, while advanced, are not infallible. They can experience issues such as hallucination, where the AI generates incorrect or nonsensical information, and they can be manipulated by users into producing unintended responses. A recent example involving DPD's AI chatbot illustrates these risks vividly.
Case Study: DPD AI Chatbot Incident
DPD, a delivery firm, experienced a significant issue with its AI-driven online chatbot. After a system update, the chatbot began to exhibit unusual behaviour, including swearing and criticising the company. This incident occurred when a customer, frustrated with the chatbot's inability to help locate a missing parcel, engaged it in a series of prompts. The chatbot responded with inappropriate language and self-deprecating comments, leading to a public relations issue for DPD.
This example highlights several risks associated with AI systems:
- AI Hallucination: The chatbot's responses were not based on factual information but rather on erroneous data processing, leading to inappropriate and incorrect outputs. This phenomenon, known as AI hallucination, can undermine the credibility of AI systems and damage customer trust.
- Manipulation by Users: Customers can find ways to manipulate AI agents, as demonstrated by the DPD incident. The customer intentionally prompted the chatbot to produce negative content, which it did. This manipulation can lead to unintended and potentially harmful outcomes for businesses.
- Reputational Damage: Incidents like the DPD chatbot's behaviour can cause significant reputational damage. The unconventional exchange gained significant attention on social media, highlighting the potential for AI errors to go viral and negatively impact a company's image.
Regulations in Australia, the USA, and the UK
Australia
In Australia, the use of AI in voice calls must comply with the Privacy Act 1988, which regulates the handling of personal information. Businesses must obtain consent before using AI to collect and process personal data. Additionally, the Australian Competition and Consumer Commission (ACCC) has guidelines to prevent misleading and deceptive conduct, which would apply to AI voice calls.
USA
In the USA, the Federal Trade Commission (FTC) enforces regulations to protect consumers from deceptive practices. The use of AI in voice calls must also comply with the Telephone Consumer Protection Act (TCPA), which requires businesses to obtain consent before making automated calls. The FTC also emphasises transparency and the need to inform consumers when they are interacting with AI.
UK
In the UK, the UK General Data Protection Regulation (UK GDPR) governs the use of personal data, including in AI voice calls. Businesses must obtain explicit consent from individuals before using their data and be transparent about their use of AI. The Information Commissioner's Office (ICO) provides guidelines on the ethical use of AI and the importance of transparency and accountability.
My thoughts
Right now, for purposes such as out-of-hours lead qualification, I think messenger/SMS-style AI workflows and bots might be a better bet, avoiding the risk of alienating potential customers and damaging the business.
If it is something companies explore, I think they need to be upfront, give people a choice to receive the calls, and explain the benefits. For example, with out-of-hours enquiries, you could say something like, “Our team will reach out when the office opens tomorrow, however, if you’d like to provide more information to enable a faster response from our experts, tap below to request a call back from our AI agent who will ask a few more questions and help answer any initial questions you may have”.
I also think there could be benefits in offering it as an alternative to online forms that require a lot of information, as it could save people time typing. There are plenty of times when I’ve done so much typing that I’d rather send someone a voice note, which is quicker and easier while on the move.
Transparency, consent, and adherence to legal guidelines are crucial to the successful implementation of AI voice technology in lead qualification. As AI continues to evolve, it will be essential for businesses to balance the benefits of automation with the need for ethical and responsible use. It’s bound to become an interesting space, especially as it gets harder and harder to tell if someone is real.