Hume AI has introduced Empathic Voice Interface (EVI), a conversational AI with emotional intelligence. EVI sets itself apart by comprehending the user's tone of voice, tailoring its responses accordingly and adding depth to every interaction.
Interestingly, it almost feels like you are talking to a human.
EVI is a new AI system that understands and generates expressive speech, trained on millions of human conversations. Developers can now seamlessly integrate EVI into various applications using Hume’s API, offering a unique voice interface experience.
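For readers curious what such an integration might look like, below is a minimal sketch of streaming audio to a hosted voice API over a WebSocket. The endpoint URL, authentication scheme, and message fields are assumptions made for illustration, not Hume's actual contract; the official API documentation is the authoritative reference.

```python
# A hedged sketch of streaming microphone audio to a hosted voice API over a
# WebSocket. The URL, auth query parameter, and message schema below are
# illustrative assumptions, not Hume's real interface.
import asyncio
import base64
import json

import websockets  # pip install websockets

EVI_URL = "wss://api.example.com/v0/evi/chat"  # hypothetical endpoint


async def chat(api_key: str, audio_chunks) -> None:
    # Authenticating via a query parameter is an assumption for this sketch.
    async with websockets.connect(f"{EVI_URL}?api_key={api_key}") as ws:

        async def send() -> None:
            # Stream raw audio upstream as base64-encoded JSON messages.
            for chunk in audio_chunks:
                await ws.send(json.dumps({
                    "type": "audio_input",
                    "data": base64.b64encode(chunk).decode(),
                }))

        async def receive() -> None:
            # Print whatever events the server pushes back (text, audio, etc.).
            async for message in ws:
                event = json.loads(message)
                print(event.get("type"), event.get("text", ""))

        await asyncio.gather(send(), receive())

# Example invocation (would need a real endpoint and key):
# asyncio.run(chat("YOUR_KEY", audio_chunks=[b"..."]))
```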
EVI boasts several distinctive empathic capabilities:
- Human-Like Tone: EVI responds with tones resembling human expressions, enhancing the conversational experience.
- Responsive Language: It adapts its language based on the user’s expressions, addressing their needs effectively.
- State-of-the-Art Detection: EVI uses the user’s tone to detect the end of a conversation turn accurately, ensuring seamless interactions.
- Interruption Handling: While it stops when interrupted, EVI can effortlessly resume from where it left off (a sketch of this behaviour appears after this list).
- Self-Improvement: EVI learns from user reactions to continuously improve and enhance user satisfaction over time.
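To make the interruption-handling item above concrete, here is a minimal toy sketch of the described behaviour: playback pauses when the user starts speaking and resumes from the saved position. All names are illustrative assumptions; this is not Hume's implementation.

```python
# A toy model of interruptible playback: stop when the user speaks,
# keep the position, and resume from it on the next call.
class InterruptiblePlayback:
    def __init__(self, sentences):
        self.sentences = sentences  # the reply, split into playable units
        self.position = 0           # index of the next unit to speak

    def play(self, user_is_speaking) -> bool:
        """Speak until done or interrupted; returns True when finished."""
        while self.position < len(self.sentences):
            if user_is_speaking():
                return False        # yield the floor, keep our place
            print("assistant:", self.sentences[self.position])
            self.position += 1
        return True

# Usage: call play() again once the interruption ends; playback picks up
# exactly where it left off because `position` was preserved.
```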
In addition to its empathic features, EVI offers fast, reliable transcription and text-to-speech capabilities, making it adaptable to various scenarios. It also integrates with any large language model (LLM), adding to its flexibility and utility.
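As a rough illustration of what "works with any LLM" implies architecturally, the sketch below wraps a pluggable text-in/text-out model between transcription and speech synthesis. The function signatures are placeholders invented for this example, not a real SDK.

```python
# A hedged sketch of an LLM-agnostic voice pipeline: transcription and
# speech synthesis wrap any text-generation backend.
from typing import Callable


def voice_turn(
    audio_in: bytes,
    transcribe: Callable[[bytes], str],   # any ASR backend
    llm: Callable[[str], str],            # any text-in/text-out model
    synthesize: Callable[[str], bytes],   # any TTS backend
) -> bytes:
    """One conversational turn: speech -> text -> reply -> speech."""
    user_text = transcribe(audio_in)
    reply_text = llm(user_text)
    return synthesize(reply_text)
```

Because the model is just a callable, swapping one LLM for another changes a single argument rather than the pipeline itself.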
EVI is set to be publicly available in April, offering developers an innovative tool to create immersive and empathetic voice interfaces. Developers eager for early access to the EVI API can express their interest by filling out the form at https://bit.ly/evi-waitlist.
Founded in 2021 by Alan Cowen, a former researcher at Google AI, Hume is a research lab and technology company with a mission to ensure that artificial intelligence is built to serve human goals and emotional well-being.
“We believe voice interfaces will soon be the default way we interact with AI. Speech is four times faster than typing; frees up the eyes and hands; and carries more information in its tune, rhythm, and timbre. That’s why we built the first AI with emotional intelligence to understand the voice beyond words. Based on your voice, it can better predict when to speak, what to say, and how to say it,” wrote Cowen on LinkedIn.
The company raised $50 million in Series B funding from EQT Group, Union Square Ventures, Nat Friedman, Daniel Gross, Northwell Holdings, Comcast Ventures, LG Technology Ventures, and Metaplanet.
OpenAI’s Plans With Voice
OpenAI is currently working on a Voice Engine, according to a user on X. Voice Engine is expected to include voice and speech recognition, voice-command processing, and conversion between text and speech, as well as the generation of voice and audio outputs from natural language prompts, speech, visual prompts, images, and video.
In an episode of Unconfuse Me with Bill Gates, Altman pointed out that OpenAI is on 'this long, continuous curve' to create newer and better models. He highlighted multimodality as the key aspect of GPT-5, enabling it to process video input and generate new videos, and confirmed that work on the model has already begun.
Altman also spoke at length with Gates about how GPT-5 would emphasise customisation and personalisation. “The ability to know about you, your email, your calendar, how you like appointments booked, connected to other outside data sources—all of that. Those will be some of the most important areas of improvement,” said Altman.
Last year, OpenAI launched a voice assistant in the ChatGPT app on Android and iOS, enabling users to engage in back-and-forth conversations. The ChatGPT Voice feature includes diverse voices such as Ember, Sky, Breeze, and Cove.
OpenAI recently partnered with Figure AI to build generative AI-powered humanoids. In a recent video released by Figure, the humanoid robot Figure 01 was seen holding a natural conversation with a human while handing him an apple.
Emotional Intelligence Matters
Conversational AI chatbots that understand emotion are the future. “Chatbots that are polite, and understand sentiment, emotion, etc give rise to better businesses. Chatbots that are closer to human beings, emotional and sentiment, bring commercial profits along, which is quite motivating,” said IIT Bombay professor and computer scientist Pushpak Bhattacharyya in an exclusive interview with AIM.
"interface" - Google News
March 28, 2024 at 01:44AM
https://ift.tt/oawZm7I
Forget OpenAI's ChatGPT, Hume AI's Empathetic Voice Interface (EVI) Might Be the Next Big Thing in AI! - Analytics India Magazine
"interface" - Google News
https://ift.tt/lwvVU1o
https://ift.tt/Akx0iba
Bagikan Berita Ini
0 Response to "Forget OpenAI's ChatGPT, Hume AI's Empathetic Voice Interface (EVI) Might Be the Next Big Thing in AI! - Analytics India Magazine"
Post a Comment