This thesis focuses on the concept of voice pleasantness, a form of subtle emotion, and explores its definition, classification, and intensity estimation. In recent years, the creation of an expressive artificial voice that carries metalinguistic information and can convey feelings and emotions along with the acoustic speech message has attracted considerable attention. Many important milestones have been reached, but creating the perfect artificial voice remains a challenge. The first synthesis apparatus, built by Christian Kratzenstein, and the von Kempelen machine were developed more than 200 years ago, and scientific research on speech synthesis systems has continued ever since.

Human-computer interaction (HCI) has seen a paradigm shift from textual or display-based control toward more intuitive modalities such as voice, gesture, and mimicry. Speech carries a great deal of information about the speaker's inner state, aims, and desires: while word analysis reveals what the speaker is asking for, other speech features disclose the speaker's mood, purpose, and motive. As a result, emotion recognition from speech has become critical in current human-computer interaction systems. Many sound analysis methods have been developed in the past, but the findings of the different disciplines involved in emotion recognition are difficult to combine, and emotional analysis of people in live speech has not been feasible until recently. Today, the progress of artificial intelligence and the high performance of deep learning methods bring studies on live data to the fore.

This study aims to detect emotions in the human voice using artificial intelligence methods. Data is one of the most important requirements of artificial intelligence work; here, the open-source Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) was used. It contains more than 2,000 recordings of speech and song by 24 actors, covering eight moods: neutral, calm, happy, sad, angry, fearful, disgusted, and surprised. The multilayer perceptron (MLP) classifier, a widely used supervised learning algorithm, was chosen to classify these eight emotion classes. The proposed model achieved an overall accuracy of 81% on the RAVDESS dataset, and its performance was compared with that of similar studies.
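The description above outlines a standard pipeline: load the RAVDESS recordings, extract acoustic features, and train an MLP classifier on the eight emotion labels. The sketch below is a minimal illustration of that pipeline, not the study's actual implementation; the feature choice (40 time-averaged MFCCs), the `ravdess/` directory path, and the MLP hyperparameters are assumptions, while the emotion codes follow RAVDESS's documented file-naming convention.

```python
# Minimal sketch: MFCC features from RAVDESS audio + scikit-learn MLP classifier.
# Feature settings, paths, and hyperparameters are illustrative assumptions.
import glob
import os

import librosa
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# RAVDESS file names encode the emotion as the third dash-separated field,
# e.g. "03-01-06-01-02-01-12.wav" -> "06" (fearful).
EMOTIONS = {
    "01": "neutral", "02": "calm", "03": "happy", "04": "sad",
    "05": "angry", "06": "fearful", "07": "disgusted", "08": "surprised",
}

def extract_features(path, n_mfcc=40):
    """Load one audio file and return its time-averaged MFCC vector."""
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

X, y = [], []
for path in glob.glob("ravdess/**/*.wav", recursive=True):  # assumed data location
    code = os.path.basename(path).split("-")[2]
    if code in EMOTIONS:
        X.append(extract_features(path))
        y.append(EMOTIONS[code])

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.2, stratify=y, random_state=42
)

# Illustrative MLP configuration; the study's hyperparameters are not given.
clf = MLPClassifier(hidden_layer_sizes=(256, 128), max_iter=500, random_state=42)
clf.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Averaging the MFCCs over time gives one fixed-length vector per clip, which is what a plain MLP expects; sequence models would instead keep the frame-level features.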