TY - GEN
T1 - Enhancing Forensic Audio Transcription with Neural Network-Based Speaker Diarization and Gender Classification
AU - Ullah, Rahmat
AU - Asghar, Ikram
AU - Malik, Hassan
AU - Evans, Gareth
AU - Ahmad, Jawad
AU - Roberts, Dorothy Anne
PY - 2025/3/12
Y1 - 2025/3/12
N2 - Forensic audio transcription is often compromised by low-quality recordings, where indistinct speech can hinder the accuracy of conventional Automatic Speech Recognition (ASR) systems. This study addresses this limitation by developing a machine-learning-based approach to improve speaker diarization, a process critical for distinguishing between speakers in sensitive audio data. Previous research highlights the inadequacy of traditional ASR in forensic settings, particularly where audio quality is poor and speaker overlap is common. This paper presents a neural network designed specifically for gender classification, using 20 key acoustic features extracted from real forensic audio data. The model architecture comprises input, hidden, and output layers tailored to differentiate male and female voices, with dropout regularization to prevent overfitting and hyperparameter optimization to ensure robust generalization on test data. The neural network achieved an average recall of 86.81%, an F1 score of 85.67%, a precision of 87.95%, and an accuracy of 86.83% across varied audio conditions. The model significantly improves transcription accuracy, reducing errors in legal contexts and supporting judicial processes with more reliable, interpretable evidence from sensitive audio data.
AB - Forensic audio transcription is often compromised by low-quality recordings, where indistinct speech can hinder the accuracy of conventional Automatic Speech Recognition (ASR) systems. This study addresses this limitation by developing a machine-learning-based approach to improve speaker diarization, a process critical for distinguishing between speakers in sensitive audio data. Previous research highlights the inadequacy of traditional ASR in forensic settings, particularly where audio quality is poor and speaker overlap is common. This paper presents a neural network designed specifically for gender classification, using 20 key acoustic features extracted from real forensic audio data. The model architecture comprises input, hidden, and output layers tailored to differentiate male and female voices, with dropout regularization to prevent overfitting and hyperparameter optimization to ensure robust generalization on test data. The neural network achieved an average recall of 86.81%, an F1 score of 85.67%, a precision of 87.95%, and an accuracy of 86.83% across varied audio conditions. The model significantly improves transcription accuracy, reducing errors in legal contexts and supporting judicial processes with more reliable, interpretable evidence from sensitive audio data.
U2 - 10.1109/ICEET65156.2024.10913726
DO - 10.1109/ICEET65156.2024.10913726
M3 - Conference contribution
SN - 9798331532901
T3 - International Conference on Engineering and Emerging Technologies
SP - 1
EP - 6
BT - 2024 International Conference on Engineering and Emerging Technologies (ICEET)
PB - IEEE
CY - Dubai, UAE
ER -