How do I use the hmmlearn library to model MFCCs in Python?

I have some audio in which each frame is labeled as voiced or unvoiced. I want to train an HMM on the MFCC features of the audio, and then use the trained model to decide whether the frames of new audio are voiced or unvoiced. I already have the basic HMM data (the MFCC features of each frame, the state transition matrix, and the initial state probabilities). How can I train this with GMMHMM in hmmlearn?

Mar.02,2021