Streaming question
What are the best settings for streaming the model at low latency? The available settings are:

Left Context (seconds) - larger values improve quality
Chunk Size (seconds) - processing chunk size
Right Context (seconds) - future context for better accuracy

Any help would be appreciated; I've put a rough sketch of how I currently understand these settings below.
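This is just my own sketch in plain Python; the function names are mine, not from any official API:

```python
# My rough mental model of the three settings (my own notation, not NeMo's API).

def algorithmic_latency_secs(chunk_secs: float, right_context_secs: float) -> float:
    # Audio that has to arrive before a chunk can be decoded: the chunk itself
    # plus the future (right) context the model looks ahead into.
    return chunk_secs + right_context_secs

def window_secs(left_secs: float, chunk_secs: float, right_secs: float) -> float:
    # Total audio the model sees per step. More left context should mean more
    # compute and better quality, but it is past audio that has already arrived,
    # so I assume it does not add to the waiting time.
    return left_secs + chunk_secs + right_secs
```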
10, 2, 2 by default (left context, chunk size, right context): https://docs.nvidia.com/nemo-framework/user-guide/latest/nemotoolkit/asr/streaming_decoding/canary_chunked_and_streaming_decoding.html
These are from the Canary model page on the NVIDIA site, but I have tested them on Parakeet as well. By the way, share your code if it's not a problem; these values are the least of the problems overall.
Anyway, the larger the values the better the quality, but the less real-time it gets. Also, what do you mean by streaming, i.e. which decoding method? :) Interesting topic; if you're looking for someone to build something on Parakeet together, let me know :)
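To make the trade-off concrete, here's a quick back-of-the-envelope comparison (plain Python, not NeMo code; the 10/2/2 numbers are the defaults from the docs link above, while the lower-latency values are just a hypothetical example):

```python
# Illustrative only: compares the documented 10/2/2 defaults with a hypothetical
# lower-latency setting. Accuracy at the lower setting still has to be measured.
settings = {
    "default 10/2/2": {"left": 10.0, "chunk": 2.0, "right": 2.0},
    "lower latency 5/1/0.5 (hypothetical)": {"left": 5.0, "chunk": 1.0, "right": 0.5},
}

for name, s in settings.items():
    latency = s["chunk"] + s["right"]              # must wait for chunk + future context
    window = s["left"] + s["chunk"] + s["right"]   # audio processed per decoding step
    print(f"{name}: ~{latency:.1f} s algorithmic latency, {window:.1f} s of audio per step")
```

So lowering the chunk size and right context is what actually cuts the waiting time, while shrinking the left context mostly just reduces compute per step, at some cost in quality.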