whisper_streaming (beta version)


Streaming transcriber with whisper. To transcribe in real time, sufficient machine power is needed.

Example

# Setup
git clone https://github.com/shirayu/whisper_streaming.git
cd whisper_streaming
poetry install --only main

# If you use a GPU, install the appropriate torch and torchaudio with "poetry run pip install -U"
# Example: torch for CUDA 11.6
poetry run pip install -U torch torchaudio --extra-index-url https://download.pytorch.org/whl/cu116

# Run in English
poetry run whisper_streaming --language en --model base -n 20
  • --help shows the full list of options
  • --language sets the language to transcribe. The list of languages is shown with poetry run whisper_streaming -h
  • -t sets the temperatures for decoding. You can set several (e.g. -t 0.0 -t 0.1 -t 0.5), but too many temperatures exhaust decoding time (see the combined example after this list)
  • -n sets the parsing interval. Larger values can improve accuracy but consume more memory.
  • --debug outputs debug logs
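
For example, the options above can be combined in a single invocation. This is only an illustrative sketch; the flag values (interval, temperatures) are not recommendations and should be tuned to your machine.

# English transcription with two decoding temperatures, a longer parsing interval, and debug logs
poetry run whisper_streaming --language en --model base -n 40 -t 0.0 -t 0.2 --debug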

Tips

PortAudio Error

If you get OSError: PortAudio library not found, install PortAudio:

# Ubuntu
sudo apt-get install portaudio19-dev
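
On macOS, PortAudio can typically be installed with Homebrew (this assumes Homebrew is already available):

# macOS (Homebrew)
brew install portaudio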

License