Whispering

Streaming transcriber with Whisper. Sufficient machine power is needed to transcribe in real time.

Setup

pip install -U git+https://github.com/shirayu/whispering.git@v0.5.0

# If you use a GPU, install the proper torch and torchaudio
# Check https://pytorch.org/get-started/locally/
# Example: torch for CUDA 11.6
pip install -U torch torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
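
As a quick optional check that the CUDA-enabled build is actually being used (assuming torch was installed as above), you can print the version and CUDA availability:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"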

If you get OSError: PortAudio library not found on Linux, install PortAudio:

sudo apt -y install portaudio19-dev
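
To confirm that PortAudio is found and your microphone is visible, one optional check is to list audio devices. This assumes a PortAudio-based backend such as the sounddevice package (suggested by the error message above); adjust if the project uses a different audio library:

python -c "import sounddevice; print(sounddevice.query_devices())"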

Example of microphone

# Run in English
whispering --language en --model tiny
  • --help shows full options
  • --model sets the model name to use. Larger models are more accurate, but may not be able to transcribe in real time.
  • --language sets the language to transcribe. The list of languages is shown with whispering -h
  • --no-progress disables the progress message
  • -t sets temperatures to decode. You can set several like -t 0.0 -t 0.1 -t 0.5, but too many temperatures increase decoding time
  • --debug outputs logs for debugging
  • --no-vad disables VAD (Voice Activity Detection). This forces Whisper to analyze periods without voice activity as well
  • --output sets the output file (Default: standard output)
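
For illustration, several of these options can be combined in one invocation; the model choice, temperatures, and output file name below are only examples:

whispering --language en --model base --no-progress -t 0.0 -t 0.2 --output transcript.txt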

Parse interval

Without --allow-padding, whispering only performs VAD on each interval; when the interval is predicted to be "silence", it is not passed to Whisper. To change the VAD interval, change -n.

If you want a quick response, set a small -n and add --allow-padding. However, this may sacrifice accuracy.

whispering --language en --model tiny -n 20 --allow-padding
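
Conversely, a larger -n without --allow-padding trades latency for accuracy; the value 60 below is only illustrative, so check --help for the default and units:

whispering --language en --model base -n 60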

Example of web socket

No security mechanism is provided. Please secure the connection at your own responsibility.

Run with --host and --port.

Host

whispering --language en --model tiny --host 0.0.0.0 --port 8000

Client

whispering --host ADDRESS_OF_HOST --port 8000 --mode client

You can also set -n, --allow-padding, and other options.
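
For example, a client invocation that also sets the interval and padding (ADDRESS_OF_HOST is a placeholder, as above):

whispering --host ADDRESS_OF_HOST --port 8000 --mode client -n 20 --allow-padding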

License