# Whispering
Streaming transcriber with whisper. Enough machine power is needed to transcribe in real time.
## Setup

```sh
pip install -U git+https://github.com/shirayu/whispering.git@v0.6.3

# If you use a GPU, install the proper torch and torchaudio
# Check https://pytorch.org/get-started/locally/
# Example: torch for CUDA 11.6
pip install -U torch torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
```
If you get `OSError: PortAudio library not found` on Linux, install PortAudio:

```sh
sudo apt -y install portaudio19-dev
```
## Example of microphone

```sh
# Run in English
whispering --language en --model tiny
```
- `--help` shows the full options
- `--model` sets the model name to use. Larger models are more accurate, but may not be able to transcribe in real time
- `--language` sets the language to transcribe. The list of languages is shown with `whispering -h`
- `--no-progress` disables the progress message
- `-t` sets temperatures to decode. You can set several like `-t 0.0 -t 0.1 -t 0.5`, but too many temperatures exhaust decoding time
- `--debug` outputs debug logs
- `--vad` sets the VAD (Voice Activity Detection) threshold. The default is `0.5`. `0` disables VAD and forces whisper to analyze non-voice-activity sound periods
- `--output` sets the output file (default: standard output)
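The `-t` option above follows Whisper's temperature-fallback idea: decoding is retried at progressively higher temperatures until a result looks acceptable, so each extra temperature adds a possible retry. A minimal sketch of that loop (`decode` and `is_acceptable` are hypothetical stand-ins, not whispering's API):

```python
from typing import Callable, Sequence

def decode_with_fallback(
    audio: bytes,
    temperatures: Sequence[float],
    decode: Callable[[bytes, float], str],
    is_acceptable: Callable[[str], bool],
) -> str:
    """Try each temperature in order; return the first acceptable result.

    If no attempt passes, fall back to the last attempt. More temperatures
    mean more retries in the worst case, which is why too many of them
    exhaust decoding time.
    """
    result = ""
    for t in temperatures:
        result = decode(audio, t)
        if is_acceptable(result):
            return result
    return result
```

With `-t 0.0 -t 0.1 -t 0.5`, the worst case runs the decoder three times on the same audio.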
### Parse interval

By default, whispering performs VAD every 3.75 seconds. This interval is determined by the value of `-n`, whose default is `20`. When an interval is predicted as "silence", it is not passed to whisper. If you want to disable VAD, set the VAD threshold to 0 by adding `--vad 0`.

By default, Whisper does not perform analysis until the total length of the segments determined by VAD to have speech exceeds 30 seconds. However, if silence segments appear 16 times (the default value of `--max_nospeech_skip`) after speech is detected, the analysis is performed.
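The gating described above can be thought of as a small state machine: speech intervals accumulate until 30 seconds are buffered, but a long enough run of silence after speech forces an early analysis. This is an illustrative simplification with hypothetical names, not whispering's internal code:

```python
class SegmentBuffer:
    """Accumulate speech segments and decide when to run analysis.

    Sketch of the policy described above, not whispering's actual
    implementation.
    """

    def __init__(self, target_seconds: float = 30.0, max_nospeech_skip: int = 16):
        self.target_seconds = target_seconds
        self.max_nospeech_skip = max_nospeech_skip
        self.buffered_seconds = 0.0
        self.nospeech_count = 0

    def push(self, seconds: float, is_speech: bool) -> bool:
        """Add one VAD interval; return True when analysis should run."""
        if is_speech:
            self.buffered_seconds += seconds
            self.nospeech_count = 0
        elif self.buffered_seconds > 0:
            # Silence after speech counts toward the early-flush threshold
            self.nospeech_count += 1

        if self.buffered_seconds >= self.target_seconds:
            return self._flush()
        if self.buffered_seconds > 0 and self.nospeech_count >= self.max_nospeech_skip:
            return self._flush()
        return False

    def _flush(self) -> bool:
        self.buffered_seconds = 0.0
        self.nospeech_count = 0
        return True
```

With the defaults, eight speech intervals of 3.75 seconds reach the 30-second target, while a single speech interval followed by 16 silent ones triggers the early flush.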
## Example of web socket

⚠ No security mechanism. Please secure it at your own responsibility.

Run with `--host` and `--port`.

### Host

```sh
whispering --language en --model tiny --host 0.0.0.0 --port 8000
```
### Client

```sh
whispering --host ADDRESS_OF_HOST --port 8000 --mode client
```

You can set `-n` and other options.
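On the client side, the microphone stream has to be cut into fixed-size messages before being sent to the host. A generic sketch of that chunking step (the chunk size and function name are hypothetical; whispering's actual wire format over the WebSocket is not shown here):

```python
from typing import Iterator

def chunk_pcm(pcm: bytes, chunk_bytes: int) -> Iterator[bytes]:
    """Yield fixed-size chunks of raw PCM audio; the last chunk may be shorter."""
    for start in range(0, len(pcm), chunk_bytes):
        yield pcm[start : start + chunk_bytes]
```

For 16 kHz, 16-bit mono audio, one 3.75-second interval is 16000 × 2 × 3.75 = 120000 bytes, so each chunk would carry one VAD interval.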
## For Developers

- Install poetry to use the `poetry` command
- Clone and install libraries

  ```sh
  # Clone
  git clone https://github.com/shirayu/whispering.git

  # With poetry
  poetry config virtualenvs.in-project true
  poetry install --all-extras
  poetry run pip install -U torch torchaudio --extra-index-url https://download.pytorch.org/whl/cu116

  # With npm
  npm install
  ```

- Run tests and check that no errors occur

  ```sh
  poetry run make -j4
  ```

- Make fancy updates
- Make style

  ```sh
  poetry run make style
  ```

- Run tests again and check that no errors occur

  ```sh
  poetry run make -j4
  ```

- Check typos by using typos. Just run the `typos` command in the root directory.
- Send pull requests!
## License

- MIT License
- Some code is ported from the original whisper. Its license is also the MIT License.