From 683571e059c3514430af8e5ce5a665c3cf0f8fdc Mon Sep 17 00:00:00 2001
From: Yuta Hayashibe
Date: Mon, 26 Sep 2022 00:25:44 +0900
Subject: [PATCH] Updated README

---
 README.md | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/README.md b/README.md
index dda9311..5e20643 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,9 @@
 # whisper_streaming
 
+[![MIT License](https://img.shields.io/apm/l/atomic-design-ui.svg?)](LICENSE)
+[![Python Versions](https://img.shields.io/badge/Python-3.8%20--%203.10-blue)](https://pypi.org/project/bunkai/)
+
 [![CI](https://github.com/shirayu/whisper_streaming/actions/workflows/ci.yml/badge.svg)](https://github.com/shirayu/whisper_streaming/actions/workflows/ci.yml)
 [![CodeQL](https://github.com/shirayu/whisper_streaming/actions/workflows/codeql-analysis.yml/badge.svg)](https://github.com/shirayu/whisper_streaming/actions/workflows/codeql-analysis.yml)
 [![Typos](https://github.com/shirayu/whisper_streaming/actions/workflows/typos.yml/badge.svg)](https://github.com/shirayu/whisper_streaming/actions/workflows/typos.yml)
 
@@ -13,7 +16,7 @@ Enough machine power is needed to transcribe in real time.
 
 ```bash
 pip install -U git+https://github.com/shirayu/whisper_streaming.git
-# If you use GPU, install proper torch and torchaudio with "poetry run pip install -U"
+# If you use GPU, install proper torch and torchaudio
 # Example : torch for CUDA 11.6
 pip install -U torch torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
 ```
@@ -22,11 +25,11 @@ pip install -U torch torchaudio --extra-index-url https://download.pytorch.org/w
 
 ```bash
 # Run in English
-poetry run whisper_streaming --language en --model tiny
+whisper_streaming --language en --model tiny
 ```
 
 - ``--help`` shows full options
-- ``--language`` sets the language to transcribe. The list of languages are shown with ``poetry run whisper_streaming -h``
+- ``--language`` sets the language to transcribe. The list of languages are shown with ``whisper_streaming -h``
 - ``-t`` sets temperatures to decode. You can set several like (``-t 0.0 -t 0.1 -t 0.5``), but too many temperatures exhaust decoding time
 - ``--debug`` outputs logs for debug
 
@@ -36,7 +39,7 @@ If you want quick response, set small ``-n`` and add ``--allow-padding``.
 However, this may sacrifice the accuracy.
 
 ```bash
-poetry run whisper_streaming --language en --model tiny -n 20 --allow-padding
+whisper_streaming --language en --model tiny -n 20 --allow-padding
 ```
 
 ## Example of web socket
@@ -48,7 +51,7 @@ Run with ``--host`` and ``--port``.
 ### Host
 
 ```bash
-poetry run whisper_streaming --language en --model tiny --host 0.0.0.0 --port 8000
+whisper_streaming --language en --model tiny --host 0.0.0.0 --port 8000
 ```
 
 You can set ``--allow-padding`` and other options.
@@ -56,7 +59,7 @@ You can set ``--allow-padding`` and other options.
 ### Client
 
 ```bash
-poetry run whisper_streaming --model tiny --host ADDRESS_OF_HOST --port 8000 --mode client
+whisper_streaming --model tiny --host ADDRESS_OF_HOST --port 8000 --mode client
 ```
 
 You can set ``-n`` and other options.