Commit 5613e5bc authored by Ahmet Öner

Update default model download paths to `~/.cache/whisper`

parent 80fcd6f2
````diff
@@ -7,6 +7,12 @@ Unreleased
 ### Updated
 
 - Updated model conversion method (for Faster Whisper) to use Hugging Face downloader
+- Updated default model paths to `~/.cache/whisper`.
+  - For customization, modify the `ASR_MODEL_PATH` environment variable.
+  - Ensure Docker volume is set for the corresponding directory to use caching.
+    ```bash
+    docker run -d -p 9000:9000 -e ASR_MODEL_PATH=/data/whisper -v ./yourlocaldir:/data/whisper onerahmet/openai-whisper-asr-webservice:latest
+    ```
 
 ### Changed
````
````diff
@@ -179,10 +179,18 @@ docker run -d --gpus all -p 9000:9000 -e ASR_MODEL=base whisper-asr-webservice-gpu
 ```
 
 ## Cache
 
-The ASR model is downloaded each time you start the container, using the large model this can take some time. If you want to decrease the time it takes to start your container by skipping the download, you can store the cache directory (/root/.cache/whisper) to an persistent storage. Next time you start your container the ASR Model will be taken from the cache instead of being downloaded again.
+The ASR model is downloaded each time you start the container, using the large model this can take some time.
+If you want to decrease the time it takes to start your container by skipping the download, you can store the cache directory (`~/.cache/whisper`) to a persistent storage.
+Next time you start your container the ASR Model will be taken from the cache instead of being downloaded again.
 
 **Important this will prevent you from receiving any updates to the models.**
 
 ```sh
-docker run -d -p 9000:9000 -e ASR_MODEL=large -v //c/tmp/whisper:/root/.cache/whisper onerahmet/openai-whisper-asr-webservice:latest
+docker run -d -p 9000:9000 -v ./yourlocaldir:~/.cache/whisper onerahmet/openai-whisper-asr-webservice:latest
+```
+
+or
+
+```sh
+docker run -d -p 9000:9000 -e ASR_MODEL_PATH=/data/whisper -v ./yourlocaldir:/data/whisper onerahmet/openai-whisper-asr-webservice:latest
 ```
````
```diff
@@ -10,7 +10,7 @@ from faster_whisper import WhisperModel
 from .utils import ResultWriter, WriteTXT, WriteSRT, WriteVTT, WriteTSV, WriteJSON
 
 model_name = os.getenv("ASR_MODEL", "base")
-model_path = os.getenv("ASR_MODEL_PATH", "/root/.cache/whisper")
+model_path = os.getenv("ASR_MODEL_PATH", os.path.join(os.path.expanduser("~"), ".cache", "whisper"))
 
 if torch.cuda.is_available():
     model = WhisperModel(model_size_or_path=model_name, device="cuda", compute_type="float32", download_root=model_path)
```
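The hunk above replaces the hard-coded `/root/.cache/whisper` default with a path expanded for the current user, which an explicit `ASR_MODEL_PATH` still overrides. A minimal standalone sketch of the same pattern for the Faster Whisper backend (the `"base"` model size, CPU settings, and `audio.wav` input are illustrative assumptions, not part of the commit):

```python
import os

from faster_whisper import WhisperModel

# Same fallback as the diff: an explicit ASR_MODEL_PATH wins; otherwise use
# ~/.cache/whisper expanded for the current user (/root/.cache/whisper when
# the service runs as root inside the container).
model_path = os.getenv(
    "ASR_MODEL_PATH",
    os.path.join(os.path.expanduser("~"), ".cache", "whisper"),
)

# download_root tells faster-whisper where to store and later find the model
# files, so a bind-mounted model_path survives container restarts.
model = WhisperModel(
    model_size_or_path="base",  # illustrative; the service reads ASR_MODEL
    device="cpu",
    compute_type="int8",
    download_root=model_path,
)

segments, info = model.transcribe("audio.wav")  # hypothetical input file
print(info.language, " ".join(segment.text for segment in segments))
```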
```diff
@@ -8,10 +8,12 @@ import whisper
 from whisper.utils import ResultWriter, WriteTXT, WriteSRT, WriteVTT, WriteTSV, WriteJSON
 
 model_name = os.getenv("ASR_MODEL", "base")
+model_path = os.getenv("ASR_MODEL_PATH", os.path.join(os.path.expanduser("~"), ".cache", "whisper"))
 
 if torch.cuda.is_available():
-    model = whisper.load_model(model_name).cuda()
+    model = whisper.load_model(model_name, download_root=model_path).cuda()
 else:
-    model = whisper.load_model(model_name)
+    model = whisper.load_model(model_name, download_root=model_path)
 
 model_lock = Lock()
```
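`whisper.load_model` already accepts a `download_root` argument, so the change for the openai-whisper backend only threads the configurable path through to it. A short sketch of the resulting caching behaviour (the model size and directory listing are illustrative, not from the commit):

```python
import os

import whisper

# Resolve the cache directory the same way the service does after this commit.
model_path = os.getenv("ASR_MODEL_PATH", os.path.join(os.path.expanduser("~"), ".cache", "whisper"))

# The first call downloads the checkpoint into model_path; later calls with
# the same download_root load it from disk instead of re-downloading.
model = whisper.load_model("base", download_root=model_path)
print(sorted(os.listdir(model_path)))  # e.g. ['base.pt'] once cached
```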
```diff
@@ -15,13 +15,12 @@ services:
     environment:
       - ASR_MODEL=base
     ports:
-      - 9000:9000
+      - "9000:9000"
     volumes:
       - ./app:/app/app
       - cache-pip:/root/.cache/pip
       - cache-poetry:/root/.cache/poetry
-      - cache-whisper:/root/.cache/whisper
-      - cache-faster-whisper:/root/.cache/faster_whisper
+      - cache-whisper:~/.cache/whisper
 
 volumes:
   cache-pip:
```
```diff
@@ -8,13 +8,12 @@ services:
     environment:
       - ASR_MODEL=base
     ports:
-      - 9000:9000
+      - "9000:9000"
     volumes:
       - ./app:/app/app
       - cache-pip:/root/.cache/pip
       - cache-poetry:/root/.cache/poetry
-      - cache-whisper:/root/.cache/whisper
-      - cache-faster-whisper:/root/.cache/faster_whisper
+      - cache-whisper:~/.cache/whisper
 
 volumes:
   cache-pip:
```