Setup Path: Remote
This guide walks you through setting up Whisper2Linux on the Akash Network. This setup gives you a scalable, globally accessible deployment, so you can run the application from anywhere.
Why Use the Akash Network?
The Akash Network is a decentralized network of Kubernetes clusters that supports GPU usage and permissionless deployments with the AKT token. By deploying Whisper2Linux on Akash, you gain access to a robust and scalable infrastructure. With as little as 0.5 AKT (around $1.25), you can start testing Whisper2Linux for about an hour, allowing you to explore its capabilities before deciding if remote hosting is a good fit for your needs.
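The "about an hour" figure can be sanity-checked. Akash leases draw down an escrow deposit once per block (roughly 6 seconds each) at whatever rate the winning provider bid. A minimal sketch, assuming an illustrative total bid of 1,000 uakt per block (actual bids vary by provider and resources; 1 AKT = 1,000,000 uakt):

```python
def runtime_hours(deposit_akt: float, bid_uakt_per_block: float,
                  block_secs: float = 6.1) -> float:
    """Hours a deposit lasts at a given total bid (1 AKT = 1_000_000 uakt)."""
    blocks = deposit_akt * 1_000_000 / bid_uakt_per_block
    return blocks * block_secs / 3600

# 0.5 AKT at the assumed 1,000 uakt/block bid lasts roughly 0.85 hours.
print(f"{runtime_hours(0.5, 1000):.2f} h")  # prints "0.85 h"
```

Note that the `amount` values in an SDL's pricing section are bid ceilings, not the price you pay; the actual drawdown follows the winning bid, which is usually much lower.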
Deploy with an LLM to Akash
To simplify the deployment process, follow these steps:
- Visit LLMHeaven.com:
  - Go to the LLMHeaven website.
- Select the Whisper2Linux Template:
  - Click on "Templates" in the sidebar.
  - Find and select the "whisper2linux" template.
- Deploy the Configuration:
  - Click "Deploy" and wait for the LLM to deploy the required configuration.
  - Make note of all the connection details provided after deployment.
- Update Configuration:
  - Edit the whisper2linux.py file on your local machine and replace the existing API URLs and other variables with the new connection details.
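After a deployment, the endpoint settings in whisper2linux.py end up looking something like the sketch below. The variable names and host/ports here are hypothetical, shown only to illustrate the shape of the change; use the names your copy of whisper2linux.py actually defines and the connection details reported for your lease.

```python
# Hypothetical endpoint settings in whisper2linux.py after an Akash
# deployment. "provider.example.akash.network" and the port numbers are
# placeholders: substitute the ingress host and mapped ports from your lease.
OLLAMA_API_URL = "http://provider.example.akash.network:30001"
WHISPER_ASR_URL = "http://provider.example.akash.network:30002"
TTS_API_URL = "http://provider.example.akash.network:30003"
```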
Deploy Manually to Akash
If you prefer to manually deploy Whisper2Linux to the Akash Network, follow the instructions below.
Step 1: Prepare the SDL
Use the following Service Definition Language (SDL) file to deploy all the required services for Whisper2Linux on the Akash Network. Copy and paste this SDL into the Akash Network Console.
---
version: "2.0"

services:
  whisper2linux-ollama:
    image: ollama/ollama
    expose:
      - port: 11434
        as: 11434
        to:
          - global: true
    command:
      - /bin/sh
      - -c
      - |
        ollama serve &
        while ! ollama pull mistral-nemo:12b; do
          echo "Waiting for ollama pull to succeed..."
          sleep 5
        done
        ollama list
        pkill ollama
        ollama serve
  whisper2linux-open-webui:
    image: ghcr.io/open-webui/open-webui
    expose:
      - port: 8080
        as: 80
        to:
          - global: true
    depends_on:
      - whisper2linux-ollama
    env:
      - OLLAMA_BASE_URL=http://whisper2linux-ollama:11434
      - WEBUI_SECRET_KEY=
  whisper2linux-openedai-speech-server:
    image: ghcr.io/matatonic/openedai-speech:latest
    expose:
      - port: 8000
        as: 8000
        to:
          - global: true
  whisper2linux-whisper-asr:
    image: onerahmet/openai-whisper-asr-webservice:latest-gpu
    expose:
      - port: 9000
        as: 9000
        to:
          - global: true
    env:
      - ASR_ENGINE=openai_whisper
      - ASR_MODEL=large-v3

profiles:
  compute:
    whisper2linux-ollama:
      resources:
        cpu:
          units: 8
        memory:
          size: 24Gi
        storage:
          - size: 12Gi
        gpu:
          units: 1
          attributes:
            vendor:
              nvidia:
    whisper2linux-open-webui:
      resources:
        cpu:
          units: 1
        memory:
          size: 4Gi
        storage:
          - size: 4Gi
    whisper2linux-openedai-speech-server:
      resources:
        cpu:
          units: 8
        memory:
          size: 8Gi
        storage:
          - size: 16Gi
        gpu:
          units: 1
          attributes:
            vendor:
              nvidia:
    whisper2linux-whisper-asr:
      resources:
        cpu:
          units: 8
        memory:
          size: 8Gi
        storage:
          - size: 16Gi
        gpu:
          units: 1
          attributes:
            vendor:
              nvidia:
  placement:
    akash:
      pricing:
        whisper2linux-ollama:
          denom: uakt
          amount: 10000
        whisper2linux-open-webui:
          denom: uakt
          amount: 10000
        whisper2linux-openedai-speech-server:
          denom: uakt
          amount: 10000
        whisper2linux-whisper-asr:
          denom: uakt
          amount: 10000
      signedBy:
        anyOf:
          - akash1365yvmc4s7awdyj3n2sav7xfx76adc6dnmlx63
          - akash18qa2a2ltfyvkyj0ggj3hkvuj6twzyumuaru9s4

deployment:
  whisper2linux-ollama:
    akash:
      profile: whisper2linux-ollama
      count: 1
  whisper2linux-open-webui:
    akash:
      profile: whisper2linux-open-webui
      count: 1
  whisper2linux-openedai-speech-server:
    akash:
      profile: whisper2linux-openedai-speech-server
      count: 1
  whisper2linux-whisper-asr:
    akash:
      profile: whisper2linux-whisper-asr
      count: 1
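Before pasting the SDL, it is easy to introduce a typo that leaves a service without a matching compute profile, pricing entry, or deployment block. A small cross-reference check (assumes PyYAML is installed: pip install pyyaml) can catch that locally:

```python
# Sanity-check an Akash SDL: every service should have a matching compute
# profile, a pricing entry under placement, and a deployment block that
# points back at its own profile. Returns a list of problems (empty = ok).
import yaml

def check_sdl(sdl_text: str) -> list[str]:
    doc = yaml.safe_load(sdl_text)
    problems = []
    for name in doc["services"]:
        if name not in doc["profiles"]["compute"]:
            problems.append(f"{name}: no compute profile")
        if name not in doc["profiles"]["placement"]["akash"]["pricing"]:
            problems.append(f"{name}: no pricing entry")
        dep = doc.get("deployment", {}).get(name)
        if not dep or dep["akash"].get("profile") != name:
            problems.append(f"{name}: missing or mismatched deployment entry")
    return problems
```

Usage: save the SDL as deploy.yaml and run `print(check_sdl(open("deploy.yaml").read()))`; an empty list means the four sections agree.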
Step 2: Deploy the Services
- Log in to the Akash Network Console:
  - Go to the Akash Network Console and log in with your account.
- Paste the SDL:
  - Paste the SDL file into the console to deploy the required services.
- Confirm Deployment:
  - Confirm and start your deployment. The services will be set up according to the configuration specified in the SDL.
- Get the provider ingress URL and ports for the API endpoints:
  - Once deployed, check the Akash Console for the port mapping to each API service.
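Once you have the ingress host and mapped ports, a quick reachability probe confirms the services came up before you wire them into whisper2linux.py. The host and ports below are placeholders, and the health paths are the ones these projects typically expose (Ollama's model list, FastAPI's docs page, and so on); verify them against the versions you deployed.

```python
# Probe each deployed service over HTTP. Hosts/ports are placeholders:
# substitute the ingress host and mapped ports from the Akash Console.
import urllib.request

SERVICES = {
    "ollama": "http://provider.example.akash.network:30001/api/tags",
    "open-webui": "http://provider.example.akash.network:30002/health",
    "whisper-asr": "http://provider.example.akash.network:30003/docs",
    "openedai-speech": "http://provider.example.akash.network:30004/v1/models",
}

def check(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL answers with an HTTP status below 400."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except OSError:  # DNS failure, refused connection, timeout, HTTP error
        return False

if __name__ == "__main__":
    for name, url in SERVICES.items():
        print(f"{name}: {'up' if check(url) else 'unreachable'}")
```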