To run DB-GPT with the DeepSeek proxy, you must provide your DeepSeek API key in the `configs/dbgpt-proxy-deepseek.toml` configuration file.
You can specify your embedding model in the `configs/dbgpt-proxy-deepseek.toml` configuration file; the default embedding model is `BAAI/bge-large-zh-v1.5`. If you want to use another embedding model, modify the `[[models.embeddings]]` section of that file and specify the embedding model's `name` and `provider`. The provider can be `hf`.

Finally, you need to append `--extra "hf"` to the end of the dependency installation command. Here's the updated command:
```bash
uv sync --all-packages \
--extra "base" \
--extra "proxy_openai" \
--extra "rag" \
--extra "storage_chromadb" \
--extra "dbgpts" \
--extra "hf"
```
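For example, switching to a different Hugging Face embedding model only requires changing the `[[models.embeddings]]` section described above; the model name below is just an illustrative choice:

```toml
[[models.embeddings]]
# Any Hugging Face embedding model can go here (illustrative choice)
name = "BAAI/bge-m3"
provider = "hf"
```
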
**Model Configurations**:
```toml
# Model Configurations
[models]
[[models.llms]]
name = "deepseek-reasoner"
provider = "proxy/deepseek"
# Replace with your own DeepSeek API key
api_key = "your-deepseek-api-key"
```
```bash
# Use uv to install dependencies needed for GLM4
# Install core dependencies and select desired extensions
uv sync --all-packages \
--extra "base" \
--extra "hf" \
--extra "rag" \
--extra "storage_chromadb" \
--extra "quant_bnb" \
--extra "dbgpts"
```

Then run the following command to start the webserver:

```bash
uv run dbgpt start webserver --config configs/dbgpt-local-glm.toml
```
</TabItem>

<TabItem value="vllm" label="VLLM(local)">

```bash
# Use uv to install dependencies needed for vllm
# Install core dependencies and select desired extensions
uv sync --all-packages \
--extra "base" \
--extra "vllm" \
--extra "rag" \
--extra "storage_chromadb" \
--extra "quant_bnb" \
--extra "dbgpts"
```

### Run Webserver

To run DB-GPT with a local model, you can modify the `configs/dbgpt-local-vllm.toml` configuration file to specify the model path and other parameters.

```toml
# Model Configurations
[models]
[[models.llms]]
name = "THUDM/glm-4-9b-chat-hf"
provider = "vllm"
# If not provided, the model will be downloaded from the Hugging Face model hub
# uncomment the following line to specify the model path in the local file system
# path = "the-model-path-in-the-local-file-system"
[[models.embeddings]]
name = "BAAI/bge-large-zh-v1.5"
provider = "hf"
```
In the above configuration file, `[[models.llms]]` specifies the LLM model, and `[[models.embeddings]]` specifies the embedding model. If you do not provide the `path` parameter, the model will be downloaded from the Hugging Face model hub according to the `name` parameter.
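
For instance, to load the model from local disk instead, uncomment the `path` line; the directory below is only an illustrative location:

```toml
[[models.llms]]
name = "THUDM/glm-4-9b-chat-hf"
provider = "vllm"
# Hypothetical local directory containing the downloaded model weights
path = "/data/models/glm-4-9b-chat-hf"
```
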
Then run the following command to start the webserver:
```bash
uv run dbgpt start webserver --config configs/dbgpt-local-vllm.toml
```

</TabItem>

<TabItem value="llama_cpp" label="LLAMA_CPP(local)">

```bash
# Use uv to install dependencies needed for llama-cpp
# Install core dependencies and select desired extensions
uv sync --all-packages \
--extra "base" \
--extra "llama_cpp" \
--extra "rag" \
--extra "storage_chromadb" \
--extra "quant_bnb" \
--extra "dbgpts"
```
### Run Webserver

To run DB-GPT with a local model, you can modify the `configs/dbgpt-local-llama-cpp.toml` configuration file to specify the model path and other parameters.

```toml
# Model Configurations
[models]
[[models.llms]]
name = "DeepSeek-R1-Distill-Qwen-1.5B"
provider = "llama.cpp"
# If not provided, the model will be downloaded from the Hugging Face model hub
# uncomment the following line to specify the model path in the local file system
# path = "the-model-path-in-the-local-file-system"
[[models.embeddings]]
name = "BAAI/bge-large-zh-v1.5"
provider = "hf"
```
In the above configuration file, `[[models.llms]]` specifies the LLM model, and `[[models.embeddings]]` specifies the embedding model. If you do not provide the `path` parameter, the model will be downloaded from the Hugging Face model hub according to the `name` parameter.
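
With llama.cpp, if the model is stored as a quantized GGUF file, the `path` parameter can point directly at it; the file path below is only an illustration:

```toml
[[models.llms]]
name = "DeepSeek-R1-Distill-Qwen-1.5B"
provider = "llama.cpp"
# Hypothetical local path to a quantized GGUF file
path = "/data/models/DeepSeek-R1-Distill-Qwen-1.5B-Q4_K_M.gguf"
```
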
Then run the following command to start the webserver:
```bash
uv run dbgpt start webserver --config configs/dbgpt-local-llama-cpp.toml
```