
docker run: pinning specific GPUs, then pulling a model in the background


root@node37:/ollama# docker run -d --gpus '"device=2,3"' -v /ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
c12c23004c3957a8cba38376dbb17904db9381932f9578b2dd5de87794f40a9d
root@node37:/ollama# 
root@node37:/ollama# 
root@node37:/ollama# 
root@node37:/ollama# docker exec -it ollama /bin/bash
root@c12c23004c39:/# 
root@c12c23004c39:/# 
root@c12c23004c39:/# nvidia-smi
Thu Nov 14 06:05:34 2024       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.15              Driver Version: 550.54.15      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA L40                     Off |   00000000:82:00.0 Off |                    0 |
| N/A   37C    P8             34W /  300W |      18MiB /  46068MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  NVIDIA L40                     Off |   00000000:83:00.0 Off |                    0 |
| N/A   37C    P8             37W /  300W |      18MiB /  46068MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
+-----------------------------------------------------------------------------------------+
root@c12c23004c39:/# exit
Entering the container above confirms it is using the two idle GPUs, host devices 2 and 3 (renumbered 0 and 1 inside the container).
Next, run the model pull in the background so a dropped network connection does not kill it.
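A note on the `--gpus '"device=2,3"'` value used above, since the nested quoting is easy to get wrong. The sketch below only demonstrates what the shell actually hands to Docker; the commented `docker run` line mirrors the transcript.

```shell
# The nested quotes are deliberate: the outer single quotes keep the
# shell from stripping the inner double quotes, and Docker's CSV parser
# then reads device=2,3 as a single field. A bare --gpus device=2,3
# would be split at the comma and rejected.
GPU_ARG='"device=2,3"'
printf '%s\n' "$GPU_ARG"   # prints: "device=2,3"

# The run command from the transcript, using the variable:
#   docker run -d --gpus "$GPU_ARG" -v /ollama:/root/.ollama \
#     -p 11434:11434 --name ollama ollama/ollama
```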
===================
To delete the container:
docker stop ollama
docker rm ollama
If you do not know the container name, run docker ps first to find its ID, then:
docker stop <container-id>
docker rm <container-id>
=======================
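The steps above can be wrapped in a small helper. This is a hypothetical sketch, not from the article; the function name `cleanup` and the fallback message are my own.

```shell
# Hypothetical helper: stop and remove a container by name, looking up
# its ID first as described above (-a also matches stopped containers).
cleanup() {
  cid=$(docker ps -aq -f "name=$1")
  if [ -n "$cid" ]; then
    docker stop "$cid"
    docker rm "$cid"
  else
    echo "no container named $1" >&2
  fi
}

# Usage on the host (container name from the article):
#   cleanup ollama
```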

root@node37:/ollama# cat o.sh
docker exec -i ollama ollama run qwen2.5-coder:32b
root@node37:/ollama# chmod +x o.sh
root@node37:/ollama# nohup ./o.sh > ./o.log &
[1] 51844
root@node37:/ollama# nohup: ignoring input and redirecting stderr to stdout
pulling manifest 
pulling ac3d1ba8aa77...   0% ▕                ▏ 4.4 MB/ 19 GB  297 KB/s  18h32m^C

The unstable network is a real headache and forces repeated restarts; fortunately the download resumes from where it left off:
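Since the pull resumes after an interruption, the manual restarts can be automated with a retry loop. This is a hypothetical wrapper, not from the article; the demo uses a fake `flaky` command so the loop can be seen working without Docker, and the commented usage swaps in `ollama pull`, which only downloads the model without starting an interactive session.

```shell
#!/bin/sh
# Hypothetical retry wrapper: re-run a command until it exits 0.
# Each new attempt resumes the partially downloaded layers, so
# nothing already pulled is lost.
retry() {
  until "$@"; do
    echo "attempt failed, retrying in ${RETRY_DELAY:-10}s..." >&2
    sleep "${RETRY_DELAY:-10}"
  done
}

# Demo with a fake command that fails twice, then succeeds:
RETRY_DELAY=0
n=0
flaky() { n=$((n+1)); [ "$n" -ge 3 ]; }
retry flaky
echo "succeeded after $n attempts"   # prints: succeeded after 3 attempts

# On the host, the article's pull could then run as (assumed usage):
#   retry docker exec -i ollama ollama pull qwen2.5-coder:32b
```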

 


Original article: https://blog.csdn.net/jycjyc/article/details/143769930
