Docker Installation

Method 1: one-line install script

curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun

Method 2: standard install via yum

1. Install yum-utils, which provides the yum-config-manager tool:

yum install -y yum-utils

2. Configure a domestic mirror repository (Aliyun here):

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

3. Install Docker Engine (Community) and containerd:

yum install docker-ce docker-ce-cli containerd.io

Method 3: offline (binary) install

1. Check the kernel version and CPU architecture, then download the matching static build from the official site:

uname -r   # kernel version
uname -m   # architecture, e.g. x86_64 (the static bundles are organized by architecture)

Official download page: https://download.docker.com/linux/static/stable/
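A small sketch can compose the download URL from the machine's architecture; the version number below is a hypothetical example, so check the stable/ listing for the one you actually want:

```shell
# Compose the static-bundle URL for this machine (version is a placeholder)
ARCH=$(uname -m)   # e.g. x86_64 or aarch64
VER=20.10.9        # hypothetical; pick a real one from the stable/ listing
URL="https://download.docker.com/linux/static/stable/${ARCH}/docker-${VER}.tgz"
echo "$URL"
```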

Unpack the downloaded archive and move the binaries into /usr/bin:

# unpack
tar -zxvf docker-20.10.x.tgz
# move the extracted binaries into /usr/bin
mv docker/* /usr/bin/

2. Register with systemd: create the service unit file via vim /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target

3. Reload systemd and restart Docker:

systemctl daemon-reload
systemctl restart docker

Basic commands

Start the Docker service and enable it at boot:

systemctl start docker
systemctl enable docker
# list images
docker images
# remove an image
docker rmi <image_id>
# list running containers (add -a to include stopped ones)
docker container ls
# start / restart a container
docker start/restart <container_id>
# remove a container
docker rm <container_id>
# show a container's mounts
docker inspect <container_id> | grep "Mounts" -A 20

Running containers

Use -p to map individual ports; to expose all of a container's ports (sharing the host's network stack), use --net=host.

Tomcat8

Upload the app's root directory to the server, then map it into the container under /usr/local/tomcat/webapps/:

docker run --name={app_name} \
--net=host \
-v /root/tomcat/webapps/{app_name}:/usr/local/tomcat/webapps/{app_name} \
-d tomcat:8.5.38-jre8

MySQL

docker run --name=mysql \
-p 3306:3306 \
-v /root/mysql/conf:/etc/mysql/conf.d \
-v /root/mysql/logs:/logs \
-v /root/mysql/data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD='password' \
-d mysql:5.7 mysqld \
--innodb-buffer-pool-size=80M \
--character-set-server=utf8mb4 \
--collation-server=utf8mb4_unicode_ci \
--default-time-zone=+8:00 \
--lower-case-table-names=1
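Because /root/mysql/conf is mounted over /etc/mysql/conf.d, the command-line flags above could equally live in a config file there. A minimal sketch (the file name and values are illustrative, mirroring the flags used above):

```
# /root/mysql/conf/my.cnf — picked up from /etc/mysql/conf.d inside the container
[mysqld]
innodb_buffer_pool_size = 80M
character-set-server    = utf8mb4
collation-server        = utf8mb4_unicode_ci
default-time-zone       = '+8:00'
lower_case_table_names  = 1
```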

Redis

docker run --name=redis \
--net=host \
-d redis:latest redis-server \
--requirepass "password"
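Passing --requirepass on the command line leaves the password visible in docker inspect; an alternative sketch is to mount a config file instead (paths here are illustrative):

```
# /root/redis/redis.conf — mount with -v /root/redis/redis.conf:/etc/redis/redis.conf
# and start with: docker run ... -d redis:latest redis-server /etc/redis/redis.conf
requirepass password
appendonly yes
```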

ActiveMQ

# 8161: web console; 61616: broker port
docker run --name=activemq \
-p 8161:8161 \
-p 61616:61616 \
-v /root/activemq/data:/data/activemq \
-v /root/activemq/log:/var/log/activemq \
-e ACTIVEMQ_ADMIN_LOGIN=admin \
-e ACTIVEMQ_ADMIN_PASSWORD={password} \
-d webcenter/activemq:latest

MongoDB

docker run --name=mongodb \
-p 27017:27017 \
-v /root/mongodb/data:/data/db \
-d mongo:4.0.6 \
--auth

Enter the container and create the superadmin account:

docker exec -it mongodb mongo admin
db.createUser({ user: 'admin', pwd: 'password', roles: [ { role: "userAdminAnyDatabase", db: "admin" } ] });

Grant access to other databases:

db.auth("admin", "password");   // authenticate as the superadmin first
use yourdatabase;
db.createUser({user:'user', pwd:'password', roles:[{role:'dbOwner', db:'yourdatabase'}]});
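Clients then authenticate against that database. Building the connection URI in shell (all values are the placeholders from above):

```shell
# Connection URI for the user created above (placeholder values)
USER=user; PASS=password; HOST=127.0.0.1; DB=yourdatabase
URI="mongodb://${USER}:${PASS}@${HOST}:27017/${DB}"
echo "$URI"   # → mongodb://user:password@127.0.0.1:27017/yourdatabase
```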

Elasticsearch

docker run --name=es \
--net=host \
-v /root/es/data:/usr/share/elasticsearch/data \
-v /root/es/logs:/usr/share/elasticsearch/logs \
-v /root/es/plugins:/usr/share/elasticsearch/plugins \
-e "discovery.type=single-node" \
-e ES_JAVA_OPTS="-Xms8g -Xmx8g" \
-d elasticsearch:7.3.2

Setting an access password

Enter the container:

docker exec -it es /bin/bash

Edit the config file elasticsearch.yml and append the following:

xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true

Restart the container, then run the following to set the passwords:

bin/elasticsearch-setup-passwords interactive

Installing the ik analysis plugin

Run the following inside the container, then restart it; make sure the plugin version matches the ES version:

bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.3.2/elasticsearch-analysis-ik-7.3.2.zip
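Once the container is back up, the plugin can be smoke-tested through the _analyze API. The snippet below only prepares the request body; the curl call (assuming the password set interactively above and ES on localhost:9200) is left commented:

```shell
# Request body for an ik analyzer smoke test
BODY='{"analyzer":"ik_max_word","text":"中华人民共和国"}'
echo "$BODY"
# Send it once ES is reachable (replace the password with your own):
# curl -u elastic:password -H 'Content-Type: application/json' \
#   -X POST 'http://localhost:9200/_analyze?pretty' -d "$BODY"
```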

Kibana

Use the same version tag as the ES image:

docker pull kibana:7.3.2   

Create the config file /root/elk/kibana.yml on the host:

server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://172.17.0.1:9200" ] # the address as seen from inside the container
elasticsearch.username: "kibana" # the kibana account/password configured in ES
elasticsearch.password: "password"
xpack.monitoring.ui.container.elasticsearch.enabled: true

Run it:
docker run --name=kibana \
--net=host \
-v /root/elk/kibana.yml:/usr/share/kibana/config/kibana.yml \
--log-driver json-file \
--log-opt max-size=100m \
--log-opt max-file=2 \
--restart=always \
-d kibana:7.3.2

Nacos

Prepare the config file /root/nacos/conf/application.properties in advance.

docker run --name=nacos \
-p 8848:8848 \
-p 9848:9848 \
-p 9849:9849 \
-v /root/nacos/logs/:/home/nacos/logs \
-v /root/nacos/data/:/home/nacos/data \
-v /root/nacos/conf/application.properties:/home/nacos/conf/application.properties \
-e MODE=standalone \
-e JVM_XMS=512m \
-e JVM_XMX=512m \
-e JVM_XMN=256m \
-d nacos/nacos-server

Sample application.properties:

#
# Copyright 1999-2021 Alibaba Group Holding Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

#*************** Spring Boot Related Configurations ***************#
### Default web context path:
server.servlet.contextPath=/nacos
### Default web server port:
server.port=8848

#*************** Network Related Configurations ***************#
### If prefer hostname over ip for Nacos server addresses in cluster.conf:
# nacos.inetutils.prefer-hostname-over-ip=false

### Specify local server's IP:
# nacos.inetutils.ip-address=


#*************** Config Module Related Configurations ***************#
### If use MySQL as datasource:
# spring.datasource.platform=mysql

### Count of DB:
# db.num=1

### Connect URL of DB:
# db.url.0=jdbc:mysql://127.0.0.1:3306/nacos?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC
# db.user.0=nacos
# db.password.0=nacos

### Connection pool configuration: hikariCP
db.pool.config.connectionTimeout=30000
db.pool.config.validationTimeout=10000
db.pool.config.maximumPoolSize=20
db.pool.config.minimumIdle=2

#*************** Naming Module Related Configurations ***************#
### Data dispatch task execution period in milliseconds: Will removed on v2.1.X, replace with nacos.core.protocol.distro.data.sync.delayMs
# nacos.naming.distro.taskDispatchPeriod=200

### Data count of batch sync task: Will removed on v2.1.X. Deprecated
# nacos.naming.distro.batchSyncKeyCount=1000

### Retry delay in milliseconds if sync task failed: Will removed on v2.1.X, replace with nacos.core.protocol.distro.data.sync.retryDelayMs
# nacos.naming.distro.syncRetryDelay=5000

### If enable data warmup. If set to false, the server would accept request without local data preparation:
# nacos.naming.data.warmup=true

### If enable the instance auto expiration, kind like of health check of instance:
# nacos.naming.expireInstance=true

### will be removed and replaced by `nacos.naming.clean` properties
nacos.naming.empty-service.auto-clean=true
nacos.naming.empty-service.clean.initial-delay-ms=50000
nacos.naming.empty-service.clean.period-time-ms=30000

### Add in 2.0.0
### The interval to clean empty service, unit: milliseconds.
# nacos.naming.clean.empty-service.interval=60000

### The expired time to clean empty service, unit: milliseconds.
# nacos.naming.clean.empty-service.expired-time=60000

### The interval to clean expired metadata, unit: milliseconds.
# nacos.naming.clean.expired-metadata.interval=5000

### The expired time to clean metadata, unit: milliseconds.
# nacos.naming.clean.expired-metadata.expired-time=60000

### The delay time before push task to execute from service changed, unit: milliseconds.
# nacos.naming.push.pushTaskDelay=500

### The timeout for push task execute, unit: milliseconds.
# nacos.naming.push.pushTaskTimeout=5000

### The delay time for retrying failed push task, unit: milliseconds.
# nacos.naming.push.pushTaskRetryDelay=1000

### Since 2.0.3
### The expired time for inactive client, unit: milliseconds.
# nacos.naming.client.expired.time=180000

#*************** CMDB Module Related Configurations ***************#
### The interval to dump external CMDB in seconds:
# nacos.cmdb.dumpTaskInterval=3600

### The interval of polling data change event in seconds:
# nacos.cmdb.eventTaskInterval=10

### The interval of loading labels in seconds:
# nacos.cmdb.labelTaskInterval=300

### If turn on data loading task:
# nacos.cmdb.loadDataAtStart=false


#*************** Metrics Related Configurations ***************#
### Metrics for prometheus
#management.endpoints.web.exposure.include=*

### Metrics for elastic search
management.metrics.export.elastic.enabled=false
#management.metrics.export.elastic.host=http://localhost:9200

### Metrics for influx
management.metrics.export.influx.enabled=false
#management.metrics.export.influx.db=springboot
#management.metrics.export.influx.uri=http://localhost:8086
#management.metrics.export.influx.auto-create-db=true
#management.metrics.export.influx.consistency=one
#management.metrics.export.influx.compressed=true

#*************** Access Log Related Configurations ***************#
### If turn on the access log:
server.tomcat.accesslog.enabled=true

### The access log pattern:
server.tomcat.accesslog.pattern=%h %l %u %t "%r" %s %b %D %{User-Agent}i %{Request-Source}i

### The directory of access log:
server.tomcat.basedir=

#*************** Access Control Related Configurations ***************#
### If enable spring security, this option is deprecated in 1.2.0:
#spring.security.enabled=false

### The ignore urls of auth, is deprecated in 1.2.0:
nacos.security.ignore.urls=/,/error,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-ui/public/**,/v1/auth/**,/v1/console/health/**,/actuator/**,/v1/console/server/**

### The auth system to use, currently only 'nacos' and 'ldap' is supported:
nacos.core.auth.system.type=nacos

### If turn on auth system:
nacos.core.auth.enabled=false

### worked when nacos.core.auth.system.type=ldap,{0} is Placeholder,replace login username
# nacos.core.auth.ldap.url=ldap://localhost:389
# nacos.core.auth.ldap.userdn=cn={0},ou=user,dc=company,dc=com

### The token expiration in seconds:
nacos.core.auth.default.token.expire.seconds=18000

### The default token:
nacos.core.auth.default.token.secret.key=SecretKey012345678901234567890123456789012345678901234567890123456789

### Turn on/off caching of auth information. By turning on this switch, the update of auth information would have a 15 seconds delay.
nacos.core.auth.caching.enabled=true

### Since 1.4.1, Turn on/off white auth for user-agent: nacos-server, only for upgrade from old version.
nacos.core.auth.enable.userAgentAuthWhite=false

### Since 1.4.1, worked when nacos.core.auth.enabled=true and nacos.core.auth.enable.userAgentAuthWhite=false.
### The two properties is the white list for auth and used by identity the request from other server.
nacos.core.auth.server.identity.key=serverIdentity
nacos.core.auth.server.identity.value=security

#*************** Istio Related Configurations ***************#
### If turn on the MCP server:
nacos.istio.mcp.server.enabled=false

#*************** Core Related Configurations ***************#

### set the WorkerID manually
# nacos.core.snowflake.worker-id=

### Member-MetaData
# nacos.core.member.meta.site=
# nacos.core.member.meta.adweight=
# nacos.core.member.meta.weight=

### MemberLookup
### Addressing pattern category, If set, the priority is highest
# nacos.core.member.lookup.type=[file,address-server]
## Set the cluster list with a configuration file or command-line argument
# nacos.member.list=192.168.16.101:8847?raft_port=8807,192.168.16.101?raft_port=8808,192.168.16.101:8849?raft_port=8809
## for AddressServerMemberLookup
# Maximum number of retries to query the address server upon initialization
# nacos.core.address-server.retry=5
## Server domain name address of [address-server] mode
# address.server.domain=jmenv.tbsite.net
## Server port of [address-server] mode
# address.server.port=8080
## Request address of [address-server] mode
# address.server.url=/nacos/serverlist

#*************** JRaft Related Configurations ***************#

### Sets the Raft cluster election timeout, default value is 5 second
# nacos.core.protocol.raft.data.election_timeout_ms=5000
### Sets the amount of time the Raft snapshot will execute periodically, default is 30 minute
# nacos.core.protocol.raft.data.snapshot_interval_secs=30
### raft internal worker threads
# nacos.core.protocol.raft.data.core_thread_num=8
### Number of threads required for raft business request processing
# nacos.core.protocol.raft.data.cli_service_thread_num=4
### raft linear read strategy. Safe linear reads are used by default, that is, the Leader tenure is confirmed by heartbeat
# nacos.core.protocol.raft.data.read_index_type=ReadOnlySafe
### rpc request timeout, default 5 seconds
# nacos.core.protocol.raft.data.rpc_request_timeout_ms=5000

#*************** Distro Related Configurations ***************#

### Distro data sync delay time, when sync task delayed, task will be merged for same data key. Default 1 second.
# nacos.core.protocol.distro.data.sync.delayMs=1000

### Distro data sync timeout for one sync data, default 3 seconds.
# nacos.core.protocol.distro.data.sync.timeoutMs=3000

### Distro data sync retry delay time when sync data failed or timeout, same behavior with delayMs, default 3 seconds.
# nacos.core.protocol.distro.data.sync.retryDelayMs=3000

### Distro data verify interval time, verify synced data whether expired for a interval. Default 5 seconds.
# nacos.core.protocol.distro.data.verify.intervalMs=5000

### Distro data verify timeout for one verify, default 3 seconds.
# nacos.core.protocol.distro.data.verify.timeoutMs=3000

### Distro data load retry delay when load snapshot data failed, default 30 seconds.
# nacos.core.protocol.distro.data.load.retryDelayMs=30000

If authentication is enabled, the default credentials are nacos/nacos.

Sentinel

docker run --name=sentinel \
-p 8858:8858 \
-d bladex/sentinel-dashboard

Default credentials: sentinel/sentinel

Nginx

docker run --name=nginx \
--net=host \
-v /root/nginx/html:/usr/share/nginx/html \
-v /root/nginx/conf/nginx.conf:/etc/nginx/nginx.conf \
-v /root/nginx/logs:/var/log/nginx \
-v /root/nginx/conf.d:/etc/nginx/conf.d \
-d nginx

Note: when -v is specified, the host path shadows the container path, and if the host path does not already exist Docker creates it as an empty directory. To use custom configuration, put your files into nginx.conf and conf.d before starting the container; otherwise drop those two -v options so the image's default configuration is used.
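As a starting point for the conf.d mount, a minimal server block (file name, port, and paths are illustrative):

```
# /root/nginx/conf.d/default.conf — minimal illustrative server block
server {
    listen       80;
    server_name  localhost;

    location / {
        root   /usr/share/nginx/html;
        index  index.html;
    }
}
```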

GitLab

Pull the GitLab image

docker pull gitlab/gitlab-ce:latest

Run the container

docker run --name=gitlab \
-p 9980:80 \
-p 9922:22 \
-v /opt/docker/gitlab/config:/etc/gitlab \
-v /opt/docker/gitlab/logs:/var/log/gitlab \
-v /opt/docker/gitlab/data:/var/opt/gitlab \
-d gitlab/gitlab-ce

Note: the container itself uses port 22, so avoid --net=host here, or it will clash with the host's SSH port.

Adjust the configuration

Enter the container

docker exec -it gitlab /bin/bash

Edit gitlab.rb and set the GitLab external URL; use the host machine's IP address here

vi /etc/gitlab/gitlab.rb

# add the following:
# GitLab external URL; a domain name also works. Without an explicit port it defaults to 80
external_url 'http://192.168.124.194'
# SSH host IP
gitlab_rails['gitlab_ssh_host'] = '192.168.124.194'
# SSH port
gitlab_rails['gitlab_shell_ssh_port'] = 9922

# quit vi, then apply the configuration
gitlab-ctl reconfigure

Change the web port:

vi /opt/gitlab/embedded/service/gitlab-rails/config/gitlab.yml

Find the following section and change the default port 80 to the custom one

...
gitlab:
  host: 192.168.124.194
  port: 80 # change this to 9980
  https: false
...

Restart GitLab, then leave the container

gitlab-ctl restart
exit

The service is then available at: http://192.168.124.194:9980/

The first login uses the root account; its initial password can be read with:

docker exec -it gitlab cat /etc/gitlab/initial_root_password

Resetting a password

User passwords can be set directly from the GitLab Rails console

# enter the container
docker exec -it gitlab /bin/bash

# open the Rails console
gitlab-rails console -e production

# look up the user with id 1 (the superadmin)
user = User.where(id: 1).first
# change the password to abc123456
user.password = 'abc123456'
# save
user.save!
# quit
exit

Miscellaneous

Set a container's time zone to China Standard Time

docker cp /usr/share/zoneinfo/Asia/Shanghai {container_id}:/etc/localtime
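The copy assumes the host has tzdata installed; a quick pre-check before running docker cp:

```shell
# Make sure the host actually has the zone file before copying it into a container
ZONE=/usr/share/zoneinfo/Asia/Shanghai
if [ -f "$ZONE" ]; then
    echo "ok: $ZONE"
else
    echo "missing: install tzdata first" >&2
fi
```

Many images also honor an environment variable such as -e TZ=Asia/Shanghai, though support varies by image.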

FAQ

Docker install fails on CentOS

If installing Docker produces a container-selinux dependency error like the following:

Error: Package: containerd.io-1.2.13-3.2.el7.x86_64 (docker-ce-stable)
Requires: container-selinux >= 2:2.74
Error: Package: 3:docker-ce-19.03.12-3.el7.x86_64 (docker-ce-stable)
Requires: container-selinux >= 2:2.74
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest

The cause is a missing or outdated container-selinux package, which the default yum repositories don't carry. Install the EPEL repository first, then install container-selinux, after which docker-ce installs normally:

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum install epel-release
yum makecache
yum install container-selinux