Deploying Nexus on a Kubernetes Cluster and Migrating Old Data
2021-05-02 08:51
Contents
1. Environment
1.1 Kubernetes cluster environment
1.2 Storage environment
1.3 Nexus version
2. Deploying Nexus
3. Access check
4. Migrating old data
4.1 Same-version deployment and data sync
4.2 Version upgrade
This article expands on an older blog post of mine, adding some notes from the hands-on work at the time.
Nexus is a powerful Maven repository manager that makes it easy to run your own Maven repositories. It greatly simplifies maintaining internal repositories and accessing external ones: with Nexus you control access to, and deployment of, every artifact in your repositories from a single place. Nexus works out of the box without a database, organizing its data on the file system plus Lucene, and provides strong repository management, build, and search features. Its UI is built with ExtJS, it exposes complete REST APIs via Restlet, it integrates with Eclipse through m2eclipse, and it supports WebDAV and LDAP authentication.
1. Environment
1.1 Kubernetes cluster environment
# kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
# kubectl get nodes
NAME            STATUS                     ROLES    AGE    VERSION
k8s-master-01   Ready,SchedulingDisabled   master   198d   v1.15.0
k8s-master-02   Ready,SchedulingDisabled   master   198d   v1.15.0
k8s-node-01     Ready                      node     198d   v1.15.0
k8s-node-02     Ready                      node     155d   v1.15.0
k8s-node-03     Ready                      node     133d   v1.15.0
k8s-node-04     Ready                      node     198d   v1.15.0
1.2 Storage environment
The underlying storage for this Kubernetes cluster is NFS, and a StorageClass backed by NFS has been created so that PVs can be provisioned dynamically.
# kubectl get sc
NAME                            PROVISIONER      AGE
managed-nfs-storage (default)   fuseim.pri/ifs   198d
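As a quick sanity check that the StorageClass really provisions PVs dynamically (the claim name test-pvc and the 1Gi size below are throwaway values used only for this test), a temporary PVC against it should go to Bound on its own:
# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: [ "ReadWriteMany" ]
  storageClassName: managed-nfs-storage
  resources:
    requests:
      storage: 1Gi
EOF
# kubectl get pvc test-pvc
# kubectl delete pvc test-pvc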
1.3 Nexus version
Nexus version: 3.20.1
2. Deploying Nexus
Nexus is deployed from the official Docker image. It is worth looking at the official Dockerfile (published on GitHub) first: it runs the container process as the nexus user, declares /nexus-data as the Nexus data directory, passes the JVM options through the INSTALL4J_ADD_VM_PARAMS environment variable, and exposes port 8081. Its content is as follows:
FROM registry.access.redhat.com/ubi8/ubi
LABEL vendor=Sonatype \
maintainer="Sonatype <cloud-ops@sonatype.com>" \
com.sonatype.license="Apache License, Version 2.0" \
com.sonatype.name="Nexus Repository Manager base image"
ARG NEXUS_VERSION=3.20.1-01
ARG NEXUS_DOWNLOAD_URL=https://download.sonatype.com/nexus/3/nexus-${NEXUS_VERSION}-unix.tar.gz
ARG NEXUS_DOWNLOAD_SHA256_HASH=fba9953e70e2d53262d2bd953e5fbab3e44cf2965467df14a665b0752de30e51
# configure nexus runtime
ENV SONATYPE_DIR=/opt/sonatype
ENV NEXUS_HOME=${SONATYPE_DIR}/nexus \
NEXUS_DATA=/nexus-data \
NEXUS_CONTEXT='' \
SONATYPE_WORK=${SONATYPE_DIR}/sonatype-work \
DOCKER_TYPE='rh-docker'
ARG NEXUS_REPOSITORY_MANAGER_COOKBOOK_VERSION="release-0.5.20190212-155606.d1afdfe"
ARG NEXUS_REPOSITORY_MANAGER_COOKBOOK_URL="https://github.com/sonatype/chef-nexus-repository-manager/releases/download/${NEXUS_REPOSITORY_MANAGER_COOKBOOK_VERSION}/chef-nexus-repository-manager.tar.gz"
ADD solo.json.erb /var/chef/solo.json.erb
# Install using chef-solo
# Chef version locked to avoid needing to accept the EULA on behalf of whomever builds the image
RUN yum install -y --disableplugin=subscription-manager hostname procps \
&& curl -L https://www.getchef.com/chef/install.sh | bash -s -- -v 14.12.9 \
&& /opt/chef/embedded/bin/erb /var/chef/solo.json.erb > /var/chef/solo.json \
&& chef-solo \
--recipe-url ${NEXUS_REPOSITORY_MANAGER_COOKBOOK_URL} \
--json-attributes /var/chef/solo.json \
&& rpm -qa *chef* | xargs rpm -e \
&& rm -rf /etc/chef \
&& rm -rf /opt/chefdk \
&& rm -rf /var/cache/yum \
&& rm -rf /var/chef \
&& yum clean all
VOLUME ${NEXUS_DATA}
EXPOSE 8081
USER nexus
ENV INSTALL4J_ADD_VM_PARAMS="-Xms1200m -Xmx1200m -XX:MaxDirectMemorySize=2g -Djava.util.prefs.userRoot=${NEXUS_DATA}/javaprefs"
CMD ["sh", "-c", "${SONATYPE_DIR}/start-nexus-repository-manager.sh"]
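For reference, these are the same knobs you would set when running the image outside Kubernetes; a standalone docker run using them might look like this (the host path /data/nexus-data and the memory values are only examples):
# docker run -d --name nexus3 \
    -p 8081:8081 \
    -v /data/nexus-data:/nexus-data \
    -e INSTALL4J_ADD_VM_PARAMS="-Xms1200m -Xmx1200m -XX:MaxDirectMemorySize=2g -Djava.util.prefs.userRoot=/nexus-data/javaprefs" \
    sonatype/nexus3:3.20.1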
Based on the Dockerfile above, write the manifests for deploying Nexus into the Kubernetes cluster: a PV is provisioned dynamically through the NFS StorageClass to persist the Nexus data, and the service is exposed via NodePort.
# cat nexus3/nexus3.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: nexus3
  name: nexus3
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: nexus3
  template:
    metadata:
      labels:
        k8s-app: nexus3
      name: nexus3
      namespace: kube-system
    spec:
      containers:
      - name: nexus3
        image: sonatype/nexus3:3.20.1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8081
          name: web
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /
            port: 8081
          initialDelaySeconds: 540
          periodSeconds: 30
          failureThreshold: 6
        readinessProbe:
          httpGet:
            path: /
            port: 8081
          initialDelaySeconds: 540
          periodSeconds: 30
          failureThreshold: 6
        resources:
          limits:
            cpu: 1000m
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 512Mi
        volumeMounts:
        - name: nexus-data
          mountPath: /nexus-data
      volumes:
      - name: nexus-data
        persistentVolumeClaim:
          claimName: nexus-data-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nexus-data-pvc
  namespace: kube-system
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: "managed-nfs-storage"
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Service
metadata:
  name: nexus3
  namespace: kube-system
  labels:
    k8s-app: nexus3
spec:
  selector:
    k8s-app: nexus3
  type: NodePort
  ports:
  - name: web
    protocol: TCP
    port: 8081
    targetPort: 8081
    nodePort: 30005
Run kubectl apply to create the resources, then check the corresponding PV, PVC, and logs:
# kubectl apply -f nexus3.yaml
deployment.apps/nexus3 created
persistentvolumeclaim/nexus-data-pvc created
service/nexus3 created
# kubectl -n kube-system get pv,pvc|grep nexus
persistentvolume/pvc-70f810b4-824a-4c4c-8582-6253afe1a350 10Gi RWX Delete Bound kube-system/nexus-data-pvc managed-nfs-storage 1m
persistentvolumeclaim/nexus-data-pvc Bound pvc-70f810b4-824a-4c4c-8582-6253afe1a350 10Gi RWX managed-nfs-storage 1m
# kubectl -n kube-system get pods|grep nexus
nexus3-59c8f8759-sktfv 0/1 Running 0 2m
The first deployment of Nexus takes quite a long time because it has to initialize its data; the container has only finished starting once the lines below appear in the log. That is why the health-check delay (initialDelaySeconds) in the deployment YAML above is set to 540s, a value arrived at by testing.
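The startup progress can be followed from the pod logs (pod name taken from the earlier output); the container is up once the following lines appear:
# kubectl -n kube-system logs -f nexus3-59c8f8759-sktfv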
2020-02-06 10:41:52,109+0000 INFO [jetty-main-1] *SYSTEM org.eclipse.jetty.server.Server - Started @437947ms
2020-02-06 10:41:52,110+0000 INFO [jetty-main-1] *SYSTEM org.sonatype.nexus.bootstrap.jetty.JettyServer -
-------------------------------------------------
Started Sonatype Nexus OSS 3.20.1-01
-------------------------------------------------
3. Access check
Once the pod passes its health checks, Nexus can be reached through the NodePort. The first login prompts you to change the password; the default initial password is stored in the /nexus-data/admin.password file inside the container:
# kubectl -n kube-system exec -it nexus3-59c8f8759-sktfv -- cat /nexus-data/admin.password
fe8da3fb-b35b-4a8b-95f4-e39ccdc7f760
After logging in, you land in the Nexus web UI.
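If the page is not reachable, a quick check of the NodePort from outside the cluster helps narrow things down (<node-ip> below is a placeholder for any node's IP address):
# curl -I http://<node-ip>:30005/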
4. Migrating old data
This section records the main migration steps that were verified in practice, along with the pitfalls hit on the way.
Goal: migrate a Nexus 3.14.0 installed with Docker to the 3.20.1 installed on Kubernetes.
As for why a Nexus of the same version is deployed first: that conclusion came from stepping on the pitfalls. The migration only succeeds if you first deploy a same-version Nexus, restore the data into it, and then let it upgrade itself smoothly.
4.1 Same-version deployment and data sync
Deploy the same Nexus version into Kubernetes and synchronize the data. The deployment itself is not repeated here; changing the image tag in the manifests shown earlier is all that is needed.
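For example, assuming the manifest file from section 2 is still at hand, switching the tag and re-applying is enough (the sed pattern matches the image line as written earlier):
# sed -i 's|sonatype/nexus3:3.20.1|sonatype/nexus3:3.14.0|' nexus3/nexus3.yaml
# kubectl apply -f nexus3/nexus3.yaml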
The main backup and data-sync steps were as follows:
Step 1: On the source machine, back up the databases.
In the admin UI, go to System -> Tasks, click "Create task", choose "Admin - Export databases for backup", fill in a name and the backup location, set Task frequency to Manual, save the task, and run it once immediately.
Step 2: On the source machine, back up the blobs.
Go to /nexus-data/blobs and archive all of the directories there, for example as sketched below.
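On the source machine this can be as simple as the following (the archive name and the /tmp location are just examples):
# cd /nexus-data
# tar czvf /tmp/nexus-blobs.tar.gz blobs/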
Step 3: On the destination machine, import the databases.
Stop the Nexus service:
cd /opt/nexus/bin
./nexus stop
Delete the following directories under /nexus-data/db:
accesslog
analytics
audit
component
config
security
Then copy all of the files from the backup location chosen in step 1 into this directory (a shell sketch of the whole step follows below):
/nexus-data/restore-from-backup
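As a shell sketch of this step (/path/to/backup stands for whatever location was chosen in the step 1 task; the export task produces *.bak files):
# cd /nexus-data/db
# rm -rf accesslog analytics audit component config security
# mkdir -p /nexus-data/restore-from-backup
# cp /path/to/backup/*.bak /nexus-data/restore-from-backup/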
Step 4: On the destination machine, import the blobs.
Extract the archive created in step 2, preserving its layout, into the destination machine's /nexus-data/blobs.
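Continuing with the archive from step 2, the extraction could look like this (same example paths as before):
# tar xzvf /tmp/nexus-blobs.tar.gz -C /nexus-data/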
Restart Nexus on the destination machine:
cd /opt/nexus/bin
./nexus start
After the sync, however, Nexus failed to start and reported:
es version 1.3 but the latest supported by this version of nexus is 1.2
4.2 Version upgrade
To resolve the error above, switch the image of the Nexus deployed in Kubernetes and roll it from 3.14.0 to 3.20.1.
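One way to do the rolling switch (editing the image tag in the manifest and re-applying it works just as well):
# kubectl -n kube-system set image deployment/nexus3 nexus3=sonatype/nexus3:3.20.1
# kubectl -n kube-system rollout status deployment/nexus3
Once the new pod starts, the upgrade being applied shows up in the startup log: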
Begin upgrade
- - - - - - - - - - - - - - - - - - - - - - - - -
2020-02-07 07:09:38,289+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.upgrade.internal.UpgradeServiceImpl - Checkpoint security
2020-02-07 07:09:39,018+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.upgrade.internal.UpgradeServiceImpl - Checkpoint component
2020-02-07 07:09:40,659+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.upgrade.internal.UpgradeServiceImpl - Checkpoint config
2020-02-07 07:09:41,708+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.upgrade.internal.UpgradeServiceImpl -
- - - - - - - - - - - - - - - - - - - - - - - - -
Apply upgrade
- - - - - - - - - - - - - - - - - - - - - - - - -
2020-02-07 07:09:41,709+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.upgrade.internal.UpgradeServiceImpl - Upgrade security from 1.0 to 1.1
2020-02-07 07:09:41,710+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.upgrade.internal.UpgradeServiceImpl - Upgrade component from 1.12 to 1.13
2020-02-07 07:09:46,835+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.upgrade.internal.UpgradeServiceImpl - Upgrade component from 1.13 to 1.14
2020-02-07 07:09:51,597+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.upgrade.internal.UpgradeServiceImpl - Upgrade config from 1.5 to 1.6
2020-02-07 07:09:51,602+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.upgrade.internal.UpgradeServiceImpl - Upgrade config from 1.6 to 1.7
2020-02-07 07:09:51,714+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.upgrade.internal.UpgradeServiceImpl - Upgrade config from 1.7 to 1.8
2020-02-07 07:09:51,906+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.upgrade.internal.UpgradeServiceImpl - Upgrade pypi from 1.0 to 1.1
2020-02-07 07:09:52,012+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.upgrade.internal.UpgradeServiceImpl - Upgrade security from 1.1 to 1.2
2020-02-07 07:09:52,027+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.upgrade.internal.UpgradeServiceImpl - Upgrade security from 1.2 to 1.3
2020-02-07 07:09:52,032+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.upgrade.internal.UpgradeServiceImpl -
- - - - - - - - - - - - - - - - - - - - - - - - -
Commit upgrade
- - - - - - - - - - - - - - - - - - - - - - - - -
2020-02-07 07:09:52,033+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.upgrade.internal.UpgradeServiceImpl - Commit security
2020-02-07 07:09:52,034+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.upgrade.internal.UpgradeServiceImpl - Commit component
2020-02-07 07:09:52,034+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.upgrade.internal.UpgradeServiceImpl - Commit config
2020-02-07 07:09:52,034+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.upgrade.internal.UpgradeServiceImpl - Cleaning up security
2020-02-07 07:09:52,074+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.upgrade.internal.UpgradeServiceImpl - Cleaning up component
2020-02-07 07:09:52,216+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.upgrade.internal.UpgradeServiceImpl - Cleaning up config
2020-02-07 07:09:52,241+0000 INFO [FelixStartLevel] *SYSTEM org.sonatype.nexus.upgrade.internal.UpgradeServiceImpl -
- - - - - - - - - - - - - - - - - - - - - - - - -
Upgrade complete
These log entries show that the older Docker-based deployment has been migrated successfully to the newer version running on Kubernetes.