git clone https://github.com/nc2U/ibs
cd ibs
cd deploy
cp docker-compose.yml.tmpl docker-compose.yml
Check what must be defined in the docker-compose.yml file.
Enter the actual values for your environment as described in the following items. If you use a database image such as PostgreSQL or MariaDB with Docker, be sure to use the default port (5432 for PostgreSQL, 3306 for MariaDB).
- postgres:
- master:
- web:
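For orientation only, the values to review usually look something like the sketch below; the real keys, images, and variable names come from docker-compose.yml.tmpl, so treat everything here as a placeholder rather than the actual template contents.
# Illustrative sketch only -- the real structure comes from docker-compose.yml.tmpl
services:
  postgres:
    image: postgres:16                 # keep the default port (5432)
    environment:
      POSTGRES_DB: ibs                 # placeholder database name
      POSTGRES_USER: ibs_user          # placeholder user
      POSTGRES_PASSWORD: change-me     # placeholder password
  web:
    build: .                           # Django application image; path is an assumption
    environment:
      DATABASE_HOST: postgres          # service name above
      DATABASE_PASSWORD: change-me     # must match POSTGRES_PASSWORD
    depends_on:
      - postgres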
docker-compose up -d --build
The command below runs the python manage.py makemigrations and python manage.py migrate commands in sequence.
docker-compose exec web sh migrate.sh
docker-compose exec web python manage.py collectstatic
※ Place your Django project in the django directory and develop it there.
cd ..
cd app/vue3
pnpm i # npm i (or) yarn
To develop the Vue application, start the Node dev server:
pnpm dev # npm run dev (or) yarn dev
Or, to build the Vue application for deployment:
pnpm build # npm run build (or) yarn build
cd ..
cd app/svelte
pnpm i # npm i (or) yarn
To develop the Svelte application, start the Node dev server:
pnpm dev # npm run dev (or) yarn dev
Or, to build the Svelte application for deployment:
pnpm build # npm run build (or) yarn build
Configure a Kubernetes cluster by setting up the required number of nodes.
For the CI/CD server, use either the master node of the Kubernetes cluster or a separate server or PC.
If you use the master node of the cluster as the CI/CD server, enable external access via SSH and install Helm on the master node.
If you use a server or PC outside the cluster, configure it so it can be reached via SSH from outside, install Helm, and then copy the kubeconfig file into the user's home directory so that the server can access and control the master node.
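A minimal sketch of that kubeconfig step, assuming a kubeadm-based cluster whose master node is reachable as k8s-master (the host name, user, and file location are placeholders):
# On the CI/CD server; host, user, and paths are placeholders
mkdir -p ~/.kube
scp user@k8s-master:/etc/kubernetes/admin.conf ~/.kube/config   # admin.conf assumes kubeadm; may require root on the master
chmod 600 ~/.kube/config
kubectl get nodes   # should list the cluster nodes if access is working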
Note the IP address or domain that can be used to reach the CI/CD server.
For the NFS storage server, it is recommended to prepare a separate server if a large amount of data is expected, but you can also use the cluster's master node or the CI/CD server.
On the server to be used for storage, install the packages required by its operating system, run it as an NFS server, and connect it to the Kubernetes cluster nodes.
Also enable SSH access, and note the IP address or domain at which the server can be reached.
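As an illustration only, on a Debian/Ubuntu storage server the export later mounted by the NFS provisioner could be prepared roughly as follows; the export path matches the nfs.path used further below, while the network range is an assumption to adjust to your environment:
# On the storage server (Debian/Ubuntu example)
sudo apt-get update && sudo apt-get install -y nfs-kernel-server
sudo mkdir -p /mnt/nfs-subdir-external-provisioner
# Export to the cluster network; the CIDR is a placeholder
echo "/mnt/nfs-subdir-external-provisioner 10.0.0.0/24(rw,sync,no_subtree_check,no_root_squash)" | sudo tee -a /etc/exports
sudo exportfs -ra
# On every Kubernetes node, install the NFS client so the volumes can be mounted
sudo apt-get install -y nfs-common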
Secure the domain to be used for this project and point it at each cluster node.
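A simple way to confirm the DNS records are in place (using whatever you set as {DOMAIN_NAME} later):
dig +short {DOMAIN_NAME}   # should return the node / ingress IP address(es)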
Full installation instructions, including details on how to configure extra functionality in cert-manager, can be found in the installation docs.
Before installing the chart, you must first install the cert-manager CustomResourceDefinition resources. This is performed in a separate step to allow you to easily uninstall and reinstall cert-manager without deleting your installed custom resources.
$ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.18.2/cert-manager.crds.yaml
To install the chart with the release name cert-manager:
## Add the Jetstack Helm repository
$ helm repo add jetstack https://charts.jetstack.io --force-update
## Install the cert-manager helm chart
$ helm install cert-manager --namespace cert-manager --version v1.18.2 jetstack/cert-manager --create-namespace
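Before moving on, it is worth confirming that the cert-manager pods came up; the controller, cainjector, and webhook pods should reach a Running state:
kubectl get pods -n cert-manager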
Get repo info
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
Install chart
helm install [RELEASE_NAME] -n ingress-nginx ingress-nginx/ingress-nginx --create-namespace
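Likewise, a quick sanity check after the install ([RELEASE_NAME] as chosen above):
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx   # the controller Service should expose ports 80 and 443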
Use an existing GitHub account or create a new one and fork this project.
Afterward, go to Settings > Secrets and variables > Actions and click the 'New repository secret' button to create repository secrets with the keys and values below.
Go to the Actions tab in the GitHub repository.
Click 'Show more workflows...' at the bottom of all workflows, click _initial [Prod Step1], and then use a Kubernetes watch command on the CI/CD server to check whether the relevant pods are created and running normally.
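For example, a simple watch on the namespace used by the Helm release below:
kubectl get pods -n ibs-prod -w   # watch until the database pods reach Running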
When all database pods are running normally, click _initial [Prod Step2] at the bottom of all workflows in the Actions tab.
cd deploy/helm
if ! helm repo list | grep -q 'nfs-subdir-external-provisioner'; then
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
fi
if ! helm status nfs-subdir-external-provisioner -n kube-system >/dev/null 2>&1; then
helm upgrade --install nfs-subdir-external-provisioner \
nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
-n kube-system \
--set nfs.server={CICD_HOST} \
--set nfs.path=/mnt/nfs-subdir-external-provisioner
fi
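If the provisioner installed correctly, its pod should be running in kube-system and an NFS-backed StorageClass (nfs-client by the chart's default) should now exist:
kubectl get pods -n kube-system | grep nfs-subdir-external-provisioner
kubectl get storageclass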
kubectl apply -f deploy/kubectl/class-roles; cd deploy/helm
helm upgrade {DATABASE_USER} . -f ./values.yaml \
--install -n ibs-prod --create-namespace --history-max 5 --wait --timeout 10m \
--set global.dbPassword={DATABASE_PASS} \
--set global.cicdPath={CICD_PATH} \
--set global.cicdServerHost={CICD_HOST} \
--set global.nfsPath={NFS_PATH} \
--set global.nfsServerHost={NFS_HOST} \
--set global.domainHost={DOMAIN_HOST} \
--set global.emailHost={EMAIL_HOST} \
--set global.emailHostUser={EMAIL_HOST_USER} \
--set-string global.emailHostPassword='{EMAIL_HOST_PASSWORD}' \
--set global.defaultFromEmail={EMAIL_DEFAULT_FROM} \
--set postgres.auth.postgresPassword={DATABASE_PASS} \
--set postgres.auth.password={DATABASE_PASS} \
--set postgres.auth.replicationPassword={DATABASE_PASS} \
--set 'nginx.ingress.hosts[0].host'={DOMAIN_NAME} \
--set 'nginx.ingress.hosts[0].paths[0].path'=/ \
--set 'nginx.ingress.hosts[0].paths[0].pathType'=Prefix \
--set 'nginx.ingress.hosts[1].paths[0].path'=/ \
--set 'nginx.ingress.hosts[1].paths[0].pathType'=Prefix \
--set 'nginx.ingress.hosts[2].paths[0].path'=/ \
--set 'nginx.ingress.hosts[2].paths[0].pathType'=Prefix \
--set 'nginx.ingress.hosts[3].paths[0].path'=/ \
--set 'nginx.ingress.hosts[3].paths[0].pathType'=Prefix \
--set 'nginx.ingress.tls[0].hosts[0]'={DOMAIN_NAME} \
--set 'nginx.ingress.tls[0].secretName'=web-devbox-kr-cert # Replace each {PLACEHOLDER} with the corresponding value for your environment
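One way to confirm the release before continuing (release name and namespace as used in the command above):
helm status {DATABASE_USER} -n ibs-prod
kubectl get pods -n ibs-prod   # all pods should reach Running / Ready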
If all pods are running normally, run the following procedure.
The command below runs the python manage.py makemigrations and python manage.py migrate commands in sequence.
kubectl exec -it {web-pod} -- sh migrate.sh # Replace {web-pod} with the actual pod name.
kubectl exec -it {web-pod} -- python manage.py collectstatic # Replace {web-pod} with the actual pod name.
※ Place your Django project in the django directory and develop it there.
cd ..
cd app/vue3
pnpm i # npm i (or) yarn
To develop the Vue application, start the Node dev server:
pnpm dev # npm run dev (or) yarn dev
Or, to build the Vue application for deployment:
pnpm build # npm run build (or) yarn build
cd ..
cd app/svelte
pnpm i # npm i (or) yarn
To develop the Svelte application, start the Node dev server:
pnpm dev # npm run dev (or) yarn dev
Or, to build the Svelte application for deployment:
pnpm build # npm run build (or) yarn build