I need to install Nginx on a target machine that has no internet connection. How can I install Nginx with all of its dependencies in offline mode? Thanks in advance for your answers.
I have recently gone through this procedure, and this is what worked for me on CentOS 7:
You need an online Linux server to download the dependencies. You can use a virtual machine or anything else.
On your online server, create a .sh file and copy the script below into it (I named it download_dependencies.sh).
#!/bin/bash
# This script is used to fetch external packages that are not available in standard Linux distribution
# Example: ./fetch-external-dependencies ubuntu18.04
# Script will create nms-dependencies-ubuntu18.04.tar.gz in local directory which can be copied
# into target machine and packages inside can be installed manually
set -eo pipefail
# current dir
PACKAGE_PATH="."
mkdir -p $PACKAGE_PATH
declare -A CLICKHOUSE_REPO
CLICKHOUSE_REPO['ubuntu18.04']="https://repo.clickhouse.tech/deb/lts/main"
CLICKHOUSE_REPO['ubuntu20.04']="https://repo.clickhouse.tech/deb/lts/main"
CLICKHOUSE_REPO['centos7']="https://repo.clickhouse.tech/rpm/lts/x86_64"
CLICKHOUSE_REPO['centos8']="https://repo.clickhouse.tech/rpm/lts/x86_64"
CLICKHOUSE_REPO['rhel7']="https://repo.clickhouse.tech/rpm/lts/x86_64"
CLICKHOUSE_REPO['rhel8']="https://repo.clickhouse.tech/rpm/lts/x86_64"
declare -A NGINX_REPO
NGINX_REPO['ubuntu18.04']="https://nginx.org/packages/mainline/ubuntu/pool/nginx/n/nginx/"
NGINX_REPO['ubuntu20.04']="https://nginx.org/packages/mainline/ubuntu/pool/nginx/n/nginx/"
NGINX_REPO['centos7']="https://nginx.org/packages/mainline/centos/7/x86_64/RPMS/"
NGINX_REPO['centos8']="https://nginx.org/packages/mainline/centos/8/x86_64/RPMS/"
NGINX_REPO['rhel7']="https://nginx.org/packages/mainline/rhel/7/x86_64/RPMS/"
NGINX_REPO['rhel8']="https://nginx.org/packages/mainline/rhel/8/x86_64/RPMS/"
CLICKHOUSE_KEY="https://repo.clickhouse.com/CLICKHOUSE-KEY.GPG"
NGINX_KEY="https://nginx.org/keys/nginx_signing.key"
declare -A CLICKHOUSE_PACKAGES
# for Clickhouse, package names are static between distributions
# we use ubuntu/centos entries as placeholders
CLICKHOUSE_PACKAGES['ubuntu']="
clickhouse-server_21.3.10.1_all.deb
clickhouse-common-static_21.3.10.1_amd64.deb"
CLICKHOUSE_PACKAGES['centos']="
clickhouse-server-21.3.10.1-2.noarch.rpm
clickhouse-common-static-21.3.10.1-2.x86_64.rpm"
CLICKHOUSE_PACKAGES['ubuntu18.04']=${CLICKHOUSE_PACKAGES['ubuntu']}
CLICKHOUSE_PACKAGES['ubuntu20.04']=${CLICKHOUSE_PACKAGES['ubuntu']}
CLICKHOUSE_PACKAGES['centos7']=${CLICKHOUSE_PACKAGES['centos']}
CLICKHOUSE_PACKAGES['centos8']=${CLICKHOUSE_PACKAGES['centos']}
CLICKHOUSE_PACKAGES['rhel7']=${CLICKHOUSE_PACKAGES['centos']}
CLICKHOUSE_PACKAGES['rhel8']=${CLICKHOUSE_PACKAGES['centos']}
declare -A NGINX_PACKAGES
NGINX_PACKAGES['ubuntu18.04']="nginx_1.21.3-1~bionic_amd64.deb"
NGINX_PACKAGES['ubuntu20.04']="nginx_1.21.2-1~focal_amd64.deb"
NGINX_PACKAGES['centos7']="nginx-1.21.4-1.el7.ngx.x86_64.rpm"
NGINX_PACKAGES['centos8']="nginx-1.21.4-1.el8.ngx.x86_64.rpm"
NGINX_PACKAGES['rhel7']="nginx-1.21.4-1.el7.ngx.x86_64.rpm"
NGINX_PACKAGES['rhel8']="nginx-1.21.4-1.el8.ngx.x86_64.rpm"
download_packages() {
local target_distribution=$1
if [ -z "$target_distribution" ]; then
echo "$0 - no target distribution specified"
exit 1
fi
mkdir -p "${PACKAGE_PATH}/${target_distribution}"
# just in case delete all files in target dir
rm -f "${PACKAGE_PATH}/${target_distribution}/*"
readarray -t clickhouse_files <<<"${CLICKHOUSE_PACKAGES[${target_distribution}]}"
readarray -t nginx_files <<<"${NGINX_PACKAGES[${target_distribution}]}"
echo "Downloading Clickhouse signing keys"
curl -fs ${CLICKHOUSE_KEY} --output "${PACKAGE_PATH}/${target_distribution}/clickhouse-key.gpg"
echo "Downloading Nginx signing keys"
curl -fs ${NGINX_KEY} --output "${PACKAGE_PATH}/${target_distribution}/nginx-key.gpg"
for package_file in "${clickhouse_files[#]}"; do
if [ -z $package_file ]; then
continue
fi
file_url="${CLICKHOUSE_REPO[$target_distribution]}/$package_file"
save_file="${PACKAGE_PATH}/${target_distribution}/$package_file"
echo "Fetching $file_url"
curl -fs $file_url --output $save_file
done
for package_file in "${nginx_files[#]}"; do
if [ -z $package_file ]; then
continue
fi
file_url="${NGINX_REPO[$target_distribution]}/$package_file"
save_file="${PACKAGE_PATH}/${target_distribution}/$package_file"
echo "Fetching $file_url"
curl -fs $file_url --output $save_file
done
bundle_file="${PACKAGE_PATH}/nms-dependencies-${target_distribution}.tar.gz"
tar -zcf $bundle_file -C "${PACKAGE_PATH}/${target_distribution}" .
echo "Bundle file saved as $bundle_file"
}
target_distribution=$1
if [ -z "$target_distribution" ]; then
echo "Usage: $0 target_distribution"
echo "Supported target distributions: ${!CLICKHOUSE_REPO[@]}"
exit 1
fi
# check if target distribution is supported
if [ -z "${CLICKHOUSE_REPO[$target_distribution]}" ]; then
echo "Target distribution is not supported."
echo "Supported distributions: ${!CLICKHOUSE_REPO[@]}"
exit 1
fi
download_packages "${target_distribution}"
Then, in the same directory that contains download_dependencies.sh, run the command below:
bash download_dependencies.sh <your linux version>
In my case, I ran the command below (leave the argument blank to see the supported options):
bash download_dependencies.sh centos7
It should start downloading, and when it finishes you should see nms-dependencies-centos7.tar.gz in your directory.
Copy that file (.tar.gz) to your offline target.
Now, on your target machine, go to the directory where you copied the file and run the commands below:
tar -zxvf nms-dependencies-centos7.tar.gz
sudo yum install *.rpm
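If the offline machine still has remote repository definitions configured, yum may try to contact them. A variant that forces a purely local install (a sketch, assuming every required .rpm is sitting in the current directory):
sudo yum --disablerepo='*' localinstall ./*.rpm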
After installation, you can start ClickHouse and Nginx using systemctl:
sudo systemctl start clickhouse-server
sudo systemctl start nginx
Your Nginx service should now be running!
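To confirm both services came up, a quick sanity check (a sketch; the curl call assumes Nginx is listening on the default port 80):
systemctl status nginx clickhouse-server
curl -I http://localhost/   # expect an HTTP response with a "Server: nginx" header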
You can download the tar file on another system and copy it to the offline machine.
Did you try this link?
https://gist.github.com/taufiqibrahim/d7f697de6bb8b93ca348a5b94d6adbfc
When using WP-CLI in Docker, I need to execute it as root.
I am adding the --allow-root flag directly in .bashrc, and I am trying to figure out why it doesn't work.
FROM webdevops/php-dev:7.3
# configure postfix to use mailhog
RUN postconf -e "relayhost = mail:1025"
# install wp cli
RUN curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar && \
chmod +x wp-cli.phar && \
mv wp-cli.phar /usr/local/bin/wp && \
echo 'wp() {' >> ~/.bashrc && \
echo '/usr/local/bin/wp "$@" --allow-root' >> ~/.bashrc && \
echo '}' >> ~/.bashrc
WORKDIR /var/www/html/
my .bashrc
# ~/.bashrc: executed by bash(1) for non-login shells.
# Note: PS1 and umask are already set in /etc/profile. You should not
# need this unless you want different defaults for root.
# PS1='${debian_chroot:+($debian_chroot)}\h:\w\$ '
# umask 022
# You may uncomment the following lines if you want `ls' to be colorized:
# export LS_OPTIONS='--color=auto'
# eval "`dircolors`"
# alias ls='ls $LS_OPTIONS'
# alias ll='ls $LS_OPTIONS -l'
# alias l='ls $LS_OPTIONS -lA'
#
# Some more alias to avoid making mistakes:
# alias rm='rm -i'
# alias cp='cp -i'
# alias mv='mv -i'
wp() {
/usr/local/bin/wp "$@" --allow-root
}
When I try to execute any wp command, I get this error:
Error: YIKES! It looks like you're running this as root. You probably meant to run this as the user that your WordPress installation exists under.
If you REALLY mean to run this as root, we won't stop you, but just bear in mind that any code on this site will then have full control of your server, making it quite DANGEROUS.
If you'd like to continue as root, please run this again, adding this flag: --allow-root
If you'd like to run it as the user that this site is under, you can run the following to become the respective user:
sudo -u USER -i -- wp <command>
It looks like the command doesn't pick up what I put into .bashrc.
Does anyone have a suggestion for how to fix this problem?
You are struggling with the classic conundrum: what goes in .bashrc, what goes in .bash_profile, and which one is loaded when?
The extremely short version is:
$HOME/.bash_profile: read by login shells. Should always source $HOME/.bashrc. Should only contain environment variables that can be passed on to child processes.
$HOME/.bashrc: read only by interactive shells that are not login shells
(e.g. opening a terminal in X). Should only contain aliases and functions.
How does this help the OP?
The OP executes the following line:
$ sudo -u USER -i -- wp <command>
The -i flag of the sudo command initiates a login shell:
-i, --login: Run the shell specified by the target user's password database entry as a login shell. This means that login-specific resource files such as .profile, .bash_profile or .login will be read by the shell. If a command is specified, it is passed to the shell for execution via the shell's -c option. If no command is specified, an interactive shell is executed.
So the OP initiates a login shell, which reads only the .bash_profile. The way to solve the problem is to source the .bashrc file from there, as is strongly recommended:
# .bash_profile
if [ -n "$BASH" ] && [ -r ~/.bashrc ]; then
. ~/.bashrc
fi
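If you want to see this behaviour for yourself, here is a throwaway sketch (the echo markers are assumptions added purely for the test, and it assumes .bash_profile does not yet source .bashrc; remove the markers afterwards):
echo 'echo "sourced: ~/.bash_profile"' >> ~/.bash_profile
echo 'echo "sourced: ~/.bashrc"' >> ~/.bashrc
bash --login -c true   # login shell: prints the .bash_profile marker
bash -i -c true        # interactive non-login shell: prints the .bashrc marker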
More info on dot-files:
http://mywiki.wooledge.org/DotFiles
man bash
What's the difference between .bashrc, .bash_profile, and .environment?
About .bash_profile, .bashrc, and where should alias be written in?
related posts:
Run nvm (bash function) via sudo
Can I run a command loaded from .bashrc with sudo?
I recently had the same problem. In my Dockerfile, I was running:
RUN wp core download && wp plugin install woocommerce --activate --allow-root
I looked at the error message, and thought that from the way it was worded, the --allow-root flag gets ignored the first time you use it. So I added it to the first wp command as well, and it worked.
RUN wp core download --allow-root && wp plugin install woocommerce --activate --allow-root
The problem is that ~/.bashrc is not being sourced. It will only be sourced in an interactive Bash shell.
You might get better results doing it via an executable wrapper script. Something like this:
# install wp cli
RUN curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar && \
chmod +x wp-cli.phar && \
mv wp-cli.phar /usr/local/bin/wp-cli.phar && \
echo '#!/bin/sh' >> /usr/local/bin/wp && \
echo 'wp-cli.phar "$@" --allow-root' >> /usr/local/bin/wp && \
chmod +x /usr/local/bin/wp
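A rough way to sanity-check the wrapper after swapping this snippet into the question's Dockerfile and rebuilding (the wp-dev tag is an assumption; the entrypoint override avoids depending on the base image's default entrypoint):
docker build -t wp-dev .
docker run --rm --entrypoint wp wp-dev --info   # the wrapper appends --allow-root, so the root warning should be gone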
I am getting the below error during platform installation:
"Required libaio package is not found. ..."
However, the above package is already installed:
rpm -q libaio
libaio-0.3.107-10.el6.x86_64
Here is output from the installation script:
./platform-setup-x64-linux-4.4.3.10393.sh
Unpacking JRE ...
Preparing JRE ...
Starting Installer ...
May 30, 2018 6:51:23 PM java.util.prefs.FileSystemPreferences$2 run
INFO: Created system preferences directory in java.home.
Verifying if the libaio package is installed. /opt/appdynamics/platform/installer/checkLibaio.sh
I got this too... I was running it from the command line as a non-root user:
./platform-setup-x64-linux-4.4.3.10393.sh -q -varfile /appd/home/Install/response.varfile
I added the shell trace switch (-x) and logged the output of the command like so:
bash -x ./platform-setup-x64-linux-4.4.3.10393.sh -q -varfile /appd/home/Install/response.varfile > install.log 2>&1
If we tail the last bit of that log, we get this response in debug mode:
Verifying if the libaio package is installed. /opt/appdynamics/platform/installer/checkLibaio.sh
Required libaio package is not found. For instructions on installing
the missing package, refer to https://docs.appdynamics.com/display/PRO44/Enterprise+Console+Requirements
and the script checkLibaio.sh isn't left there, so you cannot figure it out easily. I also have a Red Hat variant with the package installed:
rpm -qa | grep libaio
libaio-0.3.109-13.el7.x86_64
Strangely enough, I have one VM from the same image that will install the distribution just fine, and one that will not (the one where I really want to install it). On the broken install I ran another command from the expanded view in install.log, which was a really long JVM command line. Anyway, I got it to work, and then made a looping script to retrieve the file (because AppDynamics for some reason removes the check script before you can look at it). The script is as follows:
#!/bin/sh
# Script used to check if the machine has libaio on it or not.
cat /dev/null > /opt/appdynamics/platform/installer/.libaio_status
chmod 777 /opt/appdynamics/platform/installer/.libaio_status
# Check if the dpkg or rpm command exists before running it.
command -v dpkg >/dev/null 2>&1
OUT=$?
if [ $OUT -eq 0 ];
then
if [ `dpkg -l | grep -i libaio* | wc -l` -gt 0 ];
then
echo SUCCESS >> /opt/appdynamics/platform/installer/.libaio_status
exit 0
fi
else
command -v rpm >/dev/null 2>&1
OUT=$?
if [ $OUT -eq 0 ];
then
if [ `rpm -qa | grep -i libaio* | wc -l` -gt 0 ];
then
echo SUCCESS >> /opt/appdynamics/platform/installer/.libaio_status
exit 0
fi
fi
fi
echo FAILURE >> /opt/appdynamics/platform/installer/.libaio_status
exit 1
If you run this script like me on the faulty platform, what you will discover is that your version of Linux has both dpkg and rpm installed. To work around this, you should temporarily rename one of these two package manager executables so that it cannot be found (by your shell environment).
Most commonly you are running a Red Hat variant where someone chose to install dpkg (for who knows what reason). If so desired, remove that package and the install should be successful.
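A minimal sketch of that temporary rename on a Red Hat variant (the dpkg path and the installer arguments are taken from earlier in this thread; adjust them to your system):
sudo mv /usr/bin/dpkg /usr/bin/dpkg.disabled    # hide dpkg so the check falls through to rpm
./platform-setup-x64-linux-4.4.3.10393.sh -q -varfile /appd/home/Install/response.varfile
sudo mv /usr/bin/dpkg.disabled /usr/bin/dpkg    # restore dpkg afterwards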
For https://github.com/ellakcy/piwik-with-wordpress I am making a restore bash script in order to restore the backup generated by the https://github.com/ellakcy/piwik-with-wordpress/blob/master/scripts/pre-backup script.
The main idea is to take a path to a tarball containing the backup and recreate the folders where the volumes are mounted.
The script is the following:
#!/bin/bash
# Printing functions
black='\E[30;40m'
red='\E[31;40m'
green='\E[32;40m'
yellow='\E[33;40m'
blue='\E[34;40m'
magenta='\E[35;40m'
cyan='\E[36;40m'
white='\E[37;40m'
#Echo a string with color
cecho () # Color-echo.
# Argument $1 = message
# Argument $2 = color
{
local default_msg="No message passed."
# Doesn't really need to be a local variable.
message=${1:-$default_msg} # Defaults to default message.
color=${2:-$black} # Defaults to black, if not specified.
echo -e "$color"
echo "$message"
tput sgr0 # Reset to normal.
return
}
#Echo a string as error with color
cecho_err () # Color-echo.
# Argument $1 = message
# Argument $2 = color
{
local default_msg="No message passed."
# Doesn't really need to be a local variable.
message=${1:-$default_msg} # Defaults to default message.
color=${2:-$red} # Defaults to red, if not specified.
echo >&2 -e "$color"
echo >&2 "$message"
tput sgr0 # Reset to normal.
return
}
backup_file=${1}
cecho "Creating the correct folders" $cyan
cecho "Deleting data folder in order to recreate it" $red
sudo rm -rf ./data
mkdir ./data/
sudo chown root:root ./data/
sudo chmod 755 ./data/
if [ ! -d ./restore ]; then
mkdir ./restore/
fi
tar -xf ${backup_file} -C ./restore/
cecho "Restoring backup data for wordpress" $cyan
sudo mkdir ./data/wordpress
sudo chown root:root ./data/wordpress
sudo chmod 755 ./data/wordpress
sudo mv ./restore/wordpress/data/www ./data/wordpress/
sudo chown www-data:www-data ./data/wordpress/www
cecho "Restoring environment" $cyan
wordpress_env=$(tr '\n' ' ' <./restore/wordpress/env.txt)
echo ${wordpress_env}
cecho "Restoring database" $cyan
sudo mkdir ./data/wordpress/db
echo "sudo env ${wordpress_env} docker run --volume \"./data/wordpress/db\":/var/lib/mysql --volume ./restore/wordpress/db:/docker-entrypoint-initdb.d -e MYSQL_ROOT_PASSWORD=\$WORDPRESS_MYSQL_ROOT_PASSWORD -e MYSQL_DATABASE=\"wordpress\" -e MYSQL_USER=\$WORDPRESS_MYSQL_USER -e MYSQL_PASSWORD=\$WORDPRESS_MYSQL_PASSWORD mariadb" > ./restore_db.sh
chmod +x ./restore_db.sh
./restore_db.sh
# rm -rf ./restore_db.sh
rm -rf ./restore
And I get this error when I try to restore the database:
docker: Error response from daemon: create ./data/wordpress/db: "./data/wordpress/db" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed.
See 'docker run --help'.
As you can see, it generates a temporary script (that will later be deleted); one example of a generated script is:
sudo env WORDPRESS_MYSQL_ROOT_PASSWORD=passwd WORDPRESS_MYSQL_USER=wordpress WORDPRESS_MYSQL_PASSWORD=wordpress WORDPRESS_ADMIN_USER=admin WORDPRESS_ADMIN_PASSWORD=admin WORDPRESS_URL=http://0.0.0.0:8080 docker run --volume "./data/wordpress/db":/var/lib/mysql --volume ./restore/wordpress/db:/docker-entrypoint-initdb.d -e MYSQL_ROOT_PASSWORD=$WORDPRESS_MYSQL_ROOT_PASSWORD -e MYSQL_DATABASE="wordpress" -e MYSQL_USER=$WORDPRESS_MYSQL_USER -e MYSQL_PASSWORD=$WORDPRESS_MYSQL_PASSWORD mariadb
What is the best way to generate the correct volume data in ./data/wordpress/db so that it mounts onto the container's /var/lib/mysql?
When we specify --volume <host_dir>:<container_dir>, host_dir must be an absolute path. If it is not an absolute path, it is considered to be a volume name, hence the message about invalid characters for a local volume name. Try providing an absolute path for the host directory.
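For the script in the question, that means building the host side of each --volume from $(pwd) so Docker sees absolute paths instead of volume names. A sketch of the generated command with only that change (variable names as in the question):
sudo env ${wordpress_env} docker run \
  --volume "$(pwd)/data/wordpress/db":/var/lib/mysql \
  --volume "$(pwd)/restore/wordpress/db":/docker-entrypoint-initdb.d \
  -e MYSQL_ROOT_PASSWORD=$WORDPRESS_MYSQL_ROOT_PASSWORD \
  -e MYSQL_DATABASE="wordpress" \
  -e MYSQL_USER=$WORDPRESS_MYSQL_USER \
  -e MYSQL_PASSWORD=$WORDPRESS_MYSQL_PASSWORD \
  mariadb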
I have a simple Meteor 1.0 app that I want to deploy on my DigitalOcean Droplet. I can access this Droplet using SSH.
How can I deploy this app? Is there anything I should install, and what settings should I use on my Droplet?
I've used arunoda's solution to deploy to my DO Droplet:
https://github.com/arunoda/meteor-up
As described in the docs, after installing the module you'll get the mup command.
You can find detailed documentation on how to deploy here:
https://meteorhacks.com/deploy-a-meteor-app-into-a-server-or-a-vm.html
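Roughly, the workflow from those docs looks like this (a sketch; the app path is a placeholder, and the mup config still needs your Droplet's IP, SSH credentials and app details):
npm install -g mup
cd /path/to/your/meteor-app
mup init      # creates the mup config file to fill in with your Droplet details
mup setup     # prepares the Droplet (installs Node.js, MongoDB, etc.)
mup deploy    # bundles the app and pushes it to the Droplet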
All the solutions I found were not working well with Ubuntu 10.04. An easy solution is to simply write a bash script that sends the code to the remote server and reloads the Meteor application:
Share a public key between your development environment and the remote server (how-to here)
Create the following script file (myscript.sh) with the instructions below in it (make sure you edit the variables in the header!):
myscript.sh:
#!/bin/bash
#*************** ONLY EDIT THIS PART
SERVER='<SERVER_IP>'
PORT='22'
USERNAME="root"
PROJECT_NAME="<PROJECT_FOLDER_NAME>"
DESTINATION_PATH="</home/any_user/projects>"
ORIGIN_PATH="</home/any_user/projects/project_folder_name>"
COPY_METEOR_PACKAGES=FALSE
#******************
echo ""
echo "Deployment on $USERNAME#$SERVER:$PORT:$DESTINATION_PATH"
echo "Make sure to have a public key on the server! http://www.linuxproblem.org/art_9.html"
echo ""
#copy the files
if [ "$COPY_METEOR_PACKAGES" = TRUE ]; then
echo "Copy packages"
scp -P $PORT -r $ORIGIN_PATH $USERNAME@$SERVER:$DESTINATION_PATH
else
echo "Do not copy packages"
scp -P $PORT -r $ORIGIN_PATH/client $USERNAME@$SERVER:$DESTINATION_PATH
scp -P $PORT -r $ORIGIN_PATH/common $USERNAME@$SERVER:$DESTINATION_PATH
scp -P $PORT -r $ORIGIN_PATH/lib $USERNAME@$SERVER:$DESTINATION_PATH
scp -P $PORT -r $ORIGIN_PATH/public $USERNAME@$SERVER:$DESTINATION_PATH
scp -P $PORT -r $ORIGIN_PATH/server $USERNAME@$SERVER:$DESTINATION_PATH
fi
# reload meteor
ssh $USERNAME@$SERVER bash -c "'
cd $DESTINATION_PATH/$PROJECT_NAME
meteor
exit
'"
Useful info here:
Just run the script using the following command in your development console:
sh myscript.sh
Et voilà! When you run this script, it will copy the files and the packages (no need to transfer them every time) to the remote server of your choice using the SSH protocol, and it restarts the server in case it has crashed (it shouldn't, but it was the case for me).
#define program installation destination
%define app_destination /opt
%define app_name MY_APP_NAME
%define app_version 2.1
%define app_release 7%{?dist}
%define app_dir %{app_name}-%{app_version}
%define compress_file %{app_dir}.tar.gz
%define app_service_softlink /etc/init.d/%{app_name}
%define app_dir_softlink %{app_destination}/%{app_name}
Name: %{app_name}
Version: %{app_version}
Release: %{app_release}
Summary: MY APP ONE-SENTENCE SUMMARY %{app_version}
# An open source software license
License: GPLv3+
URL: http://www.starscriber.com/
Source0: http://ftp.gnu.org/gnu/%{compress_file}
%description
MY APP DESCRIPTION
%pre
#each time before install/upgrade RPM, check and remove the softlinks provided below
echo "pre..."
if [ -L %{app_service_softlink} ];then
rm %{app_service_softlink}
elif [ -f %{app_service_softlink} ];then
rm %{app_service_softlink}
fi
if [ -L %{app_dir_softlink} ]; then
rm %{app_dir_softlink}
elif [ -d %{app_dir_softlink} ]; then
rmdir %{app_dir_softlink}
fi
%prep
%setup -q
echo "prep..."
# Script commands to "build" the program (e.g. to compile it) and
# get it ready for installing. The program should come with
# instructions on how to do this.
%build
%install
echo "install..."
# uses relative paths
# creates buildroot/destination directory
mkdir -p %{buildroot}%{app_destination}
# copies tar.gz file from source directory to buildroot/destination directory
cp %{_sourcedir}/%{compress_file} %{buildroot}%{app_destination}
# changes directory to buildroot/destination
cd %{buildroot}%{app_destination}
# extracts compression file
tar xf %{compress_file}
# removes the compression file
rm -rf %{compress_file}
cd %{buildroot}%{app_destination}
# invoked after %post when the RPM pkg is removed or upgraded
%preun
echo "preun..."
#leftover cleanup
# invoked after %preun when the RPM pkg is removed or upgraded
%postun
echo "postun..."
if [ "$1" == "0" ]; then
rm -rf %{app_destination}/%{app_dir}
fi
if [ ! -d %{app_destination}/%{app_dir} ]; then
if [ -L %{app_service_softlink} ]; then
rm %{app_service_softlink}
elif [ -f %{app_service_softlink} ]; then
rm %{app_service_softlink}
fi
if [ -L %{app_dir_softlink} ]; then
rm %{app_dir_softlink}
elif [ -d %{app_dir_softlink} ]; then
rmdir %{app_dir_softlink}
fi
fi
%files
#all files under the provided folder will be gathered up to create RPM pkg
%{app_destination}/%{app_dir}/bin
%{app_destination}/%{app_dir}/conf
%{app_destination}/%{app_dir}/misc
%post
echo "post"
#symbolic link to the new appdir with version
echo "builds new symbolic link for the app folder"
ln -sf %{app_destination}/%{app_dir} %{app_dir_softlink}
echo "builds new symbolic link for the app service"
# make a symbolic for the service file using the new created softlink
ln -sf %{app_destination}/%{app_name}/misc/%{app_name} %{app_service_softlink}
I am trying to create my own RPM package, and here's the SPEC file. It works properly on install (rpm -ivh app-2.1-6.el6.x86_64.rpm), upgrade (rpm -Uvh app-2.1-7.el6.x86_64.rpm) and removal (rpm -e app-2.1-7.el6.x86_64.rpm).
For the RPM package app-2.1-7.el6.x86_64.rpm, the version is 2.1 and the release number is 7.
My question is: no matter how I modify the release number, install/upgrade/remove work properly, but if I modify the version number to 2.2 or 3.2, the previous version's folder (/opt/app-2.1) is not deleted. Can anyone help me: how should I delete the previous version's folder (/opt/app-2.1) when I upgrade (-Uvh) the RPM package?
The problem is that your package doesn't "own" the /opt/app-2.1 directory.
Just like tar, rpm will create all "missing" directories in order to install content on a path. But on erase, rpm will only remove directories that are mentioned explicitly in the %files manifest.
Short answer: if you want rpm --erase to remove a directory path, then mention it in %files.
Shorter answer: add
%dir /opt/app-%{version}
to %files. If the directory is empty (i.e. all other files in /opt/app-%{version} are "owned" and can be removed), then the "owned" /opt/app-%{version} directory will be removed as well.