--allow-root doesn't work running WP-CLI in a Docker container

When using WP-CLI in Docker, I need to execute it as root.
I want to add the --allow-root flag automatically via a function in .bashrc, and I am trying to figure out why it doesn't work.
FROM webdevops/php-dev:7.3
# configure postfix to use mailhog
RUN postconf -e "relayhost = mail:1025"
# install wp cli
RUN curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar && \
chmod +x wp-cli.phar && \
mv wp-cli.phar /usr/local/bin/wp && \
echo 'wp() {' >> ~/.bashrc && \
echo '/usr/local/bin/wp "$@" --allow-root' >> ~/.bashrc && \
echo '}' >> ~/.bashrc
WORKDIR /var/www/html/
My .bashrc:
# ~/.bashrc: executed by bash(1) for non-login shells.
# Note: PS1 and umask are already set in /etc/profile. You should not
# need this unless you want different defaults for root.
# PS1='${debian_chroot:+($debian_chroot)}\h:\w\$ '
# umask 022
# You may uncomment the following lines if you want `ls' to be colorized:
# export LS_OPTIONS='--color=auto'
# eval "`dircolors`"
# alias ls='ls $LS_OPTIONS'
# alias ll='ls $LS_OPTIONS -l'
# alias l='ls $LS_OPTIONS -lA'
#
# Some more alias to avoid making mistakes:
# alias rm='rm -i'
# alias cp='cp -i'
# alias mv='mv -i'
wp() {
/usr/local/bin/wp "$@" --allow-root
}
When I try to execute any wp command, I get this error:
Error: YIKES! It looks like you're running this as root. You probably meant to run this as the user that your WordPress installation exists under.
If you REALLY mean to run this as root, we won't stop you, but just bear in mind that any code on this site will then have full control of your server, making it quite DANGEROUS.
If you'd like to continue as root, please run this again, adding this flag: --allow-root
If you'd like to run it as the user that this site is under, you can run the following to become the respective user:
sudo -u USER -i -- wp <command>
It looks like the command doesn't pick up what I put into .bashrc.
Does anyone have a suggestion for how to fix this problem?

You are struggling with the classic conundrum: what goes in .bashrc, what goes in .bash_profile, and which one is loaded when?
The extremely short version is:
$HOME/.bash_profile: read by login shells. Should always source $HOME/.bashrc. Should only contain environment variables that can be passed on to other processes.
$HOME/.bashrc: read only by interactive shells that are not login shells (e.g. opening a terminal in X). Should only contain aliases and functions.
How does this help the OP?
The OP executes the following line:
$ sudo -u USER -i -- wp <command>
The -i flag of the sudo command initiates a login shell:
-i, --login: Run the shell specified by the target user's password database entry as a login shell. This means that login-specific resource files such as .profile, .bash_profile or .login will be read by the shell. If a command is specified, it is passed to the shell for execution via the shell's -c option. If no command is specified, an interactive shell is executed.
So the OP initiates a login shell, which reads only .bash_profile. The way to solve the problem is to source the .bashrc file there, as is strongly recommended anyway:
# .bash_profile
if [ -n "$BASH" ] && [ -r ~/.bashrc ]; then
. ~/.bashrc
fi
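To see the difference in practice, a quick check like this helps (a sketch; www-data stands in for whatever user the site runs as, and wp --info is a standard WP-CLI command):
# login shell: reads .bash_profile (which should source .bashrc)
sudo -u www-data -i -- wp --info
# interactive non-login shell: reads .bashrc directly
sudo -u www-data bash -ic 'wp --info'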
More info on dot-files:
http://mywiki.wooledge.org/DotFiles
man bash
What's the difference between .bashrc, .bash_profile, and .environment?
About .bash_profile, .bashrc, and where should alias be written in?
Related posts:
Run nvm (bash function) via sudo
Can I run a command loaded from .bashrc with sudo?

I recently had the same problem. In my Dockerfile, I was running:
RUN wp core download && wp plugin install woocommerce --activate --allow-root
I looked at the error message and thought that, from the way it was worded, the --allow-root flag gets ignored the first time you use it. So I added it to the first wp command as well, and it worked:
RUN wp core download --allow-root && wp plugin install woocommerce --activate --allow-root

The problem is that ~/.bashrc is not being sourced; it is only sourced in an interactive Bash shell.
You might get better results doing it via an executable wrapper script. Something like this:
# install wp cli
RUN curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar && \
chmod +x wp-cli.phar && \
mv wp-cli.phar /usr/local/bin/wp-cli.phar && \
echo '#!/bin/sh' > /usr/local/bin/wp && \
echo 'exec wp-cli.phar "$@" --allow-root' >> /usr/local/bin/wp && \
chmod +x /usr/local/bin/wp
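With the wrapper on the PATH, a quick smoke test during the build would be something like this (wp --info is a standard WP-CLI command that works without a WordPress install):
# should print WP-CLI environment info instead of the YIKES root warning
RUN wp --info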

Related

How to install Nginx on CentOS 7 without an internet connection, with root permission?

I need to install Nginx on a target machine that has no internet connection. How can I install Nginx with all its dependencies in offline mode? Thanks in advance for your answers.
I recently went through this procedure, and this is what worked for me on CentOS 7:
You need an online Linux server to download the dependencies. You can use a virtual machine or anything else.
On your online server, create a .sh file and copy the script below into it (I named it download_dependencies.sh):
#!/bin/bash
# This script is used to fetch external packages that are not available in standard Linux distribution
# Example: ./fetch-external-dependencies ubuntu18.04
# Script will create nms-dependencies-ubuntu18.04.tar.gz in local directory which can be copied
# into target machine and packages inside can be installed manually
set -eo pipefail
# current dir
PACKAGE_PATH="."
mkdir -p $PACKAGE_PATH
declare -A CLICKHOUSE_REPO
CLICKHOUSE_REPO['ubuntu18.04']="https://repo.clickhouse.tech/deb/lts/main"
CLICKHOUSE_REPO['ubuntu20.04']="https://repo.clickhouse.tech/deb/lts/main"
CLICKHOUSE_REPO['centos7']="https://repo.clickhouse.tech/rpm/lts/x86_64"
CLICKHOUSE_REPO['centos8']="https://repo.clickhouse.tech/rpm/lts/x86_64"
CLICKHOUSE_REPO['rhel7']="https://repo.clickhouse.tech/rpm/lts/x86_64"
CLICKHOUSE_REPO['rhel8']="https://repo.clickhouse.tech/rpm/lts/x86_64"
declare -A NGINX_REPO
NGINX_REPO['ubuntu18.04']="https://nginx.org/packages/mainline/ubuntu/pool/nginx/n/nginx/"
NGINX_REPO['ubuntu20.04']="https://nginx.org/packages/mainline/ubuntu/pool/nginx/n/nginx/"
NGINX_REPO['centos7']="https://nginx.org/packages/mainline/centos/7/x86_64/RPMS/"
NGINX_REPO['centos8']="https://nginx.org/packages/mainline/centos/8/x86_64/RPMS/"
NGINX_REPO['rhel7']="https://nginx.org/packages/mainline/rhel/7/x86_64/RPMS/"
NGINX_REPO['rhel8']="https://nginx.org/packages/mainline/rhel/8/x86_64/RPMS/"
CLICKHOUSE_KEY="https://repo.clickhouse.com/CLICKHOUSE-KEY.GPG"
NGINX_KEY="https://nginx.org/keys/nginx_signing.key"
declare -A CLICKHOUSE_PACKAGES
# for Clickhouse package names are static between distributions
# we use ubuntu/centos entries as placeholders
CLICKHOUSE_PACKAGES['ubuntu']="
clickhouse-server_21.3.10.1_all.deb
clickhouse-common-static_21.3.10.1_amd64.deb"
CLICKHOUSE_PACKAGES['centos']="
clickhouse-server-21.3.10.1-2.noarch.rpm
clickhouse-common-static-21.3.10.1-2.x86_64.rpm"
CLICKHOUSE_PACKAGES['ubuntu18.04']=${CLICKHOUSE_PACKAGES['ubuntu']}
CLICKHOUSE_PACKAGES['ubuntu20.04']=${CLICKHOUSE_PACKAGES['ubuntu']}
CLICKHOUSE_PACKAGES['centos7']=${CLICKHOUSE_PACKAGES['centos']}
CLICKHOUSE_PACKAGES['centos8']=${CLICKHOUSE_PACKAGES['centos']}
CLICKHOUSE_PACKAGES['rhel7']=${CLICKHOUSE_PACKAGES['centos']}
CLICKHOUSE_PACKAGES['rhel8']=${CLICKHOUSE_PACKAGES['centos']}
declare -A NGINX_PACKAGES
NGINX_PACKAGES['ubuntu18.04']="nginx_1.21.3-1~bionic_amd64.deb"
NGINX_PACKAGES['ubuntu20.04']="nginx_1.21.2-1~focal_amd64.deb"
NGINX_PACKAGES['centos7']="nginx-1.21.4-1.el7.ngx.x86_64.rpm"
NGINX_PACKAGES['centos8']="nginx-1.21.4-1.el8.ngx.x86_64.rpm"
NGINX_PACKAGES['rhel7']="nginx-1.21.4-1.el7.ngx.x86_64.rpm"
NGINX_PACKAGES['rhel8']="nginx-1.21.4-1.el8.ngx.x86_64.rpm"
download_packages() {
    local target_distribution=$1
    if [ -z "$target_distribution" ]; then
        echo "$0 - no target distribution specified"
        exit 1
    fi
    mkdir -p "${PACKAGE_PATH}/${target_distribution}"
    # just in case, delete all files in the target dir
    rm -f "${PACKAGE_PATH}/${target_distribution}"/*
    readarray -t clickhouse_files <<<"${CLICKHOUSE_PACKAGES[${target_distribution}]}"
    readarray -t nginx_files <<<"${NGINX_PACKAGES[${target_distribution}]}"
    echo "Downloading Clickhouse signing keys"
    curl -fs "${CLICKHOUSE_KEY}" --output "${PACKAGE_PATH}/${target_distribution}/clickhouse-key.gpg"
    echo "Downloading Nginx signing keys"
    curl -fs "${NGINX_KEY}" --output "${PACKAGE_PATH}/${target_distribution}/nginx-key.gpg"
    for package_file in "${clickhouse_files[@]}"; do
        if [ -z "$package_file" ]; then
            continue
        fi
        file_url="${CLICKHOUSE_REPO[$target_distribution]}/$package_file"
        save_file="${PACKAGE_PATH}/${target_distribution}/$package_file"
        echo "Fetching $file_url"
        curl -fs "$file_url" --output "$save_file"
    done
    for package_file in "${nginx_files[@]}"; do
        if [ -z "$package_file" ]; then
            continue
        fi
        file_url="${NGINX_REPO[$target_distribution]}/$package_file"
        save_file="${PACKAGE_PATH}/${target_distribution}/$package_file"
        echo "Fetching $file_url"
        curl -fs "$file_url" --output "$save_file"
    done
    bundle_file="${PACKAGE_PATH}/nms-dependencies-${target_distribution}.tar.gz"
    tar -zcf "$bundle_file" -C "${PACKAGE_PATH}/${target_distribution}" .
    echo "Bundle file saved as $bundle_file"
}
target_distribution=$1
if [ -z "$target_distribution" ]; then
    echo "Usage: $0 target_distribution"
    echo "Supported target distributions: ${!CLICKHOUSE_REPO[@]}"
    exit 1
fi
# check if target distribution is supported
if [ -z "${CLICKHOUSE_REPO[$target_distribution]}" ]; then
    echo "Target distribution is not supported."
    echo "Supported distributions: ${!CLICKHOUSE_REPO[@]}"
    exit 1
fi
download_packages "${target_distribution}"
Then, from the same directory that contains download_dependencies.sh, run the command below:
./download_dependencies.sh <your linux version>
In my case, I ran the command below (run it without an argument to see the options):
./download_dependencies.sh centos7
It should start downloading, and when it has finished you should see nms-dependencies-centos7.tar.gz in your directory.
Copy that file (.tar.gz) to your offline target.
Now, on your target machine, go to the directory you copied the file into and run the commands below:
tar -zxvf nms-dependencies-centos7.tar.gz
sudo yum install *.rpm
After installation, you can start both services using systemctl:
sudo systemctl start clickhouse-server
sudo systemctl start nginx
Your nginx service should be running now!
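To double-check that both services actually came up, something along these lines should work (assuming nginx listens on its default port 80):
sudo systemctl status nginx clickhouse-server
curl -I http://localhost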
You can download the tar file on another system and copy it over.
Did you try this link?
https://gist.github.com/taufiqibrahim/d7f697de6bb8b93ca348a5b94d6adbfc

How to download and unzip in Dockerfile

I have a setup that works, but I want to change it so that the file is downloaded and unpacked directly during the image build:
Dockerfile
FROM wordpress:fpm
# Copying themes from local
COPY ./wordpress/ /var/www/html/wp-content/themes/wordpress/
RUN chmod -R 777 /var/www/html/
How can I immediately download the file from the site and unzip it to the appropriate folder?
docker-compose.yml
wordpress:
  build: .
  links:
    - db:mysql
nginx:
  image: raulr/nginx-wordpress
  links:
    - wordpress
  ports:
    - "8080:80"
  volumes_from:
    - wordpress
db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: qwerty
I tried:
#install unzip and wget
RUN \
apt-get update && \
apt-get install unzip wget -y && \
rm -rf /var/lib/apt/lists/*
RUN wget -O /var/www/html/type.zip http://wp-templates.ru/download/2405 && \
    unzip '/var/www/html/type.zip' -d /var/www/html/wp-content/themes/ && \
    rm /var/www/html/type.zip || true;
The Dockerfile has a "native command" for copying and extracting .tar.gz files: ADD.
So you can change the archive type from .zip to .tar.gz (maybe zip will also be supported in future versions, I'm not sure) and use ADD instead of COPY.
Read more about ADD.
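As a sketch, assuming you repackage the theme locally as a .tar.gz (the archive name here is hypothetical):
FROM wordpress:fpm
# ADD auto-extracts a local .tar.gz into the target directory;
# note that remote URLs are NOT extracted, only copied
ADD ./wordpress-theme.tar.gz /var/www/html/wp-content/themes/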
Best to use a multi-stage Docker build. You will need a recent version of Docker with BuildKit enabled. Then do something along these lines:
# syntax=docker/dockerfile:1
FROM alpine:latest AS unzipper
RUN apk add unzip wget curl
RUN mkdir /opt/ ; \
    curl <some-url> | tar xvzf - -C /opt
FROM wordpress:fpm
COPY --from=unzipper /opt/ /var/www/html/wp-content/themes/wordpress/
Even better: if there is already a Docker image built with the files you want in it, you just need the COPY --from line, giving it that image's name.
Finally, don't worry about any mess in the first stage, as it is discarded when the build completes. The fact that it's Alpine and doesn't use --no-cache is irrelevant; none of the installed packages end up in the final image.
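On older Docker versions you may need to enable BuildKit explicitly when building (the image tag here is arbitrary):
DOCKER_BUILDKIT=1 docker build -t my-wordpress .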
There is more guidance for remote zipped files in the Docker documentation:
Because image size matters, using ADD to fetch packages from remote
URLs is strongly discouraged; you should use curl or wget instead.
That way you can delete the files you no longer need after they’ve
been extracted and you don’t have to add another layer in your image.
For example, you should avoid doing things like:
ADD https://example.com/big.tar.xz /usr/src/things/
RUN tar -xJf /usr/src/things/big.tar.xz -C /usr/src/things
RUN make -C /usr/src/things all
And instead, do something like:
RUN mkdir -p /usr/src/things \
&& curl -SL https://example.com/big.tar.xz \
| tar -xJC /usr/src/things \
&& make -C /usr/src/things all

How do I reset and put the zshrc file back to default?

/Users/ello/.zshrc:source:3: no such file or directory:
/Users/ello/Projects/config/env.sh
Ello-MacBook-Pro% /Users/ello/.zshrc:source
zsh: no such file or directory: /Users/ello/.zshrc:source
Ello-MacBook-Pro% /Users/ello/.zshrc
zsh: permission denied: /Users/ello/.zshrc
Ello-MacBook-Pro%
This has been happening since I foolishly edited the .zshrc file. All that remains in the file now, after attempting to reset the shell, is this:
# Created by newuser for 5.3.1
# Add env.sh
How do I undo everything, reinstall zsh, or remake the .zshrc file?
This is on macOS Sierra.
Edit: I reinstalled oh-my-zsh, which left me with this:
main() {
  # Use colors, but only if connected to a terminal, and that terminal
  # supports them.
  if which tput >/dev/null 2>&1; then
      ncolors=$(tput colors)
  fi
  if [ -t 1 ] && [ -n "$ncolors" ] && [ "$ncolors" -ge 8 ]; then
    RED="$(tput setaf 1)"
    GREEN="$(tput setaf 2)"
    YELLOW="$(tput setaf 3)"
    BLUE="$(tput setaf 4)"
    BOLD="$(tput bold)"
    NORMAL="$(tput sgr0)"
  else
    RED=""
    GREEN=""
    YELLOW=""
    BLUE=""
    BOLD=""
    NORMAL=""
  fi

  # Only enable exit-on-error after the non-critical colorization stuff,
  # which may fail on systems lacking tput or terminfo
  set -e

  CHECK_ZSH_INSTALLED=$(grep /zsh$ /etc/shells | wc -l)
  if [ ! $CHECK_ZSH_INSTALLED -ge 1 ]; then
    printf "${YELLOW}Zsh is not installed!${NORMAL} Please install zsh first!\n"
    exit
  fi
  unset CHECK_ZSH_INSTALLED

  if [ ! -n "$ZSH" ]; then
    ZSH=~/.oh-my-zsh
  fi

  if [ -d "$ZSH" ]; then
    printf "${YELLOW}You already have Oh My Zsh installed.${NORMAL}\n"
    printf "You'll need to remove $ZSH if you want to re-install.\n"
    exit
  fi

  # Prevent the cloned repository from having insecure permissions. Failing to do
  # so causes compinit() calls to fail with "command not found: compdef" errors
  # for users with insecure umasks (e.g., "002", allowing group writability). Note
  # that this will be ignored under Cygwin by default, as Windows ACLs take
  # precedence over umasks except for filesystems mounted with option "noacl".
  umask g-w,o-w

  printf "${BLUE}Cloning Oh My Zsh...${NORMAL}\n"
  hash git >/dev/null 2>&1 || {
    echo "Error: git is not installed"
    exit 1
  }
  # The Windows (MSYS) Git is not compatible with normal use on cygwin
  if [ "$OSTYPE" = cygwin ]; then
    if git --version | grep msysgit > /dev/null; then
      echo "Error: Windows/MSYS Git is not supported on Cygwin"
      echo "Error: Make sure the Cygwin git package is installed and is first on the path"
      exit 1
    fi
  fi
  env git clone --depth=1 https://github.com/robbyrussell/oh-my-zsh.git $ZSH || {
    printf "Error: git clone of oh-my-zsh repo failed\n"
    exit 1
  }

  printf "${BLUE}Looking for an existing zsh config...${NORMAL}\n"
  if [ -f ~/.zshrc ] || [ -h ~/.zshrc ]; then
    printf "${YELLOW}Found ~/.zshrc.${NORMAL} ${GREEN}Backing up to ~/.zshrc.pre-oh-my-zsh${NORMAL}\n";
    mv ~/.zshrc ~/.zshrc.pre-oh-my-zsh;
  fi
zsh itself does not ship with a default user configuration, so the default ~/.zshrc is actually no ~/.zshrc at all.
But as you tagged the question with oh-my-zsh, I assume you want to restore the default oh-my-zsh configuration. For that it should be sufficient to copy templates/zshrc.zsh-template from your oh-my-zsh installation path, usually ~/.oh-my-zsh:
cp ~/.oh-my-zsh/templates/zshrc.zsh-template ~/.zshrc
You may want to back up your current ~/.zshrc beforehand. Although it may have some problems now, you might still want to look up some settings once you have reverted to the default.
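For example, before copying the template (the backup name is arbitrary):
cp ~/.zshrc ~/.zshrc.backup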
There is no such thing as a "default" ~/.zshrc. The best you can do is check whether your system has /etc/skel/.zshrc; if yes, copy that into your home directory.
When you log in for the first time, your home directory is populated with everything from /etc/skel.
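If that file exists, restoring it is a one-liner:
cp /etc/skel/.zshrc ~/.zshrc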
In my foolishness I had put a crash command straight into the zsh config file, so whenever I opened the terminal it would just kernel panic. I deleted the config file using rm -f ~/.zshrc*, and by default it got replaced with a fresh copy. So good luck.
You can copy the .zshrc template from
https://github.com/ohmyzsh/ohmyzsh/blob/master/templates/zshrc.zsh-template
and paste its entire contents into ~/.zshrc.
[A solution friendly to MS Windows users, if the terminal (vim editor) steps are confusing]
Actually, there is no default .zshrc file, but if you want to edit it like a simple notepad file, do this:
Go to the /Users/ folder via the Finder app.
Press Shift + Command + . (dot) to show hidden system files.
Locate the .zshrc file and double-click it; it will open in TextEdit.app by default.
Clear whichever lines need to be removed.
Retype/edit the file with the paths to be added.
Press Command + S to save, then exit.
Make it your default shell using this command:
chsh -s $(which zsh)

Bash: restore a database to a Docker container for a WordPress + Piwik solution

For https://github.com/ellakcy/piwik-with-wordpress I am writing a restore bash script to restore the backup generated by the https://github.com/ellakcy/piwik-with-wordpress/blob/master/scripts/pre-backup script.
The main idea is to take a path to a tarball containing the backup and recreate the folders that the volumes are mounted on.
The script is the following:
#!/bin/bash
# Printing functions
black='\E[30;40m'
red='\E[31;40m'
green='\E[32;40m'
yellow='\E[33;40m'
blue='\E[34;40m'
magenta='\E[35;40m'
cyan='\E[36;40m'
white='\E[37;40m'
#Echo a string with color
cecho () # Color-echo.
# Argument $1 = message
# Argument $2 = color
{
    local default_msg="No message passed."
    # Doesn't really need to be a local variable.
    message=${1:-$default_msg} # Defaults to default message.
    color=${2:-$black}         # Defaults to black, if not specified.
    echo -e "$color"
    echo "$message"
    tput sgr0 # Reset to normal.
    return
}
#Echo a string as error with color
cecho_err () # Color-echo to stderr.
# Argument $1 = message
# Argument $2 = color
{
    local default_msg="No message passed."
    # Doesn't really need to be a local variable.
    message=${1:-$default_msg} # Defaults to default message.
    color=${2:-$red}           # Defaults to red, if not specified.
    echo >&2 -e "$color"
    echo >&2 "$message"
    tput sgr0 # Reset to normal.
    return
}
backup_file=${1}
cecho "Creating the correct folders" $cyan
cecho "Deleting data folder in order to recreate it" $red
sudo rm -rf ./data
mkdir ./data/
sudo chown root:root ./data/
sudo chmod 755 ./data/
if [ ! -d ./restore ]; then
    mkdir ./restore/
fi
tar -xf "${backup_file}" -C ./restore/
cecho "Restoring backup data for wordpress" $cyan
sudo mkdir ./data/wordpress
sudo chown root:root ./data/wordpress
sudo chmod 755 ./data/wordpress
sudo mv ./restore/wordpress/data/www ./data/wordpress/
sudo chown www-data:www-data ./data/wordpress/www
cecho "Restoring environment" $cyan
wordpress_env=$(tr '\n' ' ' <./restore/wordpress/env.txt)
echo ${wordpress_env}
cecho "Restoring database" $cyan
sudo mkdir ./data/wordpress/db
echo "sudo env ${wordpress_env} docker run --volume \"./data/wordpress/db\":/var/lib/mysql --volume ./restore/wordpress/db:/docker-entrypoint-initdb.d -e MYSQL_ROOT_PASSWORD=\$WORDPRESS_MYSQL_ROOT_PASSWORD -e MYSQL_DATABASE=\"wordpress\" -e MYSQL_USER=\$WORDPRESS_MYSQL_USER -e MYSQL_PASSWORD=\$WORDPRESS_MYSQL_PASSWORD mariadb" > ./restore_db.sh
chmod +x ./restore_db.sh
./restore_db.sh
# rm -rf ./restore_db.sh
rm -rf ./restore
And I get this error when I try to restore the database:
docker: Error response from daemon: create ./data/wordpress/db: "./data/wordpress/db" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed.
See 'docker run --help'.
As you can see, it generates a temporary script (that is later deleted); one example of a generated script is:
sudo env WORDPRESS_MYSQL_ROOT_PASSWORD=passwd WORDPRESS_MYSQL_USER=wordpress WORDPRESS_MYSQL_PASSWORD=wordpress WORDPRESS_ADMIN_USER=admin WORDPRESS_ADMIN_PASSWORD=admin WORDPRESS_URL=http://0.0.0.0:8080 docker run --volume "./data/wordpress/db":/var/lib/mysql --volume ./restore/wordpress/db:/docker-entrypoint-initdb.d -e MYSQL_ROOT_PASSWORD=$WORDPRESS_MYSQL_ROOT_PASSWORD -e MYSQL_DATABASE="wordpress" -e MYSQL_USER=$WORDPRESS_MYSQL_USER -e MYSQL_PASSWORD=$WORDPRESS_MYSQL_PASSWORD mariadb
What is the best way to generate the correct volume data in ./data/wordpress/db so that it mounts onto the container's /var/lib/mysql?
When we specify --volume <host_dir>:<container_dir>, host_dir must be an absolute path. If it is not an absolute path, it is treated as the volume's name, hence the message about invalid characters for a local volume name. Try providing an absolute path for the host directory.
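A minimal sketch of the fix in the generated command, assuming the restore script is always run from the project root, is to prefix the relative paths with $(pwd):
sudo env ${wordpress_env} docker run \
  --volume "$(pwd)/data/wordpress/db":/var/lib/mysql \
  --volume "$(pwd)/restore/wordpress/db":/docker-entrypoint-initdb.d \
  -e MYSQL_ROOT_PASSWORD="$WORDPRESS_MYSQL_ROOT_PASSWORD" \
  -e MYSQL_DATABASE="wordpress" \
  -e MYSQL_USER="$WORDPRESS_MYSQL_USER" \
  -e MYSQL_PASSWORD="$WORDPRESS_MYSQL_PASSWORD" \
  mariadb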

Running a command with sudo vs. going into root (sudo -i) and running the command

The question is: what exactly is the difference between running these two commands?
As root, I have made a custom environment variable:
export A="abcdef"
Then, in a root shell:
sudo -i
echo $A
returns
abcdef (as expected)
However, when I go back to the normal user and run
sudo -i echo $A
it returns a blank line.
So when you run the command sudo echo $A, does it use the environment variables and shell of the normal user?
And is there a way to get abcdef even if I run sudo echo $A?
Thanks
EDIT 1
When you say you have made a variable A as root, I assume you mean you did this in root's .profile or something like that. --> (yes!)
EDIT 2
This makes perfect sense, but I am having some trouble.
When I do
sudo -i 'echo $A'
I get
-bash: echo $A: command not found.
However when I do
su -c 'echo $A'
it gives back
abcdef
What is wrong with the
sudo -i 'echo $A'
command?
If you want to pass your environment to sudo, use sudo -E:
-E The -E (preserve environment) option indicates to the
security policy that the user wishes to preserve their
existing environment variables. The security policy may
return an error if the -E option is specified and the user
does not have permission to preserve the environment.
The environment is preserved both interactively and through whatever you run from the command line.
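For example (a sketch; whether a variable survives also depends on the env_reset policy in /etc/sudoers):
$ export A="abcdef"
$ sudo -E bash -c 'echo "$A"'
abcdef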
When you say you have made a variable A as root, I assume you mean you did this in root's .profile or something like that. And I assume you mean that the normal user does not have A set. In that case the following applies:
When you run your command sudo -i echo $A, it is first interpreted by the local shell and $A is substituted. That results in sudo -i echo, which is what is actually executed.
What you mean is this:
sudo -i 'echo $A'
That passes echo $A to the sudo shell.
~ rnapier$ sudo -i echo $USER
rnapier
~ rnapier$ sudo -i 'echo $USER'
root
Try this syntax:
sudo -i echo '$USER'
Although I couldn't replicate the results on my machine, the man page for sudo specifies that the -i option will unset/remove a handful of variables.
man sudo
-i [command]
The -i (simulate initial login) option runs the shell specified in the
passwd(5) entry of the target user as a login shell. This means that
login-specific resource files such as .profile or .login will be read
by the shell. If a command is specified, it is passed to the shell for
execution. Otherwise, an interactive shell is executed. sudo attempts
to change to that user's home directory before running the shell. It
also initializes the environment, leaving DISPLAY and TERM unchanged,
setting HOME, MAIL, SHELL, USER, LOGNAME, and PATH, as well as
the contents of /etc/environment on Linux and AIX systems. All other
environment variables are removed.
So I would try without the -i option.
