Insert element in XML document at specific position with xmlstarlet

I'm writing a bash script to edit Tomcat's server.xml file. I have it successfully adding a Connector node. To run this example, download and unpack Apache Tomcat 9, go into the conf directory where there is a server.xml file, and run:
xmlstarlet edit -P --inplace \
--subnode "/Server/Service" \
--type elem -n ConnectorNew -v "" \
--insert //ConnectorNew --type attr -n "port" -v "443" \
--insert //ConnectorNew --type attr -n "protocol" -v "org.apache.coyote.http11.Http11NioProtocol" \
--insert //ConnectorNew --type attr -n "keystoreFile" -v "example-key.pem" \
--insert //ConnectorNew --type attr -n "sslProtocol" -v "TLS" \
--insert //ConnectorNew --type attr -n "SSLEnabled" -v "true" \
--subnode "/Server/Service/ConnectorNew" \
--type elem -n "UpgradeProtocolNew" -v "" \
--insert //UpgradeProtocolNew --type attr -n "className" -v "org.apache.coyote.http2.Http2Protocol" \
--rename //ConnectorNew -v Connector \
--rename //UpgradeProtocolNew -v UpgradeProtocol server.xml
which is pretty cool! After running that, there will be a TLS Connector on port 443 with the given example key. Tomcat would then start as usual, assuming the key file exists and it's running as root (real server deployments shouldn't run as root, but should use jsvc instead).
However, the new element shows up at the very end of the Service element. Ideally I would like to put it right after the last existing Connector element so the file looks normal. I don't think the order of Connector elements has any effect on Tomcat, but I would like it to look like a normal config file that other people would expect when they go looking for Connector elements.
I assume there's some way to do this with xmlstarlet but I couldn't figure it out.
I hope I can avoid using xslt features to do this because I don't want to have to learn and manage another technology to get this script done.
Thank you!

If you already have a Connector defined in your server.xml, you can replace --subnode "/Server/Service" with --append /Server/Service/Connector, and this will insert your new Connector element right after the first existing Connector.
xmlstarlet edit -P --inplace \
--append /Server/Service/Connector \
--type elem -n ConnectorNew -v "" \
--insert //ConnectorNew --type attr -n "port" -v "443" \
...
If this is the first Connector to insert, you would want to use --insert /Server/Service/Engine instead, and your Connector element will be inserted before the Engine element, which is where Connectors usually reside in the default server.xml.
xmlstarlet edit -P --inplace \
--insert /Server/Service/Engine \
--type elem -n ConnectorNew -v "" \
--insert //ConnectorNew --type attr -n "port" -v "443" \
...
You may also want to delete all commented-out XML elements before you start editing server.xml, so that you have a clean and readable file:
xmlstarlet ed -L -d '//comment()' server.xml
and if you do so, you will need to insert a space before the closing "/>", otherwise Tomcat will complain that server.xml is corrupt:
sed -i "s/\"\/>/\" \/>/g" server.xml
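A quick way to sanity-check that substitution before pointing it at server.xml (the sample tag here is made up for illustration):

```shell
# Demonstrate the substitution on a throwaway line: every quote followed
# by a self-closing "/>" gains a space in between
printf '%s\n' '<Connector port="443"/>' | sed 's/"\/>/" \/>/g'
# prints: <Connector port="443" />
```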

How to install and setup WordPress using Podman

With Docker I was able to run the WordPress example for docker-compose on nearly every platform, without prior Docker knowledge.
I am looking for a way to achieve the same with Podman: in my case, a fast cross-platform way to set up a working WordPress installation for development.
As Podman is far younger, a valid answer in 2022 would also be: it is not possible, because... / only possible given constraint X.
Still I would like to create an entry point for other people, who run into the same issue in the future.
I posted my own efforts below. Before I spend more hours debugging lots of small (but still solvable) issues, I wanted to find out if someone else faced the same problem and already has a solution. If you have, please clearly document its constraints.
My particular issue, as a reference
I am on Ubuntu 20.04 and podman -v gives 3.4.2.
docker/podman compose
When I use docker-compose up with the Podman back-end on Docker's WordPress .yml file, I run into the "duplicate mount destination" issue.
podman-compose is part of Podman 4.1.0, which is not available on Ubuntu as I write this.
Red Hat example
The Red Hat example gives "Error establishing a database connection ... contact with the database server at mysql could not be established".
A solution for the above did not work for me; share is likely a typo there, so I tried replacing it with unshare.
CentOS example
I found an example which uses pods instead of a docker-compose.yml file, but it is written for CentOS.
I modified the CentOS example; see the script below. I get the containers up and running, but WordPress is unable to connect to the database.
#!/bin/bash
# Set environment variables:
DB_NAME='wordpress_db'
DB_PASS='mysupersecurepass'
DB_USER='justbeauniqueuser'
POD_NAME='wordpress_with_mariadb'
CONTAINER_NAME_DB='wordpress_db'
CONTAINER_NAME_WP='wordpress'
mkdir -p html
mkdir -p database
# Remove previous attempts
sudo podman pod rm -f $POD_NAME
# Pull before run, bc: invalid reference format error
sudo podman pull mariadb:latest
sudo podman pull wordpress
# Create a pod instead of --link, so both containers are able to reach each other.
sudo podman pod create -n $POD_NAME -p 80:80
sudo podman run --detach --pod $POD_NAME \
-e MYSQL_ROOT_PASSWORD=$DB_PASS \
-e MYSQL_PASSWORD=$DB_PASS \
-e MYSQL_DATABASE=$DB_NAME \
-e MYSQL_USER=$DB_USER \
--name $CONTAINER_NAME_DB -v "$PWD/database":/var/lib/mysql \
docker.io/mariadb:latest
sudo podman run --detach --pod $POD_NAME \
-e WORDPRESS_DB_HOST=127.0.0.1:3306 \
-e WORDPRESS_DB_NAME=$DB_NAME \
-e WORDPRESS_DB_USER=$DB_USER \
-e WORDPRESS_DB_PASSWORD=$DB_PASS \
--name $CONTAINER_NAME_WP -v "$PWD/html":/var/www/html \
docker.io/wordpress
Also, I was a bit unsure where to post this question. If Server Fault or another Stack Exchange site is a better fit, I will happily post there.
Actually, your code works with just small changes.
I removed the sudos and changed the pod's external port to 8090 instead of 80, so now everything is running as a non-root user.
#!/bin/bash
# https://stackoverflow.com/questions/74054932/how-to-install-and-setup-wordpress-using-podman
# Set environment variables:
DB_NAME='wordpress_db'
DB_PASS='mysupersecurepass'
DB_USER='justbeauniqueuser'
POD_NAME='wordpress_with_mariadb'
CONTAINER_NAME_DB='wordpress_db'
CONTAINER_NAME_WP='wordpress'
mkdir -p html
mkdir -p database
# Remove previous attempts
podman pod rm -f $POD_NAME
# Pull before run, bc: invalid reference format error
podman pull docker.io/mariadb:latest
podman pull docker.io/wordpress
# Create a pod instead of --link,
# so both containers are able to reach each other.
podman pod create -n $POD_NAME -p 8090:80
podman run --detach --pod $POD_NAME \
-e MYSQL_ROOT_PASSWORD=$DB_PASS \
-e MYSQL_PASSWORD=$DB_PASS \
-e MYSQL_DATABASE=$DB_NAME \
-e MYSQL_USER=$DB_USER \
--name $CONTAINER_NAME_DB -v "$PWD/database":/var/lib/mysql \
docker.io/mariadb:latest
podman run --detach --pod $POD_NAME \
-e WORDPRESS_DB_HOST=127.0.0.1:3306 \
-e WORDPRESS_DB_NAME=$DB_NAME \
-e WORDPRESS_DB_USER=$DB_USER \
-e WORDPRESS_DB_PASSWORD=$DB_PASS \
--name $CONTAINER_NAME_WP -v "$PWD/html":/var/www/html \
docker.io/wordpress
This is what worked for me:
#!/bin/bash
# https://stackoverflow.com/questions/74054932/how-to-install-and-setup-wordpress-using-podman
# Set environment variables:
POD_NAME='wordpress_mariadb'
DB_ROOT_PW='sup3rS3cr3t'
DB_NAME='wp'
DB_PASS='s0m3wh4tS3cr3t'
DB_USER='wordpress'
podman pod create --name $POD_NAME -p 8080:80
podman run \
-d --restart=always --pod=$POD_NAME \
-e MYSQL_ROOT_PASSWORD="$DB_ROOT_PW" \
-e MYSQL_DATABASE="$DB_NAME" \
-e MYSQL_USER="$DB_USER" \
-e MYSQL_PASSWORD="$DB_PASS" \
-v $HOME/public_html/wordpress/mysql:/var/lib/mysql:Z \
--name=wordpress-db docker.io/mariadb:latest
podman run \
-d --restart=always --pod=$POD_NAME \
-e WORDPRESS_DB_NAME="$DB_NAME" \
-e WORDPRESS_DB_USER="$DB_USER" \
-e WORDPRESS_DB_PASSWORD="$DB_PASS" \
-e WORDPRESS_DB_HOST="127.0.0.1" \
-v $HOME/public_html/wordpress/html:/var/www/html:Z \
--name wordpress docker.io/library/wordpress:latest

xmlstarlet add element with namespace and attributes

I'm trying to add a node with a namespace and an attribute to an XML file, but it fails if I try to do it as multiple commands in one execution of xmlstarlet:
<?xml version="1.0"?>
<levela xmlns:xi="http://www.w3.org/2001/XInclude">
<levelb>
</levelb>
</levela>
xmlstarlet ed -L -s /levela/levelb -t elem -n xi:input -i //xi:input -t attr -n "href" -v "aHref" file.xml
I'm trying to get:
<?xml version="1.0"?>
<levela xmlns:xi="http://www.w3.org/2001/XInclude">
<levelb>
<xi:input href="aHref"/>
</levelb>
</levela>
But the attribute isn't added. So I get:
<?xml version="1.0"?>
<levela xmlns:xi="http://www.w3.org/2001/XInclude">
<levelb>
<xi:input/>
</levelb>
</levela>
It works if I run it as two executions like this:
xmlstarlet ed -L -s /levela/levelb -t elem -n xi:input file.xml
xmlstarlet ed -L -i //xi:input -t attr -n "href" -v "aHref" file.xml
It also works if I add a tag without a namespace e.g:
xmlstarlet ed -L -s /levela/levelb -t elem -n levelc -i //levelc -t attr -n "href" -v "aHref" file.xml
<?xml version="1.0"?>
<levela xmlns:xi="http://www.w3.org/2001/XInclude">
<levelb>
<levelc href="aHref"/>
</levelb>
</levela>
What am I doing wrong? Why doesn't it work with the namespace?
This will do it:
xmlstarlet edit \
-s '/levela/levelb' -t elem -n 'xi:input' \
-s '$prev' -t attr -n 'href' -v 'aHref' \
file.xml
xmlstarlet edit code can use the convenience $prev (aka
$xstar:prev) variable to refer to the node created by the most
recent -i (--insert), -a (--append), or -s (--subnode) option.
Examples of $prev are given in
doc/xmlstarlet.txt and
the source code's
examples/ed-backref*.
Attributes can be added using -i, -a, or -s.
What am I doing wrong? Why doesn't it work with the namespace?
Update 2022-04-15
The -i '//xi:input' … syntax you use is perfectly logical. As your
own 2 alternative commands suggest it's the namespace xi that
triggers the omission and there's a hint in the edInsert function in
the source code's
src/xml_edit.c
where it says NULL /* TODO: NS */.
When you've worked with xmlstarlet for some
time you come to accept its limitations (or not); in this case the
$prev back reference is useful. I wouldn't expect that TODO to
go away anytime soon.
(end update)
Well, I think xmlstarlet edit looks upon node naming as a user
responsibility, as the following example suggests,
printf '<v/>' |
xmlstarlet edit --omit-decl \
-s '*' -t elem -n 'undeclared:qname' -v 'x' \
-s '*' -t elem -n '!--' -v ' wotsinaname ' \
-s '$prev' -t attr -n ' "" ' -v '' \
-s '*' -t elem -n ' <&> ' -v 'harrumph!'
the output of which is clearly not XML:
<v>
<undeclared:qname>x</undeclared:qname>
<!-- "" =""> wotsinaname </!-->
< <&> >harrumph!</ <&> >
</v>
If you want to indent the new element, for example:
xmlstarlet edit \
-s '/levela/levelb' -t elem -n 'xi:input' \
--var newnd '$prev' \
-s '$prev' -t attr -n 'href' -v 'aHref' \
-a '$newnd' -t text -n ignored -v '' \
-u '$prev' -x '(//text())[1][normalize-space()=""]' \
file.xml
The -x XPath expression grabs the first text node provided it
contains nothing but whitespace, i.e. the first child node of levela.
The --var name xpath option to define an xmlstarlet edit
variable is mentioned in
doc/xmlstarlet.txt
but not in the user's guide.
I used xmlstarlet version 1.6.1.
It seems you can't insert an attribute and attribute value into a namespaced node in a single pass... Maybe someone smarter can figure out something else, but the only way I could get around it, at least in this case, is this:
xmlstarlet ed -N xi="http://www.w3.org/2001/XInclude" --subnode "//levela/levelb" \
--type elem -n "xi:input" --insert "//levela/levelb/*" --type attr --name "href" \
--value "aHref" file.xml

Getting No FileSystem for scheme: wasb - HDInsight Map Reduce

I am running a simple map reduce job in Azure HDInsight; below is the command that we are running:
java -jar WordCount201.jar wasb://hexhadoopcluster-2019-05-15t07-01-07-193z@hexanikahdinsight.blob.core.windows.net/hexa/CustData.csv wasb://hexhadoopcluster-2019-05-15t07-01-07-193z@hexanikahdinsight.blob.core.windows.net/hexa
Getting the below error :
java.io.IOException: No FileSystem for scheme: wasb
For Java, use JDK 1.8 and the POM dependencies below:
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-mapreduce-examples</artifactId>
  <version>2.7.3</version>
  <scope>provided</scope>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-mapreduce-client-common</artifactId>
  <version>2.7.3</version>
  <scope>provided</scope>
  <exclusions>
    <exclusion>
      <groupId>jdk.tools</groupId>
      <artifactId>jdk.tools</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>2.7.3</version>
  <scope>provided</scope>
</dependency>
WASB is a Hadoop FileSystem implementation backed by Azure Blob storage. I am not sure you can use it in a normal Java program. Do you have any reference/link which you referred to?
You can try to get the https equivalent of the CustData.csv file. Below is an example of a Spark job I am able to submit on an HDInsight cluster using WASB:
spark-submit \
--class com.nileshgule.movielens.MovieRatingAnalysis \
--master yarn \
--deploy-mode cluster \
--executor-memory 1g \
--name MoviesCsvReader \
--conf "spark.app.id=MovieRatingAnalysis" \
wasb://hd-spark-cluster-2019@hdsparkclusterstorage.blob.core.windows.net/learning-spark-1.0.jar \
wasb://hd-spark-cluster-2019@hdsparkclusterstorage.blob.core.windows.net/ml-latest/ratings.csv \
wasb://hd-spark-cluster-2019@hdsparkclusterstorage.blob.core.windows.net/ml-latest/movies.csv
And here is an example of passing the same files using their equivalent https URIs:
spark-submit \
--class com.nileshgule.movielens.MovieRatingAnalysis \
--master yarn \
--deploy-mode cluster \
--executor-memory 1g \
--name MoviesCsvReader \
--conf "spark.app.id=MovieRatingAnalysis" \
https://hdsparkclusterstorage.blob.core.windows.net/hd-spark-cluster-2019/learning-spark-1.0.jar \
https://hdsparkclusterstorage.blob.core.windows.net/hd-spark-cluster-2019/ml-latest/ratings.csv \
https://hdsparkclusterstorage.blob.core.windows.net/hd-spark-cluster-2019/ml-latest/movies.csv
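For scripting, the mapping between the two URI styles is mechanical. Here is a small, hypothetical helper (the function name is mine; it assumes the standard wasb://container@account.blob.core.windows.net/path layout):

```shell
# Hypothetical helper: rewrite a WASB URI into its https equivalent by
# moving the container name from the authority into the path
wasb_to_https() {
  local rest=${1#wasb://}       # container@host/path
  local container=${rest%%@*}
  local hostpath=${rest#*@}     # host/path
  local host=${hostpath%%/*}
  local path=${hostpath#*/}
  printf 'https://%s/%s/%s\n' "$host" "$container" "$path"
}

wasb_to_https 'wasb://hd-spark-cluster-2019@hdsparkclusterstorage.blob.core.windows.net/learning-spark-1.0.jar'
# prints: https://hdsparkclusterstorage.blob.core.windows.net/hd-spark-cluster-2019/learning-spark-1.0.jar
```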
For a Hadoop job, kindly run the jar as the root user: once you log in to HDInsight, run the command sudo su -, then create a folder, place the jar in that folder, and run it from there.

How to place a file on salt master via salt-api

I want to place a file on the salt-master via salt-api. I have configured salt-api using rest_cherrypy and configured a custom hook for it. I want to explore the use-case where we transfer a file first to the salt-master and then distribute it to the minions. I'm able to achieve the second part, but I have not been able to post the data file to the API.
Here is one way to do it using file.write execution module.
First login and save the token to a cookie file (I had to change eauth to ldap, auto didn't work for some reason):
curl -sSk http://localhost:8000/login \
-c ~/cookies.txt \
-H 'Accept: application/x-yaml' \
-d username=USERNAME \
-d password=PASSWORD \
-d eauth=auto
Now run a job to create a file on the salt-master (assuming your salt-master is also running a salt-minion):
curl -sSk http://localhost:8000 \
-b ~/cookies.txt \
-H 'Accept: application/x-yaml' \
-d client=local \
-d tgt='saltmaster' \
-d fun=file.write \
-d arg='/tmp/somefile.txt' \
-d arg='This is some example text
with newlines
A
B
C'
Note that the spacing used in your command will affect how the lines show up in the file; with the example above, it gives the most aesthetically pleasing result.
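The newline behaviour is plain shell quoting rather than anything salt-specific; a minimal sketch, assuming a POSIX shell:

```shell
# Single quotes preserve embedded newlines and leading spaces verbatim,
# which is why the indentation of the curl command lands in the file as-is
arg='line one
  line two, indented'
printf '%s\n' "$arg"
```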

How to enable HDFS caching on Amazon EMR?

What's the easiest way to enable HDFS caching on EMR?
More specifically, how do I set dfs.datanode.max.locked.memory and increase the "maximum size that may be locked into memory" (ulimit -l) on all nodes?
The following code seems to work fine for dfs.datanode.max.locked.memory, and I could probably write a custom bootstrap action to update /usr/lib/hadoop/hadoop-daemon.sh and call ulimit. Is there any better or faster way?
elastic-mapreduce --create \
--alive \
--plain-output \
--visible-to-all \
--ami-version 3.1.0 \
-a $access_id \
-p $private_key \
--name "test" \
--master-instance-type m3.xlarge \
--instance-group master --instance-type m3.xlarge --instance-count 1 \
--instance-group core --instance-type m3.xlarge --instance-count 10 \
--pig-interactive \
--log-uri s3://foo/bar/logs/ \
--bootstrap-action s3://elasticmapreduce/bootstrap-actions/configure-hadoop \
--args "--hdfs-key-value,dfs.datanode.max.locked.memory=2000000000"
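For the ulimit -l part, one alternative to patching hadoop-daemon.sh is a bootstrap action that persists a memlock limit. This is an untested sketch: the hadoop user name is an assumption, and it writes to a local file here instead of /etc/security/limits.conf so it can run without root:

```shell
#!/bin/bash
# Sketch of a custom bootstrap action: append memlock limits for the
# hadoop user so "ulimit -l" is raised on each node (on a real node,
# LIMITS_FILE would be /etc/security/limits.conf, written via sudo)
LIMITS_FILE=./limits.conf
printf '%s\n' \
  'hadoop soft memlock unlimited' \
  'hadoop hard memlock unlimited' >> "$LIMITS_FILE"
```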
