How to enable HDFS caching on Amazon EMR?

What's the easiest way to enable HDFS caching on EMR?
More specifically, how do I set dfs.datanode.max.locked.memory and increase the "maximum size that may be locked into memory" (ulimit -l) on all nodes?
The following command seems to work fine for dfs.datanode.max.locked.memory, and I could probably write a custom bootstrap action to update /usr/lib/hadoop/hadoop-daemon.sh and call ulimit. Is there a better or faster way?
elastic-mapreduce --create \
--alive \
--plain-output \
--visible-to-all \
--ami-version 3.1.0 \
-a $access_id \
-p $private_key \
--name "test" \
--master-instance-type m3.xlarge \
--instance-group master --instance-type m3.xlarge --instance-count 1 \
--instance-group core --instance-type m3.xlarge --instance-count 10 \
--pig-interactive \
--log-uri s3://foo/bar/logs/ \
--bootstrap-action s3://elasticmapreduce/bootstrap-actions/configure-hadoop \
--args "--hdfs-key-value,dfs.datanode.max.locked.memory=2000000000"
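For the ulimit side, a bootstrap action along these lines might work (a sketch, untested; the 2000000 KB memlock value is chosen to match the ~2 GB dfs.datanode.max.locked.memory above, and the limits.conf entries, the sed edit, and the paths are assumptions that depend on the AMI version):
#!/bin/bash
# Hypothetical bootstrap action: raise the locked-memory limit so the
# DataNode can pin cached blocks (values and paths are assumptions).
set -e
# Let the hadoop user lock up to ~2 GB of memory (memlock is in KB).
sudo tee -a /etc/security/limits.conf > /dev/null <<'EOF'
hadoop soft memlock 2000000
hadoop hard memlock 2000000
EOF
# hadoop-daemon.sh may run from a non-login shell that ignores limits.conf,
# so also raise the limit inside the script itself (GNU sed syntax).
sudo sed -i '1a ulimit -l 2000000' /usr/lib/hadoop/hadoop-daemon.sh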

Related

Exclude Path for GoAccess

I am trying to exclude some paths in GoAccess to get a better result for my WordPress installation.
So I am doing this:
#!/bin/sh
set -x
zcat -f /var/log/nginx/access-blog.log* > /var/www/serverstats/work/access-parsed.log | grep -Ev '/wp-config.php|/xmlrpc.php|/wp-json|/adminer|/robots.txt|/app-ads.txt|/ads.txt|/wp-login.php|//feed|/?author=|/wp-content|/wp-admin|/rss|/api/v1|/wp-cron.php' /var/www/serverstats/work/access-parsed.log |goaccess \
--log-file=/var/www/serverstats/work/access-parsed.log \
--log-format=COMBINED \
--exclude-ip=0.0.0.0 \
--geoip-database=/var/www/serverstats/GeoLite2-City.mmdb \
--ignore-crawlers \
--hide-referer=*.dasnetzundich.de \
--hide-referer=dasnetzundich.de \
--browsers-file=/var/www/serverstats/crawler.list \
--anonymize-ip \
--persist \
--db-path=/var/www/serverstats/db \
--real-os \
--output=/var/www/serverstats/index.html
rm /var/www/serverstats/work/access-parsed.log
set +x
but that won't work. Can anyone help me get it working?
Thanks
Lars
I tried these scripts and they won't exclude paths like /xmlrpc.php and /wp-login.php.
It would be amazing if I could filter the output from GoAccess.
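A likely culprit (just a guess from the script above): zcat's output is redirected into access-parsed.log, so nothing reaches the pipe, and goaccess is then pointed at the unfiltered file again via --log-file, which bypasses the grep entirely. A sketch of a fix (untested) that feeds the filtered stream straight to goaccess on stdin, with the regex metacharacters (dots, ?) escaped:
#!/bin/sh
set -x
# Decompress the rotated logs, drop unwanted paths, and let goaccess read
# the filtered stream from stdin ("-") instead of --log-file.
zcat -f /var/log/nginx/access-blog.log* \
  | grep -Ev '/wp-config\.php|/xmlrpc\.php|/wp-json|/adminer|/robots\.txt|/app-ads\.txt|/ads\.txt|/wp-login\.php|//feed|/\?author=|/wp-content|/wp-admin|/rss|/api/v1|/wp-cron\.php' \
  | goaccess - \
      --log-format=COMBINED \
      --exclude-ip=0.0.0.0 \
      --geoip-database=/var/www/serverstats/GeoLite2-City.mmdb \
      --ignore-crawlers \
      --hide-referer='*.dasnetzundich.de' \
      --hide-referer=dasnetzundich.de \
      --browsers-file=/var/www/serverstats/crawler.list \
      --anonymize-ip \
      --persist \
      --db-path=/var/www/serverstats/db \
      --real-os \
      --output=/var/www/serverstats/index.html
set +x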

Can I completely customize the flags for ./configure in a SPEC file?

My rpmbuild log tells me all the flags used when calling configure:
./configure --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu \
--program-prefix= \
--disable-dependency-tracking \
--prefix=/usr \
--exec-prefix=/usr \
--bindir=/usr/bin \
--sbindir=/usr/sbin \
--sysconfdir=/etc \
--datadir=/usr/share \
--includedir=/usr/include \
--libdir=/usr/lib64 \
--libexecdir=/usr/libexec \
--localstatedir=/var \
--sharedstatedir=/var/lib \
--mandir=/usr/share/man \
--infodir=/usr/share/info \
--prefix=/opt/custom/SENSOR/Qt-5.15.2 \
--confirm-license \
--opensource
My problem is that the 'build' and 'host' flags (plus several others) are unknown options for this particular configure script. How can I take complete control of the call to configure in my SPEC file? It's obviously not enough to add new flags to the %configure scriptlet; I need to remove the flags that rpmbuild adds by default.
It looks like the answer is to call configure directly instead of using the %configure macro, i.e. replace %configure with:
./configure --prefix=/opt/custom/SENSOR -confirm-license -opensource
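In the SPEC file that direct call might sit in %build roughly like this (a sketch; the flag set comes from the answer above, and the make line is just the usual pattern, not anything project-specific):
%build
# Call configure directly instead of the %configure macro so rpmbuild's
# default --build/--host/--prefix/... flags are not injected.
./configure --prefix=/opt/custom/SENSOR -confirm-license -opensource
make %{?_smp_mflags}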

Protoc does not export the TS file version of *_grpc_pb.js?

I am new to setting up the gRPC web-based client side. Our backend is already up and running on Go with gRPC. I am testing out what it's like to convert the .proto file into TS. I can successfully generate some of the files; however, I am missing the TypeScript "Service" file.
I pretty much followed the instructions from the grpc_tools_node_protoc_ts site.
Set up a script to generate files for 1) the service and 2) the client model:
PROTOC_GEN_TS_PATH="./node_modules/.bin/protoc-gen-ts"
GRPC_TOOLS_NODE_PROTOC_PLUGIN="./node_modules/.bin/grpc_tools_node_protoc_plugin"
GRPC_TOOLS_NODE_PROTOC="./node_modules/.bin/grpc_tools_node_protoc"
OUT_DIR="./_protos_/proto/"
# JavaScript code generating
${GRPC_TOOLS_NODE_PROTOC} \
--plugin=protoc-gen-grpc="${GRPC_TOOLS_NODE_PROTOC_PLUGIN}" \
--js_out=import_style=commonjs,binary:"${OUT_DIR}" \
--grpc_out="${OUT_DIR}" \
-I "${OUT_DIR}" \
"${OUT_DIR}"/*.proto
${GRPC_TOOLS_NODE_PROTOC} \
--plugin=protoc-gen-ts="${PROTOC_GEN_TS_PATH}" \
--ts_out="${OUT_DIR}" \
-I "${OUT_DIR}" \
"${OUT_DIR}"/*.proto
The output is missing the *_grpc_pb.d.ts file. I am under the impression I need this? 🤷🏻‍♂️
I have also tried adding the service option to the flag:
--ts_out="service=grpc-web:${OUT_DIR}" \
This now generates a *_pb_service.d.ts output file, still without the *_grpc_pb.d.ts file. Reading the docs further, I'm thinking service=grpc-web is actually the option I need, since we're not running a Node server.
Does this seem right? This is what I have now:
# Note the ts_out flag "service=grpc-node":
# This does generate the *_grpc_pb.d.ts but not the service files
protoc \
--plugin="protoc-gen-ts=${PROTOC_GEN_TS_PATH}" \
--plugin=protoc-gen-grpc=${GRPC_TOOLS_NODE_PROTOC_PLUGIN} \
--js_out="import_style=commonjs,binary:${OUT_DIR}" \
--ts_out="service=grpc-node:${OUT_DIR}" \
--grpc_out="${OUT_DIR}" \
-I "${OUT_DIR}" \
"${OUT_DIR}"/*.proto
# Note the ts_out flag "service=grpc-web":
# This does generate the service files, but not the *_grpc_pb.d.ts file
protoc \
--plugin="protoc-gen-ts=${PROTOC_GEN_TS_PATH}" \
--plugin=protoc-gen-grpc=${GRPC_TOOLS_NODE_PROTOC_PLUGIN} \
--js_out="import_style=commonjs,binary:${OUT_DIR}" \
--ts_out="service=grpc-web:${OUT_DIR}" \
--grpc_out="${OUT_DIR}" \
-I "${OUT_DIR}" \
"${OUT_DIR}"/*.proto

How to place a file on salt master via salt-api

I want to place a file on the salt-master via salt-api. I have configured salt-api using rest_cherrypy and configured a custom hook for it. I wanted to explore the use case where we transfer the file to the salt-master first and then distribute it to the minions. I'm able to achieve the second part, but I haven't been able to post the data file to the API.
Here is one way to do it using the file.write execution module.
First log in and save the token to a cookie file (I had to change eauth to ldap; auto didn't work for some reason):
curl -sSk http://localhost:8000/login \
-c ~/cookies.txt \
-H 'Accept: application/x-yaml' \
-d username=USERNAME \
-d password=PASSWORD \
-d eauth=auto
Now run a job to create a file on the salt-master (assuming your salt-master is also running a salt-minion):
curl -sSk http://localhost:8000 \
-b ~/cookies.txt \
-H 'Accept: application/x-yaml' \
-d client=local \
-d tgt='saltmaster' \
-d fun=file.write \
-d arg='/tmp/somefile.txt' \
-d arg='This is some example text
with newlines
A
B
C'
Note that the spacing used in your command will affect how the lines show up in the file; the example above gives the most aesthetically pleasing result.
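If the content is already sitting in a file on the machine running curl, you can also let curl read and URL-encode it (a sketch; /tmp/local-copy.txt is a hypothetical local file, and name@file is standard --data-urlencode syntax):
# Same file.write job, but the body is read from a local file and
# URL-encoded by curl; /tmp/local-copy.txt is a hypothetical path.
curl -sSk http://localhost:8000 \
  -b ~/cookies.txt \
  -H 'Accept: application/x-yaml' \
  -d client=local \
  -d tgt='saltmaster' \
  -d fun=file.write \
  -d arg='/tmp/somefile.txt' \
  --data-urlencode 'arg@/tmp/local-copy.txt'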

How to solve the CSScomb error in PhpStorm?

I wanted to install CSScomb.js in PhpStorm 8.0.1.
I did everything as written on the GitHub page: installed CSScomb both globally and locally (just to be sure) and specified the paths...
...then ran it and got this error:
Error running CSScomb: Cannot run program "C:\Users\Kanat\AppData\Roaming\npm\node_modules\csscomb\bin\csscomb" (in directory "D:\OpenServer\domains\LPDevplate\src\scss\modules"): CreateProcess error=193, %1 is not a valid Win32 application
Has anyone faced this error and can help solve it?
I get the same error: "Error running 'CSScomb': Cannot run program"
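CreateProcess error 193 on Windows usually means the IDE is trying to execute the Node script itself rather than an .exe or .cmd, so pointing the CSScomb path setting at the .cmd wrapper that npm generates may help (a sketch; the exact path is an assumption based on the error message above):
:: Windows cmd sketch: locate the npm-generated wrapper and use that path in
:: the PhpStorm CSScomb settings instead of ...\node_modules\csscomb\bin\csscomb.
where csscomb.cmd
:: typically prints something like C:\Users\Kanat\AppData\Roaming\npm\csscomb.cmd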
