I am sending an encoded URL to NGINX after replacing +, / and =, but I am not able to replace them back in NGINX before decoding.
I don't want to use something like PHP to handle the request and send the response back from a PHP server.
I want to manipulate the string in NGINX only.
So far I have tried the replace-filter-nginx-module and the let module, but without success.
Any help will be appreciated!
Download LuaJIT from http://luajit.org/download.html and extract the archive. Then run the following commands (remember to replace [FOLDER_PATH] with the actual place where you extracted it):
cd [FOLDER_PATH]/
make
make install
Lua NGINX Module
Download it from https://github.com/openresty/lua-nginx-module/tags and add it with --add-module in ./configure.
Assuming NGINX is installed at /opt/nginx, run:
./configure --prefix=/opt/nginx --with-http_ssl_module \
--with-http_secure_link_module \
--add-module=/opt/nginxdependencies/ngx_devel_kit-master \
--add-module=[FOLDER_PATH]/set-misc-nginx-module-0.23 \
--add-module=[FOLDER_PATH]/lua-nginx-module-0.9.10 \
--with-ld-opt='-Wl,-rpath,/usr/local/lib'
Now download https://github.com/openresty/lua-resty-string and add an entry to nginx.conf, above the server { ... } block:
lua_package_path "[FOLDER_PATH]/lua-resty-string-master/lib/?.lua;;";
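With those modules in place, a minimal sketch of a decoding location could look like the following. This is a hypothetical example, not from the original answer: the location name /decode, the query argument name data, and the assumption that the client substituted "-" for "+", "_" for "/" and stripped "=" padding are all illustrative.

```nginx
location /decode {
    default_type text/plain;
    content_by_lua '
        local s = ngx.var.arg_data or ""
        -- reverse the assumed URL-safe substitution
        s = s:gsub("-", "+"):gsub("_", "/")
        -- restore "=" padding so the length is a multiple of 4
        s = s .. ("="):rep((4 - s:len() % 4) % 4)
        ngx.say(ngx.decode_base64(s) or "decode failed")
    ';
}
```

ngx.decode_base64 returns nil on malformed input, so the fallback string signals a bad request instead of crashing the handler.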
I already have a Dockerfile for a customized NGINX image, and it works fine.
FROM library/nginx:1.13.2
LABEL maintainer="san@test.com"
# Remove the default Nginx configuration file
RUN rm -v /etc/nginx/nginx.conf
# Copy a configuration file from the current directory
ADD nginx.conf /etc/nginx/
# forward request and error logs to docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
&& ln -sf /dev/stderr /var/log/nginx/error.log \
# Make PageSpeed cache writable
&& mkdir -p /var/cache/ngx_pagespeed && \
chmod -R o+wr /var/cache/ngx_pagespeed
ADD server.crt /etc/nginx/ssl/
ADD server.key /etc/nginx/ssl/
ADD conf.d/ /etc/nginx/conf.d/
ADD proxy.d/ /etc/nginx/proxy.d/
CMD ["nginx", "-g", "daemon off;"]
I am also trying to have the AWS CLI installed so I can copy some S3 files and dynamically change the NGINX configuration, which I will do with CMD [] once the AWS CLI is available within the container.
I have tried and read many links from Google, but the documentation and write-ups are not helping, especially on how to pass credentials.
I am creating the image in two ways. The first is via a Jenkins pipeline (snippet below):
stages {
  stage('Build Docker image') {
    steps {
      script {
        docker.withRegistry("http://xyz-1.amazonaws.com", "ecr:eu-central-1:aws-credentials") {
          def customImage = docker.build("web-proxy:${CY_RELEASE_VERSION}", ".")
          customImage.push()
        }
      }
    }
  }
}
The other way is locally, where I manually build the image like this:
docker build -t web-proxy-dev_san_1:1.11 .
What I am not sure about is how I can install the AWS CLI in the Dockerfile and have the image pick up credentials automatically, both locally and in Jenkins. I think for Jenkins it may work if I manage to get the AWS CLI installed, since I am using the aws-credentials specified in the pipeline, but I haven't reached that stage yet.
You can use the AWS plugin to interact with AWS in your pipeline; check the following example: link
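If the immediate goal is just to have the AWS CLI available in the image, one possible sketch is below. It assumes the Debian-based nginx base image from the question; the package names are for Debian stretch, and this installs the v1 CLI via pip.

```dockerfile
FROM library/nginx:1.13.2
# Sketch only: install the AWS CLI (v1) via pip
RUN apt-get update \
 && apt-get install -y --no-install-recommends python3-pip python3-setuptools \
 && pip3 install awscli \
 && rm -rf /var/lib/apt/lists/*
```

Credentials are best supplied at run time rather than baked into the image, e.g. docker run -e AWS_ACCESS_KEY_ID=... -e AWS_SECRET_ACCESS_KEY=... web-proxy. On Jenkins the same environment variables can be exported from the stored aws-credentials entry, and on EC2/ECS an instance or task role makes explicit credentials unnecessary.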
I want to use wget to download a whole folder from FTP (I know about -r); curl does not allow downloading a folder in one request. I have the following curl syntax working, but can't work out the syntax for downloading even a single file via wget. The key things here are that the FTP server requires auth, and it is reached via an HTTP proxy (with different credentials).
This is the working curl command:
curl --proxy-anyauth --proxy-user NTADMIN\proxyuser:proxypass --proxy http://httpproxyhost:8080 --anyauth -u ftpuser:ftppass -v 'ftp://ftphost/file'
What is the equivalent in wget?
You can try to use the canonical URI format:
curl --proxy http://proxyuser:proxypass@httpproxyhost:8080 -v 'ftp://ftpuser:ftppass@ftphost/file'
With wget you can use a command like this:
ftp_proxy=http://proxyuser:proxypass@httpproxyhost:8080 wget ftp://ftpuser:ftppass@ftphost/file
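One caveat with embedding credentials in the URL this way: characters such as @, / or : in the password must be percent-encoded first. A quick way to produce the encoded form (this helper and the example password 'p@ss/word' are illustrative, and it assumes python3 is available):

```shell
# Percent-encode a password for safe embedding in a URL
pass='p@ss/word'
enc=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$pass")
printf 'ftp://ftpuser:%s@ftphost/file\n' "$enc"
# prints: ftp://ftpuser:p%40ss%2Fword@ftphost/file
```

The same encoding applies to the proxy credentials in the ftp_proxy value.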
I had been using a proxy for a long time. Now I need to remove it, but I have forgotten how I added the proxy to wget. Can someone please help me get back to a normal wget that doesn't use any proxy? As of now, I'm using
wget <link> --proxy=none
But I'm facing a problem when installing via a pre-written script: it's painstaking to search through all the scripts and change each command.
Any simpler solution will be very much appreciated.
Thanks
Check your
~/.wgetrc
/etc/wgetrc
and remove proxy settings.
Or use the wget --no-proxy command-line option to override them.
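To illustrate what to look for, here is a throwaway demo file containing the two wgetrc settings that typically enable a proxy; these are the lines to delete (or change use_proxy to off) in your real ~/.wgetrc or /etc/wgetrc:

```shell
# Create a demo wgetrc and show the proxy-related lines in it
cat > /tmp/demo_wgetrc <<'EOF'
use_proxy = on
http_proxy = http://proxyhost:8080/
EOF
grep -i 'proxy' /tmp/demo_wgetrc
```

Running the same grep against your real config files quickly reveals where the proxy was configured.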
In case your OS is Alpine/BusyBox, the wget there might differ from the one used by @Logu.
There the correct command is
wget --proxy off http://server:port/
Running wget --help outputs:
/ # wget --help
BusyBox v1.31.1 () multi-call binary.
Usage: wget [-c|--continue] [--spider] [-q|--quiet] [-O|--output-document FILE]
[-o|--output-file FILE] [--header 'header: value'] [-Y|--proxy on/off]
[-P DIR] [-S|--server-response] [-U|--user-agent AGENT] [-T SEC] URL...
Retrieve files via HTTP or FTP
--spider Only check URL existence: $? is 0 if exists
-c Continue retrieval of aborted transfer
-q Quiet
-P DIR Save to DIR (default .)
-S Show server response
-T SEC Network read timeout is SEC seconds
-O FILE Save to FILE ('-' for stdout)
-o FILE Log messages to FILE
-U STR Use STR for User-Agent header
-Y on/off Use proxy
Sadly, although mup deploy works perfectly, I have a folder called ".uploads" into which users can upload files.
Each deploy deletes the files in that directory. I would like to exclude the folder, or otherwise protect the files from being deleted by the deploy. Any ideas?
I have filed an issue: https://github.com/arunoda/meteor-up/issues/1022
Not sure if it's an issue with mup or with my system setup. I use tomitrescak:meteor-uploads and also cfs:file-collection; they both have the same issue.
But from what I see it should be easy to do: you need to modify the script at https://github.com/arunoda/meteor-up/blob/mupx/templates/linux/start.sh#L26
and add a new mapping. You can map multiple volumes, as described in: Mounting multiple volumes on a docker container?
So your script would look like this (the added --volume line mounts the host folder /opt/uploads/myapp to /opt/uploads in the container):
docker run \
-d \
--restart=always \
--publish=$PORT:80 \
--volume=$BUNDLE_PATH:/bundle \
--volume=/opt/uploads/myapp:/opt/uploads/ \
--env-file=$ENV_FILE \
--link=mongodb:mongodb \
--hostname="$HOSTNAME-$APPNAME" \
--env=MONGO_URL=mongodb://mongodb:27017/$APPNAME \
--name=$APPNAME \
meteorhacks/meteord:base
This change goes in "start.sh" in MUP until the issue is resolved and volumes can be mounted via mup's own options.
Also see discussion here: https://github.com/tomitrescak/meteor-uploads/issues/235#issuecomment-228618130
Has anyone written something like davcopy for Livelink? (davcopy works with SharePoint.)
I have downloaded davcopy, and it hangs when trying to use it with Livelink.
I've asked Open Text, and their response is "There is no way to do this out of the box; it will require writing a web services application."
I'm not sure how to write a web service application for Livelink, so before I explore that I was wondering if anyone had done an implementation of davcopy for Livelink.
I know about a command-line application which uses MS PowerShell to do what you want (http://www.gatevillage.net/public/content-server-desktop-library-powershell-suite).
It wouldn't be too difficult to write something like this with Ruby or Perl; both support WS/SOAP.
With which version of Livelink (or Content Server) do you work?
You can use the curl command-line tool to upload, download or delete files in Livelink. It makes HTTP requests against the CS REST API, which is available in CS 10.0 or newer.
For example, uploading a file "file.ext" to folder 8372 at http://server/instance/cs as Admin:
curl \
-F "type=144" \
-F "parent_id=8372" \
-F "name=file.ext" \
-F "file=@/path/to/file.ext" \
-u "Admin:password" \
-H "Expect:" \
http://server/instance/cs/api/v1/nodes
The "Expect" header has to be forced empty, because the CS REST API does not support persistent connections, but curl would otherwise always enable them for this request.
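Downloading and deleting work through the same API. The commands below are a sketch against the same placeholder server; the node ID 8373 is hypothetical, and the /content sub-resource is the REST API's endpoint for a node's file content:

```shell
# Download the content of node 8373 to a local file
curl -u "Admin:password" -H "Expect:" -o file.ext \
  "http://server/instance/cs/api/v1/nodes/8373/content"

# Delete node 8373
curl -u "Admin:password" -H "Expect:" -X DELETE \
  "http://server/instance/cs/api/v1/nodes/8373"
```

As with the upload, basic authentication is shown for brevity; the API also supports authenticating once and passing the returned ticket in an OTCSTicket header.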