How can I upload a jar to Nexus 3 via curl?
I tried the tips from the link, but without success.
Here are my attempts:
curl -v -F r=releases -F hasPom=true -F e=jar -F file=@./v12.1.0.1/pom.xml -F file=@./v12.1.0.1/ojdbc7.jar -u admin:admin123 http://localhost:8081/repository/maven-releases
curl -v -F r=releases -F hasPom=false -F e=jar -F g=com.oracle.jdbc -F a=ojdbc7 -F v=1.0 -F p=jar -F file=@./v12.1.0.1/ojdbc7.jar -u admin:admin123 http://localhost:8081/repository/maven-releases
Both return 400 Bad Request.
Contents of directory
cert_for_nexus.pem
curl.exe
pom.xml
utils-1.0.jar
Nexus v3 is configured for http
curl -v -u admin:admin123 --upload-file pom.xml http://localhost:8081/nexus/repository/maven-releases/org/foo/utils/1.0/utils-1.0.pom
curl -v -u admin:admin123 --upload-file utils-1.0.jar http://localhost:8081/nexus/repository/maven-releases/org/foo/utils/1.0/utils-1.0.jar
Nexus v3 is configured for https
prerequisite: you must have a curl build with SSL support enabled
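You can check whether your curl build supports SSL before attempting the https upload (capital V prints version and feature info):
curl -V
and confirm that https is listed under Protocols (and SSL/TLS under Features).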
curl -v --cacert cert_for_nexus.pem -u admin:admin123 --upload-file pom.xml https://localhost:8443/nexus/repository/maven-releases/org/foo/utils/1.0/utils-1.0.pom
curl -v --cacert cert_for_nexus.pem -u admin:admin123 --upload-file utils-1.0.jar https://localhost:8443/nexus/repository/maven-releases/org/foo/utils/1.0/utils-1.0.jar
Contents of pom.xml
<project>
<modelVersion>4.0.0</modelVersion>
<groupId>org.foo</groupId>
<artifactId>utils</artifactId>
<version>1.0</version>
</project>
EDIT: fixed -u order for both https examples
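To sanity-check the upload (assuming the same host, repository, and credentials as above), you can fetch the artifact back and expect a 200 response, e.g.:
curl -v -u admin:admin123 -o downloaded-utils-1.0.jar http://localhost:8081/nexus/repository/maven-releases/org/foo/utils/1.0/utils-1.0.jar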
You could use nexus-cli.
docker run -ti -v $(pwd):$(pwd):ro sjeandeaux/nexus-cli:0.2.0 \
-repo=http://nexus:8081/repository/maven-releases \
-user=admin \
-password=admin123 \
-file=$(pwd)/upload.jar \
-groupID=your.group \
-artifactID=yourArtifactID \
-version=0.1.0 \
-hash md5 \
-hash sha1
I've modified your code as below. Please try this.
curl -v -F r=releases -F hasPom=false -F e=jar -F g=com.oracle.jdbc -F a=ojdbc7 -F v=1.0 -F p=jar -F file=@"./v12.1.0.1/ojdbc7.jar" -u admin:admin123 http://localhost:8081/nexus/service/local/artifact/maven/content
Also, I would suggest using the full path rather than a relative path. Can you share where you are running this curl snippet? A CI tool like Jenkins?
I have read a lot of questions on Stack Overflow. They were helpful for many people, but not for me.
I need to hide the server name, or at least change it.
I wrote a Dockerfile that downloads a dynamic module and injects it into the configuration in the next stage.
ARG VERSION=alpine
FROM nginx:${VERSION} as builder
ENV MORE_HEADERS_VERSION=0.34
ENV MORE_HEADERS_GITREPO=openresty/headers-more-nginx-module
RUN wget "http://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz" -O nginx.tar.gz && \
wget "https://github.com/${MORE_HEADERS_GITREPO}/archive/v${MORE_HEADERS_VERSION}.tar.gz" -O extra_module.tar.gz
RUN apk add --no-cache --virtual .build-deps \
gcc \
libc-dev \
make \
openssl-dev \
pcre-dev \
zlib-dev \
linux-headers \
libxslt-dev \
gd-dev \
geoip-dev \
perl-dev \
libedit-dev \
mercurial \
bash \
alpine-sdk \
findutils
SHELL ["/bin/ash", "-eo", "pipefail", "-c"]
RUN rm -rf /usr/src/nginx /usr/src/extra_module && mkdir -p /usr/src/nginx /usr/src/extra_module && \
tar -zxC /usr/src/nginx -f nginx.tar.gz && \
tar -xzC /usr/src/extra_module -f extra_module.tar.gz
WORKDIR /usr/src/nginx/nginx-${NGINX_VERSION}
RUN CONFARGS=$(nginx -V 2>&1 | sed -n -e 's/^.*arguments: //p') && \
sh -c "./configure --with-compat $CONFARGS --add-dynamic-module=/usr/src/extra_module/*" && make modules
FROM nginx:${VERSION}
COPY --from=builder /usr/src/nginx/nginx-${NGINX_VERSION}/objs/*_module.so /etc/nginx/modules/
COPY devops/nginx/nginx.conf /etc/nginx/
EXPOSE 81 82
CMD ["nginx", "-g", "daemon off;"]
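I build the image with something like this (the tag is just an example; I'm assuming the Dockerfile lives under devops/nginx/ and the build context is the repository root, based on the COPY devops/nginx/nginx.conf line):
docker build -t custom-nginx -f devops/nginx/Dockerfile .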
Then, I added the module to the nginx.conf file. (I also tried to load the module without the quotes.)
load_module "modules/ngx_http_headers_more_filter_module.so";
And finally, I added the following to the http block:
http {
    more_clear_headers server;
    more_set_headers "server: hidden";
    server_tokens off;
    proxy_pass_header server;  # tried adding this for reverse proxying, but it did not work
}
Only server_tokens off; works. I have removed the nginx version, but not its signature; more_clear_headers and more_set_headers do not affect it. What am I missing?
P.S. I checked the modules folder on the server, and my module is loaded correctly.
P.P.S. I tried Server with a capital S, as many suggested, and it did not work either (my response returns the header in lowercase).
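For reference, this is how I check the response header (port 81 is just one of the ports exposed above):
curl -sI http://localhost:81 | grep -i '^server'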
We're facing some issues when installing Varnish 6.0.8 on Ubuntu 18.04.6: it doesn't create the secret file inside the /etc/varnish directory.
We use the following script for the installation:
curl -s https://packagecloud.io/install/repositories/varnishcache/varnish60lts/script.deb.sh | sudo bash
Can someone please help?
PS: we tried installing later versions (6.6 and 7.0.0) and got the same issue.
From a security point of view, remote CLI access is not enabled by default. You can see this when looking at /lib/systemd/system/varnish.service:
[Unit]
Description=Varnish Cache, a high-performance HTTP accelerator
After=network-online.target nss-lookup.target
[Service]
Type=forking
KillMode=process
# Maximum number of open files (for ulimit -n)
LimitNOFILE=131072
# Locked shared memory - should suffice to lock the shared memory log
# (varnishd -l argument)
# Default log size is 80MB vsl + 1M vsm + header -> 82MB
# unit is bytes
LimitMEMLOCK=85983232
# Enable this to avoid "fork failed" on reload.
TasksMax=infinity
# Maximum size of the corefile.
LimitCORE=infinity
ExecStart=/usr/sbin/varnishd \
-a :6081 \
-a localhost:8443,PROXY \
-p feature=+http2 \
-f /etc/varnish/default.vcl \
-s malloc,256m
ExecReload=/usr/sbin/varnishreload
[Install]
WantedBy=multi-user.target
There are no -T and -S parameters in the standard systemd configuration. However, you can enable this by modifying the systemd configuration yourself.
Just run sudo systemctl edit --full varnish to edit the runtime configuration and add a -T parameter to enable remote CLI access.
Be careful with this and make sure you restrict access to this endpoint via firewalling rules.
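As a sketch, assuming you use ufw with a default deny policy for incoming traffic and only want a single admin host to reach the CLI port (192.0.2.10 is a placeholder address):
sudo ufw allow from 192.0.2.10 to any port 6082 proto tcp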
Additionally you'll add -S /etc/varnish/secret as a varnishd runtime parameter in /lib/systemd/system/varnish.service.
You can use the following command to add a random unique value to the secret file:
uuidgen | sudo tee /etc/varnish/secret
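You may also want to tighten the permissions on the secret file so that only root can read it, for example:
sudo chmod 600 /etc/varnish/secret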
This is what your runtime parameters would look like:
ExecStart=/usr/sbin/varnishd \
-a :6081 \
-a localhost:8443,PROXY \
-p feature=+http2 \
-f /etc/varnish/default.vcl \
-s malloc,2g \
-S /etc/varnish/secret \
-T :6082
When you're done just run the following command to restart Varnish:
sudo systemctl restart varnish
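Once Varnish is back up, you can verify that remote CLI access works (assuming the -T and -S values shown above):
varnishadm -T localhost:6082 -S /etc/varnish/secret ping
A PONG reply means both the CLI endpoint and the secret are accepted.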
I'm trying to create a Dockerfile which runs as a non-root user.
When I build this, everything works fine, but nginx cannot write its log file because it doesn't have enough permissions. Can I, when building the Docker image, give root permissions only to nginx?
I tried chmod and chown on the blocked directories; it doesn't work.
FROM php:7.1-fpm-alpine
RUN apk add --no-cache shadow
RUN apk add --no-cache --virtual .ext-deps \
openssl \
unzip \
libjpeg-turbo-dev \
libwebp-dev \
libpng-dev \
freetype-dev \
libmcrypt-dev \
imagemagick-dev \
nodejs-npm \
nginx \
git \
inkscape
# imagick
RUN apk add --update --no-cache autoconf g++ imagemagick-dev libtool make pcre-dev \
&& pecl install imagick \
&& docker-php-ext-enable imagick \
&& apk del autoconf g++ libtool make pcre-dev
# Install Blackfire
RUN version=$(php -r "echo PHP_MAJOR_VERSION.PHP_MINOR_VERSION;") \
&& curl -A "Docker" -o /tmp/blackfire-probe.tar.gz -D - -L -s https://blackfire.io/api/v1/releases/probe/php/linux/amd64/$version \
&& tar zxpf /tmp/blackfire-probe.tar.gz -C /tmp \
&& mv /tmp/blackfire-*.so $(php -r "echo ini_get('extension_dir');")/blackfire.so \
&& printf "extension=blackfire.so\nblackfire.agent_socket=tcp://blackfire:8707\n" > $PHP_INI_DIR/conf.d/blackfire.ini
RUN apk add --no-cache icu-dev \
&& docker-php-ext-configure intl \
&& docker-php-ext-install intl
RUN docker-php-ext-configure pdo_mysql && \
docker-php-ext-configure opcache && \
docker-php-ext-configure exif && \
docker-php-ext-configure pdo && \
docker-php-ext-configure zip && \
docker-php-ext-configure gd \
--with-jpeg-dir=/usr/include --with-png-dir=/usr/include --with-webp-dir=/usr/include --with-freetype-dir=/usr/include && \
docker-php-ext-configure sockets && \
docker-php-ext-configure mcrypt
RUN docker-php-ext-install pdo zip pdo_mysql opcache exif gd sockets mcrypt && \
docker-php-source delete
RUN ln -s /usr/bin/php7 /usr/bin/php && \
curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer && \
mkdir -p /run/nginx
COPY ./init.sh /
COPY ./default.conf /etc/nginx/conf.d/default.conf
COPY ./.env /
RUN chmod +x /init.sh
EXPOSE 80
RUN addgroup -g 1001 node \
&& adduser -u 1001 -G node -s /bin/sh -D node
ARG UID=1001
ARG GID=1001
ENV UID=${UID}
ENV GID=${GID}
RUN usermod -u $UID node \
&& groupmod -g $GID node
RUN chown 1001:1001 /var/lib/nginx -R
RUN mkdir -p /var/tmp/nginx
RUN chown 1001:1001 /var/tmp/nginx -R
USER node
ENTRYPOINT [ "/init.sh" ]
There are quite a few unknowns in your question, for example, the contents of your default.conf file. By default the nginx logs are stored in /var/log/nginx, but I'll assume you're overriding that in the configuration.
The next thing is that the master process of nginx needs to run as root if you want it to be able to bind to system ports (0-1023), so in case you are using nginx as a web server and intend to use ports 80 and 443, you should stick with running the nginx master process as root.
In case you plan to use other ports and are set on the idea of running the master process as non-root, then you can check this answer for suggestions on how to do that - https://stackoverflow.com/a/42329561/5359953
I am using the term master process a lot here because nginx spawns worker processes to handle the actual requests, and those can run as a different user (defined in the nginx configuration file).
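For example, you can check which user the master and the worker processes actually run as inside the container (plain busybox ps is assumed here, since the image is Alpine-based):
ps aux | grep '[n]ginx'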
I found the solution. I just changed RUN chown 1001:1001 /var/lib/nginx -R to RUN chown -R 1001:1001 /var/. That works fine.
RUN chown -R 1001:1001 /var/
Sometimes that will actually be a bad decision, since it changes ownership of everything under /var/.
You can try adding permissions more narrowly instead, like this:
RUN chown -R 1001:1001 /var/tmp/nginx
RUN chown -R 1001:1001 /var/lib/nginx
RUN chown -R 1001:1001 /var/log/nginx
RUN chown -R 1001:1001 /run/nginx
I guess RUN chown 1001:1001 /var/lib/nginx -R worked incorrectly because I put the -R flag too late.
curl -v -u admin:admin123 --upload-file abclog.jar http://111.111.1.121:8081/nexus/content/repositories/releases/com/keshri/fileupload/
This works on Nexus 3.12 (from Windows, uploading a NuGet package):
"$curl.exe" -u ${NUGET_DEPLOYER_USER}:${NUGET_DEPLOYER_PASS} \
  http://mynexus.example.org:9881/nexus/service/extdirect \
  -F filename=the_artifact.nupkg \
  -F file=c:\\fakepath\\the_artifact.nupkg \
  -F repositoryName=nuget-hosted \
  -F extTID=36 -F extAction=coreui_Upload \
  -F extMethod=doUpload -F extType=rpc -F extUpload=true
On 3.14 it fails with something about a CSRF token being missing.
I need to send a fax where the source file comes from an HTTP URL. I have configured HylaFAX. With a local file it works fine, but with a URL it gives an error.
The command I am using is something like this:
sendfax -v -h faxhost -f kaur#xyz.com -D -d 1234567890 \
'http://kaur.dev.xyz.com:7771/app-name/proxy?bName=Test&oName=1.txt'
The error:
Error : 'Can not open file'
The file downloads fine when opened in a browser.
sendfax will process stdin, so you can pipe documents in:
wget -O - 'http://kaur.dev.xyz.com:7771/app-name/proxy?bName=Test&oName=1.txt' | sendfax -v -h faxhost -f kaur#xyz.com -D -d 1234567890
or
curl 'http://kaur.dev.xyz.com:7771/app-name/proxy?bName=Test&oName=1.txt' | sendfax -v -h faxhost -f kaur#xyz.com -D -d 1234567890
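If the HTTP endpoint can return an error page, you may also want curl to fail instead of piping that page into the fax; -sf makes curl produce no output on HTTP errors (this is just a variation of the command above):
curl -sf 'http://kaur.dev.xyz.com:7771/app-name/proxy?bName=Test&oName=1.txt' | sendfax -v -h faxhost -f kaur#xyz.com -D -d 1234567890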