I am trying to set up a mock server using WireMock as a standalone process. I downloaded the jar file and executed the following command:
java -jar wiremock-standalone-2.23.2.jar --port 0
I had to determine a port dynamically because another program on my machine is already using the default port 8080. WireMock chose port 55142, but when I tried accessing it in a browser, I got the following error:
HTTP ERROR 403
Problem accessing /__files/. Reason:
Forbidden
Powered by Jetty://
That is probably because you just entered http://localhost:55142
and there are no mappings in the ./mappings directory and no files in the ./files directory (both located in the same directory as your wiremock jar file), so WireMock logs:
2019-06-04 00:10:58.890 Request was not matched as there were no stubs registered:
{
"url" : "/"
...
}
Please try calling the __admin endpoint to see whether WireMock is working:
http://localhost:55142/__admin
Please also see the docs here for more useful admin commands.
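For example, a minimal sketch of registering a stub for / through the admin API, so that http://localhost:55142/ returns something (the port and the response body are just placeholders taken from this question):

curl -X POST http://localhost:55142/__admin/mappings \
  -d '{ "request":  { "method": "GET", "url": "/" },
        "response": { "status": 200, "body": "Hello from WireMock" } }'

Alternatively, the same JSON can be saved as a file in the ./mappings directory and it will be picked up on startup.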
1. Problem
The git push command returns the following error if one file is larger than ~1MB:
Pushing to http://mygitlabserver.pitunnel.com/root/my_project.git
POST git-receive-pack (1163897 bytes)
error: RPC failed; HTTP 413 curl 22 The requested URL returned error: 413
fatal: the remote end hung up unexpectedly
fatal: the remote end hung up unexpectedly
Everything up-to-date
The server is an RPi 4 with an SSD attached, accessed via pitunnel (standard subscription).
The push fails if one file is larger than 1MB
The push returns no error even if the commit is 150MB (a lot of small files)
The push returns no error if an mp3 file of multiple MBs gets pushed.
2. Problem
Not really a problem, but it may be related to the first one:
if a large project that was exported from gitlab.com is imported, it returns the same error:
413 Request Entity Too Large
nginx/1.10.3 (Ubuntu)
But only when connected via pitunnel (link); it works if the project is uploaded over the local network.
Nginx seems to be the problem.
In the gitlab.rb file the following parameters are set, and the GitLab service was restarted according to the GitLab docs:
nginx['enable'] = true
nginx['client_max_body_size'] = '900m'
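For reference, on an Omnibus install a gitlab.rb change like this is normally applied with something along these lines (the exact procedure is in the GitLab docs):

sudo gitlab-ctl reconfigure      # regenerate the bundled nginx config from gitlab.rb
sudo gitlab-ctl restart nginx    # restart the bundled nginx so the new limit is loaded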
PS: The repo will use git LFS after this problem is solved.
For anyone with a similar problem:
Pitunnel was the problem.
I'm working on a project for deploying a pentest lab with Terraform & Ansible. Everything is working well except this last problem.
In my lab I have an nginx server running on a Windows server. Nginx with PHP works when I start them as Administrator with Ansible, but I need them to run under a non-admin local account.
For PHP I've made a wrapper using this tool: https://github.com/antonioCoco/RunasCs
But it doesn't work with nginx because of a working directory problem.
Here is the error:
PS C:\Users\Administrator> .\RunAsCs.exe nginx ***** C:\Web\nginx-1.19.6\nginx.exe
[*] Warning: GetUserProfileDirectory failed with error code: 2
[*] Warning: Unable to obtain environment for user 'nginx'.
[*] Warning: Environment of created process might be incorrect.
nginx: [alert] could not open error log file: CreateFile() "logs/error.log" failed (3: The system cannot find the path specified)
2021/03/06 10:18:33 [emerg] 5556#6124: CreateFile() "C:\Windows\system32/conf/nginx.conf" failed (3: The system cannot find the path specified)
And that's normal because, as you can see, my wrapper starts in Windows/System32.
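In other words, nginx resolves logs/error.log and conf/nginx.conf relative to its prefix, which here falls back to the wrapper's working directory. Making those paths explicit with nginx's -p and -c switches would look roughly like this (an untested sketch; whether RunAsCs forwards the quoted arguments correctly is an assumption):

.\RunAsCs.exe nginx ***** "C:\Web\nginx-1.19.6\nginx.exe -p C:\Web\nginx-1.19.6 -c C:\Web\nginx-1.19.6\conf\nginx.conf"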
I would like to know if there is a solution, either with nginx.conf or with Ansible, to start this exe as the "nginx" user.
This is working code for starting nginx as Administrator:
- name: Starting web server
  win_shell: .\nginx.exe
  args:
    chdir: C:\Web\nginx-1.19.6
  async: 180
  poll: 0
I know that there is a psexec module in Ansible, but psexec only works with a Local Admin account, and the whole point is that my nginx should not run as Local Admin.
Thanks for the help!
I have a web app running on a machine with IP 172.10.10.10.
The basic API call exposed by this app is GET http://172.10.10.10,
and it returns an OK response.
On another machine I added an entry in /etc/hosts file as below.
172.10.10.10 webserver1.com
With this, the hostname resolves successfully for the ping command, e.g. ping webserver1.com.
Now I want it to resolve for the curl command as well,
e.g. curl http://webserver1.com
Result : curl: (6) Could not resolve host: webserver1.com
How can I achieve this for the curl command with an HTTP URL?
You can set up a DNS server and point to it in /etc/resolv.conf.
There are many options on the market (paid and free) for a local DNS server, both dockerized and non-dockerized.
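For instance, a minimal sketch, assuming a local DNS server is running at 172.10.10.53 (a hypothetical address) and serves an A record mapping webserver1.com to 172.10.10.10:

# /etc/resolv.conf on the client machine
nameserver 172.10.10.53

After that, curl http://webserver1.com should resolve through that DNS server instead of failing with error (6).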
My institute recently installed a new proxy server for our network. I am trying to configure my Cygwin environment to be able to run wget and download data from a remote repository.
Browsing the internet I have found two different solutions to my problem, but neither of them seems to work in my case.
The first one I tried was to follow these instructions, so in Cygwin:
cd /cygdrive/c/cygwin64/etc/
nano wgetrc
At the end of the file, I added:
use_proxy = on
http_proxy=http://username:password@my.proxy.ip:my.port/
https_proxy=https://username:password@my.proxy.ip:my.port/
ftp_proxy=http://username:password@my.proxy.ip:my.port/
(of course, using my user and password)
The second approach was the one suggested by this SO post, so in my Cygwin environment:
export http_proxy=http://username:password@my.proxy.ip:my.port/
export https_proxy=https://username:password@my.proxy.ip:my.port/
export ftp_proxy=http://username:password@my.proxy.ip:my.port/
In both cases, when I try to test wget, I get the following:
$ wget http://www.google.com
--2020-01-30 12:12:22-- http://www.google.com/
Resolving my.proxy.ip (my.proxy.ip)... 10.1XX.XXX.XX
Connecting to my.proxy.ip (my.proxy.ip)|10.1XX.XXX.XX|:8XXX... connected.
Proxy request sent, awaiting response... 407 Proxy Authentication Required
2020-01-30 12:12:22 ERROR 407: Proxy Authentication Required.
It looks as if my username and password are not OK, but I checked them in my browser and my credentials work just fine.
Any idea what this could be due to?
This problem was solved thanks to the suggestion of a user of the AskUbuntu community.
Basically, instead of editing the global configuration file wgetrc, I should have created a new .wgetrc file with my proxy configuration in my Cygwin home directory.
In summary:
Step 1 - Create a .wgetrc file:
nano ~/.wgetrc
Step 2 - Record the proxy info in this file:
use_proxy=on
http_proxy=http://my.proxy.ip:my.port
https_proxy=https://my.proxy.ip:my.port
ftp_proxy=http://my.proxy.ip:my.port
proxy_user=username
proxy_password=password
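Since the file now holds credentials, it may also be worth locking down its permissions and re-running the test, for example:

chmod 600 ~/.wgetrc           # the file contains the proxy password, so keep it private
wget http://www.google.com    # should now pass the proxy without the 407 error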
I have installed Intern on my local machine (192.168.1.50) and want to use the Qt Browser WebDriver on a remote machine (192.168.1.76). I've changed intern.js and added the correct hostname, as shown below:
tunnelOptions: {
    hostname: '192.168.1.207:9517'
},
The Qt browser is specified as well:
environments: [
    { browserName: 'QTBrowser', version: '5.4', platform: [ 'LINUX' ] }
],
Tunnel is set to NullTunnel.
When executing the tests, the following error is shown:
C:\intern-tutorial>intern-runner config=tests/intern.js
Listening on 0.0.0.0:9000
Tunnel started
Suite QTBrowser 5.4 on LINUX FAILED
Error: [POST http://192.168.1.207:9517/wd/hub/session] connect ETIMEDOUT 192.168.1.207:4444
  at Server.createSession
  at retry
  at runCallbacks
  at run
  at nextTickCallbackWith0Args
  at process._tickCallback
TOTAL: tested 0 platforms, 0/0 tests failed; fatal error occurred
Error: Run failed due to one or more suite errors
  at emitLocalCoverage
  at finishSuite
  at runCallbacks
  at run
  at nextTickCallbackWith0Args
  at process._tickCallback
I am able to access the remote WebDriver myself via the browser using the URL http://192.168.1.76:9517/status,
so the connection is correct, but Intern adds /wd/hub/session, which actually isn't needed.
How can I stop Intern from doing this?
You can get past the 'wd/hub' issue by setting pathname in the tunnel options:
tunnelOptions: {
    pathname: '/',
    hostname: '192.168.1.207',
    port: 9517
}
However, there are currently a couple of incompatibilities between Intern and QtWebDriver. One is that QtWebDriver requires that headers use a specific capitalization scheme, like 'Content-Type'. However, the library Intern uses to handle its requests currently normalizes header names to lowercase. This should be fine, because headers are supposed to be case insensitive, but not everything follows the standard.
Another problem is that, unlike most other WebDriver implementations, QtWebDriver responds to a session creation call with a 303 response rather than a 200, and the redirect address is relative. While that should be fine, the version of the Leadfoot library used by Intern doesn't properly follow relative redirect addresses.
These issues should be fixed in a future version of Intern, but for the moment Intern doesn't work out-of-the-box with QtWebDriver.