When I use Scrapy Splash + Crawlera on my Linux server, it always gets 503 errors, yet it works just fine on Windows. Why is that?
It turns out it doesn't work if I set SPLASH_URL = '0.0.0.0:8050'. I have to set it using the server's actual IP instead of a localhost-style address, like SPLASH_URL = 'xxx.xxx.xxx.xxx:8050'.
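For context: 0.0.0.0 is a "listen on all interfaces" bind address, not a routable destination, so a client can't connect to it. A minimal settings.py sketch, assuming scrapy-splash's documented middleware setup; the IP below is hypothetical:

# settings.py (sketch; replace the IP with your server's real address)
SPLASH_URL = 'http://192.168.1.50:8050'  # the server's real IP, not 0.0.0.0 or 127.0.0.1

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}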
I set up a wp-env environment a while ago to tinker with a website I was working on.
Today I went to work on it some more and when I ran wp-env start I got the error in the image.
I have tried using the default "localhost" instead of 127.0.0.1, as well as including the port, like 127.0.0.1:8888, to no avail. I have run all the Docker clean and destroy commands, and restarted and updated both the MySQL server and Docker itself. What are some next steps I can take? Thanks!
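Since the actual error text isn't included here, a sketch of some generic next steps, assuming a standard wp-env/Docker setup (wp-env destroy and the Docker commands below are the stock ones):

# see whether the wp-env containers are up and which ports they publish
docker ps --format '{{.Names}}\t{{.Ports}}'

# check whether something else on the host already holds the usual ports
sudo lsof -i :8888 -i :3306

# rebuild the environment from scratch
wp-env destroy
wp-env start --update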
I have Gitlab 8.6 running on an Ubuntu 14.04 server that seems to have gotten messed up. I consistently get a 502 error when accessing the site. The server likely has not been restarted since installing Gitlab initially, and a power outage caused the server to reboot. Now, I cannot start/restart Gitlab due to what appears to be port conflicts.
I installed Gitlab via source, I don't have any custom port configurations, and am using NGINX. nginx -t shows that the configuration appears to be correct syntax-wise.
When I run netstat -tupln, I see that Unicorn and a GitLab instance are already running on :8080 and :80 respectively at boot. I suspect that a second instance of GitLab was installed, which runs at boot and causes port conflicts for the proper instance when I try to run it via service gitlab restart. I'm not even sure if that's possible, but I can't seem to figure out where to go from here. Every time I run sudo gitlab-ctl reconfigure or service gitlab start, it fails, and unicorn.stderr.log shows bind errors on port :8080. I tried moving the Unicorn service to :8081 as well, but I still receive the port binding error.
Does anyone know how I can detect if there are multiple Gitlab instances running, and maybe if there is a way to remove a duplicated one if it's possible? Thank you!
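A sketch of how one might look for duplicate instances with standard tools (the grep patterns and the /opt/gitlab path are assumptions; /opt/gitlab existing would indicate an omnibus install sitting next to a source install in /home/git/gitlab):

# which processes own the contested ports?
sudo lsof -i :80 -i :8080

# list anything that looks like a GitLab or Unicorn process
ps auxww | grep -E '[u]nicorn|[g]itlab'

# check for an omnibus install alongside the source install
ls -d /opt/gitlab /home/git/gitlab 2>/dev/null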
EDIT: Here is what is in the /etc/gitlab/gitlab.rb file. Everything else is commented out.
## Url on which GitLab will be reachable
external_url 'http://my-gitlab-instance.domain.com'
EDIT 2: My /home/git/gitlab/ directory is mapped to https://gitlab.com/gitlab-org/gitlab-ce.git, and is on the 8-7-stable branch. gitlab-shell and gitlab-workhorse are on the correct versions according to https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/update/8.6-to-8.7.md
EDIT 3: I have gotten to a point where GitLab seems to self-check okay after removing the gitlab-ce package (https://gitlab.com/gitlab-org/omnibus-gitlab/issues/135), but the server returns a 404. NGINX, Unicorn, Sidekiq, and gitlab-workhorse all say that they're running. I see that unicorn.rb is listening on :8080, and nginx is listening on 0.0.0.0:80 and :::80. I guess now I'm troubleshooting this 404, and hopefully I will be back to my install-from-source.
What I found is that there were two issues causing the errors I had.
First, I removed a "gitlab-ce" package that was installed, following the instructions here: https://gitlab.com/gitlab-org/omnibus-gitlab/issues/135. For some reason, when I restart the machine I now have to restart these services, in this order, for GitLab to run properly: redis-server, gitlab, nginx. After that, GitLab does start responding properly.
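As a sketch, that restart sequence on Ubuntu 14.04 with the stock init scripts looks like this:

sudo service redis-server restart
sudo service gitlab restart
sudo service nginx restart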
Second, the 404 error was due to a different server that was also listening on that IP address, causing a conflict.
I will likely move to using the omnibus package on a fresh, new server going forward, but at least the immediate issues appear resolved. Thanks for your help, SLY!
I have a small nginx-based test application that I want to run inside a Docker container, so I followed the example given here: docker installation
So I have a folder named restartTest, and it contains an index.html file with a single line that says Docker Test 1. I mount this as my volume at runtime for the Docker container. The command I use is
docker run -dP -v /Users/Sachin/restartTest/:/usr/share/nginx/html --name engine2 nginx
And it runs fine. I use curl to verify that the volume has mounted properly and the application is running as desired. Then I change the content of the index.html file (from my host) to Docker Test 2 and restart the container. I execute the following command to verify that the content has indeed changed inside the Docker container:
docker exec engine2 cat /usr/share/nginx/html/index.html
And as expected, the file reads Docker Test 2. However, when I use curl to see if the webpage also reflects the change, I still get Docker Test 1 as the response. The index.html reflects the change, yet whether I run curl or access the app from the browser, I still get the old result. I have tried the following, but to no avail:
Restart the service
Stop and start the container
Stop and start the boot2docker VM and docker daemon.
I have no clue as to why this is happening.
So I found this known bug in the VirtualBox VM that is used for running Docker on Mac.
The bug only appears when content is shared between the host machine and the VirtualBox VM. Web servers like nginx and apache (and apparently vertx) have an optimisation: whenever we request a static file, the server uses sendfile to deliver it. The bug is that, with a VirtualBox shared folder, we always get the first version of the file no matter what we try. The workaround for nginx and apache is to turn sendfile off (a config sketch follows the steps below). For vertx, however, we use this hack:
rename the file, say login.html, to login.html.moved (anything)
curl http://…/login.html (we won't get anything)
rename the file back from login.html.moved to login.html
hard refresh the page (Command + Shift + R)
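For nginx, a minimal sketch of the sendfile workaround mentioned above; the server block mirrors the stock default site, and only the sendfile line is the actual fix:

server {
    listen 80;
    root /usr/share/nginx/html;
    # VirtualBox shared folders confuse sendfile's kernel-side shortcut,
    # so force nginx to read the file contents on every request
    sendfile off;
}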
For further reading about this bug, consult the following:
Link1
Link2
Link3
Link4
I assume it is a caching problem. Did you try setting expires -1 in your index.html location configuration to disable server-side caching of static files?
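A sketch of what that suggestion could look like in the nginx config; the location block is assumed, and expires -1 makes nginx emit an already-expired Expires header plus Cache-Control: no-cache:

location = /index.html {
    # tell clients and proxies never to reuse a cached copy
    expires -1;
}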
From previous versions of the question, there is this: Browse website with ip address rather than localhost, which outlines pretty much what I've done so far...I've got the local IP working. Then I found ngrok, and apparently I don't need to connect via the IP.
What I am trying to do is expose my website running on localhost to the internet. I found a tool that will do this: ngrok.
Running the website in visual studio, the website starts up on localhost/port#. I run the command "ngrok http port#" in the command line. Everything seems to start up fine. I generate a couple of URLs, and the ngrok inspection url (localhost:4040) works.
The only problem is that when I go to the generated URLs, I get an HTTP 400 error: bad request, invalid hostname. This is a different error from the one I get when I run "ngrok http wrongport#", which is a host not found error... so I think something good is happening; I just can't tell what.
Is there a step I am missing in exposing my site to the internet via the tunneling service? If there is, I can't find it in the ngrok documentation.
Troubleshot this issue with ngrok. In the words of inconshreveable, some applications get angry when they see a different host header than expected.
Running the following command should fix the problem:
ngrok http [port] --host-header="localhost:[port]"
Depending on the version, you may need the older single-dash flag form instead:
ngrok http [port] -host-header="localhost:[port]"
The following command will fix the issue:
ngrok http -host-header=localhost 8080
This didn't work for me.
You could do the following:
For IIS Express
In VS 2015:
Go to the .vs\config\applicationhost.config file in your project folder
In VS 2013 and earlier:
Go to %USERPROFILE%\My Documents\IISExpress\config\applicationhost.config
Find the binding that says:
<binding protocol="http" bindingInformation="*:5219:localhost" />
For me it was a project running on port 5219
change it to
<binding protocol="http" bindingInformation="*:5219:" />
IIS Express will now accept all incoming connections on that port.
Disadvantage: you need to run IIS Express as admin.
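As an aside (not from the original answer): a commonly used way around the run-as-admin requirement is to reserve the URL once from an elevated prompt, after which IIS Express can bind to it as a normal user; 5219 is the example port from above:

netsh http add urlacl url=http://*:5219/ user=everyone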
or you could rewrite the host header in Ngrok:
ngrok.exe http -host-header=rewrite localhost:5219
For https this works:
ngrok http https://localhost:<PORT> --host-header="localhost:<PORT>"
UPDATED COMMAND FOR LATEST VERSION
Tested with: (Windows) (ngrok v3.0.5)
Use a double dash (--) instead of a single dash (-):
ngrok http --host-header=localhost 8080
The simplest thing for me was using iisexpress-proxy + ngrok.
First, I install iisexpress-proxy globally with npm:
npm install -g iisexpress-proxy
Then I proxy my localhost with it. Say, for instance, my site is running on port 3003:
iisexpress-proxy 3003 to 12345
where 12345 is the new HTTP port I want to proxy to.
Then I can run ngrok on it.
./ngrok.exe http 12345
It just works! 😃
But I think it works only with HTTP. I haven't tested HTTPS yet; even if it works, it's usually a lot of extra work, as always.
Try different regions, from Global infrastructure > Locations:
ngrok http -region eu 8080
You can make a request and view any traffic passing through your tunnel using the ngrok traffic inspector at http://localhost:4040.
Or on the command line:
ngrok http -region eu 8080 --log=stdout
If one region fails then try with another.
ngrok runs tunnel servers in datacenters around the world. The location of the datacenter within a given region may change without notice (e.g. the European servers may move from Frankfurt to London).
us - United States (Ohio)
eu - Europe (Frankfurt)
ap - Asia/Pacific (Singapore)
au - Australia (Sydney)
sa - South America (Sao Paulo)
jp - Japan (Tokyo)
in - India (Mumbai)
First, open the ngrok configuration YAML file; run from a terminal:
ngrok config edit
Example YAML for a localhost setup (client & server):
version: "2"
authtoken: {YOUR_AUTH_TOKEN_FROM_NGROK_WEBSITE}
tunnels:
  client:
    addr: 3000
    proto: http
    host_header: localhost
  server:
    addr: 4000
    proto: http
    host_header: localhost
Save the config file based on your client and server ports and run the following command:
ngrok start --all
This makes ngrok open a tunnel for every configuration declared in the YAML file.
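As a usage note, with the tunnel names from the sketch above you can also start a single named tunnel instead of all of them:

ngrok start client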
I had an IIS Express .NET web API and had installed ngrok in Docker (with Windows as the host).
I got the "Bad Request" error, and the following command worked for me:
docker run -it -e NGROK_AUTHTOKEN=<token> ngrok/ngrok --host-header=localhost:21852 http host.docker.internal:21852
As I understood later, --host-header is needed because IIS Express refuses all requests from outside (the header must be "localhost:port"), and I used host.docker.internal instead of localhost because ngrok was running inside Docker while IIS Express was running on the Windows host.
I had the same issue and used the following solution:
Make sure your application's binding in IIS is set to All Unassigned IP addresses
Run ngrok http 127.0.0.1:173 --region=eu --hostname=yourcustomdomain.eu.ngrok.io
That's it. Works perfectly. (The --hostname flag requires a paid pro account.)
Steps:
Run this command from your console in the ngrok.exe directory: ngrok http [port], e.g. ngrok http 80 (screenshot: https://www.screencast.com/t/oyuEPlR6Z)
Set the ngrok URL in your app.
It will create a tunnel to your application.
Thanks.
Just getting started with OpenStack.
I got everything set up on an Ubuntu VM (under Parallels).
When I attempt to log into the browser console as admin (the password was set during the DevStack install) - I get:
HTTPConnectionPool(host='10.211.55.16', port=8774): Max retries exceeded with url: /v2/a586870bde4c4dfc993dc40cab8047b7/extensions (Caused by <class 'socket.error'>: [Errno 111] Connection refused)
I am, however, able to run CLI commands such as keystone tenant-list, and all the others, on the actual server.
I made sure that I'm able to ping the virtual Ubuntu host from my Mac. When I first browse to http://myhost.mydomain I do get a login page, but as soon as I enter the admin credentials, I get this ugly (and super long) error.
What things could I check to fix this?
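Connection refused means nothing is listening on the nova-api port (8774), so a sketch of the first things to check, assuming a DevStack install; the IP is the one from your error, and DevStack of that era ran its services in a screen session named stack:

# is anything listening on the nova-api port?
sudo netstat -tnlp | grep 8774

# does the API respond locally on the server itself?
curl http://10.211.55.16:8774/

# reattach to the DevStack screen session and look for a crashed n-api window
screen -ls
screen -r stack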
Resolution:
1) Wiped clean my Ubuntu host
2) Followed the step-by-step instructions here: http://www.stackgeek.com/guides/gettingstarted.html
Everything now works without a glitch.