I get a 400 Bad Request error when trying to load medium.com. This only happens on my home network (if I take my laptop outside my network or turn off Wi-Fi on my phone, the site loads). I have cleared my cache, cookies, and browsing history on every device, restarted every device, and restarted my modem, to no avail. I searched this site for answers, but all the questions seem to be related to development, and at the moment I'm just trying to load a page. How can I fix this? Thanks!
I can't open "www.google.com" from my desktop in any browser. It always says "Error code 118 (net::ERR_CONNECTION_TIMED_OUT)".
But it works if I type www.google.com.tr (I'm from Turkey) or if I use a VPN. Because of this, I can't open Gmail either.
This also prevents Google captchas from loading, so I can't log in to some websites.
This isn't happening on my phone or tablet; they work normally, so it's not a problem with my modem.
I have tried flushing DNS and other command prompt commands, uninstalling my antivirus, turning off Windows Defender completely, using other DNS servers, etc.
I have tried pinging "google.com" and it works, but I can't ping "www.google.com"; it times out as well.
I don't want to reset Windows just for this.
Any ideas?
Found the fix. If anyone is having a similar issue, check this out:
My hosts file contained the line "216.58.208.164 www.google.com". I deleted it and reset my Ethernet adapter. After that, I was able to open Google and the captchas.
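If you want to check for the same thing, something like this from an elevated command prompt should do it (a minimal sketch; the exact stale entry in your hosts file may differ):

    :: Open the hosts file (run Notepad as Administrator) and delete any www.google.com lines
    notepad C:\Windows\System32\drivers\etc\hosts

    :: Flush the DNS resolver cache so the change takes effect immediately
    ipconfig /flushdns

    :: Optionally reset Winsock as well, then reboot
    netsh winsock reset

Note that ping resolves names through the OS (which consults the hosts file), while nslookup queries the DNS server directly, so a mismatch between ping www.google.com and nslookup www.google.com is a good hint that a hosts entry is the culprit.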
Have you tried clearing your cache and cookies? Or closing every tab except the one you are using and uninstalling extensions you don't need? Maybe it's a memory issue, since it is simply timing out.
Edit: found this on Google support that may help:
https://support.google.com/chrome/forum/AAAAP1KN0B0kPkVzx_HQWI/?hl=en
I am having trouble understanding why my site will load on some networks but not others.
I am getting "ERR_NAME_NOT_RESOLVED" on Chrome in the following environments:
1) Office Network - This made me believe that it was a firewall issue
2) Open WiFi Network at a Coffee Shop - This made me believe that it was not a firewall issue
I am able to access the site without any issues from my home network and also when turning my phone into a hotspot.
I did some digging and hoped that adding an IPv6 record to my DNS would resolve the issue (maybe those networks were resolving over IPv6), but that did not help either.
I ended up running a test on MXToolBox.com and received these warnings back.
MXToolBox Response
I do not have any email set up with this domain, so I am not concerned about the email record errors, but the one I can't figure out is the top one with Category: DNS and the message "DNS Record not found". The "More Info" link says "We did not find a DNS record at the location specified. You should check with DNS provider to ensure the record has been published and there are no typos."
I have an A record pointing to the correct IP (as I am able to access the site from my home and cell phone networks). I added an AAAA record for IPv6 (though I'm not sure I fully understand why it's necessary). My A record has been in place for several weeks now, so it's not a propagation issue, and my AAAA record was added a few days ago with an SOA of 10 minutes, so I don't believe that's a propagation issue either.
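One way to narrow this down is to query the record directly from an affected network and compare it against a public resolver (a minimal sketch; example.com stands in for the actual domain):

    :: Ask the network's own resolver for the A and AAAA records
    nslookup -type=A example.com
    nslookup -type=AAAA example.com

    :: Ask a public resolver (Google's 8.8.8.8) for the same records
    nslookup -type=A example.com 8.8.8.8
    nslookup -type=AAAA example.com 8.8.8.8

If the public resolver returns the record but the office or coffee-shop resolver does not, the problem is with their DNS servers (stale or negative caching, or filtering) rather than with the zone itself.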
I have opened this question because I'm not even sure how to Google this any further; everything that comes back is typically client-side advice (clear your cache/cookies), which isn't my issue.
My VPS is hosted by Linode.
Thanks in advance for any advice/help.
I am running an Ubuntu VM via Vagrant on a Windows 10 host. On the Vagrant machine I am running a fairly standard PHP/nginx app.
Whenever I try to access the web app, it takes forever to load. Chrome network inspector shows this:
Chrome network timeline
This huge latency is completely gone on subsequent requests, but whenever I pop back into the browser and try again after a while, it crops up yet again.
I am using NFS.
I have disabled firewalls on both guest and host machines.
I increased keepalive_timeout in nginx which helped hide the problem, as it increased the time window for latency-free subsequent requests.
This latency occurs even when accessing static files, so I don't think it's a PHP-FPM/MySQL problem.
I successfully figured out what my problem was!
After looking at my Windows hosts file, it looked like my vagrant-hostmanager plugin had not been properly clearing out older IP entries (i.e. I had three separate IP entries for myapp.dev even though only one IP was active), probably because I'd forgotten to properly vagrant halt before shutting down my PC a few times.
Windows was clearly spending ages trying to resolve the two older entries before successfully resolving the 'real' one.
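For anyone hitting the same thing, the stale section of C:\Windows\System32\drivers\etc\hosts looked roughly like this (a sketch with made-up private IPs):

    # vagrant-hostmanager entries; the first two were left behind by unclean shutdowns
    192.168.33.10  myapp.dev
    192.168.33.12  myapp.dev
    192.168.33.15  myapp.dev

Deleting the dead lines by hand (or, if I remember right, re-running the plugin's vagrant hostmanager command after a clean vagrant up) clears it out.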
It's weird: you'd think this problem would cause the latency to show up in the DNS Lookup portion of the Chrome network timeline, rather than Initial connection, but oh well!
I'm not sure if this is the right place to ask this, but I'll do it anyway.
I have developed an uploader for the Umbraco CMS that lets people upload a queue of files in one go. It uses a simple Flash app that calls a .NET .ashx handler to upload the files one at a time; when one is done, the next one starts.
Recently a user has hit a problem where 1 or 2 uploads go up fine, but then the rest fail. This happens for him and for a client of his. After some debugging, he thinks he's found the cause, but it seems weird, so I was wondering if anyone else has had this problem.
Both he and his client are on fibre optic broadband connections, so they have very fast upload speeds. When it was tested on a slower broadband connection, all the files uploaded without a problem. According to one of his developer friends, they had apparently come across this before and had to put a slight delay in the upload script to make it work.
Does this sound possible? Had anyone else hit this problem? Is there a known workaround to prevent the uploads from failing?
I have not struck this precise problem before, but I have done a lot of DSL and broadband troubleshooting, so I will do my best to answer.
There are two possible causes for this particular symptom, both generally outside of your network control (I would have thought).
1) Packet loss
Where some links receive a very high volume of traffic, they can choose to simply drop data (e.g. anything over that link's maximum set size), but TCP/IP should be handling that, and it expects that sort of drop from time to time, so this seems less likely.
2) The receiving server
There may be HTTP bottlenecks into that server, or the receiving server's CPU, RAM, etc. may be at capacity.
From a troubleshooting perspective, even if these symptoms shouldn't (in theory) exist, the fact that they do, and that you have a specific scenario that reproduces them (the fast upload connections), means they are worth treating as real.
The next step, if you really need to understand what is going on, might be to get a packet sniffer (like Wireshark) and work out at the packet level exactly what is happening.
You could also use socket programming to talk directly to the TCP/IP sockets, so you would be working at the lower network layers and seeing the responses, timeouts, etc. yourself.
Also, if you control the receiving server, you can do the same from that end, or at least review the error logs to see what is being reported as a problem.
A really basic method would be to run a pathping to the receiving server, if that is possible; that might highlight slow hops on the way to the server, or packet loss between your local machine and the end server.
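For example (a sketch; substitute the actual upload server's hostname):

    :: pathping combines tracert and ping: it traces the route, then reports per-hop latency and packet loss
    pathping upload.example.com

It takes a few minutes to gather its statistics, but the per-hop loss column is exactly the kind of evidence you want here.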
The upshot? Put a slow-down (delay) function in the upload code, and that should at least make it work.
Get in touch if you need any help analysing the Wireshark captures.
I have run into a similar problem with an MVC2 website using a Flash uploader and Firefox. The servers were load balanced with a Big-IP load balancer. What we found while debugging is that Flash, in Firefox, did not send the session ID on continuation requests, and the load balancer would send those continuation requests off to another server. Because the user had no session on the new server, the request failed.
If a file could be sent in one chunk, it would upload fine. If it required a second chunk, it failed. Because of this the upload would fail after an undetermined number of files being uploaded.
To fix it, I wrote a Silverlight uploader.
I have a website running a basic ASP.NET application that is mostly used from a single location, which is my client's office. The server is at a high-class datacenter.
Whenever I've tested or used my application from outside their office I get consistently good connections, but from their office the connection seems inconsistent. Sometimes requests just don't seem to make it from the browser to the server. I'm not familiar with the network hardware in the office, but they do have a T1 connection, which should always be on.
I've tried ping and tracert and everything looks normal. When running Firebug during a failed request, the request shows up in the log, then just sits there without sending any data, and eventually it times out.
My question is: what tools can I use to diagnose this connection problem and start narrowing it down to a specific cause so I can fix it? It's an intermittent problem, so a long-running tool would probably make more sense, if one is available.
Thanks for any help.
All of your standard ping and traceroute tools are probably your best bet. I'm not quite following, though: where is the site located?
If you open a command prompt and run ping -t aspwebsiteurl.domain, it will show whether there is any packet loss.
From the command prompt again, tracert aspwebsiteurl.domain will show you what route the packets take to reach the site. It may also show whether one particular hop is causing the hiccup.
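Since the problem is intermittent, you could also leave a simple logging loop running on a machine in the office so you capture the exact time a failure happens (a rough sketch as a batch file; aspwebsiteurl.domain is a placeholder for the real hostname):

    @echo off
    :: Log one timestamped ping result every 10 seconds; review ping_log.txt after a failure
    :loop
    echo %date% %time% >> ping_log.txt
    ping -n 1 aspwebsiteurl.domain | find "TTL=" >> ping_log.txt
    timeout /t 10 /nobreak > nul
    goto loop

A timestamp with no reply line after it marks a moment when packets were being dropped, which you can then line up against the failed requests you see in Firebug.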
Is there a proxy between the office and the datacenter that could be causing issues?
You could also try Wireshark to debug the problem in more detail.
Speed Test - Internet Network Connection Speed may be of some help; it has links for testing the connection at the client's office to see how well it performs.
Another question is how far apart the client and the datacenter are. If one is in New York and the other in Los Angeles, the distance may be a factor. Also, have you examined any possible DNS issues?