I was using wget to download a file, like this:
wget link/file.zip
The file.zip was about 100 MB, but I only received 5552 bytes.
Whatever I downloaded (anything larger than 5552 bytes, from other hosts too), I only received 5552 bytes!
The HTTP response header Content-Length was 5552 too!
I am using Ubuntu 14.04. Is there some network configuration that would solve this problem?
Thank you very much!!
I am creating an ngrok tunnel using the command: ngrok http 2567
In my application I'm making various console.log calls, and I'd like to be able to view them.
I have tried ngrok help and the --log flag, but with no luck.
How can I view my logs when hosting on ngrok?
If you want to see the logs from ngrok, open localhost:4040 in your browser; it shows a list of all the requests you have made.
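The ngrok agent also exposes the same inspection data over a local API, so you can query it from the shell as well. A minimal sketch, assuming the default web-interface address 127.0.0.1:4040:
# Query the ngrok inspection API for the captured HTTP requests
curl http://127.0.0.1:4040/api/requests/http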
Try running the command: ./ngrok <http/https> <port_number> --log=stdout
Ex: ./ngrok http 8080 --log=stdout
Worked for me.
Use the following command, with > ngrok.log & appended, to write the ngrok logs into a file named ngrok.log (rename it to suit your needs):
ngrok <http/https> <port_number> --log=stdout > ngrok.log &
After that, open ngrok.log to see the log details.
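Since the trailing & puts ngrok in the background, you can follow the file from the same terminal while it grows:
# Follow the log output as ngrok appends to the file
tail -f ngrok.log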
My institute recently installed a new proxy server for our network. I am trying to configure my Cygwin environment to be able to run wget and download data from a remote repository.
Browsing the internet, I found two different solutions to my problem, but neither of them seems to work in my case.
The first one I tried was to follow these instructions, so in Cygwin:
cd /cygdrive/c/cygwin64/etc/
nano wgetrc
At the end of the file, I added:
use_proxy = on
http_proxy=http://username:password@my.proxy.ip:my.port/
https_proxy=https://username:password@my.proxy.ip:my.port/
ftp_proxy=http://username:password@my.proxy.ip:my.port/
(using my actual username and password, of course)
The second approach was the one suggested by this SO post, so in my Cygwin environment:
export http_proxy=http://username:password@my.proxy.ip:my.port/
export https_proxy=https://username:password@my.proxy.ip:my.port/
export ftp_proxy=http://username:password@my.proxy.ip:my.port/
In both cases, when I test wget, I get the following:
$ wget http://www.google.com
--2020-01-30 12:12:22-- http://www.google.com/
Resolving my.proxy.ip (my.proxy.ip)... 10.1XX.XXX.XX
Connecting to my.proxy.ip (my.proxy.ip)|10.1XX.XXX.XX|:8XXX... connected.
Proxy request sent, awaiting response... 407 Proxy Authentication Required
2020-01-30 12:12:22 ERROR 407: Proxy Authentication Required.
It looks as if my username and password were wrong, but I checked them in my browser and my credentials work just fine.
Any idea what this could be due to?
This problem was solved thanks to a suggestion from a user of the Ask Ubuntu community.
Basically, instead of editing the global configuration file wgetrc, I should have created a new .wgetrc file with my proxy configuration in my Cygwin home directory.
In summary:
Step 1 - Create a .wgetrc file:
nano ~/.wgetrc
Step 2 - Record the proxy info in this file:
use_proxy=on
http_proxy=http://my.proxy.ip:my.port
https_proxy=https://my.proxy.ip:my.port
ftp_proxy=http://my.proxy.ip:my.port
proxy_user=username
proxy_password=password
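Two details worth adding (my own precautions, not part of the original suggestion): the file stores the password in plain text, so restrict its permissions, and a quick request confirms the proxy is picked up:
# Keep the plain-text password readable only by your user
chmod 600 ~/.wgetrc
# Sanity check: this should now return 200 OK instead of 407
wget http://www.google.com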
https://drive.google.com/a/uci.edu/uc?export=download&confirm=LJ_a&id=0Bxy-54SBqeekTlE4Qy1mWWpsYTQ
I am attempting to use wget to download the file above. However, it only generates a 1 KB log file. I enter:
wget https://drive.google.com/a/uci.edu/uc?export=download&confirm=a-GD&id=0Bxy-54SBqeekTlE4Qy1mWWpsYTQ
However, this gives me a log file instead of actually downloading the file.
The file is a 13 GB tar archive. The log file looks like this:
--2017-11-14 13:59:32-- https://drive.google.com/a/uci.edu/uc export=download
Resolving drive.google.com (drive.google.com)... [IP ADDRESS GIVEN]
Connecting to drive.google.com (drive.google.com)|[IP ADDRESS GIVEN]... connected.
HTTP request sent, awaiting response... 400 Bad Request
2017-11-14 13:59:33 ERROR 400: Bad Request.
Open the download link in the browser in incognito mode and proceed as you normally would to download that file.
Click the "Download" button. When the download starts in the browser, pause it and copy the download link.
At the command prompt, do
wget copied_link
It should work for any file.
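One detail to watch: the copied link will contain & characters, so quote it. An unquoted & makes the shell cut the URL short and background the command, which is likely why the log above shows a request missing the id parameter. A sketch with a placeholder link:
# Quote the URL so the whole query string reaches wget
wget -O file.tar "https://copied.download.link/uc?export=download&confirm=XXXX&id=XXXX"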
Is there a command-line utility where you can simply set up an HTTP request and have the trace output back to the console?
Being able to specify the method directly would also be a great feature, instead of the method being a side effect.
I can get all the information I need with cURL, but I can't figure out a way to just display it without dumping everything to files.
I'd like the output to show the sent headers, the received headers, and the body of the message.
There must be something out there, but I haven't been able to google for it. Figured I should ask before going off and writing it myself.
I dislike answering my own question, but c-smile's answer led me down the right track:
Short answer: a shell script over cURL:
curl --dump-header - "$@"
The - (dash) meaning stdout is a convention I was unaware of, but it also works for wget and a number of other Unix utilities. It is apparently not part of the shell but built into each utility. The wget equivalent is:
wget --save-headers -qO - "$@"
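As a usage sketch (the file name is mine, not from the answer), drop the one-liner into a script and forward the URL plus any extra cURL flags through:
#!/bin/sh
# trace.sh: print the response headers, then the body, to stdout;
# "$@" forwards all arguments (URL, -X POST, ...) to curl
curl --dump-header - "$@"
Invoked as, e.g., sh trace.sh -X POST http://example.org/ it prints the status line and headers before the body.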
Did you try wget:
http://www.gnu.org/software/wget/manual/wget.html#Wgetrc-Commands ?
Like wget --save-headers ...
To include the HTTP headers in the output (as well as the server response), just use curl’s -i/--include option. For example:
curl -i "http://www.google.com/"
Here’s what man curl says about this setting:
-i/--include
(HTTP) Include the HTTP-header in the output. The HTTP-header
includes things like server-name, date of the document, HTTP-
version and more...
If this option is used twice, the second will again disable
header include.
Try http, e.g.
http -v example.org
Further info at https://httpie.org
It even includes a page to try online:
https://httpie.org/run
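HTTPie also covers the wish to specify the method simply: the method is just the first argument. A sketch (example.org stands in for a real endpoint):
# -v prints the request that was sent as well as the response
http -v PUT example.org/api name=John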
Telnet has long been a well-known (though now largely forgotten, I guess) tool for looking at a web page. The general idea is to telnet to the HTTP port, type an HTTP/1.1 GET command, and then see the served page on the screen.
A good detailed explanation is at http://support.microsoft.com/kb/279466
A Google search yields a whole bunch more.
Use telnet on port 80
For example:
telnet telehack.com 80
GET / HTTP/1.1
host: telehack.com
<CR>
<CR>
<CR> means Enter
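If you want the same exchange non-interactively, you can pipe the request in; a sketch using nc instead of telnet, since telnet expects a terminal:
# CRLF line endings and a blank line terminate the request headers
printf 'GET / HTTP/1.1\r\nHost: telehack.com\r\nConnection: close\r\n\r\n' | nc telehack.com 80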
WordPress is not using direct linking for its download links (it looks like the sort of enterprise software vendor that generates links dynamically to track installations).
Using wget http://wordpress.org/latest.tar.gz does not get the right file name.
I don't want to save it to my desktop and upload it to the server, because I'm on a slow internet connection.
I fail to see what the problem is:
marc#panic:~$ wget http://wordpress.org/latest.tar.gz
--2011-04-01 11:19:27-- http://wordpress.org/latest.tar.gz
Resolving wordpress.org... 72.233.56.139, 72.233.56.138
Connecting to wordpress.org|72.233.56.139|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/x-gzip]
Saving to: `latest.tar.gz'
[ <=> ] 2,365,766 1.09M/s
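For the other concern (no local copy over a slow connection), a sketch that streams and unpacks in one step when run directly on the server:
# Stream the tarball to stdout and extract it without saving a copy
wget -qO- http://wordpress.org/latest.tar.gz | tar xzf -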