WordPress doesn't seem to use direct links for its downloads (it looks like the kind of enterprise software vendor that generates links dynamically to track installations).
Using wget http://wordpress.org/latest.tar.gz does not get the right file name.
I don't want to save the file to my desktop and upload it to the server, because I'm on a slow internet connection.
I fail to see what the problem is:
marc@panic:~$ wget http://wordpress.org/latest.tar.gz
--2011-04-01 11:19:27-- http://wordpress.org/latest.tar.gz
Resolving wordpress.org... 72.233.56.139, 72.233.56.138
Connecting to wordpress.org|72.233.56.139|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/x-gzip]
Saving to: `latest.tar.gz'
[ <=> ] 2,365,766 1.09M/s
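If the complaint is about the saved file name rather than the download itself, wget's -O option lets you pick the output name explicitly. A minimal example (the local file name here is arbitrary):
wget -O wordpress-latest.tar.gz http://wordpress.org/latest.tar.gz
tar -xzf wordpress-latest.tar.gz    # unpack on the server afterwards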
I am using Sauce Labs to test my application in a Mac/Chrome configuration, since I am on a Windows machine.
As per the Sauce Labs documentation, I downloaded Sauce Connect Proxy, extracted it, went to the bin folder on the command line, and executed the command below:
bin/sc -u <sauce_username> -k <sauce_accesskey> -x <sauce_data_center> -i <tunnel_id>
I got the message "Sauce Connect is up, you may start your tests." on the command line, and one tunnel showed as active under the Tunnels tab in my Sauce Labs account.
I started the session by going to Live --> Cross Browser, selected the tunnel, the localhost application URL, the browser (Chrome 90) and Mac Sierra, and clicked Start Session.
The application opened, but it didn't show the features that are available on localhost.
Can anyone please help me with this? Is there anything wrong in how I am setting up the proxy connection? The same application works fine if I open the URL directly on my Windows machine with Chrome.
I found the answer in the Sauce Labs documentation itself. The problem I was having is related to SSL, and here is the solution.
If you don't want any domains to be SSL re-encrypted, you can specify all with the argument (i.e., -B all or --no-ssl-bump-domains all)
Now, running the command below to start the tunnel resolves the issue:
bin/sc -u <sauce_username> -k <sauce_accesskey> -x <sauce_data_center> -i <tunnel_id> -B all
My institute recently installed a new proxy server for our network. I am trying to configure my Cygwin environment to be able to run wget and download data from a remote repository.
Browsing the internet, I found two different solutions to my problem, but neither of them seems to work in my case.
The first one I tried was to follow these instructions, so in Cygwin:
cd /cygdrive/c/cygwin64/etc/
nano wgetrc
at the end of the file, I added:
use_proxy = on
http_proxy=http://username:password@my.proxy.ip:my.port/
https_proxy=https://username:password@my.proxy.ip:my.port/
ftp_proxy=http://username:password@my.proxy.ip:my.port/
(of course, using my user and password)
The second approach was the one suggested in this SO post, so in my Cygwin environment:
export http_proxy=http://username:password@my.proxy.ip:my.port/
export https_proxy=https://username:password@my.proxy.ip:my.port/
export ftp_proxy=http://username:password@my.proxy.ip:my.port/
In both cases, when I try to test wget, I get the following:
$ wget http://www.google.com
--2020-01-30 12:12:22-- http://www.google.com/
Resolving my.proxy.ip (my.proxy.ip)... 10.1XX.XXX.XX
Connecting to my.proxy.ip (my.proxy.ip)|10.1XX.XXX.XX|:8XXX... connected.
Proxy request sent, awaiting response... 407 Proxy Authentication Required
2020-01-30 12:12:22 ERROR 407: Proxy Authentication Required.
It looks as if my username and password are not OK, but I checked them in my browser and my credentials work just fine.
Any idea on what this could be due to?
This problem was solved thanks to the suggestion of a user of the AskUbuntu community.
Basically, instead of editing the global configuration file wgetrc, I should have created a new .wgetrc with my proxy configuration in my Cygwin home directory.
In summary:
Step 1 - Create a .wgetrc file:
nano ~/.wgetrc
Step 2 - Record the proxy info in this file:
use_proxy=on
http_proxy=http://my.proxy.ip:my.port
https_proxy=https://my.proxy.ip:my.port
ftp_proxy=http://my.proxy.ip:my.port
proxy_user=username
proxy_password=password
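Since this file now stores a password in plain text, it is probably worth restricting its permissions so that only your own user can read it:
chmod 600 ~/.wgetrc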
https://drive.google.com/a/uci.edu/uc?export=download&confirm=LJ_a&id=0Bxy-54SBqeekTlE4Qy1mWWpsYTQ
I am attempting to use wget to download the file above. However, it only generates a 1 KB log file. I enter:
wget https://drive.google.com/a/uci.edu/uc?export=download&confirm=a-GD&id=0Bxy-54SBqeekTlE4Qy1mWWpsYTQ
However, this gives me a log file instead of actually downloading the file.
The file itself is a 13 GB tar archive. The log file looks like this:
--2017-11-14 13:59:32-- https://drive.google.com/a/uci.edu/uc export=download
Resolving drive.google.com (drive.google.com)... [IP ADDRESS GIVEN]
Connecting to drive.google.com (drive.google.com)|[IP ADDRESS GIVEN]... connected.
HTTP request sent, awaiting response... 400 Bad Request
2017-11-14 13:59:33 ERROR 400: Bad Request.
Open the download link in the browser in incognito mode. Proceed as you normally would to download that file.
Click the "Download" button. When the download starts in the browser, pause it and copy the download link.
At the command prompt, run:
wget "copied_link"
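Note that Google Drive links usually contain & characters, which the shell treats as command separators, so the copied link has to be quoted. A hypothetical example (the real URL is whatever you copied from the browser; the confirm token and FILE_ID are placeholders):
wget -O file.tar "https://drive.google.com/uc?export=download&confirm=XXXX&id=FILE_ID"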
It should work for any file. Or you can directly download your file from here.
How can I stop the http-server that was installed with the 'npm install http-server' command in the terminal (console) and then launched?
Simply press Ctrl+C. If you read the output after you launch it, you should see:
Starting up http-server, serving xxx
Available on:
http://<some ip>:<some port>
Hit CTRL-C to stop the server
It's built on Node, so if it gets stuck, kill the Node process to stop it. You can list all the Node process IDs, see which one belongs to your server, and kill that one, as sketched below.
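A minimal sketch of that on a Unix-like shell, assuming the server shows up as an http-server/node process:
ps aux | grep http-server    # find the PID of the running server
kill <PID>                   # ask that process to terminate
kill -9 <PID>                # force-kill it if it ignores the first signal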
I have a service that redirects users to temporary pre-signed AWS downloads. These are large files, often 5-10 GB. To prevent download sharing, we give the links a relatively short (30-second) valid lifespan.
Everything is working except that on slow internet connections, downloads tend to fail or get interrupted. wget has a feature that automatically retries the download. However, instead of retrying the original URL (e.g. http://service.com/download/file.zip), wget retries the redirected pre-signed URL (e.g. http://service.s3.amazonaws.com/file.zip?AWSAccessKeyId=XXXX&Signature=XXXX&Expires=1468000000).
Since these are large files, and the pre-signed lifespan is so short, that temporary url is no longer valid and the user gets a 403 Forbidden result.
Originally, when we noticed the problem, we were using 302 Found temporary redirects. A little research seemed to indicate we SHOULD have been using 307 Temporary Redirect. However, that didn't resolve the problem with wget. For grins and giggles, we tried 303 See Other, but that didn't work either.
Does anyone have any idea how to get wget to retry the original URL instead of the redirected URL?
Below is an example wget log:
--2016-07-06 10:29:51-- https://service.com/download/file.zip
Connecting to service.com (service.com)|10.0.0.1|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://service.s3.amazonaws.com/file.zip?AWSAccessKeyId=XXXX&Signature=XXXX&Expires=1468000000 [following]
--2016-07-06 10:29:52-- https://service.s3.amazonaws.com/file.zip?AWSAccessKeyId=XXXX&Signature=XXXX&Expires=1468000000
Resolving service.s3.amazonaws.com (service.s3.amazonaws.com)... 54.231.12.129
Connecting to service.s3.amazonaws.com (service.s3.amazonaws.com)|54.231.12.129|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2070666907 (1.9G) [application/zip]
Saving to: ‘file.zip’
file.zip 53%[=========> ] 1.03G --.-KB/s in 18m 7s
2016-07-06 10:47:59 (995 KB/s) - Read error at byte 1107205784/2070666907 (The specified session has been invalidated for some reason.). Retrying.
--2016-07-06 10:48:00-- (try: 2) https://service.s3.amazonaws.com/file.zip?AWSAccessKeyId=XXXX&Signature=XXXX&Expires=1468000000
Connecting to service.s3.amazonaws.com (service.s3.amazonaws.com)|54.231.12.129|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2016-07-06 10:48:01 ERROR 403: Forbidden.
I had a similar issue and a similar answer to @panzerito's, but I broke it up into a script I called loopdone:
#!/bin/bash
# re-run the command passed as the first argument until it exits with status 0
until $1; do sleep 1; echo restarting; done
Then I can just run loopdone "wget -c http://my.url/" (including the quotes) to force it to run again and again (and resume, unless the server does not support it) until the exit code is 0, meaning no error.
Bash code (seed a non-zero exit status with false, then loop until wget succeeds):
false; until [ "$?" -eq 0 ]; do wget https://example.com/download/file.zip -c; done
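Along the same lines, a sketch that loops on the original URL instead of the pre-signed one (using the example URL and file name from the question): each retry follows a fresh redirect and therefore gets a newly signed link, and since wget by default names the local file after the original URL rather than the redirect target, -c keeps resuming the same file.zip.
until wget -c "https://service.com/download/file.zip"; do
  echo "download interrupted, requesting a fresh pre-signed URL..."
  sleep 1
done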