How to change relative URL to absolute URL in wget - unix

I am writing a shell script to download and display content from a site, saving that content to my local file system.
I have used the following command in the script to get the content:
/usr/sfw/bin/wget -q -p -nH -np --referer=$INFO_REF --timeout=300 -P $TMPDIR $INFO_URL
where INFO_REF is the page where I need to display the content from INFO_URL.
The problem is that I am able to get the content (images/css) as an HTML page, but in this HTML the links on the images and headlines, which point to a different site, are not working: the paths of those URLs (image links) are being changed to my local file system path.
I tried adding the -k option to wget, and with it those URLs point to the correct location, but now the images do not load, because their paths change from relative to absolute. Without -k the images display properly.
What option can I use so that both the images and the links in the page work properly? Do I need to use two separate wget commands, one for the images and another for the links in the page?

As per the wget manual:
Actually, to download a single page and all its requisites (even if they exist on separate websites), and make sure the lot displays properly locally, this author likes to use a few options in addition to -p:
wget -E -H -k -K -p http://site/document
In order to adjust it to your needs:
/usr/sfw/bin/wget -q -E -H -k -K -p -nH --referer=$INFO_REF --timeout=300 -P $TMPDIR $INFO_URL
I removed the -np because I think it's wrong (maybe a page dependency is in the parent directory).
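For reference, -E adjusts saved filenames so they end in .html, -H allows wget to span hosts (where the images from the other site live), -k converts the links in the downloaded page so they work locally, and -K keeps a backup of each file before it is converted. Here is a minimal sketch of how the adjusted command might sit in your script; the variable values are placeholders, not your real ones:
#!/bin/sh
# Placeholder values - substitute your real referer page, content URL and temp dir
INFO_REF="http://example.com/referring-page"
INFO_URL="http://example.com/info/content.html"
TMPDIR="/tmp/info_cache"
mkdir -p "$TMPDIR"
/usr/sfw/bin/wget -q -E -H -k -K -p -nH --referer="$INFO_REF" --timeout=300 -P "$TMPDIR" "$INFO_URL"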

Related

How to make HTML changes or customize module like poll in BigBlueButton html5-client?

I'm using BigBlueButton (2.3-dev) on an Ubuntu 18.04 server. I installed it using bbb-install (# wget -qO- https://ubuntu.bigbluebutton.org/bbb-install.sh | bash -s -- -v bionic-230-dev -s bbb.example.com -e info#example.com -a -w) and it works perfectly.
Now I want to make some changes in the html5-client (https://doamin/html5client/join?sessionToken=e).
I found the file path - /usr/share/meteor/bundle - and it's served from /usr/share/meteor/bundle/programs/web.browser, but the problem is that this is a build output, so I can't make any changes there: the files are regenerated every time I stop and start or restart.
I want to add one link in the left-side menu (http://prntscr.com/umy63l). How can I do this, and where?
Thanks in advance!
Did you install a dev environment for bbb-html5? You can find the documentation about it here:
https://docs.bigbluebutton.org/2.2/dev.html#developing-the-html5-client

wget, recursively download all jpegs works only on website homepage

I'm using wget to download all jpegs from a website.
I searched a lot and this should be the way:
wget -r -nd -A jpg "http://www.hotelninfea.com"
This should recursively (-r) download the jpeg files (-A jpg) and store them all in a single directory, without recreating the website's directory tree (-nd).
Running this command downloads only the jpegs from the homepage of the website, not the jpegs from the rest of the site.
I know that a jpeg file could have different extensions (jpg, jpeg) and so on, but that is not the case here, and there are no robots.txt restrictions in play either.
If I remove the filter from the previous command, it works as expected
wget -r -nd "http://www.hotelninfea.com"
This is happening on Lubuntu 16.04 64bit, wget 1.17.1
Is this a bug, or am I misunderstanding something?
I suspect that this is happening because the main page you mention contains links to the other pages in the form http://.../something.php, i.e., there is an explicit extension. Then the option -A jpeg has the "side-effect" of removing those pages from the traversal process.
Perhaps a bit dirty workaround in this particular case would be something like this:
wget -r -nd -A jpg,jpeg,php "http://www.hotelninfea.com" && rm -f *.php
i.e., to download only the necessary extra pages and then delete them if wget successfully terminates.
ewcz's answer pointed me in the right direction: the --accept acclist parameter has a dual role; it defines both the rules for saving files and the rules for following links.
Reading the manual more deeply, I found this:
If ‘--adjust-extension’ was specified, the local filename might have ‘.html’ appended to it. If Wget is invoked with ‘-E -A.php’, a filename such as ‘index.php’ will match be accepted, but upon download will be named ‘index.php.html’, which no longer matches, and so the file will be deleted.
So you can do this
wget -r -nd -E -A jpg,php,asp "http://www.hotelninfea.com"
But of course a webmaster could have been using custom extensions
So I think that the most robust solution would be a bash script, something like:
#!/bin/bash
WEBSITE="http://www.hotelninfea.com"
DEST_DIR="."
# Spider the site (nothing is saved), pull the requested URLs out of wget's log
# output, and keep only the ones ending in .jpg or .jpeg
image_urls=$(wget -nd --spider -r "$WEBSITE" 2>&1 | grep '^--' | awk '{ print $3 }' | grep -i '\.\(jpeg\|jpg\)')
for image_url in $image_urls; do
    DESTFILE="$DEST_DIR/$RANDOM.jpg"   # random name to avoid filename collisions
    wget "$image_url" -O "$DESTFILE"
done
--spider makes wget not download the pages, just check that they are there
$RANDOM asks bash for a random number
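If you prefer to keep the original filenames, a variant of the same idea (my suggestion, not part of the original answer) is to write the spidered URLs to a list and let a single wget call fetch them with -i:
WEBSITE="http://www.hotelninfea.com"
# Collect the jpeg URLs first, then fetch them all in one wget run
wget -nd --spider -r "$WEBSITE" 2>&1 | grep '^--' | awk '{ print $3 }' | grep -i '\.\(jpeg\|jpg\)' > jpeg-urls.txt
wget -nd -nc -i jpeg-urls.txt
Note that with -nc a later image that happens to share a filename with an earlier one is simply skipped, which is exactly the collision the $RANDOM names above avoid.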

How to run my script using wget?

I have a URL in my custom module which runs a long script. If I call the URL via wget, it downloads the page content; it doesn't run the script. How can I make it run?
I would have thought that even though it downloaded the page it would still run the script.
To run it without saving the page to a file, use:
wget -O - -q -t 1 http://example.com/path/to/file.php
From memory:
-O with the hyphen redirects the output to standard output so it's not saved to a file.
-q is for quiet
-t is the number of attempts.
You can use man wget to look up any other options.
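A common variant of the same idea, for example when triggering such a script from cron, is to throw the output away entirely instead of printing it (the URL here is just a placeholder):
wget -O /dev/null -q http://example.com/path/to/file.php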

wget won't download actual files

I've looked around for quite a while now and haven't figured out how to sort this out.
I'm trying to download files from a website, but only ever get an 'index.html' returned. This is useless to me, as I need the actual files.
I've been using commands like
wget --no-check-certificate -nc -nH -r -k -p -np --cut-dirs=3 \https://websitename/directory/folder_of_interest/
(I have my username and password set up in the .wgetrc file).
The above command recreates the directory tree recursively, and in the final directory there is just the index.html file.
I could really use a hand here.
In your question you have
wget \https://websitename/directory/folder_of_interest
This originally might have been
wget \
https://websitename/directory/folder_of_interest
which is correct because the backslash escapes the newline, but in your example it is incorrectly escaping the h. Remove the backslash or move the URL to the next line.
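For example, the corrected invocation would be either a single line with no backslash at all, or split like this, with the backslash used strictly as a line continuation:
wget --no-check-certificate -nc -nH -r -k -p -np --cut-dirs=3 \
    https://websitename/directory/folder_of_interest/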

Get list of files via http server using cli (zsh/bash)

Greetings to everyone,
I'm on OS X. I use the terminal a lot, a habit from my old Linux days that I never got over. I wanted to download the files listed on this HTTP server: http://files.ubuntu-gr.org/ubuntistas/pdfs/
I selected them all with the mouse, put them in a txt file, and then gave the following command in the terminal:
for i in `cat ../newfile`; do wget http://files.ubuntu-gr.org/ubuntistas/pdfs/$i;done
I guess it's pretty self-explanatory.
I was wondering if there's any easier, better, cooler way to download these "linked" pdf files using wget or curl.
Regards
You can do this with one line of wget as follows:
wget -r -nd -A pdf -I /ubuntistas/pdfs/ http://files.ubuntu-gr.org/ubuntistas/pdfs/
Here's what each parameter means:
-r makes wget recursively follow links
-nd avoids creating directories so all files are stored in the current directory
-A restricts the files saved by type
-I restricts by directory (this one is important if you don't want to download the whole internet ;)
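If you'd rather use curl, which has no recursive mode of its own, a rough equivalent is to scrape the hrefs out of the directory listing and fetch them one by one. This is only a sketch and assumes the index page uses plain relative href="...pdf" links:
BASE="http://files.ubuntu-gr.org/ubuntistas/pdfs/"
# Pull the .pdf hrefs out of the index page, then download each file next to it
curl -s "$BASE" | grep -o 'href="[^"]*\.pdf"' | sed 's/^href="//; s/"$//' | while read -r f; do
    curl -O "$BASE$f"
done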
