and the result will be saved in a file "index.html" in the current directory. You can open and view the file with a web browser.
If there are connection problems, wget will try up to 20 times to reconnect. You can use the -t option to change the maximum number of attempts. For example, with
wget -t 10 linux.about.com
it will try only up to 10 times.
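If the connection is unreliable, you may also want wget to pause between attempts. One way to sketch this is with the --waitretry option, which backs off gradually (1 second after the first failure, 2 after the second, and so on) up to the given maximum:

```shell
# Retry up to 10 times, waiting up to 5 seconds between
# retries of a failed download.
wget -t 10 --waitretry=5 linux.about.com
```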
Instead of having the progress messages displayed on standard output, you can save them to a log file with the -o option:
wget -o logfile linux.about.com
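Note that -o overwrites the log file on each run. If you want one log to accumulate across several downloads, wget also offers -a (append):

```shell
# Append messages to logfile instead of overwriting it,
# so output from successive runs accumulates in one file.
wget -a logfile linux.about.com
```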
To run it in the background, you would put an ampersand at the end as usual:
wget -o logfile linux.about.com &
To download a copy of a complete web site, up to five levels deep, you use the -r option (for recursive):
wget -r linux.about.com -o logfile
To enable offline viewing of this web site, you need to convert the links to point to the downloaded files. This is done as follows:
wget --convert-links -r linux.about.com -o logfile
Finally, to make sure all files linked to in the documents are also downloaded, enabling complete offline viewing, you can use the -p option:
wget -p --convert-links -r linux.about.com -o logfile
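As a variation on the command above, you can change the recursion depth with -l and keep wget from climbing above the starting directory with -np (--no-parent). The /linux/ path here is only a hypothetical starting point:

```shell
# Hypothetical starting URL; -l 3 limits recursion to three levels,
# -np stops wget from following links above the start directory.
wget -r -l 3 -np -p --convert-links -o logfile linux.about.com/linux/
```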
For a complete description of all the options of the wget command, see the wget man page, or type
wget --help
at the command line.