WGET
Section: GNU Wget (1)
Updated: GNU Wget 1.7
NAME
wget - GNU Wget Manual
SYNOPSIS
wget [option]... [URL]...
DESCRIPTION
GNU Wget is a freely available network utility to retrieve files from
the World Wide Web, using HTTP (Hyper Text Transfer Protocol) and
FTP (File Transfer Protocol), the two most widely used Internet
protocols. It has many useful features to make downloading easier, some
of them being:
* Wget is non-interactive, meaning that it can work in the background,
while the user is not logged on. This allows you to start a retrieval
and disconnect from the system, letting Wget finish the work. By
contrast, most Web browsers require the user's constant presence,
which can be a great hindrance when transferring a lot of data.
* Wget is capable of descending recursively through the structure of
HTML documents and FTP directory trees, making a local copy of
the directory hierarchy similar to the one on the remote server. This
feature can be used to mirror archives and home pages, or traverse the
web in search of data, like a WWW robot. In that
spirit, Wget understands the norobots convention.
* File name wildcard matching and recursive mirroring of directories are
available when retrieving via FTP. Wget can read the time-stamp
information given by both HTTP and FTP servers, and store it
locally. Thus Wget can see if the remote file has changed since last
retrieval, and automatically retrieve the new version if it has. This
makes Wget suitable for mirroring of FTP sites, as well as home
pages.
* Wget works exceedingly well on slow or unstable connections,
retrying the document until it is fully retrieved, or until a
user-specified retry count is surpassed. It will try to resume the
download from the point of interruption, using REST with FTP
and Range with HTTP servers that support them.
* By default, Wget supports proxy servers, which can lighten the network
load, speed up retrieval and provide access behind firewalls. However,
if you are behind a firewall that requires that you use a socks style
gateway, you can get the socks library and build Wget with support for
socks. Wget also supports passive FTP downloading as an option.
* Builtin features offer mechanisms to tune which links you wish to follow.
* The retrieval is conveniently traced by printing dots, each dot
representing a fixed amount of data received (1KB by default). These
representations can be customized to your preferences.
* Most of the features are fully configurable, either through command line
options, or via the initialization file .wgetrc. Wget allows you to define global startup files
(/usr/local/etc/wgetrc by default) for site settings.
* Finally, GNU Wget is free software. This means that everyone may use
it, redistribute it and/or modify it under the terms of the GNU General
Public License, as published by the Free Software Foundation.
OPTIONS
Basic Startup Options
-V
--version
Display the version of Wget.
-h
--help
Print a help message describing all of Wget's command-line options.
-b
--background
Go to background immediately after startup. If no output file is
specified via -o, output is redirected to wget-log.
-e command
--execute command
Execute command as if it were a part of .wgetrc. A command thus invoked will be executed
after the commands in .wgetrc, thus taking precedence over
them.
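For example, to override a setting from the global wgetrc (such as the
waitretry default mentioned below) for a single run, with an
illustrative URL:
wget -e "waitretry = 0" http://www.example.com/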
Logging and Input File Options
-o logfile
--output-file=logfile
Log all messages to logfile. The messages are normally reported
to standard error.
-a logfile
--append-output=logfile
Append to logfile. This is the same as -o, only it appends
to logfile instead of overwriting the old log file. If
logfile does not exist, a new file is created.
-d
--debug
Turn on debug output, meaning various information important to the
developers of Wget if it does not work properly. Your system
administrator may have chosen to compile Wget without debug support, in
which case -d will not work. Please note that compiling with
debug support is always safe---Wget compiled with debug support will
not print any debug info unless requested with -d.
-q
--quiet
Turn off Wget's output.
-v
--verbose
Turn on verbose output, with all the available data. The default output
is verbose.
-nv
--non-verbose
Non-verbose output---turn off verbose without being completely quiet
(use -q for that), which means that error messages and basic
information still get printed.
-i file
--input-file=file
Read URLs from file, in which case no URLs need to be on
the command line. If there are URLs both on the command line and
in an input file, those on the command line will be the first ones to
be retrieved. The file need not be an HTML document (but no
harm if it is)---it is enough if the URLs are just listed
sequentially.
However, if you specify --force-html, the document will be
regarded as HTML. In that case you may have problems with
relative links, which you can solve either by adding <base
href="url"> to the documents or by specifying
--base=url on the command line.
-F
--force-html
When input is read from a file, force it to be treated as an HTML
file. This enables you to retrieve relative links from existing
HTML files on your local disk, by adding <base
href="url"> to HTML, or using the --base command-line
option.
-B URL
--base=URL
When used in conjunction with -F, prepends URL to relative
links in the file specified by -i.
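For instance, assuming links.html is a local file containing bare
relative links (the file name and base URL are illustrative):
wget -F -B http://www.example.com/ -i links.html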
Download Options
--bind-address=ADDRESS
When making client TCP/IP connections, bind() to ADDRESS on
the local machine. ADDRESS may be specified as a hostname or IP
address. This option can be useful if your machine is bound to multiple
IPs.
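For example (the address and URL are illustrative):
wget --bind-address=192.168.1.5 http://www.example.com/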
-t number
--tries=number
Set number of retries to number. Specify 0 or inf for
infinite retrying.
-O file
--output-document=file
The documents will not be written to the appropriate files, but all will
be concatenated together and written to file. If file
already exists, it will be overwritten. If the file is -,
the documents will be written to standard output. Including this option
automatically sets the number of tries to 1.
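Writing to standard output is handy for piping a document into another
program; for example (URL illustrative):
wget -q -O - http://www.example.com/index.html | less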
-nc
--no-clobber
If a file is downloaded more than once in the same directory, Wget's
behavior depends on a few options, including -nc. In certain
cases, the local file will be clobbered, or overwritten, upon
repeated download. In other cases it will be preserved.
When running Wget without -N, -nc, or -r,
downloading the same file in the same directory will result in the
original copy of file being preserved and the second copy being
named file.1. If that file is downloaded yet again, the
third copy will be named file.2, and so on. When
-nc is specified, this behavior is suppressed, and Wget will
refuse to download newer copies of file. Therefore,
``no-clobber'' is actually a misnomer in this mode---it's not
clobbering that's prevented (as the numeric suffixes were already
preventing clobbering), but rather the multiple version saving that's
prevented.
When running Wget with -r, but without -N or -nc,
re-downloading a file will result in the new copy simply overwriting the
old. Adding -nc will prevent this behavior, instead causing the
original version to be preserved and any newer copies on the server to
be ignored.
When running Wget with -N, with or without -r, the
decision as to whether or not to download a newer copy of a file depends
on the local and remote timestamp and size of the file. -nc may not be specified at the same
time as -N.
Note that when -nc is specified, files with the suffixes
.html or (yuck) .htm will be loaded from the local disk
and parsed as if they had been retrieved from the Web.
-c
--continue
Continue getting a partially-downloaded file. This is useful when you
want to finish up a download started by a previous instance of Wget, or
by another program. For instance:
wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z
If there is a file named ls-lR.Z in the current directory, Wget
will assume that it is the first portion of the remote file, and will
ask the server to continue the retrieval from an offset equal to the
length of the local file.
Note that you don't need to specify this option if you just want the
current invocation of Wget to retry downloading a file should the
connection be lost midway through. This is the default behavior.
-c only affects resumption of downloads started prior to
this invocation of Wget, and whose local files are still sitting around.
Without -c, the previous example would just download the remote
file to ls-lR.Z.1, leaving the truncated ls-lR.Z file
alone.
Beginning with Wget 1.7, if you use -c on a non-empty file, and
it turns out that the server does not support continued downloading,
Wget will refuse to start the download from scratch, which would
effectively ruin existing contents. If you really want the download to
start from scratch, remove the file.
Also beginning with Wget 1.7, if you use -c on a file which is of
equal size as the one on the server, Wget will refuse to download the
file and print an explanatory message. The same happens when the file
is smaller on the server than locally (presumably because it was changed
on the server since your last download attempt)---because ``continuing''
is not meaningful, no download occurs.
On the other side of the coin, while using -c, any file that's
bigger on the server than locally will be considered an incomplete
download and only (length(remote) - length(local)) bytes will be
downloaded and tacked onto the end of the local file. This behavior can
be desirable in certain cases---for instance, you can use wget -c
to download just the new portion that's been appended to a data
collection or log file.
However, if the file is bigger on the server because it's been
changed, as opposed to just appended to, you'll end up
with a garbled file. Wget has no way of verifying that the local file
is really a valid prefix of the remote file. You need to be especially
careful of this when using -c in conjunction with -r,
since every file will be considered as an ``incomplete download'' candidate.
Another instance where you'll get a garbled file if you try to use
-c is if you have a lame HTTP proxy that inserts a
``transfer interrupted'' string into the local file. In the future a
``rollback'' option may be added to deal with this case.
Note that -c only works with FTP servers and with HTTP
servers that support the Range header.
--dot-style=style
Set the retrieval style to style. Wget traces the retrieval of
each document by printing dots on the screen, each dot representing a
fixed amount of retrieved data. The dots may be grouped into
clusters, to make counting easier. This option allows you to
choose one of the pre-defined styles, determining the number of bytes
represented by a dot, the number of dots in a cluster, and the number of
dots on the line.
With the default style each dot represents 1K, there are ten dots
in a cluster and 50 dots in a line. The binary style has a more
``computer''-like orientation---8K dots, 16-dots clusters and 48 dots
per line (so each line represents 384K). The mega style is
suitable for downloading very large files---each dot represents 64K
retrieved, there are eight dots in a cluster, and 48 dots on each line
(so each line contains 3M). The micro style is exactly the
reverse; it is suitable for downloading small files, with 128-byte dots,
8 dots per cluster, and 48 dots (6K) per line.
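For instance, to follow a large download with the coarser mega dots
(URL illustrative):
wget --dot-style=mega ftp://ftp.example.com/big-archive.tar.gz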
-N
--timestamping
Turn on time-stamping.
-S
--server-response
Print the headers sent by HTTP servers and responses sent by
FTP servers.
--spider
When invoked with this option, Wget will behave as a Web spider,
which means that it will not download the pages, just check that they
are there. You can use it to check your bookmarks, e.g. with:
wget --spider --force-html -i bookmarks.html
This feature needs much more work for Wget to get close to the
functionality of real WWW spiders.
-T seconds
--timeout=seconds
Set the read timeout to seconds seconds. Whenever a network read
is issued, the file descriptor is checked for a timeout, which could
otherwise leave a pending connection (uninterrupted read). The default
timeout is 900 seconds (fifteen minutes). Setting timeout to 0 will
disable checking for timeouts.
Please do not lower the default timeout value with this option unless
you know what you are doing.
-w seconds
--wait=seconds
Wait the specified number of seconds between the retrievals. Use of
this option is recommended, as it lightens the server load by making the
requests less frequent. Instead of in seconds, the time can be
specified in minutes using the m suffix, in hours using h
suffix, or in days using d suffix.
Specifying a large value for this option is useful if the network or the
destination host is down, so that Wget can wait long enough to
reasonably expect the network error to be fixed before the retry.
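For example, a polite recursive retrieval that pauses 30 seconds
between requests (URL illustrative):
wget -r -w 30 http://www.example.com/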
--waitretry=seconds
If you don't want Wget to wait between every retrieval, but only
between retries of failed downloads, you can use this option. Wget will
use linear backoff, waiting 1 second after the first failure on a
given file, then waiting 2 seconds after the second failure on that
file, up to the maximum number of seconds you specify. Therefore,
a value of 10 will actually make Wget wait up to (1 + 2 + ... + 10) = 55
seconds per file.
Note that this option is turned on by default in the global
wgetrc file.
-Y on/off
--proxy=on/off
Turn proxy support on or off. The proxy is on by default if the
appropriate environment variable is defined.
-Q quota
--quota=quota
Specify download quota for automatic retrievals. The value can be
specified in bytes (default), kilobytes (with k suffix), or
megabytes (with m suffix).
Note that quota will never affect downloading a single file. So if you
specify wget -Q10k ftp://wuarchive.wustl.edu/ls-lR.gz, all of the
ls-lR.gz will be downloaded. The same goes even when several
URLs are specified on the command-line. However, quota is
respected when retrieving either recursively, or from an input file.
Thus you may safely type wget -Q2m -i sites---download will be
aborted when the quota is exceeded.
Setting quota to 0 or to inf unlimits the download quota.
Directory Options
-nd
--no-directories
Do not create a hierarchy of directories when retrieving recursively.
With this option turned on, all files will get saved to the current
directory, without clobbering (if a name shows up more than once, the
filenames will get extensions .n).
-x
--force-directories
The opposite of -nd---create a hierarchy of directories, even if
one would not have been created otherwise. E.g. wget -x
http://fly.srk.fer.hr/robots.txt will save the downloaded file to
fly.srk.fer.hr/robots.txt.
-nH
--no-host-directories
Disable generation of host-prefixed directories. By default, invoking
Wget with -r http://fly.srk.fer.hr/ will create a structure of
directories beginning with fly.srk.fer.hr/. This option disables
such behavior.
--cut-dirs=number
Ignore number directory components. This is useful for getting a
fine-grained control over the directory where recursive retrieval will
be saved.
Take, for example, the directory at
ftp://ftp.xemacs.org/pub/xemacs/. If you retrieve it with
-r, it will be saved locally under
ftp.xemacs.org/pub/xemacs/. While the -nH option can
remove the ftp.xemacs.org/ part, you are still stuck with
pub/xemacs. This is where --cut-dirs comes in handy; it
makes Wget not ``see'' number remote directory components. Here
are several examples of how the --cut-dirs option works.
No options -> ftp.xemacs.org/pub/xemacs/
-nH -> pub/xemacs/
-nH --cut-dirs=1 -> xemacs/
-nH --cut-dirs=2 -> .
--cut-dirs=1 -> ftp.xemacs.org/xemacs/
...
If you just want to get rid of the directory structure, this option is
similar to a combination of -nd and -P. However, unlike
-nd, --cut-dirs does not lose subdirectories---for
instance, with -nH --cut-dirs=1, a beta/ subdirectory will
be placed to xemacs/beta, as one would expect.
-P prefix
--directory-prefix=prefix
Set directory prefix to prefix. The directory prefix is the
directory where all other files and subdirectories will be saved to,
i.e. the top of the retrieval tree. The default is . (the
current directory).
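For example, to place the entire retrieval tree under /tmp/mirror
instead of the current directory (paths and URL illustrative):
wget -r -P /tmp/mirror http://www.example.com/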
HTTP Options
-E
--html-extension
If a file of type text/html is downloaded and the URL does not
end with the regexp \.[Hh][Tt][Mm][Ll]?, this option will cause
the suffix .html to be appended to the local filename. This is
useful, for instance, when you're mirroring a remote site that uses
.asp pages, but you want the mirrored pages to be viewable on
your stock Apache server. Another good use for this is when you're
downloading the output of CGIs. A URL like
http://site.com/article.cgi?25 will be saved as
article.cgi?25.html.
Note that filenames changed in this way will be re-downloaded every time
you re-mirror a site, because Wget can't tell that the local
X.html file corresponds to remote URL X (since
it doesn't yet know that the URL produces output of type
text/html). To prevent this re-downloading, you must use
-k and -K so that the original version of the file will be
saved as X.orig.
--http-user=user
--http-passwd=password
Specify the username user and password password on an
HTTP server. According to the type of the challenge, Wget will
encode them using either the basic (insecure) or the
digest authentication scheme.
Another way to specify username and password is in the URL itself.
For more information about security issues with Wget, see the GNU
Info entry for wget.
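Both forms are shown below, with made-up credentials and host:
wget --http-user=harry --http-passwd=hirsch http://www.example.com/secret/index.html
wget http://harry:hirsch@www.example.com/secret/index.html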
-C on/off
--cache=on/off
When set to off, disable server-side cache. In this case, Wget will
send the remote server an appropriate directive (Pragma:
no-cache) to get the file from the remote service, rather than
returning the cached version. This is especially useful for retrieving
and flushing out-of-date documents on proxy servers.
Caching is allowed by default.
--cookies=on/off
When set to off, disable the use of cookies. Cookies are a mechanism
for maintaining server-side state. The server sends the client a cookie
using the Set-Cookie header, and the client responds with the
same cookie upon further requests. Since cookies allow the server
owners to keep track of visitors and for sites to exchange this
information, some consider them a breach of privacy. The default is to
use cookies; however, storing cookies is not on by default.
--load-cookies file
Load cookies from file before the first HTTP retrieval. The
format of file is the one used by Netscape and Mozilla, at least
their Unix versions.
--save-cookies file
Save cookies from file at the end of session. Cookies whose
expiry time is not specified, or those that have already expired, are
not saved.
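A typical two-step session might look like this (the file names and
URLs are illustrative):
wget --save-cookies cookies.txt http://www.example.com/login.cgi
wget --load-cookies cookies.txt http://www.example.com/members/data.html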
--ignore-length
Unfortunately, some HTTP servers (CGI programs, to be more
precise) send out bogus Content-Length headers, which makes Wget
go wild, as it thinks not all the document was retrieved. You can spot
this syndrome if Wget retries getting the same document again and again,
each time claiming that the (otherwise normal) connection has closed on
the very same byte.
With this option, Wget will ignore the Content-Length header---as
if it never existed.
--header=additional-header
Define an additional-header to be passed to the HTTP servers.
Headers must contain a : preceded by one or more non-blank
characters, and must not contain newlines.
You may define more than one additional header by specifying
--header more than once.
wget --header='Accept-Charset: iso-8859-2' \
--header='Accept-Language: hr' \
http://fly.srk.fer.hr/
Specification of an empty string as the header value will clear all
previous user-defined headers.
--proxy-user=user
--proxy-passwd=password
Specify the username user and password password for
authentication on a proxy server. Wget will encode them using the
basic authentication scheme.
--referer=url
Include `Referer: url' header in HTTP request. Useful for
retrieving documents with server-side processing that assume they are
always being retrieved by interactive web browsers and only come out
properly when Referer is set to one of the pages that point to them.
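For example (URLs illustrative):
wget --referer=http://www.example.com/index.html http://www.example.com/picture.jpg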
-s
--save-headers
Save the headers sent by the HTTP server to the file, preceding the
actual contents, with an empty line as the separator.
-U agent-string
--user-agent=agent-string
Identify as agent-string to the HTTP server.
The HTTP protocol allows the clients to identify themselves using a
User-Agent header field. This enables distinguishing the
WWW software, usually for statistical purposes or for tracing of
protocol violations. Wget normally identifies as
Wget/version, version being the current version
number of Wget.
However, some sites have been known to impose the policy of tailoring
the output according to the User-Agent-supplied information.
While conceptually this is not such a bad idea, it has been abused by
servers denying information to clients other than Mozilla or
Microsoft Internet Explorer. This option allows you to change
the User-Agent line issued by Wget. Use of this option is
discouraged, unless you really know what you are doing.
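For example, to masquerade as a different client (the agent string and
URL are illustrative):
wget -U "Mozilla/4.0 (compatible)" http://www.example.com/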
FTP Options
-nr
--dont-remove-listing
Don't remove the temporary .listing files generated by FTP
retrievals. Normally, these files contain the raw directory listings
received from FTP servers. Not removing them can be useful for
debugging purposes, or when you want to be able to easily check on the
contents of remote server directories (e.g. to verify that a mirror
you're running is complete).
Note that even though Wget writes to a known filename for this file,
this is not a security hole in the scenario of a user making
.listing a symbolic link to /etc/passwd or something and
asking root to run Wget in his or her directory. Depending on
the options used, either Wget will refuse to write to .listing,
making the globbing/recursion/time-stamping operation fail, or the
symbolic link will be deleted and replaced with the actual
.listing file, or the listing will be written to a
.listing.number file.
Even though this situation isn't a problem, root should
never run Wget in a non-trusted user's directory. A user could do
something as simple as linking index.html to /etc/passwd
and asking root to run Wget with -N or -r so the file
will be overwritten.
-g on/off
--glob=on/off
Turn FTP globbing on or off. Globbing means you may use the
shell-like special characters (wildcards), like *,
?, [ and ] to retrieve more than one file from the
same directory at once, like:
wget ftp://gnjilux.srk.fer.hr/*.msg
By default, globbing will be turned on if the URL contains a
globbing character. This option may be used to turn globbing on or off
permanently.
You may have to quote the URL to protect it from being expanded by
your shell. Globbing makes Wget look for a directory listing, which is
system-specific. This is why it currently works only with Unix FTP
servers (and the ones emulating Unix ls output).
--passive-ftp
Use the passive FTP retrieval scheme, in which the client
initiates the data connection. This is sometimes required for FTP
to work behind firewalls.
--retr-symlinks
Usually, when retrieving FTP directories recursively and a symbolic
link is encountered, the linked-to file is not downloaded. Instead, a
matching symbolic link is created on the local filesystem. The
pointed-to file will not be downloaded unless this recursive retrieval
would have encountered it separately and downloaded it anyway.
When --retr-symlinks is specified, however, symbolic links are
traversed and the pointed-to files are retrieved. At this time, this
option does not cause Wget to traverse symlinks to directories and
recurse through them, but in the future it should be enhanced to do
this.
Note that when retrieving a file (not a directory) because it was
specified on the commandline, rather than because it was recursed to,
this option has no effect. Symbolic links are always traversed in this
case.
Recursive Retrieval Options
-r
--recursive
Turn on recursive retrieving.
-l depth
--level=depth
Specify recursion maximum depth level depth. The default maximum depth is 5.
--delete-after
This option tells Wget to delete every single file it downloads,
after having done so. It is useful for pre-fetching popular
pages through a proxy, e.g.:
wget -r -nd --delete-after http://whatever.com/~popular/page/
The -r option is to retrieve recursively, and -nd to not
create directories.
Note that --delete-after deletes files on the local machine. It
does not issue the DELE command to remote FTP sites, for
instance. Also note that when --delete-after is specified,
--convert-links is ignored, so .orig files are simply not
created in the first place.
-k
--convert-links
After the download is complete, convert the links in the document to
make them suitable for local viewing. This affects not only the visible
hyperlinks, but any part of the document that links to external content,
such as embedded images, links to style sheets, hyperlinks to non-HTML
content, etc.
Each link will be changed in one of two ways:
* The links to files that have been downloaded by Wget will be changed to
refer to the file they point to as a relative link.
Example: if the downloaded file /foo/doc.html links to
/bar/img.gif, also downloaded, then the link in doc.html
will be modified to point to ../bar/img.gif. This kind of
transformation works reliably for arbitrary combinations of directories.
* The links to files that have not been downloaded by Wget will be changed
to include host name and absolute path of the location they point to.
Example: if the downloaded file /foo/doc.html links to
/bar/img.gif (or to ../bar/img.gif), then the link in
doc.html will be modified to point to
http://hostname/bar/img.gif.
Because of this, local browsing works reliably: if a linked file was
downloaded, the link will refer to its local name; if it was not
downloaded, the link will refer to its full Internet address rather than
presenting a broken link. The fact that the former links are converted
to relative links ensures that you can move the downloaded hierarchy to
another directory.
Note that only at the end of the download can Wget know which links have
been downloaded. Because of that, the work done by -k will be
performed at the end of all the downloads.
-K
--backup-converted
When converting a file, back up the original version with a .orig
suffix. Affects the behavior of -N.
-m
--mirror
Turn on options suitable for mirroring. This option turns on recursion
and time-stamping, sets infinite recursion depth and keeps FTP
directory listings. It is currently equivalent to
-r -N -l inf -nr.
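Thus the following two invocations behave identically (URL
illustrative):
wget -m http://www.example.com/
wget -r -N -l inf -nr http://www.example.com/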
-p
--page-requisites
This option causes Wget to download all the files that are necessary to
properly display a given HTML page. This includes such things as
inlined images, sounds, and referenced stylesheets.
Ordinarily, when downloading a single HTML page, any requisite documents
that may be needed to display it properly are not downloaded. Using
-r together with -l can help, but since Wget does not
ordinarily distinguish between external and inlined documents, one is
generally left with ``leaf documents'' that are missing their
requisites.
For instance, say document 1.html contains an <IMG> tag
referencing 1.gif and an <A> tag pointing to external
document 2.html. Say that 2.html is similar but that its
image is 2.gif and it links to 3.html. Say this
continues up to some arbitrarily high number.
If one executes the command:
wget -r -l 2 http://<site>/1.html
then 1.html, 1.gif, 2.html, 2.gif, and
3.html will be downloaded. As you can see, 3.html is
without its requisite 3.gif because Wget is simply counting the
number of hops (up to 2) away from 1.html in order to determine
where to stop the recursion. However, with this command:
wget -r -l 2 -p http://<site>/1.html
all the above files and 3.html's requisite 3.gif
will be downloaded. Similarly,
wget -r -l 1 -p http://<site>/1.html
will cause 1.html, 1.gif, 2.html, and 2.gif
to be downloaded. One might think that:
wget -r -l 0 -p http://<site>/1.html
would download just 1.html and 1.gif, but unfortunately
this is not the case, because -l 0 is equivalent to
-l inf---that is, infinite recursion. To download a single HTML
page (or a handful of them, all specified on the commandline or in a
-i URL input file) and its (or their) requisites, simply leave off
-r and -l:
wget -p http://<site>/1.html
Note that Wget will behave as if -r had been specified, but only
that single page and its requisites will be downloaded. Links from that
page to external documents will not be followed. Actually, to download
a single page and all its requisites (even if they exist on separate
websites), and make sure the lot displays properly locally, this author
likes to use a few options in addition to -p:
wget -E -H -k -K -nh -p http://<site>/<document>
In one case you'll need to add a couple more options. If document
is a <FRAMESET> page, the ``one more hop'' that -p gives you
won't be enough---you'll get the <FRAME> pages that are
referenced, but you won't get their requisites. Therefore, in
this case you'll need to add -r -l1 to the commandline. The
-r -l1 will recurse from the <FRAMESET> page to the
<FRAME> pages, and the -p will get their requisites. If
you're already using a recursion level of 1 or more, you'll need to up
it by one. In the future, -p may be made smarter so that it'll
do ``two more hops'' in the case of a <FRAMESET> page.
To finish off this topic, it's worth knowing that Wget's idea of an
external document link is any URL specified in an <A> tag, an
<AREA> tag, or a <LINK> tag other than <LINK
REL="stylesheet">.
Recursive Accept/Reject Options
-A acclist --accept acclist
-R rejlist --reject rejlist
Specify comma-separated lists of file name suffixes or patterns to
accept or reject.
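For example, to retrieve only GIF and JPEG images during a recursive
run (URL illustrative):
wget -r -A gif,jpg http://www.example.com/pictures/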
-D domain-list
--domains=domain-list
Set domains to be accepted and DNS looked-up, where
domain-list is a comma-separated list. Note that it does
not turn on -H. This option speeds things up, even if
only one host is spanned.
--exclude-domains domain-list
Exclude the domains given in a comma-separated domain-list from
DNS-lookup.
--follow-ftp
Follow FTP links from HTML documents. Without this option,
Wget will ignore all the FTP links.
--follow-tags=list
Wget has an internal table of HTML tag / attribute pairs that it
considers when looking for linked documents during a recursive
retrieval. If a user wants only a subset of those tags to be
considered, however, he or she should specify such tags in a
comma-separated list with this option.
-G list
--ignore-tags=list
This is the opposite of the --follow-tags option. To skip
certain HTML tags when recursively looking for documents to download,
specify them in a comma-separated list.
In the past, the -G option was the best bet for downloading a
single page and its requisites, using a commandline like:
wget -Ga,area -H -k -K -nh -r http://<site>/<document>
However, the author of this option came across a page with tags like
<LINK REL="home" HREF="/"> and came to the realization that
-G was not enough. One can't just tell Wget to ignore
<LINK>, because then stylesheets will not be downloaded. Now the
best bet for downloading a single page and its requisites is the
dedicated --page-requisites option.
-H
--span-hosts
Enable spanning across hosts when doing recursive retrieving.
-L
--relative
Follow relative links only. Useful for retrieving a specific home page
without any distractions, not even those from the same hosts.
-I list
--include-directories=list
Specify a comma-separated list of directories you wish to follow when
downloading. Elements of list may contain wildcards.
-X list
--exclude-directories=list
Specify a comma-separated list of directories you wish to exclude from
download. Elements of list may contain wildcards.
-nh
--no-host-lookup
Disable the time-consuming DNS lookup of almost all hosts.
-np
--no-parent
Do not ever ascend to the parent directory when retrieving recursively.
This is a useful option, since it guarantees that only the files
below a certain hierarchy will be downloaded.
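For example, to mirror one user's pages without wandering up to the
rest of the site (URL illustrative):
wget -r -np http://www.example.com/~user/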
FILES
/usr/local/etc/wgetrc
Default location of the global startup file.
.wgetrc
User startup file.
BUGS
You are welcome to send bug reports about GNU Wget to
<bug-wget@gnu.org>.
Before actually submitting a bug report, please try to follow a few
simple guidelines.
1.
Please try to ascertain that the behaviour you see really is a bug. If
Wget crashes, it's a bug. If Wget does not behave as documented,
it's a bug. If things work strangely, but you are not sure about the way
they are supposed to work, it might well be a bug.
2.
Try to repeat the bug in as simple circumstances as possible. E.g. if
Wget crashes on wget -rLl0 -t5 -Y0 http://yoyodyne.com -o
/tmp/log, you should try to see if it will crash with a simpler set of
options.
Also, while I will probably be interested to know the contents of your
.wgetrc file, just dumping it into the debug message is probably
a bad idea. Instead, you should first try to see if the bug repeats
with .wgetrc moved out of the way. Only if it turns out that
.wgetrc settings affect the bug, should you mail me the relevant
parts of the file.
3.
Please start Wget with -d option and send the log (or the
relevant parts of it). If Wget was compiled without debug support,
recompile it. It is much easier to trace bugs with debug support
on.
4.
If Wget has crashed, try to run it in a debugger, e.g. gdb `which
wget` core and type where to get the backtrace.
5.
Find where the bug is, fix it and send me the patches. :-)
SEE ALSO
GNU Info entry for
wget.
AUTHOR
Originally written by Hrvoje Niksic <hniksic@arsdigita.com>.
COPYRIGHT
Copyright (c) 1996, 1997, 1998, 2000, 2001 Free Software
Foundation, Inc.
Permission is granted to make and distribute verbatim copies of
this manual provided the copyright notice and this permission notice
are preserved on all copies.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.1 or
any later version published by the Free Software Foundation; with the
Invariant Sections being ``GNU General Public License'' and ``GNU Free
Documentation License'', with no Front-Cover Texts, and with no
Back-Cover Texts. A copy of the license is included in the section
entitled ``GNU Free Documentation License''.