GNU Wget 1.16 built on linux-gnueabihf
on a Raspberry Pi 3
How do I force wget to fetch the entire site (following links, like a crawler), rather than just the first index page?
I have tried:
wget -r http://aol.com
wget -r -l0 http://aol.com
wget -r -m -l0 http://aol.com
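One guess I had, purely speculative and not something I have let run to completion: since http://aol.com redirects to https://www.aol.com, maybe recursion has to be allowed to span hosts (and ignore robots.txt), along the lines of:
wget -r -l0 -e robots=off -H -D aol.com,www.aol.com http://aol.com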
Every one of these commands finishes the same way:
--2017-11-29 08:05:42-- http://aol.com/
Resolving aol.com (aol.com)... 149.174.149.73, 64.12.249.135, 149.174.110.105, ...
Connecting to aol.com (aol.com)|149.174.149.73|:80... connected.
HTTP request sent, awaiting response... 301 Moved Permanently
Location: https://www.aol.com/ [following]
--2017-11-29 08:05:42-- https://www.aol.com/
Resolving www.aol.com (www.aol.com)... 34.233.220.13, 34.235.7.32, 52.6.64.98, ...
Connecting to www.aol.com (www.aol.com)|34.233.220.13|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Last-modified header missing -- time-stamps turned off.
--2017-11-29 08:05:44-- https://www.aol.com/
Reusing existing connection to www.aol.com:443.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘aol.com/index.html’
aol.com/index.html [ <=> ] 359.95K 751KB/s in 0.5s
2017-11-29 08:05:45 (751 KB/s) - ‘aol.com/index.html’ saved [368585]
FINISHED --2017-11-29 08:05:45--
Total wall clock time: 2.8s
Downloaded: 1 files, 360K in 0.5s (751 KB/s)
What am I doing wrong?