I say: Point it at a URL and it downloads the entire website. Once you get the hang of the interface it's really easy to use.
They say: "WebReaper is a web crawler, or spider, which can work its way through a website, downloading the pages, pictures and objects that it finds so that they can be viewed locally, without needing to be connected to the internet. The sites can be saved locally as a fully-browsable website which can be viewed with any browser (such as Internet Explorer, Netscape, Opera, etc), or they can be saved into the Internet Explorer cache and viewed using IE's offline mode as if you'd surfed the sites 'by hand'.
To use WebReaper, simply enter a starting URL and hit the Go button. The program will then download the page at that URL, parsing the HTML as it goes, looking for links to other pages and objects. It will then extract this list of sub-links and download them. This process continues recursively until either no more links fulfil WebReaper's filter criteria or your hard disk becomes full - whichever happens first!
The locally saved files will have their HTML links adjusted so that they can be browsed as if they were being read directly from the internet.
The download is fully configurable - custom hierarchical filters can be constructed from 12 different filter types to allow targeted downloads. Simple filters can be built using the Filter Wizard, or more complex ones can be hand-crafted."
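The recursive download-parse-filter loop the blurb describes is the core of any site ripper. Here is a minimal Python sketch of the idea (not WebReaper's actual code - the `fetch` and `keep` callables are hypothetical stand-ins for the HTTP download and the filter criteria):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin


class LinkExtractor(HTMLParser):
    """Collect href/src attributes from a page, resolved against its URL."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                # Resolve relative links against the page's own URL.
                self.links.append(urljoin(self.base_url, value))


def crawl(url, fetch, keep, visited=None):
    """Recursively collect pages reachable from `url`.

    `fetch(url)` returns the page body as HTML text;
    `keep(url)` is the filter predicate (stand-in for WebReaper's filters);
    `visited` maps each saved URL to its content, and doubles as the
    "seen" set that stops the recursion from looping forever.
    """
    if visited is None:
        visited = {}
    if url in visited or not keep(url):
        return visited
    html = fetch(url)
    visited[url] = html
    parser = LinkExtractor(url)
    parser.feed(html)
    for link in parser.links:
        crawl(link, fetch, keep, visited)
    return visited


# Demo on an in-memory "site" instead of a live server.
site = {
    "http://example.com/": '<a href="a.html">A</a> <a href="http://other.com/x">X</a>',
    "http://example.com/a.html": '<img src="pic.png">',
    "http://example.com/pic.png": "",
}
pages = crawl(
    "http://example.com/",
    fetch=lambda u: site.get(u, ""),
    keep=lambda u: u.startswith("http://example.com"),  # stay on one site
)
```

The `keep` predicate is what keeps the recursion from swallowing the whole internet - in the demo it rejects the off-site `http://other.com/x` link, so only the three `example.com` objects are saved.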