How to Download an Entire Website for Offline Reading

Joel Lee Updated 19-11-2018

Although Wi-Fi is available everywhere these days, you may find yourself without it from time to time. And when you do, there may be certain websites you wish you could save and access while offline—perhaps for research, entertainment, or posterity.


It’s easy enough to save individual web pages for offline reading, but what if you want to download an entire website? Well, it’s easier than you think! Here are four nifty tools you can use to download any website for offline reading, zero effort required.

1. WebCopy

Read a site offline with WebCopy

Available for Windows only.

WebCopy by Cyotek takes a website URL and scans it for links, pages, and media. As it finds pages, it recursively looks for more links, pages, and media until the whole website is discovered. Then you can use the configuration options to decide which parts to download offline.

The interesting thing about WebCopy is you can set up multiple “projects” that each have their own settings and configurations. This makes it easy to re-download many different sites whenever you want, each one in exactly the same way every time.


A single project can also copy many websites, so organize your projects with a plan (e.g. a “Tech” project for copying tech sites).

How to Download an Entire Website With WebCopy

  1. Install and launch the app.
  2. Navigate to File > New to create a new project.
  3. Type the URL into the Website field.
  4. Change the Save folder field to where you want the site saved.
  5. Play around with Project > Rules… (learn more about WebCopy Rules).
  6. Navigate to File > Save As… to save the project.
  7. Click Copy Website in the toolbar to start the process.

Once the copying is done, you can use the Results tab to see the status of each individual page and/or media file. The Errors tab shows any problems that may have occurred and the Skipped tab shows files that weren’t downloaded.

But most important is the Sitemap, which shows the full directory structure of the website as discovered by WebCopy.

To view the website offline, open File Explorer and navigate to the save folder you designated. Open the index.html (or sometimes index.htm) in your browser of choice to start browsing.


2. HTTrack

Grab a webpage for offline reading with WinHTTrack

Available for Windows, Linux, and Android.

HTTrack is better known than WebCopy, and arguably better, because it’s open source and available on platforms other than Windows. The interface is a bit clunky and leaves much to be desired, but it works well, so don’t let that turn you away.

Like WebCopy, it uses a project-based approach that lets you copy multiple websites and keep them all organized. You can pause and resume downloads, and you can update copied websites by re-downloading old and new files.


How to Download a Website With HTTrack

  1. Install and launch the app.
  2. Click Next to begin creating a new project.
  3. Give the project a name, category, base path, then click Next.
  4. Select Download web site(s) for Action, then type each website’s URL in the Web Addresses box, one URL per line. You can also store URLs in a TXT file and import it, which is convenient when you want to re-download the same sites later. Click Next.
  5. Adjust parameters if you want, then click Finish.

Once everything is downloaded, you can browse the site like normal by going to where the files were downloaded and opening the index.html or index.htm in a browser.
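Both WebCopy and HTTrack leave you with a folder of files that you open straight from disk. If relative links misbehave when opened as local files (a common complaint with mirrored sites), serving the folder over localhost usually fixes them. Here is a minimal Python sketch of that idea; the temporary folder and one-line index.html are stand-ins for your real save directory:

```python
# Serve a mirrored folder over localhost so relative links resolve as they do
# online. The temp folder below is a stand-in; point `folder` at the save
# directory WebCopy or HTTrack actually produced.
import functools
import pathlib
import tempfile
import threading
import urllib.request
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

folder = tempfile.mkdtemp()  # stand-in for the real download folder
pathlib.Path(folder, "index.html").write_text("<h1>offline copy</h1>")

# SimpleHTTPRequestHandler can serve any directory via the `directory` argument.
handler = functools.partial(SimpleHTTPRequestHandler, directory=folder)
server = ThreadingHTTPServer(("127.0.0.1", 0), handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetch the front page through the local server, just as a browser would.
url = f"http://127.0.0.1:{server.server_address[1]}/index.html"
page = urllib.request.urlopen(url).read().decode()
print(page)  # <h1>offline copy</h1>
server.shutdown()
```

In practice you would leave the server running and browse to the printed address; the fetch-and-shutdown at the end is only there to make the sketch self-contained.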

3. SiteSucker

SiteSucker copies websites for offline viewing

Available for Mac and iOS.

If you’re on a Mac, your best option is SiteSucker. This simple tool rips entire websites and maintains the same overall structure, and includes all relevant media files too (e.g. images, PDFs, style sheets).


Its clean interface could not be easier to use: you literally paste in the website URL and press Enter.

One nifty feature is the ability to save the download to a file, then use that file to download the same exact files and structure again in the future (or on another machine). This feature is also what allows SiteSucker to pause and resume downloads.

SiteSucker costs $5 and does not come with a free version or a free trial, which is its biggest downside. The latest version requires macOS 10.13 High Sierra or later. Older versions of SiteSucker are available for older Mac systems, but some features may be missing.

4. Wget

Available for Windows, Mac, and Linux.

Wget is a command-line utility that can retrieve all kinds of files over the HTTP and FTP protocols. Since websites are served through HTTP and most web media files are accessible through HTTP or FTP, this makes Wget an excellent tool for ripping websites.

While Wget is typically used to download single files, it can be used to recursively download all pages and files that are found through an initial page:

wget -r -p //

However, some sites may detect and prevent what you’re trying to do because ripping a website can cost them a lot of bandwidth. To get around this, you can disguise yourself as a web browser with a user agent string:

wget -r -p -U Mozilla //

If you want to be polite, you should also limit your download speed (so you don’t hog the web server’s bandwidth) and pause between each download (so you don’t overwhelm the web server with too many requests):

wget -r -p -U Mozilla --wait=10 --limit-rate=35K //
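Conceptually, recursive retrieval is the same for every tool in this article: start at one page, harvest its links, and visit each discovered page exactly once. The Python sketch below illustrates that loop over a made-up three-page site; the `fetch` function is a stub for an HTTP download, not how wget itself is implemented:

```python
# Illustrative sketch of what `wget -r` does: breadth-first crawl from a start
# page, downloading each discovered page once. SITE is a made-up three-page
# site; `fetch` stands in for a real HTTP GET.
import re
from collections import deque

SITE = {
    "/": '<a href="/a.html">a</a> <a href="/b.html">b</a>',
    "/a.html": '<a href="/">home</a>',
    "/b.html": '<a href="/a.html">a</a>',
}

def fetch(path):
    return SITE[path]  # stand-in for an HTTP request

def crawl(start):
    seen, queue, saved = {start}, deque([start]), {}
    while queue:
        path = queue.popleft()
        saved[path] = fetch(path)          # "download" the page
        for link in re.findall(r'href="([^"]+)"', saved[path]):
            if link not in seen:           # never fetch the same page twice
                seen.add(link)
                queue.append(link)
        # a polite crawler would pause here, like wget's --wait flag
    return saved

pages = crawl("/")
print(sorted(pages))  # ['/', '/a.html', '/b.html']
```

The `seen` set is what keeps a crawler from looping forever on sites whose pages link back to each other, and the pause noted in the loop is exactly where options like `--wait` and `--limit-rate` apply.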

Wget comes bundled with most Unix-based systems. On Mac, you can install Wget using a single Homebrew command: brew install wget. On Windows, you’ll need to use a ported version instead.

Which Websites Do You Want to Download?

Now that you know how to download an entire website, you should never be caught without something to read, even when you have no internet access.

But remember: the bigger the site, the bigger the download. We don’t recommend downloading huge sites like MakeUseOf because you’ll need thousands of MBs to store all of the media files we use.

The best sites to download are those with lots of text and not many images, and sites that don’t regularly add new pages or change existing ones. Static information sites, online ebook sites, and sites you want to archive in case they go down are ideal.

If you’re interested in more options for offline reading, take a look at how you can set up Google Chrome for reading books offline. And for ways to read long articles quickly instead of downloading them, check out our other tips and tricks.


Related topics: Download Management, Offline Browsing.

Affiliate Disclosure: By buying the products we recommend, you help keep the site alive. Read more.


Comments

  1. Nicolai Svendsen
    June 5, 2019 at 7:11 am

    Unfortunately, the tool HTTrack just creates the website and the links inside link to inaccurate file paths. The paths are close to correct, except it includes a folder in the path that doesn't actually exist, making the index.html useless for actual browsing if that's what I want to do. This is on Android.

    That's my experience, anyway. I guess it could be some adjusting of options I need to mess around with, I hope, since this would be a nice and tidy solution. I was hoping it would let me browse everything offline instead of going to each individual file to see it. After all, it says "browse mirror" when I'm done.

    I hope I just overlooked something obvious. Granted, it's only been five minutes since I installed it, so I'll keep messing about.

  2. Ahmed Gelemli
    September 9, 2018 at 1:56 pm

    The best site for download:

  3. duane
    May 25, 2018 at 2:49 pm

    are you able to download a site that no longer directs to a current URL? If i change providers, am I able to still download the old website?

  4. Navneet
    May 12, 2018 at 2:09 pm

    great post! is also a great place to download comic to read offline from other sites if you are a comic lover.

  5. Michael1124
    January 1, 2018 at 12:14 pm

    IDM (Internet Download Manager) has a built-in site grabber.

  6. Michael1124
    January 1, 2018 at 12:12 pm

    Did you know IDM (Internet Download Manager) has a built in site grabber?

  7. Rehan Khan
    December 26, 2017 at 8:14 am

    Faris Technology is the cost effective way of devloping websites which will help you to grow your business easy and less invesment.

    We provide Software, Website Development, Themes, Domain registrtion and Webhosting, Email Marketing and Email hosting server @ very reasonable rate kindly contact us

    • ReadandShare
      October 23, 2018 at 8:49 pm

      Right - so cost effective you skip paying for advertising by spamming everyone else instead. Definitely won't be doing business with you - and hope no one else does either.

  8. Silas
    December 1, 2017 at 7:26 pm

    I've installed Homebrew and wget and tried running it but it says "Scheme Missing". I even copy and pasted the code provided on this page and it stills says same thing. What am I missing and how do I correct it? Thanks for your help.

    • no
      December 2, 2017 at 12:22 pm

      Try without the //

      • Silas
        December 2, 2017 at 4:14 pm

        Thank you, that worked.

  9. happy patel
    October 25, 2017 at 2:56 pm

    downloading youtube from last 2 years ...

    • Silas
      December 1, 2017 at 7:27 pm

      Probably still at .01% if I had to guess!?

  10. Paul Gerlich
    October 8, 2017 at 10:44 pm

    Great article, thanks for posting this

  11. Deepak
    September 18, 2017 at 3:14 pm

    none of these downloaders are working against the modern checking of javascript..

    • Paul Gerlich
      October 8, 2017 at 10:44 pm

      Just leveraged the wget, not sure what you think JS is capable of doing - but it's hard to circumvent a fake UserAgent string.