Making Resilient Web Design work offline

Jeremy Keith
Jan 11, 2017

I’ve written before about taking an online book offline, documenting the process behind the web version of HTML5 For Web Designers. A book is quite a static thing so it’s safe to take a fairly aggressive offline-first approach. In fact, a static unchanging book is one of the few situations that AppCache works for. Of course a service worker is better, but until AppCache is removed from browsers (and until service worker is supported across the board), I’m using both. I wouldn’t recommend that for most sites, though. For most sites, use a service worker to enhance them, and avoid AppCache like the plague.

For Resilient Web Design, I took a similar approach to HTML5 For Web Designers but I knew that there was a good chance that some of the content would be getting tweaked at least for a while. So while the approach is still cache-first, I decided to keep the cache fairly fresh.

Here’s my service worker. It starts with the usual stuff: when the service worker is installed, there’s a list of static assets to cache. In this case, that list is literally everything: all the HTML, CSS, JavaScript, and images for the whole site. Again, this is a pattern that works well for a book, but wouldn’t be right for other kinds of websites.
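
The install handler itself is the standard pattern: open a cache and add every file to it. As a rough sketch (the cache name and file list here are illustrative placeholders, not the book’s real asset list), it looks something like this:

// When the service worker is installed, cache all the static assets.
// Illustrative only: the real list covers every page and image in the book.
const staticCacheName = 'static-resilient-web-design';
const staticAssets = [
  '/',
  '/css/styles.css',
  '/js/scripts.js',
  '/images/cover.png'
];

addEventListener('install', installEvent => {
  installEvent.waitUntil(
    caches.open(staticCacheName)
    .then( cache => {
      return cache.addAll(staticAssets);
    })
  );
});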

The real heavy lifting happens with the fetch event. This is where the logic sits for what the service worker should do every time there’s a request for a resource. I’ve documented the logic with comments:

// Look in the cache first, fall back to the network
  // CACHE
  // Did we find the file in the cache?
  // If so, fetch a fresh copy from the network in the background
    // NETWORK
    // Stash the fresh copy in the cache
  // NETWORK
  // If the file wasn't in the cache, make a network request
    // Stash a fresh copy in the cache in the background
  // OFFLINE
  // If the request is for an image, show an offline placeholder
  // If the request is for a page, show an offline message
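
Those comments all live inside the fetch handler, which is shaped roughly like this. This is just a simplified skeleton to show where the request and event variables come from; the full cache-then-network-then-offline logic is filled in below:

// A simplified skeleton of the fetch handler; the detailed logic follows below
addEventListener('fetch', event => {
  let request = event.request;
  event.respondWith(
    // Look in the cache first, fall back to the network
    caches.match(request)
    .then( responseFromCache => {
      return responseFromCache || fetch(request);
    })
  );
});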

So my order of preference is:

  1. Try the cache first,
  2. Try the network second,
  3. Fall back to a placeholder as a last resort.

Leaving aside that third part, regardless of whether the response is served straight from the cache or from the network, the cache gets a top-up. If the response is being served from the cache, there’s an additional network request made to get a fresh copy of the resource that was just served. This means that the user might be seeing a slightly stale version of a file, but they’ll get the fresher version next time round.

Again, I think this is acceptable for a book where the tweaks and changes should be fairly minor, but I definitely wouldn’t want to do it on a more dynamic site where freshness matters more.

Here’s what it usually looks like when a file is served up from the cache:

caches.match(request)
.then( responseFromCache => {
  // Did we find the file in the cache?
  if (responseFromCache) {
    return responseFromCache;
  }

I’ve introduced an extra step where the fresher version is fetched from the network. This is where the code can look a bit confusing: the network request is happening in the background after the cached file has already been returned, but the code appears before the return statement:

caches.match(request)
.then( responseFromCache => {
  // Did we find the file in the cache?
  if (responseFromCache) {
    // If so, fetch a fresh copy from the network in the background
    event.waitUntil(
      // NETWORK
      fetch(request)
      .then( responseFromFetch => {
        // Stash the fresh copy in the cache
        caches.open(staticCacheName)
        .then( cache => {
          cache.put(request, responseFromFetch);
        });
      })
    );
    return responseFromCache;
  }

It’s asynchronous, see? So even though all that network code appears before the return statement, it’s pretty much guaranteed to complete after the cache response has been returned. You can verify this by putting in some console.log statements:

caches.match(request)
.then( responseFromCache => {
  if (responseFromCache) {
    event.waitUntil(
      fetch(request)
      .then( responseFromFetch => {
        console.log('Got a response from the network.');
        caches.open(staticCacheName)
        .then( cache => {
          cache.put(request, responseFromFetch);
        });
      })
    );
    console.log('Got a response from the cache.');
    return responseFromCache;
  }

Those log statements will appear in this order:

Got a response from the cache.
Got a response from the network.

That’s the opposite order in which they appear in the code. Everything inside the event.waitUntil part is asynchronous.

Here’s the catch: this kind of asynchronous waitUntil hasn’t landed in all the browsers yet, so in browsers without support the code I’ve written will fail.

But never fear! Jake has written a polyfill. All I need to do is include that at the start of my serviceworker.js file and I’m good to go:

// Import Jake's polyfill for async waitUntil
importScripts('/js/async-waituntil.js');

I’m also using it when a file isn’t found in the cache, and is returned from the network instead. Here’s what the usual network code looks like:

fetch(request)
.then( responseFromFetch => {
  return responseFromFetch;
})

I want to also store that response in the cache, but I want to do it asynchronously — I don’t care how long it takes to put the file in the cache as long as the user gets the response straight away.

Technically, I’m not putting the response in the cache; I’m putting a copy of the response in the cache (it’s a stream, so I need to clone it if I want to do more than one thing with it).

fetch(request)
.then( responseFromFetch => {
  // Stash a fresh copy in the cache in the background
  let responseCopy = responseFromFetch.clone();
  event.waitUntil(
    caches.open(staticCacheName)
    .then( cache => {
      cache.put(request, responseCopy);
    })
  );
  return responseFromFetch;
})
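
That covers the cache and the network. The third step, the offline fallback, hangs off a catch at the end of that network request. I’m glossing over it here, but a rough sketch looks something like this (the placeholder file names are illustrative, and would need to be in the list of cached assets):

fetch(request)
.then( responseFromFetch => {
  // Stash a copy and return the response, as above
  return responseFromFetch;
})
.catch( () => {
  // OFFLINE
  // If the request is for an image, show an offline placeholder
  if (request.url.match(/\.(jpe?g|png|gif|svg)$/)) {
    return caches.match('/images/offline.svg');
  }
  // If the request is for a page, show an offline message
  return caches.match('/offline.html');
});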

That all seems to be working well in browsers that support service workers. For legacy browsers, like Mobile Safari, there’s the much blunter caveman logic of an AppCache manifest.
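
The appcache.html page itself doesn’t need much in it; its only job is to point at a cache manifest listing the same files. Something along these lines (the manifest file name here is illustrative):

<!-- A bare-bones appcache.html: the manifest attribute does all the work -->
<!DOCTYPE html>
<html manifest="/manifest.appcache">
<meta charset="utf-8">
<title>Offline cache</title>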

Here’s the JavaScript that decides whether a browser gets the service worker or the AppCache:

if ('serviceWorker' in navigator) {
  // If service workers are supported
  navigator.serviceWorker.register('/serviceworker.js');
} else if ('applicationCache' in window) {
  // Otherwise inject an iframe to use appcache
  var iframe = document.createElement('iframe');
  iframe.setAttribute('src', '/appcache.html');
  iframe.setAttribute('style', 'width: 0; height: 0; border: 0');
  document.querySelector('footer').appendChild(iframe);
}

Either way, people are making full use of the offline nature of the book and that makes me very happy indeed.

This was originally posted on my own site.
