Timing out

Jeremy Keith
May 8, 2019

Service workers are great for creating a good user experience when someone is offline. Heck, the book I wrote about service workers is literally called Going Offline.

But in some ways, the offline experience is relatively easy to handle. It’s a binary situation; either you’re online or you’re offline. What’s more challenging — and probably more common — is the situation that Jake calls Lie-Fi. That’s when technically you’ve got a network connection …but it’s a shitty connection, like one bar of mobile signal. In that situation, because there’s technically a connection, the user gets a slow frustrating experience. Whatever code you’ve got in your service worker for handling offline situations will never get triggered. When you’re handling fetch events inside a service worker, there’s no automatic time-out.

But you can make one.

That’s what I’ve done recently here on adactio.com. Before showing you what I added to my service worker script to make that happen, let me walk you through my existing strategy for handling offline situations.

Service worker strategies

Alright, so in my service worker script, I’ve got a block of code for handling requests from fetch events:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    // Do something with this request.
});

I’ve got two strategies in my code. One is for dealing with requests for pages:

if (request.headers.get('Accept').includes('text/html')) {
    // Code for handling page requests.
}

By adding an else clause I can have a different strategy for dealing with requests for anything else—images, style sheets, scripts, and so on:

if (request.headers.get('Accept').includes('text/html')) {
    // Code for handling page requests.
} else {
    // Code for handling everything else.
}

For page requests, I’m going to try to go the network first:

fetchEvent.respondWith(
    fetch(request)
    .then( responseFromFetch => {
        return responseFromFetch;
    })

My logic is:

When someone requests a page, try to fetch it from the network.

If that doesn’t work, we’re in an offline situation. That triggers the catch clause. That’s where I have my offline strategy: show a custom offline page that I’ve previously cached (during the install event):

.catch( fetchError => {
    return caches.match('/offline');
})
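That custom offline page gets cached ahead of time in the install event handler, which isn’t shown in this post. A minimal sketch of that install code might look like this (the cache name here is an assumption, not necessarily what my actual install event uses):

addEventListener('install', installEvent => {
    installEvent.waitUntil(
        // The cache name ('static') is illustrative only.
        caches.open('static')
        .then( staticCache => {
            // Pre-cache the custom offline page.
            return staticCache.addAll(['/offline']);
        })
    );
});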

Now my logic has been expanded to this:

When someone requests a page, try to fetch it from the network, but if that doesn’t work, show a custom offline page instead.

So my overall code for dealing with requests for pages looks like this:

if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
        fetch(request)
        .then( responseFromFetch => {
            return responseFromFetch;
        })
        .catch( fetchError => {
            return caches.match('/offline');
        })
    );
}

Now I can fill in the else statement that handles everything else—images, style sheets, scripts, and so on. Here my strategy is different. I’m looking in my caches first, and I only fetch the file from the network if it can’t be found in any cache:

caches.match(request)
.then( responseFromCache => {
    return responseFromCache || fetch(request);
})

Here’s all that fetch-handling code put together:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            fetch(request)
            .then( responseFromFetch => {
                return responseFromFetch;
            })
            .catch( fetchError => {
                return caches.match('/offline');
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

Good.

Cache as you go

Now I want to introduce an extra step in the part of the code where I deal with requests for pages. Whenever I fetch a page from the network, I’m going to take the opportunity to squirrel it away in a cache. I’m calling that cache “pages”. I’m imaginative like that.

fetchEvent.respondWith(
    fetch(request)
    .then( responseFromFetch => {
        const copy = responseFromFetch.clone();
        try {
            fetchEvent.waitUntil(
                caches.open('pages')
                .then( pagesCache => {
                    return pagesCache.put(request, copy);
                })
            );
        } catch(error) {
            console.error(error);
        }
        return responseFromFetch;
    })

You’ll notice that I can’t put the response itself (responseFromFetch) into the cache. That’s a stream that I only get to use once. Instead I need to make a copy:

const copy = responseFromFetch.clone();

That’s what gets put in the pages cache:

fetchEvent.waitUntil(
    caches.open('pages')
    .then( pagesCache => {
        return pagesCache.put(request, copy);
    })
)

Now my logic for page requests has an extra piece to it:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, show a custom offline page instead.

Here’s my updated fetch-handling code:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            fetch(request)
            .then( responseFromFetch => {
                const copy = responseFromFetch.clone();
                try {
                    fetchEvent.waitUntil(
                        caches.open('pages')
                        .then( pagesCache => {
                            return pagesCache.put(request, copy);
                        })
                    );
                } catch(error) {
                    console.error(error);
                }
                return responseFromFetch;
            })
            .catch( fetchError => {
                return caches.match('/offline');
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

I call this the cache-as-you-go pattern. The more pages someone views on my site, the more pages they’ll have cached.

Now that there’s an ever-growing cache of previously visited pages, I can update my offline fallback. Currently, I reach straight for the custom offline page:

.catch( fetchError => {
    return caches.match('/offline');
})

But now I can try looking for a cached copy of the requested page first:

.catch( fetchError => {
    return caches.match(request)
    .then( responseFromCache => {
        return responseFromCache || caches.match('/offline');
    });
})

Now my offline logic is expanded:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead.

I can also access this ever-growing cache of pages from my custom offline page to show people which pages they can revisit, even if there’s no internet connection.
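As a rough sketch of how that can work, a script on the offline page itself can open the pages cache and list the URLs it holds. The markup hook (a list element with the ID history) is an assumption for illustration, not the actual code from my offline page:

// This runs on the custom offline page, not in the service worker.
// The <ul id="history"> element is an assumption for this sketch.
if ('caches' in window) {
    caches.open('pages')
    .then( pagesCache => {
        return pagesCache.keys();
    })
    .then( cachedRequests => {
        const list = document.getElementById('history');
        cachedRequests.forEach( cachedRequest => {
            const listItem = document.createElement('li');
            const link = document.createElement('a');
            link.href = cachedRequest.url;
            link.textContent = cachedRequest.url;
            listItem.appendChild(link);
            list.appendChild(listItem);
        });
    });
}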

So far, so good. Everything I’ve outlined so far is a good robust strategy for handling offline situations. Now I’m going to deal with the lie-fi situation, and it’s that cache-as-you-go strategy that sets me up nicely.

Timing out

I want to throw this addition into my logic:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead (but if the request is taking too long, try to show a cached version of the page).

The first thing I’m going to do is rewrite my code a bit. If the fetch event is for a page, I’m going to respond with a promise:

if (request.headers.get('Accept').includes('text/html')) {
    fetchEvent.respondWith(
        new Promise( resolveWithResponse => {
            // Code for handling page requests.
        })
    );
}

Promises are kind of weird things to get your head around. They’re tailor-made for doing things asynchronously. You can set up two parameters: a success condition and a failure condition. If the success condition is executed, then we say the promise has resolved. If the failure condition is executed, then the promise rejects.
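Stripped of the service worker details, the basic shape looks something like this toy example (not code from my service worker):

const promise = new Promise( (resolve, reject) => {
    // resolve is the success condition; reject is the failure condition.
    // Calling one of them settles the promise.
    resolve('all good');
    // reject(new Error('no good'));
});

promise.then( result => {
    console.log(result); // logs 'all good'
});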

In my re-written code, I’m calling the success condition resolveWithResponse (and I haven’t bothered with a failure condition, tsk, tsk). I’m going to use resolveWithResponse in my promise everywhere that I used to have a return statement:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            new Promise( resolveWithResponse => {
                fetch(request)
                .then( responseFromFetch => {
                    const copy = responseFromFetch.clone();
                    try {
                        fetchEvent.waitUntil(
                            caches.open('pages')
                            .then( pagesCache => {
                                return pagesCache.put(request, copy);
                            })
                        );
                    } catch(error) {
                        console.error(error);
                    }
                    resolveWithResponse(responseFromFetch);
                })
                .catch( fetchError => {
                    caches.match(request)
                    .then( responseFromCache => {
                        resolveWithResponse(
                            responseFromCache || caches.match('/offline')
                        );
                    });
                });
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

By itself, rewriting my code as a promise doesn’t change anything. Everything’s working the same as it did before. But now I can introduce the time-out logic. I’m going to put this inside my promise:

const timer = setTimeout( () => {
    caches.match(request)
    .then( responseFromCache => {
        if (responseFromCache) {
            resolveWithResponse(responseFromCache);
        }
    });
}, 3000);

If a request takes longer than three seconds (3000 milliseconds), that code will execute. At that point, the promise attempts to resolve with a response from the cache instead of waiting for the network. If there is a cached response, that’s what the user now gets. If there isn’t, the wait for the network continues.

The last thing left for me to do is cancel the countdown to timing out if a network response does return within three seconds. So I put this in the then clause that’s triggered by a successful network response:

clearTimeout(timer);

I also add the clearTimeout statement to the catch clause that handles offline situations. Here’s the final code:

addEventListener('fetch', fetchEvent => {
    const request = fetchEvent.request;
    if (request.headers.get('Accept').includes('text/html')) {
        fetchEvent.respondWith(
            new Promise( resolveWithResponse => {
                const timer = setTimeout( () => {
                    caches.match(request)
                    .then( responseFromCache => {
                        if (responseFromCache) {
                            resolveWithResponse(responseFromCache);
                        }
                    });
                }, 3000);

                fetch(request)
                .then( responseFromFetch => {
                    clearTimeout(timer);
                    const copy = responseFromFetch.clone();
                    try {
                        fetchEvent.waitUntil(
                            caches.open('pages')
                            .then( pagesCache => {
                                return pagesCache.put(request, copy);
                            })
                        );
                    } catch(error) {
                        console.error(error);
                    }
                    resolveWithResponse(responseFromFetch);
                })
                .catch( fetchError => {
                    clearTimeout(timer);
                    caches.match(request)
                    .then( responseFromCache => {
                        resolveWithResponse(
                            responseFromCache || caches.match('/offline')
                        );
                    });
                });
            })
        );
    } else {
        fetchEvent.respondWith(
            caches.match(request)
            .then( responseFromCache => {
                return responseFromCache || fetch(request);
            })
        );
    }
});

That’s the JavaScript translation of this logic:

When someone requests a page, try to fetch it from the network and store a copy in a cache, but if that doesn’t work, first look for an existing copy in a cache, and otherwise show a custom offline page instead (but if the request is taking too long, try to show a cached version of the page).

For everything else, try finding a cached version first, otherwise fetch it from the network.

Pros and cons

As with all service worker enhancements to a website, this strategy will do absolutely nothing for first-time visitors. If you’ve never visited my site before, you’ve got nothing cached. But the more you return to the site, the more your cache is primed for speedy retrieval.

I think that serving up a cached copy of a page when the network connection is flaky is a pretty good strategy …most of the time. If we’re talking about a blog post on this site, then sure, there won’t be much that the reader is missing out on — a fixed typo or ten; maybe some additional webmentions at the end of a post. But if we’re talking about the home page, then a reader with a flaky network connection might think there’s nothing new to read when they’re served up a stale version.

What I’d really like is some way to know — on the client side — whether or not the currently-loaded page came from a cache or from a network. Then I could add some kind of interface element that says, “Hey, this page might be stale — click here if you want to check for a fresher version.” I’d also need some way in the service worker to identify any requests originating from that interface element and make sure they always go out to the network.

I think that should be doable somehow. If you can think of a way to do it, please share it. Write a blog post and send me the link.

But even without the option to over-ride the time-out, I’m glad that I’m at least doing something to handle the lie-fi situation. Perhaps I should write a sequel to Going Offline called Still Online But Only In Theory Because The Connection Sucks.

This was originally posted on my own site.
