Using JavaScript to Identify Whether a Server Exists

Recently, for reasons I’m sure I’ll write about in the
future, I needed a way to use JavaScript to test whether either of two
web locations is accessible – my home intranet (which would mean the user is on
my network), or the corporate intranet of the company I work for (which
would mean the user is on my organization’s network). The page doing this test
is on the public web.

My solution was simple. Since neither
resource is publicly accessible, I put a small JavaScript file on each, then I
use AJAX and jQuery to try to fetch it. If
the fetch succeeds, I know the user has access to whichever intranet site served
the request, and my page can react accordingly.

If neither request is successful I don’t have to do
anything, and the user doesn’t see any errors unless they choose to take a look
in the browser console.
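The fetch itself only needs a few lines. Here’s a minimal sketch of the idea, assuming jQuery is loaded on the page; the URL and callback names are placeholders, not my real ones:

```javascript
// Probe an intranet by trying to fetch a small JavaScript file it serves.
// If the fetch succeeds, the user can reach that network.
function probeIntranet(url, onAvailable) {
  // Assumes jQuery is available as $.
  $.ajax({
    url: url,
    dataType: "script",
    cache: true, // don't append a cache-busting query string
    success: function () { onAvailable(url); },
    error: function () { /* unreachable from here; stay quiet */ }
  });
}

// e.g. probeIntranet("http://intranet.example/probe.js", showHomeBanner);
```

The empty error handler is the point: a visitor on neither network sees nothing unusual unless they open the console.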

This all worked wonderfully until I enabled SSL on the page
that needs to run these tests, then it immediately fell apart.

Both requests fail, because a page served over HTTPS is
blocked from asynchronously fetching content over an insecure connection. That
makes sense, but it really throws a spanner into the works for me: neither my home
nor my corporate intranet site is available outside the confines of its safe
network, so neither supports HTTPS.

My first attempt at getting around this was to simply change
the URL prefix for each from http:// to https:// and see what happened. Neither
site supports that protocol, but is the error that comes back different for a
site which exists but can’t respond vs. a site which doesn’t exist? It appeared
so, at first.
Sadly, my joy at having solved the problem was extremely
short-lived. The browser can tell the difference and reports as much in the
console, but JavaScript doesn’t have access to the error reported in the
console. As far as my code was concerned, both scenarios were still identical,
with an HTTP response code of 0 and a worryingly generic status description of “error.”

We are getting closer to the solution I landed on, however.
The next thing I tried was specifying the port in the URL. I used the https://
prefix to avoid the “mixed content” error, but appended :80 after the hostname
to specify a port that the server was actually listening on.

This was what I was looking for. Neither server is capable
of responding to an HTTPS request on port 80, but the server that doesn’t exist
immediately returns an error (with a status code of 0 and the generic “error”
as the descriptive text), while the server that is accessible simply doesn’t
respond. Eventually that request times out, with a status code of 0 but a status
description, crucially, of “timeout.”

From that, I built my imperfect but workable
solution. I fire a request off to each address, both of which are going to
fail. One fails immediately, which indicates the server doesn’t exist; the
other times out (which I can check for in my JavaScript), indicating that the
server exists, and I can react accordingly.
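In jQuery terms the whole trick hinges on the textStatus argument passed to the error callback. A sketch of the check, again with placeholder hostnames and handler names standing in for my real ones:

```javascript
// Classify a server by HOW an HTTPS-to-port-80 request fails:
// a timeout means something is listening; an immediate error means
// nothing exists at that address. Assumes jQuery is loaded as $.
function checkServer(url, onExists, onMissing) {
  $.ajax({
    url: url,            // e.g. "https://intranet.example:80/probe.js"
    dataType: "script",
    timeout: 5000,       // a "successful" result takes at least this long
    error: function (jqXHR, textStatus) {
      if (textStatus === "timeout") {
        onExists(url);   // listening, but never answered the request
      } else {
        onMissing(url);  // failed immediately: no server there
      }
    }
  });
}
```

Note that the success handler is omitted entirely: with an https:// prefix and port 80, the request can never actually succeed, so the error callback is the only path.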

It’s not a perfect solution. I set the timeout limit in my
code to five seconds, which means a “successful” result can’t possibly come
back in less time than that. I’d like to reduce that time, but when I
originally had it set at 2.5 seconds I was occasionally getting a
false-positive on my corporate network caused by, y’know, an actual timeout
from a request that took longer than that to return in an error state.

Nevertheless, if you have a use case like mine and you need
to test whether a server exists from the client’s perspective (i.e. the result
of doing the check server-side would be irrelevant), I know of no other way. As for
me, I’m still on the lookout for a more elegant design. I’m next going to try
and figure out a reliable way to identify if the user is connected to my home
or corporate network based on their IP address. That way I can do a quick
server-side check and return an immediate result.

It’s good to have this to fall back on, though, and for now
at least it appears to be working.


Being Smarter by Not Thinking

There’s a popular myth that says we only use 10% of our brains.

It’s simply not true. Studies (including the source of all scientific truth: an episode of
MythBusters) have proven that all areas of the brain have a function, and while
the percentage that we’re “using” at any given time varies by task, it can
certainly exceed 10%.


One thing that seems very obvious to me without needing to
cite a study about it, however, is that I certainly have unused brain capacity,
and it can do amazing things when you leave it to its own devices.

As an example of what I’m talking about, I refer you to a
link I posted on this very blog some time ago: Why
Great Ideas Always Come in the Shower (and How to Harness Them)

In the brief commentary I added, I mentioned that never in
my life have I had a good idea in a meeting. Great ideas come to me while I’m
doing other things. Specifically, other things that do not take much in the way
of thought and offer little in the way of distraction: things where my brain
gets left to its own devices and has an opportunity to wander – showering,
certainly, but also commuting, trying to get to sleep at the very end of the
day (infuriatingly), and when I’m at the gym.

Speaking of that last one, I haven’t been to the gym for
quite some time.

When we lived in our apartment there was a gym in the
building, and that was great. I could easily fit in a solid 45 minutes there at
lunch. Any spare 30 minute window in my schedule could be turned into 20
minutes on the stationary bike.

I want to go back, but now that we’ve bought the house there
is obviously not an on-site gym. There’s a gym at the office (20 minutes away)
and a Goodlife Fitness close by (10 minutes away) where I’d get a discounted
rate, but small though it is even that travel time is putting me off. I will
most likely join Goodlife, since I rarely go to the office these days and
installing a home gym just isn’t in the budget right now, but I’ve been missing
the ability to easily take 30 minutes and get some exercise, and I’m sad that
none of the solutions will offer me that. In the absence of a perfect solution,
I haven’t done anything at all… until yesterday.

Since the weather here in Calgary is distinctly spring-like
these days, I went for a walk before I started my work day. I didn’t go far – a
little less than 2km, according to the Google Fit data from my phone and watch
– just down the road a bit and then back along the pathways that run through
our neighbourhood.

I liked it so much I did it again at lunch time, and then
for a third time this morning.

The physical benefits of this, though I’m sure not huge by any
means, are probably much needed at this point. Really though what I like about
it so much are the mental benefits. I’ve never been much of a morning person
and I would never consider going to the gym before work, but rolling out of bed
and attempting to be productive more or less immediately is not a recipe for
success either. Feeling like my day has already started by the time I sit down
to get some work done definitely gives me a mental boost that I’ve been able to
capitalize on. More significantly though, there’s a lot to be said for the kind
of problem solving that can only come from not thinking about something too
much and letting my subconscious guide me in ways that I’d never have come up
with if I were sitting at my desk consciously trying to focus on something.

It’s amazing what you can do when you’re not trying to do
anything at all.



The journey of a thousand miles starts with a single step, in the opposite direction
— Me, reflecting on some meetings I’ve had recently

Sometimes you need to take a step back and learn to walk before you try and run.


New Code Projects: Backblaze B2 Version Cleaner & VBA SharePoint List Library

It’s been a while since I’ve posted code of any description, but I’ve been working on a couple of things recently that I’m going to make publicly available on my GitLab page (and my mirror repository at

Backblaze B2 Version Cleaner

I wrote last week about transitioning my cloud backup to Backblaze’s B2 service, and I also mentioned a feature of it that’s nice but also slightly problematic to me: it keeps an unlimited version history of all files.

That’s good, because it gives me the ability to go back in time should I ever need to, but over time the size of this version history will add up – and I’m paying for that storage.

So, I’ve written a script that will remove old versions once a newer version of the same file has reached a certain (configurable) “safe age.”

For my purposes I use 30 days, so a month after I’ve overwritten or deleted a file the old version is discarded. If I haven’t seen fit to roll back the clock before then my chance is gone.
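The core of the rule is easy to express. Here’s a sketch of just the selection logic, in JavaScript for illustration – the real script also has to talk to B2’s b2_list_file_versions and b2_delete_file_version endpoints, which I’ve left out. The field names (fileName, fileId, uploadTimestamp) mirror what B2’s version listing returns, but treat this as an illustrative sketch rather than the actual code:

```javascript
// Given all versions of all files, return the versions that can be
// deleted: any version superseded by a newer one that has already
// existed for at least `safeAgeDays`.
function versionsToDelete(versions, safeAgeDays, now) {
  const safeAgeMs = safeAgeDays * 24 * 60 * 60 * 1000;
  // Group versions by file name.
  const byName = {};
  for (const v of versions) {
    (byName[v.fileName] = byName[v.fileName] || []).push(v);
  }
  const doomed = [];
  for (const name of Object.keys(byName)) {
    // Newest first.
    const vs = byName[name].sort((a, b) => b.uploadTimestamp - a.uploadTimestamp);
    for (let i = 1; i < vs.length; i++) {
      // vs[i - 1] is the newer version that superseded vs[i].
      if (now - vs[i - 1].uploadTimestamp >= safeAgeMs) {
        doomed.push(vs[i]);
      }
    }
  }
  return doomed;
}
```

With safeAgeDays set to 30, an old version survives exactly as long as its replacement is less than a month old.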

Get the code here!

VBA SharePoint List Library

This one I created for work. Getting data from a SharePoint list into Excel is easy, but I needed to write Excel data to a list. I assumed there’d be a VBA function that did this for me, but as it turns out I was mistaken – so I wrote one!

At the time of writing this is in “proof of concept” stage. It works, but it’s too limited for primetime (it can only create new list items, not update existing ones, and each new item can only have a single field).

Out of necessity I’ll be developing this one pretty quickly though, so check back regularly! Once it’s more complete I’ll be opening it up to community contributions.

I have no plans to add functions that read from SharePoint to this library, but once I have the basic framework down that wouldn’t be too hard to add if you’re so inclined. Just make sure you contribute back!

Get the code here!


Raspberry Pi Whole Home Audio: The Death of a Dream?

If you’ve been following my blog for a while, you’ll know
that I’ve written a whole
series of posts
on my efforts to take a few Raspberry Pis and turn them
into a DIY whole home audio solution.

If you’ve ever looked at the product offering within the
whole home audio space, you’ll know that setting such a thing up is either
cripplingly expensive, involves tearing the walls apart to run cables, or both.

Where we left off I’d put together a solution that was
glorious when it worked, but that was rare. Typically the audio was either out
of sync between the devices right from the get go, or quickly got that way.

Getting the Pis to play the same music was relatively
simple, but getting it perfectly in sync so that it could be used in a
multi-room setup eluded me to the end, and eventually I gave up.

The bottom line is that synchronizing audio between multiple
devices in a smart way requires specialized hardware that can properly account
for the differences in network latency between each of the end points. The Pi
doesn’t have that, and it’s not really powerful enough to emulate it in
software.
So is my dream of a reasonably priced whole home audio
solution dead? Hell no.

In October I wrote
about Google’s announcement of the Chromecast Audio
. At the time it didn’t
have support for whole home audio but Google had promised that it was coming.
It’s here.

The day they announced that it had arrived was the day I
headed over to my local BestBuy and picked up four of these things. I plan to
add two more, and I couldn’t be happier with the results.

Plus, it frees up the Pis for other cool projects. Watch
this space!


Cloud Backup, Episode III

I’ve written a couple
of times
before about what I do to backup all my important data.

My last post on the topic was more than a year ago though,
so I’ll forgive you if you’ve forgotten. Here’s a recap: originally I was using
a fairly traditional consumer backup service, ADrive.
This worked well because they’re one of the few services that provides access
by Rsync, which made it easy to run scripted backup jobs on my Linux
server. Their account structure didn’t really meet my needs, however: you
pay for the storage you have available to you, not what you use. When I hit the
upper limit of my account the next tier up didn’t make financial sense, so I
went looking for alternatives.
About 15 months ago I moved my backups over to Google’s Cloud Platform. This gives me an
unlimited amount of storage space, and I just pay for what I use at a rate of
$0.02/GB/Month. This has been working well for me.

In December I came across Backblaze
B2. They offer a service very similar to Google’s (or Amazon S3, or
Microsoft Azure, or any of the other players in this space that you may have
heard of), except they cost a quarter of the price at $0.005/GB/Month. There’s
even a free tier, so you don’t pay anything for the first 10GB. When I first
looked at them their toolset for interacting with their storage buckets really
wasn’t where I needed it to be to make them viable, but they’ve been iterating
quickly. I checked again this week, and I’ve already started moving some of my
backups over.

In time, I plan to switch all my backups over. So far I’ve
moved my documents folder and backups of my webserver, which total about
2.5GB. That’s nice, because it means I’m still within the free tier. The next
thing to tackle is the backups of all our photos and music, which together come
to around 110GB. That means transferring 110GB of data, which is
going to be a painful experience. I’m still thinking about the best way to do
it, but most likely I’ll spin up a VPS and have it handle
the download of the backup from Google and the upload to Backblaze, so that it
doesn’t hog all the bandwidth I have on my home internet connection.

The only other thing to think about with Backblaze is
versioning. Google offers versioning on their storage buckets, but I have it
disabled. With Backblaze there is no option (at least not that I’ve found) to
disable this feature – meaning previous versions of files are retained and
count toward my storage bill.

I’m torn on this. The easy thing to do would be to disable
it, assuming a future iteration of Backblaze’s service does in fact add
the ability to turn it off. I’m thinking, though, that the smarter
thing to do is to make use of it.

For me and my consumer needs, that will most likely mean
putting together a PHP script or two to manage it more intelligently. Having
some past versions around is nice, but some of the files in my backup change pretty
frequently, and I definitely don’t need an unlimited history.

Still, I’m very much pleased with the price of B2, and
watching the feature set rapidly improve over the past couple of months gives
me confidence that I can move my backups over and keep them there for the
long-term, because the transition from one service to another is not something
I want to put myself through too often.


With the craziness of buying a new house in the last couple of months we never did get around to making Christmas cards this year, so if you’ve been waiting by your mailbox you can stop that now. I promise we’ll do some extra-cheesy ones next year, and I hope everyone has a very merry Christmas and a happy new year, from our family to yours.