Use FURL to Retrieve Website Headers


It's important to know what headers your website and its files are communicating. For example, if your website is returning a 404 status, you're probably streaking toward your computer to fix the problem. Using the FURL utility, you can retrieve website headers right from the command line.

The Shell Script

furl https://davidwalsh.name

Simple and quick -- just like any good shell command.

The Sample Response

HTTP/1.1 200 OK
Date: Thu, 25 Jun 2009 01:50:50 GMT
Server: Apache/2.2.3 (CentOS)
X-Powered-By: PHP/5.2.6
X-Pingback: https://davidwalsh.name/xmlrpc.php
Cache-Control: max-age=1, private, must-revalidate
Expires: Thu, 25 Jun 2009 01:50:51 GMT
Vary: Accept-Encoding
Connection: close
Content-Type: text/html; charset=UTF-8
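Once you have a header block like the one above, it's easy to pull out just the piece you care about. A minimal sketch, using a captured sample instead of a live request so it runs anywhere:

```shell
#!/bin/sh
# Sample header block, as furl (or curl -I) would return it.
headers='HTTP/1.1 200 OK
Date: Thu, 25 Jun 2009 01:50:50 GMT
Content-Type: text/html; charset=UTF-8'

# The status code is the second field of the first line.
status=$(printf '%s\n' "$headers" | head -n 1 | awk '{print $2}')
echo "$status"   # prints 200
```

In practice you'd pipe the live output of `furl` straight into the same `head | awk` chain.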

Don't have FURL? If you use MacPorts, you can install it by running:

sudo port install furl

How is this useful? I would use this to periodically (cron) check my website to make sure it was up. What would you use this for?
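That cron idea can be sketched in a few lines. This is a hypothetical example, not code from the post: the `check_status` helper name is mine, and the commented-out `curl` line shows where you'd fetch the live status code in a real cron job (piping the alert to `mail(1)` or similar).

```shell
#!/bin/sh
# Hypothetical uptime check for a cron job.
# Prints an alert line when the HTTP status code isn't 200.
check_status() {
  if [ "$1" != "200" ]; then
    echo "ALERT: site returned $1"
  else
    echo "OK"
  fi
}

# In cron, you'd fetch the live code first, e.g.:
#   status=$(curl -sI -o /dev/null -w '%{http_code}' https://davidwalsh.name)
check_status 200
check_status 404
```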


Discussion

  1. I’d use it to retrieve the X-Pingback value and if it was included, I’d send a trackback. ;-)

  2. adamnfish

    Or, if you don’t fancy installing furl for this, you can do the same with curl (a powerful and flexible utility for performing requests) with the -I flag:

    eg.
    curl -I http://davidwalsh.name

    (you probably have curl installed already)

    to see the headers and the full response, use the verbose flag
    curl -v http://davidwalsh.name

  3. @adamnfish: Thanks for sharing that. On a side note, “adamnfish” sounds like a wacky morning FM radio show.

  4. Not sure where sources are but the Debian package is at http://bertorello.ns0.it/debian/furl/

  5. As already mentioned,

    curl -I HOSTNAME

    Has the same functionality but without installing something extra.

  6. curl -I is good. This is another suggestion…

    lwp-request -ed "http://lindesk.com/"

  7. Marco

    another trick is:

    lynx -head http://davidwalsh.name

    lynx is a Linux text-mode browser

  8. Dang! I should have read this sooner. I was itching to jump all over the “curl -I” suggestion. Everyone got here first!

  9. Rex

    alias furl='curl -i -X HEAD'
