Use FURL to Retrieve Website Headers
It's important to know what headers your website and its files are communicating. For example, if your website is returning a 404 status, you're probably streaking toward your computer to fix the problem. Using the FURL utility, you can retrieve website headers from the command line.
The Shell Script
furl https://davidwalsh.name
Simple and quick -- just like every shell command should be.
The Sample Response
HTTP/1.1 200 OK
Date: Thu, 25 Jun 2009 01:50:50 GMT
Server: Apache/2.2.3 (CentOS)
X-Powered-By: PHP/5.2.6
X-Pingback: https://davidwalsh.name/xmlrpc.php
Cache-Control: max-age=1, private, must-revalidate
Expires: Thu, 25 Jun 2009 01:50:51 GMT
Vary: Accept-Encoding
Connection: close
Content-Type: text/html; charset=UTF-8
Don't have FURL? Install it via MacPorts by running:
sudo port install furl
How is this useful? I would use this to periodically (via cron) check my website to make sure it's up; a sketch of such a check is below. What would you use this for?
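For example, a minimal cron-driven uptime check might look something like this (a sketch only -- the script path, URL, and email address are placeholders, and it assumes curl and a working mail command are available):

#!/bin/sh
# Entry in crontab, checking every five minutes:
# */5 * * * * /usr/local/bin/check-site.sh

# Fetch only the HTTP status code, discarding the response body
status=$(curl -s -o /dev/null -w "%{http_code}" https://davidwalsh.name)
if [ "$status" != "200" ]; then
  echo "Site returned HTTP $status" | mail -s "Site down?" admin@example.com
fi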
I’d use it to retrieve the X-Pingback value and if it was included, I’d send a trackback. ;-)
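Pulling out just that header is a one-liner (a sketch, assuming furl prints the headers to standard output as in the sample response above):

furl https://davidwalsh.name | grep -i '^X-Pingback:'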
Or, if you don't fancy installing furl for this, you can do the same with curl (a powerful and flexible utility for performing requests) using the -I flag:
e.g.
curl -I http://davidwalsh.name
(you probably have curl installed already)
To see the headers and the full response, use the verbose flag:
curl -v http://davidwalsh.name
@adamnfish: Thanks for sharing that. On a side note, “adamnfish” sounds like a wacky morning FM radio show.
Not sure where the sources are, but the Debian package is at http://bertorello.ns0.it/debian/furl/
As already mentioned,
curl -I HOSTNAME
has the same functionality, but without installing something extra.
curl -I is good. This is another suggestion…
lwp-request -ed "http://lindesk.com/"
(-e prints the response headers, -d suppresses the body)
Another trick is:
lynx -head http://davidwalsh.name
lynx is a text-mode browser for Linux
Dang! I should have read this sooner. I was itching to jump all over the “curl -I” suggestion. Everyone got here first!
alias furl='curl -i -X HEAD'
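With that alias in place, the original one-liner works as-is (a sketch; worth noting that forcing the method with -X HEAD can leave curl waiting for a response body on some servers, so the built-in curl -I / --head is the safer spelling):

furl https://davidwalsh.name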