Sunday, June 29, 2014
The scourge of link rot
A web researcher or typical web surfer can’t rely on online information being there when they want it. Unlike a physical book, data stored online is ephemeral.
As a result, bookmarked websites and saved weblinks end up pointing to “page not found” error messages. It’s a problem dubbed “link rot.”
I recently ran an online tool from BrokenLinkCheck.com on Tech-media-tainment. It processed 1,178 web pages and found 570 broken links. It took me hours to go in and clear out the dead links.
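For the curious, a broken-link check like that one is straightforward to sketch yourself. The snippet below is a minimal illustration, not the actual BrokenLinkCheck.com tool: it pulls every href out of a page’s HTML and then probes each URL with a HEAD request, flagging any that error out. All function names and the sample HTML are my own invention.

```python
# Minimal broken-link checker sketch (illustrative only, not the
# BrokenLinkCheck.com service). Uses only the Python standard library.
from html.parser import HTMLParser
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError


class LinkExtractor(HTMLParser):
    """Collects the href attribute of every <a> tag it sees."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def extract_links(html):
    """Return a list of href values found in an HTML string."""
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links


def is_dead(url, timeout=10):
    """Return True if the URL looks dead (HTTP error or no response)."""
    try:
        req = Request(url, method="HEAD")  # HEAD avoids downloading the body
        urlopen(req, timeout=timeout)
        return False
    except (HTTPError, URLError, OSError):
        return True


if __name__ == "__main__":
    # Hypothetical page content; in practice you would fetch each of
    # your own pages and feed its HTML through extract_links().
    page = '<p>See <a href="https://example.com/">this article</a>.</p>'
    print(extract_links(page))
```

A real checker layers a crawler on top of this, walking every page of the site and rate-limiting its requests, but the core idea is just: collect links, test each one, report the failures.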
I like providing weblinks for readers of my blog. Weblinks point readers to useful resources and give credit to the websites that originate information. But the practice is almost not worth it if it eventually lards up your own website with dead links.
The worst offenders for broken links are legacy news organizations like print newspapers and magazines. They’re always changing URLs for stories, moving them to new databases or just deleting them when they get old. Websites for the New York Times, USA Today and Chicago Tribune were frequent offenders in my review.
The best websites for keeping links permanent are digital news organizations like the Huffington Post, Business Insider and Wikipedia.
In other cases, I had link rot on Tech-media-tainment because businesses and organizations I had linked to had ceased operation.
In February, media journalist Jim Romenesko reported that U.S. News deleted its archived web content published before 2007.
In March, Re/code reported that NBCUniversal was shutting down two websites, DailyCandy and Television Without Pity, and taking the content offline.
Last fall, the New York Times reported on a Harvard University study that found that 49% of hyperlinks in U.S. Supreme Court decisions no longer work.
A solution might be on the way via a web service called Perma.cc, Gigaom reported. But it’s just for academic and legal research.
Knowing my luck, some of these weblinks to news articles won’t work in a year. Fingers crossed though.