• Wonkblog’s new feed

    I’m not happy about Wonkblog’s new, partial RSS feed. Ezra Klein knows better, but his overlords at the Washington Post obviously do not. My email to them is below. If you care about the blogosphere or Wonkblog, I encourage you to email too.

    To: ideas@washpost.com

    It’s a bad idea to shift Wonkblog or any blog to a partial RSS feed. You can ask Ezra himself for the argument. Or you’ll find it all right here: https://theincidentaleconomist.com/wordpress/full-feed-please/

    If you care about traffic and influence, you won’t ignore this.

    That’s the end of the email. What you shouldn’t tell the Washington Post (though surely they know) is that you can defeat a partial RSS feed any number of ways, including with the Chrome extension Super Full Feeds for Google Reader.
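    Roughly speaking, tools like that fetch the page each feed item links to and pull the article body out of its HTML. Here’s a minimal Python sketch of the same idea; the feed URL is a placeholder and the content selectors are guesses, since every site’s markup differs:

    ```python
    # Sketch of a full-text workaround: pull the partial feed, then fetch
    # each linked page and extract the article body from its HTML.
    # The feed URL is a placeholder and the selectors are guesses.
    # Requires: pip install feedparser requests beautifulsoup4
    import feedparser
    import requests
    from bs4 import BeautifulSoup

    feed = feedparser.parse("https://example.com/partial-feed.xml")
    for entry in feed.entries:
        page = requests.get(entry.link, timeout=10)
        soup = BeautifulSoup(page.text, "html.parser")
        # Guess at the main content container; per-site tailoring starts here.
        body = soup.find("article") or soup.find("div", class_="entry-content")
        if body is not None:
            print(entry.title)
            print(body.get_text(" ", strip=True))
    ```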

    Before you tell me this is a smart marketing move to increase page hits, please click through to my post and from there to Felix Salmon and the others.

    UPDATE: WaPo writes back,

    We made this change so that Ezra’s feed would be like all of the rest of the feeds on our site and our RSS feeds could be as consistent and streamlined as possible. We’re definitely monitoring all the feedback we’re getting on this, so I appreciate you sending this, along with the link below.

    Consistency is not a good reason to shoot yourself in the foot. Are there hidden economies of scale in this streamlining? Could they be realized by making all WaPo feeds full?

    AF
    • This might be one explanation:

      http://www.creditwritedowns.com/2011/04/more-free-money-from-google-for-site-scrapers.html

      “Here’s how it works. Someone who doesn’t have original content picks a few newsworthy topics to follow. Typically we are talking about politics and business. The scraper then sets up a website oriented around those topics and makes sure to check all the boxes that define a normal high-content, multi-author site, like having Twitter and Facebook accounts linked to the site, having an ‘about this site’ and an ‘about our team’ page, and a terms of service page.

      The scraper then finds out which high-quality and high-ranking sites have full RSS feeds so that the scraper can import the content from the RSS feed and duplicate it on the scraper site. After the scraper has stolen enough content and optimized the site with keywords that the search engines deem most relevant to the niche, the scraper then submits the site for inclusion on Google News. When Google News includes the site, validating it as a reputable news site to Google users, the scraper can then make money from advertising as it has a guaranteed stream of visitors to the site via Google search and Google News.”
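      To make the contrast concrete, here’s a minimal Python sketch (using the feedparser library) of why full feeds are such easy pickings for this: the entire post body arrives in the feed itself, no page parsing required. The feed URL and the republish helper are hypothetical:

      ```python
      import feedparser

      feed = feedparser.parse("https://example.com/full-feed.xml")  # hypothetical URL
      for entry in feed.entries:
          # A full feed ships the complete post body in the feed itself,
          # so the scraper needs no page fetching or HTML parsing at all.
          if "content" in entry:
              html = entry.content[0].value
          else:
              html = entry.get("summary", "")
          republish(entry.title, entry.link, html)  # hypothetical storage/republish step
      ```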

      • But partial feeds are defeatable. There are lots of free resources out there to do it. I don’t buy that they add much protection.

        • What I’ve observed from using the Super Full Feeds for Google Reader extension mentioned in the original post is that some sites don’t provide anything useful in the “readable” extract.

          As a result, you’d be back to actually visiting the site. For an automated tool, this presents the larger hurdle of parsing the page’s full HTML to extract the article.

          This isn’t an impossible task, but it requires more work and probably a lot of tailoring to the specific site you’re trying to scrape (see the sketch below). At the very least, you’re no longer the low-hanging fruit, and scrapers may shift to other targets they can lift with their current tools and no extra work.
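          A rough Python sketch of what that tailoring looks like in practice; every domain and selector below is a hypothetical example, and each new target site needs its own hand-written rule:

          ```python
          import requests
          from bs4 import BeautifulSoup
          from urllib.parse import urlparse

          # Hand-maintained extraction rules; all domains and selectors here
          # are hypothetical examples. Each new target needs its own entry.
          SITE_SELECTORS = {
              "example-paper.com": ("div", {"id": "article-body"}),
              "example-blog.net": ("article", {}),
          }

          def extract_article(url: str) -> str | None:
              """Return the article text, or None if we have no rule for this site."""
              rule = SITE_SELECTORS.get(urlparse(url).netloc)
              if rule is None:
                  return None  # no tailored rule: this target isn’t low-hanging fruit
              tag, attrs = rule
              soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
              node = soup.find(tag, attrs=attrs)
              return node.get_text(" ", strip=True) if node else None
          ```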

          • Fair enough. It’s an empirical question whether WaPo would lose a lot to scrapers. It’s probably a hard study to do, actually. So this is largely theoretical.

            I’ll bet you this, though: the more partial feeds we see, the better those workarounds will get and the more they’ll proliferate. Put it this way: if all feeds were partial, would scraping stop? No. It’s an arms race.

            • I agree completely. I only offered it as one factor that may (or may not) have been a consideration, based on having seen some bloggers express frustration at seeing their content used, unattributed, by scrapers.

    • This change to a partial feed, combined with the absolutely atrocious performance of the WaPo website, means I won’t be reading many posts on Wonkblog anymore, which is very unfortunate. Trying to load that site in IE (which I have to use) on my work PC freezes the browser for 10 seconds or so, and on my phone it often takes a good 30 seconds. Terrible decision by the paper. The consistency answer is bogus: being inconsistent is better than being consistently bad.

    • “Consistency is not a good reason to shoot yourself in the foot.”

      What if you shoot yourself in both feet?