Did you know that there are 12 factors to consider when acquiring data from the web? If not, fret not! Download our free guide on web data acquisition to get started!
Imagine it’s 1995. Your parents have just got back from work; your dad drops onto the sofa and unfolds a newspaper. Historically, our news came from only a handful of sources: the TV, newspapers, industry-specific magazines, word of mouth, and so on.
Now let’s fast-forward to 2021. Your phone is constantly pinging you with the news. Social media updates, breaking news reports, friends updating you. You’re inundated with content. If you have a question, you can access the answer within seconds from a choice of hundreds of sites. Your daily news cycle could span everything from current events and gadgets to sewing and landscape gardening. Whatever you’re interested in, there is a news source for it available at the touch of a button. The key to this though? The majority of it is free.
As a result, mainstream publishers such as The Guardian and The Wall Street Journal are struggling. Many are introducing paywalls and donation requests to try to make up for the declining print industry and the subsequent loss of ad revenue.
In their place is a new form of media consumption, something that is making accessibility and instant browsing even simpler and more enjoyable for the user – content aggregation. Combine this with sophisticated data extraction and you can publish content that’s super-valuable to your users.
Increasingly, review sites and curated content sites are aggregating data from around the web and presenting it in an easy-to-use format for the user. Think about sites like Flipboard, Google News and even Reddit. These all present forms of aggregated content.
The simple idea involves pulling content from other sources and collecting it all in one place. This allows easy browsing for the user and the ability to jump between topics and industries. Users don’t even have to click on an article. They can simply view the headline or summary and move on to the next.
Data extraction is a way of pulling back the flesh of a website and seeing how the parts work – seeing what’s going on under the bonnet. This provides lots of different insights that can be used in a variety of industries and to inform a range of objectives, including content aggregation.
The way that data extraction can help content aggregation is pretty simple – it tells you which web pages are getting the most interactions, directing you to the ones that are most beneficial to source and then collect for redistribution.
Extracting data may seem intimidating, but it is incredibly straightforward – with the right service provider, like PromptCloud. The value of this for content aggregation is clear: you can quickly find the right content for your users, driving up engagement with every piece you publish.
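To make the idea concrete, here is a minimal sketch of the extraction step using only Python’s standard library. The sample markup, the `h2 class="headline"` convention, and the page contents are all illustrative assumptions, not any particular site’s real structure; production extraction pipelines handle far messier HTML.

```python
# A toy headline extractor: parse raw HTML and collect article headlines.
# The tag/class convention below is a hypothetical example, not a real site's.
from html.parser import HTMLParser

class HeadlineExtractor(HTMLParser):
    """Collects the text of every <h2 class="headline"> element."""
    def __init__(self):
        super().__init__()
        self._in_headline = False
        self.headlines = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs supplied by HTMLParser
        if tag == "h2" and ("class", "headline") in attrs:
            self._in_headline = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self._in_headline = False

    def handle_data(self, data):
        if self._in_headline and data.strip():
            self.headlines.append(data.strip())

# Hypothetical page markup standing in for a fetched article listing.
sample_page = """
<html><body>
  <h2 class="headline">Aggregators on the rise</h2>
  <p>Body text</p>
  <h2 class="headline">Publishers adapt their models</h2>
</body></html>
"""

parser = HeadlineExtractor()
parser.feed(sample_page)
print(parser.headlines)
```

Once headlines (or interaction counts, timestamps, and other fields) are extracted like this, ranking pages by engagement becomes a simple sort over the collected records.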
Automation is the key difference. Content curators painstakingly collate the best content on a given topic and build articles around it. Think articles like ‘2019 music predictions from 10 leading producers’. The author of that article hasn’t spoken to each producer to build it. They’ve curated content from various outlets – social media accounts, blogs, or news sites – and crafted an article around them.
Aggregators use data to build their content. These data aggregators mine keywords from the sites in their databases and build dashboards for users to access content from a variety of sources. This takes the onus of writing content away from the aggregator and makes for a better user experience.
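The keyword-mining step described above can be sketched in a few lines. This is a toy illustration under stated assumptions: the article texts and stop-word list are made up, and real aggregators use far more sophisticated NLP than raw word counts.

```python
# A toy sketch of keyword mining: count recurring terms across article
# texts so an aggregator could group them under shared topics.
import re
from collections import Counter

# Minimal illustrative stop-word list (real systems use much larger ones).
STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in", "for", "is"}

def mine_keywords(articles, top_n=3):
    """Return the top_n most frequent non-stop-word terms across articles."""
    counts = Counter()
    for text in articles:
        words = re.findall(r"[a-z']+", text.lower())
        counts.update(w for w in words if w not in STOP_WORDS)
    return [word for word, _ in counts.most_common(top_n)]

# Hypothetical snippets standing in for content from three sources.
sources = [
    "Streaming changed the music industry for good",
    "Music critics review the year in streaming",
    "Industry report: streaming revenue keeps growing",
]
print(mine_keywords(sources))
```

With term frequencies like these, the aggregator can tag each source article with its dominant keywords and surface them together on a topic dashboard, without writing any content itself.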
Content aggregators and data aggregators pose a unique challenge for publishers, particularly those who generate revenue from on-site advertising or subscriptions. If users are accessing news via sites like Flipboard or News360, music via Spotify, and reviews via Metacritic, then they’re likely not visiting or paying for the source content. So where does the money come from?
In the case of Flipboard, although it offers all its content for free, it actually benefits the end publisher because it sends traffic to sources. Flipboard has 100 million monthly users and media outlets have noticed recent spikes of traffic referred from the site. This relationship is mutually beneficial.
Flipboard enjoys the traffic from recency and long-tail searches, as well as repeat visits from those who prefer aggregated content, and the end publisher gets the ad revenue from those landing on its site. Metacritic operates a similar model, consolidating not only review sites but also individual critic pieces into its one-stop summaries.
It’s a tricky situation for publishers. There’s no doubt about it. Users love aggregators. As their popularity increases, we may see sustained traffic drops on end publisher sites in favor of article-specific spikes. But the aggregators need these end publishers to keep creating content in order to exist. Otherwise, what would they aggregate? The benefit to users is the collation of all these highly respected sources in one place.
Some aggregators eventually establish their own expertise and move towards positions of authority and influence in their fields, but not all of them: some continue to aggregate indefinitely, never getting involved in production. For the model to continue, there must be clear benefits for both publishers and aggregators. Publishers and the media industry should understand the benefit to the end-user of having content in one place, while aggregators need to appreciate the time and effort it takes to create the content they’re displaying.
It’s also important to keep publishers motivated. If social media channels, content aggregators, and syndicators are keeping hold of the majority of traffic and ad revenue, then the incentive for publishers to continue creating excellent content diminishes. Why would they? The only way a publisher will be motivated is by generating revenue from their work. This way they can pay contributors, editors, and developers to maintain their own site, while enjoying the benefits of traffic spikes from aggregators. It’s a two-way relationship that needs to be mutually beneficial to have a lasting impact on the internet.
Publishers are constantly evolving their revenue streams, optimizing ads and subscription models. Some, like The Washington Post, are even reporting year-on-year profits, something unthinkable a few years ago when many prophesied the decline of the mainstream digital media industry.
But there’s no doubt that the two sides need to work together. Relentless data aggregation with no thought for the publishers will eventually lead to lackluster content. However, it’s also impossible to ignore the value aggregators provide to the end-user. All the content they need from their favorite sources is in one place, accessible instantly. In a world where instant gratification is the new norm, it’s looking increasingly likely that aggregators are the content publishers of the future.