What, exactly, is audience editing?
Kim Fletcher, editor of the British Journalism Review, asked me to write about audience editing and the role of data in newsrooms. This piece was first published in December 2019.
“I only wrote this piece for three people — the editor of The Times, the governor of the Bank of England, and the Chancellor of the Exchequer.”
How many editors know their audience like The Times’ Peter Jay did?
Countless publications have seen their digital readership rocket in recent years. Enormous global stories like the Trump presidency and Brexit keep people coming back for more, even amid concerns of fatigue and apathy. Google and Facebook, disastrous as they’ve been for newspapers’ advertising revenues, bring journalism to a staggering number of people every day. But who are those people and what is their value?
It is harder and harder for editors to know their audience. We know who we’re writing for (be it three people or 300 million). We know which pitches will be met with “that’s so us” in the morning news meeting. And we know which topics resonate with loyal, long-time or local readers.
But because people can take so many routes to journalism, and publications can reach more people than ever, the distinction between regular readers and those passing through requires more attention, scepticism and nuance in data analysis. We’re past the point where we can think about an audience, singular.
So who makes sense of all this? The audience editor. Audience editors guide their colleagues in using data to inform decisions about producing and distributing digital journalism. They write headlines suited to search engines, run social media accounts, curate email newsletters, send push notifications, establish best practice on publishing (like when to launch a story), collaborate with readers, manage relationships with platforms such as Facebook and Google, and generally advocate for readers.
If that sounds a bit wide-ranging and nebulous, it is. Readers are (hopefully) at the heart of what any publication does, so audience editors often work across multiple teams. They fit pleasingly neatly into the “bridge roles” that the audience specialist Federica Cherubini described in Nieman Lab’s series on predictions for journalism in 2018: “[Bridge roles] are hybrid roles that are breaking down barriers by working at the intersection of various disciplines. They speak the language of journalism, engineering, and product management.” In short, by having an ear to the ground and fingers in many pies, these editors help develop audiences into something understood.
Arguably, newsrooms need this kind of clarity now more than ever. The ways in which people find stories have become more diverse and fragmented. The homepage isn’t dead, but it’s not the destination or traffic driver it once was. Facebook and Google are rebuilding the web to their specifications with products such as Instant Articles and Accelerated Mobile Pages, and although these platforms offer enormous reach, their shadowy and unpredictable algorithms mean no editor can control who sees what.
Aggregators such as Flipboard, Upday and SmartNews — which sort thousands of stories from hundreds of publications — drive massive spikes in traffic, but there are questions around the quality and value of these audiences. Users of SmartNews, the algorithmically-powered Japanese news curation app, have reportedly grown rapidly in the US. The surge has been met with scepticism: speaking to Axios, one audience specialist described the app’s reach as a dose of “sugar rush traffic”. These aggregators reach new readers, but will those people ever come back? Did they actually read the piece? Did they even know which publication they were reading?
Similar questions arose about Flipboard one morning when, out of nowhere, it sent tens of thousands of people in the US to a Guardian story about the then Plaid Cymru leader Leanne Wood. These readers appeared to be as confused as we were — I was working at The Guardian at the time — about why the piece had been surfaced to them. We discussed possible reasons for a few minutes. In that short time, the readers were already long gone, having spent only seconds with the story. We put it down to an unsophisticated algorithm. Sometimes, on especially slow days, you welcome these spikes as they bring your traffic target into range, even though you know that in as little as 60 minutes the referral will likely plummet and you’ll have made few, if any, strategic gains.
These rushes of traffic — whatever their source — aren’t necessarily bad. Obviously, the point of publishing something is that people read it. But reach and traffic aren’t the same as actual reading — and astronomical audience figures can warp a publication’s content if the culture around data isn’t right.
Reach reigned for a long time, especially once publishers realised how much traffic Facebook and Google could drive back to their sites. The scale those platforms offered started to change the stories publishers produced and the way they presented them. Aggressive search engine optimisation (SEO) turned some headlines into a clumsy jumble of keywords, and efforts to get people to click away from social media feeds led to the rise of clickbait. There’s nothing wrong with including search terms in a headline. If you want people to find your piece, it is a smart thing to do. It’s also possible to have search-optimised headlines that are clever, entertaining and creative.
Journalism suffers when decisions are data-led
But things get dumb and ugly if you’re commissioning stories based on trends and terms that result in articles alien to your publication. Chris Moran, The Guardian’s editor for strategic projects, was the newspaper’s first audience editor in a way, initially working under the title of SEO editorial executive to increase digital readers. When he started in the role, he says he was “incredibly careful about walking into a meeting and going ‘Guys, guys! There’s this meme! Let’s do this meme!’ because it’s obviously idiotic”. Instead, “the message became all about ‘We are producing this journalism. Isn’t it good? Don’t we all believe in it? Wouldn’t we like to find an audience for it and connect it with as many people as possible?’ That makes it harder for people to go ‘Here comes the bloke who’s destroying journalism’”.
Journalism suffers when decisions in newsrooms are led by data rather than informed by it. It’s not just legacy organisations that have struggled with creating a positive, sensible culture around data. According to The Outline, Mic, the social justice site that launched in 2010, “mined away at Facebook gold” with painfully formulaic headlines based on previous successes, such as: “‘Science Proves TK’, ‘In One Perfect Tweet TK’, and ‘TK Celebrity Just Said TK Thing About TK Issue. Here’s why that’s important’.” It worked for a while, then Facebook traffic stopped flowing so Mic aimed for Google traffic using a similar technique. Layoffs followed. Then came the inevitable, formidable “pivot to video”. And ultimately, more layoffs.
Audience data belongs in newsrooms but works best when it’s paired with the judgment of experienced editors. The Guardian’s Moran recently led a project to reduce the number of stories that received a small audience. Articles were analysed according to their views: low, medium and high. If he’d based decisions on the data alone, stories in the low tier — a not inconsiderable amount of the site’s content — would have gone untold. He explains why it was so important to be judicious: “How on Earth can you expect [the audience of] someone who is writing about, say, knife crime to compare with somebody writing about film? It’s grim news, it’s really difficult, it’s not necessarily something people would always opt in to read.” Instead, when stories that editors believed in didn’t break through, Moran says: “We didn’t just delete them or stop them being produced. If we believed in them, the question then was ‘Can we do them differently to widen [the audience and engagement] a bit?’.” The availability of data in newsrooms is essential but using it without nuance risks pushing journalists in strange directions with coverage that dilutes what matters to a publication.
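The Guardian’s internal tooling isn’t public, but the kind of tiering Moran describes can be sketched in a few lines. This is a hypothetical illustration: the cut-off values and article data are invented, not the newspaper’s actual thresholds.

```python
# Hypothetical sketch of bucketing articles into view tiers.
# The cut-offs are illustrative only, not The Guardian's real ones.
def view_tier(page_views, low_cutoff=1_000, high_cutoff=50_000):
    """Classify an article's audience as low, medium or high."""
    if page_views < low_cutoff:
        return "low"
    if page_views < high_cutoff:
        return "medium"
    return "high"

# Invented example data: the point is that "low" flags a story for
# editorial discussion, not deletion.
articles = [
    {"headline": "Knife crime in numbers", "views": 800},
    {"headline": "Film review roundup", "views": 120_000},
]
for article in articles:
    article["tier"] = view_tier(article["views"])
```

The crucial editorial step happens after the binning: a “low” label prompts the question “can we do this differently?”, not an automatic cut.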
Fortunately, the pursuit of empty-calories traffic that serves up huge but unengaged audiences is increasingly treated with caution. Savvier publishers have moved on from chasing reach for reach’s sake. Now they talk less about blunt metrics such as “uniques” and “views” and ask instead how much time readers spent with a piece. The measures of success have become sharper and are now aimed at helping journalists understand which stories resonate with readers beyond the first click. After all, if one million people see your story but only a handful read beyond the first paragraph, is that really a success?
One of these publishers with a new measure is the Financial Times. Around a year ago, it introduced “quality reads”, a metric designed to contextualise reach and engagement and which now plays a key role in understanding audiences. The newspaper’s head of audience Renée Kaplan explains that they wanted to know “what readers were actually spending time with [and] what was actually delivering value. Time spent is revealing, but it also varies with the length of a story, so it’s only so relevant. So we created quality reads, [which] looks at the percentage of people who clicked through to a story and then actually read at least half the story. It’s a dynamic metric that calculates completion based on length and the average speed of reading”. This kind of deeper insight is enormously helpful in understanding audiences and is especially important if subscriptions — or any form of payment from readers — are part of your business model. If readers value something (ie read it deeply), there’s a good chance they believe it’s worth paying for.
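The FT’s exact formula isn’t public, but Kaplan’s description — completion calculated from a story’s length and an average reading speed, with a “read at least half” threshold — is enough for a rough sketch. The reading speed, function names and thresholds below are assumptions, not the FT’s implementation.

```python
# Hedged sketch of a "quality reads"-style metric. The FT's real
# formula isn't published; this follows the description in the text:
# a visit counts if the reader got through at least half the story,
# with expected read time derived from length and reading speed.
AVERAGE_WPM = 250  # assumed average reading speed, words per minute


def completion(word_count, seconds_on_page, wpm=AVERAGE_WPM):
    """Estimated share of the story read, capped at 100%."""
    expected_seconds = word_count / wpm * 60
    return min(seconds_on_page / expected_seconds, 1.0)


def is_quality_read(word_count, seconds_on_page, threshold=0.5):
    return completion(word_count, seconds_on_page) >= threshold


def quality_read_rate(word_count, visit_durations):
    """Share of visits to one story that qualify as quality reads."""
    reads = sum(is_quality_read(word_count, s) for s in visit_durations)
    return reads / len(visit_durations)
```

The appeal of a metric shaped like this is that it normalises for length: a 400-word brief and a 4,000-word feature are judged by the same standard of completion rather than raw seconds.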
Measures of success in newsrooms have come a long way in a short time. At some publications, audience editors have to make the case for the most basic metrics each day. At others, they’re grappling with how — or even whether — to share new data that reveals which stories and topics led readers to pay. Matt Skibinski of the Lenfest Institute for Journalism argues on Nieman Lab for democratising reader revenue data in the same way traffic data has been made widely available: “Everyone in the newsroom should know daily — or, at worst, weekly — how many digital subscriptions were sold and from which sections of the site.” He has a point. Digital payments from readers have proved a fruitful, critical lifeline for a number of publications as other revenue streams dwindle, and journalists should be aware of what keeps the lights on. But transactional data like this is deeply complex. Just because a reader clicks from an article to the subscriptions page doesn’t mean that piece was the one which persuaded them to part with cash. It may have been the one before that — or another that they read on a different device which isn’t captured in their journey to the checkout. There are ways to understand these journeys better, such as the “subscription influence reporting” which, as Skibinski describes, maps the stories in a reader’s path that led to them making a payment. But how do you surface that insight to a general newsroom audience without overwhelming them?
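Skibinski’s “subscription influence reporting” maps the stories in a subscriber’s path to a payment. The real reporting is more sophisticated than this, but a minimal, hypothetical version — splitting credit equally across every story a new subscriber read before checkout — shows the shape of the idea. All names and data here are invented for illustration.

```python
# Hypothetical sketch of subscription-influence attribution: each
# story in a new subscriber's reading path shares the credit for the
# payment equally. Real systems weight journeys far more carefully.
from collections import defaultdict


def influence_report(subscriber_paths):
    """subscriber_paths: list of article-ID lists, one per new subscriber."""
    credit = defaultdict(float)
    for path in subscriber_paths:
        if not path:
            continue
        share = 1.0 / len(path)  # equal split across the journey
        for article in path:
            credit[article] += share
    return dict(credit)


# Invented example: two subscribers, one of whom read three pieces.
paths = [
    ["brexit-explainer", "film-review", "subs-landing-piece"],
    ["brexit-explainer"],
]
report = influence_report(paths)
```

Even this crude version makes the article’s caveat concrete: the last story clicked before the subscriptions page gets no more credit than the ones earlier in the journey — and anything read on another, untracked device gets none at all.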
Most importantly, how are writers and reporters meant to respond to this data? Do they write more about niche topics that bring in revenue but divert resources from stories that achieve reach? You may end up knowing a small number of your readers — probably more than Peter Jay’s three — but at the cost of discovering larger, newer audiences.