Data journalism, computer-assisted reporting and computational journalism: what’s the difference?

Is data journalism more networked and open than computer-assisted reporting (CAR) and computational journalism? The differences are examined in a journal article in Digital Journalism by Mark Coddington of the School of Journalism at the University of Texas at Austin. He has developed four dimensions in his typology, based on his analysis of about 90 texts (academic and professional) about these forms of ‘quantitative journalism’. The four dimensions, each of which he presents as a range between two opposing poles, are:

  1. professional expertise vs networked information — how far is it the limited domain of ‘professionals’ (linked also to the norms and practices of traditional ‘professional’ journalism) vs a more open, networked approach involving ‘non-professionals’;
  2. transparency vs opacity — how far does it disclose the processes, practice and/or product;
  3. targeted sampling vs big data — does it gather and analyse a sample (probably then relying on inference or causality to draw conclusions) or a more comprehensive data set or collection (probably emphasising exploratory analysis and correlation); and
  4. seeing the public as active vs passive — the first linked to a more participative, interactive vision of the public, and the second to a more traditional, passive conception.

Mark Coddington’s diagram provides a useful summary of this, and how he situates CAR, data journalism and computational journalism along these four dimensions:

Typology of data-driven journalism

How Mark Coddington characterises data journalism, CAR and computational journalism. From his paper: http://www.tandfonline.com/doi/abs/10.1080/21670811.2014.976400

In some ways, the main dividing line is between CAR and the other two. This is perhaps not surprising, given that CAR has been around much longer and so — almost inherently — is tied more closely to ‘traditional’ ideas of journalism. Data journalism and computational journalism, on this analysis, have more in common, but perhaps differ most clearly in two ways. Data journalism is characterised as more ‘open’ (transparent) than computational journalism, and as less ‘professional’ in its orientation — ie more networked and accessible to those who are not ‘professional journalists’. (Data journalism as the new punk, anyone?)

Most data journalists (plus CA reporters and computational journalists etc) are unlikely to be bothered by how their work is classified, as Mark Coddington notes — mentioning Adrian Holovaty’s “Is data journalism? — Who cares?” post. But it does matter to researchers. Why? Because, he explains, “these definitional questions are fundamental to analyzing these practices as sites of professional and cultural meaning, without which it is difficult for a coherent body of scholarship to be built”.

He adds that this is an initial attempt at classifying CAR, data journalism and computational journalism, in what is still an emerging and developing field. Also, his study relies heavily on research in the USA and Scandinavia. While much of his typology rings true to what I know of data journalism in the UK (and CAR and computational journalism, to a lesser extent), I wonder how far it might differ here, and indeed elsewhere.

My interest (apart from running an MA programme that includes data journalism) stems partly from having written about the development of data journalism in the UK in a chapter in Data Journalism: mapping the future. That is when I came to realise how far the emergence of data journalism in the UK drew on US journalism’s experience of CAR, trainers from the States etc — helped along by the arrival here of the Freedom of Information Act and the open data movement. I’ve also touched on this topic in discussion with a US journalist who said he saw no difference between CAR and data journalism.

Online tools aid coverage of Heathrow crash

Google, YouTube, Wikipedia and a flight simulator PC game helped Rory Cellan-Jones cover this story for the BBC. Responding to comments on this blog post, he emphasises that:

I’m talking about extra help from technology, but that does not mean the old-fashioned journalistic skills go out of the window

Cellan-Jones then goes on to argue that:

We tend to romanticise the good old days when a journalist had nothing but a notebook, some decent contacts, and a plausible manner, but I think the competition is more intense now. My point is that the instant access to information and pictures makes every story move far more quickly. If you refuse to use the new tools – as well as the old ones – then you will be left behind.

Well said, whether directed to established reporters or student journalists.

Via Martin Stabe on Fleet Street 2.0.

The Observer’s tangle with science story — now removed from website

The Observer seems to have pulled a front-page story from its website, after problems emerged with the article, which was published on 8 July 2007.
Observer front page 8 July 2007

The case raises some interesting questions not only about science reporting — but also about corrections and clarifications, and the importance of some journalistic essentials.

Ben Goldacre, who writes the Bad Science column in The Guardian, has analysed the article in detail in his column, on his blog and in the British Medical Journal.

He’s expressed his concerns forcefully (follow the links above to read his detailed analysis):

I am pretty jaded and sceptical, but this front page story has completely stunned and astonished me. The misrepresentations and errors went way beyond simply misunderstanding the science, and after digging right to the bottom of it all, knowing what I know now, I have never resorted to hyperbole before, but I can honestly say: this episode has changed the way I read newspapers.

The difficulties lie not only with the original story, Ben suggests — but also with the clarifications from The Observer’s Readers’ Editor, Stephen Pritchard, which appeared in the two following issues: on 15 July and 22 July 2007.

Ben Goldacre’s assessment of the situation:

Two failed “clarifications” later that clarify nothing, and I am even less impressed. Retract. Delete. Apologise.

One of the journalistic failings seems to have been that no one from The Observer contacted Dr Fiona Scott, apparently not even before publishing the first clarification. She then posted some comments online, which The Observer published as part of its second clarification — again, it appears, without having spoken to her or exchanged emails. Yet it took Ben Goldacre only a quick Google search and a couple of hours to get an email reply, as he notes in this post.

The original Observer article used to be online here. The Google cache of the original story is here — or at least it was when I wrote this post. But if the article was pulled for legal reasons, perhaps it won’t be on Google’s cache for much longer.

Will The Observer run a third clarification next Sunday?

Meanwhile, credit to its sister paper, The Guardian, at least, for publishing Ben Goldacre’s Bad Science column on the article.

Using a course blog to encourage critical reflection by students — HEA annual conference

More on this theme — notes from my session at the Higher Education Academy annual conference in Harrogate are available here (PDF file).

If you’re reading this post without having seen anything previously about the project, you might find it useful to read the following outline (the abstract for my conference session). Then the notes from my presentation will probably make more sense. Either way, please add a comment to let me know what you make of the project — click on ‘add a comment’ above (under the title for this post) or, if you’re looking at this post on its own, use the comment box beneath it.

To encourage students on a postgraduate journalism programme to engage with their own learning, they were asked to contribute to a blog on three main themes: their own experiences as journalists; published articles/broadcasts etc, particularly to highlight what they were learning and putting into practice; and contemporary developments in journalism.

The guidelines and assessment criteria explicitly encouraged students to reflect critically in their posts to the blog; to ‘add value’; and to make connections, particularly with their own experience, assignments and ideas.

This session will discuss the main findings of an evaluation of the blog, using an analysis of students’ contributions (more than 400) drawing on the literature of reflective journals and e-learning, and the results of a questionnaire to gauge students’ experience of using the blog as learners. Initial findings suggest the initiative has highlighted valuable potential for reflective learning, with some recommendations for improving its future application.

Readers who have read my previous post (and notes) on this project, based on my WJEC session, will note similarities! It’s mainly a shift of emphasis for the different participants: journalism educators at WJEC; lecturers from across disciplines, with a serious interest in the scholarship of teaching and learning at the HEA.

Making every comment count: effective formative feedback to journalism students

This is the theme of my research paper at the World Journalism Education Congress — abstract below, and available here as a PDF.

Making every comment count: effective formative feedback to journalism students — Abstract

Effective formative feedback plays a crucial role in student learning, but it has received relatively little attention. Guidelines on policy or quality have rarely addressed formative feedback in depth, yet quality reviews have consistently highlighted concerns about it, as have student surveys. In addition, trends in assessment imply an increasing emphasis on lecturers providing formative feedback to students, as do other developments in policy (eg professional teaching standards) and practical concerns (eg staff workloads, student diversity).

A number of factors make the topic of feedback comments particularly pertinent to journalism educators.

First, journalism students often produce a high volume of work (as journalistic articles) compared to other disciplines – an approach that serves to replicate professional practice in the newsroom as well as providing the opportunity for intensive experiential learning. This creates a correspondingly heavy load of reading and commenting for lecturers.

Second, this work often requires detailed scrutiny, because accuracy and succinct writing are rightly emphasised as essential elements in journalism. So assessment and feedback in journalism arguably demand more time and more detailed comments than in other disciplines.

Third, many journalism educators (almost all in higher education in the UK) are journalists by profession and may not have much background in formal education. Despite the growing professionalisation of university teaching, some lecturers may thus lack in-depth prior experience and/or training in the provision of feedback to students.

Fourth, the application of a scholarly approach to journalism education, as a form of scholarship of teaching and learning in the discipline, appears to have been slow to develop.

This paper presents the findings of a study of the content and quality of formative feedback, which involved the development of indicators that were then used to categorise and analyse a sample of written feedback comments to postgraduate journalism students.

The research identified areas of good practice, as well as suggesting some gaps, which can be grouped under four themes:

  • How far does the feedback make clear to students why/how they are succeeding or failing?
  • How far does it link students’ work with their wider progress and the module/course curriculum more generally?
  • Does the feedback encourage dialogue?
  • Does the feedback engage students with the content and with their own learning?

The research also raises questions about the availability of suitable tools to review feedback, for both individuals and institutions. More systematic reviews and support for good practice in feedback might help: encouraging lecturers to keep copies of feedback on which to reflect critically, for example, perhaps using indicators such as the ones from this project. They could discuss with colleagues what is often an individual process rarely seen by others. Some established institutional processes could take more systematic account of feedback, too, including programme evaluations, external examiners’ reports, and student evaluations.

Using a blog to encourage critical reflection

This is the theme of my presentation to a Best Practices in Teaching workshop at the World Journalism Education Congress (WJEC).

The project has involved using a blog not for students to publish their journalistic work but for them to reflect on their practical journalism — primarily as a tool for enhancing their learning.

Notes from my presentation are available as a PDF here — intended particularly for those at the WJEC teaching workshop. I’d be particularly interested in having your comments, whether you came to the WJEC session or not — please add them below.

Formative feedback for student learning — informed by philosophy?

Bizarre, perhaps, that it was research on effective feedback to students that led me to the work of Richard Rorty, the philosopher who died last Friday. He introduced the term ‘final vocabulary’:

These are the words in which we formulate praise of our friends and contempt for our enemies, our long-term projects, our deepest self-doubts and our highest hopes

(from Contingency, Irony, and Solidarity, CUP, 1989).

For learning and teaching, this matters because using final vocabulary in feedback tends to close down discussion or reflection on the part of the student (or so the theory goes).

In any case, telling a student that their work is ‘good’ or ‘poor’ does little, on its own, to help them learn — explaining how and why, or pointing towards this, offers much more. I suspect that final vocabulary is prevalent in a great deal of feedback to students (including my own) — at some level it’s ‘natural’. But it’s worth keeping an eye on, if one takes Rorty, David Boud and others seriously.

The Telegraph obituary puts Rorty’s influence down to clarity — an essential in journalism, of course:

One of the reasons for Rorty’s popularity, and the esteem in which he was held, was his lucidity as a writer; even in technical works for an academic audience, he was at pains to spell out his analyses clearly, and not to duck their consequences. This alone made him stand out from almost all other writers and philosophers who adopted postmodernism.

del.icio.us meets education: social bookmarking for educators

Edtags caught my eye: a sector-specific del.icio.us. And education has plenty of web-using professionals to make it worth trying. It says it has more than 17,000 bookmarks so far, and unsurprisingly much of the content and many of the users appear to be based in North America. The developers have made it compatible with del.icio.us, which seems sensible.

Still early days, which perhaps is why nothing came up when I searched for “peer assessment”. However, 63 hits for “assessment” — and even 13 for “journalism”.

This is from the Edtags blurb about the project, which seems to have evolved out of an initiative at Harvard (the splendidly named Edtags Sociosemantic Networking Project):

Edtags.org is a website for educators (e.g., teachers, education graduate students, professors, librarians, etc.) to connect with people sharing similar interests, discover relevant materials that may have “eluded” the traditional card catalogue search, and store and categorize your favorite bookmarks.

One to come back to when I have more time to explore more thoroughly. Meanwhile, it’s, erm, bookmarked.