By Alexandra Hoey
Imagine a table that records conversations in meetings, or a mirror that collects data while you get ready in the morning. These are just some of the many projects led by Alexis Lloyd, the creative director of The New York Times Research & Development Lab. Lloyd investigates and develops emerging technologies in media and journalism by exploring interactive interfaces, immersive experiences, and data visualizations.
CJR’s Alexandra Hoey spoke with Lloyd about some exciting projects in the R&D Lab, as well as her vision of the future of reporter and user experiences.
What exactly is the R&D Lab? We’re looking at emerging changes that are happening in the technology landscape, and also what’s happening in terms of reader behavior and changes in response to new technologies or new kinds of ecosystems. We scan the landscape to look at these signals and then we design and build prototypes that explore the impact of what those changes might be for news and media. About three to five years out is generally the time frame that we’re looking at.
What external trends have you noticed in your research? I think we, societally, have the tendency to think that whatever the emerging thing is, is going to be something that replaces whatever came before it. And yet we see that’s not really what happens. What happens is that each new thing adds to this increasingly complex system of media platforms and ways of accessing information. So, the Web didn’t replace print, mobile didn’t replace the Web, and wearables aren’t going to replace mobile. All these things are going to have to exist in this increasingly complex ecosystem of ways of accessing news and information.
What are some specific projects the R&D Lab is working on? We’ve been thinking a lot about the explosion of sensors over the past several years and the ways that we’ve started to expect as consumers, as people, that we can track all different types of data that we didn’t really have access to before. That’s everything from personal things, like how many steps I’ve walked today, to things that are for more journalistic purposes like data streams that we couldn’t get at before. So part of what we’ve been thinking about is what the sensor network of the near future will look like. How does that change and evolve?
We’ve built a few projects around this. One of them is building semantic listening for Web browsing to capture signals about the topics that I’m reading about or that a group is reading about. We’ve built a listening table that is about how you capture those important moments from a conversation. And we’ve built a semantic listening project for writing itself, which I think is particularly at the heart of what we do journalistically—thinking about how you capture important semantic information about what you are writing about, what a reporter is writing.
Do you think newsrooms are adapting to these technological changes? I do. The technology and the landscape are changing rapidly enough that for all organizations the continual challenge is to keep our processes and our ways of thinking in step with what’s happening in the world. I think that one of the biggest changes that we already see—and that I think will continue to be a big challenge for news organizations in the next few years—is that we’ve adapted really well to digital publishing and the Web and to mobile and to thinking about how we publish articles on each platform. But I think we’re still, in many ways, thinking about publishing articles, and the future of news is really beyond the structure of the article.
How do you think readers will be acquiring their news in the future? I think we’ll continue to see change along the lines of what we’ve already seen. As we have different kinds of devices, we’ll have a greater range of forms for our content, for our news. And we’ll have a greater diversity of ways we interact with it.