Field Test: Can a ZikaBot Best an Article on Zika?

I’ve always been a fan of science journalism.  Popular science tomes like Evolution for Everyone (David Sloan Wilson), The Immortal Life of Henrietta Lacks (Rebecca Skloot), and The Dynamic Dance (Barbara J. King) have expanded my sphere of knowledge and influenced the way I view the world around me.  The best science journalism seeks to both inspire and educate, turning difficult concepts into easily digestible ones that even laypeople can take something away from.

When we were tasked with coming up with an emerging media field test, I knew that I wanted to explore the use of technology in science journalism.  #SciComm, as it’s popularly referred to on platforms like Twitter and Instagram, is almost inseparable from the words that comprise it, as writers seek to elucidate difficult topics and make them palatable for a broader audience.  In recent years, however, brands like Discovery have branched out first into traditional video, then 360 video, and recently even virtual reality (with the advent of Discovery VR).  Scicomm and science journalism are clearly starting to embrace emerging technologies.  Could chatbots be one of those technologies?


Emerging Media + the Future

When I think about my future career and how it might mesh with emerging technologies, I am often drawn back to the past.  Specifically, to my academic background in forensic science.  I see emerging technologies breaking into not just journalism but all aspects of society – including science, an area that is very close to my heart.  When crime scenes can be reconstructed via hologram or photogrammetry and archaeological ruins can be documented with drones and 360 video, the benefits will flow not only to the scientific fields doing that work, but also to the science journalists who cover these stories and present them to the public.

Emerging technologies are new ways of looking at old things.  A lot of them (drones, 360 video, photogrammetry, virtual reality) place you right in the center of the action, transporting you to a place you might not otherwise be able to go because of time, money, or the fact that the place simply doesn’t exist anymore.  And, after all, isn’t that what good journalism is supposed to do on some level?  Transport you right into the middle of a story, making you feel as if you were there and fully understood the topic?

Let’s dig a little deeper into the field I am currently working in (content marketing).  I see emerging technologies having a place here, too, particularly things like 360 videos, which can enhance the experiential marketing that I write about.  On our corporate blog, we recently covered virtual reality and events, talking about everything from how Microsoft is already exploring VR as part of video conferencing, to how it might be used to enhance conferences, keynotes, demonstrations, trainings, and more.

The bottom line for me with all of this technology is that it’s a step forward in advancing journalism.  If the ultimate purpose of journalism is to situate/transport you to another locale; make you understand and care deeply for the plight of others; and immerse you in the material you’re learning about, then emerging platforms like 360 video, VR, photogrammetry and even drones can only help.  Obviously, it will take mass adoption and laws and regulations will need to be worked out first, but the future looks bright when it comes to combining traditional written content with creative content shaped by technology.


Drone Journalism in the Nation’s Capital

This week’s assignment was to identify a hometown story that could be enhanced with aerial footage or other types of data captured from a drone.  I live just outside of Washington, DC, where there are any number of stories that could be enhanced in this way, from parades and concerts to aerial coverage of the upcoming inauguration or subsequent protests.

However, none of these will come to pass, since the District and its surrounding 30 miles are under a strict No Drone Zone rule.  From the FAA:

The National Capital Region is governed by a Special Flight Rules Area (SFRA) within a 30-mile radius of Ronald Reagan Washington National Airport, which restricts all flights in the greater DC area.  The SFRA is divided into a 15-mile radius inner ring and a 30-mile radius outer ring.

  • Flying an unmanned aircraft within the 15-mile radius inner ring is prohibited without specific FAA authorization.

  • Flying a UAS for recreational or non-recreational use between 15 and 30 miles from Washington, D.C. is allowed under these operating conditions:

    • Aircraft must weigh less than 55 lbs. (including any attachments such as a camera)

    • Aircraft must be registered and marked

    • Fly below 400 ft.

    • Fly within visual line-of-sight

    • Fly in clear weather conditions

    • Never fly near other aircraft

Since we’re within the 15-mile inner ring, it’s doubtful that we would ever be able to get specific FAA authorization to capture drone footage for any of these stories.  If we move out farther into Maryland or Virginia, however, we might be able to capture things like National Football League (NFL) games at FedEx Field in Prince George’s County.  That’s a tough one, though: while the stadium is more than 20 miles from the airport by car, as the crow flies it appears that it might be JUST within that 15-mile No Drone Zone:

[Map: DCA to FedEx Field]
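Whether a spot falls inside the ring is a straight-line (great-circle) question, not a driving-distance one, so it can be sanity-checked in a few lines.  A sketch of that check, where the coordinates are rough approximations I’m assuming for illustration, not surveyed values:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in statute miles."""
    earth_radius_miles = 3958.8
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * earth_radius_miles * asin(sqrt(a))

# Rough coordinates (assumed for illustration):
dca = (38.852, -77.038)          # Ronald Reagan Washington National Airport
fedex_field = (38.908, -76.864)  # FedEx Field, Landover, MD

miles = haversine_miles(*dca, *fedex_field)
print(f"DCA to FedEx Field, as the crow flies: {miles:.1f} miles")
print("inside the 15-mile inner ring" if miles < 15 else "outside the inner ring")
```

Under these assumed coordinates the stadium comes out inside the inner ring; the exact figure depends on which coordinates you feed in, which is why a check like this beats eyeballing a map.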

Moving into the 30-mile outer ring of the No Drone Zone, it might be neat to capture traffic on Interstate 66 in Virginia as the state moves toward making it a toll road (The Washington Post’s Dr. Gridlock might be interested in this kind of footage!) or to record beachgoers during high tourist season at the Maryland and Delaware beaches.  Even then, however, a drone pilot would have to stay cognizant of not flying over crowds, being properly registered, and staying under 400 feet and out of the way of other aircraft.  Still, these could make for very interesting stories in the long run!


Field Test Proposal: Chatbots for #SciComm

I’m a big proponent of science communication (colloquially, #scicomm) and, by extension, science journalism.  I think well-written storytelling is key to getting people to understand basic science concepts that they might not otherwise have a handle on, or care about.  Books like The Immortal Life of Henrietta Lacks, The Mismeasure of Man, and Evolution for Everyone have played a large role in helping me understand the world around me.  I spent five years of higher education studying physical anthropology (very similar to skeletal biology, for those not in the know), and while I didn’t end up pursuing that field, I’ve benefited and learned a lot from science.  As a budding journalist, I want to find a way to give that back.

The site StatNews is a daily stop for me.  They’ve been writing a lot about Zika recently – with good reason, as the disease is a national health concern – and I want to take the opportunity of my Emerging Media Platforms field test to see how a chatbot might help a site like StatNews further this kind of science reporting.  I propose a sort of “ZikaBot” – a chatbot that will answer any questions readers have about one particular disease so that they can better understand the stories they’re reading on that topic.

Consider this the basic scientific grounding that readers might not have gotten in a classroom; even scientifically literate readers likely finished their studies before this health concern emerged.

Information on Zika is constantly changing as we work towards a vaccine, and an emerging media platform like a chatbot is a perfect way to provide this background information to curious readers: it can be updated on the back end whenever new information is published, and it will save reporters valuable time creating their stories, since a basic understanding can be assumed.

This is the field test that I propose:

Technology:

Chatfuel chatbot builder

Journalism hypothesis:

The existence of a chatbot that can answer basic questions about a public health concern like the Zika virus can facilitate the work of science journalists by making the public better able to understand a complicated science topic.  While reading an article on a scientific news site like StatNews, for example, readers can ask their questions of the chatbot instead of emailing the journalist, Googling and getting information from dubious (or at least not up-to-date) sources or, worse, reading the entire article and not understanding important parts of it.  The ZikaBot will clarify questions that come up as readers work through the material and will allow different people to ask different questions of the same article.

Journalism plan:

As a trial run, I will create a chatbot using Chatfuel, which will focus on different questions readers might ask about the Zika virus in particular.  The chat responses will direct them to reputable sources like the National Institute of Allergy and Infectious Diseases, CDC.gov, and other reliable science-and-fact-based sites.  In this way, a site visitor curious about the effects of Zika on pregnancy could query the chatbot (in the future, built into the article page) rather than having to leave the site to find these answers.
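Chatfuel is a visual builder, so no code is required, but the core logic amounts to matching a reader’s question against keyword-tagged answers that point to reputable sources.  A minimal sketch of that idea in Python, where the keywords, wording, and Q&A pairs are illustrative assumptions rather than the actual bot content:

```python
import re

# Each entry pairs trigger keywords with a canned answer pointing readers
# to a reputable source. These Q&A pairs are illustrative placeholders;
# the real bot's content would be curated from CDC and WHO Q&As.
ANSWERS = [
    ({"pregnancy", "pregnant", "baby", "birth"},
     "Zika infection during pregnancy is linked to serious birth defects. "
     "See the CDC's Zika and pregnancy pages at cdc.gov/zika."),
    ({"symptom", "symptoms", "fever", "rash"},
     "Many infected people have no symptoms; common ones include fever, "
     "rash, joint pain, and red eyes. See cdc.gov/zika for details."),
    ({"vaccine", "cure", "treatment"},
     "There is currently no vaccine or specific treatment; see NIAID's "
     "site for research updates."),
]

FALLBACK = ("I don't have an answer for that yet. Try the CDC's Zika "
            "Q&A at cdc.gov/zika.")

def reply(question: str) -> str:
    """Return the first canned answer whose keywords appear in the question."""
    words = set(re.findall(r"[a-z]+", question.lower()))
    for keywords, answer in ANSWERS:
        if words & keywords:  # any trigger keyword appears in the question
            return answer
    return FALLBACK

print(reply("Is Zika dangerous during pregnancy?"))
```

Updating the bot when new findings are published is then just a matter of editing the answer list, which is exactly the kind of back-end change Chatfuel lets a non-programmer make.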

To test my hypothesis that this helps people understand an article better, I will circulate the chatbot along with one specific StatNews article, asking people to read the article and then query the chatbot with any questions they may have.  At the end of their exploration, I will ask them to rate the experience with the chatbot and how it informed their reading of the article.

If this hypothesis holds up, in the future, science journalism chatbots like the ZikaBot could be created to tackle any number of topics that readers might be interested in, and could be used in parallel with articles on those subjects.

Potential pitfalls:

The most important concern is that of human error; in building the chatbot, I might miss some key topics related to Zika.  To prevent this, I will follow key Q&As from the Centers for Disease Control and Prevention, the World Health Organization, and others to ensure that the resulting answers and topics are comprehensive.

Another concern: responses should be in language readers can understand; after all, this is a clarification tool for complex journalism.  What if the sites we link to for explanations are just as confounding?  Rather than providing clarity to readers, we could muddle their understanding further.

Finally, there’s the concern of sending people off the site.  Much of today’s journalism is built around keeping readers on a site to collect advertising revenue from their time on page.  If we send them off the page to ask questions of a chatbot, we could decimate the revenue stream for an article.  It is my hope that in the future, this type of chatbot could be embedded on the page itself.  But even then, the model I am currently proposing would send readers to a second page to get their answers, since the answers are not handwritten by the human behind the chatbot but instead come from reputable sources on other sites.

Overall, however, I think a science inquiry bot like the ZikaBot could be really helpful for laypeople interested in a scientific topic but daunted by wading into the literature or highbrow reporting.  Stay tuned to this blog for updates as I build the bot, conduct my field test, and report back on whether or not my hypothesis was correct.


Sensor Journalism: Monitoring Drought Conditions

What’s a piece of journalism that could be done using sensors?  I’d be interested in using the SparkFun Soil Moisture Sensor in areas prone to drought to see when they cross the threshold into drought conditions.

Drought.gov already monitors similar things, but with homemade Arduino sensors you could get down to a county, neighborhood, or community level.  This would be good for local journalism, which seeks to show the local impact of national and environmental stories.

Journalists could monitor the Arduino sensors to see when a particular neighborhood or city crosses into drought conditions, potentially ahead of when the entire state is declared “in drought” at the NOAA level.

More information on the actual sensor you’d need:

Sensor: SparkFun Soil Moisture Sensor

Function: Measures the moisture in soil using two exposed pads as probes.  “The more water that is in the soil means the better the conductivity between the pads will be and will result in a lower resistance, and a higher SIG out.”  Tutorials on the SparkFun site show people building systems that water automatically when levels are low, or cut off watering when levels are too high.  This is obviously at a micro level (a single plant in someone’s house), but it shows that you could set an arbitrary threshold that means something to you (like establishing drought conditions) and be alerted when the soil you’re monitoring crosses it.
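To make the threshold idea concrete, here’s a small sketch (in Python rather than an Arduino sketch, to keep it self-contained) of how crowdsourced readings from several neighborhood sensors might be turned into a local drought flag.  The 0–1023 scale mirrors an Arduino’s analog read range, and the threshold value is an assumption that would need field calibration per soil type:

```python
from statistics import median

# Assumed calibration point: readings mimic an Arduino's 0-1023 analog
# scale, where a lower SIG value means drier soil. This threshold is a
# placeholder; real reporting would calibrate it per soil type.
DROUGHT_THRESHOLD = 300

def in_drought(readings, threshold=DROUGHT_THRESHOLD):
    """Flag local drought when the median community reading falls below threshold.

    Using the median resists outliers, like one sensor sitting under a sprinkler.
    """
    return median(readings) < threshold

# Hypothetical readings from five neighborhood sensors:
dry_week = [250, 280, 310, 220, 295]
wet_week = [520, 610, 480, 555, 590]
print(in_drought(dry_week))  # True
print(in_drought(wet_week))  # False
```

A reporter could run a check like this over each batch of community readings and get an alert the week a neighborhood first dips below its calibrated threshold.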

Story pitch: With rising temperatures globally, drought is everywhere in the news these days.  A quick search for the term “drought” reveals major issues in California, Georgia, Alabama, Massachusetts, and New Jersey, among others – multiple geographically disparate locales, and that’s just on a national level.  But what’s happening in smaller communities?  How are they handling drought?  And when does a national crisis become a local one?  Deploying multiple SparkFun Soil Moisture Sensors across a local community, and reporting the resulting crowdsourced data, would help reporters keep track of drought at the local level.


Figure Skating’s Grand Prix in VR

This week’s prompt is to come up with a pitch for a journalism story that might utilize the tools we’ve talked about in previous posts (virtual reality and 360 video being chief among them).  I’m an avid figure skating fan, and this season of figure skating competitions is in full swing, with the Trophee de France taking place this weekend.

I’d love to do a VR story that placed you right in the middle of one of these competitions, where you could explore the arena: walking to the ice to see the top competitors performing, going ‘backstage’ to where athletes are warming up and prepping, and more.

Like in Harvest of Change, you could discover markers or checkpoints that, when selected, would tell you stats about the athletes, their histories, and their current season standings.  You could gamify it by trying to learn about all of the different types of jumps or spins, for example.

360 video could be employed to make you feel as if you were gliding across the ice, like in a Rachael Flatt (figure skater) example shared by a classmate.

Even though this is sports journalism and a bit different from traditional breaking news, I think it would be a great use of emerging media platforms to put the reader/viewer right in the center of the action for a fast-paced sport, as well as taking them to arenas and competitions that are otherwise very expensive and oftentimes difficult to attend in person.


The Photos of the Future

When I first signed up for Emerging Media Platforms, I anticipated that we’d study virtual reality, drones, wearable technology, and much more.  Nowhere in there did I see us learning about still photography in any sense.  While there have been amazing advances in DSLRs and the quality of photos you can capture nowadays, a static photo is a static photo, right?

Wrong.  So wrong.

In this week’s series of lectures, we learned about everything from photogrammetry to 3D scanning, with stops at building our own 3D models in between.  I had no idea what technologists had done with photography in the years since I first learned manual photography and darkroom processing, and I have to admit, I’m very impressed.

How can I see this being used in my future career?  I’m hard pressed to see how 3D scanning of models has a place in breaking news journalism, but lucky for me, I work not in straight journalism but in content marketing.  And oh boy, is there a place for these types of photo technologies in content marketing!

One of my dream jobs is working in content marketing for a science outlet like Discovery, Smithsonian, or Nat Geo…and who did our professor use as an example of the frontlines of 3D scanning but the Smithsonian Institution!  Check out what they’re doing; it’s really amazing.  I knew they were scanning their collection, but I had no idea how extensive the work they’d done already was, or that it was annotated and searchable.  So very interesting.

I see serious applications of these technologies for content marketing, particularly in a scientific field.  The more you can immerse your audience in a story, the more they’re going to want to devour your content and come back for more.

My previous academic life was in anthropology and archaeology; I can only imagine how great it would be to see archaeological digs come to life, digitally, on a blog.  From the other side of the world, you could literally be in the midst of a scene that (as a layperson reading about science) you’ve only read about and will never get to travel to.  You could recreate historical scenes, events, and narratives for students to follow along with.

Photo technology like photogrammetry and 3D scanning brings faraway or long-gone things to life right there in front of you.  It has huge implications for how we interact with content online – maybe we don’t even need words, after all!  If the traditional 2D picture is worth 1,000 words, what’s a 3D, exploratory image worth in getting you to understand a scene?  My guess is, quite a lot.  I’m very excited to have learned about this tech and look forward to following it in the future.
