Guest Column | July 17, 2018

Adverse Event Reporting On Social Media — What You Need To Know

By Rob Innes, Wyoming Interactive


Adverse events (a suspected reaction to the API, or to the API in combination with prescribed medicines or environmental factors) are, naturally, a challenging situation for patients in a clinical trial and may, in rare cases, become severe or even fatal. For sponsors, contract research organizations (CROs), and logistics partners, early notification and accurate information are vital to understanding and responding to a suspected adverse event. But what happens if the patient goes off protocol and turns to social media?

We live in a social world. Facebook, Twitter, LinkedIn, WhatsApp, and a plethora of other media are becoming ubiquitous across all demographics, and, consequently, are available to virtually every participant in a clinical trial. U.S. smartphone penetration surpassed 80 percent in December 2016 and likely continues to rise. These social media channels are mobile-optimized and live with the patient, everywhere they go. Patients may be more likely to remember their mobile device than their medication when travelling to work, shopping, socializing with friends, exercising, etc.

If the device is ubiquitous and the channels are ever present, how patients (consumers, every last one of them) relate to each other and to healthcare providers will change. Rather than showing a friend a rash, might a trial participant post an image to Instagram or Facebook and seek guidance? Hopefully, someone in their network quickly directs them to their medical professional or clinic, but should providers rely on the network performing intelligently?

What if their post goes unanswered, or worse, is addressed by someone with non-specialist knowledge and results in the patient delaying or avoiding contact with professionals? Such are the risks of conducting patient trials in a social world.

Patient information sheets are always kept handy by patients, and emergency contact cards are always carried. Well, perhaps. But that is not to be relied upon. Even if it is carried, a reaction may not trigger use of the card. Instead, the patient may reach out to their network on social media first. Any medical professional will tell you that this is a poor choice; however, the industry needs to wake up to this actually being a preferred route for patients.

It’s clear what a patient should do, but what will they do, and how might sponsors, CROs, and the like better engage with the new consumer?

What does social alerting look like?

Social alerts may be entirely private, from one user to another, in which case the sponsor has zero visibility. An alert may reference the sponsor (or investigative site/clinic), in which case visibility is possible, if tracked and escalated (more on this below). Or it may refer only to symptoms or observed effects, in which case some visibility is possible but unlikely, unless a very specific symptom is listed and the sponsor’s social media monitoring team is playing its A game.



Instagram, Facebook, and Twitter represent three of the common networks that patients may turn to. Instagram and Twitter are open to a greater extent than Facebook, and message visibility is higher. For most users, Facebook posts remain within a narrow set of individuals. For sponsors and CROs, calls for help made on Facebook are, unfortunately, likely hidden from view. In more open networks, access to user messages is possible. However, access does not equal visibility. Detecting pertinent messages in the Twitter deluge is not easy. Social media monitoring software is available to help with this task, including, for example, Social Studio, previously known as Radian 6. This allows for “proximal” searches, where a set of terms is searched and hits reported when the terms are close to each other in a post. For example, “trial” and “rash” could yield a positive hit for: “Just started a trial and I’ve now got a rash. Quite nasty, too. Great!” On the other hand, it would not yield a hit for “Downloaded some trial software for my new Mac and it’s super complicated — instructions are not great. May have been rash thinking I could get this running in a single weekend.”

In the first instance, the keywords are within six words of each other; in the second, they are separated by 16 other words. A proximal filter on 10 words or fewer would pick up the first but not the second example.
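The proximity logic described above can be sketched in a few lines of Python. This is a simplified illustration of the idea, not the actual implementation used by Social Studio or any other monitoring tool:

```python
def proximal_hit(text, term_a, term_b, max_gap=10):
    """Return True if term_a and term_b appear within max_gap words of each other."""
    # Normalize: lowercase, strip punctuation, drop empty tokens (e.g., bare dashes)
    words = [w.strip(".,!?'\"\u2014").lower() for w in text.split()]
    words = [w for w in words if w]
    positions_a = [i for i, w in enumerate(words) if w == term_a]
    positions_b = [i for i, w in enumerate(words) if w == term_b]
    # Gap = number of words strictly between the two terms
    return any(abs(i - j) - 1 <= max_gap for i in positions_a for j in positions_b)

post1 = "Just started a trial and I've now got a rash. Quite nasty, too. Great!"
post2 = ("Downloaded some trial software for my new Mac and it's super complicated "
         "\u2014 instructions are not great. May have been rash thinking I could "
         "get this running in a single weekend.")

print(proximal_hit(post1, "trial", "rash"))  # True: 5 words apart
print(proximal_hit(post2, "trial", "rash"))  # False: 16 words apart
```

Real monitoring platforms add stemming, synonym lists, and language handling on top of this, but the core filter is the same: a bounded word-distance check between keyword pairs.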

What might a workflow for social access, awareness, and action look like?

The following is a simplified list of topics from which a standard operating procedure (SOP) might be built, but it would have to be tailored to the particular requirements of each organization and regularly checked for suitability.

  • Setup and testing: getting monitoring software installed and configured for your protocol
  • Monitoring: running the system and picking up appropriate alerts
  • Filtering: examining possible hits and picking up the ones that need following up
  • Escalation: getting medical staff to urgently review highlighted cases
  • Action: Do something! Do you need to unblind? Do you need more information? Who else needs to know?
  • Follow-up: What can be fed back into the protocol, into the monitoring scheme, and into the onboarding packs for future studies?

Are There Any Challenges With This Approach?

There are many challenges to address, and they are evolving.

Identification. If @tim_martindale is picked up by monitoring software and the profile includes a proper name and a specific location, great. Researchers have a good chance of matching the person to a trial participant list. However, what if @big_tim_260473 is picked up and the profile includes the name Big Tim and a non-specific location? Trying a direct message may not yield a response for a few hours or days, if ever.

False Positives. Whenever an organization starts a social listening project (regardless of industry), there is a flurry of hits. Filters are usually quickly applied, but even then, volumes can be surprising. Lots of possible hits need to be checked, and this raises resource issues.

Privacy. Patient safety is driving this initiative, and that’s a universal good that all can agree on; however, it may not trump privacy issues, particularly with legislation such as GDPR strengthening users’ right to privacy.

Language Coverage. If the trial covers multiple territories, are all pertinent languages covered by the monitoring software and by the review and escalation team?

Emergency Unblinding. What rules govern emergency unblinding, and how does the escalation process lead appropriately to action?

Unsocial Hours Coverage. Monitoring software runs 24x7. Set up your keywords, filters, and constraints, and off you go. But what about the monitoring team? Where do hits go on a Sunday evening?

All of these challenges can be overcome through process management, data management, good operations practice, tactical outsourcing, use of technology, etc., but the right choices for each organization may be different. In a future article, we’ll look at some of the strategies organizations can use to limit the impact of these challenges and how to embed these strategies into day-to-day operations.

About The Author:

Rob Innes is head of consultancy at Wyoming Interactive. In this role, he manages consulting engagements for the firm’s life sciences clients, helping to drive business transformation in response to industry challenges.