
Why Documents and Data Can No Longer Be Treated as Distinct Entities

By MedTech Intelligence Staff

If teams across regulatory, quality and safety functions, and across the wider life sciences enterprise, are to think and operate in more agile and dynamic ways, then the way they generate, manage and store documents and data needs to change fundamentally. Regulatory changes (towards data-driven submissions, more dynamic item and label tracking, and so on) are prompting some of this shift, but cannot be relied upon exclusively to drive the process improvements now needed.

To delve more deeply into what the future of document and data management looks like, and what companies need to do now to realize that vision, Generis recently invited four industry experts to a virtual debate, attended by professionals working across regulatory and adjacent disciplines in the life sciences industry.

The Experts
Peter Brandstetter, Senior Manager for Technology Consulting, Accenture
Caroline Masterman-Smith, Senior Engagement Manager, Syneos Health
Steve Gens, Managing Partner, Gens & Associates
Remco Munnik, Director at Iperion, a Deloitte company
James Kelleher, CEO, Generis (moderator)

Blurred Lines: All Content Must Come from the Same Master Source

All facets of life and business are becoming more data-driven, so it stands to reason that the life sciences industry must move in the same direction to stay dynamic and agile in a fast-moving market. This was the premise of the debate, and the panel began by summarizing the developments that had led to this point. Industry regulators are leading the way, but it is now incumbent on organizations to transform their own processes if they want to keep pace with the wider healthcare ecosystem and cater more effectively to the needs of patients.

The panel cited the example of ISO IDMP and its implementation in the EU as a set of data standards that will allow a single medicine or product to be uniquely identified, traced and queried in detail. Alongside this development is the coming requirement to submit regulatory information in data form in parallel with document provision, as an interim step towards data becoming the primary medium of regulatory exchange and the master source of intelligence. Documents, where they continue to exist, will be secondary. This changes everything, opening up all kinds of new possibilities for more seamless data and document management across the enterprise, given the right infrastructure and processes.

The Rising Value of Data and Its Quality

In the new set-up, the quality, completeness, governance and maintenance of master product data become paramount, placing new emphasis on who owns that data and who is responsible for its integrity over time. With this in mind, the discussion moved on to the growing role and importance of data models as a means to organize the elements of data and to standardize how they relate to one another and to the properties of external entities.
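As a loose illustration of what such a model might look like in practice, consider the minimal sketch below. The field names and structure are hypothetical and far simpler than an actual ISO IDMP schema; the point is only to show how each element of master product data can relate explicitly to the others, so that every downstream artifact traces back to one record.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified master-data model for illustration only.
# Real IDMP implementations define far richer structures for
# substance, product, organisation and referential data.

@dataclass(frozen=True)
class Substance:
    substance_id: str          # unique identifier for the active substance
    name: str

@dataclass(frozen=True)
class Organisation:
    org_id: str                # e.g., the marketing authorisation holder
    name: str

@dataclass
class MedicinalProduct:
    product_id: str            # single identifier that uniquely traces the product
    name: str
    substances: list[Substance] = field(default_factory=list)
    authorisation_holder: Organisation | None = None

# Because every document and submission references the same product_id,
# any downstream content can be traced back to one master record.
paracetamol = Substance("SUB-001", "Paracetamol")
holder = Organisation("ORG-042", "Example Pharma Ltd")
product = MedicinalProduct(
    "MP-2024-001", "PainAway 500 mg tablets",
    substances=[paracetamol],
    authorisation_holder=holder,
)
```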

Gens & Associates, which has been tracking regulatory information management trends for many years, sees these issues coming to the fore as companies become more ambitious and holistic in their data organization plans. Steve Gens noted that he has witnessed an increasing preoccupation with improving data quality so that, as a definitive source of product “truth” evolves, teams can depend on the reliability and accuracy of that data and do more with it.

Data connectivity and exchange become important here, too, supporting spin-off use cases beyond structured data submissions to regulators. These use cases could span regulatory, quality, manufacturing and clinical operations, streamlining activities such as global labeling management and content change control/variation management. In cross-functional contexts like these, where the distance from the original information source grows, the data's reliability becomes even more critical.

Targeting Resources as Regulatory Deadlines Bear Down

One challenge for life sciences companies is how best to target efforts and resources as they strive to meet two ambitions: Regulatory compliance and internal process transformation.

Remco Munnik of Iperion, a Deloitte company, said that matters of data quality must be resolved before multiple processes are connected to that data. Up to now, individual departments have held their own versions of relevant information, each addressing a single use case. So an important first step is to understand where all the original contributing data sources exist, and what needs to happen to that data to arrive at a credible and definitive single source of product truth. EMA's IDMP implementation can help here, with its highly prescribed ways in which substance, product, organization and referential data must be formatted, making it possible to work towards a definitive, trusted and compliant central data set.
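A hedged sketch of that first step might look like the following: departmental copies of the same attribute are compared, agreed values are promoted into a candidate central set, and disagreements are flagged for a governance decision rather than silently overwritten. The department and attribute names here are invented for illustration and are not part of any EMA specification.

```python
# Illustrative only: reconcile departmental copies of product data
# into one candidate "single source of truth", flagging conflicts
# for data-governance review.

def reconcile(sources: dict[str, dict[str, str]]) -> tuple[dict[str, str], list[str]]:
    """sources maps department name -> {attribute: value}."""
    master: dict[str, str] = {}
    conflicts: list[str] = []
    attributes = {attr for record in sources.values() for attr in record}
    for attr in sorted(attributes):
        values = {dept: rec[attr] for dept, rec in sources.items() if attr in rec}
        if len(set(values.values())) == 1:
            master[attr] = next(iter(values.values()))   # all departments agree
        else:
            conflicts.append(f"{attr}: {values}")        # needs a governance decision
    return master, conflicts

master, conflicts = reconcile({
    "regulatory":    {"product_name": "PainAway 500 mg", "strength": "500 mg"},
    "manufacturing": {"product_name": "PainAway 500mg",  "strength": "500 mg"},
})
print(master)     # {'strength': '500 mg'}
print(conflicts)  # ['product_name: {...two differing spellings...}']
```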

Caroline Masterman-Smith of multinational CRO Syneos Health pointed to the process-related barriers that pharma organizations also need to overcome to make data work optimally for them as they start to adopt integrated solutions that handle both data and documents.

Concurring that data rather than documents will act as the glue that fits regulatory into the wider enterprise in future, she commented that life sciences companies will need to consider the context of particular submissions, as it won't always be obvious which data elements are required.

In her company's dealings with clients (conducting IDMP readiness assessments), Caroline has found that issues of governance soon emerge. These can be addressed by defining better processes and reviewing operating models, but this may not be a one-size-fits-all situation. Some companies are moving toward centrally managed data entry and governance, she said, while others are gravitating toward a scenario in which everyone is accountable for their own data at source.

The Role of Tech

Accenture’s Peter Brandstetter considered how companies might maintain a robust central source of master data in cases where they are outsourcing data-based work to various external vendors across their different departments.

Steve Gens suggested that, based on his company's research, there's a strong likelihood that within two to three years almost two-thirds of the industry will have taken a consolidated platform approach to organizing and repurposing regulatory information/product data for multiple use cases. Even the remaining 40% were looking to use fewer providers, he added, with an emphasis on greater inter-system connectivity, enabled by consistent use of agreed data standards.

Noting that many of the tools companies need to consolidate and clean up their data are already well established and well within reach (e.g., "big data" analytics tools, and the use of AI to extract information from unstructured data), Peter Brandstetter suggested that any residual barriers to tech adoption are likely to have more to do with inertia than with access to potential solutions: A preference among life sciences organizations for continuing with the way teams have always worked.

Honing the Business Case for Data-based Process Transformation

The panel agreed that data-driven, IDMP-led transformations are a foundational investment, so companies need to look laterally for ROI. Peter Brandstetter noted that the bigger benefits will extend beyond IDMP and regulatory compliance, as teams become able to call on reusable master data across the whole product lifecycle, from early research until a product retires.

For Iperion's Remco Munnik, the focus should be on tying the data together with the process: The goal being a target operating model in which one set of data supports a range of downstream processes, so that if changes are made in one location, there is dialogue and agreement that this is what will be carried forward into the next submission, the next step, and so on.

With documents still playing a prominent part in regulatory submissions, debate host James Kelleher of Generis wondered what quick wins companies might aim for by taking a concerted approach to data and document management, to help sell the business benefits to senior stakeholders.

One prominent example lies in structured content authoring, the panel agreed: Taking a more data-driven and granular approach to the fragments that make up documents, with the potential to remove up to 80% of the manual work involved in managing content variations across application forms, cover letters, eCTD sequences, etc. The imperative now must be to pin down these extended business use cases.
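To make the idea concrete, here is a minimal sketch of the structured-authoring pattern the panel described. The fragment names, template text and master-data fields are invented for illustration; the point is that fragments pull their values from the master record, so a single data change propagates into every document that references it rather than being re-keyed by hand.

```python
from string import Template

# Hypothetical master record; in practice this would come from the
# central regulatory information management platform.
master_data = {
    "product_name": "PainAway 500 mg tablets",
    "mah": "Example Pharma Ltd",
    "procedure_number": "EMEA/H/C/000000",
}

# Reusable content fragments, authored once and shared across
# application forms, cover letters, eCTD sequence documents, etc.
fragments = {
    "cover_letter": Template(
        "Dear Agency,\n\nPlease find enclosed the submission for "
        "$product_name under procedure $procedure_number.\n\n"
        "Kind regards,\n$mah"
    ),
    "application_form_header": Template(
        "Product: $product_name | MAH: $mah | Procedure: $procedure_number"
    ),
}

# One change to master_data regenerates every variant consistently.
for name, fragment in fragments.items():
    print(f"--- {name} ---")
    print(fragment.substitute(master_data))
```

Under this pattern, correcting a product name once in the master record updates the cover letter, the form header and any other fragment that references it, which is where the panel's estimated reduction in manual rework would come from.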
