Regulatory demands in the life sciences are not only increasing in volume and complexity; submission workloads are also extremely variable. As a result, applying traditional performance and resource measurements has proved ineffective in attempts to improve cost efficiency.
When the work was managed internally and companies’ budgets were ample, this didn’t matter too much – because if loads increased, companies could simply add more staff as needed. But, as budgets have come under pressure and more regulatory work has been outsourced, firms have faced a growing need to monitor, manage and report on resource consumption and value for money.
The default approach has been to do this according to the raw number of documents processed – so if one year 1,000 documents were authored, and the next year the number was 900, there was an expectation that this should cost less.
But simple formulae like this do not account for the increased complexity of submissions.
Over the last four years or so, in line with an even greater focus on drug safety, regulatory submissions have not only multiplied, they have also become much more involved and detailed. Simple changes have turned into next-level submissions. At the same time, many life sciences companies are looking to take existing products into additional markets around the globe to derive more value from investments.
In all of these cases, the likelihood of firms underestimating the additional work and time involved is high. Expanding the territory for products is a lot of work, so there has been a growing disconnect between firms’ expectations and what’s possible under existing terms.
The trouble is that, if the measure of productivity and output involves simple surface-level metrics, there is no way to expose the granularity that is needed to recalculate resources and measure value. And unless the company allows for the extra load on the Regulatory partner, meeting the desired targets isn’t possible.
It’s a challenge we’ve been working on with some sizeable clients – companies that have been charged with delivering measurable increases in Regulatory productivity, in one case doubling it by 2025. Yet without a viable measure of output, there is currently no baseline against which to benchmark, monitor and quantify any improvement.
Driving efficiency in life sciences regulatory functions is, and will continue to be, a hot topic for the industry, and the impasse over measuring requirements and quantifying improvements cannot be allowed to persist. Excuses along the lines of every submission being unique (seemingly making it impossible to arrive at an accurate formula for calculating workloads, resources, and time and cost implications) won’t wash any longer. The pressure to show improvement has become too strong, and there are now ways to pin this down – especially when modern analytics can be combined with years of historic submissions as the basis for ongoing projections.
Horses for courses
When, as a service company, you’ve been involved in creating and managing hundreds or thousands of regulatory submissions across an extensive client base – submissions spanning some 50-60 different types – you develop a sense for what makes for a straightforward, middle-of-the-road, or hyper-complex workload, enabling more granular, tiered billing instead of a one-size-fits-all or overly simple volume-based model.
Over time, we have established not only the ability to estimate resources accurately based on global requirements for each submission type, but also the ability to break submissions down into constituent tasks, allowing us to say with considerable accuracy what makes one job more complex than another.
The resulting matrix of resource requirement per submission type has contributed towards the development of a repeatable framework, which is detailed enough to overcome the challenge of variability between submissions. The unthinkable alternative is for teams to input every action into a hugely onerous timesheet – the kind of undertaking that invites rebellion, and results in significant inaccuracies because administration is so often put off until the detail has been forgotten. If companies are aiming for accuracy and the ability to make positive comparisons over time, they need to be more pragmatic.
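A resource-requirement matrix of this kind can be sketched as a simple lookup of estimated hours per submission type and complexity tier. The types, tiers and figures below are hypothetical placeholders for illustration, not actual benchmarks:

```python
# Illustrative resource-requirement matrix: estimated hours per
# (submission type, complexity tier). All names and figures are
# hypothetical, not real benchmarks.
BASELINE_HOURS = {
    ("variation", "simple"): 12,
    ("variation", "standard"): 30,
    ("variation", "complex"): 75,
    ("new_registration", "standard"): 160,
    ("new_registration", "complex"): 400,
}

def estimate_workload(portfolio):
    """Sum estimated hours for a planned list of (type, tier) submissions."""
    return sum(BASELINE_HOURS[(s_type, tier)] for s_type, tier in portfolio)

# Ten simple variations plus one complex new registration.
plan = [("variation", "simple")] * 10 + [("new_registration", "complex")]
print(estimate_workload(plan))  # 10*12 + 400 = 520
```

Because estimates are driven by a shared matrix rather than per-task timesheets, the same portfolio always produces the same resource figure, which is what makes year-on-year comparisons meaningful.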
Granularity gives rise to new insight
Using a computational framework has been our approach for around three or four years now, and it has gained traction with life sciences firms because it eliminates grey areas and builds trust – reassuring companies that they are not being overcharged for processing Regulatory changes. It removes a big source of potential tension for both parties: the client and the outsourced regulatory service provider.
Also, because the more sophisticated, systematic measurement model is initially crafted based on the number of hours needed to do a job of each kind and level, there is the potential to allow for other factors such as time lost to IT outages, or spent in training. Each time these subtler contributing criteria are recorded, the scope for richer operational analysis increases. If the head of R&D can see in black and white the impact of IT downtime on regulatory productivity and cost, it soon focuses the mind on where action needs to be taken – because the correlation between issues and the bottom line is clear to see. It’s a level of granularity and insight that spreadsheets can’t deliver.
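As an illustration of how such adjustments might feed the model, consider a simple calculation that nets downtime and training out of contracted hours and puts a monetary figure on the lost capacity – all numbers below are invented for the example:

```python
def productive_hours(contracted, downtime, training):
    """Hours actually available for submission work after deductions."""
    return contracted - downtime - training

def downtime_cost(downtime, blended_hourly_rate):
    """Monetary impact of hours lost to IT outages (illustrative)."""
    return downtime * blended_hourly_rate

# A ten-person team contracted for 1,600 hours each, losing 300 hours to
# IT outages and 500 to training over the year (hypothetical figures).
available = productive_hours(10 * 1600, 300, 500)
print(available)               # 15200 hours of real regulatory capacity
print(downtime_cost(300, 85))  # at an assumed 85/hour blended rate: 25500
```

Recording these contributing factors separately is what lets a head of R&D see the downtime-to-cost correlation directly, rather than inferring it from a headline productivity number.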
Comparative performance reporting at regular intervals, meanwhile, can boost morale, prompt healthy competition between teams, and provide new levels of intelligence about success factors – i.e. what makes one set of submissions faster to deliver than another? Is it simply the number of people allocated, or something more innovative in the approach, for example?
The results can shine a light on improvements or problem areas, prompting managers to ask more of the right questions. And the longer the period of recording extends, the richer the scope for data mapping and the more persuasive and powerful the trend intelligence. This in turn could focus discussions about bringing batches of work forward to avoid busy periods, or times when staff resources are thinner on the ground.
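The kind of trend intelligence described above can be reduced to something as simple as tracking hours-per-submission across reporting periods. The quarterly figures here are invented purely for illustration:

```python
def trend(periods):
    """Percentage change in hours-per-submission between consecutive periods.

    `periods` is a chronological list of (total_hours, submissions_delivered)
    tuples for each reporting period.
    """
    rates = [hours / count for hours, count in periods]
    return [round(100 * (b - a) / a, 1) for a, b in zip(rates, rates[1:])]

# Three hypothetical quarters: volume rises while hours per submission fall
# (40.0, 34.0 and 30.0 hours respectively).
quarters = [(4800, 120), (5100, 150), (4950, 165)]
print(trend(quarters))  # [-15.0, -11.8]
```

A negative trend line like this is the kind of quantified, period-over-period improvement that managers can actually present to budget-holders.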
Of course, advances in technology and the rapidly falling price of data storage and cloud-based analytics are contributing to more companies’ ability to be smarter about resource management. Using cloud-based data collection tools can enable multiple people to add data at the same time, and to do this from different locations, for example. And the easier companies make it for people to input measurement data, the more consistent, complete and reliable the intelligence built up will be.
Systems that display everything on one screen – that can be accessed via mobile phones, etc – all contribute to compliance. They also remove the inaccuracies of gut feel, where team members subjectively assess how ‘hard’ a task has felt. With a menu of firm parameters to choose from, that bias potential is removed. The result is a quantifiable, data-driven system for monitoring resources – the kind of supporting documentation budget-holders dream of.
Compared to other service disciplines or branches of outsourcing, regulatory outsourcing in life sciences has some way to go to catch up to the sophistication seen elsewhere, but there is a lot that can be learnt from looking horizontally towards other markets and the best practices in use there.
With regulatory demands set to keep increasing, and with pressure on life sciences firms to be smarter in how they manage resources and leverage product lines, the demand for more granular measurement strategies and techniques can only grow. Without the proper aids, there’s only so much improvement service managers will be able to demonstrate. That’s the real equation they need to mull over.
Adrian Leibert is life science and pharmaceutical program manager for regulatory outsourcing at Kinapse. In this leadership role within a team of program managers, he is responsible for embedding world leading project management methodologies into the Kinapse organisation. Kinapse is recognised as a leading advisory and operational services provider to the global Life Sciences industry. Founded by professionals from the biopharmaceutical sector, the company provides its services across the full R&D and commercialisation lifecycle, collaborating with its clients to improve the lives of patients, through a unique ‘Advise – Build – Operate’ delivery model.