Methodology

Our research methodology is designed to provide independent, practitioner-grounded insights that are both credible and actionable. The foundation of our work rests on two core pillars: the sources we use and the analysis we apply to generate meaning, context, and clarity.

We’re not just collecting stories. We’re looking for patterns: what works, what doesn’t, and why. And we’re doing it in a way that respects the realities of platform ownership, organisational politics, and the lived experience of transformation.

1. Sources: Where Our Insights Come From

Our research draws on a carefully curated blend of primary and secondary sources. We deliberately seek out a diversity of perspectives—strategic, technical, operational, and cultural—so that the insights we publish reflect the full complexity of real-world implementations.

A. Primary Sources

These are the original inputs we either gather directly or have permission to review. They include:

  • Practitioner interviews: Conversations with platform owners, architects, programme managers, and ServiceNow users who’ve lived through implementations, upgrades, or failed rollouts.

  • Case submissions: Structured case study templates completed by organisations willing to share lessons learned (anonymised if requested).

  • Post-mortem documentation: Internal or external reports reviewing what went well, what went wrong, and why. These often offer the richest insights.

  • Webinars, panel discussions & roundtables: We analyse recorded industry events to capture real-time, unscripted reflections.

  • First-hand implementation artefacts: Sample project plans, stakeholder comms packs, governance playbooks, training materials, and user feedback (where access is granted).

B. Secondary Sources

These provide supporting context, comparative insights, or contrasting narratives. Our library draws from:

  • Published case studies from vendors, implementation partners, or public sector portals (e.g. GDS, Digital Marketplace, or the National Audit Office).

  • Job descriptions and organisation charts to infer maturity, team structure, and internal ownership patterns.

  • LinkedIn profiles of known ServiceNow champions or speakers, to trace influence, tenure, and platform evolution.

  • Conference materials and whitepapers, read not to repeat the vendor message but to compare stated strategies with lived realities.

  • Forums, reviews, and practitioner blogs, especially where frustration or uncertainty is expressed in long-form responses.

  • Regulatory submissions or board-level papers, where ServiceNow or operational resilience is explicitly mentioned.

We also collect indicators of platform maturity and readiness from job boards, procurement databases, and news aggregators. These help us triangulate where an organisation is in its ServiceNow journey and whether there are signs of friction, fatigue, or re-platforming.

C. Transparency and Traceability

Where possible, we link back to the original source. Where that’s not possible (e.g. anonymised interviews), we make clear what level of synthesis or interpretation has occurred. We hold ourselves accountable to the same standard we expect of platform teams: visibility, context, and traceable assumptions.
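
To make that concrete, here is a minimal sketch (in Python) of how a provenance record for a published insight could be structured. The field names and synthesis levels are illustrative assumptions for this page, not a description of our actual tooling.

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import Optional

    class SynthesisLevel(Enum):
        VERBATIM = "verbatim"        # quoted directly from the source
        PARAPHRASED = "paraphrased"  # reworded, meaning preserved
        SYNTHESISED = "synthesised"  # combined across several sources
        INTERPRETED = "interpreted"  # our reading of the evidence

    @dataclass
    class SourceRecord:
        source_type: str                 # e.g. "practitioner interview"
        synthesis_level: SynthesisLevel
        link: Optional[str] = None       # None for anonymised sources
        assumptions: list[str] = field(default_factory=list)

    # An anonymised interview: no link, but the interpretation is labelled.
    record = SourceRecord(
        source_type="practitioner interview",
        synthesis_level=SynthesisLevel.INTERPRETED,
        assumptions=["sector inferred from role titles"],
    )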

2. Analysis: How We Make Meaning

Our analysis is not driven by AI alone, although AI is a useful tool for clustering, classifying, and drawing connections. The heart of our work is human: grounded in practitioner experience, curious inquiry, and a commitment to clarity.

We don’t aim to present a single "truth"—we aim to surface patterns, contradictions, and insights that help platform owners and sponsors make better decisions, prepare more robust business cases, and guide stakeholders through complexity with confidence.

A. Thematic Coding

All submissions and case inputs are analysed through a consistent thematic coding lens. This involves tagging content across key dimensions (a brief sketch of the schema follows the list):

  • Module or focus area (e.g. ITOM, HRSD, SAM Pro, Custom Apps)

  • Phase of implementation (e.g. pre-initiation, go-live, BAU)

  • Stakeholder group (e.g. CFO, Product Owner, Process Owner, End User)

  • Friction types (e.g. data quality, sponsorship drift, unclear success metrics)

  • Evidence of value (e.g. cost avoidance, improved user satisfaction, audit compliance)

  • Signals of misalignment (e.g. high turnover, shadow IT, change fatigue)

These tags allow us to aggregate patterns across industries, organisation sizes, or delivery models—helping readers see what might be coming next, or what others have already navigated.
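
As a minimal sketch only, the snippet below shows one way a coded case and a simple cross-case aggregation could be represented. The field names and tag values mirror the dimensions above but are illustrative assumptions, not our production schema.

    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class CodedCase:
        module: str                      # e.g. "ITOM", "HRSD", "SAM Pro"
        phase: str                       # e.g. "pre-initiation", "go-live", "BAU"
        stakeholders: list[str]          # e.g. "CFO", "Product Owner"
        frictions: list[str]             # e.g. "data quality", "sponsorship drift"
        value_evidence: list[str]        # e.g. "cost avoidance"
        misalignment_signals: list[str]  # e.g. "shadow IT", "change fatigue"

    def friction_patterns(cases: list[CodedCase]) -> Counter:
        """Count how often each friction type recurs across coded cases."""
        return Counter(f for case in cases for f in case.frictions)

    cases = [
        CodedCase("ITOM", "go-live", ["Product Owner"],
                  ["data quality", "unclear success metrics"],
                  ["audit compliance"], ["shadow IT"]),
        CodedCase("HRSD", "BAU", ["Process Owner"],
                  ["data quality"], ["improved user satisfaction"], []),
    ]
    print(friction_patterns(cases).most_common(1))  # [('data quality', 2)]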

B. Narrative Synthesis

Rather than just summarising data, we build out narratives that reflect the tensions and trade-offs each organisation has faced. This includes:

  • Identifying what “good” looked like in their context.

  • Highlighting key moments of risk or change—e.g. when a module rollout triggered unexpected political resistance, or when a sponsor shifted mid-implementation.

  • Exploring how clarity (or lack thereof) around ownership, architecture, or funding influenced the outcome.

  • Capturing emotional tone and stakeholder sentiment, especially when it affected decision-making.

Where there’s conflict or divergence (e.g. between vendor claims and internal experience), we don’t smooth it over. We make it visible, so that others can learn.

C. Comparative Maturity Models

We map cases against a structured organisational maturity framework that considers both technical implementation and behavioural readiness. This helps surface patterns like the following (a simple mapping sketch follows the list):

  • Organisations that appear mature on paper (e.g. rolled out ITOM, HAM, SAM) but suffer from fragmentation and resistance in practice.

  • Teams that have strong governance and sponsorship but struggle with technical debt and process inconsistency.

  • Successful transformations where business value was delivered through careful sequencing, stakeholder mapping, and role clarity—rather than sheer velocity.
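
For illustration only, here is one way a two-axis mapping like this could be expressed. The 1-5 scales, threshold, and quadrant labels are assumptions made for this sketch, not our actual framework.

    def maturity_quadrant(technical: int, behavioural: int,
                          threshold: int = 3) -> str:
        """Place a case on an illustrative 1-5 technical vs behavioural grid."""
        if technical >= threshold and behavioural >= threshold:
            return "mature in practice"
        if technical >= threshold:
            # e.g. ITOM/HAM/SAM rolled out, but fragmented and resisted
            return "mature on paper"
        if behavioural >= threshold:
            # strong governance and sponsorship, weighed down by technical debt
            return "ready but under-built"
        return "early in the journey"

    print(maturity_quadrant(technical=4, behavioural=2))  # mature on paper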

This is not a scorecard. It’s a way of helping others see themselves in the story, spot blind spots, and define what “fit-for-purpose” might look like for them.

D. Synthesis with Stakeholder Value Lens

We cross-map all analysis with our proprietary Stakeholder Value Lens, which puts four questions to every insight (a structural sketch follows the list):

  • Who benefits (and who doesn’t)?

  • What are the stated and unstated objectives?

  • Where does resistance arise?

  • What gets measured, and what gets ignored?
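
A structural sketch of the lens, again with illustrative field names and example values rather than our actual schema:

    from dataclasses import dataclass

    @dataclass
    class StakeholderValueLens:
        """The four questions above, expressed as one record per insight."""
        beneficiaries: list[str]        # who benefits
        non_beneficiaries: list[str]    # and who doesn't
        stated_objectives: list[str]    # what the business case says
        unstated_objectives: list[str]  # what behaviour suggests
        resistance_points: list[str]    # where pushback arises
        measured: list[str]             # what gets measured
        ignored: list[str]              # what gets ignored

    lens = StakeholderValueLens(
        beneficiaries=["service desk", "platform team"],
        non_beneficiaries=["process owners asked to change tooling"],
        stated_objectives=["reduce ticket resolution time"],
        unstated_objectives=["consolidate legacy tools"],
        resistance_points=["regional teams with local workarounds"],
        measured=["mean time to resolve"],
        ignored=["end-user effort per request"],
    )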

This helps sponsors and platform owners not only spot technical risks, but also navigate the politics and psychology of enterprise change.

In Summary

We take the long road. Our methodology reflects a commitment to rigour without rigidity. We believe that in the messy reality of transformation, it’s not just what worked. It’s why, for whom, and under what conditions. Our job is to map those conditions, so that others don’t walk in blind.
