Building Trust in Impact
Why Reporting and Rating Must Go Hand in Hand
Impact measurement and management (IMM) sits at the heart of investing for impact. Yet one question continues to challenge practitioners: how can reporting be both credible and useful?
At our recent member-only gathering – co-hosted with Impact Frontiers and dedicated to the Impact Europe community of foundations – peers explored that question together. Their answer was clear: impact must be reported with transparency, and it must be rated with rigour.
From Burden to Backbone
As Matt Ripley of Impact Frontiers explained, the Impact Performance Reporting Norms were born out of frustration. Reporting had become long, inconsistent, and often of little use. Through an 18-month consultation with more than 200 organisations, the Reporting Norms were designed to restore value by emphasising transparency over prescription.
Rather than dictating metrics and how to measure them, they follow a “comply or explain” approach, encouraging clarity about what is measured, why, and how. The goal, Ripley stressed, isn’t standardisation for its own sake, but trust:
“The idea is not to prescribe a single way of measuring impact, but to double down on transparency, so readers can see how and why certain claims are made.”
That transparency, several participants agreed, is what turns reporting from a compliance exercise into a foundation for learning.
Ly Verveld-Nguyen of the IKEA Foundation put it plainly:
“Reporting is not an end in itself, but an input into the conversations we have with partners about whether change is happening – and what needs to shift if it isn’t.”
The quote reflected a shared sentiment in the room – that reporting should shift from being a static deliverable to becoming a living, collaborative learning process.
What Foundations Need from Reporting
Across the discussion, three needs stood out:
- Accountability – not only to boards and donors, but also to the people and causes served.
- Learning – using evidence to adapt strategies rather than freeze them.
- Collaboration – aligning funder expectations to ease the burden on grantees.
The need for coordination is pressing. In some networks, grantees prepare up to 200 different reports a year to satisfy multiple funders, a duplication that drains resources. Participants agreed that harmonising templates and aligning on core content can reduce this burden and enable joined-up learning.
They also called for more qualitative evidence – stories, case studies, and voices from affected communities – to complement the numbers. As Miriam Rütti of LGT Venture Philanthropy noted:
“Numbers are essential, but without context or community voices, they don’t tell the full story.”
Turning Information into Insight
If reporting builds transparency, rating makes that information actionable.
Matthew MacGregor-Stubbs of UBS Optimus Foundation introduced their Impact Rating Tool, developed with Impact Frontiers and The Good Economy. The tool assesses each grant or investment across three lenses:
- Intentionality – alignment with strategy, genuine need, and inclusion.
- Additionality – whether impact endures and can scale sustainably.
- Measurability – strength of evidence and data quality.
Each initiative is rated from A to C, and scores are revisited over time. Aggregated ratings reveal systemic strengths and gaps, such as weak evidence or limited inclusion, guiding where support is most needed. This approach enables deeper conversations with partners, clearer goal-setting, and stronger support for scaling.
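To make the aggregation step concrete, here is a minimal sketch of how A-to-C ratings across the three lenses could be rolled up at portfolio level. This is an illustrative assumption, not the UBS Optimus Foundation tool itself: the grant names, the numeric mapping of grades, and the averaging logic are all hypothetical.

```python
# Hypothetical sketch of portfolio-level rating aggregation.
# The A-C numeric mapping, grant names, and averaging approach are
# illustrative assumptions, not the actual Impact Rating Tool.
from collections import defaultdict
from statistics import mean

GRADE_POINTS = {"A": 3, "B": 2, "C": 1}  # assumed scale: higher is stronger

# Each grant or investment is rated A-C across the three lenses.
portfolio = {
    "grant_alpha": {"intentionality": "A", "additionality": "B", "measurability": "C"},
    "grant_beta":  {"intentionality": "B", "additionality": "B", "measurability": "C"},
    "grant_gamma": {"intentionality": "A", "additionality": "C", "measurability": "B"},
}

# Aggregate by lens to surface systemic strengths and gaps.
by_lens = defaultdict(list)
for ratings in portfolio.values():
    for lens, grade in ratings.items():
        by_lens[lens].append(GRADE_POINTS[grade])

# Print lenses from weakest to strongest average score.
for lens, points in sorted(by_lens.items(), key=lambda kv: mean(kv[1])):
    print(f"{lens}: average {mean(points):.2f} (lower = weaker, flag for support)")
```

In this toy example, a consistently low average on measurability would flag weak evidence across the portfolio – the kind of systemic pattern the speakers described using to decide where support is most needed.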
“The more transparent we can be, the more confidence we can give, and the more resources will flow to where they are most needed,” said MacGregor-Stubbs.
The hands-on portion of the gathering focused on Additionality, using anonymised case studies to assess sustainability, scalability, organisational capacity, and funder contribution. The exercise sparked lively debate and practical reflection, especially around how funders can meaningfully measure their own contribution to change.
Collective Learning Builds Trust
The session closed on a unifying insight: reports and ratings are not endpoints – they are feedback loops. Reporting builds transparency; rating creates comparability. Used together, they turn evidence into shared learning and more confident decisions.
Tools like the Reporting Norms and Rating Rubrics move quickly from “nice ideas” to “next practice”. And when many funders align on similar approaches, the whole ecosystem benefits: burdens shrink, data improves, and trust compounds.
Three takeaways stood out:
- Stay lean and transparent. Adopt the Reporting Norms and explain deviations.
- Anchor results in context. Report against needs, thresholds, and targets, not just numbers.
- Pilot a light-touch rating. Test the Intentionality/Additionality/Measurability framework to spot portfolio-level patterns.
Closing the session, Alessia Gianoncelli reminded the community:
“Trust isn’t built by numbers alone. It’s built by how openly we share them, how honestly we interpret them, and how collectively we act on them.”
At Impact Europe, we’ll continue to champion this combination of transparency, rigour, and peer exchange – because only together can we turn reporting from paperwork into progress.
