Three myths about data estimation for GRESB and decarbonisation planning (the fourth will shock you)

The role of data, coverage, and estimation in decarbonising real estate

Parag Rastogi
13 min read · Dec 14, 2023

Data is the foundation of transparency and action towards decarbonisation. Without an accurate assessment of how much energy a building consumes — where, when, and for what — planning capital projects for decarbonisation will yield wasteful, misdirected expense and effort. However obvious this statement may seem, our experience at arbnco has revealed that most property owners and managers on the leading edge of the transition to carbon-free operations struggle with patchy data availability, poor data quality, and inefficient manual data management processes. The bad news is that these issues are ubiquitous, and it is easy to get discouraged, even cynical, about improving data coverage and quality. The good news is that real-world organisations have faced these challenges and arrived at a better place. We believe that turning this corner requires, in part, some data coverage myth busting. A discussion about data coverage may never be “fun”, but there are more reasons for optimism than most people believe.

Before we dive into the article, let’s get a few definitions out of the way. This article is primarily about Data Coverage. In the GRESB framework, coverage implies one of two things:

Spatial coverage — the portion of a single building, or the fraction of all buildings in a portfolio, for which data is available;

Temporal coverage — fraction of time in a given year for which data is available.

Remember that both matter for your GRESB score. Finally, we will use the words “asset” and “building” interchangeably here. An asset can sometimes include multiple separate buildings on a single site, e.g., a retail park, but for the purposes of this article, this distinction is irrelevant.
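To make these two definitions concrete, here is a minimal sketch in Python with invented inputs. It assumes spatial coverage is weighted by floor area and temporal coverage is counted in whole months; the exact definitions used for scoring should be checked against the current GRESB Reference Guide.

```python
# Minimal sketch of the two coverage notions (invented inputs).
# Assumption: spatial coverage is weighted by floor area; GRESB's exact
# scoring rules may differ, so consult the current Reference Guide.

floor_area_m2 = {"asset_a": 5_000, "asset_b": 12_000, "asset_c": 3_000}
has_energy_data = {"asset_a": True, "asset_b": True, "asset_c": False}

covered_area = sum(area for name, area in floor_area_m2.items()
                   if has_energy_data[name])
spatial_coverage = covered_area / sum(floor_area_m2.values())

months_with_data = 9  # e.g., bills available for January through September
temporal_coverage = months_with_data / 12

print(f"Spatial coverage:  {spatial_coverage:.0%}")   # 85%
print(f"Temporal coverage: {temporal_coverage:.0%}")  # 75%
```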

(Sorry, clickbait headlines work, and this article is part of the problem)

Myth 1: The data just isn’t there

Oh it’s there, somewhere…

If owners, occupiers, and operators do not know how energy is consumed in their buildings, large scale decarbonisation of the built environment is unlikely to be efficient or rapid. While one can have an asset upgrade and maintenance plan without energy analytics, what will be missed is a holistic, real-world view of how the performance of one or more buildings can be improved as fast as possible towards an ambitious goal such as net zero or 24/7 carbon-free operation.

Yet, the reality is that the data is there, if you invest in getting to it.

We are long past holding our breath for an era where energy is “too cheap to meter” or waiting for the grid to reliably provide 24/7 carbon-free energy for all. The increasing cost of energy has led to slow, painful increases in transparency. However, this financial incentive to measure consumption does not imply that the data is easy to access. The data could be locked away in any of a variety of systems and databases: a utility’s prehistoric repository of bills and meter readings, a building management system released about the same time as The Beatles’ first album*, a shelf full of paper bills whose format changes monthly, or even a nicely organised spreadsheet held by your landlord (or tenant) who won’t share it. Numerous small variations in billing, units, and reporting across utility operators create unnecessary busywork for bright (and expensive) analysts just to bring the data to one screen, instead of leaving them free to apply themselves to higher-value work such as decarbonisation planning.

As we have built more connectors and tools to utilities and other data providers at arbnco, we have found automation and clever algorithms to be of substantial benefit. They cannot (yet) entirely replace the need for a person to check that meters and buildings are properly matched, but they make that work much easier and more cost-effective. Sometimes we find that data are withheld deliberately, or because of dysfunction in the owner-tenant relationship, but more often than not it is a case of too much effort for an outcome that feels insubstantial. This inability to get to the data is often mundane and familiar: a complex jumble of authorisations, disparate databases and spreadsheets, and the absence of a stable process for collection, aggregation, and interpretation.
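To give a flavour of what that person-in-the-loop matching involves, here is an illustrative sketch of a string-similarity heuristic for pairing a meter’s service address with a building. Everything in it (the addresses, the scoring approach, the use of Python’s standard-library difflib) is a simplified stand-in, not a description of arbnco’s actual algorithms.

```python
from difflib import SequenceMatcher

def normalise(address: str) -> str:
    """Crude normalisation: lowercase and keep only letters and digits."""
    return "".join(ch for ch in address.lower() if ch.isalnum())

def best_match(meter_address: str, buildings: list[str]) -> tuple[str, float]:
    """Return the building whose address best matches the meter's address."""
    scored = [(b, SequenceMatcher(None, normalise(meter_address),
                                  normalise(b)).ratio())
              for b in buildings]
    return max(scored, key=lambda pair: pair[1])

buildings = ["1 King Street, Glasgow G1 5QT", "42 Bath Lane, Glasgow G2 4GG"]
building, score = best_match("1 KING ST., GLASGOW, G15QT", buildings)

# Low-confidence matches still go to a person for review.
print(building, f"(similarity {score:.2f})")
```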

We have spent a lot of time understanding these issues and have unscrambled these pathways into repeatable units. Yes, it can be like untangling a drawer full of power cords, but it doesn’t have to stay that way. We have created consent management modules and a uniform experience of data from different sources. There is no magical cure for all the problems between tenants, owners, and managers, but we can use automation to reduce labour and complexity in ways that open up new data flows.

This is reflected in the success of our clients with their 2023 GRESB submissions. They applied these tools to dramatically increase data coverage while reducing the derangement historically caused by the annual reporting ritual. This was true even for managers operating large, geographically diverse portfolios. Best of all, after the initial investment, there is no reason, and certainly no motivation, to go back to the old way of doing things.

* Please Please Me, 1963, in case you were wondering, or about the same time as the first computers that controlled HVAC systems were released (Bosch — History of Building Automation).

Myth 2: Accurate reporting requires 100% data coverage

Not quite…it depends on which assets you measure and which you estimate.

Over the last decade, GRESB has set in motion a process to encourage transparency in reporting the energy consumption of buildings. Here, we consider the implications of data coverage for the most common use case in GRESB: estimating the average emissions intensity of a commercial real estate portfolio (e.g., the average metric tonnes of CO2e per square metre).
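For clarity, that headline metric is simple arithmetic: total emissions divided by total floor area. A toy example with invented figures (not GRESB data):

```python
# Portfolio emissions intensity: total emissions over total floor area.
# The figures below are invented for illustration.
emissions_tco2e = [120.0, 450.0, 80.0]        # annual tCO2e per asset
floor_area_m2 = [4_000.0, 15_000.0, 2_500.0]  # floor area per asset

intensity = sum(emissions_tco2e) / sum(floor_area_m2)  # tCO2e/m2/year
print(f"{intensity * 1000:.1f} kgCO2e/m2/year")        # 650/21500 ~= 30.2
```

Note that the portfolio-level figure is an area-weighted quantity by construction (summing emissions and areas before dividing), not a simple average of per-asset intensities.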

Classical statistics provides several methods for assessing differences, or the lack thereof, between the performance of individual buildings and portfolios when data coverage and quality are assured. Given that one rarely has access to all the data about a problem or subject of analysis (in our case, a portfolio of buildings), all analysis of real-world data must trade some accuracy for practicality — allowing for imperfections in data availability and quality — to improve reach and applicability. In fact, robust estimation is a cornerstone of quantitative analysis. Yet an estimate is not a replacement for real data when real data is available.

We examined four anonymised real estate portfolios (call them portfolios 1–4) from the GRESB submission database, experimenting with statistical sampling to demonstrate the effect of partial data coverage on the accurate estimation of building and portfolio performance. The original population data for each portfolio is wide and exponentially distributed [Figure 1]. That is, many buildings have low annual energy consumption while a few consume a great deal, and the range of consumption per building within one portfolio is wide.

Figures 2–5 show the average error in estimating the “true” sum of energy consumption for a given portfolio using samples of 10% to 100% of the total. A sample of 10%, for example, implies 10% data coverage for energy data, i.e., that the energy performance of only 10% of assets in that portfolio is reported. We estimated the total consumption of the portfolio by “scaling up” the sum of each sample, e.g., multiplying the sum of the 10% sample by 10, the sum of the 20% sample by 5, and so on. Each graph is created by drawing 200 different samples at each size fraction, e.g., selecting 20 random assets from a total of 100 assets in a fund, 200 times. The grey line in each figure represents the average error in estimation per size fraction, the blue band shows the bounds within which 68% (“one sigma”) of sample errors fall, and the orange band the bounds within which 95% (“two sigma”) fall.
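The procedure is straightforward to reproduce. The sketch below mirrors it on a synthetic, exponentially distributed “portfolio” (the anonymised GRESB data behind the figures is not public): at each coverage fraction, draw 200 random samples, scale up each sample’s sum, and summarise the distribution of errors.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for one portfolio: 100 assets whose annual
# consumption is exponentially distributed, as in Figure 1.
n_assets = 100
population = rng.exponential(scale=500_000, size=n_assets)  # kWh/year
true_total = population.sum()

n_draws = 200
for frac in np.arange(0.1, 1.01, 0.1):
    k = round(frac * n_assets)
    errors = np.empty(n_draws)
    for i in range(n_draws):
        sample = rng.choice(population, size=k, replace=False)
        estimate = sample.sum() * (n_assets / k)  # scale the sample sum up
        errors[i] = (estimate - true_total) / true_total
    one_sigma = np.percentile(errors, [16, 84])
    two_sigma = np.percentile(errors, [2.5, 97.5])
    print(f"{frac:4.0%}: mean error {errors.mean():+7.3f}, "
          f"68% band {one_sigma.round(3)}, 95% band {two_sigma.round(3)}")
```

Plotting the mean and the two bands against the coverage fraction reproduces the shape of the figures.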

Using truly random samples [Figures 2c, 3c, 4c, 5c], the average error over 200 samples converges rapidly to zero. If we take enough random samples (200 in this case), the mean of whatever statistic we calculate on those individual samples (e.g., the sum of consumption), even without any sample having knowledge of another, approximates the true value of the population statistic (e.g., the sum of consumption for the whole portfolio). The error ranges (the 1 and 2 sigma bands) shrink rapidly: 95% of the sample means and sums fall within 10% of the true value with only half the assets included.

This is an important result: if a manager can create a statistically representative sample of assets, they only need to sample half of the portfolio to reliably estimate these two key performance indicators.

However, perfectly random sampling is challenging to achieve in the real world. Opportunities for bias and skew are ubiquitous, creeping in through historic factors, building characteristics, local regulation, and so on. In our interactions with clients, these problems show up repeatedly. For example, states in the United States with benchmarking mandates make it easier to obtain energy data, so buildings in those states are much more likely to form part of a sample. All this means that, in practice, errors can grow to 100% or more and may never converge to the correct answer. This is a classic problem in statistics, not unique to real estate: the information needed to create truly random samples of a portfolio is often more expensive to obtain than simply getting data from every building.

Estimates of population statistics from partial coverage converge rapidly to the true values but, given the difficulty of ensuring completely unbiased sampling, maximising coverage is essential.

We represented two common biases in sampling with examples: beginning with the smallest energy consumers first [Figures 2a, 3a, 4a, 5a] or with the largest [Figures 2b, 3b, 4b, 5b]. The errors in these cases are significantly larger on average, and the chance of a specific sample being accurate is comparatively slim. Our simulations illustrate that biased sampling can shift the coverage required to achieve a reasonable error from under 30% to nearly 80%. Notably, increasing coverage from 80% to 100% yields negligible changes in estimates of average intensity or total emissions.
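The two biased orderings are easy to emulate on the same kind of synthetic portfolio: rank assets by consumption and take the smallest (or largest) consumers first. The numbers below are illustrative, not drawn from the GRESB portfolios.

```python
import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(scale=500_000, size=100)  # synthetic kWh/year
n, true_total = len(population), population.sum()
ranked = np.sort(population)  # ascending consumption

def scaled_error(sample: np.ndarray) -> float:
    """Relative error of the scaled-sum estimate of the portfolio total."""
    return (sample.sum() * n / len(sample) - true_total) / true_total

for frac in (0.1, 0.3, 0.5, 0.8):
    k = round(frac * n)
    smallest_first = scaled_error(ranked[:k])  # under-estimates badly
    largest_first = scaled_error(ranked[-k:])  # over-estimates badly
    random_mean = np.mean([scaled_error(rng.choice(population, k, replace=False))
                           for _ in range(200)])
    print(f"{frac:.0%} coverage: smallest-first {smallest_first:+.0%}, "
          f"largest-first {largest_first:+.0%}, random (mean) {random_mean:+.0%}")
```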

Figure 1 a, b, c, d: Distributions of energy consumption values in each portfolio. Graphs created by the author with data from GRESB.
Figure 2 a, b, c: Mean and distribution of errors at each sample size fraction from 10–100% for portfolio 1. Graphs created by the author with data from GRESB.
Figure 3 a, b, c: Mean and distribution of errors at each sample size fraction from 10–100% for portfolio 2. Graphs created by the author with data from GRESB.
Figure 4 a, b, c: Mean and distribution of errors at each sample size fraction from 10–100% for portfolio 3. Graphs created by the author with data from GRESB.
Figure 5 a, b, c: Mean and distribution of errors at each sample size fraction from 10–100% for portfolio 4. Graphs created by the author with data from GRESB.

A final word: while the necessity of gathering this much data has created a “problem” (opportunity!) for reporting funds to surface data that was previously hidden, data gathering is not an end in itself. The point of gathering accurate consumption data is to facilitate the reduction of carbon in running and constructing buildings. This means we have to both close the coverage gap and move from aggregated numbers to finer-grained data capture for meaningful analyses and action.

In the global effort to decarbonise buildings, estimates are, at best, placeholders.

Myth 3: Improvement in coverage is inevitable

Only in very limited circumstances …

This one is less of a myth than a reminder not to get complacent about the seemingly unstoppable progress of technology. Despite best efforts, progress in increasing data coverage has been slow and grinding [Figures 6, 7], particularly in regions with no legal mandates forcing utilities to centralise data. Poor data coverage is not, by and large, a technological issue. Rather, in our experience, the primary reason coverage has proven difficult to improve over time is governance, i.e., data isn’t shared when it could be. As a result, not only are average coverage scores in most regions increasing very slowly, but user feedback suggests that marginal gains in coverage from additional manual effort will be slim going forward. In other words, the conventional approach of obtaining data building-by-building with more labour has diminishing marginal returns in many markets.

Figure 6: Change in data coverage across all regions (extract from the 2023 regional results).
Figure 7: Change in data coverage for submissions from the Americas (extract from the 2023 regional results).

What is needed is a roadmap towards automating the achievement of (nearly) 100% coverage (or a statistically representative sample), with a clear path from annually and monthly aggregated figures towards hourly data. For technology providers like us, this means automation and better consent management by design, to ease the process of matching buildings to meters, meters to users, and users to data. There is no new physics to be discovered here.

The challenge lies in successfully abstracting the data in a way that is flexible to different use cases and able to transition seamlessly from reporting for compliance through deep analytics to Measurement & Verification (M&V).
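As one hypothetical sketch of such an abstraction (illustrative entities only, not arbnco’s schema), the chain from buildings to meters to users to data might be modelled like this:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical entities for illustration; not arbnco's actual schema.

@dataclass
class Consent:
    granted_by: str    # the user who authorised access
    scope: str         # e.g., "electricity bills for Scope 3 reporting"
    valid_until: date  # consent should expire, not live forever

@dataclass
class Meter:
    meter_id: str
    utility: str
    consents: list[Consent] = field(default_factory=list)

@dataclass
class Building:
    building_id: str
    meters: list[Meter] = field(default_factory=list)

def accessible_meters(building: Building, today: date) -> list[Meter]:
    """Meters with at least one live consent, i.e., data we may actually pull."""
    return [m for m in building.meters
            if any(c.valid_until >= today for c in m.consents)]
```

Treating consent as a first-class, expiring object is what makes the “sign off for 3 years for 30 sites at once” pattern described below tractable.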

For owners and managers of buildings, this includes better communication with clients (eased by platforms that facilitate and simplify information exchange) and the adoption, and non-confrontational enforcement, of green lease clauses around data transparency. It helps when the party requesting the data is clear about what the data will be used for, defined as narrowly as feasible*. Best practice includes ensuring the party releasing the data has an easy and scalable way to share it, e.g., signing off for 3 years for 30 sites at once, not randomly spaced information requests that are annoying at best and incoherent at worst. Green lease clauses such as those from the Better Buildings Partnership should, in theory, solve some of the communication issues by establishing expectations at the start of a tenant-owner relationship. However, our experience suggests that these are more honoured in the breach than the observance.

* Less of “give us your data so we can use it for anything we might think to put our mind to in the future” and more of “give us your data because we need to report our Scope 3 emissions correctly and subsequently analyse it in this and that platform to identify wastage and a path to carbon-free operation”.

Bonus Myth 4: GRESB is the end of my decarbonisation journey

Since this article is written in the context of the GRESB rating system, we feel obliged to end with a call to action disguised as a myth*.

A simplification like “more data is always better” is too vague to be useful, and data by itself does not imply better decision-making. However, planning informed by good-quality data about baselines, the relative importance of different sources of emissions, asset conditions, grid characteristics, weather, etc., is likely to be more robust. It is the best way to avoid obvious mistakes such as focussing on the wrong metrics (we ditched plastic straws but didn’t change our oil-based heating system) or mis-scoping (we drove down the Scope 1 emissions from our corporate office, 10% of our total, at great cost, but neglected emissions from our tenants’ energy use, which constitute 80% of our total).

Our experience has shown us the data is almost always there, somewhere. The issue lies in how it is extracted and harmonised to yield meaningful intelligence in a cost-effective and repeatable manner.

The GRESB rating system is designed to send a simple ESG signal to capital markets, allowing them to allocate capital to real assets in line with their ESG strategies or the role of ESG in their investment strategies. This means that nuance is inevitably lost in the translation from building science to investment signals. The idea is that as your assets improve, the score of the fund in which they reside improves in tandem. In theory, anything you do to improve the environmental performance of a building should be reflected in your score, though no evaluation system is perfectly sensitive to every change in its subjects.

The principal objective of any ESG rating framework is to provide a score or performance indicator that helps diverse stakeholders evaluate differences in the performance of entities such as buildings (assets) or groups of buildings (funds, portfolios).

GRESB does not produce an action plan for decarbonisation or ideal corporate governance; it is a broad assessment of how a fund or portfolio is performing on a specific set of metrics and measures. This means that GRESB is an essential first step of your decarbonisation journey, within the context of wider ESG performance and issues, not the end of it.

* Got you again with the clickbait, sorry.

Get in touch!

I am a research scientist and VP of Business Development at arbnco in Glasgow, UK, where I help to build and manage strategic partnerships, specify technical products, produce technical literature, and lead and guide internal research and external collaborations. Over the past six years, I have also been a product manager at arbnco, during which time I was responsible for the development of our Indoor Environmental Quality and Building Controls products.

If you would like to continue the conversation, get in touch via LinkedIn or email me at contact@paragrastogi.com.

Acknowledgements

I would like to thank Dr Chris Pyke for his helpful comments and patience, Motassim el Bakali for his help wrangling the data, and Victor Fonseca for his detailed responses to my many questions about GRESB evaluation.


Written by Parag Rastogi

I work on health and wellbeing in buildings, IoT-based controls, and the use of machine learning and data science in building performance evaluation.
