Why Energy Models are Poor Predictors of Energy Use

An earlier version of this article appeared on www.sefaira.com and later on the SketchUp Blog.

Here’s a scenario: You’ve run an energy analysis and compared your results to a benchmark, perhaps from a source like CBECS, to see how your design stacks up against similar buildings. And the results look good—too good. Do you show the results to the client, and risk setting unrealistic expectations? If not, what good are the numbers?

It turns out that most energy models are not accurate predictors of energy use. Numerous studies, such as those commissioned by the USGBC, have shown wide variation between simulated and measured energy use. As more architects incorporate performance analysis into their design process, it's important to understand why this is the case, and why a model doesn't need to be predictive to be valuable.

Why Models are Wrong

To begin with, it’s good to understand how analysis engines are validated: against one another, through a procedure defined by ASHRAE 140. Engines that have been validated this way must show good agreement with other industry-standard engines, or explain the discrepancies. However, just because engines agree with one another doesn’t mean they’ll agree with real-world results.

There are four primary reasons for this:

  1. Simulations often use efficient inputs. Recent versions of building energy codes prescribe very efficient envelopes, mechanical systems, lighting, and equipment. Standards like ASHRAE 90.1 and the IECC have improved significantly since 2000, targeting average reductions in energy use of 30 to 40%. As a result, we should expect a new building designed to current code to perform substantially better than a national median that includes the entire building stock (such as CBECS), so comparing your simulation results against such benchmarks is often not meaningful.

  2. Simulations assume perfect construction. In reality, overall R-values, infiltration rates, and glazing properties tend to be worse than assumed. Often the R-values entered into an analysis describe a “typical” clear-wall section, and therefore don’t account for thermal bridging at corners, eaves, and around openings (see the illustrative calculation after this list).

  3. Simulations make many assumptions about building occupancy, operation, and internal loads, which often underestimate real-world usage. “Best practice” values for these inputs are often optimistic, and building operation can change over a building’s life in ways that are difficult to predict up front.

  4. Simulations use typical weather files, but no year is perfectly typical—if anything, weather patterns are getting less typical thanks to climate change.
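
To make the second point concrete, here is a minimal sketch of an area-weighted U-value calculation. All of the areas and U-values are hypothetical, chosen only to illustrate how a modest fraction of thermally bridged area can erode the nominal performance a model assumes.

```python
# Illustrative only: an area-weighted U-value showing how thermal bridging
# erodes the "typical" wall R-value an energy model might assume.
# All areas and U-values below are hypothetical.

def area_weighted_u(components):
    """components: list of (area_m2, u_value_W_per_m2K) tuples."""
    total_area = sum(area for area, _ in components)
    return sum(area * u for area, u in components) / total_area

clear_wall = (85.0, 0.28)  # 85 m2 of clear wall, roughly R-20 (IP)
bridged = (15.0, 1.10)     # 15 m2 of corners, eaves, and framing around openings

u_effective = area_weighted_u([clear_wall, bridged])

print(f"Nominal (clear-wall) U-value: {clear_wall[1]:.2f} W/m2K")
print(f"Effective U-value with bridging: {u_effective:.2f} W/m2K")
# ~0.40 vs 0.28 W/m2K: the effective U-value is about 44% higher than the
# clear-wall figure a model might use.
```

Even a rough check like this can show how far "as-designed" and "as-built" performance may diverge.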

What Architects Can Do

The goal of most design-stage simulations is not to predict the future, but rather to make good design decisions and provide the greatest value to your clients. Designers want to know whether decisions are moving the design in the right direction, and how different design options compare. These are questions that can be answered with comparative results—in other words, without perfectly predictive models. (You don’t have to take our word for it—see Daniel Overbey’s article “Every Energy Model is Wrong — And Here is Why They’re Indispensable.”)

In addition, the following best practices can help you avoid trouble and get the most out of your analysis:

  • Present results in relative, not absolute terms. Use percentage improvements instead of hard numbers, particularly in the early stages of design. Be sure your clients understand that models entail many assumptions, and that energy use depends not only on design but also on operation and weather.

  • Match the level of detail with the design decision being made. If you’re comparing large-scale massing options, default or typical values based on building usage and climate zone will provide good comparative results. If you’re optimizing insulation levels, you’ll want to spend the time to ensure that other envelope values and internal conditions match your design.

  • Use sensitivity analysis to test the impact of assumptions, and present simulation results as a range rather than a single number. For instance, you can test different occupancy and usage scenarios to see how much these assumptions shift predicted performance (a minimal sketch follows this list). For more on this topic, see my article on Predictive Energy Modeling at the Design Stage.
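
As a minimal sketch of that last point, the snippet below runs the same design under a few occupancy and equipment assumptions and reports the spread rather than a single prediction. The function run_energy_model is a hypothetical stand-in for whatever simulation engine or API you actually use, and the formula inside it is purely illustrative.

```python
# Minimal sensitivity-analysis sketch: run one design under several usage
# scenarios and report a range of results instead of a single number.
# run_energy_model is a hypothetical stand-in for a real simulation call.

from statistics import mean

def run_energy_model(occupants_per_100m2, equipment_w_per_m2):
    """Placeholder surrogate: the linear formula below is illustrative only
    and has no physical meaning. Swap in your actual simulation engine."""
    return 60 + 2.0 * occupants_per_100m2 + 1.5 * equipment_w_per_m2  # kWh/m2/yr

scenarios = {
    "optimistic":   {"occupants_per_100m2": 10, "equipment_w_per_m2": 8},
    "expected":     {"occupants_per_100m2": 15, "equipment_w_per_m2": 12},
    "conservative": {"occupants_per_100m2": 20, "equipment_w_per_m2": 18},
}

results = {name: run_energy_model(**inputs) for name, inputs in scenarios.items()}

low, high = min(results.values()), max(results.values())
print(f"Predicted EUI range: {low:.0f}-{high:.0f} kWh/m2/yr "
      f"(mean {mean(results.values()):.0f})")
for name, eui in results.items():
    print(f"  {name}: {eui:.0f} kWh/m2/yr")
```

Presenting the low and high values together communicates the uncertainty far better than quoting a single "expected" result.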

Modeling early and often can help architects build a case for a preferred design option, save clients money—both operating and capital costs—and/or meet specific performance targets (such as LEED, the 2030 Challenge, or performance-based energy codes). The benefits of informing design decisions and setting a project on track typically far outweigh the risks of non-predictive models—particularly when those risks are understood and managed.