
How to use science to know whether the change you’re making is really working

These behavioral science experts introduce the Idea framework, a methodology that helps leaders gauge whether they’re doing the right thing.


“Adapt or perish” has become a hallmark of modern leadership. Leaders are increasingly faced with new challenges in uncertain contexts, and expected to address complex questions about the future of their organizations, the workforce, and even societal issues. How to plan for increased employee autonomy? Climate change? Social justice? Cost-of-living crisis?

Managing change at multiple levels requires leaders to adjust their approach to suit unique contexts and challenges. To do this, leaders can take inspiration from behavioral science by tapping into the scientific mindset. We’re not suggesting managers go into a lab or build complex categorization systems for their employees. Instead, leaders can experiment with different actions that might improve on the status quo. They can think of those actions as “interventions”—strategies, policies, or behaviors that can be implemented to change an outcome. Adopting an experimental mindset allows leaders to approach workplace change strategically.

An experimental approach starts by considering the unique context of an issue, forming hypotheses about how to change it, testing different interventions, and selecting the most appropriate design based on the insights from the experiments. These are the building blocks of the Idea framework (created by Grace Lordan for the London School of Economics Inclusive Leadership Through Behavioral Science course). The method encourages leaders to use science to reach better solutions and to evaluate the changes they make in the workplace, so they can be confident those changes are working.

This is the Idea framework:

  • I: Identify a problem.
  • D: Design an intervention.
  • E: Evaluate effectiveness.
  • A: Assess if the solution is fit for purpose.

A classic approach to experimentation is a “randomized controlled trial.” Here, leaders randomly choose parts of the organization to participate in their intervention. They can then determine effectiveness by comparing the outcomes observed in those exposed to the intervention (the treatment group) with those who were not (the control group).

Randomization in this way is particularly useful when an intervention can be monitored only over a long-term window. With lots of other things happening in the organization, it allows the impacts of the intervention to be disentangled from other changes taking place simultaneously. Random selection can also increase confidence in the results. Leaders can be more certain that what they found is directly tied to the intervention, not inherently favorable characteristics of the teams selected. This approach is ideal, for example, when making a major change to the promotion process to improve gender balance.
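As a minimal sketch of this comparison, with invented numbers rather than data from any real organization, the treatment effect in a randomized trial can be estimated as the difference in average outcomes between the two groups:

```python
from statistics import mean

# Hypothetical data: average new ideas shared per meeting for teams
# that were randomly assigned to the intervention or left unchanged.
treatment = [7, 5, 6, 8, 6]   # teams exposed to the intervention
control = [4, 5, 3, 4, 4]     # teams that were not

# Because assignment was random, the difference in means estimates the
# intervention's effect rather than pre-existing team differences.
effect = mean(treatment) - mean(control)
print(f"Estimated treatment effect: {effect:.1f} extra ideas per meeting")
```

With more data, a leader (or an analyst on their team) would also want a sense of how much this estimate could vary by chance, but the core logic of the comparison is just this subtraction.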

Alternatively, teams can be observed over time to understand whether there is a noticeable upward or downward trend in the outcome after an intervention. This before-and-after approach is most suitable when data can be gathered regularly and consistently—for example, to monitor how changes in the way weekly meetings are chaired affect team dynamics.

These two approaches allow leaders to capture the impact of interventions. Before evaluating the results, it is also important to define what “good” looks like by documenting and quantifying what is considered a successful outcome relative to the cost of the intervention. Costs should include both direct expenses and indirect costs like the time invested in the intervention. Using a predetermined metric for evaluation avoids cognitive dissonance and sets clear parameters for assessing success.

BEWARE OF ADAPTATION

Adaptation occurs when individuals’ behavior changes as they adjust to a new environment. It shows up in interventions when an intervention initially changes behavior, but the effect fades over time as employees adjust to the changed environment or the novelty of the intervention wears off. This is why long-term monitoring of interventions is important to ensure that change persists. If it does not, the leader may want to alter the intervention or try something new.

Consider two very simple examples of the Idea framework in action.

EXAMPLE 1: HUMOR AT WORK

  • Identify: A leader has noticed that team meetings have been lacking in fresh ideas, and people are hesitant to speak up. They suspect there may be an inclusivity problem where employees do not feel psychologically safe to share new ideas and insights. To understand the scale of the problem, the leader observes a week of meetings and writes down the number of new ideas that come up. They find that, on average, each meeting has three new ideas shared by employees. They want to increase this number.
  • Design: The leader then looks to create an intervention to address the lack of ideas being shared by employees. They spend time exploring different options and learn that psychological safety can be cultivated in an environment where humor is used to bond team members, helping people feel safe to share ideas. They decide to use this as the basis for their intervention, defining the outcome as the number of new ideas shared by employees per meeting. The leader decides that if the intervention can raise the number of new ideas shared by 50%, the intervention will be successful. The leader decides that they are willing to spend two hours per week researching, implementing, and analyzing the intervention to reach this outcome. The intervention the leader decides to implement is to begin every team meeting with a joke.
  • Evaluate: The leader collects data on the number of ideas shared per meeting and compares it to the baseline. Over the following week, they find that since they began opening meetings with a joke, the average number of ideas shared per meeting has risen to five—an increase of approximately 67%. The leader also estimates that the time spent implementing and analyzing the intervention averaged one hour and 45 minutes per week.
  • Assess: In looking at these results, the leader notes that the inclusion outcome exceeded their predetermined goal, and the time spent was under the stipulated amount. As such, this intervention is considered to be cost-effective. The leader is happy with these results and decides to continue implementing the intervention for the coming meetings. However, they are wary that the effects over time may diminish if adaptation occurs, so they will continue to collect data for the next few months. The leader also considers whether the intervention is causing other positive or negative spillover effects, so they decide to collect additional data on the team’s innovative performance and team members’ job satisfaction.
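The evaluation in this example reduces to a simple percentage comparison against the predetermined 50% goal. As a sketch, using the figures from the example:

```python
baseline = 3      # average new ideas per meeting before the intervention
observed = 5      # average after opening each meeting with a joke
goal = 0.50       # predetermined success threshold: a 50% increase

# Relative increase over the baseline, compared against the goal
# the leader committed to before running the intervention.
increase = (observed - baseline) / baseline
print(f"Observed increase: {increase:.0%}")
print("Keep the intervention" if increase >= goal else "Redesign it")
```

The point of writing the threshold down first is that the decision rule is fixed before the results come in, so the leader cannot quietly move the goalposts afterward.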

EXAMPLE 2: ADVOCACY AT WORK

  • Identify: A leader of a large team is informed by a team member that they feel the promotion process is unfair, and that certain individuals are being promoted over others. Specifically, the employee feels that men are receiving a disproportionate number of promotions, while eligible women are being passed over for such opportunities. The leader starts by assessing whether this claim is true. They can verify it using existing data on the demographics of employees who received promotions in the past year. The leader does this by running an analysis with gender as a predictor variable and promotion as the outcome. They find that being male significantly and positively predicts an employee receiving a promotion.
  • Design: Based on their experience in previous organizations, the leader has learned that creating an advocacy program can help equalize promotion rates. When advocates speak up on behalf of all individuals, opportunities become more equal, and fewer people who are ready will be passed over for promotion. The outcome the leader wants to achieve is for gender to no longer predict the likelihood of promotion. They are willing to spend $5,000 on this intervention. The leader decides to run a randomized trial in which employees are randomly assigned either to be matched with an executive-level advocate (experimental group) or not (control group). The leader subsidizes the cost of coffee, lunch, and dinner meetings between each employee and their advocate for the next year.
  • Evaluate: After five months, the leader assesses the cost and efficacy of the intervention. They analyze the gender impact on promotion rates in the experimental and control group. In the control group, the results remain unchanged. In the experimental group, gender remains a significant predictor of promotion rate, but the effect is weakened. The cost of the intervention so far is $4,500.
  • Assess: The leader considers what these results mean. Although marginal progress was made, the desired outcome was not reached. The cost was kept slightly under budget. The leader decides to abandon this intervention as it has a limited impact for the investment. They will return to the design stage of the Idea framework and choose another intervention that may be more successful in equalizing the promotion rate for men and women.
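The leader in this example runs a regression with gender as a predictor; as a simpler stdlib sketch with invented records, the same question can be explored by comparing raw promotion rates by gender in each group:

```python
# Hypothetical promotion records, not real data: (gender, group, promoted)
records = [
    ("M", "experimental", True), ("W", "experimental", True),
    ("M", "experimental", True), ("W", "experimental", False),
    ("M", "control", True),      ("W", "control", False),
    ("M", "control", True),      ("W", "control", False),
]

def promotion_rate(gender, group):
    """Share of employees of a given gender in a group who were promoted."""
    rows = [r for r in records if r[0] == gender and r[1] == group]
    return sum(r[2] for r in rows) / len(rows)

# A smaller gender gap in the experimental group than in the control group
# suggests the advocacy program is narrowing, but not eliminating, the
# disparity—matching the "weakened but still significant" result above.
for group in ("experimental", "control"):
    gap = promotion_rate("M", group) - promotion_rate("W", group)
    print(f"{group}: promotion gap = {gap:.0%}")
```

A regression gives the leader more than this comparison does (statistical significance, control variables), but the underlying quantity being tested is the same gap.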

In today’s competitive economy, leaders need to be sure that the changes they make in the workplace are cost-effective so that they are not wasting resources and are, instead, moving the dial in the right direction. They need to be able to recognize when ideas fail and fearlessly test new ones that can deliver improvement. Embracing an experimental approach to leadership allows for this, in a manner that is easy to fit alongside the leader’s core tasks.


ABOUT THE AUTHOR

Grace Lordan is the founding director of the Inclusion Initiative and an associate professor in behavioral science, both at the London School of Economics.
