Growth Hacking and Experimentation in Education: 3 Key Steps to Get Started
Growth hacking is a term most often used in the context of startups and software companies. It describes an approach to strategy that focuses on maximizing growth, typically measured in customers or revenue, with as little budget and time as possible. Institutions and organizations can apply the same approach when looking to diversify revenue initiatives or improve teaching and learning practices. In this post, we’ll explore growth hacking strategies in the context of education from 2 key perspectives:
- Organizations and Professional Learning. How can we use growth hacking from a business perspective? This could be useful when considering new revenue streams, such as an eCommerce project, internal company programs, micro-credentialing, etc.
- Higher Education and Institutional Learning. How can we use an experimental mindset and growth hacking mechanics in the context of academic effectiveness? Some sort of “instruction hacking”, if we may call it that.
For the purposes of this post, we’ll refer to learners, students, and employees collectively as “users”.
Step 1: Map the learning journey
The first step to growth hacking is to understand the strategy, direction, and goal of your learning program and to create a “roadmap” that visualizes how users will navigate each stage, so you can maximize value and the user experience. This applies whether you’re optimizing an online course offering or the learning outcomes within an actual course. For instance, picture a user who wants to join an engineering course via an eCommerce offering at an example institution. To map the learning journey for this user, some initial questions to consider include:
- How does this person get to know about the course?
- What happens after this person arrives at the course description page?
- What information or value will they receive once they’ve completed the course?
- What are the steps that this person needs to take to purchase the course?
- How do they enroll? Is it automatic? If not, what steps do they need to take, if any?
- What factors and criteria will determine whether a user is considered to be engaged in the course or successful at completion?
In the context of academic effectiveness, map the steps users will take in their learning experience and consider how to optimize each one:
- How can a user discover a course that could close a skill gap?
- How does the user get enrolled?
- How do they discover the value of the course?
- How do users navigate through course topics and participate/engage with the content or other learners?
- How do they complete the course successfully?
How the journey looks will depend on your context, your users, and the processes already in place. Aim to track the success of each step within this journey, from course enrollment through in-course engagement to course completion and user evaluation. Each step of this journey should be considered a “conversion” in itself. Tracking each step in the process provides the information needed to improve courses and conversions across the entire journey, leading to more enrollments, higher engagement, and more completions.
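To make this concrete, here’s a minimal sketch in Python of how you might compute the conversion rate at each step of a mapped journey. The stage names and counts are hypothetical placeholders, not data from any real program:

```python
# A minimal funnel for a mapped learning journey. Each entry is a
# journey stage and the number of users who reached it (hypothetical).
journey = [
    ("visited_course_page", 1000),
    ("purchased_course", 180),
    ("enrolled", 172),
    ("engaged_week_1", 120),
    ("completed_course", 85),
]

previous = None
for stage, count in journey:
    if previous is None:
        print(f"{stage}: {count} users (journey entry point)")
    else:
        # Conversion rate relative to the previous stage highlights
        # exactly where users drop off.
        rate = count / previous * 100
        print(f"{stage}: {count} users ({rate:.1f}% from previous stage)")
    previous = count
```

Even a simple view like this makes it obvious which “conversion” deserves your next experiment.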
Step 2: Approach L&D improvements with an experimental mindset
Mapping the experience gives you visibility into the things you can immediately start optimizing: communications, friction in certain steps, changes to how you present content, etc.
Instead of relying solely on traditional practices or intuition about what might work, we can frame the improvements we want to make as experiments that test and validate whether we’re advancing in the right direction. Not only does this encourage more efficient course design, evaluation, and improvement, but it also limits risk and upfront investment by testing methods with either a subset of your target audience or for a limited time period.
A deep dive into the scientific method or statistical analysis is beyond the scope of this post, but here are some things to keep in mind as you plan your initiatives as experiments:
- Create a hypothesis you want to test. This could come from an observation, a market trend, a new best practice, etc. This is what you believe could have an effect on the target metric in your user’s journey.
- Communicate the “why”. Map your hypothesis to the stage in the user journey that you want to test, and explain why the result you expect matters. This helps justify the initiative to others on the team.
- Identify how. Plan how to test your hypothesis in a way that is representative of the entire target audience it would apply to if successful, while minimizing initial investment and risk. Remember that in some cases you don’t need to build everything to test your assumptions. Prototypes, “Wizard of Oz” experiments, and other mechanisms can be your allies here.
- Get the team onboard. Sharing why you’re doing experiments and the metrics you expect to improve is a great way to get the ball rolling and earn buy-in from your team. But here’s another aspect that really helps: get them involved in the ideation process. If you’re a training or L&D leader, you can act as a facilitator, helping people understand why experimentation matters and getting them excited about sharing their improvement ideas. Make sure they feel appreciated no matter the results of the experiment.
- Create an idea backlog and prioritize. Act as a product manager for your experimentation pipeline and choose selection criteria that work best for your organization. A very popular framework in growth circles is the Impact, Confidence, and Ease (ICE) model (see the sketch after this list).
- Save your results and thinking. The biggest recommendation here is to document everything related to your experiments, from ideas to results. This will be helpful when you discuss findings with other members of your team and when you present experiment results as evidence for the decisions you make.
- Consider investing in experimentation itself. From tools to education on statistics, as you prove the value of experimentation in your institution or company, it makes sense to step up your game and improve how everybody can benefit from it.
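To illustrate the prioritization step, here’s a small Python sketch of an ICE-style backlog ranking. The ideas and 1–10 scores are hypothetical, and averaging the three scores is just one common convention (multiplying them works too):

```python
# A sketch of ICE (Impact, Confidence, Ease) prioritization for an
# experiment backlog. Ideas and 1-10 scores are hypothetical.
backlog = [
    {"idea": "Enrollment reminder emails", "impact": 7, "confidence": 8, "ease": 9},
    {"idea": "Redesign course description page", "impact": 8, "confidence": 5, "ease": 4},
    {"idea": "Weekly personalized feedback", "impact": 9, "confidence": 6, "ease": 3},
]

for item in backlog:
    # Average the three scores into a single ICE score.
    item["ice"] = (item["impact"] + item["confidence"] + item["ease"]) / 3

# Highest-scoring ideas rise to the top of the experiment queue.
for item in sorted(backlog, key=lambda i: i["ice"], reverse=True):
    print(f'{item["ice"]:.1f}  {item["idea"]}')
```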
Step 3: Select the appropriate tools
Following that last bullet point, tools can be organized into 3 big categories depending on the value they provide for your experimentation framework. Let’s take a look at some examples.
Tools to organize your experimentation pipeline
The idea here is to set up a process that works for you, in a way that provides quick visibility into the status of your running experiments, their results, etc. We recommend starting with a project management tool you already use and are familiar with, such as Basecamp, Asana, or Trello.
If you want to go a bit further in tweaking your tool to match your exact process, consider tools like Airtable or Coda, which are flexible and provide more customization options.
If you want something developed specifically for tracking growth hacking, or your focus will be mainly on the business side of growth, a dedicated growth hacking experiments platform might be another option for you.
Analytics tools
Analytics tools can help you quantify the user journey you previously mapped and put numbers to the behaviors you observe. In this space, you can start by looking at the number of users that arrive at a stage, the number who drop off or lose interest at a given moment, metrics about events, etc. An important consideration here is to be intentional about what you measure: you’ll be able to measure a wide variety of factors and behaviors, but only some will provide the kind of insight that is useful for your experiments and decisions.
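One way to stay intentional is to decide up front which events map to your journey and record only those. Here’s a minimal Python sketch of such a deliberately small event schema; the event names and fields are hypothetical:

```python
# A sketch of an intentionally small event schema: track only the
# events that map to journey stages. Names and fields are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone

# The short list of events you actually plan to act on.
TRACKED_EVENTS = {"course_page_viewed", "enrolled", "module_completed", "course_completed"}

@dataclass
class LearningEvent:
    user_id: str
    event: str  # must be one of TRACKED_EVENTS
    course_id: str
    timestamp: datetime

def record(e: LearningEvent) -> None:
    if e.event not in TRACKED_EVENTS:
        return  # outside the measurement plan: deliberately ignored
    print(f"{e.timestamp.isoformat()} {e.user_id} {e.event} {e.course_id}")

record(LearningEvent("user-42", "enrolled", "eng-101", datetime.now(timezone.utc)))
```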
There are a lot of tools in the education space, but here are some that we highly recommend:
- Learning Analytics. A lot can be said about this category, but these are tools that help you understand what’s happening in your courses, your program, with your learners, your content, etc. Open LMS provides more than 40 reports out of the box to measure things like activity completion, course metrics, etc. You can use this information to conduct experiments about learning models, content effectiveness, etc. Depending on your institution’s needs, you can also expand your capabilities with tools like IntelliBoard or Watershed to bring additional insight or connect data with other sources of information.
- Web/product analytics. If you want to track aspects of your initiative from a wider perspective, consider adding web/product analytics solutions like Google Analytics or Mixpanel into the mix. These tools can help you understand traffic and user behavior across web properties such as your main website, the LMS, and other web services for learners, such as enrollment services. This is especially useful in revenue diversification initiatives such as eCommerce, internal company programs, etc. These tools don’t provide the level of detail on the learning experience that learning analytics tools do, but they can complement your experiments or be used to run experiments at other stages of the journey you mapped.
- Visualization/recording tools. These aren’t analytics tools in the strictest sense, but visualization tools like Visual Website Optimizer or Hotjar, which let you create heatmaps or record user sessions, can add a layer of insight that is very useful for understanding usage and interactions. They can bring clarity to questions like: How do my users navigate the course? How are new users interacting with the platform? What are some ways we can help them get started?
It’s important to note that any analytics tool, or any other application that interacts with user information, should be treated with extreme caution and respect for users’ privacy. Be sure to work with your data privacy team on how you can run your experiments in a way that works for both the user and the organization.
“Lever” and experimentation tools
In this category, we’ll highlight tools that you can use to implement your experiments at different levels.
- An agile Learning Management System (LMS). Do you want to see if eCommerce could work for a course? Do you want to check whether people engage more in internal company programs with brand customization X vs. Y? Want to see if people get better results when they receive frequent feedback or personalized reminders? Your LMS needs to support your experimentation intentions and provide the right capabilities to do so at scale. Think of something that not only works for 1 or 2 experiments, but that also allows your team, or other audiences like instructors, to run their own experiments using your LMS as the framework. For example, learn how Montana Digital Academy used PLD to automate feedback at scale.
- Onboarding and assistance tools. If your experiments aim at things like engagement, onboarding, or assistance, check out tools like Appcues, Intercom, or Drift to create flows that help your users move from one stage of their path to the next. This can include helping instructors create a course or helping students find their way around. To start, you can also use features like Open LMS’s user tours.
- Optimization, testing, and feature flags. Tools like Google Optimize, Optimizely, Split.io, and others have traditionally been used on applications or marketing websites to roll out product features to a subset of users or run A/B tests, but they are also powerful in educational contexts: for example, experimenting with different copy in course content, or a different way of presenting the course online. You can also achieve this without dedicated tools, such as by running 2 versions of the same course with different cohorts or using Open LMS’s conditional release options, so make sure these tools aren’t overkill for your experiments (see the sketch below). It all depends on your context.
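As an illustration of that last point, here’s a Python sketch of how you might run a simple A/B test without a dedicated tool: deterministically bucket users into two course variants, then compare completion rates with a two-proportion z-test. The experiment name, user ID, and counts are hypothetical:

```python
# A sketch of a tool-free A/B test: stable 50/50 bucketing plus a
# two-proportion z-test on completion rates. All values are hypothetical.
import hashlib
from math import sqrt

def assign_variant(user_id: str, experiment: str) -> str:
    """The same user always lands in the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def z_score(completions_a: int, users_a: int, completions_b: int, users_b: int) -> float:
    """Two-proportion z-test for the difference in completion rates."""
    p_a, p_b = completions_a / users_a, completions_b / users_b
    pooled = (completions_a + completions_b) / (users_a + users_b)
    se = sqrt(pooled * (1 - pooled) * (1 / users_a + 1 / users_b))
    return (p_a - p_b) / se

print(assign_variant("user-42", "course-copy-test"))  # "A" or "B"
# |z| > 1.96 suggests a significant difference at the 95% level.
print(f"z = {z_score(85, 500, 110, 500):.2f}")
```

Hashing the user ID means no assignment table is needed, and every user sees a consistent experience across sessions.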
We hope this post is useful in your conversations around experimentation and growth at your institution or company. Growth hacking in eLearning is a key capability to develop for digital transformation and innovation. We would love to hear whether you’re currently experimenting, the types of experiments you’re running, or what other tools and practices are working for you.