I've been a Software Developer for almost a decade. Once I began learning the basics of coding all those years ago, I knew it was the career I wanted to have and I worked hard to learn and build my skills to get my first job as a Frontend Developer. I've written a post dedicated to my full story already, so I won't rehash it here.
To cut a long story short, I recently started as a Growth Engineer at Fanvue, a platform where content creators can connect with their fans through messaging and by sharing premium and even personalised content.
If you don't know what Growth Engineering is, you're not alone - before I was approached for the role, I didn't know it existed. It's essentially a cross between software engineering and marketing. It's not quite the same as growth hacking, which focusses more on marketing, but it's similar.
The job is to come up with ideas - as well as gather them from other colleagues - that will increase your key metrics. For me, the key metric is ultimately revenue, but other metrics contribute to it indirectly: subscribers (users who pay a monthly subscription fee) and signups (users who create an account on the platform).
Typically, the ideas we come up with as a Growth team require some changes to the platform that serve to improve the key metrics. How we do that, up to now, has been a combination of behavioural psychology and, to be completely candid, guesswork. From changing the text or colour of a button, to introducing a brand new, complex, interactive feature, there are a lot of ideas to test and no concrete way to know which one will perform well when put in front of tens of thousands of users.
Given that I'm still quite new at the time of writing, let's take a look at some of my first experiences and what I've learnt from them.
Size Matters
By this I mean sample size, of course - I'm not sure where your mind was at. Specifically, the number of users who are exposed to an experiment. Using Amplitude, a tool designed for running experiments, we gather data on which users saw which variant (either control or treatment in a classic A/B test) and how many of them completed the desired action afterwards. We use custom metrics in Amplitude to track this. As mentioned above, this could be signing up, subscribing, or making an additional payment. However, the most important thing you need to run an experiment like this is a large enough volume of users. Otherwise, you would never reach statistical significance*.
*Statistical significance is a term from statistics used to interpret the results of hypothesis testing. It is reached when the sample size is large enough and one option has performed noticeably better than the other. In layman's terms, it means the results are strong enough that you can be confident they're not down to coincidence, and that if you ran the same experiment again, you'd expect a similar outcome. The maths around this can be complex, so let's not get too bogged down for now. Importantly, Amplitude calculates this for us and we don't have to do any maths day to day.
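To make that slightly more concrete, here's a minimal sketch of the kind of calculation a tool like Amplitude handles for us: a two-proportion z-test comparing the conversion rates of control and treatment. The numbers (and the 95% threshold) are purely illustrative.

```typescript
// Minimal sketch of a two-proportion z-test for an A/B test.
// All numbers below are made up for illustration.

interface VariantResult {
  users: number;       // users exposed to this variant
  conversions: number; // users who completed the desired action (e.g. signed up)
}

function zScore(control: VariantResult, treatment: VariantResult): number {
  const p1 = control.conversions / control.users;
  const p2 = treatment.conversions / treatment.users;
  // Pooled conversion rate under the assumption of "no real difference"
  const pooled =
    (control.conversions + treatment.conversions) /
    (control.users + treatment.users);
  const standardError = Math.sqrt(
    pooled * (1 - pooled) * (1 / control.users + 1 / treatment.users)
  );
  return (p2 - p1) / standardError;
}

const z = zScore(
  { users: 20_000, conversions: 1_000 },  // control: 5.0% converted
  { users: 20_000, conversions: 1_120 }   // treatment: 5.6% converted
);

// |z| above roughly 1.96 corresponds to significance at the 95% confidence level
console.log(`z = ${z.toFixed(2)}, significant: ${Math.abs(z) > 1.96}`);
```

Both the size of the difference and the number of users feed into that score, which is exactly why sample size matters so much.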
It's for this reason that I wouldn't be able to run any experiments on this blog, for example, as it only gets a few thousand visits per month (at present), and any experiment would therefore need to be a huge change for it to have any chance of reaching statistical significance in a reasonable amount of time.
Rule of thumb: run experiments on sufficiently large sample sizes - somewhere in the region of thousands or tens of thousands.
Taking Moonshots
Moonshots are just that - shots at the moon. This means you're taking a much bigger gamble in the hope of a much bigger payoff. This is really the only way to counter the previous point.
If you only have 50 people visiting your ecommerce store per day, changing the order of some items in your navigation bar, for example, isn't going to have enough of an impact on user behaviour to give you valuable results within a reasonable amount of time. If, however, you do a complete site redesign and utilise some hugely persuasive techniques such as urgency and scarcity, you might well see a big uptick in sales and be able to reach statistical significance with a smaller sample size and, crucially, in a much shorter time.
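To put some very rough numbers on that, here's a back-of-the-envelope sketch - a textbook-style approximation, not whatever formula Amplitude actually uses - of how many users per variant you'd need to detect a given lift. Notice how dramatically the requirement drops when you aim for a bigger change.

```typescript
// Approximate users needed per variant for ~95% confidence and 80% power.
// A standard approximation, with made-up example numbers.
function requiredSampleSize(baselineRate: number, relativeLift: number): number {
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + relativeLift);
  const zAlpha = 1.96; // two-sided 95% confidence
  const zBeta = 0.84;  // 80% power
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (p2 - p1) ** 2);
}

// Small tweak: 3% baseline conversion, hoping for a 5% relative lift
console.log(requiredSampleSize(0.03, 0.05)); // roughly 200,000 users per variant

// Moonshot: same baseline, aiming for a 50% relative lift
console.log(requiredSampleSize(0.03, 0.5));  // roughly 2,500 users per variant
```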
I borrowed the term moonshot from author Rory Sutherland and his book Alchemy, where he delves into behavioural psychology and how it plays out in the world of marketing. I doubt he was the first to coin the phrase, but it's where I heard it first.
As I listened through his book, I also borrowed a few more ideas that he mentions, which we'll look at next.
Rule of thumb: test both small and big changes; testing big is especially useful when you have a small userbase.
The Decoy Effect
Another of the ideas I borrowed from Sutherland was the Decoy Effect - a technique he described that the Economist used to great effect to boost sales of their subscriptions. Their experiment was simple. On the website, you could purchase a subscription to the digital edition of the Economist magazine alone at a relatively low price, or a bundle of the print and digital editions for around double the price. When sales of the bundled offer were poor, they added a third option - one that dramatically increased bundled sales: the print edition only, at the same price as the bundle.
The psychology at play here is simple to understand: it's all about framing and price anchoring.
When presented with only two options, users chose the cheaper one, as they didn't feel the additional value of the print edition justified such a large price increase. It was a no-brainer. But when the third option was introduced, it shifted the framing of the offer. Now users were comparing the print-only offer with the bundled offer. Counter-intuitively, the decision was still a no-brainer in the eyes of most consumers, but now they were choosing the more expensive bundle over the digital-only offering.
Clearly, nobody in their right mind would choose the print-only edition when the digital edition would be thrown in for free if they chose the bundle, and this is exactly what the decoy effect is all about.
Taking that example into my day job, I thought we were missing an opportunity for some major gains by only offering a single subscription. The pricing of the subscription itself and what it includes is outside of my control, but my idea was simple - devise a second, slightly-cheaper option that contained significantly less - and would be perceived by our users as much less valuable - in order to make the regular subscription look that much more attractive. By introducing a decoy - the Mini Subscription - that surely nobody would want, I figured we would see a major uptick in the number of users who took the normal subscription.
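For illustration only, here's a hypothetical sketch of how the options shown to a user could be assembled per experiment variant - the plan names, prices and benefits are placeholders I've made up, not Fanvue's actual offering.

```typescript
// Hypothetical decoy setup: the treatment variant adds a clearly-worse-value
// plan purely to make the standard subscription look more attractive.

interface Plan {
  id: string;
  label: string;
  pricePerMonth: number; // illustrative prices
  benefits: string[];
}

function plansForVariant(variant: 'control' | 'treatment'): Plan[] {
  const standard: Plan = {
    id: 'standard',
    label: 'Subscription',
    pricePerMonth: 9.99,
    benefits: ['All posts', 'Direct messaging', 'Subscriber-only content'],
  };

  if (variant === 'control') {
    return [standard]; // today's single-subscription experience
  }

  // The decoy: slightly cheaper, but with far less included
  const mini: Plan = {
    id: 'mini',
    label: 'Mini Subscription',
    pricePerMonth: 7.99,
    benefits: ['A limited selection of posts'],
  };

  return [mini, standard];
}
```

The decoy only exists to shift the comparison; if a meaningful number of people actually chose it, the framing would arguably have failed.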
At the time of writing, the experiment has not yet begun, stuck in a queue of waiting experiments. If you're interested in seeing the results, I'm considering starting a newsletter to share the details of my Growth Engineering journey. Reach out to me directly and I'll add you to the waitlist.
This experiment was actually an extension of a similar idea that I had devised beforehand, and one that is up next in the queue for boosting subscriptions: the Super Subscription!
Rule of thumb: use decoys as a way of anchoring value with a corresponding price; decoys are positioned at a similar price with much less value.
Price Anchoring
A similar idea works in the opposite direction. With a decoy, the framing is done around the relative benefits included in each option and how the improved benefits far outweigh the difference in price. If you can get a small coffee for €4.95 and a medium - double the size of the small for some inexplicable reason - for an even €5, you'd be mad not to take the medium. Even if you plan to throw some of it away, a 5 cent difference makes it the easy option.
Similarly, if you then spot a large coffee - only a little bigger than the medium - priced at €10, nobody in their right mind would take it. In reality, coffee shops know this and would never price their drinks like this, unless they really didn't want people to spend more money for some reason.
In my case, I conjured up the idea of a Super Subscription. Again, I have no control over what is included in the subscription, but if we present a much more expensive option - let's say 10 times more expensive - that includes only a few more benefits than the regular subscription, it seems like an easy choice to go for the normally priced one. In the minds of the users, we've mentally anchored the higher price. Showing that much higher price as an amount of money someone could reasonably be expected to pay for a broadly similar service primes them to view the regular subscription as well-priced and perhaps even cheap. They think to themselves, "hey, I can save all this money by giving up a few minor benefits I don't care about anyway". It shifts the mental conversation they have with themselves away from spending money and towards actually saving money, and allows them to post-rationalise the decision after they've purchased.
Rule of thumb: much higher priced offers can anchor the price in the user's mind, making other offers seem like great value for money.
Removing Friction
Friction, in digital terms, is the amount of effort or energy that goes into taking a certain action. Imagine, if you will, an ecommerce store that requires you to enter details of your extended family before buying a sweater. This would be a lot of friction for such a simple purchase, and users would rightly get fed up with it and leave without completing their purchase.
It makes perfect, logical sense that reducing friction should result in a better user experience and, hopefully, more users signing up, subscribing and making additional purchases.
In fact, during my first week as a Growth Engineer, I saw this in play in a very subtle way.
The sign up form for Fanvue contained a lot of elements besides the email and password fields and the social sign up buttons for Google and Facebook. Some of this content, like links to the terms of service and privacy policy, is required for compliance reasons, I presume, but other elements, like a list of publications where Fanvue has been written about, are not required at all.
My colleague had the bright idea of simply removing some of these additional elements and seeing whether this would lead to an increase in sign ups. Remarkably, it did, and in a statistically significant way. So he kept going, removing more and more of the extra elements until barely anything was left.
While this might not count as friction per se - after all, it didn't remove any steps new users have to take to create their accounts - it did reduce the cognitive load placed on them when first loading the page. If you're presented with a page that is full to the brim with images, text, input fields and buttons, it's much easier to be confused and less sure about what the goal of the page is. This might sound trivial, but there are all sorts of users on the internet, and not all of them are as tech savvy as you, dear reader. Indeed, Fanvue has a very wide demographic, so making sure the site is usable and optimised for everyone is a challenge, but mostly it comes down to keeping things simple.
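In practice, an experiment like this can be as simple as gating page elements behind the experiment variant. Here's a hypothetical sketch - the element names are placeholders, not a description of Fanvue's actual sign up page.

```typescript
// Hypothetical config for which sign up page elements to render per variant.

type SignupVariant = 'control' | 'stripped-back';

interface SignupPageConfig {
  showPressLogos: boolean;    // "as featured in..." publication logos
  showFeatureBlurbs: boolean; // marketing copy about the platform
  showSocialButtons: boolean; // Google / Facebook sign up
}

function signupConfigFor(variant: SignupVariant): SignupPageConfig {
  if (variant === 'stripped-back') {
    // Treatment: keep only what's needed to create an account
    return { showPressLogos: false, showFeatureBlurbs: false, showSocialButtons: true };
  }
  // Control: the original, busier page
  return { showPressLogos: true, showFeatureBlurbs: true, showSocialButtons: true };
}
```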
Rule of thumb: each step in a process or funnel is a chance for users to drop off; removing them can lead to better conversion rates.
Adding Friction
Another Rory Sutherland quote is: "Sometimes the opposite of a good idea is another good idea."
We've just seen an example where removing friction has a positive impact. The significant increase in sign ups from the stripped back sign up form resulted in that one becoming our default sign up form.
But just because removing friction sometimes works doesn't mean that adding it can't also work from time to time. After all, the opposite of a good idea can, on occasion, be another good idea.
Although we're still in the ideation phase on this one, a big concept we're looking at is increasing friction for the user between sign up and subscription. Accounts are free to create, but subscriptions cost actual money. The question then became: "How can we add friction so that the user proves to us - and themselves in the process - that they want to pay for the subscription?" Part of the answer is to present the user with the value of the subscription by outlining and highlighting the benefits they get from subscribing to a creator. Another idea, while still vague, is to have them complete some sort of action that is still free, but that plays into the sunk cost fallacy - using the time and energy the user has already invested to persuade them that the subscription is now just a small step away from where they are.
This is another concept that I'll have to revisit when the time comes.
Rule of thumb: users self-select and qualify themselves by overcoming small hurdles to move towards buying your offer.
Fake Doors
Much like when Wile E. Coyote painted a fake tunnel on the side of a mountain in the hope that the Roadrunner would run into it at full speed, we Growth Engineers like to paint fake doors around our application. Unlike the Coyote, our aim is not to hurt our users, but to gain insights into their behaviours.
A Fake Door experiment essentially makes a promise of a feature, benefit, or some functionality that doesn't exist. We gather data on how many users click on buttons that don't end up leading anywhere, but usually give some sort of "Coming soon" or "Something went wrong" message.
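In code, a Fake Door can be as small as a click handler that records the intent and then levels with the user. A minimal sketch - the event name and helpers below are stand-ins, not Fanvue's actual implementation:

```typescript
// Stand-in for an analytics call (e.g. an event sent to Amplitude)
function track(eventName: string, properties: Record<string, unknown> = {}): void {
  console.log('analytics event:', eventName, properties);
}

// Stand-in for whatever toast/notification component the app uses
function showToast(message: string): void {
  console.log('toast:', message);
}

function onFakeDoorClick(featureName: string): void {
  // The click itself is the data point the experiment is really after
  track('fake_door_clicked', { feature: featureName });

  // Show something honest rather than leaving a dead button
  showToast('Coming soon!');
}

onFakeDoorClick('hypothetical-feature');
```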
This seems a little bit deceitful, and it is, if we're being perfectly candid. But the justification from Growth teams is quite clear. We don't want to spend time (and money) building a complex feature when we're not even sure anyone will use it.
A colleague of mine found a perfect example while looking at the real estate website RightMove. At a certain point in his user journey, he stumbled upon an idyllic-looking house that seemed too good to be true - and it was. The developers were running a Fake Door experiment to see how many people would click on this pretend house to view it, when in actual fact it doesn't exist.
I'm hesitant to make use of this technique for one obvious reason: deceiving your users doesn't seem like a very nice thing to do. But the idea behind the Super Subscription is essentially that. The one small but significant distinction I would make in my defence is that the Super Subscription is designed to get more users to subscribe, rather than to make them believe they're signing up for a subscription that doesn't exist.
Not all of our experiments are with the express goal of boosting metrics however. Sometimes, we just want to learn about our users.
Rule of thumb: Fake Doors can be a good way of validating ideas that could be costly in terms of time and money to fully implement.
Asking Questions
During my application and interview process for this role at Fanvue, I talked a lot about A/B testing and how to use the data from it to improve metrics and ultimately make more money for the business.
However, this isn't the only role of the Growth Team. We also like to ask questions and use experiments to answer them.
One main feature of Fanvue is the ability to message directly with a creator. Let's say it's a YouTuber that you like to follow and they sign up as a creator on Fanvue. You, the fan, can send them direct messages and build a connection with that creator. Fans can also leave tips for their favourite creators via the same channel and the creators have the opportunity to offer exclusive pay-per-view content there as well.
One hypothesis that we came up with was: fans who message more will spend more - whether it be on subscriptions or PPV content.
Now, we have data analysts who crunch the numbers here, but we also have the power to devise experiments to test our hypothesis.
One of the first major experiments I came up with myself tests exactly this hypothesis. The experiment was to introduce a widget at the top of the chat page that details the creator's so-called Inner Circle. The Inner Circle is a simple idea - in order to get into a creator's Inner Circle, you need to send them a certain number of messages per day. Like a little task list, those who complete this simple mission get a nice green badge at the top of the chat page and a reassuring message that they do indeed belong to that creator's Inner Circle.
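Mechanically, the check could be as simple as counting a fan's messages to that creator since the start of the day and comparing against a threshold. A rough sketch - the data shapes and the threshold are made up for illustration:

```typescript
// Hypothetical Inner Circle check: has this fan sent enough messages today?

interface Message {
  fromFanId: string;
  toCreatorId: string;
  sentAt: Date;
}

const INNER_CIRCLE_DAILY_MESSAGES = 3; // illustrative threshold

function isInInnerCircle(
  fanId: string,
  creatorId: string,
  messages: Message[],
  now: Date = new Date()
): boolean {
  const startOfDay = new Date(now);
  startOfDay.setHours(0, 0, 0, 0);

  const sentToday = messages.filter(
    (m) =>
      m.fromFanId === fanId &&
      m.toCreatorId === creatorId &&
      m.sentAt.getTime() >= startOfDay.getTime()
  ).length;

  return sentToday >= INNER_CIRCLE_DAILY_MESSAGES;
}
```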
This experiment plays on the concept of Fear Of Missing Out (FOMO) as well as the attractiveness of scarcity - we want something that is only available to a select few people. In this case, that aspect is more psychological since any of the creator's fans can achieve Inner Circle status.
This is a particularly interesting experiment, and it's still being developed at the time of writing. I'll make sure to come back and fill you in on the results when it's done. I was so confident in the idea when I brought it to the team, but this doesn't mean anything until we get some actual data.
Rule of thumb: A/B tests can be valuable for answering questions you have about your users and their motivations.
Orthogonal Experiments
Orthogonality is a very mathematical term, more commonly referred to as perpendicularity. When two lines are perpendicular, they cross at right angles. I like to use the term orthogonal in a more abstract way, to describe ideas and concepts that are independent of one another. In this particular case, I'm talking concretely about running experiments on two parts of the platform that do not interact with each other.
This means that experiments that we run, for example, on the chats page, as described in the section above, do not interfere with experiments that are run on the sign up process for new fans. We can therefore consider them orthogonal.
Indeed, any experiment that only affects fans is orthogonal to any experiment that only affects creators. But once we start talking about how two experiments affect each other, the lines become blurry and we, as a team, have to put in effort to figure out which experiments we can run together, and which ones we can't.
As a clear example of the latter, consider an experiment we're currently running on the CTA used on the subscribe button. It clearly would interfere with the results of, for example, the Super Subscription experiment detailed in a previous section. Similarly, the Super Subscription would interfere with the Mini Subscription (and we might even add yet another experiment to test the presence of both Mini and Super Subscriptions). None of these experiments are orthogonal, so we have to maintain a sort of experiment queue or backlog where pending experiments wait to be run.
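One way to make that queueing decision more mechanical is to tag each experiment with the surfaces of the product it touches and treat two experiments as orthogonal only if those sets don't overlap. A hypothetical sketch, with illustrative surface names:

```typescript
// Two experiments can run at the same time only if they touch disjoint surfaces.

interface Experiment {
  name: string;
  surfaces: Set<string>; // the parts of the platform the experiment changes
}

function areOrthogonal(a: Experiment, b: Experiment): boolean {
  for (const surface of a.surfaces) {
    if (b.surfaces.has(surface)) return false;
  }
  return true;
}

const superSubscription: Experiment = {
  name: 'Super Subscription',
  surfaces: new Set(['subscription-offer']),
};

const subscribeCta: Experiment = {
  name: 'Subscribe button CTA',
  surfaces: new Set(['subscription-offer']),
};

const innerCircle: Experiment = {
  name: 'Inner Circle widget',
  surfaces: new Set(['chat-page']),
};

console.log(areOrthogonal(superSubscription, subscribeCta)); // false - one must wait in the queue
console.log(areOrthogonal(superSubscription, innerCircle));  // true - safe to run together
```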
As a result of this, I've spent quite some time filtering through ideas that we have proposed as a team to make sure we are running experiments across all available parts of the platform, and doing so in an orthogonal way. We don't have unlimited growth levers to pull on, and so making use of all the levers we have is an important part of the role.
After all, if we spend all of our time devising, designing and developing experiments that only target the fan user experience, we could very well be leaving money on the table by not experimenting at all with the creator user experience.
Getting good experiment coverage, like getting good code test coverage, is important and I'm focussing on making sure we keep it high at all times.
Rule of thumb: spread out your experiments so that you're testing across every possible facet of your application without your experiments interfering with each other.
As a famous pig once said: "That's all, folks!"
At least, for now. Check back in with me another time to see how my journey as a Growth Engineer is coming along. Until then, feel free to get in touch with me if you're interested in a career in Growth or Software Engineering.