The Future of Email ROI Measurement


What is a good open (render) rate?  What percentage of my emails were actually delivered to the inbox?  How many clicks should I get on an average email mailing?  What is the average unsubscribe rate for each mailing?  Who are my Social Influencers?  Are my subscribers engaged?

Most questions that savvy marketers share are more complex than these, and over at the Email Measurement and Accuracy group, Luke Glasner and John Caldwell have assembled a team to foster universal metrics and develop protocols that hold consistently across all ESPs and enterprises.

Measurement benchmarks that are universally accepted within the email industry remain a challenge; however, I believe the future of quality email measurement is quite possibly rooted in what I call a “Quality Email Score” for each type of mailing.  Let me explain…

Consider this:

Once a definition of each email measurement is universally accepted and recognized, a specific measure of an email campaign could be expressed as a 4- or 5-digit integer.  This integer would be rendered after a set number of hours (72, for example), once a decent-sized sample has accumulated.  This Quality Email Score (QES) is plausible if we include a grouping of pertinent, well-recognized email measurements as variables for each mailing.

Such measurements may include, but are not limited to, Open Rate (render rate), CTR, Domain Reputation, Inbox Delivery Rate, Spam Complaints, Frequency (list fatigue), Relevance, Engagement, List Quality, Content, Conversions, ROI, Subscriber Influence, AOV, and so on, or any subset of these.  The proposal calls for innovative leaders from the email and analytics industries to facilitate, and then determine, a cumulative email algorithm for each deployment based on which variables are being tested.  In other words, marketers would receive one score for each email deployment.  We could also break this down into a Quality Score for each particular measurement, then mix and match.

Each of these measurements would be weighted differently within the algorithm; the process might also factor in different types of industries.
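To make the weighting idea concrete, here is a minimal sketch of how such a composite score might be computed. The metric names, weights, normalization, and 0–9999 scale below are all hypothetical placeholders, not part of the proposal; the real groupings and weights would be determined by the industry working group.

```python
def quality_email_score(metrics: dict[str, float],
                        weights: dict[str, float],
                        max_score: int = 9999) -> int:
    """Combine normalized metrics (each 0.0-1.0) into one integer score.

    Each metric is multiplied by its weight, the weighted values are
    averaged, and the result is scaled onto a 0..max_score range.
    """
    total_weight = sum(weights[name] for name in metrics)
    weighted_sum = sum(metrics[name] * weights[name] for name in metrics)
    return round(weighted_sum / total_weight * max_score)


# Hypothetical measurements for one deployment, each normalized to 0-1
# (metrics where lower is better, such as complaints, are inverted so
# that higher always means better).
metrics = {
    "open_rate": 0.22,
    "ctr": 0.035,
    "inbox_delivery": 0.97,
    "complaints_inverted": 0.999,
}
weights = {
    "open_rate": 2.0,
    "ctr": 3.0,
    "inbox_delivery": 3.0,
    "complaints_inverted": 2.0,
}

print(quality_email_score(metrics, weights))  # a 4-digit QES
```

A single weighted average is only one possible algorithm; the point is that once metric definitions are standardized, any agreed-upon formula collapses a deployment's measurements into one comparable number.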

Once the campaign is deployed, the Quality Email Score can be compiled and generated.  Since every email campaign would be treated differently, each type of email stream may warrant its own algorithm.  This could mean that each type of email stream would have a different integer range.  For example, confirmation emails might have a simple two-digit score, since fewer measurements would be calculated, while transactional streams with social media integration would display a three-digit score.  B2B campaigns may be computed differently than B2C; again, the difference would be based on the measurement groupings within each campaign.  However, both could still utilize the four-digit integer.
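The stream-specific ranges described above could be sketched as a simple mapping from stream type to score range. The stream names and digit counts here are illustrative assumptions based on the examples in the text, not a fixed taxonomy.

```python
# Hypothetical score ranges per email stream type: fewer applicable
# measurements means a smaller range (fewer digits).
STREAM_MAX_SCORE = {
    "confirmation": 99,            # two-digit: few measurements apply
    "transactional_social": 999,   # three-digit: adds social metrics
    "promotional": 9999,           # four-digit: full measurement grouping
}


def score_for_stream(stream: str, composite_quality: float) -> int:
    """Map a 0.0-1.0 composite quality value onto the stream's range."""
    return round(composite_quality * STREAM_MAX_SCORE[stream])


print(score_for_stream("confirmation", 0.87))   # two-digit score
print(score_for_stream("promotional", 0.87))    # four-digit score
```

The same underlying quality lands in a different range depending on the stream, which is why scores would only be comparable within a stream type, not across them.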

Once calculated by your ESP, a detailed summary of each score, or a SWOT (strengths, weaknesses, opportunities, and threats) analysis, would be produced and issued to each marketer with commentary, so that improvements can be made on subsequent deployments.  Remember, the Quality Email Score may be broken down further by assigning a score to each measurement.  Another example: a Quality Score for each segmented list, which could be used to run “what-if” scenarios.  MailChimp offers a general idea of this with its “list activity score.”  The major benefit would be a simplified method of testing before, during, and after each deployment, as well as an easy way to predict return and subscriber engagement.

These various scores would then feed an ongoing analysis of each marketer, allowing ESPs to assess the history of a potential client.  For example, since not all ESPs measure deliverability the same way, this method would enforce consistency across all ESPs.  In this way, it is similar to a credit score.  The goal, of course, is to give each marketer enough feedback in the SWOT evaluation that future scheduled mailings become more relevant and produce better overall engagement, eventually earning a higher score.  The initial goal is to create a core group of measurements that will be universally accepted by every ESP.

This idea is certainly an uphill climb, and is merely an endeavor to encourage thoughts on what “standardized” email measurement can grow to be, whether it’s as simple as four core measurements or as complex as 50.  I understand that there are several milestones to accomplish before this idea can even be seriously considered, but I thought I’d put it out there for now to encourage a possible future vision.


Fred Tabsharani is engaged in strategic marketing initiatives for Port25 Solutions, Inc., a globally recognized email software company serving Email Service Providers and leading enterprises. After receiving his MBA from John F. Kennedy University, Fred immersed himself in the world of email deliverability and constantly discovers new insights from thought leaders in the email industry. He is a columnist for several industry blogging portals and a member of several email-based organizations, including but not limited to MAAWG and the Email Experience Council. Fred’s goal is to continue honing his skills and knowledge in this space and to build timeless industry relationships that transcend business goals.
