Using duct tape and baling twine, you can build almost anything you need. It won’t be pretty, but it’s a good metaphor for improvising a solution to an unfamiliar problem. The first step is to do whatever it takes to find a workable solution. The next step is to see how well your improvised solution works. And finally, if the problem still exists and your solution is directionally right, you’ll need to find a more scalable way to solve it.
Finding a workable solution may mean using a kind of hack. Start by imagining how the process should work if there were no impediments. Recently, I wanted items from a calendar to show up on an Agile board – the goal was to understand how many of each item showed up in each list. A perfect solution would be an automatic import from the calendar. An imperfect solution would be to copy each item manually from the calendar to the Agile board. The end goal is the same – understand which calendar items belong in each bucket.
Go for the “It’s Done” Solution
My duct tape and baling twine method in this case? A product called Zapier – a kind of “data glue” that lets you connect events in one service to events and data in another. I started by connecting Zapier to Google Calendar and authenticating against Trello, a simple tool for creating Agile boards. Zapier connects products using recipes for events triggered by data in a service. My recipe matched data in the calendar events to particular buckets on the Trello board. Using the date/time of the event and translating it into the day of the week didn’t work, so I used a different method: adding characters to the description of the calendar event to indicate a particular day (a manual solution FTW).
Did the temporary solution work? Absolutely. Events added to the calendar now show up on the Trello board, which is a big improvement over the previous method. To get the results into Excel, I also added another bit of duct tape – a Chrome extension to export the Trello board items. As an end-to-end solution, it works. As an automated process, it leaves a bit to be desired.
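The day-code trick can be sketched in a few lines. The `[Mon]`-style codes and list names below are assumptions of mine – the post doesn’t say which characters were actually used – but the routing idea is the same:

```python
# Hypothetical sketch: route a calendar event to a board list based on
# a day code embedded in the event description (e.g. "[Wed]").
import re

# Assumed mapping of day codes to board list names.
DAY_LISTS = {
    "Mon": "Monday", "Tue": "Tuesday", "Wed": "Wednesday",
    "Thu": "Thursday", "Fri": "Friday",
}

def target_list(description: str) -> str:
    """Return the board list for an event, defaulting to 'Unsorted'."""
    match = re.search(r"\[(Mon|Tue|Wed|Thu|Fri)\]", description)
    return DAY_LISTS[match.group(1)] if match else "Unsorted"

print(target_list("Sprint review [Wed] with the team"))  # Wednesday
print(target_list("No day code here"))                   # Unsorted
```

In the real setup, Zapier does the matching; this just shows why a short manual code in the description is enough for a recipe to key off.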
Next Steps: Building a Feature
So where will this solution go next? It needs to scale to be usable. Events need to get added to the calendar automatically and coded in such a way so that they show up in the Agile board. They also need to show up in the right place. The feature version of this idea could be feasible if there are additional user stories, a documented process showing how these events “live” from start to completion, and some idea of when the manual process will break. Start with duct tape and baling twine and build a “fake it until you make it” version. Then, test that version and see where it breaks. Finally, compile the “must have” and “nice to have” items and pick the best ones.
I love words. Probably too much. I love words so much that I often use too many words when only a few are needed. It’s not because I want you to know about all the words. It’s that I want you to understand better.
Sounds a little silly, right? Yet often we make the same argument to customers when we present them with all of the choices they could make in our app. Don’t make just one choice – we persuade – make any choice you need to make!
By presenting too many choices, we run the risk of overloading the customer. You can hold 5-7 items in your active memory (you are probably using at least 1-3 of them right now). The chances of a customer remembering to do more than the next single thing you want them to do are pretty low.
Please, make it easier for the customer by picking the next thing you want them to do, telling them how to do it, and letting them know when they’re done. This might not mean telling them exactly what to do at the beginning of the process (though you should give them a suggestion).
If you provide a safety valve for the customer to let you know when things go wrong (a big “call us” or an “email us” button), you’ll win friends too. Make it easier for people to tell you what’s wrong. If they need you to add something, they’ll let you know – and they have a harder time telling you what to take away.
Try it – remove half of the choices on the front page of your app and see what happens. If no one complains, you probably removed things that didn’t need to be there. A good editor works wonders, whether editing a speech, an article, or an app.
Delivering a status update is a tricky thing. It’s really easy to overwhelm people with too much information, to leave things unsaid when you need more detail, and to leave out the “I need help” part of your message. So here’s a simple proposal, modeled off of the status updates my former CEO T.A. McCann asked team members to share at Gist.
Sharing Team Information
Having a regular schedule for sharing status updates helps a lot – at Gist, we shared these updates three times a week, right before our “standup” team meetings. T.A. wanted this information because he needed both tactical (what’s going on today) and strategic (what are the larger themes) feedback to know how his team was doing. We wanted these updates so we could know what other team members were doing. The system wasn’t perfect, but it made sure that everyone who came to our Standups was ready to share (at least some of) what was going on.
So how can you write a great status update? You should write the update quickly – spending just a few minutes to summarize and share the high-level information that matters – while also identifying any blockers that you need to discuss.
A “Cookbook” for a Status
In your status report to your team, make sure you answer these three things:
What did you do?
What are you doing?
Where do you need help?
A great update shares enough information that team members know what you’re doing, but not so much that it takes a long time to process and respond. If you share status in this way (usually in just a few lines) you can also think about larger, more strategic questions that relate to these everyday tasks.
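The three-question format can be sketched as a tiny template. The function and field names here are my own, not from the original Gist process – the post only specifies the questions themselves:

```python
# A minimal sketch of the three-question status format described above.

def format_status(did, doing, blockers):
    """Render a short status update from the three answers."""
    lines = [
        "What did you do? " + "; ".join(did),
        "What are you doing? " + "; ".join(doing),
        # Flag explicitly when there is nothing blocking you.
        "Where do you need help? " + ("; ".join(blockers) or "No blockers"),
    ]
    return "\n".join(lines)

print(format_status(
    did=["shipped beta invite emails"],
    doing=["triaging feedback from the first cohort"],
    blockers=[],
))
```

Even without automation, keeping the answers this short is the point: a few lines per question, written in a few minutes.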
A Longer-Term Status Update
Writing a status update every two or three days isn’t enough on its own to answer other questions that you ought to consider, so you should ask bigger questions too. These might include:
What’s one thing I’m doing that I should keep doing?
What’s one thing I’m doing that I should stop doing?
What’s one thing I’m doing that I should start doing?
When you take a step back and name things you should add or remove from your typical tasks, you get better at valuing your work objectively and are more likely to see it from an outsider’s perspective. Getting into the habit of keeping and delivering a status report to a team is a great way to document what you do and gives you a consistent way to check what you do.
Let’s say you’re starting a new company, feature, or product. You have an idea that you want to test with some beta customers. You’ve done some initial customer development to determine your Minimum Viable Product, and you have some ideas and data about the kind of customer who might use or provide feedback on your idea. So what would you do today to “get out of the building” (in best Steve Blank Lean Startup style)? One way is to set up a beta program, where you combine your ideas, your preliminary feedback, and some actual people to see what will happen. (The feedback will likely be both sweet and a bit tart.)
What are the goals of a beta program?
At face value, the goals of a beta program seem simple: find some potential customers, ask them to use the product, identify bugs, and get feedback on what’s working and what’s not working in a defined period of time. “Customers” probably look the same as the initial persona you identified during your customer development phase, and since you’re looking for a directional indication at this point, you don’t have to get them completely right (the beta program is an extension of your customer development efforts). “Using the product” and “identifying bugs” will work best if you define a few tight scripts at first to help people understand your vision of what they should be doing, and not just what it looks like in your prototype. And “getting feedback” means finding out the most important thing to improve or fix at that point of your development process.
The Perils of Talking to Customers You Know
Finding potential customers is the first step, and not the easiest. When you start, you’re likely to ask people you know – which is great because they will be more receptive and accommodating of problems, and not so great because they’re biased to give you good feedback – so you need a mix of people you know and people you don’t. One way to solve this problem is to ask the people you know to recommend 2-5 people they know who will provide practical feedback and who don’t know you all that well. Once your group reaches 30 people, you can be more confident that you have at least directionally useful statistical information.
Practical tip: manage the list of customers in a Google Spreadsheet, identifying their name, email, Twitter handle, external ID in your system, “customer type” (this could be ‘experienced, noob, mainstream’ or another taxonomy), the date they joined the beta or the identifier for their cohort group, a comment field, and a “last contacted” date. This will allow you to model the list of beta customers by cohort, give you a way to communicate with them, and provide you with a data structure you can use to pivot their feedback by customer type, external ID, beta cohort, and date range.
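The same record structure works outside a spreadsheet, too. This sketch uses the fields from the tip above (the sample names and values are invented) and shows the cohort-by-customer-type pivot the tip describes:

```python
# Sketch of the beta-tracking structure described above, using plain
# dicts instead of a spreadsheet; field names follow the practical tip.
from collections import Counter

customers = [
    {"name": "Ada", "email": "ada@example.com", "twitter": "@ada",
     "external_id": "u-101", "customer_type": "experienced",
     "cohort": "2013-06", "comment": "", "last_contacted": "2013-06-10"},
    {"name": "Ben", "email": "ben@example.com", "twitter": "@ben",
     "external_id": "u-102", "customer_type": "noob",
     "cohort": "2013-06", "comment": "needs onboarding help",
     "last_contacted": "2013-06-12"},
]

# Pivot: how many beta customers of each type are in each cohort?
pivot = Counter((c["cohort"], c["customer_type"]) for c in customers)
for (cohort, ctype), count in sorted(pivot.items()):
    print(cohort, ctype, count)
```

The same grouping works for any of the other fields (external ID, last-contacted date range), which is exactly why having a consistent record per customer pays off.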
Using the Product is Not Following The Instructions
It’s tempting to think that customers (any customers) will “read the manual” and follow your instructions to the “T”. Well, when was the last time you read the manual? It’s important when assembling your beta group to include people who are likely to follow your instructions, people who are not very imaginative and just want to “hire your product” to do a job for them, and people who want to break your product or will think of unusual ways to interact with it. And if you want really great bug descriptions, you need to make it really easy for customers to provide inline feedback and to prompt them every time to identify the key items you need to understand what’s going on.
Practical tip: provide instructions for your scenario using multiple learning styles, including listing items in an ordered list (“Step 1, Step 2, Step 3”), asking an open-ended question (“What’s the 1 thing you’d like us to improve”), and communicating in other media, e.g. asking them to record a Skype conversation, a screencast, or setting up a group Google+ Hangout to discuss the “how do you do it” aspect of your product.
The “90-9-1” rule for participation inequality suggests that some of your beta participants are going to give you a LOT of information, and the feedback they provide will be overweighted toward the one or five or ten people who feel really passionate in your first group of 100 or so beta participants. So how can you take what they’re telling you and make it more meaningful? First, you should identify the functional bugs they point out, and fix those: these are potential blockers for any new customers. If you’re not going to fix an issue, log it and let the reporter know you’re placing it in a lower priority queue (and then once every month or so, prune the queue aggressively to remove the “not-urgent not-important” items.) Second, you can use a Kanban or other scrum technique to organize the volume and priority of the work (Asana, Trello, Do, Jira, Pivotal, and others are all good for this task). And third, keep asking your beta testers short surveys frequently (1-3 minutes) to see whether what you think you’re doing is actually working for them.
Practical tip: use some bug tracking software to manage this process. Don’t reinvent the wheel: if you don’t like Jira, Bugzilla, or another solution, you can always find a bug tracking template in another tool. And make sure you identify the type of customer for whom you’re solving the problem; the severity of the issue (does it stop them from using the product, is it a major deficiency, or is it just nice to have); and the priority (get it done now, get it done soon, or log it to see if it becomes more important later).
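The severity/priority scheme from that tip can be sketched as a simple triage sort. The bug titles and rank values are illustrative, not from the post:

```python
# Sketch of the triage scheme above: each bug gets a severity
# (blocker / major / nice-to-have) and a priority (now / soon / log).
SEVERITY_RANK = {"blocker": 0, "major": 1, "nice-to-have": 2}
PRIORITY_RANK = {"now": 0, "soon": 1, "log": 2}

bugs = [
    {"title": "Export is slow", "customer_type": "experienced",
     "severity": "major", "priority": "soon"},
    {"title": "Signup button does nothing", "customer_type": "noob",
     "severity": "blocker", "priority": "now"},
    {"title": "Dark theme", "customer_type": "mainstream",
     "severity": "nice-to-have", "priority": "log"},
]

# Work the queue blockers-first, then by priority within severity.
triaged = sorted(bugs, key=lambda b: (SEVERITY_RANK[b["severity"]],
                                      PRIORITY_RANK[b["priority"]]))
for bug in triaged:
    print(bug["severity"], "/", bug["priority"], "-", bug["title"])
```

Any real tracker does this for you; the point is that severity and priority are separate axes, and both need to be recorded at intake.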
What’s realistic to expect from your results?
It’s reasonable to think you’ll get feedback, and that some of your customers will like (or even love) what you’re working on for them. And it’s also likely that some will dislike, or even hate, the thing you think is cool. To get the most mileage out of this feedback, use a dedicated email alias or distribution list (e.g. beta-feedback@) to share this information with as wide a group as possible within your company. And view the feedback in the context of the type of customer who’s providing that feedback. When you get positive feedback clustered among multiple people in the same persona in the same area of your product, that’s a good sign you’re heading in the right direction. And then use that feedback as a data input to make decisions: what kind of company do you want to be? Which customers are you serving, and will you serve them better by improving this feature or fixing this bug? If the answer is yes, it’s time to invest time and money in making that change.
One of the best ways to learn how real people view your product is to ask them to complete a set of tasks that you think all customers “should” be able to do. Think of this as a directional usability test, where you can get some feedback on the way “normal” folks use your product without sitting right next to them and telling them how to complete the task. Yet you can also learn a lot by sitting in the same room as someone who has tried your product and just having a conversation. Even if these people are not perfect examples of your persona definitions, setting up “Friends of the Company” sessions is a great way to make a tremendous leap in usability in a short period of time.
“Friends of the Company” sessions might look like this: every two weeks, line up two or three people to visit your office and ask them to complete a common customer task (set up an account, use the product the way they normally would, and talk through their progress as they do it). You should have someone from your design team, your engineering team, and your executive team in attendance, and make sure to give the person some homework before they arrive so that you can capture their feedback.
When your F.O.C. session is running, you should use this focused time to listen, learn, and suggest. You can listen by hearing what a “typical” customer does when you’re not around and hear more about the features that people outside of your building think are pain-killers, not vitamins. You can learn by identifying “cringe” moments that show up during the session, and plan which of these items to address and which to log for later effort. And you can suggest by using this time with a customer to bring up ideas that need additional feedback.
It’s important to note that the feedback you receive in these sessions is just that: feedback. It’s not usually enough to drive major changes in usability, but it is an amazing way to note the little items that trip customers up when you think they should be able to complete (what you consider to be) routine tasks. Friends of the Company sessions give you a temperature reading of customers and let you know what those people are thinking and whether your message matches their experience with the product.
And matching that message to the product is an important task that’s very easy to practice during the F.O.C. session. Remember, some of the people who are coming to see you are very talented and want to help, and some are just there to see what you’re up to in building your product and culture. All of this feedback can be really useful if you use it as an opportunity to refine your pitch, your usability, and the real-world functionality of your product.
If you talk to customers for any length of time, you’ve probably been asked, “what’s the best practice for that? And how can I get you to deliver best practices to me?” I often struggle with this question because the definition of a “best practice” varies depending on who’s asking, and the act of asking for a best practice – I think – is really something like “show me the process of getting to a best practice for the activity that I want to do.”
Wikipedia lists the definition for a best practice as:
A method or technique that has consistently shown results superior to those achieved with other means, and that is used as a benchmark. In addition, a “best” practice can evolve to become better as improvements are discovered. Best practice is considered by some as a business buzzword, used to describe the process of developing and following a standard way of doing things that multiple organizations can use. (source)
This is not a post for those who are seeking ISO 9001 certification – this is a discussion for the rest of us: how, exactly, do you get closer to a best practice, know when you’re close to getting there, document it, and improve upon it?
I propose a modest idea: that the process of getting to a “best practice” is actually something very close to the Agile cycle of testing, acting, and measuring. Let’s name it in a less geeky way – “evaluating”, “interacting”, and “enhancing” – as the steps of a process to take an idea, determine what success might look like when you’ve finished it, actually walk through the procedure, and then evaluate whether you were anywhere close to the mark.
These are actually small decisions (almost like Minimum Viable Decisions), and they might look like the following:
Evaluating – this is the step where you understand and quantify your business problem. An example might be: “How can we figure out how many widgets we sell in a month that are related to inbound visits to our web site,” or “we’d like to figure out how to get a new user in our software service to be proficient at adding a business rule more quickly.” The description looks a lot like an Agile product story, where you try to determine the actors who do things, the system that they use, and the results they qualitatively might like to see.
Interacting – this is the next step, where you think about the “happy path” of how a user who solved that problem might actually interact with your system to provide a solution. For some solutions, you might require a system configuration, e.g. putting a persistent invite code in a web page so that you can track unique sales that happen through your web site. Or it might be a softer interaction, like documenting the procedure for creating a business rule (and also generalizing the problem that you are trying to automate in your system by using business rules).
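The invite-code example from the interacting step can be sketched quickly. Everything here – the code format, the sample sales records – is a hypothetical illustration of the idea, not a description of any real system:

```python
# Hypothetical sketch of the persistent invite code: tag each web
# visitor with a unique code so later sales can be traced to the site.
import uuid

def new_invite_code() -> str:
    """Generate a unique code to embed in the page or signup link."""
    return "web-" + uuid.uuid4().hex[:8]

# Later, when sales come in, the invite code identifies the channel.
sales = [{"order": 1, "invite_code": "web-1a2b3c4d"},
         {"order": 2, "invite_code": None}]
web_sales = [s for s in sales if s["invite_code"]
             and s["invite_code"].startswith("web-")]
print(len(web_sales), "sale(s) attributed to the web site")
```

The configuration work is small; the payoff is that the enhancing step has real attribution data to evaluate instead of a guess.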
And finally, you need to enter the enhancing step, where you test whether your hypothesis made any sense and the resulting actions taken by the customer, the system (or perhaps some combination of the two of them acting together) produced something like the desired result. And more importantly, how often did this go right? How often did it go dreadfully wrong?
The best practice is the logical result of a process or procedure that allows you to enter this sequence of decisions and end with the desired business result. So if you actually get sales from your website the way you expected, or the customers you walked through this process were more successful in creating a business rule, then great! Now you need to test-run this “best practice” candidate with other customers who don’t have the benefit of you walking them through the whole process step by step. And you need to test it with at least 30-40 trials to rule out simple bias. When you get to the end of that process and your “best practice” still seems to work, congratulations! That’s one in a row.
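Why 30-40 trials? A rough back-of-the-envelope sketch: with n trials and k successes, a normal-approximation 95% interval for the underlying success rate narrows as n grows, and around 30 trials it becomes tight enough to be directional. The numbers below are illustrative, not from the post:

```python
# Rough sketch: a normal-approximation 95% confidence interval for an
# observed success rate, to show why ~30 trials gives directional data.
import math

def approx_interval(successes, trials, z=1.96):
    """Return a (low, high) 95% interval for the true success rate."""
    p = successes / trials
    margin = z * math.sqrt(p * (1 - p) / trials)
    return max(0.0, p - margin), min(1.0, p + margin)

lo, hi = approx_interval(24, 30)  # 80% observed success in 30 trials
print(f"95% interval: {lo:.2f} - {hi:.2f}")
```

Even at 30 trials the interval is wide – which is exactly the point: enough to tell direction, not enough to declare victory after “one in a row.”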
Note that during your enhancing step, an important thing to consider is how the customer felt about the process. It’s not always easy to capture this emotion, and the customer often expresses it as “it felt too hard,” “I don’t get it,” or “I don’t see why this would be useful to me” (among a host of other responses you might get). Boiling that response down to concrete, actionable steps to improve the process is feedback gold. Ask for the one thing they would like to change (just one thing) and you’ll get closer to it.
If you’re like many people, when you hear the term “Agile Marketing” you might wonder if the person is talking about some new dance form or other form of stretching rather than an exciting way to improve the way that you propose, build, and measure your marketing success.
Don’t worry – you can start doing Agile marketing without buying new and expensive tools for your marketing toolbox. Agile marketing can help you define specific goals for your campaigns and find out quickly whether they’re working or not – and give you metrics to decide whether to move on.
Borrowed from the methodology of the Agile software movement that started about 10 years ago, Agile Marketing is a philosophy of getting things done that proposes to shorten the length of marketing campaigns, to get actionable information from those campaigns as soon as possible, and to test these ideas to keep the good ones and spend less time on campaigns that don’t produce results. The goal of this process is to make the marketing process more adaptable to changes in your business.
The purpose of Agile Marketing is – as Jim Ewel puts it – “to improve the speed, predictability, transparency, and adaptability to change of the marketing function.” The benefit of doing this should be obvious: you should spend more time working on the initiatives that work. If you can find the initiatives that work more quickly, you’ll be able to be more effective. And by communicating the metrics that you find important to the business, you’ll be better at sharing what you’re working on.
Does that sound difficult? It shouldn’t. If you focus on delivering things that work, measuring what you do, and building simple, self-organizing teams, you can use these principles to get started with Agile Marketing. You can take the application of Agile marketing ideas directly into your workplace today with these 13 hacks. You can also find some other great resources here.