Cost to Acquire a Customer (or “Cost of Customer Acquisition”) explained with real world examples and real numbers

One of the best ways to grow your business is by acquiring more customers.  If you grow solely by extracting more value from your few existing customers, you run the risk of becoming a captured company.  We see this at a micro level when independent contractors with one golden goose client are forced to convert to W2 employees.

If you know how much it costs to acquire a customer, and you know your customer lifetime value, you can then make the rational decision to invest in acquiring customers because you’re going to make the money back over the lifetime of your business relationship.

Of course, the formula is a bit more complex than “if LTV > CAC, invest heaps in customer acquisition!”.

For example, there is also the cost to deliver the service to the customer, and there is the cost of capital to acquire the customer.  That is why VC firms treat 3+ as the golden LTV/CAC ratio: VCs target 30%+ IRR on their portfolios.  If you are working with a bank whose cost of capital is around 10% – 20%, you may be able to get away with a lower LTV/CAC ratio.
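As a minimal sketch of that decision rule (all numbers here are illustrative assumptions, not figures from this article):

```python
# Sketch: the LTV/CAC decision rule, with made-up example numbers.

def ltv_cac_ratio(ltv, cac):
    """Customer lifetime value divided by customer acquisition cost."""
    return ltv / cac

# A VC-backed company targeting ~30% IRR typically wants LTV/CAC >= 3.
print(round(ltv_cac_ratio(1500, 350), 1))  # ~4.3: acquisition spend looks justified

# With cheaper bank capital (10%-20%), a lower ratio may still be acceptable.
print(round(ltv_cac_ratio(800, 350), 1))   # ~2.3: below the VC threshold, maybe fine here
```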

So how do you calculate Cost of Customer Acquisition?

You take the amount you spent on marketing during a given month and divide it by the number of new customers in that month.

Let’s take this business, Code For Cash, as an example.  In August 2017 we had 17 new customers.  What were our marketing expenses?

Facebook ads – ($390)
Reddit ads – ($112)
Amazon Marketing Services for our ebooks Software Engineer’s Guide to Freelance Consulting and 30 Days To Your First Freelance Programming Client – ($2,000), offset by $1,260 in ebook royalties, so ($740)
Freelance blog content – ($600)

Total: $1,842

$1,842 / 17 customers ≈ $108 to acquire a customer.  According to this analysis, $108 is our CCA.
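The single-month calculation above, sketched in code (numbers taken from the expense list):

```python
# August 2017 marketing expenses, netting ebook royalties against ad spend.
august_expenses = {
    "Facebook ads": 390,
    "Reddit ads": 112,
    "Amazon Marketing Services (net of $1,260 royalties)": 740,
    "Freelance blog content": 600,
}
new_customers = 17

total = sum(august_expenses.values())   # $1,842
cca = total / new_customers             # ≈ $108.35
print(f"CCA: ${cca:.2f}")
```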

Unfortunately, this analysis is too superficial to have much value.  The truth is, marketing programs pay off over months.  You would also have to look at our marketing spend for July, June, May, April, and March – and the customer signups in those months – for greater accuracy.  Not only that, but we are probably forgetting plenty of other relevant costs: SaaS programs used to automate the marketing, freelancers hired here and there for miscellaneous tasks, and so on.

So, let’s get more accurate data:

Looking at the preliminary report from our bookkeeper, Jamel Salter, our marketing spend for the past few months is:

Sep: $2,419
Aug: $3,641
Jul: $3,680
Jun: $5,661

Totaling $15,401

If we credit the revenue from Amazon royalties, approximately $4,500, that’s $10,901 in marketing spend.  We added 31 customers over that time period, so our customer acquisition cost is actually higher than our preliminary estimate, at around $350.
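The blended multi-month calculation, as a sketch (numbers from the bookkeeper’s report above):

```python
# Marketing spend Jun-Sep, royalties credited against it, divided by new customers.
monthly_marketing_spend = {"Jun": 5661, "Jul": 3680, "Aug": 3641, "Sep": 2419}
amazon_royalties = 4500          # approximate ebook royalties, credited back
new_customers = 31               # customers added over the same period

gross_spend = sum(monthly_marketing_spend.values())   # $15,401
net_spend = gross_spend - amazon_royalties            # $10,901
cac = net_spend / new_customers                       # ≈ $351.65

print(f"Gross spend: ${gross_spend:,}")
print(f"Net spend:   ${net_spend:,}")
print(f"CAC:         ${cac:,.2f}")
```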

Strategy for lowering customer acquisition cost

What does it take for someone to become a customer?

They have to:

A) Learn about our business (visit our webpage)

B) Sign up for a free trial

C) Convert their free trial into a paid subscription (establishing themselves as a customer)

Note: for the purposes of calculating CCA, only people who make it to stage C count.  However, people in stages A and B should still be treated like customers – prospective customers, that is.

So in order for us to lower our customer acquisition cost, we need to: educate people on our business for a lower cost; improve our webpage so that more people sign up for the free trial; and do a great job during the free trial so that more people become paying customers.

What are some concrete example tactics?

Educate people on our business for a lower cost

  • Rely more heavily on content marketing, posting it on social media without promotional fees.  This article is an example.

Improve our webpage so that more people sign up for the free trial

  • Improve the design of the page so that it signals high quality
  • Remove the credit card requirement from the trial signup

Convert more trials into paid subscriptions

  • Aim for a 0% defect rate – 0 errors raised per customer
  • Measure the number of gig opportunities available to each user immediately: add additional markets to the database; write content that is also aimed at acquiring business customers (who will submit original gigs to the network)

My hypothesis is that the highest-impact things we could do are removing the credit card requirement from the trial signup, aiming for a 0% defect rate through automated testing of key customer workflows, and adding new markets.  So that’s what’s coming up on the product roadmap!

Strategy and Tactics of Pricing – Summary

Chapter-by-chapter summary of The Strategy and Tactics of Pricing: A Guide to Growing More Profitably

Thomas T. Nagle, John E. Hogan and Joseph Zale

Summary notes by John O’Malley
Summary
  • A sophisticated understanding of the value a product creates for the customer serves as the bedrock input to a pricing strategy.
  • Different customers will value products differently. An effective pricing strategy will leverage a segmented price structure that reflects the values and costs across customer segments.
  • An integrated, strategic plan seeks to influence how customers perceive a product and its price rather than set prices reactively.
  • Setting a price is difficult, and the decisions too often reflect unclear leadership and misleading data. Strategic pricing prioritizes profitability.
  • Costs and competition are important considerations but not the drivers of effective pricing strategy because changes in prices alter volume and impact costs.
Chapter 1: Strategic Pricing
Coordinating the Drivers of Profitability

In the Information Age, the factors that determine profit are changing more rapidly than ever. New pricing models, such as those employed by Netflix, Ryanair, and Apple, form an integral part of some of the most profitable enterprises in those changing markets. Yet few managers train to set prices in anticipation of changes rather than in reaction to them. Some common approaches to pricing simply reflect outmoded thinking: cost-plus pricing, customer-driven pricing, and share-driven pricing. These approaches all misunderstand the role of pricing.

Strategic pricing, on the other hand, rests on three key principles. First, the pricing strategy is value based. This means that prices reflect the differences in value across customers and over time. Second, the strategy is proactive. It anticipates those market changes, designs strategies to account for them and even dictates the terms of various trade-offs. Finally, the strategy is profit-driven. Instead of comparing prices to competitors, a pricing strategy is evaluated relative to the company’s alternative options.

These principles lead to the five-tier pyramid of strategic pricing, where each block builds on the one below it. Likewise, each of the following chapters expands on the nature of value creation, price structure, price and value communication, pricing policy, and price level before turning to the final component of strategic pricing: implementation. Although most companies do not need a large, central price management function, a clear strategic vision must ultimately have everyone in the organization doing their part. Managers need to understand their role and have the data to execute successfully. A good strategy will seek to motivate new behaviors.

Pricing Policy Pyramid

Chapter 2: Value Creation
The Source of Value Creation

Value refers here to economic value, which relies on the differentiation of one product from another. Almost no one would volunteer to pay $2.00 for a can of Coke from a seller if they knew it was available in a vending machine for $1.00 around the corner; more practically, though, a swimmer on a hot beach might readily pay the higher price rather than walk to an inconvenient snack shack. This differentiation value is the only component of economic value captured by price.

Understanding the nature of differentiation is critical to understanding the creation of value. The two forms of differentiation are monetary value, which is the cost savings or income enhancements that a product provides, and psychological value, which is a measure of the satisfaction and pleasure derived from a product. Therefore, the total economic value of a product reflects the value of the next best competitor (the reference) plus the net value of the differentiation between products (the differentiation value).

To return to the example of why the beachgoer is willing to pay $2.00 for a can of Coke, this breakdown of value creation can explain the decision to purchase. The reference value is the cost of the soda at the snack shack, the value normally derived from drinking a can of Coke. The differentiation value is the psychological value of convenience and immediate gratification. The price point that results in the sale, the $2.00, captures the total economic value to the customer on the beach.
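The Coke example can be sketched in terms of the two components defined above (the dollar split between them is illustrative):

```python
# Total economic value = reference value + differentiation value.
reference_value = 1.00        # the vending-machine price for the same can
differentiation_value = 1.00  # convenience and immediate gratification on the beach

total_economic_value = reference_value + differentiation_value
print(f"Total economic value: ${total_economic_value:.2f}")  # the $2.00 price point
```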

Economic Value

This process also allows for value-based market segmentation, one of the most powerful ways to maximize profitability through strategic pricing. With a segmented marketing plan and price structure, a marketer can ensure that different segments pay an optimal price, instead of charging a single price that undercharges some customers and drives others to competitors. After a marketer determines the basic criteria for segments according to commonalities in purchasing behavior, they can identify the values that drive purchasing decisions within segments. These value-drivers do not necessarily correspond intuitively to segmentation criteria, and a deep understanding of the total economic value of a product helps identify specific value drivers. Then, after creating levels of segmentation and detailed descriptions for marketers, as illustrated in the two tables below, it’s time to move into the pricing strategy.

Segmented Customers

Chapter 3: Price Structure
Tactics for Pricing Differently Across Segments

A company that attempts to serve customers at a single price point tends to make large, unnecessary trade-offs between volume and margin. By nature, differentiation value changes across customer segments, and the profit potential created by those differences can be captured through strategic pricing. In the case of railroads, for instance, railroad tariffs ensure that coal and grain cost significantly less to transport than other goods. At this lower price point, railroads maximize profits on goods that would not be cost-efficient to ship at the rates charged for manufactured goods. The price structure allows the railroad to cover the high costs of its infrastructure.

Whenever differentiation value changes across customers, it is possible to design different offers for different segments. Marketers who design bundles effectively understand precisely how those bundles create differentiated value, and customers will self-select the bundle that most directly corresponds to their needs. One of the core questions for configuring a price structure is deciding which features to offer individually and which features to bundle. In music, most customers will pay a premium for famous headliners but must be enticed with lower prices to attend smaller performances. Some customers, however, may be willing to pay a premium for those same smaller concerts, and segmenting can help buoy the profitability of these performances.

Sometimes, quantity, such as number of tickets sold, does not accurately capture the value of a product. In these cases, marketers might adopt new price metrics by applying the cost of the price to a different unit. Price metrics can vary substantially. A sports club could charge per hour, per visit, or for a membership. Within that membership, the club might charge yet another hourly fee for a particular feature, like a sauna. Good metrics are unambiguous, appealing to prospective buyers, and aligned with their values. Someone primarily looking for a good sauna might balk at that sports club membership. Innovative price metrics, especially ones that are more effective than competitors, can improve existing margins and expand volume.

Value can also differ among customers receiving identical benefits because the factors contributing to their perception of economic value are different. In this case, a price fence, in which different segments are charged a different price for the same product, may be effective. This is quite common. Most museums have different charges based on age or student status. However, price fences can annoy people and create an incentive for buyers to, often successfully, avoid them. Effective price fences rely on proxy metrics like buyer identification, location, time of purchase, or quantity.

Chapter 4: Price and Value Communication
Strategies to Influence Willingness-to-pay

In order for strategic pricing to succeed, customers must accurately understand the value of the product on offer. More specifically, sellers need to convince customers to perceive the differentiation value of their product. Successfully communicating value protects segments from competitors by clearly highlighting the differentiated value: the customer understands why one product fits their need better than the next best alternative. By extension, communicated value improves a customer’s willingness to pay and, ultimately, increases the likelihood of a purchase.

Take the example of the Amazon Kindle. Many observers feared that the high price point of the first devices would put off customers who considered the switch from physical books to e-books extremely risky. Amazon had to find a way to communicate the value of a new technology. They decided on a “Meet a Kindle Owner” program where prospective customers could meet people similar to themselves and learn the benefits of owning a Kindle. As a result of this highly effective approach to communicating value, Kindle sales far exceeded expectations.

This highlights that value communication is most important when the differentiated value of a product is not obvious to potential customers. Typically, this is the case for inexperienced buyers or, in the case of the Kindle, when the product is highly innovative. In both cases, customers may have trouble noticing the differentiated value, so the purpose of effective value communication becomes identifying the perceptions to influence and connecting the key values to the appropriate characteristics of the product.

Two characteristics play the most important roles in influencing perception: type of benefit and cost to search. Type of benefit refers to the breakdown of value into monetary and psychological value. Presenting the monetary value to someone who primarily derives psychological value from a product will probably not influence them in any meaningful way, and it may even drive them away from the product.

The other characteristic is the relative cost to search: the cost of identifying a product’s value relative to the size of the expenditure. Someone making a $5,000 purchase can reasonably spend more time searching than someone making a $5 purchase. When investigating search goods, a buyer can easily compare features and benefits objectively, so the cost of search is low. However, experience goods that are more difficult to evaluate, like which auto shop to trust, have a higher relative cost to search. Effective value communication takes both of these kinds of attributes into account and adapts the communication across each stage of the purchase as the costs change.

The Customer Search Process

Importantly, customers do not always evaluate price as strategically as marketers, especially when it comes to discounts. Studies have demonstrated that customers evaluate purchase choices proportionally rather than absolutely. Most people would walk a block to pay $2.00 for a drink rather than $5.00, but they would consider it a waste of time to walk the same block to spend $102 rather than $105. Customers also place great weight on reference prices, the standard “fair” price, and perceptions of fairness. Although the subjectivity of these perceptions may make them seem difficult or unpredictable, in practice value communication can leverage them to a marketer’s advantage. In general, marketers can set a high “regular” price in order to lower the price through discounts and promotions before raising it again without seeming unfair. This also impacts reputation and whether the price is seen as maintaining or improving a standard of living, both important components of perceived fairness. Finally, people prefer to avoid losses when evaluating differences in products. Again, minimizing out of pocket costs or framing costs against a high reference cost can account for this gain-loss framing.

Chapter 5: Pricing Policy
Managing Expectations to Improve Price Realization

All sellers eventually face difficult customers asking for price exceptions. They may be loyal customers asking for an exception during an economic downturn or highly aggressive buyers forcing ad-hoc negotiations, slowing down sales overall. An effective pricing policy is a set of rules or habits that precludes exceptions for any factors that do not reflect changes in value or cost. Pricing policy ought to be consistent across the board. Over time, unclear pricing policies can allow buyers to dictate expectations to sellers. Strategic pricing flips this problem on its head by leveraging the price strategy to influence future customer behavior. Consistent price policies dictate consistent expectations, a key component to influencing customer behavior that avoids the pitfalls of ad-hoc negotiation.

Price policy develops over time. Each request for an exception to a price is an opportunity to set a pricing policy that will predict and prevent a similar exception request in the future. Over time, buyers will come to expect these pricing policies, as long as they are consistent and transparent. These policy decisions clearly must come from market-level management, but managers still have to empower sales teams to stand by pricing policies even at the potential cost of a sale. While this might seem intimidating, ad-hoc negotiations only defer the substantial, ongoing costs of unpredictable customer behavior. A pricing policy can cover price changes associated with discounts, increased industry costs, promotional trials, and changes in competitors’ prices. What matters is that each policy remain consistent.

Good policies also transform purchases into a price-value trade-off rather than an effort to extort the lowest prices. There are a wide variety of buyers who might engender pricing policies. In addition to consistency of application, good price policies can rely on give-get negotiations, in which the seller refuses to make any concession that does not have some value return. The principle behind give-get negotiations, which motivates good price policies generally, is identifying whether exception requests are a product of misplaced expectations or lost value. Pricing policy seeks to manage the expectations, but an in-depth, accurate understanding of value to customers may have wider repercussions. Revisit the thinking in early chapters to review how understanding value can influence strategic pricing.

Chapter 6: Price Level
Setting the Right Price for Sustainable Profit

A lot of data is available to set a price point, and a three-stage process can guide the decision. This process builds on the premise that, in order to maximize profitability, price is different for different segments. This is the final level of the pyramid. At each step of the process, it is important to consider the pay-off for invested time. Managerial experience and market knowledge are always an essential component of price setting, and it may not be necessary to spend much time where experience will suffice.

The first step is to set a price window. This window outlines the highest and lowest acceptable prices for a product as defined by its total economic value. It extends the process for estimating value.

Price Windows

The next step is to set the price that best captures the differential value. The goal is not to set the highest price possible; the price should drive profitability because it aligns with the overall business strategy. Jeff Bezos, in the early days of Amazon, sought to undercut distributors in order to move market share to Amazon’s online platform, which was the germ of the company that exists today. This step answers the core question of the price-volume trade-off: how much volume can I afford to lose for a price increase, or how much volume would I have to gain for a price decrease? Incremental break-even analysis can help answer this question. Finally, this step predicts the customer response to the price point. This is perhaps the most subjective step and therefore the best served by experience. Marketers have to estimate price sensitivity, or the sensitivity to the price-value trade-off. This is the degree to which factors other than value influence willingness-to-pay, and it includes the same factors outlined in Chapter 4, such as expenditure size, perceived risk, and gain-loss framing.

The last step of the price setting process is to communicate the reason for the price to customers. Customers must understand the price and perceive it as fair. This clearly demonstrates how this step builds on all the others. Communicating a price to customers must particularly grapple with perceived fairness, which companies manage through price policies that reflect an understanding of differentiated value.

Chapter 7: Pricing Over the Product Life Cycle
Adapting Strategy in an Evolving Market

Products have a typical, and therefore predictable, life cycle. A market for a product appears, grows, reaches maturity, then declines. An effective strategy does not react to these changes. It predicts them. Profitable pricing represents the culmination of a successful plan and prediction. Moreover, while not every new product creates a new market, every new product presents new challenges and opportunities for marketers to introduce profitable price changes that reflect the different stages of the market life-cycle.

Product Life Cycle

For new products at the market development stage, the critical goal is buyer education. When buyers know nothing about a product, they have little sensitivity to its price. Consider how much information they lack. They have no way to evaluate cost to search, for instance, or leverage reference prices. Competitors are few or non-existent, and the potential profits of market development far outweigh the threat of competition. Pricing strategy revolves around effectively communicating value through adaptive solutions such as promotional trials, direct sales, or manipulation of distribution channels according to the characteristics of the innovation. Diffusion of experience across customers is a critical component of market development, as early adopters, such as in the case of the Amazon Kindle, can have a massive impact on developing customer knowledge.

During the growth stage, buyers have more information and increased price sensitivity, so lower prices can effectively increase market share. In particular, strategies that successfully leverage diffusion can improve the reception of price reductions and promote long-term profitability. Moreover, high rates of growth limit the impact of price competition, since companies can cut prices while maintaining profitability. Through effective product strategy, marketers can establish their product as the industry standard in anticipation of market maturity. In the case of Apple, promoting their computers as user friendly allowed them to charge premium prices throughout the ongoing product life-cycle. Companies may also pursue cost leadership, in which a price captures profitability by establishing the product as the cost-efficient market option, although not necessarily by maximizing market penetration.

Strategies in the maturity stage depend substantially on how a company positions its product in the growth stage. Buyer information is at its height, as is price sensitivity. Firms can only grow by seizing competitive market share, so prices are depressed and products become homogenous. As a result, cost-leadership or sustained, well-communicated differentiation produces competitive advantage. Strategic pricing at this stage might leverage unbundling that highlights differentiated characteristics, more accurate estimations of demand and value, expansion of the product line, or reconfigured distribution. In all of these solutions, a marketer takes advantage of the increased amount of information available to maximize price effectiveness.

Finally, the market declines as new products create entirely new markets. This leaves firms with excess capacity, which dictates effective pricing. Variable or easily reallocated costs might cause prices to fall only slightly, but fixed costs can result in higher average costs and increased competition as firms consequently attempt to seize market share, often through price slashing. Effective options include proactively protecting the strongest product lines, pricing to exit the market with minimum losses, or price cutting to capture the markets of weaker competitors.

Chapter 8: Pricing Strategy Implementation
Embedding Strategic Pricing in the Organization

Implementation relies on effectively designing organizational structure and motivating incentives. Effective organization uses a combination of formal reporting and empowered flexibility. This idea can be managed along a spectrum of roles, centralization, and rights and processes.

There are roughly four roles for the pricing function to take on within an organization. It can operate as an expert resource, which provides consultation to different market groups. It can operate as a functional coordinator that decides how pricing decisions will be made. In the next iteration, it can operate as a commercial partner that sets both process and price. Less than ideally, it can also take on a figurehead role that sets a price without consideration of different markets. Each of these roles can play into different levels of centralization, and they can produce a center of scale that operates at the corporate level, a center of expertise that sets an advisory price for local managers, or simply a dedicated support unit for other pricing organizations. Across this map, managers and marketers must be assigned clear decision rights to set prices and clear process rights to dictate how decisions are made.

Pricing Function Archetypes

Even the best organizational plan is meaningless if managers refuse to implement it. Perhaps one of the largest challenges is utilizing data to organize incentives around profitability, not volume or price. The key is to link incentives to the correct data. Two effective categories of data analysis are customer analysis and process analysis. Customer analysis seeks a deeper understanding of customer behavior, similar to the estimation of value. In addition to those estimations, analyzing performance trends can reveal how competitors are influencing customers. Building metrics around customer profitability, which combines average price with the cost to serve, can provide another specific measure.

In process analytics, the goal is to find leaks in profits so that new pricing policies can seal them. Two forms of analysis are particularly appropriate: price bands and price waterfalls. A price band shows which customers pay significantly more or less than others; this is a helpful tool for identifying aggressive negotiators. A price waterfall tracks the impact of every form of discount as the realized price departs from the invoice. This can show where the price actually pocketed is much lower than what an invoice records.

Of course, implementation still poses serious difficulties. These metrics can be tied to more specific and effective selling incentives, but it may still be necessary for senior leadership to exemplify a commitment to new processes. Even with demonstrations, clear communications, and well-designed incentives, it can take years to reach something close to the theoretical product. However, the benefits are worth it: firms that use strategic pricing earn 24 percent higher operating incomes than their peers.

Chapter 9: Costs
How Should They Affect Pricing Decisions

Costs are critical but not obvious. Strategic pricing integrates costs and value by avoiding the mistakes of cost-plus pricing. In strategic pricing, value comes first: a marketer understands what price will capture the value of a market, which in turn clarifies the role of costs. Airlines, when faced with increases in the price of fuel, do not simply charge higher prices. Instead, they raise their revenue per mile by decreasing the number of trips and maximizing the number of full-fare travelers on each flight. Costs, however, are extremely complicated, and the text subjects them only to a brief review.

Relevant costs are incremental, which means they are the cost of changing a price, or avoidable, which means they have not happened or can be easily reversed. These costs do not simply correspond to historical information. Knowing these costs determines whether or not a market would be profitable. There are four important mistakes associated with identifying relevant costs. The first is averaging total variable costs to estimate the cost of a single unit. If the incremental cost is not constant this is misleading. Second, accounting depreciation formulas do not always use current value. Third, considering one apparent cost as totally relevant or irrelevant may miss the incremental cost and, as a result, a chance to increase profitability. Finally, overlooking opportunity costs can lead to underpriced products.

The purpose of finding these costs is to calculate an accurate contribution margin. This is the measure of the price-volume trade-off and, by extension, a measurement of the relationship of a product’s profitability to its volume. It allows managers to understand how a price change must affect the market in order to maintain profitability, the key first step in making the profitable price decisions discussed earlier. It is also important to consider how the fixed costs of suppliers are passed on as incremental costs, and how price coordination can improve efficiency, as outlined in the discussion of the product life-cycle and the implementation of a strategic plan.
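A quick sketch of the contribution margin itself, using illustrative numbers (not figures from the book):

```python
# Contribution margin: what each unit contributes after incremental costs.
price = 10.00
incremental_variable_cost = 6.00   # the relevant, avoidable per-unit cost

contribution_margin = price - incremental_variable_cost   # $4.00 per unit
contribution_margin_pct = contribution_margin / price     # 40% of price

print(f"Contribution margin: ${contribution_margin:.2f} ({contribution_margin_pct:.0%})")
```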

Chapter 10: Financial Analysis
Pricing for Profit

The formulas presented in this chapter expand on the theory of costs articulated in the previous chapter. They provide a method to quantify the impact of price changes on profitability and make informed, profitable decisions that integrate cost considerations. The method is called incremental break-even analysis. Managers take a proposed price change and create a standard of comparison: the current level, a projection, or a hypothetical. Then they apply the formula to calculate the point at which the change will prove profitable. This is a quantitative answer to the challenge of solving the price-volume trade-off.

Incremental Break-Even Graph

There are four cases where incremental break-even analysis can provide information about profitable pricing. In the most basic case, the formula produces the percent change in sales volume needed to maintain the same level of profitability after a price change. (It can be converted into the percent change for price as well.) The formula can also be calculated with changes in variable costs or with changes in incremental fixed costs. It can even be calculated with consideration of a competitor’s price changes.
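The basic case can be sketched as follows. This follows the standard incremental break-even formula (required volume change = −ΔP / (CM% + ΔP)); the 40% contribution margin is an illustrative assumption:

```python
# Percent change in sales volume needed to keep profit constant
# after a given percent price change, at a given contribution margin.

def break_even_volume_change(price_change_pct, contribution_margin_pct):
    """Required %-change in volume so that profit is unchanged after the price change."""
    return -price_change_pct / (contribution_margin_pct + price_change_pct)

# With a 40% contribution margin, a 5% price cut must grow volume ~14.3%...
print(f"{break_even_volume_change(-0.05, 0.40):.1%}")
# ...while a 5% price increase can afford to lose up to ~11.1% of volume.
print(f"{break_even_volume_change(0.05, 0.40):.1%}")
```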

One of the most powerful tools that emerges from incremental break-even analysis is the break-even sales curve. The curve plots a range of percentage price changes against the volume changes needed to break even, with the baseline included. Profitable prices produce volumes to the right of the curve. Although applying economic theory to price decisions may seem unrealistic to many, working from a minimum elasticity estimate can account for these concerns. However, it is important to keep in mind that the baseline should reflect how the market would change without a price change.

Strategic pricing does not ignore the problem posed by fixed or sunk costs. It simply notes that considering those costs is irrelevant to setting a price because they do not affect the profits the price will produce. The question of how to cover fixed costs is an important question of profitability that can be more clearly answered when firms understand the impact of price.

Chapter 11: Competition
Managing Conflict Thoughtfully

Effective responses to competitors leverage competitive advantage, not price. Price cutting, especially to make the next sale and without due consideration to strategy, can undermine the whole industry and irreversibly change a market, often to the disadvantage of the firms that resorted to price slashing. Sustainable, successful companies leverage their competitive advantage, which emerges from value differentiation. It can be reflected in the value of serving a highly specific customer segment, or it can be geographic and leverage convenience. Many American breweries maintain a competitive advantage by promoting their beer as the standard for a particular region. Advantage may also be based on variety, which takes advantage of cost-sharing and highly specific differentiation. Microsoft focused on developing computer operating systems, giving the company strong differential value in the computer industry while sharing costs with hardware manufacturers.

When competitors do change prices, it’s important to react thoughtfully. Before lowering prices, it is worth evaluating how many customers will be attracted to a new competitor because of values like convenience, which makes them immune to reactive price changes. It may be the case that only one segment of customers is attracted by a discount. A response may call for new communication. Analysis may reveal that any retaliation would not be profitable. By carefully collecting and announcing information, firms can react to competitors in far more sustainable ways than simply slashing prices in a bid to grow market share.

Chapter 12: Measurement of Price Sensitivity
Research Techniques to Supplement Judgement

Estimates of price sensitivity can helpfully supplement managerial knowledge and expertise, but they cannot replace it. Experienced managers often have the strongest sense of which customers represent a product’s key market, know most acutely the factors that affect a sale, and can recommend appropriate parameters for research. On the other hand, most marketing decisions are highly subjective, and the information provided by estimates can serve as a guide or illuminate new information.

Research methods differ in accuracy, cost, and applicability, so it’s important for managers to assess the potential benefits of the information without cutting corners. Techniques range from highly controlled to totally uncontrolled. The added cost of controlled research is often worth it, since uncontrolled environments have too many variables to allow for accurate information. Controlled analysis overwhelmingly tends to produce superior data.

Research can also track either actual purchase information or intentional purchase. Although information about actual purchase decisions is highly desirable, it is difficult and costly to acquire. In a controlled environment, however, measurements of intention and preference can prove highly predictive. Conjoint analysis, in particular, can match price sensitivity to specific differentiated characteristics.

The choice of research technique also depends on the stage of product development. Obviously, purchase information is not available for products in the early stages of development. This is when conjoint analysis tends to be the most helpful. Once the product is available, controlled in-store or laboratory experiments are possible, and, at the maturity stage, historical purchase data is easier and cheaper to obtain.

Chapter 13: Ethics and the Law
Understanding the Constraints on Pricing

Ethical constraints on pricing are meaningful from the standpoint of personal and societal ethics. However, beyond personally considered ethical stances, small changes can often bring suspect pricing policies into line with both the law and profit. In the United States, price competition is enforced through anti-trust law, as administered criminally by the Department of Justice, and civilly by the Federal Trade Commission and private parties. In the last decade, anti-trust law has focused on demonstrable economic effect, allowing a great degree of creativity for price setters. Prohibited price activity includes price-fixing among competitors and price encouragement between suppliers and distributors; these remain largely per se illegal, though ambiguity in the application of the law has made some of them permissible over the last thirty years. Other key areas of ethical concern include price discrimination, promotional discrimination, non-price vertical restrictions, predatory pricing (in which the seller prices below cost, sacrificing its own profitability to drive out competitors), and price signaling. However, these restrictions are rarely enforced because economic harm, not mere intention, must be demonstrated.

Managing risk on software development projects

Notes from Waltzing With Bears: Managing Risk on Software Projects (by Tom DeMarco and Timothy Lister)

Required reading for all serious students of software engineering.

If a project has no risk, don’t do it. No risk, no reward.

Risk defined: a possible future event that will lead to an undesirable outcome; the undesirable outcome itself. Better definition of risk: a weighted pattern of possible outcomes and their associated consequences.

Transition indicator: a harbinger the risk is likely to materialize

“Project managers often tell us that their clients would never do any projects if they understood the downside. Such managers see themselves as doing a positive service to their clients by shielding them from the ugliness that lies ahead. Concealing the potential for delay and failure, as they see it, is a kindness that helps clients marshal sufficient gumption to give the go-ahead. Then, the project can very gently introduce them to bad news, a little bit at a time, as it happens.”

Risk Management Decriminalizes Risk. “Can-do thinking” in corporate America; when you put a structure of risk management in place, you authorize people to think negatively, at least part of the time.

Risk Management Protects Against Invisible Transfers of Responsibility. When a client negotiates away a contingency fee that was meant to cover certain risks, responsibility for those risks has likely migrated from the contractor to the client.

Risk Management Requires Organizational Buy-In. Telling the truth where optimism/lying is the norm puts you at a huge disadvantage. Use your risk management knowledge in secret, unless your organization explicitly provides for this; otherwise you lose out to the hungry peer who says “Give me the project and I will deliver on time, guaranteed”.

Bad risk management: only dealing with problems for which you have solutions. To vaccinate against this, at the first go-round of what would normally be risk identification, have everyone name all the catastrophic outcomes they can imagine. Then work backwards and try to describe which scenarios might lead to each.

Risk management: where your project planning is very much focused on what to do if you don’t catch breaks. Projects that start off as personal challenges seldom have their risks managed sensibly. Luck should never have to be built into the plan. Offer reasonable stretch goals, but make sure that real expectations make room for the breaks that don’t happen.

The pathology of setting a deadline to the earliest articulable date essentially guarantees that the schedule will be missed.

For the software industry as a whole, window size of delivery is in the range of 150 to 200% of the allocated time. That means in general, you can expect projects to take up to two times as long as you think they should, even when you do thorough, bottom-up estimates.  This is why I personally multiply all my estimates by 4x.

When a project strays from schedule, it’s seldom because the work planned just took longer than anyone had thought; a much more common explanation is that the project got bogged down doing work that wasn’t planned at all.

Totally mechanical beginning to the business of risk management: run a few postmortems of projects good and bad and look for ways in which they deviated from their initial expectations. Trace each deviation back to its cause and call that cause a risk. Give it a number and carry on. Yesterday’s problem is today’s risk.

In my personal experience (having worked in the industry mostly since 2003), the common risks (the ones that keep appearing AGAIN AND AGAIN) are:

  • Key personnel turnover
    • You have someone great, but they aren’t being treated or compensated properly. Enjoy your amazing deal while it lasts, but find a way to treat people fairly and still profit. Improve the quality of codebase README documentation to reduce the onboarding time of new hires. Every new hire should make README improvements that fully resolve all areas of confusion.
  • Building systems that haven’t been designed for potential scale
    • You haven’t measured or modeled what it will take to scale delivery of services and how those costs will scale.
  • Changing product requirements
    • Use a spec and have 1-day development cycles, with team standups at the start and at the end of the day. Start of day: what I did yesterday, what I plan to do today, where I’m blocked. End of day: what I did today, what I plan to do tomorrow, where I’m blocked.
  • Growing too slow; having no predictable channel for acquiring customers
    • This can be done in parallel to building the product
  • Users find the product unintuitive
    • Easily mitigated through blind usability studies and then actually responding to the feedback.
  • Confusing and undocumented codebase functionality
    • All commits to version control should reference a JIRA/Trello ticket
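That last convention is easy to enforce mechanically. Below is a sketch of a git commit-msg hook that rejects commits without a ticket reference; the ticket pattern (e.g. PROJ-123) is an assumption, so adjust it to your tracker’s format.

```python
#!/usr/bin/env python3
"""commit-msg hook sketch: reject commits that don't reference a ticket."""
import re
import sys

# Assumed ticket format: an uppercase project key, a dash, and digits.
TICKET_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")  # e.g. PROJ-123

def has_ticket_reference(message: str) -> bool:
    return TICKET_PATTERN.search(message) is not None

if __name__ == "__main__" and len(sys.argv) > 1:
    # git passes the path of the commit message file as the first argument
    with open(sys.argv[1]) as f:
        message = f.read()
    if not has_ticket_reference(message):
        sys.stderr.write("commit rejected: reference a ticket, e.g. PROJ-123\n")
        sys.exit(1)
```

Saved as .git/hooks/commit-msg and made executable, this blocks any commit whose message lacks a ticket ID.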

There are 5 core risks common to all software projects: irrational deadline, requirements inflation (“scope creep”), spec ambiguity, employee turnover, and poor productivity.

What to do about a risk? Avoid it, contain it, mitigate it, evade it? Avoiding means forgoing the reward of the risk. You mitigate a risk when you take steps before its materialization to reduce eventual containment costs. These are the steps required in advance so that the containment strategy you’ve chosen will be implementable at transition time. Evading a risk is just like crossing your fingers and getting lucky: risk management is not the same as worrying about your project.

The client has every right to nominate certain risks for the contractor to manage, and vice versa. If you are the client, your safest posture is to assume that only those risks specifically allocated to the contractor are his, and that all the rest are yours. Incentives or penalties in the contract allocate risk.

The contractor’s risks are those that endanger the successful completion of the contract or diminish the value of completion to the contractor. Everything else is judged by the contractor to be somebody else’s risk, and thus a candidate for exclusion from his risk management. That means that you, as the client, have to manage these risks or no one will.

A common class of litigation arises out of projects in which the client is surprised to find that certain important risks never made it onto the contractor’s radar. Usually, fault lies with the contract that failed to assign those risks. As a general rule, there are no contracts that successfully transfer all responsibility to a single party. If you are either client or contractor, expect to have to do some risk management.

If you calculate exposure for all your risks and set aside a risk reserve equal to the total exposure, that risk reserve will, on average, be sufficient to pay for the risks that do materialize. Your best guess about likely materialization may come from industry data, previous lists, or just a flat-out guess… Don’t excuse yourself from this essential act just because any answer you come up with will never be demonstrably correct. Risks also need to be budgeted for in a time sense as well as money.
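The arithmetic above can be sketched in a few lines. The risks, probabilities, and dollar figures below are invented for illustration; only the exposure formula (probability times cost) comes from the book.

```python
# Exposure = probability of materialization x cost if it materializes.
# All figures below are made-up illustrations.
risks = [
    {"name": "key personnel turnover",  "probability": 0.30, "cost": 80_000},
    {"name": "requirements inflation",  "probability": 0.50, "cost": 120_000},
    {"name": "specification breakdown", "probability": 0.15, "cost": 200_000},
]

for r in risks:
    r["exposure"] = r["probability"] * r["cost"]

# A reserve equal to total exposure will, on average, be sufficient
# to pay for the risks that do materialize.
risk_reserve = sum(r["exposure"] for r in risks)
```

The same calculation applies to schedule: replace dollar costs with weeks of delay to get a time reserve.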

Showstopper risks: these are risks that, should they materialize, will fully kill a project. The rule here is that a risk owned above you in the hierarchy is an assumption to you. The risk still belongs on your risk list, but it should be explicitly noted as a project assumption. You would do well to make a little ritual of passing this risk upward. When you present your risk management plan, formally delegate the management of some risks upward to someone above you in the hierarchy.

For each managed risk, you need to choose one or more early indications of materialization. For example:

  • Risk: Startup won’t acquire enough users. Indicator: the company misses one of its early growth goals.
  • Risk: Key personnel turnover. Indicator: the person is uncommunicative during one-on-one meetings.

Steps for risk management:

  1. Use a risk-discovery process to compile a census of risks facing your project
  2. Make sure all of the core risks of software projects are represented in your census
    1. Inherent schedule flaw
      1. Managers who come up with or agree to seriously flawed schedule commitments are performing poorly. The key point is that when a project overruns its schedule, it is in spite of, not due to, developer performance. Schedules should be based on bottom-up estimate of work rather than arbitrary commitments.
    2. Requirements inflation
      1. Well-managed projects change at less than 1% per month (US Department of Defense standard)
    3. Employee turnover
    4. Specification breakdown (ambiguity in specification)
      1. 10-15% of software projects are canceled without delivering anything. Each project has cancelation risk that is closed once all parties sign off on the boundary data going into and out of the product, and on definitions down to the data element level of all dataflows arriving or departing from the software to be constructed. Data inflow and outflow descriptions are less prone to ambiguity than function descriptions. Force yourself to get agreement on data inflow and outflow before 15% of the way through the project. If you can’t attain consensus by that point, the best option is project cancellation.
    5. Poor productivity
  3. Do all of the following homework on a per-risk basis:
    1. Give the risk a name and id
    2. Brainstorm to find a transition indicator – the earliest practical indication of materialization – for the risk.
    3. Estimate the cost and schedule impact of risk materialization.
    4. Estimate the probability of risk materialization.
    5. Calculate the schedule and budget exposure for the risk.
    6. Determine in advance what contingency actions the project will need to take if and when transition occurs.
    7. Determine what mitigation actions need to be taken in advance of the transition to make the selected contingency actions feasible.
    8. Add mitigation actions to the overall project plan.
  4. Designate showstoppers as project assumptions. Perform the ritual of passing each of these risks upward.
  5. Make a first pass at schedule estimation by assuming that no risk will materialize.
  6. Use min/max bottom-up estimates for each of your functionality points to construct a risk diagram that shows the earliest and latest possible delivery for the project.
  7. Express all commitments using risk diagrams, explicitly showing the uncertainty associated with each projected date and budget.
  8. Monitor all risks for materialization or expiration, and execute contingency plans whenever materializations occur.
  9. Keep the risk-discovery process going throughout the project, to cope with late-apparent risks.
  10. Force a complete design partitioning prior to any implementation. Use this as input to the process of creating an incremental delivery plan.
  11. Assess value to the same precision as cost.
  12. Break the requirements contained in the spec down to their elemental level. Number them in a rank-order by priority. Use net value to the user and technical risk as the two criteria for prioritization.
  13. Create a release plan in which the product is broken into versions (enough to schedule a new version every week or so). Assign all the elemental requirements to their versions, with the higher-priority items coming in earlier. Calculate Expected Value for each version and record it in the plan. Treat the incremental delivery plan as a major project deliverable.
  14. Create an overall final product-acceptance test, divided into releases; one per version.
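Steps 12 and 13 above can be sketched as a simple prioritization pass. The requirement names and scores below are illustrative assumptions, and summing value and risk is just one possible ranking rule; the book’s point is only that net value to the user and technical risk both push an item toward earlier versions.

```python
# Elemental requirements scored for net value to the user and for
# technical risk (both on an assumed 1-10 scale; figures invented).
requirements = [
    ("search",        8, 3),  # (name, net value, technical risk)
    ("user signup",   9, 2),
    ("export to CSV", 3, 1),
    ("payments",      9, 8),
    ("admin reports", 4, 2),
    ("notifications", 5, 6),
]

# Rank: high value and high technical risk both come earlier.
ranked = sorted(requirements, key=lambda r: r[1] + r[2], reverse=True)

# Assign to weekly versions, highest-priority items first.
ITEMS_PER_VERSION = 2
release_plan = [
    ranked[i:i + ITEMS_PER_VERSION]
    for i in range(0, len(ranked), ITEMS_PER_VERSION)
]
```

With these numbers, the riskiest high-value item (payments) lands in the first version, which is exactly where the risk-aware manager wants the serious technical risk confirmed or refuted.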

Keep your risk census public if the politics allow for it.

The hidden meaning of “I don’t know”: an essential part of project management is coming up with the answers to key questions such as, When will you be done? Will your user accept and use the product? Our point is that you need to recognize these I-don’t-know questions because they are always indicators of risk. For each one, force yourself to ask a subsidiary question: What do I know (or what could I know) about what I don’t know?

Unwritten rules of corporate culture:

  1. Don’t be a negative thinker.
  2. Don’t raise a problem unless you have a solution for it.
  3. Don’t say something is a problem unless you can prove it is.
  4. Don’t be the spoiler.
  5. Don’t articulate a problem unless you want its immediate solution to become your responsibility.

Introduce a ritual that makes it okay to share fears about a project.

  1. Brainstorm disasters
  2. Describe scenarios that could lead to disaster
  3. Run root cause analysis

WinWin management: the project makes an up-front commitment to seek out all stakeholders and solicit from each one the so-called win conditions that would make the project a success from his or her point of view. The requirement is defined as the set of win conditions. Nothing can be considered a requirement if no one can be found to identify it as one of his or her win conditions. Ask participants, “Can you think of an obvious win condition for this project that is in conflict with somebody’s win condition?” Each identified conflict is a potential risk.

Incremental delivery is a way to reduce risk, but doesn’t make sense if you’re only shipping a total of two or three versions. A proactive approach to incremental delivery involves prioritizing value delivered to the stakeholder and confirmation of risk hypotheses. The risk-aware manager will want to force the portions involving serious technical risk into the early versions.

Projects with a critical deadline require an early start:

An IT manager and a normal person are both working in Chicago on a Wednesday afternoon when they learn that they have to be in San Francisco for a noon meeting on Friday and that it’s imperative to be on time. The normal person– let’s call her Diane– takes a Thursday evening flight and checks herself into that pleasant little hotel just down the block from the San Francisco office. She has a leisurely dinner at Hunam and wanders over to Union Street to take in a film. The next morning, she has a relaxed breakfast and works on her laptop until eleven. She checks out at 11:30 and strolls into the office ten minutes early.

Meanwhile, the IT manager, Jack, has booked himself on the 8:40, Friday morning. He catches a cab midtown at 7:05 and runs into a traffic jam on the Eisenhower. He complains angrily to the cabdriver all the way to O’Hare. The stupid driver just can’t be made to understand that it is essential that Jack make this flight. When he checks in at United, he tells the check-in clerk rather forcefully that the flight must take off and land on time, no excuses. He tells her that he will be “very, very disappointed” with any lateness. When a gate hold is announced, Jack jumps up and objects loudly. When a revised departure time is announced, he digs deep into his bag of managerial tools and delivers the ultimate pronouncement: “If you people don’t get me into San Francisco in time for my noon meeting, HEADS WILL ROLL!”

How to decide what to build: costs and benefits need to be specified with equal precision. When a benefit cannot be stated more precisely than “We gotta have it,” then the cost specification should be “It’s gonna be expensive.”

“The savings figures also are classified by whether they are reductions or avoided costs. The difference is important. Reductions are decrements from current approved budget levels. You (the requesting manager) already have the money; you are agreeing to give it up if you get the money for the new system. Avoided cost savings are of this form: ‘If I don’t have this system, I will have to hire another in . But if I do get this system, I can avoid this cost.’ This is every system requester’s favorite kind of benefit: all promise, no pain. The catch is that there is no reason to believe you’d ever have been funded to hire those additional workers. You’re trading off operating funds you may never get in the future for capital funds today. Very tempting, but most request-evaluators see this coming miles away. The correct test for avoided-cost benefits is whether the putative avoidable cost is otherwise unavoidable, in other words, that the future budget request would inevitably be approved. This is a tough test, but real avoided-cost benefits can and do pass it.” – Steve McMenamin, Atlantic Systems Guild

If increasing the size of a product exposes you to more-than-proportional increases in cost, then decreasing product size offers the possibility of more-than-proportional savings. Eliminating those portions of the system where the value/cost ratio is low is probably the easiest and best way to relax constraints on time and budget.

How to lead when you’re not in charge

Summary of HOW TO LEAD WHEN YOU’RE NOT IN CHARGE: Leveraging Influence When You Lack Authority by Clay Scroggins.  Summary by Avery Erwin.


Takeaways

  • No matter your position, you can create a pocket of excellence around you right now.
  • Influence, not authority, is the real currency of leadership.
  • Develop a plan to grow as a leader. Lead yourself, choose positivity, think critically, and reject passivity.
  • Know when, how, and why to challenge authority. Don’t stay silent.
  • The way you lead now determines how you will lead in the future.

Start leading now.

Clay Scroggins worked his way up from facilities intern to lead pastor at the North Point megachurch outside Atlanta, Georgia. Looking back on his twenty years rising through the ranks, Scroggins realizes that he missed many opportunities to develop as a leader well before he was officially in charge. He challenges readers to stop dreaming about a corner office and discover the opportunities they have, right now, to lead. If you wait to lead, Scroggins insists, no one will ever put you in a position to lead.

Influence is the currency of leadership. Think kabash, not kibosh.

Many of us buy into the myth that we must be at the top of the totem pole to lead an organization. This kind of thinking conditions us to assume a go-with-the-flow attitude and shrug off responsibility. We make excuses, blame the institutional machinery. But Scroggins looks at self-appointed leaders like Martin Luther King Jr., Nelson Mandela, and Mahatma Gandhi as hard proof that “Leaders lead with the authority of leadership … or without it.” In fact, many who have formal authority fail to lead. The real lifeforce of leadership is not authority, but influence. The call to leadership is about forgoing brute force and instead cultivating your influence, wherever you are positioned in your organization. By focusing on your own area of responsibility, you can create an “oasis of excellence” around you that ripples on up.

We all know kibosh, as in “put the kibosh on it.” Kibosh means “put an end to; dispose of decisively.” A kibosh leader eliminates, subordinates, squashes, puts an end to people and projects. They leverage their authority to elevate themselves. Scroggins calls for a kabash style of leadership, from the Hebrew word for “subdue.” In its original context, kabash means to bring something under your control in order to cultivate it. A kabash leader leverages influence to elevate those around them. They lead with humility and create space for others to flourish. A kabash leader shows up to serve.

Plan to Grow as a Leader: Practice the Four Behaviors.

1. Lead Yourself

“Nothing so conclusively proves a man’s ability to lead others

as what he does on a day-to-day basis to lead himself.”

– Tom Watson, former CEO of IBM

The onus is on you to develop as a leader. Don’t depend on your boss to lead you well, or to find opportunities for you to lead. For burgeoning leaders, Scroggins stresses the importance of having a personal vision and built-in accountability. Make a plan to lead. Use others to motivate yourself to assume more responsibility. Get feedback on how you’re developing as a leader. When Scroggins made a job transition some years ago, he conducted an anonymous 360-degree survey of his former coworkers. If you want feedback, you need to ask for feedback. And never stop asking. Even leaders at the top continue to hone their leadership skills. “No matter how successful they become,” Jim Collins explains of truly great leaders, they “maintain a learning curve as steep as when they first began their careers.”

2. Choose Positivity

Research shows that people are most fulfilled at work when they understand how their specific role contributes to the larger results of their organization. Positive energy alone is a tremendous asset to a team and to an organization. One way to boot up a positive approach is to embrace a panoramic, what Scroggins calls a “panoptic,” view of expectancy and hopefulness, a mental posture that is fueled by trust and forward-thinking. Positivity, mind you, doesn’t come naturally. It’s developed, and it’s during the hard decisions you didn’t make and might not like that, as a leader, you’re called on to stay positive.

3. Think Critically

There is such a thing as too much positivity. Unbridled positivity can be shortsighted or unrealistic. Beware of becoming what Scroggins calls the “rainbow-puking unicorn.”

Reinforce a positive approach with critical thinking. Scroggins isolates the three pillars of thinking critically:

  • Question. Challenge assumptions so you can discover the hidden realities behind actions and outcomes.
  • Notice. Pay attention to abnormalities.
  • Connect. Draw connections between seemingly disconnected behaviors and feelings. Understand the cause and effect relationships. The more connections you can draw, the more self-aware you’ll be.

Nothing fosters critical thinking like facing obstacles. Scroggins looks at the example of NFL quarterbacks who came from second-rate football colleges. Their “tough road” turned them into critical thinkers. Because his college couldn’t recruit a powerhouse offensive line, Ben Roethlisberger learned how to scramble out of broken pockets on his own.

Thinking critically means thinking as an owner, even if you’re an intern. It means scheduling thinking meetings between meetings, instead of getting sucked, half-alert, into the vortex of back-to-back meetings. Efficient is not always effective. A leader who thinks critically keeps strong motives front and center. Instead of being critical, they think critically about the situation and lend others a hand.

4. Reject Passivity

“You will never passively find what you do not actively pursue.”

– Tim Cooper

Remember, you’re in charge of yourself. Don’t depend on your boss to find work for you. Scroggins has a handy mnemonic to help you “resuscitate your proactivity.” Do CPR.

  • Choose. If you’re mid-level or even an underling in an organization, you often have a better idea of what needs to be done than those who are officially in charge. Take initiative and pick a project. Don’t wait for your boss to pick a project for you. Clean out the company closet that’s stuffed to the brim.
  • Plan. Once you stop stacking meetings and start scheduling thinking meetings for yourself, you’ll discover that well-planned ideas thrive at meetings because they had time to develop. In your calendar, plan time to plan.
  • Respond. Don’t get caught on your heels putting out fires all day. Respond to what’s most important, not just what’s next. Pay attention to the direction your boss is heading in, so you can both respond swiftly and anticipate the next move.


Challenge authority.

Many organizations unconsciously tend toward the status quo and resist change. Great leaders challenge the status quo to make changes for the better. This means learning to spot problems in the first place and then brainstorming solutions. Challenging the status quo also always means challenging up. How you approach the challenge will dictate just how well the challenge goes. Tailor your approach to fit the person you’re challenging. Great leaders don’t get defensive and take challenges to their system personally, but understand, as Scroggins says, that a “Change to the present system will be perceived as a criticism of past leadership.”

The Milkshake Experiment. Scroggins zeroes in on Shane Todd as a perfect case-study for making a patient, wise, and paradigm-shifting challenge to authority. In 2006, Shane was a Chick-fil-A franchise owner and operator in Athens, Georgia. He recognized a demand and introduced the milkshake at his store before it was available nationwide. Shane understood that senior management was concerned about service time, so when Tim Tassopoulus, senior VP of operations, visited Shane’s store, Shane proposed a race. If Tim could prepare two diet cokes faster than Shane could make a milkshake, Shane would call off the milkshake experiment. Two years later, the milkshake was the highest rated product on Chick-fil-A’s menu.

Shane was a single franchise owner, but he had cultivated enough relational capital to challenge delicately and strategically so he could bring about what was ultimately a game-changer for Chick-fil-A. His experiment supplies us with a whole toolkit for challenging up. Here are 10 tools:

  • When you approach your boss with a problem, think ahead and present a solution.
  • Be explicit that you have good intentions before you challenge.
  • Be curious, ask questions, and mean it. Admit that you may be missing information.
  • Know what your boss wants.
  • Know what’s essential to your organization’s mission. Acknowledge what’s secondary.
  • Challenge up quietly, but don’t stay silent.
  • Challenge when emotions are low.
  • Champion publicly, challenge privately. Schedule one-on-one time with your boss.
  • Be okay with a no. Take it as a not yet.
  • Why you’re challenging is more important than what you’re challenging.

Scroggins boils the art of challenging up down to one thought-provoking question: “How does your boss feel when your name pops up on his or her phone?”

“As now, so then.” Your influence today will determine your influence tomorrow.

Every day, you are around people whom you can lead and serve. True leadership is powered by influence and filled with humility and self-sacrifice. Many leaders focus on their own success. Few leaders commit themselves to their entire staff, up and down the totem pole. The numbers say a lot: according to one Gallup poll, 50% of people who leave their jobs do so because of their bosses.

Like positivity and critical thinking, leadership is a skill that must be developed. Otherwise, it atrophies. If you don’t develop your ability to lead today, you won’t have the equipment or instincts to lead when you’re finally put in charge. Scroggins takes a point from Scott Adams, who advises, “Avoid career traps such as pursuing jobs that require you to sell your limited supply of time while preparing you for nothing better.” Look for opportunities to lead right now, wherever you are on the totem pole. Every role is an opportunity to lead. Stick to a personal growth plan for developing as a leader. Make a list of exemplary leaders whom you can model. True leaders are self-leaders, self-teachers.

Every role is training for the future, when you will have the corner office and be officially in charge. But every role until then is still an opportunity to lead. Remember that the most powerful leader is a servant-leader, someone who shows up and asks, “are the people I’m leading here for me or am I here for them?” Start cultivating your pocket of excellence today.

Formally establishing a sole proprietorship in NYC so I could open a business bank account


Filing the Code For Cash business certificate with the city 💸

A post shared by Code For Cash (@codeforcash) on Aug 31, 2017 at 9:57am PDT

Formally establishing a sole proprietorship with the city was a necessary but exhausting process.

It’s just one of those things that I kept putting off because I was intimidated by the process.  But it became a necessity: my personal transactions were so commingled with my business transactions that I had to “get my house in order”.

I wanted to understand questions like: “How much is our overhead?  How will it increase?”  And I wanted to truly understand: “Exactly how much does it cost us to deliver our services, and how low could our gross margins be – given what we know about conversion rates and such – without preventing us from growing rapidly?”  Unfortunately, because I had been simply tracking everything in an Excel spreadsheet (and not keeping it up-to-date), and because I hadn’t been categorizing transactions as “overhead” vs. “marginal costs”, I wasn’t in a position to answer these questions.  Fortunately, we recently hired a bookkeeper, who will be able to provide this data.

One necessary step that makes everyone’s lives easier is to open a separate bank account for all transactions related to the business.  I know this sounds like a no-brainer, and I’m ashamed it’s taken me so long to get this together.  I think if I had a step-by-step tutorial on how to get this done, I would have done it sooner, so here is the Code For Cash tutorial:

Tutorial: NYC guide on how to open a new, separate bank account for your sole proprietorship online business (DBA) – SaaS, software development, freelancing, agency, etc. – if you are a USA citizen

Time requirement: 2 hours, or more if you are unlucky navigating NYC traffic

Prerequisites:  Having a personal bank account in a bank like Citibank, TDBank, etc.  USA citizenship.

  1. Bring two forms of government identification (just in case) before you set out on your quest.
  2. Go to lower Manhattan.  At 11 Park Place there is a shop inside the building; the clerk will sell you 3 copies of form X-201 for $10.
  3. Go to the courthouse at 60 Court Street.  There is also a basement entrance to the building, but it’s fine to enter through the main way.  You will have to go through a metal detector; reminds me of the airport in a way.
  4. Go to the basement.  You will have to ask around to find out which room to go to, since the signage is wrong (fear not if the door to the business licensing room says “closed”; government business is up and running in another room).
  5. Show them form X-201 and your ID.  They will notarize the forms for you.  Pay $120 for 2 copies of your business certificate.
  6. Go to your bank and show them your business certificate and your driver’s license.  Expect them to have a bunch of forms for you to sign.  This process takes about 30 minutes to an hour and a half, depending on wait time at the bank.
  7. Expect it to take up to 5 business days for your account to be authorized.  This delay is due to AML (anti money laundering) and KYC (know your customer) laws.

Software development risks: the “tree swing risk”

In any software development project, there is going to be risk.  If a project doesn’t have risks, you shouldn’t do it!  After all, as the saying goes: “no risk, no reward”. However, based on the experience of your management team, you can recognize in advance the risks that are likely to occur and either mitigate them, prepare a contingency plan, or avoid them entirely.

One of the most common risks is what I call the “Tree swing risk”, and it’s exemplified by this image:

[Image: the classic “tree swing” project cartoon]

I posted this image to the Code For Cash Instagram account and it got a lot of likes.  “So true!” people responded.  It resonates with almost everyone’s experience, yet the risk is rarely managed for.

This essential risk describes the customer not receiving what they had envisioned in their mind.  This outcome leads to the negative emotions of surprise and disappointment.  When this happens, “rework” is needed, often at a cost to both parties – it leads to a situation where nobody is happy.  Often, the developer is fired.

How do we avoid this risk?  Through creating an “idiot-proof specification”.

An idiot-proof specification has clear acceptance tests that are binary “yes or no” answers regarding functionality that the application clearly has or does not have.  These are usually in the form of behavioral tests: “Can the user create an account with their email address?”  “Can the user recover their password through the site?”  “Can a user create a Widget through the site and have it be saved into their user dashboard?”  “Can the user export the Widget to PDF format?”  “Does the PDF show three different vantage points depicting the widget?”

An idiot-proof specification has visual wireframes.  My favorite tool for this is Balsamiq Mockups, but any reasonable tool that lets you build mockups/wireframes will enable the customer to visually confirm that you will be building what they envision. While most engineers are “left-brained” and communicate through text, most managers and executives are visual thinkers and respond best to visual stimuli.  A visual mockup of the deliverable puts everyone on the same page.  A visual mockup need not have beautiful, branded user interface design elements, as long as it is specified in written form whether or not the final deliverable will have a beautiful design.  Always specify this case – whether or not the deliverable will be beautifully designed – in writing, and have both parties initial it!

An idiot-proof specification may be shopped on the market.  This means that the software may be built by the lowest cost credible bidder on the market; indicators of credibility run the gamut from “has GitHub or portfolio” to “passes reference checks” to “meets standards of our procurement department”.  Once you have an idiot-proof specification, you are in a better position to benefit from an outsourcing model.

A specification should be created in conjunction with an experienced developer (who knows what is possible), the customer (who understands the market) and the end-user who will be using the application day-to-day.  A specification is something that should be paid for (by the customer).  It becomes the customer’s intellectual property.

There is a saying: “a fair fight is the result of poor planning”.  Almost all software projects go over budget, and the overruns are usually due to the materialization of risks that weren’t accounted for – i.e. work that was unexpected, rather than expected work that ran over budget.  Having the specification in hand enables a better understanding of the expected work.  In case you want another cliche, how about “measure twice, cut once”.

Of course, there is also the question of what should go into the specification and what is extraneous.  These will be addressed in future blog posts, but for now, I recommend googling the phrase “minimum marketable feature”.

 

 

Code For Cash: Month 8 Report

Introduction

Truthfully, I have been dreading writing this report– and also dreading publishing it. This month we had negative progress, and that’s the last thing I want to publish. Not only is it somewhat embarrassing, but it’s also, possibly, negative signaling to customers and contractors who may take it as a sign that the end is near. There are many things to be optimistic about, and I’m here to move Code For Cash forward.

What do we do, again?

  • We are a community of freelance (and full-time) software developers
  • We are a job search tool that is niche-focused on software developer jobs
  • We do agency-style software development

Revenue for month 8

Total revenue: $23,387.04
Recurring revenue: $3,000

Let’s compare revenue to last month.

MRR: $3,000 vs. $4,200: a -28.5% M/M change in MRR
Total revenue: $23,387.04 vs. $66,500: a -65% M/M change in total revenue

The total revenue drop was expected, and is not actually a disappointment; last month, I mentioned that we saw a higher than average spike in revenue due to completing milestones on several projects. But, the MRR drop? Oh god, what a total bummer.

What caused the MRR drop? A client who was on a high-MRR plan canceled service. Why? This high-value client kept emailing and calling over the weekend; I usually dropped everything and answered, to the dismay of my friends and family. This time I decided not to– I was burned out. Essentially, the client fired me. It was ENTIRELY my fault for not communicating and setting expectations properly.

The good news

We sold 392 books. We got up to #3 in the Kindle bestsellers category for “Software Development”. However, this came at a cost. We cranked up the advertising, and it cost on average $2.85 for someone to download a book. Just guessing: if 50% of the people who buy the book end up opening it, and 10% of those people end up actually reading it, that means it costs about $57 to acquire a reader. I actually feel like these economics are promising… in fact, they’re way, way better than Reddit ads.

More good news

We are in “trial” with many coding bootcamps; their students are getting tons of value out of leveraging us as a tool in their job search.

Accountability – last month’s goals – how’d we do?

Find a predictable channel for acquiring hiring manager customers.

This didn’t go great. We tried tons of different channels but haven’t discovered any scalable traction channels yet. Believe it or not, the best client/hiring manager channel we currently have is the one I wrote about extensively in the book… Craigslist. But it’s not exactly scalable without violating their TOS. Here are some other channels we tried…

  • Hiring a Harvard Business School person to write a case study for us
  • Emailing people
  • Cold calling people
  • Reddit ads
  • Printing 500 copies of the case study and distributing around the Flatiron (startup) district of Manhattan
  • Distributing copies of the case study at an AWS event
  • Blogging

We are still working on developing a scalable strategy. This is the most important thing for us to do. We already have a predictable (expensive) channel for sourcing developers, which is currently paused, because the bottleneck in our business is original projects. When we have too many original projects, we need more developers. Right now that’s not the case.

In lieu of finding a scalable strategy for acquiring hiring manager customers, we focused on something we can control: spidering more developer job boards and adding them to the database.

Bad job leads are in red.  Good job leads are in blue.

[Screenshot: spreadsheet of job leads]

Miscellaneous updates

Slack is really at the core of our community. Why?

  • We do job alerts via Slack. (You customize your alerting preferences, including positive and negative keywords, via the webapp).
  • We have cool Slack extensions. /keywords @username lets you see someone else’s skills; /skill aws,python lets you see the @usernames of members who do both AWS and Python. Finally, /timelogs @username lets you see your time chatting with @username, in billable increments. (This is the coolest new feature – a serious time saver!)

Unfortunately, Slack was randomly killing some of our services because we had reached the integration limit, so we ponied up for the extremely expensive monthly fees. Yes, we now have “upgraded” Slack.

This has changed the cost of delivering the service, so we changed our fee structure. We immediately raised the price for individual users to $200/month. This had two effects:

  • Way fewer people are signing up.
  • The few people who are signing up at this price point are extremely high quality and I can put them to work immediately on client projects.

Ultimately, we may be able to reconcile the costs of delivering the service with an appropriate profit margin and lower our monthly fees soon (I’m sure this is going to inspire exactly none of you to sign up, but oh well. Transparency.). The $200/month was a bit of a knee-jerk reaction, but it’s working okay. Remember, the cost of delivering the service also includes us paying human intelligence workers 10¢ for each job to do manual review and add metadata that isn’t always explicitly present in the listing (1099, W2, onsite, remote, a flag if we accidentally pick up a non-tech-related job, etc.). Once we have a big enough corpus we may be able to do better machine learning and reduce those costs… but for now, the costs are what they are.

One more cool thing:

We have a client called WorkReduce. They’re an adtech company and they take jobs that require a human workforce and lower costs for companies by distributing them remotely.

WorkReduce did something interesting: they have one task for which they pay humans on the order of 75¢ per task. It involves taking a page that previews multiple sizes of ad creative, taking screenshots of it, cutting up the screenshots in Photoshop so that each creative size is a separate picture, and then uploading them into Salesforce. We were able to automate this for them, and they paid us a small upfront fee for the Salesforce integration as well as a per-task fee (25¢) for each successful task.

Let’s look at the benefits of this:

  • Developers can get paid a minimal upfront fee and also receive passive income (50%+ of developers I know are looking for passive income while also bringing in guaranteed money… solves a huge problem)
  • Company lowers their costs and increases automation. Company gets to market faster and with lower capital expenditure. Company becomes valued with 10x SaaS-style valuations instead of 1x service company valuations.

I feel like this is extraordinarily win-win-win.

We’re calling it Software as a Service as a Service, or SaaSaaS. We’re also in the process of building a serverless marketplace for this– one new team member is working on this subsidiary initiative fully for SAFE, at 2x his normal hourly rate.

Next steps for us? Figuring out exact costs to deliver monthly service and decrease monthly fee so that we see a reasonable, but not exorbitant, margin of profit.

If you love this and want to read historical updates

– Month 1 https://www.indiehackers.com/forum/post/-KZTIFbGtOSKQUNrIsml

– Month 2 https://www.indiehackers.com/forum/post/-KajWVSRfUmTvlh1X9AS

– Month 3 https://www.indiehackers.com/forum/post/-Kfb7VxafAaHNPvgdvD7

– Month 4 https://www.indiehackers.com/forum/post/-Ki5j8MHQh0pkRm-DcpL

– Month 5 https://www.indiehackers.com/forum/post/-KkWE6UWbr2bDqar8BRb

– Month 6 https://www.indiehackers.com/forum/post/-Kn-QcExpR5_YJuwAQkU

– Month 7 https://www.indiehackers.com/forum/post/-KpQZV7B3INpJ8336_nH

Lock-free algorithms overview and (semi-) lock-free stack implementation

By Vladimir Pavluk.

In this article I would like to discuss the topic of lock-free algorithms, and particularly, a lock-free stack implementation. In fact, this implementation is lock-free only in name: there is a lock, it just is not obvious, and why that is will be discussed further on. All examples are given in the C programming language.

For those unaware of lock-free concepts, I’m going to briefly describe the matter in question and why it is useful. In multithreaded applications, it is very common that several concurrent threads require access to a single resource (memory, an object, etc.): a processing queue would be a good example. To keep this shared data consistent we need to protect it from simultaneous, uncoordinated changes. Usually, the means to coordinate, or synchronize, such access are implemented using various kinds of locks (mutexes, spinlocks, etc.), which fully lock access to the data in question while one of the threads accesses it and unlock it after the changes are completed.

I won’t go into the differences between mutexes and spinlocks; I just want to mention that the principle stays the same whatever kind of lock is used.

In contrast to the locking principle, lock-free algorithms use atomic operations like CAS (Compare And Swap) to coordinate concurrent access and keep the data consistent. As a consequence, lock-free algorithms are usually faster than their mutex-based counterparts, and at every stage they guarantee forward progress in some finite number of operations. This would be ideal if correct implementations weren’t so highly complex, with pitfalls ranging from the famous ABA problem to possible access to freed memory segments and program crashes. So, solely because of this complexity, lock-free algorithms are still not used everywhere.

But that was just a preamble, so let’s dive into the topic. I’ve encountered a lot of lock-free stack implementations all over the Internet. Unfortunately, many of them are undoubtedly non-working.  Some of them are “naive” (an implementation is called “naive” if it does not take into account the ABA’ problem), and only a few are really working and useful.

So let’s see why that is, and why finding problems in lock-free algorithms is such a riddle.

One of the biggest problems in lock-free implementations is dynamic memory management.  We need stack nodes to be allocated on the heap and, if we don’t want memory leaks, to be deleted when they are not used anymore.  Allocation rarely causes issues, but deleting nodes can present a real problem.  Let’s take a look at a “naive” implementation of a lock-free stack in the C programming language:

struct node {
  void *data;
  struct node *next;
};

static struct node *head;  /* shared top of the stack */

void push(void *data) {
  struct node *n = malloc(sizeof(struct node));
  n->data = data;

  do {
    n->next = head;
  } while(!compare_and_swap(&head, n->next, n));
}

void *pop() {
  struct node *n;
  void *result = NULL;

  do {
    n = head;
  } while(n && !compare_and_swap(&head, n, n->next));

  if(n) {
    result = n->data;
    free(n);
  }

  return result;
}

This implementation is called “naive” because it assumes that if CAS succeeds, we’ve got the data we intended to get. In fact, there are a lot of scenarios where head is the same as at the beginning of the iteration but points to completely different data.

Let’s imagine a situation where one of the threads saved the head into its n variable and read n->next, but has not yet called compare_and_swap. Then the thread yields control to other threads, one of which pops and deletes head, another pops and deletes head->next, and a third pushes a new element onto the stack, with the memory allocated at the address of the old head (which was freed by the first thread). Then control passes back to the first thread.

Now head will be the one compare_and_swap expects in n, but n->next will be pointing to the memory which was freed by the second thread. So when the pop operation succeeds here, head will end up pointing to that deleted memory area, and the program will sooner or later crash.

Many sources call this the ABA’ problem. The name mirrors the sequence we see in the data: A, then B, then A’ (which looks like A but is not actually A). This is a real hassle for those starting to dig into lock-free algorithm implementations! The most common way to solve this problem is to keep a tag in addition to the data, which changes with each push; even if the pointer to the data is the same, the tag will be different, which in the ABA’ case is exactly what distinguishes A from A’.

This implementation is also prone to another type of problem: the memory deallocation problem.  To demonstrate it, let’s assume one of the threads saves head to n and yields control to other threads.  Another thread successfully pops from the stack, so head is changed and the previous head element is deleted.  When control returns to the first thread, n will be pointing to the freed memory area, and the attempt to read n->next will access invalid memory, which is obviously not good for a program and can lead to a crash.

This problem can be solved in different ways.  Some use a hazard-pointer list, which stores the pointers that must not be deleted at that moment.  Some temporarily replace the head with a ‘magic’ value that prevents other threads from reading and deleting it.  But the implementation that I suggest uses the fact that access to the stack is done via a single element – the top of the stack (head).  So, relative to other containers like queues or lists, the gain from a lock-free approach is not that huge.

That is why I suggest a combined approach to the stack: lock-free writing (pushing) using CAS, and spin-locked reading (popping), which prevents concurrent simultaneous reading.  Spin-locked popping from a single thread at a time means that we’re protected from accidentally accessing deallocated memory, and also that while we’re reading, elements can’t be removed and then re-inserted, which means the ABA’ problem is solved too.  In other words, I suggest a semi-lock-free algorithm, which is lock-free in its easy part and uses a lock in its most error-prone part.

One possible implementation is as follows:

/* Definition inferred from usage (the article does not show it): the
   stack is a list of nodes headed by a sentinel "top" node, whose
   stack_mutex field doubles as the pop spinlock. */
struct mt_stack {
  void *data;
  struct mt_stack *next;
  int stack_mutex;
};

void mt_stack_push(struct mt_stack *top, void *data) {
  struct mt_stack *tb, *old;
  tb = malloc(sizeof(struct mt_stack));
  tb->data = data;

  old = top->next;
  tb->next = old;

  while(!__sync_bool_compare_and_swap(&top->next, old, tb)) {
    usleep(1);
    old = top->next;
    tb->next = old;
  }
}

void* mt_stack_pop(struct mt_stack *top)
{
  struct mt_stack *current;
  void *result = NULL;

  // Acquire the spinlock
  while(!__sync_bool_compare_and_swap(&top->stack_mutex, 0, 1)) {
    usleep(1);
  }

  current = top->next;

  // We can't pop and delete one element, because it's read-locked
  // But it can change because the push operation is lock-free
  while(current && !__sync_bool_compare_and_swap(&top->next, current, current->next)) {
    usleep(1);
    current = top->next;
  }

  if(current) {
    result = current->data;
    free(current);
  }

  // Release spinlock
  while(!__sync_bool_compare_and_swap(&top->stack_mutex, 1, 0)) {
    usleep(1);
  }

  return result;
}

This implementation was tested alongside a correctly-implemented purely lock-free algorithm and algorithms using spinlock and mutex locking. The execution times for all of the mentioned algorithms are as follows (and fluctuated only insignificantly across runs):

mutex:
real 0m1.336s
user 0m1.173s
sys 0m3.628s

lock-free:
real 0m0.533s
user 0m0.792s
sys 0m0.046s

spinlock:
real 0m0.520s
user 0m0.630s
sys 0m0.018s

semi-locked:
real 0m0.353s
user 0m0.360s
sys 0m0.075s

The fact that the lock-free and spinlock algorithms differ so little is explained by the fact that a stack has a single point of access (the top), which I mentioned before. So why is the lock-free approach slower? Because of all the guards against deleted pointers and the ABA’ problem that I mentioned.

The conclusion follows: before implementing a lock-free algorithm, an analysis should be performed to determine whether the container allows multiple access points and simultaneous access from several threads. It is possible that in a particular case (like that of the stack), lock-free algorithms will only add hassle without providing any significant gain in performance.

This article also shows how mixed locked + lock-free approaches could be reasonable in implementing thread-safe concurrent algorithms.

What is serverless and why does it matter

Until 2007, web hosting was a huge expense for companies.  If you wanted to run a website or server that handled any serious amount of traffic, you pretty much had to buy or rent your own physical server.

1U data center server

You then had to put it in a data center, where it would be installed by some sad guy with a ponytail1.

1: Just jealous that my hair fros and I can’t grow a pony tail or man bun.

If you had the budget, you could use a company like Rackspace, where you could either rent their servers or share a server with a few people.

Then, the “VPS” (Virtual Private Server) revolution came in 2007.  Amazon launched Amazon Web Services.  Basically, anyone could spin up a server and pay Amazon an hourly fee, rather than having to buy the hardware and rent access to a rack in a data center.  This reduced hosting costs substantially, especially in the beginning.  It’s cheaper to rent a few servers by the hour than to own your own hardware.  Once you are spending millions per year in server expenses, people still evaluate the option of owning their own hardware, but the VPS/hourly rental innovation changed the game entirely.  Amazon competes in the space with Google Cloud, Salesforce (Heroku), Digital Ocean, Microsoft Azure, etc.

Another benefit is that it’s easier to scale up: if your app suddenly becomes an overnight success, you can triple your server usage with the click of a mouse, rather than having to order hardware, drive to the data center, etc.  This reduced lag time from weeks (buying the hardware, installing it, setting it up) to days, or even a few hours of configuration within the AWS / Google Compute Engine / Azure console.

In 2015 or so, another innovation came: “serverless” functions.  Instead of renting a server by the hour and then installing/deploying your app/website/software on it, you can just write your app as a series of functions.  Instead of paying to rent the whole server per hour, you pay only for the system resources your app is using.  For example, let’s say you have an app that uses 100 MB of RAM when in use, yet the server has 2 GB of RAM.  If you only pay for the hosting costs of your app while it’s actually being used, you would be paying a fraction of the costs.

Serverless development has a few other benefits too:

 

  • Reduces expenses since you pay for the computing resources you use
  • Easy to measure function execution: instrumentation and measurement is built in to the concept
  • You no longer have to deal with server management.  You don’t have to watch the operating system for vulnerabilities/viruses, you don’t have to upgrade the software, you don’t have to deal with machines that experience hard drive failure.  All you have to worry about is your app, rather than the environment it runs on.
  • Infinitely scalable: all you have to do is deploy your app to AWS Lambda / Azure Functions, and the provider figures out how to provision the server resources necessary to execute your app.  The “cloud” reduced “time needed to scale” from weeks to days/hours.  Serverless reduces “time needed to scale” from days to… nothing.  It’s instant!

In summary….

  • Pre-2007: you essentially needed a data center or expensive web host (who had their own data center / hardware) to deploy your code for any serious scale
  • 2007: Amazon launches AWS.  This is what we now call “the cloud”.  8 years later, pretty much every big company has a “Cloud” strategy
  • 2015:  Amazon launches serverless functions.  2 years later, the hardcore techies are enthusiastic about it and taking early advantage.  Within 5 years this will be mainstream… TV commercials, etc.

 

Thanks to Rich Jones for checking my ponytail comment. Thanks to Justin George for underscoring the importance of the time-to-market differences.

The Art of the Win – How Donald Trump Stunned the World and Became the 45th President of the United States

By Martin Keen.

The presidential election of 2016 has taught us all two important things:

1.) National pre-election polls can be completely wrong.
2.) In America, even if you are just a multi-billionaire real estate magnate, you too can work hard and achieve your dream of becoming the president of the United States.

 

On Tuesday, November 8, Donald Trump managed to pull off what could quite possibly be the greatest political upset in United States history. By earning 290 electoral votes against Hillary Clinton’s 232 electoral votes¹, Mr. Trump became the nation’s 45th president-elect.

 

Many political strategists were calling for a substantial win for Clinton. President Obama provided his support on the campaign trail. Her millions of supporters were also hoping to finally put the first woman into the Oval Office. As the former Secretary of State, Hillary Clinton was poised to be the rightful successor to President Obama and continue the course that had been laid out by the Democratic Party. But alas, that was not to be.

 

On the night of the presidential election, the map of the United States turned primarily red. The mood inside Clinton campaign headquarters changed from the jubilant excitement of an expected victory to somber, stunned disbelief. Donald Trump had defeated Hillary Clinton and won the presidential election.

The question is: How did he do it?

 

Trump achieved his amazing accomplishment primarily by setting himself apart from all of the other candidates. Not only did he appear different from his political opponents in both the Republican and Democratic parties, he positioned himself as a candidate unlike any other in recent history. Many factors contributed to Trump’s incredible victory, and the factors that served him best were those that allowed him to be seen as unique.

 

Psychometrics

 

According to reports, the results of the 2016 U.S. elections were greatly influenced by a data analysis model created eight years earlier by Cambridge University student Michal Kosinski. While studying for his Ph.D. at Cambridge, Kosinski explored the idea of applying psychometrics² – a scientific field focused on measuring basic psychological traits – to data generated by Facebook users. He went on to develop a Facebook app named “MyPersonality”. The MyPersonality app was essentially a questionnaire built from questions extracted from a similar psychometric model named “The Big Five”.

 

¹ CNN.com, http://www.cnn.com/election/results/president

² MichalKosinski.com, http://www.michalkosinski.com/

 

Users of the MyPersonality app were presented with the option to share their Facebook data with Michal Kosinski and his team of researchers. Surprisingly, many of the users actually did share. Within a short time frame, Kosinski had in his possession one of the largest data pools of its kind: a combination of Facebook profiles and psychometric scores. This made his research an exercise in the unheralded potential of big data.

 

The results showed that, from every action on Facebook, a precise deduction was possible. With more data and time, Kosinski and his team of researchers were able to create an ever more precise model. By 2012, they could predict the ethnic background of a MyPersonality user with a frightening accuracy of 95 percent. They could also predict a user’s political party affiliation with an average accuracy of 85 percent from a sum of just 68 likes.

 

After rejecting a 2014 offer from Strategic Communication Laboratories, the parent company of Cambridge Analytica, Michal Kosinski thought little more of it until Cambridge Analytica was contracted to aid the “Leave.EU” Brexit campaign led by Nigel Farage in the United Kingdom. Around the same time, American politics came calling as well. Cambridge Analytica was credited by Ted Cruz as being responsible for his campaign surge, and Donald Trump’s team subsequently followed suit.


At the beginning of the election campaign season, Cambridge Analytica came up with a model capable of predicting the personality of virtually every American adult. This particular model had similar features to Kosinski’s earlier method, which he developed in 2008 at Cambridge University.


The big data company purchased data from many different sources, ranging from the types of magazines people bought to information about their cars and pets. Cambridge Analytica then used this aggregate data to create politically relevant insights, which in turn shaped messages targeted at specific voters.


However, instead of targeting the whole country, Cambridge Analytica focused only on 17 states that it believed Donald Trump could contest. It further separated the population into 32 different personality types. With the aid of this framework, Cambridge Analytica was able to make specific deductions; for example, people who preferred American-made cars were more likely to be potential supporters of Donald Trump.


The model's precision was a great help to Donald Trump's supporters in their campaign. It told them which homes would be receptive to which messages. Almost certainly, this is why Trump's team focused so intently on Wisconsin and Michigan during the last week of his campaign. It's a strategy that clearly worked well.


Tough Talk


When Trump first made his announcement that he would be running for the office of the presidency, he decided that all of his speeches were going to count. He focused on topics and areas that were highly concerning to Mr. and Mrs. John Q. Public. He began with his stance against illegal immigration. Securing America’s borders has always been a national issue.


However, other politicians have trodden lightly around this subject. Although border security has long been a concern, the majority of presidential candidates have only skimmed the topic, not wanting to appear off-putting to other countries and the cultures of their citizens. America is, after all, the great melting pot.


But Trump decided to take a markedly different approach to the topic. He specifically called out the border situation between the United States and Mexico, professing that many factions of illegal immigrants wanted to come to this country for the sole purpose of committing severe, felonious crimes. Trump was direct and right to the point. None of the other candidates would have dared to speak about the border situation in that manner. Some saw this as an expression of possibly racist views, and many of his opponents distanced themselves from him so that they would not be seen in a negative light.


But actually, this was an incredibly strategic move by Trump. He stood completely alone in taking this stance on national security. Most Americans were not accustomed to a presidential hopeful who spoke in such a direct manner. By doing this, Trump began to lay the groundwork for his self-marketing strategy, ensuring that the majority of the attention on the 2016 presidential election would be focused on him. People were intrigued by his tough talk; they wanted to know more about his plans, and to find out what he would say next. From this, Trump was able to build strong momentum and garner high attendance at his rallies. During the debates, he was clearly the standout candidate. Donald Trump ensured that all eyes were focused on him and him alone.


A Man of Action


As a complement to his stance on the nation's border security, Trump also professed a radical course of action to keep America safe: he introduced the public to the wall. Trump's plan to construct a stable barrier between the United States and Mexico was one the other candidates would not have touched. With his plans for building a wall along the border, Trump wanted to show that he was more than just a political candidate who talks about change. As an internationally known real estate businessman, he has both the knowledge and the experience to create grand structures, and a multitude of construction teams at his disposal. Whatever you may think about the man, Trump has years of experience that prove he knows how to put up a wall.


Whether or not an actual wall will ever be constructed on the border remains to be seen. However, merely by presenting this proposed solution to an area of national security, Trump set himself completely apart from every other politician. He had become a serious man of action.


The Outsider


Unlike the majority of men and women who run for public office, Donald Trump had never held a political position, nor had he served in any branch of the armed services. In the eyes of the Washington, D.C. brass, he was definitely a true outsider. Many political strategists thought these factors would prevent Trump from having any chance of winning. In actuality, they helped make him a viable candidate.


As long as there has been a stable form of government in our country, there has always been a certain level of dissatisfaction with it among citizens. Career politicians carry a general stigma of hypocrisy: it is a common belief that politicians will say whatever they need to say in order to get elected, and once elected, will simply do whatever they want.


Since Donald Trump was not a career politician, he was not surrounded by the same stigma of hypocrisy as his opponents. While someone else in his position might have felt intimidated going up against experienced candidates, Trump saw this as his golden opportunity. His campaign was founded on infusing new blood into Washington, which would allow him to bring about important and real changes to the country. He turned a potentially negative situation into a strong, positive force. He was the ultimate outsider.


Donald Trump beat the odds and became the next president of the United States. He used his business experience, his strong words, and his belief in himself to stand alone and win. Just like Frank Sinatra, Donald Trump did it his way.