    Why The Future Of E-business Is R-business
    By Fred Phillips | May 7th 2012

    After a dozen years as a market research executive, Fred Phillips was professor, dean, and vice provost at a variety of universities in the US, Europe...

    I make the case for replacing business executives with robots.*† This is no smart-ass slur on the intellects of executives. A transformation of business soon will be upon us. In the transformed enterprises, robots will take on more and more business decisions. Humans will retain a smaller but still crucially important role.

    The argument involves ‘real options’ and ‘agency theory.’ Explaining them is simple, though lengthy. So let’s get started, using an illustrative example:
    An opportunity requires Rineu Corporation to invest $10,000 now, with an assured first-year cash flow of $6,000. The second-year cash flow is uncertain with a 50-50 chance of either a $15,000 gain or a $5,000 loss.

    The project can be abandoned at the end of the first year if new information uncovered during this period suggests that the second-year payoff will not be favorable. Dropping the project at that time involves no salvage value or penalty.

    The cost of capital is 10%. The expected undiscounted gain in year 2 is 0.5 × $15,000 + 0.5 × (−$5,000) = $5,000. The net present value (NPV) of the project is thus:

    NPV = -$10,000 + 6,000/1.10 + 5,000/(1.10)² = -$413

    The NPV is negative, indicating that Rineu should not make the investment.
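    As a quick check, the straight-NPV arithmetic can be reproduced in a few lines (the figures are the article's; the variable names are mine):

```python
# Rineu Corporation example: straight NPV using the expected year-2 cash flow.
r = 0.10                                          # cost of capital
outlay = -10_000                                  # initial investment
year1 = 6_000                                     # assured first-year cash flow
year2_expected = 0.5 * 15_000 + 0.5 * (-5_000)    # 50-50 gamble -> 5,000

npv = outlay + year1 / (1 + r) + year2_expected / (1 + r) ** 2
print(round(npv, 2))                              # -413.22
```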
    This NPV calculation didn’t account for the option of abandoning the project once the additional information arrives. What if it did? Then we could write:
    NPV(ABANDON) = -$10,000 + 6,000/1.10 = -$4,545

    NPV(KEEP) = -$10,000 + 6,000/1.10 + 15,000/(1.10)² = $7,851

    NPV(OPTION) = 0.5*NPV(KEEP) + 0.5*NPV(ABANDON) = $1,653.
    The now-positive NPV implies that we should make the investment, assuming (i) no alternative projects show higher NPVs, and (ii) we can trust a manager to make the right follow-up decision at the end of year 1.
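    The decision-tree version is an equally short computation (variable names are mine); it makes the intermediate branch values explicit:

```python
# Value of the project when year-1 information can trigger abandonment.
r = 0.10
year1_pv = 6_000 / (1 + r)                              # assured cash flow, discounted

npv_abandon = -10_000 + year1_pv                        # walk away after year 1
npv_keep = -10_000 + year1_pv + 15_000 / (1 + r) ** 2   # favorable year 2

npv_option = 0.5 * npv_keep + 0.5 * npv_abandon
print(round(npv_abandon, 2))   # -4545.45
print(round(npv_keep, 2))      # 7851.24
print(round(npv_option, 2))    # 1652.89
```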

    The example confirms what is well-known, namely that traditional discounted cash flow analyses can understate the attractiveness of new ventures that have highly uncertain initial returns.

    Real options
    A “real option” means treating a contingent operational decision using the same reasoning one would use for a financial option. Operational projects take on a higher initial valuation when one can count on a decision maker making the best use of information that is expected to arrive at a later time.

    Exercising a financial (stock market) option is easy: Your broker will remind you when the expiration date or strike price is reached. As you can imagine, though, the communication and control (“C2”) issues are more complex when a corporation faces a real option that involves many employees, departments, or divisions.

    When choices and “chance branches” are discrete and few, the “decision tree” analysis used above is a sufficient solution technique for a real option.

    However, many operational decisions are characterized by continuous probability distributions, and by continuous choice outcomes (for example, “Invest $x in the project,” where x is any non-negative number). In these instances, the math is much hairier, sometimes using the Black-Scholes model (http://en.wikipedia.org/wiki/Real_options_valuation), integral equations, or other scary creatures.
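    For readers curious about the financial-option machinery the real-options literature borrows, here is a minimal sketch of the standard Black-Scholes call valuation, with the present value of the project's cash flows playing the role of the stock price and the investment cost the strike. The numbers in the example are illustrative assumptions of mine, not from the article:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S: float, K: float, r: float, sigma: float, T: float) -> float:
    """Black-Scholes value of a European call. For a real option, S is the
    present value of the project's cash flows and K the investment cost."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Hypothetical project: cash flows worth $100k today, $100k to launch,
# deferrable for one year, 20% volatility, 5% risk-free rate.
print(round(bs_call(100, 100, 0.05, 0.20, 1.0), 2))   # ≈ 10.45
```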

    Our discrete-math example, though, suffices for the r-business argument.**

    If options are so great, why is NPV still widely used?
    Acquiring a fleet of trucks (for example) has one value to the acquiring company if later decisions about maintenance are made in a certain way, e.g., if the oil is changed at prescribed intervals. The fleet shows a different lifetime value if the oil is changed less frequently or not at all.

    If the return on fleet acquisition is allowed to depend on reliable mechanics changing the oil regularly, why are other kinds of project selections made without depending on reliable executives to make the subsequent decisions that maximize the project’s value? Some years back, I surveyed high-tech executives on this question. Here are their replies (in descending order of reported importance):
    Survey Scores: “Reasons Why Your Company Does Not Use the 'Options' Method of Valuing Projects or the Decision Analysis Approach (DAA)”

    • Perfect information for project evaluations at future points is rarely available (or difficult to obtain).
    • Operations executives do not like to discontinue their own projects at a future point of evaluation.
    • All possible “options” cannot be anticipated.
    • It is more convenient to obtain complete project funding now, rather than compete for partial funding with other projects in the future.
    • Conservative decision making avoids choosing “options” or alternatives that involve large downside risks.
    • The company’s entry and exit barriers for projects will not permit project expansion or discontinuance based on DAA.
    • Employee turnover and transfers make future project expansion/discontinuance difficult.
    • The process of evaluating a project at each future decision point may incur higher costs.
    • Project valuation is generally performed by financial rather than operations executives.
    • DAA is more complex than ROI/NPV.
    Personnel turnover means the person charged with making the later decision may no longer be with the firm when the decision comes due. If records are not complete, his/her successor may not be able to step in effectively. Then too, as implied by the survey results, the financial exec making the initial valuation may not trust the operations exec – who works in a different department – to make the right decision at the right time.

    We zero in on the trust issue, in the form of the agency problem.

    The principal-agent problem
    Suppose the decision in the worked example above is left to a risk-averse, loss-averse manager. The manager is awarded a bonus of 5% of profits. As regards the project in question, our manager’s private calculation adds a risk premium to the discount rate (cost of capital), making it 12% instead of 10%. His “utility” for money is concave, meaning that his second million adds less pleasure to his life than the first million did.

    Forget the formal functional analysis; we can make a quick ‘n’ dirty utility function for him (tailored to the example, but WLOG – ‘without loss of generality’) that looks like this:
    Utility = -200 + √(income + 2000)
    He has already nailed the bonus on $2,500 in profits earlier this year. He sees the current opportunity in this way:
    NPV(ABANDON) = √[0.05 (2500 - $10,000 + 6,000/1.12) + 2000] - 200 = -$156

    NPV(KEEP) = √[0.05 (2500 - $10,000 + 6,000/1.12 + 15,000/(1.12)²) + 2000] - 200 = -$150

    NPV(OPTION) = 0.5*NPV(KEEP) + 0.5*NPV(ABANDON) = -$153
    Let’s suppose that Rineu Corporation (as its name cleverly implies) is risk-neutral (or more generally that the controlling shareholders – the “principals” – have a risk profile that differs from that of their agent, the manager). If risk-neutral, the shareholders would expect a “yes” decision on this project because of the positive NPV of the option. The manager’s motives and personality, however, drive him to a “no” decision that’s in line with his personal utility but at odds with the interests of the shareholders. This is the classic “principal-agent problem,” as it manifests itself in the real options situation.
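    The manager's calculation can be reproduced the same way. As in the shareholder case, the keep branch uses the favorable $15,000 cash flow, but here it is discounted at his private 12% rate and passed through his concave utility (function and variable names are mine):

```python
from math import sqrt

def manager_value(cumulative_profit: float) -> float:
    """The article's illustrative utility, -200 + sqrt(income + 2000),
    applied to a 5% bonus on cumulative profits."""
    return -200 + sqrt(0.05 * cumulative_profit + 2000)

r = 0.12                          # manager's private, risk-adjusted discount rate
prior_profit = 2_500              # profits already banked this year
year1_pv = 6_000 / (1 + r)

v_abandon = manager_value(prior_profit - 10_000 + year1_pv)
v_keep = manager_value(prior_profit - 10_000 + year1_pv + 15_000 / (1 + r) ** 2)
v_option = 0.5 * v_keep + 0.5 * v_abandon

# Roughly -156, -150, -153: all negative, so the manager says "no" to a
# project the risk-neutral shareholders value positively.
print(v_abandon, v_keep, v_option)
```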

    What can be done?
    Some companies fiddle the bonus formula, trying to align the interests of owners and agents. This rarely works, because the manager-agent’s utility function is obscure even to him, and changes with his mood in any event.

    Other companies say, hell with it, we’ll just go with straight NPV. They’re leaving money on the table by taking this route.

    In some industries, companies do use real options on a routine basis. One such is the package delivery business. Pick-up and delivery orders in each 9-digit zip code arrive according to by-now well-tested probability distributions. The value of mobilizing x additional trucks and y more driver shifts (or diverting a particular truck from its current route) depends on the pattern of orders that materialize subsequent to that decision. Drivers are, of course, in constant communication with dispatchers. A perfect situation for using real options.

    Why is it perfect? The added profit margin from taking best advantage of truck and driver availability is significant. The decision situations happen routinely, indeed several times per hour. The decisions are of a uniform kind. C2 problems are minimized, because everything happens within the database that drives, no pun intended, the whole business.

    Who makes the decisions? A robot. Not Robbie the Robot, of course, but an algorithm. The robot knows the truck maintenance schedules, knows the neighborhood maps, and knows not to schedule triple shifts. It makes best dispatch decisions based on these and other constraints, and on evolving patterns of pick-up orders.

    With a tip o’ the hat to Tom and Ray’s Russian chauffeur, we’ll call this robot Pikop Andropov. There is no principal-agent problem with Pikop. Pikop has no utility function of his own; he can be easily tuned to the same risk profile as the shareholders’, taking full advantage of operational options in a trusted manner. Pikop’s control limits and alarm whistles bring a human operator running if and when truly unusual situations arise.
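    A toy version of Pikop's dispatch rule might look like the following. Everything here – the function name, the surge probability, the dollar figures – is a hypothetical illustration of mine, not any carrier's actual algorithm:

```python
# Toy dispatch rule: mobilize an extra truck when the risk-weighted
# expected marginal revenue beats the cost of the extra driver shift.

def dispatch_extra_truck(p_surge: float, revenue_if_surge: float,
                         shift_cost: float, risk_weight: float = 1.0) -> bool:
    """Return True when the weighted expected gain covers the shift cost.
    risk_weight = 1.0 encodes the shareholders' risk-neutral profile;
    values below 1.0 would tune the robot toward caution."""
    expected_gain = p_surge * revenue_if_surge * risk_weight
    return expected_gain > shift_cost

# Hypothetical zip code: orders surge 30% of afternoons, a surge is worth
# $900 in deliveries, and an extra driver shift costs $220.
print(dispatch_extra_truck(0.30, 900, 220))   # 0.3 * 900 = 270 > 220 -> True
```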

    The future
    As enterprise computing systems mature (Oracle, SAP, and their like are all still pretty messy), the C2 costs and risks of real options will decline. The “big data” and “business analytics” movements will make more industries’ information flows look like those of the package delivery industry.

    DHL and Federal Express, already for all intents and purposes e-businesses, have become r-businesses. They leverage real options, and have got rid of routine principal-agent problems. Human executives are there for strategizing and for HR functions. Tactically, they are responsible only for dealing with exceptions and emergencies.

    My students in Korea, where I’m now teaching, tell me Korean companies already operate in this manner, but in a slow, human-intensive, analog way. Rules for most situations are found in “manuals.” Human managers react to situations by looking for procedures in the manuals. They are permitted to call on higher-ups only when no manual addresses the situation at hand.

    So, I asked them somewhat facetiously, if managers are not allowed to make decisions flexibly... why are you studying for a Master’s degree in management? Naturally the question flustered them, and I got the answer I expected: none.

    The implication for management educators is clear, however. Human managers and executives of the future need to be taught not routine decision-making (which is what we’ve taught in the past), but emergency management: how to prevent, remediate, and minimize the impacts of crises and disasters. And how to creatively profit from exceptions to the norm. Robots will handle the rest.

    _____________________________________
    † The base option example stems from work done some years ago with Raj Srivastava. The principal-agent example is adapted from Katzman, Verhoeven, and Baker (2009).

    * Sure, you’ll say, someone’s already replaced Mitt Romney with a robot. But hey, I’m serious here.

    ** And involves nothing more complicated than Bayes’ Theorem.

    Comments

    MikeCrow
    We (I'll leave the names out, I don't want to turn this into a commercial) already have modules that use analytics to provide a customized dashboard for executives to help them make such decisions. We aggregate product info, change info, quality info, and support info into a picture of project statuses.
    And this is just from the segment I work for; we have lots of other pieces to fill in other holes.

    It's a lot of work, and companies are slow to adopt. But they are adopting.
    Never is a long time.
    vongehr
    "NPV(OPTION) = 0.5*NPV(KEEP) + 0.5*NPV(ABANDON) = $1,600.
    The now-positive NPV"
    depends entirely on assuming 0.5 (i.e. the assumption that the necessary information will turn up although the problem clearly states that it may well not) and reminds me of "new Bayesian analysis" (hide the desired result in one more level of math and it looks like science)
    Fred Phillips
    Sascha, the probability estimates may come from:
    • A long record of similar instances,  
    • A consensus of experts' subjective views, or  
    • A WAG (wild-ass guess).  



    For simplicity, I stuck to the analysis based on EVPI, expected value of perfect information. There is also math for expected value of imperfect information.
    And in any case, that's what sensitivity analysis is for. 
    You're right that estimating probabilities is a possible weak point. But a criticism is empty unless at the same time you consider, "What's the alternative?" In companies, the alternative is straight NPV. But that uses the same probabilities!
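    That sensitivity analysis is easy to run on the worked example: instead of assuming 0.5, solve for the probability of the favorable year-2 outcome at which the option-valued NPV breaks even (my arithmetic, using the article's figures):

```python
# Break-even probability for the favorable year-2 outcome.
r = 0.10
npv_abandon = -10_000 + 6_000 / (1 + r)             # ≈ -4,545
npv_keep = npv_abandon + 15_000 / (1 + r) ** 2      # ≈  7,851

# Solve p * npv_keep + (1 - p) * npv_abandon = 0 for p.
p_breakeven = -npv_abandon / (npv_keep - npv_abandon)
print(round(p_breakeven, 3))                        # 0.367
```

    Any subjective probability above roughly 0.37 leaves the option-adjusted NPV positive, so the "yes" decision is fairly robust to the disputed 0.5.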


    BDOA
    What a dystopian nightmare: replace all the workers with robots and slaves, and all the customers with addiction-driven impulse buyers, with no room anywhere for free thought or educated decision making. I think this is what happens if the leaders at the top of industry don't realize their duty to their customers and workers.

    You ask how you can trust the workers when you have low staff retention of replaceable labour units, in a competitive and unforgiving labour marketplace. The answer is to give workers pensions and a safe progression in the workplace, so that they can know that if they work well for the company, it will provide for them. Without that, I as a worker might become too paranoid about my bosses to believe that it is in my interests to work effectively for them.
    BDOA Adams, Axitronics
    Fred Phillips
    Barry, thanks for bringing the human dimension into the discussion. I agree with all that you've said, but I think you misread my theme.
    Employees mistrusting and being dissatisfied with an employer are one reason for the principal-agent problem, as you note. It is not the only reason, and in fact I went on and on at length about agents not always being aware of their own motives or risk profile. 
    The oil change example highlighted that companies usually trust the people we call "workers," and their supervisors, to maintain vehicles according to schedule. My point was the apparent paradox, that companies do not similarly trust decision-making managers to do the right thing. The cynical definition of a professional is someone who will always do the predictable thing. Yet the very fact of giving professional managers decision-making authority opens the door to them doing unpredictable things, like not following up a real option in the intended manner. 

    Yes, the capital vs. labor confrontation that has characterized American history continues to plague us. But line workers are not the issue in this blog entry. Rather it is the divergence of motive between owners (capitalists, if you will), and their agent-managers and agent-executives.