Nobody would dispute the claim that in business, as in all walks of life, knowledge boosts performance. A more contentious issue is how any large enterprise can bring together all the knowledge that its employees possess into a meaningful whole.
The most common approach is to capture that knowledge in the form of data through IT systems and then, as far as is possible, to compile the data into large repositories. Having been robbed of the context in which it was created, that data is then subjected to translation and analysis so that the business might figure out what it all means.
There are some circumstances in which this works perfectly, but it is not the only way.
Prediction markets
Until 2007, Mat Fogarty worked as a financial forecaster for corporations, starting with Unilever in the UK before emigrating to the US to join video game maker Electronic Arts. It was his job to predict how much money given projects were going to cost and earn – predictions that the chief financial officer would use to allocate budgets.
Typically, these forecasts were pretty inaccurate, but Fogarty soon discovered that all the information required to make more accurate predictions was available within the organisation, just not in the finance department’s models. Instead, he says, it was ‘with the people’.
‘When I worked for EA, I used to play football with my colleagues,’ he explains. ‘Often, the information I was getting from the engineers, game testers and marketing guys on the football pitch was very different from the information that I was giving to the CFO from the systems.’
This led Fogarty to explore the possibility of using ‘prediction markets’ in corporate planning. A prediction market uses the model of a financial trading market – whereby participants buy and sell ‘assets’ – but instead of shares in a company they trade bets on the likelihood of a given outcome. Famous examples include the Iowa Electronic Markets, in which participants bet real money on outcomes including the US presidential election.
Proponents argue that prediction markets are a highly accurate way of making forecasts because, rather than relying on the influence of a few so-called experts, they aggregate what the ‘crowd’ knows. (Like the ‘rationality’ of financial markets, however, this is much debated.)
So convinced was Fogarty of the potential of prediction markets in a corporate environment that in 2007 he quit his job and founded CrowdCast, a company that now sells prediction market software as a web service. Among his first hires was Dr Leslie Fine, a scientist who had studied prediction markets both academically and at Hewlett-Packard’s research labs.
Fine explains that a number of modifications to the traditional prediction market model were necessary to make it usable in everyday business. For example, rather than following a strict market metaphor – in which participants give their trading positions on various assets (e.g. ‘At 80 per cent I short and at 90 per cent I buy’) – CrowdCast wanted a system where participants could make simple probability estimates (e.g. ‘I think there’s between an 80 per cent and 90 per cent chance of this happening’).
The resulting system allows employees to place bets (using imaginary money) on the probability range of certain outcomes. If they bet $10 that there is, for example, between an 80 per cent and 90 per cent chance of a product shipping on time, they win $100 back if it does indeed ship on time; a broader estimate would win them less. Employees are motivated to participate by small prizes and a leaderboard of their performance.
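The article does not spell out CrowdCast’s actual scoring rule, but the figures quoted are consistent with a payout that scales inversely with the width of the chosen range. The sketch below assumes exactly that simple rule – stake multiplied by 100 divided by the interval width in percentage points – purely for illustration; the function name and the treatment of a losing bet are assumptions, not CrowdCast’s mechanics.

```python
def interval_payout(stake, low, high, outcome_occurred):
    """Illustrative payout for an interval bet on an event.

    stake            -- play-money amount wagered
    low, high        -- predicted probability range, in per cent (e.g. 80, 90)
    outcome_occurred -- True if the event (e.g. on-time shipping) happened

    A narrower interval pays more; this sketch scales the stake by
    100 / interval width, which reproduces the article's example figures.
    """
    if not outcome_occurred:
        return 0.0                      # assumed: the bet is simply lost
    width = high - low                  # interval width in percentage points
    return stake * (100.0 / width)      # $10 on an 80-90 per cent range pays $100


# Worked example matching the article: $10 on an 80-90 per cent chance
print(interval_payout(10, 80, 90, True))   # 100.0
print(interval_payout(10, 70, 95, True))   # 40.0 -- a broader estimate wins less
```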
In March 2010, CrowdCast launched a dashboard that allows managers to monitor the aggregated employee sentiment on an ongoing basis; they can set up alerts to let them know when confidence that a goal will be achieved dips below a certain point. It also allows them to slice the aggregate predictions to see which departments are positive and which are negative.
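As an illustration of what such a dashboard might compute, the sketch below naively averages employees’ probability estimates per goal and per department, and flags any goal whose overall confidence falls below a threshold. The simple averaging, the 0.7 threshold and the data layout are all assumptions made for illustration; the article does not describe CrowdCast’s actual aggregation.

```python
from statistics import mean

def confidence_alerts(predictions, threshold=0.7):
    """Aggregate probability estimates per goal and per department.

    predictions -- list of (goal, department, probability) tuples
    Returns overall confidence per goal, a per-department breakdown,
    and the list of goals whose confidence dips below the threshold.
    """
    by_goal, by_dept = {}, {}
    for goal, dept, p in predictions:
        by_goal.setdefault(goal, []).append(p)
        by_dept.setdefault((goal, dept), []).append(p)

    overall = {g: mean(ps) for g, ps in by_goal.items()}
    breakdown = {k: mean(ps) for k, ps in by_dept.items()}
    alerts = [g for g, c in overall.items() if c < threshold]
    return overall, breakdown, alerts


overall, breakdown, alerts = confidence_alerts([
    ('Q3 ship date', 'engineering', 0.55),
    ('Q3 ship date', 'marketing', 0.85),
    ('Q3 ship date', 'QA', 0.60),
])
print(overall, alerts)   # average confidence ~0.67 -> 'Q3 ship date' is flagged
```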
Group decisions
Fogarty claims that with this added functionality, CrowdCast is moving away from simple prediction markets towards what he describes as ‘social business intelligence’.
‘Normal BI is mostly around aggregating information from systems,’ he explains. ‘We see social BI as aggregating information from humans within the organisation.’
‘Social business intelligence’ is just one of the terms being used to describe an emerging category of software that aims to make business management more democratic, more transparent and – it is hoped – more effective as a result. Another, as coined by influential analyst company Gartner, is ‘collaborative decision-making’.
This term was minted in response to a number of enquiries that Gartner analyst Rita Salaam received in the wake of the credit crunch. ‘A couple of those enquiries were from very large banks that had been, I would say, blindsided by the financial crisis,’ she recalls.
They wanted to know why certain decisions had been made in the run-up to and immediately after the crisis, but found they had no insight into the decision-making process. The problem is that, for many organisations, this process involves many different stakeholders using information from many different systems and serving many different agendas – there is therefore no single record of the lifecycle of a decision.
And because organisations cannot trace back decisions that were made in the past, Salaam explains, they cannot learn from those mistakes or successes. ‘The important thing is they aren’t able to mine past decisions in order to develop best practices,’ she says. Salaam believes that for this retrospective view to be achieved, businesses need to adopt a new operating model that allows the decision-making process to be tracked. ‘We need a new way of working that allows a new level of transparency into decisions.’
She sees an opportunity to achieve that operating model in the various social technologies gradually creeping into the enterprise, but adds that today there are many different tools embodying a variety of approaches, none of which provides a complete platform for transparent decision-making.
‘There are companies taking a business intelligence approach to this, while others are taking a business process management approach, and others still a collaboration approach.’
A few software suppliers, however, focus directly on the problem of how groups of people take decisions. One example of this, says Salaam, is Decision Lens, a company founded in 2003 by the sons of mathematics professor Thomas Saaty.
Changed priorities
In Cold War days, Saaty was famous for developing what he called the ‘analytic hierarchy process’ to help the US State Department prioritise objectives during talks with its Soviet counterparts. The model is based on an insight into the workings of the brain – human beings are far better at making pairwise comparisons (‘X weighs more than Y’) than individual assessments (‘X weighs this amount’). It breaks any prioritisation task into a number of pairwise comparisons, which can be undertaken by any number of people.
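As a rough illustration of the pairwise-comparison idea, the sketch below derives priority weights from a matrix of pairwise judgements using the geometric-mean approximation of Saaty’s principal-eigenvector calculation. The example numbers are invented, and this is a simplified reading of the method rather than Decision Lens’s implementation.

```python
import math

def priorities_from_pairwise(matrix):
    """Derive priority weights from a pairwise comparison matrix.

    matrix[i][j] holds how many times more important item i is judged to be
    than item j (so matrix[j][i] should be its reciprocal). Uses the
    geometric-mean approximation of the principal-eigenvector method.
    """
    n = len(matrix)
    geo_means = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo_means)
    return [g / total for g in geo_means]


# Three objectives compared pairwise by one stakeholder (illustrative values):
# objective A is judged 3x as important as B and 5x as important as C, etc.
comparisons = [
    [1.0,   3.0, 5.0],
    [1/3.0, 1.0, 2.0],
    [1/5.0, 0.5, 1.0],
]
print(priorities_from_pairwise(comparisons))  # roughly [0.65, 0.23, 0.12]
```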
Established by John and Daniel Saaty, Decision Lens sells collaborative decision-making software based on the model their father developed. Today, it is employed by organisations including the US Department of Defense and NASA.
‘The idea is to bring together a broad stakeholder group and have them structure a decision,’ explains CEO John Saaty. The software invites each member of a group, which could be the board of directors or a marketing operations team, to complete the pairwise comparisons individually. This, says Saaty, helps to counteract some of the psychological factors that are often at play in decision-making meetings.
‘You will often have A-type personalities that drive these meetings, and there will be other people in there with real expertise but because they are not the types that want to offer the information in the meeting they’re just quiet about it,’ he explains. ‘This software allows everyone to understand each other’s priorities. You don’t have to agree, but you have to be explicit about what your judgement is.’
The second phase of the process is to assess alternative courses of action according to the aggregate priorities of the group. ‘For some of our customers, this is the first time they are actually able to tie the evaluation of alternatives directly back to their strategic imperatives.’
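A minimal sketch of that second phase might simply score each alternative as a weighted sum of its ratings against the group’s aggregated priorities. The alternatives, ratings and weights below are illustrative only, not drawn from the product.

```python
def score_alternatives(priorities, ratings):
    """Score each alternative as a weighted sum of its ratings.

    priorities -- group's aggregated weights per objective (summing to 1)
    ratings    -- {alternative: [rating against each objective, 0-1 scale]}
    Returns alternatives sorted best-first.
    """
    scores = {
        name: sum(w * r for w, r in zip(priorities, rs))
        for name, rs in ratings.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


# Hypothetical alternatives rated against the three objectives weighted above
ranked = score_alternatives(
    [0.65, 0.23, 0.12],
    {'Project X': [0.4, 0.9, 0.7], 'Project Y': [0.8, 0.3, 0.5]},
)
print(ranked)  # Project Y edges out Project X on these illustrative numbers
```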
Saaty argues that the popularity of his product – the company grew 70 per cent in 2009 and was used to allocate around $98 billion of budget, he says – reveals that conventional decision-making practices are broken. A common malaise, he says, is the expectation that information systems and data analysis can somehow direct strategy.
‘We met with [pharmaceuticals giant] AstraZeneca’s head of global portfolio,’ recalls Saaty. ‘He said they had tried to drive decision-making off data. They had a whole project last summer where they were looking into various possible products, and they thought by evaluating all this data and plotting it on a graph that it would soon emerge what the correct direction would be for the products. But what happened was that all the products ended up landing right on top of each other – there was really no differentiation.’
‘His point was that looking for your strategy to emerge out of the data is not really that effective,’ Saaty concludes. He adds that making the priorities of the various stakeholders within the organisation explicit – and providing a mechanism for reconciling those priorities – is more likely to provide useful guidance than statistical analysis of financial records.
Reflected in Saaty’s comments is a broader point. Not only are the minds of an organisation’s employees its most valuable source of information, they also contain invaluable insight, analysis and forecasts. Any tool that helps an organisation build all of this into its operational management and strategic leadership must surely make that organisation stronger.
This article first appeared in Information Age magazine.