Smith College
Northampton, MA 01063

bdurbin(at)smith.edu

413.585.3559 (O)

Research

My ongoing research interests lie at the intersection of information theory, decision-making, and national security bureaucracies. My current projects address these interests within three distinct themes: (1) information and accountability in national security organizations; (2) performance metrics in information-based organizations; and (3) the influence of organizational structure on responsiveness to a changing security environment.

Intelligence Accountability

The first topic grows out of my research on principal-agent problems in intelligence oversight. In my current book project, I discuss how the secrecy inherent in intelligence defies traditional approaches to accountability and oversight. More than any other policy domain, I argue, strategic intelligence creates severe information asymmetries between principals and agents, and lacks the outside issue publics and media scrutiny necessary for “fire alarm” oversight. In future research, I will consider how different governments and organizations have sought to overcome these problems. In addition to the research completed for my book, I have been collecting data for this project during several visits to England, where I have spent weeks in the British archives and conducted more than a dozen interviews with parliamentarians and senior government officials. While I have relied primarily on qualitative methods for understanding accountability, I have also begun exploring ways to represent and evaluate these information dynamics through formal modeling.
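The flavor of this formal modeling can be suggested with a toy calculation. The sketch below is hypothetical and illustrative only, with made-up parameters; it is not a result from the book. It treats detection of agent misbehavior as coming through two channels, direct “police patrol” review and “fire alarm” oversight, where the second channel requires outside visibility that secret intelligence largely eliminates:

```python
def detection_probability(patrol_rate, alarm_sensitivity, public_visibility):
    """Chance that agent misbehavior is detected through either of two channels:
    'police patrol' oversight (direct review by the principal) and 'fire alarm'
    oversight (outside issue publics and the press raising an alarm).
    All parameters are stylized illustrations, not estimates."""
    patrol = patrol_rate                           # direct review alone
    alarm = alarm_sensitivity * public_visibility  # alarms require visibility
    # Detected if caught by at least one independent channel.
    return 1 - (1 - patrol) * (1 - alarm)

# An ordinary policy domain, visible to issue publics and the media:
ordinary = detection_probability(0.2, 0.8, 0.9)   # ≈ 0.78
# Secret intelligence, with near-zero outside visibility:
secret = detection_probability(0.2, 0.8, 0.02)    # ≈ 0.21
```

Even in this crude form, the model captures the argument: when visibility collapses, the fire-alarm channel contributes almost nothing, and accountability rests entirely on costly direct monitoring.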

Performance Metrics in Information Organizations

The second area is closely related to the first but concerns information and accountability at the micro-organizational level. Information-based organizations, in both the public and private sectors, face a difficult task in evaluating the quality of their analysts, particularly those asked to make probabilistic predictions. Consider, for example, an intelligence analyst who forecasts that a terrorist attack is very likely, yet the attack does not come: does this reflect poor analysis, the non-occurrence of a high-probability event, or perhaps even a successful preventive response set in motion by the prediction itself? Three challenges set information-based organizations apart from other types of producers and service providers: (1) monitoring problems (it is difficult to measure how hard or how well someone is thinking); (2) probabilistic analysis (by definition, the accuracy of a single probabilistic judgment cannot be measured ex post, since any relevant eventuality is assigned probability greater than zero); and (3) constant environmental change (the likelihood of a given event can shift quickly, and can even be influenced by the prediction itself, as in the example above).

A preliminary survey has found a broad spectrum of performance metrics in these types of organizations. Hedge funds, for example, tend to assess performance exclusively on outcomes: poor returns mean poor performance, regardless of the quality of the analysis. Many strategic intelligence organizations focus instead on analysts’ behavior, evaluating performance based on tradecraft and proper analytical technique. Investment banks and management consulting firms often fall somewhere in between. Studying how these different metrics shape analyst performance will be valuable for understanding accountability in a host of public organizations, from strategic intelligence and law enforcement to disaster prevention and response.
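One standard tool from the forecasting literature for the second challenge is a proper scoring rule such as the Brier score: it cannot judge any single forecast, but it rewards calibration across many of them. The sketch below is illustrative only, with invented numbers, and is not claimed as a metric used by any of the organizations surveyed:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts (0.0-1.0) and
    binary outcomes (0 or 1). Lower is better: 0.0 is perfect, and 0.25
    is what always guessing 0.5 earns."""
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A single miss (a 0.9 forecast followed by no attack) cannot be judged
# in isolation, but skill shows up across a record of many forecasts:
careful = brier_score([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])  # 0.025
hedger = brier_score([0.5, 0.5, 0.5, 0.5], [1, 1, 0, 0])   # 0.25
```

Note that this is an outcome-based metric in the hedge-fund spirit; scoring rules say nothing about tradecraft, which is precisely why the choice among metrics matters.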

Centralization and National Security Agencies

Finally, I am developing a project on the relative competencies of networked versus centralized security organizations. In intelligence, for example, there are clear benefits to centralization, such as improved coordination and accountability. Yet the drawbacks of a hierarchical system are equally clear: rigid structures limit how quickly and effectively an organization can respond to new threats, particularly when it faces a less centralized adversary. Despite an abundance of rhetoric about the need for U.S. intelligence agencies both to “work together better” and to become more “flexible” or “networked,” little systematic research has been conducted on how best to balance these contradictory goals. I have begun exploring how other types of organizations, particularly those in the U.S. military, have historically addressed these tradeoffs. For example, the flexibility required by ground combat troops can run up against broader battlefield and theater-level strategic goals, as well as the need to coordinate air- and sea-based targeting. When the system balances these objectives poorly, people can die. (Attempts to mitigate such problems have given rise to a U.S. Navy adage that “every procedure is written in blood.”) I plan to continue tracing the evolution of military procedures as they relate to centralization, with a particular focus on how the adoption of new technologies has shaped these practices.
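The centralization tradeoff can be made concrete with a toy graph model. The sketch below is purely illustrative: the “star” and “ring” structures and all numbers are hypothetical, not drawn from any actual organization. A hub-and-spoke organization keeps coordination paths short, but loses everything if the hub is lost; a networked structure with the same number of units survives the loss of any single node:

```python
from collections import deque

def hops_from(graph, start):
    """Breadth-first search: hop count from start to every reachable node."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

def max_hops(graph):
    """Longest shortest path: a crude proxy for coordination cost."""
    return max(max(hops_from(graph, n).values()) for n in graph)

def connected_without(graph, removed):
    """Does the organization stay connected after losing one node?"""
    remaining = {n: [m for m in nbrs if m != removed]
                 for n, nbrs in graph.items() if n != removed}
    start = next(iter(remaining))
    return len(hops_from(remaining, start)) == len(remaining)

# Centralized: four units reporting to a single hub (node 0).
star = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
# Networked: a ring of five units with one cross-cutting tie (0-2).
ring = {0: [1, 4, 2], 1: [0, 2], 2: [1, 3, 0], 3: [2, 4], 4: [3, 0]}

print(max_hops(star), connected_without(star, 0))  # 2 False
print(max_hops(ring), connected_without(ring, 0))  # 2 True
```

In this stylized case the networked form matches the hub's coordination distance while eliminating the single point of failure; the research question is how far that intuition survives contact with real security organizations, where ties are costly and accountability flows along them.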