A Brief History Of Information System Design

    Susan Gasson, College of Computing & Informatics, Drexel University

Design is too often viewed as a single stage in IS development, in which we give form to an IT architecture. I present a history of IS design approaches over time, explaining how design goals become subverted by technocentric approaches and how to retrieve them for human-centered design, providing an "information architecture" that is meaningful to the system's users.

    Please cite this paper as:
Gasson, S. (2024) 'A Brief History Of Information System Design.' Working Paper. Available from https://www.improvisingdesign.com/bhod/. Last updated 08/17/2023.

    ABSTRACT

Information system design is often viewed as a stage in the system development life-cycle, concerned with the detailed "laying out" of system software – more akin to technical drawing than to design in an architectural sense. The intent of this paper is to retrieve the notion of design and to view design as an holistic activity, in which form is conceptualized for a whole set of information system elements that together make up a meaningful set of features, supported by an information architecture for the system's users. Every information system design process is unique, because every information system is embedded much more firmly in an organizational context and culture than physical artifacts are. To manage this uniqueness, we need a more complex understanding of what design involves than that communicated by most IS texts.

The paper presents a review of design theories, derived from the IS literature and from other relevant literatures, such as organizational management and social cognition. Friedman and Cornford (1990) identify three phases of computer system development: (i) dominated by hardware constraints, (ii) dominated by software constraints, and (iii) dominated by user relations constraints. The evolution of conceptualizations of design is presented from these perspectives, then a fourth evolutionary stage is discussed: IS design dominated by business process constraints. This fourth perspective moves the design of information systems on from the limited perspectives offered by viewing an information system as synonymous with a computer system, and resolves many of the theoretical conceptualization issues implicit in recent IS design writings.

At the conclusion, it is argued that current models of design focus on design closure and so de-legitimize the essential activities of investigating, negotiating and formulating requirements for an effective design. IS design faces five "problems" that need to be resolved: employing an effective model of design with which to manage the labor process, defining the role of the information system, bounding the organizational locus of the system problem, understanding the cultural, social and business context of which the IS will be a part, and managing collaboration between cross-functional stakeholders. A dual-cycle model of design is proposed: one that focuses on "opening up" the design problem as much as on design closure. An understanding of this dialectic has significant implications for both the research and practice of design; these are presented at the end of the paper.

    KEYWORDS: Information System Design, System Development Methods, Co-design of Business and IT Systems

    1. THE DESIGN OF ORGANIZATIONAL INFORMATION SYSTEMS

    1.1 What is Design?

    Business organizations are increasingly moving their focus ‘upstream’ in the traditional, waterfall model of the system development life-cycle. Recent trends in information system implementation – standardization around a small number of hardware and software environments, the adoption of internet communication infrastructures, object-oriented and component-based software design, outsourcing and the use of customized software packages – have standardized and simplified the design and implementation of technical systems. Organizational information systems are no longer viewed as technical systems, but as organizational systems of human activity – business processes, information analysis and dissemination – that are supported by technology. Firms can therefore focus more on the strategic and organizational aspects of information systems, implementing cross-functional information systems that affect stakeholders from many different organizational and knowledge domains.

Yet existing approaches to information system design derive from a time when technical complexity was the core problem: they are intended to bound and reduce the organizational 'problem' so that a technical system of hardware and software may be constructed. Existing design approaches treat complex organizational information systems as synonymous with information technology. They are based on models of individual, rather than group, problem-solving and cognition. We have few methods to enable stakeholders from multiple knowledge domains to participate in information system design. What methods exist are ad hoc and not based upon any coherent theoretical understanding of how collaborative design works. We have no models upon which to base future management approaches and methods for information system design 'upstream' of the traditional, technical system development life-cycle.

    Winograd & Flores (1986) define design as “the interface between understanding and creation”. Unsurprisingly, given the difficulty of studying such a complex process, there are few models of design which are based upon empirical work, rather than theoretical conjecture or controlled experiments. Most models are also rooted in an individual perspective of design, rather than those group processes which occur in most IS design contexts.

    As theories of design activity have evolved, so the definition of the term “design” itself has changed. In the information systems literature, design was initially viewed as the decompositional processes required to convert a structured IT system definition into a physical system of hardware and software. In the introduction to Winograd (1996), the author states:

"Design is also an ambiguous word. Among its many meanings, there runs a common thread, linking the intent and activities of a designer to the results that are produced when a designed object is experienced in practice. Although there is a huge diversity among the design disciplines, we can find common concerns and principles that are applicable to the design of any object, whether it is a poster, a household appliance, or a housing development." (Winograd, 1996, page v).

Given these commonalities, we have to question why the design of an organizational information system is so much more problematic than the design of a physical artifact, such as a house. In the field of architecture, design has well-established principles and procedures, with established computer-based tools to support them. Yet information system design is often viewed as a single stage in a "structured" system development life-cycle, concerned with the detailed "laying out" of system software – more akin to technical drawing than to design in an architectural sense. The intent of this paper is to retrieve the notion of design and to view design as an holistic activity, in which form is given to a whole set of information system elements, some of which are physical and some abstract in nature. For the purposes of this discussion, design is viewed as the process of conceptualizing, abstracting and implementing an organizational information system, rather than as a specific stage in the information system development life-cycle. Design is not viewed as giving form to system software alone: the abstract elements of a design may lead to such deliverables as a particular approach to the management of organizational change, the physical information system's suitability for a particular group of users, or its ability to provide a set of flexible organizational outcomes for a range of different stakeholder groups. Every information system design process is unique, because every information system is embedded much more firmly in an organizational context and culture than physical artifacts are. Abstraction and generalization are therefore much more complex than that required for a universal artifact that can be employed in many different contexts.

    This paper draws on several literatures to derive an understanding of what design “in the round” means. Abstractions of design from the literatures on organizational theory, architectural and engineering design, human-computer interaction, computer-supported cooperative work, management information systems and social cognition are synthesized here, to present an holistic conceptualization of design as organizational problem-solving, individual and group activity, and management-oriented process models.

    Friedman and Cornford (1990) identify three historical phases of computer system development and a putative fourth phase:
    1) System development dominated by hardware constraints (mainly cost and reliability of hardware).
    2) System development dominated by software constraints: developer productivity, expertise and team project management issues (dominated by meeting deadlines and project budgets).
    3) System development dominated by user relations constraints: inadequate perception of user needs by developers and lack of prioritization of user needs.
    4) (Predicted) Organization environment constraints.

This paper reexamines these phases from the perspective of their impact on design methods and paradigms. The fourth phase is redefined in the light of technical developments that have provided ubiquitous computing platforms and simplified the development of both intra- and inter-organizational information systems. Instead, it is argued that the fourth-phase constraints concern business and IT system alignment. This is signified by the poor involvement of business managers and other non-IS people, leading to a poor understanding of boundary-spanning business strategy and application domain issues.

    1.2 Employing IS Design Methods

    A brief summary of how methods relate to the evolving focus of IS design is provided in Table 1.

    Table 1. Evolving Focus of IS Design Methods

Code-and-fix (pre-1970s)
Description: Write code, test whether it works, and fix problems in the code. Requirements and design are worked out as you code. Many web development projects use code-and-fix – inappropriately.
New issues introduced: Lack of standardization in development methods. The code quickly becomes messy; the product often does not meet the requirements, and it is expensive to fix the code once established.

Waterfall model, a.k.a. structured development (1970s onward)
Description: System development proceeds through a sequence of stages: requirements analysis, design, build (code & test), deliver, maintain. System requirements are defined, in collaboration with the client and users, at the start of the project. There is little feedback between stages – requirements, design, evaluation criteria, etc. are frozen and the system is built and evaluated against an unchangeable specification.
Issues addressed: The lack of any systematic approach to understanding requirements, or to ensuring you have a complete set.
New issues introduced: The requirements need to be understood in detail before system development starts. Assumes that the client and users understand their requirements for support and are capable of articulating these perfectly. Users normally only get involved at the start and end of development, so poor user support is not spotted until it is too late to do anything about it. High amounts of documentation are involved; this slows development, leads to out-of-date documentation (things change), and produces systems that deliver the agreed functions but are useless for their intended purpose.

Incremental development, a.k.a. phased rollout (mid-1970s onward)
Description: A modified waterfall model that incorporates a predetermined number of system design-and-build iterations, to develop incremental sets of functional requirements. System requirements are defined at the beginning, but may be modified at specific points (the start of project phases), after user evaluation of the previous phase's deliverables.
Issues addressed: Fixes the "frozen requirements" problem, as it introduces iterations during which both developers and users get to evaluate each version of the system, so they understand the implications of requirements before it is too late to change things. Has the benefit that system requirements can be changed as both the project team and users understand these better (within the overall project scope).
New issues introduced: Does not always involve users meaningfully – phases can be used by developers to explore requirements. Changes to requirements need to be evaluated and controlled, or "scope creep" sets in (the project cost and duration keep increasing until they become untenable). This approach needs a strong change control process, in which the Project Manager negotiates additional budget and duration with the client if additional functionality is added, or trades one area of system scope for another.

Evolutionary, user-centered prototyping (mid-1980s onward)
Description: Involves cycles of development in which each increment is built to satisfy a specific set of user requirements, evaluated for fit with user needs, then modified or its scope evolved in the next increment.
Issues addressed: Introduces a specific user-deliverable focus. Also introduces the idea of rapid turnaround (shorter development cycles, based on incremental subsets of the system scope).
New issues introduced: Prototyping approaches almost always start at the wrong point in the cycle (design, rather than requirements analysis, because the tendency is for developers to build the parts they understand), so requirements follow design and evaluation of the system, rather than driving system design. Early design decisions (e.g. the choice of an implementation platform, the size of storage buffers and databases, etc.) often constrain how later user requirements can be implemented. Can be costly, as prototypes need to be thrown away and the design started over.

Agile development (late 1990s onward)
Description: User requirements drive cycles of incremental development. Each increment is built to satisfy a specific set of user feature-requests (a.k.a. use-cases). Each increment is evaluated for fit with operational user needs, then scope and requirements are reviewed before the next increment.
Issues addressed: Developers met to swap notes on workarounds for increasingly formalized methods that took too much time and did not satisfy user needs. They defined a minimalist approach around: (i) rapid development cycles; (ii) regular user evaluation of system prototypes; (iii) client-driven prioritization of development goals.
New issues introduced: This approach can be perceived as a bottomless money pit (especially by accountants). Because it blurs the line between system development and maintenance, there is no point at which the system can be considered stable (unless the number of cycles or the functionality is constrained by pre-defining project duration or scope – which defeats the point). Because of this, agile development is not a good choice for enterprise systems or systems to be integrated with other systems – although it may be used to explore user requirements for subsystems that are later enhanced to be cross-functional and scalable using an RUP approach.

eXtreme development (late 1990s onward)
Description: eXtreme Programming was developed in the late 1990s and predates the Agile Manifesto. Now known as eXtreme development, it provides a method for adaptive systems development that combines pair programming and user stories with team coordination (the planning game), collective ownership of code, and test-driven development. Many of these ideas have been absorbed into other agile methods.
Issues addressed: User requirements drive rapid, focused cycles of development, aimed at producing a system prototype. Each increment is built to satisfy a specific set of user stories (scenarios of work-task sequences) that are elicited during development in face-to-face exploration of how people work. This approach is more user-facing and participatory than most agile approaches.
New issues introduced: This is very narrowly focused and seldom produces a system that can be used operationally, unless the number and variety of users is very small (e.g. a subset of the Marketing group may want to set up a customer database for high-value promotions). The system produced is designed around functionality, not technical stability or compatibility, so it may need to be redesigned once the user requirements have been determined. As with the prototyping approach from which it developed, system prototypes may be thrown away, so a lot of time may be wasted.

Scaled Agile (2000s onwards)
Description: An adaptive version of agile development that focuses on corporate systems' scalability, compatibility, and maintainability. Takes a "minimal viable product" approach to design.
Issues addressed: Moves away from the user- or human-centered interests of prior agile approaches, focusing more on optimization of systems, databases, and applications across platforms and business units.
New issues introduced: The "minimal viable product" approach means that user requirements are often not satisfied unless these are key to business strategy. This becomes a firefighting approach, in which developers don't understand what users need and systems are updated only when things go wrong.

* This is why object classes and inheritance are central: telecoms networks are hierarchical in the way that data is distributed between devices. One telecoms device will act as a router, sending instructions to multiple lower-level devices that control different network routes. The object-oriented modeling approach emerged through the work of Rumbaugh (1991), Booch (1991), and Jacobson (1992), who developed O-O principles – object-classes and inheritance, the association of data-structure/object definitions with specific processes/methods, and the relative independence of objects and their methods from other objects, coordinated by messages rather than sequential, monolithic code – for business applications. Use-cases were an afterthought, added when Ivar Jacobson defined his variant of O-O design around subsets of functionality associated with different user Roles, Products, and Tasks. This was formalized by Rational Software in 1997, to improve quality and predictability by adding performance testing, UI design, data engineering, and project management controls. The resulting approach became the Rational Unified Process (RUP).
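As a concrete illustration of the O-O principles listed in this footnote – object classes and inheritance, data bundled with the methods that act on it, and coordination by message-passing – the minimal Python sketch below uses hypothetical class names built around the footnote's example of a router instructing lower-level route-controlling devices. It is not code from any of the cited works.

```python
# Illustrative sketch only: class names are hypothetical, not from the paper.
class NetworkDevice:
    """Base object class: holds its own data and the methods that act on it."""

    def __init__(self, device_id: str):
        self.device_id = device_id

    def receive(self, instruction: str) -> str:
        # Default behaviour; subclasses may override it (inheritance).
        return f"{self.device_id} acknowledges '{instruction}'"


class RouteController(NetworkDevice):
    """Leaf-level device that controls a single network route."""

    def receive(self, instruction: str) -> str:
        return f"route controller {self.device_id} applied '{instruction}'"


class Router(NetworkDevice):
    """Higher-level device that coordinates lower-level devices by messages."""

    def __init__(self, device_id: str, children: list[NetworkDevice]):
        super().__init__(device_id)
        self.children = children

    def receive(self, instruction: str) -> str:
        # The router does not manipulate its children's data directly;
        # it sends each one a message and collects the replies.
        replies = [child.receive(instruction) for child in self.children]
        return f"{self.device_id}: " + "; ".join(replies)


if __name__ == "__main__":
    router = Router("R1", [RouteController("C1"), RouteController("C2")])
    print(router.receive("update routing table"))
```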

    1.3 Relating IS Design Methods To Models of Problem-Solving

Design is most frequently equated with theories of decision-making and problem-solving in organizations. The following sections analyze the development of how we understand problem-solving in design and trace the impact of evolving problem-solving models on design approaches for organizational information systems. This evolution is shown in Figure 1. It is argued in this paper that, as the scope of organizational impact encompassed by information systems increased over time, so did the importance of integrating the contextual and social requirements needed for the system to operate.

    Figure 1: Evolution Of Design Methods Following Evolution Of Problem-Solving Models

The paper is structured as follows. Theories of design, as they relate to organizational information systems, are presented from the evolutionary perspective of the five theories of decision-making. But theories of design do not follow a linear development of thought: they are interrelated and emergent. Relevant design theories are arranged according to the identified research threads, to reflect progressive and sometimes parallel developments in how design is perceived by the organizational and IS literatures. The use of Friedman and Cornford's (1990) "constraints" view of computer-system development permits a reasonable (if post-justified) perspective on why various theories were adopted by the IS community (both academic and practical) at a particular stage in the development of how IS were perceived. Section 2 deals with the evolution of early design theories: the attempt to apply, and modify, rational, "information processing" models to the development of systems limited by hardware constraints. Section 3 discusses the adoption of hierarchical decomposition and "structured" approaches to design, as a way of dealing with software constraints. Section 4 presents the incorporation of theories of social construction, emancipation and "human-centered" design as a way of dealing with user-relations constraints. Section 5 presents the extension of design theory to encompass boundary-spanning information systems and discusses this extension as a response to strategic business coordination constraints. Section 6 summarizes the evolution of design theories, discusses some lacunae in current understandings of how design works and presents a dual-cycle model of design, to resolve some of the implications for design research and practice.

    2. THE EVOLUTION OF INDIVIDUAL DESIGN APPROACHES

    2.1 From Rational Decision-Making To Bounded Rationality

The concept of "rational" decision-making developed from Taylor's (1911) "scientific management" principles and Weber's (1922) "rationalization" of the social world. Both of these theories were concerned with optimization and quantifiable interpretations of natural phenomena, including human behavior. Simon's (1945) book Administrative Behavior formalized rational decision-making into a linear, staged process model of intelligence-gathering, evaluation of alternative courses of action, and choice. Early information system (IS) design theory stems from this perception of human behavior in organizations as rational decision-making. Human beings are seen as objective information processors, who make decisions rationally, by weighing the consequences of adopting each alternative course of action. Each stage uses the outputs of the previous stage (hence the waterfall model of Royce, 1970). The information processing model of problem-solving (shown in Figure 2) assumes that all information pertaining to design requirements is available to the designer and that such information can be easily assimilated (Mayer, 1989).

    Figure 2. The Information Processing Model of Design as Problem-Solving (Mayer, 1989)

In the information processing model, design involves moving from a statement of the problem in the world to an internal encoding of the problem in memory, by mentally encoding the given state, goal states, and legal operators for a problem – i.e. by defining the problem mathematically. Solution involves filling in the gap between the given and goal states, devising and executing a plan for operating on the representation of the problem – i.e. by making a rational choice between alternative courses of action.

Two major advances in design theory modified, but did not replace, the notion of rational design. The first was that Alexander (1964) proposed that there is a structural correspondence between patterns embedded within a problem and the form of a designed solution to solve the problem. Alexander proposed the process of hierarchical decomposition – breaking down the overall design problem into a series of smaller problems (patterns) which can be solved independently from each other – as a way to accomplish complex problem-solving where all variables could not be grasped at once. The structure of an appropriate solution may then be determined mathematically, by analyzing interactions between the variables associated with the design problem (Alexander, 1964). As Lawson (1990) notes, this presupposes that the designer is capable of defining all the solution requirements in advance of designing the solution, that all requirements assume equal importance for the solution, and that all requirements and the interactions between them affect the form of the proposed solution equally.

But there is a deeper consideration. Do external "patterns" exist, or do we simply impose them subjectively on external phenomena? Alexander himself (1966) criticizes the human tendency to artificially impose patterns on external elements that may constitute a design (in his example, he discusses architectural considerations affecting town planning). Yet there is a contradiction in Alexander's position: even his recent work assumes that patterns are somehow inherent in external "entities" and therefore that his method of pattern-matching may be employed to define objects for object-oriented system design (Alexander, 1999).
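The information-processing account of design as problem-space search can be made concrete with a toy example. The sketch below is a minimal illustration of my own (the problem and names are not from the paper): it encodes a given state, a goal test, and legal operators, then "fills the gap" between given and goal states by breadth-first search for a plan.

```python
# Illustrative sketch only: a toy instance of the given-state / goal-state /
# legal-operators formulation described above, solved by breadth-first search.
from collections import deque

def solve(given_state, goal_test, operators):
    """Return a plan (list of operator names) that transforms the given state
    into a state satisfying the goal test, or None if no plan exists."""
    frontier = deque([(given_state, [])])
    visited = {given_state}
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan
        for name, apply_op in operators:
            next_state = apply_op(state)
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, plan + [name]))
    return None  # the gap between given and goal states cannot be bridged

# Toy problem: transform the number 1 into 10 using two legal operators.
operators = [
    ("double", lambda n: n * 2),
    ("add one", lambda n: n + 1),
]
print(solve(1, lambda n: n == 10, operators))
# -> ['double', 'double', 'add one', 'double']  (1 -> 2 -> 4 -> 5 -> 10)
```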

The second advance was that Simon (1960) introduced the principle of bounded rationality. Human beings have cognitive limitations that constrain the amount of information they can absorb, and they have access to only incomplete information about alternative courses of action. These limitations lead to high levels of uncertainty, to which humans respond by developing a simplified model of the real situation: they reduce, constrain and bound the problem until it becomes sufficiently well-defined to be resolved. Then they evaluate alternative solutions sequentially until an alternative is discovered which satisfies an implicit set of criteria for a satisfactory solution. The solution reached by this process of bounded rationality is not optimal but satisficing, in that it satisfies a minimal, rather than optimal, set of solution criteria.

This theory only appeared to apply to a subset of relatively well-defined design problems. Simon (1973, 1981) distinguished between well-structured and ill-structured problems. Well-structured problems may be resolved through the application of hierarchical decomposition techniques. But ill-structured problems (such as the design of a computer system) need to be structured before they can be analyzed. Individuals structure such problems by decomposing them into sub-problems: these are synthesized unconsciously so that the original, ill-structured problem "soon converts itself through evocation from memory into a well-structured problem" (Simon, 1973). The significance of this is worth noting: the process of problem structuring requires additional information, retrieved from long-term memory. This demonstrates a gradual realization that inductive reasoning (generalization from evidence) is significant in design. The "rational" model assumes deductive reasoning (logical inference about particulars that follows from general or universal premises). Even Alexander (1964) did not view the design process as entirely rational. In fact, he observes:

    “Enormous resistance to the idea of systematic processes of design is coming from people who recognize correctly the importance of intuition, but then make a fetish of it which excludes the possibility of asking reasonable questions.” (Alexander, 1964, page 9).

By the time of Simon's (1973) work, what Alexander refers to as "intuition" had become the application of inductive reasoning. However, a realization of the significance of inductive reasoning appears to lead to the notion that design is wholly "creative" in nature and therefore uncontrollable. Inductive reasoning involves conclusions drawn from particular cases in the individual's experience (the inference from particular to general), which is the antithesis of deductive reasoning, such as that involved in hierarchical decomposition. Thus, we come to the period of IS design dominated by pressures to manage the labor process.

    2.2 From Structured Decomposition To Opportunistic Design

    2.2.1 Design As Hierarchical Decomposition

The transition between constraint-phases was driven by the need to collaborate in the production of systems software. While IT systems remained relatively simple, a single designer could execute them alone. Once IT systems were used to support multiple organizational activities, it became necessary to employ teams of software designers. The evolution of design methods was driven by the need for collaboration and communication between individuals, so design approaches needed to develop a "common language". This was provided by the concept of structured design, based on Alexander's (1960) hierarchical decomposition. Lawson (1990) presents a typical hierarchical decomposition model from the architecture field (Alexander's original work was in architecture), shown in Figure 3.

    Figure 3. Design as Hierarchical Decomposition (Lawson, 1990; interpreting Alexander, 1960)

    Although the application of hierarchical decomposition appears first in the architectural design literature, this model was soon applied to IT system design. No single author can be credited with the development of the structured systems development life-cycle model that now underlies most IT systems development (the “waterfall” model), but the most influential presentation of this model is in Royce (1970). The model is so attractive because it seems to prescribe a way to control the labor process. By breaking the design process into stages (which are reproduced at multiple levels of decomposition), managers are enabled to (i) standardize work processes and (ii) divide design work between different people. This type of fragmentation in design-work distances people from the object of their work (leading to lower morale and productivity) and leads to uninformed decisions about design alternatives and form (Corbett et al., 1991).

The model is linear – while still popular in the IS field because of its focus on clearly-defined delivery milestones, this type of model has been rejected by many areas of creative design, such as architecture, as being unrepresentative of 'real-world' design processes (Lawson, 1990). McCracken and Jackson (1982) voiced the first dissent with structured, hierarchical decomposition in the IS literature, when they argued that this approach did not support the actual processes of design. They observed that IS professionals circumvented the method, then post-rationalized their designs by producing structured design documentation that made it look as if they had followed the method. But they concluded that the structured approach should be used anyway, because of the benefits for controlled and standardized process management:

"System requirements cannot ever be stated fully in advance, not even in principle, because the user doesn't know them in advance … system development methodology must take into account that the user, and his or her needs and environment, change during the process." (McCracken & Jackson 1982, p. 31).

This paper argues for the emergence of system requirements through the process of design. But it is not until Boehm (1988) that structured approaches to design are seriously considered harmful to information system design, because they ignore the human-activity and task requirements of the information system:

"Document-driven standards have pushed many projects to write elaborate specifications of poorly understood user interfaces and decision support functions, followed by the design and development of large quantities of unusable code." (Boehm 1988, p. 63).

If we assess the empirical literature, studies of information system design tend to embody the assumptions of their contemporary theoretical literature. Earlier studies (e.g. Jeffries et al., 1981; Vitalari & Dickson, 1983) argue that failure is due to a lack of methodological consistency in applying structured decomposition. Later studies, with a wider scope of IS design in its organizational context (e.g. Jenkins et al., 1984; Curtis et al., 1988; Hornby et al., 1992; Davidson, 1993) argue that methodologies do not represent a 'theory-in-use', but a 'theory-of-action' (Argyris and Schön, 1978): they represent a rule-based interpretation of what should be done, rather than what people actually do. As disciplinary knowledge develops over time, the role of inductive reasoning increases in importance, and there is a growing appreciation of the role that tacit knowledge plays in design. The notion of "pattern-matching" evolves from Alexander's (1964) early, positivist concept of deductive pattern-matching to an inductive concept of convergence, involving the progressive fit of partial problem-definitions to partial elements of known solutions to such problem-patterns (Turner, 1987). Alexander himself critiqued his early notion of design as deductive variance-reduction; his later work (e.g. Alexander, 1999) demonstrates a rich appreciation of design as inductive pattern-matching. In this sense, an individual design process as problem-solution convergence becomes very similar to the notion of design emergence discussed below.

    2.2.2 Design As Experiential Learning

As information systems designers became more ambitious in the scope of organizational support that they attempted, information systems became more complex and the focus of design shifted to solving organizational and informational problems, rather than data processing. An evolution in thinking about organizational problems is demonstrated by Rittel (1972; Rittel and Webber, 1973) and Ackoff (1974). Ackoff (1974) described organizational problems as "messes", arguing that organizational problem selection and formulation are highly subjective: "Successful problem solving requires finding the right solution to the right problem. We fail more often because we solve the wrong problem than because we get the wrong solution to the right problem." (Ackoff, 1974, page 8). Horst Rittel suggested that organizational problems are "wicked" problems (Rittel 1972; Rittel and Webber, 1973). A wicked problem has the following characteristics:

    a) it is unique
    b) it has no definitive formulation or boundary
    c) there are no tests of solution correctness, as there are only ‘better’ or ‘worse’ (as distinct from right or wrong) solutions
    d) there are many, often incompatible potential solutions
e) the problem is interrelated with many other problems: it can be seen as a symptom of another problem, and its solution will generate further problems.

Wicked problems (and messes) differ from Simon's (1973) ill-structured problems in one important respect. Ill-structured problems may be structured by the application of suitable decompositional analysis techniques: they may be analyzed (if not rationally, then at least in a way that may be justified on rational grounds). But wicked problems cannot even be formulated for analysis, because of their complexity and interrelatedness (Rittel & Webber, 1973). Rittel (1972) argued that such problems cannot be defined objectively, but are framed (selected, artificially bounded, and defined subjectively and implicitly). Even once a wicked problem has been subjectively defined, the designer has no objective criteria for judging if it has been solved (in computing terms, there is no 'stopping rule'). Rittel advocated 'second-generation design methods' to replace the rational, decompositional model of design. These methods should include "designing as an argumentative process", which Rittel saw as "a counterplay of raising issues and dealing with them, which in turn raises new issues and so on".

The realization that complex system design required experiential learning (Lewin, 1951) coincided with a demand-driven approach to IS design. As information systems expanded their scope, so information system users exerted their power. The rise of evolutionary prototyping was driven by their demands for more usable systems, but also by the fit with experiential learning. But it is a pragmatic fit, rather than an explanatory fit – the prototyping approach supports the need for experiential learning, but it does not explain the processes or behaviors of those people engaged in design. A convincing explanation is provided by Turner's (1987) suggestion that design problems and solutions converge together. Information system design can be conceptualized as the progressive 'fitting' of the framework of system requirements that represent the problem with known solutions, based upon the designer's previous experience of problems of a particular type (Turner, 1987). Turner observed various strategies employed by computer science and other students when resolving a semi-structured design task and concluded that goal definitions evolve with the design. Turner (1987) observed that, where designers' own experience failed to provide a solution, they widened the search space to call on the experience of colleagues. Turner argued that "requirements and solutions migrate together towards convergence" and that the process of designing information systems is subjective as well as emergent:

"Design appears to be more ad hoc and intuitive than the literature would lead us to believe, solutions and problems are interrelated and the generation of solutions is an integral part of problem definition. Problems do not have only one solution; there may be many. Consequently, design completeness and closure cannot be well-defined. There are two categories of design factors: subjective and objective. Objective factors follow from the subjective concepts on which designers model the system. The difficulty in the past is that we have not acknowledged, explicitly, the presence of subjective factors, with the result that, in many cases, objective factors appear to be arbitrary." (Turner, 1987).

We are then faced with the problem of how designers determine the "subjective concepts" on which they model their systems. In an empirical study of architects, Darke (1978) discovered a tendency to structure design problems by exploring aspects of possible solutions, and showed how designers tended to latch onto a relatively simple idea very early in the design process (for example, "we assumed a terrace would be the best way of doing it"). This idea, or 'primary generator', was used to narrow down the range of possible solutions; the designer was able to rapidly construct and analyze a mental archetype of the building scheme, which was then used as the basis for further requirements search. Darke's (1978) model of the design process is shown in Figure 4.

    Figure 4. The Role of the Primary Generator in Design (Darke, 1978)

Darke's (1978) architectural design model finds a parallel in the IS literature, in a protocol analysis study of information system design dialogues between designer and user (Malhotra et al., 1980). That study highlighted the core role played by cognitive breakdowns (Winograd and Flores, 1982, after Heidegger, 1960) in making the implicit become explicit. It concluded that the internal (mental) models held by designers often relied on assumed and implied, rather than explicit, requirements. These assumptions only surfaced when an implicitly-held requirement conflicted with an explicit user requirement, in dialog with system users. Designers often examined partially proposed design elements to test for violation of an unstated goal, and attempted to fit alternative solutions to subsets of the requirements, based on prior experience. Design goals thus evolve with the learning that accrues from the process of design. Turner (1987) concluded that, in practice, only some ambiguities of design requirements and goals will be resolved, and the central issue becomes one of discrimination between the significant and the insignificant. Strategies for such discrimination have been linked with "opportunism" in studies of software design (Guindon, 1990a, 1990b; Khushalani et al., 1994).

    2.2.3 Opportunistic Design

    Over time, it became clear in field studies of IS designers in context that designers did not employ a strictly hierarchical decomposition (top-down, breadth-first) approach. Given the widespread use of prescriptive “structured methods,” which were based on a breadth-first, then depth-wise approach to requirements decomposition, this was labeled “opportunistic design” (Guindon, 1990a). Ball & Ormerod (1995) compare opportunistic design with the more structured problem-solving approaches observed in earlier studies of software design. They conclude that much of the structure observed in the early studies of design arose from the more structured nature of the problems set for subjects in experimental situations. These ideas are synthesized in the diagrammatic model given in Figure 5.

    Figure 5. Opportunistic Decomposition Strategies

The design process is both iterative and recursive, redefining parts of the problem as well as partial solutions. In practice, only some ambiguities of design requirements and goals will be resolved, and the central issue becomes one of discrimination between the significant and the insignificant (Turner, 1987). As organizational problem-situations became increasingly complex, it became increasingly difficult for the designer to delineate a suitable boundary and formulation for the design problem they were attempting to resolve (Rittel & Webber, 1973).

As designers became aware that they faced evolving requirements (the scope and objectives of which emerged as the problem became better understood through analysis), it appears that they abandoned an exclusively decompositional approach to design, for one where their understanding of both problem and solution converged through a process of pattern-matching, looking for successive fit between the two (Maher & Poon, 1996). Turner's (1987) convergence findings and Malhotra et al.'s (1980) conclusions about breakdowns would lead one to conclude that this type of problem-solving, especially in design, is emergent and contingent upon interaction with multiple perspectives of the problem-situation.

Curtis et al. (1988) conclude that "developing large software systems must be treated, at least in part, as a learning, communication and negotiation process." Designers have to integrate knowledge from several domains before they can function well. Curtis et al. identify the importance of designers with a high level of application domain knowledge: in their studies, these individuals were regarded by team members as "exceptional designers", who were adept at identifying unstated requirements, constraints, or exception conditions, and who possessed exceptional communication skills. Exceptional designers spent a great deal of their time communicating their vision of the system to other team members, and identified with the performance of their projects to the point where they suffered exceptional personal stress as a result. They dominated the team design process, often in the form of small coalitions, which "co-opted the design process". While these individuals were important for the depth of a design study, teams were important for exploring design decisions in breadth (ibid.).

This supports a perspective found in the literature on design framing: that decompositional approaches to system requirements analysis are of most use when designers are inexperienced, or when the design problem is unusually difficult to define (Jeffries et al., 1981; Turner, 1987). But the literature does not tell us whether the methodology is useful for supporting design activity in these situations, or whether its function is to provide psychological support in conditions of high uncertainty, as suggested by Reynolds & Wastell (1996). Guindon (1990b) argues that information system design involves the integration of multiple knowledge domains: the application domain, software system architecture, computer science, software design methods, etc. Each of these domains represents a problem-space in which a more or less guided search takes place (depending upon which solution paths look most promising and the previous experience of the designer in this domain). The IS development process should therefore encompass the discovery of new knowledge, in particular the discovery of unstated goals and evaluation criteria.

    3. FROM PROBLEM-SOLVING TO PROBLEM-FRAMING

    3.1 Human-Centered Design

The concept of the wicked (or socially constructed, multi-perspective) problem is reflected in the literature on participatory design, although this literature was driven also by an interest in worker emancipation and "human-centered" design. Research evidence indicated that the traditional approach to the development of new technology resulted in technological systems which were associated with a high degree of stress and low motivation among their users (Corbett, 1987; Gill, 1991; Scarbrough & Corbett, 1991; Zuboff, 1988). The human-centered approach to the design of technology arose as a reaction to this evidence. Gill (1991) defines human-centeredness as "a new technological tradition which places human need, skill, creativity and potentiality at the center of the activities of technological systems." Bjorn-Andersen (1988) criticized the narrow definition of human-computer interaction used by ergonomics and systems design research, which takes technology as its starting point, with the words: "it is essential that we see our field of investigation in a broader context. A 'human' is more than eye and finger movements". There is a wide body of literature on the development and application of human-centered technology. Some of the main ideas of this literature are:

    • The human-centered approach rejects the idea of the “one best way” of doing things (Taylor, 1947): that there is one culture or one way in which science and technology may be most effectively applied (Gill, 1991).
    • Technology is shaped by, and shapes in turn, social expectations: the form of technology is derived from the effect of these social expectations upon the design process (MacKenzie and Wajcman, 1985). This social constructivist approach reveals the social interior of technological design: technology no longer stands as an independent variable, but an outcome which is the result of socially-constrained choices made by designers.
    • The human-centered approach is opposed to the traditional, technically-oriented approach, which prioritizes machines and technically-mediated communications over humans and their communicative collaboration (Gill, 1991). While technically-oriented design traditions see humans as a source of error, the human-centered design approach sees humans as a source of error-correction (Rosenbrock, 1981).
    • That human-centered production should concern itself with the joint questions of “What can be produced?” and “What should be produced?” The first is about what is technically feasible, the second about what is socially desirable (Gill, 1991).
    • That objective and subjective knowledge cannot exist independently of each other: while technologists attempt to encode the explicit, rule-based knowledge needed to perform a task, this knowledge is useless without the “corona” of tacit and skill-based knowledge which surrounds the explicit core and through which explicit knowledge is filtered (Rosenbrock, 1988). Cooley (1987) raises the issue that modern technology is designed to separate “planning” tasks from “doing” tasks (for example, in modern Computer-Integrated Manufacturing). This results in deskilled human technology users, who are less equipped for exception-handling as a result, and poorer work outcomes, as those who plan are uninformed by seeing the results of their plans and those who “do” are unable to affect the way in which work tasks are approached.

A common theme in the human-centered literature is that it is the process of technology design which determines the effect of that technology upon its human users. This is best illustrated by considering recent developments in the approach to technological determinism. Technology may be argued to determine work design (Braverman, 1974), or to be neutral in its impact, with the relationship between technology and work design being mediated by managerial intentions and values (Buchanan and Boddy, 1983), by managerial strategic choice (Child, 1972) or by organizational politics (Mumford & Pettigrew, 1975; Child, 1984). However, the forms of available technology have an independent influence on the range of social choices available (Wilkinson, 1983; Scarbrough & Corbett, 1991). An analysis of technology as an unexplored entity which simply embodies the intentions and interests of particular groups ignores the technological decision-making which precedes the managerial decision-making process: the processes of design.

This socio-technical perspective is most apparent in the literature on prototyping and participatory design. This area of work explicitly attempts to deal with the "multiple worlds" espoused by various organizational actors (Checkland, 1981). Evolutionary methodologies permit users to incorporate desired ways of working into the design of the information system (Eason, 1982; Floyd, 1987). IS stakeholders are placed in a situation where they can negotiate their requirements of an IS around a design exemplar – a prototype IT system, or a prototype work-system. But the attempt to balance the two domains tends to focus more on one domain than the other. While, for example, Mumford's work on ETHICS (Mumford, 1983; Mumford and Weir, 1979) attempts the joint satisfaction of both social and technical interests, it deals almost exclusively with the design of work systems. Technology is viewed as infinitely configurable to suit the organization of workgroups, with no account taken of constraints imposed by either technology design or its implementation. More recent work (Butler and Fitzgerald, 2001; Lehaney et al., 1999) examines the ways in which user participation in decisions concerning the use of information technologies affects the outcome, but focuses on participation in business process redefinition. While this is essential, it is not sufficient.

    We have discussed how goals may be subverted by the technical systems design and implementation processes that follow business process redefinition.

Muller et al. (1993) list a variety of methods for participatory design, classified by the position of the activity in the development cycle and by "who participates with whom in what". The latter axis ranges from "designers participate in users' worlds" to "users directly participate in design activities". For participatory design to be genuinely participatory, user-worlds must be effectively represented in the design. But, as discussed above, there is a wide disparity between user "worlds". Participatory development has more potential to be politically disruptive and contentious than traditional (non-participatory) forms of system development, because it involves a wide variety of interests, with differing objectives and perspectives on how organizational work and responsibilities should change (Howcroft and Wilson, 2003; Winograd, 1996). This situation is therefore managed carefully in practice. System stakeholders are selected for participation on the basis of political affiliations and compliance, rather than for their understanding of organizational systems support and information requirements. This constrains user choice and significantly affects the potential to achieve a human-centered system design (Howcroft and Wilson, 2003). Users often have little choice about whether to participate. Even when trained in system development methods, users and other non-technical stakeholders often cannot participate on an equal basis with IT professionals (Howcroft and Wilson, 2003; Kirsch and Beath, 1996). User views are often inadequately represented because of cost constraints, or a lack of appreciation of the significance of users' perspectives (Cavaye, 1995). Howcroft and Wilson (2003) argue that user choice is significantly constrained by organizational managers, who predetermine boundaries for the scope of the new system, and who select who will participate in systems development and to what extent. Because of its reliance on the production of technical system prototypes, the participatory approach is therefore technology-focused.

IT professionals frame user perceptions of how a technology can be employed (Markus and Bjorn-Andersen, 1987). They are able to constrain the choices of non-technical stakeholders by the ways in which alternatives are presented and implemented in the system prototypes. User worldviews may easily be relegated to "interface" considerations by technical system designers, even when the explicit focus of the method is on joint system definition (Gasson, 1999). The use of participatory design may become a power struggle between, on the one hand, "rational", technical system designers and, on the other hand, "irrational" user-representatives who are unable to articulate system requirements in technical terms (Gasson, 1999; Nelson, 1993). The concept of empowering workers raises hackles: it is seen as "social engineering" that compares unfavorably (in scientific, rationalist discourse) with "software engineering", and designers who engage in such "irrational" behavior are assumed to have a subversive, counterproductive agenda (Nelson, 1993). Thus, participatory design may often be subsumed to the less intrusive (and much less confrontational) path of producing user-centered design "methods" that can be partially used, in ways chosen and controlled by technical designers.

Interaction Design

Interaction design is a recent development arising from work in Human-Computer Interaction (HCI). It considers a much deeper set of concepts than the traditional HCI interests of user-interface affordance and usability. Interaction design examines the ways in which people will work with a technical artifact and designs the artifact to reflect these specific purposes and uses (Preece et al., 2002). Winograd (1994) defines interaction design as follows:

    “My own perspective is that we need to develop a language of software interaction – a way of framing problems and making distinctions that can orient the designer. … There is an emerging body of concepts and distinctions that can be used to transcend the specifics of any interface and reveal the space of possibilities in which it represents one point.” (Winograd, 1994).

So interaction design has the potential to consider a space of possibilities, but in general it appears to be constrained to specific interactions with a predetermined technology by the tradition of HCI discourse. Interaction design, as defined by Cooper (1999) – who claims to have invented the approach – is "goal-directed design": "Interaction designers focus directly on the way users see and interact with software-based products." (Cooper, 1999). Interaction design from this perspective is product and development driven: this approach defines what software system products should be built and how they should behave in a particular context (Cooper, 1999). But goal-directed approaches are only appropriate when the problem is relatively well-defined (Checkland, 1981; Checkland and Holwell, 1998). Most organizationally-situated design goals are emergent. A similar, goal-driven approach is taken by Preece et al. (2002), who emphasize "the interactive aspects of a product" (page 11). Although they extend the goal-driven concept with rich discussions of use, their perspective is also essentially driven by the notion that design is centered around the conceptualization of a computer-based product with an individual user. Inquiry into the socio-cultural worlds of its use, and into negotiated collaboration between interested stakeholders, is secondary.

    3.2 Agile Software Development

Formal methods are increasingly being abandoned in favor of rapid methods with shorter lifecycles and a lower administrative overhead (Barry and Lang, 2003; Beynon-Davies and Holmes, 1998). But rapid methods do not appear to deal well with user requirements and may lead to a more techno-centric focus than traditional methods (Beynon-Davies and Holmes, 1998). There is a temptation with rapid approaches for system developers to revert to the code-and-fix approach that characterized software development before the advent of formal methods (Boehm et al., 1984; Fowler, 2003). "Agile" software development was conceived in response to a perceived need to balance technical system design interests with an understanding of user requirements. Uniquely, this is a practitioner-initiated approach to human-centeredness in IS design. Highsmith's (2000) Adaptive Software Development and Beck's (1999) eXtreme Programming are both examples of agile software development: practitioner-instigated approaches that combine a minimalist form of system design (i.e. informal methods and short lifecycles) with a user-centered approach. The Agile Manifesto (Fowler and Highsmith, 2001) argues for the following points:

    • Individuals and interactions are valued over processes and tools.
    • Working software is valued over comprehensive documentation.
    • Customer collaboration is valued over contract negotiation.
    • Responding to change is valued over following a plan.

    These points reflect many of the conclusions of the literature discussion above, particularly in their focus on goal emergence. The ways in which goals are inquired into, agreed and made explicit are critical to achieving a human-centered outcome. Agile software development emphasizes an adaptive approach to defining system goals and requirements as the design proceeds. This is an implicit recognition of the difficulties of understanding the needs of multiple user worlds in advance of the system design. System goals and requirements are adapted to the designer’s (and other stakeholders’) increasing understanding of the role that the system will play in organizational work. In Adaptive Software Development, Highsmith (2000) rejects what he terms “monumental software development”, in favor of “fitting the process to the ecosystem”. At the heart of the approach are three overlapping phases: speculation, collaboration, and learning. He argues that systems design should respond to the contingencies of the local context, rather than fitting the problem analysis to the framework underlying a formal analysis method. Although Highsmith does not prescribe specific methods, he does emphasize teamwork and the involvement of system users in all aspects of system definition and design. However, although Highsmith’s work has been influential in forming popular perceptions of how to manage system design, it does not offer a method for performing design. One of the most popular methods for agile software development is eXtreme Programming (Beck, 1999). This approach is based partly on the concept of scenario analysis (Carroll and Rosson, 1992) – a concept that is familiar to HCI researchers but novel to many technical system designers.

    The eXtreme Programming approach emphasizes a specific way of eliciting requirements from system users, in an informal and iterative process. Technical systems developers work in pairs with selected users to generate short scenarios, which are coded into a system prototype. One developer codes, while the other checks the code for authenticity and correctness (these roles are swapped frequently). The user is invited back to validate the prototype against the scenario and to generate additional scenarios, based on their realization of shortcomings or omissions in the original scenario after having used the prototype.

    In its focus on emergence and “the people factor”, agile software development may be considered human-centered in its intent. However, its ultimate emphasis on the practice and profession of producing software systems, without explicit validation of system goals and organizational roles by non-technical stakeholders, renders it vulnerable to deadline-driven expediency (Nelson, 2002). Agile approaches provide a worthwhile attempt to deal with problems of implicit knowledge, evolutionary learning (by users) of what technology has to offer for their work, and misunderstandings between technical designers and users, as technologists gradually enter the lifeworld of the user. But these approaches are based on the development of software, rather than organizational systems. They involve a very small selection of “representative” users, they make no attempt to understand or investigate the wider, socio-technical system of work, and they pay little attention to the selection of appropriate system users for scenario generation. Additionally, the method suffers from a common problem of evolutionary prototyping: the approach starts with the specific intention of building a technical system, not with the intention of bringing about organizational and technical change. As Butler and Fitzgerald (2001) remark, stakeholders must be involved in the definition of organizational and process change, before their involvement in IT systems development can be considered anything other than token.
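    To make this cycle concrete, the sketch below is a minimal illustration (my own construction, not a practice prescribed by Beck, 1999) of the scenario-driven prototyping loop just described: users supply short scenarios, the developer pair turns each into prototype behavior, and the user’s review either validates the scenario or yields further scenarios. The Scenario and Prototype structures, and the user_review stand-in for a real user, are hypothetical.

```python
# Illustrative sketch of a scenario-driven prototyping loop (not Beck's published method).
from dataclasses import dataclass, field

@dataclass
class Scenario:
    description: str              # short narrative supplied by a user
    validated: bool = False       # set once the user accepts the prototype behavior

@dataclass
class Prototype:
    implemented: list = field(default_factory=list)   # scenarios coded into the prototype so far

def xp_style_cycle(initial_scenarios, user_review, max_iterations=10):
    """Iterate until the user raises no new scenarios (or we give up).

    `user_review(prototype, scenario)` stands in for the real user: it returns a list
    of *new* scenarios prompted by using the current prototype.
    """
    backlog = list(initial_scenarios)
    prototype = Prototype()
    for _ in range(max_iterations):
        if not backlog:
            break
        scenario = backlog.pop(0)
        prototype.implemented.append(scenario)     # the pair-programming step, abstracted away
        new_scenarios = user_review(prototype, scenario)
        scenario.validated = not new_scenarios
        backlog.extend(new_scenarios)              # shortcomings become new scenarios
    return prototype

if __name__ == "__main__":
    asked = {"done": False}
    def user_review(prototype, scenario):
        # The (hypothetical) user asks for one refinement after seeing the first prototype.
        if not asked["done"]:
            asked["done"] = True
            return [Scenario("Also show an error message when the order is empty")]
        return []
    result = xp_style_cycle([Scenario("Place an order from a saved basket")], user_review)
    print([s.description for s in result.implemented])
```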

    3.3 Evolutionary Models of Design

    Recent management concern has centered on more human-centered and business-oriented approaches to IS development (Hirschheim & Klein, 1994). However, IS development projects are concerned with process management issues at a macro level, rather than an individual level; an attempt to encompass both macro processes and human and organizational concerns can be seen in the spiral model of software development presented by Boehm (1988), shown in Figure 6. The spiral model is an attempt to manage design emergence, uncertainty and risk in ISD project management. As such, it anticipates many of the situated action issues discussed above. In this model, the radial dimension represents the cumulative cost of development to date, while the angular dimension represents the progress made in completing each cycle of the spiral.

    Figure 6. Boehm’s Spiral Model of Design (Boehm, 1988)

    An underlying concept of this model is that each cycle involves a progression that addresses the same sequence of steps, for each portion of the product and for each of its levels of elaboration. However, there are three main problems with this model in guiding the management of IS development:

    • The model represents a macro level representation of development: it does not address outputs from effective design processes and it does not represent the behavioral issues which managers face in real-life IS development.
    • The skills required at different stages of the cycle are unrealistic: it is not feasible to expect the same group of developers to possess (or to acquire) skills in both detailed technical design and risk analysis.
    • The model cannot be said to represent IS development practice, even at a macro level: Boehm (1988) admits that it is not based on empirical observations, nor has it been tested experimentally.

    Despite these criticisms, the model is a real advance in theoretical thinking about IS development practice. It embodies an iterative process and encompasses human and organizational concerns through the inclusion of evolutionary prototyping as an essential component of organizational risk management. However, the four evolutionary stages of the model – determine objectives, evaluate alternatives, develop product and plan next phase – may be too akin to the “rational” model of decision-making (Simon, 1960), criticized in the previous section, to be of help in managing real-life processes. What are needed are models that encompass both macro business processes and human and organizational concerns, but rely less on managing the predictability and rationality of process outcomes. There is a basic conflict here: professional management is concerned primarily with the reduction of risk, through an emphasis on predictability and an assertion of rationality, while effective design requires the “control and combination of rational and imaginative thought” (Lawson, 1990). It may be that the two goals are incompatible: that different models are required for the control of macro (project) processes and the support of micro (design) processes. However, the two are closely interlinked and any approach to IS development that adopts a single perspective will not succeed.
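    A minimal sketch may help to fix the structure of the spiral in mind. It represents each cycle as the same four-step sequence, with cumulative cost (the radial dimension) growing cycle by cycle; this is my own reading of the model for illustration, not code derived from Boehm (1988), and the cycle costs are invented.

```python
# Illustrative sketch of spiral-model cycles: same four steps per cycle, cumulative cost grows.
from dataclasses import dataclass

# The four steps the text attributes to each cycle of the spiral.
STEPS = ("determine objectives", "evaluate alternatives", "develop product", "plan next phase")

@dataclass
class SpiralCycle:
    number: int
    cost: float                      # cost of this cycle; the radial dimension is the running total
    steps: tuple = STEPS             # every cycle repeats the same step sequence

def run_spiral(cycle_costs):
    """Yield each cycle together with the cumulative cost reached after it."""
    cumulative = 0.0
    for number, cost in enumerate(cycle_costs, start=1):
        cumulative += cost
        yield SpiralCycle(number, cost), cumulative

if __name__ == "__main__":
    # Hypothetical cycle costs: each successive level of elaboration costs more.
    for cycle, total in run_spiral([10.0, 25.0, 60.0]):
        print(f"cycle {cycle.number}: steps {', '.join(cycle.steps)}; cumulative cost {total}")
```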

    The need for both deductive and inductive reasoning in design has to do with how human beings cope with design requirements that are either undiscovered or tacit in nature. It would therefore follow that the role of tacit knowledge and inductive reasoning in design is greater for problems that are not well-structured. Schön (1983) describes design as “art”: a design problem can only be approached via “reflection-in-action”, purposeful action which calls on tacit knowledge for its execution. The concept is best described in Schön’s (1983) own words:

    “Even when he [the professional practitioner] makes conscious use of research-based theories and techniques, he is dependent on tacit recognitions, judgments and skillful performances.” (Schön, 1983, page 50).

    Expertise in professional skills such as design can only be accomplished by learning-through-doing (Schön, 1983). The role of learning-through-doing was also highlighted by Rosenbrock (1988), in discussing the exigencies of engineering design, and by Argyris (1987), who called for a new way of approaching information systems design. But if we see learning as central to design, the hierarchical decomposition approach becomes unusable. Empirical studies observe “opportunistic” design strategies (Jeffries et al., 1981; Guindon, 1990a, 1990b; Khushalani et al., 1994), defined as (various types of) deviation from hierarchical decomposition. Visser and Hoc (1990) argue that many of the early studies of design processes conflate prescription and description: they ignore what the activity of design is really like, in order to focus on what it should be like. In addition, early studies often presented subjects with a unitary, relatively well-defined and well-structured problem to solve. Hierarchical decomposition is well suited to this type of problem, so “opportunistic” deviations from a decompositional strategy appear unproductive. But when designers are faced with ill-bounded and ill-structured problems, decompositional strategies fail.

    3.4 Empirical Studies Of Human-Centered and Evolutionary Design Processes

    The “user-centered” model of evolutionary prototyping supposedly focuses on the design of IT systems to support the needs of human beings in the role of system user. Unfortunately, human-centered and user-centered approaches do not lead to the same outcome. With human-centered design approaches, the analysis starts with an exploration of the worker’s (or collectivity’s) work processes and information needs. It explores how processes could be redesigned, to remove historical and technological system constraints on how people work. It investigates workflows between people, stages of information processing, and the sharing of information across processes, to produce a coherent understanding of how work processes can be simplified and coordinated more effectively, and how information needs can be supplied in a timely and human-centered manner. Only then does it define the system of IT required to provide these aspects of support. The most important aspect of human-centered design, however, is the precedence given to human decision-making over IT-enabled decision analysis. An explicit part of human-centered design is the analysis of key decision-points, involving debate around who should make decisions, on the basis of what information, and the degree to which decision-support can and should be automated. Human beings are viewed as sources of error-correction and local knowledge. Human decision-making is privileged over machine decision-making because humans adapt to changing circumstances and are better able than IT systems to make decisions under conditions of uncertainty. Decisions relegated to the IT system are routine, programmable decisions involving repetitive, complex data processing. User-centered design, on the other hand, starts with an IT system concept. It defines work-processes around a predefined notion of information classes and objects – and the relationships between these. It assumes that all work, especially decision-analysis, is best done by the IT system, as human beings are perceived as subjective. Even when “user-centered” approaches deliberately set out to model human business and work-processes, these models are always flawed because they represent such processes as interactions with an IT system, rather than in the situated context of workflows and human interactions (Gasson, 2003).
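    The decision-allocation step described above can be illustrated with a small sketch. The allocation rule below is my own simplification of the human-centered position taken in this section: routine, programmable decisions are assigned to the IT system, while decisions involving uncertainty or local knowledge remain with people. The DecisionPoint structure and its attributes are hypothetical.

```python
# Illustrative sketch of a human-centered decision-allocation rule (my own simplification).
from dataclasses import dataclass

@dataclass
class DecisionPoint:
    name: str
    routine: bool            # repetitive and fully specifiable in advance?
    uncertainty: bool        # depends on changing circumstances or local judgment?

def allocate(decision: DecisionPoint) -> str:
    """Return who should make this decision under a human-centered allocation rule."""
    if decision.routine and not decision.uncertainty:
        return "IT system"            # programmable, repetitive data processing
    return "human"                    # error-correction, local knowledge, uncertainty

if __name__ == "__main__":
    for d in (DecisionPoint("reorder stock below threshold", True, False),
              DecisionPoint("prioritize a customer complaint", False, True)):
        print(f"{d.name}: {allocate(d)}")
```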

    4. DESIGN DOMINATED BY BUSINESS PROCESS CONSTRAINTS

    Section 4 (under development) presents the extension of design theory to encompass situated, collaborative, group action and discusses this extension as a response to strategic business coordination constraints. Business processes are viewed as cross-functional collaboration between different work-groups, whether internal to, or external to the organization. This fourth perspective moves the design of information systems on from the limited perspectives offered by viewing an information system as synonymous with a computer system and resolves many of the theoretical conceptualization issues implicit in recent IS design writings.

    5. COLLABORATIVE DESIGN

    Section 5 deals with the extension of design theory to collaborative, group action.

    5.1 Intersubjectivity and Shared Understanding

    Weick (1979) argues that shared cognition emerges through the process by which a group develops collectively structured behavior, and that this process is inconsistent with achieving intersubjectivity. He describes four phases of this process, shown in Figure 7.

    Figure 7. Four Stages of Sensemaking in Developing Shared Understanding (Weick, 1979)

    Initially, groups form among people who are pursuing diverse ends. As a structure begins to form, group members reciprocate behavior that is valued by other group members while still pursuing individual goals, and thus converge on common means: a common group process rather than common goals. Once the group members converge on interlocked behavior, a shift occurs from diverse ends to common ends. Initially, the common end is to perpetuate the group’s collective structure, which has been instrumental in aiding individuals to get what they want; other common ends follow from this recognition of mutual interest. Finally, the group is enabled to pursue diverse means, often because the division of labor between group members permits individuals to pursue ends in ways that fit their own specialization, but also because the stability engendered by a durable collective structure enables individuals to pursue elements of the problem-situation that appear unpredictable and disorderly in comparison to the stable environment produced by the group. There may be pressures to reassert individuality following the subsumption of individuals’ interests to those of the group.

    Design depends upon intersubjectivity for effective communication between team members to take place (Flor and Hutchins, 1991; Hutchins, 1990, 1991, 1995; Orlikowski & Gash, 1994; Star, 1989). Technical system designers, “successful in sharing plans and goals, create an environment in which efficient communication can occur” (Flor and Hutchins, 1991). Orlikowski & Gash (1994), in a hermeneutic analysis of different interest groups’ assumptions, knowledge and expectations of a new groupware technology, refer to intersubjectively-held mental models as “shared technological frames”:

    “Because technologies are social artefacts, their material form and function will embody their sponsors’ and developers’ objectives, values, interest and knowledge regarding that technology” (Orlikowski & Gash, 1994, page 179).

    However, most work on shared technological frames assumes too great an extent of intersubjectivity, applying shared frames to areas of collective work that are highly unlikely to be framed in the same way by different actors, even when they belong to the same workgroup. Figure 8 illustrates the extent of intersubjectivity (shared meanings) between organizational actors, reflecting the degree to which cognitive frames are likely to be shared.

    Figure 8. The Limits of Shared Understanding in Collaborative Design Groups

    5.2 Distributed Cognition In Design

    5.2.1 How Understanding is Distributed

    An explanation of how the division of labor identified by Weick may be enabled lies in the notion of distributed cognition (Norman, 1991; Hutchins, 1991). Distributed cognition involves a model of the task or problem in hand which is “stretched over”, rather than shared between, members of a collaborative group (Star, 1989). Distributed cognition enables members of design teams and other workgroups to coordinate their actions without having to understand every facet of the work of other individuals in the group:

    “Distributed cognition is the process whereby individuals who act autonomously within a decision domain make interpretations of their situation and exchange them with others with whom they have interdependencies so that each may act with an understanding of their own situation and that of others.” (Boland et al., 1994, page 457).

    Distributed cognition may be coordinated using “boundary objects” which represent the current state of the outcome of group activity (Norman, 1991; Star, 1989). Boundary objects which aid distributed cognition include external representations of a design (e.g. a diagrammatic model) and design specifications. Thus, intersubjective understandings of the design problem and solution are not required: an effective collaborative group can function well when group members share very little common understanding of the problem in hand, provided that they have effective coordination mechanisms (Star, 1989). Individual group members may have very different models of the organizational “world” and different design goals. The literature on collaborative group work normally assumes intersubjectivity – a common vision shared by all group members. Distributed cognition theorists would argue that a group of designers do not need to understand all the elements of the design problem; they just need to achieve sufficient overlap between their different perspectives and understandings of the design problem for the group to coordinate their design activity.
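    As an illustration of this argument, the sketch below models a boundary object as a shared, externally visible representation of the design that members with different, partial understandings update independently. It is a toy construction of my own, not an implementation drawn from Star (1989) or the other cited authors; the facet names and Designer roles are invented.

```python
# Illustrative sketch of a "boundary object" coordinating designers with partial understandings.
class BoundaryObject:
    """The current, externally visible state of the design (e.g. a diagram or specification)."""
    def __init__(self):
        self.entries = {}                      # facet name -> current description

    def contribute(self, facet, description):
        self.entries[facet] = description      # each contributor updates only their own facet

    def view(self):
        return dict(self.entries)              # anyone can read the whole current state

class Designer:
    def __init__(self, name, facets_understood):
        self.name = name
        self.facets = facets_understood        # the partial slice of the problem this person holds

    def work(self, boundary_object):
        for facet in self.facets:
            boundary_object.contribute(facet, f"{facet} drafted by {self.name}")

if __name__ == "__main__":
    spec = BoundaryObject()
    for d in (Designer("analyst", ["workflow"]), Designer("developer", ["data model"])):
        d.work(spec)                           # neither needs to understand the other's facet
    print(spec.view())
```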

    Although some writers have proposed that group coordination may be aided by the use of “boundary objects” (Hutchins, 1991; Norman, 1991; Star, 1989), we know little of the mechanisms by which effective distributed cognition is achieved and maintained. Lave (1991) argues that the process of socially shared cognition should not be seen as ending in the internalization of knowledge by individuals, but as a process of becoming a member of a “community of sustained practice”. Such communities reflect the sociocultural practices of the group in its work-context, from the perspective of cultural cohesion rather than rationality. These practices reflect ways of accommodating the real-life personas that the group needs to encompass, in coordinating work effectively.

    5.2.2 Tacit Knowledge About The Application Domain

    Argyris and Schön (1978) had previously compared the espoused theory held by a person to explain how they performed a task with the theory-in-use: what they actually did to perform the task. Espoused theories tended to conform to explicit organizational procedures and rules, while theories-in-use tended to be derived from implicit understanding of a task’s requirements, which was difficult or impossible to articulate. Schön’s (1983) work developed this concept with a focus on design (among other work-activities). He argued that design depends upon continual interaction with the problem-context, followed by reflection upon that interaction. Learning through doing is key to this perspective.

    5.2.3 Social Construction In Design

    A further thread in the individual perspective of design is provided by Mackenzie and Wajcman (1985), Bijker et al. (1987) and Latour (1987, 1991), building on the social construction theories of Berger and Luckman (1966). These authors argue that technology is socially constructed and that features which enforce a particular behavior or control mechanism become embedded in the form of technology designed for a specific context. Mackenzie and Wajcman (1985) argue that this mechanism is unconscious: the form of new technology is constrained by the form of technological exemplars encountered by the designer. But Latour (1987) argues that the design of technology embeds the explicit intentions of the stakeholders whose interests the designer serves. Scarbrough and Corbett (1991) conclude that, while technology does serve the interests of dominant stakeholders, this is because of a cyclical influence: technology is shaped by social constructions inherent in the context of design, as shown in Figure 9.

    Figure 9. How Technology is Shaped by Social Constructions in the Context of Design (Scarbrough and Corbett, 1991)

    This model of design influences is significant because of the dependence that it demonstrates between the cultural context and the design of technology. Taken with Latour’s (1987) work on actor-network theory, which demonstrated a network of causality between dominant interests and technology, this model provides a convincing argument for how the cultural management of meaning (Smircich and Morgan, 1982) operates in design. Perceptions of the meaning of technology to the organization (for example, its role and value) influence the design of technical systems, which in turn reinforces the cultural ideology of the organization, which in turn shapes and manages meanings of technology within the organization … and so on. Design is thus both formed by and forms the social context in which it takes place.

    5.3 Design As Situated Action

    5.3.1 Contextually-Situated Action

    Suchman (1987) likened design and planning to steering a boat: while the overall goal may be fixed, the path to achieve that goal is affected by local contingencies (the waves and currents that are encountered on the way to the goal). She argues that current design practice is constrained by a view of information systems as rule-based information-processing systems, in which human work disappears from view, and that design can only succeed if the process permits goals to change and contingent processes to emerge. This is reminiscent of Mintzberg and Waters’ (1985) arguments concerning strategic planning. As any plan is executed, new contingencies arise that cause some parts of the previous strategy to be discarded and new components to be added. This often leads to a change in the detailed goals of the plan. The consequence of applying a situated action model of human problem-solving to design is shown in Figure 10.

    Figure 10. Effect of Evolving Goals on the Design Process: a parabolic trajectory of the design process, as goals evolve over time

    Design is thus no longer “guided” by goals, but becomes a relatively unpredictable process of seeking out short-term and partial goals. As postulated in Gasson (1998), design is conducted by taking “good enough”, partial goal subsets and working with these until situational contingencies, or a conflict of explicitly stated requirements with an individual’s implicit model of the way that the “real world” works, causes a partial redefinition of the design goals. The latter type of conflict, referred to by Winograd and Flores (1986, after Heidegger, 1962) as a cognitive breakdown, underlies the groundbreaking study of design by Malhotra et al. (1980), discussed above.

    The emergent, situated model of design is a significant development in how design is perceived. The majority of extant design theories are goal-oriented – including many of the more recent “soft” approaches. For example, Checkland’s (1981) Soft Systems Methodology is based on the notion that stakeholders in the new information system being designed are capable of defining consensus outcomes that they wish the new system to achieve. But consensus may mean that perspectives are unitary in nature, reproducing a primary constraint of traditional approaches to IS design. Burrell (1983) criticizes the Soft Systems approach of Checkland (1981) for privileging the management interest through the modeling of consensus, which is unrealistic in a political context where management interests dominate. In contrast, under the situated model, goals constantly evolve with an understanding of the design, and the actual path of design is much more complex (and longer) than that perceived by actors external to the design process, who only see the start and end points of the design. This model may explain why timescales always ‘slip’ in IS development projects – a common comment from those not involved in such projects is “why did it take you so long?”. A critical process of design must therefore be the management of external perceptions of the design process, particularly those of the “global network” (Law & Callon, 1992) – the informal network of influential decision-makers affected by, and indirectly attached to, a design project.
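    The loop described here, in which work proceeds on a “good enough” partial goal subset until a breakdown forces the goals themselves to be reframed, can be sketched as follows. This is my own simplification for illustration: the breakdown detector and the goal-reframing rule are placeholders for judgments that, in practice, are made by people in a situated organizational context.

```python
# Illustrative sketch of situated design: partial goals, breakdowns, and goal redefinition.
import random

def situated_design(goals, detect_breakdown, redefine_goals, max_steps=20):
    """Work with a partial goal subset; reopen and reframe goals whenever a breakdown occurs."""
    history = []
    for _ in range(max_steps):
        working_set = goals[:2]                 # a "good enough", partial subset of the current goals
        history.append(list(working_set))
        if detect_breakdown(working_set):
            goals = redefine_goals(goals)       # breakdown: goals are partially redefined
        else:
            return goals, history               # provisional closure on the current goals
    return goals, history

if __name__ == "__main__":
    random.seed(1)

    def detect(working_set):                    # stand-in for a real, situated judgment of breakdown
        return random.random() < 0.4

    def redefine(goals):                        # stand-in for collective reframing of the goals
        return goals[1:] + ["revised: " + goals[0]]

    final_goals, path = situated_design(
        ["support ordering", "track stock levels", "report usage"], detect, redefine)
    print(final_goals)
    print(f"{len(path)} iteration(s) before provisional closure")
```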

    5.3.2 Socially Situated Action In Communities of Practice

    An important development of situated action theories arose from arguments that individual action is situated within one or more communities of practice (Brown and Duguid, 1991; Lave 1991; Lave and Wenger, 1991). Brown and Duguid (1991) argue that learning takes place within a sociocultural context: a set of rules, norms and expectations that are constructed by members of the local work community, through their interactions. Lave and Wenger (1991) demonstrate that tasks and artifacts cannot be abstracted independently of their sociocultural context. For example, Brazilian street children who cannot perform mathematical calculations in a classroom context are perfectly capable of performing the same calculations when transacting a sale. Understanding is situated in the context of practice. Remove the understanding of the context in which the task will be performed and you remove the ability to understand and abstract the task. Lave and Wenger also argue that membership of a community of practice depends upon the implicit learning and adoption of the sociocultural norms of that group. This results in a great deal of organizational practice which is not rational, but historical in nature. Information system designers need to understand the sociocultural practices underlying the day to day practice of work-tasks, to be able to design effective support for those practices.

    5.3.3 Organizationally-Situated Processes

    The core role played by information system designers in mediating social and political concerns, and their unpreparedness for this task, was demonstrated by Boland and Day (1989). An IS designer was shadowed and then interviewed about her work. She expressed her concern at having to make decisions about social and political issues which she saw as outside an appropriate scope of work for IS design. Such issues were often dealt with at an implicit level: the designer was not aware of making such decisions until much later.

    6. SUMMARY AND DISCUSSION

    Section 6 summarizes the evolution of design theories, discusses some lacunae in current understandings of how design works and presents a dual-cycle model of design, based on an empirical field study of situated design, to resolve some of the major deficiencies in current information system design theory.

    6.1 The Adoption Of Successive Theories of Design

    As might be expected, studies in the information system design literature follow the paradigmatic assumptions of the period during which they were conducted. Early studies assume an individual, rational, “information processing” view of human cognition (Mayer, 1989) and focus on the design process as unitary, structured problem-solving. Studies in the 1970s and early 1980s follow Simon (1973) in their focus on structuring a unitary problem (e.g. Gane and Sarsen, 1979). Most studies equate the technical information system with an organizational information system and so are surveys of what IT system development methodologies [sic] are in use. Even after ground-breaking theory in socio-technical systems (Emery and Trist, 1960) had been applied to information system design by Enid Mumford (Mumford, 1983; Mumford and Sackman, 1985), empirical studies continued to focus uncritically on applying structured IT system development methods (e.g. Sumner and Sitek, 1986; Necco, 1989).

    This emphasis is followed by the early “psychology of programming” interest group literature (e.g. Hoc et al., 1980). Early empirical studies in this area found a distinct difference between the problem decomposition strategies employed by novice vs. experienced programmers (Adelson and Soloway, 1985; Jeffries et al., 1981; Kant and Newell, 1984). Novice programmers tended to employ a top-down, depth-first decomposition strategy (i.e. hierarchical decomposition, as shown in Figures 2 and 3 above). Experienced programmers employed a top-down, breadth-first strategy, developing an integrated approach to the synthesis of solutions to partial problem-statements in an ad hoc manner. They conclude that (experienced) design is therefore opportunistic in nature. Most of these studies employed a very structured design problem, such as designing a program to control the priorities of an elevator. They also often used students as proxy subjects for “novice” and “experienced” programmers, which limits the validity of their findings.
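    The contrast between the two decomposition strategies can be illustrated with a small sketch, using an ordinary tree traversal as a stand-in for problem decomposition. The toy problem tree below is my own invention, loosely echoing the elevator-control example mentioned above; the point is only the difference in the order in which sub-problems are elaborated.

```python
# Illustrative sketch: depth-first vs. breadth-first decomposition of a toy design problem.
from collections import deque

problem = {
    "elevator control": ["scheduling", "door control"],
    "scheduling": ["request queue", "priority rule"],
    "door control": ["open/close timing", "safety interlock"],
}

def depth_first(node):          # "novice": elaborate one branch fully before starting the next
    order = [node]
    for child in problem.get(node, []):
        order += depth_first(child)
    return order

def breadth_first(node):        # "expert": sketch every sub-problem at one level before going deeper
    order, queue = [], deque([node])
    while queue:
        current = queue.popleft()
        order.append(current)
        queue.extend(problem.get(current, []))
    return order

if __name__ == "__main__":
    print("depth-first  :", depth_first("elevator control"))
    print("breadth-first:", breadth_first("elevator control"))
```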

    Thus it can be seen that an evolution in perspective has taken place: an “information system” is now viewed as a work-system. Support for the IT system user’s decision-processes and a focus on the user-interface with the IT system characterize this period of IS design literature. This period contains a great many studies that focus on the application of prescriptive methods for ensuring usable systems, in terms of task-fit and IT system interface design. “Human-centeredness” and emancipation become popular threads in the IS and IR-related organizational change literature (for example: Corbett, 1987; Gill, 1991; Scarbrough & Corbett, 1991; Zuboff, 1988). Prototyping becomes a popular method for developing IT systems (e.g. Boehm, 1984; Floyd, 1984, 1987). But prototyping does not necessarily focus on work-task support.

    Floyd (1984) contrasts evolutionary prototyping, an approach that involves IT system users in evaluating and providing input to the design of an evolving system, with experimental prototyping, the production of prototypes for feasibility testing of a technical concept. But the perspective of design is seriously constrained by a focus on the “user” of a technical system, rather than on the combined social, work-process, business strategy and technical goals of an organizational information system.

    Social construction is brought into the explicit processes of design by the “soft” systems perspective (Checkland, 1981). Churchman’s (1971) work on inquiring systems drew attention to the interconnectedness of system elements and requirements, and the need for purposive inquiry into the problem situation to define design goals. Checkland (1981; Checkland & Scholes, 1990) built on this concept to provide an holistic theory of information system design that has four main properties:

    • An organizational information system is a system of human-activity supported by an IT system. The task of information system design is to investigate the problem-situation concerning a particular human-activity system and to determine appropriate interventions, only some of which may involve IT system design.
    • Any human-activity system involves multiple sub-systems of tasks, performed for multiple purposes. A major priority for information system design is for the designer, participants and other stakeholders in the human-activity system to understand and to separate conceptually the purposive systems that constitute the whole.
    • Different information system stakeholders have different worldviews that cause them to interpret the meaning and purpose of human-activity in different ways. To succeed in implementing an information system that will benefit the majority of stakeholders, the design process must focus on the collective negotiation of requirements for action.
    • Selection of appropriate scope(s) for organizational intervention(s), such as the design of a new information system (work-task changes plus IT system support), should be made explicit to all system stakeholders and subject to consensus agreement.

    Criticisms may be leveled at the detailed mechanisms proposed by Checkland and other soft systems authors: for example, Burrell (1983) criticized Checkland’s (1981) notion of consensus, arguing that the facilitated workshops proposed for this purpose would always privilege the management interest over other interests. But Checkland’s work has had a significant influence on both theory and practice and is responsible for changing the dominant paradigm of information system design. The negotiation and incorporation of a consensus set of system requirements, derived from multiple stakeholder worldviews, is a very long way from Alexander’s (1964) argument that the structure of a system solution is embedded in the system problem and exists independently of the designer. Checkland’s work has given a deeper meaning to Rittel’s (1972) argument that design is cognitively-constructed and so should be derived through “argumentation”.

    This perspective can be contrasted with the hard systems approach, which sees system properties as being objective, rather than emergent, with communication and control being human interactions with the material (computer-based) ‘system’, rather than properties of the system itself. While soft systems approaches to IS design see IT as the “serving system” to a “served system” of purposeful human-activity (Winter, Brown and Checkland, 1995), hard systems approaches see IT as the target object system. However, the view is still static: the soft systems literature views design as a process of negotiating a consensus on organizational system definitions, which embody structure and persistence. It may also be argued that the whole thrust of the ‘problem’ investigation literature in the field of IS is aimed at structuring problems and constructing structured data (Preston, 1991).

    An alternative model rejects organizational structure as the basis for design (Truex & Klein, 1991): organizations are seen as emergent and dynamic, with design defined as situated, evolutionary learning. More recently, information system design has come to be viewed as socially-embedded. Winograd and Flores (1986) argued that design includes “the generation of new possibilities” in an organizational change context; they provided significant insights into the nature of cognition in design and its social context. Brown and Duguid (1991) take this perspective further with a discussion of design as supporting communities of practice. They view design as socially-situated and emergent: for successful design, the IS designer should be a peripheral participant in the community of practice which the information system is intended to support.

    6.2 Limitations Of Current Design Practice Arising From The Various Literatures On Design

    It would appear, from the review presented here, that design theories are primarily concerned with problem closure. While this may have been appropriate at a time when information technology designers were concerned with relatively well-defined, unitary problems, it is no longer appropriate for groups of designers engaged in the exploration and definition of “wicked” problems relating to organizational information systems, as well as their solution. The problem of “the problem” dominates design theories, and yet design models are concerned more with solution definition than with problem investigation. Given the concerns expressed above, coupled with the limitations of human cognition, it would appear that evolutionary models of the design process are more appropriate for relatively well-defined problems that need a complex technical solution (and so are analyzable in Simon’s sense) than for complex organizational problems which require formulation, inquiry, reformulation, negotiation, and argumentation before any action can be taken (Rittel, 1972).

    Thus we end with five areas of concern that limit current conceptualizations of design. As with any “wicked” problem, these five areas may be conceptually separated, yet are interrelated.

    1. The labor process problem: While the traditional model provides a clear basis for managing the labor process in IS development, it artificially separates the conceptual and social processes of organizational IS development which are referred to here as design processes. Design activity cannot be separated into a single stage of the system development lifecycle, as in the traditional model: requirements specification, design and technical system implementation are intertwined (Bansler and Bødker, 1993) and so require support and legitimacy at all stages of the system development life-cycle. Radical redesign of a technical system may occur even at the system implementation stage, when problems are encountered during interactive user testing; such redesign is often referred to euphemistically as ‘system maintenance’ (Lientz and Swanson, 1980).

    2. The design process-model problem: The way in which design is managed is based upon a decompositional, breadth-first exploration of the design problem, where all requirements for a solution are defined before problem decomposition is attempted. But empirical studies of individual design strategy show that design strategies are “opportunistic” in nature, adopting depth-first, iterative, recursive or ‘inside-out’ approaches (Ball and Ormerod, 1995). Turner (1987) argues that “requirements and solutions migrate together towards convergence”. Designers fit known solutions to parts of the problem, or reframe the problem to fit known solutions (Malhotra et al., 1980; Guindon, 1990; Turner, 1987).

    3. The bounding problem: The traditional model presupposes a design problem which is unitary in nature, which exists independently of the designer’s frame of reference and which is capable of analysis under conditions of “bounded rationality” (Simon, 1973), where the designer bounds the problem until it is amenable for structured analysis. But the design of complex organizational information systems centers upon the investigation of socially-constructed, “wicked” problems (Rittel and Webber, 1973), which are associated with interrelated, organizational systems of activity. Such problems cannot be “stated” or “solved” in the sense of definitive rules or requirements for a solution (Moran and Carroll, 1996): they are socially-constructed and subjective (Schön, 1983; Galliers and Swan, 1997) and each problem is interrelated with – and thus cannot be defined separately from – multiple, other organizational problems (Rittel and Webber, 1973).

    4. The collaboration problem: The traditional model of IS design is based upon an individual, cognitive model of problem-solving and so excludes many necessary social processes required for collective investigation and negotiation of design attributes. Empirical studies indicate the centrality of communication, shared learning and project co-ordination, but such processes are often deemed illegitimate by managers guided by traditional, individual models of design (Curtis et al., 1988; Walz et al., 1993). Existing approaches resolve this problem by assuming that a unitary, intersubjective model of the designed system can be negotiated by design team members. As we have argued above, this may not be feasible in most organizational information systems supporting complex human work-systems, as these problems are “wicked” problems (Rittel, 1972), that are socially constituted, represent multiple work-goals and are highly interrelated.

    5. The context problem: The traditional model ignores the context of design, as situated in a socially-constituted organizational culture. The form taken by a design involves both technical and social issues, for example, designers may debate the form of a technical artifact in terms of whether users should be prevented by its design from amateur repairs, or whether its design should reflect users’ desires for conspicuous consumption (Callon, 1991). Design is also political: an information system may change the nature of work and the basis of power, for different stakeholder groups within an organization (Wilkinson, 1983). Design processes are viewed as irretrievably interrelated with context: design activity is “situated” in knowledge and assumptions about organizational contexts (Gasser, 1986; Suchman, 1987; Lave, 1991; Lave and Wenger, 1991). Legitimate system “solutions” to political, situated problems are negotiated, rather than defined and are emergent, rather than explicitly stated (Boland and Tenkasi, 1995).

    The five problems of design result in a separation of degree, rather than concept, between the design of a physical artifact and the design of an information system. Current models of design focus on design closure and so delegitimize the essential activities of investigating, negotiating and formulating design problems. We need to focus on “opening up” the design problem, to legitimize the modes of inquiry required for effective design of complex, situated information systems. An understanding of this dialectic has significant implications for both the research and practice of design. The situated nature of design requires design models to be constructed through sharing simulated design contexts, rather than through the medium of abstract representational models; this is ill-supported by traditional methods for design. Such constructions cannot be shared intersubjectively, but rather are distributed between collaborating design-group members.

    Additionally, the contextual constraints upon IS design are considered to have significant implications for design and constitute a critical area of activity which should be managed proactively, particularly where influential organizational decision-makers are involved as stakeholders in a design initiative. These findings have implications for co-operative learning, knowledge management and organizational innovation. If organizational problem-investigation processes are seen as involving distributed knowledge, then the focus of organizational learning and innovation shifts from sharing intersubjective organizational knowledge (achieving a “common vision”) to collaborating in constructing distributed organizational knowledge which is emergent, political and incomplete.

    6.3 A Dual-Cycle Model of Situated, Information System Design

    This section concludes the discussion by presenting a dual-cycle model of design, based on this review of current knowledge, to resolve some of the major deficiencies in current information system design theory.

    Resolution Of The Five Problems: These problems can only be resolved by recognizing the central role played by a periodic reopening of the design problem-definition in achieving shared understanding.

    Findings From Previous Studies:

    • Effective, shared understanding of design needs results from repeated revisiting of design problem definitions by stakeholders. This revisiting is driven by the introduction of a new ‘primary generator’ idea. Too early a closure of the problem is counter-productive.
    • It is not feasible for each member of a collaborative design team to understand all of the design rationale for a complex organizational information system. Readiness for problem closure (and solution specification) is gauged by the extent to which the team share an understanding of organizational goals and outcomes, not by the extent to which they share an understanding of the designed system.
    • The effectiveness of a design is constrained by the need to manage external perceptions and expectations of design outcomes. Successful expectation-management is key to successful evolution of the design, as stakeholder understanding of the design problem improves.

    A shared understanding of organizational goals was found to be more critical to group perceptions of design completeness than a shared understanding of design outcomes. On this basis, an initial, dual-cycle model of collaborative design processes was proposed (Gasson, 1998a). The model is shown in Figure 11.

    Figure 11. Dual-cycle Model of Collaborative Design (Gasson, 1998a)

    The model shows the progress and iterations of the following process of design:

    • Opening-up activity requires detailed investigation of how the organization works (not just how individual domains work): the team needs to understand the target system in terms of work-processes, information use and interactions with other organizational systems of work. In doing so, team members construct process goals (how they want to affect the organization), which enable them to move effectively into the second loop of operation.
    • Progress (delegation) is only possible once team members trust each other enough to hand off parts of the design. This happens only when process goals are shared; the team then moves on to the closing-down loop.
    • A (collective) breakdown occurs when product goals conflict with what has been designed. The team needs to reframe process goals collectively, in terms of an expanded understanding of the role of the designed process in the larger organizational process. This requires a return to the original loop of operation (opening up), to define new goals and boundaries.
    • Once new process goals are agreed, the team can move back to the closing-down loop, to the division of labor in designing the details of the new process and determining collective action to implement it.
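    Read as a process model, the dual-cycle model can be sketched as a simple state machine with an opening-up loop of inquiry and goal formation, a closing-down loop of delegated design work, and a collective breakdown that returns the team to the opening-up loop. The sketch below is my own paraphrase of Figure 11, offered for illustration only; the event names and transition rules are assumptions, not a formal specification of the model.

```python
# Illustrative sketch of the dual-cycle model as a two-loop state machine (my paraphrase of Figure 11).
OPENING_UP, CLOSING_DOWN, DONE = "opening up", "closing down", "done"

def dual_cycle(events):
    """Walk through a sequence of team events and report the loop occupied after each one."""
    state = OPENING_UP
    trace = []
    for event in events:
        if state == OPENING_UP and event == "process goals shared":
            state = CLOSING_DOWN          # enough trust to delegate: move to the closing-down loop
        elif state == CLOSING_DOWN and event == "breakdown":
            state = OPENING_UP            # product goals conflict with the design: reopen the problem
        elif state == CLOSING_DOWN and event == "design agreed":
            state = DONE                  # division of labor completed; collective action follows
        trace.append((event, state))
    return trace

if __name__ == "__main__":
    for event, state in dual_cycle(
        ["investigate work processes", "process goals shared", "breakdown",
         "new goals agreed", "process goals shared", "design agreed"]):
        print(f"{event:28s} -> {state}")
```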

    7. IMPLICATIONS FOR RESEARCH AND PRACTICE

    The dialectic expressed by the dual-cycle model is critical in our understanding of what design is. It is counterproductive to artificially separate problem articulation from solution formulation. Problems and solutions converge, at an individual level, at a group level and in the politically- negotiated organizational processes that surround design activity. Current models of design focus on design closure and so delegitimize the essential activities of investigating, negotiating and formulating design problems. A dual-cycle model of design is proposed: one that focuses on “opening up” the design problem, as much as design closure. An understanding of this dialectic has significant implications for both research and practice of design that are discussed at the end of the paper. The implications for research are that this type of model needs to be investigated in practice and its contingencies understood. How well does this type of “dual-cycle” model fit with the activities required for effective design? What elements drive this type of model, how do we ensure an effective cycling between inquiry and closure, and how do we recognize design stopping points? The implications for practice are that we need to understand the contingencies of this type of model for IS design process management. How do we assess progress, for such an iterative model? How do we plan IS design and development projects, in a way that ensures agreement from project sponsors and a definition of interim deliverables? These questions remain to be answered.

    References

    Ackoff, R.L. (1974) Redesigning The Future, Wiley

    Adelson, B. and Soloway, E. (1985) ‘The role of domain experience in software design’, IEEE Transactions In Software Engineering, Vol. 11, pp 1351-1360

    Alexander, C. (1966) ‘A City Is Not A Tree’, Design, No. 206, February 1966, pp. 46-55

    Alexander, C. (1999) ‘The origins of pattern theory: The future of the theory, and the generation of a living world’, IEEE Software Sept-Oct. 1999, pp 71-82.

    Argyris, C. & Schön, D. (1978) Organizational Learning: A Theory Of Action Perspective, Addison-Wesley, Reading, Mass.

    Argyris, C. (1987) ‘Inner Contradictions In Management Information Systems’, in R. Galliers, Information Analysis, Selected Readings, Addison-Wesley

    Ball, L.J. & Ormerod, T.C. (1995) ‘Structured and opportunistic processing in design: a critical discussion’, Int. Journal of Human-Computer Interaction, Vol. 43, pp 131-151

    Bansler, J.P. & Bødker, K. (1993) ‘A Reappraisal of structured analysis: design in an organizational context’, ACM Transactions on Information Systems, Vol. 11, No. 2, pp. 165-193

    Barry, C. and Lang, M. (2003) “A comparison of ‘traditional’ and multimedia information systems development practices,” Information and Software Technology (45:4), 217-227.

    Beck, K. (1999) eXtreme Programming Explained: Embrace Change, Reading, MA: Addison-Wesley.

    Berger, P.L. and Luckman, T. (1966) The Social Construction Of Reality: A Treatise In The Sociology of Knowledge, Doubleday & Company Inc., Garden City, N.Y.

    Bertalanffy, L. von (1968) General System Theory, Braziller, UK

    Beynon-Davies, P. and Holmes, S. (1998) “Integrating rapid application development and participatory design,” Software, IEE Proceedings (145:4), 105-112.

    Bijker, W.E., Hughes, T.P. and Pinch, T.J. (Eds.) (1987) The Social Construction of Technological Systems, New Directions in the Sociology and History of Technology, MIT Press, Cambridge, MA

    Boehm, B.W., Gray, T.E. & Seewalt, T. (1984) ‘Prototyping Versus Specifying: A Multiproject Experiment’, IEEE Transactions on Software Engineering, Vol. SE-10, No. 3, pp 290-302

    Boehm, B.W. (1988) ‘A Spiral Model Of Software Development And Enhancement’, IEEE Computer Journal, May 1988

    Boland, R. and Day, W.F. (1989), The experience of systems design: a hermeneutic of organisational action, Scandinavian Journal of Management, 5,2 87-104

    Boland, R J, Tenkasi, R. V., Te’eni, D. (1994) Designing Information Technology to Support Distributed Cognition, Organization Science, Vol 5, No 3, pp 456-475

    Boland, R J and Tenkasi, R V (1995) Perspective Making and Perspective Taking in Communities of Knowing, Organization Science, Vol 6, No 4, pp 350-372

    Booch, G. (1991) Object-Oriented Design with Applications. Benjamin-Cummings Publishing Co. ISBN 0-8053-0091-0

    Brown, J.S. & Duguid, P. (1991) “Organizational Learning and Communities of Practice: Toward a Unified View of Working, Learning, and Innovation”, Organization Science, Vol.2, No. 1, pp. 40-57.

    Burrell, G. (1983) ‘Systems Thinking, Systems Practice: A Review’, Journal Of Applied Systems Analysis, 10.

    Butler, T. and Fitzgerald, B. (2001) “The relationship between user participation and the management of change surrounding the development of information systems: A European perspective,” Journal of End User Computing (13:1), 12-25.

    Callon, M. (1987) ‘Society in the making: The study of technology as a tool for sociological analysis’, in W.E. Bijker, T.P. Hughes, and T.J. Pinch (Eds.) The Social Construction of Technological Systems, New Directions in the Sociology and History of Technology, MIT Press, Cambridge, MA

    Callon, M. (1991) ‘Techno-Economic Networks and Irreversibility’, in J. Law (Ed.) A Sociology of Monsters. Essays on Power, Technology and Domination, Routledge, London, UK

    Carroll, J.M. and Rosson, M.B. (1992) “Getting Around the Task-Artifact Cycle: How to Make Claims and Design by Scenario,” ACM Transactions on Information Systems (10:2), 181-212.

    Cavaye, A.L.M. (1995) “User Participation In System Development Revisited,” Information & Management (28:5), 311-323.

    Checkland, P. (1981) Systems Thinking, Systems Practice, John Wiley & Sons, Chichester.

    Checkland, P. and Holwell, S. (1998) Information, Systems and Information Systems: Making Sense of the Field, Chichester UK: John Wiley & Sons.

    Checkland, P. & Scholes, J. (1990) Soft Systems Methodology In Action, John Wiley & Sons, Chichester

    Churchman, C.W. (1971) The Design of Inquiring Systems: Basic Concepts of Systems and Organization. Basic Books, New York, NY

    Cohen, M.D., J.G. March and J.P. Olsen (1972) “A Garbage-Can Model of Organizational Choice”, Administrative Science Quarterly, Vol. 17, 1-25

    Cooley, M (1990) Architect or Bee? The Human / Technology Relationship, 2nd Edition. Southend Books, UK

    Cooper, A. (1999) The Inmates Are Running the Asylum: Why High Tech Products Drive Us Crazy and How To Restore The Sanity. Sams Publishing.

    Corbett, J.M., Rasmussen, L.B. & Rauner, F. (1991) Crossing the Border: The social and engineering design of computer integrated manufacturing systems, Springer-Verlag, London

    Darke, J. (1979) ‘The Primary Generator And The Design Process’, Design Studies Vol. 1, No 1. Reprinted in N. Cross (Ed.) Developments In Design Methodology (1984), J. Wiley & Sons, Chichester, pp 175-188

    Emery, F.E. & Trist, E.L. (1960) ‘Socio-Technical Systems’ in C.W. Churchman & M. Verhulst (Eds.) Management Science Models and Techniques, Vol. 2, Pergamon Press, Oxford, UK

    Floyd, C. (1984) ‘A Systematic Look At Prototyping’, in R.Budde, K.Kuhlenkamp, L.Mathiassen, & H.Zullighoven (Eds.) Approaches To Prototyping, Springer-Verlag Books

    Floyd, C. (1987) ‘Outline of a Paradigm Change in Software Engineering’ in G. Bjerknes, P. Ehn & M. Kyng (Eds.) Computers and Democracy: A Scandinavian Challenge, Avebury Gower Publishing Company, Aldershot, UK

    Fowler, M. (2003) “The New Methodology,” online at http://www.martinfowler.com.

    Fowler, M. and Highsmith, J. (2001) “The Agile Manifesto,” Software Development Magazine.

    Friedman, A.L. (1990) ‘Four Phases of Information Technology – Implications for Forecasting IT Work’, Futures, Guildford, Vol. 22, No. 8, pp. 787-800.

    Gane, C. & Sarsen, T. (1979) Structured Systems Analysis: Tools and Techniques, Prentice-Hall, New Jersey

    Gasson, S. (1998) ‘Framing Design: A Social Process View of Information System Development’, in Proceedings of ICIS ’98, Helsinki, Finland, December 1998, pp. 224-236.

    Gasson, S. (1999) “The Reality of User-Centered Design,” Journal of End User Computing (11:4), 3-13.

    Gasson, Susan (2003) “Human-Centered vs. User-Centered Approaches to Information System Design,” Journal of Information Technology Theory and Application (JITTA): Vol. 5: Iss. 2, Article 5. Available at http://aisel.aisnet.org/jitta/vol5/iss2/5/

    Guindon, R. (1990a) ‘Designing the design process: Exploiting opportunistic thoughts’, Human-Computer Interaction, No. 5

    Guindon, R. (1990b) ‘Knowledge Exploited By Experts During Software System Design’, International Journal of Man-Machine Studies, Vol. 33, October 1990, pp 279-304

    Heidegger, M. (1962) Being and Time, Harper & Row, New York

    Highsmith, J. (2000) Adaptive Software Development: A Collaborative Approach to Managing Complex Systems, New York, NY: Dorset House Publishing.

    Hoc, J.M., Green, T.R.G., Samurçay, R. and Gilmore, D.J. (1980) [Eds.] Psychology of Programming, Academic Press, London, UK

    Howcroft, D. and Wilson, M. (2003) “Participation: ‘Bounded freedom’ or hidden constraints on user involvement,” New Technology, Work, and Employment. (18:1), 2-19.

    Hutchins, E. (1991), ‘The Social Organization of Distributed Cognition’, in Lauren B. Resnick, John M. Levine, and Stephanie Teasley (eds.) Perspectives on Socially Shared Cognition, Washington, DC: American Psychological Association. pp. 283-307.

    Jacobson, I. 1992 Object-Oriented Software Engineering: A Use Case Driven Approach. Addison-Wesley. ISBN 0-201-54435-0

    Jeffries, R., Turner, A.A., Polson, P.G. & Atwood, M.E. (1981) ‘The Processes Involved In Designing Software’, in J.R. Anderson (ed.) Cognitive Skills And Their Acquisition, Lawrence Erlbaum Associates, Hillsdale, New Jersey

    Kant, E. and Newell, A. (1984) ‘Problem solving techniques for the design of algorithms’, Information Processing and Management, Vol. 28, pp 97-118

    Kirsch, L.J. and Beath, C.M. (1996) “The enactments and consequences of token shared and compliant participation in information systems development,” Accounting Management and Information Technology (6:4), 221-254.

    Land, F. and Hirschheim, R. (1983) ‘Participative Systems Design: Rationale, Tools and Techniques’, Journal Of Applied Systems Analysis, Vol. 10.

    Lanzara, G.F. (1983) ‘The Design Process: Frames, Metaphors And Games’, in U. Briefs, C. Ciborra, L. Schneider (eds.) Systems Design For, With and By The Users, North-Holland Publishing Company

    Latour, B. (1987) Science in Action, Harvard University Press, Cambridge, MA

    Latour, B. (1991) ‘Technology is society made durable’, in J. Law (Ed.) A Sociology of Monsters. Essays on Power, Technology and Domination, Routledge, London, UK

    Laukkanen, M. (1994) ‘Comparative Cause Mapping of Organizational Cognitions’, Organization Science, Vol. 5, No. 3, reprinted in J.R. Meindl, C. Stubbart, J.F. Porac (Eds.) Cognition Within and Between Organizations, Sage Publications, California, 1996

    Lave, J. (1991) ‘Situating Learning In Communities of Practice’ in L.B. Resnick, J.M. Levine, S.D. Teasley (Eds.) Perspectives on Socially Shared Cognition, American Psychological Association, Washington DC, pp 63-82

    Lave, J. & Wenger, E. (1991) Situated Learning: Legitimate Peripheral Participation, Cambridge University Press, Cambridge, UK

    Law, J. & Callon, M. (1992) ‘The Life and Death of an Aircraft: A Network Analysis of Technical Change’ in W.E. Bijker and J.Law (Eds.) Shaping Technology/Building Society: Studies In Sociotechnical Change, MIT Press, Cambridge, MA

    Lehaney, B., Clarke, S., Kimberlee, V., and Spencer-Matthews, S. (1999) “The Human Side of Information Development: A Case of an Intervention at a British Visitor Attraction.,” Journal of End User Computing (11:4), 33-39.

    Lewin, Kurt (1951). Field theory in social science: Selected theoretical papers (D. Cartwright, Ed.). Harper & Row, New York.

    Mackenzie, D.A. & Wajcman, J. (1985) ‘Introduction’ to Mackenzie, D.A. & Wajcman, J. (Eds.) The Social Shaping Of Technology, Open University Press, Milton Keynes

    Maher, M.L. and Poon, J. 1996. “Modelling Design Exploration as Co-Evolution,” Microcomputers in Civil Engineering (11:3), pp. 195-210.

    Malhotra, A., Thomas, J., Carroll, J., and Miller, L. 1980. “Cognitive Processes in Design,” International Journal of Man-Machine Studies (12), pp. 119-140.

    Markus, M.L. and Bjorn-Andersen, N. (1987) “Power over users: its exercise by system professionals,” Communications of the ACM June 1987 (30:6).

    Mathiassen, L. & Stage, J. (1992) ‘The Principle of Limited Reductionism In Software Design’, Information Technology & People, Vol. 6, No. 2, pp 171-185

    Mayer, R.E. (1989) ‘Human Nonadversary Problem-Solving’ in K.J. Gilhooley (Ed.) Human and Machine Problem-Solving, Plenum Press, New York

    McCracken, D. D. & M. A. Jackson, (1982). Life Cycle Concept Considered Harmful. ACM SIGSOFT, Software Engineering Notes. 7(2):29-32

    Mintzberg, H. & Waters, J.A. (1985) ‘Of Strategies, Deliberate and Emergent’, Strategic Management Journal, vol. 6, pp 257-272

    Mumford, E. (1983) Designing Participatively, Manchester Business School, UK

    Mumford, E. & Sackman, H. (1975) Human Choice and Computers, North-Holland Publishing Company

    Mumford, E. and Weir, M. (1979) Computer Systems in Work Design: the ETHICS Method, New York NY: John Wiley.

    Necco, C.R. (1989) ‘Evaluating Methods of Systems Development: A Management Survey’, Journal of Information Systems Management, Vol. 6, Issue 1, pp 8-16

    Nelson, D. (1993) “Aspects of Participatory Design – Commentary on Muller et al. 1993,” Communications of the ACM (34:10), 17-18.

    Nelson, E. (2002) “Extreme Programming vs. Interaction Design,” FTP Online Magazine, January 15.

    Preece, J., Rogers, Y., and Sharp, H. (2002) Interaction Design: Beyond Human-Computer Interaction, New York, NY: Wiley.

    Rittel, H.W.J. (1972) ‘Second Generation Design Methods’ Reprinted in N. Cross (Ed.) Developments In Design Methodology (1984), J. Wiley & Sons, Chichester, pp 317-327

    Rittel, H.W.J. & Webber, M.M. (1973) Dilemmas in a General Theory of Planning, Policy Sciences, Vol. 4, pp 155-169

    Rosenbrock, H.H. (1988) ‘Engineering As An Art’, AI & Society, Vol. 2, No. 4, pp 315-320

    Royce, W. W., (1970) ‘Managing the Development of Large Software Systems: Concepts and Techniques’, in Proceedings of WESCON August 1970, pp 1-9. Reprinted in: Proceedings ICSE 9, Computer Society Press, 1987, pp. 328-338.

    Rumbaugh, J. (1991) Object-Oriented Modeling and Design. Prentice Hall

    Scarbrough, H. and Corbett, J.M. (1991) Technology and Organisation: Power, Meaning and Design, Routledge.

    Schön, D.A. (1983) The Reflective Practitioner: How Professionals Think In Action, Basic Books, NY

    Simon, H. A. (1957) Models of Man: Social and Rational, New York: John Wiley.

    Simon, H.A. (1960) The New Science of Management Decision, Harper & Row, New York

    Simon, H.A. (1973) ‘The Structure of Ill-Structured Problems’, Artificial Intelligence, No. 4, pp 145-180

    Simon, H.A. (1981) Sciences of The Artificial, Second Edition, MIT Press, Cambridge, MA

    Smircich, L. & Morgan, G. (1982) ‘Leadership: The management of meaning’, Journal of Applied Behavioural Science, Vol. 18, No. 3, pp 257-273

    Star, S. L. (1989), ‘The Structure of Ill-Structured Solutions: Boundary Objects and Heterogeneous Distributed Problem Solving’, in L. Gasser and M. N. Huhns (eds.) Distributed Artificial Intelligence, Vol. II. San Mateo, CA: Morgan Kaufmann Publishers Inc., pp. 37-54.

    Suchman, L. (1987) Plans and Situated Actions, Cambridge University Press

    Sumner, M. & Sitek, J. (1986) ‘Are Structured Methods for Systems Analysis and Design Being Used?’, Journal of Systems Management, June 1986, Vol. 37, Issue 6, pp 18-23

    Taylor, F.W. (1911) Principles Of Scientific Management, Harper, New York

    Visser, W. & Hoc J-M. (1990) ‘Expert Software Design Strategies’ in J.M. Hoc, T.R.G. Green, R. Samurçay, D.J. Gilmore (Eds.) Psychology of Programming, Academic Press, London, UK

    Weber, M. (1922), The Theory of Social and Economic Organization, translated by A.M. Henderson and T. Parsons (Eds.), Oxford University Press, New York NY, [This English translation published 1947.]

    Winograd, T. & Flores, F. (1986) Understanding Computers And Cognition, Ablex Corporation, Norwood, New Jersey

    Winograd, T.A. (1994) “Designing a language for interactions,” interactions (1:2), 7-9.

    Winograd, T. (1996) “Introduction,” In: T. Winograd (ed.) Bringing Design To Software, New York NY: Addison-Wesley Publishing.

    Winter, M.C., Brown, D.H. & Checkland, P.B. (1995) ‘A Role For Soft Systems in Information Systems


  • Boundary-Spanning Design

    Boundary-Spanning Design

    We talk about organizational design and change problems as if they were given — as if there is only one problem that could be defined for any situation and only one best way to design a solution. This is far from true. Design is like completing a jigsaw puzzle without the picture on the box. We find a bit of sky and then realize it is, in fact, part of a swimming pool.

    Organizational change, design, and problem-solving depend on pattern recognition. When organizations assemble a team of managers or designers to represent different business groups, each person brings the assumptions of their group culture and “best practices” with them. They are expected to collaborate as if they totally understand every single part of every business practice involved. But there are multiple, interrelated problems involved in any situation and different stakeholders will perceive different problems depending on their background and experience. The key skill is to recognize those problems and tease them apart, dealing with each one separately.

    Organizational problems – whether operational or strategic – typically span organizational boundaries, so the design of business processes and enterprise systems is complicated. Boundary-spanning systems of work need systemic methods and solutions. The point is to understand how collaboration works when people lack a shared context or understanding — and to use design approaches that support collaborative problem investigation, to increase the degree of shared understanding as the basis for consensus and action in organizational change. To enable collaborative visions, people need some point of intersection. In typical collaborations – for example a design group working on change requirements – the “shared vision” looks something like Figure 1.

    [Figure: Venn diagram showing intersubjective frames, intersections of understanding between stakeholders, and distributed cognition as the union of all frames]

    Figure 1. Differences Between Individual, Shared, and Distributed Understanding In Boundary-Spanning Groups

    The only really shared part of the group vision is the shaded area in the center. The rest is a mixture of partly-understood agreements and consensus that mean different things to different people, depending on their work background, their life experience, education, and the language they have learned to use. For example, accountants use the word “process” quite differently from engineers; psychologists use it to denote a different concept from either group. When group members perceive that others are not buying in to the “obvious” consequences of a shared agreement, they ascribe this to political behavior — when in fact, it most likely results from differences in how the situation and group agreements are interpreted.
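
    To make the distinction concrete, here is a minimal illustrative sketch (my own, in Python, with hypothetical concept labels rather than data from any study): if each stakeholder’s frame is treated as a set of concepts, the shared understanding is the intersection of those sets, while distributed cognition corresponds to their union; note that even a label inside the intersection, such as “process,” may still carry a different meaning for each group.

        # Illustrative sketch only: stakeholder "frames" modeled as sets of concepts,
        # to show shared vs. distributed understanding (cf. Figure 1).
        # The concept labels below are hypothetical.
        frames = {
            "accounting":  {"process", "cost center", "audit trail", "month-end close"},
            "engineering": {"process", "tolerance", "work order", "change control"},
            "psychology":  {"process", "cognition", "motivation", "group dynamics"},
        }

        shared = set.intersection(*frames.values())    # the shaded area in the center
        distributed = set.union(*frames.values())      # the union of all frames

        print("Shared understanding:", shared)
        print("Distributed understanding:", distributed)
        # Even a shared label like "process" may mean something different to each
        # group, which is exactly the trap described above.
        print("Any one frame covers only",
              f"{len(frames['accounting']) / len(distributed):.0%} of the whole.")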

    Boundary-spanning collaboration is about maximizing the intersections of understanding using techniques such as developing shared representations and prototypes, to test and explore what group members mean by the requirements they suggest. It involves developing group relationships that allow group members to delegate areas of the design to someone they deem knowledgeable and trustworthy. It uses methods to “surface” assumptions and to expose differences in framing, in non-confrontational ways. But most of all, it involves processes to explore group definitions of the change problem, in tandem with emerging solutions. We have understood this for a long time: Enid Mumford, writing in the 1970s and 80s, discussed the importance of design approaches that involved those who worked in the situation, and the need to balance job design and satisfaction with the “bottom line interests” of IT system design (Mumford and Weir 1979; Mumford 2003) – see also Porra & Hirschheim (2007). This theme has been echoed by a succession of design process researchers: Horst Rittel (Rittel 1972; Rittel and Webber 1973), Peter Checkland (1999), and Stanford’s Design Thinking initiative (although “design thinking” tends to be co-opted to focus on “creativity” in interface design, rather than the integrative design approach that may have been envisioned).

    The problem is that most collaboration methods for design of organizational and IT-related change, whether focused on enterprise systems design, business process redesign, cross-functional problem-solving, or IT support for business innovation, employ a decompositional approach. Decomposition fails here because sensemaking is distributed. Group members cannot just share what they know about the problem, because each of them is sensitized by their background and experience to see a different problem (or at least, different aspects of the problem), based on where they are in the organization. Goals for change evolve, as stakeholders piece together what they collectively know about the problem-situation — a process akin to assembling a jigsaw-puzzle. (Productive) conflict and explicit boundary negotiation are avoided because group members lack a common language for collaboration, so misunderstandings are ascribed to political game-playing. We need design and problem-solving approaches that support the distributed knowledge processes underpinning creativity and innovation — approaches that recognize and embrace problem emergence, boundary-negotiation, and the development of shared understanding. This is the core of my work on improvising design.

    References

    Balcaitis, Ramunas 2019. What is Design Thinking? Empathize@IT https://empathizeit.com/what-is-design-thinking/. Accessed 8-15-2023.

    Checkland, P. 1999. Systems Thinking, Systems Practice: Includes a 30-Year Retrospective. Chichester, UK: John Wiley & Sons. [Original edition published in 1981.]

    Mumford, E. 2003. Redesigning Human Systems. Hershey, PA: Idea Group.

    Mumford, E. and Weir, M. 1979. Computer Systems in Work Design: The ETHICS Method. New York NY: John Wiley.

    Porra, J., & Hirschheim, R. 2007. A lifetime of theory and action on the ethical use of computers: A dialogue with Enid Mumford. Journal of the Association for Information Systems, 8(9), 3.

    Roumani, Nadia 2022. Integrative Design: A Practice to Tackle Complex Challenges, Stanford d.school on Medium, Accessed 8-15-2023.

    Rittel, H.W.J. 1972. “Second Generation Design Methods,” DMG Occasional Paper 1. Reprinted in N. Cross (Ed.) 1984. Developments in Design Methodology, J. Wiley & Sons, Chichester: 317-327.

    Rittel, H.W.J. and Webber, M.M. 1973. “Dilemmas in a General Theory of Planning,” Policy Sciences (4), pp. 155-169.

  • Wicked Problems Explained

    Wicked Problems, Explained

    Wicked problems are problems that defy definition. They are too complicated for any one person to understand in their entirety, they span organizational and group boundaries, and they call upon knowledge from multiple, specialized areas of work. We only ever understand a small part of what people in other groups and departments do. Solving a wicked problem is like piecing together a jigsaw puzzle — without the picture on the box.

    When faced with uncertainty, we retreat to what we know – knowledge that is based on our own experience. In our day-to-day practice, we develop a repertoire of solutions-that-work: patterns and decision-criteria that fit the problems we face. We know (from research studies) that people try to fit partial solutions, based on this experiential knowledge, to the problems they perceive. When that does not work, they ask colleagues and trusted contacts for a solution. When that does not work, they re-define the problem to fit the partial solutions available to them.

    That is why change management groups get bogged down in political disputes. Managers from one function don’t realize that people from different areas of the business understand their words and ideas differently than they intended. They make assumptions about how these other areas work, based on what they know from their own area. When we extend our personal solutions to fit how people work in other business groups and functions, they fail miserably because, in essence, everyone is solving a different problem.

    In 1973, Horst Rittel and Melvin Webber wrote a paper arguing that organizational planning and change-management problems are not amenable to logical analysis techniques. They contrasted “tame” problems, which can be expressed as a set of rules or constraints, with “wicked” problems, which cannot be defined in terms of an algorithm. They defined ten characteristics of wicked problems, shown in Figure 1.

    [Figure: ten aspects of wicked problems, discussed in the text following]
    Figure 1. Ten Characteristics of Wicked Problems (Rittel & Webber, 1973)

    1. There is no definitive formulation of a wicked problem.
    Wicked problems cannot be defined and have no clear boundary. They consist of multiple problem-elements, some of which appear more important to some stakeholders than others, depending on their experience and where they are in the organization. Rittel (1972) argues that we cannot use conventional problem analysis methods, as our understanding of the problem-situation and potential solutions emerge in tandem, through mutual exploration and “argumentation” concerning the nature of the problem and appropriate solutions.

    2. Every wicked problem is essentially unique.

    “… by ‘essentially unique’ we mean that, despite long lists of similarities between a current problem and a previous one, there always might be an additional distinguishing property that is of overriding importance. Part of the art of dealing with wicked problems is the art of not knowing too early which type of solution to apply.” (Rittel & Webber, 1973)

    In software design, the idea of not converging on a solution too early is known as avoiding premature design. In management science, the term complexification is used to explain how shared mental models, or frames, become more complicated as groups of problem-solvers develop a shared language and pool perspectives on a problem-situation. The iterative processes of perspective-taking (from others in the group) and perspective-making (to improve shared understanding) underpin effective argumentation for wicked problems (Boland and Tenkasi 1995).

    3. Wicked problems have no stopping rule.
    Because Wicked Problems have no definitive formulation, we cannot define criteria to evaluate when we have a good enough solution. Problem-definitions are negotiated across those involved in the situation to be changed. They incorporate multiple perspectives, some of which become more salient as specific aspects of the situation or its solution are explored, or as various stakeholders attempt to incorporate their own perspectives and agendas for change (Davidson 2002). We don’t know when we are done with this process – we can only judge individually if all the elements we wanted to see in a solution have been included.

    4. Solutions are subjectively good-enough, rather than optimal.
    As there is no clear set of criteria for a solution, we cannot evaluate when we have reached a solution that would “work.” Stakeholder assessments of proposed solutions are subjective and negotiated, defined around fit with the framing perspective they adopt, and expressed as ‘good enough,’ rather than right or wrong.

    5. There are no criteria to judge if all solutions have been identified.
    Lacking a definitive problem formulation and boundary, we cannot define any rules or logic to judge whether we have included all the important aspects of the problem in our analysis. Implementing a solution will generate waves of consequences, which may have repercussions that outweigh the intended benefits. We may end up making the situation worse than when we started.

    “The full consequences cannot be appraised until the waves of repercussions have completely run out, and we have no way of tracing all the waves through all the affected lives ahead of time or within a limited time span.” (Rittel & Webber, 1973)

    However, we can explore the knock-on effects of various solution elements by employing a systemic analysis that allows us to analyze the likely impacts of change in conjunction with our emerging understanding of the problem.

    6. Every solution to a wicked problem is a ‘one-shot operation’; because there is no opportunity to learn by trial-and-error, every attempt counts significantly.
    Every attempt at a solution changes things, in ways that are irreversible. Once we make changes to the problem-situation, we face a different set of problems. So we cannot plan an incremental set of changes, then reverse course if something does not work. We can attempt to predict the knock-on effects of changes using a systemic analysis, but some solutions may introduce unforeseen consequences, requiring us to start again with a fresh analysis.

    7. Wicked problems do not have an agreed scope, or set of potential solutions, nor is there a well-described set of permissible operations that may be incorporated into the plan for a solution.
    Wicked problems tend to span organizational, functional, and management boundaries. So nobody fully understands what the problem is – just the parts that they can see from where they stand. Wicked problem exploration is like assembling a jigsaw puzzle – you get glimpses of a face, or a building, but are never quite sure what the big picture looks like. So solutions are partial and negotiated around “fit” with the emerging problem, rather than any objective definition of scope or legitimacy. We can agree on a set of elements we’d like to incorporate into a plan of change, but we don’t know if we understood all the requirements, or if we missed something big. All we can do is to work from collective representations of the problem and debate the expected consequences of solution elements, using representations and methods to explore interactions between the parts.

    8. Every wicked problem can be considered to be a symptom of another problem.
    Wicked problems can be conceived of as “messes” (Ackoff 1974) or “soft systems” of human-activity (Checkland 1999). Because wicked problems are so complex, incorporating multiple chains of cause-and-effect (many of which may have multiple causes) into a coherent representation of the problem-situation is difficult. Problem-components are interrelated: some problems may be causes or symptoms of others, some problems have multiple causes, and some share a common cause. To untangle these problems, the relationships between them – and the consequential knock-on effects of changing part of the system of related elements – needs to be modeled and appreciated. This requires systemic analysis methods.

    9. A wicked problem can be explained in numerous ways. The choice of explanation determines the nature of the problem’s resolution.
    As discussed above, we need to assess and argue the value of a solution across multiple stakeholders who have differing perspectives on the problem and competing agendas for change. Their perspectives are likely to depend on where they are in the organization, their disciplinary background and education, and their prior experience. Their priorities for change are likely to differ widely according to their group or personal interests and values, and their sensitization to various types of organizational problem and solutions. It is unlikely that everyone will define either problems or solutions in the same way. The process of argumentation used to explore problems must therefore focus on building consensus, as the way in which the problem is defined tends to direct the type of solution considered. For example, if we define the problem as one of disorganized work processes, an appropriate solution might be to implement a team-coordination system, whereas if we define the problem as a lack of relevant information, the solution would be more likely to focus on an information repository.

    10. We need to involve participants in the situation – and to listen to what they say.

    “Planners are liable for the consequences of the actions they generate; the effects can matter a great deal to those people that are touched by those actions.” (Rittel & Webber, 1973)

    The consequences generated by changes last for a long time. Some of these may be predictable, but because problems are interrelated, some changes may introduce unforeseen consequences. The lives and work of people involved in the problem-situation are irreversibly changed, and these changes will influence how they work, and what they are able to do (or not do) in the future. The aim is to improve some aspects of the organizational situation, but changes may have unintended consequences that need remediation.

    Conclusion: Wicked Problems Need Exploration, Appreciation, and Systemic Analysis

    Solving Wicked Problems requires appreciation of the problem-situation, accompanied by systemic analysis. Horst Rittel (1972), who originated the term, suggested that we use a process of argumentation to design solutions: “a counterplay of raising issues and dealing with them, which in turn raises new issues, and so on, and so on.” He saw the goal of argumentation as piecing together a big picture from the fragmented viewpoints and problem-definitions held by change-agents, stakeholders, and those people who work in the problem-situation.

    The main thing to note about wicked problems is that they can be defined in multiple ways, which means that various stakeholders will define them differently, depending on their experience and their position in the organization. That means that wicked problems are not goal-oriented, nor are they simple to define. Instead, we need to employ a systemic problem analysis approach to resolve them – one where stakeholders can explore and negotiate the boundary and the elements included in the problem to be resolved.
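
    As a toy illustration of what “systemic” means here (my own sketch, not anything proposed by Rittel or Webber; the element names are hypothetical), the interrelated elements of a wicked problem can be modeled as a directed graph of influences, so that a change to one element can be traced as it ripples outward through the mess.

        # Toy sketch: problem elements as a directed graph of influences.
        # Intervening on one element ripples through to the others, which is
        # why knock-on effects call for systemic analysis. All names hypothetical.
        from collections import deque

        influences = {                      # edge A -> B means "A affects B"
            "late shipments":      ["customer complaints", "overtime costs"],
            "overtime costs":      ["staff turnover"],
            "staff turnover":      ["late shipments"],   # note the feedback loop
            "customer complaints": [],
        }

        def knock_on_effects(change, graph):
            affected, queue = set(), deque([change])
            while queue:
                for downstream in graph.get(queue.popleft(), []):
                    if downstream not in affected and downstream != change:
                        affected.add(downstream)
                        queue.append(downstream)
            return affected

        print(knock_on_effects("late shipments", influences))
        # {'customer complaints', 'overtime costs', 'staff turnover'}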

    References

    Boland, R.J. and Tenkasi, R.V. 1995. “Perspective Making and Perspective Taking in Communities of Knowing,” Organization Science (6:4), pp. 350-372

    Rittel, H.W.J. 1972. “Second Generation Design Methods,” DMG Occasional Paper 1. Reprinted in N. Cross (Ed.) 1984. Developments in Design Methodology, J. Wiley & Sons, Chichester: 317-327.

    Rittel, H. W. J. and M. M. Webber (1973). “Dilemmas in a General Theory of Planning.” Policy Sciences 4, pp. 155-169.

  • Emergent Design

    Emergent Design

    Why is design improvisational? 

    Modern organizations are complex, and the sorts of problems that remain to be resolved using process redesign, IT systems design, or the combination of both that we call the co-design of business & IT systems can’t be defined around a simple set of issues. There are multiple managers and groups of stakeholders, with competing goals for change. Some of these overlap, some complement each other, and some are in conflict with those of other stakeholders. Even a group of people who work together will have differing requirements and goals, depending on their experience, their professional background, and their position in the organization. People understand the parts of the organization they have experience of. Those who have worked in multiple groups will have a much wider – and more complex – view of what needs to change than those who have worked in the same role for years.

    There’s a management consultancy joke that says if you get five stakeholders around a table, you’ll have at least fifteen different goals.

    The co-design of business & IT systems is like piecing together a jigsaw puzzle without the picture. You get an edge here and there, part of a building outline, or a connecting feature, but mainly you are assembling bits and pieces that are tacked together in whatever way makes sense at the time. Most IT analysts fudge this by merging stakeholder requirements for change under a single, vague business goal. But this doesn’t prevent the shift in focus between multiple objectives that stakeholders prioritize, as these become salient to the current area of design. Change analysts have to understand multiple business domains, as stakeholders’ requirements indicate different types of solution and the analyst attempts to integrate these around a coherent business vision. Even business managers don’t really understand their processes – and know very little of the processes with which their area of responsibility interfaces. Conflicts, priorities, and omissions in change objectives are seldom recognized, as the logical analysis methods used for IT requirements don’t provide ways to map out the full scope of change – the big picture.

    We lack ways to represent the big picture of how the organization “works” in ways that would allow business managers to understand the implications of changing things. Business analysts, change managers, and IT systems analysts are in a no-win situation. They are expected to understand myriad interpretations of the business strategy, reconcile conflicting viewpoints on how business processes work, and somehow define a coherent set of change objectives that pleases everyone. All while the stakeholders they need to satisfy each understand only a fraction of what the business does.

    How does improvisational design work?

    In today’s complex organizations, very few design goals are understood to the point that they can just be stated and agreed across stakeholders. Design-goals are constantly changing between iterations, as shown in Figure 1. The designer starts by designing for the subset of goals they understand. As they explore and test the design with users, they become aware of new requirements and so modify the subset of goals they are designing for. As part of this process, they also discard any requirements that are no longer associated with perceived user needs.

    [Figure: the parabola of process steps introduced by goal emergence in design]
    Figure 1. Goal-emergence in design
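
    A minimal sketch of this loop (my own illustration in Python, not a prescribed method; the helper names and sample goals are hypothetical): design for the goals understood so far, evaluate with users, absorb the goals that surface, and discard the ones that no longer reflect perceived needs, stopping when the result is good enough.

        # Sketch of the goal-emergence loop in Figure 1. All names are illustrative.
        def build_prototype(goals):
            # Hypothetical stand-in: a "design" is just the set of goals it addresses.
            return frozenset(goals)

        def emergent_design(initial_goals, evaluate_with_users, max_iterations=10):
            goals = set(initial_goals)                  # goals understood so far
            design = build_prototype(goals)
            for _ in range(max_iterations):
                new_goals, stale_goals = evaluate_with_users(design)
                if not new_goals and not stale_goals:
                    break                               # good enough, not optimal
                goals = (goals | new_goals) - stale_goals
                design = build_prototype(goals)         # goals shift between iterations
            return design, goals

        # Example: users surface "audit log" on the first evaluation, then stop.
        feedback = iter([({"audit log"}, {"color themes"}), (set(), set())])
        design, goals = emergent_design({"data entry", "color themes"},
                                        lambda d: next(feedback))
        print(goals)                                    # {'data entry', 'audit log'}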

    Goal-based design is a myth. Organizational change requires repeated cycles of appreciation, enculturation, inquiry, learning, change, and evaluation – until the design is good enough. Not perfect – and certainly not optimal – good enough is good enough. We talk about design as if it were fixed: as if there were one best way to design everything. We celebrate designers who produce especially elegant or usable artifacts as if they were possessed of supernatural powers. Yet if design were simply the application of “best practice” principles to a specific situation, it should be easy: we could observe how the users of a designed artifact or system work, then design the artifact or system accordingly. Why does that approach fail so often?

  • The Problem of The Problem

    The problem of “the problem.”  

    Designers are taught a repertoire of designs-that-work: patterns that fit specific circumstances and uses. Experienced designers build up, over time, a deep understanding of which problem-elements each of these patterns resolves. So they can assess a situation, recognize familiar problem-elements, then fit these with design patterns that will work in those circumstances. The problem comes when a designer is faced with a novel or unusual situation that they have not encountered before. Novice designers encounter this situation a great deal, but even experienced designers must deal with emergent design in a novel context. In these circumstances, designers iterate their design, as shown in Figure 1. They identify (often partial) problems, ideate/conceive relevant solutions, give those solutions form with a prototype, then evaluate the prototype in context. This often reveals emergent user needs or problems that are explored in the next iteration.

    [Figure: the stages of iterative design: identify problem, ideate solutions, prototype the designed solution, evaluate the designed solution in context, explore remaining user needs]
    Figure 1. Iterative Design

    An important aspect of iterative design is that iterations can occur within cycles. As designers succeed or fail at successive designs, they accumulate experiential knowledge that allows them to assess new situations quickly and to understand which design elements will work or fail in that situation, looping back to remediate the design as they spot logical flaws and gaps. The problem with this is that (as the Princess said) you have to kiss an awful lot of frogs to get a Prince. An awful lot of people end up with really bad designs, because their designer did not recognize elements of the situation well enough to understand which pattern-elements to implement. If you are really unlucky, you will also end up with one of those designers who feel it is their mission in life to prevent the end-user “mucking about with” their design. If you are lucky, your designer will recognize that it is your design, not theirs, and will design artifacts and systems in ways that allow people to adapt and improvise how they are used.

    Design as improvisation

    Improvisation takes a multitude of forms. It might be that a user wishes to customize the color of their screen (because the designer thought that a good interface should look like a play-school). This may not do much for the function of the work-system, but it does mean that their disposition towards work is a heck of a lot sunnier as they use it. Or it might be that the information system you use expects you to enter data on one step of your work before another. You might be able to enter data into a separate screen for each step, reordering the steps as you wish. More usually, you have to enter fake data into the first step, then go back later to change this, once you have the real data. This is because IT systems designers treat software design as a well-structured problem. A well-structured problem is one that contains the solution within its definition. Defining the problem as a tic-tac-toe game application means that you have a set of rules for how the game is played, which absolutely define how it should work. This is fine if everything goes to plan, but a huge pain for users when it does not. The only discretion left to the user is how to format the results in a printed report, which is not much comfort if your whole transaction failed because you were prevented from going back to change one of the inputs. This is not rocket science – developers need to design systems that let users work autonomously.
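
    For contrast, here is a minimal sketch (my own, in Python) of the kind of well-structured fragment described above: the rules of tic-tac-toe fully determine whether a move is legal and whether the game has been won, so the problem really does contain its solution.

        # A well-structured fragment: the rules alone decide legality and outcome.
        LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),     # rows
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),     # columns
                 (0, 4, 8), (2, 4, 6)]                # diagonals

        def legal_move(board, cell):
            return board[cell] == " "                 # the rule is the whole answer

        def winner(board):
            for a, b, c in LINES:
                if board[a] != " " and board[a] == board[b] == board[c]:
                    return board[a]
            return None

        board = list("XOX" "OXO" "X  ")
        print(legal_move(board, 8), winner(board))    # True X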

    But business applications are not well-structured. They represent wicked problems. A wicked problem cannot be defined objectively, for all the reasons identified in Figure 2. Solving a wicked problem needs business users and stakeholders to agree on what problems they face, their priorities in resolving these, and what their change-goals are.

    [Figure: constraints on design posed by wicked problems]
    Figure 2. Constraints on Design Posed by Wicked Problems (Rittel & Webber, 1973)

    A wicked problem can be viewed as a web of interrelated problems. It is not always clear what the consequences of solving any part of this mess will be. Some of the problems may have “obvious” solutions. But implementing these solutions may make other, related problems worse or better. This is why iterative design is central to resolving wicked problems. In general, stakeholders don’t understand what they need until they see it. So solutions must be designed flexibly, for changes to be implemented as the consequences are realized and to permit adaptation-in-use by stakeholders and users. People are infinitely improvisational. They develop work-arounds and strategies to manage poor design. But, as Norman (2013) observes, why should users have to develop work-arounds for poor design? What is it about the design process that leads us to such constraining IT systems, interfaces, and work procedures that are based on the system design, rather than system designs that are based on flexible work-procedures?

    This website reflects findings from my research studies and reflections from my own experience in design, to discuss some key underlying principles of design, to explore how the design process works in practice (rather than how we manage it now, which is based on unsupported theoretical models), and to present a way of managing design differently. Improvisationally.

    References

    Norman, D. A. (2013). The Design of Everyday Things: Revised and Expanded Edition. Basic Books, New York.

    Rittel, H.W.J. (1972) “Second Generation Design Methods,” Reprinted in N. Cross (ed.), Developments in Design Methodology, J. Wiley & Sons, Chichester, 1984, pp. 317-327., Interview in: Design Methods Group 5th Anniversary Report, DMG Occasional Paper, 1, pp. 5-10.

    Rittel, H. W. J., & Webber, M. M. (1973). Dilemmas in a General Theory of Planning. Policy Sciences, 4, 155-169.

  • Knowledge-Sharing In Collaboration Across Organizational Boundaries

    Knowledge-Sharing In Collaboration Across Organizational Boundaries

    Susan Gasson
    College of Computing & Informatics,

    Drexel University

    Please cite this paper as:
    Gasson, S. (2007) ‘Knowledge-Sharing In Collaboration Across Organizational Boundaries.’ Working Paper. Available from https://www.improvisingdesign.com/knowledge-sharing/. Last updated 08/15/23.

    Sharing Knowledge in Collaborative Teams

    Knowledge-sharing in collaborative design is problematic, because it involves the merging of a variety of stakeholder perspectives to achieve a collective “vision” of what needs to be changed: the task objectives, the task goals and – even – the problem being addressed. We tend to assume that groups develop a common perspective on collaborative tasks over time, but there is quite a bit of research that demonstrates otherwise.

    The problem of collaboration is exacerbated when the collaboration spans organizational boundaries, such as groups that comprise members from different functions or divisions. People who work in different functional units not only tend to have diverse ways of defining organizational problems, related to their disciplinary and professional backgrounds, but also define problems and their solutions according to the local conventions of their department or function. A group of managers and participants in the work processes to be changed may have difficulty in agreeing what needs to change, as they find that they have very little shared understanding of organizational problems – or how to resolve them. Each stakeholder defines the problems in a different way, depending on their experience of these problems and of the situation in which they occur (Gasson, 2004).

    Different subgroups of stakeholders may have shared understandings that encompass a subset of the problem definition. These subsets often form the basis for political alliances that emphasize specific aspects of the organizational problem-situation (Gasson, 2007). But knowledge about the actual problem, represented in Figure 1 by the union of the various perspectives (the bold outline), is difficult to represent and therefore to debate. Each stakeholder sees a different part of the problem, with different emphases and priorities that are filtered through a different interpretation, based on their educational and work experience. So sharing knowledge – about how things work and what should or could be done – is difficult.

    Knowledge Convergence

    This is not an issue for simple, well-structured problems that are easy to define. For example, if a group of stakeholders is designing a solution to the problem of reporting on what hours different employees work on different projects to which they are assigned, the problem is fairly easy to define and the solution follows from this problem definition. While there may be some “softer” aspects of the problem that need clarification (for example, how a “project” is defined, how employees can be expected to record the hours that they work, or cultural constraints of reporting on what various people work on), most aspects of the problem are straightforward and therefore easy to structure into a clear, consensus problem-definition. Over a period of working together, different stakeholders share their knowledge about a well-defined problem, to reach a clearly-defined domain of action. The degree of shared understanding increases as trust between group members grows over time, and can therefore become very high.
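
    A brief sketch of why such problems are easy (my own illustration in Python; the sample data are hypothetical): once the definition is agreed, the reporting solution follows almost mechanically from it.

        # Well-structured problem: hours-by-employee-by-project reporting.
        # The solution follows directly from the problem definition.
        from collections import defaultdict

        timesheet = [                       # (employee, project, hours) - sample data
            ("asha", "migration", 6), ("asha", "support", 2),
            ("ben",  "migration", 8),
        ]

        report = defaultdict(float)
        for employee, project, hours in timesheet:
            report[(employee, project)] += hours

        for (employee, project), hours in sorted(report.items()):
            print(f"{employee:6} {project:10} {hours:5.1f}h")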


    Figure 1: Knowledge Convergence In Collaborative Work

    Not so for problems that are complex and ill-defined. A group of stakeholders who work together over time may be able to define the rationale for change and the context of the problem in a consensual way, but each member of the group will conceive of the problem and appropriate solutions in very different ways. The degree of shared knowledge possessed about the problem to be solved may not increase much from that shown in Figure 1. This is because organizational problems are “wicked” problems (Rittel, 1972). Wicked problems, according to Rittel and Webber (1973), have ten specific characteristics:

    1. Wicked problems have no definitive formulation. A problem can only be defined by exploring the type of solution required: problem and solution are interdependent. Each attempt at creating a solution changes the stakeholder’s understanding of the problem.
    2. Wicked problems have no stopping rule. Since you can’t define the problem, it’s difficult to tell when it’s resolved. The problem-solving process ends when resources are depleted, stakeholders lose interest or political realities change.
    3. Solutions to wicked problems are not true-or-false, but good-or-bad. Since there are no unambiguous criteria for deciding if the problem is resolved, getting all stakeholders to agree that a resolution is “good enough” can be challenging.
    4. There is no immediate or ultimate test of a solution to a wicked problem. Solutions to such problems generate waves of consequences, and it is impossible to know how these waves will eventually impact the situation. Wicked problems are interrelated (see point 8) and so resolving one problem may make another problem better or worse.
    5. Every implemented solution to a wicked problem has consequences. Once the Web site is published or the new customer service package goes live, you can’t take back what was online or revert to the former customer database, because customers have different expectations. So consequences not only relate to the original problem, but change the nature of the problem that now has to be resolved.
    6. Wicked problems do not have a single, well-defined set of potential solutions. Various stakeholders have differing views of acceptable solutions. It is a matter of judgment as to when enough potential solutions have emerged and which should be pursued. Alternative solutions may be just as good as the solution selected [1]. Alternative solutions may not exist.
    7. Each wicked problem is essentially unique. There are no “classes” of solutions that can be applied to a specific case. As Rittel and Webber wrote in “Dilemmas in a General Theory of Planning,” “Part of the art of dealing with wicked problems is the art of not knowing too early what type of solution to apply.” This moves us a long way away from the generalized ontology of the semantic web, or the pattern language proposed by Alexander (1999) [2].
    8. Each wicked problem can be considered a symptom of another problem. A wicked problem is a set of interlocking issues and constraints that change over time, embedded in a dynamic social context. So organizational problems are highly interrelated and resolving one problem in a particular way will affect other problems in unpredictable ways.
    9. The causes of a wicked problem can be explained in numerous ways. There are many stakeholders who will have various and changing ideas about what might be a problem, what might be causing it and how to resolve it. Problem resolution cannot be achieved through problem analysis, but must be achieved through “argumentation” (Rittel, 1972), where multiple views of the problem are debated and negotiated among stakeholders.
    10. The planner (designer) has no right to be wrong. Scientists are expected to formulate hypotheses, which may or may not be supportable by evidence. Designers don’t have such a luxury—they’re expected to get things right. Rittel (1972) argues that you cannot build a freeway to see how it works. Similarly, you cannot build an information system to see what type of IS you need [3].

    As a consequence, problem-solving and design groups tend to diverge, as much as they converge over time, in defining the problem that they are resolving. Design tends to proceed via a series of “breakdowns”, in which the current group consensus falls apart and a new consensus is formed around a mobilizing vision that provides a good-enough definition of the problem to mediate negotiation and constructive argumentation (Gasson, Under Review).

    So What?

    We need new methods and approaches to manage IS design and collaborative problem-solving/innovation groups. Most current approaches are based on an individual model of problem-solving that views problems as ill-structured (Simon, 1973). Ill-structured problems, while ill-defined, are capable of being structured once a suitable problem-boundary and set of constraints have been agreed. But as I argued above, organizational problems are wicked problems and are therefore not amenable to objective definition or structuring. Approaches to wicked problem resolution [4] require techniques for surfacing people’s implicit assumptions, so that everyone is talking about the same elements of the problem. They require ways of managing multiple perspectives at once: recording constraints and solution requirements at multiple levels of decomposition, so that understanding of the problem is not “lost” when the group changes focus. They require ways of allocating responsibility for different parts of the problem to those familiar with those parts, and of building trust so that these different views of a solution can be aligned, even if they are not shared. My research is about how these things can be achieved.
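
    One way to picture the kind of technique being argued for (a hedged sketch of my own, in Python; the field names and sample entries are hypothetical, not a proposed method): record each requirement or constraint together with who raised it and the level of decomposition at which it applies, so that earlier framings can be retrieved rather than re-argued when the group shifts focus.

        # Sketch: a simple requirements log that preserves framing and provenance.
        from dataclasses import dataclass, field

        @dataclass
        class Requirement:
            text: str
            raised_by: str
            level: str                       # e.g. "strategy", "process", "interface"
            assumptions: list = field(default_factory=list)

        log = []

        def record(text, raised_by, level, assumptions=()):
            log.append(Requirement(text, raised_by, level, list(assumptions)))

        record("Cut order-to-cash cycle time", "finance", "strategy")
        record("One entry screen per work step, reorderable", "operations", "interface",
               assumptions=["steps are often completed out of order"])

        # When focus shifts, earlier framings can be retrieved rather than re-argued.
        for r in log:
            print(f"[{r.level}] {r.text}  (raised by {r.raised_by})")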

    I explore methods and processes for (a) sharing distributed information and knowledge, and (b) managing collaborative problem-solving and design activities in groups where knowledge-sharing is not feasible because the context and the problem are so diverse and “wicked”. Some of the issues that have arisen from this program of research so far are:

    Is the process goal-driven? Most views of problem-solving see this process as goal-driven, at least at a high-level. In other words, collaborative groups designing IT-related change derive a “common vision”. The findings from my prior research demonstrate that, for complex problems that span organizational groups and/or units, a common vision is highly unlikely to be shared. Group collaboration is impeded by continual revisiting of this vision, in the attempt to derive a common language for the project change goals.

    How do distributed groups assess their progress? In traditional perspectives of collaborative work, progress is judged by how far a group has proceeded towards a set of common goals for a solution. If the group is unable to establish a common set of goals, because group members view “the problem” in multiple ways, how do they assess progress towards achieving a collective solution? My prior studies indicate that groups do manage this satisfactorily and that group members assess a set of subtle change-management elements that are unrelated to the elements we would normally define as part of a common vision. Further studies will investigate these elements.

    What types of collaboration tools and techniques might be useful to increase the degree of shared understanding? If boundary-spanning groups really do possess conflicting or diverse perspectives of the problem to be solved and the types of solution that might be appropriate, are there specific techniques or approaches that might aid in increasing the shared element of the group’s understanding of the problem? My experience as an educator, developing methods for collaboration in a classroom context that often involves groups with diverse memberships, leads me to believe that certain types of approach might “displace” individuals’ current understanding sufficiently to allow a shared vision to emerge, at least for a limited scope of action. These techniques are to be developed further, through “action research” studies.

    References

    Alexander, C. 1999. “The origins of pattern theory: The future of the theory and the generation of a living world,” IEEE Software (16:5), Sept-Oct. 1999, pp 71-82.

    Gasson, S. 2004. “A Framework For Behavioral Studies of Social Cognition In Information Systems,” ISOneWorld: Engaging Executive Information Systems Practice, Information Institute, Las Vegas, NV.

    Gasson, S. 2005. ‘The Dynamics Of Sensemaking, Knowledge and Expertise in Collaborative, Boundary-Spanning Design’, Journal of Computer-Mediated Communication (JCMC), 10 (4). http://onlinelibrary.wiley.com/doi/10.1111/j.1083-6101.2005.tb00277.x/abstract

    Gasson, S.  2007. ‘ Progress And Breakdowns In Early Requirements Definition For Boundary-Spanning Information Systems’ in S. Rivard & J. Webster (Eds.) Proc. ICIS ’07, Montréal, Québec, Canada Dec. 9-12, 2007

    Rittel, H.W.J. 1972. “Second Generation Design Methods,” Reprinted in N. Cross (ed.), Developments in Design Methodology, J. Wiley & Sons, Chichester, 1984, pp. 317-327., Interview in: Design Methods Group 5th Anniversary Report, DMG Occasional Paper, 1, pp. 5-10.

    Rittel, H.W.J., and Webber, M.M. 1973. “Dilemmas in a General Theory of Planning,” Policy Sciences (4), pp. 155-169.

    Simon, H.A. 1973. “The Structure of Ill-Structured Problems,” Artificial Intelligence (4), pp 145-180.

    Notes

    [1] This is quite distinct from Simon’s perspective, that there is an “optimal” solution, that can be selected from a range of alternatives according to a set of definable criteria. Wicked problems do not possess any clearly-definable definition, so a single set of criteria for a solution cannot be defined.

    [2] Alexander, incidentally, was the initial proponent of hierarchical decomposition – the model that underlies the waterfall model of design and the traditional systems development life-cycle.

    [3] Although actually, the sad truth is that this is exactly what tends to happen … which explains why so many people are disenchanted with their IS development group.

    [4] Note that I do not use the term “problem-solving” here. One can only solve a problem that is amenable to definition. According to Rittel (1972), a wicked problem can only be understood through designing a solution. This is a high-risk activity and should not be treated in the same way as “solving” a well-defined problem.


  • Virtual Knowledge Exchange & Online Learning

    Whenever we have a group of people collaborating virtually, whether in online learning or in virtual project collaboration, their work and organizational roles become invisible. We therefore need to design in aspects (affordances) of digital technology that enable group coordination, joint sensemaking, and knowledge exchange. These studies investigate how such groups coordinate knowledge exchange and how we can support these processes.

    Selected Papers:

    Waters, J. & Gasson, S. (2015) “Supporting Metacognition in Online, Professional Graduate Courses.” Proceedings of Hawaii Intl. Conference on System Sciences (HICSS-48), Jan. 5-8, 2015. Advances in Teaching and Learning Technologies minitrack, Collaboration Systems and Technologies.

    Gasson, S. (2012) Analyzing Key Decision-Points: Problem Partitioning In The Analysis of Tightly-Coupled, Distributed Work-systems, International Journal of Information Technologies and Systems Approach (IJITSA), 5(2), 57-83, July-December 2012. DOI: 10.4018/jitsa.2012070104

    Waters, J. and Gasson, S. (2012) Using Asynchronous Discussion Boards To Teach IS: Reflections From Practice, Proceedings of the International Conference on Information Systems, ICIS 2012, Orlando, USA, December 16-19, 2012. Association for Information Systems 2012, ISBN 978-0-615-71843-9, http://aisel.aisnet.org/icis2012/proceedings/ISCurriculum/9/

    Gasson, S., and Agosto, D.E. (2008) ‘Millennial Students & Technology Use: Implications for Undergraduate Education,’ in: Education in HCI; HCI in Education – The HCIC 2008 Winter Workshop, Jan. 30 – Feb. 3., 2008. Fraser, CO. http://www.hcic.org/uploads/Gasson1178.pdf

    Waters, J. and Gasson, S. (2006) ‘Social Engagement In An Online Community Of Inquiry’ in Proceedings of ICIS ’06, Milwaukee, WI, paper HCI-03. [Full research paper].

  • Human-Centered Design

    Human-Centered Design

    In the last few years, the terms human-centered and user-centered have become synonymous in HCI and IT design, with a focus on disciplines such as “user experience” and “interaction design.” Here I will argue that neither discipline really deals with the core issues of human-centered design.

    Human-centeredness in design involves designing technology artifacts, applications, and platforms that provide a “support system” to people performing specific work or play activities as individuals, or collaborating around a set of (more or less) well-defined aims – often messily and exploratively. Asking people to describe their requirements for technology to support them in their activity doesn’t work, because nobody really stops to think about how they work, or what they do to achieve a goal. When they are forced to do so, they will describe how work should be done – the formal system of procedures and rules – rather than how it is done – the informal, socially-situated system that makes work activities fit with their environment and the objectives that people have.

    People are seldom alone in what they do, even when engaging in individual activity. They socialize with other people and exchange ideas, they seek advice on how to proceed, and they collaborate to achieve shared – or similar – goals. When confronted with a novel problem, most people turn to a “small world” network of trusted social contacts for input – people who share their values and perspectives – rather than conducting a wider search that includes subject experts and knowledge resources (Chatman, 1991). Even when working alone, we are never truly alone. We are thrown into a working environment that existed before we joined – a self-contained world of work and social activity that we can only understand through participation (Weick, 2004). Professionalism and practice in one organization are completely different from the practices and standards applied in another.

    When we try to understand the “user” of a software application or system, we often fail miserably because we only see the formal work activities that they perform. We miss the web of activities that their formal activity is a part of – the multiple other human-activity systems they interact with, to get things done. User-experience design is reductionist in its focus on interaction design. It takes a human being, rich in purpose and understanding, and reduces them to the role of artifact user. Not only that, but by implication, the user of a pre-defined artifact, whose purpose is understood, but whose mechanisms of interaction remain to be fully defined. By focusing on conceptual models of use, user scripts, and activity/task frameworks for work-analysis (e.g. Sharp, Preece, and Rogers, 2019), it isolates the user from the social context of work, describing activities in terms of fixed procedures and embedding assumptions about how and why the artifact will be used. It loses the joyful multivocality of the human-centered approach to design. Instead of understanding that thrownness is a temporary state, where there is a choice between reacting and being proactive, user-centered design embeds reaction as a paradigm. It separates tasks from workflows, making each interaction an end in itself and enforcing the approach to design that led Lucy Suchman to write her famous treatise on situated design (Suchman, 1987, 2007). There is no linked flow of work processes, where the human being knows that (for example) they have already photocopied the report covers (onto special cardstock) and the early chapters, so now have only to copy later chapters. There is the dumb lack-of-saved-status machine, which jams halfway, then asks the user to reload the report pages in their original order, starting with the covers, which need the user to load special cardstock into the paper feeder. Which they already did.

    We can support this world by understanding the various purposes of human activity and designing technology to assist in those purposes (Checkland and Winter, 2000). Human-centered design differs from user-centeredness by being systemic and multi-vocal: it is aware of the multiple networks of activity in which a human technology user engages, simultaneously. Unlike user-centered design, which focuses on a single, definable work-goal, human-centered design appreciates the multiple goals that people pursue simultaneously, for different purposes. Human-centered design appreciates the social and organizational context of work, employing analytical approaches and methods that explore the complexity of the activities that we do – and the social networks we inhabit to do them.

    Designing for humans rather than users is a choice:

    • Human-centered design explores the multiple, purposeful systems of human-activity that are required to achieve even simple work (or play) goals.
    • It treats the participants in a human activity system as autonomous individuals, not agents to be modeled, controlled, and curtailed. Human-centered design respects and supports the local knowledge required to act skillfully, drawing on various forms of tacit or implicit knowledge to perform work that is often not recognized as knowledge work.
    • It recognizes that a social system of information exchange exists, of which the designed technology artifact or software is only a part, and that humans need to exercise a deliberative choice about what to record and why. Any computer-based system of data is part of a wider, human-network-based system of information.
    • Because it appreciates work as part of a wider social system, human-centered design involves a conscious decision to support the informal communications and activities that keep the system of work connected and informed – for example, water-cooler conversations or phone calls. These informal channels produce more knowledgeable participants in the system of work, rather than producing recorded data or written resources. They are often omitted from – or worse, designed out of – the formal system of “user experience design.”
    • Above all, it acknowledges that knowledge, understanding, and the meanings that we ascribe to work are emergent. We understand how to do things by doing them, then reflecting on what we did and how – after which we have a better understanding of how to do them next time. Designing any particular set of procedures into a computer-based system is not only a waste of time, but may be counterproductive, as we constantly improvise and improve on how we did things previously (learning-by-doing). Human-centered systems design allows the human, rather than the IT system, to be in control of their work (a minimal, purely illustrative sketch of this principle follows this list).
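
    The sketch below is a minimal illustration of that last point: representing a work procedure as data that the worker can reorder, skip, or extend, and that remembers what has already been done, rather than baking a fixed sequence into the software. All of the names here (CopyJob, Step, and so on) are hypothetical and stand in for no real system; they echo the photocopier example above.

    ```python
    # Minimal sketch: a procedure represented as user-editable data,
    # not a fixed sequence hard-coded into the software.
    # All names are hypothetical illustrations.

    from dataclasses import dataclass, field


    @dataclass
    class Step:
        name: str
        done: bool = False          # the system remembers what has already been completed


    @dataclass
    class CopyJob:
        steps: list[Step] = field(default_factory=list)

        def remaining(self) -> list[Step]:
            return [s for s in self.steps if not s.done]

        def reorder(self, new_order: list[str]) -> None:
            """Let the person doing the work rearrange or drop steps."""
            by_name = {s.name: s for s in self.steps}
            self.steps = [by_name[n] for n in new_order if n in by_name]


    job = CopyJob([Step("covers (cardstock)", done=True),
                   Step("early chapters", done=True),
                   Step("later chapters")])

    # After a paper jam, the system resumes from what is left,
    # instead of asking the user to start again from the covers.
    print([s.name for s in job.remaining()])   # ['later chapters']
    ```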

    So no – “user experience design” and “interaction design” do not support human-centeredness in work (or play). They seek to humanize the artificial processes imposed by transaction-based systems by associating these with perspectives that acknowledge the psychology of human activity, learning, and interactions with technology. But they don’t even scratch the surface of understanding situated, systemic activity. For that, you need to employ methods that complicate your perspective, such as Soft Systems Analysis (Checkland, 2000; Checkland and Poulter, 2006) – and to take human-centeredness seriously.

    To conclude, user-centered design – as the term is employed in HCI and UX – is not the same as human-centered design. User-centered design is aimed at mitigating and improving the experience of using a system of technology that was designed for purposes other than those the user prioritizes – to make money, to “engage” users on the website so they return (and spend more money), and to publicize the firm’s products and services. In contrast, human-centered design is an approach that starts with user values, priorities, and purposes. It seeks to afford uses of the system that match how the user would like to access the features they value and expect. It designs the flow of use-interactions around the expected user flow of work (or play), allowing the user to configure this flow as they want. It does not make you do illogical or stupid things, like reloading all the sheets in a photocopier feed in their original order, even when the copy failed on the last-page-but-one. It does not make you enter the same information repeatedly, because the designer was too unimaginative to anticipate that a user might want to change some of the options they had selected earlier (e.g. when booking an airline ticket). And it doesn’t make you go through seven layers of a menu to reach the one page you need.
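
    A minimal sketch of the airline-booking point, assuming nothing about any real booking system: if earlier choices are kept as state, revising one option need not force the user to re-enter everything else. The Booking class and its fields are hypothetical, for illustration only.

    ```python
    # Minimal sketch: preserving earlier choices when the user revises one of them,
    # so changing the flight date does not mean re-entering every other detail.
    # The Booking class and field names are hypothetical.

    from dataclasses import dataclass, replace


    @dataclass(frozen=True)
    class Booking:
        passenger: str
        origin: str
        destination: str
        date: str
        seat_preference: str


    draft = Booking("A. Jones", "PHL", "SEA", "2024-05-01", "aisle")

    # The user changes only the date; everything else they entered is retained.
    revised = replace(draft, date="2024-05-08")

    print(revised.passenger, revised.seat_preference)   # A. Jones aisle
    ```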

    Human-centered design is performed by people who talk to users, learn to think like users, and walk alongside them in their work. These designers not only prototype and evaluate their designs, but also listen to the feedback they are given. They value user input and see it as critical to their portfolio of design experience. In the design literature of the 1980s there was a lot of discussion of how user representatives would “go native” when participating in design projects, learning to think like designers and subsuming the interests of their fellow users in the process. In the 2020s, we need to see more IT designers going native, learning to think like users, reworking IT system designs to support how users work, and valuing the aspects of system design that users value. That is human-centered design.

    References

    Chatman, E.A. 1991 “Life in a Small World: Applicability of Gratification Theory to Information-Seeking Behavior,” Journal of the American Society for Information Science (42:6), pp. 438–449.

    Checkland, P. 2000 “Soft systems methodology: a thirty year retrospective,” Systems Research and Behavioral Science (17), pp. S11-S58.

    Checkland, P., and Poulter, J. 2006. Learning For Action: A Short Definitive Account of Soft Systems Methodology, and its Use for Practitioners, Teachers and Students. Chichester: John Wiley and Sons Ltd.

    Checkland, P., and Winter, M.C. 2000 “The relevance of soft systems thinking,” Human Resource Development International (3:3), pp. 411-417.

    Sharp, H., Preece, J., and Rogers, Y. 2019. Interaction Design: Beyond Human-Computer Interaction, 5th Edition. Wiley, UK.

    Suchman, L. 1987. Plans and Situated Actions: The Problem of Human-Machine Communication. Cambridge, UK: Cambridge University Press.

    Suchman, L. 2007. Human–Machine Reconfigurations: Plans and Situated Actions. Cambridge, UK: Cambridge University Press.

    Weick, K.E. 2004. “Designing For Thrownness,” in: Managing as Designing, R.J. Boland Jr. and F. Collopy (eds.), Stanford, CA: Stanford University Press, pp. 74-78.

    Selected Papers:

    Gasson, S. (2008) ‘A Framework For The Co-Design of Business and IT Systems,’ Proceedings of Hawaii Intl. Conference on System Sciences (HICSS-41), 7-10 Jan. 2008. Knowledge Management for Creativity and Innovation minitrack, p348.  http://doi.ieeecomputersociety.org/10.1109/HICSS.2008.20.

    Gasson, S. (2005) ‘Boundary-Spanning Knowledge-Sharing In E-Collaboration’ in Proceedings of Hawaii Intl. Conf. on System Sciences (HICSS-38), Jan. 2005. http://doi.ieeecomputersociety.org/10.1109/HICSS.2005.123

    Gasson, S. (2003) Human-Centered vs. User-Centered Approaches To Information System Design, Journal of Information Technology Theory and Application (JITTA), 5 (2), pp. 29-46.

    Gasson, S. (1999) ‘A Social Action Model of Information Systems Design’, The Data Base For Advances In Information Systems, 30 (2), pp. 82-97.

    Gasson, S. (1999) ‘The Reality of User-Centered Design‘, Journal of End User Computing, 11 (4), pp. 3-13.

  • Managing Organizational Knowledge

    Managing Organizational Knowledge

    This thread of my work explores the forms of knowledge shared across organizational boundaries, the mechanisms for sharing knowledge that are employed, and how human sensemaking is mediated by processual, technical and informational artifacts. My work draws on theories of distributed cognition, contextual emergence, and sociomateriality. Hayden White observes that human sensemaking relies on subjective forms of narrative for meaning. Much of this work explores how to enable a “conversation with the situation” that introduces reflexive breakdowns into the situated narrativizing and framing in which humans routinely engage. This results in different types of support, focusing on the different forms of knowledge that are required for decision-making and the degree to which such knowledge can be shared.

    In virtual organizations and distributed project groups, non-human objects increasingly mediate human relationships, as they displace humans as collaboration-partners in distributed knowledge networks. We may be able to identify forms of metaknowledge that work across domain boundaries by identifying mediating object roles – e.g. categorization schemes, instrumentation, databases, and routinized practices that embed frameworks for analysis or participation. My analysis has revealed different forms of group memory management in use, depending on the organizational scope of projects and the locus of control in the global network. Organizational knowledge – about how to work, how to frame organizational goals and outcomes, and how to organize work effectively – is mediated by technical objects, creating assemblages of social and technical systems of work that guide the emergence of new business practices. The distributed scope of organizational locales creates four categories of knowledge that are acquired in different ways, summarized in Figure 1.

    [Figure: 2 x 2 matrix showing four forms of knowledge, described in the text below]

    Figure 1. Forms of Knowledge (Gasson & Shelfer, 2006)

    Codifiable knowledge is the simplest to define, as this knowledge is routine and programmable. It equates to explicit knowledge, in that we know that we know it – and we can articulate what we know, so it can be stored for others to access and use. Typical examples are organization charts, or the rules, standards, and forms used in business processes.

    Transferable knowledge is articulable, but it is also situational – it is related to the context in which it is applied. For example, an IT systems developer might design software differently for a general-purpose website, whose users are relatively unknown, than for a small local application to be used by 4-5 people working together to perform specific business calculations as part of their shared work. The knowledge of when to apply different design techniques depends on the designer’s experience of working in various business environments and is generally acquired through some sort of apprenticeship process, where they learn from someone who has more experience of that environment.

    Discoverable knowledge is less straightforward. It combines tacit knowledge (Polanyi, 1961), which is process- or skills-based, with implicit knowledge that people fail to recollect consciously, or perceive explicitly (Schacter, 1992). As such knowledge is inarticulable, its possessors must recall it inferentially, by relating reported case studies to their own experience, or by recognizing patterns in data-analysis findings. An effective way of surfacing such knowledge is to discuss historical data or case studies to explore what is known collectively about various situations. This is similar to the argumentation method proposed by Rittel (1972) in his discussion of “second generation design.”

    Hidden knowledge is the most difficult type to surface. It’s not the sort of knowledge that you are going to realize, unless you stop to reflect on what went wrong in your decision-making, or how an action was performed. For example, an IT Manager commented to me that the business process he had selected for a new initiative in organizational change was not as “stand-alone” as he had expected. He stopped to think, then commented that “in fact, I couldn’t have chosen a worse process to start with – it was related to every single business process we have.” Then he paused, and added, “but actually, you could say that about all of our business processes. It seems there is no such thing as a stand-alone process!” This category of knowledge is surfaced through breakdowns (Heidegger, 1962), where the “autopilot” of everyday action is disrupted by the realization that one’s usual recipe-for-success in such circumstances is not working. At that point, the tool or process we were about to use goes from being ready-to-hand, ready for automatic use, to being present-at-hand, needing reflection in order to work out how to use a tool, or how to behave in those circumstances (Winograd & Flores, 1986). During breakdowns, we need to stop and think, revising our mental model of how the world works to come up with a new way of behaving that is a better fit to the situation. Again, Rittel’s (1972) argumentation approach would be helpful here, as people pool and debate what they have learned from a failure, collectively.

    The ways in which we learn, then, are dictated by the scope of access that we have to our colleagues. The more distributed people are, the more that knowledge is mediated across formal technology channels, as distinct from being acquired through face-to-face conversations. This remoteness means that we are more reliant on formal knowledge that is codifiable, or discoverable from formal sources of information. When people are co-located, they can spend time learning from what others do, or how a mistake or failure happened. The key take-away is that we need multiple ways of configuring and using technology platforms, for all types of knowledge to be supported. We cannot design one-size-fits-all information and communication technology systems.
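
    A minimal sketch, drawn only from the descriptions above, summarizing the four forms of knowledge in Figure 1 and the acquisition or support mechanisms mentioned for each. The Python structure itself is purely illustrative and is not part of the original framework.

    ```python
    # Minimal sketch of the four forms of knowledge described above
    # (after Gasson & Shelfer, 2006), paired with the support mechanisms
    # mentioned in the text. The structure is illustrative only.

    from enum import Enum


    class KnowledgeForm(Enum):
        CODIFIABLE = "codifiable"        # explicit, routine, programmable
        TRANSFERABLE = "transferable"    # articulable but situational
        DISCOVERABLE = "discoverable"    # tacit/implicit, recalled inferentially
        HIDDEN = "hidden"                # surfaced only through breakdowns


    SUPPORT = {
        KnowledgeForm.CODIFIABLE: "store and share: charts, rules, standards, forms",
        KnowledgeForm.TRANSFERABLE: "apprenticeship: learning from someone with more experience",
        KnowledgeForm.DISCOVERABLE: "discussion of case studies and historical data (argumentation)",
        KnowledgeForm.HIDDEN: "reflection after breakdowns, pooling lessons from failure",
    }

    for form, mechanism in SUPPORT.items():
        print(f"{form.value:>12}: {mechanism}")
    ```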

    Selected Bibliography:

    Khazraee, E.K. & Gasson, S. (2015) ‘Epistemic Objects and Embeddedness: Knowledge Construction and Narratives in Research Networks of Practice’ The Information Society, 31(2), forthcoming, Jan. 2015.

    Gasson, S. (2015) “Knowledge Mediation and Boundary-Spanning In Global IS Change Projects.” Proceedings of Hawaii Intl. Conference on System Sciences (HICSS-48), Jan. 5-8, 2015. Knowledge Flows, Transfer, Sharing and Exchange minitrack, Knowledge Systems.

    Gasson, S. (2012) The Sociomateriality Of Boundary-Spanning Enterprise IS Design, in Joey, F. George (Eds.), Proceedings of the International Conference on Information Systems, ICIS 2012, Orlando, USA, December 16-19, 2012. Association for Information Systems 2012, ISBN 978-0-615-71843-9, http://aisel.aisnet.org/icis2012/proceedings/SocialImpacts/8/

    Gasson, S. (2011) ‘The Role of Negotiation Objects in Managing Meaning Across e-Collaboration Systems.’ OCIS Division, Academy of Management Annual Meeting, San Antonio, August 11-16, 2011.

    Gasson, S. and Elrod, E.M. (2006) ‘Distributed Knowledge Coordination Across Virtual Organization Boundaries’, in Proceedings of ICIS ’06, Milwaukee, WI, paper KM-01. [Winner of ICIS Best paper in track award].

    Gasson, S. and Shelfer, K.M. (2006) ‘IT-Based Knowledge Management To Support Organizational Learning: Visa Application Screening At The INS’, Information Technology & People, 20 (4), pp. 376-399. Winner of 2008 Emerald Literati outstanding paper award.

    DeLuca, D., Gasson, S., and Kock, N. (2006) ‘Adaptations That Virtual Teams Make So That Complex Tasks Can Be Performed Using Simple e-Collaboration Technologies’ International Journal of e-Collaboration, 2 (3), pp. 65-91

    References

    Heidegger, M. 1962. Being and Time. New York, NY: Harper & Row.

    Polanyi, M. 1961. “Knowing and Being,” Mind (70:280), pp. 458-470.

    Rittel, H.W.J. 1972. “Second Generation Design Methods,” DMG Occasional Paper 1. Reprinted in N. Cross (Ed.) 1984. Developments in Design Methodology, J. Wiley & Sons, Chichester: 317-327.

    Schacter, D.L. 1992. “Implicit Knowledge: New Perspectives on Unconscious Processes,” Proceedings of the National Academy of Sciences (89:23), pp. 11113-11117.

    Winograd, T. and Flores, F. 1986. Understanding Computers and Cognition: A New Foundation for Design. Norwood, NJ: Ablex Publishing Corporation.

  • Distributed Sensemaking

    Boundary-Spanning Design

    Distributed Sensemaking in Wicked Problems

    When collaborative innovation groups span knowledge domain boundaries, we have the additional complexity of distributed sensemaking. Boundary-spanning groups find it difficult to develop a common language for collaboration — often because they use similar terms to mean different things, or because they frame salient aspects of the problem-situation in different ways. We cannot, therefore, use the typical, goal-directed methods that we would use with a homogeneous design group (for example, IT professionals engaged in software design). We need methods that represent and permit reconciliation of the multiple frames of meaning encompassed by boundary-spanning collaborators.

    I have explored the processes underlying the co-design of business processes and information systems in boundary-spanning groups across multiple studies. We are faced with a wicked problem: one that can only be resolved through stakeholder argumentation, rather than analysis. Choices in the design of technology and the effects of alternative forms of technology on work are formed by definitions of organizational problems and, in turn, affect how organizational problems are defined. So design choices are emergent. Technology and process design, organizational innovation, problem-solving, and management decision-making are inextricably intertwined. The critical issue for organizational problem-solving and design groups is how we manage distributed sensemaking in collaborative knowledge processes. In groups with little shared experience or background – such as the typical enterprise systems design group, which is constituted of managers from different business groups and knowledge domains – understanding is stretched across group-members rather than shared between them. This concept is shown in Figure 1.

    [Figure: Venn diagram showing intersubjective frames, intersections of understanding between two stakeholders, and distributed cognition as the union of all frames]
    Figure 1. Venn Diagram Illustrating Different Categories of “Shared” Understanding

    Most collaboration methods, whether focused on enterprise systems design, business process redesign, cross-functional problem-solving, or IT support for business innovation, employ a decompositional approach, which fails dramatically because of distributed sensemaking. Group members cannot just share what they know about the problem, because each of them is sensitized by their background and experience to see a different problem (or at least, different aspects of the problem). Goals for change evolve, as stakeholders piece together what they collectively know about the problem-situation — a process akin to assembling a jigsaw-puzzle. (Productive) conflict and explicit boundary negotiation are avoided because group-members lack a common language for collaboration, so misunderstandings are ascribed to political game-playing. We need design and problem-solving approaches that support the distributed knowledge processes underpinning creativity and innovation — approaches that recognize and embrace problem emergence, boundary-negotiation, and the development of shared understanding.

    Selected Papers:

    Gasson, S. (2013) Framing Wicked Problems In Enterprise-System Design Groups, Ch. 4 in Boundary-Spanning in Organizations: Network, Influence, and Conflict, Janice Langan-Fox and Cary L. Cooper (Eds.), Routledge, Taylor and Francis, New York.

    Gasson, S. (2006) ‘A Genealogical Study of Boundary-Spanning IS Design’, European Journal of Information Systems, Special issue on Action in Language, Organizations and Information Systems. 15 (1), pp. 1-16.

    DeLuca, D., Gasson, S., and Kock, N. (2006) ‘Adaptations That Virtual Teams Make So That Complex Tasks Can Be Performed Using Simple e-Collaboration Technologies‘, International J. of e-Collaboration, 2 (3), pp. 65-91.

    Gasson, S. (2005) ‘The Dynamics Of Sensemaking, Knowledge and Expertise in Collaborative, Boundary-Spanning Design‘, Journal of Computer-Mediated Communication (JCMC), 10 (4). http://jcmc.indiana.edu/vol10/issue4/gasson.html

    Gasson, S. (1998) ‘Framing Design: A Social Process View of Information System Development‘, in Proceedings of ICIS ’98, Helsinki, Finland, December 1998, pp. 224-236.