I thought an interesting issue came up during our discussions of paradigm and syntagm last week. The concept of syntagm helps us understand how the sequence of different signs in an expression is constitutive of their meaning. This would seem to be a useful concept for HCI, considering that interactions, as much as any other ‘medium’, develop and unfold over time. The interesting part, though, is that in an interaction design the ordering of parts is often not specified but rather left open for the user to determine. True, some task-based software might have a fairly predetermined ordering. However, many artifacts, such as websites, give the user a great deal of control over the syntagmatic differences at every step of the way. What are the implications of this, and how should designers deal with it?
To me, this suggests that structural critique of an interaction is much more complicated than critique of film. A given interactive application can have so many different meanings depending on the user’s choices; the user plays an active role in constructing those meanings. Sometimes understanding the possible meanings might be a matter of doing the math, enumerating all the possible paths through an application. At other times, though, an application might be so open-ended that it would be hard to say much about the application in general. In that case, it seems you would have to do some observation and then structurally analyze particular user interactions.
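As a side note, “doing the math” here can be made quite literal. Here is a minimal sketch of what enumerating syntagms might look like, using an entirely made-up screen-flow graph (the screen names and structure are hypothetical, not any application discussed above). Each path from the start screen to a terminal screen is one possible syntagm:

```python
# Hypothetical screen-flow graph for a small application: each screen
# lists the screens a user can reach from it. "done" is a terminal state.
flow = {
    "home":    ["search", "browse"],
    "search":  ["results", "home"],
    "browse":  ["results", "home"],
    "results": ["detail", "home"],
    "detail":  ["done", "results"],
    "done":    [],
}

def enumerate_paths(flow, start="home", max_len=5):
    """Depth-first enumeration of all user paths (syntagms) from `start`
    to a terminal screen, capped at max_len screens to keep it finite
    (users can revisit screens, so uncapped paths are unbounded)."""
    paths = []
    def walk(screen, path):
        if not flow[screen]:          # terminal screen reached
            paths.append(path)
            return
        if len(path) >= max_len:      # cut off paths that exceed the cap
            return
        for nxt in flow[screen]:
            walk(nxt, path + [nxt])
    walk(start, [start])
    return paths

for p in enumerate_paths(flow):
    print(" -> ".join(p))
print("paths within 9 screens:", len(enumerate_paths(flow, max_len=9)))
```

Even in this toy graph, raising the cap on path length makes the number of distinct syntagms climb quickly, which is exactly why the exhaustive-enumeration approach stops being practical for open-ended applications.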
A structural analysis of “all the possible paths through an application” would be folly. But even film theorists seldom take on a whole film. It is much more manageable in a film to talk about a shot, scene, or at most a sequence. Likewise, a syntagmatic analysis of an application is much more manageable at the level of the task than at the level of the whole application.
Also, I think the construction of meaning in the mind of a viewer of film is more creative and unpredictable than you seem to acknowledge here. Likewise, applications are more structured than you imply. That’s not to deny your central point, that interactive interfaces are less linear than film, which is surely true, but rather that it’s possible to overstate (as well as understate) this difference.
“a syntagmatic analysis of an application is much more manageable at the level of the task than at the level of the whole application”
This is a good point and well taken. I always need to be reminded to be more focused and specific in scope.
At the same time, in doing design, one has to have a sense of the whole. In film critique, an analysis of a specific shot is usually related back to the overall narrative, and the point in the narrative in which the shot takes place is significant. Can we think of applications as having an overall narrative structure? Sometimes yes and maybe sometimes no. I think that is an interesting question.
I just realized that Manovich nicely explores this issue with his new media principle of “Variability”. I like that he breaks down variability into different types, ranging from simple menu-based interactivity to more open-ended interactivity. Thinking about Jeff’s comment, I wonder if we should also make distinctions between micro- and macro-level syntagmatic variability. For instance, some role-playing games are extremely open-ended on the micro-level, allowing you to roam freely through a large virtual space and experiment with many different actions, but then fairly constrained on the macro-level, adhering to a basic predetermined narrative. On the other end of the spectrum, are there any applications that have tightly constrained tasks, but allow you to complete the tasks in any order you choose?
Really nice post, Dave. I’m intrigued by, and struggling with, a number of the ideas you brought up. Although I’m skeptical of Manovich’s claim that (non-linear, interactive) new media is actually more controlling than more traditional, linear media such as film, I think this critique is something that should be taken seriously, and it is in many ways quite a brilliant argument against the “myth of interactivity” to which nearly everyone else I’ve read seems to subscribe.
Manovich says, “Before we could look at an image and mentally follow our own private associations to other images. Now, interactive computer media ask us instead to click on an image in order to go to another image. Before, we could read a sentence of a story or a line of a poem and think of other lines, images, memories. Now interactive media asks us to click on a highlighted sentence to go to another sentence. In short, we are asked to follow pre-programmed, objectively existing associations.”
Manovich seems to be saying that authors are now able to externalize their minds more powerfully when composing software or hypermedia. Of course, the author does not consciously consider every possible path or syntagm (clearly this is infeasible even for a small number of interactive options, since the combinations grow exponentially). However, many (possibly all?!) of the important syntagms are implicit when the author creates a closed system of possible syntagms. Hence, authors are exerting even more control over the user, even though the user may experience use as a form of authorship. All the (important) syntagms you would follow have already been defined, and probably traced, by the author beforehand.
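The infeasibility claim above is easy to make concrete with back-of-envelope arithmetic. This sketch assumes purely hypothetical numbers (a fixed count of options per screen, a fixed path length, a fixed number of free-order tasks) just to show the scale:

```python
import math

def num_sequences(choices, steps):
    """Distinct click-sequences when every step offers `choices` options."""
    return choices ** steps

def num_orderings(tasks):
    """Distinct orders in which `tasks` independent tasks can be completed."""
    return math.factorial(tasks)

# Even modest assumptions blow up fast:
print(num_sequences(4, 10))  # 4 options over 10 steps -> 1,048,576 sequences
print(num_orderings(8))      # 8 tasks doable in any order -> 40,320 orderings
```

So an author of even a small interactive system could not literally trace every syntagm; at best, the important ones are implicitly bounded by the closed system, which is Manovich’s point.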
One of the problems I see with this model (assuming I’ve interpreted it correctly) is that many software applications and other new media works do not exist as largely independent sub-systems. How often do you stay within a single website when browsing the web? At least for me, the answer is usually not for more than a couple of minutes, unless I’m reading a single, linear text. In other words, the argument starts to dissolve once you incorporate the interaction between multiple interactive programs created independently by multiple authors. I have 124 tabs open right now, with no more than 5 from any single site, not to mention various other physical writings scattered around.
Another problem I see is that an author could create a closed system so rich that users of the system are actually authors. I think we would all consider the English language to be such a system. The question is, when exactly does a system become this rich? If we don’t accept this view (that is, if we instead believe that we are all just users, not authors, of language), then I would think we would have to give up the notion of authorship entirely, seeing as this may be the most fundamental system upon which all other works are built (at least according to structuralists and linguists). This may be true and interesting philosophically, but it doesn’t seem to provide any insight for designers, does it? I’ll have to think about that some more…
So for the time being, I think I may be in agreement with Jeff when he says, “Also, I think the construction of meaning in the mind of a viewer of film is more creative and unpredictable than you seem to acknowledge here. Likewise, applications are more structured than you imply.” Which, I have to say, is quite different from my perspective coming into this course. Still, I’m very curious about how we might more precisely understand the distinguishing characteristics (structural, phenomenological, or otherwise) of linear vs. non-linear media, unpredictable/open-ended vs. structured/controlled, and use vs. authorship.
thanks for the GREAT post! Very useful…