Sunday, January 31, 2010

Comparing across sessions

I thought it would be useful to note my thoughts about how the cross-session analysis will unfold before actually starting it.

First I'll look at each of the individual session analyses and compare them horizontally. For example, look at each of the eight shaping forms and look for interesting correlations and differences between the ways shaping happened, look at each of the sensemaking moments for the sessions and compare them, look at the way practitioner focus compares in the grid analyses, etc.

Generally I'm going to be looking for the way the following dimensions compare across the eight sessions:

- intended and 'lived-in' narratives, to give context to the breaches (sensemaking triggers) as well as 'canonicity' of the sessions
- sensemaking triggers (discontinuities, dilemmas, and anomalies that the practitioners respond to)
- shaping (the aesthetics/form of the representations, before and after sensemaking triggers)
- collaboration & practitioner/participant interaction, especially choices the practitioners make that affect the 'interests' of the participants (i.e. ethics)
- types of practitioner focus
- types of improvised practitioner actions
- what is done in the sessions to make the representations 'work' (to make them coherent, engaging, useful etc.)

I may construct some tables showing short summaries of each of the sessions as they fall into these categories.

Hopefully some interesting patterns will emerge. I should be able to come up with some axial coding-type dimensions (e.g. what the 'more' and 'less' ends of a dimension would look like when I compare sensemaking triggers, and likewise for aesthetic shaping and ethical choice-making, etc.).

That may lead to having something interesting to say about this group of sessions as a whole.

Sunday, January 24, 2010

Finished another piece

This post is only to gloat over completing a sub-milestone on my PhD work, the second of the three analysis phases.




As the picture shows, I first did a set of six different kinds of analysis on each of the eight sessions I'm looking at for the thesis. I finished that about a month ago (after more than two years of work). The piece I just finished was comparing the questionnaires I gave each practitioner against the individual session analyses.

I had asked each person questions about their software and facilitation skill levels and experience, and their opinions about the sessions they had conducted. Nothing breathtaking emerged from that analysis, but it filled in another piece of the puzzle by letting me compare what had happened in a session with the practitioner's level of experience and skill.

For the last analysis sub-phase, "Cross-Episode Analysis", I will (at last!) look across all eight sessions and compare them on a number of criteria. I've been looking forward to doing this for years now, but had to finish all the individual analyses first.

Hoping to do this part in the next two months or less (I'll probably have to take some vacation time to do so). Then the actual writing of the thesis will start. It's been a long time coming (since Oct 2003).

Sunday, January 03, 2010

The ethics of shaping

As the light at the end of the PhD tunnel starts to turn from a pinprick to a dime-shaped glow, several people who have recently heard me talk about my research have mentioned that they see similarities in the work I do at my day job.

I work in software usability and user interface design for systems used by call center reps inside a large company. Like the practitioners I've been looking at for my research, when I make decisions about the UI in enterprise software design, choices about the form to give a screen are bound up with the way that form will affect the interests of the people who come into contact with it. And, as with participatory representations, it's not just a unitary set of considerations. There are multiple kinds of people involved -- clients, end users, other IT teams, auditors -- each with diverse imperatives that drive them. Even "users" are not a monolithic block with one set of interests. What works best for an experienced, expert user is not the same as what would be best for a new user encountering a task or screen for the first time (among many other sorts of differences between users).

We are constantly balancing considerations like speed of development, ease of maintenance, testability, business rules, time constraints, future changes and plans, and usability. Each of these dimensions has ethical implications and trade-offs. As a usability person, it would be easiest to treat ease of use as the principal value (it's our job to champion it, after all) and give it the highest ethical importance. But if we know that the best UI design will have costs and cause problems for others with equally legitimate interests, we have to weigh ease of use against those other factors in our choice-making.

For example, take a simple change to an existing dialog box in an ordering system. In our work we get requests for these all the time, perhaps half a dozen a day (along with much larger projects). From a usability point of view, we always want dialog boxes to have clear window titles that give an overview of the situation; concise and straightforward text in the box that lets the user know why the box appeared and what they can or should do; and clearly labeled buttons that spell out the choices they can make (e.g. "Proceed with Order" and "Return to Address Entry" instead of "OK" and "Cancel" or "Yes" and "No", which are so often misused or unclear). No-brainer, right?

In the abstract, yes. But what if the issue comes up (as such issues so often do) two days before code freeze for a complex release in which this change is just one of hundreds affecting many interdependent systems? And (as is often the case) what if the dialog box sits in an older part of the system where all such boxes are coded as simple alerts using language-supplied constants that don't allow for descriptive button labels or window titles?
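The contrast can be sketched in code. This is a minimal illustration, not any real toolkit's API -- the `Dialog` class and its field names here are hypothetical stand-ins for the kind of dialog abstraction an ordering system might have:

```python
from dataclasses import dataclass, field

@dataclass
class Dialog:
    """Hypothetical stand-in for a toolkit dialog box."""
    title: str
    message: str
    buttons: list[str] = field(default_factory=lambda: ["OK", "Cancel"])

# Constants-style legacy alert: only the message text can vary,
# so the buttons stay as the generic "OK" / "Cancel" pair.
legacy_alert = Dialog(
    title="Warning",
    message="Address could not be verified. Continue?",
)

# "Proper" usability design: the title and buttons describe the
# situation and spell out the user's actual choices -- at the cost
# of custom programming, extra testing, and a requirements change.
verified_dialog = Dialog(
    title="Address Verification",
    message=("The shipping address could not be verified. "
             "You can proceed with the order as entered, "
             "or go back and correct the address."),
    buttons=["Proceed with Order", "Return to Address Entry"],
)

print(legacy_alert.buttons)     # → ['OK', 'Cancel']
print(verified_dialog.buttons)  # → ['Proceed with Order', 'Return to Address Entry']
```

The point of the sketch is only that the second version requires touching code, labels, and documentation that the first version leaves alone -- which is exactly where the trade-offs below come in.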

Insisting on "proper" usability design in this case would require custom programming, which takes time (that the development team doesn't have) and additional testing, which adds costs that come out of someone's budget (and are then unavailable for other things). It may mean a change to documented requirements, which then requires formal review by people who already have too little time for all they have to do, and creates schedule risk that may jeopardize delivery and testing.

A simple answer in this case might be, "just make simple text changes using the existing code for this release, and do a proper job for the next release when there's more time". And often we do take that approach -- when we can. Sometimes there won't be another opportunity because the dialog box is in a part of the system that won't be touched again for a long time, and it would be too expensive to open the code just to make that change (since that, again, would require development time, testing, requirements documentation, etc.). So in many cases the changes are "now or never".

These kinds of dilemmas are familiar to anyone working in software development, and I'm not saying anything new about them. However, from a research point of view I am particularly interested in highlighting the ethical dimensions of the choices that have to be made about something's visual form -- the conflicting imperatives that are all valid and all reflect legitimate interests.

My focus is on the ethical choices involved in making decisions about form, in the context of shaping interaction and experience for others with mediating tools and representations. In my research I've been applying this to the special case of people playing a group facilitative role with hypermedia representations, but really it's more broadly applicable. What I feel emerging is a set of ways to enable other people to think and talk about these choices as they relate to their own work. I'm not so much interested in being the expert assessor myself, though I have had to do dozens of assessments of practice in the course of the research (I just finished the last of the 47 individual analyses yesterday!) and I do similar kinds of assessments every day on the job.

Rather, I want to enable and enhance people's ability to think about these issues for themselves, and give them some tools to do so. I want to make the question of "how does the way I shape the things I make affect the people who interact with them?" something that's accessible for people to talk about and get insight on.