Leif’s PhD defense

After the initial 45-minute presentation, two opponents ask questions: supervisor Kurt Schneider (KS) and Arie van Deursen (AvD).

KS: What would you consider software engineering practice? You don't distinguish between distributed and colocated. Is that a limitation?

Leif: XP is a combination of practices; we should look at the individual practices.

KS: Why did you include ‘influence on non-functional properties’ in your definition?

Leif: The change agents should be able to communicate the reason for change

KS: What about practices that influence productivity (as opposed to quality)?

Leif: Yes, this is also a property

KS: Does the fact that an innovation has to be an improvement limit the scope, or broaden it?

Leif: It broadens it. Think about documentation: it might not be seen as an innovation, but if we convince developers it is, adoption might be easier.

KS: You do not distinguish between hobby and commercial open source.

Leif: In our studies, we mixed the two: some organizations had open source solutions that were sold.

KS: How do you distinguish a community of practice from a group of hobbyists?

Leif: That is a broad question; it depends.

AvD: You state TDD is a practice, but it is also a collection of practices, like XP or Scrum, which you do not call practices?

Leif: Yes, you are right. I do have doubts about Scrum, though: I am unsure how software-related all of its activities are.

AvD: You assume that organizations want to adopt something. Is that not the hardest step? Moore (of Crossing the Chasm) argues that the hardest step is from early adopters to early majority, which differs from the KAP gap.
Think about Git versus GitHub. Git is the true innovation, but it did not fly. GitHub serves the early majority by creating a whole product that solves a problem (as Moore describes).

Leif: GitHub does lower the barrier, but this is already part of Rogers’ theory; Moore just makes it more visible. And Moore’s is not scientific work: he draws a sharp line at the chasm and claims that the groups on either side of it (early adopters and the early majority) do not communicate with each other, whereas Rogers and other researchers found that there is communication between the groups. It might be useful as a marketing tool, but I do not accept it as the truth.

KS: You did not have a lot of citations on adoption in software engineering.

Leif: I did not find a lot of research on adoption in software engineering. Most works talk about problems and are not from SE themselves. It is strange that Rogers’ work has not ‘diffused’ to SE. But it is getting better now, with social media being a topic of research.

KS: Will your method work if the innovation is perceived as old and boring?

Leif: Yes. I would look at the patterns for steps 4 and 5 (implementation and confirmation), as these people have already made a decision (“this is not for me”).

KS: What was a surprising outcome of the first (GitHub) study?

Leif: The difference between how project owners and contributors see what is happening: contributors thought examples were very useful, but owners did not.

KS: What was non-surprising? What outcome did you expect?

Leif: We tried to perform the study without expectations.

KS: How does Grounded Theory go together with GQM (Goal-Question-Metric), in which you should have clear objectives?

Leif: GQM was used for the evaluation, not for problem finding.

AvD: In all three studies, I missed your coding system. Where can I find it if I want to do a similar study?

Leif: You could approach me. We are already sharing interview data with others.

AvD: Do you think it would be useful to share your coding system?

Leif: Yes, it would have been 🙂

AvD: I also missed data on the projects in the GitHub studies, for instance: what language was used, were there tests, etc.

Leif: The projects were very different: some in Ruby, some did not use testing; they had deliberately taken on technical debt in order to gain traction.

AvD: You could argue that a real grounded theory study is not random.

Leif: In the beginning, we were still looking for directions. In later studies, we did not randomize anymore.

AvD: Did saturation (as defined in Grounded Theory) play any role?

Leif: Yes, at the end of the interviews we did get similar answers. We then reoriented ourselves and made our interview questions sharper.

AvD: I found it surprising that test coverage did not occur in the paper. Did anyone mention it?

Leif: Some were only ‘testing the happy path’ because they use BDD. That is the way it is done in Rails, something community leaders tell people.

AvD: Maybe I am an idiot, but if I got a pull request, I would want to see some tests with it. All the technology to verify this exists, but here the ‘whole product’ (in Moore’s sense) is missing.

Leif: This could probably be an improvement to Travis (?), which can already do some validation. One of the participants did mention that he checked coverage.

AvD: In a grounded theory, this would have been a nice point to dive deeper.

Leif: Yes, but we were not studying testing, but social phenomena.

KS: How deep does PAIP go? Aren’t the real problems in implementing the pattern?

Leif: I provide the framework, but covering the details of every pattern was not in the scope of the thesis.

AvD: The tough scientific question is: how can we disagree with or verify PAIP? Looking at the table with patterns (Table 7.1), there are a lot of ‘ticks’. Was there a cell where you were in doubt? Concretely: does appreciation play no role in the starting phase?

Leif: I used existing theories to decide that. Since initially there is nothing to appreciate, it is not applicable in the starting phase.

AvD: But how do you validate it?

Leif: In an ideal world, we would do an experiment for each pattern.

KS: Can we design an experiment, then, to validate one? For instance, micro-blogging helps with creative tasks. An experiment could be choosing a creative task and seeing whether micro-blogging supports it.

AvD: In the study with the students, many patterns were used, which makes it hard to judge their influence. What was the most important one?

Leif: We noticed that milestones worked, since we did not make them explicit. The students guessed that there was a milestone at 100, so they made a bunch of commits. We also did interviews at the end, from which we learned that the news feed was useful. Some used the leaderboard to see who was not doing anything, which was not the intention. A more useful variation would be a cooperative game, where teams are scored against other teams.

Comments

  1. Neil

    Super awesome – both the questions/answers and the fact you blogged this.

  2. Felienne (Post author)

    Thanks! Was very fun to do.

  3. Pingback: Dissertation Published | Leif Singer's Blog
