
Scrum’s Success Explained using SCARF-model June 15, 2015 at 20:35

Scrum has become one of the dominant agile organizational frameworks in the software development industry. With its simple set of roles, activities and artifacts, it has gained appreciation among managers and developers alike and has, when done right, delivered great improvements in team and company productivity.

In this article I will try to explain why Scrum works from a social neuroscience perspective, using the SCARF-model, a social behavior model developed at the NeuroLeadership Institute.

The SCARF-model

The SCARF-model tries to explain the core motivators of the human social being in the context of the anatomy and function of the brain.

The acronym S.C.A.R.F. stands for:

  • Status – one’s relative importance to others
  • Certainty – being able to predict what lies ahead
  • Autonomy – a sense of control over events
  • Relatedness – belonging to a group, being a friend rather than foe
  • Fairness – a fair exchange between people

The model explains how certain actions are perceived by the recipient.

At the core, the model stipulates that the brain tends to mark events either as something to avoid, triggering the fight-flight-freeze system of the primitive brain, or as something to approach, triggering the brain’s reward system. A person who receives frequent mental rewards is generally more creative, more dedicated to his/her tasks, and at lower risk of reduced mental health, while the reverse is true for someone who is exposed to negative triggers.

The domains covered by the acronym indicate social events/actions that trigger the avoid/reward mechanism. According to the model, a perceived reduction in status triggers the same kind of response in the brain as a threat to one’s life. Uncertainty, lack of control over one’s own situation, lack of belonging to a group, exclusion, and unfair treatment all have a negative effect.

SCARF and Scrum

Let’s look at each domain and how it is affected by the principles and practices of Scrum.

Status

Scrum tends to reduce the hierarchical depth of an organization, making it flatter. A flat organization has fewer situations where status can have a negative impact. Further, cross-functional teams, where people of different professions work together in the same team, tend to reduce the status differences between fields. Historically, for example, members of the test/QA department have often been given lesser status than hard-core developers. A well-functioning cross-functional team, where everyone participates in all types of work items, shares knowledge, and contributes outside their core competence, reduces the perceived difference in importance between professions. A clear and complete definition of done further emphasizes the importance of all areas.

Certainty

Locking down the tasks for the coming 2-4 weeks in the form of the sprint backlog greatly reduces uncertainty. The routine of the established activities (daily standup, sprint planning, sprint review) further promotes this.

Autonomy

Team self-organization, team commitment to the workload of a sprint, and responsibility for the end-to-end solution, and with it the freedom to decide on implementation, are all key components that boost the feeling of a self-governed destiny.

Relatedness

The team as such, together with its end-to-end responsibility, gives a strong feeling of belonging and of one’s own value.

Fairness

The team succeeds and fails together. All tasks are shared across team members, and thanks to the definition of done, all aspects of a task need to be completed in order to gain full value. Everyone is treated as a full member of the team.

Conclusion

As explained, organizational structures such as those imposed by the Scrum framework have a good chance of setting up a work environment that goes with the grain of the human brain. However, as with everything, good architecture is not enough; it is only realized through its implementation.

Continuous Integration: A Mindset, part II February 16, 2012 at 20:00

I received a couple of excellent questions/concerns about CI in a comment from Roger that I will try to address.

“How about human testing/QA? If team or feature branches are used, the QA department has a chance testing PBIs isolated before integrating with mainline which, hopefully, leads to more stable mainline.”

Ideally, the QA department should not concern itself with testing in the traditional sense. The role of the QA department, in my view, is to support the development teams by improving the testing tools, maintaining the resources required for automated testing, helping to improve strategies, and so on. QA should concern itself with making sure that the process works, that the tests get run, and that the tests are sound. They should do very little testing themselves.

All tests should be automated. Not only unit tests, but integration tests, load tests and acceptance tests as well.

That was an intentional lie. There are two types of tests that humans should do, and they can be performed by the QA department:

  • Exploratory testing
  • Verification of aesthetics

However, nothing says that these activities must be done in isolation. I would prefer that the test specialists worked in the design teams and performed this testing together with them.

CI requires another thing: Every designer should be able to run most of the tests themselves from their local copy.

If you are doing work that is likely to have a big impact on the system, run some smoke and integration tests first, before you commit to main.

Alongside, or prior to, the development of a new feature, automated function tests and acceptance tests should be written. Obviously they will fail until the feature is complete, and you don’t want these new tests to halt the build/deployment. So you will need some kind of mechanism that lets the developers run against these acceptance tests without anyone turning the unfinished feature on, and without the expected failures halting the pipeline. If your testing framework and CI engine don’t let you easily exclude tests that are expected to fail, I could agree to putting the test code on a “feature branch”. The reason I would allow this is that tests, if correctly written, are completely isolated and require no integration.
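As a minimal sketch of such a mechanism, assuming pytest as the test framework (the choice of framework is mine, not prescribed here), an acceptance test for unfinished work can be marked as expected to fail, so it runs on every build without halting anything:

```python
import pytest

def bulk_export(items):
    # Stub standing in for a feature that is still under development
    # (the feature and all names are hypothetical, for illustration only).
    raise NotImplementedError("bulk export is not implemented yet")

# The test runs on every build but is reported as an expected failure,
# so it never halts the pipeline. With strict=False it also tolerates
# the test quietly starting to pass once the feature is completed.
@pytest.mark.xfail(reason="feature 'bulk export' not yet complete", strict=False)
def test_bulk_export_produces_archive():
    archive_name = bulk_export(["a", "b"])
    assert archive_name.endswith(".zip")
```

Once the feature is done, the marker is removed and the test becomes a normal gate.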

If you have commits that keep breaking main in a show-stopping fashion, then you have the wrong set of tests and the wrong mindset. CI requires discipline, more so in the daily work than waterfall does, because you will be called on it every time you slip and break something. In waterfall you get away with doing the wrong things for much longer, until crunch time comes and all the ugliness becomes apparent, or worse, it does not show itself until it is running in the customers’ systems, causing havoc.

“Also, if new stuff is too unstable or there has been misunderstanding about functionality, it doesn’t have to be integrated at all.”

If there are uncertainties: discuss, do pretotypes (pretotyping.org/), do internal demos. If a commit breaks the build, undo it. Also, new features should have a feature lock preventing them from showing up in the delivered product until the full feature is completed and ready for market. Large features may take months to develop to completion. Should you hold off the integration for that long? Of course not! Even if there are concerns with CI, the benefits exceed the drawbacks many times over.
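A feature lock can be as simple as a guard around the new code path. Here is a minimal sketch in Python, with the flag and function names invented for illustration:

```python
def legacy_export(items):
    # Existing, shipped behaviour; stays untouched while the feature grows.
    return ",".join(items)

def bulk_export(items):
    # New feature: merged to main continuously, but dormant until released.
    return f"archive({len(items)} items)"

# The flag lives in one place. Flipping it is a release decision, not a
# merge decision, so a large feature can be integrated for months before
# anyone turns it on.
FEATURES = {"bulk_export": False}

def handle_export_request(items):
    if FEATURES["bulk_export"]:
        return bulk_export(items)
    return legacy_export(items)
```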

You will have issues with CI, and it will have drawbacks, but the alternative is far worse. So even if you feel that I have not addressed the stated concerns successfully, I would still argue that those concerns are small compared to the issues that CI does solve.

Current Best Thinking January 28, 2012 at 12:33

My new favorite phrase is:

“Current Best Thinking”

I can’t take credit for it, unfortunately. I heard it from a co-worker who had gotten it from somewhere else.

It is inspiring as well as truthful. It expresses the insight that we act according to our best knowledge, as we understand the world today, but that we will continue to pursue a better understanding, think more, and evolve our practices as we learn more about the world.

Excellent

Continuous Integration: A Mindset at 12:18

I attended a round-table discussion on CI (Continuous Integration) the other day – and it prompted me to write this post.

There seems to be some confusion about CI. Some have chosen to let integration mean the interaction of two or more software components, and hence conclude that integration only takes place during test execution.

Unfortunately, with that limited understanding of what CI is, you lose the big picture. CI is not only the triggered build and test execution on every check-in. That is only the safeguard that makes it possible to perform the actual activity: continuously integrating others’ work with your own.

I’ll say that again: continuously integrate others’ work with your own.

That means you do not have team branches or feature branches; you keep everything on one track as much as the product will allow. Every change in the system should propagate to you more or less immediately.

Yes, you need builds and tests to ensure that your code is always working, but the integration is not the integration of software modules or components; it is the integration of your work with other people’s work.

Someone might get irritated by the use of the word work here. Why isn’t he saying code if that is what he means? Why not say that you continuously integrate other developers’ code with your own? Well, first off, “your code” and “others’ code” imply code ownership. In an agile environment, you don’t own code; it is a communal responsibility.

Second, it is not just code that requires integration. The same applies to toolsets, platforms, hardware, environments, and configurations; every aspect of the software development process.
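To make that rhythm concrete, here is a rough sketch of what pulling in others’ work can look like, assuming a git-based workflow with pytest; the helper script itself is invented for illustration:

```python
import subprocess
import sys

def run(cmd):
    # Echo and run a command, returning its exit code.
    print("$", " ".join(cmd))
    return subprocess.run(cmd).returncode

def integrate():
    # Pull in everyone else's work and replay local commits on top of it.
    if run(["git", "pull", "--rebase"]) != 0:
        sys.exit("resolve the conflicts now, while they are small")
    # Verify the combined result locally before sharing it back.
    if run(["python", "-m", "pytest"]) != 0:
        sys.exit("fix the breakage before pushing")
    run(["git", "push"])

if __name__ == "__main__":
    integrate()
```

The point is the frequency: something like this runs several times a day, so each integration step stays small.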

The concept of continuous adaptation, or really continuous adoption, should be your mindset.

Using this practice, some will experience that everything changes underneath them: you constantly need to change small aspects of your code because of others’ changes. It becomes tedious, and your own progress feels slow because you have to take everyone else’s changes into account. You might even long for the good old days when you could work without a care for a month and then hand off your work to the integrator, who magically just made things work together.

First of all, the integrator’s job in those situations was never easy. They often spent late nights getting your and others’ crap to work properly. So even if those were the good old days for you, they weren’t for the integrators. Secondly, it was an error-prone process. It created a mess that you usually ended up having to fix. So if you remember the frustrating debugging sessions caused by obscure errors, it isn’t so much the good old days anymore.

Despite all of this, if you still find yourself longing for the old ways, it can be helpful to realize this:

Decoupling of dependencies should always be done by architecture and design in the code – never by black magic in the version control system or by configuration management.

If you are in a situation where you use the version control system to reduce the impact of changes, to create isolation between modules, that is a symptom of poor architectural design, which needs to be addressed. DO NOT hide the problem with configuration management.
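In code, that kind of decoupling usually means depending on a small, stable interface rather than on another team’s concrete module. A minimal sketch in Python, with all names invented:

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    # The stable seam: callers depend on this abstraction, not on
    # whichever concrete storage module another team is changing today.
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...

class InMemoryStorage(Storage):
    def __init__(self):
        self._data = {}

    def save(self, key, value):
        self._data[key] = value

class ReportWriter:
    # Depends only on the Storage interface, so storage implementations
    # can evolve on mainline without rippling into this code.
    def __init__(self, storage: Storage):
        self._storage = storage

    def write(self, name, body):
        self._storage.save(name, body)
```

With the seam in the design itself, there is nothing left for the version control system to isolate.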

In conclusion, the mindset of continuous integration is not just that of having automatic builds and automatic tests. It is the process of pulling in and integrating other developers’ work with your own. Decoupling of subsystems and modules must be done through architecture and design in the code, never through configuration management.

Merge is Release August 31, 2011 at 22:28

“A small part of me dies on every release”

A colleague wrote the statement above on the whiteboard in the war room today, after spending close to half a day on an internal release of source code meant for the other dev teams working on the same project. When an internal release takes almost four hours, through a manual process that often goes wrong and has to be mended or redone, then something is terribly wrong with the procedure.

Release should be as easy as Merge

Any new feature should be developed on a separate feature branch. Once the code is in place, it compiles, all unit and function tests pass, there are no lint warnings, no doxygen warnings, and no code duplication, and the code has been peer reviewed, then you merge back to main/trunk. The merge should be close to trivial, since the application you are writing is well partitioned and abides by the SOLID principles (especially OCP, the open-closed principle**), and each feature is small and distinct enough that you finish within a few days.
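As a side note, here is a minimal sketch (names invented) of why OCP keeps such merges trivial: new behaviour arrives as a new class, typically in a new file, so feature work rarely touches code that anyone else is editing.

```python
import json
from abc import ABC, abstractmethod

class ExportFormat(ABC):
    # The extension point: new formats are added; existing ones stay closed.
    @abstractmethod
    def render(self, rows: list) -> str: ...

class CsvFormat(ExportFormat):
    def render(self, rows):
        if not rows:
            return ""
        header = ",".join(rows[0])
        body = [",".join(str(v) for v in row.values()) for row in rows]
        return "\n".join([header, *body])

# Adding JSON support means adding this class; CsvFormat and its callers
# are not modified, so the merge back to main conflicts with almost nothing.
class JsonFormat(ExportFormat):
    def render(self, rows):
        return json.dumps(rows)
```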

We do all of that, or pretty close to it. The problem is not our coding practice, at least not in this instance. The problem is the version control system and the release process. I’m too embarrassed about the actual system in place to go into details. Let’s just conclude that it is awful.

If your release process is any more complicated than a merge, then you are doing something wrong.

** If you are not familiar with the SOLID principles, or specifically the Open Closed Principle, please read this blog post. Also, stay tuned: future posts will discuss the individual principles behind the acronym.