Scrum’s Success Explained Using the SCARF Model June 15, 2015 at 20:35

Scrum has become one of the dominant agile organizational frameworks in the software development industry. With its simple set of roles, activities and artifacts, it has gained appreciation among managers and developers alike and has shown great improvements in team and company productivity when done right.

In this article I will try to explain why Scrum works from a social neuroscience perspective, using the SCARF model, a social-behavior model developed at the NeuroLeadership Institute.

The SCARF Model

The SCARF model tries to explain the core motivators of the human social being in the context of the anatomy and function of the brain.

The acronym S.C.A.R.F. stands for:

  • Status – one’s relative importance to others
  • Certainty – being able to predict what lies ahead
  • Autonomy – a sense of control over events
  • Relatedness – belonging to a group, being a friend rather than a foe
  • Fairness – a fair exchange between people

The model explains how certain actions are perceived by the recipient.

At its core, the model stipulates that the brain tends to mark certain events either as something to avoid, triggering the fight-flight-freeze system of the primitive brain, or as something to approach, triggering the brain’s reward system. A person who receives frequent mental rewards is generally more creative, more dedicated to his or her tasks, and at lower risk of reduced mental health, while the reverse is true for someone who is exposed to negative triggers.

The domains covered by the acronym indicate the social events and actions that trigger the avoid/reward mechanism. According to the model, a perceived reduction in status triggers the same kind of response in the brain as a threat to one’s life. Uncertainty, lack of control over oneself, lack of belonging to a group, exclusion and unfair treatment all have a negative effect.

SCARF and Scrum

Let’s look at each domain and how it is affected by the principles and practices of Scrum.

Status

Scrum tends to reduce the hierarchical depth of an organization, making it flatter. A flat organization has fewer situations where status can have a negative impact. Further, cross-functional teams, where people of different professions work together, tend to reduce the status differences between fields. Historically, for example, members of the test/QA department have often been given lower status than hard-core developers. A well-functioning cross-functional team, where everyone participates in all types of work items, shares knowledge and contributes outside their core competence, reduces the perceived differences in relative importance between professions. A clear and complete definition of done further emphasizes the importance of all areas.

Certainty

Locking down the tasks for the coming two to four weeks in the form of the sprint backlog greatly reduces uncertainty. The routine of established activities (daily standup, sprint planning, sprint review) further promotes this.

Autonomy

Team self-organization, the team’s commitment to the workload in a sprint, and responsibility for the end-to-end solution, with the accompanying freedom to decide on the implementation, are all key components that boost the feeling of a self-governed destiny.

Relatedness

The team as such, together with end-to-end responsibility, gives a strong feeling of belonging and of one’s own value.

Fairness

The team succeeds and fails together. All tasks are shared across team members, and thanks to the definition of done, all aspects of a task need to be completed in order to gain full value. All members are treated as full members of the team.

Conclusion

As explained, organizational structures such as those imposed by the Scrum framework have a good chance of setting up a work environment that goes with the grain of the human brain. However, as with everything, good architecture is not enough; it is only realized through its implementation.

Independence in Automated Tests June 9, 2013 at 18:33

In a previous post (not true, not written yet), I outlined some of the properties an automated test must have in order to be considered “well structured”. In this post I’ll dig into the concept of independent tests a bit further.

Even though most developers by now know that tests should be written so that they are independent of each other, I’m not sure they have realized the full implication of that statement. Let me explain what I mean.

For an automated test to be considered independent, it should not only avoid relying on state created in previous test cases, allowing the test code to be moved or refactored without affecting other tests; it should also be practically possible to execute any test case in any order – without rewriting the tests.

The implication of this is that test suite creation must be completely separate from how we construct and write our tests. It is not okay to be forced to execute all test cases in a file. It is not okay to be forced to include all files in a directory in a given test suite.

The consequence of this is that the test runner must be able to take a suite description that allows for execution of individual test cases, even if the test case is stored in a file with a bunch of other test cases, and the file is stored in a directory with a bunch of other test files.

Further, the development environment itself must be flexible enough to allow easy, temporary modification of the test selection, for example through a command-line argument:

$> make run-tests                                                          # runs all test cases
$> make run-tests TEST="monkey_can_peel_banana, fish_cannot_peel_banana"   # runs specific test cases

Why is this important?

I, as a user of a test framework, would go bananas if I were forced to run all test cases every time. Yes, we strive for quick-running tests, but quality cannot be achieved with unit tests alone. We also need higher-level tests such as integration tests, component tests, system tests, feature acceptance tests, stress tests… and they can take time.

Secondly, if we cannot execute test cases in different contexts, we are bound to copy-paste the tests or not run them at all. For example, test case TC-1045 may be part of both customer A’s acceptance test suite and customer B’s. We don’t want to be forced to copy the test code just because the framework doesn’t allow the same test case to be run in two different suites.

This has implications for how we structure our tests. Test fixtures become more important, since the execution context may vary more widely. It also puts greater strain on the test framework itself. The proper interpretation of setup/teardown, before/after-suite and other such event hooks becomes very important.
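
To illustrate, here is a minimal sketch of what this can look like in an NUnit-style C# test class (the Zoo helper and the test names are made up for the example, echoing the make targets above): each test case builds and tears down its own fixture, so a runner can pick any single test, in any order, without depending on state left behind by another test.

using System.Collections.Generic;
using NUnit.Framework;

// Tiny stand-in domain so the sketch is self-contained.
public class Zoo
{
    private readonly Dictionary<string, bool> _canPeel = new Dictionary<string, bool>();
    public void Add(string name, bool canPeelBananas) { _canPeel[name] = canPeelBananas; }
    public bool CanPeelBanana(string name) { return _canPeel[name]; }
}

[TestFixture]
public class BananaPeelingTests
{
    private Zoo _zoo;   // fixture owned by this test class only

    [SetUp]
    public void CreateFixture()
    {
        // Runs before every test: each test starts from a fresh, fully known state.
        _zoo = new Zoo();
        _zoo.Add("monkey", canPeelBananas: true);
        _zoo.Add("fish", canPeelBananas: false);
    }

    [TearDown]
    public void DestroyFixture()
    {
        // Runs after every test: nothing leaks into the next test case.
        _zoo = null;
    }

    [Test]
    public void monkey_can_peel_banana()
    {
        Assert.IsTrue(_zoo.CanPeelBanana("monkey"));
    }

    [Test]
    public void fish_cannot_peel_banana()
    {
        Assert.IsFalse(_zoo.CanPeelBanana("fish"));
    }
}

With tests structured this way, a suite description can reference monkey_can_peel_banana on its own, and most runners can select it by name without dragging the rest of the file along.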

In summary, independence does not only mean independence in the code; it also means practical independence in test execution.

Metaphors in Software are Heuristics, not Algorithms at 11:08

A software system should be based around a model, a metaphor that helps the developers think about the system’s functionality and how to construct its features. However, it is important to realize that the metaphor is not an algorithm that can predict exactly how the system should be constructed. It gives hints; it provides a general direction, a tone, for what to do – the metaphor only provides heuristics for constructing the system!

Windows Live Mail Stuck in Working Offline Mode May 11, 2013 at 00:45

I ran into the weirdest problem just now. Windows Live Mail (WLM) would not get out of working-offline mode! It didn’t matter how many times I pressed send/receive and confirmed that I wanted to go online, nor did it help to press the button that switches directly to online mode. Extremely annoying!

I did a quick search and found the answer pretty quickly in this posting:

http://answers.microsoft.com/en-us/windowslive/forum/livemail-program/windows-live-mail-is-stuck-in-working-offline/a395b127-ef3c-42d2-88c4-e6a9d1284d09

The solution, for those who can’t be bothered to follow the link:

The work-offline mode in WLM is connected to, and overridden by, the corresponding setting in Internet Explorer (IE). If you switch to offline mode in WLM, IE also goes offline. However, the other way around is not true: if you try to leave offline mode through WLM, it will not work. You have to first close WLM, then open IE and toggle the work-offline setting so that it shows you as online. It might show online to begin with; in that case, toggle it back and forth.

Windows 7, Internet Explorer 10, Windows Live Mail 2011

Continuous Integration: A Mindset, part II February 16, 2012 at 20:00

I received a couple of excellent questions/concerns about CI in a comment from Roger that I will try to address.

“How about human testing/QA? If team or feature branches are used, the QA department has a chance testing PBIs isolated before integrating with mainline which, hopefully, leads to more stable mainline.”

Ideally the QA department should not concern itself with testing in the traditional sense. The role of the QA department, in my view, is to support the development teams by improving the testing tools, maintaining the resources required for automatic testing, helping to improve strategies, and so on. QA should concern itself with making sure that the process works, that the tests get run and that the tests are sound. They should do very little testing themselves.

All tests should be automated. Not only unit tests, but integration tests, load tests and acceptance tests as well.

That was an intentional lie. There are two types of tests humans should do and that can be performed by the QA-department:

  • Exploratory testing
  • Verification of aesthetics

However, nothing says that these activities must be done in isolation. I would prefer that the test specialists worked in the design teams and performed the testing together with them.

CI requires another thing: Every designer should be able to run most of the tests themselves from their local copy.

If you do work that is likely to have a big impact on the system, run some smoke/integration tests first, before you commit to main.

Alongside, or prior to, the development of a new feature, automated function tests and acceptance tests should be written. Obviously they will fail until the feature is complete. You don’t want these new tests to halt the build/deployment, so you need some kind of mechanism that allows the developers to run against these acceptance tests while preventing anyone from turning the feature on. If your testing framework and CI engine don’t allow you to easily exclude tests that are expected to fail, I could agree to putting the test code on a “feature branch”. The reason I would allow this is that tests, if correctly written, are completely isolated and require no integration.
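
As a sketch of what such a mechanism could look like (assuming an NUnit-style framework; the category name, feature flag and test are invented for the example), the not-yet-passing acceptance tests are tagged with a category that the CI gate excludes, and the feature itself sits behind a flag that is off by default:

using NUnit.Framework;

// Hypothetical feature flag: the feature is integrated but stays off until it is ready for market.
public static class Features
{
    public static bool NewReportingEnabled =
        string.Equals(System.Environment.GetEnvironmentVariable("ENABLE_NEW_REPORTING"), "1");
}

[TestFixture]
public class NewReportingAcceptanceTests
{
    [Test]
    [Category("FeatureInProgress")]   // the CI gate excludes this category until the feature is done
    public void report_totals_match_invoices()
    {
        if (!Features.NewReportingEnabled)
            Assert.Ignore("New reporting feature is switched off.");

        // ...acceptance criteria for the finished feature go here...
    }
}

Developers can run the FeatureInProgress category locally to measure progress against the acceptance criteria, while the main build stays green.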

If you have commits that keep breaking main in a show-stopping fashion, then you have the wrong set of tests and the wrong mindset. CI requires discipline, more so in the daily work than waterfall does, because you will be called on it every time you slip and break something. In waterfall, you get away with doing the wrong things for much longer, until it comes to crunch time and all the ugliness becomes apparent – or worse, does not show itself until it is running in the customers’ systems, causing havoc.

“Also, if new stuff is too unstable or there has been misunderstanding about functionality, it doesn’t have to be integrated at all.”

If there are uncertainties: discuss, do pretotypes (pretotyping.org/), do internal demos. If a commit breaks the build, undo it. Also, new features should have a feature lock preventing them from showing up in the delivered product until the full feature is completed and ready for market. Large features may take months to develop to completion. Should you hold off the integration for that long? Of course not! Even if there are concerns with CI, the benefits exceed the drawbacks many times over.

You will have issues with CI and it will have drawbacks, but the alternative is far worse. So even if you don’t feel that I have addressed the stated concerns successfully, I would still argue that these concerns are small compared to the issues that CI does solve.

Current Best Thinking January 28, 2012 at 12:33

My new favorite phrase is:

“Current Best Thinking”

I can’t take credit for it unfortunately. I heard it from a co-worker who had gotten it from somewhere else.

It is inspiring as well as truthful. It expresses the insight that we will act according to our best knowledge, as we understand the world today. But we will continue to pursue a better understanding, think more, and evolve our practices as we learn more about the world.

Excellent

Continuous Integration: A Mindset at 12:18

I attended a round-table discussion on CI (Continuous Integration) the other day – and it prompted me to write this post.

There seems to be some confusion about CI. Some have chosen to let integration mean the interaction of two or more software components, and hence conclude that integration only takes place during test execution.

Unfortunately, with such a limited understanding of what CI is, you lose the big picture. CI is not only the triggered build and test execution on every check-in. That is only the safeguard that makes it possible to perform the actual activity: continuously integrating others’ work with your own.

I’ll say that again: continuously integrate others’ work with your own.

That means you do not have team branches or feature branches; you keep it to one track as much as the product allows you to. Every change in the system should propagate to you more or less immediately.

Yes, you need builds and tests to ensure that your code is always working, but the integration is not the integration of software modules or components; it is the integration of your work with other people’s work.

Someone might get irritated by the use of the word work here. Why isn’t he saying code if that is what he means? Why not say that you continuously integrate other developers’ code with your own? Well, first off, “your code” and “others’ code” imply code ownership. In an agile environment, you don’t own code; it is a communal responsibility.

Secondly, it is not just code that requires integration. It also applies to toolsets, platforms, hardware, environments, configurations; every aspect of the software development process.

The concept of continuous adaptation, or really continuous adoption, should be your mindset.

Using this practice, some will experience that everything changes underneath them, that they constantly need to change small aspects of their code because of others’ changes. It becomes tedious, and your own progress feels slow because you have to take everyone else’s changes into account. You might even long for the good old days when you could work without a care for a month and then hand off your work to the integrator, who magically just made things work together.

First of all, the integrator’s job in those situations was never easy. They often spent late nights getting your and others’ crap to work properly. So even if it was the good old days for you, it wasn’t for them. Secondly, it was an error-prone process. It created a mess that you usually ended up having to fix. So if you remember the frustrating debugging sessions due to obscure errors, it isn’t so much the good ol’ days anymore.

Despite all of this, if you still find yourself longing for the old ways, it can be helpful to realize this:

Decoupling of dependencies should always be done by architecture and design in the code – never by black magic in the version control system or by configuration management.

If you are in a situation where you use the version control system to reduce the impact of changes, to create isolation between modules, this is a symptom of poor architectural design, which needs to be addressed. DO NOT hide the problem with configuration management.
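
As a small, invented illustration of what decoupling in the code means: give the volatile part a stable interface and let the rest of the system depend only on that. Everyone can then keep integrating on one track, because a change behind the interface does not ripple outwards, and no branch gymnastics are needed to shield other teams from it.

// Stable seam agreed on by the teams; this is what other code depends on.
public interface IPricingEngine
{
    decimal PriceFor(string productId, int quantity);
}

// The team reworking pricing can change this freely on the same trunk,
// as long as the interface contract holds.
public class StandardPricingEngine : IPricingEngine
{
    public decimal PriceFor(string productId, int quantity)
    {
        return 9.99m * quantity;   // placeholder pricing rule
    }
}

// Consumers never see the implementation, so they are unaffected by its churn.
public class OrderService
{
    private readonly IPricingEngine _pricing;
    public OrderService(IPricingEngine pricing) { _pricing = pricing; }

    public decimal Total(string productId, int quantity)
    {
        return _pricing.PriceFor(productId, quantity);
    }
}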

In conclusion, the mindset of continuous integration is not just that of having automatic builds and automatic tests. It is the process of pulling in and integrating other developers’ work with your own. Decoupling of subsystems and modules must be done through architecture and design in the code, never through configuration management.

The When, What and How of Code December 21, 2011 at 00:10

When partitioning your code it is important to consider the when, the what and the how. It can decide if your program will be well structured or become a tangled mess. These three aspects should be separated whenever possible. It is a strategy for thinking about dependencies and separation of concerns.

The When

An action could be triggered by an event or signal: a click on a button in a user interface, the call to a command-line tool, receiving a message on a socket. The infrastructure that holds and decides when the what is performed should be kept separate whenever possible.

Further, the when should have no direct knowledge of the what, so invocation should be done using hooks and other means of abstraction. This results in a sound infrastructure, isolated from the specific problem domain, lending itself to reuse and to easier modification through extension.

This can imply different registration mechanisms, where the application construction wires/glues the application together by instantiating and registering the functions that perform the what, to be called when specific triggers fire. This makes the infrastructural framework core the decision-maker of the when.
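
A minimal sketch of such a registration mechanism (all names invented for the example): the framework core knows only about triggers and registered handlers, i.e. the when, while the application wiring supplies the what.

using System;
using System.Collections.Generic;

// The "when": a tiny framework core that decides when registered actions run.
// It has no knowledge of what the actions actually do.
public class EventHub
{
    private readonly Dictionary<string, List<Action>> _handlers =
        new Dictionary<string, List<Action>>();

    public void Register(string trigger, Action handler)
    {
        if (!_handlers.ContainsKey(trigger))
            _handlers[trigger] = new List<Action>();
        _handlers[trigger].Add(handler);
    }

    public void Raise(string trigger)
    {
        List<Action> handlers;
        if (!_handlers.TryGetValue(trigger, out handlers)) return;
        foreach (var handler in handlers)
            handler();   // the what is invoked through the hook, nothing more
    }
}

public static class ApplicationWiring
{
    public static void Main()
    {
        var hub = new EventHub();

        // Application construction glues the what to the when.
        hub.Register("employee-saved", () => Console.WriteLine("Send confirmation mail"));
        hub.Register("employee-saved", () => Console.WriteLine("Update search index"));

        hub.Raise("employee-saved");   // the core decides when this happens
    }
}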

The What

What should be done! Think of it as a recipe for solving the problem at hand or creating whatever it is you are trying to accomplish. It is expressed in terms of abstractions. You shouldn’t be concerned with how, only with what should be done.

This is usually placed in service objects/services. It could be the EditEmployee method that restores an employee object from persisted data, modifies it using accessors on the employee object and then writes the changes back to the data storage. It does not concern itself with how Employee objects are restored, how they are edited or how they are saved; only that these things should be done as part of the Edit operation.

This allows the higher-level logic to be expressed in terms of abstractions that are consistent and cohesive, that can make use of reusable objects, and that make it easier to understand the intent of the expressed functionality.
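
A sketch of what such a service could look like, using the EditEmployee example from above (the IEmployeeStore abstraction and the Title accessor are invented for the illustration): the service spells out what happens, in terms of abstractions, and says nothing about how.

// The "what": a recipe expressed against abstractions only.
public class EmployeeService
{
    private readonly IEmployeeStore _store;   // abstraction; no knowledge of the how

    public EmployeeService(IEmployeeStore store) { _store = store; }

    public void EditEmployee(int employeeId, string newTitle)
    {
        var employee = _store.Restore(employeeId);   // what: restore from persisted data
        employee.Title = newTitle;                   // what: modify through accessors
        _store.Save(employee);                       // what: write the changes back
    }
}

public interface IEmployeeStore
{
    Employee Restore(int employeeId);
    void Save(Employee employee);
}

public class Employee
{
    public int Id { get; set; }
    public string Title { get; set; }
}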

The How

These are the low-level classes that deal with concrete implementations of the lowest building blocks of the application. It could be the EmployeeDB class that knows how to persist an Employee instance to the database/persistence layer. These are tiny building blocks that are easy to test, easy to understand, easy to replace and easy to reuse.
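
And a matching sketch of the how, implementing the IEmployeeStore abstraction from the previous sketch (a dictionary stands in for the real database so the example stays self-contained):

using System.Collections.Generic;

// The "how": a concrete, low-level building block behind the IEmployeeStore abstraction.
public class EmployeeDB : IEmployeeStore
{
    // Stand-in for the real database/persistence layer.
    private readonly Dictionary<int, Employee> _table = new Dictionary<int, Employee>();

    public Employee Restore(int employeeId)
    {
        // In a real implementation: query the database and map the row to an Employee.
        return _table[employeeId];
    }

    public void Save(Employee employee)
    {
        // In a real implementation: issue an INSERT/UPDATE against the database.
        _table[employee.Id] = employee;
    }
}

Wired together as new EmployeeService(new EmployeeDB()), the what never changes if EmployeeDB is later swapped for another store.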

The Test Framework – An Example

In order to keep a testing framework clean and simple to use, you should separate your whens, whats and hows. The core framework should deal with the when. The test runner dictates when a test case is run, when its before-method and after-method (setup/teardown) hooks are executed, and when the procedure for recovery should be invoked. However, the core framework should not concern itself with what it means to recover and clean up; it should only provide hooks that allow other code to inject the what.

The test cases should hold the what. A test case decides what is created (setup/before-test-method), what is destroyed upon cleanup (teardown/after-test-method) and what is executed as part of the test. It does not, however, concern itself with the how.

The how is taken care of by the production code and, if needed to keep the test-case code clean, by helper libraries that raise the level of abstraction in the scope of a test case. The helper libraries should not be part of the framework.
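
A sketch of how the core framework can stay a pure when (names invented for the example): it owns only the ordering of the hooks and the recovery path, while the test case injects the what through delegates.

using System;

// The "when" of the test framework: it owns ordering and recovery, nothing else.
public class TestRunner
{
    public void Run(TestCase test)
    {
        try
        {
            test.Before?.Invoke();   // when: before the test body
            test.Body();             // when: the test itself
        }
        catch (Exception)
        {
            test.Recover?.Invoke();  // when: recovery, without knowing what recovery means
            throw;
        }
        finally
        {
            test.After?.Invoke();    // when: cleanup hook
        }
    }
}

// The "what" is injected by the test case through these hooks.
public class TestCase
{
    public Action Before;
    public Action Body;
    public Action After;
    public Action Recover;
}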

This might seem like an overzealous approach to separation of concerns. However, if it is not abided by, it becomes all too easy to put too much functionality into the framework, making it harder to reuse and forcing you to create mechanisms for making exceptions for special behavior.

.NET Class Libraries: Application Settings, Config-File October 23, 2011 at 14:53

I’m getting to a point where my programming limitations (skill-wise) lie in the subtleties of different languages and platforms rather than in general concepts, algorithms and design. It is a nice feeling to be proficient in what you do, but oh so frustrating when you stumble on the small exceptions and quirks in a language or platform. This weekend it was C# and .NET class libraries. Apparently, a class library ignores its LIB.dll.config file. It retains the default values it was compiled with, no matter what you write in the file, or whether it is even there. Very frustrating when you are writing a plug-in architecture with configurable plug-ins. Not to mention all the time spent trying to figure out why the damn plug-in wouldn’t accept my changes in the config file.

So now you know! The application settings model does not work with class libraries. You can still use the settings at design time, but once you ship, the content of the config file will not matter. If you want your class libraries to be configurable, you have to roll your own. I suggest reading and parsing the application config file, since it is there anyway – just do it yourself. If I bother to do this for my application, I’ll try to write a general wrapper class and publish it here.
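
Here is a minimal sketch of the roll-your-own approach I have in mind (the PluginSettings class and the key name are made up for the example): the library reads its values from the hosting application’s config file through ConfigurationManager and falls back to a compiled-in default when the key is missing.

using System.Configuration;   // add a reference to System.Configuration.dll

// Inside the class library: read from the *application's* config file,
// since LIB.dll.config is ignored at runtime anyway.
public static class PluginSettings
{
    public static string Get(string key, string defaultValue)
    {
        // Looks in <appSettings> of the hosting application's .exe.config.
        var value = ConfigurationManager.AppSettings[key];
        return string.IsNullOrEmpty(value) ? defaultValue : value;
    }
}

// Usage inside the plug-in:
//   var endpoint = PluginSettings.Get("MyPlugin.Endpoint", "http://localhost:8080");
//
// And in the host application's App.config:
//   <appSettings>
//     <add key="MyPlugin.Endpoint" value="http://example.com/api" />
//   </appSettings>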

GNU Make: Exporting Environment Variables at 14:38

I’m still learning the finer arts of writing Makefiles. Just this weekend I learned a subtlety when it comes to the export keyword – it doesn’t work within recipes. As it turns out, export var=value cannot be part of a recipe but must be written in the general area of the Makefile. It took me a while to figure this out. I can’t remember if I found it in a side note in the GNU Make handbook or on a forum, but it wasn’t well advertised. I visited close to a dozen forums, none mentioning it. So there you go.

A second thing that can throw off the export command is the shell that Make uses to execute the shell commands. I was advised to set the SHELL variable at the top of the Makefile to whatever shell I was using, to get the syntax right. I’m not sure whether this had an effect in my case.

For those not familiar with the export keyword: it is used to export a variable to the environment, and hence make it visible to any program executed, e.g. in a recipe.

Update:

There is a way to export a variable just for a specific recipe. It looks something like this:
target: export VAR += value

The same form can also be used to just update a variable locally for that target, without exporting it:
target: VAR += value

I found it a while back, in the GNU Make handbook, even though I can’t remember the exact place.