By Pia

Built-in quality: a question of architecture

Picture by Soloman Soh

I am delighted to have my Zühlke colleague and experienced software architect, Marcel Stalder, as a guest for this blog post series.

Marcel lives and works in Switzerland. He has been developing technical solutions since his teenage years, first in electronics and later in software. After graduating in computer science, he began his career as an enthusiastic software engineer, architect and consultant. Over the past 22 years, Marcel has proven and developed himself as an external consultant and internal expert on many exciting projects at companies across a wide range of business areas. Almost six years ago, he continued his journey with Zühlke, again on several projects and in different roles, currently as Principal Software Architect. Marcel's current focus is on projects in the financial sector with banks and insurance companies. One of his hobbies is hiking: on the one hand a big contrast to his job, but also with similarities, such as setting a route to a destination and staying motivated and persistent.

(The following conversation has been automatically translated from German.)


Welcome, Marcel! Thank you for taking the time to talk to me about software quality today.

We both know each other from Zühlke, where we met on a very exciting project at a large Swiss bank. Among other things, I was able to support an agile team in which you had recently worked as an architect. There you introduced Behaviour Driven Development (BDD).

I experienced you as a generalist who was always interested in the specialist context. It was also important to you to know what is relevant for testing and what good quality means in the respective product. A clean technical basis, good architecture and clean code were obligatory components.

And so it is that we are sitting here today and sharing your experiences with my readers.

What does good software quality and built-in quality mean to you?


For me, built-in quality means focussing on quality attributes right from the start and not just in a test phase at the end. This means that we start with the requirements and consider where the quality aspects are. On the one hand with the functional requirements, but also with the non-functional requirements (NFRs). What should the system do in special cases? How fast should it be? What happens in the event of an error, etc.?

For me as a solution architect, this means that I have to understand the business and the requirements in detail in order to be able to develop an architecture and a design and to realise the implementation.

It is also important that quality is not a one-off thing that I tick off and that's it. We have to check quality on an ongoing basis. Keyword: regression tests. Requirements also change. This has an impact on the regression tests, which should ideally be updated and executed automatically. Keyword: Living Documentation (see BDD, Specification by Example).


In my experience so far, you are unfortunately still the exception among software engineers in terms of your interest in the business. It's clear to you that you need to understand the subject. Not everyone sees it that way. Many focus strongly on the technical aspects and have no desire to deal with the business context.

Have you always been interested in the business or has your interest developed over time?


I have always been interested in the business side of things. When I am employed by customers or directly by a company, I want to understand the business model. I want to understand what makes the company tick and where the challenges lie. Also based on the IT strategy: where is the company heading?

I have also seen that focussing too much on technology reduces quality. You then don't ask certain questions about quality aspects. You assume that the business already knows. The business tells you what you need to do. You implement it, and at the end you find out that something is missing, that there are gaps. Things that you haven't thought of.

I have found for myself that it helps if you, as a layperson, challenge the business a little: "You talk about this term... what exactly does that mean?" This goes in the direction of Domain Driven Design and the Ubiquitous Language. To establish the same understanding, a common language is extremely important.

People often talk at a high level. For example, someone writes "I would like to activate the customer" in a Word document. But what does "customer" mean? What does "activate" mean?

It's not always easy for the experts. After all, they know (or think they know) what is meant. It's funny when you find out that even the experts themselves have different understandings of their "obviously simple" terms.


Yes, I have experienced that too.

You are someone who is not afraid to do requirements engineering or business analysis, especially if it promotes mutual understanding and thus product quality. You spontaneously slip into these roles without actively labelling them as such. Is that part of your job as an architect?


Yes, for me this is part of my job. As background: I hate repetitive work. I don't like doing the same thing multiple times, and I don't like touching the same code multiple times either. If you need x iterations until you finally reach your goal, that bothers me personally. That's why I try to establish a common understanding from the outset, to get to the solution as quickly and easily as possible.

But that does not mean lots of up-front design. It should remain agile and iterative. I don't want to spend half a year writing specifications only to find out that not all cases are covered. You should clarify the situation as quickly as possible, start with the implementation and then build on it.


Absolutely. It is important to remain agile in order to be able to react quickly to market needs.

Do you have a tip on how to motivate software developers to take more interest in the business context?


Good question... I think it depends a lot on the personality. If someone already has this motivation, it's easier. When I meet people who tell me "I am actually not interested in this specialised domain... I just want to write cool software here", that's a contradiction for me. But these are just different mindsets.

What I could imagine is that explaining the domain a bit at the start helps, me at least. So not "build a feature in there... make a new field on the UI", but explaining what we actually do, what the big picture is, why we do it, what the challenges are, etc. Every domain has exciting things that you can learn. That helps me to say, "Hey, this is actually a cool thing... Not everything is exciting, but there are exciting areas."

If you know the challenges, it is much easier than if you are only told, "Just adjust this code".


I used to constantly have the feeling that I needed more technical knowledge. I still sometimes feel that I have to be able to do the hands-on part when I talk to people with a lot of technical knowledge, in order to be allowed to talk about certain things. You've just disproved that.

It helps me a lot to focus on what the business context is. What is exciting in this domain? What happens in the end with the software that we develop?


I think it does help to know the other side, and you know the other side very well. That's why I find interdisciplinary teams very exciting and appropriate. The mutual, interdisciplinary exchange is very enriching.

It may initially take a little more effort to understand how the respective disciplines work. What is the domain, how does the technology work, how do we test, what are the quality attributes? But as soon as you have figured this out in the team, you can move forward much faster and with fewer explanations and words. You already know exactly what the other person needs and what he or she values.


What experiences have you had with test or QA specialists during your career? What should collaboration with people in testing or QA roles look like?


The old procedure, let's call it waterfall, where someone writes the requirements, someone else implements them, someone else tests the product and someone else runs it - I don't like that at all. It is very inflexible and there is little dialogue. There are quality gates, and everyone just makes sure that their own phase is completed.

When I was in my first Scrum team, interdisciplinary and also with a tester, that was a completely new experience. You already have someone at the Daily who asks critical questions. That helps the development. You constantly hear how someone who focusses on quality thinks. That's why I would say that even in an interdisciplinary team, one person should focus on quality. This could be a QA specialist, but it could also be an architect or a developer. The important thing is that he or she actually does it.

And as we both know, in big enterprises there is typically still a downstream team that carries out system integration tests with tools such as Tosca. I find that valuable to a certain extent because it brings in another dimension. You can focus on one system or one application, and there is someone else who tests across the entire landscape. But I would also welcome it if these tests could be integrated. This would allow me to test the entire process directly during implementation.


That's my big dream too. Let's keep our fingers crossed that it comes true soon!

People who have been working in testing for a long time often find it difficult to switch to an agile team. Their role used to be clear. They were at the end, the ones who had to say whether the software was good enough for the release. There was a lot of responsibility on the shoulders of these people.

Today, we want to have one person with a focus on quality in the team. But that doesn't mean that this person has to test everything alone. Instead, the work is divided up and the QA role ensures that quality is built in and the testing is done.

What could help testers find their role in an agile team? How can you allay any fears they may have?


For me, there are two things. One is that as a tester, you are no longer at the end of the process, but start your work earlier. The other is the shorter release cycles.

For the first part, I would try to take away the person's fear and explain: "You are actually still doing the same thing as before: you think about how to ensure quality, how to test something, what test cases there are, etc. But you have additional options for implementing this. You are in a team and can, for example, give a developer something to automate. You have a better overview of what is being tested, and of how and where it is being tested. You are no longer solely responsible; you have a team."

So there are exciting opportunities for testers in an agile setup. Above all, the fact that they are involved much earlier and are closer to the actual implementation is a very positive effect for me.

The second is the shorter release cycles. You don't stand a chance if you want to test manually. What you used to test in a month, you now have to do in a week. That's not possible. And simply automating more at the application level only goes so far. To achieve this, the process has to change. The person and the work content will also change.


I have noticed that many testers feel that they have to learn to automate. They think they have to be able to do this now, because otherwise they won't have a job.

I believe that being able to automate is nice to have, but it's not a mandatory skill for testers. Today you are in an interdisciplinary team. The focus of testers is on bringing testing expertise into the team: how can we test as smartly as possible, and at which levels? I can hand over the automation of test scripts to team colleagues.


Yes, I agree with that. And there are intermediate paths such as BDD, where test automation directly from the requirements becomes possible. This means that entire UIs can be tested automatically. The test cases can easily be written by people who are not familiar with the technology behind them.


Exactly. Let's dive straight into our former project here.

You worked there in an agile team, in which I was later employed as a tester, and introduced Behaviour Driven Development. That really helped us to improve our test coverage, especially at the lower test levels.

How did you go about it? How did you get the team excited about BDD? How did you get the product owner on board?


It was an iterative process. I started with a proof of concept to show the PO and the whole team what such a feature file could look like and how a test could be implemented. I put together a few slides and presented them to the team. It's really no rocket science. I showed what the tooling looks like and that you can easily jump between steps and bindings. The feature file is easy to read for all disciplines. Technically, it is easy to execute and, in our example, already ran in the regular unit test set. And then we started with the first requirements. We deliberately did not let the PO do the entire definition but supported him from the start. Our developers recorded a requirement, created the corresponding feature file and fed it back to the PO: "I understood it like this. Is this what you want?" This is how we iteratively established BDD in our team. In terms of the domain, it was also a good fit.

It was a rule engine with many rules, special cases and exceptions. This had to be recorded somewhere. We didn't want to have our PO write this down in a Word file, but wanted to be able to use the requirements directly for our tests.
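The idea of executable, readable scenarios can be sketched without any BDD framework. The following is a minimal, library-free Python sketch: the rule, the scenario text and all step names are illustrative, not taken from the actual project.

```python
import re

# Hypothetical rule under test: a tiny "rule engine" decision that
# blocks activation of customers flagged as dormant.
def can_activate(customer):
    return customer.get("status") != "dormant"

# The scenario reads like a Gherkin feature file: plain language a PO
# can review, yet executable at unit-test speed.
SCENARIO = """
Given a customer with status "dormant"
When I try to activate the customer
Then the activation is rejected
"""

STEPS = {}

def step(pattern):
    """Register a step binding; regex groups become step arguments."""
    def register(fn):
        STEPS[pattern] = fn
        return fn
    return register

@step(r'Given a customer with status "(\w+)"')
def given_customer(ctx, status):
    ctx["customer"] = {"status": status}

@step(r"When I try to activate the customer")
def when_activate(ctx):
    ctx["allowed"] = can_activate(ctx["customer"])

@step(r"Then the activation is rejected")
def then_rejected(ctx):
    assert ctx["allowed"] is False

def run_scenario(text):
    """Match each scenario line to a binding and execute it in order."""
    ctx = {}
    for line in filter(None, (l.strip() for l in text.splitlines())):
        for pattern, fn in STEPS.items():
            match = re.fullmatch(pattern, line)
            if match:
                fn(ctx, *match.groups())
                break
        else:
            raise ValueError(f"no binding for step: {line}")
    return ctx

run_scenario(SCENARIO)
```

In a real setup a tool such as Cucumber or SpecFlow provides the step matching and the jump between steps and bindings; the point here is only that the scenario text doubles as both documentation and test input.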


That's right and, very importantly, the domain must fit in any case.

When I tell this story, many look on in disbelief, because they think that BDD and the Gherkin syntax are about scenarios that have to take place on the graphical user interface. But this is not the case. They are used to describe the behaviour of a system. If it makes sense, this description can already be used for unit tests.

That was extremely helpful for me and the entire team. I was able to support them with my expertise even at the lower test levels. It's easy to read and understand. And I thought it was cool to see inside the code. That helped me better understand our application technically.


You are right. Typically, BDD is associated with Selenium, where the application is clicked through on the UI.

You have to be able to abstract here. You have to define which functionality is tested where. A unit test is simple and runs fast. The developer changes something and has the test result a few seconds later. That helps immensely.

Trying to test all business logic via the UI is not only outdated but generally not even possible. That's why I say: make it modular, break it down and define which functionality you want to test at which level.


One question that is certainly on many people's minds: what do we do if we have many dependencies on peripheral systems? What can we test earlier, and what do we still have to test in real system integration?


This is one of those things with the UAT (UAT = User Acceptance Test, here: the last test environment before the production environment). It is often said that data quality is poor at the lower levels. This problem must of course be tackled, and meaningful data, synthetic or anonymised, must be provided at all test levels and environments. I would always try to test as much as possible, as early as possible.

You can test business logic early on. Interfaces too. If this is not possible, then you simply simulate the interfaces. And then run the final tests against the real interfaces in an integration environment, ideally automated.
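Simulating an interface can be as small as a test double. The following is a minimal Python sketch using the standard library's `unittest.mock`; the gateway class and method names are hypothetical, invented for illustration.

```python
from unittest import mock

# Hypothetical gateway to a peripheral system. In the real landscape
# this would be a remote call, only reachable in an integration environment.
class AccountGateway:
    def fetch_balance(self, account_id: str) -> int:
        raise NotImplementedError("only reachable in the integration environment")

# Business logic under test: it depends on the interface, not the system.
def is_overdrawn(gateway: AccountGateway, account_id: str) -> bool:
    return gateway.fetch_balance(account_id) < 0

# Simulate the interface so the logic can be tested early and fast.
# `spec=` ensures the mock only allows methods the real interface has.
gateway = mock.Mock(spec=AccountGateway)
gateway.fetch_balance.return_value = -50

print(is_overdrawn(gateway, "CH-123"))  # the simulated balance is negative
```

The final run against the real `AccountGateway` then happens in the integration environment; only the wiring changes, not the tests of the business logic.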

UAT could then be the final flourish, where someone from the business clicks through and says "yes, it fits". This person no longer needs to test in detail. The requirements have already been tested.


What do you think of the approach of going live as quickly as possible and testing in the production environment?


Good question. That fits in with what I wanted to mention earlier: why are testers the way they are? I think this is also due to a suboptimal error culture in the company. Often, when an error occurs in production, people look for someone to blame. Who didn't test? Who caused it? This is of course not helpful and can lead to going to production being postponed for as long as possible. I believe the solution is a changed error culture. You should also deploy and release more frequently. Fix forward when an error was not found earlier: fix it quickly, re-test, deploy to production, and the problem is solved.

Of course there are exceptions. If your company appears in the press because your e-banking didn't work or incorrect account balances were displayed, it's obviously bad.

What you can do here is define different quadrants or criticality levels for business functionalities. Certain things need to be checked in detail. That takes time. Other areas of the application are less critical; if there is an error there, it matters less. The alternative would be to run regression tests on everything: everything has to be tested, everything has to be right. No, it does not! There are critical areas and there are less critical areas.


Risk-based testing has been with me since day one of my testing career. It is extremely important to make the best possible use of the resources available.

You mentioned that you should deploy more often. Deploying is not the same as releasing. What is your experience with feature toggles?

Unfortunately, I have seen many of these installed but never removed.


Feature toggles are helpful, but yes, you have to remove them again. Feature toggles help you separate release from deployment. Unfortunately, for many people these are the same thing.

With long release cycles, when a release goes live only every few months, it is also difficult with feature toggles. However, feature toggles definitely help with short release cycles, e.g. to switch functionalities on and off in different environments and to test them early.

What I have also often seen in the past is that people forget about the feature toggle. You have activated it in the lower environments and everything runs perfectly. Then it goes to UAT and into production, and then comes the big question: "Why isn't this feature working? Has anyone switched on the toggle?"

The difficulty is keeping an overview. What is deployed in which environment, and in what state? What is active, what is inactive? Furthermore, the story for removing a toggle should be created at the same time as the toggle itself. A good time for the removal is the next release, once the feature has worked successfully for one release cycle.
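One way to make forgotten toggles surface loudly is to attach a planned removal date to each one. The following is a minimal Python sketch; the toggle name and dates are illustrative, not from any real project.

```python
from datetime import date

# Minimal toggle registry. Each toggle carries a removal deadline,
# created at the same time as the toggle (and its removal story).
TOGGLES = {
    "new-activation-flow": {"enabled": True, "remove_by": date(2024, 6, 1)},
}

def is_enabled(name: str, today: date) -> bool:
    """Look up a toggle, but refuse to keep a stale toggle alive."""
    toggle = TOGGLES[name]
    if today > toggle["remove_by"]:
        # Past its planned removal date: fail the test run or build
        # instead of silently carrying the toggle forward.
        raise RuntimeError(f"feature toggle '{name}' should have been removed")
    return toggle["enabled"]

# Before the deadline the toggle behaves normally...
print(is_enabled("new-activation-flow", date(2024, 5, 1)))
```

Checked in a test suite, this turns "has anyone removed the toggle?" from a question someone must remember to ask into a failing build.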


Exactly, otherwise it can quickly become very confusing.

One reason for feature toggles is certainly to minimise the difference between the code and release states. In the past, these used to drift apart for months, and in the end it all blew up.


This is also an important point. If you work with feature branches, you usually can't test them separately but have to deploy them somewhere. However, you don't have a separate test environment for each feature. At some point, you have to decide whether to merge a feature into the release or not. If it is merged and then should not be included in the release after all, it gets difficult: you have to revert it at great expense. In such cases, a feature toggle helps, which you can simply deactivate if necessary. This provides additional security and more flexibility.


From your more than 20 years of experience, what are the most important success factors for good software quality and, above all, for ensuring that quality is a priority from the outset?


  1. Critically scrutinise requirements and also think about special cases. Don't just cover the happy case.

  2. Think like a tester. Developers love writing code, but they should also think more about how they can test a feature. Statements like "this is not testable" or "the effort would be too high" are unacceptable!

  3. A system or application should be designed to be testable. This starts with the architecture and ends with the implementation of functionality. You need to consider how the implementations are structured (separation of concerns) so that they can be tested quickly and easily. Keyword: Onion, Hexagonal and Clean Architecture.

  4. Automation. Tests should provide fast feedback on changes and extensions for new and existing functionality.

  5. Check out Behaviour Driven Development with a focus on Living Documentation. Define requirements in such a way that they are machine-readable and can be used for test automation. This means you always have up-to-date system documentation.


Thank you very much, Marcel!

Is there anything else you would like to share with our readers?


Test automation is often neglected because people think that it is time-consuming, complicated and therefore expensive. But I believe that the expensive part comes when you don't automate: mistakes that are found late, data that needs to be corrected, etc.

Therefore: emphasise test automation and high test coverage right from the start of the project. This will make your life much easier later on.


