Excellence in Software Engineering
Wrapping a Solid Shell Around It: Keeping Software from Becoming a Nightmare
22 January 2020

Author: Serdar MUMCU, Project Manager / Software Consultant (Energy)

“No man ever steps in the same river twice, for it’s not the same river and he’s not the same man.”  (Heraclitus)

 

Understanding the nature of software is crucial for every kind of practitioner in the field. If you know the monster you are dealing with, you have a better chance of defeating your “enemy”, and you might even tame it into a loyal, harmless pet. Otherwise, the monster you create with your very own hands will keep growing and will start managing you instead of the other way around. It may even turn your work life into a nightmare. Let’s zoom out a little together and look at what we are actually trying to accomplish. By definition, software is a combination of instructions that execute to provide desired functions with acceptable performance, data structures that those instructions manipulate, and documentation that describes their operation. So, if the definition is this simple, where could the problems come from?

First of all, “software” is a broad, multidisciplinary field, and software development is a genuinely complex activity, although people generally think otherwise. Beyond the complexity of the domain we are working in, the parts of a software system are not independent of one another; each new part typically interacts with existing ones, so complexity grows non-linearly with size, roughly with the square of it. In other words, if the complexity of a product with 10 features is 100 units, the complexity of a product with 100 features will be around 10,000 units: it grows quadratically, not proportionally, with size. That is why we need to combine many techniques, approaches, patterns and methodologies with proper management activities. It is also why the Chaos report of The Standish Group and similar studies find that projects with bigger scope, longer duration and teams of more than six people are less likely to be completed on time and within budget. So, size does matter in software. Simply put, three main factors contribute to this complexity:

Conformity: Software needs to conform to the hardware and the operating system it runs on. Whenever one of these changes, the software is the one expected to adapt; it stands as the first soldier on the front line.

Changeability: Since the earliest ancient Greek philosophers, we have known that change is the only constant in the universe, and software is no exception. The need for change can originate from requirements, standards, laws, budgets, users, new applications and so on. At the end of the day, change is inevitable in software, and adapting to it is not easy if you are not ready. To obtain a modular, extensible system that is open to change, you need to know and apply principles such as “do not switch on type (kind) codes”, “program to interfaces”, “encapsulate what varies”, the “open/closed principle”, “low coupling and high cohesion”, “favor composition (or delegation) over inheritance where appropriate” and so on. If you are not confident with these, learn and apply the GRASP (General Responsibility Assignment Software Patterns) and/or GoF design patterns, because they package these principles into well-known problem-solution pairs; a small sketch follows below.
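As a small, hypothetical illustration of “program to interfaces”, “encapsulate what varies” and the open/closed principle (the tariff names and classes here are my own invention, not taken from any particular project), the idea is to hide the varying calculation behind an interface instead of switching on a “kind” code in every client:

// Instead of switching on a "kind" code in every client (which must be edited
// whenever a new kind appears), encapsulate the varying calculation behind an interface.
interface PricingStrategy {
    double priceOf(double consumedKwh);
}

class FixedPricing implements PricingStrategy {
    private final double ratePerKwh;
    FixedPricing(double ratePerKwh) { this.ratePerKwh = ratePerKwh; }
    public double priceOf(double consumedKwh) { return consumedKwh * ratePerKwh; }
}

class TieredPricing implements PricingStrategy {
    public double priceOf(double consumedKwh) {
        double firstTier = Math.min(consumedKwh, 100) * 0.10; // first 100 kWh at a lower rate
        double rest = Math.max(consumedKwh - 100, 0) * 0.15;  // remainder at a higher rate
        return firstTier + rest;
    }
}

// The client depends only on the interface; adding a new pricing kind means adding
// a new class, not editing existing code (open/closed principle).
public class Billing {
    public static void main(String[] args) {
        PricingStrategy pricing = new TieredPricing();
        System.out.println("Invoice total: " + pricing.priceOf(230)); // 10.0 + 19.5 = 29.5
    }
}

This is essentially the GoF Strategy pattern, one of the ready-made problem-solution pairs mentioned above.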

More generally, every kind of software contains both stable and unstable classes, and coupling ourselves to the unstable parts causes many problems. To solve this, we wrap a solid shell around the unstable parts: we might provide a facade object or a generic interface for the other parts that use the varying part. In that way we offer a common, unified interface that protects the rest of the software from being affected by the variation. This concept is called Protected Variation, and it is the basis of many other principles.
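A minimal sketch of Protected Variation, assuming a hypothetical, unstable external price feed (the interface and class names are purely illustrative):

// The rest of the system depends only on this stable interface...
interface MarketPriceSource {
    double currentPricePerMwh(String region);
}

// ...while the unstable details (vendor API, file format, protocol quirks) are wrapped
// behind one implementation; when the vendor changes, only this class changes.
class VendorXmlPriceFeed implements MarketPriceSource {
    public double currentPricePerMwh(String region) {
        // Parsing the vendor's XML, retries, error handling etc. would live here and only here.
        return 42.0; // placeholder value for the sketch
    }
}

public class Dispatcher {
    private final MarketPriceSource prices;

    Dispatcher(MarketPriceSource prices) { this.prices = prices; }

    void planProduction() {
        double price = prices.currentPricePerMwh("EU-Central");
        System.out.println("Planning production at " + price + " EUR/MWh");
    }

    public static void main(String[] args) {
        new Dispatcher(new VendorXmlPriceFeed()).planProduction();
    }
}

If the varying part is a whole subsystem rather than a single class, the same idea scales up to a facade placed in front of it.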

Intangibility: Software has no mass and occupies no space; no physical law applies to it. It is very hard to visualize even by reading the source code, so we need UML or a similar tool to picture it, and in this respect it differs from the products of other engineering disciplines. Time has proven that waterfall-based methodologies designed for those disciplines are not well suited to software, and that iterative-incremental approaches such as agile, which embrace change instead of resisting it, are a much better fit.

On top of these factors, the industry’s demands on software are frankly cruel. The industry wants software that is more reliable, more available, more maintainable, more portable and better performing, yet delivered with less budget and time. This is a real contradiction. That is why the IEEE defines software engineering as the application of a systematic, disciplined, quantifiable approach to the development, operation and maintenance of software. If we can measure something, we can improve it.

Regardless of the kind of software we are developing, what we are actually doing is bridging the gap between the problem domain and the solution domain, and knowledge of the problem domain is just as important as knowledge of the solution domain. Since software engineering is an interdisciplinary activity, we either need to know the domain ourselves or work with a domain expert. There is a need in the problem domain; we turn it into a problem statement, and from among millions of possible solutions we design our own, an implementation statement that leads us to a solution in the solution domain. Whatever design we come up with, we need to check both that we built the product right (verification) and that we built the right product (validation), one that satisfies what our “customer” wants. Verification can be objective, whereas validation is more subjective, since it depends on the stakeholders, and stakeholders often have different, inconsistent needs. Validation problems are therefore not uncommon in software products, which is why we must spend more time analyzing customer needs and gathering requirements.

Requirements are basically the services a customer expects from a system, together with the constraints under which it operates and is developed. They represent the negotiated agreement among stakeholders. However, stakeholders often do not know what they want, or they come up with unrealistic demands; different stakeholders may have conflicting requirements, and they tend to express them in terms of the implicit knowledge of their own work. We therefore need a systematic way of gathering requirements so that such problems do not arise. In an iterative-incremental approach, at the end of each iteration we usually demo the product to the customer to validate the existing requirements or to elicit more detailed ones. When we sense that we will not be able to finish every task in an iteration, it is much better to de-scope some of the tasks than to push the customer demo beyond the iteration, because delaying the demo means delaying requirements gathering.

Properly written requirements should have certain characteristics: completeness (nothing is missing), consistency (nothing is conflicting) and precision (nothing is ambiguous). Functional requirements describe what the system should or should not do. Non-functional requirements, often ignored by practitioners, relate to quality attributes of the system such as usability, reliability, performance, availability and maintainability, and they directly influence its architecture. In an iterative-incremental approach, that architecture should be designed and realized in the earliest possible iterations, because changing it in later project phases is much harder. (In the Rational Unified Process, for instance, this is the job of the elaboration phase; the phases of that model are inception, elaboration, construction and transition, and elaboration should start no later than the second week of the project.)

Although there are many techniques for gathering requirements, my personal preference is use cases. A popular misunderstanding is to treat use case diagrams as the use cases themselves; they are merely UML representations of them. Use cases are text stories of actors using a system to meet their goals. Each use case should have a main success scenario (the happy path, ideally between 3 and 9 steps) and extension scenarios (other success or failure paths, the “striped trousers” metaphor). They should be written in a brief, UI-free and technology-neutral manner.
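Purely to illustrate that structure (the use case name and step texts below are hypothetical, and the sketch assumes Java 16+ for the record syntax), a use case is nothing more than a named text story with a short main success scenario and extensions that branch off individual steps:

import java.util.List;
import java.util.Map;

// A use case is only structured text: a goal, a main success scenario of a few steps,
// and extensions keyed by the step they branch from (all content here is invented).
public class UseCaseSketch {
    record UseCase(String name, List<String> mainSuccessScenario, Map<String, String> extensions) {}

    public static void main(String[] args) {
        UseCase submitMeterReading = new UseCase(
            "Submit Meter Reading",
            List.of(
                "1. Customer identifies herself to the system.",
                "2. Customer enters the meter reading for the current period.",
                "3. System validates the reading against the previous one.",
                "4. System confirms the reading and schedules invoicing."),
            Map.of(
                "3a", "Reading is lower than the previous one: system asks the customer to re-enter it or to request an inspection."));

        submitMeterReading.mainSuccessScenario().forEach(System.out::println);
        submitMeterReading.extensions().forEach((step, text) -> System.out.println(step + ". " + text));
    }
}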

People usually prefer compiling feature lists to identify requirements; however, use cases offer several advantages over feature lists:

1) Use cases drive the design.

2) They serve as scenarios for testing the system.

3) They provide a basis for user and operational manuals.

4) They play a vital role in iterative planning.

5) Relations among requirements that would be lost in a flat feature list are preserved.

6) Use cases can easily be grouped by actor, by development team, by subject area, by summary-level use case, etc.

We don’t have to analyze and design the whole system up front. We can plan an iteration around realizing one or more use cases, analyze and design only the parts needed for those use cases, and realize more use cases in subsequent iterations. However, the use cases that carry high risk, add the most business value or are architecturally significant should be handled in the early iterations. It is also a good idea to classify use cases and assign them to different development teams. This is essentially a divide-and-conquer technique that we can exploit when solving big, complex problems like developing software.

Back to my original argument: we should not underestimate the complexity of software development. Understanding the essence of software, embracing change, giving analysis and design the attention they deserve, solving architectural problems and taking the necessary precautions as early as possible, and making good use of use cases all help to keep that “unpredicted” growth under control.

References

[1] “Chaos: A Recipe for Success”, The Standish Group, http://www.dsc.ufcg.edu.br/~garcia/cursos/ger_processos/artigos/chaos1998.pdf

[2] “Applying UML and Patterns: An Introduction to Object-Oriented Analysis and Design and Iterative Development (3rd Ed.)”, Craig Larman, Prentice Hall PTR, 2004.

[3] “Software Engineering (9th Ed.)”, Ian Sommerville, Pearson, 2009.
