Quality engineers use quality characteristics to determine how to deliver the right quality, as well as what to test and with what intensity. The TMap® NEXT book contains a list of 17 quality characteristics for product quality; the TMAP book Quality for DevOps teams adopts the ISO 25010 standard.
Today, with the rapid rise of AI and robotics, these lists are no longer sufficient. New quality attributes are needed to decide upon the right quality and to determine which test varieties are needed to properly test this new technology.
Why we require more quality characteristics
To get a clear view of the quality level of any system, we need to distinguish some quality sub-divisions, for which we use quality characteristics. The commonly used standards evolved in an era when IT systems were focused on data processing and when input and output took the form of files or screen-based user interfaces.
Nowadays, we see machine intelligence systems that have many more options. Input is often gathered using sensors (e.g. in IoT devices) and output may be physical (like moving objects in a warehouse). This calls for an extension of the list of quality characteristics. The following sections describe new quality characteristics.
We have added three new groups of quality characteristics: intelligent behavior, morality and personality. In their respective sections, we describe these main characteristics and their sub-characteristics.
Intelligent behavior is the ability to comprehend or understand. It is basically a combination of reasoning, memory, imagination, and judgment; each of these faculties relies upon the others. Intelligence is a combination of cognitive skills and knowledge made evident by behaviors that are adaptive. [source: Wikipedia]
sub 1. Ability to learn
The ability to learn is the ability to comprehend, to understand and to profit from experience. How does an intelligent machine learn? We see three levels of learning.
- The first level is rule-based learning. When a user frequently uses certain options in a menu, the intelligent machine can order the options such that the most used options appear first.
- The second level is based on gathering and interpreting data and, based on that, learning about an environment.
- The third level is learning by observing the behavior of others and imitating that behavior.
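The first level, rule-based learning, can be made concrete with a small sketch. The `AdaptiveMenu` class below is purely illustrative (its name and API are assumptions, not part of any real product): it counts how often each option is used and reorders the menu so the most-used options appear first.

```python
from collections import Counter

class AdaptiveMenu:
    """Illustrative rule-based learning: reorder menu options by usage."""

    def __init__(self, options):
        self.options = list(options)
        self.usage = Counter()

    def select(self, option):
        # Each selection is the "experience" the menu learns from.
        self.usage[option] += 1

    def ordered_options(self):
        # Most frequently used options first; ties keep the original order
        # because Python's sort is stable.
        return sorted(self.options, key=lambda o: -self.usage[o])

menu = AdaptiveMenu(["Open", "Save", "Export", "Print"])
for _ in range(3):
    menu.select("Export")
menu.select("Save")
print(menu.ordered_options())  # → ['Export', 'Save', 'Open', 'Print']
```

The higher learning levels (interpreting an environment, imitating observed behavior) need far richer models than a frequency counter, but the testing question stays the same: does observed behavior improve with experience?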
sub 2. Improvisation
Does it adapt to new situations? Improvisation is the power of the intelligent system to make the right decisions in new situations. Situations that may never have been experienced before require quick interpretation of new information and the ability to adjust existing behavior. Social robots in particular must be able to adapt their behavior to the information coming in, since social behavior depends on the culture of specific small groups. Applying long-term changes will also be important for a robot to remain interesting or relevant to its environment.
sub 3. Transparency of choices
Can a human understand how a machine comes to its decisions? An Artificial Intelligence system works 24/7 and takes a lot of decisions. Therefore, there must be transparency around how an AI system takes those decisions. For example, there must be clarity on which data inputs the decisions are based, which data points are relevant and how they are weighted. In several use cases the decision-making is crucial, such as when an Artificial Intelligence system calculates an insurance premium. In this specific use case, it is important to investigate how the premium has been calculated.
Transparency also means predictability. It is important that robots respond as expected by the people who work with the robot. How well can the humans involved foresee what (kind of) action the intelligent machine will take in a given situation? This is the basis for proper collaboration.
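The insurance-premium example above can be sketched in code. This is a minimal, hypothetical model (the weights, base premium and input names are invented for illustration): the point is that the system returns not only the premium but also each input's contribution, so a human can inspect how the decision was reached.

```python
# Hypothetical premium model: weights and inputs are illustrative only.
WEIGHTS = {"age": 2.0, "claims_history": 50.0, "car_value": 0.01}
BASE_PREMIUM = 100.0

def explained_premium(applicant):
    """Return the premium plus each input's contribution to it."""
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    premium = BASE_PREMIUM + sum(contributions.values())
    return premium, contributions

premium, why = explained_premium(
    {"age": 40, "claims_history": 2, "car_value": 20000}
)
print(premium)  # 100 + 80 + 100 + 200 = 480.0
print(why)     # shows which data points were weighted, and how
```

Real AI systems rarely decompose this cleanly, which is exactly why transparency deserves its own quality characteristic and dedicated testing.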
sub 4. Collaboration / Working in a team
How well does the robot work alongside humans? Does it understand expected and unexpected human behavior? Robots can work with people or other robots in a team. How communication works within this team is very important. A robot must be aware of the team members and know when a person wants to interact with the robot. With the help of natural interaction, the robot must make it possible to draw attention to itself.
Working in a team is particularly important in industrial automation, where robots and people work alongside each other in a factory. Elsewhere, the importance of teamwork can be seen in traffic, where, for example, a bicyclist should be able to see whether a self-driving car is aware that the cyclist wants to make a turn.
Collaboration between robots only, without humans involved, is very similar to the existing quality characteristic of interoperability. However, because collaboration can be of great importance for robots and intelligent systems, we cover it separately.
sub 5. Natural interaction
Natural interaction is important, both in verbal and non-verbal communication. With social robots in particular, it is important that the way humans interact with a robot is natural, reflecting how they interact with people. One of the things that can be considered here is multiple input modalities, so there is more than one possibility for controlling the robot (for example speech and gestures).
In chatbots it is important that the conversation is natural, but also specific to the purpose of the chatbot. Consider that a chatbot making small talk has more room to make mistakes and slowly learn, whereas a chatbot that is supposed to make travel arrangements should clearly understand destination, dates and other relevant information without erroneous interpretations. Most people who enter “home” as their destination mean their own home, not the nearest nursing home that a traditional search engine would assume. In such cases, asking for clarification is very important for the chatbot.
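The "home" example can be sketched as a clarification rule. Everything here is an assumption for illustration (a real chatbot would use natural language understanding, not a keyword list): the essential behavior is that the bot asks rather than silently guesses.

```python
# Illustrative only: a real chatbot would use NLU, not a keyword list.
AMBIGUOUS_DESTINATIONS = {"home", "work", "there"}

def interpret_destination(utterance, user_profile):
    """Resolve a destination, asking for clarification instead of guessing."""
    dest = utterance.strip()
    if dest.lower() in AMBIGUOUS_DESTINATIONS:
        known = user_profile.get(dest.lower())
        if known:
            # Confirm rather than silently assume, keeping the choice visible.
            return f"Do you mean {known}?"
        return f"Where exactly is '{dest}' for you?"
    return f"Booking travel to {dest}."

print(interpret_destination("home", {"home": "12 Elm Street"}))
# → Do you mean 12 Elm Street?
```

A test variety for natural interaction could feed such a chatbot deliberately ambiguous input and check that it asks for clarification instead of acting on a wrong interpretation.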
“Morality is about the principles concerning the distinction between right and wrong or good and bad behavior.” [source: Wikipedia]
The well-known science fiction author Isaac Asimov gave a great deal of thought to the morality of intelligent machines. One of his contributions was drawing up the “laws of robotics” that intelligent machines should adhere to.
These laws of robotics are:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Other authors created some additional laws:
4. A robot must establish its identity as a robot in all cases.
5. A robot must know it is a robot.
6. A robot must reproduce, as long as such reproduction does not interfere with the First, Second or Third Law.
Unfortunately, we observe that, unlike in Asimov’s stories, most intelligent machines do not have these robot laws built in. It is up to the team members with a digital test engineering role to assess to what level the intelligent machine adheres to these laws.
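One way such an assessment could start is by encoding the laws' precedence as explicit checks. The sketch below is entirely hypothetical (the `order` dictionary and its `harms_human` flag are invented; a real assessment would observe the machine's behavior, not a stub): it only shows that the First Law must override the Second.

```python
# Hypothetical sketch: `harms_human` is an invented flag, not a real API.
def permitted(order):
    """Second Law: obey human orders, except where they conflict with
    the First Law (a robot may not injure a human being)."""
    if order.get("harms_human", False):
        return False  # First Law takes precedence over obedience
    return True

print(permitted({"task": "fetch part"}))                      # True
print(permitted({"task": "strike", "harms_human": True}))     # False
```

The hard part in practice is not the precedence logic but deciding, for a given action in a given context, whether the `harms_human` condition holds; that judgment is exactly what the test engineering role must probe.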
sub 1. Ethics
Ethics is about acting according to various principles. Important principles are laws, rules and regulations, but for ethics the unwritten moral values are the most important.
Some challenges of machine ethics are much like many other challenges involved in designing machines. Designing a robot arm to avoid crushing stray humans is no more morally fraught than designing a flame-retardant sofa.
With respect to intelligent machines, important questions related to ethics are:
• Does it observe common ethical rules?
• Does it cheat?
• Does it distinguish between what is allowed and what is not allowed?
To be ethically responsible the intelligent machine should inform its users about the data that is in the system and what this data is used for.
sub 2. Privacy
“Privacy is the state of being free from unwanted or undue intrusion or disturbance in one’s private life or affairs.” [source: www.dictionary.com]
Does the intelligent machine comply with privacy laws and regulations? The fuel of machine learning algorithms is data. It determines what the solution can and will do in the end. It is important to ensure that the gathered data and the insights gained from that data are aligned with the business goals. There are also legal constraints, which depend on national and international laws, regulations and the analyzed data. In the EU, for example, the General Data Protection Regulation (GDPR) is now one of the strictest regulations, with the potential for severe financial sanctions for non-compliance.
sub 3. Human friendliness
Human friendliness refers to the level to which intelligent machines don’t cause harm to humans or humanity.
Most of the leading AI experts and companies recognize that there is a risk of AI and robotics being used in warfare. This challenges not only our current ethical norms, but also our instinct for self-preservation. The Future of Life Institute has taken a close look at these dangers. They are very real risks and should be considered when developing new solutions.
Human friendliness is also related to safety, especially when people work closely with robots (so-called cobotics). Safety and security are often confused, but they are not the same. Security is the protection of the application against people (or machines) with malicious intent; safety guarantees that no harm comes to people. For robots this is very important, since a co-worker may want to know: “How big is the chance that I will get a big robot arm against my head if I try to communicate with this robot?”
A personality is the combination of characteristics or qualities that form an individual’s distinctive character.
Let’s focus on having robots as partners or assistants. We want to build robots with a personality that fits the personality of the humans they collaborate with.
sub 1. Mood
A mood is a temporary state of mind or feeling.
Will an intelligent machine always be in the same mood? We would be inclined to think that a machine, by definition, doesn’t know about moods; it just performs its task in the same way, time and again. But by adding intelligence, the machine may change its behavior in different situations or at different times of day.
A good use of moods may be in cobotics, where the robot adapts its behavior to the behavior of the people it collaborates with. For example, at night the robot may try to give as few signals as possible because people tend to be more irritable at night, whereas on a warm and sunny summer’s day the robot may be more outspoken in its communication.
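The day/night example above can be sketched as a trivial adaptation rule. The thresholds and signal levels here are invented for illustration; a real cobot would adapt to the observed behavior of its co-workers, not just the clock.

```python
# Illustrative sketch: hours and signal levels are assumptions for the example.
def signal_level(hour):
    """Adapt how assertive the robot's signals are to the time of day:
    minimal at night, normal during the day."""
    if 22 <= hour or hour < 7:
        return "minimal"  # people tend to be more irritable at night
    return "normal"

print(signal_level(23))  # minimal
print(signal_level(14))  # normal
```

Even for a rule this simple, a tester would want to probe the boundaries (what happens at exactly 07:00 or 22:00?) — a preview of how mood-dependent behavior complicates test design.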
Another aspect of mood is using machine intelligence to change the mood of people. Mood-altering, AI-controlled brain implants in humans are already under test. Brain implants can be used to deliver stimulation to specific parts of the brain when required. Experts are working on specialized algorithms to detect patterns linked to mood disorders. These devices are able to deliver electrical pulses that can supposedly shock the brain into a healthier state. There are hopes that the technology could provide a new way to treat mental illnesses, beyond the capabilities of currently available therapies.
sub 2. Empathy
Empathy is the ability to understand and share the feelings of another.
Machines cannot feel empathy, but it is important that they simulate empathy. They should be able to recognize human emotions and respond to them. An intelligent machine should understand the feelings of the people it interacts with. This is especially important with robots working in hospitals, for example as “companion” robots.
sub 3. Humor
“Humor is the quality of being amusing or comic, especially as expressed in literature or speech.” [source: en.oxforddictionaries.com]
Is there a difference between laughter and humor? Yes, there is. Laughter is used as a communication aid: from the gentle chuckle to the full-on belly laugh, it helps us to convey our response to various social situations. Humor could be defined as the art of being funny, or the ability to find something funny. How will robots detect these very human behaviors? That is the next step in AI: programming robots with the ability to get in on the joke, detect puns and sarcasm, and throw a quick quip back. There is a whole branch of science dedicated to research and development in this area. Scientists in this field are known as computational humorists, and they have come a long way with the algorithms they have created so far. An example of such an algorithm is “SASI”, which detects sarcasm.
sub 4. Charisma
Charisma is the compelling attractiveness or charm that can inspire devotion in others. [source: en.oxforddictionaries.com]
Do people like the intelligent machine? Do people love the intelligent machine? Is it so appealing that they never want to put it away? If a product has this “wow-factor”, then it is much more likely to be a successful product. So, the charisma of a product is important.
Is charisma a sign of intelligence? It is: it is all learned behavior, no matter what factors are employed. To be accepted by users, the robot must appeal in some way to the user. That may be through its looks (see embodiment), but more importantly through its functionality and probably its flexibility. One way to keep amazing the user is to continuously learn new things and thus stay ahead of the user’s expectations.