Bertrand Meyer on the Demand for Software Quality


The Demand for Software Quality

A Conversation with Bertrand Meyer, Part I

by Bill Venners

October 27, 2003




Bertrand Meyer talks with Bill Venners about the increasing importance of software quality, the commercial forces on quality, and the challenges of complexity.



Bertrand Meyer is a software pioneer whose activities have spanned both the academic and business worlds. He is currently the Chair of Software Engineering at ETH, the Swiss Institute of Technology. He is the author of numerous papers and many books, including the classic Object Oriented Software Construction (Prentice Hall, 1994, 2000). In 1985, he founded Interactive Software Engineering, Inc., now called Eiffel Software, Inc., a company which offers Eiffel-based software tools, training, and consulting.



On September 28, 2003, Bill Venners conducted a phone interview with Bertrand Meyer. In this interview, which will be published in multiple installments on Artima.com, Meyer gives insights into many software-related topics, including quality, complexity, design by contract, and test-driven development. In this initial installment, Meyer discusses the increasing importance of software quality, the commercial forces on quality, and the challenges of complexity.



The Importance of Software Quality


Bill Venners: In a 2001 interview with InformIT, you said, "The going has been so good that the software profession has been able to treat quality as one issue among many. Increasingly it will become the dominant issue." Why?



Bertrand Meyer: As the use of computers pervades more and more of what society does, the effects of non-quality software just becomes unacceptable. Software is becoming more ambitious, and we rely on it more and more. Problems that could be dismissed quite easily before are now coming to the forefront.



There is a very revealing quote by Alan Perlis in his preface to the MIT book on Scheme, Structure and Interpretation of Computer Programs, by Abelson and Sussman. Alan Perlis wrote:

I think that it's extraordinarily important that we in computer science keep fun in computing. When it started out, it was an awful lot of fun. Of course, the paying customer got shafted every now and then, and after a while we began to take their complaints seriously. We began to feel as if we really were responsible for the successful, error-free perfect use of these machines. I don't think we are. I think we're responsible for stretching them, setting them off in new directions, and keeping fun in the house.




That is typical of the kind of attitude that says "Sure, we can do whatever we like. If there's a problem we'll fix it." But that's simply not true anymore. People depend on software far too fundamentally to accept this kind of attitude. In a way we had it even easier during the dot-com boom years, between 1996 and 2000, but this is not 1998 anymore. The kind of free ride that some people were getting in past years simply doesn't exist anymore.



The Harvard Business Review published an article in May 2003, "IT Doesn't Matter" by Nicholas Carr, that stated that IT hasn't delivered on its promises. It is a quite telling sign of how society at large is expecting much more seriousness and is holding us to our promises much more than used to be the case. Even though it may still seem like we do have a free ride, in fact that era is coming to a close. People are watching much more carefully what we're doing and whether they're getting any return for their money. And the heart of that is quality.



Commercial Forces on Software Quality


Bill Venners: In your paper, The Grand Challenge of Trusted Components, you write "There is a quality incentive, but it only leads to the acceptability point: the stage at which remaining deficiencies do not endanger [the product's] usefulness to the market. Beyond that point, most managers consider that further quality enhancing measures yield a quickly diminishing return on investment." How do commercial pressures affect software quality?



Bertrand Meyer: Commercial pressures affect software quality partly positively and partly negatively. It is almost tempting to use as an analogy the Laffer Curve, which was popular for a while in so-called Reaganomics. I'm not an economist, and I hear that the theory has been by now discredited, so I really don't want to imply that the Laffer Curve is fundamentally true in economics. Nevertheless, the Laffer Curve is the idea that if you tax people at zero percent, the state suffers because it doesn't get any revenue. If you tax people at 100%, it's in the end no better, because if people are not making any money, they will stop working, and the state will also not get any revenue. It's a rather simplistic argument. Although it is pretty clear the Laffer Curve has an element of truth, I'm not sure how precise or accurate it is in economics. But as an analogy, it describes well the commercial pressures on software quality.



If you produce a software system that has terrible quality, you lose because no one will want to buy it. If on the other hand you spend infinite time, extremely large effort, and huge sums of money to build the absolutely perfect piece of software, then it's going to take so long to complete and it will be so expensive to produce that you'll be out of business anyway. Either you missed the market window, or you simply exhausted all your resources. So people in industry try to get to that magical middle ground where the product is good enough not to be rejected right away, such as during evaluation, but also not the object of so much perfectionism and so much work that it would take too long or cost too much to complete.



The Challenge of Complexity


Bill Venners: You said in your book, Object Oriented Software Construction, "The single biggest enemy of reliability and perhaps of software quality in general is complexity." Could you talk a bit about that?



Bertrand Meyer: I think we build in software some of the most complex artifacts that have ever been envisioned by humankind, and in some cases they just overwhelm us. The only way we can build really big and satisfactory systems is to put a hold on complexity, to maintain a grasp on complexity. Something like Windows XP, which is 45 million lines of code or so, is really beyond any single person's ability to comprehend or even imagine. The only way to keep on top of things, the only way to have any hope for a modicum of reliability, is to get rid of unnecessary complexity and tame the remaining complexity through all means possible.



Taming complexity is really fundamental in the Eiffel approach. Eiffel is there really to help people build complex, difficult things. You can certainly build easy or moderately difficult systems using Eiffel better than using other approaches, but where Eiffel really starts to shine is when you have a problem that is more complex than you would like and you have to find some way of taming its complexity. This is where, for example, having some relatively strict rules of object modularity and information hiding is absolutely fundamental. The kinds of things that you find in just about every other approach to circumvent information hiding don't exist in Eiffel. Such strict rules sometimes irritate programmers at first, because they want to do things and they feel they can't, or they have to write a little more code to achieve the result. But the strictness is really a major guard against the catastrophes that start to come up when you're scaling up your design.



For example, in just about every recent object-oriented language, you have the ability, with some restriction, of directly assigning to a field of an object: x.a = 1, where x is an object, a is a field. Everyone who has been exposed to the basics of modern methodology and object technology understands why this is wrong. And then almost everyone says, "Yes, but in many cases I don't care. I know exactly what I'm doing. They are my objects and my classes. I control all the accesses to them, so don't bother me. Don't force me to write a special routine to encapsulate the modification of field a." And on the surface, people are correct. In the short term, on a small scale, it's true. Who cares?



But direct assignment is a typical kind of little problem that takes up a completely new dimension as you start having tens of thousands, hundreds of thousands, or millions of lines of code; thousands or tens of thousands of classes; many people working on the project; the project undergoing many changes, many revisions, and ports to different platforms. This kind of thing, direct assignment of object fields, completely messes up the architecture. So it's a small problem that becomes a huge one.



The problem is small in the sense that fixing it is very easy in the source. You just prohibit, as in Eiffel, any direct access to fields and require that these things be encapsulated in simple procedures that perform the job—procedures which, of course, may then have contracts. So it's really a problem that is quite easy to kill in the bud. But if you don't kill it in the bud, then it grows to a proportion where it can kill you.
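Meyer's point about encapsulating field modification can be sketched in Python (Eiffel enforces this at the language level; the `Account` class and its names below are invented for illustration). Routing the write through a small procedure gives a natural place to hang a contract, approximated here with `assert` statements:

```python
# Hypothetical example: instead of exposing a field for direct
# assignment (x.a = 1), all modifications go through a procedure
# that can carry a contract.

class Account:
    def __init__(self, balance=0):
        self._balance = balance  # private by convention; no direct writes

    @property
    def balance(self):
        return self._balance  # clients get read access only

    def deposit(self, amount):
        # Precondition: clients must supply a positive amount.
        assert amount > 0, "precondition violated: amount must be positive"
        old_balance = self._balance
        self._balance += amount
        # Postcondition: the balance grew by exactly the amount.
        assert self._balance == old_balance + amount
```

In Eiffel the precondition and postcondition would be declared with `require` and `ensure` clauses rather than runtime `assert` statements, so the contract is part of the routine's published interface.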



Another example is overloading: giving the same name to different operations within the same class. I know this is controversial. People have been brainwashed so much that overloading is a good thing that it is kind of dangerous to go back to the basics and say that it's not a good thing. Again, every recent language has overloading. Their libraries tend to make an orgy of overloading, giving the same name to dozens of different operations. This kind of apparent short-term convenience buys a lot of long-term complexity, because you have to find out in each particular case what exactly is the signature of every variant of an operation. The mechanisms of dynamic binding as they exist in object technology and of course in Eiffel are much more effective than overloading to provide the kind of flexibility that people really want in the end.
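The contrast between overloading and dynamic binding can be illustrated with a short Python sketch (the shape classes here are invented for this example). One feature name, `area`, is redefined in each descendant class, and the call site is resolved at run time, rather than the client choosing among several statically overloaded signatures:

```python
# Each class supplies its own version of the one operation;
# dynamic binding picks the right variant at the call site.

class Shape:
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius ** 2

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2

# The same call, s.area(), dispatches correctly for every element;
# no need to inspect each variant's signature at the call site.
shapes = [Circle(1), Square(2)]
total = sum(s.area() for s in shapes)
```

The flexibility people want from overloading is obtained here from a single name with a single signature, which keeps the client code uniform as new shape classes are added.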



So these are examples of cases in which being a little more careful in the language design can make a large contribution to the goal of taming complexity. It's also sometimes why people haven't believed our claims about Eiffel. The use of Eiffel is quite simple, but the examples that we publish are simple not necessarily because the underlying problems are simple, but because the solutions are. Eiffel is really a tool for removing artificial complexity and finding the essential simplicity that often lies below it. What we realize now is that sometimes people just don't believe it. They don't believe in simple solutions. They think we must be either hiding something or that the language and methods don't actually solve the real practical problems of software development, because they know there has to be more complexity there. As this horrible cliche goes, "if it looks too good to be true, then it must not be true," which is certainly the stupidest utterance ever proffered by humankind. This is the kind of cliche we hear from many people, and it's just wrong in the case of Eiffel. If you have the right tools for approaching problems, then you can get rid of unnecessary complexity and find the essential simplicity behind it.



This is the key issue that anyone building a significant system is facing day in and day out: how to organize complexity both by removing unnecessary, artificial, self-imposed complexity, and by organizing what remains of inevitable complexity. This is where the concepts of inheritance, contracts, genericity, object-oriented development in general, and Eiffel in particular, can play a role.



Bill Venners: It sounds to me that you're talking about two things: getting rid of unnecessary complexity and dealing with inherent complexity. I can see that tools, such as object-oriented techniques and languages, can help us deal with inherent complexity. But how can tools help us get rid of self-imposed complexity? What did you mean by "getting at the simplicity behind the complexity?"



Bertrand Meyer: Look at modern operating systems. People bash the complexity of Windows, for example, but I'm not sure the competition is that much better. There's no need for any bashing of any particular vendor, but it's clear that some of these systems are just too messy. A clean, fresh look at some of the issues would result in much better architecture. On the other hand, it's also true that an operating system—be it Windows XP, RedHat Linux, or Solaris—has to deal with Unicode, with providing a user interface in 100 different languages. Especially in the case of Windows, it has to deal with hundreds of thousands of different devices from lots of different manufacturers. This is not a kind of self-inflicted complexity that academics like to criticize. In the real world, we have to deal with requirements that are imposed on us from the outside. So there are two kinds of complexity: inherent complexity, which we have to find ways to deal with through organization, through information hiding, through modularization; and artificial complexity, which we should just get rid of by simplifying the problem.


