Foundations of Software and System Performance Engineering: Process, Performance Modeling, Requirements, Testing, Scalability, and Practice

By: Andre Benjamin Bondi (author) | Paperback

Special Order: item not currently available. We'll try to order it for you.

Description

"If this book had been available to Healthcare.gov's contractors, and they read and followed its life cycle performance processes, there would not have been the enormous problems apparent in that application. In my 40+ years of experience in building leading-edge products, poor performance is the single most frequent cause of the failure or cancellation of software-intensive projects. This book provides techniques and skills necessary to implement performance engineering at the beginning of a project and manage it throughout the product's life cycle. I cannot recommend it highly enough." -Don Shafer, CSDP, Technical Fellow, Athens Group, LLC Poor performance is a frequent cause of software project failure. Performance engineering can be extremely challenging. In Foundations of Software and System Performance Engineering, leading software performance expert Dr. Andre Bondi helps you create effective performance requirements up front, and then architect, develop, test, and deliver systems that meet them. Drawing on many years of experience at Siemens, AT&T Labs, Bell Laboratories, and two startups, Bondi offers practical guidance for every software stakeholder and development team participant. He shows you how to define and use metrics; plan for diverse workloads; evaluate scalability, capacity, and responsiveness; and test both individual components and entire systems. Throughout, Bondi helps you link performance engineering with everything else you do in the software life cycle, so you can achieve the right performance-now and in the future-at lower cost and with less pain. 
This guide will help you:

  • Mitigate the business and engineering risk associated with poor system performance
  • Specify system performance requirements in business and engineering terms
  • Identify metrics for comparing performance requirements with actual performance
  • Verify the accuracy of measurements
  • Use simple mathematical models to make predictions, plan performance tests, and anticipate the impact of changes to the system or the load placed upon it
  • Avoid common performance and scalability mistakes
  • Clarify the business and engineering needs to be satisfied by given levels of throughput and response time
  • Incorporate performance engineering into agile processes
  • Help stakeholders of a system make better performance-related decisions
  • Manage stakeholders' expectations about system performance throughout the software life cycle, and deliver a software product with quality performance

Andre B. Bondi is a senior staff engineer at Siemens Corp., Corporate Technologies in Princeton, New Jersey. His specialties include performance requirements, performance analysis, modeling, simulation, and testing. Bondi has applied his industrial and academic experience to the solution of performance issues in many problem domains. In addition to holding a doctorate in computer science and a master's in statistics, he is a Certified Scrum Master.
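As a small taste of the "simple mathematical models" mentioned above, here is a sketch of two basic performance laws the book covers, the Utilization Law and Little's Law. This example was written for this page, not taken from the book; the function names and the numbers in it are invented for illustration.

```python
# Illustrative sketch only: function names and workload numbers are
# invented for this example, not drawn from the book.

def utilization(throughput_per_s: float, service_time_s: float) -> float:
    """Utilization Law: U = X * S (fraction of time the resource is busy)."""
    return throughput_per_s * service_time_s

def avg_in_system(throughput_per_s: float, response_time_s: float) -> float:
    """Little's Law: N = X * R (average number of requests in the system)."""
    return throughput_per_s * response_time_s

# Sanity-check a hypothetical requirement: 200 requests/s, 4 ms of CPU
# service per request on a single core, 50 ms average response time.
u = utilization(200, 0.004)      # ~0.8, i.e., the CPU would be ~80% busy
n = avg_in_system(200, 0.050)    # ~10 requests in flight on average
print(f"utilization ~ {u:.0%}, average concurrency ~ {n:.1f}")
```

Checks of this kind catch requirements that violate basic performance laws, for example a stated throughput that would drive a resource past 100% utilization.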

About Author

Andre B. Bondi is a highly experienced software performance engineer. He founded Software Performance and Scalability Consulting LLC early in 2016. He has recently spent a semester as a visiting professor at the University of L'Aquila in Italy. He spent many years at Siemens Corp., Corporate Technologies in Princeton, New Jersey, and at AT&T Labs and its predecessor, AT&T Bell Labs in Middletown and Holmdel, New Jersey, respectively. He has also held senior performance positions at two startups. In November 2016, Dr. Bondi received the A. A. Michelson Award from the Computer Measurement Group for sustained and valuable contributions to his profession. In addition to holding a doctorate in computer science from Purdue University and an M.Sc. from University College London, Dr. Bondi is a Certified Scrum Master.

Contents

Preface xxiii
Acknowledgments xxix
About the Author xxxi

Chapter 1: Why Performance Engineering? Why Performance Engineers? 1
  1.1 Overview 1
  1.2 The Role of Performance Requirements in Performance Engineering 4
  1.3 Examples of Issues Addressed by Performance Engineering Methods 5
  1.4 Business and Process Aspects of Performance Engineering 6
  1.5 Disciplines and Techniques Used in Performance Engineering 8
  1.6 Performance Modeling, Measurement, and Testing 10
  1.7 Roles and Activities of a Performance Engineer 11
  1.8 Interactions and Dependencies between Performance Engineering and Other Activities 13
  1.9 A Road Map through the Book 15
  1.10 Summary 17

Chapter 2: Performance Metrics 19
  2.1 General 19
  2.2 Examples of Performance Metrics 23
  2.3 Useful Properties of Performance Metrics 24
  2.4 Performance Metrics in Different Domains 26
  2.5 Examples of Explicit and Implicit Metrics 32
  2.6 Time Scale Granularity of Metrics 32
  2.7 Performance Metrics for Systems with Transient, Bounded Loads 33
  2.8 Summary 35
  2.9 Exercises 35

Chapter 3: Basic Performance Analysis 37
  3.1 How Performance Models Inform Us about Systems 37
  3.2 Queues in Computer Systems and in Daily Life 38
  3.3 Causes of Queueing 39
  3.4 Characterizing the Performance of a Queue 42
  3.5 Basic Performance Laws: Utilization Law, Little's Law 45
  3.6 A Single-Server Queue 49
  3.7 Networks of Queues: Introduction and Elementary Performance Properties 52
  3.8 Open and Closed Queueing Network Models 58
  3.9 Bottleneck Analysis for Single-Class Closed Queueing Networks 63
  3.10 Regularity Conditions for Computationally Tractable Queueing Network Models 68
  3.11 Mean Value Analysis of Single-Class Closed Queueing Network Models 69
  3.12 Multiple-Class Queueing Networks 71
  3.13 Finite Pool Sizes, Lost Calls, and Other Lost Work 75
  3.14 Using Models for Performance Prediction 77
  3.15 Limitations and Applicability of Simple Queueing Network Models 78
  3.16 Linkage between Performance Models, Performance Requirements, and Performance Test Results 79
  3.17 Applications of Basic Performance Laws to Capacity Planning and Performance Testing 80
  3.18 Summary 80
  3.19 Exercises 81

Chapter 4: Workload Identification and Characterization 85
  4.1 Workload Identification 85
  4.2 Reference Workloads for a System in Different Environments 87
  4.3 Time-Varying Behavior 89
  4.4 Mapping Application Domains to Computer System Workloads 91
  4.5 Numerical Specification of the Workloads 95
  4.6 Numerical Illustrations 99
  4.7 Summary 103
  4.8 Exercises 103

Chapter 5: From Workloads to Business Aspects of Performance Requirements 105
  5.1 Overview 105
  5.2 Performance Requirements and Product Management 106
  5.3 Performance Requirements and the Software Lifecycle 111
  5.4 Performance Requirements and the Mitigation of Business Risk 112
  5.5 Commercial Considerations and Performance Requirements 114
  5.6 Guidelines for Specifying Performance Requirements 116
  5.7 Summary 122
  5.8 Exercises 123

Chapter 6: Qualitative and Quantitative Types of Performance Requirements 125
  6.1 Qualitative Attributes Related to System Performance 126
  6.2 The Concept of Sustainable Load 127
  6.3 Formulation of Response Time Requirements 128
  6.4 Formulation of Throughput Requirements 130
  6.5 Derived and Implicit Performance Requirements 131
  6.6 Performance Requirements Related to Transaction Failure Rates, Lost Calls, and Lost Packets 134
  6.7 Performance Requirements Concerning Peak and Transient Loads 135
  6.8 Summary 136
  6.9 Exercises 137

Chapter 7: Eliciting, Writing, and Managing Performance Requirements 139
  7.1 Elicitation and Gathering of Performance Requirements 140
  7.2 Ensuring That Performance Requirements Are Enforceable 143
  7.3 Common Patterns and Antipatterns for Performance Requirements 144
  7.4 The Need for Mathematically Consistent Requirements: Ensuring That Requirements Conform to Basic Performance Laws 148
  7.5 Expressing Performance Requirements in Terms of Parameters with Unknown Values 149
  7.6 Avoidance of Circular Dependencies 149
  7.7 External Performance Requirements and Their Implications for the Performance Requirements of Subsystems 150
  7.8 Structuring Performance Requirements Documents 150
  7.9 Layout of a Performance Requirement 153
  7.10 Managing Performance Requirements: Responsibilities of the Performance Requirements Owner 155
  7.11 Performance Requirements Pitfall: Transition from a Legacy System to a New System 156
  7.12 Formulating Performance Requirements to Facilitate Performance Testing 158
  7.13 Storage and Reporting of Performance Requirements 160
  7.14 Summary 161

Chapter 8: System Measurement Techniques and Instrumentation 163
  8.1 General 163
  8.2 Distinguishing between Measurement and Testing 167
  8.3 Validate, Validate, Validate; Scrutinize, Scrutinize, Scrutinize 168
  8.4 Resource Usage Measurements 168
  8.5 Utilizations and the Averaging Time Window 175
  8.6 Measurement of Multicore or Multiprocessor Systems 177
  8.7 Measuring Memory-Related Activity 180
  8.8 Measurement in Production versus Measurement for Performance Testing and Scalability 181
  8.9 Measuring Systems with One Host and with Multiple Hosts 183
  8.10 Measurements from within the Application 186
  8.11 Measurements in Middleware 187
  8.12 Measurements of Commercial Databases 188
  8.13 Response Time Measurements 189
  8.14 Code Profiling 190
  8.15 Validation of Measurements Using Basic Properties of Performance Metrics 191
  8.16 Measurement Procedures and Data Organization 192
  8.17 Organization of Performance Data, Data Reduction, and Presentation 195
  8.18 Interpreting Measurements in a Virtualized Environment 195
  8.19 Summary 196
  8.20 Exercises 196

Chapter 9: Performance Testing 199
  9.1 Overview of Performance Testing 199
  9.2 Special Challenges 202
  9.3 Performance Test Planning and Performance Models 203
  9.4 A Wrong Way to Evaluate Achievable System Throughput 208
  9.5 Provocative Performance Testing 209
  9.6 Preparing a Performance Test 210
  9.7 Lab Discipline in Performance Testing 217
  9.8 Performance Testing Challenges Posed by Systems with Multiple Hosts 218
  9.9 Performance Testing Scripts and Checklists 219
  9.10 Best Practices for Documenting Test Plans and Test Results 220
  9.11 Linking the Performance Test Plan to Performance Requirements 222
  9.12 The Role of Performance Tests in Detecting and Debugging Concurrency Issues 223
  9.13 Planning Tests for System Stability 225
  9.14 Prospective Testing When Requirements Are Unspecified 226
  9.15 Structuring the Test Environment to Reflect the Scalability of the Architecture 228
  9.16 Data Collection 229
  9.17 Data Reduction and Presentation 230
  9.18 Interpreting the Test Results 231
  9.19 Automating Performance Tests and the Analysis of the Outputs 244
  9.20 Summary 246
  9.21 Exercises 246

Chapter 10: System Understanding, Model Choice, and Validation 251
  10.1 Overview 252
  10.2 Phases of a Modeling Study 254
  10.3 Example: A Conveyor System 256
  10.4 Example: Modeling Asynchronous I/O 260
  10.5 Systems with Load-Dependent or Time-Varying Behavior 266
  10.6 Summary 268
  10.7 Exercises 270

Chapter 11: Scalability and Performance 273
  11.1 What Is Scalability? 273
  11.2 Scaling Methods 275
  11.3 Types of Scalability 277
  11.4 Interactions between Types of Scalability 282
  11.5 Qualitative Analysis of Load Scalability and Examples 283
  11.6 Scalability Limitations in a Development Environment 292
  11.7 Improving Load Scalability 293
  11.8 Some Mathematical Analyses 295
  11.9 Avoiding Scalability Pitfalls 299
  11.10 Performance Testing and Scalability 302
  11.11 Summary 303
  11.12 Exercises 304

Chapter 12: Performance Engineering Pitfalls 307
  12.1 Overview 308
  12.2 Pitfalls in Priority Scheduling 308
  12.3 Transient CPU Saturation Is Not Always a Bad Thing 312
  12.4 Diminishing Returns with Multiprocessors or Multiple Cores 314
  12.5 Garbage Collection Can Degrade Performance 315
  12.6 Virtual Machines: Panacea or Complication? 315
  12.7 Measurement Pitfall: Delayed Time Stamping and Monitoring in Real-Time Systems 317
  12.8 Pitfalls in Performance Measurement 318
  12.9 Eliminating a Bottleneck Could Unmask a New One 319
  12.10 Pitfalls in Performance Requirements Engineering 321
  12.11 Organizational Pitfalls in Performance Engineering 321
  12.12 Summary 322
  12.13 Exercises 323

Chapter 13: Agile Processes and Performance Engineering 325
  13.1 Overview 325
  13.2 Performance Engineering under an Agile Development Process 327
  13.3 Agile Methods in the Implementation and Execution of Performance Tests 332
  13.4 The Value of Playtime in an Agile Performance Testing Process 334
  13.5 Summary 336
  13.6 Exercises 336

Chapter 14: Working with Stakeholders to Learn, Influence, and Tell the Performance Engineering Story 339
  14.1 Determining What Aspect of Performance Matters to Whom 340
  14.2 Where Does the Performance Story Begin? 341
  14.3 Identification of Performance Concerns, Drivers, and Stakeholders 344
  14.4 Influencing the Performance Story 345
  14.5 Reporting on Performance Status to Different Stakeholders 353
  14.6 Examples 354
  14.7 The Role of a Capacity Management Engineer 355
  14.8 Example: Explaining the Role of Measurement Intervals When Interpreting Measurements 356
  14.9 Ensuring Ownership of Performance Concerns and Explanations by Diverse Stakeholders 360
  14.10 Negotiating Choices for Design Changes and Recommendations for System Improvement among Stakeholders 360
  14.11 Summary 362
  14.12 Exercises 363

Chapter 15: Where to Learn More 367
  15.1 Overview 367
  15.2 Conferences and Journals 369
  15.3 Texts on Performance Analysis 370
  15.4 Queueing Theory 372
  15.5 Discrete Event Simulation 372
  15.6 Performance Evaluation of Specific Types of Systems 373
  15.7 Statistical Methods 374
  15.8 Performance Tuning 374
  15.9 Summary 375

References 377
Index 385
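The single-server queue covered in Chapter 3 gives a feel for why response time requirements must account for utilization. The sketch below was written for this page, not taken from the book; it assumes the classic M/M/1 formula R = S / (1 - U), where S is the service time and U the utilization, and the workload numbers are invented.

```python
# Illustrative sketch only: assumes the textbook M/M/1 formula
# R = S / (1 - U); the numbers below are invented for this example.

def mm1_response_time(service_time_s: float, arrival_rate_per_s: float) -> float:
    """Mean response time of a single-server (M/M/1) queue."""
    u = arrival_rate_per_s * service_time_s   # utilization, must be < 1
    if u >= 1.0:
        raise ValueError("system is saturated: utilization >= 1")
    return service_time_s / (1.0 - u)

# Response time grows sharply as utilization approaches 1:
for rate in (50, 100, 150, 190):              # requests per second
    r = mm1_response_time(0.005, rate)        # 5 ms of service per request
    print(f"{rate:>3} req/s -> R = {r * 1000:.1f} ms")
```

At 50 req/s (25% utilization) the mean response time is only modestly above the 5 ms service time, but at 190 req/s (95% utilization) it is twenty times the service time, which is why small headroom margins make response time requirements fragile.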

Product Details

  • ISBN13: 9780321833822
  • Format: Paperback
  • Number Of Pages: 448
  • ID: 9780321833822
  • Weight: 698g
  • ISBN10: 0321833821

Delivery Information

  • Saver Delivery: Yes
  • 1st Class Delivery: Yes
  • Courier Delivery: Yes
  • Store Delivery: Yes

Prices are for internet purchases only. Prices and availability in WHSmith stores may vary significantly.
