A Detailed Study of NHPP Software Reliability Models


E-Book Overview

Paper, 11 pp., Journal of Software, Vol. 7, No. 6, June 2012

E-Book Content

A Detailed Study of NHPP Software Reliability Models (Invited Paper)

Richard Lai*, Mohit Garg
Department of Computer Science and Computer Engineering, La Trobe University, Victoria, Australia
*Corresponding author. E-mail: [email protected]
© 2012 ACADEMY PUBLISHER. doi:10.4304/jsw.7.6.1296-1306

Abstract—Software reliability deals with the probability that software will not cause the failure of a system for a specified time under specified conditions. This probability is a function of the inputs to and use of the system, as well as of the faults existing in the software; the inputs determine whether existing faults, if any, are encountered. Software Reliability Models (SRMs) provide a yardstick for predicting future failure behaviour from known or assumed characteristics of the software, such as past failure data. Different types of SRMs are used for different phases of the software development life-cycle. With the increasing demand to deliver quality software, software development organizations need to manage quality achievement and assessment. While testing a piece of software, it is often assumed that correcting errors does not introduce any new errors and that the reliability of the software increases as bugs are uncovered and then fixed. The models used during the testing phase are called Software Reliability Growth Models (SRGMs). Unfortunately, in industrial practice, it is difficult to decide when to release the software. An important step towards remedying this problem lies in the ability to manage the testing resources efficiently and affordably. This paper presents a detailed study of existing SRMs based on the Non-Homogeneous Poisson Process (NHPP), which claim to improve software quality through effective detection of software faults.

Index Terms—Software Reliability Growth Models, Non-Homogeneous Poisson Process, Flexible Models

I. INTRODUCTION

Today, science and technology require high-performance hardware and high-quality software in order to make improvements and achieve breakthroughs. It is the integrating potential of software that has allowed designers to contemplate more ambitious systems, encompassing a broader and more multidisciplinary scope, with the growth in the use of software components being largely responsible for the high overall complexity of many system designs. However, in stark contrast with the rapid advancement of hardware technology, the development of software technology has failed to keep pace in all measures, including quality, productivity, cost and performance. As the requirement for, and dependencies on, computers increase, so does the possibility of a crisis arising from computer failures. The impact of failures ranges from inconvenience (e.g., malfunctions of home appliances) and economic damage (e.g., interruption of banking systems) to loss of life (e.g., failures of flight systems or medical software). Hence, to optimize software use, it becomes necessary to address issues such as the reliability of the software product. Using appropriate tools, techniques and methods, software developers can design testing programs or automate testing tools to meet the client's technical requirements, schedule and budget. These techniques make it easier to test and correct software, detect more bugs, save time and reduce expenses significantly [10].
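As background for the NHPP-based models examined in the rest of the paper, the sketch below summarizes the standard NHPP formulation that such models share: a mean value function m(t) gives the expected number of failures detected by time t, its derivative is the failure intensity, and a reliability estimate follows from the expected number of failures in a future interval. The Goel-Okumoto exponential form is included only as a familiar illustration; the symbols m(t), λ(t), a and b are the conventional notation of the SRGM literature, not equations reproduced from this excerpt.

% Minimal, self-contained LaTeX sketch of the standard NHPP formulation
% shared by NHPP-based software reliability growth models. The symbols
% m(t), lambda(t), a and b follow the usual convention in the SRGM
% literature; they are illustrative here, not quoted from this paper.
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% N(t): cumulative number of failures observed by time t;
% m(t): mean value function; lambda(t): failure intensity.
\[
  \Pr\{N(t) = n\} \;=\; \frac{[m(t)]^{n}}{n!}\, e^{-m(t)},
  \qquad
  \lambda(t) \;=\; \frac{\mathrm{d}\,m(t)}{\mathrm{d}t}.
\]

% Conditional reliability: probability of observing no failure in
% the interval (t, t + x], given the failure history up to time t.
\[
  R(x \mid t) \;=\; \exp\bigl[-\bigl(m(t+x) - m(t)\bigr)\bigr].
\]

% One common choice of mean value function, the Goel-Okumoto
% exponential model: a is the expected total number of faults and
% b is the per-fault detection rate.
\[
  m(t) \;=\; a\bigl(1 - e^{-bt}\bigr), \qquad a > 0,\; b > 0.
\]

\end{document}

In practice, the parameters of a chosen mean value function are fitted to observed failure data (typically by maximum likelihood or least squares), and the fitted model is then used to estimate the residual fault content and post-release reliability; this is what allows such models to support decisions about when to stop testing and release the software.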
The benefits of fault-free software to software developers and testers include increased software quality, reduced testing costs, earlier release to market and improved testing productivity. There has