Software quality standards followed in the industry

The measures are designed to be automated on source code through static analysis and to provide an industry-wide foundation for benchmarking, setting quality targets, providing visibility, and tracking improvement progress. The CWE (Common Weakness Enumeration) is a reference point for developers and tools and codifies known software weaknesses. CISQ identified the most critical and impactful CWEs and standardized them for automation under each quality characteristic.

However, these measures largely capture quality at the behavioral level rather than at the level of specific quality problems in the source code. To supplement that level of measurement, CISQ defined source-code-level measures of four quality characteristics: Reliability, Performance Efficiency, Security, and Maintainability, as outlined above. The following table shows a snapshot of the software engineering rules contained in the measurement of each code quality characteristic at the code unit level and the system level.

Software Quality Characteristic: Reliability
- Coding practices (unit level): Protecting state in multi-threaded environments; Safe use of inheritance and polymorphism; Resource bounds management; Complex code; Managing allocated resources; Timeouts
- Architectural practices (system level): Multi-layer design compliance; Software manages data integrity and consistency; Exception handling through transactions; Class architecture compliance

Software Quality Characteristic: Performance Efficiency
- Coding practices (unit level): Compliance with Object-Oriented best practices; Compliance with SQL best practices; Expensive computations in loops; Static connections versus connection pools; Compliance with garbage collection best practices
- Architectural practices (system level): Appropriate interactions with expensive or remote resources; Data access performance and data management; Memory, network and disk space management; Centralized handling of client requests; Use of middle tier components vs.

A simple example of the defined process is described in the following figure.

The input to and the output from the intermediate activities can be examined, measured, and assessed. At this level, feedback from the early project activities can be used to set priorities for the current activities and for the later project activities. We can measure the effectiveness of the process activities. The measurement reflects the characteristics of the overall process and of the interaction among and across major activities.

At this level, the measures from activities are used to improve the process by removing and adding process activities and by changing the process structure dynamically in response to measurement feedback. Thus, a process change can affect the organization and the project as well as the process itself. The process acts as a set of sensors and monitors, and we can change it significantly in response to warning signs.

At a given maturity level, we can collect measurements for that level and for all the levels below it. Process maturity suggests measuring only what is visible.

Thus, the combination of process maturity with GQM provides the most useful measures. At level 1, the project is likely to have ill-defined requirements, so measuring requirement characteristics is difficult. At level 2, the requirements are well defined, and additional information such as the type of each requirement and the number of changes to each type can be collected.

At level 3, intermediate activities are defined, with entry and exit criteria for each activity. The goal and question analysis will be the same, but the metrics will vary with maturity: the more mature the process, the richer the measurements. The GQM paradigm, in concert with process maturity, has been used as the basis for several tools that assist managers in designing measurement programs. GQM helps us understand why we measure an attribute, and process maturity suggests whether we are capable of measuring it in a meaningful way.
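The GQM refinement described above can be sketched as a simple data structure that a measurement-planning tool might hold: a goal is refined into questions, and each question is answered by one or more metrics. The goal, questions, and metric names below are illustrative, not taken from any particular measurement program.

```python
# Minimal GQM (Goal-Question-Metric) sketch: a goal is refined into
# questions, and each question is answered by one or more metrics.
# All names below are illustrative examples, not standard identifiers.
gqm = {
    "goal": "Improve the timeliness of change processing",
    "questions": {
        "What is the current change-processing speed?": [
            "average cycle time", "standard deviation of cycle time"],
        "Is performance improving?": [
            "ratio of current cycle time to baseline cycle time"],
    },
}

def metrics_for(goal_tree):
    """Collect every metric needed to answer the goal's questions."""
    return [m for metrics in goal_tree["questions"].values() for m in metrics]

print(metrics_for(gqm))
```

Listing the leaves of the tree this way makes explicit exactly which data the project must collect before the goal can be assessed.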

Together they provide a context for measurement. Measures or measurement systems are used to assess an existing entity by numerically characterizing one or more of its attributes.

A measure is valid if it accurately characterizes the attribute it claims to measure. Validating a software measurement system is the process of ensuring that the measure is a proper numerical characterization of the claimed attribute by showing that the representation condition is satisfied.

For validating a measurement system, we need both a formal model that describes the entities and a numerical mapping that preserves the attribute we are measuring. For example, if there are two programs P1 and P2, and we concatenate them into P1; P2, then we expect any measure m of length to satisfy

m(P1; P2) = m(P1) + m(P2)

If a program P1 has greater length than a program P2, then any measure m of length should also satisfy

m(P1) > m(P2)

The length of a program can be measured by counting its lines of code. If this count satisfies the above relationships, we can say that lines of code is a valid measure of length. The formal requirement for validating a measure is to demonstrate that it characterizes the stated attribute in the sense of measurement theory.
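These two conditions can be checked mechanically for a concrete measure. A minimal sketch, assuming lines of code (LOC) as the length measure m, counting non-blank lines:

```python
# Checking the representation condition for a length measure, with
# LOC (lines of code) as the measure m. For concatenation, length must
# be additive: m(P1; P2) = m(P1) + m(P2); and if P1 is longer than P2,
# then m(P1) > m(P2).
def loc(program: str) -> int:
    """Measure length as the number of non-blank lines of code."""
    return sum(1 for line in program.splitlines() if line.strip())

p1 = "x = 1\ny = 2\n"
p2 = "print(x + y)\n"
concatenated = p1 + p2

# Additivity under concatenation holds for LOC:
assert loc(concatenated) == loc(p1) + loc(p2)
# Order preservation: the longer program gets the larger measure.
assert loc(p1) > loc(p2)
print("representation condition satisfied for these programs")
```

A measure that failed either assertion for some pair of programs would not be a valid length measure in the measurement-theoretic sense.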

Prediction systems are used to predict some attribute of a future entity, involving a mathematical model with associated prediction procedures. Validating a prediction system in a given environment is the process of establishing the accuracy of the prediction system by empirical means, i.e., through experimentation and hypothesis testing.

The degree of accuracy acceptable for validation depends on whether the prediction system is deterministic or stochastic, as well as on the person doing the assessment.

Some stochastic prediction systems are more stochastic than others. Examples of stochastic prediction systems include software cost estimation, effort estimation, and schedule estimation. Hence, to validate a prediction system formally, we must decide how stochastic it is and then compare the performance of the prediction system with known data. A software metric is a standard of measure covering the many software activities that involve some degree of measurement.
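One common way to compare a prediction system's performance with known data is the mean magnitude of relative error (MMRE). The criterion, the effort figures, and the 25% acceptance threshold below are illustrative assumptions, not part of any standard cited here:

```python
# Validating a stochastic prediction system against known outcomes,
# using mean magnitude of relative error (MMRE) as the accuracy
# criterion. Effort figures and the threshold are illustrative.
def mmre(actuals, predictions):
    """Average of |actual - predicted| / actual over all known projects."""
    return sum(abs(a - p) / a for a, p in zip(actuals, predictions)) / len(actuals)

actual_effort    = [120.0, 80.0, 200.0]   # person-days, known outcomes
predicted_effort = [100.0, 90.0, 210.0]   # model output for the same projects

error = mmre(actual_effort, predicted_effort)
print(f"MMRE = {error:.3f}")
# An often-quoted (illustrative) rule of thumb accepts MMRE <= 0.25.
print("acceptable" if error <= 0.25 else "not acceptable")
```

The more stochastic the prediction system, the looser the threshold one would reasonably accept when judging it validated.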

It can be classified into three categories: product metrics, process metrics, and project metrics. Product metrics describe the characteristics of the product such as size, complexity, design features, performance, and quality level. Process metrics can be used to improve software development and maintenance. Examples include the effectiveness of defect removal during development, the pattern of testing defect arrival, and the response time of the fix process. Project metrics describe the project characteristics and execution.

Software measurement is a diverse collection of these activities that range from models predicting software project costs at a specific stage to measures of program structure. Effort is expressed as a function of one or more variables such as the size of the program, the capability of the developers and the level of reuse. Cost and effort estimation models have been proposed to predict the project cost during early phases in the software life cycle.

Productivity can be considered a function of value and cost. Each can be decomposed into different measurable components: size, functionality, time, money, etc. The possible components of a productivity model are shown in the following diagram.
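As a minimal sketch of such a decomposition, productivity can be computed as a ratio of a value component to a cost component. The choice of function points per person-month below is one illustrative assumption; other decompositions (LOC per person-day, features per sprint) fit the same shape:

```python
# Productivity as a function of value (output) and cost (input):
# a hypothetical decomposition using size in function points as the
# value component and effort in person-months as the cost component.
def productivity(size_fp: float, effort_pm: float) -> float:
    """Function points delivered per person-month."""
    return size_fp / effort_pm

print(productivity(200.0, 25.0))  # 8.0 FP per person-month
```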

The quality of any measurement program clearly depends on careful data collection. The data collected can be distilled into simple charts and graphs so that managers can understand the progress and problems of the development. Data collection is also essential for the scientific investigation of relationships and trends. Quality models have been developed for measuring the quality of the product, without which productivity is meaningless.

These quality models can be combined with a productivity model to measure productivity correctly. The models are usually constructed in a tree-like fashion: the upper branches hold important high-level quality factors such as reliability and usability. This divide-and-conquer approach has become a standard way of measuring software quality.

Most quality models include reliability as a component factor; however, the need to predict and measure reliability has led to a separate specialization in reliability modeling and prediction. The basic problem in reliability theory is to predict when a system will eventually fail. Performance evaluation is a related concern: it includes externally observable system performance characteristics, such as response times and completion rates, as well as the internal workings of the system, such as the efficiency of algorithms.

Structural measurement is another aspect of quality. Here we measure structural attributes of representations of the software that are available in advance of execution. We then try to establish empirically predictive theories to support quality assurance, quality control, and quality prediction.

A capability maturity model can assess many different attributes of development, including the use of tools, standard practices, and more. It is based on the key practices that every good contractor should be using. Measurement plays a vital role in managing a software project.

To check whether the project is on track, users and developers can rely on measurement-based charts and graphs. A standard set of measurements and reporting methods is especially important when the software is embedded in a product whose customers are not usually well versed in software terminology. Evaluating methods and tools likewise depends on measurement: on the experimental design, proper identification of the factors likely to affect the outcome, and appropriate measurement of factor attributes.

The success of software measurement lies in the quality of the data collected and analyzed. Are they correct?

Are they accurate? Are they appropriately precise? Are they consistent? Are they associated with a particular activity or time period? Can they be replicated? Hence, it should also be possible to replicate the data easily.

For example: the weekly timesheets of the employees in an organization. Data collection requires human observation and reporting: managers, system analysts, programmers, testers, and users must record raw data on forms. Provide the results of data capture and analysis to the original providers promptly and in a useful form that will assist them in their work. Once the set of metrics is clear and the set of components to be measured has been identified, devise a scheme for identifying each activity involved in the measurement process.

Data collection planning must begin when project planning begins. Actual data collection takes place during many phases of development. An example of a database structure is shown in the following figure. This database will store the details of different employees working in different departments of an organization.

In the above diagram, each box is a table in the database, and the arrow denotes the many-to-one mapping from one table to another. The mappings define the constraints that preserve the logical consistency of the data.
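A database of this shape can be sketched with SQLite, where a foreign key implements the many-to-one mapping from employees to departments and preserves logical consistency. All table and column names here are illustrative, not taken from the figure:

```python
# Sketch of the described database: each employee row maps
# (many-to-one) to a department row, and the foreign key constraint
# preserves the logical consistency of the data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("CREATE TABLE department (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("""CREATE TABLE employee (
    id INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    department_id INTEGER NOT NULL REFERENCES department(id))""")

conn.execute("INSERT INTO department VALUES (1, 'QA'), (2, 'Development')")
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                 [(1, "Asha", 1), (2, "Ben", 2), (3, "Chen", 2)])

# Extract data for analysis with a data manipulation language (SQL):
rows = conn.execute("""SELECT d.name, COUNT(*) FROM employee e
                       JOIN department d ON e.department_id = d.id
                       GROUP BY d.name ORDER BY d.name""").fetchall()
print(rows)  # [('Development', 2), ('QA', 1)]
```

With `foreign_keys = ON`, inserting an employee whose `department_id` has no matching department row is rejected, which is exactly the constraint the arrows in the diagram express.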

Once the database is designed and populated with data, we can use data manipulation languages to extract the data for analysis. After collecting relevant data, we have to analyze it in an appropriate way. There are three major items to consider when choosing the analysis technique. To analyze the data, we must also look at the larger population represented by the data as well as its distribution. Sampling is the process of selecting a set of data from a large population.

Sample statistics describe and summarize the measures obtained from a group of experimental subjects. Population parameters represent the values that would be obtained if all possible subjects were measured. The population or sample can be described by the measures of central tendency such as mean, median, and mode and measures of dispersion such as variance and standard deviation.
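Python's standard statistics module computes these descriptive measures directly; the defect counts below are illustrative sample data:

```python
# Measures of central tendency (mean, median, mode) and of dispersion
# (sample variance, sample standard deviation) for a small sample.
import statistics

defects_per_module = [2, 3, 3, 5, 7, 3, 4]  # illustrative sample

print("mean:    ", statistics.mean(defects_per_module))
print("median:  ", statistics.median(defects_per_module))
print("mode:    ", statistics.mode(defects_per_module))
print("variance:", statistics.variance(defects_per_module))  # sample variance
print("std dev: ", statistics.stdev(defects_per_module))     # sample std dev
```

Note that `variance` and `stdev` are the sample statistics (n - 1 denominator); `pvariance` and `pstdev` would give the corresponding population parameters.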

Many sets of data are normally distributed, as shown in the following graph. In a normal distribution, the data are evenly distributed about the mean. Other distributions exist in which the data are skewed, so that there are more data points on one side of the mean than the other. For example, if most of the data lie on the left-hand side of the mean, we say that the distribution is skewed to the left.

To achieve each of these, the objective should be expressed formally in terms of the hypothesis, and the analysis must address the hypothesis directly. The investigation must be designed to explore the truth of a theory. The theory usually states that the use of a certain method, tool, or technique has a particular effect on the subjects, making it better in some way than another.

If there are more than two groups to compare, a general analysis-of-variance test, the F test, can be used. If the data are non-normal, they can be analyzed by ranking them and applying the Kruskal-Wallis test. Investigations may also be designed to determine the relationship among data points describing one variable or multiple variables.

There are three techniques to answer the questions about a relationship: box plots, scatter plots, and correlation analysis. Correlation analysis uses statistical methods to confirm whether there is a true relationship between two attributes.

For normally distributed values, use the Pearson correlation coefficient to check whether or not the two variables are highly correlated. For non-normal data, rank the data and use the Spearman rank correlation coefficient as a measure of association. Another measure for non-normal data is the Kendall robust correlation coefficient, which investigates the relationship among pairs of data points and can identify a partial correlation. If the ranking contains a large number of tied values, a chi-squared test on a contingency table can be used to test the association between the variables.
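The Spearman rank correlation can be computed with the standard library alone: rank both samples (tied values share an average rank), then take the Pearson correlation of the ranks. The module-size and defect data below are illustrative:

```python
# Spearman rank correlation for non-normal data, standard library only.
def ranks(xs):
    """Ranks starting at 1; tied values receive their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average rank for the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def spearman(xs, ys):
    return pearson(ranks(xs), ranks(ys))

# Illustrative data: module size versus defect count.
size    = [100, 200, 300, 400, 500]
defects = [2, 4, 5, 9, 10]
print(round(spearman(size, defects), 3))  # 1.0: a perfect monotonic association
```

Because Spearman works on ranks rather than raw values, it measures monotonic association and is insensitive to the non-normality of the underlying data.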

Similarly, linear regression can be used to generate an equation describing the relationship between the variables. At the same time, the complexity of the analysis can influence the design chosen. For complex factorial designs with more than two factors, more sophisticated tests of association and significance are needed.
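A least-squares fit illustrates this. The sketch below derives the slope and intercept directly from their closed-form definitions, using illustrative data relating specification size to code size:

```python
# Least-squares linear regression: fit y = slope * x + intercept so
# that the sum of squared residuals is minimized.
def linear_regression(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

# Illustrative data: specification length versus resulting code size.
spec_pages = [5, 10, 15, 20]
code_kloc  = [2.0, 3.5, 5.0, 6.5]
slope, intercept = linear_regression(spec_pages, code_kloc)
print(f"code_kloc = {slope:.2f} * spec_pages + {intercept:.2f}")
```

The resulting equation can then be used as a simple prediction system, which in turn should be validated empirically as described earlier.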

Statistical techniques can be used to account for the effect of one set of variables on others, or to compensate for the timing or learning effects.

Internal product attributes describe software products in a way that depends only on the product itself. The major reason for measuring internal product attributes is that doing so helps monitor and control the products during development.

The main internal product attributes are size and structure. Size can be measured statically, without executing the product, and it tells us about the effort needed to create the product. Similarly, the structure of the product plays an important role in designing its maintenance. There are three development products whose size measurement is useful for effort prediction: the specification, the design, and the code. These documents usually combine text, graphs, and special mathematical diagrams and symbols.

Specification measurement can be used to predict the length of the design, which in turn is a predictor of code length. The diagrams in these documents have a uniform syntax, such as labelled digraphs, data-flow diagrams, or Z schemas. Since specification and design documents consist of text and diagrams, their length can be measured as a pair of numbers representing the text length and the diagram length.

For these measurements, the atomic objects are to be defined for different types of diagrams and symbols. The atomic objects for data flow diagrams are processes, external entities, data stores, and data flows. The atomic entities for algebraic specifications are sorts, functions, operations, and axioms.

The atomic entities for Z schemas are the various lines appearing in the specification. Code can be produced in different ways, such as with a procedural language, object orientation, or visual programming. The most commonly used traditional measure of source code length is lines of code (LOC). Apart from lines of code, alternatives such as the size and complexity measures suggested by Maurice Halstead can also be used to measure length.
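A minimal LOC counter makes the counting convention explicit. Skipping blank lines and whole-line comments, as below, is one common convention, not a standard definition; organizations differ on what counts as a line of code:

```python
# Counting lines of code (LOC): skip blank lines and whole-line
# comments. The comment prefix defaults to Python's "#".
def count_loc(source: str, comment_prefix: str = "#") -> int:
    count = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith(comment_prefix):
            count += 1
    return count

sample = """\
# compute a total
total = 0
for n in [1, 2, 3]:
    total += n

print(total)
"""
print(count_loc(sample))  # 4
```

Whichever convention is chosen, it must be applied consistently; otherwise LOC-based comparisons across projects are meaningless.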

He proposed three internal program attributes, length, vocabulary, and volume, that reflect different views of size. He began by defining a program P as a collection of tokens, classified as operators or operands. The basic metrics for these tokens are:

mu1 = number of unique operators
mu2 = number of unique operands
N1 = total occurrences of operators
N2 = total occurrences of operands

From these, the length of P is N = N1 + N2 and the vocabulary of P is mu = mu1 + mu2. The volume of P is V = N x log2(mu), and the effort required to generate P is E = V / L, where L is the program level. The unit of measurement of E is elementary mental discriminations needed to understand P. Object-oriented development suggests new ways to measure length (Pfleeger et al.). The amount of functionality inherent in a product gives another measure of product size.
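These token counts and the derived Halstead measures can be computed directly. The sketch below assumes a hand tokenization of the illustrative statement `z = x + x * y` and uses the standard Halstead formulas, with L taken as Halstead's estimated program level:

```python
# Halstead's size measures for a program P viewed as a sequence of
# operator and operand tokens, hand-tokenized from "z = x + x * y".
import math

operators = ["=", "+", "*"]          # operator occurrences (N1 tokens)
operands  = ["z", "x", "x", "y"]     # operand occurrences  (N2 tokens)

mu1, mu2 = len(set(operators)), len(set(operands))  # distinct counts
N1, N2 = len(operators), len(operands)              # total counts

N = N1 + N2                      # length
mu = mu1 + mu2                   # vocabulary
V = N * math.log2(mu)            # volume
L = (2 / mu1) * (mu2 / N2)       # estimated program level
E = V / L                        # effort, in elementary mental discriminations

print(f"length N = {N}, vocabulary mu = {mu}, "
      f"volume V = {V:.2f}, effort E = {E:.2f}")
```

In practice the tokenizer, not the formulas, is the hard part: different tools disagree on what counts as an operator or operand, so Halstead numbers are only comparable when produced by the same tool.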

There are many different methods to measure the functionality of software products. Function point metrics provide a standardized method for measuring the various functions of a software application.
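As a hedged sketch, the unadjusted function point count weights and sums the five function types. The weights below are the conventional average-complexity weights from function point analysis; the counts are illustrative:

```python
# Unadjusted function point count: weight each of the five function
# types and sum. Average-complexity weights; counts are illustrative.
AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

def unadjusted_function_points(counts: dict) -> int:
    return sum(AVERAGE_WEIGHTS[kind] * n for kind, n in counts.items())

counts = {
    "external_inputs": 6,
    "external_outputs": 4,
    "external_inquiries": 3,
    "internal_logical_files": 2,
    "external_interface_files": 1,
}
print(unadjusted_function_points(counts))  # 24 + 20 + 12 + 20 + 7 = 83
```

A full function point analysis would also classify each item as simple, average, or complex and apply a technical complexity adjustment; this sketch stops at the unadjusted count.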

These features are set so that they meet all the boundary requirements of the software and its test cases. These are some other useful standards that a software tester must know in relation to QA and software testing.

The modern view of quality associates several quality attributes with a software product, such as the following: Portability: a software product is said to be portable if it can freely be made to work in various operating system environments, on multiple machines, and with other software products.


