S 2.83 Testing standard software

Initiation responsibility: Head of IT, Head of Specialised Department

Implementation responsibility: Tester

The testing of standard software can be divided into three stages: preparation, implementation and evaluation. The following tasks must be carried out in these stages:

Test preparation

Test implementation

Test evaluation

The various tasks are described below.

Test preparation

Determining the test methods for the individual tests (test types, processes and tools)

Methods for carrying out tests include, for example, static analysis, simulation, proof of correctness, symbolic program execution, review, inspection and failure analysis. It should be noted that some of these test methods can only be carried out if the source code is available. A suitable test method must be selected and specified in the preparation stage.

It must be clarified which processes and tools will be used for testing programs and checking documents. Typical processes for testing programs are, for example, black box tests, white box tests or penetration tests. Documents can be checked using informal methods, reviews or checklists, for example.

A black box test is a functionality test without knowledge of the internal program logic. Here, the program is run with test data from all test case categories, including error handling and plausibility checks.
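A black box test of this kind can be sketched in a few lines. The following sketch uses Python's zlib module as a stand-in for the product under test: only its documented interface is exercised, and the output is compared against the specification (decompression must restore the original input) without any knowledge of its internals.

```python
import zlib  # stand-in for the product under test; only its public interface is used


def black_box_roundtrip(data: bytes) -> bool:
    """Black box check: feed input in, compare the output against the
    specification (decompress(compress(x)) == x), never inspect internals."""
    return zlib.decompress(zlib.compress(data)) == data


# Test data drawn from several categories, including an empty (limit) value
cases = [b"", b"abc", b"a" * 10_000, bytes(range(256))]
assert all(black_box_roundtrip(c) for c in cases)
```

The same structure applies to any product: the test knows only the specified input/output behaviour, which is exactly what distinguishes it from a white box test.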

A white box test is a functionality test with disclosure of the internal program sequences, e.g. by source code checks or tracing. White box tests generally go beyond IT baseline protection and cannot normally be carried out for standard software, as the source code is not disclosed by the manufacturer.

Functionality tests are intended to prove that the test object conforms to the specification. Penetration tests are intended to determine whether known or assumed vulnerabilities can be exploited in practical operation, for example by attempting to manipulate the security mechanisms or by bypassing them through manipulation at the operating system level.

The way the results are to be secured and evaluated should be stipulated, particularly as regards repeating tests. It should also be clarified which data should be kept during and after the test.

Creating test data and test cases

The preparation of tests also includes the creation of test data. Methods and procedures should be stipulated and described in advance.

For each test, a number of test cases appropriate to the testing effort must be created. Each of the following categories should be taken into consideration:

Standard cases are cases which are used to test whether the defined functions are implemented correctly. The input data used are called normal values or limit values: normal values lie within the valid input range, limit values lie on its boundaries.

Error cases are cases where attempts are made to provoke possible program error messages. The input values which should cause a predetermined error message to occur in the program are called false values.

Exceptional cases are cases where the program has to react differently than to standard cases. It must therefore be checked whether the program recognises these as such and then processes them correctly.
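The three test case categories can be sketched as a minimal test set. In the following sketch, the function set_level is a hypothetical product function accepting a compression level of 1 to 9; the function name and range are illustrative assumptions, not taken from any real product.

```python
# Hypothetical function under test: accepts a compression level of 1..9
def set_level(level):
    if not isinstance(level, int):
        raise TypeError("level must be an integer")  # exceptional case
    if not 1 <= level <= 9:
        raise ValueError("level out of range")       # error case: false value
    return level


# Standard cases: normal values (inside the range) and limit values (thresholds)
assert set_level(5) == 5
assert set_level(1) == 1 and set_level(9) == 9

# Error cases: false values must provoke the predetermined error message
for false_value in (0, 10, -1):
    try:
        set_level(false_value)
        assert False, "false value was accepted"
    except ValueError:
        pass

# Exceptional cases: input of a different kind must be recognised as such
try:
    set_level("5")
    assert False, "exceptional input was not recognised"
except TypeError:
    pass
```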

Examples:

In the event that it is too time-consuming or difficult to create test data, anonymised actual values can be used for the test. For reasons of confidentiality, actual data must be anonymised first. It should be noted that such anonymised data will not necessarily cover all limit values and exceptional cases, so these have to be created separately.
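Anonymising actual data for test use might be sketched as follows. The record fields and the hashing scheme are illustrative assumptions only; real anonymisation needs additional measures (e.g. salting or a keyed scheme) to resist re-identification.

```python
import hashlib


def anonymise(record: dict, fields=("name", "customer_id")) -> dict:
    """Replace identifying fields with a truncated one-way hash so that
    actual data can serve as test data without revealing confidential
    values. Illustrative sketch only: an unsalted hash is NOT sufficient
    anonymisation for production use."""
    out = dict(record)
    for f in fields:
        if f in out:
            out[f] = hashlib.sha256(str(out[f]).encode()).hexdigest()[:12]
    return out


sample = {"name": "A. Miller", "customer_id": 4711, "amount": 129.50}
anon = anonymise(sample)
assert anon["name"] != "A. Miller"      # identifying value is masked
assert anon["amount"] == 129.50         # non-identifying test value survives
```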

In addition to the test data, all types of possible user errors should be taken into consideration. Particularly difficult are user reactions which are not planned for in the program sequence and which are thus not correctly intercepted.

Establishing the necessary test environment

The test environment described in the test plan must be established and the products to be tested installed. The components used should be identified and their configuration described. In the event that deviations from the described configuration arise when installing the product, this should be documented.

Test implementation

The test must be implemented based on the test plan. Each action, together with the test results, must be adequately documented and evaluated. In particular, if errors appear, they must be documented in such a way that they can be reproduced. Operating parameters suited to subsequent production operations must be determined and recorded to enable installation instructions to be drawn up at a later stage.

If additional functions are detected in the product which are not listed in the requirements catalogue but could nevertheless be of use, at least a short test of them must be carried out. If it becomes apparent that such a function is of particular importance for later operations, it must be tested in full. For the additional testing time incurred, an extension of the deadline must, if necessary, be requested from the person responsible. The test results must be included in the overall evaluation.

If, when processing individual test contents, it becomes apparent that one or several requirements of the requirements catalogue were not sufficiently specific, they must be put in more specific terms if necessary.

Example: In the requirements catalogue, encryption is demanded to safeguard the confidentiality of the data to be processed. During testing, it has become apparent that offline encryption is unsuitable for the intended purpose. An addition must therefore be made to the requirements catalogue with regard to online encryption. (Offline encryption must be initiated by the user and each of the elements to be encrypted must be specified; online encryption is carried out in a transparent way on behalf of the user with pre-set parameters.)

Initial tests

Before all other tests, the following basic aspects must first be tested, as any failure in these initial tests will lead to direct actions or the stopping of the test:

Functional tests

The functional requirements which were placed on the product in the requirements catalogue must be examined in terms of the following aspects:

Tests of additional functional features

The additional features specified in the Requirements Catalogue besides the security-specific features and the functional features must also be checked:

In addition, the following additional points of the requirements catalogue must be tested:

Security-specific tests

If security-specific requirements were placed on the product, the following aspects must be examined in addition to the checks and tests mentioned above:

As the basis for a security check, the Information Technology Security Evaluation Manual (ITSEM) could, for example, be consulted. It describes many of the procedures shown below. The additional comments are intended as an aid to orientation and an introduction to the topic.

At the outset, it must first be demonstrated by functional tests that the product supplies the required security functions.

Following this, it must be checked whether all the required security mechanisms were mentioned in the requirements catalogue and, if necessary, this must be amended. In order to confirm or reject the minimum strength of the mechanisms, penetration tests must be carried out. Penetration tests must be performed after all other tests, as indications of potential vulnerabilities can arise out of these tests.

The test object or the test environment can be damaged or impaired by penetration tests. To ensure that such damage does not have any impact, data backups should be made before penetration tests are carried out.

Penetration tests can be supported by the use of tools that check the security configuration and logging. These tools examine a system configuration and search for common vulnerabilities such as world-readable files and missing passwords.
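One such configuration check can be sketched in a few lines. The following sketch lists world-writable files under a directory, one of the common weaknesses such tools look for; it is a minimal illustration, not a substitute for a full security scanner.

```python
import os
import stat


def world_writable(path="."):
    """Minimal configuration check: return all files under `path` whose
    permission bits allow writing by any user (a common vulnerability)."""
    findings = []
    for root, _dirs, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            try:
                mode = os.stat(full).st_mode
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if mode & stat.S_IWOTH:
                findings.append(full)
    return findings
```

A real tool would additionally check ownership, set-uid bits, empty password fields and similar indicators.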

Using penetration tests, the product should be examined for design vulnerabilities by employing the same methods a potential 'attacker' would use to exploit vulnerabilities such as, for example

The mechanism strengths are defined using the terms specialised knowledge, opportunities and operating resources. These are explained in more detail in ITSEM. For example, the following rules can be used for defining the mechanism strength.

It must be ensured that the tests carried out cover all security-specific functions. It is important to note that only errors or differences from the specifications can ever be determined by testing, never the absence of errors.

Typical aspects of investigation can be shown by a number of examples:

Password protection:

Access rights:

Data backup:

Encryption:

Logging:

In addition to this, it must be ascertained whether, as a result of the new product, security features will be circumvented elsewhere. Example: The product to be tested offers an interface to the operating system environment; previously however, the IT system was configured in such a way that no such interfaces existed.

Pilot application:

Following the conclusion of all other tests, a pilot application, i.e. use under real conditions, might still be considered necessary.

If the test is carried out in the production environment using actual data, the correct and error-free operating method of the program must have been confirmed beforehand with a sufficient number of tests, in order not to jeopardise the availability and integrity of the production environment. For example, the product may be installed at the premises of selected users who will then use it for a set period in actual production conditions.

Test evaluation

Using the specified decision criteria, the test results must be assessed; all results must then be assembled and submitted, together with the test documentation, to the Purchasing Department or the person responsible for the test.

With the aid of the test results, a final judgement should be made regarding a product to be procured. If no product has passed the test, consideration must be given as to whether a new survey of the market should be undertaken, whether the requirements set were too high and must be changed, or whether procurement must be dispensed with at this time.

Example:

Using the example of a compression program, one possible way of evaluating test results is now described. Four products were tested and graded according to the three-point scale derived from S 2.82 Developing a test plan for standard software.

| Property | Necessary/Desirable | Significance | Product 1 | Product 2 | Product 3 | Product 4 |
|---|---|---|---|---|---|---|
| Correct compression and decompression | N | 10 | 2 | 2 | Y | 0 |
| Detection of bit errors in a compressed file | N | 10 | 2 | 2 | N | 2 |
| Deletion of files only after successful compression | N | 10 | 2 | 2 | Y | 2 |
| DOS PC, 80486, 8 MB | N | 10 | 2 | 2 | Y | 2 |
| Windows-compatible | D | 2 | 0 | 2 | Y | 2 |
| Throughput of over 1 MB/s at 50 MHz | D | 4 | 2 | 2 | Y | 2 |
| Compression rate over 40% | D | 4 | 2 | 1 | N | 0 |
| Online help system function | D | 3 | 0 | 0 | N | 2 |
| Password protection for compressed files | D | 2 | 2 | 1 | N | 2 |
| Assessment | | | 100 | 98 | K.O. | K.O. |
| Price (maximum cost of 50.00 euro per licence) | | | 49.00 euro | 25.00 euro | | 39.00 euro |

Table: Test plan for standard software

Product 3 had already failed at the preselection stage and was therefore not tested.

Product 4 failed in the test section "correct compression and decompression", because the performance of the feature was assessed with a 0, although it is a necessary feature.

In calculating the assessment scores for products 1 and 2, the grades were used as multipliers for the respective significance coefficient and the total finally arrived at:

Product 1: 10*2 + 10*2 + 10*2 + 10*2 + 2*0 + 4*2 + 4*2 + 3*0 + 2*2 = 100
Product 2: 10*2 + 10*2 + 10*2 + 10*2 + 2*2 + 4*2 + 4*1 + 3*0 + 2*1 = 98

Following the test evaluation, product 1 is thus in first place, but is closely followed by product 2. The decision in favour of a product must now be made by the Purchasing Department on the basis of the test results and the price-performance ratio resulting from them.
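The assessment rule used in this example can be sketched as follows: each grade (0 to 2) is multiplied by the significance weight and summed, and a necessary ("N") feature graded 0 is an immediate knock-out. The feature tuples below transcribe the grades from the table above.

```python
def assess(features):
    """features: list of (necessary_flag, significance, grade) tuples.
    A necessary feature graded 0 knocks the product out; otherwise the
    weighted grades are summed."""
    total = 0
    for necessary, significance, grade in features:
        if necessary == "N" and grade == 0:
            return "K.O."
        total += significance * grade
    return total


# Grades for products 1 and 2, transcribed from the evaluation table
product_1 = [("N", 10, 2), ("N", 10, 2), ("N", 10, 2), ("N", 10, 2),
             ("D", 2, 0), ("D", 4, 2), ("D", 4, 2), ("D", 3, 0), ("D", 2, 2)]
product_2 = [("N", 10, 2), ("N", 10, 2), ("N", 10, 2), ("N", 10, 2),
             ("D", 2, 2), ("D", 4, 2), ("D", 4, 1), ("D", 3, 0), ("D", 2, 1)]

assert assess(product_1) == 100
assert assess(product_2) == 98
assert assess([("N", 10, 0)]) == "K.O."  # product 4's failure mode
```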

Review questions: