The aspect of Configuration Management that ensures no unauthorized changes have been made to the software is known as _______.
- Change Management
- Configuration Auditing
- Configuration Control
- Configuration Identification
Configuration Auditing is a critical aspect of Configuration Management. It involves examining configurations against the approved configuration documentation, so that unauthorized changes, discrepancies, or inconsistencies can be detected and addressed promptly.
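As a concrete, deliberately simplified illustration, one way an audit can detect unauthorized changes is by comparing cryptographic fingerprints of deployed artifacts against an approved baseline. The file names and contents below are invented for the example:

```python
import hashlib

def fingerprint(files):
    """Map each artifact name to a SHA-256 digest of its content.
    In a real audit these would be the approved, baselined build artifacts."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

# Approved baseline vs. the artifacts actually deployed (contents are illustrative).
baseline = fingerprint({"app.cfg": b"timeout=30\n", "main.py": b"print('v1')\n"})
deployed = fingerprint({"app.cfg": b"timeout=99\n", "main.py": b"print('v1')\n"})

# Any mismatch is a change that was not part of the approved configuration.
changed = [name for name in baseline if baseline[name] != deployed.get(name)]
print(changed)  # ['app.cfg']
```

In practice the "baseline" would come from the configuration documentation or build records, not be computed inline, but the comparison step is the same.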
Which black-box testing technique is based on deriving the test cases from the system requirements?
- Boundary Value Analysis
- Equivalence Partitioning
- Requirement-based Testing
- State Transition
Requirement-based Testing, as the name implies, involves designing test cases directly based on the system requirements. It ensures that the software system meets and conforms to the specified requirements, making certain that all functionalities are tested as intended.
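To make this concrete, here is a small sketch of a test derived directly from a hypothetical requirement. The requirement ID, function name, and rule are all invented for illustration:

```python
# Hypothetical requirement R-17: "Passwords must be at least 8 characters long."
def is_valid_password(password):
    """Illustrative implementation of requirement R-17."""
    return len(password) >= 8

# Test cases derived directly from the requirement text:
assert is_valid_password("abcdefgh") is True   # exactly at the stated minimum
assert is_valid_password("abcdefg") is False   # one character below the minimum
assert is_valid_password("long-enough-pass") is True
print("All requirement-based checks passed.")
```

Each assertion traces back to the wording of the requirement, which is what distinguishes this technique from tests derived from code structure.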
If a tester deems the defect as not genuine, what status is typically assigned to the bug?
- Closed
- Deferred
- Rejected
- Reopened
If a defect is deemed not genuine, it is typically marked as "Rejected." This status indicates that the reported defect is not reproducible, is intended behavior, or is invalid in the context in which it was reported.
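The statuses in the options above form a small state machine. The sketch below shows one possible, simplified set of transition rules; real issue trackers (Jira, Bugzilla, etc.) each define their own workflows:

```python
from enum import Enum

class BugStatus(Enum):
    NEW = "New"
    REJECTED = "Rejected"
    DEFERRED = "Deferred"
    CLOSED = "Closed"
    REOPENED = "Reopened"

# Assumed, simplified transition rules for illustration only.
ALLOWED = {
    BugStatus.NEW: {BugStatus.REJECTED, BugStatus.DEFERRED, BugStatus.CLOSED},
    BugStatus.REJECTED: {BugStatus.REOPENED},
    BugStatus.CLOSED: {BugStatus.REOPENED},
}

def transition(current, target):
    """Move a bug to a new status, rejecting illegal workflow jumps."""
    if target not in ALLOWED.get(current, set()):
        raise ValueError(f"Illegal transition: {current.value} -> {target.value}")
    return target

# Tester deems the defect not genuine:
status = transition(BugStatus.NEW, BugStatus.REJECTED)
print(status.value)  # Rejected
```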
Why might an organization prefer Alpha Testing over Beta Testing for certain software products?
- Alpha Testing is more time-consuming.
- Alpha Testing is performed without actual users.
- Alpha Testing offers tighter feedback loops.
- Beta Testing is restricted to internal teams.
Alpha Testing is usually performed in a controlled environment by internal teams, giving the organization quicker and more direct feedback, i.e., a tighter feedback loop. Beta Testing, on the other hand, involves actual users but can introduce challenges in managing feedback and potential public-relations issues.
Which of the following is a primary goal of accessibility testing?
- To ensure compatibility on all devices
- To ensure the application is usable by people with disabilities
- To find performance bottlenecks
- To identify usability issues
Accessibility testing primarily aims to ensure that applications and websites are usable by people with disabilities like visual, auditory, cognitive, and motor impairments. While usability, compatibility, and performance are important, they are separate areas of testing. Accessibility testing focuses on ensuring equal access and inclusivity.
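Some slices of accessibility testing can even be automated. The sketch below uses Python's standard-library HTML parser to flag `<img>` tags that lack the `alt` text that accessibility guidelines such as WCAG call for; the class name and sample markup are invented for the example:

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Flags <img> tags without an alt attribute, one small automatable
    accessibility check (WCAG requires text alternatives for images)."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.violations.append(self.getpos())  # (line, column) of the tag

checker = MissingAltChecker()
checker.feed('<p><img src="logo.png" alt="Company logo"><img src="deco.png"></p>')
print(len(checker.violations))  # 1
```

Automated checks like this catch only a narrow class of issues; full accessibility testing still requires evaluation with assistive technologies and real users.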
When it comes to managing large-scale test suites, which approach helps in ensuring that the tests remain relevant and effective over time?
- Adding more test scripts.
- Only focusing on new features' tests.
- Refactoring test cases regularly.
- Running all tests continuously.
Regularly refactoring test cases is an essential practice when managing large-scale test suites. As the application under test evolves, some test cases become redundant, outdated, or irrelevant. Refactoring keeps the test suite lean, relevant, and maintainable, preserving its effectiveness over time.
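As a sketch of what such refactoring can look like, the data-driven test below replaces what might otherwise be several near-identical copy-pasted test methods. The discount rules, function, and class names are invented for illustration:

```python
import unittest

def apply_discount(amount):
    """Illustrative function under test: 10% off from 100, 20% off from 500."""
    if amount >= 500:
        return amount * 0.8
    if amount >= 100:
        return amount * 0.9
    return float(amount)

class TestDiscount(unittest.TestCase):
    # One data-driven test instead of three near-identical methods:
    # adding a new pricing tier means adding a row, not copy-pasting code.
    def test_discount_tiers(self):
        cases = [(50, 50.0), (100, 90.0), (500, 400.0)]
        for amount, expected in cases:
            with self.subTest(amount=amount):
                self.assertEqual(apply_discount(amount), expected)

# Run with: python -m unittest <this_file>
```

Consolidating duplicated tests like this is exactly the kind of refactoring that keeps a large suite maintainable as the application changes.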
The technique where expert evaluators review an interface based on usability principles is termed _______.
- Cognitive Walkthrough
- Dynamic Analysis
- Heuristic Evaluation
- Interface Mapping
"Heuristic Evaluation" is a usability inspection method where expert evaluators individually review an interface based on a list of recognized usability principles, known as heuristics. These evaluators identify usability problems in the design, allowing designers to rectify these issues for an improved user experience.
When testers explore the application without any specific plans and simultaneously design and execute tests, they are engaged in _____.
- Exploratory Testing
- Regression Testing
- Scripted Testing
- Smoke Testing
Exploratory Testing involves testers exploring the software without pre-defined test cases or a specific plan. It's a dynamic process where testers learn the application and simultaneously design and execute tests to find defects.
The goal of _______ testing is to ensure that the software application performs adequately when subjected to varying workloads.
- Load Testing
- Performance Testing
- Scalability Testing
- Volume Testing
Performance Testing encompasses a range of tests (including Load, Stress, Scalability, and Volume Testing) to ensure that the software behaves well under expected loads, extreme conditions, and varying workloads. The objective is to deliver a seamless user experience, irrespective of the conditions or demands placed on the software.
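A minimal sketch of exercising a system under varying workloads, assuming a simple I/O-bound operation stands in for the system under test (all names and numbers here are illustrative, not a real load-testing tool):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    """Stand-in for the operation under test."""
    time.sleep(0.01)  # simulate I/O-bound work

def measure(workload):
    """Run `workload` concurrent requests and return elapsed wall-clock time."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workload) as pool:
        for _ in range(workload):
            pool.submit(handle_request)
    return time.perf_counter() - start  # pool shutdown waits for all tasks

for workload in (1, 10, 50):  # increasing load levels
    print(f"{workload:>3} concurrent requests: {measure(workload):.3f}s")
```

Real performance testing would use dedicated tooling and measure latency percentiles and throughput, but the core idea of repeating a measurement at increasing load levels is the same.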
How is the "Defect Removal Efficiency" metric typically calculated?
- (Defects fixed / Defects reported) x 100%.
- (Defects found by testers / Defects found by users) x 100%.
- (Defects found post-release / Total defects) x 100%.
- (Defects found pre-release / Total defects) x 100%.
The Defect Removal Efficiency (DRE) metric measures the effectiveness of the testing process. It is calculated as the ratio of defects found before release (by the testing team) to the total defects found both before and after release, expressed as a percentage. A higher DRE indicates a more effective testing process.
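The formula can be expressed directly in code. A minimal sketch, with invented sample defect counts:

```python
def defect_removal_efficiency(pre_release_defects, post_release_defects):
    """DRE = defects found before release / total defects, as a percentage."""
    total = pre_release_defects + post_release_defects
    if total == 0:
        raise ValueError("No defects recorded; DRE is undefined.")
    return pre_release_defects / total * 100

# Example: 90 defects caught in testing, 10 reported by users after release.
print(defect_removal_efficiency(90, 10))  # 90.0
```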