How would you differentiate between a heuristic evaluation in usability testing and standard user interface testing?

  • Method of Analysis
  • Test Execution Speed
  • Tools Used
  • Type of Defects Detected
Heuristic evaluation and standard user interface testing differ mainly in their method of analysis. A heuristic evaluation is a usability inspection method in which evaluators, usually usability experts, compare a software product against a set of established usability principles, or heuristics. In contrast, standard UI testing is typically scenario-based: testers execute predefined test cases against the interface to verify that its elements behave according to the functional requirements.

What is the primary purpose of developing test scripts in automation testing?

  • To document test cases
  • To execute tests repeatedly without manual intervention
  • To find all software defects
  • To reproduce user actions
The primary purpose of developing test scripts in automation testing is to enable tests to be executed repeatedly without manual intervention. This ensures consistent test execution, supports efficient regression testing, and helps validate application functionality across different test cycles.
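As a hedged illustration, the sketch below shows what such a repeatable script might look like using pytest (an assumed framework); the login() helper and its credentials are hypothetical stand-ins for the application under test.

```python
# Minimal sketch of a repeatable automated test script (pytest assumed).
# login() is a hypothetical stand-in for the real application call under test.
import pytest

def login(username, password):
    # Placeholder logic; in practice this would drive the real application.
    return username == "admin" and password == "secret"

@pytest.mark.parametrize("username, password, expected", [
    ("admin", "secret", True),   # valid credentials
    ("admin", "wrong", False),   # invalid password
    ("", "", False),             # empty input
])
def test_login(username, password, expected):
    # The same checks run unattended on every cycle, giving consistent results.
    assert login(username, password) == expected
```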

Your company is launching a product that requires rigorous regression testing for each release. What type of testing approach might you consider?

  • Ad-hoc Testing
  • Automated Regression Testing
  • Black Box Testing
  • Exploratory Testing
Regression testing ensures that previously developed and tested software still works after a change. Because the same tests must be re-run for every release, Automated Regression Testing is the most suitable approach: an automated suite can execute repetitive test cases quickly and consistently in each cycle.
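As a rough sketch (assuming pytest and its marker mechanism; the calculate_total() function is hypothetical), a regression suite can be tagged so the same subset is re-run automatically for every release:

```python
# Hedged sketch: tagging regression tests so the same suite runs on each release.
# pytest is assumed; calculate_total() is a hypothetical function under test.
import pytest

def calculate_total(prices, tax_rate=0.1):
    return round(sum(prices) * (1 + tax_rate), 2)

@pytest.mark.regression
def test_total_with_tax():
    assert calculate_total([10.0, 20.0]) == 33.0

@pytest.mark.regression
def test_total_for_empty_cart():
    assert calculate_total([]) == 0.0

# The regression subset can then be re-run unchanged for every release, e.g.:
#   pytest -m regression
```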

Which scripting technique involves creating scripts by capturing the user actions on the application?

  • Data-Driven Testing
  • Descriptive Programming
  • Keyword-Driven Testing
  • Record and Playback
The "Record and Playback" technique involves recording user actions as they interact with the application. The recorded actions then form a script, which can be played back later. This technique is helpful for novice testers as it requires minimal scripting knowledge but is not always scalable.

The _______ metric provides insight into how promptly a team responds to and addresses defects.

  • Defect Age
  • Defect Density
  • MTBF
  • Test Coverage
The "Defect Age" metric is a measure of the time between when a defect is introduced into the system and when it's discovered and resolved. It provides insight into how quickly a team identifies and addresses defects. MTBF (Mean Time Between Failures) measures the expected time between two successive failures of a system.

Usability, consistency, and adherence to design guidelines are primary considerations in _______ testing.

  • Compatibility
  • Load
  • Usability
  • User Acceptance (UAT)
Usability testing is geared towards understanding how end-users interact with the software and ensuring a positive user experience. The primary considerations are the system's ease of use, its consistency in design and functionality, and its adherence to user interface design guidelines. UAT, on the other hand, verifies that the system meets business and user requirements, while load testing focuses on behaviour under demand and compatibility testing on behaviour across environments and devices.

_______ are typically used as placeholders for modules that have not yet been developed during incremental integration testing.

  • Drivers
  • Simulators
  • Stubs
  • Test Harnesses
Stubs are used in incremental integration testing as placeholders for modules that have not yet been developed. They simulate the behavior of the missing lower-level modules so that testing can continue despite incomplete components, as in the sketch below. Drivers, by contrast, stand in for higher-level calling modules in bottom-up integration.
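A minimal sketch of the idea in Python (all names are hypothetical): a stub stands in for a payment module that has not been built yet, so the calling order-handling code can still be integration-tested top-down.

```python
# Hedged sketch: a stub replacing a not-yet-developed payment module so the
# higher-level OrderService can be tested. All names are hypothetical.

class PaymentGatewayStub:
    """Placeholder for the missing module; returns canned responses."""
    def charge(self, amount):
        # Simulates a successful charge; a richer stub could also return
        # fixed failure responses to exercise error-handling paths.
        return {"status": "approved", "amount": amount}

class OrderService:
    """Module under test; depends on the (stubbed) lower-level component."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        return self.gateway.charge(amount)["status"] == "approved"

def test_place_order_with_stub():
    assert OrderService(PaymentGatewayStub()).place_order(49.99) is True
```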

When managing an automated test suite, what's the primary purpose of regularly updating and maintaining the test suite?

  • To accommodate new testing tools
  • To ensure the test suite matches the current application
  • To integrate with third-party applications
  • To make the suite look appealing
Regularly updating and maintaining an automated test suite is crucial to ensure that the suite is always in sync with the current version of the application. As software evolves, test cases that were once relevant might become obsolete, and new test scenarios may arise. Maintenance ensures the suite remains effective and reflective of current needs.

During a project review, it's revealed that certain parts of the codebase have been overlooked during testing. Which white-box testing metric might help identify these areas?

  • Cyclomatic Complexity
  • Load Testing
  • Reliability Testing
  • Response Time
Cyclomatic Complexity is a white-box metric that measures the structural complexity of a program by counting the number of linearly independent paths through its source code. A higher score flags code that is riskier and harder to cover, and it also sets a lower bound on the number of test cases needed for basis path coverage, which helps identify areas that have been under-tested.
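As a rough illustration (the shipping_cost() function is hypothetical), the complexity of a single function can be estimated as the number of decision points plus one, which also suggests how many basis-path test cases it needs:

```python
# Hedged illustration: for a single function, cyclomatic complexity is roughly
# the number of decision points + 1. shipping_cost() is a hypothetical example.

def shipping_cost(weight, express, international):
    if weight <= 0:            # decision 1
        raise ValueError("weight must be positive")
    cost = 5.0
    if weight > 10:            # decision 2
        cost += 10.0
    if express:                # decision 3
        cost *= 2
    if international:          # decision 4
        cost += 15.0
    return cost

# V(G) = 4 decisions + 1 = 5, so there are 5 linearly independent paths;
# basis path testing would call for roughly that many test cases to cover them.
```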

In the context of risk management, the term used to describe the level of risk after mitigation efforts have been applied is known as _______ risk.

  • controlled
  • inherent
  • initial
  • residual
Residual risk is the risk that remains after all risk response, mitigation, or prevention activities have been implemented. It represents the remaining threat even after you've taken measures to reduce the severity or likelihood of the adverse event.
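As a simplified worked example (this linear model is an assumption, not a formula prescribed by any particular risk framework), residual risk can be approximated as the inherent risk reduced by the effectiveness of the applied controls:

```python
# Hedged sketch: a simplified model of residual risk. The 0-to-1 scale and the
# linear reduction are illustrative assumptions, not a standard prescribed formula.

inherent_risk = 0.8          # likelihood x impact before any mitigation (0..1)
control_effectiveness = 0.6  # fraction of that risk removed by the mitigation

residual_risk = inherent_risk * (1 - control_effectiveness)
print(f"Residual risk after mitigation: {residual_risk:.2f}")  # 0.32
```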