Proceedings of the European Computing Conference

ebook Volume 2 · Lecture Notes in Electrical Engineering

By Nikos Mastorakis


Software testing is considered so important that organizations may assign teams exclusively to testing activities. Testing is an essential activity for ensuring software quality. Tests are usually run several times to certify that code maintenance did not accidentally introduce defects into working parts of the software. In such situations, test teams must be able to estimate the effort required to execute test cases within their schedules, and to request more resources or negotiate deadlines when necessary.

With the model-based testing approaches that we use in our documented Integrated and Optimized Software Testing methodology (IOSTP) [1–4], a large number of test cases can be generated automatically. As team resources are limited, it may not be practical to execute all generated test cases. Their complexity largely determines the effort required to execute them, and it can therefore be used for planning test resources and test suites.

Several software development estimation models have been proposed over the years. However, these models do not estimate the effort for executing a given test suite, since their estimates are based on software development complexity rather than on test planning, test case design, and test execution complexity. According to our reading of the literature (e.g., [5–14]), "best practices" in model-based effort estimation include local calibration (LC), i.e., using local data to set two special tuning parameters, and stratification, i.e., given a database of past projects and a current project to be estimated, restricting local calibration to just those records from similar projects.
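To make these two practices concrete, the following is a minimal Python sketch, assuming a COCOMO-style power-law model effort = a * size^b whose two tuning parameters a and b are the ones set by local calibration; the historical records, project types, and sizes are invented for illustration and are not from the cited studies.

```python
import math

# Hypothetical historical records: (project_type, size_kloc, effort_pm).
# Values are illustrative only.
HISTORY = [
    ("embedded", 12.0, 55.0),
    ("embedded", 30.0, 160.0),
    ("embedded", 8.0, 34.0),
    ("web", 20.0, 48.0),
    ("web", 45.0, 120.0),
]

def local_calibration(records):
    """Fit the two tuning parameters (a, b) of effort = a * size^b
    by least squares in log space, using only local data."""
    xs = [math.log(size) for _, size, _ in records]
    ys = [math.log(effort) for _, _, effort in records]
    n = len(records)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = math.exp(mean_y - b * mean_x)
    return a, b

def stratified_estimate(history, project_type, size_kloc):
    """Stratification: restrict local calibration to records from
    projects similar to the one being estimated, then apply the model."""
    stratum = [r for r in history if r[0] == project_type]
    a, b = local_calibration(stratum)
    return a * size_kloc ** b

if __name__ == "__main__":
    est = stratified_estimate(HISTORY, "embedded", 15.0)
    print(f"Estimated effort: {est:.1f} person-months")
```

Run as written, the script calibrates a and b on the "embedded" stratum alone and prints an estimate of roughly 71 person-months for a hypothetical 15 KLOC project; calibrating on all records instead would pull the estimate toward the dissimilar "web" projects, which is precisely what stratification avoids.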