Friday, 8 August 2014

Hello everyone - here are answers to a few questions we have been asked frequently.

Automation

Q: What is the difference between what Microsoft Test Manager offers and Record and Playback?
A: Test Manager's record and playback is intelligent and robust. We try to infer user intent from the captured input, which reduces the dependency on the specific location of controls. The focus is on helping a manual tester accelerate through steps that have already been recorded.

Q. Can testers view and modify code created by the recorder?
A: While the action log created by the recorder cannot be modified, you can create a Coded UI test (in either C# or VB.NET) that can be viewed and modified in the Visual Studio IDE. We have made re-recording actions easy enough that we don't expect you to need to edit a recording.
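To give a sense of what such code looks like, here is a minimal hand-written sketch against the Coded UI API; the application path and button name are illustrative assumptions, not recorder output:

    using Microsoft.VisualStudio.TestTools.UITesting;            // Coded UI core API
    using Microsoft.VisualStudio.TestTools.UITesting.WinControls;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [CodedUITest]
    public class SaveButtonTests
    {
        [TestMethod]
        public void ClickingSaveWorks()
        {
            // Launch the application under test (the path is an assumption).
            ApplicationUnderTest app =
                ApplicationUnderTest.Launch(@"C:\MyApp\MyApp.exe");

            // Describe the control to find: a Win32 button named "Save".
            WinButton save = new WinButton(app);
            save.SearchProperties.Add(WinButton.PropertyNames.Name, "Save");

            // Replay a user action against it.
            Mouse.Click(save);
        }
    }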

Q. How does the “assisted manual testing” react to UI whose properties are dynamically generated?
A: This should work fine as long as the generated properties are deterministic. This blog post describes how to control the search properties.
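For illustration, here is a minimal sketch of the idea: relax a Coded UI search property so that only the stable part of a dynamically generated value has to match (the control type, container and id prefix are assumptions):

    using Microsoft.VisualStudio.TestTools.UITesting;
    using Microsoft.VisualStudio.TestTools.UITesting.WpfControls;

    public static class DynamicIdHelpers
    {
        // 'container' is any already-located parent control, e.g. the main window.
        public static void ClickSaveButton(UITestControl container)
        {
            WpfButton save = new WpfButton(container);
            save.SearchProperties.Add(new PropertyExpression(
                WpfButton.PropertyNames.AutomationId,
                "saveButton_",                          // stable prefix of a generated id
                PropertyExpressionOperator.Contains));  // match on the prefix only
            Mouse.Click(save);
        }
    }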
  
Q. How easy would it be for a developer to re-use the rich bug to test the fix?
A: This is straightforward. If the test has been automated, the developer can open the associated test case, choose to run it, and play back the recorded actions against the fixed application.

Q: Would you recommend leveraging the UI automation created in Test Manager in a performance test?
A: While UI automation can be used in a performance test, we would not recommend it: you won't be able to drive significant load through a tool that bottlenecks at the UI layer. Instead, you might be better off using Web Performance Tests to drive the server load without the UI dependency.

 Q: When you convert test actions to code, what languages do you support?
 A: We generate either C# or VB.Net code from the recorded actions.

Q: Can you parameterize your automated tests to run for multiple sets of data?
A: Yes, you can parameterize your tests to run for multiple sets of data. If you like, the data stored on a parameterized manual test can be bound to and leveraged by its automated counterpart.
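As a sketch of the underlying MSTest mechanism (the CSV file and column names are assumptions), the DataSource attribute runs the test once per data row:

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class LoginTests
    {
        public TestContext TestContext { get; set; }

        [DeploymentItem("logins.csv")]
        [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
                    "|DataDirectory|\\logins.csv", "logins#csv",
                    DataAccessMethod.Sequential)]
        [TestMethod]
        public void LoginSucceedsForEachAccount()
        {
            // One iteration runs per CSV row; read this iteration's values.
            string user = TestContext.DataRow["username"].ToString();
            string pass = TestContext.DataRow["password"].ToString();
            // ... drive the recorded steps with user/pass ...
        }
    }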
  
Q: Can you drive your automated tests from data in a database?
A: You can bind your automated tests to a database to run them for multiple iterations of data.  Please refer to this article for more details.
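The same attribute can point at a database table instead of a file; the connection string and table name below are assumptions:

    // Each row in the TestAccounts table drives one iteration of the test.
    [DataSource("System.Data.SqlClient",
                @"Data Source=.\SQLEXPRESS;Initial Catalog=TestData;Integrated Security=True",
                "TestAccounts",
                DataAccessMethod.Sequential)]
    [TestMethod]
    public void LoginSucceedsForEachDatabaseRow()
    {
        string user = TestContext.DataRow["username"].ToString();
        // ... same pattern as the CSV-bound test above ...
    }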

Code Coverage

Q: What level of support do you provide for code coverage?
A: Users of Visual Studio can see the code covered by their automated tests in Visual Studio. Unfortunately, code coverage information is not currently collected during manual testing.

Competition

Q: How does this tool compare to Selenium and other open-source testing tools?
A: Our offering engages testers in the overall ALM effort through integration with lab management and TFS for work-item tracking, source control, build management, and reporting. Many open-source testing tools are standalone and tend to focus on a particular platform or type of testing (e.g., Selenium is specifically for browser-based testing of web applications).

Q: What are the key benefits this tool provides over HP's QC/QTP?
A: We feel that Visual Studio Test Professional 2010 is optimized for the way testing is performed today, where more than 70% of testing is done manually through ad-hoc tools such as Microsoft Excel and Word. It is an integrated testing tool set that delivers a complete plan-test-track workflow while allowing testers to collaborate effectively with developers and test efficiently.

Integration

Q: Does this tool integrate with Office Project?
A: Yes, via TFS. Office Project can be used to manage data stored in TFS (such as dependent tasks) to facilitate planning projects, scheduling tasks, assigning resources, and tracking changes. By using Office Project, you gain access to features such as a project calendar, Gantt charts, and resource views.

Q: Does this tool integrate with Office SharePoint?
A: Yes, via TFS, which enables creation of a team project portal associated with a SharePoint site. The project portal includes a customizable set of dashboards (SharePoint Web parts) that enable team members to monitor project data in the form of PivotChart reports and lists or counts of work items. The testing tools are also integrated with SharePoint in the sense that they can be used to test SharePoint sites.

Q: Can this tool integrate with other testing systems such as Rally, QTP, or QC?
A: Not directly, but if you can export test cases, bugs, and other artifacts from the other system into Office Excel, you can then import them to TFS as work items. You can use the Test Case Migrator Tool (available at http://tcmimport.codeplex.com/) to import test case steps and parameterized data along with your test cases.
  

IntelliTrace

Q: Does seeing the line of code that failed require a special type of build or deployment?
A: No, no special build or deployment is required. However, the tester must configure IntelliTrace data collection on the machine running the code under test; this is done via the test settings.

Q: Will the developer be able to see the line of code for web-based development, where the code sits on the server and the test is executed via a client browser?
A: Yes, the developer will be able to see the line of code that failed, as long as the tester configures the web server to capture IntelliTrace logs.

Internal Use of Test Professional

Q: Were these test tools used during the development of Visual Studio 2010 or any other MSFT products?
A: Absolutely. The entire Visual Studio team used this toolset to build Visual Studio 2010, and many other teams at Microsoft are using it as well.
    

Platform Support

Q: Can you provide some information about the platform support for Microsoft Test Manager? What class of applications can I test with the tool?
A: You can use the tool to manually test almost any platform, though your experience around the creation of rich bugs and the support for automation will be better if you are targeting IE7/IE8, Windows Forms, WPF, Win32, MFC or SharePoint. This blog post enumerates the automation platform support and will be updated over time as we add more.
  

Requirements

Q: Does Test Professional have requirement tracing built in?
A: Requirements are treated as first-class citizens in Test Professional. One can see full traceability of Requirement -> Test -> Code, and one can also base a test suite on a requirement. This blog post covers the topic extensively.

Q: Can a user produce a Requirement traceability report using Test Professional?
 A: Test Professional provides full requirement traceability.  Out of the box, users can see a requirements traceability report on their team dashboard.
  

Test Manager

 Q: Can I email a defect in Visual Studio 2010?
 A: Yes, you can email a link to a defect from Visual Studio. 
  
Q: How can I link Bugs to Test Cases? How can I link bugs to Product Backlog items? 
A: Defects, test cases, and backlog items are all work items in Team Foundation Server. Test Professional automatically links work items together when appropriate (for example, when a test creates a bug, the test and the bug are linked together). Users can also manually link work items together in Test Manager or inside Visual Studio.
   
Q: Can a tester send a video of their testing to a developer? 
A: Yes. When a tester creates a bug with background video recording on, the video of the testing that caused the bug is referenced on the bug form, which the tester assigns to the developer. If you wish, you can also save the video stored on the test result to a network share. Demo 2 of the webinar demonstrated the rich bug which is accessible to the developer.
  
Q: Is there any way to track testing across a variety of operating systems?
A: By setting up a configuration for each operating system, you can break down your testing for each operating system.  To read more about configurations, please refer to this blog post.

 Q: What does Analyze Test Runs do?
 A: Analyze Test Runs allows you to review the results of your automated and manual test runs.  For more details, please refer to this blog post.
  
Q: Can we access Test Manager via the web?
A: Test Manager 2010 installs as a client tool. When it launches, you are presented with the various projects on TFS which you can connect to.
  
Q: We have a large number of testers. Can a test manager assign test cases to individuals?
A: Yes, test managers can assign test cases to individuals, and testers can also pull from a pool of test cases. Test cases are assigned in the test plan's Contents view by clicking the Assign button.
  
Q: Is Test Manager available as a plugin for Visual Studio 2008?
A: No, Test Manager is only available via Visual Studio Test Professional 2010 or Visual Studio Ultimate 2010.
  
Q: Does Test Professional support the branching of Test Cases?
A: Test Manager does not support branching of work items. However, you can import test cases from one plan to another, so the same test case can be used across code branches.
  
Q: Is it possible to work with components of tests being shared between test cases?
A: Yes, shared steps can be created once and shared between test cases.  They are really powerful. Please refer to this blog post to learn more about them.
    
Q: Are test steps and expected results which span multiple lines supported?
A: Yes, you can enter multiple lines for both the test step and the expected result. The text wraps both in the test runner and while authoring the steps.
  
Q: Is Test Professional compatible with Agile methods?
A: Yes, it is. We suggest using one test plan per sprint of work and one requirement-based suite per user story. This blog post describes our suggested approach.

Q: When I'm editing a test case in MTM, the steps/parameters section of my work item changes height, or is very small. This is driving me crazy!
A: This is a known bug, and we're looking at addressing it in a future release. For the moment, there is a workaround: adjust the height of the window. This should make the issue go away.

Q: How can I find all the test cases that I don't have action recordings for, so I can create recordings for them?
A: Today, there is not an easy way to do this. The recording is stored on the test result, and it is not possible to query across work items and test results. It's important to note that with Fast Forward for Navigation, the expectation is that people will record and update their action recordings as they perform their normal manual testing.

Q: How do I view the results of more than one query?
A: Just click the 'queries' link at the top of the application again. This opens a new Queries view and allows you to have two queries open. You can find the other queries you have open in the 'Open Items' drop-down.

Test Impact

Q: How does Recommended Tests work?
A: When the Test Impact data collector is turned on, Test Runner records the code paths taken by a manual or automated test case when it is executed. Recommended Tests then recommends tests for execution when a developer makes a check-in that touches code covered by a test case.
  
Q: Do applications need to be Debug builds to track the lines of code hit by the test?
A: No, applications do not need to be debug builds to capture test impact data; the Test Impact data collector just needs to be turned on while testing.
  
Q: What is the difference between Test Impact Analysis and Code Coverage?
A: Code coverage is a report showing which lines of the code under test were covered by your automated tests. Test impact analysis records the lines of code your tests cover in order to suggest which test cases to re-run based on code churn.

Wednesday, 23 July 2014

02:10 | by Dragan Mestrovik
Technical recruiters are always looking for automation engineers and testers who can deliver valuable contributions to the company, saving a lot of time and resources. Although different companies use different approaches to recruiting automation engineers, most nowadays follow the model that Google has implemented successfully to hire its engineers.
In “How Google Tests Software”, James Whittaker, Jason Arbon and Jeff Carollo describe the three roles that Google created to keep its engineers responsible, productive and quality-minded:
  • Software Engineer (SWE): SWEs play the role of the traditional developer, writing the functional code that ships to users. They also create design documentation and choose data structures and the overall architecture of the automation framework. They are involved in writing test code, test-driven design and unit tests, and they own quality for everything they touch, whether they wrote it, fixed it or modified it.
  • Software Engineer in Test (SET): An SET also wears a developer hat but focuses on testability and general test infrastructure. SETs review designs and look closely at code quality and risk. They are more involved in refactoring code to make it more testable, and they write unit-testing frameworks and automation.
  • Test Engineer (TE): The TE role is similar to the SET role but focuses on testing on behalf of the user first and developers second. TEs write the code that drives usage scenarios, mimicking the user. In short, TEs are product experts, quality advisers and analyzers of risk.
Although the definitions of the three roles seem self-explanatory, and we are more interested in the latter two, many people, including myself, wonder which engineering hat they wear every day.
To answer that question, James Whittaker, Jason Arbon and Jeff Carollo summarize a list of questions that can help you decide whether you belong with the SETs or the TEs.
You might be a SET if
  • You can take a specification, a clean whiteboard, and code up a solid and efficient solution.
  • When you code, you guiltily think of all the unit tests you should be writing. Then, you end up thinking of all the ways to generate the test code and validation instead of hand crafting each unit test.
  • You think an end user is someone making an API call.
  • You get cranky when you look at poorly written API documentation, but sometimes forget why the API is interesting in the first place.
  • You find yourself geeking out with people about optimizations in code or about looking for race conditions.
  • You prefer to communicate with other human beings via IRC or comments in check-ins.
  • You prefer a command line to a GUI and rarely touch the mouse.
  • You dream of machines executing your code across thousands of machines, beating up and testing algorithms, showing their correctness through sheer numbers of CPU cycles and network packets.
  • You have never noticed or customized your desktop background.
  • Seeing compiler warnings makes you anxious.
  • When asked to test a product, you open up the source code and start thinking about what needs to be mocked out.
  • Your idea of leadership is to build out a great low-level unit test framework that everyone leverages, or one that is exercised millions of times a day by a test server.
  • When asked if the product is ready to ship, you might just say, “All tests are passing.”
You might be a TE if
  • You can take existing code, look for errors, and immediately understand the likely failure modes of that software, but don't much care about coding it from scratch or making the change.
  • You prefer reading Slashdot or News.com to reading other people's code all day.
  • When you read a spec for a product that is half-baked, you take it upon yourself to fill in all the gaps and just merge this into the document.
  •  You dream of working on a product that makes a huge impact on people’s lives, and people recognize the product you work on.
  • You find yourself appalled by some websites’ UI and wonder how they could ever have users.
  • You get excited about visualizing data.
  • You find yourself wanting to talk to humans in meat space.
  • You don't understand why you have to type “i” to start typing in a certain text editor.
  • Your idea of leadership is nurturing other engineers’ ideas and challenging their ideas with an order of magnitude more scale.
  • When asked if the product is ready to ship, you might say, “I think it’s ready.”
02:09 | by Dragan Mestrovik
As technology has changed drastically, the importance of automation has grown exponentially. According to salesforce.com, about 1 million browser tests run every day across 50,000 VMs. That is a lot of automated tests in one day, but at the scale of Salesforce's cloud computing business it is a necessity. During a Selenium meet-up in San Jose, Greg Wester from Salesforce stated that quality assurance at Salesforce depends entirely on automation, which means that, with few exceptions, no manual testing is done at all.
Now and then, colleagues ask me about the best programming language to learn for automation or to apply to automation at work. My immediate answer is always: “It depends”. It's a complicated question that depends on a lot of factors, which I have attempted to describe below.
  1. Tool: Every automation tool is built around certain programming languages, so you need to learn a language the tool supports. For instance, if you are using Quick Test Pro (QTP), the primary language you need to learn is VBScript. Similarly, if you are using Test Studio, you will need to learn C# or VB.NET. If you use Selenium, however, you have a variety of options to choose from: C#, Java, PHP, Python, Ruby and JavaScript (a minimal C# example appears below). Each language has its own advantages and disadvantages, so it's very hard to say that one is better than another.
  2. Project Framework: Although this factor is debatable, many companies tend to use the same language that the developers use to build the application. In most companies Java is the most commonly used language, so QA managers tend to pick the same language on the theory that help from the developers is available when needed. However, some QA teams use whichever language they think suits them best, regardless of the language used to build the application.
  3. Team Knowledge: In my opinion, this factor plays a great role in determining the tool and the language for automation. If all or most QA automation engineers feel comfortable with a particular language, that language should be chosen for automation; new automation engineers can then be recruited for the language the team selected. Personally, I have worked with many teams that used a variety of languages, such as C#, Java and Ruby, because each team opted for the language the whole team was comfortable with.
  4. Automation Framework Support: Another factor to consider when choosing a programming language for automation is the availability of support. For instance, since Java is the most commonly used language for automation, Selenium has a significant community of Java users and hence more automation support for Java. On the other hand, Python and C# probably have the smallest Selenium communities, so it can be hard to find help when needed. Unless the automation engineers are already expert in a particular language, it is very important to consider the available tool support, especially for open-source tools such as Selenium, when deciding on a programming language for automation.
Considering all the factors highlighted above can be very useful when deciding on the most favorable programming language for the team. Since people favor different programming languages, there is no single jackpot-winner language that satisfies all your needs.
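To make the tool factor concrete, here is a minimal Selenium WebDriver test in the C# binding (the URL and element name are illustrative); equivalent code exists in each of the other supported languages:

    using System;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Firefox;

    class SearchSmokeTest
    {
        static void Main()
        {
            IWebDriver driver = new FirefoxDriver();   // any supported browser driver works
            try
            {
                driver.Navigate().GoToUrl("http://www.example.com/");
                IWebElement box = driver.FindElement(By.Name("q")); // assumed element name
                box.SendKeys("selenium");
                box.Submit();
                Console.WriteLine("Page title: " + driver.Title);
            }
            finally
            {
                driver.Quit();   // always release the browser
            }
        }
    }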

Tuesday, 22 July 2014

23:28 | by Dragan Mestrovik
I've built up quite a comprehensive list of things to think about when test planning. I usually spend an hour or so going through these with the project manager at the start of the project to make sure we have a shared understanding (in my experience the PMs tend to find this really useful).
NB I work for a fairly small organisation that takes on a wide variety of development projects, so the testing needs are often different for each project - hence we need to ask quite a few questions each time.

Scope
  • What's the project?
  • Why is this project important to the customer - what are their goals and priorities?
  • What's not in scope? (Anything that we are not planning to test)

Approach
  • Do we have wireframes, acceptance criteria broken down by story etc?
  • Can we use static analysis tools?
  • What's the code coverage target for unit testing?
  • What are the main integration points with internal or external systems which might need particular integration testing? e.g. emails, payment providers, data migration etc
  • What are the most important high-level functional and non-functional requirements for system testing? (e.g. performance and reliability might be particularly critical for a certain system, or the user must be able to make a booking, etc)
  • Non-functional requirements checklist: do we need to specifically test any of the following? (This is probably the most important and useful aspect of our test planning, as we often uncover unclear or implicit requirements!): security, accessibility, usability, performance, reliability, software/hardware compatibility (e.g. browsers, OS, mobile devices), resource usage (memory/CPU/battery), installation, backup & restore, maintainability (logging etc)
  • What's the plan for UAT?

Test Environment
  • How are we going to get realistic test data? (This is probably the second most useful aspect of our test planning as it can be a challenge, but is also really important, so needs early planning)
  • What CI/test/staging environments are we going to use?

Schedule, Budget & Reporting
  • What test deliverables and reports do we need to give to the customer?
  • What's the budget, how will progress be tracked?
  • Has the tester been invited to all the relevant team meetings?
  • What are the key test/release milestones?

Risks & dependencies
  • What are the project risks we're aware of? e.g. unrealistic timelines at the end of the project, missing 3rd party dependencies blocking integration testing
  • What are the product risks we're aware of? e.g. areas where spec is unclear, anything particularly hard to cover with automated tests
  • What can we do to mitigate these?

Friday, 4 July 2014

22:37 | by Dragan Mestrovik
Well, here are some tips to create a good database test plan:
1. Database testing can get complex. It may be worth your while to create a separate test plan specifically for database testing.
2. Look for database related requirements in your requirements documentation. You should specifically look for requirements related to data migration or database performance. A good source for eliciting database requirements is the database design documents.
3. You should plan for testing both the schema and the data.
4. Limit the scope of your database test. Your obvious focus should be on the test items that are important from a business point of view. For example, if your application is of a financial nature, data accuracy may be critical. If your application is a heavily used web application, the speed and concurrency of database transactions may be very important.
5. Your test environment should include a copy of the database. You may want to design your tests with a test database of small size. However, you should execute your tests on a test database of realistic size and complexity. Further, changes to the test database should be controlled.
6. The team members designing the database tests should be familiar with SQL and database tools specific to your database technology.
7. I find it productive to jot down the main points to cover in the test plan first. Then, I write the test plan. While writing it, if I remember any point that I would like to cover in the test plan, I just add it to my list. Once I cover all the points in the list, I review the test plan section by section. Then, I review the test plan as a whole and submit it for review to others. Others may come back with comments that I then address in the test plan.
8. It is useful to begin with the common sections of the test plan. However, the test plan should be totally customized for its readers and users. Include and exclude information as appropriate. For example, if your defect management process never changes from project to project, you may want to leave it out of the test plan. If you think that query coding standards are applicable to your project, you may want to include it in the test plan (either in the main plan or as an annexure).

Now, let us create a sample database test plan. Realize that it is only a sample. Do not use it as it is. Add or remove sections as appropriate to your project, company or client. Enter as much detail as you think valuable but no more.

For the purpose of our sample, we will choose a database supporting a POS (point of sale) application. We will call our database MyItemsPriceDatabase.

Introduction

This is the test plan for testing MyItemsPriceDatabase. MyItemsPriceDatabase is used in our POS application to provide the current prices of the items. Other databases are used by our application (e.g. the inventory database), but they are out of the scope of this test.

The purpose of this test plan is to:
1. Outline the overall test approach
2. Identify the activities required in our database test
3. Define deliverables

Scope

We have identified that the following items are critical to the success of the MyItemsPriceDatabase:
1. The accuracy of uploaded price information (for accuracy of financial calculations)
2. Its speed (in order to provide quick checkouts)
3. Small size (given the restricted local hard disk space on the POS workstation)

Due to time limitations, we will not test the pricing reports run on the database. Further, since it is a single-user database, we will not test database security.

Test Approach

1. Price upload test
Price upload tests will focus on the accuracy with which the new prices are updated in the database. Tests will be designed to compare all prices in the incoming XML with the final prices stored in the database. Only the new prices should change in the database after the upload process. The tests will also measure the time per single price update and compare it with the last benchmark.

2. Speed test
After analyzing the data provided to us from the field, we have identified the following n queries that are used most of the time. We will run the queries individually (10 times each) and compare their mean execution times with the last benchmark. Further, we will also run all the queries concurrently (in sets of 2 and 3 (based on the maximum number of concurrent checkouts)) to find out any locking issues.
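A minimal sketch of how one timing run could be scripted in C# with ADO.NET (the connection string, query and parameter value are assumptions, since the real queries come from the field data):

    using System;
    using System.Data.SqlClient;
    using System.Diagnostics;

    class QuerySpeedTest
    {
        static void Main()
        {
            const string connStr =
                @"Data Source=.;Initial Catalog=MyItemsPriceDatabase;Integrated Security=True";
            const string query = "SELECT Price FROM ItemPrices WHERE ItemId = @id"; // assumed schema
            long totalMs = 0;
            var watch = new Stopwatch();

            using (var conn = new SqlConnection(connStr))
            {
                conn.Open();
                for (int i = 0; i < 10; i++)            // 10 runs, as in the plan
                {
                    using (var cmd = new SqlCommand(query, conn))
                    {
                        cmd.Parameters.AddWithValue("@id", 1001);
                        watch.Restart();
                        cmd.ExecuteScalar();
                        watch.Stop();
                        totalMs += watch.ElapsedMilliseconds;
                    }
                }
            }
            // Compare this mean against the last benchmark.
            Console.WriteLine("Mean execution time: {0} ms", totalMs / 10.0);
        }
    }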

3. Size test
Using SQL queries, we will review the application queries and find out the following:
a. Items which are never used (e.g. tables, views, queries (stored procedures, in-line queries and dynamic queries))
b. Duplicate data in any table
c. Excessive field width in any table
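For example, check (b) could be implemented with a grouping query such as the one embedded in this sketch (the table and key column are assumptions):

    using System;
    using System.Data.SqlClient;

    class DuplicateDataCheck
    {
        static void Main()
        {
            const string connStr =
                @"Data Source=.;Initial Catalog=MyItemsPriceDatabase;Integrated Security=True";
            const string sql =
                @"SELECT ItemId, COUNT(*) AS Copies
                  FROM ItemPrices
                  GROUP BY ItemId
                  HAVING COUNT(*) > 1";   // any item stored more than once is a duplicate

            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine("ItemId {0} is stored {1} times", reader[0], reader[1]);
                }
            }
        }
    }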

Test Environment

The xyz tool will be used to design and execute all database tests. The tests will be executed on the local tester workstations (p workstations in all).

Test Activities and Schedule
1. Review requirements xx/xx/xxxx (start) and xx/xx/xxxx (end)
2. Develop test queries
3. Review test queries
4. Execute size test
5. Execute price upload test
6. Execute speed test
7. Report test results (daily)
8. Submit bug reports and re-test (as required)
9. Submit final test report

Responsibilities
1. Test lead: Responsible for creating this test plan, assigning and reviewing work, reviewing test queries, reviewing and compiling test results, and reviewing bug reports
2. Tester: Responsible for reviewing requirements, developing and testing test queries, executing tests, preparing individual test results, submitting bug reports and re-testing

Deliverables

The testers will produce the following deliverables:
1. Test queries
2. Test results (describing the tests run, run time and pass/ fail for each test)
3. Bug reports

Risks

The risks to the successful implementation of this test plan, and their mitigations, are as follows:
1.
2.
3.

Approval
       Name        Role        Signature        Date
1. ____________________________________________________________
2. ____________________________________________________________
3. ____________________________________________________________
22:36 | by Dragan Mestrovik
Database migration testing is needed when you move data from the old database(s) to a new database. The old database is called the legacy database or the source database and the new database is called the target database or the destination database. Database migration may be done manually but it is more common to use an automated ETL (Extract-Transform-Load) process to move the data. In addition to mapping the old data structure to the new one, the ETL tool may incorporate certain business-rules to increase the quality of data moved to the target database.

Now, the question arises regarding the scope of your database migration testing. Here are the things that you may want to test.
1. All the live (not expired) entities, e.g. customer records and order records, are loaded into the target database. Each entity should be loaded just once, i.e. there should be no duplication of entities.
2. Every attribute (present in the source database) of every entity (present in the source database) is loaded into the target database.
3. All data related to a particular entity is loaded in each relevant table in the target database.
4. Each required business rule is implemented correctly in the ETL tool.
5. The data migration process performs reasonably fast and without any major bottleneck.

Next, let us see the challenges that you may face in database migration testing.
1. The data in the source database(s) changes during the test.
2. Some source data is corrupt.
3. The mappings between the tables/fields of the source database(s) and the target database are changed by the database development/migration team.
4. A part of the data is rejected by the target database.
5. Due to the slow database migration process or the large size of the source data, it takes a long time for the data to be migrated.

The test approach for database migration testing consists of the following activities:

I. Design the validation tests
In order to test database migration, you need to use SQL queries (created either by hand or using a tool e.g. a query creator). You need to create the validation queries to run against both the source as well as the target databases. Your validation queries should cover the scope defined by you. It is common to arrange the validation queries in a hierarchy e.g. you want to test if all the Orders records have migrated before you test for all OrderDetails records. Put logging statements within your queries for the purpose of effective analysis and bug reporting later.
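As a minimal sketch of one rung of that hierarchy, the check below compares the Orders row counts in the source and target databases and logs the outcome (the connection strings and table name are assumptions):

    using System;
    using System.Data.SqlClient;

    class MigrationCountCheck
    {
        static int CountRows(string connStr, string table)
        {
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM " + table, conn))
            {
                conn.Open();
                return (int)cmd.ExecuteScalar();
            }
        }

        static void Main()
        {
            const string source = @"Data Source=legacy;Initial Catalog=OldDb;Integrated Security=True";
            const string target = @"Data Source=new;Initial Catalog=NewDb;Integrated Security=True";

            int src = CountRows(source, "Orders");
            int tgt = CountRows(target, "Orders");

            // Log the result for later analysis and bug reporting.
            Console.WriteLine("Orders: source={0}, target={1}, {2}",
                src, tgt, src == tgt ? "PASS" : "FAIL");
        }
    }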

II. Set up the test environment
The test environment should contain a copy of the source database, the ETL tool (if applicable) and a clean copy of the target database. You should isolate the test environment so that it does not change externally.

III. Run your validation tests
Depending on your test design, you need not wait for the database migration process to finish before you start your tests.

IV. Report the bugs
You should report the following data for each failed test:
    a. Name of the entity that failed the test
    b. Number of rows or columns that failed the test
    c. If applicable, the database error details (error number and error description)
    d. Validation query
    e. User account under which you ran your validation test
    f. Date and time the test was run

Keep the tips below in mind to refine your test approach:

1. You should take a backup of the current copies of the source and target databases. This would help you in case you need to re-start your test. This would also help you in reproducing any bugs.
2. If some source data is corrupt (e.g. unreadable or incomplete), you should find out if the ETL tool takes any action on such data. If so, your validation tests should confirm these actions. The ETL tool should not simply accept the corrupt data as such.
3. If the mappings between the tables/ fields of the source and target databases are changed frequently, you should first test the stable mappings.
4. In order to find out the point of failure quickly, you should create modular validation tests. If your tests are modular, it may be possible for you to execute some of your tests before the data migration process finishes. Running some tests while the data migration process is still running would save you time.
5. If the database migration process is manual, you have to run your validation queries externally. However, if the process uses an ETL tool, you have the choice to integrate your validation queries within the ETL tool.

I hope that you are now comfortable with the concept of database migration testing, whether the data is migrated between binary files and an RDBMS or between RDBMSs (Oracle, SQL Server, Informix or Sybase). In your experience, what is the main problem faced while testing database migration? What is a good way to handle this problem?
22:14 | by Dragan Mestrovik
Many (but not all) applications under test use one or more databases. The purposes of using a database include long-term storage of data in an accessible and organized form. Many people have only a vague idea about database testing.

First, we need to understand what database testing is. As you would know, a database has two main parts - the data structures (the schema) that store the data AND the data itself. Let us discuss them one by one.


The data is stored in the database in tables. However, tables may not be the only objects in the database; it may also contain objects like views, stored procedures and functions, which help users access the data in the required forms. Database testing involves finding the answers to the following questions:

Questions related to database structure
1. Is the data organized well logically?
2. Does the database perform well?
3. Do the database objects like views, triggers, stored procedures, functions and jobs work correctly?
4. Does the database implement constraints to allow only correct data to be stored in it?
5. Is the data secure from unauthorized access?


Questions related to data
1. Is the data complete?
2. Is all data factually correct i.e. in sync with its source, for example the data entered by a user via the application UI?
3. Is there any unnecessary data present?


Now that we understand database testing, it is important to know about the 5 common challenges seen before or during database testing:

1. Large scope of testing
It is important to identify the test items in database testing. Otherwise, you may not have a clear understanding of what you would test and what you would not test. You could run out of time well before finishing the database test.
Once you have the list of test items, you should estimate the effort required to design the tests and execute the tests for each test item. Depending on their design and data size, some database tests may take a long time to execute. Look at the test estimates in light of the available time. If you do not have enough time, you should select only the important test items for your database test.

2. Incorrect/ scaled-down test databases
You may be given a copy of the development database to test. Such a database may have only a little data (the data required to run the application and some sample data to show in the application UI). Testing the development, test or staging databases may not be sufficient; you should also test a copy of the production database.

3. Changes in database schema and data
This is a particularly nasty challenge. You may find that after you design a test (or even after you execute a test), the database structure (the schema) has been changed. This means that you should be aware of the changes made to the database during testing. Once the database structure changes, you should analyze the impact of the changes and modify any impacted tests.
Further, if your test database is being used by other users, you would not be sure about your test results. Therefore, you should ensure that the test database is used for testing purpose only.
You may also see this problem if you run multiple tests at the same time. You should run one test at a time at least for the performance tests. You do not want your database performing multiple tasks and under-reporting performance.

4. Messy testing
Database testing may get complex. You do not want to be executing tests partially or repeating tests unnecessarily. You should create a test plan and proceed accordingly while carefully noting your progress.

5. Lack of skills
The lack of the required skills may really slow things down. In order to perform database testing effectively, you should be comfortable with SQL queries and the required database management tools.

Next, let us discuss the approach for database testing. You should keep the scope of your test as well as the challenges in mind while designing your particular test design and test execution approach. Note the following 10 tips:

1. List all database-specific requirements. You should gather the requirements from all sources, particularly technical requirements. It is quite possible that some requirements are at a high level; break those down into small, testable requirements.

2. Create test scenarios for each requirement as suggested below.

3. In order to check the logical database design, ensure that each entity in the application (e.g. actors, system configuration) is represented in the database. An application entity may be represented in one or more tables in the database. The database should contain only those tables that are required to represent the application entities and no more.

4. In order to check the database performance, you may focus on its throughput and response times. For example, if the database is supposed to insert 1000 customer records per minute, you may design a query that inserts 1000 customer records and print/ store the time taken to do so. If the database is supposed to execute a stored procedure in under 5 seconds, you may design a query to execute the stored procedure with sample test data multiple times and note each time.

5. If you wish to test the database objects e.g. stored procedures, you should remember that a stored procedure may be thought of as a simple program that (optionally) accepts certain input(s) and produces some output. You should design test data to exercise the stored procedure in interesting ways and predict the output of the stored procedure for every test data set.

6. In order to check database constraints, you should design invalid test data sets and then try to insert/update them in the database. An example of an invalid data set is an order for a customer that does not exist; another is a customer test data set with an invalid ZIP code. (A minimal sketch of such a test appears after this list.)

7. In order to check database security, you should design tests that mimic unauthorized access. For example, log in to the database as a user with restricted access and check whether you can view/modify/delete restricted database objects or view and update restricted data. It is important to back up your database before executing any database security tests; otherwise, you may render your database unusable.
You should also check that any confidential data in the database (e.g. credit card numbers) is either encrypted or obfuscated (masked).

8. In order to test data integrity, you should design valid test data sets for each application entity. Insert/ update a valid test data set (for example, a customer) and check that the data has been stored in the correct table(s) in correct columns. Each data in the test data set should have been inserted/ updated in the database. Further, the test data set should be inserted only once and there should not be any other change in the other data.

9. Since your test design will require creating SQL queries, try to keep your queries as simple as possible to prevent defects in them. It is a good idea for someone other than the author to review the queries. You should also dynamically test each query. One way to test a query is to modify it so that it just shows the result set and does not perform the actual operation (e.g. insert or delete). Another way is to run it for a couple of iterations and verify the results.

10. If you are going to have a large number of tests, you should pay special attention to organizing them. You should also consider at least partial automation of frequently run tests.
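Returning to tip 6, here is a minimal sketch of one constraint test: it inserts an order for a customer that does not exist and passes only if the database rejects the row (the connection string and schema are assumptions):

    using System;
    using System.Data.SqlClient;

    class ConstraintTest
    {
        static void Main()
        {
            const string connStr = @"Data Source=.;Initial Catalog=ShopDb;Integrated Security=True";
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(
                "INSERT INTO Orders (CustomerId, OrderDate) VALUES (-1, GETDATE())", conn))
            {
                conn.Open();
                try
                {
                    cmd.ExecuteNonQuery();
                    Console.WriteLine("FAIL: invalid row was accepted");
                }
                catch (SqlException ex)
                {
                    // Error 547 is a constraint violation in SQL Server.
                    Console.WriteLine(ex.Number == 547
                        ? "PASS: constraint rejected the row"
                        : "FAIL: unexpected error - " + ex.Message);
                }
            }
        }
    }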

Now you should know what database testing is all about, the problems that you are likely to face while doing database testing and how to design a good database test approach for the scope decided by you.