Application Lifecycle Management (ALM) with TFS
What is Application Lifecycle Management (ALM)?
ALM describes methods for managing software development and IT initiatives by automating the process from end to end and integrating the information from the various steps. Integration provides consistency and accuracy, and it also introduces opportunities for automation. There are three core aspects of ALM:
- Traceability of relationships between artifacts. This is traditionally a labor-intensive, manual process, where the effort varies with the number, size, and scope of projects and with the number of artifact interdependencies. Compliance requirements make traceability a necessity.
- Automation of high-level processes. Development organizations commonly use paper-based approval processes to control hand-offs between functional areas. ALM solutions improve efficiency by automating these hand-offs and by providing central storage for all associated documentation. Automated and executable process models are used by ALM solutions to ensure process adherence.
- Reporting to increase visibility. Most managers have limited visibility into the progress of development projects. What visibility they have is typically gleaned from subjective testimonials, and not objective data. The lack of proper reporting also hinders opportunities for process improvement. The ALM reporting functions benefit from integration and automation to provide real-time status information and deep analysis of all activities.
To understand what TFS offers, it’s useful to walk through each of the activities in a typical development process: managing requirements; architecting a solution; developing code; testing code; and managing and tracking the project. It’s also important to think about maintenance, which often consumes more money than a project’s original development. Figure 1 depicts a typical application development lifecycle.
Figure 1: An Application Lifecycle |
Managing Requirements
Requirements are the backbone of a software development project. They drive the design and development, they determine what tests are done, and they’re fundamental to deciding when the software is ready to ship. Given this central role, managing requirements effectively is important.
In Visual Studio, requirements are stored as work items in TFS. The product doesn’t specify how requirements should be gathered, however. One common solution is to record requirements using Microsoft Word or another tool. Third-party products, such as TeamSpec from TeamSolutions, provide add-ins that allow requirements gathered in Word to be automatically synchronized with requirement work items in TFS. Another option is to use SketchFlow, a tool included with Microsoft’s Expression Blend, to create quick sketches of user interfaces. Because these interface prototypes let people see what an application will look like, they can help in understanding a project’s requirements.
However they’re gathered, requirements stored in TFS can be used in several different ways. A primary goal of Visual Studio is to provide requirements traceability, connecting requirements with other aspects of development throughout a project’s life. As mentioned earlier, requirements can be connected with other work items such as tasks and test cases to make this possible. These connections let team members do things like determine which requirements don’t yet have test cases, figure out who’s responsible for the tasks necessary to meet a given requirement, or decide what tests to work on today.
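As a rough sketch of what this traceability looks like from code, the example below uses the TFS client object model to run a WIQL query for requirement work items and then walks each item’s links to related tasks, test cases, and bugs. It assumes the TFS client assemblies are referenced; the collection URL and project name are placeholders, and error handling is omitted.

```csharp
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

class RequirementTraceability
{
    static void Main()
    {
        // Placeholder collection URL and team project name.
        var collection = new TfsTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var store = collection.GetService<WorkItemStore>();

        // WIQL query: every User Story work item in the project.
        WorkItemCollection stories = store.Query(
            "SELECT [System.Id], [System.Title], [System.State] " +
            "FROM WorkItems " +
            "WHERE [System.TeamProject] = 'MyProject' " +
            "AND [System.WorkItemType] = 'User Story' " +
            "ORDER BY [System.Id] DESC");

        foreach (WorkItem story in stories)
        {
            Console.WriteLine("{0} [{1}]: {2}", story.Id, story.State, story.Title);

            // Follow the links from this requirement to related work items,
            // such as the tasks and test cases mentioned above.
            foreach (WorkItemLink link in story.WorkItemLinks)
            {
                WorkItem related = store.GetWorkItem(link.TargetId);
                Console.WriteLine("    -> {0} {1}: {2}",
                    related.Type.Name, related.Id, related.Title);
            }
        }
    }
}
```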
To work with requirements and the work items they’re connected to, the people on a development team have a number of options. A developer might choose to use the Visual Studio IDE to see which bugs are associated with a specific requirement. A tester can use Microsoft Test Manager to display the tests for that requirement. A business analyst might prefer to see requirements via Excel, letting her list the tasks associated with each one, as Figure 2 shows.
Figure 2: A business analyst might use Excel to work with requirements (user stories) and associated tasks stored in TFS. |
The overarching goal is to allow requirements traceability throughout the development lifecycle. By associating requirements with tasks, testing, and other parts of the process, and by making this information accessible through a variety of tools, Visual Studio aims to make it easier for teams to build the right software.
Architecting a Solution
Before writing any code, the people responsible for building a new application usually start by thinking about its structure. What parts should the application have? What should each one do? And how should those parts fit together? Once code actually exists, they ask more questions. What does this class look like? What other classes is it related to? What’s the sequence of calls from this method?
All of these questions lend themselves to visual answers. In every case, creating diagrams that show what’s going on can be the clearest path to understanding. Accordingly, Visual Studio contains tools for creating and working with diagrams that address all of these questions.
Designing Code: UML Modeling
One common way to think and talk about an application’s behavior is by using the Unified Modeling Language (UML). To allow this, Visual Studio provides tools for application modeling with this popular language. These tools support five of the most common UML diagrams: Class, Sequence, Use Case, Activity, and Component.
Once they’re created, UML elements can be linked to work items in TFS. For example, an application architect might create a use case diagram, then link that use case with a work item containing the specific requirement this use case applies to. As usual, the requirement work item can then be linked with test cases, tasks, and other TFS work items. This can make it easier for the architect and other members of the development team to navigate more intelligently through the sea of information about this application and the process used to create it.
Controlling Code: Layer Diagrams
Grouping related responsibilities into clearly defined parts of the code makes sense. One obvious example of this is the division between user interface, business logic, and data in a multi-tier application. But these sharply defined boundaries aren’t useful solely for design; enforcing them also makes code more maintainable. Knowing that a change in, say, the user interface tier won’t affect the data tier eliminates one more risk in making that change.
To help define and enforce these boundaries, Visual Studio provides layer diagrams. An architect or developer can create a layer diagram, then associate different parts of the application with each layer by dragging and dropping a project or class file into it.
Writing Code
Once requirements have been gathered and at least some design has been done, it’s time to start writing code. Visual Studio was originally created more than a decade ago to support this part of the development process, and it’s still a critical aspect of what the tool family provides.
Like every IDE, Visual Studio provides a graphical interface for developers. Figure 3 shows a simple example.
Figure 3: The Visual Studio IDE lets developers write, compile, execute, and test code. |
As the figure suggests, the tool provides what a modern developer expects from an IDE, including a straightforward mechanism for managing code and configuration files, along with the ability to show different parts of the code in different colors.
This same user interface can be used to write code in any of the languages provided with Visual Studio, including:
- C# and Visual Basic
- F#
- C++
- JScript.NET
Along with these languages, the Visual Studio IDE provides a range of other features that help developers write better code. They include the following:
- Support for refactoring, which allows improving the structure, readability, and quality of code without changing what that code does.
- Static code analysis, which examines code for compliance with the Microsoft .NET Framework Design Guidelines or other custom-defined rules. For example, this analysis can help warn developers when they’ve left security holes that allow SQL injection attacks and other threats; a short example after this list illustrates the kind of code these rules are meant to catch.
- Dynamic code analysis, including performance profiling and code coverage. Performance profiling lets a developer see how a running application (including parallel applications) divides its time across its methods, track the application’s memory usage, and more. Code coverage shows what parts of an application’s code were executed by a specific test, allowing the developer to see what’s not being tested.
- Code metrics, a set of measurements calculated by Visual Studio. They include simple things like the number of lines of code, along with more complex metrics such as cyclomatic complexity and a maintainability index. Since complex code is both harder to maintain and more likely to contain errors, having an objective measure of complexity is useful.
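To make the static code analysis point concrete, here’s a small, hypothetical illustration of the kind of code such rule-based analysis is meant to flag, together with the parameterized version a developer would typically switch to. The table and column names are invented.

```csharp
using System.Data.SqlClient;

public class CustomerLookup
{
    // Risky: concatenating user input into the SQL text means a crafted value
    // can change the meaning of the statement (SQL injection). Static code
    // analysis rules warn about queries assembled from strings like this.
    public static SqlCommand UnsafeQuery(SqlConnection connection, string customerName)
    {
        string sql = "SELECT Id, Name FROM Customers WHERE Name = '" + customerName + "'";
        return new SqlCommand(sql, connection);
    }

    // Safer: the user input is supplied as a parameter, so it is always treated
    // as data rather than as part of the SQL statement itself.
    public static SqlCommand SafeQuery(SqlConnection connection, string customerName)
    {
        var command = new SqlCommand(
            "SELECT Id, Name FROM Customers WHERE Name = @name", connection);
        command.Parameters.AddWithValue("@name", customerName);
        return command;
    }
}
```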
Supporting Database Development
Software development is more than just writing application code. Working with a database is also an important part of most projects. As always, tools can make this work easier.
The Visual Studio IDE allows creating database projects for working with SQL Server, much like the projects used for Windows applications. The tool also includes project types for creating applications that use the SQL CLR. And to help developers create databases and the code that uses them, the Visual Studio IDE includes visual designers for tables, queries, views, and other aspects of database development.
Like any other projects, these database projects can use the source code control and build management functions provided by TFS. For instance, a database project can help manage changes to a database’s schema by putting it under version control. Visual Studio also provides support for tracking dependencies between database objects, refactoring database objects, database deployment, and other aspects of database change management.
Supporting Developer Testing & Debugging
Along with writing code, every developer does testing and debugging. It’s common today, for example, for a developer to create unit tests for the code she writes. Each unit test runs against a specific component, such as a method, verifying one or more assumptions about the behavior of that component. Unit tests are commonly automated, which lets a group of unit tests be run easily whenever changes are made. (In fact, the build verification tests run by Team Foundation Build are most often unit tests.) To support this, Visual Studio provides a unit testing framework. It’s also possible to use other unit testing frameworks with Visual Studio, such as NUnit, although much of the integration with the rest of this product family is lost.
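As a minimal sketch of what this looks like with Visual Studio’s built-in framework, a unit test is just a class and methods marked with attributes, with Assert calls verifying the expected behavior. The OrderCalculator class under test here is invented for illustration.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// A simple component to test; this class is hypothetical.
public class OrderCalculator
{
    public decimal ApplyDiscount(decimal total, decimal discountPercent)
    {
        if (discountPercent < 0 || discountPercent > 100)
            throw new System.ArgumentOutOfRangeException("discountPercent");
        return total - (total * discountPercent / 100m);
    }
}

[TestClass]
public class OrderCalculatorTests
{
    [TestMethod]
    public void ApplyDiscount_TenPercent_ReducesTotal()
    {
        var calculator = new OrderCalculator();

        decimal result = calculator.ApplyDiscount(200m, 10m);

        // Verify one specific assumption about the component's behavior.
        Assert.AreEqual(180m, result);
    }

    [TestMethod]
    [ExpectedException(typeof(System.ArgumentOutOfRangeException))]
    public void ApplyDiscount_NegativeDiscount_Throws()
    {
        new OrderCalculator().ApplyDiscount(200m, -5m);
    }
}
```

Because tests like these are ordinary code, Team Foundation Build can run them automatically as build verification tests whenever new code is checked in.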
Working with Project Information
The information TFS holds about a project—requirements, tasks, bugs, and everything else—needs to be accessible by every member of the development team, regardless of the tools they use. To let developers work with this information in a natural way, Team Explorer can run inside the Visual Studio IDE. Figure 4 shows an example of how this looks.
Figure 4: Using Team Explorer from inside the Visual Studio IDE lets developers work with requirements, tasks, bugs, and other TFS information. |
In this example, current requirements (user stories) are shown at the top, along with the tasks associated with each one. User story 7 has been selected, and so its details are shown below: who it’s assigned to, the state it’s in, and other information. The pane in the upper right shows other TFS data that the developer can access under headings such as “My Bugs”, “My Tasks”, and “My Test Cases”. The goal is to provide a useful window into the diverse information available in TFS about this development project.
Testing Code
Testing is an essential part of the development process. There’s lots to do and plenty of information to keep track of. Some tests, such as unit tests, are typically created by developers. Yet much of testing is done by dedicated testers. Testing is a discipline in its own right, and testers play a critical role in most development teams.
Providing solid support for both automated and manual testing is a primary goal of Visual Studio. In both cases, effective test tools can make the job easier and more efficient. In Visual Studio, these tools include the following:
- Ways to gather the results of running tests, including diagnostic data to help determine the cause of failures.
- A tool for creating and managing test plans.
- Software for creating, configuring, and deploying virtual machines for use in a test environment.
- Support for manual testing, including things such as the ability to record and automatically play back a manual test.
- Support for automated testing, including load tests.
Gathering Test Results & Diagnostic Data
The most fundamental aspect of testing is running a test, then getting the results. Determining when a test has failed might be straightforward, but what information should the tester provide to the developer to help fix the bug this test has discovered? Just indicating that the test failed isn’t enough—the developer might not even believe the tester, especially if the developer can’t reproduce the bug on his own machine. What’s needed is a way for the tester to supply enough information for the developer to understand the bug, then figure out and fix the underlying problem.
To do this, the test environment needs to provide mechanisms for gathering a range of information about the application under test. Visual Studio does this with diagnostic data adapters (DDAs). Figure 5 shows how they fit into the test environment.
Figure 5: When running a test from the Visual Studio IDE or Microsoft Test Manager, a tester can rely on one or more diagnostic data adapters to collect data about that test. |
As the figure shows, a test might be run either from the Visual Studio IDE or from Microsoft Test Manager (step 1). Here, the test is sent to a test controller, which then sends it to a test agent running on the same machine as the application under test. As Figure 5 shows, this application is monitored by one or more DDAs while it’s being tested. The diagnostic data those DDAs produce and the test results are returned to the tester (step 2).
Each DDA collects a specific kind of information, and the tester can select which DDAs are used for a particular test. Some examples of DDAs provided with Visual Studio are the following:
- Action Recording: Produces a recording containing all of the steps executed in a manual test. This log can be played back to re-run the test or used in other ways.
- ASP.NET Profiler: Provides profiling data for ASP.NET applications, such as a count of how much time was spent in each method.
- Event Log: Collects information written to event logs during the test. It can be configured to collect only specific event logs and event types.
- IntelliTrace: Creates a detailed trace of the application’s execution. As described earlier, a developer can use this trace to replay the execution of a test in the Visual Studio IDE’s debugger, providing a window into exactly what was happening when a bug appeared.
- Test Impact: Keeps track of which methods in the application were invoked by a test case. This DDA provides the raw material for the test impact analysis described earlier.
- Code Coverage: Provides statistics about what percentage of the code was covered by one or more tests. As its name suggests, it provides the base information used by the code coverage option described earlier.
- System Information: Provides a description of the machine on which the test is run, including its CPU, memory, installed patches, and more.
- Video Recorder: Records a video of the desktop of the computer on which a test is being run.
A Tool for Testers: Microsoft Test Manager
Running tests from the Visual Studio IDE is fine for development-oriented testers. For many testers, however, a developer tool isn’t the best option. To support these people, Visual Studio includes Microsoft Test Manager (MTM). This tool is designed explicitly for testers, especially those who don’t need to edit code. One obvious indication of this is the MTM user interface: It’s not based on the Visual Studio IDE. Instead, the tool provides its own interface focused explicitly on the tasks it supports. Figure 6 shows an example.
Figure 6: Microsoft Test Manager is designed expressly for testing—it's not a development tool. |
MTM supports two distinct activities. One is acting as the client for Visual Studio Lab Management, described in the next section. The other, unsurprisingly, is managing and running tests. Toward this end, MTM lets a tester define and work with test plans. A test plan consists of one or more test suites, each of which contains some number of automated and/or manual test cases. A test plan can also specify the exact configuration that should be used for the tests it contains, such as specific versions of Windows and SQL Server. All of this information is stored in TFS using the Test Case Management functions mentioned earlier.
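For teams that want to script this, the TFS client object model also exposes test plans programmatically. The sketch below assumes the Test Management client assemblies are referenced; the collection URL, project name, and titles are placeholders.

```csharp
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.TestManagement.Client;

class CreateTestPlan
{
    static void Main()
    {
        // Placeholder collection URL and team project name.
        var collection = new TfsTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        ITestManagementService service = collection.GetService<ITestManagementService>();
        ITestManagementTeamProject project = service.GetTeamProject("MyProject");

        // Create a test plan and save it to TFS Test Case Management.
        ITestPlan plan = project.TestPlans.Create();
        plan.Name = "Iteration 3 - regression";
        plan.Save();

        // Add a suite to the plan to hold related test cases.
        IStaticTestSuite suite = project.TestSuites.CreateStatic();
        suite.Title = "Checkout scenarios";
        plan.RootSuite.Entries.Add(suite);
        plan.Save();

        // Create one test case and place it in the suite.
        ITestCase testCase = project.TestCases.Create();
        testCase.Title = "Order total updates when an item is removed";
        testCase.Save();
        suite.Entries.Add(testCase);
        plan.Save();
    }
}
```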
Managing Test Lab VMs: Visual Studio Lab Management
Testing software requires machines on which to run that software. Those machines must replicate as closely as possible the eventual production environment. Traditionally, testers have done their best to replicate this environment with the physical machines on hand. While this approach can work, it’s often simpler and cheaper to use virtual machines instead. VMs are easier to create and to configure into exactly what’s required. Think about testing a multi-tier application, for example, where the user interface, business logic, and database all run on separate machines. Realistic tests for this application require three machines, something that’s usually easier to do with VMs.
Figure 7: Visual Studio Lab Management allows creating and managing a VM-based test and development lab. |
Putting the Pieces Together: A Testing Scenario
To see how all of these parts can be used together, it’s useful to walk through a scenario. Figure 8 shows the first steps.
Figure 8: A tester can use Lab Management to create VMs for testing an application, then define a test plan for that application with Test Case Management. |
This example begins with a tester using the Lab Manager activity in MTM to request new VMs from Visual Studio Lab Management (step 1). The templates used to create those VMs are pre-configured to contain a test agent, which means they’re ready to be used for testing. The software being tested is a Web application, so three separate VMs are created, one for each tier of the application. Next, the tester uses MTM to create a test plan, storing it in TFS Test Case Management (step 2). Once the test plan is ready, testing can begin.
Figure 9: Once a new build is deployed, a tester can run test cases from MTM, then access the results and DDA-generated diagnostic data produced by those tests. |
Since the goal is to test a specific build of the application, that build must first be created and deployed (step 1). As the figure suggests, Team Foundation Build can deploy a new build to the VM-based test environment as part of an automated build process. The tester can then use MTM to run test cases from the test plan (step 2) against this build. Each test is sent to a test controller, which distributes it to the test agents. These test agents run each test, using appropriate DDAs to gather diagnostic data (step 3). As described earlier, different DDAs are probably used in each of the three VMs, since each is testing a different part of the application. For example, the Video Recording DDA might be used in the VM running the application’s user interface; the IntelliTrace, Code Coverage, and Test Impact DDAs in the VM running the middle-tier business logic; and just the Event Log DDA in the VM running the database. When the tests are completed, the test results and diagnostic data are sent back to TFS via the test controller (step 4). The tester can now use MTM to access and examine this information (step 5).
When a bug is found, the tester can submit a report directly from MTM. A description of the issue, together with whatever diagnostic data the tester chooses, is stored in TFS for a developer to use. When Lab Management is used, as in this example, it’s even possible for a tester to submit a bug report that links to a snapshot of the VM in which the bug was detected. Because the bug report contains so much supporting information, the developer responsible for the code being tested is significantly more likely to believe that the bug is real. Just as important, that developer will have what she needs to find and fix the underlying problem.
Supporting Manual Testing
Manual testing, where someone sits at a screen exercising an application, is an inescapable part of software quality assurance. This kind of testing might be done in an exploratory fashion, or it might require the tester to follow a rigidly specified set of test scripts. In either case, a large part of an application’s tests are often manual.
A primary purpose of Microsoft Test Manager is to support manual testing. Along with the user interface shown earlier for test management, MTM also provides an interface for running manual tests. Sometimes referred to as Test Runner, an example of this interface is shown in Figure 10.
Figure 10: Microsoft Test Manager, shown docked to the left of the application under test, provides a user interface for running manual tests. |
The UI of the application under test is shown on the right, allowing the tester to interact with it. The pane on the left lists the steps in the manual test currently being performed. The green circles in this pane mean that the tester has marked those steps as successful. A red circle appears at step 6, however, indicating that the tester believes this step has failed. The tester can now file a bug report directly from Test Runner. And because manual tests use the Visual Studio testing infrastructure described earlier, any of the diagnostic data provided by the DDAs, including IntelliTrace, can be included with this bug report.
Some DDAs are especially useful with manual tests. The Action Recording DDA, for instance, creates a log of the actions performed in a test. This action log contains everything the tester actually did, including details such as mouse hovers. This fine-grained information lets a developer see exactly what steps the tester went through and so can be quite useful in replicating the bug. The action log can be included as part of a bug report, perhaps accompanied by a video of the tester’s screen.
An action log can also be used sometime later by the tester to automatically step through just part of the test. This option, referred to as fast forward for manual testing, lets a tester (or a developer) move quickly to a later part of the test rather than laboriously working through every step to get where she needs to be. For example, a tester who wishes to test an application’s behavior deep into a series of screens can rely on this facility to zip quickly through the navigation required to get there.
Supporting Automated Testing
While manual tests are important, automated testing—software that tests other software—is also useful. Automated tests can only be created with the Visual Studio IDE—they can’t be created using MTM. They can be run in various ways, however: from the IDE, from MTM, or by Team Foundation Build as part of the build process.
Because there are a variety of things to be tested, Visual Studio has built-in support for several types of automated tests. Figure 11 illustrates some of the most important options.
Figure 11: Visual Studio supports several kinds of automated tests. |
The kinds of automated tests supported by Visual Studio include the following:
- Unit tests: As described earlier, a unit test verifies specific behavior in a particular component of an application. Unit tests are typically created by the developer who writes the code being tested.
- Database unit tests: These are unit tests aimed specifically at aspects of an application’s database, such as a stored procedure. They’re also typically created by developers.
- Web performance tests: An effective way to test the functionality of a Web application is to send HTTP requests directly to the application’s business logic. This is exactly what Web performance tests do. Typically built by testers, these tests can be defined manually using Visual Studio’s Web Test Editor, or (more likely) created automatically by using the product’s Web Test Recorder to record HTTP requests made from a browser.
- Coded user interface (UI) tests: Sometimes referred to as UI test automation, a coded UI test is software that executes actions directly against an application’s user interface, then verifies that the correct behavior occurs. Coded UI tests are typically created by a tester recording the manual actions she takes against an application’s UI, then letting Visual Studio generate code to replicate those actions. The product also provides support for validating a test’s result, such as checking that a particular text box in the UI contains a specific value.
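To give a feel for what UI test automation looks like in code, here’s a hand-written sketch in the coded UI style; the application path, control names, and expected text are invented, and a recorded test would normally place the control-finding logic in a generated UIMap class rather than inline.

```csharp
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UITesting.WinControls;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class CheckoutUITests
{
    [TestMethod]
    public void SubmitOrder_ShowsConfirmation()
    {
        // Launch the application under test (the path is a placeholder).
        ApplicationUnderTest app = ApplicationUnderTest.Launch(@"C:\Demo\OrderClient.exe");

        // Type a quantity into the "Quantity" text box.
        var quantity = new WinEdit(app);
        quantity.SearchProperties[WinEdit.PropertyNames.Name] = "Quantity";
        Keyboard.SendKeys(quantity, "3");

        // Click the "Submit" button.
        var submit = new WinButton(app);
        submit.SearchProperties[WinButton.PropertyNames.Name] = "Submit";
        Mouse.Click(submit);

        // Validate the result: a confirmation label should now be present.
        var confirmation = new WinText(app);
        confirmation.SearchProperties[WinText.PropertyNames.Name] = "Order submitted";
        Assert.IsTrue(confirmation.Exists, "The confirmation message was not displayed.");
    }
}
```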
There’s one more important category of automated testing to describe: load tests. A load test is intended to simulate how an application performs when it has many users, and it’s typically composed of a group of Web performance tests. Figure 12 illustrates how Visual Studio provides load testing.
Figure 12: In load testing, multiple test agents submit Web performance tests against an application's business logic, simulating the behavior of many simultaneous users. |
As the figure shows, load testing relies on a test controller and one or more test agents, each of which runs on its own physical machine. The controller parcels out Web performance tests among the test agents. Every test agent then submits its tests to the application. The result helps the tester understand how this application will behave under the load created by many users.
To simulate large numbers of users, an organization can buy one or more copies of the Visual Studio Load Test Virtual User Pack. Installed on the test controller, as Figure 12 shows, each pack represents 1,000 virtual users. If a load test needs to see how an application behaves with 3,000 users, for example, the tester needs to install three Load Test Virtual User Packs.
Managing & Tracking A Project
Whatever stage a project is in, the people involved need to know what’s going on. Perhaps a project manager is concerned with keeping the work on schedule, or maybe the project is using Scrum, with the ScrumMaster playing a coordinating role on a self-organizing team. Whatever the situation, the people involved must manage and track the work. As already described, Visual Studio provides reports, dashboards, and more aimed at doing this. It’s worth taking a closer look at two specific areas, however: the beginning of a project and its end.
Pretty much every project (and every iteration) starts with planning. Plans are typically based on requirements, which are stored in TFS. A project manager might use Microsoft Project to read those requirements directly from work items. She can then use this tool to plan a schedule for the effort, as Figure 13 shows.
Figure 13: Microsoft Project can read work items such as requirements and tasks directly from TFS, then let a project manager construct a schedule. |
The project manager can use familiar techniques, such as the Gantt chart shown here, to plan the project’s schedule, with the requirements and tasks drawn directly from TFS. For project managers who prefer Excel, Visual Studio provides other options. For example, the Agile process template includes an Excel workbook designed expressly for planning an agile project and the iterations (sprints) within that project. As always, the information this workbook uses is synchronized with TFS.
Projects start, and projects eventually end. Assuming a project ends successfully, the software must be handed over to the customer. But when is it ready to ship? How can a team know when it’s time to release their work? One obvious metric is fulfilling enough of the project’s requirements, but another is quality, something that’s harder to judge accurately. To help with this, both the Agile and CMMI process templates provide a Quality dashboard that presents relevant information from TFS. Figure 14 shows an example.
Figure 14: The Quality dashboard gives a view into several quality-related metrics for a project. |
This dashboard provides a window into important metrics for project quality over a specific period, such as the previous month. Those metrics include the following:
- Test Plan Progress: Shows the team’s progress in running test cases defined in test plans created with Microsoft Test Manager.
- Build Status: Shows the number of builds that succeeded or failed.
- Bug Progress: Shows the number of bugs, broken down by status: Active, Resolved, or Closed. Shippable code quality needn’t mean zero bugs, but a continually rising count of active bugs doesn’t bode well for quality.
- Bug Reactivations: Shows how many bugs that were previously marked as Resolved or Closed have been reactivated. If this number is rising, the quality of the team’s bug fixes probably isn’t very high.
- Code Coverage: Shows the percentage of code tested by build verification tests and others.
- Code Churn: Illustrates how many lines of code were added, deleted, and changed in check-ins to TFS version control. This is another useful quality measure, since the total should be heading down as a project nears its end.
Using this dashboard, project managers, ScrumMasters, and others—even the customer—can better understand the team’s progress. Rather than seeing individual bits of data in isolation, they can see the totality of a project’s state and thus make better decisions. This information can also help in determining when the product they’re building is ready to ship. Different projects have different standards: A team building a community Web portal probably has a lower quality bar than, say, one building an online ordering system for a major retailer. Yet whatever the group’s risk tolerance, they need data to make a good decision. The Quality dashboard is one example of how Visual Studio provides this information.
Maintaining Code
Releasing the results of a development project doesn’t mean that there’s no work left to be done. If the software that’s been created has any value, it will stick around for a while, and it’s bound to require changes. Maybe the people who use it have ideas for improvement, or perhaps the business process it’s part of needs to change in some way. Whatever the reason, applications require maintenance.
To a great degree, the tools used for development are also appropriate for maintenance. Still, there are unique challenges that show up in this part of an application’s lifecycle. For example, the people who do maintenance work often aren’t the original developers. A fundamental challenge for these people is to understand the application’s code base. The conventional way to do this is by brute force: just sit down and read the code. Yet understanding the complicated interactions and dependencies in even a moderately sized code base can be very difficult. Besides, we are visual creatures—why not use a graphical tool to help?
Toward this end, Visual Studio includes the Architecture Explorer. This tool gives its user a window into the structure of existing C# and Visual Basic code. By generating dependency graphs that show how various parts of an application work with each other, the Architecture Explorer can help developers and architects understand what the code is doing. Figure 15 shows an example.
Figure 15: A dependency graph illustrates the relationships among different parts of an application's code. |
This example shows a dependency graph focused on an assembly called PetShopWeb.dll. This assembly contains the namespace PetShop.Web, which itself contains a number of other types. The connections between these types are shown as gray lines, with the thickness of each line representing the number of interactions between each pair. As the dialog box in the upper left shows, the user has searched for methods whose name contains the word “Submit”, and the Architecture Explorer is showing one match: a method in the UserProfile type.
As the diagram suggests, the tool is interactive. A user can zoom in and out of the dependency graph, choosing different granularities. For example, a developer new to this application can get a broad view by starting at the assembly level, then zoom in to examine a particular assembly in detail (which is the situation shown in Figure 15). She might then zoom further in to look at a particular class and the other classes it depends on. Having this understanding of dependencies can help mitigate the risk of making a change to the code, since it’s easier to trace the impact a change might have.
While it’s always possible to acquire the same knowledge by reading the source code, starting with a visual approach is likely to speed up the process. And given the importance—and the cost—of maintaining software, providing tools to help makes sense.