Network testing should be run ad hoc after a configuration change, to validate that the change behaves as intended, and permanently, via active network monitoring, to detect network problems as soon as they happen. In the first case, the goal is to validate your design and implementation assumptions every time the configuration changes.
So how do we get started with network testing? You can telnet or SSH to remote hosts and perform the same tests there that you would run locally. NetBeez is a network monitoring solution that provides these capabilities in a browser-based interface: it can run commands such as ping, traceroute, and iperf either ad hoc or continuously. If you want to check it out, you can request a demo.
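As a vendor-neutral starting point, the following is a minimal Python sketch of an ad-hoc reachability check run from whichever machine originates the test (it could just as well be launched over SSH on a remote host). The target hosts are placeholders, the -c flag assumes a Linux or macOS ping, and the same loop could invoke traceroute or iperf instead.

```python
import subprocess

# Placeholder targets; replace with the hosts you actually need to validate.
TARGETS = ["10.0.0.1", "server1.example.com"]

def is_reachable(host, count=3, per_packet_timeout=2):
    """Return True if the host answers ICMP echo requests."""
    try:
        # "-c" sets the packet count on Linux/macOS ping; Windows uses "-n".
        result = subprocess.run(
            ["ping", "-c", str(count), host],
            capture_output=True,
            text=True,
            timeout=count * per_packet_timeout + 5,
        )
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0

if __name__ == "__main__":
    for host in TARGETS:
        print(host, "reachable" if is_reachable(host) else "UNREACHABLE")
```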
The criteria for declaring that a test passed or failed must be clear and agreed upon by the tester and the customer. This avoids a common problem in which, after the test is completed, the testers and customers agree on the results but disagree on how those results should be interpreted.
The test objective should simply be to measure the outcome, not to prove that the outcome is in your favor, and it should be based as much as possible on industry standards for the relevant technologies and services (for example, VoIP and International Telecommunication Union [ITU] voice acceptability standards). Objectives and acceptance criteria can reference a baseline measurement for the current network. For example, an objective might be to measure the cyclic redundancy check (CRC) error rate on a WAN segment and compare the results to a baseline measurement. The acceptance criterion might then be that there must be 20 percent fewer errors than there are today.
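Expressed in code, such a criterion becomes trivial to evaluate. The following sketch is purely illustrative; the function name and the error figures are placeholders, not values taken from any standard or real baseline.

```python
def meets_crc_criterion(baseline_errors_per_hour: float,
                        measured_errors_per_hour: float,
                        required_reduction: float = 0.20) -> bool:
    """Pass only if the measured CRC error rate is at least
    `required_reduction` (20 percent by default) below the baseline."""
    if baseline_errors_per_hour <= 0:
        # A clean baseline leaves no room for improvement:
        # any measured errors fail the criterion.
        return measured_errors_per_hour == 0
    allowed = baseline_errors_per_hour * (1.0 - required_reduction)
    return measured_errors_per_hour <= allowed

# Illustrative numbers only: with a baseline of 50 errors per hour,
# at most 40 errors per hour is acceptable.
print(meets_crc_criterion(50, 38))   # True  (24 percent reduction)
print(meets_crc_criterion(50, 45))   # False (only 10 percent reduction)
```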
In general, tests should include performance, stress, and failure analyses. Performance analysis should examine the level of service that the system offers in terms of throughput, delay, delay variation, response time, and efficiency.
Stress analysis should examine any degradation of service caused by an increased offered load on the network. Failure analysis should calculate network availability and accuracy, and analyze the causes of any network outages. The specific tests to run against your particular design depend on your test objectives. Typical tests include the following:

- Application response-time tests: These measure how long typical operations take, including starting an application; switching between screens, forms, and fields within the application; and executing file opens, reads, writes, searches, and closes. The tester watches actual users or simulates user behavior with a simulation tool. The test should start with a predetermined number of users and actions, and then gradually increase both to expose any increase in response time caused by the additional network or server load.

- Throughput tests: These measure application throughput and can also measure throughput in terms of packets per second through a switching device (for example, a switch or router). As in response-time testing, throughput testing should begin with a few users and actions that are gradually increased.

- Availability tests: The rate of errors and failures is monitored while traffic runs through the system under test.

- Regression tests: Regression testing does not exercise new features or upgrades; instead, it focuses on verifying that existing applications still work as expected. Regression testing should be comprehensive and is usually automated to make that comprehensiveness practical.

A test plan should include a network topology drawing and a list of required devices. The topology drawing should include major devices, addresses, names, network links, and some indication of link capacities.
The list of devices should include switches, routers, workstations, servers, telephone-equipment simulators, wireless access points, firewalls, cables, and so on. The list should document version numbers for hardware and software, and availability information. Sometimes testing requires new equipment that might not be available yet or equipment that for other reasons has a long lead time for procurement. If this is the case, it should be noted in the test plan. In addition to network equipment, a test plan should include a list of required testing tools.
Typical tools include network management and monitoring tools, traffic-generation tools, modeling and simulation tools, and QoS and service-level-management tools.
The test plan should also list any applications that will increase testing efficiency, such as software distribution applications or remote-control programs that facilitate the configuring and reconfiguring of nodes during a test.
In a Microsoft Windows environment, if your testing requires loading new software applications or versions on multiple systems, Microsoft System Center Configuration Manager software (formerly known as SMS) can be a useful addition to the testing environment. For each test, write a test script that lists the steps to be taken to fulfill the test objective. The script should identify which tool is used for each step, how the tool is used to make relevant measurements, and what information should be logged during testing.
The script should define initial values for parameters and how to vary those parameters during testing. For example, the test script might include an initial traffic-load value and incremental increases for the load. The following is an example of a simple test script for the network test environment shown in the accompanying figure:

Step 1. Start capturing network traffic on the protocol analyzer on Network A.

Step 2. Start capturing network traffic on the protocol analyzer on Network B.
Step 3.

Step 4. Stop capturing network traffic on the protocol analyzers.

Step 5. Verify that the network layer destination address is Server 1 on Network B and that the destination port is the port number for Application ABC.

Step 6.

Step 7. Log the results of the test in the project log file.

Step 8. Save the protocol analyzer trace files to the project trace-file directory.
Step 9. Gradually increase the workload on the firewall by increasing the number of workstations on Network A one at a time, until 50 workstations are running Application ABC and attempting to reach Server 1.
Repeat Steps 1 through 8 after each workstation is added to the test (a sketch for automating this loop follows below). For complex testing projects, the test plan should document the project timeline, including the start and finish date for the project and major milestones. The timeline should list the major tasks and the person assigned to each, including any tasks that should already have been completed by the time the test plan is written.
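Because Steps 1 through 9 are repeated for every added workstation, the loop is a natural candidate for automation. The following is only a minimal sketch: run_test_pass() is a hypothetical placeholder for Steps 1 through 8, and the log file name and columns are likewise assumptions rather than anything prescribed by the test plan.

```python
import csv
from datetime import datetime

MAX_WORKSTATIONS = 50

def run_test_pass(active_workstations: int) -> bool:
    """Hypothetical placeholder for Steps 1 through 8: start the captures,
    run Application ABC from the given number of workstations, verify the
    traffic, and save the trace files.  Wire this to the lab's real tools."""
    raise NotImplementedError

with open("project_log.csv", "a", newline="") as log_file:
    log = csv.writer(log_file)
    log.writerow(["timestamp", "workstations", "result"])

    # Step 9: add one workstation at a time and rerun the whole pass.
    for workstations in range(1, MAX_WORKSTATIONS + 1):
        passed = run_test_pass(active_workstations=workstations)
        log.writerow([datetime.now().isoformat(), workstations,
                      "pass" if passed else "fail"])
```

Writing each pass to a file in this way also produces the kind of running record that the next paragraphs recommend keeping.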
Implementing the test plan is mostly a matter of following your test scripts and documenting your work. Realize, however, that sometimes test scripts cannot be followed precisely, because it is difficult to consider all contingencies and potential problems when writing a script. For this reason, it is important to maintain logs, either electronically or in a paper notebook. An electronic log is preferable because it enables easier searching for information later.
In addition to keeping a log that documents collected test data and test results, it is a good idea to keep a daily activity log that documents progress and any changes made to test scripts or equipment configurations. The daily activity log can also be used to record any problems that were encountered and theories about causes.
These theories might be helpful later when analyzing results and might help with future testing projects.

This section discusses the types of tools that can be used to test a network design. The section also recommends some specific tools. Network management and monitoring tools are usually used in a production environment to alert network managers to problems and significant network events, but they also help when testing a network design.
Network management applications, such as the CiscoWorks offerings from Cisco or the HP Operations Manager software, can alert you to problems on your test network.