This project measures the maximum Session Initiation Protocol (SIP) throughput that a SIP proxy can support, that is, the highest rate of INVITE requests (call attempts) it can process with no errors. We learned to operate the SIPp testing tool, studied the draft RFCs that describe the testing methodology, installed a SIP server and a SIP client in the lab, and measured two key performance metrics, Session Capacity and Session Rate, as defined in the Internet Engineering Task Force (IETF) drafts.
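As a rough illustration only (not the IETF drafts' definitions verbatim), the two metrics can be sketched as simple computations over counters collected during a test run; the function names and inputs below are hypothetical:

```python
# Illustrative sketch: simplified versions of the two benchmark metrics.
# The authoritative definitions are in the IETF SIP benchmarking drafts;
# the function names and inputs here are invented for illustration.

def session_rate(successful_calls: int, duration_s: float) -> float:
    """Sessions successfully established per second over an error-free run."""
    return successful_calls / duration_s

def session_capacity(concurrent_samples: list) -> int:
    """Largest number of simultaneous sessions observed during the run."""
    return max(concurrent_samples)

print(session_rate(3000, 60.0))            # 50.0 calls per second
print(session_capacity([100, 250, 240]))   # 250 concurrent sessions
```

In an actual run these counters would come from SIPp's statistics output rather than hand-entered values.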
This project also re-tests the values of these key performance metrics that were obtained by previous students. The results will be used to write a short paper for submission to an academic journal.
With the exponential growth of the Internet, a new technology called Voice over Internet Protocol (VoIP) appeared. At first, the telephone industry, built on the public switched telephone network (PSTN), ignored this evolution, which would have meant major changes to its infrastructure and business model. Eventually, telephone providers adopted the technology, offering new services and applications to both the public and private sectors. One of the protocols used to create VoIP services is the Session Initiation Protocol (SIP): this application-layer signaling protocol is used to create, modify, and terminate sessions between two or more user agents (UAs).
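To make the signaling concrete: a SIP request is a plain-text message much like an HTTP request. The sketch below assembles a deliberately simplified INVITE between two hypothetical user agents; a real INVITE carries additional mandatory headers (a Via branch parameter, Max-Forwards, an SDP body describing the media, etc.), and all addresses here are made up.

```python
# Simplified SIP INVITE for illustration only. Real requests need more
# mandatory headers and usually an SDP body; all names and addresses
# below are hypothetical.

def build_invite(caller: str, callee: str, call_id: str) -> str:
    lines = [
        f"INVITE sip:{callee} SIP/2.0",
        "Via: SIP/2.0/UDP client.example.com:5060",
        f"From: <sip:{caller}>;tag=1234",
        f"To: <sip:{callee}>",
        f"Call-ID: {call_id}",
        "CSeq: 1 INVITE",
        "Content-Length: 0",
    ]
    # SIP, like HTTP, terminates each line with CRLF and ends the
    # header section with a blank line.
    return "\r\n".join(lines) + "\r\n\r\n"

msg = build_invite("alice@example.com", "bob@example.net", "a84b4c76e66710")
print(msg.splitlines()[0])  # INVITE sip:bob@example.net SIP/2.0
```

A tool such as SIPp generates and sends many such requests per second, which is what makes the throughput measurements in this project possible.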
SIP has been adopted by many sectors of the telecommunications industry: enterprise IP PBXes and SIP session border controllers, the Emergency Services IP network (ESInet) for the delivery of emergency calls, business VoIP phone services, and PC-to-phone services offering cheap international call rates are some important examples. Many commercial systems and solutions based on SIP are available today to meet the industry's requirements. As a result, there is a strong need for a vendor-neutral benchmarking methodology that allows different SIP servers to be meaningfully compared with one another. The goal of this program is to develop such benchmarks and to design systems to collect them. The outcome of the program will include two IETF drafts that describe the benchmarks and the methodology to be used for their collection, as well as a tool that collects the benchmarks in accordance with that methodology. The Methodology and Terminology draft documents are available on the references page of this site.
After a brief explanation of our test-bed configuration, we will explain step by step how our script works and then provide an analysis of each result.