Honorary Chair

Mohammad S. Obaidat
Monmouth Univ., NJ, USA

General Chair

Jose Sevillano
Univ. of Seville, Spain

Program Chairs

Raffaele Bolla
Univ. of Genoa, Italy

Pere Vilà
Univ. of Girona, Spain

Isaac Woungang
Ryerson Univ., Canada

Tutorials and Special Session Chair

Sanjay K. Dhurandher
Univ. of Delhi, India

Publicity Chairs

Essia Hamouda
Univ. of California, Riverside, USA

Lei Shu
Osaka Univ., Japan

Local Arrangement Chair

Olga Jaramillo
Univ. of Genoa, Italy

Publication Chair

Daniel Cascado
Univ. of Seville, Spain

Antonio Bueno
Univ. of Girona, Spain

Sponsor: The Society for Modeling and Simulation International (SCS), www.scs.org

Technical Sponsor: IEEE Communications Society (ComSoc), www.comsoc.org

SPECTS 2012 Program

In addition to this webpage, a PDF version of the conference program (227 KiB) is available.

SPECTS is part of The Summer Simulation Multi-Conference (SummerSim). The full SummerSim'12 program (PDF, 2.2 MiB) is also available. Please note that SPECTS registration gives you access to all SummerSim activities.

Monday, July 9th, 2012

Single Track

Opening Session and Keynote Speech

Room: Benvenuto
Keynote I
Keynote Speaker: Aldo Zini
Virtual Ship: Dream or Reality

Session 1: Ad Hoc and Sensor Networks

Room: 4B
Session Chair: Juan Luis Font, Univ. of Sevilla, Spain
Reliability and Survivability of Vehicular Ad hoc Networks
Dharmaraja Selvamuthu, Resham Vinayak, Xiaomin Ma and Kishor Trivedi
A vehicular ad hoc network (VANET) enables communication among vehicles by creating a mobile Internet. The primary purposes of a VANET are road safety and security, so the reliability and survivability of the network are of prime concern; both depend heavily on hardware and channel availability. In this paper, we analyze the reliability of the vehicles and roadside units (RSUs) using reliability block diagrams. Further, the survivability of the network, in terms of messages lost due to unreliable hardware or channel unavailability, is explored for the two types of communication in a VANET: vehicle-to-vehicle and vehicle-to-roadside. A hierarchical modeling technique is applied throughout.
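The paper's actual block diagrams are not reproduced in this program, but the reliability-block-diagram arithmetic such an analysis rests on is standard and can be sketched as follows; the component reliabilities and the series/parallel layout below are illustrative assumptions, not the authors' figures.

```python
def series(*rels):
    """Reliability of components in series: the block works only if
    every component works, so reliabilities multiply."""
    r = 1.0
    for x in rels:
        r *= x
    return r

def parallel(*rels):
    """Reliability of redundant components in parallel: the block
    fails only if every component fails."""
    f = 1.0
    for x in rels:
        f *= 1.0 - x
    return 1.0 - f

# Hypothetical RSU diagram: a power supply in series with a
# redundant pair of transceivers (figures are illustrative only).
rsu_reliability = series(0.999, parallel(0.98, 0.98))
```

Nesting `series` and `parallel` calls reproduces arbitrary block diagrams, which is the essence of the RBD approach the abstract mentions.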
Evaluation of Routing Schemes in Opportunistic Networks Considering Energy Consumption
Annalisa Socievole and Floriano De Rango
Opportunistic networks are challenged wireless networks in which contacts are mostly intermittent and link performance is highly variable. In this kind of environment, an end-to-end path between source and destination may exist only for a brief period and may change quickly. Long propagation and variable queuing delays may be introduced, and conventional routing protocols for mobile ad hoc networks are inappropriate for delivering messages between nodes. Several routing protocols for opportunistic networks have been proposed in the literature. In most cases, authors evaluate the performance of their protocols using metrics such as delivery ratio and delivery latency, disregarding the energy consumption constraint. In this work we present a performance comparison of the Epidemic, Spray and Wait, PROPHET, MaxProp and Bubble Rap routing protocols with respect to energy consumption. We evaluate how energy consumption impacts routing performance and how the different forwarding algorithms for opportunistic networks affect energy usage on mobile devices.
Social and Dynamic Graph-Based Scalable Routing Protocol in a DTN Network
Floriano De Rango and Filippo Monteverdi
In this paper we consider the DTN routing problem in networks where the links between nodes are not stable but change over time. In such cases the classical algorithms are inadequate, so new protocols inspired by social networks are needed to deliver the highest number of messages while limiting the number of messages transmitted in the network. We propose a new algorithm, S-Grasp, which builds on works already presented in the literature and in some cases improves on their performance. Finally we show, through simulations, that although its success rate in data forwarding is close to the optimal value, the number of retransmissions is reduced by at least 50% compared to the other algorithms analyzed.
Wireless Sensor Network deployment using DEVS formalism and GIS representation
Bastien Poggi and Thierry Antoine-Santoni
Wireless Sensor Network (WSN) deployment is a crucial issue in phenomenon monitoring, and a real strategy for deploying the sensor nodes must be defined. Existing works focus only on the Area of Deployment (AoD) and its representation; however, the proposed methods do not offer a representation accessible to non-specialists. In this paper we propose solutions for a better representation of WSN deployment on a geographic information system (GIS) using the Discrete Event System Specification (DEVS) formalism, and we present the first results of the developed software.

Session 2: Internet Technologies

Room: 4C
Session Chair: K. Karaoglanoglou, Aristotle Univ. of Thessaloniki, Greece
Servload: Generating Representative Workloads for Web Server Benchmarking
Jörg Zinke, Jan Habenschuß and Bettina Schnor
Web server benchmarking has two main purposes: comparing different hardware platforms or server configurations, and testing the scalability of a given system. The first purpose requires the capability to replay given workload traces; the second also requires the capability to generate increased, yet representative, workloads. Scalability tests are of special interest for ISPs to estimate performance numbers for SLAs contracted with customers. Depending on the setup and the implemented web application, the required scalability may vary from a few requests per second up to a vast number of requests. This paper presents a web server benchmark called servload which generates modified workloads from given traces while keeping similar characteristics. Servload is generic in the sense that it is not dedicated to a special application field: it can process traces from different application fields, for example banking, e-commerce or collaborative wikis. The different workload generation methods of servload are discussed and evaluated. In the presented experiments servload shows good replay capabilities.
An Error Concealment Technique and Learning Web Design
Yuan-Chen Liu and Ming-kuan Lee
This paper proposes a new error concealment technique for motion vectors lost to error propagation. Experimental results demonstrate that the proposed method performs better than Decoder Motion-Vector Estimation, and much better than the conventional temporal concealment algorithm.
Exposing Resources as Web services: a Performance Oriented Approach
Ravishankar Kanagasundaram, Shikharesh Majumdar, Marzia Zaman, Pradeep Srivastava and Nishith Goel
This paper focuses on exposing resources, including computing and database resources, as Web services to provide interoperability among clients and servers that use diverse technologies. A systematic performance analysis of the two technologies used for exposing resources as Web services, RESTful and SOAP-based Web services, is reported. A novel Hybrid Web service that combines the advantages of both RESTful and SOAP-based Web services is proposed and analyzed.
Directing Requests and Acquiring Knowledge in a Large-Scale Grid System
Konstantinos Karaoglanoglou and Helen Karatza
This paper proposes two resource discovery mechanisms (Upgrading and Re-routing) in a large-scale Grid system in which the categorization of resources plays a crucial role. Through a simple matchmaking framework, each resource is identified as part of a certain category based on its technical specifications (e.g., disk and memory). Each Virtual Organization (VO) acquires only partial knowledge of the availability of resources controlled by other VOs in the system. The goal is to discover a suitable resource for a request and then effectively direct this request to the specific VO that locally controls the appropriate resource. While satisfying requests, the VOs are supported by the proposed resource discovery mechanisms in order to gain better knowledge of Grid resource availability. As the creation and satisfaction of requests progresses, the VOs acquire adequate knowledge of the Grid resources, enhancing the functioning of the overall system. Finally, this paper presents a performance evaluation of both proposed resource discovery mechanisms through a number of simulation tests.

Keynote Session II

Room: Benvenuto
Keynote II
Keynote Speaker: Mario Marchese
The Evolution of Internet Technology, From Pervasive to Cloud Computing

Session 3: Work in Progress

Room: 4B
Session Chair: Floriano De Rango, Univ. of Calabria, Italy
Empirically Characterizing the Buffer Behaviour of Real Devices
Luis Sequeira, Julián Fernández-Navajas, Jose Saldana, Luis Casadesus and José Ruiz-Mas
All routers include a buffer in order to enqueue packets waiting to be transmitted. The behaviour of a router's buffer is of primary importance when studying network traffic, since it may modify traffic characteristics such as delay or jitter, and may also drop packets. Consequently, characterizing this buffer is interesting, especially when real-time flows are being transmitted: if the buffer characteristics are known, different techniques can be used to adapt the traffic (multiplexing a number of small packets into a big one, fragmentation, etc.). This work presents a preliminary study of how to determine the technical and functional characteristics of the buffer of a certain device, even a remote Internet node. Two different methodologies are considered and tested on two real scenarios which have been implemented; real measurements permit the estimation of the buffer size and the input and output rates, given either physical or remote access to the system under test. With physical access, the maximum number of packets in the queue can be determined by counting; if the node is remote, its buffer size has to be estimated. We have obtained accurate results in wired and wireless networks.
Impact of intermittent connectivity for telemetry server applications of vehicular embedded systems in agribusiness: study and modeling of hi-performance architecture
Daniel Mezzalira and Luis Carlos Trevelin
The management of multiple systems such as machine tools, vehicles and aircraft results in a very intense flow of data between the server and the embedded systems, using wired and/or radio-frequency links, and demands performance and real-time operation. The objective of this study is to propose a low-cost, scalable architecture for embedded applications that uses pools of personal computers for high-performance storage, retrieval and processing of information, based on the study of traces and of real solutions from companies operating in this niche market, and to demonstrate the impact of intermittent connectivity on agribusiness applications.
Modelling Packet Capturing in a Traffic Monitoring System based on Linux
Luis Zabala, Armando Ferro and Alberto Pineda
The need to monitor and analyse network traffic grows with the deployment of new multimedia services over high-speed networks. Predicting the overall capturing performance is crucial to know whether a traffic monitoring system will be able to cope with all the traffic packets or whether it needs more processing power. In this paper, we present an analytical model based on a Markov chain to study the efficiency of the Linux network subsystem. Improving the capturing stage of Linux has been an extensively covered research topic in past years; although the majority of the proposals have been backed by experimental evaluations, there are few analytical models. We identify the softIRQ process as the main element in the Linux capturing stage and build a model that represents the different steps in the softIRQ and the computational cost of each one. The goodness of the model is checked by comparing analytical results with practical ones obtained from a real traffic monitoring system. Prior to obtaining the theoretical performance results, some input parameters must be introduced into the model; these initial values are also extracted from experimental measurements using an appropriate methodology. The results of this process indicate that system performance depends on the network traffic rate, and this has become our work in progress.
An experimental evaluation of server performance in Networked Virtual Environments
Juan Luis Font, José Luis Sevillano and Daniel Cascado
Several works in the literature have recently addressed the study of different Networked Virtual Environments (NVEs) due to their increasing popularity and widespread use in fields ranging from entertainment to e-Health. Open Wonderland is one such NVE, and it has been the subject of several studies focused mainly on the client side. This paper aims to cover server-side performance issues in order to provide complementary results useful for properly sizing Open Wonderland systems according to the number of expected users. An experimental testbed provides real data showing that CPU and outgoing bandwidth are the most critical parameters when the number of clients increases.
PHISON: Playground for High-level Simulations On Networks
Marc Manzano, Juan Segovia, Eusebi Calle and Jose L. Marzo
Network simulation has become crucial in the study of telecommunication networks. In this paper we present PHISON (Playground for High-level Simulations On Networks), an easy-to-use discrete-event simulator whose features facilitate the study of diverse phenomena on path-oriented telecommunication networks. A key differentiating feature of PHISON is that it considers the dynamic aspects of such networks at a higher level than previous proposals; hence, it considers neither protocol data units nor user data packets. Its design considerations and implementation details are presented and, finally, two examples illustrate some of the functionalities of the simulator.

Tuesday, July 10th, 2012

Keynote Session III

Room: Benvenuto
Keynote III
Keynote Speaker: Wolfgang Borutzky
Bond Graph Modeling for the Analysis of Fault Scenarios in Hybrid Systems

Session 4: Networking Techniques

Room: 4B
Session Chair: Jose Saldana, Univ. of Zaragoza, Spain
Smart Proxying for Reducing Network Energy Consumption
Rafiullah Khan, Raffaele Bolla, Matteo Repetto, Roberto Bruschi and Maurizio Giribaldi
Many network-based applications require full-time connectivity of the hosts for fate sharing and for responding to routine application/protocol heart-beat messages. Past studies have revealed that network hosts are unused or idle most of the time but are still kept powered on just to maintain network connectivity. Thus, significant energy savings are possible if network hosts can sleep when idle. Unfortunately, present low-power sleep modes cannot maintain network connectivity and result in application state loss, preventing users from enabling power management features. A proxy-based solution was recently proposed in the literature that allows network hosts to sleep while maintaining their network presence over the Internet, requiring only minor changes to applications and protocols. The Network Connectivity Proxy (NCP) can handle basic network presence and management protocols such as ICMP, DHCP and ARP on behalf of sleeping hosts and wake them up only when truly necessary. This paper addresses the generic NCP architecture and its requirements, the main NCP responsibilities, the different NCP types and their characteristics, and some possible solutions for preserving open TCP connections for sleeping hosts. It also describes key challenges in the design and implementation of the NCP and proposes possible solutions. The NCP can yield significant network energy savings, up to 70% depending on the hosts' usage model.
A Fast-Converging TCP-Equivalent Window-Averaging Rate Control Scheme
Shih-Chiang Tsao
Smooth rate control schemes are necessary for Internet streaming flows to use the available bandwidth. To share Internet bandwidth equally with existing Transmission Control Protocol (TCP) flows, these new schemes should meet the TCP-equivalent criterion, i.e., achieve the same transmission rates as TCP under the same network conditions. However, when the available bandwidth oscillates, many of these schemes fail to meet this criterion due to their slow rate increase. This study proposes a window-averaging rate control (WARC) scheme that sends packets at the same average rate as a TCP flow over a fixed time interval. Considering the TCP rate only over a fixed interval allows WARC to forget the historical loss condition more rapidly than other schemes, thus increasing its rate faster when additional bandwidth becomes available. When the available bandwidth drops dramatically, WARC uses a history-reset procedure to converge to the new steady rate immediately after a specified number of losses. The analysis and simulations in this study show that WARC not only achieves the same bandwidth as TCP but also exhibits faster convergence and a smoother rate than existing schemes.
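The TCP-equivalent criterion is usually stated against a reference TCP rate. As background only, the simplified Mathis throughput model gives such a reference; this is the textbook formula, not the WARC scheme itself, and the parameter names are illustrative.

```python
import math

def tcp_friendly_rate(mss_bytes, rtt_s, loss_prob):
    """Simplified Mathis TCP throughput model, in bytes per second:
    rate = (MSS / RTT) * sqrt(3/2) / sqrt(p).
    A rate control scheme is 'TCP-equivalent' roughly when its average
    rate matches this reference under the same RTT and loss rate."""
    return (mss_bytes / rtt_s) * math.sqrt(1.5 / loss_prob)
```

Note how the rate falls only with the square root of the loss probability, which is why schemes that average losses over a long history react slowly when bandwidth frees up, the problem WARC's fixed-interval averaging targets.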
MAP/SM/1/b Model of a Store and Forward Router Interface
Krzysztof Rusek, Zdzislaw Papir and Lucjan Janowski
The model of a router interface with a buffer limited to a fixed number of packets (regardless of their lengths) is discussed. The interface is described as a finite FIFO queuing system with a Markovian Arrival Process (MAP) and semi-Markov (SM) service times. New analytical results for the loss ratio, the local loss intensity and the total number of losses are developed. All results are suitable for either transient or stationary analysis, and it is possible to extend them beyond the loss process.
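As a baseline for such finite-buffer interface models, the classical M/M/1/b special case admits a closed-form loss ratio; the paper's MAP arrival and semi-Markov service model is far more general and is not reproduced here.

```python
def mm1b_loss(lam, mu, b):
    """Loss ratio (blocking probability) of the textbook M/M/1/b queue
    with room for b packets in total: with rho = lam/mu, the stationary
    probability of state n is (1-rho) * rho**n / (1 - rho**(b+1)), and
    arrivals are lost exactly when the buffer is full (state b)."""
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (b + 1)  # uniform stationary distribution at rho = 1
    return (1.0 - rho) * rho**b / (1.0 - rho**(b + 1))
```

Such a closed form is a convenient sanity check for the transient and stationary loss quantities the abstract lists.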
Traffic Optimization for TCP-based Massive Multiplayer Online Games
Jose Saldana, Luis Sequeira, Julián Fernández-Navajas and José Ruiz-Mas
This paper uses a traffic optimization technique named TCM (Tunneling, Compressing and Multiplexing) to compress the traffic of MMORPGs (Massively Multiplayer Online Role-Playing Games), which use TCP to provide a real-time service. In order to optimize the traffic and improve bandwidth efficiency, TCM can be applied when the packets of a number of players share the same link, which occurs in some scenarios, e.g. the traffic between proxies and servers of game-supporting infrastructures. First, TCP/IP headers are compressed by avoiding repeated fields; next, a number of packets are aggregated into a bigger one; finally, the result is sent through a tunnel. The expected compressed header size has been obtained using traffic traces of a real game. Next, simulations using a traffic model have been performed to estimate the expected bandwidth savings and the reduction in packets per second. The added delays are shown to be small enough not to impair players' experienced quality.
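The bandwidth arithmetic behind header compression plus multiplexing can be illustrated with a back-of-the-envelope sketch; every header and overhead size below is an illustrative assumption, not one of the paper's measured figures.

```python
def tcm_bytes(n_packets, payload, native_hdr=40, comp_hdr=4, tunnel_hdr=25):
    """Bytes on the wire for n small game packets, sent natively versus
    multiplexed TCM-style: natively each packet pays the full TCP/IP
    header, while multiplexed packets pay a small compressed header
    each plus one shared tunnel/outer header per bundle."""
    native = n_packets * (native_hdr + payload)
    multiplexed = tunnel_hdr + n_packets * (comp_hdr + payload)
    return native, multiplexed

# Ten 20-byte payloads: 600 bytes natively vs 265 bytes multiplexed
# under these assumed sizes.
```

The sketch also shows why the packets-per-second reduction matters independently of byte savings: n native packets collapse into a single multiplexed one.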

Session 5: Scheduling Algorithms and Learning Systems

Room: 4B
Session Chair: Essia Hamouda, Univ. of California, Riverside, USA
Two-Level Scheduling Algorithm for Different Classes of Traffic in WiMAX Networks
Zeeshan Ahmed and Salima Hamma
The IEEE 802.16 standard is one of the most promising broadband wireless access (BWA) systems. The standard incorporates a QoS architecture that supports both real-time and non-real-time applications. To provide QoS, the architecture furnishes three data schedulers; however, the working of these schedulers is not defined by the standard. Some researchers have attempted to fill this gap with different scheduling schemes, but no scheme has yet been adopted by the standard and the area is still open for new research. In this article we propose a Two-Level Scheduling Algorithm (TLSA) that ensures QoS for all service classes while avoiding starvation of lower-priority classes. Furthermore, it ensures fair resource allocation among flows of the same class. The simulation results prove that the algorithm is effective and efficient.
An Optimal Scheduling Policy for a Multi-flow Priority Queue with Multiple Paths
Essia Hamouda, Nathalie Mitton and David Simplot-Ryl
We consider a path-scheduling problem at a resource-constrained node $A$ that transmits two types of flows to a given destination through alternate paths. Type-1 flow is assumed to have a higher priority than type-2 flow and thus is never rejected upon arrival; type-2 flow, on the other hand, may be denied admission to the queue. Once accepted into the system, a packet joins queue 1 and is guaranteed service regardless of its type. Instead of being rejected from service, packets have the option of being served by a slower server behind a second queue (queue 2) at node $A$. The slow server is intended mostly for low-priority packets; therefore, type-1 packets are charged a switching cost in the event they are sent to queue 2. Transmitted packets receive a reward depending on the queue at which they were served; the reward represents the resources saved by that decision. A good path-scheduling policy at node $A$ can reduce resource consumption at node $A$, extend the life of the efficient path, maximize the service of both flows and guarantee the service of at least the high-priority flow to the full extent. We propose and solve the path-scheduling problem for node $A$, which maximizes the average reward of successfully transmitting flows to a given sink, by dynamically assigning packets to one of the queues based on the packet type, the instantaneous queue lengths and the average reward of the associated path. We formulate the path-scheduling problem as a Markov decision process and show that the optimal policy is of threshold type.
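A threshold-type policy of the kind the abstract proves optimal can be sketched generically as follows; the per-type thresholds here are hypothetical placeholders, since in the paper they come out of solving the Markov decision process.

```python
def route_packet(pkt_type, q1_len, t1, t2):
    """Threshold-type policy sketch: keep a packet on the fast queue
    (queue 1) while it is short, otherwise divert to the slow queue
    (queue 2). With t1 >= t2, high-priority (type-1) packets keep
    using queue 1 at lengths where type-2 packets are already
    diverted, reflecting the switching cost charged to type-1."""
    limit = t1 if pkt_type == 1 else t2
    return 1 if q1_len < limit else 2
```

For example, with thresholds (5, 2) and three packets already in queue 1, a type-1 arrival still joins queue 1 while a type-2 arrival is sent to queue 2.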
ATRC: A Swarm-based Robot Team Coordination Protocol for Mine Detection and Unknown Space Discovery
Floriano De Rango and Nunzia Palmieri
In this paper, we consider the problem of exploring an unknown environment with a team of robots to detect and disarm mines. The goal is to minimize the overall exploration time and to disarm all mines in a landscape. We present two approaches for the coordination of robots. An indirect communication mechanism, stigmergy, inspired by biology, is used to help the robots cover the overall area in minimum time; in this way the robots simultaneously explore different regions of their environment. Moreover, a new coordination protocol called Ant-based Task and Robot Coordination (ATRC) is proposed to recruit robots and disarm mines in a minimum amount of time. This approach has been implemented and tested extensively in a simulation environment. The results show that the proposal converges faster in terms of space discovery than a well-known algorithm, the Vertex Ant Walk (VAW). Moreover, the effectiveness of the proposed protocol is evaluated by varying network parameters in the simulation.
Research on VoIP Traffic Detection
Muhammad Mazhar Ullah Rathore and Tahir Mehmood
VoIP usage is growing rapidly due to its cost effectiveness, its dramatically greater functionality compared with the traditional telephone network, and its compatibility with the public switched telephone network (PSTN). Internet service providers (ISPs) and telecommunication authorities are interested in detecting commercial usage of VoIP in order to either block or prioritize it. In this paper we propose a taxonomy of existing VoIP detection techniques. Four basic types of techniques are used to detect VoIP traffic: port-based, signature-based, pattern-based and statistical-analysis-based techniques. This paper mainly focuses on the basic methodology, usage, advantages and disadvantages, and a comparative study of these techniques.

Session 6: Tools, Methodologies and Applications I

Room: 4C
Session Chair: Josep L. Marzo, Univ. of Girona, Spain
Capacity Planning of Enterprise Information System through Simulation
Sara Saleem, Paulvanna Marimuthu and Sami Habib
Enterprise information systems (EIS) nowadays suffer from over-maintenance and increased operational costs due to the installation of many independent servers to accommodate new or existing projects and applications. Most of the servers are underutilized, consume more electrical power for operation and cooling, and increase operating and administrative expenses. In this paper, we propose a server consolidation approach to improve server utilization, whereby the least utilized servers in the EIS are removed and their clients are rerouted to the remaining servers with acceptable utilization and network delay. We utilize two random distribution schemes to select the servers for distributing the clients: a single nearest server at random, and multiple servers at random. We have validated our proposed server consolidation approach by simulating the enterprise information network using OPNET Modeler and analyzing the utilization of servers before and after consolidation. The simulation results demonstrate that the multiple-server distribution approach performs better, with improved utilization ranging from 27% to 37% and a 16% reduction in annual operational cost.
A model of a finite buffer shared by queues with batched Poisson arrivals
Miron Vinarskiy
We study a model of a finite buffer shared by M[x]/M/1 queues. Customer arrivals in each queue are modeled as a batched Poisson process with an arbitrary batch size distribution. The model is a generalization of the Kamoun and Kleinrock model of a packet switch buffer memory shared by M/M/1 queues. A sufficient condition, covering quite general buffer sharing policies, is found for the equilibrium state probability distribution to have product form. For the special case of geometrically distributed batches and the complete-sharing buffer policy, we obtain closed-form solutions for the normalization constant. The solutions emerge from a few system load configurations and lead to efficient computational procedures for the performance characteristics.
Traffic-level Community Protection in Telecommunication Networks under Large-Scale Failures
Víctor Torres-Padrosa, Marc Manzano, Eusebi Calle and Josep L. Marzo
Large-scale failures in telecommunication networks make preserving the connections inside a community a challenging task, since traditional approaches focus on preserving global connectivity. To achieve this goal, a new concept of community is proposed which combines not only the topological information of the network but also traffic-level interaction. Moreover, six novel community-based strategies for determining the best node candidates to protect under a limited budget are assessed. The proposed strategies have been tested over four different types of networks and compared to other well-known immunization or protection methods. The obtained results show that, depending on the network topology, either improved intra-community or global traffic preservation can be achieved with respect to traditional approaches.
Enhancing the Oakley key agreement protocol with secure time information
Pawel Szalachowski and Zbigniew Kotulski
Message freshness and time synchronization are nowadays essential services in secure communication. Many network protocols work correctly only when the freshness of messages sent between participants is assured and when the internal clocks of the protocol parties are synchronized. In this paper we present a novel, secure and fast procedure for ensuring data freshness and clock synchronization between two communicating parties. Next, we show how this solution can be used in cryptographic protocols. As an example, we apply our approach to the Oakley key determination protocol, providing it with time synchronization without any additional communication overhead.

Wednesday, July 11th, 2012

Keynote Session IV

Room: Benvenuto
Keynote IV
Keynote Speaker: José J. Granda
The Grand Challenges for Modeling and Simulation in Space Exploration

Session 7: Wireless and Grid Systems

Room: 4B
Session Chair: Alessandro Carrega, Univ. of Genoa, Italy
Traffic Merging for Energy-Efficient Datacenter Networks
Alessandro Carrega, Suresh Singh, Roberto Bruschi and Raffaele Bolla
Numerous studies have shown that datacenter networks typically see loads between 5% and 25%, yet the energy draw of these networks equals that of operating them at maximum load. In this paper, we propose a novel way to make these networks more energy proportional, that is, to make the energy draw scale with the network load. We propose the idea of traffic aggregation, in which low traffic from N links is combined to create H < N streams of high traffic. These streams are fed to H switch interfaces which run at maximum rate while the remaining interfaces are switched to the lowest possible rate. We show that this merging can be accomplished with minimal latency and energy cost (less than 0.1 W) while simultaneously allowing a deterministic way of switching link rates between maximum and minimum. Using simulations based on previously developed traffic models, we show that 49% energy savings are obtained at 5% load, and 22% at 50% load. Hence, as long as packet losses remain statistically insignificant, the results show that energy-proportional datacenter networks are indeed possible.
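The interface count implied by merging N lightly loaded links can be sketched with a one-line idealization; this ignores burstiness and the small merging latency the paper measures, so it is a lower bound rather than the authors' result.

```python
import math

def merged_interfaces(n_links, avg_load):
    """Idealized number H of full-rate interfaces needed to carry the
    combined traffic of n_links links running at avg_load (0..1):
    total offered load rounded up, with at least one interface kept
    active. The remaining n_links - H interfaces can drop to the
    lowest rate."""
    return max(1, math.ceil(n_links * avg_load))

# e.g. 48 links at 5% average load would ideally need only 3
# full-rate ports under this idealization.
```

This is the intuition behind the energy proportionality claim: the number of interfaces running at maximum rate tracks the load instead of the link count.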
Deriving Test Procedures of Low-Rate Wireless Personal Area Networks
Marina Eskola, Tapio Heikkilä and Pirkka Tukeva
Low-Rate Wireless Personal Area Network (LR-WPAN) technologies are increasingly applied in industrial applications, where harsh conditions pose additional reliability challenges. Efficient and manageable testing methods and tools to support building and maintaining reliable communication are indispensable, yet currently scarce. We take an approach of systematic testing of LR-WPANs. Our goal is to define a common test procedure from which specific targeted test sequences can be derived. We define a set of faults and disturbances and a set of metrics indicating their impact. We then define test procedures for deriving specific targeted test sequences for detecting particular faults and disturbances, such as device failures or harsh traffic conditions. We also present some test examples and measurement results from field tests.
A New Markov-Based Mobility Prediction Scheme for Wireless Networks with Mobile Hosts
Peppino Fazio and Salvatore Marano
Mobile communications increasingly need to guarantee a good level of Quality of Service (QoS), since communication guarantees are mandatory during active flows. Passive resources are used to ensure service continuity when mobile hosts move among different coverage cells. In this work the attention is focused on wireless services in cellular networks, where hand-over effects need to be mitigated through an appropriate reservation policy. The whole system is modeled through a distributed set of Hidden Markov Chains (HMC), and the related theory is used to design a mobility predictor, the main component of the proposed idea, which does not depend on the transmission technology (GSM, UMTS, WLAN, etc.), mobility model or vehicular scenario (urban, suburban, etc.). MRSVP is used to realize the active/passive bandwidth reservation in the considered network topology, and many simulation campaigns have been carried out to evaluate the correctness of the proposed algorithm, also in terms of CDP and CBP.

Session 8: Tools, Methodologies and Applications II

Room: 4C
Session Chair: Marco Cello, Univ. of Genoa, Italy
Opportunistic Estimation of Television Audience Through Smartphones
Igor Bisio, Alessandro Delfino, Giulio Luzzati, Fabio Lavagetto, Mario Marchese, Cristina Frà and Massimo Valla
Television audience estimation is an important task for advertisement placement. In this paper we present a system, based on a client-server architecture, able to recognize a live television show. The clients are implemented on smartphones, so a well-known audio fingerprint algorithm has been adapted to smartphone constraints by optimizing an ad-hoc cost function. Server and clients are generally far from synchronized, because the variety of broadcasting media (aerial, satellite or streaming over the Internet) leads to different and unpredictable delays. For this reason, we present a new likelihood estimate designed to overcome the lack of synchronization.
Creating and Using Key Network-Performance Indicators to Support the Design of Change of Enterprise Infocommunication Infrastructure
Muka László and Muka Gergely
Nowadays, an increasing number of organisations have to make decisions about the change and optimization of their enterprise infocommunication infrastructure. The usual approach to performance (using QoS and SLAs) is neither user (enterprise) centred nor comprehensive enough to help ICT (Information and Communication Technology) experts support management decisions. The paper proposes a strategic-management-inspired, user-centred and comprehensive method of creating KNPIs (Key Network-Performance Indicators) for this measurement. A case study illustrates the creation and use of KNPIs: alternatives are generated for the change, and using KNPIs, the feasibility, capacity, quality and financial aspects of the change of the enterprise infocommunication infrastructure – integrating also the business impact and the influence of user experience – are examined and compared. In the case study, network traffic simulation is used to analyse and predict the necessary enterprise network capacities, and financial modelling is applied in comparing the alternatives.
Maximum Throughput Computation of an Application in a Multi-tier Environment
Subhasri Duttagupta
Performance projection of an application for a large number of users involves predicting the maximum throughput the application can achieve and the maximum number of users it can support. Factors affecting the maximum throughput include both hardware and software resources of each of the servers associated with the application, and in a multi-tier environment the number of resources can be quite large. As soon as any of these resources becomes a bottleneck, i.e., its utilization approaches 100%, the rate of increase of throughput drops, and a further increase in the number of users reduces throughput. For any enterprise application that intends to cater to a large number of users, it is desirable to know the maximum throughput it can achieve. This paper proposes a systematic technique for analyzing the maximum throughput of any application, usable in either a test or a production environment. Our technique computes maximum throughput with more than 95% accuracy in most scenarios, and can reduce load-testing effort and time.
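The bottleneck reasoning in this abstract follows the classical asymptotic bounds of operational analysis. As a minimal sketch (illustrative only, not the paper's technique, and with hypothetical per-tier service demands), the resource with the largest total service demand caps throughput:

```python
def max_throughput(service_demands, think_time=0.0):
    """Asymptotic bounds from operational analysis.

    service_demands: per-resource total service demand D_i
    (seconds of service per request); think_time: user think time Z.
    The bottleneck resource (largest D_i) caps throughput at
    X_max = 1 / max(D_i), and the saturation point
    N* = (sum(D_i) + Z) / max(D_i) estimates how many users the
    system supports before throughput flattens and then degrades."""
    d_max = max(service_demands)
    x_max = 1.0 / d_max
    n_star = (sum(service_demands) + think_time) / d_max
    return x_max, n_star

# Hypothetical 3-tier demands: web 5 ms, app 20 ms, db 40 ms; 1 s think time
x_max, n_star = max_throughput([0.005, 0.020, 0.040], think_time=1.0)
print(x_max)    # 25.0 requests/s (db tier is the bottleneck)
print(n_star)   # 26.625 users before saturation
```

Beyond N* users, adding load only increases response time, which is why measured throughput curves flatten and then drop as the abstract describes.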
MILP Formulation for Squeezed Protection in Spectrum-Sliced Elastic Optical Path Networks
Karcius Assis, Raul Almeida and Helio Waldman
In Spectrum-Sliced Elastic Optical Path Networks (SLICE), the lightpath bandwidth is variable and provides a finer granularity than that of current optical path networks. Therefore, the virtual topology overlaid on a physical topology must be designed considering new constraints and requirements to optimize spectrum utilization, which has recently been investigated as a solution to a Mixed Integer Linear Program (MILP). In this paper we propose a mechanism that provides both traffic survivability and efficient spectral utilization in SLICE networks by employing spectrum flexibility and the novel concept of bandwidth squeezing. A MILP formulation is presented that uses these concepts to efficiently route the traffic in the network while satisfying a committed bit-rate in the event of a failure. Case studies are carried out to analyze the benefits of the proposal as well as the basic properties of the formulation.

SummerSim Tutorial Session III

Room: Benvenuto
Green Technologies for Smarter Next-Generation Wire-line Networks
Raffaele Bolla, Roberto Bruschi
Genoa University, Genoa, Italy