The School of Arts & Sciences

Department of Computer Science & Mathematics

Research

The Department of Computer Science and Mathematics is an important research hub within LAU, with specific interests in the following areas.

Computer Science

Algorithms

Bioinformatics

Protein structure prediction

We have been exploring computational solutions for predicting tertiary protein structures. The focus has been on the use of metaheuristics for ab initio prediction.
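
To make the approach concrete, the following is a minimal sketch of a metaheuristic folding search, assuming a simplified 2D HP-lattice energy model in place of a real all-atom energy function; the move set, cooling schedule, and benchmark sequence are illustrative only, not the group's actual method.

    import math, random

    # Toy ab initio folding on a 2D HP lattice: H residues gain -1 energy per
    # non-bonded H-H contact, and conformations are self-avoiding walks encoded
    # as direction strings. (Illustrative model, not the group's actual one.)

    MOVES = {'U': (0, 1), 'D': (0, -1), 'L': (-1, 0), 'R': (1, 0)}

    def coords(directions):
        """Map directions to lattice coordinates; None if the walk self-intersects."""
        pos, path = (0, 0), [(0, 0)]
        for d in directions:
            dx, dy = MOVES[d]
            pos = (pos[0] + dx, pos[1] + dy)
            if pos in path:
                return None
            path.append(pos)
        return path

    def energy(seq, path):
        """-1 for each H-H pair adjacent on the lattice but not along the chain."""
        e = 0
        for i in range(len(seq)):
            for j in range(i + 2, len(seq)):
                if seq[i] == seq[j] == 'H':
                    if abs(path[i][0] - path[j][0]) + abs(path[i][1] - path[j][1]) == 1:
                        e -= 1
        return e

    def anneal(seq, steps=20000, t0=2.0, cooling=0.9995):
        n = len(seq) - 1
        conf = (['R', 'U'] * n)[:n]          # start from a self-avoiding zigzag
        cur = best = energy(seq, coords(conf))
        t = t0
        for _ in range(steps):
            cand = conf[:]
            cand[random.randrange(n)] = random.choice('UDLR')   # local move
            path = coords(cand)
            if path is not None:
                e = energy(seq, path)
                # Metropolis rule: accept improvements always, and worse
                # conformations with a probability that shrinks as t cools.
                if e <= cur or random.random() < math.exp((cur - e) / t):
                    conf, cur = cand, e
                    best = min(best, e)
            t *= cooling
        return best

    print(anneal("HPHPPHHPHPPHPHHPPHPH"))     # a classic 20-residue HP benchmark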

Gene-disease association

We have been exploring computational techniques for discovering associations between diseases, such as particular types of cancer, and genes. For this purpose, we have been experimenting with data mining and evolutionary algorithms.
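
As an illustration of the evolutionary side of this work, the sketch below evolves a bit mask over candidate genes toward subsets that separate cases from controls; the synthetic expression data, fitness function, and GA parameters are stand-ins, not the group's actual pipeline.

    import random

    random.seed(1)
    N_GENES, N_SAMPLES = 40, 60
    # Synthetic expression matrix: genes 0-4 are made informative for the label.
    labels = [i % 2 for i in range(N_SAMPLES)]
    data = [[random.gauss(2.0 if (g < 5 and y) else 0.0, 1.0)
             for g in range(N_GENES)] for y in labels]

    def fitness(mask):
        """Separation between class means over selected genes, minus a size penalty."""
        sel = [g for g in range(N_GENES) if mask[g]]
        if not sel:
            return -1.0
        gap = 0.0
        for g in sel:
            cases = [row[g] for row, y in zip(data, labels) if y]
            ctrls = [row[g] for row, y in zip(data, labels) if not y]
            gap += abs(sum(cases) / len(cases) - sum(ctrls) / len(ctrls))
        return gap / len(sel) - 0.01 * len(sel)   # favor small, informative subsets

    def evolve(pop_size=30, gens=40):
        pop = [[random.randint(0, 1) for _ in range(N_GENES)] for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[:pop_size // 2]
            children = []
            for _ in range(pop_size - len(survivors)):
                a, b = random.sample(survivors, 2)
                cut = random.randrange(N_GENES)          # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < 0.2:                # point mutation
                    child[random.randrange(N_GENES)] ^= 1
                children.append(child)
            pop = survivors + children
        return max(pop, key=fitness)

    best = evolve()
    print("selected genes:", [g for g in range(N_GENES) if best[g]])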

Databases

Electronic design automation and high-level synthesis

High-level synthesis is the process of transforming a behavioral description into a structural one. From the input specification, the synthesis system produces a description of a datapath, that is, a network of registers, functional units, multiplexers, and buses. It must also produce the specification of the control path. Many different structures can be used to realize a given behavior, and one of the main tasks of high-level synthesis is to find the structure that best meets the constraints while minimizing other costs. Research in this area aims at the development of CAD tools and methodologies for highly testable electronic systems at the behavioral and structural levels.
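
As a small illustration of the scheduling step at the heart of high-level synthesis, the sketch below performs resource-constrained list scheduling of a toy dataflow graph; the input expression, the operation set, and the single-adder/single-multiplier limits are assumptions made for the example.

    # Operations from a behavioral description are assigned to control steps
    # under functional-unit constraints; register and multiplexer binding
    # would follow in a full synthesis flow.

    # Dataflow graph for y = (a + b) * (c + d) + e  (hypothetical input).
    ops = {
        "add1": ("add", []),          # a + b
        "add2": ("add", []),          # c + d
        "mul1": ("mul", ["add1", "add2"]),
        "add3": ("add", ["mul1"]),    # ... + e
    }
    LIMITS = {"add": 1, "mul": 1}     # one adder and one multiplier available

    def list_schedule(ops, limits):
        schedule, done, step = {}, set(), 0
        while len(done) < len(ops):
            used = {k: 0 for k in limits}
            for name, (kind, deps) in ops.items():
                ready = name not in done and all(d in done and schedule[d] < step
                                                 for d in deps)
                if ready and used[kind] < limits[kind]:
                    schedule[name] = step      # issue op in this control step
                    used[kind] += 1
            done.update(n for n, s in schedule.items() if s == step)
            step += 1
        return schedule

    print(list_schedule(ops, LIMITS))
    # {'add1': 0, 'add2': 1, 'mul1': 2, 'add3': 3} with a single adder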

Networking

Pervasive and mobile computing

According to a 2004 EMC report, there were about 1.5 billion cell phone users worldwide, more than three times the number of personal computers, and the figure was expected to exceed 4 billion by 2010. The main goal of mobile computing is anytime, anywhere access, liberating people from relying on a computing or communication device at a fixed location.

Mobile devices, however, have strict resource limitations compared to traditional personal computers, including battery lifetime, memory storage, and processing speed. One way to combat these limitations is to introduce new technologies for longer-lifetime batteries, fast and abundant memory, and fast processors. Significant research effort is also devoted to designing resource-aware algorithms and protocols, so that software running on these devices consumes minimal battery power and memory.

IP network planning

All-IP converged networks should be planned carefully to accommodate new services at a high and stable quality level. The ultimate objective of network planning is to lay the foundation for profitable network operation, which is achieved by provisioning a network with performance quality that is sufficiently high but not excessive. Network planning therefore examines the tradeoff between performance quality and the resulting costs: its task is to select the scenario with the optimal tradeoff, i.e., the one with the best quality at minimum cost. In summary, the objectives of network planning are 1) economical network ownership and 2) guaranteed quality of service.

With these objectives, a general network design problem can be formulated to determine the optimal values of variables such as topology, routing tables and schemes, link capacities, bandwidth allocation, and domain design. Solving this general problem is complex and not yet feasible due to the interdependencies among the design variables; as a result, most research in this area focuses on solving subproblems.

In the literature, four basic design subproblems are defined: flow assignment; capacity assignment; combined capacity and flow assignment; and combined topology, capacity, and flow assignment. Flow assignment determines the optimal routes over which information is transferred among the communicating nodes; capacity assignment determines the link capacities required for high-quality transmission at minimum cost; and topology assignment determines node locations and link selection.
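
For the capacity-assignment subproblem, a classical closed-form solution is Kleinrock's square-root assignment, which minimizes average M/M/1 network delay subject to a linear budget on capacity. The sketch below implements that textbook formula with made-up flows and costs; it is an illustration of the subproblem, not of the department's specific methods.

    import math

    # Given link flows lam[i], per-unit capacity costs d[i], and a total
    # budget D, choose capacities C[i] minimizing average M/M/1 delay
    # subject to sum(d[i] * C[i]) == D (Kleinrock's square-root assignment).

    def capacity_assignment(lam, d, D):
        base_cost = sum(di * li for di, li in zip(d, lam))
        excess = D - base_cost        # budget left after covering the flows
        assert excess > 0, "budget too small to carry the offered traffic"
        norm = sum(math.sqrt(di * li) for di, li in zip(d, lam))
        return [li + (excess / di) * math.sqrt(di * li) / norm
                for di, li in zip(d, lam)]

    lam = [100.0, 400.0, 250.0]   # offered flow per link (illustrative)
    d = [1.0, 1.0, 2.0]           # cost per unit capacity (illustrative)
    C = capacity_assignment(lam, d, D=2000.0)
    print([round(c, 1) for c in C])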

Network coding

Network coding has emerged recently as a promising research area in the field of networking systems. It moves away from the classical approach to networking, in which intermediate nodes forward packets identical to those they receive: with network coding, intermediate nodes send packets that are linear combinations of the packets they receive. When packets do not have the same length, the shorter ones are padded with zeros. Note that a linear combination is different from a concatenation: a linear combination of a set of packets of maximum length L results in an encoded packet of size L.
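
A minimal sketch of the simplest case, assuming coefficients in GF(2) so that the linear combination reduces to a bytewise XOR; the function name and packet contents are illustrative.

    from functools import reduce

    # Linear network coding over GF(2): the combination of packets is a
    # bytewise XOR. Shorter packets are zero-padded, so the coded packet is
    # as long as the longest input, not the concatenation of the inputs.

    def xor_combine(packets):
        """XOR byte strings together, zero-padding each to the longest length."""
        L = max(len(p) for p in packets)
        padded = [p.ljust(L, b"\x00") for p in packets]
        return bytes(reduce(lambda acc, p: [a ^ b for a, b in zip(acc, p)],
                            padded, [0] * L))

    A, B = b"hello", b"hi"
    print(len(xor_combine([A, B])))   # 5: max(len(A), len(B)), not 5 + 2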

Network coding can be used to improve throughput. This is illustrated by the famous butterfly example, in which two sources multicast packets A and B to two sinks, R1 and R2, through a shared bottleneck link. Without network coding, R1 and R2 can each receive only A or B; with network coding, both R1 and R2 can receive A and B. Network coding also provides robustness and adaptability, and it can be incorporated into wireless, ad hoc, peer-to-peer, and mobile networks.
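
Continuing the sketch above, the bottleneck node forwards the combination A XOR B, and each sink recovers the missing packet by combining once more:

    # Sink R1 also receives A on a direct edge and sink R2 receives B, so
    # each can cancel the packet it already has out of the coded packet.

    A, B = b"pktA1", b"pktB2"                 # equal-length packets for clarity
    coded = xor_combine([A, B])               # sent over the shared bottleneck
    assert xor_combine([A, coded]) == B       # R1: A ^ (A ^ B) == B
    assert xor_combine([B, coded]) == A       # R2: B ^ (A ^ B) == A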

Peer-to-peer networking

The vast majority of video applications delivered today over wireless networks emanate from dedicated infrastructure servers. However, it is very costly to meet the steadily increasing demand for high-data-rate video applications, both in terms of bandwidth and server hardware. As a result, much video on demand will likely be streamed via peer-to-peer architectures, in which the consumers of the video content are also its suppliers. In peer-to-peer architectures, peers contribute their storage and network bandwidth to the system; in wireless ad hoc environments, they additionally contribute their power capabilities to relay video data between nodes. However, since videos are typically large and require a high share of the network capacity for delivery, many peers may be unwilling to cache them in whole to serve others, which makes providing scalable, high-quality video services in a peer-to-peer environment challenging.

Rule-based software quality estimation

Assessing software quality is very important in software development, as it helps reduce cost, time, and effort. Most software quality characteristics, such as stability and maintainability, cannot be measured directly; they can, however, be estimated from other measurable attributes. For this purpose, machine learning algorithms have been used extensively to build software quality estimation models, which relate what can be directly measured to what can only be estimated. Rule-based models are the most widely used owing to their white-box nature, but their accuracy deteriorates when they are applied to new software components. Research in this area concentrates on genetic algorithm-based approaches to optimizing existing rule-based software quality estimation models, touching on three fields of computer science: machine learning, evolutionary computation, and software quality.
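
A compact sketch of the idea, assuming a single interpretable rule whose numeric thresholds are re-tuned by a GA against labeled components; the metrics, rule, and data below are synthetic illustrations rather than the models used in the research.

    import random

    # White-box rule "unstable if coupling > t1 or churn > t2"; the GA
    # searches for thresholds (t1, t2) that maximize accuracy on labels.

    random.seed(7)
    components = [(random.random(), random.random()) for _ in range(200)]
    components = [(c, ch, int(c > 0.6 or ch > 0.7)) for c, ch in components]

    def accuracy(thresholds):
        t1, t2 = thresholds
        hits = sum(int(c > t1 or ch > t2) == y for c, ch, y in components)
        return hits / len(components)

    def evolve(pop_size=20, gens=50):
        pop = [(random.random(), random.random()) for _ in range(pop_size)]
        for _ in range(gens):
            pop.sort(key=accuracy, reverse=True)
            elite = pop[:pop_size // 4]
            pop = elite + [
                # blend-crossover of two elites plus Gaussian mutation
                tuple(min(1, max(0, (a + b) / 2 + random.gauss(0, 0.05)))
                      for a, b in zip(*random.sample(elite, 2)))
                for _ in range(pop_size - len(elite))
            ]
        return max(pop, key=accuracy)

    best = evolve()
    print("tuned thresholds:", [round(t, 2) for t in best],
          "accuracy:", accuracy(best))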

SOC design and embedded systems

Recent advances in semiconductor process technologies enable the integration of an entire system on a chip (SOC), based on a reuse philosophy that divides the CAD community into core providers and core integrators. Core providers create embedded cores: pre-designed and pre-verified complex logic blocks. Research in this area aims at the development of tools for SOC test scheduling, test access mechanism design, and SOC integration.
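
As a simplified illustration of the test-scheduling problem, the sketch below assigns core tests to a fixed number of test-access-mechanism (TAM) partitions using a longest-test-first greedy rule; real formulations also size TAM widths and respect power constraints, so treat this as a toy model with made-up test times.

    import heapq

    # Schedule core tests on identical TAM partitions to minimize the
    # overall test time (classic longest-processing-time-first heuristic).

    def schedule_tests(test_times, n_partitions):
        heap = [(0.0, p) for p in range(n_partitions)]   # (finish, partition)
        heapq.heapify(heap)
        assignment = {}
        for core, t in sorted(test_times.items(), key=lambda kv: -kv[1]):
            finish, p = heapq.heappop(heap)              # least-loaded partition
            assignment[core] = (p, finish, finish + t)   # (partition, start, end)
            heapq.heappush(heap, (finish + t, p))
        makespan = max(end for _, _, end in assignment.values())
        return assignment, makespan

    tests = {"cpu": 420, "dsp": 390, "mem": 250, "uart": 60, "dma": 130}
    plan, total = schedule_tests(tests, n_partitions=2)
    print(plan, "total test time:", total)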

Software engineering

Testing and regression testing

We have been developing code-based testing techniques to gain confidence in the correctness of web applications that include dynamic features. We have also been developing regression testing algorithms that provide confidence in modified programs based on design and code information.
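
A minimal sketch of one common regression-testing idea, coverage-based test selection, assuming a recorded test-to-function coverage map; the names and change set are placeholders, and the actual algorithms under development may differ.

    # Rerun only the tests whose recorded coverage touches functions that
    # changed in the new version of the program.

    coverage = {                       # test -> functions it executes
        "test_login":    {"auth.check", "db.read"},
        "test_checkout": {"cart.total", "db.read", "pay.charge"},
        "test_profile":  {"auth.check", "ui.render"},
    }
    changed = {"pay.charge", "ui.render"}   # functions modified in this revision

    def select_tests(coverage, changed):
        """Keep every test that covers at least one changed function."""
        return sorted(t for t, funcs in coverage.items() if funcs & changed)

    print(select_tests(coverage, changed))   # ['test_checkout', 'test_profile']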

Timetabling

Timetabling problems

We have been developing algorithms for timetabling university exams and courses.
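
At its core, exam timetabling contains a graph-coloring problem: exams that share a student must be assigned different time slots. The sketch below shows a greedy baseline on made-up enrollments; real instances add rooms, capacities, and soft constraints, and the department's algorithms are not limited to this heuristic.

    # Build a conflict graph (edge between exams with a common student) and
    # color it greedily, most-constrained exam first.

    enrollments = {                       # student -> exams taken (illustrative)
        "s1": {"math", "physics"},
        "s2": {"math", "chemistry"},
        "s3": {"physics", "chemistry", "biology"},
    }

    exams = set().union(*enrollments.values())
    conflicts = {e: set() for e in exams}
    for taken in enrollments.values():
        for e in taken:
            conflicts[e] |= taken - {e}

    def greedy_timetable(conflicts):
        """Assign each exam the smallest slot unused by its conflicting exams."""
        slots = {}
        for exam in sorted(conflicts, key=lambda e: -len(conflicts[e])):
            used = {slots[n] for n in conflicts[exam] if n in slots}
            slots[exam] = next(s for s in range(len(conflicts)) if s not in used)
        return slots

    print(greedy_timetable(conflicts))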

Mathematics

Applied mathematics

Mathematics education

Non-smooth analysis and optimal control

Non-smooth analysis refers to differential analysis in the absence of differentiability. It can be regarded as a branch of the vast subject known as nonlinear analysis. Since its inception in the early 1970s, there has been a sustained and fruitful interplay between non-smooth analysis and optimal control. A familiarity with non-smooth analysis is therefore essential for an in-depth understanding of present-day research in optimal control. The main object of our research is the application of new methods from non-smooth analysis to the study of certain Hamilton-Jacobi equations arising in optimal control. Viscosity methods play an important role in this research.
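
For concreteness, here is a representative, textbook-form statement of the kind of equation involved (not the group's specific setting): the value function of a finite-horizon control problem formally satisfies a Hamilton-Jacobi-Bellman equation, and non-smooth analysis enters because the value function is typically only Lipschitz, so the equation must be interpreted in the viscosity sense.

    % Value function V(t,x) of minimizing \int_t^T L(x(s),u(s)) ds + g(x(T))
    % subject to the controlled dynamics \dot{x} = f(x,u), u(s) \in U:
    \[
      \partial_t V(t,x)
      + \min_{u \in U} \bigl\{ \langle \nabla_x V(t,x),\, f(x,u) \rangle
                               + L(x,u) \bigr\} = 0,
      \qquad V(T,x) = g(x).
    \]
    % Since V is typically only Lipschitz, it solves this equation in the
    % viscosity sense: the equation is tested, with appropriate inequalities,
    % against smooth functions touching V from above and from below.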

Pure mathematics
