
Past Seminars

Friday, May 27, 2005, 2 p.m. - Science Center 356
Hon-Ching Lo
Department of Computer Science, Clarkson University
Program Verification
Abstract:
The objective of this thesis is to develop software that automates program verification, based on existing formal methods and the theorem prover Integrated Canonizer and Solver (ICS). We developed software, called Converter To Constraints (CTC), which produces constraints that ICS uses to verify programs. We also defined a language, CC, as input to CTC. By applying the software to a variety of CC programs, we explore techniques for broadening the range of programs that can be verified and for increasing the usability of the software. We also devise methods and strategies to maximize automation. In the course of building on the basic framework of CTC, we discovered various problems and limitations of ICS; part of our effort therefore went into making the necessary adjustments to CTC so that it remains compatible with ICS.
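As a hedged illustration of the kind of constraint generation involved (my example, not drawn from CTC itself): in the standard weakest-precondition treatment, an annotated assignment statement reduces to an arithmetic implication that a solver such as ICS can discharge.

    % wp(x := e, Q) = Q[e/x]: the Hoare triple below reduces to a constraint.
    \[
    \{\, x = a \,\}\;\; x := x + 1 \;\;\{\, x = a + 1 \,\}
    \quad\rightsquigarrow\quad
    (x = a) \Rightarrow (x + 1 = a + 1)
    \]

A converter in the spirit of CTC would emit the right-hand implication for the prover to check.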
 
Tuesday, May 24, 2005, 2 p.m. - Science Center 356
Hon-Ting Lo
Department of Computer Science, Clarkson University
Cryptographic Protocol Analysis
Abstract:
Cryptographic protocols provide secure services in communication. Methods such as model checking and theorem proving are used to verify the correctness of cryptographic protocols, but model checking can only check a bounded number of states, and theorem-proving methods generally do not find attacks. Our thesis provides a method for verifying protocols automatically, and for automatically finding attacks on protocols when they exist, using the automated theorem prover SPASS. We show how to transform cryptographic protocols from conventional notation into SPASS format and then verify the protocol in SPASS. We also provide a method for reconstructing the steps of an attack from the SPASS output. The advantages of our method are that it is completely automated and that we use ordered resolution for efficiency, together with equations to model cryptographic primitives, which distinguishes our method from others.
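As a rough, hedged illustration of the flavor of such an encoding (illustrative symbols, not the thesis's actual SPASS signature): intruder capabilities and cryptographic primitives can be written as first-order formulas and equations.

    % A Dolev-Yao-style composition rule and an equation for decryption;
    % knows, enc, and dec are hypothetical symbols used only for illustration.
    \[
    \forall x\,\forall k\;\big(\mathit{knows}(x) \wedge \mathit{knows}(k) \rightarrow \mathit{knows}(\mathit{enc}(x,k))\big)
    \qquad
    \forall x\,\forall k\;\; \mathit{dec}(\mathit{enc}(x,k),k) = x
    \]

Resolution then searches for a derivation of the secret; if one is found, its proof steps can be read back as the steps of an attack.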
 
Monday, March 21, 2005, 4 p.m. - Science Center 342
Jeffrey Jackson
Department of Mathematics and Computer Science, Duquesne University
Average-Case Learnability
Abstract:
For many seemingly-simple problems in learning theory, no polynomial-time algorithm has been discovered. For example, no efficient algorithms are known for approximating either boolean decision trees or monotone disjunctive normal form (DNF) expressions from uniform examples. We simplify these problems somewhat by asking whether these classes can be learned "on average" relative to certain natural probability distributions defined over the classes. In this average-case setting, we obtain efficient algorithms for learning boolean decision trees and for learning a substantial subclass of monotone DNF (as well as a slightly more restricted subclass of nonmonotone DNF). This is joint work with Rocco Servedio (Columbia University).
 
Friday, February 25, 2005, 4 p.m. - Science Center 356
Michael Molloy
Department of Computer Science, University of Toronto
Random Constraint Satisfaction Problems

Abstract:
Constraint satisfaction problems are generalizations of CNF boolean formulas. We are given a set of variables, and a domain of values that the variables can take. Some subsets of the variables have "constraints" which restrict the value-assignments that can be made to those variables. A problem is satisfiable if there is an assignment of values to the variables which does not violate any of the constraints. Some well-studied special cases are k-SAT, graph colourability and graph homomorphism.

In this talk, we discuss models of random constraint satisfaction problems. The main issue is the following: if we form a random problem by adding random constraints one-at-a-time, how does the probability of the problem being satisfiable change? It is clear that this probability is monotonically decreasing as the number of constraints increases, but how suddenly does it decrease? How many constraints must be added before the decrease is significant? (In formal terms, these questions can be rephrased as "does the problem have a sharp threshold of satisfiability?" and "how many constraints must be added before it is no longer almost surely satisfiable?") These questions generalize the well-studied notions of the threshold of satisfiability for a random instance of k-SAT and the threshold of k-colourability for a random graph.
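To make the model concrete, here is a minimal sketch (my own illustration, not the speaker's code) that adds random binary constraints one at a time and brute-force checks satisfiability on tiny instances, so the decay of the satisfiability probability can be observed directly.

    import itertools
    import random

    def random_binary_csp(n_vars, domain_size, n_constraints, forbid_frac=0.5, seed=0):
        """Illustrative random CSP: each constraint picks a pair of variables
        and forbids a random fraction of the value pairs for that scope."""
        rng = random.Random(seed)
        constraints = []
        for _ in range(n_constraints):
            u, v = rng.sample(range(n_vars), 2)
            pairs = list(itertools.product(range(domain_size), repeat=2))
            forbidden = set(rng.sample(pairs, int(forbid_frac * len(pairs))))
            constraints.append((u, v, forbidden))
        return constraints

    def satisfiable(n_vars, domain_size, constraints):
        """Brute-force satisfiability check (exponential; tiny instances only)."""
        for assignment in itertools.product(range(domain_size), repeat=n_vars):
            if all((assignment[u], assignment[v]) not in bad
                   for u, v, bad in constraints):
                return True
        return False

    # Estimate Pr[satisfiable] as random constraints are added one at a time.
    for m in range(0, 30, 5):
        sat = sum(satisfiable(6, 3, random_binary_csp(6, 3, m, seed=s))
                  for s in range(50))
        print(f"{m:2d} constraints: ~{sat/50:.2f} satisfiable")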

 
Monday, January 24, 2005, 4 p.m. - Science Center 356
Brigitte Pientka
Department of Computer Science, McGill University
Overcoming Performance Barriers: Efficient Proof Search in Logical Frameworks

Abstract:
The logical framework Twelf provides an experimental platform to specify, implement and execute formal systems. One of its applications is in proof-carrying code and proof-carrying authentication, where it is successfully used to specify and verify formal guarantees about the run-time behavior of programs. These real-world applications have exposed some critical bottlenecks: execution of many logical specifications is severely hampered and some are not executable at all, thereby limiting the framework's application in practice.

In this talk, I will describe three optimizations which substantially improve the performance and extend the power of the existing system. First, I give an optimized unification algorithm for logical frameworks which allows us to eliminate unnecessary occurs-checks. Second, I present a novel execution model based on selective memoization, and third, I discuss term indexing techniques to sustain performance in large-scale examples. As experimental results will demonstrate, these optimizations taken together constitute a significant step toward exploring the full potential of logical frameworks in real-world applications.
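As a hedged first-order sketch of the cost being optimized away (Twelf works with dependently typed higher-order terms, so this is an analogy, not its implementation): binding a variable normally triggers an occurs-check traversal of the term, and the optimization skips that traversal where it is provably unnecessary.

    def walk(t, subst):
        """Follow variable bindings to a representative term."""
        while isinstance(t, str) and t.islower() and t in subst:
            t = subst[t]
        return t

    def occurs(var, t, subst):
        """Occurs-check: does `var` appear in `t`? This linear traversal is
        the overhead that the optimized algorithm avoids when safe."""
        t = walk(t, subst)
        if t == var:
            return True
        if isinstance(t, tuple):  # compound term ('f', arg1, ..., argn)
            return any(occurs(var, a, subst) for a in t[1:])
        return False

    def unify(s, t, subst, occurs_check=True):
        """First-order unification. Variables are lowercase strings,
        constants uppercase, compound terms tuples. Returns dict or None."""
        s, t = walk(s, subst), walk(t, subst)
        if s == t:
            return subst
        if isinstance(s, str) and s.islower():
            if occurs_check and occurs(s, t, subst):
                return None  # rejects circular bindings like x = f(x)
            return {**subst, s: t}
        if isinstance(t, str) and t.islower():
            return unify(t, s, subst, occurs_check)
        if isinstance(s, tuple) and isinstance(t, tuple) \
                and s[0] == t[0] and len(s) == len(t):
            for a, b in zip(s[1:], t[1:]):
                subst = unify(a, b, subst, occurs_check)
                if subst is None:
                    return None
            return subst
        return None

    print(unify('x', ('f', 'y'), {}))  # {'x': ('f', 'y')} -- safe to skip the check
    print(unify('x', ('f', 'x'), {}))  # None -- here the occurs-check must run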

 
Friday, January 7, 2005, 10 a.m. - Science Center 354
Lin Zhang
Department of Computer Science, Clarkson University
Exponential Tail Bounds

Abstract:
Given a sequence of independent random variables, how close is their sum to its mean? The answer is that it is exponentially close, assuming that the random variables are well-behaved. Some explicit bounds are known under several reasonable assumptions on the random variables. We describe here well-known exponential tail bounds that are relevant to computer science. These are organized in the following categories:

(a) standard bounds: Chernoff and Hoeffding bounds.
(b) martingale bounds: McDiarmid's bound and Azuma's inequality.
(c) isoperimetric bound: Talagrand's inequality.

We survey the basic ideas behind these bounds and describe the most direct proofs. Then, we observe a simple quantitative connection between Chernoff-Hoeffding bounds and the notion of negatively associated random variables.
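For concreteness, the category (a) bounds take the following standard form; this is Hoeffding's inequality for independent bounded variables, stated here for reference.

    \[
    \Pr\left[\,\Big|\sum_{i=1}^{n} X_i - \mathbb{E}\Big[\sum_{i=1}^{n} X_i\Big]\Big| \ge t\,\right]
    \;\le\; 2\exp\!\left(\frac{-2t^2}{\sum_{i=1}^{n} (b_i - a_i)^2}\right),
    \]
    % where the X_i are independent and each X_i lies in [a_i, b_i] almost surely.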

 
Friday, December 10, 2004, 3 p.m. - Snell Hall 214
Todd Deshane
Department of Computer Science, Clarkson University
Web-Based Collaborative Exploration and Characterization of Large SQL Databases
Abstract:
Groups of people in many diverse fields face the challenge of characterizing or mining information from a large data set. People seeking to understand this large amount of data, which is typically stored in a relational database, often need to submit long-running SQL queries that make inefficient use of valuable computing resources. In addition, the web interfaces to large data sets are often written by and for engineers or scientists, without accommodating a range of user skill sets. Our goal is to shift computational data mining from the laborious task of the individual to the cooperative task of groups. To address these problems, we developed a query caching system with a web-based collaborative interface. With this interface, we promote knowledge sharing among users of various skill levels and experience. We discuss our design for this system along with our implementation of a prototype that uses a large set of Border Gateway Protocol (BGP) data. By using this system to explore and characterize large BGP databases, we demonstrate that it is in fact possible to effectively apply collaborative techniques to mine large SQL databases.
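A minimal sketch of the core caching idea, assuming nothing about the actual system beyond what the abstract states (the real interface is web-based and collaborative): normalize each SQL string and reuse the stored result when the same query arrives again.

    import sqlite3

    class QueryCache:
        """Minimal illustrative SQL query cache: results of previously seen
        (normalized) queries are reused instead of re-executed."""
        def __init__(self, conn):
            self.conn = conn
            self.cache = {}  # normalized SQL -> list of result rows

        def query(self, sql):
            key = " ".join(sql.split()).lower()  # crude normalization
            if key not in self.cache:
                self.cache[key] = self.conn.execute(sql).fetchall()
            return self.cache[key]

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE routes (prefix TEXT, as_path TEXT)")
    conn.execute("INSERT INTO routes VALUES ('10.0.0.0/8', '701 1239')")
    cache = QueryCache(conn)
    print(cache.query("SELECT * FROM routes"))   # executes against the database
    print(cache.query("select *  from routes"))  # served from the cache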
 
Friday, December 10, 2004, 10 a.m. - Snell Hall 214
Matthew Finlayson
Department of Computer Science, Clarkson University
OfficeBeans – An Interactive Component Toolkit for Data Visualization
Abstract:
Office Beans provides a portable, web-based solution for viewing large amounts of data quickly and concisely. The project gives users the ability to arrange their data using dynamic charts and graphs, as well as share the results with other users. Office Beans was developed as a generic front end to any large data repository; in particular, it was designed to meet the needs of jETIA (pronounced "juh tie ah"), a report generation infrastructure and toolkit for enterprise reporting. Office Beans is implemented as a collection of Java applets that receive data through an XML stream. A single instance of an XML data set is shared by the reporting applets, and changes are reflected in each of the applets through inter-applet communication. The modified instance of the XML data set can be passed back to jETIA to be used at a later time or shared with other users. Although Office Beans has been designed specifically to work with jETIA, any Office Beans-compliant XML stream can be displayed with spreadsheets, pivot tables, and line, bar, or pie charts. The implementation of Office Beans is an exercise in applying modern software engineering practices and open-standard technologies to produce a simple and robust application that meets end users' needs.
 
Wednesday, December 8, 2004, 9:30 a.m. - Snell Hall 175
Eli Dow
Department of Computer Science, Clarkson University
Project E.D.E.N.: Extra Data Everyone Needs

Abstract:
Interested in learning about an open source alternative to Microsoft's oft-delayed WinFS, Google's recently announced desktop search system, or Apple's Spotlight search technology? Would you find it useful if your computer held on to extra information about your files so you could locate them with greater ease?

I have implemented my own approach to metadata storage and search that parallels the functionality found in these systems. This includes a method for reliably storing additional information about your files seamlessly and quickly. Furthermore, my approach is unlike those commercial solutions because it is extremely portable across operating systems, underlying file systems, and database implementations.

Come to New Snell Hall room 175 on Wednesday, December 8, 9:30 – 11:30 a.m. to hear more about the E.D.E.N. system architecture, a discussion of keeping stored metadata synchronized with the actual files on disk (a.k.a. the coherence problem), and the interesting advances in search it can facilitate. The talk should be fairly complete, and accessible to anyone with a basic interest in the subject matter.
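As a hedged miniature of the storage-and-search idea (my sketch; E.D.E.N.'s actual architecture is the subject of the talk): walk the file system, record per-file metadata, and answer queries against the index.

    import os
    import time

    def index_tree(root):
        """Walk a directory tree and record simple per-file metadata.
        Illustrative only; a real system would also store user-supplied tags
        and keep the index coherent as files change (the coherence problem)."""
        index = {}
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                st = os.stat(path)
                index[path] = {
                    "size": st.st_size,
                    "modified": time.ctime(st.st_mtime),
                    "extension": os.path.splitext(name)[1],
                }
        return index

    def search(index, **criteria):
        """Return paths whose metadata matches all given key=value criteria."""
        return [p for p, meta in index.items()
                if all(meta.get(k) == v for k, v in criteria.items())]

    idx = index_tree(".")
    print(search(idx, extension=".txt"))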

 
Friday, November 12, 2004, Noon - Snell 212 (jointly sponsored by the Bio-Interface Seminar Series and The Department of Biology)
Phil Long
Center for Computational Learning Systems, Columbia University
Microarray Data and the Theory of Generalization in Boosting

Abstract:
Boosting is a widely successful approach to classification in which a number of "base classifiers" are incorporated into an aggregate classifier. The behavior of boosting seemed to run counter to the common view that a learning algorithm should strike a balance between the complexity of a hypothesis and its fit to the data. The influential "margin analysis" provided one explanation of this phenomenon.
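Schematically, the margin analysis bounds the true error of the aggregate classifier f by the fraction of training examples with margin below a threshold theta, plus a complexity term; the bound below is stated loosely from the literature (not from the talk), just to fix the shape.

    \[
    \Pr_{D}\big[\, y f(x) \le 0 \,\big] \;\le\;
    \Pr_{S}\big[\, y f(x) \le \theta \,\big]
    \;+\; \tilde{O}\!\left(\sqrt{\frac{d}{m\,\theta^{2}}}\right)
    \]
    % m = number of training examples, d = a complexity measure of the base
    % class; logarithmic factors are suppressed in the tilde-O.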

Microarrays allow scientists to measure the rate of expression of a large number of genes simultaneously in a given tissue sample. These rates are referred to collectively as an expression profile for the tissue sample. Some biological problems can be addressed by learning a classifier for expression profiles.

Applying boosting, and some related algorithms, to microarray data and related data reveals that the margin analysis provides an incomplete explanation of generalization in boosting. Examining how it falls short suggests an alternative mode of analysis, and we have carried out a preliminary analysis of this type. Suitably modified, boosting appears to be a useful tool for classifying expression profiles. (This talk includes joint research with Vinsensius Vega and Sanjoy Dasgupta.)

 
Monday, November 8, 2004, 12:30 p.m. - SC 346
Supraja Gurajala
Department of Computer Science, Clarkson University
A New Multi-Rate QoS-Aware MAC Protocol for Multi-Hop Ad Hoc Networks
Abstract:
Ad hoc networks are an emerging area of research due to the growing popularity of portable wireless devices. Advances in wireless communications and the growth of real-time applications have necessitated the development of wireless networks that support a high quality of service (QoS). The performance of a single-channel ad hoc network depends largely on the medium access control (MAC) protocol. A number of MAC protocols have been proposed in recent years, and each of these has focused on improving and optimizing a few selected QoS issues. We introduce a new MAC protocol, called the Multi-rate Multi-hop QoS-aware MAC Protocol (MMMP), for ad hoc networks that concurrently addresses several QoS issues. MMMP is an asynchronous scheme that can handle multi-rate and multi-hop real-time traffic, reduce hidden terminal problems, provide different QoS requirements for different classes of real-time traffic, and bound end-to-end delay for real-time traffic in a multi-hop environment. Two additional new features, Smart Drop and Release Bandwidth, are incorporated in MMMP to enable bandwidth preservation and admission control. Simulations were conducted on the QualNet platform to test the performance of the new scheme. The results indicate that MMMP outperforms IEEE 802.11 on all performance measures and can efficiently handle a range of load conditions.
 
Thursday, October 21, 2004, 2:30 p.m. - SC 354
Mark Goldberg
Department of Computer Science, Rensselaer Polytechnic Institute
Network Algorithms for Homeland Security, or How to find a Small Hidden Group in a Large Communication Network

Abstract:
A small group of actors in a large communication network is planning a terrorist act against people they disagree with. They are very careful in three ways:

  • they take time to plan all details, to guarantee their "success";
  • all their messages are well camouflaged: the content is encrypted so that it is difficult for an outsider to discern their real meanings; and
  • the group members do not trust any outsider with the details of the plan and preparation.

Do we have any chance of discovering this group before they start implementing whatever they are planning? We describe models and efficient algorithms for detecting groups functioning in communication networks that attempt to hide their functionality: hidden groups. As expected, we find that if the background communications are dense or more structured, then the hidden group is harder to detect. Surprisingly, we also find that when the hidden group is non-trusting, it is easier to detect.
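In the spirit of the models described (a hedged sketch of one natural signature, not the speaker's algorithm): a candidate hidden group is a set of actors that lands in a single connected component of the communication graph in every time window, which can be found by refining candidate groups window by window.

    def components(nodes, edges):
        """Connected components via union-find with path halving."""
        parent = {v: v for v in nodes}
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        for u, v in edges:
            parent[find(u)] = find(v)
        comp = {}
        for v in nodes:
            comp.setdefault(find(v), set()).add(v)
        return list(comp.values())

    def persistently_connected(nodes, windows, min_size=3):
        """Groups of actors that fall in one connected component in EVERY
        time window's communication graph -- a crude 'hidden group' signature.
        Sketch only; real background traffic needs statistical filtering."""
        groups = [set(nodes)]
        for edges in windows:  # refine candidate groups window by window
            refined = []
            for g in groups:
                for c in components(nodes, edges):
                    part = g & c
                    if len(part) >= min_size:
                        refined.append(part)
            groups = refined
        return groups

    # Actors 0-2 talk to each other in every window; the rest chat at random.
    windows = [[(0, 1), (1, 2), (3, 4)],
               [(0, 2), (1, 2), (4, 5)],
               [(0, 1), (2, 0), (3, 5)]]
    print(persistently_connected(range(6), windows))  # [{0, 1, 2}]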

 
Wednesday, September 1, 2004, 2 p.m. - Science Center 348
Zhaoying Chen
Department of Electrical and Computer Engineering, Clarkson University
A New QoS MAC Protocol for Ad Hoc Wireless Networks
Abstract:
Many MAC protocols have been developed to support mobile ad hoc networks. Their main goal is to solve the hidden terminal and exposed terminal problems that are unique to ad hoc networks. With the development of high-speed wireless communications, real-time transmissions are becoming more and more common in wireless ad hoc networks. To provide quality of service (QoS), differentiated services should be offered, but not at the cost of starving the remaining transmissions. We developed a new MAC scheme based on the MMACA/PR scheme, combining features of DPS and ASAP and adding new features of our own, to provide differentiated but reasonably fair service, deal with the hidden terminal problem, and provide aggregated services.
 
Friday, August 20, 2004, 1 p.m. - Science Center 348
Dalia Solomon
Department of Mathematics and Computer Science, Clarkson University
Honeypot Machines
Abstract:
HoneyPot machines are designed to mimic systems in order to divert intruders from breaking into the real system. By luring the intruder to a HoneyPot machine, an administrator can monitor the activity of the trespasser. The administrator can then learn about the vulnerabilities of the current system and redesign it to be more secure. But to do so, the administrator must build the HoneyPot machine in such a way that it fools the attacker into believing that it is the real system, so that information about the attacker's behavior can be logged effectively. I will discuss building a HoneyPot with VMware, and I will also talk about fingerprinting and countermeasures.
 
Thursday, April 22, 2004, 4:15 p.m. - Science Center 307
Thai Giang
Department of Mathematics and Computer Science, Clarkson University
Using Batch to Improve RSA
Abstract:
Security is a major issue in today's networks. Passwords and private messages sent across the network are not as secure as one might expect. Although encryption schemes are available, the performance of applications that use them is typically worse than that of applications that do not. This thesis analyzes the standard RSA encryption algorithm and compares it with Fiat's Batch RSA algorithm. The two algorithms are compared within a secure server/client handshake protocol. Our goal is to test Batch RSA and show that the standard handshake protocol can be improved with batching.
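For orientation, here is a toy sketch of textbook RSA in Python, showing the full-size private exponentiation whose cost Fiat's batching amortizes across several decryptions with distinct small public exponents; this is illustrative only (tiny primes, no padding) and is not the batching algorithm itself.

    from math import gcd

    # Textbook RSA with toy parameters -- NOT secure, NOT Fiat's batching.
    p, q = 1000003, 1000033
    n, phi = p * q, (p - 1) * (q - 1)

    def keypair(e):
        assert gcd(e, phi) == 1      # e must be invertible mod phi
        return e, pow(e, -1, phi)    # private exponent d = e^-1 mod phi

    e1, d1 = keypair(5)
    e2, d2 = keypair(7)
    m1, m2 = 123456, 654321
    c1, c2 = pow(m1, e1, n), pow(m2, e2, n)

    # Each decryption below is one full modular exponentiation with a large d;
    # Batch RSA's point is to pay that cost roughly once for the whole batch.
    assert pow(c1, d1, n) == m1 and pow(c2, d2, n) == m2
    print("both ciphertexts decrypted")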
 
Friday, April 16, 2004, 3 p.m. - Science Center 354
Joe Skufca
US Naval Academy, University of Maryland, US Navy
Travellers on a Loop Without Passing - A Generalized Random Walk
Abstract:
We consider the problem of stochastic flow of multiple particles traveling on a closed loop, with a constraint that particles move without passing. We use a Markov chain description that reduces the problem to a generalized random walk on a hyperplane (with boundaries). By expressing positions via a moving reference frame, the geometry of the no-passing criteria is greatly simplified, with the resultant condition expressible as the coordinate system planes which bound the first orthant. To determine state transition probabilities, we decompose transitions into independent events and construct a digraph representation in which calculating transition probability is reduced to a shortest path determination on the digraph. The resultant decomposition digraph is self-converse, and we exploit that property to establish the necessary symmetries to find the stationary density for the process.
 
Friday, February 27, 2004, 10 a.m. - Snell 213
Jeanna Matthews
Department of Mathematics and Computer Science, Clarkson University
Security Exploits and Defenses Up and Down the Protocol Stack
Abstract:
I will discuss some of the security holes in Internet protocols at various layers of the network protocol stack, from application-level protocols down to link-layer protocols, along with defenses that can be implemented at each layer. I will also discuss some of the most common classes of attacks on computer systems.
 
Friday, February 20, 2004, 10 a.m. - Snell 213
John Aycock
Department of Computer Science, University of Calgary
The Sky is Falling!

Abstract:
Last fall the University of Calgary offered a course on computer viruses and malicious software, the first of its kind in Canada and one of only a handful that have ever been offered worldwide. The course generated a global stream of controversy and media attention, in part because students were taught how to write viruses as well as defend against them.

This talk will be the first public presentation about the course. I will talk about the rationale behind the course, what I taught the students, how we put together a secure laboratory for coursework, and the experience we gained.

 
Friday, December 5, 2003, 3 p.m. - SC 342
Juan Wang
Department of Mathematics and Computer Science, Clarkson University
Design and Implementation of a Minitype Proxy Firewall System

Abstract:
This thesis analyzes the causes of Internet security problems and the advantages and disadvantages of commonly used network security techniques such as encryption, authentication, secure protocols, and firewalls. It also introduces the basic concepts and functions of firewalls, and then analyzes the implementation cost and security performance of firewall systems with different architectures. Research on firewall techniques is important because of the effective protection firewalls provide for network security.

Since a proxy firewall has proven more secure than a packet-filtering firewall, this system is designed as a proxy firewall. The system runs on a bastion host with the Red Hat Linux operating system. It provides network address translation, internal information hiding, strong user authentication, detailed logging, and more, and it is sufficient for normal Internet connections.

The focus of the thesis is the implementation of a minitype proxy firewall system, which includes a telnet proxy, an ftp proxy, a user authentication service, logging, and accounting. The telnet proxy tn-gw and the ftp proxy jftpgw control telnet and ftp connections, respectively. They validate the connection, conduct the necessary user authentication, forward data between client and server, log necessary information, and record accounting data. The user authentication service authsrv controls access by user name instead of by IP address, which makes the system more secure. It supports UNIX passwords and one-time passwords: a client component is embedded in the proxies, the server creates a challenge and verifies the response, and user information is maintained by a management program. Detailed logs are kept for future audit and tracing, and accounting data is used for calculating communication fees. Total incoming and outgoing bytes and online time for the e-mail application are tallied by user name, while those for other applications are tallied by user name or IP address; all of this is done automatically every month.
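A minimal sketch of the mechanism the two proxies share, namely checking the user and then relaying bytes between client and server; this is my illustration, not the tn-gw or jftpgw code, and the user check stands in for the real challenge/response authentication.

    import socket
    import threading

    ALLOWED_USERS = {"alice"}  # stand-in for the authsrv user check

    def pipe(src, dst):
        """Copy bytes one way until the connection closes."""
        while data := src.recv(4096):
            dst.sendall(data)
        dst.close()

    def handle(client, target_host, target_port):
        # A real proxy would run challenge/response authentication and write
        # log/accounting records here before forwarding anything.
        user = client.recv(1024).decode().strip()
        if user not in ALLOWED_USERS:
            client.sendall(b"access denied\n")
            client.close()
            return
        server = socket.create_connection((target_host, target_port))
        threading.Thread(target=pipe, args=(client, server), daemon=True).start()
        pipe(server, client)

    def proxy(listen_port, target_host, target_port):
        with socket.create_server(("", listen_port)) as ls:
            while True:
                conn, _addr = ls.accept()
                threading.Thread(target=handle,
                                 args=(conn, target_host, target_port),
                                 daemon=True).start()

    # Example (hypothetical host): forward local port 2323 to a telnet server.
    # proxy(2323, "intranet-host", 23)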

With consideration of the development of network and security technology, this thesis has also studied on the new firewall techniques and some improvement need to make that might be considered in future firewall design.

 
Thursday, December 4, 2003, 2:30 p.m. - SC 354
Alexis Maciel
Department of Mathematics and Computer Science, Clarkson University
A Simple Proof of the Weak Pigeonhole Principle

Abstract:
The Pigeonhole Principle states that if n pigeons are placed in n-1 holes, then at least one hole will contain more than one pigeon. The Weak Pigeonhole Principle refers to a version where the number of pigeons is significantly larger than the number of holes, typically at least twice as large. Both of these basic principles have numerous applications.
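In propositional form (a standard encoding, included here for concreteness rather than quoted from the talk), with variable p_{ij} read as "pigeon i sits in hole j", the principle for m pigeons and n holes is the tautology:

    \[
    \mathrm{PHP}^{m}_{n} \;:\quad
    \bigwedge_{i=1}^{m} \bigvee_{j=1}^{n} p_{ij}
    \;\longrightarrow\;
    \bigvee_{1 \le i < i' \le m}\; \bigvee_{j=1}^{n} \big( p_{ij} \wedge p_{i'j} \big)
    \]
    % The basic principle is the case m = n + 1; the weak version takes m
    % significantly larger, e.g. m = 2n.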

These theorems are easy to prove. How easy? Questions about the complexity of proving mathematical theorems are natural but they also have connections to important problems in computational complexity, automated theorem proving and software verification.

In this talk, we will introduce the study of proof complexity and present one of the simplest known proofs of the Weak Pigeonhole Principle. This is joint work with Toni Pitassi and Alan Woods.

 
Monday, November 17, 2003, Noon - SC 356
(Co-Sponsored with the Department of Biology and Seminar Series in Science and Technology at the Bio-Interface)
Katherine St. John
Department of Mathematics and Computer Science, City University of New York
Computational Methods for Analyzing Phylogenetic Trees
Abstract:
Evolutionary histories, or phylogenies, form an integral part of much work in biology. In addition to the intrinsic interest in the interrelationships between species, phylogenies are used for drug design, multiple sequence alignment, and even as evidence in a recent criminal trial. Much work has been done on designing algorithms that build phylogenetic trees given representative sequences of their DNA. The optimization criteria preferred by biologists for building trees are NP-hard, so heuristics are often used that return many possible trees instead of a single optimal tree. This talk concentrates on the heuristics used for tree reconstruction, as well as how to summarize, analyze, and visualize these sets of trees. In particular, we will focus on fast reconstruction methods with provably nice properties, on calculating biologically meaningful distances between trees quickly, and on visualizing large sets of trees using the treecomp package designed by our group as a module for the Mesquite system (developed by Wayne and David Maddison).
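One common, fast measure for comparing trees in this line of work is the Robinson-Foulds distance; the sketch below (my illustration, not the treecomp implementation) computes a rooted variant by comparing clade sets.

    def clades(tree):
        """Collect the leaf set under every internal node of a nested-tuple
        tree. Leaves are strings; internal nodes are tuples of subtrees."""
        found = set()
        def walk(node):
            if isinstance(node, str):
                return frozenset([node])
            leaves = frozenset().union(*(walk(child) for child in node))
            found.add(leaves)
            return leaves
        walk(tree)
        return found

    def robinson_foulds(t1, t2):
        """Symmetric difference of clade sets -- a simple rooted variant of
        the Robinson-Foulds distance between two trees on the same leaves."""
        return len(clades(t1) ^ clades(t2))

    a = ((("human", "chimp"), "gorilla"), "orangutan")
    b = (("human", ("chimp", "gorilla")), "orangutan")
    print(robinson_foulds(a, b))  # 2: each tree has one clade the other lacks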
 
Tuesday, November 11, 2003, 3 p.m. - Science Center 348
Allan Borodin
Department of Computer Science, University of Toronto
What is a Greedy Algorithm?

Abstract:
Many undergraduate algorithms courses and texts often organize the material in terms of "algorithmic paradigms", such as greedy algorithms, backtracking, dynamic programming, local search, primal-dual, etc. We seem to be able to intuitively describe what we have in mind when we discuss these classes of algorithms but rarely (if ever) do we (or any of the standard texts) attempt to define precisely what we mean by greedy, dynamic programming, etc.

In a paper co-authored with Nielsen and Rackoff, we proposed a definition for greedy-like (approximation) algorithms, called priority algorithms. We did so in the context of classical scheduling problems. Angelopoulos and Borodin then applied the priority algorithm framework to the facility location and set cover problems. Impagliazzo and Davis have recently applied the framework to a number of traditional graph theoretic optimization problems.

Similar to online algorithms, we want to derive limitations on the (approximation) power of priority algorithms based only on the structure of the algorithm (allowing unbounded time complexity). Hopefully such a study will also lead to new algorithms. We motivate the priority algorithm framework by discussing some well known greedy algorithms and the corresponding lower bounds provable within this framework. We also discuss some extensions of the model such as the "one-pass" framework of Erlebach and Spieksma.
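To ground the definition (a hedged sketch in the spirit of the framework, not taken from the paper): a fixed-priority algorithm orders the input items once and then makes one irrevocable accept/reject decision per item. Greedy interval scheduling fits this mold.

    def fixed_priority_interval_scheduling(intervals):
        """A fixed-priority 'greedy-like' algorithm in the priority-algorithm
        mold: order all items up front (by finish time), then make a single
        irrevocable accept/reject decision per item."""
        accepted = []
        last_finish = float("-inf")
        for start, finish in sorted(intervals, key=lambda iv: iv[1]):
            if start >= last_finish:   # irrevocable accept
                accepted.append((start, finish))
                last_finish = finish
            # else: irrevocable reject -- the decision is never revisited
        return accepted

    jobs = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 8), (5, 9), (6, 10), (8, 11)]
    print(fixed_priority_interval_scheduling(jobs))  # [(1, 4), (5, 7), (8, 11)]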

 
Friday, November 7, 2003, 3 p.m. - Snell Hall (hill) 213
Robert Vrablik
Senior Strategist/Solution Architect, Enterprise Systems Grid Computing, IBM
The Era of Grid Computing in an "On-Demand" Environment

Abstract:
The hype of the Internet boom has faded, but the challenges of how to drive business value from technology investments are more critical now than ever before. Products will be built only when ordered. Orders will automatically trigger a ripple effect across your business, from billing to manufacturing to shipping. Whatever you deliver, whether it's products to consumers, information to employees, or parts to a factory halfway around the world, will need to be coordinated through a technology backbone that makes the most of tightly integrated, streamlined critical business processes.

Enter the new world of "e-business on demand": a place where it's not just being on the Net but being a part of it, with the ability to respond with speed to any customer demand, market opportunity, or competitive threat. To meet this challenge, Grid Computing has emerged as an enabling technology that forms the foundation of the "on-demand" computing environment. Grid Computing is a new, service-oriented architecture based on the Open Grid Services Architecture (OGSA) that embraces heterogeneous systems and involves distributed computing, over the Internet or any private network, via open standards. It is designed to help businesses efficiently consolidate, pool, share, and manage IT resources.

Come learn about how Grid Computing helps:

  • manage resources across distributed heterogeneous platforms
  • deliver seamless Quality of Service (QoS) across integrated Grid resources
  • define open, published interfaces
  • integrate with existing IT resources
 
Thursday, October 16, 2003, 5 p.m. - Science Center 360
(Co-Sponsored with Pi Mu Epsilon Math Honorary Society)
Cristina Ruiz
Department of Mathematics, Binghamton University
From Matrices to Oriented Matroids
Abstract:
Oriented matroids are a combinatorial abstraction of linear algebra. They are a natural concept with applications in areas such as topology, combinatorics, and algebraic geometry, to name a few. In this talk I will introduce oriented matroids by using points, vectors and lines on the plane. As an example we will use oriented matroids to read off the faces of a high-dimensional polyhedron. This talk is accessible to undergraduates. However, some knowledge of linear algebra is required. (Students interested in pursuing a Ph.D. are particularly encouraged to talk with the speaker.)
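As a tiny worked example of the abstraction (mine, not the speaker's): three vectors in the plane already determine a signed circuit, and the oriented matroid remembers only the sign pattern.

    % Three vectors in R^2 and the sign pattern of their unique dependency:
    \[
    v_1 = (1,0),\quad v_2 = (0,1),\quad v_3 = (1,1),
    \qquad v_1 + v_2 - v_3 = 0,
    \]
    % so the oriented matroid records the signed circuit (+, +, -);
    % the combinatorics keeps the signs and forgets the coordinates.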
 
Thursday, October 9, 2003, 4 p.m. - SC 354
(Co-Sponsored by Pi Mu Epsilon Math Honorary Society)
Jeffrey Weeks
MacArthur Fellow, Canton, NY
The Shape of Space
Abstract:
When we look out on a clear night, the universe seems infinite. Yet this infinity might be an illusion. During the first half of the presentation, computer games will introduce the concept of a "multiconnected universe". Interactive 3D graphics will then take the viewer on a tour of several possible shapes for space. Finally, we'll see how satellite data released in February 2003 are beginning to reveal the true shape of our universe. The only prerequisites for this talk are curiosity and imagination. For first-year students on up.
 
Thursday, October 2, 2003, 4 p.m. - Science Center 346
(Co-Sponsored with Pi Mu Epsilon Math Honorary Society)
John Derrico
Eastman Kodak
Spline Interpolation and Modeling at Eastman Kodak
Abstract:
Splines have many uses in industry, ranging from process control contexts to research and development. While splines are often seen as functions of one variable, there are also uses for higher-dimensional splines. (You may even be making use of 3D splines and never know it.) In this talk I'll sample some of these uses, and I'll also suggest why we have used a particular class of spline model as opposed to some other model form.
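For readers who want to experiment, here is a minimal cubic-spline interpolation example using SciPy (an illustration only; the talk concerns Kodak's in-house spline models, not this library).

    import numpy as np
    from scipy.interpolate import CubicSpline

    # Interpolate samples of a smooth response curve with a cubic spline.
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.sin(x)

    spline = CubicSpline(x, y)

    # Evaluate the spline (and its derivative) between the knots.
    xs = np.linspace(0.0, 5.0, 11)
    print(spline(xs))     # interpolated values
    print(spline(xs, 1))  # first derivative, handy in process-control settings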
 
Thursday, September 25, 2003, 4 p.m. - SC 356
Joel Foisy
Department of Mathematics, SUNY Potsdam
Knots and Links in Spatially Embedded Graphs
Abstract:
A graph is said to be intrinsically knotted if it contains a knotted cycle in every (tame) spatial embedding. Similarly, a graph is said to be intrinsically 3-linked if it contains a non-split 3 component link in every spatial embedding. It is an open question to determine all "minor-minimal" intrinsically knotted (or intrinsically 3-linked) graphs. We will discuss recent results concerning this open question. Some of the results discussed represent work done jointly with Garry Bowlin. No prior knowledge of knot theory will be assumed.
 
Monday, August 18, 2003, 11:30 a.m. - Science Center 354
Vineet S. Raghavan
Department of Mathematics and Computer Science, Clarkson University
Medium Access Control Protocols and Quality of Service in Ad Hoc Wireless Networks
Abstract:
Ad hoc wireless networks are a relatively new field, and as they gain popularity, various new applications are bound to use them for several reasons. In a network, the medium access control (MAC) protocols are responsible for enabling communication between two nodes. In a wireless network, however, these protocols gain even more importance, since the communication channel is inherently prone to errors and to unique problems such as those caused by hidden terminals and signal fading effects. Although a great deal of research has been conducted on MAC protocols, the various issues involved have mostly been presented in isolation from each other. We therefore attempt to present a comprehensive review of several schemes, integrating the various related issues and challenges with a view to providing a big-picture outlook on this vast area. We present a classification of MAC protocols based on their operating principles and underlying features. We look at the aspect of quality of service (QoS) in the context of ad hoc wireless networks, and attempt to correlate the challenges involved with suitable approaches presented in the literature. We also consider the unique application of sensor networks, and review some solutions that have been presented in this regard. In conclusion, we present a brief summary of key ideas and a general direction for future work.
 
Wednesday, July 23, 2003, 9 a.m. - Science Center 356
Mehul Vora
Department of Mathematics and Computer Science, Clarkson University
Dynamic Management of QoS with Priority (DMQP): A Dynamic Mechanism for Multimedia Multicasting

Abstract:
Multimedia on the Internet has become not only a reality but also a necessity in the Web's appeal to millions of users, and the lack of effective end-to-end quality of service (QoS) in such applications is now a cause for significant concern. In addition, the heterogeneity of networks and end systems makes it difficult to multicast Internet multimedia in an efficient and flexible way. The interaction between multicasting and the delivery of multiple time-correlated continuous media streams with real-time delay requirements poses various new and interesting problems in research on communication protocols and architectures.

We propose a novel resource reservation mechanism, Dynamic Management of QoS with Priority (DMQP), for multimedia multicasting over the Internet. DMQP is a resource-reservation-based approach to providing QoS guarantees for real-time data traffic, in which an application requests QoS by specifying a range of values and the network tries to provide service at a specified point within this range, depending on the application's service class. DMQP achieves service differentiation by classifying end-users into two classes: normal users and prioritized users.
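As a hedged sketch of the range-based idea (my illustration; the abstract does not give DMQP's actual admission rule): a request carries a QoS range and a class, and the grant point within the range depends on the class.

    def admit(request, available_bandwidth):
        """Hedged sketch of range-based admission in the spirit of DMQP:
        a request is (min_rate, max_rate, user_class); prioritized users are
        steered toward the top of their range, normal users toward the bottom,
        and a request is rejected if even its minimum cannot be met."""
        min_rate, max_rate, user_class = request
        if available_bandwidth < min_rate:
            return None  # reject: the minimum requirement cannot be met
        target = max_rate if user_class == "prioritized" else min_rate
        return min(target, available_bandwidth)

    print(admit((2, 8, "prioritized"), 10))  # 8: granted at top of range
    print(admit((2, 8, "normal"), 10))       # 2: granted at bottom of range
    print(admit((2, 8, "normal"), 1))        # None: rejected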

 
Wednesday, June 18, 2003, 2 p.m. - Science Center 307
Barbara Morawska
Department of Mathematics and Computer Science, Clarkson University
Goal-Directed E-Unification - Completeness, Decidability and Complexity

Abstract:
In the thesis I explore a new approach to solving the E-unification problem, i.e., the problem of solving equations modulo an equational theory presented as a set of identities.

This approach is goal-directed, because it searches for a solution for an equation by analyzing the structure of the goal (the equation to be solved), as opposed to procedures based on a completion of an equational theory. It is top-down, since it analyzes the goal and applies inference rules to the goal only with respect to the top symbols in the terms in the goal equation.

Such a goal-directed, top-down procedure is shown to be a complete, semi-decision procedure.

Next I will examine different ways of putting syntactic restrictions on a presentation of an equational theory, so that this semi-decision procedure becomes a decision procedure (it always terminates).

Hence we are able to prove complexity results for E-unification in several classes of equational theories.

 
Thursday, June 12, 2003, 10 a.m. - Science Center 307
Shuangtiao Huang
Department of Mathematics and Computer Science, Clarkson University
Distributed Network Management System with CORBA
Abstract:
Network management systems (NMS) play an important role in today's telecommunication networks. Telecommunication network environments have become more complicated, heterogeneous devices are provided by different vendors, and the number of network devices is rapidly increasing. These trends require an NMS to provide standard interfaces for sharing information and to be scalable enough to manage complex networks. This paper presents a method for designing an open, standards-based, distributed NMS with the Common Object Request Broker Architecture (CORBA). CORBA facilitates inter-communication among different NMS, and its inherent distribution capability makes it possible to manage large-scale networks. For a large number of devices, a single server is not enough to host the NMS, so I propose an efficient algorithm to divide the managed system into partitions and assign each partition to a server, so that every server holds an equivalent workload. At the same time, the method tries to reduce server-to-server communication. The basic approach is a heuristic optimization strategy.
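A hedged sketch of a greedy heuristic of the general kind described, balancing server load while preferring to co-locate communicating devices; the abstract does not spell out the actual algorithm, so this is illustrative only.

    def greedy_partition(devices, links, n_servers):
        """Assign each device to a server, balancing counts while preferring
        the server that already holds most of the device's neighbors (to cut
        down on server-to-server communication). Illustrative heuristic only."""
        neighbors = {d: set() for d in devices}
        for u, v in links:
            neighbors[u].add(v)
            neighbors[v].add(u)
        # Place highly connected devices first.
        order = sorted(devices, key=lambda d: -len(neighbors[d]))
        parts = [set() for _ in range(n_servers)]
        cap = -(-len(devices) // n_servers)  # ceiling: max devices per server
        for d in order:
            best = max((p for p in parts if len(p) < cap),
                       key=lambda p: len(p & neighbors[d]))
            best.add(d)
        return parts

    devs = ["r1", "r2", "r3", "sw1", "sw2", "sw3"]
    links = [("r1", "sw1"), ("r1", "sw2"), ("r2", "sw2"), ("r3", "sw3")]
    print(greedy_partition(devs, links, 2))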
 
Friday, May 23, 2003, 10:30 a.m. - Science Center 307
Jingyi Zou
Department of Mathematics and Computer Science, Clarkson University
Migrating Between Relational Databases and XML Documents
Abstract:
XML (eXtensible Markup Language) is fast emerging as the dominant standard for representing and exchanging information over the Internet. However, for the foreseeable future, a significant amount of business data will continue to be stored in the relational database system. It is necessary to be able to extract relational data from XML documents and store it in a database, as well as to generate XML documents from known databases. We build two frameworks XML Generator and X2R to solve the following problems: 1) Generating XML document and DTD from relational schema, 2) Mapping relational schema from an XML document with DTD.