Security Policy Specification for Software-Defined Networks
Software-Defined Networking (SDN) considerably simplifies the design and deployment of security applications for large networks. However, each security application has its own view of security policies, and a significant challenge for network administrators is to implement a consistent and accurate security policy that satisfies the policy requirements of these different applications.
Recent advances in artificial intelligence (AI) have produced powerful problem-solving techniques that are capable of dealing with such inconsistencies through the provision of constraints and preferences. One such technique is the Answer Set Programming (ASP) paradigm.
In this research, we aim to design and develop a policy specification and verification language for SDN controllers, and to apply recent AI problem-solving techniques to determine the best policy that meets a given specification. To this end, the research will deliver the following outcomes: (i) a high-level declarative policy specification language; (ii) a translation of specifications in this language into an Answer Set Program; and (iii) a sound and complete method for translating answer sets (i.e., determined policies) into low-level OpenFlow messages.
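To make outcomes (ii) and (iii) concrete, the Python sketch below is a minimal illustration, not the project's language or its translation: a toy policy is encoded directly as an Answer Set Program and solved with the clingo Python API, and each allow/3 atom in the resulting answer set is printed as a mock OpenFlow flow-mod message. The predicates flow/3, deny/3, and allow/3 are hypothetical placeholders.

    from clingo import Control  # pip install clingo

    # Hypothetical toy encoding: flow/3 facts describe observed traffic,
    # deny/3 states administrator prohibitions, and allow/3 is derived by
    # default unless a matching deny exists. The predicate names are
    # placeholders for illustration, not part of the project's language.
    POLICY = """
    flow(h1, h2, 80).
    flow(h1, h2, 22).
    deny(h1, h2, 22).
    allow(S, D, P) :- flow(S, D, P), not deny(S, D, P).
    #show allow/3.
    """

    def emit_openflow(model):
        # Stand-in for step (iii): map each allow/3 atom in the answer set
        # to a mock OpenFlow-style flow-mod message.
        for atom in model.symbols(shown=True):
            src, dst, port = atom.arguments
            print(f"OFPT_FLOW_MOD: match(nw_src={src}, nw_dst={dst}, "
                  f"tp_dst={port}) actions=output:NORMAL")

    ctl = Control()
    ctl.add("base", [], POLICY)
    ctl.ground([("base", [])])
    ctl.solve(on_model=emit_openflow)

Running this prints a single flow-mod for allow(h1, h2, 80); the SSH flow is suppressed by the deny fact via default negation, which is the kind of conflict resolution that constraints and preferences in ASP make declarative.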
Provenance-Aware Security Risk Analysis for Hosts and Network Flows
A significant challenge in monitoring large enterprise networks is the complexity of extracting risky network flows (the flows most likely to be malicious) from a large volume of flows. Identifying the risky flows makes taking effective countermeasures feasible. The security risk level of a network flow can be evaluated from both the risk level of the flow's content and the amount of risk propagated to it by related flows. This recursive assessment indicates that addressing the flow risk assessment problem requires a comprehensive solution that considers the full risk interdependency among the network flows as well as the hosts that initiate and are targeted by those flows.
In this project, we introduce two novel concepts. The first is an interdependency relationship among the risk scores of a network flow and its source and destination hosts: on the one hand, the risk score of a host depends on the risky flows that the host initiates and is targeted by; on the other hand, the risk score of a flow depends on the risk scores of its source and destination hosts. The second concept, which we call flow provenance, represents risk propagation among network flows and takes into account the likelihood that a particular flow is caused by other flows. Based on these two concepts, we develop an iterative algorithm for computing the risk levels of hosts and network flows. We give a rigorous proof that our algorithm rapidly converges to unique risk estimates, and we provide an extensive empirical evaluation using two real-world datasets.
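As a rough illustration of such an iterative computation, the Python sketch below captures only the mutual-reinforcement idea, not the algorithm analysed in the paper: host and flow risk scores are updated alternately until the flow scores stabilise. The averaging updates, the damping factor alpha, and the provenance matrix prov are illustrative assumptions.

    import numpy as np

    def risk_scores(flows, content_risk, prov, alpha=0.5, tol=1e-6, max_iter=100):
        # flows        : list of (src_host, dst_host) index pairs, one per flow
        # content_risk : base risk of each flow's content, in [0, 1]
        # prov         : prov[i, j] = assumed likelihood that flow j caused
        #                flow i, with each row summing to at most 1
        # alpha        : illustrative damping between content and propagated risk
        n_hosts = 1 + max(max(s, d) for s, d in flows)
        content_risk = np.asarray(content_risk, dtype=float)
        flow_risk = content_risk.copy()
        host_risk = np.zeros(n_hosts)

        for _ in range(max_iter):
            prev = flow_risk.copy()
            # Host risk: mean risk of the flows the host initiates
            # or is targeted by.
            totals, counts = np.zeros(n_hosts), np.zeros(n_hosts)
            for i, (s, d) in enumerate(flows):
                for h in (s, d):
                    totals[h] += prev[i]
                    counts[h] += 1
            host_risk = totals / np.maximum(counts, 1)
            # Flow risk: its own content risk, damped with endpoint risk and
            # the risk propagated from likely causing flows (flow provenance).
            for i, (s, d) in enumerate(flows):
                propagated = float(prov[i] @ prev)
                flow_risk[i] = alpha * content_risk[i] + (1 - alpha) * (
                    host_risk[s] + host_risk[d] + propagated) / 3.0
            if np.abs(flow_risk - prev).max() < tol:
                break
        return host_risk, flow_risk

With alpha strictly between 0 and 1, each update is a damped average of bounded quantities, which is one simple way to obtain the fixed-point behaviour; the paper's convergence proof applies to its own update rules, not to this sketch.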
Trust Based Data Aggregation in the Presence of Faults and Attacks
Today, many real-life distributed systems, such as wireless sensor networks (WSNs), participatory sensing networks, and e-commerce systems, collect observed data from many different, possibly independent sources. Since some of these systems are unattended, the sources are subject to faults and node compromise attacks that allow an attacker to inject false data. Trust and reputation systems are widely employed in distributed systems to support decision making by providing the trustworthiness of sources as well as robust data aggregation. Although fault detection and tolerance have been widely studied in trust-based data aggregation systems, accounting for malicious behaviour such as collusion attacks remains a challenging problem.
In this project, we design and develop data aggregation schemes that leverage a novel trust computation method that is robust in the presence of faults and malicious attacks. To this end, we focus on iterative filtering algorithms, because such algorithms simultaneously aggregate data from multiple sources and provide a trust assessment of these sources, usually in the form of weight factors assigned to the data provided by each source.
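For illustration, the Python sketch below implements one common form of iterative filtering (a reciprocal-discriminant variant from the general IF literature, not our robust scheme): it alternates between computing a trust-weighted aggregate and re-weighting each source by its distance from that aggregate.

    import numpy as np

    def iterative_filtering(readings, tol=1e-6, max_iter=100, eps=1e-9):
        # readings : array of shape (n_sources, n_items); each row is one
        #            source's reported value per item (e.g. sensor readings)
        # Returns the aggregate estimate per item and a trust weight per source.
        readings = np.asarray(readings, dtype=float)
        n_sources = readings.shape[0]
        weights = np.full(n_sources, 1.0 / n_sources)
        for _ in range(max_iter):
            # Aggregation step: trust-weighted mean of all sources.
            estimate = weights @ readings
            # Trust step: a source's weight is inversely proportional to its
            # distance from the current estimate (a reciprocal discriminant
            # function; other discriminants appear in the literature).
            dist = np.mean((readings - estimate) ** 2, axis=1)
            new_weights = 1.0 / (dist + eps)
            new_weights /= new_weights.sum()
            if np.abs(new_weights - weights).max() < tol:
                weights = new_weights
                break
            weights = new_weights
        return weights @ readings, weights

Plain variants such as this one can be skewed by colluding sources that report mutually consistent false values, since their small mutual distances earn them large weights; this is exactly the kind of attack behaviour that motivates the robust trust computation developed in this project.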