ABSTRACT - Image processing and storage are enormously resource intensive tasks that can benefit from cloud computing. Lack of robust mechanisms for controlling the privacy of the data outsourced to clouds is one of the concerns in using clouds for image processing. This paper presents a new image encoding scheme that enhances the privacy of the images outsourced to the clouds while allowing the clouds to perform certain forms of computations on the images. Our encoding scheme uses a chaotic map to transform the image after it is masked with an arbitrarily chosen ambient image. A simplified prototype of the image processing system was implemented and the experimental results are presented in this paper. Our prototype shows the feasibility of performing a class of image processing tasks on images encoded for privacy.
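The two encoding steps named above (masking with an ambient image, then a chaotic-map transform) can be sketched as follows. This is a minimal illustration only: the logistic map, the modular-addition mask, and the key values `r` and `x0` are assumptions, not the paper's exact construction.

```python
# Sketch: mask pixels with an ambient image, then scramble pixel order with
# a permutation driven by the logistic map x_{n+1} = r * x_n * (1 - x_n).
# The logistic map, modular-addition masking, and the key (r, x0) are
# illustrative assumptions, not the paper's exact scheme.

def logistic_permutation(n, r=3.99, x0=0.7):
    """Derive a key-dependent pixel permutation from a logistic-map orbit."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    # Sorting the chaotic sequence yields a permutation of pixel indices.
    return sorted(range(n), key=lambda i: xs[i])

def encode(image, ambient, r=3.99, x0=0.7):
    """Mask with the ambient image (mod 256), then permute pixel order."""
    masked = [(p + a) % 256 for p, a in zip(image, ambient)]
    perm = logistic_permutation(len(image), r, x0)
    return [masked[i] for i in perm]

def decode(encoded, ambient, r=3.99, x0=0.7):
    """Invert the permutation, then remove the ambient-image mask."""
    perm = logistic_permutation(len(encoded), r, x0)
    masked = [0] * len(encoded)
    for out_pos, src in enumerate(perm):
        masked[src] = encoded[out_pos]
    return [(m - a) % 256 for m, a in zip(masked, ambient)]
```

Only a party holding both the ambient image and the map key can invert the encoding, which is the property the scheme relies on.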
ABSTRACT - Cloud computing is ideally suited for hosting applications with large images because clouds can perform high performance computations and communications at very low upfront cost. While the benefits of clouds are compelling for image processing applications, security and privacy issues in using clouds are creating major problems. Because clouds are not passive storage devices, it is important to avoid encryption schemes that secure the data while preventing efficient processing of the data by the clouds. Therefore, this paper investigates approaches where segmentation is used to split the image data across multiple clouds. We show that segmentation, although a simple idea, can bring several benefits when deployed in cloud computing. A prototype of the system has been developed and initial performance results are reported here.
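A minimal sketch of one possible segmentation strategy, interleaving image rows across k clouds so that no single cloud holds a recognizable picture. Round-robin row assignment is an assumption for illustration; the abstract does not specify the paper's exact scheme.

```python
# Sketch: split an image (list of rows) across k clouds by interleaving
# rows, so no single cloud alone holds the whole picture.  Round-robin
# assignment is an illustrative choice, not the paper's exact scheme.

def segment(rows, k):
    """Return k segments; segment i holds rows i, i+k, i+2k, ..."""
    return [rows[i::k] for i in range(k)]

def reassemble(segments):
    """Inverse of segment(): interleave the per-cloud row lists."""
    k = len(segments)
    n = sum(len(s) for s in segments)
    rows = [None] * n
    for i, seg in enumerate(segments):
        for j, row in enumerate(seg):
            rows[i + j * k] = row
    return rows
```

Each segment can still be processed independently by its host cloud (e.g., per-row filtering), which is the efficiency argument for segmentation over encryption.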
ABSTRACT - The amount of data stored in social media sites is humongous: it is very large in volume and highly diverse in variety. Researchers have already started using this information in various information security applications such as censorship resistance. When image data from social media sites are used in computational applications for hiding or obfuscating data, it is important that the particular data used in the encoding process can be retrieved later to recover the original data. Therefore, the availability of the image data is an important concern. In this paper, we investigated the availability of images on Flickr. We examined nearly one million images hosted on Flickr and measured their availability over a two-year period. We used the EXIF parameters normally generated by the cameras to further categorize the availability measurements.
ABSTRACT - Template matching is a fundamental building block for image search operations. In this paper, we present a scheme that allows privacy-preserving template matching operations on images that are stored on clouds. Our scheme uses “ambient image data” (images that are found in social media sites such as Flickr) as well as a privacy-preserving encoding technique to encode a given image before it is stored in a cloud. We show a particular encoding strategy that allows template matching to take place in the cloud while not revealing any information about the image or queried template to the cloud. A simplified prototype of the image processing system was implemented and the experimental results are presented in this paper. Our prototype shows the feasibility of performing privacy-aware template matching on encoded images.
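As background, the building block itself can be sketched with plain (unencoded) sum-of-squared-differences matching on a 1-D signal; the privacy-preserving scheme in the paper performs an analogous comparison on encoded data. SSD and the toy signals are illustrative assumptions.

```python
# Sketch: plain template matching by sum of squared differences (SSD)
# over a 1-D signal.  This is the unencoded building block the abstract
# refers to; the paper's scheme runs a comparable computation on
# privacy-encoded images instead.

def best_match(signal, template):
    """Return the offset in signal that minimizes SSD with template."""
    best_off, best_ssd = 0, float("inf")
    for off in range(len(signal) - len(template) + 1):
        ssd = sum((signal[off + i] - t) ** 2 for i, t in enumerate(template))
        if ssd < best_ssd:
            best_off, best_ssd = off, ssd
    return best_off
```

For 2-D images the same scan runs over row and column offsets; the cost structure is identical.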
ABSTRACT - Recently, searching over encrypted data has become a hot research topic. The basic idea of privacy-enhanced search is to generate an intermediate representation of the original text data and use it to perform the search. Prior research has used hash maps, tries, and other data structures to create the intermediate representation. We use a texture scheme for representing the characters. To enhance privacy, the textures are split and noise is added to each of the portions such that all portions of a texture need to be collected to recover the original data in an unambiguous manner. One of the key advantages of our scheme is the ability to implement most of the search schemes (e.g., wildcard searches) that are performed with plain-text searches. This paper fully describes our data representation scheme and presents the experimental data we gathered by implementing the scheme in a server cluster.
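The split-and-add-noise step can be sketched as additive sharing modulo 256, one plausible realization of "all portions are needed to recover the original"; this is an assumption for illustration, as the abstract does not detail the paper's texture construction.

```python
# Sketch: split a texture (byte sequence) into n noisy portions such that
# all n portions are required to recover the original.  Additive sharing
# mod 256 is an illustrative realization of "split and add noise", not
# the paper's exact construction.
import random

def split(data, n, rng=random.Random(42)):
    """Split bytes into n portions (n >= 2); n-1 are pure noise."""
    shares = [[rng.randrange(256) for _ in data] for _ in range(n - 1)]
    # The last portion is chosen so that all portions sum back to data.
    last = [(b - sum(col)) % 256 for b, col in zip(data, zip(*shares))]
    return shares + [last]

def recover(shares):
    """Adding all portions modulo 256 recovers the original bytes."""
    return [sum(col) % 256 for col in zip(*shares)]
```

Any n-1 portions are statistically independent of the data, which is what makes partial leakage harmless.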
ABSTRACT - Integrity is an important concern in any knowledge management system. This paper discusses ongoing research that aims to develop a community-centric integrity management system for a large-scale knowledge management system that works on the Internet.
ABSTRACT - Social networks are emerging as the arteries for information flow on the web. In a previous paper, we introduced a new community-centric approach for information flow control for the social web. This paper introduces two key improvements to the previously introduced mechanisms. The first improvement is the ability to model user heterogeneity with regard to information relaying on the social web. The second improvement is the ability to reduce information flow to a specific user (i.e., block information flowing to a specific user). We evaluate the algorithmic ideas using traces of interactions obtained from Facebook and Flickr. Our evaluations indicate that the algorithmic ideas developed by us are useful in controlling the information flow in the social web.
ABSTRACT - This paper investigates how a hybrid hosting platform made from dedicated and opportunistic resources can be used to host data stream processing applications. We propose a system model for the hybrid hosting platform and develop resource management algorithms that are necessary to coordinate the allocation of the two classes of resources to the stream processing tasks. We used extensive simulations driven by traces styled from realistic system observations for evaluating the proposed resource allocation heuristics. The results show that with proper management, the synergy of dedicated and opportunistic resources yields considerably higher service throughput and thus, higher return on investment over expensive dedicated resources.
ABSTRACT - The scale and cost efficiencies provided by clouds make them ideal platforms for handling data intensive applications in a variety of sectors including e-health, e-commerce, and surveillance applications. This chapter investigates the privacy and security issues of large data sets that are stored and processed in cloud computing systems. If the experience gained from web-based transaction management systems is any indication, the safety of data held by cloud computing systems is not impregnable. Various factors contribute to data insecurity, including the data handling policies adopted by cloud operators, the practices cloud providers follow in recycling used storage elements, and the characteristics of the data handled by the clouds.
ABSTRACT - The dominant role of social networking in the web is turning human relations into conduits for information flow. This means the way information spreads on the web is determined to a large extent by human decisions. Consequently, information security rests on the quality of decisions made by the users. Moreover, information spreading patterns depend on the collective decisions of interconnected sets of users. In this paper, we present a novel community-centric confidentiality control mechanism for information flow management on the web. We use a Monte Carlo based algorithm to determine the potential spread of a shared data object and inform the user of the risk of information leakage associated with the different sharing decisions she can make in a social network. Using the information provided by our algorithm, a user can curtail sharing decisions to reduce the risk of information leakage. Alternatively, our algorithm can provide input to a fully- or semi-automatic sharing decision maker that determines the outcomes of sharing requests. We used datasets from Facebook and Flickr to evaluate the performance of the proposed algorithms under different sharing conditions. The simulation results indicate that information sharing can be effectively controlled by our algorithm.
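The Monte Carlo estimate of potential spread can be sketched as repeated simulation of relay decisions on a probabilistic social graph. The graph shape, relay probabilities, and trial count below are illustrative assumptions, not the paper's dataset or parameters.

```python
# Sketch: Monte Carlo estimate of how far a shared object may spread.
# Each directed edge (u, v) carries the probability that u relays the
# object to v.  Graph and probabilities are illustrative assumptions.
import random

def estimate_spread(edges, seed_user, trials=2000, rng=random.Random(1)):
    """Average number of users reached, over random relay outcomes."""
    total = 0
    for _ in range(trials):
        reached = {seed_user}
        frontier = [seed_user]
        while frontier:
            u = frontier.pop()
            for v, p in edges.get(u, []):
                if v not in reached and rng.random() < p:
                    reached.add(v)
                    frontier.append(v)
        total += len(reached)
    return total / trials
```

A user (or an automatic decision maker) can compare this estimate against a tolerance before granting a sharing request.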
ABSTRACT - In this paper, we present a community-centric access control mechanism called myCommunity for controlling information sharing in online social networks (OSNs). We develop heuristic ideas for efficiently computing myCommunities in OSNs and evaluate them using traces from actual OSNs. The experimental results indicate that myCommunity is a feasible idea and that simple estimation strategies can be effective for obtaining an initial value. In ongoing work, we are extending myCommunity to incorporate the dynamic nature of trust within OSNs.
ABSTRACT - We present an accountability framework for the Internet which ties a user's action to her identity on an online social network. The framework is optional in that users do not need to be accountable at all times, but various web services can force accountability on the part of their users by only allowing accountable users access. Our design is general enough that higher level applications can place additional policies/restrictions on the basic accountability provided. In this paper, we introduce the design, discuss how various applications can be mapped onto our framework, and provide performance numbers from an experimental prototype.
ABSTRACT - One of the key challenges in digital media distribution is digital rights management (DRM). DRM is a classic example of a problem that arises due to incentive misalignments that exist among transacting parties in a network. Because of misaligned incentives, users have the inclination to break DRM if it becomes technically possible. This paper draws upon work in group lending and social psychology to develop a novel social media distribution scheme. The key objective is to fix the incentive misalignment that exists between users and media distributors. Using game theoretic modelling, certain conditions that should hold for social media distribution to work are deduced. Tech Report (PDF)
ABSTRACT - Phishing is a major problem on the Internet. The cornerstone of anti-phishing is detecting whether a given site is good or bad. Most of the approaches for anti-phishing rely on looking up centrally maintained repositories. In this paper, we present a decentralized framework called CASTLE that allows a collaborative approach for anti-phishing services. We implemented a prototype and then tested it on PlanetLab. The experiments indicate the viability of our framework.
ABSTRACT - In this paper, we investigate the problem of online data sharing on social networks from a game theoretic framework. We introduce blacklisting as a trigger strategy to elicit cooperation among the players of a noncooperative sharing game. Using game theoretic analysis, we show the existence of an equilibrium in which the sharing conditions are honored when the involved players employ blacklisting strategies. Full Paper (PDF)
ABSTRACT - The rapid evolution of the Internet has forced the use of Network Address Translation (NAT) to help slow the depletion of publicly available IPv4 addresses. While providing additional address space as well as privacy and security to its users, NAT eliminates the ability to establish incoming connections to devices within a private network. To address this issue, we propose combining social network topologies with the traditional NAT architecture to better integrate peer-to-peer communication through NATed networks. Called SocialNAT, this socially enhanced NAT allows incoming connections from trusted parties, resolving one of the central criticisms of the NAT approach. Full Paper (PDF)
ABSTRACT - Usernames and passwords that rely on the “something you know” factor are still the mainstay of authentication on the web. In a mobile web, where users interact using mobile devices (mainly cellular phones), the username and password approach to authentication is not ideal. We need implicit approaches that require very little or no input from the users. This paper presents a new approach that uses the “someone you know” factor for authentication using mobile phones. We have implemented the authentication protocol on Nokia N95 phones. We present some initial performance results from the implementation to demonstrate the viability of the approach in terms of time required to run the protocol and battery life. Full Paper (PDF)
ABSTRACT - This paper describes how social factors can be incorporated into digital rights management. Specifically, we outline a design for a social distribution network that is built by agents that have incentive to discourage piracy. We pose the social distribution network formation as a game theoretic problem and identify the games played by the two types of agents. Full Paper (PDF)
ABSTRACT - GINI (GINI Is Not Internet) is an open-source toolkit for creating virtual micro Internets for teaching and learning computer networking. It provides lightweight virtual elements for machines, routers, switches, and wireless devices that can be interconnected to create virtual networks. The virtual elements run as unprivileged user-level processes. All processes implementing a virtual network can run within a single machine or can be distributed across a set of machines. GINI provides a user-friendly GUI-based tool for designing, starting, inspecting, and stopping virtual network topologies. This paper describes the different components of GINI, briefly discusses ways of using the toolkit in a computer networking course, and reports on user feedback on an early (incomplete) version of the toolkit. Full Paper (PDF)
ABSTRACT - Social interaction is already a proven component of informal identification: humans are naturally skilled at recognizing other people and are unlikely to be duped by impersonation. Based on this premise, a fourth-factor, someone-you-know, has already been proposed as an emergency authentication method. This paper explores leveraging a user’s preexisting social actions as a primary authentication tool, one that operates transparently and automatically without explicit user guidance. Specifically, we describe the feasibility of capturing a user’s local social context using short range wireless devices and evaluate the uniqueness of that context in comparison to that of possible aggressors. Full Paper (PDF)
ABSTRACT - The issue of certificate revocation in mobile ad hoc networks (MANETs), where there is no online access to trusted authorities, is a challenging problem. In wired network environments, when certificates are to be revoked, certificate authorities (CAs) add the information regarding the certificates in question to certificate revocation lists (CRLs) and post the CRLs on accessible repositories or distribute them to relevant entities. In purely ad hoc networks, there is typically no access to centralized repositories or trusted authorities; therefore, the conventional method of certificate revocation is not applicable. In this paper, we present a decentralized certificate revocation scheme that allows the nodes within a MANET to revoke the certificates of malicious entities. The scheme is fully contained and it does not rely on inputs from centralized or external entities. Full Paper (PDF)
ABSTRACT - This paper presents a case for exploiting the synergy of dedicated and opportunistic network resources in a distributed hosting platform for data stream processing applications. Our previous studies have demonstrated the benefits of combining dedicated reliable resources with opportunistic resources in case of high-throughput computing applications, where timely allocation of the processing units is the primary concern. Since distributed stream processing applications demand large volumes of data transmission between the processing sites at a consistent rate, adequate control over the network resources is important to assure a steady flow of processing. In this paper, we propose a system model for the hybrid hosting platform where stream processing servers installed at distributed sites are interconnected with a combination of dedicated links and the public Internet. Decentralized algorithms have been developed for allocating the two classes of network resources among the competing tasks, with the objective of higher task throughput and better utilization of expensive dedicated resources. Results from an extensive simulation study show that with proper management, systems exploiting the synergy of dedicated and opportunistic resources yield considerably higher task throughput and thus, higher return on investment over systems solely using expensive dedicated resources. Full Paper (PDF)
ABSTRACT - We present a personal data access control (PDAC) scheme inspired by protection schemes used in communities for sharing valuable commodities. We assume PDAC users are members of an online social network such as facebook.com. PDAC computes a “trusted distance” measure between users that is composed of the hop distance on the social network and an affine distance derived from experiential data. The trusted distance classifies users into three zones: acceptance, attestation, and rejection. User requests falling in the acceptance zone are accepted immediately while the requests in the rejection zone are rejected outright. Requests in the attestation zone need additional authorization to gain access. PDAC also tracks reposts to minimize the spread of data beyond the limits set by the data originator. PDAC was implemented on a social network emulator to demonstrate its viability. The performance of certain PDAC functions was examined using simulations driven by portions of social graphs obtained from myspace.com. Full Paper (PDF)
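The three-zone classification can be sketched as follows, assuming an additive combination of the hop distance and the affine distance and illustrative thresholds; the abstract does not give the paper's exact composition formula.

```python
# Sketch: classify an access request into acceptance / attestation /
# rejection zones using a "trusted distance" that combines social-graph
# hop distance with an experience-derived affine distance.  The additive
# combination and the thresholds are illustrative assumptions.

def trusted_distance(hop_distance, affine_distance):
    """Combine graph distance and experiential distance (assumed additive)."""
    return hop_distance + affine_distance

def classify(requester_hops, affine, accept_below=2.0, reject_above=4.0):
    """Map a request to one of the three PDAC zones."""
    d = trusted_distance(requester_hops, affine)
    if d < accept_below:
        return "accept"
    if d > reject_above:
        return "reject"
    return "attest"  # middle zone: needs additional authorization
```

Requests landing in the middle zone would then trigger the attestation workflow described in the abstract.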
ABSTRACT - This paper proposes a new platform for implementing services in future service oriented architectures. The basic premise of our proposal is that by combining the large volume of uncontracted resources with small clusters of dedicated resources, we can dramatically reduce the amount of dedicated resources while the goodput provided by the overall system remains at a high level. This paper presents particular strategies for implementing this idea for a particular class of applications. We performed detailed simulations on synthetic and real traces to evaluate the performance of the proposed strategies. Our findings on compute-intensive applications show that preemptive reallocation of resources is necessary for assured services. The proposed preemption-based scheduling heuristic can significantly improve utilization of the dedicated resources by opportunistically offloading the peak loads on uncontracted resources, while keeping the service quality virtually unaffected. Full Paper (PDF)
ABSTRACT - This paper presents a new bandwidth inference mechanism that can be used to predict bandwidth across two nodes on the Internet. We used simulation and actual implementation on PlanetLab to compare the performance of the proposed mechanism against an existing approach. The results indicate that our approach is lightweight and yields better performance. Full Paper (PDF)
ABSTRACT - Social networks are graphs that represent relations among people, institutions, and their activities. We introduce a novel social access control (SAC) strategy inspired by multi-level security (MLS) for protecting data on social networks. In MLS, the data objects and subjects are classified in hierarchical levels based on security clearance, and access is controlled accordingly. Instead of clearance levels, we use trust levels to annotate objects and subjects. The trust level of an object is specified by the creator. The trust level of a subject is obtained from a trust modeling process [2, 3]. Reading a data object is controlled using the relative trust values of subjects and objects. We describe one aspect of the SAC model that supports the confidentiality of read-only data objects. We performed simulation studies using traces from the flickr.com social network to evaluate the performance of some key primitives used in the SAC design. Full Paper (PDF)
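The read rule can be sketched as a direct analogue of MLS's "no read up", with trust levels in place of clearance levels; the numeric levels and object names below are illustrative assumptions.

```python
# Sketch: MLS-style read control with trust levels in place of clearance
# levels.  A subject may read an object only if the subject's trust level
# is at least the object's creator-assigned level ("no read up").
# The numeric trust values are illustrative assumptions.

def can_read(subject_trust, object_trust):
    """Read allowed only when the subject is at least as trusted."""
    return subject_trust >= object_trust

def readable(objects, subject_trust):
    """List the objects a subject of the given trust level may read."""
    return [name for name, level in objects.items()
            if can_read(subject_trust, level)]
```

A creator thus restricts a sensitive object simply by labeling it with a high trust level, and the check needs only the two scalar values.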
ABSTRACT - Secure routing in mobile ad hoc networks (MANETs) has emerged as an important MANET research area. MANETs, by virtue of the fact that they are wireless networks, are more vulnerable to intrusion by malicious agents than wired networks. In wired networks, appropriate physical security measures, such as restriction of physical access to network infrastructure, can be used to attenuate the risk of intrusions. Physical security measures are less effective, however, in limiting access to wireless network media. Consequently, MANETs are much more susceptible to infiltration by malicious agents. Authentication mechanisms can help to prevent unauthorized access to MANETs. However, considering the high likelihood that nodes with proper authentication credentials can be taken over by malicious entities, there is a need for security protocols that allow MANET nodes to operate in potentially adversarial environments. In this paper, we present a secure on-demand MANET routing protocol named Robust Source Routing (RSR). In addition to providing data origin authentication services and integrity checks, RSR is able to mitigate attacks by intelligent malicious agents that selectively drop or modify packets they agreed to forward. Simulation studies confirm that RSR is capable of maintaining a high delivery ratio even when a majority of the MANET nodes are malicious. Full Paper (PDF)
ABSTRACT - Epidemic protocols such as gossip have proven to have many desirable properties for information sharing. However, trust is one of the issues that is yet to be examined with respect to these protocols. In this paper, we present a trusted gossip protocol that uses trust estimates to impede spreading of rumors with reasonable message and processing overheads. We use traces collected from known social networks to estimate the performance of trusted gossip. Full Paper (PDF)
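The trust-gated forwarding idea can be sketched as a push gossip that relays only over links whose estimated trust clears a threshold. The fanout, threshold, topology, and per-link trust values below are illustrative assumptions, not the paper's parameters.

```python
# Sketch: gossip in which a node relays a message only to peers whose
# per-link trust estimate clears a threshold, impeding rumor spread
# through low-trust links.  Trust values, threshold, and fanout are
# illustrative assumptions.
import random

def trusted_gossip(neighbors, trust, source, threshold=0.5, fanout=2,
                   rng=random.Random(7)):
    """Return the set of nodes the message reaches from source."""
    reached = {source}
    frontier = [source]
    while frontier:
        u = frontier.pop()
        # Only relay over sufficiently trusted links, to at most `fanout`
        # not-yet-reached peers.
        candidates = [v for v in neighbors.get(u, [])
                      if v not in reached
                      and trust.get((u, v), 0.0) >= threshold]
        for v in rng.sample(candidates, min(fanout, len(candidates))):
            reached.add(v)
            frontier.append(v)
    return reached
```

A rumor injected by a node with mostly low-trust links thus dies out quickly, while messages from well-trusted nodes still spread epidemically.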
ABSTRACT - In this paper, we propose simple and practical strategies to improve the trustworthiness of network positioning schemes. In particular, our strategies make network positioning immune to non-random perturbations such as denial-of-service attacks and localized network congestion. Additionally, we study the overhead generated by existing network positioning algorithms and propose an algorithm that results in low overhead while retaining very high accuracy. We performed extensive simulations and implementations on PlanetLab to examine the performance trade-offs. Full Paper (PDF)
ABSTRACT - Several high-throughput distributed data-processing applications require multi-hop processing of streams of data. These applications include continual processing of data streams originating from a network of sensors, composing a multimedia stream by embedding several component streams originating from different locations, etc. These data-flow computing applications require multiple processing nodes, interconnected according to the data-flow topology of the application, for on-stream processing of the data. Since the applications usually run for a long period, it is important to optimally map the component computations and communications onto the nodes and links in the network, fulfilling the capacity constraints and optimizing some quality metric such as end-to-end latency. The mapping problem is unfortunately NP-complete, and heuristics have been previously proposed to compute approximate solutions in a centralized way. However, because of the dynamicity of the network, it is practically impossible to aggregate the correct state of the whole network in a single node. In this paper, we present a distributed algorithm for optimal mapping of the components of data-flow applications. We propose several heuristics to minimize the message complexity of the algorithm while maintaining the quality of the solution. Full Paper (PDF)
ABSTRACT - Web-based social networks are emerging as the top applications on the Internet. With this immense popularity, many of the shortcomings of the current social network deployments are also coming to light. One of the glaring problems with existing web-based social networks is trust management. In this paper, we focus on trust modeling in social networks. An allied issue, not considered here, is using trust to manage activities within the social network. We introduce a gravity-based model for estimating trust. We present the complete model along with the trust computation algorithms. We present initial results from a simulation study that investigates the feasibility of the proposed scheme. Full Paper (PDF)
ABSTRACT - In a recent study, we proposed a trusted gossip protocol for rumor resistant information sharing in peer-to-peer networks. Experiments using trace data collected from social networks like Flickr and other data sets showed that the trusted protocol can achieve significant reductions in rumor spreading with reasonable message and processing overheads. The study, however, did not consider node churn, a continuous process of node arrival and departure. In this paper, we show through experiments that the trusted gossip protocol performs as well with churning nodes as in no-churn situations. We examine the trusted gossip protocol using synthetic and real traces for node churning collected from the Myspace social network. Our experiments show that the trusted protocol's performance is considerably resilient even under extreme churning conditions. Full Paper (PDF)