
Review and Summary of Blocking Inference Channels

Blocking Inference Channels in Frequent Pattern Sharing, by Zhihui Wang, Wei Wang, and Baile Shi, addresses concerns about the disclosure of private and sensitive personal information through the process of data mining. While advances in data mining technology have been beneficial in many ways, the process itself raises concerns about the distribution of sensitive information and the undesirable effects such distribution can have on data owners.

Unfortunately, although many efforts have been made to protect privacy during data mining, the information derived through mining may itself be sensitive. The discovery of frequent patterns within large data sets is one of the most widely studied problems in data mining, and because it can yield commercially invaluable results, it is, and will continue to be, one of the most popular areas of research.

However, the frequent patterns themselves can reveal sensitive personal and commercial information, and indiscriminate sharing of such material can harm data owners. This compromise of privacy has prompted research into more effective ways to mine data beneficially without revealing compromising information, thereby protecting data owners' privacy.

Blocking Inference Channels in Frequent Pattern Sharing addresses the issues associated with frequent pattern mining and seeks new ways to protect the privacy of data owners and the information discovered during the process. In frequent pattern mining, the result is presented as a collection of frequent patterns together with their supports. The problem arises because data owners often regard some of the frequent patterns themselves as "sensitive".

The ultimate goal set forth in the reviewed work is to prevent the disclosure of sensitive frequent patterns while maximizing the number of non-sensitive patterns that can still be shared. The article presents a new projection-based approach for detecting inference channels, and the authors show that their approach can detect all maximal inference channels associated with non-k-anonymous patterns.

Experiments show that the authors' approach is more effective than most prior attempts. The authors also point out problems with data sanitization: after sanitization, the supports of frequent patterns, even those that are not considered sensitive, are altered, and various sanitization approaches can also introduce false frequent patterns. These shortcomings prompted an earlier approach known as the Downright Sanitizing Algorithm (DSA).

Unfortunately, even with DSA, an attacker can still infer sensitive frequent patterns from the shared results. The authors of Blocking Inference Channels in Frequent Pattern Sharing seek to overcome these limitations through a thorough analysis of the inference channels found within a collection of frequent patterns. They classify these inference channels into three categories: superset inference channels, subset inference channels, and chain inference channels.

After the categorization, and building on pattern sanitization, the authors present two approaches for blocking inference channels in frequent pattern sharing. To best explain the problems and solutions described above, I have reiterated and simplified some of the formulations brought forth by the writers. Some basic notation will help:

Let I = {i1, i2, …, in} be the set of all items. Let T denote a transaction, a set of items drawn from I, and let D denote a database, a collection of such transactions, each associated with a unique identifier Tid. A pattern X = {ij, ij+1, …, il} is a subset of I; if X contains k items, the length of the pattern is k, written |X| = k.

A transaction T contains a pattern X if and only if X ⊆ T. The support of X in D, written sup(X), is the number of transactions in D that contain X. The empty pattern {} is contained in every transaction, so sup({}) = N, where N is the number of transactions in D. The goal of frequent pattern mining is to find all frequent patterns in a transaction database for a specified minimum support threshold: if σ is the minimum support threshold, the mining result can be written as the pair K = (σ, F).

Here F = {(X, sup(X)) | sup(X) ≥ σ, X ⊆ I}, the set of all frequent patterns in D together with their supports. A formal definition of data mining is "data processing using sophisticated data search capabilities and statistical algorithms to discover patterns and correlations in large preexisting databases; a way to discover new meaning in data" (Princeton.edu). With this in mind, the underlying problem the writers address is the transformation of K = (σ, F) into K′ = (σ, F′) such that K′ can be shared safely.
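To make the notation concrete, here is a minimal sketch in Python, assuming a toy four-item database and a threshold of my own choosing (nothing below is taken from the paper's experiments):

    from itertools import combinations

    # Toy transaction database D over the items I = {a, b, c, d};
    # each transaction T is a set of items drawn from I.
    D = [
        {"a", "b", "c"},
        {"a", "b", "c", "d"},
        {"b", "c", "d"},
        {"a", "d"},
    ]

    def sup(X, D):
        """sup(X): the number of transactions T in D that contain X."""
        return sum(1 for T in D if X <= T)

    def mine_frequent_patterns(D, sigma):
        """Brute force: F = {(X, sup(X)) : sup(X) >= sigma, X subset of I}."""
        items = sorted(set().union(*D))
        F = {}
        for k in range(1, len(items) + 1):
            for combo in combinations(items, k):
                X = frozenset(combo)
                s = sup(X, D)
                if s >= sigma:
                    F[X] = s
        return F

    sigma = 2                    # minimum support threshold
    F = mine_frequent_patterns(D, sigma)
    # The mining result is the pair K = (sigma, F).

With σ = 2, F contains every single item, every pair, and the triples abc and bcd.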

Sharing is safe when the transformation of K = (σ, F) into K′ = (σ, F′) satisfies three conditions. First, F′ must contain no sensitive patterns and must be a subset of the knowledge revealed by frequent pattern mining: there must be no false frequent patterns in F′, and the support of any pattern in F′ must not be altered in any way.

The second condition is self-containment: no additional frequent patterns should be derivable from F′. By ruling out inference attacks, this condition guarantees that sharing F′ cannot endanger data owners by revealing sensitive patterns. The third and final condition reflects the authors' aim of establishing an optimal solution to the problem.

That is, the third condition is to maximize the number of non-sensitive, and thereby shareable, patterns: ideally F′ = F − Fs, where Fs denotes the set of sensitive patterns. However, simply removing the sensitive patterns from F is not enough to protect sensitive data, because of the existence of inference channels. A superset inference channel rests on the monotonicity principle, the inequality sup(X) ≥ sup(Y) whenever X ⊆ Y, as the sketch below shows.
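A minimal sketch of how an adversary exploits monotonicity, continuing from the toy F and σ above (the choice of bc as the sensitive pattern is purely illustrative):

    def superset_channels(F_shared, sensitive):
        """Superset inference channels: if a proper super-pattern Y of a
        sensitive pattern X survives in the shared result, monotonicity
        gives sup(X) >= sup(Y) >= sigma, so X is revealed frequent."""
        return [(X, Y, s)
                for X in sensitive
                for Y, s in F_shared.items()
                if X < Y]

    # Remove only the sensitive pattern bc from the shared result:
    sensitive = {frozenset("bc")}
    F_shared = {Y: s for Y, s in F.items() if Y not in sensitive}
    for X, Y, s in superset_channels(F_shared, sensitive):
        print(f"sup({''.join(sorted(X))}) >= sup({''.join(sorted(Y))}) = {s}")

The loop reports channels through abc and bcd: although bc was deleted, any recipient can still deduce that it is frequent.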

To block superset inference channels, all super-patterns of a sensitive pattern must be eliminated as well; for example, to hide the sensitive pattern abc, a super-pattern such as abcd must also be removed. A subset inference channel, by contrast, rests on the inclusion-exclusion principle. Work by T. Calders proved that certain inequalities hold when the supports of specific sub-patterns are known, and from these inequalities it follows that there always exist sub-patterns of equal length whose supports together bound the support of the sensitive pattern, as the sketch below illustrates.
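The paper's deduction rules are more general, but one concrete three-term instance of the inclusion-exclusion bound, sup(abc) ≥ sup(ab) + sup(ac) − sup(a), is enough to show the danger; the sketch below applies it to the toy data, with bcd as the illustrative sensitive pattern:

    def ie_lower_bound_3(F_shared, a, b, c):
        """Three-term inclusion-exclusion bound on sup({a, b, c}):
        sup(abc) >= sup(ab) + sup(ac) - sup(a).  If all three patterns
        on the right appear in the shared result and the bound reaches
        sigma, they form a subset inference channel for abc."""
        ab, ac, a_ = frozenset({a, b}), frozenset({a, c}), frozenset({a})
        if all(P in F_shared for P in (ab, ac, a_)):
            return F_shared[ab] + F_shared[ac] - F_shared[a_]
        return None

    # Removing only the sensitive pattern bcd is not enough:
    F_minus_bcd = {X: s for X, s in F.items() if X != frozenset("bcd")}
    bound = ie_lower_bound_3(F_minus_bcd, "b", "c", "d")  # bounds sup(bcd)
    if bound is not None and bound >= sigma:
        print(f"subset channel: sup(bcd) >= {bound} >= sigma")

On the toy data, sup(bc) + sup(bd) − sup(b) = 3 + 2 − 3 = 2 ≥ σ, so bcd is certified frequent by its shared sub-patterns alone.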

Therefore, to block a subset inference channel, it suffices to eliminate from the mining results one of the sub-patterns forming the channel. In addition to subset and superset inference channels, a third kind exists: chain inference channels, which are indirect inference channels (Blocking Inference Channels in Frequent Pattern Sharing). Through a chain inference channel, an attacker can infer that a sensitive pattern is frequent in multiple steps.

Correspondingly, superset and subset inference channels are considered direct inference channels: through a direct channel, a sensitive pattern can be inferred to be frequent in a single step. Having examined these channels, the authors developed and presented two algorithms designed to block inference channels in frequent pattern sharing. Both are built on pattern sanitization: the shared result is compiled after removing the sensitive patterns along with some related patterns that are not themselves sensitive.
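Purely to illustrate the general idea, here is a naive greedy sketch of pattern sanitization built on the helpers above; it is my own simplification, not the authors' algorithms, and it handles only the superset channels and the single three-term subset bound shown earlier:

    def naive_sanitize(F, sensitive, sigma):
        """Drop every pattern containing a sensitive pattern (closing the
        superset channels), then greedily drop sub-patterns until the
        three-term inclusion-exclusion bound no longer certifies any
        sensitive pattern as frequent.  Surviving supports are left
        untouched, so no false patterns or distorted supports appear."""
        F1 = {X: s for X, s in F.items()
              if not any(S <= X for S in sensitive)}
        for S in sensitive:
            if len(S) != 3:
                continue              # this sketch handles |S| = 3 only
            a, b, c = sorted(S)
            bound = ie_lower_bound_3(F1, a, b, c)
            while bound is not None and bound >= sigma:
                del F1[frozenset({a, b})]   # break one contributing pattern
                bound = ie_lower_bound_3(F1, a, b, c)
        return F1

    F_safe = naive_sanitize(F, {frozenset("bcd")}, sigma)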

In this way, the authors avoid introducing any false frequent patterns and avoid distorting the supports of the remaining patterns. Because the algorithms themselves constitute the paper's solution, I have not reproduced them here beyond the rough sketch above. I will note, however, that the authors evaluated them on a synthetic dataset generated with the IBM Data Generator, from which they obtained the complete collection of frequent patterns and their supports. In the performance evaluation, sensitive patterns were chosen by selecting frequent patterns at random.

It is observed that blocking a specified pattern simultaneously prevents the disclosure of the patterns derivable from it. DSA was excluded from the experiments on the two algorithms because it cannot completely block all inference channels to sensitive patterns. The efficiency and effectiveness of each algorithm were measured using side effect (SE) and running time.

Side effect measures the effectiveness of an algorithm, while running time measures its efficiency; a simple way to instrument both is sketched below. In the initial experiments, the performance of the two algorithms was measured against the number of frequent patterns in the data, with the sensitive patterns held fixed: in total, the authors used 50 sensitive patterns with supports ranging from 1700 to 1900 and an average pattern length of 3.
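Assuming side effect is taken as the number of non-sensitive frequent patterns lost to sanitization (a natural reading, though the paper's exact definition may differ), both metrics are easy to instrument around the naive sketch above:

    import time

    sensitive = {frozenset("bcd")}
    start = time.perf_counter()
    F_safe = naive_sanitize(F, sensitive, sigma)
    running_time = time.perf_counter() - start

    # Side effect: non-sensitive frequent patterns removed by sanitization.
    SE = sum(1 for X in F if X not in F_safe and X not in sensitive)
    print(f"SE = {SE}, running time = {running_time:.6f} s")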

Efficiency and effectiveness were also measured by varying the number of sensitive patterns, which ranged from 5 to 500 across a further series of experiments. The writers concluded that the more sophisticated algorithm produced fewer side effects but required more running time: an extensive algorithm takes more detail into consideration and therefore runs longer, while a simple algorithm runs faster but inevitably over-sanitizes the data.

At the end of the paper, the authors acknowledge the continued existence of inference channels and the need to further improve the protection of sensitive data patterns. They note the value of categorizing inference channels into the three categories described above: subset, superset, and chain inference channels. They also observe that their two algorithms avoid presenting false knowledge and succeed in maintaining data accuracy.

Based on their conclusions, however, they believe that more effective and more efficient algorithms could be designed to better block inference channels in frequent pattern sharing, and that their findings can be extended to other data mining tasks such as clustering and classification. I feel the major contribution of the work is the presentation of usable and quantifiable results addressing a major concern in the world of data mining.

Research indicates that this work has been cited in a number of scholarly periodicals. In reviewing the article, I found the information to be highly detailed and accurate. I did note that many of the ideas behind the new algorithms were derived from previously published work; nonetheless, the material exhibits detailed planning and thought. Because of this detail, it is nearly impossible to summarize the included information without risking plagiarism of the work.

I believe the information in the paper is applicable to the real world, as it is relevant to today's technology and today's data owners. The strength of the proposed methods lies in their protective abilities; their weaknesses lie in the margin for error (the blocking of too many non-sensitive patterns) and the trade-off between side effect and running time. Further application of the findings may address these problems and provide additional solutions. Privacy-Preserving Data Mining (PPDM) has lately become a popular research area in data mining.

Because of the risks associated with data mining, a series of rules was set out in 2002 to aid in protecting data owners when data patterns are shared.

Part II

The following simplified exercise and its solution demonstrate understanding of the material brought forth by the reviewed article. First, identify the frequent patterns in the accompanying data table. Second, assume that bcd is a sensitive pattern and determine whether its elimination is sufficient for protecting the data, and why.
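The exercise's data table is not reproduced in this review; as a stand-in, the check below reuses sup from the first sketch on a small hypothetical database chosen so that sup(bcdo) = 2, consistent with the solution that follows:

    # Hypothetical stand-in for the exercise's missing data table.
    D2 = [
        {"b", "c", "d", "o"},
        {"b", "c", "d", "o"},
        {"a", "b"},
    ]
    bcd, bcdo = frozenset("bcd"), frozenset("bcdo")
    assert sup(bcd, D2) >= sup(bcdo, D2)  # monotonicity: bcd subset of bcdo
    print(sup(bcdo, D2))                  # 2: (bcdo, 2) is a channel for bcd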

Solution: Given the data set, transactions containing bcdo also contain the sensitive pattern bcd. Eliminating bcd alone is therefore not sufficient to protect the data: because sup(bcd) ≥ sup(bcdo), the pattern (bcdo, 2) is an inference channel for bcd.

References

Data Mining. (n.d.). Retrieved July 14, 2009, from http://www.princeton.edu

Wang, Z., Wang, W., & Shi, B. (2007). Blocking inference channels in frequent pattern sharing. In Proceedings of the 23rd IEEE International Conference on Data Engineering (ICDE 2007), 1425-1429.
