Many organizations collect large amounts of data to support their business and decision-making processes. The data collected from various sources may have data quality problems in it. These kinds of issues become prominent when various databases are integrated, because the integrated databases inherit the data quality problems that were present in the source databases. The data in the integrated systems needs to be cleaned for proper decision making, and cleansing of data is one of the most crucial steps. In this research, the focus is on one of the major issues of data cleansing, i.e. "duplicate record detection", which arises when the data is collected from various sources. As a result of this research study, a comparison among the standard duplicate elimination algorithm (SDE), the sorted neighborhood algorithm (SNA), the duplicate elimination sorted neighborhood algorithm (DE-SNA), and the adaptive duplicate detection algorithm (ADD) is provided. A prototype is also developed, which shows that the adaptive duplicate detection algorithm is the optimal solution to the problem of duplicate record detection. For approximate matching of data records, string matching algorithms (a recursive algorithm with a word base and a recursive algorithm with a character base) have been implemented, and it is concluded that the results are much better with the word-based recursive algorithm.

Big data analytics helps us to find potentially valuable knowledge, but as the size of the dataset increases, the computing cost also grows exponentially. In our previous work, BotCluster, we designed a pre-processing filtering pipeline, including a whitelist filter and a flow loss-response rate (FLR) filter, for data reduction, intended to wipe out irrelevant noise and reduce the computing overhead. However, we still face a data redundancy phenomenon in which some identical feature vectors repeatedly emerge. In this paper, we propose a data compacting approach aimed at reducing the input volume while keeping enough representative feature vectors to fit the criteria of DBSCAN (density-based spatial clustering of applications with noise). It purges the redundant vectors according to a purging threshold and keeps the primary representatives.
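Neither abstract includes code. As a rough illustration of the sorted neighborhood idea mentioned in the duplicate-detection study, here is a minimal Python sketch. The record format, the blocking key, and the `is_match` similarity test are all hypothetical placeholders, not the paper's actual implementation:

```python
def is_match(a, b):
    # Placeholder similarity test: exact match on the lowercased
    # name field. A real system would use approximate string matching.
    return a["name"].lower() == b["name"].lower()

def sorted_neighborhood(records, key, window=3):
    """Sorted neighborhood sketch: sort records by a blocking key,
    then compare each record only to its neighbors inside a sliding
    window instead of checking all O(n^2) pairs."""
    ordered = sorted(records, key=key)
    pairs = []
    for i, rec in enumerate(ordered):
        # Compare rec against the next (window - 1) records only.
        for other in ordered[i + 1 : i + window]:
            if is_match(rec, other):
                pairs.append((rec, other))
    return pairs

records = [{"name": "Alice"}, {"name": "alice"}, {"name": "Bob"}]
dupes = sorted_neighborhood(records, key=lambda r: r["name"].lower(), window=2)
# The two "Alice" variants end up adjacent after sorting and are flagged.
```

The window size trades recall for speed: a larger window catches duplicates whose blocking keys sort further apart, at the cost of more comparisons.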
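The purging step described in the data compacting abstract can be pictured as collapsing feature vectors that fall within a distance threshold of an existing representative, while keeping a count so that density information survives for DBSCAN. The following is only a minimal greedy sketch under that assumption; the function name, the use of Euclidean distance, and the count-based weighting are illustrative, not the paper's actual algorithm:

```python
import math

def compact(vectors, purge_threshold):
    """Greedy data compacting sketch: a vector closer than
    purge_threshold to an existing representative is purged
    (merged into it); each representative carries a count of
    how many input vectors it stands for."""
    reps = []    # primary representative vectors
    counts = []  # vectors absorbed by each representative
    for v in vectors:
        for i, r in enumerate(reps):
            if math.dist(v, r) < purge_threshold:  # Euclidean distance
                counts[i] += 1
                break
        else:
            reps.append(v)
            counts.append(1)
    return reps, counts

# Exact and near duplicates collapse into a single representative.
reps, counts = compact([(0, 0), (0, 0), (0.1, 0), (5, 5)], purge_threshold=0.5)
# reps   -> [(0, 0), (5, 5)]
# counts -> [3, 1]
```

The counts could later be passed to a weighted DBSCAN (e.g. a `sample_weight` argument) so that purging redundant vectors does not distort the density estimates the clustering relies on.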