Support Forums

Full Version: Started working on a Duplicate file search program
Here's my progress so far:
[Image: idAZa8.png]

In the future, it's going to have several methods of verifying duplicates through different hashes, and it's also going to compare files of similar file sizes first, which should cut down the time it takes to find duplicates.
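The size-first idea above can be sketched roughly like this (in Python for illustration, since the post doesn't say what language the program is written in): group files by size, and only hash the groups that contain more than one file, since a file with a unique size can't have a duplicate.

```python
import hashlib
import os
from collections import defaultdict

def find_duplicates(paths):
    """Group candidate files by size first, then confirm with MD5.

    Only files that share a size ever get hashed, so anything with a
    unique size skips the expensive full read entirely.
    """
    by_size = defaultdict(list)
    for path in paths:
        by_size[os.path.getsize(path)].append(path)

    by_hash = defaultdict(list)
    for size, group in by_size.items():
        if len(group) < 2:
            continue  # unique size -> cannot be a duplicate
        for path in group:
            md5 = hashlib.md5()
            with open(path, "rb") as f:
                # read in chunks so large files don't load into memory at once
                for chunk in iter(lambda: f.read(65536), b""):
                    md5.update(chunk)
            by_hash[md5.hexdigest()].append(path)

    # only hash groups with 2+ members are actual duplicates
    return {h: g for h, g in by_hash.items() if len(g) > 1}
```

Swapping MD5 for another algorithm (SHA-1, SHA-256, etc.) is just a matter of changing the `hashlib` constructor, which lines up with the "several methods of verifying" plan.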

Right now, it displays everything in a table view, where the MD5 hash shows up in blue, with expandable, checkbox-enabled line entries for every node. It will delete the checked nodes whose file paths point to files that actually exist, and it also has a quick search filter with several presets I've made through string arrays, plus an option to define your own filters for file extensions to search through.
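The preset filters built from string arrays could look something like the sketch below (Python for illustration; the category names and extension lists here are my own guesses, not the author's actual presets):

```python
import os

# Hypothetical preset filters modeled on the string-array approach
# described above; names and extensions are illustrative assumptions.
FILTER_PRESETS = {
    "Images": [".jpg", ".jpeg", ".png", ".gif", ".bmp"],
    "Audio":  [".mp3", ".wav", ".flac", ".ogg"],
    "Video":  [".mp4", ".avi", ".mkv", ".mov"],
}

def matches_filter(filename, extensions):
    """Return True when the file's extension is in the active filter.

    Comparison is case-insensitive so photo.JPG matches ".jpg".
    """
    return os.path.splitext(filename)[1].lower() in extensions
```

A user-defined filter is then just another list of extension strings passed in place of a preset.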

I still need to combine a few of the nodes, and add a recursive loop so it can search through subdirectories of subdirectories until there are no more folders to open. I'll possibly add an option to set the depth of the search as well, so it stops after descending a certain number of directory levels.
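The depth-limited recursion described above might be sketched like this (again in Python as an illustration; the function name and parameters are mine):

```python
import os

def walk_with_depth(root, max_depth=None, _depth=0):
    """Yield file paths under root, recursing into subdirectories
    until max_depth levels down (None means unlimited depth).

    max_depth=0 lists only the files directly inside root.
    """
    try:
        entries = os.listdir(root)
    except OSError:
        return  # unreadable directory: skip it rather than crash
    for name in entries:
        path = os.path.join(root, name)
        if os.path.isdir(path):
            if max_depth is None or _depth < max_depth:
                yield from walk_with_depth(path, max_depth, _depth + 1)
        else:
            yield path
```

In practice `os.walk` already handles unlimited recursion; the manual version here only exists to show where a depth cutoff would hook in.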

My priority is to add a feature that displays the total bytes wasted on duplicate files as well. I already have a few statistics in place that seem fairly accurate, but I'm going to add more to the "informational" area sooner or later.
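The wasted-bytes statistic is straightforward to define: for each group of identical files, every copy beyond the first is wasted space. A minimal Python sketch (function name and input shape are my assumptions):

```python
import os

def wasted_bytes(duplicate_groups):
    """Sum the bytes wasted across groups of identical files.

    duplicate_groups is a list of lists of paths, where each inner
    list holds files confirmed identical; (count - 1) copies of each
    group are redundant, so they count toward the wasted total.
    """
    total = 0
    for group in duplicate_groups:
        if len(group) > 1:
            size = os.path.getsize(group[0])  # all members are the same size
            total += size * (len(group) - 1)
    return total
```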

Before I actually got into this, I didn't think much about how difficult it would be, but if any of you want to try building an application like this, go for it lol. I guarantee you're going to be overwhelmed with loops, indexes, and hashing.