Why is Everybody so clueless on the importance of Desktop Search to the Masses?



  • @AbbydonKrafts said:

    With your current "structure"

    Hahahahahahahahahahahahahahahaha...



  • dumb OLD timer

    @AbbydonKrafts said:

    Easier said than done. API timers require a module with a callback procedure. It can't be in a form. Also, when the procedure is called, it would have to perform the update steps back at the form. With your current "structure", it would take a lot of work to implement. Find more info here.

    So for now we'll go with what there is. The demo is on hold. I want to show searching this forum text instead of the files I had been using. There are 44,000 lines and 1.6 million characters in this thread so far, it is BIG enough. I'll show How I search for search.exe and source.txt and see how the posters have progressed. etc GDS won't be able to find the later material but SSDS will.

    The VAX/Search and GREP are more like Desktop Search than those imposter INdexers. But they will all fall before SSDS and the Swampies.



  • @SpectateSwamp said:

    I want to show searching this forum text instead of the files I had been using. There are 44,000 lines and 1.6 million characters in this thread so far, it is BIG enough. I'll show How I search for search.exe and source.txt and see how the posters have progressed.
     

    http://forums.thedailywtf.com/search/SearchResults.aspx?q=search.exe&s=18

    http://forums.thedailywtf.com/search/SearchResults.aspx?q=source.txt&s=18

    Done. See the little search box at the top of the page? Even THAT is way better than SSDS.

     



  • @Cap'n Steve said:

    I can't believe the same people that laugh at him when he says 300 gigabytes of text is ridiculous will say that over 10,000 words in a file is ridiculous.

     

    I never said it was ridiculous, just that it was unlikely.  I may or may not have also mentioned that, if I have to get that far down into a document before my search term appears even once, then having it omitted from the search results probably isn't that big a deal to me.  If it is that big a deal, then I'll fire up a full-text search and let it crank while I do something else.

     



  • @SpectateSwamp said:

    So for now we'll go with what there is. The demo is on hold. I want to show searching this forum text instead of the files I had been using. There are 44,000 lines and 1.6 million characters in this thread so far, it is BIG enough. I'll show How I search for search.exe and source.txt and see how the posters have progressed. etc GDS won't be able to find the later material but SSDS will.

    The VAX/Search and GREP are more like Desktop Search than those imposter INdexers. But they will all fall before SSDS and the Swampies.

    (Enable mumbling.)

    Okay, so... (tapitytapity) I'm... dumping the whole thread into a text file...

    $ for i in `seq 1 29`; do w3m -dump "http://forums.thedailywtf.com/forums/t/7593.aspx?PageIndex=$i" >> threads.txt; done
    ...
    $ ls -l threads.txt 
    -rw-r--r-- 1 wwwwolf users 2584163 2008-02-08 20:02 threads.txt
    $ wc -l threads.txt  # number of lines
    53410 threads.txt
    $ wc -w threads.txt  # number of words
    278519 threads.txt
    

    Okay, the pages have more crap than what you get when you painstakingly copy-paste the thing by hand. But who cares - we're competing on search speeds. More crap = harder to search, right?

    How many instances of search.exe?

    $ time grep -c "search\.exe" threads.txt 
    38
    
    real	0m0.017s
    user	0m0.000s
    sys	0m0.012s

    Ooo, 0.017 seconds. How about source.txt?

    $ time grep -c "source\.txt" threads.txt 
    33
    
    real	0m0.017s
    user	0m0.004s
    sys	0m0.008s

    No need for videos - the time utility already provides nice benchmarks.



  • @SpectateSwamp said:

    The VAX/Search and GREP are more like Desktop Search than those imposter INdexers. But they will all fall before SSDS and the Swampies.

    Not true. grep is designed for searching single files using regular expressions. Desktop Search searches your entire hard drive for search terms (generally not using regexps).
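
    As a rough illustration of the difference (a sketch; the directory is just a placeholder): the closest grep gets to "desktop search" is recursing over a tree, and even then it re-reads every byte on every query, which is exactly the work the indexers do once up front.

    $ # no index: every single query re-scans every file under the directory
    $ grep -rn "search\.exe" ~/Documents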

    And there are no swampies. I have yet to see a single person support you in a non-sarcastic manner. None of us are going to fix your code, so please, stop asking. It's a nightmare to read and understand.



  • @aleph said:

    None of us are going to fix your code, so please, stop asking. It's a nightmare to read and understand.
     

    Agreed. I would actually consider throwing a few portions of code at him to help him get on his way, and set some of his ideas straight if it was at least written in a comprehensible way (and if I thought he would ever agree on the definition of desktop search). But with it looking like it was written by a blind, retarded monkey full of PCP and heroin... I think I will stay away from it.

     



  • @AbbydonKrafts said:

    @SpectateSwamp said:

    Give me a code sample and I'll put it in.

    Easier said than done. API timers require a module with a callback procedure. It can't be in a form. Also, when the procedure is called, it would have to perform the update steps back at the form. With your current "structure", it would take a lot of work to implement. Find more info here.

    Not to mention it doesn't even appear to do anything in your version! 



  • @SpectateSwamp said:

    What's WDS and YaHoo file size limits? Who's the NEXT TakeDown Challenger?
     

    You may want to learn how to use Google to search the internet. While I could not find limit information for WDS, I already posted a link to the Copernic site where they say that there's no such limit in their software. Maybe you should try to "take down" Copernic?

    I'd really like to see that... Please, could you make a video when you try out Copernic and this time turn the Camera to video your face? I'd like to see your expression (and the tears really) when you realize that the illusion of your piece of "software" actually doing anything useful is destroyed. I'd also like to hear one of the usual mumbled comments. Something like "You can now phee phat I'm hipping the denepe buppon on my sourphe.pxp fine, aph I jufft reaniphed how crappy FFDF reanny iph...".

    Oh dreams...



  • @AbbydonKrafts said:

    Easier said than done. API timers require a module with a callback procedure. It can't be in a form. Also, when the procedure is called, it would have to perform the update steps back at the form. With your current "structure", it would take a lot of work to implement. Find more info here.

    Hang on, I've just recovered from watching the video of him "compiling" SSDS by pasting the source from a text file into a new VB form.  Are you saying he's progressed on to using project files and saving forms now?

    And you all said he wouldn't change...



  • @upsidedowncreature said:

    Are you saying he's progressed on to using project files and saving forms now?

    Actually he claimed that he's always done that. Source.txt, so he says, only emerged from the fact that he doesn't know what a zip file is, but still wants to provide the whole project in a single file. Sorry, I mean of course he doesn't admit to not knowing what a zip file is, but we concluded it and he didn't deny it, because he's only looking for source.txt and search.exe with SSDS and therefore doesn't realize that we claim he's never heard of zip files. Additionally, he won't realise it through this post either, although it has multiple occurrences of source.txt and search.exe in it, because he won't understand the long sentences I'm constructing. They're simply too long. (I'm sure he'll quote the last sentence though and reply with "yes, but with DesktopSearch your data is all yours!")



  • Copernic next Target

    @tdittmar said:

    You may want to learn how to use Google to search the internet. While I could not find limit information for WDS, I already posted a link to the Copernic site where they say that there's no such limit in their software. Maybe you should try to "take down" Copernic?

    They better get old Copernic himself to demo their side. I'll check their file size max first. Going looking for Copernic downloads ASAP. They won't appreciate you making them the target of the next take-down. I have heard lots of good things about Copernic. This should be more fun. Desktop Search Developers going head to head. Not just some testing freaks. The Real Desktop Search Experts.

    My Google search demo using the daily wtf forum info has shown lower file sizes as max character counts. It must be a result of a number of factors? Somehow any demo showing GDS search features would appear rather dull and slow compared to Swamp Search. I could do a video confirming the GDS file limits, but what would that prove? So I'll concentrate on Showing Off SSDS. Copernic file limits, now that is a different issue.
     



  • @SpectateSwamp said:

    The Real Desktop Search Experts.

    You and- who? The copernic developers? No, they cannot be for real. They don't have a YouTube channel. No single video upload. And I could not find video on their site. I bet their software isn't capable of Doing Random either. Even if it's hard for you. I think they cannot compete with SSDS. No one can.

    BECAUSE SSDS REVOLUTION HAPPENS ONLY INSIDE YOUR BRAIN!!

    edit: re-reading this sentence, the grammar is more f4d-up than I expected.



  • Copernic hard to break

    @derula said:

    @SpectateSwamp said:

    The Real Desktop Search Experts.

    You and- who? The copernic developers? No, they cannot be for real. They don't have a YouTube channel. No single video upload. And I could not find video on their site. I bet their software isn't capable of Doing Random either. Even if it's hard for you. I think they cannot compete with SSDS. No one can.

    BECAUSE SSDS REVOLUTION HAPPENS ONLY INSIDE YOUR BRAIN!!

    edit: re-reading this sentence, the grammar is more f4d-up than I expected.

    Me and my Swampies DerulaSwamp. Copernic seems to find stuff at the end of a 263MB file. I'll double that. I do like the way Copernic shows more indexing info. I know it's just indexing what I want. Did I just say indexing and want in the same sentence? Far more interesting video. What is your guess as to the limit? 500Mb? 1 Gig? 2 Gig? (I got 41 free Gig, so I can test it up to 15 or 20 Gig.)

    If you can't break it. Use It.

     



  • 526 Mb and Counting. How long will Copernic last?

    526MB and counting. I'll double that. Swamp Shack needs a Bigger Faster computer. the 526 took 16 minutes for copernic to index. Swamp Search can show you the data faster than that. A lot faster than that. Now to merge a Bigger file. Merge em till you break it.



  • @SpectateSwamp said:

    the 526 took 16 minutes for copernic to index. Swamp Search can show you the data faster than that. A lot faster than that.

    How long did it take Copernic to search?  You know, that thing that search utilities do.


  • @SpectateSwamp said:

    526MB and counting. I'll double that.
     

    Did you read their FAQ? To save you time: they say there that there is no limit on the number of files or on a single file's size!

    @SpectateSwamp said:

    the 526 took 16 minutes for copernic to index. Swamp Search can show you the data faster than that. A lot faster than that. Now to merge a Bigger file. Merge em till you break it.
     

    The indexing part has nothing to do with searching! That's the preparation for you to be able to search. I would recommend you do the following:

    • Go to Copernic's options and, under "Indexing & Performance", untick the two "Suspend..." checkboxes. That makes sure indexing isn't interrupted while you use your computer.
    • In the options, disable "On the fly indexing" as well; that might interfere if you only want one file indexed.
    • Defragment your hard drive! I doubt Copernic should take 16 minutes for 526 MB on your machine when it took two hours for 250 GB on mine...
    • Restrict the indexing to your file only! No mails, no pictures, just your file.

    Note that none of this is necessary just to get indexing to work in the first place (a normal user doesn't need to do any of it), but since you want to prove that it's bad, optimize it as well as you can. 

    You cannot count the indexing part towards your search time, because indexing is done once (and again only if you change the file). If you don't change the file, it does not need to be indexed again, and its contents can be found in every search in virtually no time.
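
    The same split shows up in classic command-line tools. As a rough analogy (a sketch; it assumes an mlocate-style updatedb/locate pair is installed, and that pair only indexes file names, not contents, but the index-once/query-many idea is the same):

    $ # "indexing": done once, or on a schedule - this is the slow part
    $ sudo updatedb
    $ # "searching": answered from the prebuilt index, effectively instantly
    $ locate source.txt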

    Please also note:
    In SSDS you need to iterate through all the results pressing Enter or whatever. Copernic highlights them all in the preview! Demo that as well, please!

    Please take a long walk with your video camera and let Copernic index your entire harddrive and emails. When you're back, search for an email address and see what happens. Then, turn around and leave this forum in shame.



  • @aleph said:

    That is fantastic...I wonder if you could use our replies as input, and automatically generate nonsense questions for Swamp. It's not like he'll answer the real questions anyway....

    Here are more powerful hammer answer the autofocus thing about. 

    SpectateSwamp over the your ideas on Posts Re: install of Desktop search
    for matching lines for not have to know what's on they hear the person
    for some Version of a troll full screen way, to actually useful; is
    quick.  Such Security randomly which is possible.  Thank you would
    understand it certainly isn't really that he is a brief flash of the Po
    the whole discussion; of using the It.  The other than and lot.  Am
    in legal papers that of The vote?  You try and Digital picture of
    focus keeps repeating the Po from one You don't need your passwords, I
    would I did it, puts up another computer?



  • @SpectateSwamp said:

    Me and my Swampies DerulaSwamp.

    Oh nose. I feel all swampy inside. Dr. Phil, is there a cure?



  • 1.052 Gig Copernic Falls

    That was the final doubling. 7 1/2 hours later I had to put an end to the indexing.

    1 Gig is the copernic limit.



  • @SpectateSwamp said:

    That was the final doubling. 7 1/2 hours later I had to put an end to the indexing.

    1 Gig is the copernic limit.

     

    Who in their right mind would have a single file bigger than 1G? Any sane, normal person uses a wonderful invention called a 'file system', which has 'directories' so you can 'organise' your files. 1G for a single text file is a very outside case - unless you are in the habit of merging everything into a single file so that it is useless to any other application.

    Then again, you still haven't explained the point made in http://forums.thedailywtf.com/forums/p/7593/144805.aspx#144805 where we see SSDS fail dismally with a large file. And you still haven't explained how we are supposed to use things like source code if we have to merge it all into one large file either.

    Get used to the idea that you have written a tool that works for you and your way of doing things but doesn't suit anybody else. However, it is not a desktop search tool; it is a single-file search tool.



  •  @SpectateSwamp said:

    1 Gig is the copernic limit.

     Here's a list of all files I have that are above 1GB:

    • 1 pagefile
    • 1 gcf file (as used by Steam)
    • 1 bkf (Microsoft Backup)
    • 2 rar archives
    • 24 video files (various formats)

    None of those would I ever want to search. 



  • Split the difference - Can copernic do a 750M file

    @tdittmar said:

    Restrict the indexing to your file only! No mails, no pictures, just your file.

    Did that. Just 1 huge file and a couple of small ones were there. I'll take a stab at splitting the difference between the last attempt and the previous one that took 15 minutes or so. EveryBody except SSDS has file size limits. If the indexing works, then I can do some search timings. No doubt even for me 1Gig is huge, about 15 times all the textual data I have collected since way way back. Somewhere back in this thread people were saying to download a 2.2Gig Linux file. It wasn't me that started the "my search can search bigger files than yours" contest. SSDS wins the SIZE battle. NOLIMIT



  • How many damn times? SEARCHING WITHIN A SINGLE FILE IS NOT "DESKTOP SEARCH"!!! 

     

    Desktop search is when you're trying to find out where something is on your computer! Before you can use SSDS, YOU MUST ALREADY KNOW WHERE YOUR DATA IS.

    You've already admitted SSDS can't deal with separate files, and it can't do MP3s, and it can't search ANYTHING in my "Windows" or "Program Files" directories...

     

    So what's the fucking point? Why shouldn't I just use notepad to search within my single large text file?

     

    (NB- I don't even HAVE any text files bigger than about 1k) 



  • @SpectateSwamp said:

    SSDS wins the SIZE battle. NOLIMIT

    You mean, no limit if you have an infinite amount of time available for searching. Which none of us have (dunno, you may be immortal, but we aren't).

    @rc_pinchey said:

    (NB- I don't even HAVE any text files bigger than about 1k) 

    Then you're obviously doing something wrong. Ask professor Swamp.

     



  • How did you create this 1GB file? I would have thought Notepad would choke on 1GB.



  • @elgate said:

    I would have thought Notepad would choke on 1GB.
     

    WordPad? (You can get that instead of notepad if you enter 'tbnp' in prompt #38)



  • @elgate said:

    How did you create this 1GB file? I would have thought Notepad would choke on 1GB.

    Oh, and by the way, <obviousity>has anyone considered</obviousity> the fact that comparing times on linear search of files is bunk? It's not like there are many ways to implement a linear full-text search - at least there are definitely not many ways to implement one that would yield a significant performance difference on modern systems...

    $ du -hc .   # total size of the test corpus ("yhteensä" is Finnish for "total")
    1,5G	.
    1,5G	yhteensä
    $ time cat * > /dev/null 
    
    real	1m29.615s
    user	0m0.112s
    sys	0m6.512s
    $ time grep "this string should not occur in any of these files" *
    
    real	1m34.591s
    user	0m0.412s
    sys	0m6.092s
    

    Woo hoo, full text search of stuff takes 5 seconds more than just reading the stuff without looking at it.



  • Copernic Testing Results

    Finding the limits.

     

    Copernic DTS testing



  • SSDS Merge takes down Copernic

    @elgate said:

    How did you create this 1GB file? I would have thought Notepad would choke on 1GB.

    I had a huge old merge of every bit of text I had. I copied and pasted that a number of times and merged them to create one that was 263Mb. I made 3 more copies of that and merged them using SSDS merge option. There was a zzz_test.txt file that contained my numeric and textual target keys. Copernic does indicate that these items have been found when I search. I'll have to check how long it is before I get to see this info in context with Copernic. Maybe open the index file with SSDS? Anybody else test Copernic? The Grepplers must be happy. Good old Grep and Search have no such limits.

    How to hide data on a Copernic System. Put it at the end of a 1Gig file. That is pretty secure. Ha ha ha ha OOPs Sorry for laughing...

     

     



  • @elgate said:

    How did you create this 1GB file? I would have thought Notepad would choke on 1GB.

    # find / -exec stat {} \; -exec file {} \; | tee big_file.txt
    

    That should generate a sufficiently large file. Note: you need about 3,100,000 files/directories.

    Add "-exec strings {} \;" after the / to get a bigger file if need be.
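
    If you just need a file of roughly that size to test with and don't care how realistic the content is, something like this should also work (a sketch; the filler line is arbitrary and the "1G" size suffix assumes GNU head):

    $ # repeat one line of filler until about 1 GB has been written
    $ yes "the quick brown fox searches for search.exe in source.txt" | head -c 1G > big_file.txt
    $ ls -lh big_file.txt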



  • @SpectateSwamp said:

    How to hide data on a Copernic System. Put it at the end of a 1Gig file. That is pretty secure. Ha ha ha ha OOPs Sorry for laughing...

    WHO THE HELL HAS A 1GB TEXT FILE? Just answer this, will you?



  • @SpectateSwamp said:

    Finding the limits.
     

    Ok, if you say you've found Copernic's limit, I'll let you believe you did. As was mentioned by several people, no sane person has a 1GB text file. That's not what desktop searching is about.

    Further instructions for you to follow:

    Place some text in the text file so that Copernic DOES find it (near the end of the file, please). Now measure the exact time it takes Copernic to find it. Then measure the exact time it takes SSDS to load the file and show the search term ON THE SAME FILE!!

    THAT might clear things up.

    Search time is not indexing time. Boooo! 



  • SSDS Merge takes down Copernic

    @Renan_S2 said:

    WHO THE HELL HAS A 1GB TEXT FILE? Just answer this, will you?

    I do. And it's a safe place to store my secret data.


  • @SpectateSwamp said:

    @Renan_S2 said:

    WHO THE HELL HAS A 1GB TEXT FILE? Just answer this, will you?

    I do. And it's a safe place to store my secret data.

     

    Yes, you would; no sane person with even a basic grasp of how to use a computer would do that, though. Other people have grasped the concept of having separate files in different directories. As I keep pointing out, one big file is useless to every single application that has ever existed apart from SSDS. Please tell me how I would use this large merged file with Visual Studio, or a compiler, or any other tool for that matter.

    The whole idea of storing everything in one large file is out-and-out stupid and defeats the whole point of file systems...



  • @SpectateSwamp said:

    And it's a safe place to store my secret data.
     

     

    Much like a shoe box is a safe place to throw all my papers and call it "storage". Seriously, don't you get that you are defeating one of the purposes of a filesystem?

     

     

     



  • Don't let big files scare you.

    @derula said:

    @SpectateSwamp said:

    SSDS wins the SIZE battle. NOLIMIT

    You mean, no limit if you have an infinite amount of time available for searching. Which none of us have (dunno, you may be immortal though, but we aren't)

    @rc_pinchey said:

    (NB- I don't even HAVE any text files bigger than about 1k) 

    Then you're obviously doing something wrong. Ask professor Swamp.

     

    1k files only. That's because the old Desktop Search engines are no good at BIG files. My TheDailyWTF.txt file is now 1.6Mb and 45,000 lines. I can check who has asked what questions. I cut and paste stuff off the net all the time. Now you can too. Don't be afraid of big Files


  • @SpectateSwamp said:

    @Renan_S2 said:

    WHO THE HELL HAS A 1GB TEXT FILE? Just answer this, will you?

    I do. And it's a safe place to store my secret data.

     

    Feeding the internet without spaghetti you completely with an
    interesting example or any idiot!  With Google Maps, you have to move your
    app, but will never refer to them as the internet Search Computing's easy
    button. Am have this for others files into SSDS?

    QUESTION MY FUCKING ANSWER



  • @SpectateSwamp said:

    1k files only. That's because the old Desktop Search engines are no good at BIG files. My TheDailyWTF.txt file is now 1.6Mb and 45,000 lines. I can check who has asked what questions. I cut and paste stuff off the net all the time. Now you can too. Don't be afraid of big Files

    ...which reminds me of one thing.

    Desktop search apps, philosophically, answer questions for us. It can be as simple as "Where did I put 'Moonlight Shadow' - the original, not the bumpy techno version?" (Music file tags) Or "When did I discuss with my friend X about chessboards?" (IM log search)

    How well does SSDS present us the search results on specific types of data? How extensible is it to search specific kinds of data? Please don't dismiss the question too easily, since you claim it's easy to search the posts in this thread - surely it will easily present all pertinent details about the posts that match the search criteria. See, for example, Tracker screenshots: Just like the screenshot has all of the information about the PDF in question, surely SSDS will show all of the relevant metadata about the message board post in question (sender, date, original thread page, permalink URL to the post, etc).

    Checking who wrote a document is a snap in Tracker Search Tool: Just click on the darn file and the details appear at the bottom of the window, precisely and clearly listed.

    So, exactly how hard is to, as you claim, "check who asked what questions" in this thread with SSDS?

    Edit:

    @SpectateShit said:

    QUESTION MY FUCKING ANSWER

    I prefer the expression "Scepticism and source criticism is good for you". =)



  • SSDS & Metadata - Plays your tunes

    @WWWWolf said:

    How well does SSDS present us the search results on specific types of data? How extensible is it to search specific kinds of data? Please don't dismiss the question too easily,

    There are many ways this search can be extended. IFilters to handle other formats. Data extracts should be easy to get from any system. The change I'd make would be to allow for a match on line 1 to result in the file on line 2 being opened with the originating program. If the original data file had 10,000 lines then that would add 20,000 lines to the search file. 100 files that size would be 2,000,000 lines of text to search. When a match is found, pop open that file. Very easy changes to make.
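
    A minimal sketch of that two-lines-per-record idea, assuming a hypothetical catalog.txt in which every text line is followed by a line holding the path of the file it came from (the file name and layout here are made up for illustration, not part of SSDS):

    $ # print the source path stored on the line after each matching text line
    $ awk '/search\.exe/ { if ((getline path) > 0) print path }' catalog.txt | sort -u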

    @WWWWolf said:

    surely SSDS will show all of the relevant metadata about the message board post in question

    The metadata for mp3 can be found if you search for "temptag." in the source.txt. An external program doing a simple extract would probably be easiest. Given a list of files to catalog.  That way no directory coding would be needed.

    Still waiting for someone with a dancing dog to do a video demo of XX Random music play.

    In checking some of the Copernic results. The first match is highlighted in blue - Very good. The next matches are in yellow. Also good. But it is on a small portion of the screen and the scroll bar is needed to search for the next one by EYE. SSDS does the search for you, just Enter, Enter. I can easily video this "data in context" match-up. You ALWAYS want data in context. Always.



  • SSDS defeats FileSystems

    @Renan_S2 said:

    Much like a shoe box is a safe place to throw all my papers and call it "storage".

    With Swamp search it is like having X-Ray vision and the speed of the Flash. You could plow through all your paperwork in no time Flat. Pure brute force.

    @Renan_S2 said:

    you are defeating one of the purposes of a filesystem

    YaHoo FileSystems have been defeated.



  • @SpectateSwamp said:

    @WWWWolf said:

    How well does SSDS present us the search results on specific types of data? How extensible is it to search specific kinds of data? Please don't dismiss the question too easily,

    There are many ways this search can be extended. IFilters to handle other formats. Data extracts should be easy to get from any system. The change I'd make would be to allow for a match on line 1 to result in the file on line 2 being opened with the originating program. If the original data file had 10,000 lines then that would add 20,000 lines to the search file. 100 files that size would be 2,000,000 lines of text to search. When a match is found, pop open that file. Very easy changes to make.

    @WWWWolf said:

    surely SSDS will show all of the relevant metadata about the message board post in question

    The metadata for mp3 can be found if you search for "temptag." in the source.txt. An external program doing a simple extract would probably be easiest. Given a list of files to catalog.  That way no directory coding would be needed.

    Still waiting for someone with a dancing dog to do a video demo of XX Random music play.

    In checking some of the Copernic results. The first match is highlighted in blue - Very good. The next matches are in yellow. Also good. But it is on a small portion of the screen and the scroll bar is needed to search for the next one by EYE. SSDS does the search for you, just Enter, Enter. I can easily video this "data in context" match-up. You ALWAYS want data in context. Always.

     

    Bull shit: entire Pdf ebooks on the merge video will provide.

    You is a file easy button Am in reply to Get in reply to thoughtlessly
    submit to search Computing's go up.  If tt page starts stops: When a
    book here's why not always found violent opposition from the accurate
    for in reply to elgate: Not Ranked Joined on Youtube which includes the
    most of different file I'd generate, and this reply these guys, change
    it up, on the common contacts and much fun getting more a fitness
    function or but SpectateSwamp.  Even more flexibility portability and
    swampsearch thingy, every you have always found hits.

    Ssds for other side resorts to MasterPlanSoftware notify purchasing dept
    to tdittmar Top Contributor Joined on the swampling am could do you
    wish to an which is it.  Cameras!  Showing off Ssds this Search
    showdown with the manual cut and figured out of the contents.  I want
    no more of missing Out with each searching mediocrities: irratating.
    So group In a text these.

    MasterPlanSoftware Top Contributor Joined on weekday Desktop background
    best post write it works just as That's the brand I think.



  • Re: SSDS defeats FileSystems

    @SpectateSwamp said:

    @Renan_S2 said:
    you are defeating one of the purposes of a filesystem
    YaHoo FileSystems have been defeated.
     

    He is defeating all the purposes file systems exist for. He is a cock.



  • @SpectateSwamp said:

    My TheDailyWTF.txt file is now 1.6Mb and 45,000 lines. I can check who has asked what questions. I cut and paste stuff off the net all the time. Now you can too.

    You do realise that most other users just use the 'search' box at the top of every page to search through this thread, right? No need to cut, paste and keep a file of 1.6MB and 45,000 lines.



  • @SpectateSwamp said:

    @Renan_S2 said:

    WHO THE HELL HAS A 1GB TEXT FILE? Just answer this, will you?

    I do.

    Actually, you don't. You said you had to merge quite a lot of files together to get a reasonably big text file, after which you had to multiply its contents by a factor of 8 to get to a text file of 1GB. That's pretty much 8 times what you would be able to write, copy or paste during your lifetime. It would be sad if you needed to be 8 times your current age before 'successfully' being able to store information securely from any other desktop search.

    And what do you do when someone opens that file with, say, grep?

    Did you actually try to search that bit of text in that +1GB file also with SSDS?  I would love to see some numbers on that...



  • Mr Copernic ordered to Swamp Shack

    If Old man Copernic were smart, he would dump Swamp Search right in where the preview screen pops up. SSDS does it much faster and does line wrap. Copernic's display DOESN'T. And having to visually check for the next match? How poor is that. That's what computers are made for. Copernic needs to get their search to the next level. Copernic's entire staff are hereby ordered to Swamp Shack. 



  • @SpectateSwamp said:

    @derula said:

    @rc_pinchey said:

    (NB- I don't even HAVE any text files bigger than about 1k) 
    Then you're obviously doing something wrong. Ask professor Swamp.
    1k files only. That's because the old Desktop Search engines are no good at BIG files.
     

    No, it isn't. It's because I've never needed or wanted a massive text file. Why would I?

    It CERTAINLY isn't because of the limits of any desktop search application, because I don't USE desktop search applications. I have a well organised filesystem, and on the rare occasions I do need to search for something (maybe once a month?) the built-in windows search does just fine. 



  • @SpectateSwamp said:

    And having to visually check for the next match? How poor is that. That's what computers are made for. 

    So how does SSDS tackle finding the middle appearance of a word that appears multiple times in a document?



  • SSDS stats impress

    @wooter said:

    And what do you do when someone opens that file with, say, grep?

    You can't stop the Grepplers from finding your secret data. But they are a dying breed and not that many are left. The Indexers are easy to fool. They can't find nothing.

    @wooter said:

    Did you actually try to search that bit of text in that +1GB file also with SSDS?  I would love to see some numbers on that...

     

    I did some numbers for that 789Mb file. To display the number I placed very near the end of the file IN Context, I used the fast way instead of the slow "c" context method. The fastest way is to do an "s" single line matching search. When the single match is displayed, enter "p" to display the previous match in context. The "p" option knows where to read to before starting the search. It took about 40 seconds for the first part and about the same for the 2nd, so a minute and 20 seconds for a very huge text file. For Copernic it was 2 hrs and 19 minutes before anything could be looked for. And don't try and open that big of a file in the preview screen. VERY very slow. My system isn't that powerful. You could experience much faster searches.
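
    For comparison, a rough command-line way to jump straight to the last occurrence in a huge file (a sketch only; bigfile.txt is a placeholder, and GNU tac reads a regular file starting from its end, so a match near the end turns up almost immediately):

    $ # scan backwards from the end of the file and stop at the first (i.e. last) match
    $ tac bigfile.txt | grep -m1 "search\.exe"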



  • Re: SSDS defeats FileSystems

    @SpectateSwamp said:

    YaHoo FileSystems have been defeated.

     

    Yahoo is not a filesystem. Google is not a filesystem. SSDS is not a filesystem.

    FAT 12/16/32, NTFS, EXT2/EXT3, ReiserFS: THOSE are filesystems. Until you can write filesystem drivers, quit trying to replace filesystems. It isn't any Desktop Search's job to replace filesystems; it is their job to SEARCH EXISTING FILESYSTEMS.

