-
844
Search for all specified files on the hard disk, with optional cleanup
Searches the hard disk for every file with the "._" prefix, and lets you choose whether to clean them up.
- 2013-03-30 00:29:53 Download
- Points: 1
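The search-then-optionally-clean behaviour described above can be sketched in a few lines of Python (the function names are illustrative; the actual package's interface is unknown):

```python
import os

def find_dot_underscore_files(root):
    """Recursively collect files whose names start with the "._" prefix."""
    matches = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.startswith("._"):
                matches.append(os.path.join(dirpath, name))
    return matches

def clean_up(paths, dry_run=True):
    """Delete the matched files; with dry_run=True nothing is removed,
    modelling the 'clean up only if the user chooses to' behaviour."""
    for path in paths:
        if not dry_run:
            os.remove(path)
    return len(paths)
```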
-
zhinengsousuo
Intelligent search function.
- 2013-11-22 14:24:06 Download
- Points: 1
-
baidu
Description: A Baidu paid-search (bidding) auto-click program that can automatically distinguish competitors' websites from your own.
- 2008-09-29 16:13:43 Download
- Points: 1
-
SinaSpider-master
A distributed Sina Weibo crawler using the master-slave model, written in pure Python.
- 2016-12-23 17:33:07 Download
- Points: 1
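A single-process analogue of the master-slave split can be sketched with a work queue; in the real distributed crawler the master and slaves would run on separate machines and the fetch would be an HTTP request to Weibo (all names here are illustrative, not from the package):

```python
import queue
import threading

def master(user_ids, tasks, num_slaves):
    """The master only dispatches work: it fills the queue with user ids,
    then sends one poison pill per slave so they shut down cleanly."""
    for uid in user_ids:
        tasks.put(uid)
    for _ in range(num_slaves):
        tasks.put(None)

def slave(tasks, results, lock):
    """Each slave pulls a user id and 'fetches' its page; a real crawler
    would issue a network request here instead."""
    while True:
        uid = tasks.get()
        if uid is None:
            break
        page = "profile page of %s" % uid  # placeholder for a network fetch
        with lock:
            results.append((uid, page))

def crawl(user_ids, num_slaves=3):
    tasks, results, lock = queue.Queue(), [], threading.Lock()
    slaves = [threading.Thread(target=slave, args=(tasks, results, lock))
              for _ in range(num_slaves)]
    for t in slaves:
        t.start()
    master(user_ids, tasks, num_slaves)
    for t in slaves:
        t.join()
    return results
```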
-
SearchEngine
An implementation of a search engine, based on Java and Lucene.
- 2009-07-10 10:40:09 Download
- Points: 1
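Lucene's core data structure is an inverted index; although the package itself is Java, the idea can be illustrated in a few lines of Python (a toy sketch of the concept, not Lucene's actual API):

```python
from collections import defaultdict

def build_index(docs):
    """Map each term to the set of ids of documents containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """AND-query: intersect the posting sets of all query terms."""
    terms = query.lower().split()
    if not terms:
        return set()
    result = set(index.get(terms[0], set()))
    for term in terms[1:]:
        result &= index.get(term, set())
    return result
```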
-
1024crawer-master
Description: A Python-based crawler for 1024 that can download its articles and pictures into the current directory.
- 2020-04-05 17:35:35 Download
- Points: 1
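The picture-downloading step of such a crawler boils down to extracting `img` links from fetched HTML; a minimal sketch of that parsing stage with the standard library (the network fetch and file saving are omitted, and nothing here comes from the package itself):

```python
from html.parser import HTMLParser

class ImageLinkParser(HTMLParser):
    """Collect the src attribute of every <img> tag in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            for name, value in attrs:
                if name == "src" and value:
                    self.links.append(value)

def extract_image_links(html):
    parser = ImageLinkParser()
    parser.feed(html)
    return parser.links
```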
-
418668
Quick disk search source-code example. Built with the Easy Language (易语言) extended UI support library, the program implements fast disk search by running a BAT command-line file.
- 2016-12-20 11:27:00 Download
- Points: 1
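The technique, shelling out to the platform's file-search command and capturing its output, can be sketched in Python; the original runs a Windows BAT file (presumably around `dir /s /b`), while this sketch uses the POSIX `find` command in the same role:

```python
import subprocess

def quick_search(root, pattern):
    """Run the system file-search command and return matching paths.
    The Easy Language original runs a BAT file on Windows; here the
    POSIX find(1) command plays the equivalent role."""
    proc = subprocess.run(
        ["find", root, "-name", pattern],
        capture_output=True, text=True, check=True,
    )
    return [line for line in proc.stdout.splitlines() if line]
```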
-
PMVFAST
In inter-frame prediction, the diamond search method can accurately locate the matching macroblock or sub-block, effectively reducing the bit rate and improving image quality.
- 2009-03-31 17:26:06 Download
- Points: 1
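The diamond search itself is easy to sketch: repeat the large diamond pattern until the best SAD stays at the center, then refine once with the small diamond. This is a simplified illustration; PMVFAST adds motion-vector predictors and early-exit thresholds on top of this basic loop:

```python
# Large and small diamond search patterns (offsets around the current center).
LDSP = [(0, 0), (0, -2), (1, -1), (2, 0), (1, 1), (0, 2), (-1, 1), (-2, 0), (-1, -1)]
SDSP = [(0, 0), (0, -1), (1, 0), (0, 1), (-1, 0)]

def sad(cur, ref, bx, by, dx, dy, n):
    """Sum of absolute differences between the n*n block of cur at (bx, by)
    and the block of ref displaced by the motion vector (dx, dy)."""
    total = 0
    for y in range(n):
        for x in range(n):
            total += abs(cur[by + y][bx + x] - ref[by + dy + y][bx + dx + x])
    return total

def diamond_search(cur, ref, bx, by, n, max_range):
    """Return the motion vector (dx, dy) found by diamond search."""
    def candidates(cx, cy, pattern):
        for ox, oy in pattern:
            dx, dy = cx + ox, cy + oy
            if (abs(dx) <= max_range and abs(dy) <= max_range
                    and 0 <= by + dy and by + dy + n <= len(ref)
                    and 0 <= bx + dx and bx + dx + n <= len(ref[0])):
                yield sad(cur, ref, bx, by, dx, dy, n), (dx, dy)

    cx = cy = 0
    while True:  # large diamond until the minimum stays at the center
        _, best = min(candidates(cx, cy, LDSP))
        if best == (cx, cy):
            break
        cx, cy = best
    _, best = min(candidates(cx, cy, SDSP))  # final small-diamond refinement
    return best
```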
-
chinafenci
Chinese word segmentation: reads a txt document and then classifies the words.
- 2009-11-18 23:03:20 Download
- Points: 1
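The classic entry-level approach for this task is forward maximum matching against a word dictionary; a minimal sketch (the package's real dictionary and classification scheme are unknown, so the dictionary here is illustrative):

```python
def fmm_segment(text, dictionary, max_len=4):
    """Forward maximum matching: at each position take the longest dictionary
    word starting there, falling back to a single character."""
    words = []
    i = 0
    while i < len(text):
        for length in range(min(max_len, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if length == 1 or candidate in dictionary:
                words.append(candidate)
                i += length
                break
    return words
```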
-
python_sina_crawl
A crawler for Sina Weibo. To run it: after saving all the code, open Main.py, set LoginName to your Sina Weibo account and PassWord to your password. Run Main.py; the program will create a CrawledPages folder in the current directory and save all crawled files into it.
- 2021-04-08 16:39:00 Download
- Points: 1
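The CrawledPages bookkeeping described above amounts to creating the folder on demand and writing each fetched page into it; a minimal sketch of that step (the real program's file-naming convention is unknown, so the `.html` suffix is an assumption):

```python
import os

def save_page(content, page_id, out_dir="CrawledPages"):
    """Write one crawled page into out_dir, creating the folder if needed."""
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, "%s.html" % page_id)
    with open(path, "w", encoding="utf-8") as f:
        f.write(content)
    return path
```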