▍1. Iris
Description: A wrapper around the KNN algorithm, used to get familiar with the IRIS dataset and classify the iris flower data.
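As a rough illustration of what such a wrapper might look like, here is a minimal sketch using scikit-learn's built-in iris dataset and KNeighborsClassifier; the class name SimpleKNN and the choice of k=3 are assumptions, not the project's actual code.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score


class SimpleKNN:
    """Thin wrapper around scikit-learn's KNN classifier (hypothetical API)."""

    def __init__(self, k=3):
        self.model = KNeighborsClassifier(n_neighbors=k)

    def fit(self, X, y):
        self.model.fit(X, y)
        return self

    def predict(self, X):
        return self.model.predict(X)


if __name__ == "__main__":
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42
    )
    knn = SimpleKNN(k=3).fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, knn.predict(X_test)))
```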
Description: Materials for learning sklearn and TensorFlow.
Description: Deep learning algorithms and examples for one-dimensional signals, with detailed explanations, bug-free and highly readable code, hands-on teaching, and a complete implementation.
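As a hedged sketch of the kind of model such a tutorial covers, the following defines a small 1D convolutional network in Keras for classifying fixed-length one-dimensional signals; the signal length (1024), number of classes (4), and layer sizes are placeholder assumptions rather than anything from the project.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Minimal 1D CNN for signal classification (all sizes are placeholders).
SIGNAL_LEN, NUM_CLASSES = 1024, 4  # assumed values

model = models.Sequential([
    layers.Input(shape=(SIGNAL_LEN, 1)),
    layers.Conv1D(16, kernel_size=7, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Conv1D(32, kernel_size=5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```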
Description: A simple neural network program that recognises handwritten digits (each handwritten digit is converted into a 784-value representation). The project ships 100 training samples and 10 test samples, reaching about 80% accuracy; enlarging the training set to 1000 samples raises the accuracy to about 95%.
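A minimal sketch of such a network, assuming the 784-dimensional inputs are already loaded into NumPy arrays (the original project's file format is not specified, and it may well use plain NumPy rather than Keras); the hidden-layer size and hyperparameters are assumptions.

```python
import numpy as np
from tensorflow.keras import layers, models

# Assume X_train has shape (n_samples, 784) with pixel values in [0, 1]
# and y_train holds integer digit labels 0-9 (loading code is project-specific).
X_train = np.random.rand(100, 784).astype("float32")  # placeholder data
y_train = np.random.randint(0, 10, size=100)          # placeholder labels

model = models.Sequential([
    layers.Input(shape=(784,)),
    layers.Dense(64, activation="relu"),    # single hidden layer
    layers.Dense(10, activation="softmax")  # one output per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=10, batch_size=10, verbose=0)
```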
Description: An artificial intelligence course assignment that uses a CNN in TensorFlow for handwritten digit recognition and compares it against an MLP. The write-up analyses the code's principles and structure in some detail, and the complete program, which can be run directly in Python, is attached at the end.
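A hedged sketch of a CNN of this kind using tf.keras and the built-in MNIST loader; the layer sizes are assumptions, and the MLP baseline mentioned above is omitted for brevity.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Load MNIST, scale pixels to [0, 1], and add a channel dimension for Conv2D.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0
x_test = x_test[..., None] / 255.0

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1, batch_size=128,
          validation_data=(x_test, y_test))
```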
Description: Implements an image attention mechanism for deep learning that strengthens the network's learning ability and improves its accuracy and generalisation; it can easily be embedded into other networks such as DenseNet and ResNet.
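The entry does not name a specific attention variant; one common choice that drops easily into DenseNet/ResNet backbones is a squeeze-and-excitation style channel-attention block, sketched here in Keras as an assumption rather than the project's own module.

```python
import tensorflow as tf
from tensorflow.keras import layers


def channel_attention(x, reduction=16):
    """Squeeze-and-excitation style channel attention (assumed variant).

    Global-average-pools each channel, passes the result through a small
    bottleneck MLP, and rescales the input feature map channel-wise.
    """
    channels = x.shape[-1]
    w = layers.GlobalAveragePooling2D()(x)               # squeeze: (B, C)
    w = layers.Dense(channels // reduction, activation="relu")(w)
    w = layers.Dense(channels, activation="sigmoid")(w)  # excitation: (B, C)
    w = layers.Reshape((1, 1, channels))(w)
    return layers.Multiply()([x, w])                     # channel-wise rescale


# Usage sketch: insert after any convolutional block of a backbone.
inputs = layers.Input(shape=(32, 32, 64))
outputs = channel_attention(inputs)
model = tf.keras.Model(inputs, outputs)
```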
Description: Predicting the Titanic survival data with several methods (SVM, decision tree, random forest).
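A minimal sketch of comparing the three classifiers with scikit-learn; the CSV path "titanic.csv" and the chosen feature columns are assumptions, and real Titanic data typically needs more preprocessing than shown here.

```python
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Hypothetical path and minimal feature set; real pipelines do more cleaning.
df = pd.read_csv("titanic.csv")
df["Sex"] = df["Sex"].map({"male": 0, "female": 1})
df["Age"] = df["Age"].fillna(df["Age"].median())
X = df[["Pclass", "Sex", "Age", "Fare"]]
y = df["Survived"]

models = {
    "svm": SVC(),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```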
Description: Parameter identification for lithium-ion batteries, treating each parameter as a weight or threshold (bias) of a BP (back-propagation) neural network and learning it during training.
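The entry gives no model details, so the following is only a loose illustration of the general idea: treat the unknown model parameters as trainable weights and learn them by backpropagation-style gradient descent. The exponential voltage model, the data, and all values below are invented for illustration and are not the project's battery model.

```python
import numpy as np
import tensorflow as tf

# Hypothetical measured data: time stamps and a voltage-like signal.
t = np.linspace(0, 10, 200).astype("float32")
v_measured = (3.7 - 0.5 * np.exp(-0.8 * t)).astype("float32")

# Unknown model parameters treated as trainable weights (illustrative model:
# v(t) = a - b * exp(-c * t); a real battery model would differ).
a = tf.Variable(3.0)
b = tf.Variable(1.0)
c = tf.Variable(0.1)
optimizer = tf.keras.optimizers.Adam(learning_rate=0.05)

for step in range(500):
    with tf.GradientTape() as tape:
        v_pred = a - b * tf.exp(-c * t)
        loss = tf.reduce_mean(tf.square(v_pred - v_measured))
    grads = tape.gradient(loss, [a, b, c])
    optimizer.apply_gradients(zip(grads, [a, b, c]))

print("identified parameters:", a.numpy(), b.numpy(), c.numpy())
```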
Description: Chatbot. Principle: strictly speaking, an "open-domain generative dialogue model based on deep learning". The framework is Keras (a high-level wrapper around TensorFlow); the approach is LSTM (long short-term memory, a variant of the mainstream RNN) + seq2seq (sequence-to-sequence model), plus an attention mechanism; word segmentation is done with jieba and the UI is Tkinter. Trained on the "Qingyun" (青云) corpus of 100,000+ casual-chat dialogues. Runtime environment: Python 3.6 or later, TensorFlow, pandas, numpy, jieba.
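A hedged sketch of the core LSTM encoder-decoder (seq2seq) training graph in Keras; the vocabulary, embedding, and hidden sizes are placeholders, and the attention layer, jieba tokenisation, Qingyun corpus loading, and Tkinter UI are all omitted.

```python
from tensorflow.keras import layers, models

VOCAB, EMB, HIDDEN = 10000, 128, 256  # placeholder sizes

# Encoder: embed the input question and keep only the final LSTM states.
enc_in = layers.Input(shape=(None,), name="encoder_tokens")
enc_emb = layers.Embedding(VOCAB, EMB)(enc_in)
_, state_h, state_c = layers.LSTM(HIDDEN, return_state=True)(enc_emb)

# Decoder: generate the reply conditioned on the encoder states
# (teacher forcing during training; attention omitted in this sketch).
dec_in = layers.Input(shape=(None,), name="decoder_tokens")
dec_emb = layers.Embedding(VOCAB, EMB)(dec_in)
dec_out, _, _ = layers.LSTM(
    HIDDEN, return_sequences=True, return_state=True
)(dec_emb, initial_state=[state_h, state_c])
logits = layers.Dense(VOCAB, activation="softmax")(dec_out)

model = models.Model([enc_in, dec_in], logits)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```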
Description: The most basic form of image compression; an introductory deep-learning tutorial suitable for beginners.
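"Basic image compression" in an introductory deep-learning setting is usually a small autoencoder; the sketch below is an assumed example on MNIST with a 32-dimensional bottleneck, not the project's actual code.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Flattened 28x28 images compressed to a 32-dimensional code and reconstructed.
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784) / 255.0
x_test = x_test.reshape(-1, 784) / 255.0

inputs = layers.Input(shape=(784,))
code = layers.Dense(32, activation="relu")(inputs)       # encoder / compressed code
decoded = layers.Dense(784, activation="sigmoid")(code)  # decoder / reconstruction
autoencoder = models.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(x_train, x_train, epochs=5, batch_size=256,
                validation_data=(x_test, x_test))
```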
Description: A DBSCAN implementation in Python that also uses libraries such as NumPy and pandas. The algorithm has been tested on the two chameleon datasets t4.8k and t5.8k, and the results are visualised with matplotlib. For comparison, the output is checked against scikit-learn's DBSCAN implementation. Homogeneity and separation are computed for each dataset to examine similarity within clusters and differences between them. The epsilon and minimum-points values are 8.5 and 16.5 respectively.
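A short sketch of the scikit-learn comparison step using the parameters quoted above; the file path for the chameleon t4.8k data is assumed, and since min_samples must be an integer in scikit-learn, the quoted 16.5 is rounded down here.

```python
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import DBSCAN

# Chameleon t4.8k is a whitespace-separated file of 2-D points (path assumed).
points = pd.read_csv("t4.8k.txt", sep=r"\s+", header=None, names=["x", "y"])

# Parameters from the description: eps = 8.5, min points ~ 16 (rounded from 16.5).
labels = DBSCAN(eps=8.5, min_samples=16).fit_predict(points[["x", "y"]])

plt.scatter(points["x"], points["y"], c=labels, s=2, cmap="tab20")
plt.title("scikit-learn DBSCAN on chameleon t4.8k (eps=8.5, min_samples=16)")
plt.show()
```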
Description: Entry-level Python reference code for deep learning, helpful for getting started with the subject.