1807.01622
Description:
Deep neural networks excel at function approximation, yet they are typically trained from scratch for each new function. Bayesian methods, on the other hand, such as Gaussian processes (GPs), can exploit prior knowledge to perform rapid inference at test time. However, GPs are computationally expensive, and it is hard to design appropriate priors. In this paper we propose a neural model, Conditional Neural Processes (CNPs), that combines the benefits of both. CNPs are inspired by the flexibility of stochastic processes such as GPs, but are structured as neural networks and trained via gradient descent. CNPs make accurate predictions after observing only a handful of training data points, yet scale to complex functions and large datasets. We demonstrate the performance and versatility of the approach on a range of canonical machine learning tasks, including regression, classification and image completion.
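The abstract's core idea — conditioning a neural network on an observed context set rather than retraining from scratch — can be sketched as an encode–aggregate–decode forward pass. The sketch below is a minimal toy illustration, not the paper's implementation: the layer sizes, single-matrix "MLPs", and untrained random weights are all assumptions for demonstration; in the paper the encoder and decoder are full MLPs trained end-to-end by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

d_hid = 8  # hypothetical width of the per-point representation

# Hypothetical stand-ins for the trained encoder/decoder networks.
W_enc = rng.normal(size=(2, d_hid))      # encodes each (x, y) context pair
W_dec = rng.normal(size=(d_hid + 1, 2))  # decodes (summary, x_target) -> (mean, log_sigma)

def cnp_forward(x_ctx, y_ctx, x_tgt):
    # Encode every context pair independently.
    h = np.tanh(np.stack([x_ctx, y_ctx], axis=1) @ W_enc)   # (n_ctx, d_hid)
    # Permutation-invariant aggregation: average over context points,
    # so the prediction depends on the context set, not its order.
    r = h.mean(axis=0)                                      # (d_hid,)
    # Decode each target input conditioned on the shared summary r,
    # producing a predictive mean and standard deviation.
    inp = np.concatenate([np.tile(r, (len(x_tgt), 1)),
                          x_tgt[:, None]], axis=1)          # (n_tgt, d_hid + 1)
    out = inp @ W_dec
    mean, log_sigma = out[:, 0], out[:, 1]
    return mean, np.exp(log_sigma)

# Condition on three observed points of a toy function, predict at two new inputs.
x_ctx = np.array([0.0, 0.5, 1.0])
y_ctx = np.sin(x_ctx)
mean, sigma = cnp_forward(x_ctx, y_ctx, np.array([0.25, 0.75]))
print(mean.shape, sigma.shape)
```

Mean aggregation is what lets the same network handle context sets of any size, and what makes the output invariant to the order of the observations — the property that connects CNPs to stochastic processes like GPs.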
File list:
1807.01622.pdf, 3,514,192 bytes, 2018-09-14