
IntroMatlabGraphics

Published 2011-05-05 · File size: 125 KB
Download credits: 1 · Downloads: 2

Code description:

  Introduction to Matlab Graphics document.



  • meaningfulalignment
    Implementation of the "meaningful alignment" paper.
    2010-05-16 13:49:50
    Credits: 1
  • 570486690TDIDF_Demo
    Detailed explanation of the TF-IDF calculation method, with extensive code comments and accompanying help documentation.
    2015-04-03 10:18:35
    Credits: 1
  • xepersian1.0.1-dev0.4.tar
    About LaTeX for writing articles.
    2013-12-24 05:06:26
    Credits: 1
  • gmskmod
    GMSK modulation: first computes the phase, then applies carrier modulation and outputs the GMSK signal.
    2010-06-30 21:31:45
    Credits: 1
  • suijizhengtaixulie
    Uses MATLAB to generate a random normally distributed sequence.
    2009-11-22 14:14:11
    Credits: 1
  • Example8_11
    Cluster analysis in MATLAB using fuzzy-mathematics methods.
    2011-04-21 22:39:02
    Credits: 1
  • image-processing-by-matlab
    Image processing with MATLAB.
    2011-05-20 01:08:27
    Credits: 1
  • P3026Y
    A data format used in the Middle East; the data processing itself is not difficult, but the large number of features makes it hard.
    2007-01-08 15:17:02
    Credits: 1
  • pid_dmc2
    Dynamic matrix control combined with a PID algorithm; strongly theoretical, and a good program for a graduation project.
    2020-09-06 14:08:08
    Credits: 1
  • WindyGridWorldQLearning
    Q-learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian domains. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states. This paper presents and proves in detail a convergence theorem for Q-learning based on that outlined in Watkins (1989). We show that Q-learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely. We also sketch extensions to the cases of non-discounted, but absorbing, Markov environments, and where many Q values can be changed each iteration, rather than just one.
    2013-04-19 14:23:35
    Credits: 1
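The TFIDF_Demo entry above refers to the standard term-weighting scheme tf(t, d) × log(N / df(t)). A minimal sketch of that calculation in Python (the demo's exact normalization is unknown; this uses the common relative-frequency tf and unsmoothed idf variant, and all names here are illustrative):

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute TF-IDF weights per document.

    docs: list of documents, each a list of tokens.
    Returns one {term: weight} dict per document, where
    weight = (count / doc_length) * log(N / df(term)).
    """
    n_docs = len(docs)
    # document frequency: in how many documents each term appears
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({t: (c / total) * math.log(n_docs / df[t])
                        for t, c in tf.items()})
    return weights
```

Note that a term occurring in every document gets idf = log(N/N) = 0, so its weight vanishes; real implementations often add smoothing to avoid this.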
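The WindyGridWorldQLearning entry describes the tabular Q-learning update, Q(s,a) ← Q(s,a) + α[r + γ max Q(s',·) − Q(s,a)]. A minimal sketch on a toy deterministic chain world (not the windy grid world of the download; the environment and parameters here are illustrative):

```python
import random

def q_learning(n_states=5, episodes=400, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a chain: action 1 moves right, action 0 moves
    left; reaching the rightmost state yields reward 1 and ends the episode."""
    rng = random.Random(seed)
    n_actions = 2
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(200):  # step cap keeps early random episodes short
            if rng.random() < eps:
                a = rng.randrange(n_actions)  # epsilon-greedy exploration
            else:
                best = max(Q[s])
                a = rng.choice([i for i in range(n_actions) if Q[s][i] == best])
            s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # the Watkins update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
            if s == n_states - 1:
                break
    return Q
```

With enough episodes the greedy policy moves right from every state, consistent with the convergence guarantee the abstract states: all actions must keep being sampled in all states, which the epsilon-greedy exploration ensures.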