## Introduction

TensorFlow is Google's second-generation machine learning system, developed as the successor to DistBelief. It is widely used in deep learning applications such as speech recognition and image recognition. Its name describes how it works: a Tensor is an N-dimensional array, and Flow refers to computation on a dataflow graph, so TensorFlow denotes the process of tensors flowing from one end of the graph to the other, that is, passing complex data structures through an artificial neural network for analysis and processing. TensorFlow is fully open source, anyone can use it, and it runs on everything from a single smartphone to thousands of data-center servers. The "Advanced Notes on Machine Learning" series will dig into hands-on TensorFlow practice, starting from scratch and moving step by step toward more advanced material.

## Installing CUDA and TensorFlow

In my experience, installing TensorFlow itself is solved with a pip command, provided you can reach the download servers; if you cannot, look for a mirror address someone has shared. GPU support, however, has many pitfalls: Nvidia's CUDA must be installed first, and there are plenty of traps along the way. I recommend installing CUDA from the Ubuntu .deb package; the run.sh installer always seems to cause problems. A detailed CUDA installation walkthrough is available; note that the TensorFlow version in that guide is a previous one. TensorFlow currently requires CUDA 7.5 + cuDNN v4, so take care when installing.

## Hello World

```python
import tensorflow as tf

hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()
print sess.run(hello)
```

First we create a constant with tf.constant, then start a TensorFlow Session and call its run method to execute the graph. Next, some simple arithmetic:

```python
import tensorflow as tf

a = tf.constant(2)
b = tf.constant(3)
with tf.Session() as sess:
    print "a=2, b=3"
    print "Addition with constants: %i" % sess.run(a + b)
    print "Multiplication with constants: %i" % sess.run(a * b)

# output
# a=2, b=3
# Addition with constants: 5
# Multiplication with constants: 6
```

Next we do a similar computation with variables defined through TensorFlow's placeholder. For details on placeholders, see https://www.tensorflow.org/versions/r0.8/api_docs/python/io_ops.html#placeholder

```python
import tensorflow as tf

a = tf.placeholder(tf.int16)
b = tf.placeholder(tf.int16)
add = tf.add(a, b)
mul = tf.mul(a, b)
with tf.Session() as sess:
    # Run every operation with variable input
    print "Addition with variables: %i" % sess.run(add, feed_dict={a: 2, b: 3})
    print "Multiplication with variables: %i" % sess.run(mul, feed_dict={a: 2, b: 3})

# output:
# Addition with variables: 5
# Multiplication with variables: 6
```

Matrix multiplication works the same way:

```python
matrix1 = tf.constant([[3., 3.]])
matrix2 = tf.constant([[2.], [2.]])
product = tf.matmul(matrix1, matrix2)  # the op that multiplies matrix1 by matrix2
with tf.Session() as sess:
    result = sess.run(product)
    print result
```

The following code comes from GitHub - aymericdamien/TensorFlow-Examples: TensorFlow Tutorial and Examples for Beginners, and is used here for learning purposes only. First, linear regression:

```python
activation = tf.add(tf.mul(X, W), b)

# Minimize the squared errors
cost = tf.reduce_sum(tf.pow(activation - Y, 2)) / (2 * n_samples)  # L2 loss
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)  # Gradient descent

# Initializing the variables
init = tf.initialize_all_variables()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)
```
```python
    # (continuing the Session block above)
    # Fit all training data
    for epoch in range(training_epochs):
        for (x, y) in zip(train_X, train_Y):
            sess.run(optimizer, feed_dict={X: x, Y: y})

        # Display logs per epoch step
        if epoch % display_step == 0:
            print "Epoch:", '%04d' % (epoch + 1), "cost=", \
                "{:.9f}".format(sess.run(cost, feed_dict={X: train_X, Y: train_Y})), \
                "W=", sess.run(W), "b=", sess.run(b)

    print "Optimization Finished!"
    print "cost=", sess.run(cost, feed_dict={X: train_X, Y: train_Y}), \
        "W=", sess.run(W), "b=", sess.run(b)

    # Graphic display
    plt.plot(train_X, train_Y, 'ro', label='Original data')
    plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line')
    plt.legend()
    plt.show()
```

Next, logistic regression on MNIST:

```python
import tensorflow as tf

# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

# Parameters
learning_rate = 0.01
training_epochs = 25
batch_size = 100
display_step = 1

# tf Graph Input
x = tf.placeholder(tf.float32, [None, 784])  # mnist data image of shape 28*28=784
y = tf.placeholder(tf.float32, [None, 10])   # 0-9 digits recognition => 10 classes

# Set model weights
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

# Construct model
pred = tf.nn.softmax(tf.matmul(x, W) + b)  # Softmax

# Minimize error using cross entropy
cost = tf.reduce_mean(-tf.reduce_sum(y * tf.log(pred), reduction_indices=1))
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Initializing the variables
init = tf.initialize_all_variables()

# Launch the graph
with tf.Session() as sess:
    sess.run(init)

    # Training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(mnist.train.num_examples / batch_size)
        # Loop over all batches
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c = sess.run([optimizer, cost], feed_dict={x: batch_xs, y: batch_ys})
            # Compute average loss
            avg_cost += c / total_batch
        # Display logs per epoch step
        if (epoch + 1) % display_step == 0:
            print "Epoch:", '%04d' % (epoch + 1), "cost=", "{:.9f}".format(avg_cost)

    print "Optimization Finished!"

    # Test model
    correct_prediction = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
    # Calculate accuracy
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
```
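To make the softmax, cross-entropy, and argmax-accuracy steps concrete, here is a minimal NumPy sketch of the same computations, independent of TensorFlow. The toy logits and one-hot labels below are invented for illustration, with 3 classes standing in for the 10 digit classes:

```python
import numpy as np

def softmax(z):
    # subtract the row-wise max first for numerical stability
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy batch: 2 examples, 3 classes (invented values)
logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.3]])
labels = np.array([[1.0, 0.0, 0.0],   # one-hot, like one_hot=True above
                   [0.0, 1.0, 0.0]])

pred = softmax(logits)

# Cross-entropy cost: mean over the batch of -sum(y * log(pred))
cost = float(np.mean(-np.sum(labels * np.log(pred), axis=1)))

# Accuracy: compare argmax of prediction and label, then average
accuracy = float(np.mean(pred.argmax(axis=1) == labels.argmax(axis=1)))
```

Each row of `pred` sums to 1, so it can be read as a probability distribution over the classes; the argmax comparison is exactly what tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1)) expresses in the graph.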
Evaluating on the test set (still inside the Session block):

```python
    print "Accuracy:", accuracy.eval({x: mnist.test.images, y: mnist.test.labels})
```

Running this produces:

```
Epoch: 0001 cost= 29.860467369
Epoch: 0002 cost= 22.001451784
Epoch: 0003 cost= 21.019925554
Epoch: 0004 cost= 20.561320320
Epoch: 0005 cost= 20.109135756
Epoch: 0006 cost= 19.927862290
Epoch: 0007 cost= 19.548687116
Epoch: 0008 cost= 19.429119071
Epoch: 0009 cost= 19.397068211
Epoch: 0010 cost= 19.180813479
Epoch: 0011 cost= 19.026808132
Epoch: 0012 cost= 19.057875510
Epoch: 0013 cost= 19.009575057
Epoch: 0014 cost= 18.873240641
Epoch: 0015 cost= 18.718575359
Epoch: 0016 cost= 18.718761925
Epoch: 0017 cost= 18.673640560
Epoch: 0018 cost= 18.562128253
Epoch: 0019 cost= 18.458205289
Epoch: 0020 cost= 18.538211225
Epoch: 0021 cost= 18.443384213
Epoch: 0022 cost= 18.428727668
Epoch: 0023 cost= 18.304270616
Epoch: 0024 cost= 18.323529782
Epoch: 0025 cost= 18.247192113
Optimization Finished!
(10000, 784)
Accuracy 0.9206
```

A small side note here. While working in the notebook, ipython notebook kept holding on to GPU resources: a notebook I had opened earlier was still occupying the GPU, and computing the accuracy then failed with "InternalError: Dst tensor is not initialized." I found the same issue reported on GitHub (InternalError: Dst tensor is not initialized), and it is indeed a GPU-memory problem. So I tried adding tf.device('/cpu:0') to pin the accuracy step to the CPU, but still ran into an OOM error. Finally, poking around with nvidia-smi, I found a Python script that had been holding more than 3 GB of GPU memory the whole time; after killing it, everything recovered. And there I was, grumbling about how a mere 10000*784 array of floats could possibly blow up the GPU memory. The problem was my own all along.

As for the logistic regression here: the model is a softmax function for multi-class classification, which roughly means that among the 10 classes it takes the one with the highest predicted probability as the final classification.

Honestly, basic TensorFlow does not leave much to talk about; for the syntax, the official documentation is the place to go. In the next installments I will pick some classic and interesting TensorFlow applications to walk through. After all, "show me the code" is the attitude a programmer should have.

[This article is an original piece by the columnist "大U的科技课堂". To republish it, please contact the author via WeChat (ucloud2012).]
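As a closing aside, the gradient-descent linear fit from the first example can be replicated without TensorFlow, which makes clear what GradientDescentOptimizer is doing under the hood. This is a minimal NumPy sketch with synthetic data; the true slope 2, intercept 1, noise level, learning rate, and epoch count are all invented for illustration:

```python
import numpy as np

rng = np.random.RandomState(0)

# Synthetic training data (invented): roughly y = 2x + 1 plus noise
train_X = rng.uniform(0, 10, size=50)
train_Y = 2.0 * train_X + 1.0 + rng.normal(0.0, 0.5, size=50)

W, b = 0.0, 0.0        # start from zero, like the tf.Variable initial values
learning_rate = 0.01
n_samples = train_X.shape[0]

for epoch in range(2000):
    pred = W * train_X + b
    err = pred - train_Y
    # Hand-derived gradients of cost = sum(err**2) / (2 * n_samples),
    # the same L2 loss used in the TF linear regression example
    W -= learning_rate * (err * train_X).sum() / n_samples
    b -= learning_rate * err.sum() / n_samples

cost = (err ** 2).sum() / (2 * n_samples)
```

The manual gradient updates are exactly what tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) automates: TensorFlow derives the gradients from the graph instead of requiring them by hand.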
