TensorFlow and Deep Learning without a PhD, Part 1 (Google Cloud Next ’17)
November 28, 2019 By Stanley Isaacs
100 Comments

Prof. Dr. Jens Dittrich says: April 19, 2017 at 12:00 pm
awesome!

Tej Kiran says: April 20, 2017 at 12:33 pm
That is so cool!!! Awesome training!!👍🏽

Bob Mickus says: April 20, 2017 at 5:59 pm
Martin, you have a superb teaching style, and a terrific way of walking through how neural networks work. I was so drawn in and engaged that I hardly noticed that 55 minutes had passed. Thank you for really bolstering my understanding and knowledge of neural nets and CNNs. Hope to see more from you!

Rohit N says: April 21, 2017 at 10:39 pm
Woah! This is so cool!

Russ Abbott says: April 22, 2017 at 4:49 am
I don't understand the training approach with multiple layers. With one layer one knows the correct answer, but what do you do with multiple layers? How do you know what the gradient is, i.e., which way down is? I couldn't find that in the video. Thanks.

Agung Setiaji says: April 24, 2017 at 6:39 am
Martin, can you share the full Python code? I want to try it myself.

ujjwal tamang says: April 28, 2017 at 6:42 pm
Don't let the false tensor movements be trashed, because they can be meaningful for another tensor movement or another function. Put all the movements in a symmetry, so every tensor bit can be remembered and useful for a higher programming dimension and more intelligent programming, like poly-functions of a single unit acting and deciding at the same time, with versatility in a single unit for automatic diversion and conversion, dual or multiple functionality. In other words, a new life which can think and act independently. Because for all of that we are giving the program the input theory for tensor movement, but maybe letting it learn from its mistakes can let it decide, and maybe think or create by itself. I mean the program should also be capable of creating from its mistakes, not only of being kept on the right path.

Sitthykun LY says: April 28, 2017 at 9:11 pm
That is amazing

m.ali petek says: May 2, 2017 at 4:46 am
There is a problem with the sound

sudheer amara says: May 3, 2017 at 2:45 am
Hi Martin, thanks, it's really helpful. Can you please share the IDE that you are using and how to open the TensorFlow dashboard?

Nicky Lim says: May 3, 2017 at 4:12 am
May I know what tools were used to visualize the training? Thanks!

avatar098 says: May 4, 2017 at 5:41 pm
Thank you for posting these conference videos. This was incredibly helpful, and I wish to attend these conferences next time I get the opportunity.

Sanyat Hoque says: May 6, 2017 at 1:56 am
You guys are just awesome!

Tam Gaming says: May 6, 2017 at 8:58 am
Why not make the picture big and the teacher small instead? It's so hard to follow what he says when he explains something without seeing the picture.

Riley Lynn says: May 6, 2017 at 2:47 pm
You could chain a side neural network based on the first learning sequence to train the dropout for the second network. Having a random dropout seems absurd since you already have tensor information extracted from your data set.

Shunsuke says: May 8, 2017 at 2:01 am
MNIST is a bit cliché, but I really love this lecture! It's concise and visually clear. Highly recommended. Thanks for putting this up, Google.

Egils Jugans says: May 9, 2017 at 1:06 am
4:32 instructions unclear. pulled my wiener wat next

Sebastian Bohnen says: May 9, 2017 at 11:16 am
ha ha

brian777ify says: May 9, 2017 at 9:44 pm
Fantastic lecture, Martin. Makes everything so clear. One of the best tutorials I've seen on any subject.
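Russ Abbott's question above (with multiple layers, how do you know which way down is?) is answered by backpropagation: the chain rule propagates the error backwards through every layer, yielding a gradient for each weight matrix, not just the last one. A minimal NumPy sketch, with the toy network and all names invented for illustration, not taken from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer network: 2 inputs -> 3 hidden units (sigmoid) -> 1 output.
X = rng.normal(size=(8, 2))                  # a batch of 8 toy examples
y = (X[:, :1] + X[:, 1:] > 0).astype(float)  # toy target: is x1 + x2 > 0 ?

W1 = rng.normal(scale=0.5, size=(2, 3))
W2 = rng.normal(scale=0.5, size=(3, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for step in range(200):
    # Forward pass: run the inputs through both layers.
    h = sigmoid(X @ W1)          # hidden activations
    p = sigmoid(h @ W2)          # network output
    loss = np.mean((p - y) ** 2)
    losses.append(loss)

    # Backward pass: the chain rule gives a gradient for EVERY layer,
    # so "which way down is" is known at each layer, not only the last.
    dp  = 2 * (p - y) / len(X)   # dLoss/dp
    dz2 = dp * p * (1 - p)       # back through the output sigmoid
    dW2 = h.T @ dz2              # gradient for layer 2 weights
    dh  = dz2 @ W2.T             # error propagated back to the hidden layer
    dz1 = dh * h * (1 - h)       # back through the hidden sigmoid
    dW1 = X.T @ dz1              # gradient for layer 1 weights

    # Gradient descent step on both layers.
    W1 -= 1.0 * dW1
    W2 -= 1.0 * dW2
```

TensorFlow does exactly this differentiation automatically; the loop above just makes the mechanics visible.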
Muhammad Zakarya says: May 12, 2017 at 9:09 pm
really good

Ansel Castro Cabrera says: May 19, 2017 at 5:06 pm
But ReLU is not differentiable, so how do you compute the derivatives for computing the gradients?

Synergism, Inc., says: May 20, 2017 at 10:31 am
Interesting demonstration of the simplicity of TensorFlow. However, the real-world data is not necessarily the correct data to ensure accuracy of fit. What if that real-world data is incorrect or false due to factors such as human cognitive dissonance, uncontrolled variables, and faulty classifications (human errors) to begin with? In this view, a theoretical mathematical data set could be more appropriate to ensure purity and the right fit.

Luis Miguel Villalba Mazzey says: May 23, 2017 at 5:54 pm
That title is so Sheldon Cooper.

Knowledge_Seeker says: June 3, 2017 at 9:08 pm
Subtitles please.

Jie says: June 4, 2017 at 9:38 pm
I love this beautiful lecture, very clear. Thank you.

tobeornottobe says: June 9, 2017 at 5:23 pm
Thank you for a great overview of Machine Learning and TensorFlow.

TARINEE PRASAD says: June 10, 2017 at 8:17 pm
This guy is amazing… I wish I had a professor like this in college 🙁

Nguyen Duc Bang says: June 19, 2017 at 5:05 pm
Hi Martin, could you explain the convolutional neural network in your example again? You choose one weight and stride it across the whole image, am I right? What is the value of this weight? I read in other materials that we use a small matrix and a dot product to find the convolutional matrix -> use ReLU -> use max pooling for the next layer… Which one is correct here? Thank you so much.

Chandra P Utomo says: June 21, 2017 at 2:04 am
Great intro! What's the IDE he used?

Koushik Khan says: June 22, 2017 at 5:37 pm
Best ANN video I have ever seen. Thanks a lot, sir.

Enjector says: June 25, 2017 at 6:27 am
Excellent explanation, really enjoyed your video. Thank you!
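On the ReLU-differentiability question above: ReLU is differentiable everywhere except at exactly x = 0, and in practice frameworks use a subgradient, conventionally defining the derivative at 0 to be 0. A tiny sketch (the helper names are illustrative, not TensorFlow internals):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def relu_grad(x):
    # Subgradient: 1 where the unit was active, 0 elsewhere,
    # including the single kink at x == 0 (by convention).
    return (x > 0).astype(float)

x = np.array([-2.0, 0.0, 3.0])
out = relu(x)        # array([0., 0., 3.])
grad = relu_grad(x)  # array([0., 0., 1.])
```

Since the kink is a single point, the choice of derivative there has no practical effect on training.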
Abhijit Annaldas says: July 2, 2017 at 4:34 pm
LOL… Awesome sense of humour and a great talk! https://youtu.be/u4alGiomYP4?t=1689

Education only says: July 4, 2017 at 6:53 am
Did you see that they're using Mac OS X for security, because Google doesn't believe in other OSes? lol.

Julie Qiu says: July 6, 2017 at 8:28 pm
I can't believe I finally understand this. Thank you! Amazing video.

Tom Ashley says: July 15, 2017 at 2:44 am
This greatly helped jumpstart me. Thank you.

Dondrey Taylor says: July 16, 2017 at 5:10 am
Wow, great explanation. But I'm not going to lie, I think I still need a PhD lol

Vitaliy Gyrya says: July 19, 2017 at 8:50 pm
I don't know what everyone is raving about here. The presentation is far from clear, and way too jumpy. Some concepts are not properly introduced and have to be deduced.
– That voodoo with made-up issues of adding matrices with different dimensions is just that, a made-up issue, all because the speaker decided to jump to matrix multiplication.
– Also, what's the point of scaling if you already have a bias for each neuron, which after exponentiation acts like scaling?
– Words like "this" should not be allowed during the presentation, as it is often not clear what "this" is.
– At 9:12, "network will output some probability". Probability?! This concept was never introduced with respect to the network.
– 10:21: what's the point of exponentiating something just to take the log later?!
– 10:21: with that definition, the network that outputs all 1s is the one that minimizes entropy.

Nkdms. says: July 20, 2017 at 11:41 pm
Va-rάι-able 😛 LOL

Gaurav Singh says: July 22, 2017 at 5:37 am
This is just pure GOLD !!!!

Ansh Chauhan says: July 25, 2017 at 5:46 am
Martin is an excellent teacher, but this is the 3rd or 4th time I'm seeing the same presentation given by him.
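On the softmax and cross-entropy points raised above (why exponentiate just to take a log later, and where "probability" comes from): the exponential makes every score positive and the normalization makes the outputs sum to 1, which is exactly what lets them be read as probabilities; the log in cross-entropy does not simply cancel the exponential, because the normalization sits in between, and for the same reason a network cannot output all 1s, since those would not sum to 1. A small illustrative sketch with invented values:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()  # shift by the max for numerical stability
    e = np.exp(z)              # exponential: every score becomes positive
    return e / e.sum()         # normalization: outputs now sum to 1

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)        # a valid probability distribution

one_hot = np.array([1.0, 0.0, 0.0])               # true class: the first one
cross_entropy = -np.sum(one_hot * np.log(probs))  # low p(true class) => big loss
```

Frameworks usually fuse the log and the softmax into one numerically stable operation, but the two steps are genuinely distinct.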
We want the next level, Martin.

cinegraphics says: July 26, 2017 at 12:54 am
It's actually not that well explained unless you already have experience with neural networks. Here are some things that should be improved:
– explain each variable and matrix variable in more detail, especially on the second screen of Python
– take a bit of time to explain the various runs. For example, what's the difference in parameters between sess.run(train_step, feed_dict=train_data) and sess.run([accuracy, cross_entropy], feed=train_data), even if it's just for display purposes? Why are different parameter names used (feed_dict vs feed), etc.?
– the naming of the variables should be better. X, Y, Y_ is not very intuitive.

Zeeshan Ali Khan says: July 26, 2017 at 8:04 am
Can we use the same procedure for speech recognition?

Ridahoan says: August 10, 2017 at 12:15 am
Nice intro to TensorFlow! I found the run-through of a single problem helpful. A bit of a nit to pick, though: shouldn't that 99% accuracy have been tested on a truly final test set that had never been seen? How many informal model-tweaking iterations occurred after peeking at the accuracy on the test set? Perhaps the resulting model would not do so well on a truly novel test set. Not really important here, except that we should never forget that the whole point is to generalize to unseen data, which may be drawn from a different distribution. And it is a pain in the butt to not peek.

Speechrezz says: August 12, 2017 at 12:54 am
Thank you for making convolutional neural networks clearer for me! (And teaching me how to use TensorFlow.)

kakrafoon says: August 13, 2017 at 6:32 am
More "democratization" of ANNs for the masses (thank you, NVIDIA; thank you, Google). Next thing you know, we'll liken them to commodities like toilet paper.

Tao Li says: August 16, 2017 at 5:47 pm
🐮

Qiao Hu says: August 31, 2017 at 4:59 pm
Fantastic presentation, Martin!
Just one question: where can I find the training and test images that you used in your tutorial?

Karthik Arumugham says: September 5, 2017 at 3:51 am
Thank you, Martin. One of the best tutorials on TensorFlow! By the way, did you use TensorBoard for the real-time visualization?

Márton Balassa says: September 7, 2017 at 12:10 pm
Wow, I want to be an AI coder now

Earthcomputer says: September 7, 2017 at 5:55 pm
This video was a great way for me to get up to date with my newfound machine learning skills after taking Andrew Ng's online course. It just amazes me how radical some of this stuff has become since 2011!

Shubham Mittal says: September 14, 2017 at 11:34 am
Very nice, sir 🙂

David Porter says: September 17, 2017 at 12:33 pm
2:00 brains don't have an L

J K says: September 18, 2017 at 10:55 pm
Very, very clear.

0x0055 0x0054 says: September 26, 2017 at 10:05 pm
In 1:31, the softmax function: what is the purpose of the two absolute values?

Deepak Yadav says: October 5, 2017 at 9:41 pm
When does a TensorFlow model converge?

Rachel Harrison says: October 7, 2017 at 11:05 am
This lecture was really easy to digest. Thank you!!

biorpg says: October 11, 2017 at 5:29 am
Is referring to "shooting your neurons" as "dropout" a reference to LSD?

ghanshyam sahu says: October 25, 2017 at 5:47 am
just amazing

Rational Israel says: October 27, 2017 at 6:47 am
Is the room cold?

Irakli Koiava says: November 1, 2017 at 3:27 pm
22:10 – In reality, if you want to reach the bottom of the mountains very quickly, you should take long steps. 😀

joseph pareti says: November 12, 2017 at 9:38 am
the best tutorial on ML I have ever seen

Martin Görner says: November 21, 2017 at 8:22 pm
The next video in the series is online: https://youtu.be/vaL1I2BD_xY "Tensorflow, deep learning and modern convolutional neural nets".
We build from scratch a neural network that can spot airplanes in aerial imagery, and we also cover recent (TF 1.4) TensorFlow high-level APIs like tf.layers, tf.estimator and the Dataset API. For developers who already know some basics (relu, softmax, dropout, …), I recommend you start there to see how a real deep model is built using the latest best practices for convnet design.

Hlophe Nkosinathi says: December 7, 2017 at 8:17 am
I would like to get the Jupyter notebook or Python code for this presentation. Where can I get it?

Manan Kalariya says: December 14, 2017 at 5:42 am
Nice tutorial about machine learning, though you need to have a PhD to get that incredibly amazing stuff.

Djane Rey Mabelin says: December 19, 2017 at 1:13 pm
This video alone was soooo useful. Here is what I was able to do: https://github.com/djaney/ml-studies/blob/master/06_conv.py

Charlie Li says: December 24, 2017 at 12:54 pm
When I changed the batch size from 100 to 50, the program did not work, but it worked fine for batch sizes > 100. Weird behavior.

jeds says: December 29, 2017 at 5:28 pm
Best explanation, really easy to understand!!! Big thanks!!! Can someone tell me the name of the tool he is using? Those graphs of the error and the convergence: I don't see those in the TensorFlow I've installed.

문선형 says: January 4, 2018 at 12:47 am
I think a cube is something formed as a 3D space, a 666x666x666 matrix.

문선형 says: January 4, 2018 at 12:47 am
Here, people and life act as factors and variables, and I try to make this an example for building a neural network.

문선형 says: January 4, 2018 at 12:50 am
A neuron is a "NEW RUN": it runs.
And human history began with running; I see today's society as the product of that history of pursuit. I believe this is the purpose for which neurons were created, and that the nervous systems of living things were built so that it could be achieved: the law of the jungle, the social habits of ants, or the longevity secret of turtles, for example… I believe that creatures with slow nerves live long, while those with fast nerves or reflexes age that much faster. I see this as the LIFE CYCLE of the NEW RUN.

whoislewys says: January 18, 2018 at 9:38 pm
About the image at 45:33: shouldn't the first convolutional layer have dimensions of 24 x 24 x 4? If each patch is 5 x 5, you can scan this patch across 24 possible places in the x direction and 24 places in the y direction, correct? Or does padding='same' make it so that the first patch's position has 4×5 pixels hanging off to the left of the image ('looking at padding') and 1×5 pixels on the beginning of the actual image?

Cedric Poutong says: January 22, 2018 at 3:57 pm
Very amazing teaching! My grandmother could also become a data scientist. Great, thanks a lot, and I hope to hear more and more from you. Would it be the same if I wanted to do a simple regression with mixed data?

satyam shekhar says: January 22, 2018 at 6:59 pm
Great video!!!! Helped a lot.

Saurabh Prakash says: January 23, 2018 at 8:57 pm
Thanks for the video. At 8:46, the shape of W should be [784,10].

Zihe Cheng says: February 3, 2018 at 10:58 am
This is the most helpful tutorial that I have ever seen. It combines the theory and the practice together, and the explanation is also very clear.

Dan C says: February 8, 2018 at 5:53 pm
Excellent presentation and technology.

Ray VR says: February 22, 2018 at 8:51 am
One of the clearest, best-organized tutorials, even for a beginner.

Hugo Chiang says: February 27, 2018 at 7:17 am
For some reason Martin's scripts run a lot faster than my replicated Jupyter notebook code. Can anyone offer some insight?

Yanmin Tao says: February 27, 2018 at 4:29 pm
The accuracy you referred to: how is this calculated?

USONOFAV says: March 29, 2018 at 1:57 am
Tensorflow is overrated.
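On the 45:33 padding question above: with no padding ('VALID'), a 5x5 patch fits in 28 − 5 + 1 = 24 positions per axis, so the output would indeed be 24 x 24 per filter; with padding='same', TensorFlow zero-pads the edges so the output keeps the input's 28 x 28 spatial size at stride 1. The standard size formulas, written as plain helper functions (names are illustrative):

```python
import math

def valid_size(n, k, stride=1):
    # 'VALID': count the positions where a k-wide patch fits entirely inside
    return (n - k) // stride + 1

def same_size(n, stride=1):
    # 'SAME': zero padding is added so the output size is ceil(n / stride)
    return math.ceil(n / stride)

v = valid_size(28, 5)  # 24: a 5x5 patch fits in 28 - 5 + 1 = 24 positions
s = same_size(28)      # 28: with stride 1 the output keeps the input size
```

So 24 x 24 x 4 is what 'VALID' would give, and 28 x 28 x 4 is what the slides' padding='same' gives.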
terpenstien says: May 1, 2018 at 7:37 am
I do not understand why this is being taught now when it's been known for two decades. This tutorial has no current application and nowhere to go, because we're actually well past this.

Mike Reynolds says: May 10, 2018 at 12:05 am
I've been casually watching machine learning tutorials for over a year, and this is by FAR the clearest explanation of how a convolutional neural network works, out of around two dozen that I've seen.

vj vargs says: May 18, 2018 at 10:09 pm
and not hot dog… very important. it's a hot dog and not hot dog… it's tecchology – Jian Yang

Karl Pages says: May 25, 2018 at 4:04 am
Awesome 🙂 Thanks to everyone for this enlightening vid

a guy says: June 6, 2018 at 8:12 pm
wow

Eli Spizzichino says: June 11, 2018 at 1:07 pm
What I would never have expected is that he got >98% accuracy without making any "shape" correlation (that's almost magic to my eyes). CNNs are definitely important, but maybe they play a bigger role with more difficult datasets.

Himanshu Soni says: June 24, 2018 at 6:18 pm
Yeah, Martin, it was a really good one.

Bilal Khan says: June 27, 2018 at 10:49 am
fantastic introduction

monoham1 says: June 28, 2018 at 3:55 am
You might not need a PhD, but a high-school certificate in maths and 3 years working in programming is certainly not enough.

santiago marco says: July 2, 2018 at 5:52 pm
Confusing explanations

itshgirish says: July 6, 2018 at 11:53 am
9:05: Should it be −Σ Y · log(Ŷ)?

itshgirish says: July 9, 2018 at 12:43 pm
45:05 – Could someone please explain W1[5,5,1,4]? I don't understand what a patch of 5,5 is, or what applying 4 of those to the images means.

Oops, I'm not invited says: July 31, 2018 at 2:26 pm
We don't need a PhD just to use TensorFlow/DL, but that doesn't mean we can learn it within an hour without any prior knowledge or reading a text first.
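On the W1[5,5,1,4] question above: in TensorFlow's convention, a convolution weight has shape [filter_height, filter_width, input_channels, output_channels], so here it means four different 5 x 5 filters applied to a 1-channel (grayscale) image. A naive NumPy sketch of what one such layer computes (no padding, stride 1, random data; all names invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(size=(28, 28, 1))  # one grayscale, MNIST-sized image
W1 = rng.normal(size=(5, 5, 1, 4))    # 4 filters, each 5 x 5 x 1

# Slide every filter over every valid position of the image.
out = np.zeros((24, 24, 4))           # 28 - 5 + 1 = 24 positions per axis
for row in range(24):
    for col in range(24):
        patch = image[row:row + 5, col:col + 5, :]  # one 5x5x1 patch
        for f in range(4):
            # one output value = the patch dotted with one 5x5x1 filter
            out[row, col, f] = np.sum(patch * W1[:, :, :, f])
```

Each of the 4 filters produces its own 24 x 24 feature map; stacking them gives the layer's output channels.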
I'm also a novice without any real TensorFlow coding of my own, and though I have read a related text, I am still confused by many parts. So I visited YouTube and watched this. I'm not sure whether it's perfect or not, but this video was very, very helpful to me. Thanks, Mr. Scarf. 🙂

Alexander Arzhanov says: September 23, 2018 at 8:31 pm
Hey, shouldn't the probabilities sum up to 1? @10:20

Vijay Prabhakaran says: December 9, 2018 at 2:06 am
Very nice talk conveying the big picture and the general intuition. I do not think anyone can convey and address everything from the big picture to the nitty-gritty in a 1-hour talk; given that, I think this was an excellent introduction. Nice work, Martin!

shanaka jayatilake says: December 15, 2018 at 11:16 pm
Such a great presentation.

Pham Xuan Trung says: December 27, 2018 at 3:15 pm
Very clear and easy to understand. The lectures given by Google are always great and incredible!

Pubudu Goonetilleke says: June 14, 2019 at 6:37 am
This is a great presentation, thanks for sharing. How can I use this with my own color images instead of the MNIST data set? Can I create my own color MNIST data set, and how?

Francis Thibault says: September 5, 2019 at 6:51 pm
Great course, it helps a lot!