Deep Learning to Solve Challenging Problems (Google I/O'19)
November 22, 2019 · By Stanley Isaacs

41 Comments

Xavier Pan says:
May 9, 2019 at 4:28 am
Jeff! We need TPUv4!

Dwight Walker says:
May 9, 2019 at 11:45 am
This is good for robotics and computer vision applications.

M says:
May 9, 2019 at 4:51 pm
Are these slides available anywhere?

Alaap Dhall says:
May 10, 2019 at 7:36 am
Can I access the power of a v3 TPU pod on Google Cloud Platform?

Domin shar says:
May 10, 2019 at 9:56 am
Expect more developers to share their trained AI models on Model Play.

Mehmet Filiz says:
May 10, 2019 at 12:39 pm
Cool.

Hu Xixi says:
May 12, 2019 at 4:22 am
Outline:
- Restore and improve urban infrastructure (combining vision and robotics for grasping tasks, self-supervised imitation learning)
- Advance health informatics (predicting properties of molecules)
- Engineer the tools of scientific discovery (TensorFlow and its applications)
- Some pieces of work and how they fit together: bigger models, but sparsely activated (sparsely gated mixture-of-experts layer, MoE)
- AutoML, automated machine learning, "learning to learn" (Cloud AutoML)
- Special computation properties of deep learning (reduced precision, a handful of specific operations)
- More at 36:49

Pratik Chatse says:
May 13, 2019 at 7:38 am
Best talk of #io2019.

srikanth k says:
May 13, 2019 at 5:35 pm
Superb, thank you for uploading.

Andy Low says:
May 14, 2019 at 4:27 am
I wonder how someone can trust decisions they do not understand; models built by AI must be clear to humans before use. I also wonder how anyone can say that image recognition made a big leap when a child needs only a few examples while a computer needs a huge library of images. Something is wrong with this "nutcracker" approach, isn't it obvious?

Satish Goda says:
May 15, 2019 at 3:03 am
Very informative and insightful talk. Thank you Google for sharing it with us.

Bianca A. - There's art to data science says:
May 16, 2019 at 5:45 pm
The slide at 4:45 with the grand engineering challenges for the 21st century helped a lot. I often get overwhelmed or confused by all the projects and applications coming out of the tech world. Many of them don't make sense. This slide gave me a good framework for making sense of what these technologies are trying to solve. Good presentation. 😊

Lahiru Oshara says:
May 17, 2019 at 6:29 pm
Is there a way to get those slides?

James Kwan says:
May 17, 2019 at 10:31 pm
I always had this idea that AI should be smart enough to determine which models should be tried when it is given a data file. It should be able to run an initial analysis to classify the nature of the file and predict the intended use of the file. Based on this analysis, it should be able to find the best of the existing models, and if it cannot find one, it should be able to create a new one. I guess Google has already put my idea into practice.

Justin Stalbaum says:
May 18, 2019 at 10:38 am
This is awe-inspiring. I love it, thank you Google.

magical girl says:
May 18, 2019 at 5:39 pm
That tree design behind him is pretty cool.

Thomas Bingel says:
May 21, 2019 at 4:46 am
Recommendable!

yecai hua says:
May 21, 2019 at 10:24 am
Jeff Dean! My idol! Yes! Recently I began to try something new, and my team and I have made an app for the Google Coral dev board Edge TPU, and it is going well. My app: model.gravitylink.com

Max S. says:
May 22, 2019 at 4:16 am
Thank you!
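James Kwan's idea above (analyze a dataset, try a pool of candidate models, and keep whichever fits best) can be sketched in a few lines of plain Python. Everything here is a hypothetical illustration, not how Cloud AutoML actually works: the two toy candidates and the held-out MSE criterion are simply the smallest stand-ins for a real model search.

```python
# Toy sketch of automated model selection: fit several candidate models
# to training data and keep whichever scores best on held-out points.

def fit_mean(train):
    """Baseline candidate: always predict the mean of the training targets."""
    ys = [y for _, y in train]
    mean = sum(ys) / len(ys)
    return lambda x: mean

def fit_linear(train):
    """Candidate: least-squares fit of y = a*x + b."""
    n = len(train)
    sx = sum(x for x, _ in train)
    sy = sum(y for _, y in train)
    sxx = sum(x * x for x, _ in train)
    sxy = sum(x * y for x, y in train)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

def mse(model, data):
    """Mean squared error of a fitted model on a dataset."""
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

def auto_select(train, holdout, candidates):
    """Fit every candidate; return (name, model) with the lowest holdout MSE."""
    scored = []
    for name, fit in candidates.items():
        model = fit(train)
        scored.append((mse(model, holdout), name, model))
    best = min(scored, key=lambda t: t[0])
    return best[1], best[2]

if __name__ == "__main__":
    # Data drawn from y = 2x + 1: the linear candidate should win.
    train = [(x, 2 * x + 1) for x in range(8)]
    holdout = [(x, 2 * x + 1) for x in range(8, 12)]
    name, model = auto_select(train, holdout,
                              {"mean": fit_mean, "linear": fit_linear})
    print(name)              # linear
    print(round(model(20)))  # 41
```

The "create a new one" half of the comment is where the real difficulty lives; a production system would search over architectures rather than pick from two fixed fitters.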
Reply David Lycan says: May 22, 2019 at 5:15 pm Please explain how developing artificial intelligence solutions for subsurface data analysis in oil and gas exploration and production is 'socially beneficial'. Reply Mehrdad Niasari says: May 23, 2019 at 11:20 pm Excellent talk. This is a great example that how a true expert talks about innovations and deep learning with simple but accurate words; without overhyping and bombarding buzz words on their audience. Reply Inaam Ilahi says: May 26, 2019 at 12:38 pm Very insightful. Reply brayan illatopa principe says: May 27, 2019 at 5:54 pm This is good for robotics and computer vision applications.Are these slides available anywhere?Expect more developers to share the trained AI models into the Model Play. Reply Ed Heinbockel says: May 28, 2019 at 1:48 pm Great info, thank you for sharing! Reply Suhas Kurse says: May 31, 2019 at 3:11 am How can it solve problems, when Humanity is at stake now. Reply Sss Kkk says: June 1, 2019 at 12:43 am Regarding automobiles, we built the auto interface for humans. We leveraged our built in sensors (eyes and ears) and designed a bipartite system – vehicle and road. But why are we now trying to shoehorn AI into that human centric system? If we were designing a system from scratch for AI and machines, would we build it the same way? Would it not make sense to build telemetry into the road – make the road more intelligent and let it direct vehicles more directly? Do we need vehicles that can go where there are no roads? This would reduce the cost of complex and hack able vehicle based systems. Reply Sss Kkk says: June 1, 2019 at 1:18 am Regarding autoML, after time there would seem to be an ever increasing corpus of models. Humans, being the limited creatures that tend to have the same problems, might not actually need to have a ‘fresh’ model trained every time their brain perceives a problem that needs solving. That solution probably already exists and has been solved. 
Rather, it might be faster (and much less energy-intensive) to simply archive these models with a set of useful metadata so that a Google search can find the model that solves the problem. Metadata selection and assignment to individual models could be automated after they are designed by AutoML. The metadata can be considered the 'label' for the model, and it can also be used to 'explain' to a user 'why' the machine selected a particular model or algorithm. In addition, the machine would be able to engage the user in a 'conversation' as it 'asks for metadata'. The user would perceive this discourse as questions about the dataset or problem that he or she has; meanwhile, the machine is building an information tree to sift and sort through its vast library of models. This also addresses the human problem where the user often starts by choosing the wrong approach to solving the problem, or just as often uses the 'cooked spaghetti' approach to model selection: throw them all against the wall of the problem and see what sticks.

Kevin Cho says:
June 3, 2019 at 4:31 pm
Thank you for sharing this informative video! 😃 Machine learning techniques applied to data from The Cancer Genome Atlas revealed interesting information. For anyone who's interested: http://bit.ly/2HSBTR3 (CellPress) or another resource compendium: http://bit.ly/2EMR6RG (Google Sheet)

Nisarg Rai says:
June 4, 2019 at 10:29 am
11:19: wrong India map.

Cody Weber says:
June 6, 2019 at 5:43 pm
I was all happy until the very end, when I heard them talk about bias. I'm sorry, too many legitimate channels have been brought down for supposed "bias". Until you can guarantee with 99.9999999999999% certainty that a computer is unbiased, please just stick to image recognition, because Google, so far you have not been good regarding bias.
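Sss Kkk's model-registry idea above (archive trained models under searchable metadata 'labels', reuse a match instead of training fresh, and use the same metadata to explain the choice) could be sketched like this. The registry entries, tags, and scoring rule below are all invented for illustration; no such Google registry is described in the talk.

```python
# Hypothetical sketch of a metadata-indexed model registry: models are
# archived with descriptive tags, and a query is answered by reusing the
# best-matching archived model instead of training a new one.

MODEL_REGISTRY = [
    # (model_id, metadata tags) -- invented examples
    ("resnet_retina_v2",  {"task": "classification", "domain": "medical", "input": "image"}),
    ("bert_sentiment_v1", {"task": "classification", "domain": "reviews", "input": "text"}),
    ("lstm_demand_v3",    {"task": "forecasting",    "domain": "retail",  "input": "timeseries"}),
]

def find_model(query):
    """Return the model id whose metadata best matches the query.

    Score = number of matching key/value pairs. Returns None when nothing
    matches at all, i.e. the case where a new model would need training.
    """
    best_id, best_score = None, 0
    for model_id, meta in MODEL_REGISTRY:
        score = sum(1 for k, v in query.items() if meta.get(k) == v)
        if score > best_score:
            best_id, best_score = model_id, score
    return best_id

def explain(model_id):
    """Use the metadata itself to 'explain' why a model was selected."""
    meta = dict(MODEL_REGISTRY)[model_id]
    return "selected because it matches: " + ", ".join(
        f"{k}={v}" for k, v in sorted(meta.items()))
```

A fuller system would also implement the 'conversation' the comment describes: when a query leaves several candidates tied, ask the user for the missing metadata keys before searching again.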
tmusic99 says:
June 8, 2019 at 2:23 pm
With reference to scientific learning: when you have a lot of data, but no data at the particular point in parameter hyperspace that you are interested in, what do you do? Extrapolating the model will result in bias and loss of accuracy. Experiments in real-world systems seem unavoidable, and each experimental data point is often very expensive. The interaction between machine learning modeling and the planning and execution of experiments seems to be a new and very interesting research area.

benzobox says:
June 10, 2019 at 5:05 am
Amazing talk.

Science Compliance says:
June 16, 2019 at 4:51 am
Great talk. One thing I hear said too much, though, is that humans don't get to pool their experiences whereas robots do. I'm sure the efficiency and integrity of robots sharing knowledge are much higher than with humans, but shared knowledge among humans is the basis of civilization. There would be no Google if every person ever born had to learn from scratch. Rant over.

Sabyasachi Mukhopadhyay says:
June 21, 2019 at 3:44 am
Amazing talk!

Tyler Jeffries says:
June 29, 2019 at 6:40 pm
Lidar, lol.

Kaiwen Yang says:
July 11, 2019 at 9:37 pm
Machine learning is taking machine learning experts' jobs. 🙂

Satyabrata Behera says:
August 5, 2019 at 11:41 am
Nice classes, sir. I need these classes.

Abdelkader Bouazza says:
September 9, 2019 at 11:07 pm
@8:30 The same technique used by Naruto when he was training with his many clones. 🙂

Robert Alaverdyan says:
September 24, 2019 at 6:30 pm
Thank you a lot for the talk. The idea of ML automation sounds great.

Dan One says:
October 16, 2019 at 2:49 am
I train my models on GpuClub com and don't worry about maintaining these huge machines.
No investment is the best investment…