If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Deep learning holds many mysteries for theory, as we have discussed on this blog. Lately many ML theorists have become interested in the generalization mystery: why do trained deep nets perform well on previously unseen data, even though they have far more free parameters than the number of datapoints (the classic "overfitting" regime)? Zhang et al.'s paper Understanding Deep Learning Requires Rethinking Generalization played some role in bringing attention to this challenge. Their main experimental finding is that if you take a classic convnet architecture, say AlexNet, and train it on images with random labels, you can still achieve very high accuracy on the training data. Needless to say, the trained net is subsequently unable to predict the (random) labels of still-unseen images, which means it doesn't generalize.
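The random-label phenomenon is easy to reproduce in miniature. The sketch below is illustrative only, not the paper's setup (which trains convnets on image datasets): when a linear model has more parameters than training points, least squares can interpolate completely random labels, yet its predictions on fresh random labels are no better than chance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overparameterized regime: more parameters (d) than training points (n_train)
n_train, n_test, d = 20, 200, 100
X_train = rng.standard_normal((n_train, d))
X_test = rng.standard_normal((n_test, d))
y_train = rng.choice([-1.0, 1.0], size=n_train)  # completely random labels
y_test = rng.choice([-1.0, 1.0], size=n_test)    # also random, unseen

# With d > n_train the linear system is underdetermined, so least squares
# finds a weight vector that fits the random training labels exactly.
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

train_acc = np.mean(np.sign(X_train @ w) == y_train)
test_acc = np.mean(np.sign(X_test @ w) == y_test)
print(f"train accuracy: {train_acc:.2f}, test accuracy: {test_acc:.2f}")
```

Training accuracy comes out at 100% while test accuracy hovers around 50% (chance for binary labels), which is exactly the "memorize, don't generalize" behavior the experiment demonstrates for deep nets.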
A revolution in AI is occurring thanks to progress in deep learning. How far are we toward the goal of achieving human-level AI? What are some of the main challenges ahead? Yoshua Bengio believes that understanding the basics of AI is within every citizen's reach, and that democratizing these issues is important so that our societies can make the best collective decisions regarding the major changes AI will bring, making these changes beneficial for all.
Several weeks ago, Jefferies analyst James Kisner published a scathing report shedding light on the shortcomings of IBM Watson. Kisner focused on the disastrous $60 million Watson project for MD Anderson, and highlighted how far IBM is lagging behind Amazon and Apple. As John Mannes pointed out on TechCrunch, "things would look much worse if Google, Microsoft and Facebook were added to this table." He also eloquently summarized the common pitfall in our approach to AI: "Reality is that AI isn't an amorphous black hole that sucks in unstructured data to produce insights. A solid data pipeline and a domain-specific understanding of the AI business problem at hand is table minimum."
Computer algorithms analyzing digital pathology slide images were shown to detect the spread of cancer to lymph nodes in women with breast cancer as well as or better than pathologists, in a new study published online in the Journal of the American Medical Association. Researchers competed in an international challenge in 2016 to produce computer algorithms that detect the spread of breast cancer by analyzing tissue slides of sentinel lymph nodes, the lymph node closest to a tumor and the first place it would spread. The performance of the algorithms was compared against that of a panel of pathologists participating in a simulation exercise designed to mimic routine pathology workflow. Specifically, in cross-sectional analyses that evaluated 32 algorithms, seven deep learning algorithms showed greater discrimination than a panel of 11 pathologists in a simulated time-constrained diagnostic setting, with an area under the curve of 0.994 for the best algorithm versus 0.884 for the best pathologist.
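The area-under-the-curve (AUC) figures quoted above measure discrimination: the probability that a randomly chosen cancer-positive case is scored higher than a randomly chosen negative one. As a toy illustration (the labels and scores below are made up, not the study's data), AUC can be computed directly from that pairwise definition:

```python
import numpy as np

def auc(labels, scores):
    """ROC AUC via the pairwise (Mann-Whitney) statistic: the fraction of
    positive/negative pairs in which the positive case gets the higher score,
    counting ties as half a win."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Hypothetical diagnostic scores: one positive (0.4) is outranked by one negative (0.5)
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]
print(auc(labels, scores))  # 8 of 9 positive/negative pairs ranked correctly -> 0.888...
```

An AUC of 1.0 means perfect separation of positives from negatives; 0.5 is chance, which puts the study's 0.994 (best algorithm) and 0.884 (best pathologist) in context.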
Microsoft on Wednesday announced new artificial intelligence features and functionality for several of its flagship products and services, including Office 365, Cortana and Bing, at an event in San Francisco. Building on the progress the company has made in integrating AI over the past year, the new enhancements are designed to help users perform increasingly complex tasks. "AI has come a long way in the ability to find information, but making sense of that information is the real challenge," said Kristina Behr, a partner design and planning program manager with Microsoft's Artificial Intelligence and Research group. One of the advances, machine reading comprehension, will improve an AI-based system's understanding of context -- for example, recognizing that one's cousin is a family member. Bing users will get more personalized answers, Microsoft said, such as restaurant recommendations based on travel destinations, or a greater variety of answers to offer different perspectives on a topic.
Sophia, the eerily realistic humanoid robot by Hanson Robotics, is asking the public to help fund her Artificial Intelligence (AI). In a video released today, Sophia announced the details of an upcoming token sale for SingularityNET, an open source platform for AI and machine learning that powers her "brain" and countless other robots. Blockchain technology will manage transactions on this open AI ecosystem. The token sale will begin on Dec. 8, 2017, and there is no minimum contribution amount, so anyone can participate.
AWS DeepLens: Looking for a new way to learn machine learning? Let a machine teach you with AWS DeepLens, the world's first deep learning enabled video camera for developers. Designed to connect securely to a variety of AWS offerings, including AWS IoT, Amazon SQS, Amazon SNS, and Amazon DynamoDB, AWS DeepLens uses Amazon Kinesis Video Streams to stream video back to AWS and Amazon Rekognition Video to apply advanced video analytics. Easy to customize and fully programmable with AWS Lambda, AWS DeepLens runs on any deep learning framework, including TensorFlow and Caffe.

Amazon SageMaker: Amazon SageMaker offers developers and data scientists a quick and simple way to build, train, and deploy machine learning models at any scale.
The most challenging part of deep learning is labeling, as you'll see in part one of this two-part series, Learn how to classify images with TensorFlow. Proper training is critical to effective future classification, and for training to work, we need lots of accurately labeled data. In part one, I skipped over this challenge by downloading 3,000 prelabeled images. I then showed you how to use this labeled data to train your classifier with TensorFlow.