Predicting Spreadsheet Formulas from Semi-structured Contexts
Posted by Rishabh Singh, Research Scientist, and Max Lin, Software Engineer, Google Research. Hundreds of millions of people use spreadsheets, and formulas in those spreadsheets allow users to perform sophisticated…
Announcing the ORBIT dataset: Advancing real-world few-shot learning using teachable object recognition
Object recognition systems have made spectacular advances in recent years, but they rely on training datasets with thousands of high-quality, labelled examples per object category. Learning new objects from only…
Artificial networks learn to smell like the brain
Using machine learning, a computer model can teach itself to smell in just a few minutes. When it does, researchers have found, it builds a neural network that closely mimics…
How Underspecification Presents Challenges for Machine Learning
Posted by Alex D’Amour and Katherine Heller, Research Scientists, Google Research. Machine learning (ML) models are being used more widely today than ever before and are becoming increasingly impactful. However,…
Putting artificial intelligence at the heart of health care — with help from MIT
Artificial intelligence is transforming industries around the world — and health care is no exception. A recent Mayo Clinic study found that AI-enhanced electrocardiograms (ECGs) have the potential to save…
SimVLM: Simple Visual Language Model Pre-training with Weak Supervision
Posted by Zirui Wang, Student Researcher, and Yuan Cao, Research Scientist, Google Research, Brain Team. Vision-language modeling grounds language understanding in corresponding visual inputs, which can be useful for the…
Baselines for Uncertainty and Robustness in Deep Learning
Posted by Zachary Nado, Research Engineer, and Dustin Tran, Research Scientist, Google Research, Brain Team. Machine learning (ML) is increasingly being used in real-world applications, so understanding the uncertainty and…
Cross-posted from Bounded Regret.
Earlier this year, my research group commissioned professional forecasters to predict the answers to 6 questions about AI. Broadly speaking, 2 were on geopolitical aspects of AI and 4 were on future capabilities:
Geopolitical:
- How much larger or smaller will the largest Chinese ML experiment be compared to the largest U.S. ML experiment, as measured by amount of compute used?
- How much computing power will have been used by the largest non-incumbent (i.e., not OpenAI, Google, DeepMind, FB, or Microsoft), non-Chinese organization?

Future capabilities:
- What will SOTA (state-of-the-art) accuracy be on the MATH dataset?
- What will SOTA be on the Massive Multitask dataset (a broad measure of specialized subject knowledge, based on high school, college, and professional exams)?
- What will be the best adversarially robust accuracy on CIFAR-10?
- What will SOTA be on Something-Something v2 (a video recognition dataset)?
Forecasters output a probability distribution over outcomes for 2022, 2023, 2024, and 2025. They have financial incentives to produce accurate forecasts; the rewards total $5k per question ($30k total) and payoffs are (close to) a proper scoring rule, meaning forecasters are rewarded for outputting calibrated probabilities.
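A proper scoring rule is one under which a forecaster's expected reward is maximized by reporting their true beliefs. As a minimal illustration (not the actual payout formula used for these forecasts), here is a sketch using the logarithmic scoring rule for a binary question, showing that honest reporting maximizes expected score:

```python
import math

def log_score(q, outcome):
    # Logarithmic scoring rule: reward is the log of the probability
    # the forecaster assigned to the outcome that actually occurred.
    return math.log(q) if outcome else math.log(1.0 - q)

def expected_score(p_true, q):
    # Expected reward when the true probability of the event is p_true
    # and the forecaster reports probability q.
    return p_true * log_score(q, True) + (1.0 - p_true) * log_score(q, False)

p_true = 0.7
candidates = [i / 100 for i in range(1, 100)]  # candidate reports 0.01 .. 0.99
best = max(candidates, key=lambda q: expected_score(p_true, q))
# best == 0.7: reporting the true probability maximizes expected score
```

Because the maximizer of the expected score is the true probability itself, a forecaster paid this way has no incentive to exaggerate or hedge their stated probabilities, which is what makes the payment scheme elicit calibrated forecasts.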
An ML-based Framework for COVID-19 Epidemiology
Posted by Joel Shor, Software Engineer, Google Research, and Sercan Arik, Research Scientist, Google Research, Cloud AI Team. Over the past 20 months, the COVID-19 pandemic has had a profound…