Along with researchers from Google Brain and OpenAI, we are releasing a paper on Unsolved Problems in ML Safety. In response to emerging safety challenges in ML, such as those introduced by recent large-scale models, the paper lays out a new roadmap for ML Safety and refines the technical problems the field needs to address. As a preview, in this post we consider a subset of the paper’s directions: withstanding hazards (“Robustness”), identifying hazards (“Monitoring”), and steering ML systems (“Alignment”).
Robustness research aims to build systems that are less vulnerable to extreme hazards and adversarial threats. Two central problems are robustness to long-tail events and robustness to adversarial examples.
Examples of long tail events. Top row, left: an ambulance in front of a green light. Top row, middle: birds on the road. Top row, right: a reflection of a pedestrian. Bottom row, left: a group of people cosplaying. Bottom row, middle: a foggy road. Bottom row, right: a person partly occluded by a board on their back. (Source)
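To make the adversarial-examples problem concrete, here is a minimal sketch of the fast gradient sign method (FGSM), a standard way of crafting such inputs. The paper itself contains no code; the toy logistic-regression classifier, its weights, and the perturbation budget below are all illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    """Shift x by epsilon in the sign of the loss gradient w.r.t. the input."""
    p = sigmoid(w @ x + b)      # predicted probability of class 1
    grad_x = (p - y) * w        # gradient of cross-entropy loss w.r.t. x
    return x + epsilon * np.sign(grad_x)

w = np.array([1.0, -2.0])       # toy "trained" weights (assumed)
b = 0.0
x = np.array([2.0, 0.5])        # clean input, classified as class 1
x_adv = fgsm_perturb(x, y=1.0, w=w, b=b, epsilon=0.5)

print(sigmoid(w @ x + b))       # clean prediction: above 0.5
print(sigmoid(w @ x_adv + b))   # adversarial prediction: flips below 0.5
```

Even this tiny linear model flips its decision under a small, targeted perturbation; for deep networks the same gradient-sign idea produces changes that are imperceptible to humans yet reliably fool the model.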
Fig 1. A wavelet adapting to new data.
Recent deep neural networks (DNNs) often predict extremely well, but they sacrifice interpretability and computational efficiency. Interpretability is crucial in many disciplines, such as science and medicine, where models must be carefully vetted or where interpretation is the goal itself. Moreover, interpretable models tend to be concise, which often makes them computationally efficient as well.
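As a minimal illustration of why wavelet representations can be both interpretable and cheap, here is one level of the fixed Haar transform in NumPy. The signal and the choice of Haar are assumptions for the sketch; the wavelets in the figure above are adapted to the data rather than fixed.

```python
import numpy as np

def haar_level(x):
    """One level of the Haar wavelet transform, computed in O(n):
    scaled pairwise sums (approximation) and differences (detail)."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

# A piecewise-constant toy signal: the detail coefficients vanish
# wherever the signal is locally flat, giving a sparse, readable summary.
signal = np.array([4.0, 4.0, 4.0, 4.0, 8.0, 8.0, 2.0, 2.0])
approx, detail = haar_level(signal)
print(detail)   # all zeros here: every adjacent pair is equal
```

The transform is orthogonal, so no information is lost (the coefficient energies sum to the signal energy), and each coefficient has a direct meaning, a local average or a local change, which is what makes such models easy to vet.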