What are the most significant machine learning advances in 2017?
2016–2017 may very well go down in history as the years of the “Machine Learning hype”. Everyone now seems to be doing machine learning, and if they are not, they are thinking of buying a startup to claim they do.
Now, to be fair, there are reasons for much of that “hype”. Can you believe that it has been only a year since Google announced they were open sourcing TensorFlow? TF is already a very active project that is being used for a wide range of applications, drug discovery among them. Google has not been the only company open sourcing their ML software, though; many have followed its lead, and Amazon just recently announced the framework they will support in their new AWS ML platform. Facebook, on the other hand, are basically supporting the development of not one, but two Deep Learning frameworks. Then again, Google is also supporting a second highly successful framework, so things are at least even between Facebook and Google on that front.
Besides the “hype” and the outpouring of support from companies for machine learning open source projects, 2016 has also seen a great deal of applications of machine learning that were almost unimaginable a few months back. I was particularly impressed by the quality of this year’s audio generation results; having worked on similar problems in the past, I can appreciate them. I would also highlight some of the recent work on video recognition, a great application that is likely to be very useful (and maybe scary) in the near future. I should also mention Google’s impressive advances in machine translation. It is amazing to see how much this area has improved in a year.
As a matter of fact, machine translation is not the only interesting advance we have seen in machine learning for language technologies this past year. I think it is very interesting to see some of the recent approaches that combine deep sequential networks with side information in order to produce richer language models. In one paper, Bengio’s team combines knowledge graphs with RNNs, and in another, the DeepMind folks incorporate topics into the LSTM model. We have also seen a lot of interesting work in modeling attention and memory for language models; as an example, I would recommend one of the papers presented at this year’s ICML.
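At its core, the attention mechanism mentioned above is just a softmax-weighted average of value vectors, with the weights determined by comparing a query against a set of keys. The sketch below is a generic scaled dot-product attention step in plain Python, not a reimplementation of any specific paper; all vectors and values in the toy example are made up.

```python
import math

def softmax(xs):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention: score each key against the query,
    turn the scores into weights, and return the weighted average of values."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    out = [sum(w * v[i] for w, v in zip(weights, values))
           for i in range(len(values[0]))]
    return out, weights

# Toy example: the query matches the first key, so the first value dominates.
out, weights = attention([1.0, 0.0],
                         [[1.0, 0.0], [0.0, 1.0]],
                         [[10.0, 0.0], [0.0, 10.0]])
```

Because the query is aligned with the first key, the first weight is the larger one and the output vector leans toward the first value.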
Also, I should at least mention a couple of things from NIPS 2016 in Barcelona. Unfortunately, I had to miss the conference the one time it was happening in my hometown, but I did follow it from a distance. From what I gathered, the hottest topics probably included Generative Adversarial Networks (with a very popular tutorial by Ian Goodfellow).
Let me also mention some of the advances in my main area of expertise: recommender systems. Of course, Deep Learning has also impacted this area. While I would still not recommend DL as the default approach to recommender systems, it is interesting to see how it is already being used in practice, and at large scale, in industry products. That said, there has been interesting research in the area that is not related to Deep Learning. The best paper award at this year’s ACM RecSys went to an interesting extension of Sparse Linear Methods (SLIM) using an initial unsupervised clustering step. Also, the paper describing this year’s winning challenge approach is a good reminder that Factorization Machines are still a good tool to have in your ML toolkit.
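For readers unfamiliar with Factorization Machines: they model every pairwise feature interaction through low-rank latent vectors, and the pairwise term can be computed in O(k·n) instead of O(n²) thanks to a standard algebraic identity. The sketch below uses made-up parameter values purely for illustration, and includes the naive double loop as a sanity check.

```python
def fm_predict(x, w0, w, V):
    """Factorization Machine prediction:
    y = w0 + sum_i w_i x_i + sum_{i<j} <V_i, V_j> x_i x_j,
    with the pairwise term computed in O(k*n) via
    0.5 * sum_f [ (sum_i V_if x_i)^2 - sum_i (V_if x_i)^2 ]."""
    n, k = len(x), len(V[0])
    linear = w0 + sum(w[i] * x[i] for i in range(n))
    pairwise = 0.0
    for f in range(k):
        s = sum(V[i][f] * x[i] for i in range(n))
        s2 = sum((V[i][f] * x[i]) ** 2 for i in range(n))
        pairwise += 0.5 * (s * s - s2)
    return linear + pairwise

def fm_predict_naive(x, w0, w, V):
    """Same model via the explicit O(n^2) double loop, for verification."""
    n, k = len(x), len(V[0])
    y = w0 + sum(w[i] * x[i] for i in range(n))
    for i in range(n):
        for j in range(i + 1, n):
            y += sum(V[i][f] * V[j][f] for f in range(k)) * x[i] * x[j]
    return y

# Illustrative (made-up) parameters for a 3-feature, rank-2 model.
x = [1.0, 2.0, 0.5]
w0, w = 0.1, [0.2, -0.3, 0.4]
V = [[0.1, 0.2], [0.3, -0.1], [-0.2, 0.5]]
y = fm_predict(x, w0, w, V)
```

The O(k·n) trick is what makes FMs practical on the very sparse, very high-dimensional feature vectors typical of recommender systems.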
I could probably go on for several more paragraphs just listing impactful advances in machine learning from the last twelve months. Note that I haven’t even listed any of the breakthroughs related to image recognition or deep reinforcement learning, or obvious applications such as self-driving cars, chat bots, or game playing, all of which saw huge advances in 2016. Not to mention all the controversy around how machine learning is having, or could have, negative effects on society, and the rise of discussions around those issues.
AlphaGo’s victory against one of the best Go players of all time, Lee Sedol, was a landmark for the field of AI, and especially for the technique known as deep reinforcement learning.
Reinforcement learning takes inspiration from the ways that animals learn how certain behaviors tend to result in a positive or negative outcome. Using this approach, a computer can, say, figure out how to navigate a maze by trial and error and then associate the positive outcome—exiting the maze—with the actions that led up to it. This lets a machine learn without instruction or even explicit examples. The idea has been around for decades, but combining it with large (or deep) neural networks provides the power needed to make it work on really complex problems (like the game of Go). Through relentless experimentation, as well as analysis of previous games, AlphaGo figured out for itself how to play the game at an expert level.
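The maze example maps directly onto tabular Q-learning, the simplest form of the idea (no deep network involved). Here is a toy sketch with a made-up one-dimensional "maze" where only reaching the exit yields a reward; all hyperparameters are illustrative.

```python
import random

random.seed(0)

# A tiny 1-D "maze": states 0..5, start at 0, exit (the only reward) at state 5.
# Actions: 0 = move left, 1 = move right.
N_STATES, GOAL = 6, 5
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

# Q-table: estimated future reward for each (state, action) pair.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(state, action):
    """Apply an action; reward 1.0 only when the exit is reached."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0)

for episode in range(200):
    s = 0
    while s != GOAL:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda act: Q[s][act])
        nxt, r = step(s, a)
        # Q-learning update: propagate the outcome back to the action taken.
        Q[s][a] += alpha * (r + gamma * max(Q[nxt]) - Q[s][a])
        s = nxt

# After training, the greedy policy in every non-goal state should be "move right".
policy = [max((0, 1), key=lambda act: Q[s][act]) for s in range(GOAL)]
print(policy)
```

The agent is never told the maze's layout; the discounted update is what "associates the positive outcome with the actions that led up to it", step by step back from the exit.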
The hope is that reinforcement learning will now prove useful in many real-world situations. And the recent release of open training platforms should spur progress on the necessary algorithms by increasing the range of skills computers can acquire this way.
In 2017, we are likely to see attempts to apply reinforcement learning to problems such as automated driving and industrial robotics. Google has already boasted of using deep reinforcement learning to make its data centers more energy efficient. But the approach remains experimental, and it still requires time-consuming simulation, so it will be interesting to see how effectively it can be deployed.
Dueling neural networks
At the banner AI academic gathering held recently in Barcelona, the Neural Information Processing Systems conference, much of the buzz was about a new machine-learning technique known as generative adversarial networks.
Invented by Ian Goodfellow, now a research scientist at OpenAI, generative adversarial networks, or GANs, are systems consisting of one network that generates new data after learning from a training set, and another that tries to discriminate between real and fake data. Pitted against each other, these networks can produce very realistic synthetic data. The approach could be used to generate new imagery, de-blur pixelated video footage, or apply stylistic changes to computer-generated designs.
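In the simplest terms, the two networks play a minimax game: the discriminator learns to tell real samples from generated ones, and the generator learns to fool it. The sketch below is a deliberately tiny toy, not a faithful GAN implementation: the "networks" are single-parameter scalar models, the real data is a made-up Gaussian, and the learning rate and step counts are arbitrary. What it does show is the alternating adversarial updates.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: samples from a Gaussian centred at 4 (made up for the demo).
def real_sample():
    return random.gauss(4.0, 1.0)

# Generator g(z) = a*z + b maps noise z ~ N(0, 1) to a fake sample.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) estimates P(x is real).
w, c = 0.0, 0.0

lr = 0.05
for step in range(3000):
    z = random.gauss(0.0, 1.0)
    x_real, x_fake = real_sample(), a * z + b

    # Discriminator ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator ascent on log D(fake): adjust a, b to fool the discriminator.
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * (1 - d_fake) * w * z
    b += lr * (1 - d_fake) * w

fakes = [a * random.gauss(0.0, 1.0) + b for _ in range(1000)]
mean_fake = sum(fakes) / len(fakes)
print("mean of generated samples:", round(mean_fake, 2))
```

After training, the generator's output distribution should have drifted from its initial mean of 0 toward the real data's mean, which is the "realistic synthetic data" effect in miniature.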
Yoshua Bengio, one of the world’s leading experts on machine learning (and Goodfellow’s PhD advisor at the University of Montreal), said at NIPS that the approach is especially exciting because it offers a powerful way for computers to learn from unlabeled data, something many believe may be key to further progress in years to come.
China’s AI boom
This may also be the year in which China starts looking like a major player in the field of AI. The country’s tech industry is shifting away from copying Western companies, and it has identified AI and machine learning as the next big areas of innovation.
China’s leading search company, Baidu, has had an AI-focused lab for some time, and it is reaping the rewards in terms of improvements in technologies such as voice recognition and natural language processing, as well as a better-optimized advertising business. Other players are now scrambling to catch up. Tencent, which offers the hugely successful mobile-first messaging and networking app WeChat, opened an AI lab last year, and the company also had a presence at NIPS. Didi, the ride-sharing giant that bought Uber’s Chinese operations earlier this year, is building out a lab of its own and reportedly working on its own driverless cars.
Chinese investors are now pouring money into AI-focused startups, and the Chinese government has signaled a desire to see the country’s AI industry blossom by 2018.
Language learning

Ask AI researchers what their next big target is, and they are likely to mention language. The hope is that techniques that have produced spectacular progress in voice and image recognition, among other areas, may also help computers parse and generate language more effectively.
This is a long-standing goal in artificial intelligence, and the prospect of computers communicating and interacting with us using language is a fascinating one. Better language understanding would make machines a whole lot more useful. But the challenge is a formidable one, given the complexity, subtlety, and power of language.
Don’t expect to get into deep and meaningful conversation with your smartphone for a while. But some impressive inroads are being made, and you can expect further advances in this area in 2017.
Backlash to the hype
As well as genuine advances and exciting new applications, 2016 saw the hype surrounding artificial intelligence reach heady new heights. While there is real substance in the underlying value of the technologies being developed today, it’s hard to escape the feeling that the publicity surrounding AI is getting a little out of hand.
Some AI researchers are evidently irritated. A launch party was organized during NIPS for a fake AI startup, as a spoof of the growing mania and nonsense around real AI research. The deception wasn’t very convincing, but it was a fun way to draw attention to a genuine problem.
One real problem is that hype inevitably leads to a sense of disappointment when big breakthroughs don’t happen, causing overvalued startups to fail and investment to dry up. Perhaps 2017 will feature some sort of backlash against the AI hype machine—and maybe that wouldn’t be such a bad thing.