Deep learning is the poster child of machine learning, and of artificial intelligence more generally. But it is just one of several useful techniques available to us. I use it most frequently for pattern-matching tasks like those required in modelling perception, but it's not so hot at determining the relative meaning or implications of that perception.
Because (most) current deep learning models lack any broad understanding of the world, even quite small changes in stimuli can lead to spectacularly different, and usually incorrect, results. For many business tasks this isn't disastrous, as long as you keep a human in the loop to keep an eye on things.
I recommend “The Master Algorithm” (2015) by Pedro Domingos to anyone who'd like to expand their knowledge of the other four main tribes of thought in AI today: Symbolists, Evolutionaries, Bayesians and Analogizers. The Bayesian approach is particularly interesting for legal practice.
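To give a flavour of why Bayesian reasoning suits legal practice, here is a minimal sketch of Bayes' theorem applied to weighing forensic evidence. The numbers are entirely hypothetical, chosen only to illustrate how a strong match can update a small prior; they are not drawn from any real case or test.

```python
def posterior(prior, true_positive_rate, false_positive_rate):
    """Bayes' theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)].

    H is the hypothesis (e.g. the suspect is the source of a sample),
    E is the evidence (e.g. a forensic match).
    """
    numerator = true_positive_rate * prior
    denominator = numerator + false_positive_rate * (1 - prior)
    return numerator / denominator

# Hypothetical figures: prior belief of 1 in 1,000 that the suspect is the
# source, a test that always matches the true source, and a 1-in-10,000
# chance of a coincidental match with someone else.
p = posterior(prior=0.001, true_positive_rate=1.0, false_positive_rate=0.0001)
print(round(p, 3))  # → 0.909
```

Note that even with such a discriminating test, the posterior is about 91%, not 99.99%; conflating the two is the well-known prosecutor's fallacy, which is exactly the kind of error explicit Bayesian updating guards against.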
In the meantime, do check out Gary Marcus's new book, “Rebooting AI” (2019, co-authored with Ernest Davis), which draws attention to some of the shortcomings of the Connectionist deep learning approach.
From a technical perspective, deep learning may be good at mimicking the perceptual tasks of the human brain, like image or speech recognition. But it falls short on other tasks, like understanding conversations or causal relationships. To create more capable and broadly intelligent machines, often referred to colloquially as artificial general intelligence, deep learning must be combined with other methods.