There are caveats. The exact strength of the privacy guarantee depends on the parameters you use and on the number of computations you do (the epsilon values add up across releases under composition), so simply saying "we use a differentially private algorithm" doesn't, by itself, tell you how much privacy you actually get.
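To make the "number of computations" point concrete, here's a minimal sketch of basic composition; the query count and per-query epsilon are illustrative numbers, not anything from the original:

    # Under basic composition, the epsilons of individual releases add up.
    per_query_epsilon = 0.1
    num_queries = 50

    # Each release is 0.1-differentially private on its own, but publishing
    # all fifty of them is only (50 * 0.1) = 5-differentially private overall,
    # which is a much weaker guarantee.
    total_epsilon = num_queries * per_query_epsilon
    print(f"per query: {per_query_epsilon}, total budget spent: {total_epsilon}")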
Or even more briefly, if you want to know how many people in your database have characteristic X, you can compute that number and add Laplace(1/epsilon) noise [2] and output the result. That's epsilon-differentially private. In general, if you're computing a statistic that has sensitivity s (one person can change the statistic by at most s), then adding Laplace(s/epsilon) noise to the statistic makes it epsilon-differentially private (see e.g. Theorem 3.6 here [3]). The intuition is that, by scaling the added noise to the sensitivity, you cover up the presence or absence of any one individual.
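For concreteness, here's a minimal sketch of that counting example in Python; the dataset, the "characteristic X" predicate, and the epsilon value are illustrative choices, not from the original text:

    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon):
        """Return true_value plus Laplace(sensitivity / epsilon) noise.

        Scaling the noise to sensitivity / epsilon is what makes the
        released statistic epsilon-differentially private.
        """
        return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

    # "How many people in the database have characteristic X?"
    # A counting query has sensitivity 1: adding or removing one person
    # changes the count by at most 1.
    database = [{"age": 34, "smoker": True},
                {"age": 51, "smoker": False},
                {"age": 29, "smoker": True}]

    true_count = sum(1 for row in database if row["smoker"])  # exact answer
    epsilon = 0.5  # smaller epsilon = more privacy, more noise
    noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=epsilon)
    print(noisy_count)  # the epsilon-differentially private answer you release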
[1] https://github.com/frankmcsherry/blog/blob/master/posts/2016...