There were some big positives. Everyone there is very smart, and depending on your tastes it can be pretty fun to be in meetings where you try to explain Davidsonian ontology to perplexed business people. I suspect a decent fraction of the technical staff are reading this comment thread. There are also some genuine technical advances (which I wish were more publicly shared) in inference engine architecture, or more generally stemming from treating symbolic reasoning as a practical engineering project and giving up on things like completeness in favor of getting an answer most of the time.
There were also some big negatives, mostly structural ones. Within Cycorp, different people have very different pictures of what the ultimate goals of the project are, what true AI is, and how (and whether) Cyc is going to make strides along the path to true AI. The company has been around for a long time and these disagreements never really resolve - they just hang around and shape how different segments of the company work. There's also a very flat organizational structure, which makes for an anarchic and shifting map of who is responsible or accountable for what. And there's a huge disconnect between what the higher-ups understand the company and technology to be doing, the projects they actually work on, and the low-level day-to-day work done by the programmers and ontologists there.
I was initially pretty skeptical of the continued feasibility of symbolic AI when I went in to interview, but Doug Lenat gave me a pitch that essentially assured me the project had found a way around many of my concerns. In particular, they were doing deep reasoning from common-sense principles using heuristics, not just doing the thing Prolog projects often devolved into, where you end up basically writing a logical system that emulates a procedural algorithm to solve problems.
It turns out there's a kind of reality distortion field around the management there, despite their best intentions - partly maintained by management's own steadfast belief that what Cyc does is what it ought to be doing, but partly maintained by a layer of people who actively isolate management from understanding the dirty work that goes into actually making projects work, or appear to. So while a certain amount of "common sense" knowledge factors into the reasoning processes, a great deal of Cyc's output at the project level really comes from hand-crafted algorithms implemented either in the inference engine or in the ontology.
Also, the codebase is the biggest mess I have ever seen, by an order of magnitude. I spent entire days just scrolling through different versions of entire systems that duplicated massive chunks of functionality, written 20 years apart, with no indication of which (if any) still worked or was the preferred way to do things.
Cyc doesn't do anything Bayesian like assigning specific probabilities to individual beliefs. IIRC they tried something like that and ran into two problems: nobody felt very confident attaching any particular precise number to priors, and the inference chains can be so long and involve so many assertions that anything less than a probability of 1 on most assertions would leave conclusions with very low confidence.
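To see why the second problem bites, here's a toy back-of-the-envelope illustration (mine, not anything from Cyc's codebase): even generous per-assertion probabilities collapse once a proof chains enough of them together.

```python
# Toy illustration: multiplying per-assertion probabilities along a
# long inference chain. With 40 assertions each believed at 0.95,
# the conclusion is only supported at ~13% confidence.
per_assertion = 0.95
chain_length = 40
confidence = per_assertion ** chain_length
print(f"{confidence:.2f}")  # -> 0.13
```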
As to what they actually do, there are a few approaches.
I know that, for one thing, there are coarse-grained epistemic levels of belief built into the representation system - some predicates have "HighLikelihoodOf___" or "LowLikelihoodOf___" versions that enable very rough probabilistic reasoning which (it's argued - I have no position on this) is actually closer to the kind of folk-probabilistic thinking humans actually do.
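As a rough sketch of the idea (my own toy model, not Cyc's representation or vocabulary), you can think of beliefs carrying a small set of qualitative levels rather than numbers, with a weakest-link rule for combining them:

```python
# Rough sketch (not Cyc's implementation) of coarse epistemic levels:
# beliefs carry a qualitative likelihood instead of a number, and a
# conclusion is only as strong as the weakest premise supporting it.
from enum import IntEnum

class Likelihood(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CERTAIN = 4

def conclude(*premise_likelihoods: Likelihood) -> Likelihood:
    # Assumed combination rule: weakest link (min), which keeps the
    # reasoning rough but avoids the vanishing-probability problem
    # that numeric chains run into.
    return min(premise_likelihoods)

print(conclude(Likelihood.CERTAIN, Likelihood.HIGH, Likelihood.HIGH).name)  # HIGH
```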
Also, Cyc can use non-monotonic logic, which I think is relatively rare among commercial inference engines. I'm not going to give the best explanation here, but effectively, Cyc can assume that some assertions are "generally" true but may have certain exceptions, which makes it easy to express a lot of facts in a way that's similar to human reasoning. In general, mammals don't lay eggs. So you can assert that mammals don't lay eggs. But you can also mark that assertion as non-monotonic, with exceptions (e.g. platypuses).
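Here's a minimal sketch of that default-with-exceptions pattern (plain Python, not CycL; the rule and exception tables are invented for illustration):

```python
# Minimal sketch of default reasoning with exceptions: a rule holds
# "generally" unless a more specific exception overrides it.
defaults = {("Mammal", "lays_eggs"): False}      # mammals generally don't lay eggs
exceptions = {("Platypus", "lays_eggs"): True}   # monotremes are the exception

taxonomy = {"Platypus": "Mammal", "Dog": "Mammal"}

def lays_eggs(kind: str) -> bool:
    # Exceptions are checked before defaults, so adding new knowledge
    # (a new exception) can retract an earlier conclusion -- that's the
    # non-monotonic part.
    if (kind, "lays_eggs") in exceptions:
        return exceptions[(kind, "lays_eggs")]
    return defaults[(taxonomy[kind], "lays_eggs")]

print(lays_eggs("Dog"))       # False
print(lays_eggs("Platypus"))  # True
```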
Finally, and this isn't actually strictly about probabilistic reasoning, but it helps represent other kinds of non-absolute reasoning: knowledge in Cyc is always contextualized. The knowledge base is divided up into "microtheories": contexts in which assertions are taken to hold as both true and relevant - very little is assumed to be true always and across the board. This lets them represent a lot of different topics, conflicting theories, or even fictional worlds - there are various microtheories used for reasoning about events in popular media franchises, where the same laws of physics might not apply.
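A crude sketch of the mechanism (my own toy model; the microtheory names are invented, and the inheritance links are only loosely analogous to Cyc's genlMt relation):

```python
# Toy sketch (not Cyc's API) of contextualized assertions: each
# assertion lives in a microtheory, and a query is answered relative
# to a microtheory plus the more general ones it inherits from.
from typing import Dict, List, Set, Tuple

Assertion = Tuple[str, ...]

kb: Dict[str, Set[Assertion]] = {
    "BaseKB": {("isa", "Human", "Mammal")},
    "PhysicsMt": {("holds", "GravityApplies")},
    "CartoonPhysicsMt": {("holds", "GravityAppliesOnlyAfterLookingDown")},
}

# Assumed inheritance links: a microtheory sees its own assertions
# plus everything visible from its more general parents.
genl_mt: Dict[str, List[str]] = {
    "PhysicsMt": ["BaseKB"],
    "CartoonPhysicsMt": ["BaseKB"],   # cartoons keep the taxonomy, not the physics
}

def visible(mt: str) -> Set[Assertion]:
    seen = set(kb.get(mt, set()))
    for parent in genl_mt.get(mt, []):
        seen |= visible(parent)
    return seen

print(("holds", "GravityApplies") in visible("PhysicsMt"))         # True
print(("holds", "GravityApplies") in visible("CartoonPhysicsMt"))  # False
```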
I understand that any practical system of this kind would have to be very coarse, but even at the coarse level, does it have any kind of "error bar" indicator, to show how "sure" it is of the possibly incorrect answer? And can it come up with pertinent questions to narrow things down to a more "correct" answer?
The latter thing sounds like something Doug Lenat has wanted for years, though I think it mostly comes up in cases where the available information is ambiguous rather than unreliable. There are various knowledge-entry schemes that involve Cyc dynamically generating follow-up questions to ask the user, either to disambiguate or to find relevant information.
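Something like the following toy sketch (purely hypothetical names; not Cyc's actual knowledge-entry tooling) captures the flavor: when an input term maps to multiple known concepts, generate a clarifying question rather than guessing.

```python
# Hypothetical sketch of interactive disambiguation during knowledge
# entry: an ambiguous term triggers a generated question to the user.
from typing import Optional

# Invented example lexicon mapping surface terms to candidate concepts.
candidates = {
    "bank": ["Bank-FinancialInstitution", "Bank-RiverSide"],
    "java": ["Java-ProgrammingLanguage", "Java-TheIsland", "Java-Coffee"],
}

def clarifying_question(term: str) -> Optional[str]:
    matches = candidates.get(term.lower(), [])
    if len(matches) > 1:
        return f"By '{term}', do you mean: {', '.join(matches)}?"
    return None  # unambiguous (or unknown), so no question is needed

print(clarifying_question("Java"))
# By 'Java', do you mean: Java-ProgrammingLanguage, Java-TheIsland, Java-Coffee?
```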