This talk will focus on the question of what precise signatures one should look for in an experiment to rule out the possibility that the experiment admits a well-defined classical model. By a “classical” model, we refer to a particular notion of classicality, namely noncontextuality, inspired by the Kochen-Specker theorem. The Kochen-Specker theorem is a mathematical result that exposes the inconsistency between quantum theory and any putative underlying model of it in which the outcomes of a measurement are fixed prior to the act of measurement (that is, deterministically) by some (possibly hidden) physical states of the system, in a manner that does not depend on operationally irrelevant details of the measurement context; that is, the outcome assignments are fixed noncontextually in the model. Unlike Bell’s theorem, the Kochen-Specker theorem is not experimentally testable because of idealizations (such as outcome determinism) that are implicit in its statement. I will describe some recent work on how to go from the Kochen-Specker theorem to an experimentally robust signature of the failure of noncontextuality. Because such signatures, namely noise-robust noncontextuality inequalities, make only minimal assumptions about the operational theory describing the experiment, they do not rely on the validity of the entire quantum formalism. In other words, they can be used to assess nonclassicality even if an experiment admits deviations from quantum theory.