To be useful in Bayesian practice, a probabilistic language must support conditioning: imposing constraints in a way that preserves the relative probabilities of program outputs. Every language to date that supports probabilistic conditioning also places seemingly artificial restrictions on legal programs, such as disallowing unbounded loops and recursion, and restricting conditions to simple equality constraints such as x = 2.
These common restrictions arise from reasoning about probability using only probability densities. To overcome them, we have developed a probabilistic language semantics using a measure-theoretic approach: defining program output distributions by the probabilities of preimages.
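The preimage view can be sketched concretely: regard a probabilistic program as a deterministic function f of its random source, so that the probability of an output event A is the measure of the preimage f⁻¹(A). The following toy Python sketch (an illustration under that assumption, not the paper's actual translation; `f` and `prob` are hypothetical names) estimates such a preimage measure by Monte Carlo:

```python
import random

# View a probabilistic program as a deterministic function f of its
# random source omega -- here, a pair of uniform draws from [0,1]^2.
def f(omega):
    u, v = omega
    return u + v  # the "program output"

# Preimage-based probability: P(output in A) = measure of f^-1(A).
# Estimated by Monte Carlo over the random source (illustrative only).
def prob(event, trials=100_000):
    rng = random.Random(0)
    hits = sum(event(f((rng.random(), rng.random()))) for _ in range(trials))
    return hits / trials

# The preimage of {y : y <= 1} is the triangle u + v <= 1, with measure 1/2.
p = prob(lambda y: y <= 1.0)
```

Because the condition is a set-membership test on outputs rather than a density, nothing here requires the event to be a simple equality constraint.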
This semantics' translation compiles probabilistic programs into any functional language capable of uncountably infinite computation. Unfortunately, no such language is actually realizable (as far as we know). We have therefore derived an approximating semantics, whose translation compiles programs into Turing-equivalent functional languages, and have proved it sound. We have implemented the approximating semantics almost directly in Typed Racket.
We demonstrate the implementation's expressive power on typical Bayesian problems, as well as on problems whose straightforward solutions cannot be expressed using densities, such as those involving bounded measuring devices and stochastic ray tracing.
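To see why a bounded measuring device defeats densities, consider a sensor that clips its reading to [0, 1]: the observation "reading = 1" then carries positive probability mass, so it has no density, yet its preimage {x : x ≥ 1} is a perfectly good set to condition on. The sketch below illustrates this with naive rejection sampling under an assumed Gaussian prior; it is a toy, not the paper's implementation, and `read` and `posterior_mean` are hypothetical names:

```python
import random

# A bounded measuring device: readings are clipped to [0, 1].
def read(x):
    return min(max(x, 0.0), 1.0)

# Condition a latent Gaussian(0.5, 0.5) on observing a clipped reading
# of exactly 1.0. The event {read(x) == 1.0} equals {x >= 1.0}: it has
# positive probability (hence no density), but conditioning on its
# preimage via rejection is straightforward.
def posterior_mean(trials=100_000):
    rng = random.Random(0)
    accepted = [x for x in (rng.gauss(0.5, 0.5) for _ in range(trials))
                if read(x) == 1.0]
    return sum(accepted) / len(accepted)

m = posterior_mean()  # mean of the prior truncated to [1, infinity)
```

A density-based language would have to assign this observation likelihood zero; the set-based formulation conditions on it directly.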
Neil Toronto is a doctoral student in Computer Science at Brigham Young University.