Thursday 1 November 2018

The simplest example of c++11 <random>

For some reason, the internet lacks a beginner-level tutorial on how to use the c++11 <random> library when your code is spread over multiple files.
This is not a tutorial; it is just what I managed to put together...

I want
  • one global random engine (e.g. the Mersenne Twister)
  • one real-valued uniform distribution on the interval [0,1); let's call this function RANDOM()
  • to be able to call RANDOM() from anywhere in my code.

There are three files: random.h, random.cpp and main.cpp (any additional .cpp file that includes random.h can also use the function RANDOM()).
The content of the files is as follows:
 
random.h
// random.h
#ifndef RND_HH_ // NB: identifiers starting with an underscore followed by a capital letter are reserved
#define RND_HH_

#include <random> //--- FOR THIS YOU NEED c++11, enable with -std=c++11 flag

// Declare engine - single instance for the whole code
//extern std::mt19937 my_rng;
extern std::mt19937_64 my_rng;

//Declare distributions:
extern std::uniform_real_distribution<double> my_unif_real_dist;
//extern std::uniform_int_distribution<int> my_unif_int_dist; // (must use an integer type)

int Seed(int seed);
double RANDOM();

#endif 

// end of random.h 
random.cpp
//random.cpp
#include <iostream>
#include <chrono>
#include "random.h"

//std::mt19937 my_rng {}; 
std::mt19937_64 my_rng {}; // Defines an engine
std::uniform_real_distribution<double> my_unif_real_dist(0., 1.); //Define distribution
// Function to seed the random number generator from main file
// useful if you want the seed from a parameter file
// a negative value for seed gets you a random seed
// outputs the seed itself
int Seed(int seed)
{
  if (seed < 0) {
    // derive the seed from the high-resolution clock, truncated to a
    // non-negative int so that the returned value reproduces the same state
    int rseed = static_cast<int>(std::chrono::high_resolution_clock::now().time_since_epoch().count() & 0x7fffffff);
    std::cerr << "Randomizing random generator, seed is "<<rseed<<std::endl;
    my_rng.seed(rseed);
    return rseed;
  } else {
    std::cerr << "User-provided seed is "<<seed<<std::endl;
    my_rng.seed(seed);
    return seed;
  }
}
// This is the function to call if you want a random number in the interval [0,1)
double RANDOM(void)
{
  return my_unif_real_dist(my_rng);
}
// end of random.cpp
And finally, the main.cpp file:
//main.cpp
#include <iostream>
#include "random.h"

int main()
{
  int max = 10;
  int my_seed = 235;
  
  int my_new_seed = Seed(my_seed); // Seed() returns the seed actually used
  std::cerr << "Using seed " << my_new_seed << std::endl;
  
  for(int i=0; i<max; ++i){
    double one_random_number = RANDOM();
    std::cerr << one_random_number << std::endl;
  }
}// end of main.cpp
That's it! This is really all you need.
Compile it with:
g++ -std=c++11 random.cpp main.cpp -o my_pretty_random_numbers
and happy random number generation.

By the way, this seems to work fine on my machine (running Ubuntu 18).
Can this be improved in simplicity and/or performance? Are there bugs?
Please let me know!
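
P.S. One candidate improvement: instead of seeding from the clock, the seed < 0 branch could draw the seed from std::random_device, which avoids two runs getting the same seed when they start within the same clock tick. A minimal sketch (the helper name random_seed is mine, not part of the code above):

#include <random>

// Sketch: obtain a non-negative int seed from std::random_device.
// Note: the quality of std::random_device varies by implementation.
int random_seed()
{
  std::random_device rd;
  return static_cast<int>(rd() & 0x7fffffff); // keep it non-negative
}

In Seed(), the clock-based line would then simply become int rseed = random_seed();.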

Sunday 3 June 2018

Easy (Bayesian) multidimensional scaling with Stan

Multidimensional scaling (MDS) is a data visualization technique in which the dimension of the data is reduced in a non-linear way. The data is represented as an \(N\times N\) distance matrix \((d_{ij})_{ij}\), and \(N\) points \(x_i\) in a \(D\)-dimensional space (typically \(D=2\)) are chosen such that the Euclidean distances \(\|x_i - x_j\|\) resemble the input distances \(d_{ij}\) as well as possible.

In metric MDS, an objective function \(E(x) = \sum_{1\leq i < j \leq N} (d_{ij} - \|x_i - x_j\|)^2\) is defined that needs to be minimized. Different flavors of MDS define this objective function differently. In order to minimize the objective function, one can use, for example, the conjugate gradient method. This method requires the gradient \(\nabla E\) of the objective function. Of course, this is not so difficult in the case of metric MDS, but more complicated objective functions might require more effort. Enter Stan.

Stan uses automatic differentiation for Hamiltonian Monte Carlo, but Stan can also be used for maximum likelihood. Hence, if we can formulate the MDS problem in terms of a likelihood function, we can let Stan do all the work. The parameters of the model are the \(x_i\), the data is given by the distances \(d_{ij}\). If we assume that given the parameters, the data is distributed as \[ d_{ij} \sim \mathcal{N}(\|x_i - x_j\|, \sigma^2)\,, \] then maximizing the (log) likelihood is equivalent to minimizing the function \(E\). The parameter \(\sigma^2\) is merely a nuisance parameter that needs to be estimated as well.
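
Indeed, assuming the \(d_{ij}\) are conditionally independent given the \(x_i\), the log-likelihood equals \[ \log L(x, \sigma^2) = -\frac{1}{2\sigma^2} \sum_{1\leq i < j \leq N} (d_{ij} - \|x_i - x_j\|)^2 - \binom{N}{2} \log\left(\sigma\sqrt{2\pi}\right) = -\frac{E(x)}{2\sigma^2} - \binom{N}{2} \log\left(\sigma\sqrt{2\pi}\right)\,, \] so for any fixed \(\sigma^2\), maximizing the likelihood over \(x\) amounts to minimizing \(E(x)\).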

An implementation of MDS in the Stan programming language

Implementing MDS in Stan is fairly straightforward, but there are a few snags that we should be aware of. First, if \(x\) solves the MDS problem, then so does any Euclidean transformation of \(x\). Hence, the model as stated above has too many parameters. We solve this by fixing the first point at the origin, restricting the second point to a \(1\)-dimensional half-space, the third point to a \(2\)-dimensional half-space, et cetera. The last \(N-D-1\) points are unrestricted. In Stan, we can accomplish this by using a cholesky_factor_cov matrix: a lower-triangular matrix with positive diagonal entries. We then use the transformed parameters block to concatenate the points together into a single matrix.
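
For example, for \(D=2\) the stacked matrix of positions has the structure \[ x = \begin{pmatrix} 0 & 0 \\ x_{2,1} & 0 \\ x_{3,1} & x_{3,2} \\ x_{4,1} & x_{4,2} \\ \vdots & \vdots \end{pmatrix}\,, \qquad x_{2,1} > 0\,, \quad x_{3,2} > 0\,, \] where the rows for \(x_2\) and \(x_3\) together form the \(2\times 2\) Cholesky factor, and the points \(x_4, x_5, \dots\) are unrestricted.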

Secondly, the data that we use in the example below is highly censored. Many of the distances are missing, and some are right censored. In such a case MDS can be used to infer the missing distances, and not merely visualize the data. The data that is passed to Stan, therefore, is a list of edges, a list of distances, and a list of codes that determine the type of censoring.

Thirdly, as the title of this post suggests, we will use Stan to do some sort of Bayesian MDS. In this case, we will sample a collection of "maps" \(x\) from a posterior distribution, which gives information about the location of each point, but also about the uncertainty of this location. Here, the fact that we restrict the first \(D+1\) points comes back to bite us, as the uncertainty of these points will be different from that of the unrestricted points. Furthermore, it might be hard to compare the individual maps to one another, and for instance to compute sensible mean locations of the points, as some maps may be "twisted" more than others. Therefore, we use the generated quantities block to center and rotate (cf. PCA) the sampled maps.



Example: Antigenic cartography of the influenza virus

An interesting application of MDS is antigenic cartography of the influenza virus. Influenza virus circumvents human antibody responses by continually evolving its surface proteins, in particular hemagglutinin (HA). This is known as antigenic drift. In order to decide whether flu vaccines need to be updated, the hemagglutination inhibition (HI) assay is used to determine if the induced antibody response is still effective against future strains. The titers measured in the HI assay can be used to define "distances" between antisera and antigens. Using MDS, the antisera and antigens can be drawn into a "map" that shows the antigenic drift of the virus. This was done by Smith et al. in 2004. Conveniently, the data used for their map is available online. This table gives HI titers \(H_{ij}\) of antigen \(i\) and antiserum \(j\). A small titer corresponds to a large distance; the distances are defined as \(d_{ij} = \log_2(H_{\max,j}) - \log_2(H_{ij})\), where \(H_{\max,j} = \max_{k} H_{kj}\). As an example, I recreated their antigenic map using the Stan model above and the Python script below.
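
In the workflow below this transformation is done by the Python script, but as a self-contained illustration, the titer-to-distance computation could look as follows (a C++ sketch with made-up toy titers; missing and censored titers, which the full model handles, are ignored here):

// titer_to_distance.cpp (sketch) -- compute d_ij = log2(H_max,j) - log2(H_ij)
#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

int main()
{
  // H[i][j]: HI titer of antigen i against antiserum j (toy values)
  std::vector<std::vector<double>> H = {{1280, 640}, {160, 1280}, {320, 80}};
  for (std::size_t j = 0; j < H[0].size(); ++j) {
    double Hmax = 0.0; // column maximum H_max,j
    for (std::size_t i = 0; i < H.size(); ++i) Hmax = std::max(Hmax, H[i][j]);
    for (std::size_t i = 0; i < H.size(); ++i) {
      double d = std::log2(Hmax) - std::log2(H[i][j]);
      std::cout << "d(" << i << "," << j << ") = " << d << std::endl;
    }
  }
}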


The white squares denote the relative positions of the antisera in the "antigenic space", while the colored circles represent the antigens. The colors map to the years in which the influenza strains were isolated.

Bayesian multidimensional scaling

For antigenic cartography of IAV, Bayesian MDS has been introduced by Bedford et al., who used multiple HI assay results per antigen/antiserum pair to incorporate the uncertainty of these measurements in their antigenic map. Moreover, they were able to use genetic and temporal information about the antigens (i.e. the RNA sequences of HA and their isolation dates) to inform the positions of the antigens and antisera on the map. We will not go that far in this post, but since we have already formulated the MDS algorithm in Stan, we might as well make a "Bayesian" antigenic map. This can give some insight into the uncertainty of the positions of the antigens and antisera, not unlike the confidence areas drawn by Smith et al. (the ellipsoid shapes). The result is given by the following figure.


Again, squares indicate antisera and colored circles the antigens. All the individual MCMC samples are represented by the grey dots. The MCMC samples for each antigen or antiserum are used to draw a two-dimensional error bar (i.e. ellipse) around the mean location.
A Python script for parsing the HI titer data, compiling and running the Stan model, and drawing the maps is added below. For it to work, you will need to download the mds_model.stan file and make a CSV file called baselinemap.csv containing the HI table.

Tuesday 6 February 2018

Computing q-values with C++

When looking for associations between features \(i = 1,\dots, m\) and some trait, it is often necessary to have some sort of multiple-testing correction. A very conservative method is the Bonferroni correction, which controls the family-wise error rate (FWER) but at the cost of many false negatives. This is not desirable when one wants to discover features or associations, and therefore other methods have been developed. One particularly intuitive method is based on the false discovery rate (FDR) and uses so-called \(q\)-values, which are (under certain conditions) elegantly analogous to \(p\)-values.

False discovery rate

Let \(S\) be the number of features called significant, and \(F\) the number of false positives among the significant features (i.e. false discoveries). In an article by John D. Storey, the (positive) false discovery rate is defined as \[ {\rm pFDR} := \mathbb{E}\left[\left.\frac{F}{S}\right| S > 0 \right]\,. \] Hence, the pFDR is the expected fraction of false positives among the features that are called significant. The condition \(S > 0\) ensures that it is well defined.

In the case of hypothesis testing, one typically has a test statistic \(T\), and one wants to test if the null hypothesis is true (\(H = 0\)), or rather that the alternative hypothesis is true (\(H = 1\)). The statistical model specifies the distribution of \(T | H = 0\), and the null hypothesis is rejected when the realization of \(T\) falls into a pre-defined significance region \(\Gamma\).
When testing multiple features, we typically have a sequence \((T_i, H_i)_{i=1}^m\), here assumed to be identically distributed and independent. The \({\rm pFDR}\) then depends on \(\Gamma\): \[ {\rm pFDR}(\Gamma) = \mathbb{E}\left[\left.\frac{F(\Gamma)}{S(\Gamma)}\right| S(\Gamma) > 0 \right] \,, \] where \(F(\Gamma) := \#\{i : T_i \in \Gamma \wedge H_i = 0 \} \) and \(S(\Gamma) := \#\{i : T_i \in \Gamma\}\). Storey derives that under certain conditions, we can write \[ {\rm pFDR}(\Gamma) = \mathbb{P}[H = 0 | T \in \Gamma] = \frac{\mathbb{E}[F(\Gamma)]}{\mathbb{E}[S(\Gamma)]} \]

The q-value

Let \(\{\Gamma_{\alpha}\}_{\alpha=0}^1\) be a nested family of significance regions. That is, \(\Gamma_{\alpha} \subseteq \Gamma_{\alpha'}\) whenever \(\alpha \leq \alpha'\), and \(\mathbb{P}[T \in \Gamma_{\alpha} | H=0] = \alpha\). For instance, if \(T | H = 0 \sim \mathcal{N}(0,1)\), then we could choose \(\Gamma_{\alpha} = [z_{\alpha}, \infty)\), where \(z_{\alpha} = \Phi^{-1}(1-\alpha)\), with \(\Phi\) the CDF of \(\mathcal{N}(0,1)\).
The \(q\)-value of a realization \(t\) of \(T\) is then defined as \[ q(t) = \inf_{\{\Gamma_{\alpha} : t \in \Gamma_{\alpha}\}} {\rm pFDR}(\Gamma_{\alpha})\,. \] We can now give the above-mentioned analogy between \(p\)-values and \(q\)-values. The \(p\)-value is defined as \[ p(t) = \inf_{\{\Gamma_{\alpha} : t \in \Gamma_{\alpha}\}} \mathbb{P}[T \in \Gamma_{\alpha} | H = 0]\,, \] while under the right conditions, the \(q\)-value can be written as \[ q(t) = \inf_{\{\Gamma_{\alpha} : t \in \Gamma_{\alpha}\}} \mathbb{P}[H = 0 | T \in \Gamma_{\alpha}]\,. \]

Computing q-values

In order to compute \(q\)-values, given a sequence of \(p\)-values, we follow the steps given in a paper by Storey and Tibshirani. In this scenario, the \(p\)-value plays the role of the realization \(t\) of the statistic \(T\). Under the null hypothesis, these \(p\)-values are uniformly distributed. As a family of significance regions, we simply take \(\Gamma_{\alpha} = [0,\alpha]\), and write for instance \(S(\alpha) := S(\Gamma_{\alpha})\).

First, we have to estimate \(\mathbb{E}[S(\alpha)]\), for which we use \(\#\{i : p_i \leq \alpha\}\), and we estimate \(\mathbb{E}[F(\alpha)]\) with \(m \cdot \hat{\pi}_0 \cdot \alpha\), where \(\hat{\pi}_0\) is an estimate for \(\pi_0 = \mathbb{P}[H=0]\).
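Combining the two estimates gives the plug-in estimator \[ \widehat{\rm pFDR}(\alpha) = \frac{m \cdot \hat{\pi}_0 \cdot \alpha}{\#\{i : p_i \leq \alpha\}}\,, \] which, evaluated at \(\alpha = p_i\), is exactly the quantity \(\hat{\pi}_0 p_i m / i\) that appears in the algorithm below.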
The most difficult part of computing \(q\)-values is estimating \(\pi_0\). An often-used estimate is the average \(p\)-value, but we can do a little bit better by making use of the empirical CDF of the \(p\)-values. The figure below shows a "typical" histogram of \(p\)-values, where the \(p\)-values sampled from the alternative distribution are given by the gray bars. The marginal distribution of \(p\)-values becomes relatively flat towards the right, where most \(p\)-values should come from the null distribution, which is \({\rm Uniform}(0,1)\). The right panel of the figure shows a "rotated" empirical CDF of the \(p\)-values, i.e. \[ x \mapsto R(x) = 1-{\rm CDF}(1-x)\,, \] and the fact that large \(p\)-values should be uniformly distributed is reflected in the fact that \(R(x)\) is a straight line for small \(x\). The slope of this straight line is an estimate of \(\pi_0\). In the C++ code below, I use GSL to fit a line through \(R\) (indicated by the red line in the figure), using the weight function \(x \mapsto (1-x)^2\) to give more precedence to small values of \(x\), thereby mostly ignoring the non-straight part of \(R\).

In this example, the marginal \(p\)-values are sampled from a mixture distribution \(\pi_0 {\rm Uniform}(0,1) + (1-\pi_0) {\rm Beta}(10,1)\), where \(\pi_0 = 3/4\).


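A minimal sketch of this \(\hat{\pi}_0\) estimator (the function name estimate_pi0 and the implementation details are my own; the actual code in qvalues.hpp may differ):

// Estimate pi0 by fitting a straight line through the origin to the
// rotated empirical CDF R(x) = 1 - CDF(1-x), with weights (1-x)^2.
// Requires GSL; link with -lgsl -lgslcblas.
#include <algorithm>
#include <vector>
#include <gsl/gsl_fit.h>

double estimate_pi0(std::vector<double> pvalues)
{
  std::sort(pvalues.begin(), pvalues.end());
  const std::size_t m = pvalues.size();
  std::vector<double> x(m), y(m), w(m);
  for (std::size_t i = 0; i < m; ++i) {
    double p = pvalues[m-1-i];            // the (i+1)-th largest p-value
    x[i] = 1.0 - p;                       // rotated abscissa
    y[i] = double(i) / m;                 // R(x[i]): fraction of p-values > p
    w[i] = (1.0 - x[i]) * (1.0 - x[i]);   // give more weight to small x
  }
  double slope = 1.0, cov11 = 0.0, sumsq = 0.0;
  // weighted least-squares fit of y = slope * x
  gsl_fit_wmul(x.data(), 1, w.data(), 1, y.data(), 1, m, &slope, &cov11, &sumsq);
  return std::min(1.0, std::max(0.0, slope)); // clamp the estimate to [0,1]
}
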
Now sort the \(p\)-values such that \(p_1 \leq p_2 \leq \dots \leq p_m\), so that \(\#\{j : p_j \leq p_i\} = i\), and first determine the \(q\)-value corresponding to \(p_m\): \[ q_m = \hat{\pi}_0 \cdot p_m \] The \(q\)-value \(q_i\) corresponding to \(p\)-value \(p_i\) is then computed recursively, for \(i = m-1, \dots, 1\), as \[ q_i = \min(\hat{\pi}_0 p_i m/i, q_{i+1}) \] Recently, I had to implement this algorithm in C++. The function is given in the following header file, and accepts a "keyed" list of \(p\)-values, where the key is used to identify the feature. The function returns a keyed list of \(q\)-values.
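
A minimal sketch of such a function (the name compute_qvalues and the explicit \(\hat{\pi}_0\) argument are my choices; the actual header presumably estimates \(\hat{\pi}_0\) internally, e.g. as sketched above):

// qvalues.hpp (sketch) -- compute q-values from a keyed list of p-values,
// implementing q_m = pi0 * p_m and q_i = min(pi0 * p_i * m/i, q_{i+1}).
#ifndef QVALUES_HPP_
#define QVALUES_HPP_

#include <algorithm>
#include <utility>
#include <vector>

template <typename Key>
std::vector<std::pair<Key, double>>
compute_qvalues(std::vector<std::pair<Key, double>> pvals, double pi0)
{
  if (pvals.empty()) return pvals;
  // sort ascending by p-value, so that #{j : p_j <= p_i} = i (1-based)
  std::sort(pvals.begin(), pvals.end(),
            [](const std::pair<Key, double>& a, const std::pair<Key, double>& b) {
              return a.second < b.second;
            });
  const std::size_t m = pvals.size();
  double qnext = pi0 * pvals[m-1].second;  // q_m = pi0 * p_m
  pvals[m-1].second = qnext;
  for (std::size_t i = m-1; i-- > 0; ) {   // i = m-2, ..., 0 (0-based)
    qnext = std::min(pi0 * pvals[i].second * m / (i+1), qnext);
    pvals[i].second = qnext;               // overwrite p-value with q-value
  }
  return pvals;                            // keys now map to q-values
}

#endif // QVALUES_HPP_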



The following code gives an example of how to use qvalues.hpp, and can be used for testing:
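
A sketch of such a test program (the mixture used, the significance threshold \(q \leq 0.2\), and all names are my assumptions, so its output will not match the run below exactly):

// qvalues_test.cpp (sketch) -- simulate p-values from a null/alternative
// mixture, compute q-values, and tabulate true/false discoveries.
// Assumes estimate_pi0 and compute_qvalues from the sketches above.
#include <cmath>
#include <cstdlib>
#include <iostream>
#include <random>
#include <utility>
#include <vector>
#include "qvalues.hpp"

int main(int argc, char* argv[])
{
  unsigned seed = (argc > 1) ? std::atoi(argv[1]) : 1728;
  std::mt19937_64 rng(seed);
  std::uniform_real_distribution<double> unif(0.0, 1.0);

  const std::size_t m = 1000;   // number of features
  const double pi0 = 0.75;      // fraction of true null hypotheses
  const double qthresh = 0.2;   // call features with q <= 0.2 significant

  std::vector<std::pair<std::size_t, double>> pvals;
  std::vector<bool> is_null(m);
  for (std::size_t i = 0; i < m; ++i) {
    is_null[i] = (unif(rng) < pi0);
    // null p-values are Uniform(0,1); alternative p-values are pushed
    // towards zero (my choice: p = 1 - B with B ~ Beta(10,1), by inversion)
    double p = is_null[i] ? unif(rng) : 1.0 - std::pow(unif(rng), 0.1);
    pvals.push_back(std::make_pair(i, p));
  }

  // estimate pi0 from the bare p-values
  std::vector<double> ps;
  for (const auto& kp : pvals) ps.push_back(kp.second);
  const double pi0_hat = estimate_pi0(ps);

  std::size_t TD = 0, FD = 0, TN = 0, FN = 0;
  for (const auto& kq : compute_qvalues(pvals, pi0_hat)) {
    bool sig = (kq.second <= qthresh);
    if (sig) { is_null[kq.first] ? ++FD : ++TD; }
    else     { is_null[kq.first] ? ++TN : ++FN; }
  }
  std::cout << "                true false\n"
            << "  discoveries: " << TD << "   " << FD << "\n"
            << "    negatives: " << TN << "   " << FN << "\n";
  if (TD + FD > 0)
    std::cout << "realized FDR: " << double(FD) / (TD + FD) << std::endl;
}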


After compiling and running the program, the result should be something like:
$ g++ -O3 -std=c++11 qvalues_test.cpp -lgsl -lgslcblas -o qvaltest
$ ./qvaltest 1728
                true false
  discoveries:   143    32
    negatives:   712   113
realized FDR: 0.182857