Compressive sensing (CS) is a framework whereby one performs $N$ nonadaptive measurements to constitute a vector $v \in \mathbb{R}^N$, with $v$ used to recover an approximation $\hat{u} \in \mathbb{R}^M$ to a desired signal $u \in \mathbb{R}^M$, with $N \ll M$; this is performed under the assumption that $u$ is sparse in the basis represented by the matrix $\Psi \in \mathbb{R}^{M \times M}$. It has been demonstrated that with appropriate design of the compressive measurements used to define $v$, the decompressive mapping $v \to \hat{u}$ may be performed with error $\|u - \hat{u}\|_2^2$ having asymptotic properties analogous to those of the best adaptive transform-coding algorithm applied in the basis $\Psi$. The mapping $v \to \hat{u}$ constitutes an inverse problem, often solved using $\ell_1$ regularization or related techniques. In most previous research, if $L > 1$ sets of compressive measurements $\{v_i\}_{i=1}^{L}$ are performed, each of the associated $\{\hat{u}_i\}_{i=1}^{L}$ is recovered one at a time, independently. In many applications the $L$ "tasks" defined by the mappings $v_i \to \hat{u}_i$ are not statistically independent, and it may be possible to improve the performance of the inversion if these statistical interrelationships are exploited. In this paper, we address this problem within a multitask-learning setting, wherein the mapping $v_i \to \hat{u}_i$ for each task corresponds to inferring the parameters (here, wavelet coefficients) associated with the desired signal $u_i$, and a shared prior is placed across all of the $L$ tasks. Under this hierarchical Bayesian model, data from all $L$ tasks contribute toward inferring a posterior on the hyperparameters, and once the shared prior is thereby inferred, the data from each of the $L$ individual tasks are then employed to estimate the task-dependent wavelet coefficients. An empirical Bayesian procedure for the estimation of hyperparameters is considered; two fast inference algorithms extending the relevance vector machine (RVM) are developed. Example results on several data sets demonstrate the effectiveness and robustness of the proposed algorithms.
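As a point of reference for the setup described above, the following is a minimal sketch of a single CS task: $N$ random nonadaptive measurements $v = \Phi u$ of a signal that is sparse in a basis $\Psi$, followed by $\ell_1$-regularized recovery of $\hat{u}$. The solver here is plain iterative soft-thresholding (ISTA), used only as a simple stand-in for the inverse problem; the paper itself develops hierarchical Bayesian (RVM-based) inference rather than this deterministic approach, and all dimensions and parameter values below are illustrative.

```python
# Sketch of one CS task: measure v = Phi @ u, then recover u_hat by
# l1-regularized inversion (ISTA). Not the paper's Bayesian method.
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 256, 64, 8          # signal length, measurements (N << M), sparsity

# Signal u = Psi @ theta with K-sparse coefficients theta.
# Psi is taken as the identity here purely for illustration.
Psi = np.eye(M)
theta = np.zeros(M)
theta[rng.choice(M, K, replace=False)] = rng.standard_normal(K)
u = Psi @ theta

# Nonadaptive compressive measurements via a random Gaussian matrix.
Phi = rng.standard_normal((N, M)) / np.sqrt(N)
v = Phi @ u

# ISTA for min_theta ||v - A theta||_2^2 + lam * ||theta||_1,  A = Phi @ Psi.
A = Phi @ Psi
lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of gradient
theta_hat = np.zeros(M)
for _ in range(500):
    z = theta_hat - step * (A.T @ (A @ theta_hat - v))
    theta_hat = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

u_hat = Psi @ theta_hat
print("relative error:", np.linalg.norm(u - u_hat) / np.linalg.norm(u))
```

In the multitask setting of the paper, $L$ such tasks would share a common prior on the coefficients, so that data from all tasks jointly inform the hyperparameters before each task-dependent $\hat{u}_i$ is estimated; that Bayesian machinery is not reproduced in this sketch.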