make_priors_and_init.Rd
Here we fix the hyperparameters for priors on \(\theta_i\) and \(W_i\), i.e., \(B\), \(U\), \(\mu\) and \(\nu_w\).
make_priors_and_init(
  df.background,
  col.variables,
  col.item,
  use.priors = c("ML", "vague"),
  use.init = c("random", "vague"),
  ...
)
Argument | Description
---|---
df.background | the background dataset
col.variables | columns containing the variables: names or positions
col.item | column containing the item id (\(i\)): name or position
use.priors | prior elicitation method; see Details
use.init | initialization method; see Details
... | additional parameters for the priors and the initialization; see Details
Returns a list of variables (detailed below).
An appropriate initialization value for \(W^{-1}_i\), \(i = 1, 2\) (the \(h_p\) and \(h_d\) chains, respectively), is also generated. Notice that the LR computation uses three chains; the same initialization is reused across all of them.
use.priors
:
'ML'
: maximum likelihood estimation, as in Bozza et al. (2008)
'vague'
: low-information priors: U is alpha*diag(p), B is beta*diag(p), mu is mu0. By default, alpha = 1, beta = 1 and mu0 = 0 (see the sketch below).

The Inverted Wishart degrees of freedom nw are set as small as possible without losing full rank: \(\nu_w = 2(p + 1) + 1\).
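As a rough sketch, the 'vague' quantities above can be reproduced in base R (illustrative only; the variable names mirror the returned list, not the package internals):

p <- 2                          # number of variables
alpha <- 1; beta <- 1; mu0 <- 0 # default constants

U     <- alpha * diag(p)        # Inverted Wishart scale matrix
B.inv <- solve(beta * diag(p))  # between-source prior covariance, stored as its inverse
mu    <- rep(mu0, p)            # global mean
nw    <- 2 * (p + 1) + 1        # smallest dof preserving full rank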
Initialization generates values for \(W^{-1}_1\) and \(W^{-1}_2\) to be used as starting values for the Gibbs chains. Two methods are available (a short sketch follows the list):
use.init
:
'random'
: initialize according to the model: generate one \(W_i \sim IW_p(\nu_w, U)\), then invert: \(W^{-1}_i = (W_i)^{-1}\)
'vague'
: low-information initialization: \(W_i = \alpha_{\text{init}} + \beta_{\text{init}} \cdot \operatorname{diag}(p)\)
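A minimal sketch of the two initialization schemes, assuming the standard identity that \(W \sim IW_p(\nu_w, U)\) is equivalent to \(W^{-1} \sim \mathcal{W}_p(\nu_w, U^{-1})\) (illustrative only, not the package code):

p <- 2
nw <- 2 * (p + 1) + 1
U <- diag(p)

# 'random': draw W^{-1} ~ Wishart(nw, U^{-1}), i.e. W ~ IW_p(nw, U)
W.inv.random <- stats::rWishart(1, df = nw, Sigma = solve(U))[, , 1]

# 'vague': W = alpha_init + beta_init * diag(p), then invert
alpha_init <- 1
beta_init <- 100
W.inv.vague <- solve(alpha_init + beta_init * diag(p))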
Some constants can be changed by passing new values through ... (an illustrative call follows this list):

use.priors = 'vague'
: accepts the parameters alpha (default: 1), beta (default: 1) and mu0 (default: 0)
use.init = 'vague'
: accepts the parameters alpha_init (default: 1) and beta_init (default: 100)
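For instance, an illustrative call overriding some of these constants through ... (reusing the iris columns from the Examples) could be:

priors_vague <- make_priors_and_init(
  df.background = iris,
  col.variables = c(1, 3),
  col.item = 5,
  use.priors = "vague",
  use.init = "vague",
  beta = 10,        # B = 10 * diag(p)
  beta_init = 50    # W_i = alpha_init + 50 * diag(p)
)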
A list of variables:

mu
: the global mean
B.inv
: the between-source prior covariance matrix, as its inverse
U
: the Inverted Wishart scale matrix
nw
: the Inverted Wishart degrees of freedom \(\nu_w\)
W.inv.1, W.inv.2
: the initializations for the within-source covariance matrices, as their inverses
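The returned components can be inspected directly; for example (assuming priors_init is the list produced in the Examples below):

p <- nrow(priors_init$U)                      # number of variables
B <- solve(priors_init$B.inv)                 # recover the between-source covariance
stopifnot(priors_init$nw >= 2 * (p + 1) + 1)  # dof stays at or above the minimum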
Other core functions: bayessource-package, get_minimum_nw_IW(), marginalLikelihood_internal(), marginalLikelihood(), mcmc_postproc(), samesource_C(), two.level.multivariate.calculate.UC()
# Use the iris data
head(iris, 3)
#>   Sepal.Length Sepal.Width Petal.Length Petal.Width Species
#> 1          5.1         3.5          1.4         0.2  setosa
#> 2          4.9         3.0          1.4         0.2  setosa
#> 3          4.7         3.2          1.3         0.2  setosa

col_variables <- c(1, 3)
col_item <- 5

# Elicitation using MLE
priors_init <- make_priors_and_init(
  df.background = iris,
  col.variables = col_variables,
  col.item = col_item
)
priors_init
#> $mu
#>                  [,1]
#> Sepal.Length 5.843333
#> Petal.Length 3.758000
#>
#> $U
#>              Sepal.Length Petal.Length
#> Sepal.Length    0.2650082    0.1675143
#> Petal.Length    0.1675143    0.1851878
#>
#> $B.inv
#>           [,1]      [,2]
#> [1,] 135.25611 -51.13409
#> [2,] -51.13409  19.56022
#>
#> $nw
#> [1] 7
#>
#> $W.inv.1
#>           [,1]      [,2]
#> [1,]  4.852533 -4.417948
#> [2,] -4.417948  6.689959
#>
#> $W.inv.2
#>           [,1]      [,2]
#> [1,]  4.852533 -4.417948
#> [2,] -4.417948  6.689959

priors_init_2 <- make_priors_and_init(
  df.background = iris,
  col.variables = col_variables,
  col.item = col_item,
  use.priors = "vague",
  alpha_init = 100
)
priors_init_2
#> $mu
#> [1] 0 0
#>
#> $U
#>      [,1] [,2]
#> [1,]    1    0
#> [2,]    0    1
#>
#> $B.inv
#>      [,1] [,2]
#> [1,]    1    0
#> [2,]    0    1
#>
#> $nw
#> [1] 7
#>
#> $W.inv.1
#>           [,1]      [,2]
#> [1,]  4.715651 -3.956177
#> [2,] -3.956177  8.275267
#>
#> $W.inv.2
#>           [,1]      [,2]
#> [1,]  4.715651 -3.956177
#> [2,] -3.956177  8.275267