Speed-up for qdecr_fastlm: Tweaks to code #38

@slamballais

Description

The slowest part of the vertex-wise regressions is the calculation of the standard errors:

```r
s2 <- lapply(res, function(z) colSums(z^2 / df))
se <- lapply(1:m, function(z) do.call("cbind", lapply(s2[[z]], function(q) sqrt(diag(q * XTX[[z]])))))
```

It can be sped up by replacing the inner loop with a single `tcrossprod` (note that `res` is a list, so the residual sums of squares are taken per element):

```r
se <- lapply(1:m, function(z) sqrt(tcrossprod(diag(XTX[[z]]), colSums(res[[z]]^2)) / df))
```
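The two formulations are algebraically identical: for outcome column `j`, `se[, j] = sqrt(diag(XTX) * s2[j])`, which is exactly the outer product of `diag(XTX)` with the vector of residual variances. The package itself is R, but a self-contained NumPy sketch of the identity (all names below are illustrative, not from `qdecr_fastlm`) could look like:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, v = 100, 3, 5                  # participants, predictors, vertices
X = rng.standard_normal((n, p))
res = rng.standard_normal((n, v))    # residual matrix, one column per vertex
df = n - p

XtX_inv = np.linalg.inv(X.T @ X)     # plays the role of XTX[[z]] in the issue

# Slow route: per-vertex loop, mirroring the original nested lapply
s2 = (res ** 2).sum(axis=0) / df
se_slow = np.column_stack(
    [np.sqrt(np.diag(s2[j] * XtX_inv)) for j in range(v)]
)

# Fast route: one outer product, mirroring tcrossprod(diag(XTX), colSums(res^2)) / df
se_fast = np.sqrt(np.outer(np.diag(XtX_inv), (res ** 2).sum(axis=0)) / df)

assert np.allclose(se_slow, se_fast)
```

The speedup comes from replacing `v` small matrix constructions and `diag` extractions with a single `p × v` outer product.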

For 10,000 vertices and 1,000 participants, the timings are approximately:

```
Unit: milliseconds
   expr     min       lq      mean  median      uq      max neval
 fun1() 98.1338 102.0863 106.89275 105.352 109.5212 152.5436   100
 fun2() 24.2160  25.3097  30.44813  26.151  31.4312 150.1490   100
```

So the median drops from 105.4 ms to 26.2 ms. The full loop went from 265.8 ms to 157.3 ms (~1.7x faster).
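The original numbers come from R's `microbenchmark` at that problem size. As a hedged sketch of how a comparable check could be run at the same scale in NumPy (absolute times will differ by machine and language; the function names here are illustrative):

```python
import timeit
import numpy as np

rng = np.random.default_rng(1)
n, p, v = 1000, 3, 10000             # 1,000 participants x 10,000 vertices, as in the issue
X = rng.standard_normal((n, p))
res = rng.standard_normal((n, v))
df = n - p
XtX_inv = np.linalg.inv(X.T @ X)

def se_loop():
    # per-vertex loop, analogous to the original code
    s2 = (res ** 2).sum(axis=0) / df
    return np.column_stack([np.sqrt(np.diag(s2[j] * XtX_inv)) for j in range(v)])

def se_outer():
    # single outer product, analogous to the tcrossprod version
    return np.sqrt(np.outer(np.diag(XtX_inv), (res ** 2).sum(axis=0)) / df)

print("loop :", timeit.timeit(se_loop, number=10))
print("outer:", timeit.timeit(se_outer, number=10))
```

Both functions return the same `p x v` matrix, so any timing difference is pure overhead from the per-vertex loop.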

Labels

enhancement (New feature or request)
