## Transforming a QP Problem to an SOCP Problem

A quadratic programming (QP) problem in the form

$$\min_x \; \frac{1}{2}x^\top H x + p^\top x \quad \text{s.t.} \quad A_{eq}x = b_{eq}, \; Ax \ge b,$$

where $H$ is an $n \times n$ symmetric matrix, can be transformed into a second-order cone programming (SOCP) problem in the form

$$\min_x \; f^\top x \quad \text{s.t.} \quad \|A_i^\top x + c_i\|_2 \le b_i^\top x + d_i, \quad i = 1, \dots, m.$$

Consider $\|Fx + \bar{c}\|_2$, and

$$\|Fx + \bar{c}\|_2^2 = x^\top F^\top F x + 2\bar{c}^\top F x + \bar{c}^\top \bar{c}.$$

As $\|Fx + \bar{c}\|_2$ is non-negative, minimizing $\|Fx + \bar{c}\|_2$ is equivalent to minimizing $\|Fx + \bar{c}\|_2^2$, and hence is equivalent to minimizing the right-hand side above. If we have $F^\top F = \frac{1}{2}H$ and $F^\top \bar{c} = \frac{1}{2}p$, then the objective function in QP can be written as

$$\|Fx + \bar{c}\|_2^2 - \bar{c}^\top \bar{c} = \frac{1}{2}x^\top H x + p^\top x,$$

which differs from $\|Fx + \bar{c}\|_2^2$ only by the constant $\bar{c}^\top \bar{c}$. We can thus minimize $\|Fx + \bar{c}\|_2$ instead. Introducing an auxiliary scalar $t$ with $\|Fx + \bar{c}\|_2 \le t$, the QP problem can now be written as

$$\min_{x,\,t} \; t \quad \text{s.t.} \quad \|Fx + \bar{c}\|_2 \le t, \; A_{eq}x = b_{eq}, \; Ax \ge b.$$

As $H$, by definition of QP, is symmetric, a symmetric $H^{1/2}$ can be found such that $H = H^{1/2}H^{1/2}$. If the QP is assumed to be a convex QP, $H$ is positive semidefinite, and applying Cholesky factorization gives $H = LL^\top$ (or $H = U^\top U$). In this case, $F = \frac{1}{\sqrt{2}}L^\top$ (or $F = \frac{1}{\sqrt{2}}U$). Next, as $\|\cdot\|_2$ is always non-negative, the equality constraint can be written as the cone constraint

$$\|A_{eq}x - b_{eq}\|_2 \le 0.$$

Finally, each row in the inequality constraint can be written as

$$0 \le a_i^\top x - b_i,$$

where $a_i^\top$ is the $i$-th row of $A$, and $b_i$ is the $i$-th element of $b$; this is a cone constraint whose norm part is empty. Therefore, a QP problem can be transformed into an equivalent SOCP problem in the following way. We need to introduce a few variables first: stack the unknowns as $\hat{x} = (x^\top, t)^\top$ and take $f = (0, \dots, 0, 1)^\top$, so the SOCP reads

$$\min_{\hat{x}} \; f^\top \hat{x} \quad \text{s.t.} \quad \|Fx + \bar{c}\|_2 \le t, \quad \|A_{eq}x - b_{eq}\|_2 \le 0, \quad 0 \le a_i^\top x - b_i.$$

The sub-vector with the first $n$ elements in the solution of the transformed SOCP problem is the solution of the original QP problem. SuanShu has implementations to solve both SOCP and QP problems: an SOCP interior-point solver and a QP active-set solver.
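The construction of $F$ and $\bar{c}$ above can be sketched in plain Java. This is an illustration under the assumption that $H$ is symmetric positive definite (so its Cholesky factor exists and is invertible); the class and method names are hypothetical and this is not SuanShu's actual API.

```java
// Sketch: build the SOCP data F and c-bar from QP data H and p,
// assuming H is symmetric positive definite. Plain-Java illustration only.
public class QpToSocp {

    // Cholesky factorization H = L * L^T, with L lower-triangular.
    static double[][] cholesky(double[][] H) {
        int n = H.length;
        double[][] L = new double[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j <= i; j++) {
                double sum = H[i][j];
                for (int k = 0; k < j; k++) sum -= L[i][k] * L[j][k];
                L[i][j] = (i == j) ? Math.sqrt(sum) : sum / L[j][j];
            }
        }
        return L;
    }

    // F = L^T / sqrt(2), so that F^T F = (L L^T) / 2 = H / 2.
    static double[][] buildF(double[][] H) {
        double[][] L = cholesky(H);
        int n = H.length;
        double[][] F = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                F[i][j] = L[j][i] / Math.sqrt(2.0);
        return F;
    }

    // Solve F^T cbar = p / 2 by forward substitution
    // (F^T = L / sqrt(2) is lower-triangular).
    static double[] buildCbar(double[][] H, double[] p) {
        double[][] L = cholesky(H);
        int n = p.length;
        double[] cbar = new double[n];
        for (int i = 0; i < n; i++) {
            double sum = p[i] / 2.0;
            for (int k = 0; k < i; k++) sum -= (L[i][k] / Math.sqrt(2.0)) * cbar[k];
            cbar[i] = sum / (L[i][i] / Math.sqrt(2.0));
        }
        return cbar;
    }

    public static void main(String[] args) {
        double[][] H = {{4, 2}, {2, 3}};   // symmetric positive definite
        double[] p = {1, -1};
        double[][] F = buildF(H);
        // spot-check F^T F = H / 2 at entry (0,0)
        double ftf00 = F[0][0] * F[0][0] + F[1][0] * F[1][0];
        System.out.println(Math.abs(ftf00 - H[0][0] / 2.0) < 1e-9); // prints true
    }
}
```

With $F$ and $\bar{c}$ in hand, the remaining SOCP data are just the stacked constraints described above.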

## Fastest Java Matrix Multiplication

### Introduction

Matrix multiplication occupies a central role in scientific computing, with an extremely wide range of applications. Many numerical procedures in linear algebra (e.g., solving linear systems, matrix inversion, factorizations, determinants) can essentially be reduced to matrix multiplication [5, 3]. Hence, there is great interest in investigating fast matrix multiplication algorithms to accelerate matrix multiplication (and, in turn, other numerical procedures). SuanShu was already the fastest in matrix multiplication, and hence linear algebra, per our benchmark: SuanShu v3.0.0 benchmark. Starting with version 3.3.0, SuanShu has implemented an advanced algorithm for even faster matrix multiplication. It makes some operations 100x faster than those of our competitors! The new benchmark can be found here: SuanShu v3.3.0 benchmark. In this article, we briefly describe our implementation of a matrix multiplication algorithm that dramatically accelerates dense matrix-matrix multiplication compared to the classical IJK algorithm.

### Parallel IJK

We first describe the method against which our new algorithm is compared: IJK. Here is the algorithm performing the multiplication $C = A \times B$, where $A$ is $m \times p$, $B$ is $p \times n$, and $C$ is $m \times n$:

```
for (i = 1; i <= m; i++) {
    for (j = 1; j <= p; j++) {
        for (k = 1; k <= n; k++) {
            C[i,k] += A[i,j] * B[j,k];
        }
    }
}
```

In SuanShu, this is implemented in parallel: the outermost loop is passed to a `ParallelExecutor`. As there are often more rows than threads available, the time complexity of this parallelized IJK is still roughly the same as IJK: $O(mnp)$, ...
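The parallel-IJK idea can be sketched in self-contained Java. This is an illustration only: it parallelizes the outer row loop with a Java parallel stream rather than SuanShu's `ParallelExecutor`, and the class name is hypothetical.

```java
import java.util.stream.IntStream;

// Minimal sketch of parallel IJK multiplication C = A * B,
// where A is m-by-p, B is p-by-n, and C is m-by-n.
// The outermost loop (over rows of A) runs in parallel; this is safe
// because each iteration writes only to its own row of C.
public class ParallelIjk {

    static double[][] multiply(double[][] A, double[][] B) {
        int m = A.length, p = B.length, n = B[0].length;
        double[][] C = new double[m][n];
        IntStream.range(0, m).parallel().forEach(i -> {
            for (int j = 0; j < p; j++) {
                double a = A[i][j];          // hoist A[i][j] out of the inner loop
                for (int k = 0; k < n; k++) {
                    C[i][k] += a * B[j][k];  // thread for row i owns row i of C
                }
            }
        });
        return C;
    }

    public static void main(String[] args) {
        double[][] A = {{1, 2}, {3, 4}};
        double[][] B = {{5, 6}, {7, 8}};
        double[][] C = multiply(A, B);
        System.out.println(C[0][0] + " " + C[0][1]); // 19.0 22.0
        System.out.println(C[1][0] + " " + C[1][1]); // 43.0 50.0
    }
}
```

Note the IJK loop order walks both `C` and `B` row by row in the inner loop, which is cache-friendly for row-major storage; the parallelism only divides the rows among threads, so the total work remains $O(mnp)$.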

## SuanShu is the Best Numerical and Statistical Library, ever!

A picture is worth a thousand words…

Check out the release notes here: http://numericalmethod.com/forums/topic/suanshu-3-0-0-is-now-released/

Happy birthday to me...