**Abstract**: We propose a fast algorithm for constructing an optimal partition, into mutually independent random vectors, of the components of a non-Gaussian random vector that is defined only by a given set of realizations. The proposed method and its objective differ from independent component analysis (ICA), which was introduced to extract independent source signals from a linear mixture of independent stochastic processes. The proposed algorithm is based on the mutual entropy from information theory and on graph theory for constructing an optimal partition. The method has been developed especially for high-dimensional random vectors for which the number of realizations that constitute the data set can be small. The proposed algorithm improves the identification of any stochastic model of a high-dimensional non-Gaussian random vector for which a data set is given. Instead of directly constructing a unique stochastic model whose stochastic dimension, identified by solving a statistical inverse problem, can be large, the proposed preprocessing of the data set allows several mutually independent stochastic models with smaller stochastic dimensions to be constructed. Consequently, the method decreases the cost of the identification and/or makes possible an identification for a case that is a priori in high dimension and could not be identified through a direct, global approach. The algorithm is completely defined in the paper and can easily be implemented.
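The idea sketched in the abstract can be illustrated with a minimal example. The sketch below is an assumption-laden illustration, not the paper's algorithm: it estimates pairwise mutual information with a simple histogram estimator (the function names `pairwise_mutual_information` and `independent_groups`, the bin count, and the threshold are all hypothetical choices), builds a graph whose edges connect statistically dependent components, and returns the connected components of that graph as the partition into mutually independent groups.

```python
import numpy as np

def pairwise_mutual_information(X, bins=10):
    """Histogram estimate of the mutual information between every pair of
    components. X is an (n_samples, n_dims) array of realizations."""
    n, d = X.shape
    mi = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            # joint histogram of components i and j, normalized to a pmf
            pxy, _, _ = np.histogram2d(X[:, i], X[:, j], bins=bins)
            pxy /= n
            px = pxy.sum(axis=1)
            py = pxy.sum(axis=0)
            mask = pxy > 0
            mi[i, j] = mi[j, i] = np.sum(
                pxy[mask] * np.log(pxy[mask] / np.outer(px, py)[mask])
            )
    return mi

def independent_groups(X, threshold=0.05, bins=10):
    """Connect components i and j when their estimated mutual information
    exceeds the threshold; the connected components of the resulting graph
    form the partition into (approximately) mutually independent groups."""
    d = X.shape[1]
    mi = pairwise_mutual_information(X, bins=bins)
    # union-find over the dependency graph
    parent = list(range(d))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a
    for i in range(d):
        for j in range(i + 1, d):
            if mi[i, j] > threshold:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(d):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())
```

For instance, if components 0 and 1 are strongly correlated while component 2 is independent of both, `independent_groups` returns the partition `[[0, 1], [2]]`, so two smaller stochastic models could then be identified separately instead of one three-dimensional model. The fixed threshold is a crude stand-in for the optimality criterion developed in the paper.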