The Nomis wallet scoring algorithm (NomisScore) is designed to assess the reliability of addresses in blockchain networks through a comprehensive evaluation that combines statistical methods and machine learning algorithms. The algorithm considers both structural and non-linear dependencies among multiple features, which improves assessment accuracy.
$$ D = \mathrm{alldata}(address) $$
The algorithm begins by collecting structured data about the maximum available information in blockchain networks. Key summary characteristics are computed, and additional data from third-party services, such as CyberConnect and Greysafe, are incorporated. For instance, wallet age is enriched with data from these services, as outlined in Appendix A.
$$ D_e = \{ D, \mathrm{enrich}(D) \} $$
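The collection and enrichment steps above can be sketched as follows. The third-party sources (CyberConnect, Greysafe) are named in the text, but the concrete field names and values here are illustrative assumptions, not the actual Nomis data schema.

```python
# Hypothetical sketch of building the enriched dataset D_e = {D, enrich(D)}.

def collect_onchain_data(address: str) -> dict:
    """Stand-in for on-chain data collection, D = alldata(address)."""
    return {
        "address": address,
        "wallet_age_days": 812,   # assumed summary characteristic
        "tx_count": 1450,
        "balance_eth": 3.2,
    }

def enrich(data: dict) -> dict:
    """Stand-in for third-party enrichment (e.g. CyberConnect, Greysafe)."""
    return {
        "cyberconnect_followers": 120,  # hypothetical social-graph signal
        "greysafe_reports": 0,          # hypothetical scam-report count
    }

def build_enriched_dataset(address: str) -> dict:
    d = collect_onchain_data(address)
    return {**d, **enrich(d)}           # D_e = {D, enrich(D)}

profile = build_enriched_dataset("0x0000000000000000000000000000000000000000")
```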
NomisScore applies statistical methods based on expert assessment to analyze the received raw data. These evaluations can already be utilized to identify addresses with suspicious activity or low value.
$$ NomisScore_{EXP} = \mathrm{ExpertDecision}(D_e) $$
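A minimal rule-based sketch of the expert-assessment step is shown below. The specific thresholds and weights are illustrative assumptions, not the actual Nomis expert rules; the point is that simple statistical rules over $D_e$ already suffice to flag suspicious or low-value addresses.

```python
# Hypothetical sketch of NomisScore_EXP = ExpertDecision(D_e).

def expert_decision(d_e: dict) -> float:
    """Map an enriched feature dict to a score in [0, 100] via simple rules."""
    if d_e.get("greysafe_reports", 0) > 0:   # flagged as suspicious: score 0
        return 0.0
    score = 50.0
    # Reward wallet age, capped at 5 years (up to +25). Assumed weighting.
    score += min(d_e.get("wallet_age_days", 0) / 365, 5) * 5
    # Reward activity, capped (up to +25). Assumed weighting.
    score += min(d_e.get("tx_count", 0) / 100, 25)
    return min(score, 100.0)
```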
To account for non-linear dependencies and improve rating accuracy, NomisScore incorporates a neural network model trained on the original data obtained during the use of NomisScore products.
$$ ML = NN\langle ScoringData \rangle $$
$$ NomisScore_{AI}=ML(D_e) $$
At the initial stage, fully connected (feedforward) networks were explored. After a search over various combinations of hyperparameters, the network with the following configuration demonstrated the highest accuracy: the number of neurons in the input layer corresponds to the dimensionality of the input data used for the expert assessment; the internal structure comprises five fully connected layers with the ReLU activation function; the output is a single neuron with the TanH activation function, whose output is mapped to a scoring value in the range [0; 100]. The Adam optimizer is employed with cross-entropy as the loss function.
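A forward-pass sketch of this architecture is given below, using numpy. The layer widths, the input dimensionality of 12, and the rescaling $50(\tanh(z)+1)$ that maps the TanH output into [0; 100] are assumptions for illustration; training with Adam and the cross-entropy loss is omitted, and the weights are random placeholders.

```python
import numpy as np

# Sketch: input layer -> five fully connected ReLU layers -> single TanH output.
rng = np.random.default_rng(0)
layer_sizes = [12, 64, 64, 64, 64, 64, 1]   # 12 input features assumed
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def score(x: np.ndarray) -> float:
    h = x
    for w, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(h @ w + b, 0.0)       # five ReLU hidden layers
    z = (h @ weights[-1] + biases[-1]).item()  # single output unit
    return float(50.0 * (np.tanh(z) + 1.0))  # map tanh range [-1, 1] -> [0, 100]

s = score(rng.standard_normal(12))
```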
To further enhance prediction, it was decided to employ deep neural networks. The use of deep neural networks (DNNs) for analyzing time series has become a prominent and effective approach in various domains, including finance, healthcare, and climatology. Deep learning techniques, especially recurrent neural networks (RNNs) and long short-term memory networks (LSTMs), have shown remarkable success in detecting complex temporal dependencies in time series data. Deep neural networks allow for consideration of many more factors than classical algorithms or statistical methods: they operate not on simplified statistics but on the sequence of transactions associated with a given address. Additionally, neural networks can account for the current dynamics of the blockchain world, which change over time, sometimes very rapidly. To accommodate these changes, we apply the concept of a dynamic dataset, in which the dataset is continually supplemented with new transactions of pre-classified addresses.
Recurrent Neural Networks (RNN): RNNs represent a class of neural networks designed to handle sequential data, employing cycles to preserve information over time. However, traditional RNNs face challenges in detecting long-term dependencies due to the vanishing gradient problem [3].
Long Short-Term Memory Networks (LSTM): LSTMs are a specialized type of RNN that addresses the vanishing gradient problem. They are well-suited for analyzing time series data as they can capture long-term dependencies in sequential data [4].
Temporal Convolutional Networks (TCN): TCNs are neural network architectures that apply convolutions to temporal sequences. They have gained popularity for their ability to process sequences in parallel and their efficient structure [6].
Transformers and Attention Mechanisms: Attention mechanisms, popularized by transformers, are also applied to time series data to selectively focus on relevant temporal information [5]. According to the study [1], deep neural networks with a transformer architecture are a good solution for analyzing time series. We apply a similar architecture, not for predicting subsequent values, but for calculating a scoring rating based on a series of transactional data over time.
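The idea of scoring a transaction series with an attention mechanism, rather than predicting the next value, can be sketched as follows. This is a simplified single-query attention pooling over transaction embeddings, not the actual Nomis model; the embedding dimension, sequence length, and all weights are random placeholders standing in for learned parameters.

```python
import numpy as np

# Sketch: scaled dot-product attention pooling over a transaction sequence,
# followed by a projection to a score in [0, 100].
rng = np.random.default_rng(1)
d_model = 16
seq = rng.standard_normal((20, d_model))   # 20 transactions, each embedded

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

query = rng.standard_normal(d_model)               # learned pooling query (assumed)
attn = softmax(seq @ query / np.sqrt(d_model))     # scaled dot-product weights
pooled = attn @ seq                                # weighted sum over transactions
w_out = rng.standard_normal(d_model)
rating = float(50.0 * (np.tanh(pooled @ w_out) + 1.0))  # map to [0, 100]
```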
To stabilize the sample data, we applied the Box-Cox transformation, a statistical method used to stabilize variance and make time series data more suitable for analysis. It is particularly useful for time series that exhibit heteroskedasticity, where the variability of the data changes over time. The Box-Cox transformation is a power transformation parameterized by the variable lambda ($λ$), which is chosen to achieve the best approximation to normality and constant variance. This transformation is defined as: