A major challenge of reservoir-based Recurrent Neural Networks (RNNs) is optimizing the connection weights within the network so that performance is optimal for the intended task of temporal sequence recognition. One particular RNN, the Self-Organizing Recurrent Network (SORN), avoids the mathematical normalization otherwise required after each initialization. Instead, three types of cortical plasticity mechanisms optimize the weights within the network during the initial phase of training. The success of this unsupervised training method was demonstrated on temporal sequences whose input symbols use a binary encoding and activate only one input pool per time step. This work extends the analysis to other types of symbol encoding, including methods that activate multiple input pools per time step and methods whose activation levels are not strictly binary but analog in nature. Preliminary results show that the SORN model classifies temporal sequences with these symbol encodings well, and that the network's advantage over a static network in a classification task is retained.
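As a minimal sketch of the unsupervised mechanism the abstract refers to: in the SORN literature the three plasticity rules are typically spike-timing-dependent plasticity (STDP), synaptic normalization, and intrinsic plasticity. The code below illustrates one update step of a simplified binary reservoir under these rules; the reservoir size, learning rates, sparsity, and the random external drive `u` are all illustrative assumptions, not parameters from this work.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 20                        # reservoir size (illustrative)
eta_stdp, eta_ip = 0.001, 0.01
h_ip = 0.1                    # target firing rate for intrinsic plasticity

# Sparse random excitatory recurrent weights, no self-connections.
W = rng.random((N, N)) * (rng.random((N, N)) < 0.1)
np.fill_diagonal(W, 0.0)
T = rng.random(N) * 0.5       # per-neuron firing thresholds
x = (rng.random(N) < h_ip).astype(float)   # binary state at time t-1

for t in range(100):
    u = (rng.random(N) < 0.05).astype(float)   # stand-in external input drive
    # Binary neuron update: fire when recurrent + external drive exceeds threshold.
    x_new = ((W @ x + u - T) > 0).astype(float)

    # 1) STDP: potentiate pre(t-1) -> post(t) pairs, depress the reverse order;
    #    only existing synapses adapt, and weights stay non-negative.
    mask = W > 0
    W += eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new)) * mask
    W = np.clip(W, 0.0, None)

    # 2) Synaptic normalization: rescale each neuron's incoming weights to sum to 1.
    row_sums = W.sum(axis=1, keepdims=True)
    W = np.where(row_sums > 0, W / row_sums, W)

    # 3) Intrinsic plasticity: nudge thresholds so firing rates approach h_ip.
    T += eta_ip * (x_new - h_ip)

    x = x_new
```

Together, the Hebbian STDP term shapes which connections strengthen, normalization keeps the total input to each neuron bounded, and intrinsic plasticity keeps activity near the target rate, so no post-hoc weight normalization step is needed.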