A Study of Possible Improvements to the Alopex Training Algorithm

Alejandro Bia

We have studied the performance of the Alopex algorithm [12] and propose modifications that improve training time and simplify the algorithm. We tested several variations; here we describe the best-performing cases and summarize the conclusions we reached. One of the proposed variations (99/B) trains slightly faster than the Alopex algorithm described in [12], shows fewer unsuccessful training attempts, and is simpler to implement. Like Alopex, our versions are based on local correlations between changes in individual weights and changes in the global error measure. Our algorithm is also stochastic, but it differs from Alopex in that no annealing scheme is applied during training, and hence it uses fewer parameters.
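The correlation-based update described above can be illustrated with a minimal sketch. This is not the paper's exact 99/B variant (whose details are not given in the abstract); it assumes a generic Alopex-style rule in which each weight moves by a fixed step ±delta, with the direction biased by the sign of the product of the previous weight change and the previous error change, and with no annealing temperature. The bias probabilities (0.75/0.25) and the function name `alopex_step` are illustrative assumptions.

```python
import numpy as np

def alopex_step(w, prev_dw, prev_dE, delta=0.01, rng=None):
    """One Alopex-style weight update (illustrative sketch only).

    Each weight moves by +/-delta. The direction is biased by the local
    correlation between that weight's previous change and the previous
    change in the global error: a direction that coincided with an error
    decrease is more likely to be kept. No annealing schedule is used,
    in the spirit of the simplification the abstract describes.
    """
    rng = rng or np.random.default_rng()
    corr = prev_dw * prev_dE               # local correlation per weight
    # Negative correlation (the move reduced the error) -> high probability
    # of keeping the same direction; positive -> likely reversal.
    p_keep = np.where(corr < 0, 0.75, 0.25)   # illustrative bias values
    keep = rng.random(w.shape) < p_keep
    signs = np.where(keep, np.sign(prev_dw), -np.sign(prev_dw))
    # Break ties (zero previous change) with a random direction.
    signs = np.where(signs == 0, rng.choice([-1.0, 1.0], size=w.shape), signs)
    dw = delta * signs
    return w + dw, dw
```

In a full training loop, `prev_dE` would be the change in the network's global error between the last two evaluations, so each weight's update uses only that scalar plus its own history, which is what makes the rule local and gradient-free.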
