Journal Articles
Browsing Journal Articles by Author "Albert Selebea Lutakamale"
Item: Machine Learning-Based Fingerprinting Positioning in Massive MIMO Networks: Analysis on the Impact of Small Training Sample Size to the Positioning Performance (Springer Science and Business Media LLC, 2023) Albert Selebea Lutakamale; Yona Zakaria Manyesela

It is well known that the larger the training dataset, the better the performance of deep learning algorithms. However, collecting large numbers of real measured CSI samples to serve as fingerprints for deep learning-based positioning models is very challenging in terms of both time and resources. Training deep learning models on very large datasets is also costly, since it requires access to powerful computing devices that are expensive and therefore not affordable to everyone. This is one of many factors that can hinder the research and development of powerful deep learning algorithms for solving societal problems, and it motivates further research into high-performing deep learning models capable of delivering satisfactory performance with limited computing resources and small training dataset sizes. In this paper, we analyze the impact of small training sample sizes on the positioning performance of CSI-based deep learning fingerprinting positioning models. Results show that, with better design of deep learning models, it is possible to achieve high positioning performance using relatively small training sample sizes.

Item: The Influence of Non-learnable Activation Functions on the Positioning Performance of Deep Learning-Based Fingerprinting Models Trained with Small CSI Sample Sizes (Springer Science and Business Media LLC, 2022) Albert Selebea Lutakamale; Yona Zakaria Manyesela

Activation functions, acting as mathematical 'gates' between the input feeding the current neuron and its output going to the next layer, are crucial to the training of deep learning models. They play a large part in determining a model's output, accuracy, and computational efficiency, and in some cases they strongly affect whether the model converges and how quickly. To train deep learning-based fingerprint positioning models on small CSI sample sizes and still obtain satisfactory positioning results, the choice of appropriate activation functions is very important. In this paper, we explore several non-learnable activation functions and conduct a comprehensive analysis of their influence on the positioning performance of deep learning fingerprint-based positioning models trained with small CSI sample sizes. We then propose an improved model training approach aimed at getting the best out of those activation functions.
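To illustrate what "non-learnable" means in the abstract above, the following is a minimal sketch (not code from the paper): each activation is a fixed, parameter-free function that gates a neuron's pre-activation value differently, which is why the choice can affect accuracy and convergence.

```python
import numpy as np

# Common non-learnable (fixed, parameter-free) activation functions.
# Unlike learnable activations (e.g. PReLU), these have no trainable weights.
def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    return np.tanh(x)

# Apply each activation to the same pre-activation values to see how
# differently they gate a neuron's output.
z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for name, fn in [("relu", relu), ("sigmoid", sigmoid), ("tanh", tanh)]:
    print(name, np.round(fn(z), 3))
```

Note, for example, that ReLU zeroes all negative inputs while sigmoid and tanh saturate for large magnitudes; such differences in gradient behavior are what drive the convergence effects the abstract describes.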