## 1 Introduction

Location-based services (LBS) are significant for applications such as location-based advertising, outdoor/indoor navigation and social networking. With the advancement of smartphone technology in recent decades, smartphone devices have been integrated with various built-in sensors, such as GPS modules, WiFi modules and cellular modules. By acquiring data from these sensors, researchers can study human activities. Among these research topics, user location analysis and prediction has been a research focus. Several classes of methods can be applied to this problem. Since GPS equipment provides relatively accurate outdoor position information, GPS-based methods are favored by many researchers [4], [22]. However, such methods are not suitable for indoor positioning. A more applicable approach is to make use of the WiFi fingerprints recorded by smartphone devices: the received signal strength indicator (RSSI) of the WiFi access points scanned by a mobile phone is used to identify the user's location.

In the literature, researchers have exploited various machine learning techniques, both conventional methods and deep learning, for location recognition and prediction with WiFi fingerprints. In previous work [3], [6], [8], [10], [21], researchers adopted several conventional machine learning methods for classification, clustering and regression tasks, for instance decision trees, K-nearest neighbors, naive Bayes, neural networks, K-means, the affinity clustering algorithm and Gaussian processes. Deep-learning-based methods, such as convolutional neural networks (CNNs), autoencoders and recurrent neural networks (RNNs), have also been applied to WiFi-based positioning. Note that, in the real world, a building may be equipped with a relatively large number of WiFi hotspots to provide good wireless connections. Consequently, this leads to the issue of high dimensionality. Naturally, deep-learning-based dimension-reduction methods such as autoencoders can be used before the classification or prediction tasks [18], [19], [13].

In our work, we utilize WiFi fingerprints to predict accurate user locations. This task can be regarded as high-dimensional time-series prediction: the training inputs of our model are RSSI value vectors and the training targets are the future 2D coordinate values. Although some previous researchers have used CNNs or RNNs for accurate indoor localization [12], [19], [11], we argue that models with ordinary Euclidean-distance loss functions (for instance, mean squared error) cannot overcome the severe nonlinearity of the data caused by signal-fading and multi-path effects; as noted in [11], WiFi signals are not always stable.

In contrast with the aforementioned methods, we devise an innovative hybrid deep learning structure, the convolutional mixture density recurrent neural network (CMDRNN). Compared to existing models, first, our approach does not need to pre-train an autoencoder to reduce the dimensionality; instead, we deploy a CNN structure to detect the features of the input. Second, we make use of an RNN to model the temporal dynamics of the user trajectory. For the final output of the network, we employ an MDN structure that yields a conditional probability density rather than predicting the output directly, as conventional neural networks do. Therefore, our model consists of three sub-models, a CNN sub-structure, an RNN sub-structure and an MDN sub-structure, which enables it to tackle complicated time-series prediction problems such as WiFi fingerprint-based location prediction. The main contributions of our work are summarized as follows.

In order to predict user location with WiFi fingerprints, we devise a novel hybrid deep-learning model that merges the advantages of CNNs, RNNs and MDNs. We conduct evaluation experiments on a real-world dataset to test our model and compare it with other models. The final results show the superiority of our method.

## 2 Proposed Method

### 2.1 Proposed Model Overview

Combining the merits of three different deep neural networks, we devise a novel deep neural network architecture called the convolutional mixture density recurrent neural network (CMDRNN). The proposed model is composed of a one-dimensional convolutional neural network, a recurrent neural network and a mixture density network. The whole structure of our model is shown in Fig. 1.

### 2.2 1D Convolutional Neural Network Sub-structure

The first step of our approach is to capture the features of the high-dimensional input. In practice, we find that, in each vector, only a few elements carry meaningful values, whereas most elements are not activated. This is easy to understand: in a building with a considerable number of WiFi access points, a smartphone can only detect a very limited number of them at any given time. As a consequence, the features of the input are hard to capture. To deal with this issue, we resort to a powerful deep-learning technique, convolutional neural networks (CNNs) [16]. CNNs are widely used for tasks such as image processing, natural language processing and sensor signal processing. In our case, the inputs are vectors; therefore, a CNN structure whose convolutional and max-pooling layers are both one-dimensional is incorporated into our model.
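As a minimal illustration of this sub-structure, a one-dimensional convolution and max-pooling pass over a single RSSI vector can be sketched in numpy as below. The filter count, filter width and input dimension are toy values for illustration, not the paper's hyperparameters:

```python
import numpy as np

def conv1d(x, filters, stride=2):
    """Valid 1-D convolution of a vector with a bank of filters."""
    width = filters.shape[1]
    steps = (len(x) - width) // stride + 1
    out = np.empty((filters.shape[0], steps))
    for f in range(filters.shape[0]):
        for i in range(steps):
            out[f, i] = filters[f] @ x[i * stride : i * stride + width]
    return out

def max_pool1d(x, size=2):
    """Non-overlapping 1-D max pooling along the last axis."""
    steps = x.shape[1] // size
    return x[:, : steps * size].reshape(x.shape[0], steps, size).max(axis=2)

rng = np.random.default_rng(0)
rssi = rng.uniform(-100.0, 0.0, size=100)   # one toy RSSI scan (100 APs)
filters = rng.normal(size=(8, 5))           # 8 filters of width 5
features = max_pool1d(conv1d(rssi, filters))
flat = features.ravel()                     # the flatten layer feeds the RNN
```

The sparse, mostly inactive RSSI vector is thus compressed into a dense feature vector before entering the recurrent sub-structure.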

### 2.3 Recurrent Neural Network Sub-structure

Recurrent neural networks (RNNs) are widely used for analyzing time-series problems in fields such as natural language processing (NLP), computer vision and signal processing [7]. Unlike the hidden Markov model (HMM) [15], RNNs can capture higher-order dependencies and are computationally less expensive. The state transition of an RNN can be expressed as follows:

$$h_t = \sigma_h\left(W_h h_{t-1} + W_x x_t + b_h\right) \tag{1}$$

where $h_t$ is the hidden state, $\sigma_h$ is the activation function, $W_h$ is the hidden weight, $x_t$ is the input, $W_x$ is the input weight, and $b_h$ is the bias. The output of a conventional RNN can be expressed as follows:

$$y_t = \sigma_y\left(W_y h_t + b_y\right) \tag{2}$$

where $y_t$ is the output of the RNN, $\sigma_y$ is the activation function, $W_y$ is the output weight and $b_y$ is the bias.
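A single step of Eqs. (1) and (2) can be sketched in numpy as follows. The dimensions are illustrative, and tanh plus a linear read-out stand in for the unspecified activations:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_h, W_x, b_h, W_y, b_y):
    """One vanilla-RNN step: state update (Eq. 1), then read-out (Eq. 2)."""
    h_t = np.tanh(W_h @ h_prev + W_x @ x_t + b_h)  # Eq. (1)
    y_t = W_y @ h_t + b_y                          # Eq. (2) with a linear sigma_y
    return h_t, y_t

rng = np.random.default_rng(1)
d_in, d_h, d_out = 10, 16, 2                       # toy dimensions
h, y = rnn_step(
    rng.normal(size=d_in), np.zeros(d_h),
    rng.normal(size=(d_h, d_h)), rng.normal(size=(d_h, d_in)),
    np.zeros(d_h), rng.normal(size=(d_out, d_h)), np.zeros(d_out),
)
```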

Furthermore, a special variant of RNNs, the long short-term memory (LSTM) network [9], can solve the long-term dependency problem during learning, which makes RNNs even more powerful. More recently, researchers proposed a simplified variant of the LSTM, the gated recurrent unit (GRU) [5], which reaches almost the same accuracy as the LSTM at a lower computational cost. In the following experiments, we compare these three RNN structures (vanilla RNN, LSTM and GRU) as the sub-model of our approach.

The loss function is the mean squared error (MSE) between the RNN outputs and the training targets. Usually, such an MSE loss function is sufficient for many prediction problems. However, in our case, this type of loss function is not robust enough, because the inputs and outputs of our model have a very complicated nonlinear relationship.
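For reference, one GRU step can be sketched as below (biases are omitted for brevity and the weight shapes are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: two gates, then a blended candidate state."""
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))   # candidate state
    return (1.0 - z) * h + z * h_cand         # interpolate old and new state

rng = np.random.default_rng(2)
d_in, d_h = 10, 16
W = lambda: rng.normal(size=(d_h, d_in))      # input-to-hidden weights
U = lambda: rng.normal(size=(d_h, d_h))       # hidden-to-hidden weights
h_new = gru_step(rng.normal(size=d_in), np.zeros(d_h), W(), U(), W(), U(), W(), U())
```

Compared with the LSTM's three gates and separate cell state, the GRU keeps a single state vector and two gates, which is where its computational savings come from.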

### 2.4 Mixture Density Network Sub-structure

A traditional neural network with a distance-based loss function can be optimized by a gradient-descent-based method. Generally, such a scheme performs quite well on problems that can be described by a deterministic function $y = f(x)$, i.e., where each input corresponds to an output with one possible target value. However, for stochastic problems like ours, one input may have more than one possible output value. Hence, this type of problem is better described by a conditional distribution $p(y \mid x)$ than by a deterministic function $y = f(x)$.
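To make the conditional-distribution view concrete, here is a minimal numpy sketch of a one-dimensional Gaussian-mixture density and its negative log-likelihood, the quantity a mixture-density loss minimizes. All numbers are illustrative:

```python
import numpy as np

def gaussian_pdf(y, mu, sigma):
    """Density of y under N(mu, sigma^2), evaluated elementwise."""
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def mixture_nll(y, alpha, mu, sigma):
    """Negative log-likelihood of y under a K-component Gaussian mixture."""
    assert np.isclose(alpha.sum(), 1.0)       # mixing weights must sum to 1
    return -np.log(np.sum(alpha * gaussian_pdf(y, mu, sigma)))

# A two-mode conditional density: one input plausibly maps to y near -1 or +1.
alpha = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
sigma = np.array([0.2, 0.2])
loss_at_mode = mixture_nll(1.0, alpha, mu, sigma)   # target on a mode: low loss
loss_between = mixture_nll(0.0, alpha, mu, sigma)   # target between modes: high loss
```

A single-Gaussian (i.e., MSE-like) model would be forced to place its mean between the two modes, exactly where the true density is lowest; the mixture avoids this.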

To tackle this issue, we can intuitively replace the original distance-based loss function with a conditional probability function; for a regression task, the Gaussian distribution is an appropriate choice. Moreover, using a mixture of Gaussian distributions instead of a single Gaussian improves the representational capacity of the model. Based on these ideas, the mixture density network (MDN) was proposed [2]. In contrast with a traditional neural network, the output of an MDN is the set of parameters of a mixture of Gaussian distributions, and the loss function becomes the conditional probability of the targets given the inputs. As a result, the optimization process minimizes the negative log-likelihood. Hence, the loss function can be described as follows:

$$\mathcal{L} = -\log\left(\sum_{k=1}^{K} \alpha_k \, \mathcal{N}\!\left(y \mid \mu_k, \sigma_k^2\right)\right) \tag{3}$$

where $\alpha_k$ is the mixing proportion of each sub-distribution, with $\sum_{k=1}^{K} \alpha_k = 1$, and $K$ is the total number of mixture components. $(\mu_k, \sigma_k^2)$ are the internal parameters of the base mixture distributions; for the Gaussian distribution, $\mu_k$ and $\sigma_k^2$ are the means and variances, respectively. Now, we can draw samples according to Eq. (3) instead of computing the output directly from Eq. (2); in fact, the output of Eq. (2) is used as the input of the MDN to describe the state transition. Thus, once the training process is finished, we can use the mixture of Gaussian distributions to sample target values for given inputs. To this end, we use maximum likelihood estimation (MLE), i.e., the means of the distributions are taken as the final prediction. In summary, the overall training process is depicted in Algorithm 1.

## 3 Experiments and Results

The implementation details of our model are given in Table 1. In the proposed model, the CNN sub-network consists of a convolutional layer, a max-pooling layer and a flatten layer. The RNN sub-structure includes a hidden layer with 200 neurons. The MDN sub-model is composed of a hidden layer and an output layer. The number of mixtures in the MDN is 30, and each mixture has 5 parameters: the 2D means, the variances and the mixture proportion. As for the optimizer, according to [1], RMSProp [20] can outperform Adam [14] on very nonstationary optimization problems, so we choose RMSProp.

| Sub-network | Layer | Hyperparameter | Activation function |
|---|---|---|---|
| CNN | convolutional layer | filter number: 100; stride: 2 | sigmoid |
| CNN | max pooling layer | neuron number: 100 | relu |
| CNN | flatten layer | neuron number: 100 | relu |
| RNN | hidden layer | memory length: 5; neuron number: 200 | sigmoid |
| MDN | hidden layer | neuron number: 200 | leaky relu |
| MDN | output layer | 5 × number of mixed Gaussians (5 × 30) | - |

Optimizer: RMSProp; learning rate: 1e-3

In order to test our model on the WiFi fingerprint-based sequential location prediction task, we conduct a series of experiments on a real-world dataset. We select two WiFi RSSI-coordinate paths from the Tampere dataset [17]. This dataset includes a set of sequential, high-dimensional RSSI value vectors; the detected RSSI values are given in dBm, while undetected WiFi access points are filled with a fixed placeholder value. Each vector has its own corresponding 2D coordinate label. Therefore, for our task, the input is the RSSI vector at the current time point and the modeling target is the coordinates at the next time point.
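The one-step-ahead pairing described above can be sketched as follows (toy arrays stand in for the dataset's RSSI vectors and 2D coordinates):

```python
import numpy as np

def make_pairs(rssi_seq, coords_seq):
    """Pair each RSSI scan at time t with the coordinates at time t+1."""
    X = np.asarray(rssi_seq)[:-1]   # inputs: scans 0 .. T-2
    Y = np.asarray(coords_seq)[1:]  # targets: coordinates 1 .. T-1
    return X, Y

rng = np.random.default_rng(3)
scans = rng.uniform(-100.0, 0.0, size=(50, 100))  # 50 scans over 100 APs (toy)
coords = rng.uniform(0.0, 30.0, size=(50, 2))     # 50 matching 2-D positions
X, Y = make_pairs(scans, coords)
```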

Fig. 2 and Fig. 3 show the prediction results of our proposed model. We also vary the number of mixtures in the MDN sub-model to find the optimal value; the results are shown in Fig. 4. Further, we compare our CMDRNN model to a set of deep learning approaches: RNN, CNN + RNN and RNN + MDN. The results are given in Table 2. From the experimental results, we can see that our proposed model significantly improves the modeling accuracy compared to the other deep learning methods on sequential user location prediction. Moreover, the GRU-based CMDRNN model has the best performance among the CMDRNN variants.

| Method | Path 1 | Path 2 |
|---|---|---|
| RNN | | |
| CNN+RNN | | |
| RNN+MDN | | |
| CMDRNN (Vanilla-RNN) | | |
| CMDRNN (LSTM-RNN) | | |
| CMDRNN (GRU-RNN) | | |

## 4 Conclusions and perspectives

In this paper, we tackle the WiFi fingerprint-based user position prediction problem. In contrast with existing approaches, our solution is a novel hybrid deep-learning model composed of three sub-networks: a CNN, an RNN and an MDN. This unique deep architecture takes advantage of the strengths of three deep learning models, which allows us to predict user locations with high accuracy. For validation, we tested our model on a real-world dataset, and the final results prove the effectiveness of our approach.

For future work, we plan to exploit other deep generative models, for instance variational autoencoders, Bayesian neural networks and normalizing flows, for potential applications to WiFi fingerprint-based positioning problems. We should also be aware that labeled data is not always easy to acquire; in many cases, the available datasets are unlabeled. Hence, we also plan to investigate semi-supervised learning techniques for human activity studies.

## References

- [1] (2017) Wasserstein gan. arXiv preprint arXiv:1701.07875. Cited by: §3.
- [2] (1994) Mixture density networks. Cited by: §2.4.
- [3] (2015) A comparative study on machine learning algorithms for indoor positioning. In 2015 International Symposium on Innovations in Intelligent SysTems and Applications (INISTA), pp. 1–8. Cited by: §1.
- [4] (2016) Exploiting machine learning techniques for location recognition and prediction with smartphone logs. Neurocomputing 176, pp. 98–106. Cited by: §1.
- [5] (2014) Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555. Cited by: §2.3.
- [6] (2016) Clustering benefits in mobile-centric wifi positioning in multi-floor buildings. In 2016 International Conference on Localization and GNSS (ICL-GNSS), pp. 1–6. Cited by: §1.
- [7] (1990) Finding structure in time. Cognitive science 14 (2), pp. 179–211. Cited by: §2.3.
- [8] (2007) Wifi-slam using gaussian process latent variable models.. In IJCAI, Vol. 7, pp. 2480–2485. Cited by: §1.
- [9] (1999) Learning to forget: continual prediction with lstm. Cited by: §2.3.
- [10] (2006) Gaussian processes for signal strength-based location estimation. In Proceeding of robotics: science and systems, Cited by: §1.
- [11] (2019) Recurrent neural networks for accurate rssi indoor localization. arXiv preprint arXiv:1903.11703. Cited by: §1.
- [12] (2018) CNN based indoor localization using rss time-series. In 2018 IEEE Symposium on Computers and Communications (ISCC), pp. 01044–01049. Cited by: §1.
- [13] (2018) A scalable deep neural network architecture for multi-building and multi-floor indoor localization based on wi-fi fingerprinting. Big Data Analytics 3 (1), pp. 4. Cited by: §1.
- [14] (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: §3.
- [15] (2001) Predicting transmembrane protein topology with a hidden markov model: application to complete genomes. Journal of molecular biology 305 (3), pp. 567–580. Cited by: §2.3.
- [16] (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86 (11), pp. 2278–2324. Cited by: §2.2.
- [17] (2017) Crowdsourced wifi database and benchmark software for indoor positioning. Data set], Zenodo. doi 10. Cited by: §3.
- [18] (2017) Low-effort place recognition with wifi fingerprints using deep learning. In International Conference Automation, pp. 575–584. Cited by: §1.
- [19] (2019) A novel convolutional neural network based indoor localization framework with wifi fingerprinting. IEEE Access 7, pp. 110698–110709. Cited by: §1, §1.
- [20] (2012) Lecture 6.5-rmsprop: divide the gradient by a running average of its recent magnitude. COURSERA: Neural networks for machine learning 4 (2), pp. 26–31. Cited by: §3.
- [21] (2015) Gaussian process assisted fingerprinting localization. IEEE Internet of Things Journal 3 (5), pp. 683–690. Cited by: §1.
- [22] (2017) Modeling user activity patterns for next-place prediction. IEEE Systems Journal 11 (2), pp. 1060–1071. Cited by: §1.
