In cognitive radio networks (CoR), the performance of cooperative spectrum sensing is improved by reducing the overall error rate or maximizing the detection probability. Several optimization methods are commonly used to optimize the number of users chosen for cooperation and the threshold selection. However, these methods do not take into account the sample size and its effect on CoR performance. In general, a large sample size results in more reliable detection but requires a longer sensing time and increases complexity. The locally sensed sample size is therefore itself an optimization problem, and optimizing the local sample size for each cognitive user helps to improve CoR performance. In this study, two new methods are proposed to find the optimum sample size for objective-based improved (single/double) threshold energy detection: the optimum sample size N* method and neural network (NN) optimization. Through the evaluation, it was found that the proposed methods outperform traditional sample size selection in terms of total error rate, detection probability, and throughput.

Cognitive radio (CR) has emerged as an intelligent solution to overcome spectrum scarcity and inefficient spectrum usage. Cognitive users can use the licensed spectrum as long as such usage does not cause harmful interference to the licensed users. This form of spectrum usage leads to new challenges such as spectrum sensing. Becoming aware of the primary licensed users' activities is the first step performed by a cognitive radio system. Many spectrum-sensing techniques have been proposed in the literature [

Among these sensing methods, the most commonly used technique is energy detection, because of its low cost and simple implementation [

The conventional energy detector maximizes the generalized likelihood function, which may not maximize the detection probability or minimize the false alarm/missed detection probability. Chen showed that by replacing the squaring operation of the conventional energy detector with an arbitrary positive power, a new energy detector, the "improved energy detector", with better detection performance can be derived [

In this paper, local and cooperative spectrum sensing performance is optimized based on an improved double threshold energy detection scheme by finding the optimum sample size, which minimizes the total error rate. The rest of this paper is organized as follows: Section 2 reviews the background and motivations. In Section 3, the machine learning approach in cognitive radio is presented. The effect of sample size on the performance metrics is reviewed in Section 4. In Section 5, the methodologies of the study are illustrated, showing the theoretical and mathematical calculations related to the optimum sample size. In Section 6, results and discussion are presented, and the paper is concluded in Section 7.

Learning techniques for spectrum sensing are known tools for solving complex classification problems. These techniques are used to efficiently manage the spectrum in wireless communication, in addition to managing resource power to obtain high quality of service (QoS) for mobile users. In the CR domain, one objective of using learning techniques is to enhance the performance of spectrum sensing.

In general, learning mechanisms in CR are divided into two stages, namely learning and prediction. The data is provided in a format related to the learning stage of the primary user (PU) features and secondary user (SU) sensor parameters such as test statistics, signal-to-noise ratio (SNR), geographical location, etc., while the prediction stage can be related to the result of the sensor, energy efficiency, and the operating model to be adopted [

In normal spectrum sensing, SU has to determine the threshold for the test statistic before deciding on the PU presence. This threshold may be calculated based on target false alarm and detection rates. Thus, many statistical parameters related to noise, channel, and PU signal must be known in advance. The majority of work focuses on tuning learning methods with numerical statistics of two hypotheses: H0, where PU is assumed absent, and H1, where PU is assumed active.

Learning methods help to determine the optimum threshold value that minimizes the errors arising in spectrum sensing; such errors can lead to interference between primary and secondary transmissions, or to the secondary user missing the opportunity to use the available spectrum bands, see

According to what has been presented, the motivation of this paper is to present a method based on machine learning to improve spectrum sensing, building on the analysis and evaluation of an optimized error-minimization model. The scope of the study is a case for integrating double threshold energy detection and machine learning into cognitive radio systems to optimize the error minimization method. In short, the contributions of this paper can be summarized as follows:

Review the effects of sample size on detection and false alarm probabilities in spectrum sensing techniques.

Review the mathematical calculations related to spectrum sensing and double threshold energy detection.

Develop an optimization model for minimizing error probability based on machine learning and optimum sample size.

Analyze the performance of the developed model compared with local spectrum sensing and the optimum sample size.

Learning can enable performance improvement in a cognitive radio network by using stored information about the network's own actions, the results of those actions, and the actions of other users to aid the decision-making process. Intelligence, as the main ingredient of cognitive radio technology, can be employed in every stage of the cognition cycle [

Based on this definition, learning is related to the ability to synthesize the acquired knowledge to improve the future behavior of the learning agent. This makes knowledge a fundamental component of the learning process and relates to the term cognition, which is defined as the act or process of knowing or perception [

Artificial neural networks (ANN), shown as classifiers, often employ supervised learning where the information is processed to achieve a predefined target output [

The network obtains knowledge from its environment through a learning process.

The strengths of interneuron connections, commonly called synaptic weights, are used for storing the gained knowledge. ANNs have capabilities such as nonlinear fitting of the underlying physical mechanisms, as well as adaptivity to minor changes in the surrounding environment.

For pattern classification, an ANN provides information about which particular pattern to choose, along with the confidence in making the appropriate decision. The main features of ANNs are their ability to learn complex nonlinear input/output relationships, their use of sequential training procedures, and their ability to adapt themselves to the data. These features qualify ANNs as a significant tool for classification, which is needed frequently in human decision-making tasks [

In cognitive spectrum sensing, the sample size can affect performance metrics because the sample size N must be large enough to meet the requirements on detection and false alarm probabilities (e.g., the IEEE targets Pf ≤ 0.1 and Pd ≥ 0.9). When the number of samples N is large, the sensing time becomes longer, which leads to a high detection probability; the sensing time depends on and increases with the sample size N. The decrease in false alarm probability and the increase in detection probability with increasing sensing time are expected to increase the network throughput [
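As a concrete illustration of this tradeoff, the sample size required by the conventional (p = 2) energy detector can be estimated from the standard Gaussian-approximation formula N ≈ [(Q⁻¹(Pf) − Q⁻¹(Pd)√(2γ + 1))/γ]², where γ is the linear SNR. The sketch below is an illustration of that standard result, not the paper's own code:

```python
import math
from statistics import NormalDist

def q_inv(p):
    """Inverse Gaussian tail function Q^{-1}(p)."""
    return NormalDist().inv_cdf(1.0 - p)

def required_samples(pf_target, pd_target, snr):
    """Minimum N meeting (Pf <= pf_target, Pd >= pd_target) for the
    conventional (p = 2) energy detector under the Gaussian
    approximation. snr is the linear (not dB) signal-to-noise ratio."""
    n = ((q_inv(pf_target) - q_inv(pd_target) * math.sqrt(2.0 * snr + 1.0))
         / snr) ** 2
    return math.ceil(n)

# IEEE-style targets Pf <= 0.1, Pd >= 0.9: N explodes as the SNR drops
for snr_db in (-20, -10, 0):
    print(snr_db, "dB ->", required_samples(0.1, 0.9, 10 ** (snr_db / 10)))
```

Note how a drop from 0 dB to −20 dB raises the required N by several orders of magnitude, which is exactly the low-SNR sensing-time burden discussed above.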

The achievable throughput of cooperative spectrum sensing can be defined as the average amount of successfully delivered transmitted bits. Considering the sampling time as t, the sensing time is derived as in

Let P (H0) and P (H1) be the probabilities that hypothesis H0 and H1 are true respectively. Let γs represent the signal-to-noise ratio of the secondary point-to-point link and γp represents the signal-to-noise ratio of the primary user at the secondary receiver [

where C0 is the capacity of the secondary network in the absence of the primary user, and C1 is the capacity of the secondary network in the presence of the primary user.
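Under the common sensing-throughput model (a sketch consistent with these definitions; the frame duration T and sensing time τ = Nt are assumed symbols, not taken from the paper), the achievable throughput weighs the two capacities by the probabilities of correctly identifying each hypothesis:

```latex
R(\tau)=\frac{T-\tau}{T}\Big[C_0\,P(H_0)\,(1-P_f)+C_1\,P(H_1)\,(1-P_d)\Big],
\qquad C_0=\log_2(1+\gamma_s),\qquad
C_1=\log_2\!\Big(1+\frac{\gamma_s}{1+\gamma_p}\Big)
```

The (T − τ)/T factor captures the fraction of the frame left for data transmission, which is why a longer sensing time eventually costs throughput.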

This paper aims to develop two new methods to calculate the optimum sample size, which minimizes the total error probability, based on an improved double threshold energy detector and the network requirements. In the first method, the optimization problem that minimizes the total error rate probability is solved mathematically. In the second method, a neural network is used as an intelligent way to find the optimum sample size that minimizes the error probability when the target detection probability, the target false alarm probability, and the SNR are known. These methods are described as follows.

The local spectrum-sensing problem can be formulated as follows:

where s(n) and u(n) represent the primary user's signal and the noise, respectively; both are assumed to be independent and identically distributed (i.i.d.) random processes with zero mean and variances σs2 and σu2, respectively. The test statistic of the improved energy detector is given by
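For reference, the binary hypothesis model and the improved-detector statistic described above can be written as follows (a reconstruction from the surrounding text, with p > 0 the arbitrary power and p = 2 recovering the conventional detector):

```latex
H_0:\ y(n)=u(n),\qquad H_1:\ y(n)=s(n)+u(n),\qquad n=1,\dots,N,
\qquad T_p=\sum_{n=1}^{N}\lvert y(n)\rvert^{p}
```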

For any p,

where Γ(·) is the gamma function, and (µ0, µ1) and (σ02, σ12) are the means and variances under H0 and H1.

As

The performance of the energy detector is characterized using the following metrics, which are introduced based on the test statistic under the binary hypothesis as follows:

where Pf, Pd, and Pm are the false alarm, detection, and missed detection probabilities, and T is the threshold.

A good sensing technique must have a high Pd (for PU protection) and a low Pf (for SU protection). P_{m} is proportional to the interference induced by each SU on the PU, which must be as low as possible. The total error probability is then
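With the Gaussian approximation of the test statistic (means µ0, µ1 and variances σ02, σ12 as defined above), these metrics and the total error take the following form (a sketch; the paper's exact expressions may weight the two hypotheses by their prior probabilities):

```latex
P_f=Q\!\Big(\frac{T-\mu_0}{\sigma_0}\Big),\qquad
P_d=Q\!\Big(\frac{T-\mu_1}{\sigma_1}\Big),\qquad
P_m=1-P_d,\qquad
P_e=P_f+P_m
```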

For the improved double threshold energy detector, the optimization problem when K, L = 1, 2, …, K, and the SNR are known can be defined by the

where,

We can get the optimum sample size

For the improved double threshold energy detector, the optimal sample size (N*), which minimizes the total error probability, is given in [

The optimal N (

where,

In Section 5.3, the optimal sample size was calculated, which enables the error probability to be minimized toward zero. In some cases, the optimal sample size might be too large. This is the main disadvantage of spectrum sensing in a low-SNR environment, because of the limitation on the maximal allowable sensing time, and it may lead to inefficient spectrum utilization, as mentioned earlier in this subsection [

The neural networks, shown as classifiers, often employ supervised learning where the information is processed to achieve a predefined target output. This approach can be considered as an optimization problem whose objective function is to minimize the total error resulting from the difference between the target and the actual outputs [

Step 1: Design the neural network based on the input-output parameters, network type, and network parameters, in addition to the database length.

Step 2: Analyze the results.

Step 3: Redesign the network.

Parameters | Value |
---|---|
Maximum number of sensing SUs | 150 |
Noise channel model | AWGN |
Probability of false alarm (P_{f}) | 0 to 1 |
Number of samples during detection period (N) | 10 to 1000 |
Total number of SUs in cooperation (M) | 5, 10, 20, and 150 |
Noise power (σ_{n}^{2}) | 1 |
Signal-to-noise ratio (SNR) | −20 to 0 dB |
Number of input neurons | 3 |
Number of output neurons | 1 |
Number of neurons in the hidden layer | 10 |
Learning rate | 0.1 |
Sigmoid function in the hidden layer | Log-sig |
Training algorithm | Feedforward |
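A minimal sketch of the network described by the table (3 inputs, one hidden layer of 10 log-sigmoid neurons, 1 linear output, learning rate 0.1), trained by plain gradient descent. The training data here is a synthetic placeholder; the paper's actual dataset and MATLAB toolbox settings are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

def logsig(x):
    """Log-sigmoid activation used in the hidden layer."""
    return 1.0 / (1.0 + np.exp(-x))

# Dimensions from the simulation table: 3 inputs (e.g. target Pd,
# target Pf, SNR), 10 hidden neurons, 1 output (the sample size N).
W1, b1 = rng.normal(0, 0.5, (10, 3)), np.zeros((10, 1))
W2, b2 = rng.normal(0, 0.5, (1, 10)), np.zeros((1, 1))
lr = 0.1  # learning rate from the table

# Synthetic placeholder data: normalized inputs, scalar target.
X = rng.uniform(0.0, 1.0, (3, 200))
y = ((X[0] - X[1] + X[2]) / 3.0).reshape(1, -1)

losses = []
for _ in range(500):
    h = logsig(W1 @ X + b1)          # hidden layer (log-sigmoid)
    out = W2 @ h + b2                # linear output layer
    err = out - y
    losses.append(float((err ** 2).mean()))
    # Backpropagation of the mean-squared-error gradient
    d2 = err
    d1 = (W2.T @ d2) * h * (1.0 - h)
    m = X.shape[1]
    W2 -= lr * d2 @ h.T / m
    b2 -= lr * d2.mean(axis=1, keepdims=True)
    W1 -= lr * d1 @ X.T / m
    b1 -= lr * d1.mean(axis=1, keepdims=True)

print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The supervised objective matches the description in Section 3: the network minimizes the total error between target and actual outputs, which is exactly the MSE being driven down in the loop.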

In this study, performance metrics such as the total error rate (Qr), throughput, and error probability are used to evaluate the NN method compared with the fixed-N and optimum-N methodologies. The total error rate (Qr) can be calculated based on the overall probabilities of false alarm (

The false alarm (

From the

According to

where M is the total number of secondary users, and k is the number of secondary users chosen for cooperation.
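A small sketch of the k-out-of-M hard-decision fusion computation; the total error rate is assumed here to be Qr = Qf + (1 − Qd), mirroring the local definition Pe = Pf + Pm:

```python
from math import comb

def fusion_prob(p_local, M, k):
    """Probability that at least k of M independent SUs, each voting
    'PU present' with probability p_local, trigger the k-out-of-M rule."""
    return sum(comb(M, l) * p_local ** l * (1.0 - p_local) ** (M - l)
               for l in range(k, M + 1))

# Overall false alarm, detection, and total error for majority voting
M, k = 5, 3
Qf = fusion_prob(0.1, M, k)    # each SU has local Pf = 0.1
Qd = fusion_prob(0.9, M, k)    # each SU has local Pd = 0.9
Qr = Qf + (1.0 - Qd)
print(f"Qf = {Qf:.4f}, Qd = {Qd:.4f}, Qr = {Qr:.4f}")
```

With these illustrative local probabilities, majority fusion drives the total error well below the local error Pf + Pm = 0.2, showing why cooperation helps.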

For the throughput-versus-sensing-time calculations, the cooperative spectrum-sensing throughput is given by the average amount of successfully delivered transmitted bits. It depends on the correct identification of the spectrum hole. The sensing time optimization problem can be mathematically formulated as

where,
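The tradeoff can be illustrated numerically: holding the detection target fixed, Pf falls as the sensing time τ grows, but the transmission fraction (T − τ)/T shrinks, so the throughput peaks at an interior τ. All parameter values below (frame length, sampling rate, SNRs, P(H0)) are illustrative assumptions, and the conventional-detector Gaussian approximation is used:

```python
import math
from statistics import NormalDist

nd = NormalDist()
Q = lambda x: 1.0 - nd.cdf(x)          # Gaussian tail probability
Qinv = lambda p: nd.inv_cdf(1.0 - p)

# Illustrative assumptions: 100 ms frame, 1 MHz sampling, PU SNR -15 dB,
# detection target Pd = 0.9, idle probability P(H0) = 0.8, C0 from an
# assumed 13 dB secondary link (the smaller C1 term is dropped).
T_frame, fs, gamma = 0.1, 1e6, 10 ** (-15 / 10)
pd_bar, p_h0 = 0.9, 0.8
C0 = math.log2(1.0 + 10 ** (13 / 10))

def throughput(tau):
    n = tau * fs                        # samples gathered while sensing
    # Pf achieved when the threshold guarantees Pd = pd_bar
    pf = Q(math.sqrt(2 * gamma + 1) * Qinv(pd_bar) + math.sqrt(n) * gamma)
    return (T_frame - tau) / T_frame * C0 * p_h0 * (1.0 - pf)

taus = [i * 1e-3 for i in range(1, 100)]    # candidate sensing times, 1..99 ms
best = max(taus, key=throughput)
print(f"optimal sensing time ~ {best * 1e3:.0f} ms")
```

The grid search finds a maximum strictly inside the interval, matching the conclusion that throughput rises with sensing time, peaks, and then declines.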

All simulations in this paper are executed in MATLAB (version R2020a). The Monte Carlo (MC) method, a stochastic technique based on the use of random numbers, forms the basis of these simulations [

For the simulations, an additive white Gaussian noise (AWGN) channel is considered, with the signal-to-noise ratio (SNR) taken from −20 dB to 0 dB. For cooperation, the k-out-of-M fusion rule is used with M = 5, 10, 20, and 150, where M is the total number of SUs participating in cooperation. The number of samples during the detection period (N) ranges from 10 to 1000, calculated by the proposed methods. Pf varies from 0 to 1. The noise power is assumed to be known, σn2 = 1.
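A Monte Carlo sketch in the same spirit (written in Python rather than MATLAB): the threshold is set from the Gaussian approximation for a target Pf = 0.1 with known noise power σn² = 1, then the false-alarm and detection probabilities are estimated empirically over AWGN trials. The −10 dB PU SNR and trial count are illustrative choices:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)
N, trials, pf_target, sigma2 = 1000, 5000, 0.1, 1.0

# Threshold from the Gaussian approximation of the conventional (p = 2)
# detector under H0: sum(y^2) ~ Normal(N*sigma2, 2*N*sigma2^2).
lam = sigma2 * (N + NormalDist().inv_cdf(1.0 - pf_target) * np.sqrt(2.0 * N))

noise = rng.normal(0.0, np.sqrt(sigma2), (trials, N))   # H0: noise only
pf_emp = float(np.mean(np.sum(noise ** 2, axis=1) > lam))

snr = 10 ** (-10 / 10)                                  # PU at -10 dB SNR
sig = rng.normal(0.0, np.sqrt(snr * sigma2), (trials, N))
pd_emp = float(np.mean(np.sum((sig + noise) ** 2, axis=1) > lam))

print(f"empirical Pf = {pf_emp:.3f}, empirical Pd = {pd_emp:.3f}")
```

The empirical Pf lands near the 0.1 design target, confirming the threshold calculation, while Pd reflects how much N = 1000 samples buys at this SNR.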

Before evaluating the optimized error-minimization machine learning method, we analyze the fixed-N and optimum-N methods as calculated in Sections 5.1 and 5.3 in terms of throughput and power operation. The optimum power-operation value that minimizes the total error rate is found to be p = 2.4, as shown in

The evaluation of throughput in

Based on these evaluation results, we analyze the optimized error-minimization machine learning method compared with the fixed-N and optimum-N methods in terms of error probability, detection and missed detection probabilities, and the optimal fusion rule in the following sections [

Using the above-mentioned methods to find the optimum sample size, a comparison between the two methods and a constant-N selection is shown in terms of error probability [

In terms of detection probability, the simulations make clear that when the optimum sample size methods are used, better detection can be achieved, as shown in

The optimum fusion rule is the rule that minimizes the error probability [

Since the sample size affects the local sensing performance at each cognitive user, it also affects the optimal fusion rule when all users cooperate. The optimal fusion rule changes as the total number of cooperative users changes. In addition, differences in SNR at each cognitive user might affect the optimal fusion rule, as shown in

In this paper, the effects of sample size on the performance metrics are discussed, and it is shown that the sample size is an important optimization problem, especially in the low signal-to-noise ratio region. Two new methods to calculate the optimum number of samples are proposed. These methods outperform traditional sample size selection in terms of total error rate, detection probability, and throughput. The optimum power operation that minimizes the total error rate is found to be p = 2.4. We found that the optimum fusion rule is the majority rule when all users have the same SNR, and otherwise is close to the majority rule. In addition, it was found that increasing the sensing time enhances the throughput of the CR network: the throughput increases with sensing time, reaches a maximum, and thereafter decreases.

This research was funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R97), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.