This example is solved with Neural Designer. To follow it step by step, you can use the free trial.

The variable to be predicted is continuous (sound pressure level). Therefore, this is an approximation project. The fundamental goal is to model the sound pressure level as a function of the airfoil features and airspeed.

The first step is to prepare the data set, which is the source of information for the approximation problem. The file airfoil_self_noise.csv contains the data for this example. Here, the number of variables (columns) is 6, and the number of instances (rows) is 1503. In this way, the problem has the following 6 variables:

- frequency, in hertz, used as input.
- angle_of_attack, in degrees, used as input.
- chord_length, in meters, used as input.
- free_stream_velocity, in meters per second, used as input.
- suction_side_displacement_thickness, in meters, used as input.
- scaled_sound_pressure_level, in decibels, used as the target.

The NASA data set contains 1503 instances. They are divided at random into training, selection, and testing subsets, containing 50%, 25%, and 25% of the instances, respectively. More specifically, 753 samples are used here for training, 375 for selection, and 375 for testing.

Once all the data set information has been set, we perform some analytics to check the quality of the data. For instance, we can calculate the data distribution. The following figure depicts the histogram for the target variable. As we can see, the scaled sound pressure level has a normal distribution.

The following figure depicts the inputs-targets correlations. This might help us see the influence of the different inputs on the sound level. As we can see, the wave's frequency has the most significant impact on the noise.

We can also plot a scatter chart of the scaled sound pressure level versus the frequency. In general, the higher the frequency, the lower the scaled sound pressure level. However, the scaled sound pressure level depends on all the inputs simultaneously.

The neural network will output the scaled sound pressure level as a function of the frequency, angle of attack, chord length, free stream velocity, and suction side displacement thickness. For this approximation example, the neural network is composed of a scaling layer, two perceptron layers, and an unscaling layer.

The scaling layer transforms the original inputs into normalized values. Here the mean and standard deviation scaling method is set so that the input values have a mean of 0 and a standard deviation of 1.

Two perceptron layers are added to the neural network. This number of layers is enough for most applications. The first layer has five inputs and three neurons. The second layer has three inputs and one neuron.

The unscaling layer transforms the normalized values from the neural network back into the original outputs. Here the mean and standard deviation unscaling method will also be used.

The next figure shows the resulting network architecture. This neural network represents a function containing 22 adjustable parameters.

The next step is to select an appropriate training strategy, which defines what the neural network will learn. A general training strategy is composed of two concepts: a loss index and an optimization algorithm.

The loss index chosen is the normalized squared error with L2 regularization. This loss index is the default in approximation applications. The optimization algorithm chosen is the quasi-Newton method. This optimization algorithm is the default for medium-sized applications like this one.

Once the strategy has been set, we can train the neural network. The following chart shows how the training (blue) and selection (orange) errors decrease with the training epoch during the training process.

The most important training result is the final selection error. Indeed, this is a measure of the generalization capabilities of the neural network. Here the final selection error is 0.112 NSE.

The objective of model selection is to find the network architecture with the best generalization properties, improving the final selection error obtained before (0.112 NSE). The best selection error is achieved by using a model whose complexity is the most appropriate to produce an adequate data fit. Order selection algorithms are responsible for finding the optimal number of perceptrons in the neural network.

The following chart shows the results of the incremental order algorithm. The blue line plots the final training error as a function of the number of neurons, and the orange line plots the final selection error as a function of the number of neurons. As we can see, the final training error always decreases with the number of neurons.
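The random split into training, selection, and testing subsets can be sketched in a few lines. This is not Neural Designer's code, just an illustrative numpy version using the instance counts from this example (753 / 375 / 375):

```python
import numpy as np

# Illustrative sketch: shuffle the 1503 instance indices and cut them
# into training, selection, and testing subsets, as in this example.
n_instances = 1503
n_train, n_selection, n_test = 753, 375, 375

rng = np.random.default_rng(seed=0)
indices = rng.permutation(n_instances)

train_idx = indices[:n_train]
selection_idx = indices[n_train:n_train + n_selection]
test_idx = indices[n_train + n_selection:]
```

Because the indices are permuted before slicing, the three subsets are disjoint and together cover every instance exactly once.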
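The mean and standard deviation scaling method described above amounts to standardizing each input column. A minimal sketch, using placeholder data standing in for the five airfoil inputs:

```python
import numpy as np

# Sketch of mean/standard-deviation scaling: each input column is
# transformed to have mean 0 and standard deviation 1. X is synthetic
# placeholder data, not the actual airfoil_self_noise.csv contents.
rng = np.random.default_rng(seed=1)
X = rng.normal(loc=5.0, scale=2.0, size=(100, 5))

means = X.mean(axis=0)
stds = X.std(axis=0)
X_scaled = (X - means) / stds
```

The unscaling layer applies the inverse transformation, `y * std + mean`, to recover outputs in the original units.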
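The figure of 22 adjustable parameters follows directly from the 5-3-1 architecture. A sketch that makes the count explicit (the tutorial does not state the activation functions, so the hyperbolic tangent hidden layer below is an assumption):

```python
import numpy as np

# Parameter count for the 5-3-1 network described above:
#   first layer  (5 inputs -> 3 neurons): 5*3 weights + 3 biases = 18
#   second layer (3 inputs -> 1 neuron):  3*1 weights + 1 bias   =  4
#   total adjustable parameters                                  = 22
inputs, hidden, outputs = 5, 3, 1

W1 = np.zeros((inputs, hidden)); b1 = np.zeros(hidden)
W2 = np.zeros((hidden, outputs)); b2 = np.zeros(outputs)

n_parameters = W1.size + b1.size + W2.size + b2.size  # 22

def forward(x):
    # Assumed tanh hidden layer with a linear output layer,
    # a common choice for approximation problems.
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2
```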
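The loss index can also be written down concretely. The exact normalization Neural Designer applies is not spelled out in this article; a common definition of the normalized squared error divides the sum of squared errors by the sum of squared deviations of the targets from their mean, so that a model which always predicts the target mean scores NSE = 1. A sketch under that assumption, with the L2 term added:

```python
import numpy as np

# Assumed definition of normalized squared error with L2 regularization:
#   NSE = sum((outputs - targets)^2) / sum((targets - mean(targets))^2)
#   loss = NSE + weight * sum(parameters^2)
def nse_l2(outputs, targets, parameters, weight=0.01):
    sse = np.sum((outputs - targets) ** 2)
    normalization = np.sum((targets - targets.mean()) ** 2)
    regularization = weight * np.sum(parameters ** 2)
    return sse / normalization + regularization

# A model that always predicts the target mean gives NSE = 1
# (zero parameters here, so the regularization term vanishes).
targets = np.array([1.0, 2.0, 3.0, 4.0])
mean_prediction = np.full_like(targets, targets.mean())
loss = nse_l2(mean_prediction, targets, parameters=np.zeros(22))
```

Under this convention, the final selection error of 0.112 NSE means the network's squared error is about 11% of that of a constant-mean predictor.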
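The idea behind the incremental order algorithm can be illustrated with a self-contained toy: grow model complexity step by step, record the selection error at each step, and keep the complexity with the lowest selection error. Polynomial order stands in for the number of neurons here purely to keep the sketch runnable; Neural Designer grows perceptron layers instead:

```python
import numpy as np

# Toy stand-in for incremental order selection: fit models of
# increasing complexity (polynomial order here, number of neurons in
# Neural Designer) and pick the one with the lowest selection error.
rng = np.random.default_rng(seed=2)
x = np.linspace(-1, 1, 60)
y = np.sin(3 * x) + rng.normal(scale=0.1, size=x.size)

x_train, y_train = x[::2], y[::2]   # even samples for training
x_sel, y_sel = x[1::2], y[1::2]     # odd samples for selection

orders = range(1, 11)
selection_errors = []
for order in orders:
    coeffs = np.polyfit(x_train, y_train, deg=order)
    residuals = np.polyval(coeffs, x_sel) - y_sel
    selection_errors.append(np.mean(residuals ** 2))

best_order = list(orders)[int(np.argmin(selection_errors))]
```

As in the chart described above, training error keeps falling as complexity grows, while selection error eventually turns back up; the minimum of the selection curve marks the chosen order.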