
How to train NARX neural network in closed loop

I am trying to use the Neural Network Toolbox to predict an internal temperature from a number of input conditions. I started from the automatically generated code for a NARX network and made some small changes. I am aware that the typical workflow is to train in open-loop form and then convert to closed loop, but I would like to compare the results of that approach with training the network in closed-loop form from the start.

 
When the fourth input argument of the narxnet command is set to 'open', the network trains with no problems. When I change it to 'closed', I get the following error messages:
 
 
Error using network/subsasgn>network_subsasgn (line 91)
Index exceeds matrix dimensions.

Error in network/subsasgn (line 13)
net = network_subsasgn(net,subscripts,v,netname);

Error in narx_closed (line 28)
net.inputs{2}.processFcns =
{'removeconstantrows','mapminmax'};
I'm not really sure what the problem is, as the Neural Network Toolbox User's Guide seems to suggest that this is all you need to do to create a closed-loop NARX network and train it directly. I have included my full code below:
 
 
%%Closed Loop NARX Neural Network

%%Load data and create input and output matrices

load('junior_class_data.mat');
U = [Outdoor_Temp, Position, Wind_Speed, Wind_Direction];
Y = [Zone_Temp];

inputSeries = tonndata(U,false,false);
targetSeries = tonndata(Y,false,false);

%%Create a Nonlinear Autoregressive Network with External Input

inputDelays = 0:2;
feedbackDelays = 1:2;
hiddenLayerSize = 10;
net = narxnet(inputDelays,feedbackDelays,hiddenLayerSize,'closed');

%%Pre-Processing

% Choose Input and Feedback Pre/Post-Processing Functions
% Settings for feedback input are automatically applied to feedback output
% For a list of all processing functions type: help nnprocess
% Customize input parameters at: net.inputs{i}.processParam
% Customize output parameters at: net.outputs{i}.processParam
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
net.inputs{2}.processFcns = {'removeconstantrows','mapminmax'};

% Prepare the Data for Training and Simulation
% The function PREPARETS prepares timeseries data for a particular network,
% shifting time by the minimum amount to fill input states and layer states.
% Using PREPARETS allows you to keep your original time series data unchanged, while
% easily customizing it for networks with differing numbers of delays, with
% open loop or closed loop feedback modes.
[inputs,inputStates,layerStates,targets] = preparets(net,inputSeries,{},targetSeries);

% Setup Division of Data for Training, Validation, Testing
% For a list of all data division functions type: help nndivide
net.divideFcn = 'divideblock';
% The property DIVIDEMODE set to 'value' means that every target value is
% divided individually into training, validation and test sets.
% For a list of data division modes type: help nntype_data_division_mode
net.divideMode = 'value';  % Divide up every value
net.divideParam.trainRatio = 80/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 5/100;

%%Training Function
% For a list of all training functions type: help nntrain
% Customize training parameters at: net.trainParam
net.trainFcn = 'trainlm';  % Levenberg-Marquardt

% Choose a Performance Function
% For a list of all performance functions type: help nnperformance
% Customize performance parameters at: net.performParam
net.performFcn = 'mse';  % Mean squared error

% Choose Plot Functions
% For a list of all plot functions type: help nnplot
% Customize plot parameters at: net.plotParam
net.plotFcns = {'plotperform','plottrainstate','plotresponse', ...
  'ploterrcorr', 'plotinerrcorr'};

%%Train the Network
[net,tr] = train(net,inputs,targets,inputStates,layerStates);

%%Test the Network
outputs = net(inputs,inputStates,layerStates);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)

% Recalculate Training, Validation and Test Performance
trainTargets = gmultiply(targets,tr.trainMask);
valTargets = gmultiply(targets,tr.valMask);
testTargets = gmultiply(targets,tr.testMask);
trainPerformance = perform(net,trainTargets,outputs)
valPerformance = perform(net,valTargets,outputs)
testPerformance = perform(net,testTargets,outputs)

%%View the Network
view(net)
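Why the assignment fails: when narxnet is created with feedbackMode 'closed', the feedback signal travels through an internal layer delay rather than through a second external input, so the network exposes only one input and net.inputs{2} is out of range. A minimal check of this (a sketch; exact property values may vary by toolbox release):

```matlab
% A closed-loop NARX network folds the feedback input into the loop,
% leaving a single external input.
net_open   = narxnet(0:2, 1:2, 10, 'open');
net_closed = narxnet(0:2, 1:2, 10, 'closed');
net_open.numInputs     % 2: external input plus feedback input
net_closed.numInputs   % 1: feedback input no longer exists
% Indexing net_closed.inputs{2} is therefore out of range, which is what
% raises "Index exceeds matrix dimensions" in network/subsasgn.
```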



ANSWER






close all, clear all, clc
disp('DIRECT TRAINING OF A CLOSELOOP NARXNET')
load('maglev_dataset');
whos
%   Name              Size      Bytes   Class 
%   maglevInputs      1x4001    272068  cell                
%   maglevTargets     1x4001    272068  cell                
X   = maglevInputs; 
T   = maglevTargets;
ID  = 1:2, FD = 1:2, H  = 10   % Default values
netc                    = closeloop(narxnet(ID,FD,H));
view(netc)
netc.divideFcn          = 'divideblock';
[ Xcs, Xci, Aci, Tcs ]  = preparets( netc, X, {}, T );
tcs                     = cell2mat(Tcs);
whos X T Xcs Xci Aci Tcs tcs
%  Name     Size     Bytes   Class
%   Aci     2x2         416  cell                
%   T       1x4001   272068  cell                
%   Tcs     1x3999   271932  cell                
%   X       1x4001   272068  cell                
%   Xci     1x2         136  cell                
%   Xcs     1x3999   271932  cell                
%   tcs     1x3999    31992  double              

MSE00cs = var(tcs,1)  % 2.0021 (1-dim MSE reference)

rng(4151941)
tic
[netc, trc, Ycs, Ecs, Xcf, Acf] = train(netc,Xcs,Tcs,Xci,Aci);
toc                           % 197 sec
view(netc)
whos Ycs Ecs Xcf Acf
%   Name    Size     Bytes     Class
%   Acf     2x2         416    cell               
%   Ecs     1x3999   271932    cell               
%   Xcf     1x2         136    cell               
%   Ycs     1x3999   271932    cell               

stopcriterion  = trc.stop                     % Validation stop
bestepoch      = trc.best_epoch               % 4
ecs            = cell2mat(Ecs);
NMSEcs         = mse(ecs)/MSE00cs             %  1.2843
tcstrn         = tcs(trc.trainInd);
tcsval         = tcs(trc.valInd);
tcstst         = tcs(trc.testInd);
NMSEcstrn      = trc.best_perf/var(tcstrn,1)  %  1.3495
NMSEcsval      = trc.best_vperf/var(tcsval,1) %  0.9325
NMSEcstst      = trc.best_tperf/var(tcstst,1) %  1.6109
I consider a good design to have a normalized MSE,
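Applied to the data from the original question, the same approach sidesteps the subsasgn error: set the processing functions while the network is still open loop (when both inputs exist), then call closeloop. This is a sketch using the variable names from the question; it is untested here, since junior_class_data.mat is not available:

```matlab
% Sketch: direct closed-loop training on the question's data.
load('junior_class_data.mat');
U = [Outdoor_Temp, Position, Wind_Speed, Wind_Direction];
Y = Zone_Temp;
X = tonndata(U, false, false);
T = tonndata(Y, false, false);

% Configure processing on the OPEN-loop network, where inputs{1} (external)
% and inputs{2} (feedback) both exist...
neto = narxnet(0:2, 1:2, 10);
neto.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
neto.inputs{2}.processFcns = {'removeconstantrows','mapminmax'};

% ...then close the loop. The feedback input disappears, but its processing
% settings are carried over into the closed-loop network.
netc = closeloop(neto);
netc.divideFcn = 'divideblock';

[Xc, Xci, Aci, Tc] = preparets(netc, X, {}, T);
[netc, trc] = train(netc, Xc, Tc, Xci, Aci);

% Evaluate with a normalized MSE, as in the maglev example above.
Yc   = netc(Xc, Xci, Aci);
ec   = cell2mat(gsubtract(Tc, Yc));
NMSE = mse(ec) / var(cell2mat(Tc), 1)
```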


