I wanted to run a grid search to find suitable parameters for my SVM model, but I have discovered that fitrsvm gives inconsistent errors (RMSE values) if the Epsilon parameter is generated using a for loop. For example, the RMSE for my model with epsilon = 0.8 is different if I use the loop

for epsilon = 0.8:.1:1.2

compared with the loop

for epsilon = 0.1:.1:1.2

The RMSEs are 2.6868 and 2.7020 respectively.
I thought this might be some floating-point error, so I tried to ensure that the epsilon value passed to fitrsvm was exactly 0.8. I did this by creating the variable d_epsilon (the second commented-out line inside the loop in the code below) and passing its value to fitrsvm, i.e. by changing the call to 'Epsilon', d_epsilon, but this did not work. By contrast, using c_epsilon, which is completely independent of the for loop, does work.
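To make the floating-point question concrete, here is a small illustrative check (separate from the main script below) of how a value produced by the colon range can differ from the literal 0.8, and how the round/string trick used for d_epsilon removes any such difference:

epsValues = 0.1:0.1:1.2;                      % the same range as the loop
e8 = epsValues(8);                            % the element intended to be 0.8
e8 - 0.8                                      % may be a tiny nonzero amount (zero, or on the order of 1e-16 to 1e-15)
d_epsilon = str2double(string(round(e8,2)));
d_epsilon - 0.8                               % 0: the round-trip recovers the literal 0.8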
In my real project, I use nested loops to search for values of Epsilon, BoxConstraint, and KernelScale (a sketch of that nested search appears after the code below). The inconsistencies in my results are about 10%. (I am using a grid search because the parameters returned by OptimizeHyperparameters perform worse than some of the parameters cited in journal articles for my dataset, UCI's auto-mpg.)
clear all
%%read in auto-mpg.csv. This is a cleaned version of UCI dataset auto-mpg
data = readtable('auto-mpg.csv','ReadVariableNames',false);
VarNames = {'mpg','cylinders' 'displacement' 'horsepower' 'weight' 'acceleration' ...
    'modelYear' 'origin' 'carName'};
data.Properties.VariableNames = VarNames;
data = [data(:,2:9) data(:,1)];
data.carName=[];

%%carry out 10 fold cross-validation with different epsilon values
testResults_SVM=[];
testActual_SVM=[];
rng('default')
c = cvpartition(data.mpg,'KFold',10);
for epsilon = 0.1:0.1:1.2
    %c_epsilon= 0.80000;
    %d_epsilon = str2double(string(round(epsilon,2)))
    for fold = 1:10
        cv_trainingData = data(c.training(fold), :);
        cv_testData = data(c.test(fold), :);
        AutoSVM = fitrsvm(cv_trainingData,'mpg',...
            'KernelFunction', 'gaussian', ...
            'PolynomialOrder', [], ...
            'KernelScale', 5.5, ...
            'BoxConstraint', 100, ...
            'Epsilon', epsilon, ...
            'Standardize', true);
        convergenceChk(fold)=AutoSVM.ConvergenceInfo.Converged;
        testResults_SVM=[testResults_SVM;predict(AutoSVM,cv_testData)];
        testActual_SVM=[testActual_SVM;cv_testData.mpg];
    end
    %%generate summary statistics and plots
    residual_SVM = testResults_SVM-testActual_SVM;
    AutoMSE_SVM=((sum((residual_SVM).^2))/size(testResults_SVM,1));
    AutoRMSE_SVM = sqrt(AutoMSE_SVM);
    if round(epsilon,4) == 0.8
        AutoRMSE_SVM
    end
end
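For context, the nested grid search mentioned above has roughly the following shape. This is only an illustrative sketch: the grids below are placeholders rather than the values actually searched, and scoring each combination with fitrsvm's 'CVPartition' option and kfoldLoss is just one compact way to do it (it reuses the data table and the cvpartition c from the script above):

epsilonGrid = 0.1:0.1:1.2;      % placeholder ranges, not the real search grids
boxGrid     = [1 10 100];
scaleGrid   = [1 5.5 10];
bestLoss = Inf;
bestParams = [];
for e = epsilonGrid
    for b = boxGrid
        for s = scaleGrid
            mdl = fitrsvm(data,'mpg', ...
                'CVPartition', c, ...               % 10-fold partition defined above
                'KernelFunction', 'gaussian', ...
                'KernelScale', s, ...
                'BoxConstraint', b, ...
                'Epsilon', e, ...
                'Standardize', true);
            L = kfoldLoss(mdl);                     % cross-validated MSE
            if L < bestLoss
                bestLoss = L;
                bestParams = [e b s];
            end
        end
    end
end
bestRMSE = sqrt(bestLoss)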
Can we simplify things a bit? Here's a version of your code that uses fitrsvm's built-in cross-validation (the 'CVPartition' option) instead of explicit fold loops.
The first loop below uses the range .1:.1:1.2. The second uses .8:.1:1.2, and the third uses the values .1,.2,...,1.2 individually.
In all 3 cases the cross-validation loss of the SVM is exactly the same. Notice that this is true even though there is roundoff error in the epsilons calculated in the first loop compared to the individual values in the last loop. So the SVM fitting is robust to tiny differences in epsilon (on the order of 1e-15 here).
So it doesn't look like SVM has a problem with epsilons generated in a loop.
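As a quick illustration of the size of that roundoff (this check is separate from the code below):

loopEps  = 0.1:0.1:1.2;                                          % colon-generated epsilons
indivEps = [0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2];    % individually typed literals
max(abs(loopEps - indivEps))                                     % zero or tiny, on the order of 1e-15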
clear all
%%read in auto-mpg.csv. This is a cleaned version of UCI dataset auto-mpg
data = readtable('auto-mpg.csv','ReadVariableNames',false);
VarNames = {'mpg','cylinders' 'displacement' 'horsepower' 'weight' 'acceleration' ...
    'modelYear' 'origin' 'carName'};
data.Properties.VariableNames = VarNames;
data = [data(:,2:9) data(:,1)];
data.carName=[];

rng('default')
c = cvpartition(data.mpg,'KFold',10);

LossesLoop1 = [];
LossesLoop2 = zeros(1,7);
LossesIndividual = [];

%%first loop: epsilons generated by the range 0.1:0.1:1.2
for epsilon = 0.1:0.1:1.2
    AutoSVM = fitrsvm(data,'mpg',...
        'CVPartition', c,...
        'KernelFunction', 'gaussian', ...
        'PolynomialOrder', [], ...
        'KernelScale', 5.5, ...
        'BoxConstraint', 100, ...
        'Epsilon', epsilon, ...                % these last two arguments follow the question's call
        'Standardize', true);
    LossesLoop1(end+1) = kfoldLoss(AutoSVM);   % store the cross-validated loss (MSE)
end
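The listing above is cut off after the first loop. Based on the description of the three loops, the remaining code would plausibly look like the sketch below; it reuses data, c, and the fitrsvm options from the listing above, but the loop bodies themselves are a reconstruction rather than the original answer's code. It ends with the comparison described in the text:

%%second loop: epsilons generated by the range 0.8:0.1:1.2
k = 0;
for epsilon = 0.8:0.1:1.2
    k = k + 1;
    AutoSVM = fitrsvm(data,'mpg', ...
        'CVPartition', c, ...
        'KernelFunction', 'gaussian', ...
        'PolynomialOrder', [], ...
        'KernelScale', 5.5, ...
        'BoxConstraint', 100, ...
        'Epsilon', epsilon, ...
        'Standardize', true);
    LossesLoop2(k) = kfoldLoss(AutoSVM);
end

%%third loop: the same epsilons typed individually
for epsilon = [0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0 1.1 1.2]
    AutoSVM = fitrsvm(data,'mpg', ...
        'CVPartition', c, ...
        'KernelFunction', 'gaussian', ...
        'PolynomialOrder', [], ...
        'KernelScale', 5.5, ...
        'BoxConstraint', 100, ...
        'Epsilon', epsilon, ...
        'Standardize', true);
    LossesIndividual(end+1) = kfoldLoss(AutoSVM);
end

%%compare the cross-validation losses (identical, per the discussion above)
isequal(LossesLoop1, LossesIndividual)         % full range vs. individual literals
isequal(LossesLoop1(8:12), LossesLoop2(1:k))   % epsilons 0.8:0.1:1.2 sit at positions 8:12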