

Which statistical test for change in a nonlinear regression model?

Hi guys,

I have a rather fundamental question about the analysis of my data, which involves nonlinear fitting, and I hope it is appropriate to post it here. For the sake of brevity I will not provide the whole code and only summarize the essential steps, but of course I can add any details you request.

I have data that represent a response to a stimulus as a function of the distance from the stimulation site. As expected, the response decays with distance, and this decay appears to be best approximated by a sigmoidal fit. So I fitted the BOLTZMANN equation to the data and let MATLAB predict confidence bounds for new observations:
 
 
% Define the model function (BOLTZMANN):
f = @(beta0,conds) beta0(1) + ((beta0(2)-beta0(1)) ./ (1+exp((beta0(3) - conds) ./ beta0(4))));

% Find initialization parameters:
resp50 = (max(resp) + min(resp))/2;
x50 = 5000; %Educated guess

inidat = [0,max(resp),resp50,x50];

% Estimate the model parameters:
[beta,res,jac,covb] = nlinfit(conds',fliplr(resp),f,inidat);

% Evaluate the fit and its prediction bounds on a grid:
xfit = linspace(min(conds),max(conds),100);
[yfit,delta,n,df,varpred] = nlpredci(f,xfit,beta,res,'Covar',covb,'PredOpt','observation'); %Function edited, see below
yfit = fliplr(yfit);
delta = fliplr(delta');
varpred = fliplr(varpred');
Here is the plotted result (unfortunately, embedding the image did not work).
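Since the image did not embed, here is a minimal plotting sketch of what the figure presumably showed (data, fitted curve, and prediction bounds), assuming the variables from the code above (conds, resp, xfit, yfit, delta); the styling is of course just a guess:

% Force row orientation so the vectors line up with xfit regardless of
% how nlpredci returned them:
yf = yfit(:)';  dl = delta(:)';
figure; hold on
plot(conds, resp, 'ko')                      % raw responses (note the fliplr used in the fit)
plot(xfit, yf, 'b-', 'LineWidth', 1.5)       % fitted Boltzmann curve
plot(xfit, yf + dl, 'b--')                   % upper prediction bound
plot(xfit, yf - dl, 'b--')                   % lower prediction bound
xlabel('Distance to stimulation site')
ylabel('Response')
legend('Data', 'Boltzmann fit', 'Prediction bounds', 'Location', 'best')
hold off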
I am now addressing the question of how far from the reference site I can go before the responses can no longer be regarded as maximal. That is, from which distance onward are my (predicted) responses significantly different from the maximum at 0 mm? I did not find a ready-made solution to this question, so, somewhat naively, I developed my own approach, and I would like to ask you whether it is appropriate or whether there is a superior method.

My idea was simply to run multiple pairwise t-tests using the statistics returned by the NLPREDCI function (which I edited so that it also returns the sample size n, the degrees of freedom df, and the predicted variance varpred, so I would not have to do those calculations myself). I then iterate through the predictions until a tested pair is significantly different:
alpha = 0.05;

for i = 2:length(yfit)
    testdiff = yfit(1) - yfit(i);

    %Common MSE is the mean of both estimated variances (see ONLINESTATBOOK, p. 376):
    mse = (varpred(1) + varpred(i))/2;

    %Common SE:
    testse = sqrt(2*mse/n); %Correct?

    %Compute t-value:
    t = testdiff/testse;

    %Common df:
    testdf = (n-1) + (n-1); %Correct?

    %Two-sided p-value from the t CDF (tpdf would return the density, not a p-value):
    p = 2*(1 - tcdf(abs(t),testdf));

    if p < (alpha/(i-1)) %With BONFERRONI correction (correct?)
        m = i;
        break;
    end
end
As you can see, I also tried to add a BONFERRONI correction of the alpha level to account for these multiple comparisons. I am aware that the t-test I used may be inappropriate for correlated pairs (which is evidently the case here).
As a rule of thumb, I would expect the cut-off x-value to lie somewhere around the point where the confidence intervals of the fit no longer intersect. Surprisingly, I obtain a much earlier cut-off, as you can see in the picture above.
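For comparison, here is a minimal sketch of that rule-of-thumb check, assuming yfit, delta, and xfit are aligned as in the code above, with the reference (0 mm) at index 1; it simply reports the first prediction whose interval no longer overlaps the reference interval:

% Rule-of-thumb check: first prediction whose interval no longer overlaps
% the interval at the reference point (index 1, i.e. 0 mm).
yf = yfit(:)';  dl = delta(:)';
refLow = yf(1) - dl(1);                        % lower bound at the reference
idx = find(yf(2:end) + dl(2:end) < refLow, 1); % first non-overlapping interval
if ~isempty(idx)
    fprintf('Intervals separate from about x = %g\n', xfit(idx + 1));
end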


I suggest doing pairwise two-sample t-tests (ttest2) between your reference (at 0 mm) and the data taken at various stimulation distances, for instance 0 mm vs 1 mm, 0 mm vs 2 mm, and so on. (My guess is that you would see significance at 10 to 15 mm, depending on whether your error bars represent standard errors or 95% confidence limits.) This is relatively common in the literature I am familiar with, and it generally does not require a Bonferroni correction, because you are not also comparing 2 mm with 3 mm and so on. (I also suggest consulting the statistics guidance of the journal you plan to submit your data to.)
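A minimal sketch of that comparison, assuming (purely for illustration) that the raw repeated responses are stored in a matrix respByDist with one column per stimulation distance, the 0 mm reference in column 1, and the distances listed in distances_mm; these names are hypothetical and not from the original post:

% Hypothetical layout: respByDist is an nReps-by-nDist matrix of raw
% responses, one column per distance; distances_mm lists the distances.
nDist = size(respByDist, 2);
pvals = nan(1, nDist);
for k = 2:nDist
    % Two-sample t-test: reference column (0 mm) vs. responses at distance k
    [~, pvals(k)] = ttest2(respByDist(:,1), respByDist(:,k));
end
firstSig = find(pvals < 0.05, 1, 'first');
if ~isempty(firstSig)
    fprintf('First significant difference at %g mm\n', distances_mm(firstSig));
end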
 
The Boltzmann equation is interesting, but you might consider using a model more appropriate to the system you are measuring (unless you're doing the sort of physics the Boltzmann equation describes). I suspect a reviewer would want to know why you chose it and how it describes your experiment. You have not described the system you are investigating, but fitting a regression model may be redundant if you are only interested in the differences between the results of stimulation at various distances from the reference site.
