
Noise samples of a Gaussian mixture distribution

I'm new to MATLAB. I want to generate noise samples of a Gaussian mixture with PDF = sqrt((u.^3)./pi).*exp(-u.*(x.^2)) + sqrt((1-u)^3/pi).*exp(-(1-u).*x.^2).

 
The noise sample should have size (1,200). Please help me out.


Ok, so you have formulated what seems to me to be a rather strange mixture model.
 
You have two modes whose variances are related to each other, and the mean of both terms is zero.
 
So, if the PDF is:
 
PDF = sqrt((u.^3)./pi).*exp(-u.*(x.^2)) + sqrt(((1-u).^3)./pi).*exp(-(1-u).*(x.^2))

So we have two terms there.

syms u positive
syms x real

Look at the first term:

pdf1 = sqrt((u.^3)./pi).*exp(-u.*(x.^2));
int(pdf1,x,[-inf,inf])
ans =
u

So, it integrates to u. The second term is similar:

pdf2 = sqrt(((1-u).^3)./pi).*exp(-(1-u).*(x.^2));
int(pdf2,x,[-inf,inf])
ans =
piecewise([u == 1, 0], [1 < u, Inf*1i], [u < 1, 1 - u])

For u <= 1, it integrates to 1 - u.
 
So the integral of the sum is indeed 1, and therefore it is a valid PDF.
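As a quick check (a sketch, continuing the symbolic session above, and restricting u to the open unit interval), we can integrate the full expression:

% restrict u to (0,1), so both terms are proper scaled Gaussians
assume(u > 0 & u < 1)
simplify(int(pdf1 + pdf2, x, [-inf, inf]))   % should return 1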
 
Essentially, you have two Gaussian modes with variances 1/(2*u) and 1/(2*(1-u)) respectively, and the mixture weights u and 1-u are proportional to the inverses of those variances. Effectively, when u is small (or very near 1), you have a mixture distribution that only rarely generates large outliers: the wide mode has a large variance, but it is sampled with only a small probability. When u is exactly 1/2, this reduces to a standard normal distribution with unit variance.
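If you want to verify those variances symbolically (a sketch, again continuing the same session, with the 0 < u < 1 assumption still in effect), normalize each term so it integrates to 1 and take its second moment:

% conditional variance of each mode (each term divided by its weight)
simplify(int(x^2*pdf1/u, x, [-inf, inf]))       % should return 1/(2*u)
simplify(int(x^2*pdf2/(1-u), x, [-inf, inf]))   % should return 1/(2*(1-u))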
 
I'm not sure why you chose it that way, but I suppose that is not really important. Normally, one would choose a mixture parameter that is independent of the variances.
 
The solution is simple.
 
1. Pick some value for u. It determines the variance of each Gaussian mode, as well as the mixture coefficient.
2. For a sample size of N, pick N uniformly distributed random numbers. These determine which mode each element is sampled from. Then just use a classical tool like randn to do the sampling.
 
Take a sample size of 1e7 and some arbitrary value for u, say 1/100. I've picked a tiny value for u to make the result clear, and a large sample so the histogram is easy to see.
 
 
N = 10000000;
u = 0.01;

% random mixture selection
% Here s will be 1 if the element is sampled from first pdf.
%      s will be 0 if the element is sampled from second pdf.
s = rand(1,N) < u;

% sample using randn, with a unit variance initially
x = randn(1,N);

% now scale each element based on the desired variance
% (the first mode has variance 1/(2*u), the second 1/(2*(1-u)))
x(s) = x(s)/sqrt(2*u);
x(~s) = x(~s)/sqrt(2*(1-u));

hist(x,1000)

So, for this fairly tiny value of u, you can see that the histogram looks much like a traditional Gaussian, but with very wide tails.
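If you want a quick numerical sanity check (a sketch, reusing the x, s, and u from above), the empirical variance within each group should be close to the corresponding theoretical value, and the overall variance of the mixture is u*(1/(2*u)) + (1-u)*(1/(2*(1-u))) = 1 regardless of u:

% compare empirical and theoretical variances for each mode
[var(x(s)),  1/(2*u)]
[var(x(~s)), 1/(2*(1-u))]
var(x)   % should be close to 1 for any u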
I'll next plot the separate pieces of the PDF, split into two parts. Here I've chosen u as 0.25.
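Here is one way to produce that plot (a sketch; the two expressions below simply restate the two terms of the PDF numerically):

u = 0.25;
xv = linspace(-10,10,1001);
p1 = sqrt(u^3/pi)*exp(-u*xv.^2);            % first component, weight u
p2 = sqrt((1-u)^3/pi)*exp(-(1-u)*xv.^2);    % second component, weight 1-u
plot(xv,p1,'r',xv,p2,'b',xv,p1+p2,'k')
legend('first term','second term','full PDF')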
