
Posts

Showing posts from June, 2021

Stretch the dynamic range of the given 8-bit grayscale image using MATL...

Matrix of possible combinations

  I have a vector A of 30 elements. Each of those elements could be one of three values (1, 2 or 3). There are therefore 3^30 possible vectors A. How could I create a matrix [30 x 3^30] where each column in that matrix is a unique A?

ANSWER

If you don't have the Deep Learning Toolbox:

   [c{1:30}] = ndgrid(1:3);
   allAs = cat(31, c{:});
   allAs = reshape(allAs, [], 30).';

Otherwise, you can use combvec:

   c(1:30) = {1:3};
   allAs = combvec(c{:});
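A quick way to sanity-check the ndgrid pattern is to run it at a smaller size, since 3^30 columns will not fit in memory; this is a minimal sketch, scaled down to an assumed length of n = 4:

   % Minimal sketch: same technique as above, scaled down to n = 4 elements
   n = 4;                             % the question uses n = 30
   [c{1:n}] = ndgrid(1:3);            % one 3x3x...x3 grid per element position
   allAs = cat(n+1, c{:});            % stack the grids along a new trailing dimension
   allAs = reshape(allAs, [], n).';   % each column is one unique vector A
   size(allAs)                        % returns [4 81], i.e. [n 3^n]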

How to control which GPUs and CPUs get which tasks during multiple calls to trainNetwork?

  I am working on a machine with a number of CPU cores (40) and a number of GPUs (4). I need to train a large number of shallow LSTM neural networks (~500,000), and would like to use my compute resources as efficiently as possible. Here are the options I've come up with: 1) parpool('local') gives 40 workers max, which is the number of CPU cores available. Apparently parpool('local') does not provide access to the GPUs - is this correct? I can then use spmd to launch separate instances of trainNetwork across individual CPUs on my machine, and this runs 40 such instances at a time. I have three questions about this: First, is there a way to use both the GPUs and CPUs as separate laboratories (i.e., with different labindex values) in my spmd loop? Why do I not have a total of 44 available workers from parpool? Second, is there a way to assign more than one CPU to a particular lab, for example, could I divide my 40 cores up into 8 groups of 5 and depl
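One common pattern for pinning spmd workers to specific GPUs is sketched below; the pool size, device mapping, and solver choice are illustrative assumptions, not a definitive setup.

   % Sketch: give the first gpuDeviceCount workers one GPU each; the rest use CPU
   pool = parpool('local');                 % e.g. 40 workers on a 40-core machine
   spmd
       if labindex <= gpuDeviceCount
           gpuDevice(labindex);             % pin this worker to GPU #labindex
           env = 'gpu';
       else
           env = 'cpu';
       end
       opts = trainingOptions('adam', 'ExecutionEnvironment', env, 'Verbose', false);
       % net = trainNetwork(XThisWorker, YThisWorker, layers, opts);  % hypothetical per-worker data
   end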

Extracting Corner Features with PCA and feeding it to neural network

  I am doing a project on vehicle type classification with neural networks (the classification basis is vehicle type: sedan, pickup, hatchback, etc.). I am doing image processing for the first time. I have detected about 40 corners using Harris corner detection, and thus I got a matrix A [40x2]. I am using only this feature for classification. Now I want to know how I can use PCA to extract features from it. I know what PCA is and what pca(A) or princomp(A) in MATLAB will return, but I don't get how to use the output of the pca function as a feature matrix.
1. Does a feature matrix need to be a 1-D array?
2. Should I use the principal components array, which is the 2nd matrix returned by the pca function, as a feature matrix (it is a 2-D matrix)?
3. How can I train a neural network for 3 classes (hatchback, sedan and pickup)?
4. Lastly, suppose I have N images for each class to train; do I need to train on each image individually, or do I have to create a feature matrix that has an extra dimension = N?

ANSWER
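One possible workflow is sketched below under explicit assumptions: corners{i} holds the 40x2 corner matrix of image i and labels(i) is 1, 2 or 3; the variable names, the number of retained components, and the use of patternnet are illustrative. The idea is to flatten each corner matrix into one feature column, stack the columns over all N images, optionally project onto principal components, and train on one-hot targets.

   % Sketch: corners{i} = 40x2 Harris corner matrix of image i (assumption),
   %         labels(i)  = class index 1/2/3 for hatchback/sedan/pickup (assumption)
   N = numel(corners);
   X = zeros(80, N);                        % one 80-element feature vector per image
   for i = 1:N
       X(:, i) = reshape(corners{i}, [], 1);
   end
   [coeff, score] = pca(X.');               % optional: principal-component projection
   Xpca = score(:, 1:10).';                 % keep the first 10 components (illustrative)
   T = full(ind2vec(labels(:).'));          % 3xN one-hot target matrix
   net = patternnet(10);                    % hidden layer size is illustrative
   net = train(net, Xpca, T);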

Crossentropy loss function - What is a good performance goal?

  Good afternoon. Looking around Answers and exploring Google Groups, I found this method by Dr. Greg Heath to define a valid training goal for the MSE performance function:

   [I, N] = size(x);
   [O, N] = size(t);
   MSE00a = mean(var(t, 0, 2));
   Ntrn   = floor(0.7*N);
   Hub    = floor((Ntrn - O)/(I + 1 + O));
   % The last line below needs Ndof and Ntrneq; the standard definitions are
   % (with H the chosen number of hidden nodes, H <= Hub):
   H      = 10;                     % example hidden layer size
   Ntrneq = Ntrn*O;                 % number of training equations
   Nw     = (I + 1)*H + (H + 1)*O;  % number of weights to estimate
   Ndof   = Ntrneq - Nw;            % degrees of freedom
   MSEgoal = 0.01*(Ndof/Ntrneq)*MSE00a;

I was wondering if there is a similar method to set a crossentropy reference goal for neural net performance, since I want to experiment with different types of loss functions in order to get the best results.

ANSWER

These equations are not necessarily precise. For example: data = design + test d
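One analogous reference for crossentropy, offered only as a sketch of the same idea (an assumption, not a quoted recipe): just as MSE00a is the MSE of the naive constant-output model, a crossentropy reference can be taken from the naive classifier that always outputs the class priors, and the goal set as a small fraction of it.

   % Sketch (assumption): crossentropy of the naive "always predict the class priors"
   % model, analogous to MSE00a; t is the OxN one-hot target matrix.
   [O, N] = size(t);
   priors = mean(t, 2);                              % Ox1 class frequencies
   CE00   = -mean(sum(t .* log(priors + eps), 1));   % reference crossentropy
   CEgoal = 0.01*CE00;                               % e.g. aim for 1% of the reference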

How to predict from a trained neural network?

  Hello, I am trying to use a neural network to make some predictions based on my input and target data. I have read all the related tutorials in MATLAB and also looked at the MATLAB examples. I kind of learned how to develop a network, but I don't know how to use this trained network to make a prediction. Is there any code that I'm missing? Does anyone have a sample script that can be shared here? This is what I have, for example:

   x = [1 2 3; 4 5 3];
   t = [0.5 0.6 0.7];
   net = feedforwardnet(10);
   net = train(net, x, t);
   y = net(x);                  % network outputs on the training inputs
   perf = perform(net, t, y);

How can I predict the output for a new set of x (xprime = [4 2 3; 4 7 8]) based on this trained network? Thanks.

ANSWER
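A minimal sketch of the standard pattern, assuming the network was trained as in the question: a trained shallow network object is callable, so new inputs are predicted by passing them to the net, or equivalently to sim.

   % Sketch: predicting with the already-trained network from the question
   xprime = [4 2 3; 4 7 8];       % new inputs, one column per sample
   yprime = net(xprime);          % predicted outputs for the new inputs
   % yprime = sim(net, xprime);   % equivalent call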

NARXNET closed-loop vs open-loop

  I have three questions regarding the difference between a closed-loop and an open-loop narxnet, and its behavior. First, a little about the problem I'm trying to solve. I have a 2xN matrix of observations (X), from which I'm trying to predict an output Y, 1xN. Now, a narxnet takes as input both X and Y. The MATLAB documentation says that an open-loop narxnet finds a function 'f' where y(t) = f( y(t-1), y(t-2), x(t-1), x(t-2) ), for a delay of 2. However, the results that I get are much more accurate than I expect them to be. This suggests that the narxnet uses the actual y(t) as an input as well. When I convert the open-loop network to a closed loop and retrain it, I get much more reasonable results - not good, but reasonable. 1.a) What is the actual input to an open-loop narxnet, and to the closed-loop one? When trying to predict y(t), are the inputs [ y(t), y(t-1), y(t-2), x(t-1), x(t-2) ] or [ y(t-1), y(t-2), x(t-1), x(t-2) ], for both? 1.b) For my problem I need
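For reference, the standard open-loop/closed-loop NARX workflow looks roughly like the sketch below (the delay orders, hidden size, and variable names are illustrative assumptions). In open loop (series-parallel) the measured past outputs y(t-1), y(t-2) are supplied as inputs; in closed loop (parallel) the network's own past predictions are fed back instead, which is why closed-loop results usually look less optimistic.

   % Sketch: open-loop training, then closed-loop simulation
   Xc = con2seq(X);  Yc = con2seq(Y);          % X is 2xN, Y is 1xN (assumption)
   net = narxnet(1:2, 1:2, 10);                % input delays 1:2, feedback delays 1:2
   [xo, xi, ai, to] = preparets(net, Xc, {}, Yc);
   net = train(net, xo, to, xi, ai);           % open loop: true y(t-1), y(t-2) as inputs
   netc = closeloop(net);                      % closed loop: feedback from predictions
   [xc, xic, aic, tc] = preparets(netc, Xc, {}, Yc);
   yc = netc(xc, xic, aic);                    % multi-step simulation without true y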

Why does not Matlab use the full capacity of my computer while training a neural network?

  My goal is to train a neural network to classify objects from pictures taken by my webcam. I use transfer learning with AlexNet and I have a labeled training data set with 25,000 images. My training script works correctly, but the iterations during training progress very slowly. I have the Parallel Computing Toolbox installed and the training runs on a single GPU. But when looking at the Task Manager, MATLAB only uses 13% of the CPU and just 2% of the GPU. Why doesn't MATLAB use more resources to speed up the training process? The operating system is Windows 10 and I have the newest 64-bit version of MATLAB installed.

ANSWER

You sure
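Low GPU utilization with trainNetwork is very often an input-pipeline bottleneck rather than a compute limit. The sketch below shows one common mitigation; the folder name and option values are illustrative assumptions, while the option names themselves are standard trainingOptions parameters. The idea: feed the images through a datastore, raise the mini-batch size, and dispatch image reading/augmentation to background workers.

   % Sketch: larger mini-batches + background dispatch to keep the GPU busy
   imds  = imageDatastore('trainingImages', 'IncludeSubfolders', true, ...
                          'LabelSource', 'foldernames');        % folder name is an assumption
   augds = augmentedImageDatastore([227 227 3], imds);           % AlexNet input size
   opts  = trainingOptions('sgdm', ...
       'ExecutionEnvironment', 'gpu', ...
       'MiniBatchSize', 128, ...            % increase until GPU memory is nearly full
       'DispatchInBackground', true, ...    % prefetch/augment images on parallel workers
       'Verbose', false);
   % net = trainNetwork(augds, layers, opts);   % 'layers' = the transfer-learned AlexNet layers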

How can I predict multi-step ahead using narnet?

  I want to predict future prices; I have used only the daily historical prices as input. I can predict only one step ahead using this code:

   clear all;
   load('prices.mat');
   set_size = 1413;
   targetSeries = prices(1:set_size);
   targetSeries = targetSeries';
   targetSeries_train = targetSeries(1:floor(set_size*(4/5)));         % floor() so the indices are integers
   targetSeries_test  = targetSeries(floor(set_size*(4/5)):end);
   targetSeries_train = num2cell(targetSeries_train);
   targetSeries_test  = num2cell(targetSeries_test);
   feedbackDelays = 1:4;
   hiddenLayerSize = 10;
   net = narnet(feedbackDelays, hiddenLayerSize);
   net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
   [inputs, inputStates, layerStates, targets] = preparets(net, {}, {}, targetSeries_train);
   [inputs_test, inputStates_test, layerStates_test, targets_test] = preparets(net, {}, {}, targetSeries_test);
   net.trainFcn = 'trainrp';        % Resilient backpropagation
   net.performFcn = 'mse';          % Mean squared error
   net.plotFcns =
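For multi-step-ahead prediction, one documented pattern is to run the trained open-loop network over the known data, convert it to closed loop while carrying over the final delay states, and then simulate it forward with an empty input cell array. The sketch below reuses the variable names from the code above; the 20-step horizon is an illustrative assumption.

   % Sketch: multi-step-ahead forecasting with the trained narnet
   [y1, xf, af] = net(inputs, inputStates, layerStates);   % run over known data, keep final states
   [netc, xic, aic] = closeloop(net, xf, af);              % closed loop, seeded with those states
   yFuture = netc(cell(0, 20), xic, aic);                  % forecast the next 20 steps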

How to create Target data? Neural Network Tool

  Hello, I want to build a feedforward backpropagation neural network in order to solve a classification problem. Let's say I want to import a data set from the UCI Machine Learning Repository (this one), which is 4x306. How do I create a Target data set in order to train it?

ANSWER

The target for 2-class classification has dimensions [1 N] (N = 306) with values {0,1}. However, if the ratio N1/N0 is not in the interval [0.5, 2], then randomly add duplicates of the smaller class until the class sizes are equal. Occasionally, it helps to add a small noise component to the duplicates. Use PATTERNNET with a LOGSIG activation function an
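A sketch of how such a target could be built, under explicit assumptions: the 4x306 matrix is called data, its 4th row is the class label coded 1 or 2, and the hidden layer size is illustrative.

   % Sketch: build a {0,1} target row and balance the classes by duplication
   x = data(1:3, :);                    % 3x306 inputs (assumption: rows 1:3 are features)
   t = double(data(4, :) == 2);         % 1x306 target with values {0,1}
   n1 = sum(t == 1);  n0 = sum(t == 0);
   if n1/n0 < 0.5 || n1/n0 > 2
       small = find(t == (n1 < n0));                    % indices of the smaller class
       dup   = small(randi(numel(small), 1, abs(n1 - n0)));
       x = [x, x(:, dup)];  t = [t, t(dup)];
   end
   net = patternnet(10);                % hidden layer size is illustrative
   net.layers{2}.transferFcn = 'logsig';
   net = train(net, x, t);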

How to compute the derivative of the neural network?

  Hi, once you have trained a neural network, is it possible to obtain a derivative of it? I have a neural network "net" in a structure. I would like to know if there is a routine that will provide the derivatives of net (derivatives of its outputs with respect to its inputs). It is probably not difficult; for a feedforward model there are just matrix multiplications and sigmoid functions, but it would be nice to have a routine that will do that directly on "net".

ANSWER

Differentiate to obtain dyi/dxn:

   y = b2 + LW*h
   h = tanh(b1 + IW*x)

or, with tensor notation (i.e., summation over repeated indices),

   yi = b2i + LWij*hj
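The chain rule then gives dyi/dxn = LWij*(1 - hj^2)*IWjn for a tanh hidden layer and linear output. A minimal sketch of that Jacobian computation for a trained one-hidden-layer feedforwardnet is shown below; the assumption that the default mapminmax input/output processing has been removed is mine, and without it the mapminmax gains would have to be chained into the product.

   % Sketch: Jacobian dy/dx of a one-hidden-layer net (tanh hidden, linear output)
   % at a single input column x0. Assumes the default mapminmax processing was
   % removed, e.g. net.inputs{1}.processFcns = {}; net.outputs{2}.processFcns = {};
   IW = net.IW{1,1};   b1 = net.b{1};     % hidden-layer weights and bias
   LW = net.LW{2,1};   b2 = net.b{2};     % output-layer weights and bias
   h  = tanh(b1 + IW*x0);                 % hidden activations at x0
   J  = LW * diag(1 - h.^2) * IW;         % J(i,n) = dyi/dxn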