python - Regression with PyBrain
I'm trying to build a surrogate model from 100 samples. I have 2 inputs and 2 responses, each normalised by the magnitude of its respective maximum.

Normalisation:
    for i in range(0, len(array(self.samples)[0])):
        self.max_samples.append(abs(self.samples[:, i].max()))
        self.samples[:, i] /= self.max_samples[-1]
        self.minmax_samples.append([self.samples[:, i].min(), self.samples[:, i].max()])

    for i in range(0, len(array(self.targets)[0])):
        self.max_targets.append(abs(self.targets[:, i].max()))
        self.targets[:, i] /= self.max_targets[-1]
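For reference, the same column-wise scaling can be written in vectorised NumPy (a sketch with dummy data standing in for self.samples and self.targets, not the original code):

    import numpy as np

    # dummy 100-point, 2-column arrays standing in for self.samples / self.targets
    samples = np.random.rand(100, 2)
    targets = np.random.rand(100, 2)

    # divide each column by the magnitude of its maximum, as in the loops above
    max_samples = np.abs(samples.max(axis=0))
    samples /= max_samples
    minmax_samples = list(zip(samples.min(axis=0), samples.max(axis=0)))

    max_targets = np.abs(targets.max(axis=0))
    targets /= max_targets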
The network is built as follows:
    from pybrain.structure import FeedForwardNetwork, TanhLayer, LinearLayer, FullConnection
    from pybrain.datasets import SupervisedDataSet
    from pybrain.supervised.trainers import BackpropTrainer

    self.ann = FeedForwardNetwork(bias=True)
    inLayer = TanhLayer(len(array(self.samples[0])[-1]))
    hiddenLayer = TanhLayer(17)
    outLayer = LinearLayer(len(array(self.targets[0])[-1]))
    self.ann.addInputModule(inLayer)
    self.ann.addModule(hiddenLayer)
    self.ann.addOutputModule(outLayer)
    in_to_hidden = FullConnection(inLayer, hiddenLayer)
    hidden_to_out = FullConnection(hiddenLayer, outLayer)
    self.ann.addConnection(in_to_hidden)
    self.ann.addConnection(hidden_to_out)
    self.ann.sortModules()

    self.dataset = SupervisedDataSet(len(array(self.samples[0])[-1]), len(array(self.targets[0])[-1]))

    # adding training points
    for i, j in zip(self.samples, self.targets):
        self.dataset.appendLinked(i, j)

    trainer = BackpropTrainer(self.ann, dataset=self.dataset, momentum=0.99,
                              learningrate=0.1, verbose=True, weightdecay=0.1)
    trainer.trainOnDataset(self.dataset, 200)
The total error reported by the trainer is of the order of 1e-2. I presume it can do better, since the responses generated by the neural net are not at all close to the expected values.
Am I using too few data points? Can artificial neural networks do the job when the input vector has over 20 dimensions and there are multiple responses (> 5), while the number of sample points that can be generated is under 120?
You have too few samples for such a complex network.
Your network has 2*17 = 34 connections from the input to the hidden layer, 17*2 = 34 connections from the hidden layer to the output, and 17+2 = 19 connections from the bias units. That means you have 87 parameters to tune.
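The count can be checked with a few lines of arithmetic (a sketch assuming 2 inputs, 17 hidden tanh units, 2 linear outputs, and a bias unit feeding the hidden and output layers, as in the question):

    n_in, n_hidden, n_out = 2, 17, 2

    in_to_hidden  = n_in * n_hidden    # 2 * 17 = 34
    hidden_to_out = n_hidden * n_out   # 17 * 2 = 34
    bias          = n_hidden + n_out   # 17 + 2  = 19

    print(in_to_hidden + hidden_to_out + bias)  # 87 tunable parameters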
If you train on 70% of the sample data and use the remaining 30% for cross-validation and testing, that leaves at most 84 "known" values (70% of 120 samples). When the number of known values is similar to (or lower than) the number of parameters, the neural network can overfit: it produces an almost perfect match on the training data (very low training error) but is useless on any other data.
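One way to see this in practice is to hold out part of the data with SupervisedDataSet.splitWithProportion and watch the training and validation errors diverge; a rough sketch, assuming self.ann and self.dataset are built as in the question:

    from pybrain.supervised.trainers import BackpropTrainer

    # 70/30 split of the dataset from the question
    train_ds, valid_ds = self.dataset.splitWithProportion(0.70)

    trainer = BackpropTrainer(self.ann, dataset=train_ds, momentum=0.99,
                              learningrate=0.1, weightdecay=0.1, verbose=True)
    for epoch in range(200):
        train_err = trainer.train()              # error on the 70% training split
        valid_err = trainer.testOnData(valid_ds) # error on the 30% held-out split
        # a training error far below the validation error is the signature of overfitting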
You need a less complex network or more samples.
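As an illustration, a much smaller network can be built with PyBrain's buildNetwork shortcut; the hidden-layer size of 4 here is an arbitrary choice to be tuned, not a recommendation from the answer:

    from pybrain.tools.shortcuts import buildNetwork
    from pybrain.structure import TanhLayer, LinearLayer

    # 2 inputs -> 4 tanh hidden units -> 2 linear outputs, with bias:
    # 2*4 + 4*2 + (4 + 2) = 22 parameters instead of 87
    net = buildNetwork(2, 4, 2, hiddenclass=TanhLayer, outclass=LinearLayer, bias=True)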