Error in nls: number of iterations exceeded maximum (R)
I am having an issue when trying to fit my data.
This is the data:
x = c(1, 1.071519305, 1.148153621, 1.230268771, 1.318256739, 1.412537545, 1.513561248, 1.621810097, 1.737800829, 1.862087137, 1.995262315, 2.13796209, 2.290867653, 2.454708916, 2.630267992, 2.818382931, 3.01995172, 3.235936569, 3.467368505, 3.715352291, 3.981071706, 4.265795188, 4.570881896, 4.897788194, 5.248074602, 5.623413252, 6.025595861, 6.45654229, 6.918309709, 7.413102413, 7.943282347, 8.511380382, 9.120108394, 9.77237221, 10.47128548, 11.22018454, 12.02264435, 12.88249552, 13.80384265, 14.79108388, 15.84893192, 16.98243652, 18.19700859, 19.498446, 20.89296131, 22.38721139, 23.98832919, 25.70395783, 27.54228703, 29.51209227, 31.6227766, 33.88441561, 36.30780548, 38.9045145, 41.68693835, 44.66835922, 47.86300923, 51.2861384, 54.95408739, 58.88436554, 63.09573445, 67.60829754, 72.44359601, 77.62471166, 83.17637711, 89.12509381, 95.4992586, 102.3292992, 109.6478196, 117.4897555, 125.8925412, 134.8962883, 144.5439771, 154.8816619, 165.9586907, 177.827941, 190.5460718, 204.1737945, 218.7761624, 234.4228815, 251.1886432, 269.1534804, 288.4031503, 309.0295433, 331.1311215, 354.8133892, 380.1893963, 407.3802778, 436.5158322, 467.7351413, 501.1872336, 537.0317964, 575.4399373, 616.5950019, 660.693448, 707.9457844, 758.577575, 812.8305162, 870.96359, 933.2543008, 1000)
y = c(0, 39.42531967, 81.67031097, 126.9366341, 179.8504534, 237.9146471, 300.9332733, 373.9994125, 452.2911911, 544.5717812, 644.4305916,757.5670443, 880.1954813, 1015.045167, 1160.563695, 1316.477197, 1483.424418, 1668.380672, 1869.099593, 2083.298305, 2308.72922, 2552.533248, 2806.782363, 3074.749213, 3354.913032, 3653.567198, 3961.982443, 4285.416754, 4625.505185, 4974.839962, 5329.418374, 5696.722268, 6069.748689, 6447.903256, 6826.334958, 7218.057591, 7607.64304, 8005.992733, 8403.318251, 8798.355661, 9201.456877, 9613.1821, 10022.47749, 10430.83497, 10841.5067, 11256.68048, 11675.94707, 12085.72448, 12500.17168, 12905.54582, 13311.92593, 13707.0245, 14089.76524, 14459.48122, 14813.21421, 15145.30591, 15459.10593, 15752.7922, 16023.09928, 16269.888, 16493.69043, 16693.68774, 16869.79643, 17021.69506, 17154.34004, 17264.76423, 17355.82129, 17427.48725, 17486.7706, 17530.49824, 17563.61638, 17588.39795, 17605.32753, 17617.36935, 17624.3971, 17629.48694, 17632.2512, 17633.91595, 17634.67971, 17635.11862, 17635.35591, 17635.4941, 17635.61014, 17635.64404, 17635.66099, 17635.67794, 17635.67794, 17635.67794, 17635.67794, 17635.67794, 17635.67794, 17635.67794, 17635.67794, 17635.67794, 17635.67794, 17635.67794, 17635.67794, 17635.67794, 17635.67794, 17635.67794, 17635.67794)
EDIT: I increased the maximum number of iterations to 1000.
This is the code I used, with an equation whose coefficients have physical meaning when interpreting the results:
# tanh function definition:
ftanh = function(x, x0, a, b, k) {
  (1/2) * a * (1 + tanh(k*(x - x0))) + b
}
# Fitting code using nonlinear least square:
nlc <- nls.control(maxiter = 1000)
fitmodel <- nls(y ~ ftanh(x, x0, a, b, k), control=nlc, start=list(x0 = 7, a = -29500, b = 17500, k = -0.032))
# Plotting fitted cumulative function
options(scipen = 10)
plot(x, predict(fitmodel), type="l", log = "x", col="blue", xlab = "x", ylab = "y")
points(x, y, col = "red")
legend("topleft", inset = .05, legend = c("exp","fit"),
       lty = c(NA,1), col = c("red", "blue"), pch = c(1,NA), lwd = 1, bty = "n")
summary(fitmodel)
This is the best fit I got using Excel, just by guessing the initial values (orange line):
I assume my initial values are not the best, but I am stuck in a constant loop and it seems there is no way out. I am fairly sure the b value is okay, and that x0 and a are close to my initial values, but k is what actually determines the steepness. Any ideas?
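Before blaming the optimizer, it is worth reading the starting values off the data itself: in this model y tends to b as x decreases, tends to a + b as x increases, and the slope at x0 is a*k/2. A minimal sketch of data-driven guesses (an illustrative aside, not from the original post):
# Data-driven starting values for the tanh model (sketch):
b0 <- min(y)                           # lower plateau: y -> b as x -> -Inf
a0 <- max(y) - b0                      # total rise:    y -> a + b as x -> +Inf
i  <- which.min(abs(y - (b0 + a0/2)))  # index nearest the half-height
k0 <- 2 * (y[i+1] - y[i-1]) / ((x[i+1] - x[i-1]) * a0)  # slope at x0 is a*k/2
fitmodel <- nls(y ~ ftanh(x, x0, a, b, k), control = nlc,
                start = list(x0 = x[i], a = a0, b = b0, k = k0))
Even with such starts, the fit can fail to converge if the model shape itself does not match the data, which is exactly what the answer below argues.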
The problem is that your fit function does not describe your data. In the logarithmic plot it looks OK, but it is actually more like a simple tanh. As the bending differs from a plain tanh, an even better model is of the type a*tanh(b*(x-c)**d)**(1/d).
Not using R, but in simple Python it looks like:
import matplotlib
matplotlib.use('Qt4Agg')
from matplotlib import pyplot as plt
import numpy as np
from scipy.optimize import curve_fit

# x, y: the data from the question, as numpy arrays

def func1(x, a, b, c):
    return a*np.tanh(b*(x-c))

def func2(x, a, b, c, d):
    return a*np.tanh(b*np.fabs(x-c)**d)**(1/d)

initialGuess1 = [y[-1], 1, 0]
initialGuess2 = [y[-1], 1, 0, 1]

popt1, pcov1 = curve_fit(func1, x, y, initialGuess1)
popt2, pcov2 = curve_fit(func2, x, y, initialGuess2)

print(popt1)
print(popt2)

fittedData1 = [func1(s, *popt1) for s in x]
fittedData2 = [func2(s, *popt2) for s in x]

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.plot(x, y)
ax.plot(x, fittedData1)
ax.plot(x, fittedData2)
ax.set_xscale('log')
plt.show()
>> [ 1.74207552e+04 3.53554258e-02 2.20477585e-01]
>> [ 1.76893061e+04 1.90416542e-01 1.19819247e+00 5.59529032e-01]
Data in blue, the tanh fit in orange, and the modified fit function in green.
Initial guesses are straightforward. (You could play around a bit to get rid of the kink near zero.)
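Since the question is in R, the same modified model can also be sketched with nls. The following is an illustrative port, not part of the original answer: the starting values simply mirror the Python guesses, and convergence is not guaranteed (algorithm = "port" with bounds on d is a possible fallback if the default Gauss-Newton steps wander):
# Modified tanh model ported to R (illustrative sketch):
fmod <- function(x, a, b, c, d) a * tanh(b * abs(x - c)^d)^(1/d)
fit2 <- nls(y ~ fmod(x, a, b, c, d),
            start = list(a = max(y), b = 1, c = 0, d = 1),  # mirrors the Python guesses
            control = nls.control(maxiter = 1000))
summary(fit2)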
Related
Julia: I can't make a streamplot with my own function of data
I'm doing my dissertation and I need to make a streamplot with the velocity matrices. I have solved the Navier-Stokes equations and I have one matrix of u-velocity (19x67) and another matrix of v-velocity (19x67). To obtain a continuous function I have done a bilinear interpolation, but I have a problem with the plotting. I don't know if I'm explaining myself very well, but I'll leave you the code.
#BILINEAR INTERPOLATION#
X=2
Y=0.67
x_pos=findlast(x->x<X, x)
y_pos=findlast(x->x<Y, y)
x1=((x_pos-1))*Dx
x2=(x_pos)*Dx
y1=((y_pos-1)-0.5)*Dy
y2=(y_pos-0.5)*Dy
u1=u[y_pos-1,x_pos-1]
u2=u[y_pos-1,x_pos]
u3=u[y_pos,x_pos-1]
u4=u[y_pos,x_pos]
u_int(Y,X)=(1/(Dx*Dy))*((x2.-X).*(y2.-Y).*u1+(X.-x1).*(y2.-Y).*u2+(x2.-X).*(Y.-y1).*u3+(X.-x1).*(Y.-y1).*u4)
xx1=((x_pos-1)-0.5)*Dx
xx2=(x_pos-0.5)*Dx
yy1=((y_pos-1))*Dy
yy2=(y_pos)*Dy
v1=v[y_pos-1,x_pos-1]
v2=v[y_pos-1,x_pos]
v3=v[y_pos,x_pos-1]
v4=v[y_pos,x_pos]
v_int(Y,X)=(1/(Dx*Dy))*((x2-X)*(y2-Y)*v1+(X-x1)*(y2-Y)*v2+(x2-X)*(Y-y1)*v3+(X-x1)*(Y-y1)*v4)
#PLOT#
function stream(Y,X)
    u_c=u_int(Y,X)
    v_c=v_int(Y,X)
    return u_c,v_c
end
using CairoMakie
let
    fig = Figure(resolution = (600, 400))
    ax = Axis(fig[1, 1], xlabel = "x", ylabel = "y", backgroundcolor = :black)
    streamplot!(ax, stream, -2 .. 4, -2 .. 2, colormap = Reverse(:plasma),
                gridsize = (32, 32), arrow_size = 10)
    display(fig)
end;
Any solution? If you know another method with another package, please tell me.
Julia: "Plot not defined" when attempting to add slider bars
I am learning how to create plots with slider bars. Here is my code, based on the first example of this tutorial:
using Plots
gr()
using GLMakie
function plotLaneEmden(log_delta_xi=-4, n=3)
    fig = Figure()
    ax = Axis(fig[1, 1])
    sl_x = Slider(fig[2, 1], range = 0:0.01:4.99, startvalue = 3)
    sl_y = Slider(fig[1, 2], range = -6:0.01:0.1, horizontal = false, startvalue = -2)
    point = lift(sl_x.value, sl_y.value) do n, log_delta_xi
        Point2f(n, log_delta_xi)
    end
    plot(n, 1 .- log_delta_xi.^2/6, linecolor = :green, label="n = $n")
    xlabel!("ξ")
    ylabel!("θ")
end
plotLaneEmden()
When I run this, it gives UndefVarError: plot not defined. What am I missing here?
It looks like you are trying to mix and match Plots.jl and Makie.jl. Specifically, the example from your link is entirely for Makie (specifically, with the GLMakie backend), while the plot function you are trying to add uses syntax specific to the Plots.jl version of plot (specifically including the linecolor and label keyword arguments). Plots.jl and Makie.jl are two separate and unrelated plotting libraries, so you have to pick one and stick with it. Since both libraries export some of the same function names, using both at once will lead to ambiguity and UndefVarErrors if not disambiguated.
The other potential problem is that it looks like you are trying to make a line plot with only a single x and y value (n and log_delta_xi are both single numbers in your code as written). If that's what you want, you'll need a scatter plot instead of a line plot; and if that's not what you want, you'll need to make those variables vectors somehow. Depending on what exactly you want, you might try something more along the lines of (in a new session, using only Makie and not Plots):
using GLMakie
function plotLaneEmden(log_delta_xi=-4, n=3)
    fig = Figure()
    ax = Axis(fig[1, 1], xlabel="ξ", ylabel="θ")
    sl_x = Slider(fig[2, 1], range = 0:0.01:4.99, startvalue = n)
    sl_y = Slider(fig[1, 2], range = -6:0.01:0.1, horizontal = false, startvalue = log_delta_xi)
    point = lift(sl_x.value, sl_y.value) do n, log_delta_xi
        Point2f(n, 1 - log_delta_xi^2/6)
    end
    sca = scatter!(point, color = :green, markersize = 20)
    axislegend(ax, [sca], ["n = $n"])
    fig
end
plotLaneEmden()
Or, below, a simple example for interactively plotting a line rather than a point:
using GLMakie
function quadraticsliders(x=-5:0.01:5)
    fig = Figure()
    ax = Axis(fig[1, 1], xlabel="X", ylabel="Y")
    sl_a = Slider(fig[2, 1], range = -3:0.01:3, startvalue = 0.)
    sl_b = Slider(fig[1, 2], range = -3:0.01:3, horizontal = false, startvalue = 0.)
    points = lift(sl_a.value, sl_b.value) do a, b
        Point2f.(x, a.*x.^2 .+ b.*x)
    end
    l = lines!(points, color = :blue)
    onany((a,b)->axislegend(ax, [l], ["$(a)x² + $(b)x"]), sl_a.value, sl_b.value)
    limits!(ax, minimum(x), maximum(x), -10, 10)
    fig
end
quadraticsliders()
ETA: a couple of examples closer to what you might be looking for.
Using bias in PyTorch for basic function approximation
Using R, it is very easy to approximate basic functions through a neural network:
library(nnet)
x <- sort(10*runif(50))
y <- sin(x)
nn <- nnet(x, y, size=4, maxit=10000, linout=TRUE, abstol=1.0e-8, reltol=1.0e-9, Wts=seq(0, 1, by=1/12))
plot(x, y)
x1 <- seq(0, 10, by=0.1)
lines(x1, predict(nn, data.frame(x=x1)), col="green")
predict(nn, data.frame(x=pi/2))
A simple neural network with one hidden layer of a mere 4 neurons is sufficient to approximate a sine. (As per the Stack Overflow question Approximating function with Neural Network.)
But I cannot obtain the same in PyTorch. In fact, the neural network created by R contains not only an input neuron, four hidden neurons, and an output neuron, but also two "bias" neurons - the first connected to the hidden layer, the second to the output. The plot above is obtained through the following:
library(devtools)
library(scales)
library(reshape)
source_url('https://gist.github.com/fawda123/7471137/raw/cd6e6a0b0bdb4e065c597e52165e5ac887f5fe95/nnet_plot_update.r')
plot.nnet(nn$wts, struct=nn$n, pos.col='#007700', neg.col='#FF7777') ### this plots the graph
plot.nnet(nn$wts, struct=nn$n, pos.col='#007700', neg.col='#FF7777', wts.only=1) ### this prints the weights
Attempting the same with PyTorch produces a different network: the bias neurons are missing. Following is an attempt to do in PyTorch what was done previously in R. The results will not be satisfactory: the function is not approximated. The most evident difference is the absence of the bias neurons.
import torch
from torch.autograd import Variable
import random
import math

N, D_in, H, D_out = 1000, 1, 4, 1

l_x = []
l_y = []
for a in range(1000):
    r = random.random()*10
    l_x.append([r])
    l_y.append([math.sin(r)])

tx = torch.cuda.FloatTensor(l_x)
ty = torch.cuda.FloatTensor(l_y)

x = Variable(tx, requires_grad=False)
y = Variable(ty, requires_grad=False)

w1 = Variable(torch.randn(D_in, H).type(torch.cuda.FloatTensor), requires_grad=True)
w2 = Variable(torch.randn(H, D_out).type(torch.cuda.FloatTensor), requires_grad=True)

learning_rate = 1e-5
for t in range(1000):
    y_pred = x.mm(w1).clamp(min=0).mm(w2)
    loss = (y_pred - y).pow(2).sum()
    if t < 10 or t % 100 == 1:
        print(t, loss.data[0])
    loss.backward()
    w1.data -= learning_rate * w1.grad.data
    w2.data -= learning_rate * w2.grad.data
    w1.grad.data.zero_()
    w2.grad.data.zero_()

t = [[math.pi]]
print(str(t) + " -> " + str((Variable(torch.cuda.FloatTensor(t))).mm(w1).clamp(min=0).mm(w2).data))
t = [[math.pi/2]]
print(str(t) + " -> " + str((Variable(torch.cuda.FloatTensor(t))).mm(w1).clamp(min=0).mm(w2).data))
How can I make the network approximate the given function (a sine in this case), either by inserting the "bias" neurons or through whatever other detail is missing? Moreover: I have difficulty understanding why R inserts the "bias". I found information that the bias could be akin to the intercept in a regression model, but I still find it unclear. Any information would be appreciated.
EDIT: An excellent explanation turned out to be in the Stack Overflow question Role of Bias in Neural Networks.
EDIT: An example that obtains the result, though using the "fuller" framework ("not reinventing the wheel"), is as follows:
import torch
from torch.autograd import Variable
import torch.nn.functional as F
import math

N, D_in, H, D_out = 1000, 1, 4, 1

l_x = []
l_y = []
for a in range(1000):
    t = (a/1000.0)*10
    l_x.append([t])
    l_y.append([math.sin(t)])

x = Variable(torch.FloatTensor(l_x))
y = Variable(torch.FloatTensor(l_y))

class Net(torch.nn.Module):
    def __init__(self, n_feature, n_hidden, n_output):
        super(Net, self).__init__()
        self.to_hidden = torch.nn.Linear(n_feature, n_hidden)
        self.to_output = torch.nn.Linear(n_hidden, n_output)

    def forward(self, x):
        x = self.to_hidden(x)
        x = F.tanh(x)  # activation function
        x = self.to_output(x)
        return x

net = Net(n_feature = D_in, n_hidden = H, n_output = D_out)

learning_rate = 0.01
optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate)
for t in range(1000):
    y_pred = net(x)
    loss = (y_pred - y).pow(2).sum()
    if t < 10 or t % 100 == 1:
        print(t, loss.data[0])
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

t = [[math.pi]]
print(str(t) + " -> " + str(net(Variable(torch.FloatTensor(t)))))
t = [[math.pi/2]]
print(str(t) + " -> " + str(net(Variable(torch.FloatTensor(t)))))
Unfortunately, while this code works properly, it does not solve the matter of making the original, more "low-level" code work as expected (e.g. introducing the bias).
Following up on @jdhao's comment - this is a super-simple PyTorch model that computes exactly what you want:
import torch
import torch.nn as nn

class LinearWithInputBias(nn.Linear):
    def __init__(self, in_features, out_features, out_bias=True, in_bias=True):
        nn.Linear.__init__(self, in_features, out_features, out_bias)
        if in_bias:
            in_bias = torch.zeros(1, out_features)
            # in_bias.normal_()  # if you want it to be randomly initialized
            self._out_bias = nn.Parameter(in_bias)

    def forward(self, x):
        out = nn.Linear.forward(self, x)
        try:
            out = out + self._out_bias
        except AttributeError:
            pass
        return out
However, there's an additional bug in your code: from what I can see, you don't train it - i.e. you do not call an optimizer (like torch.optim.SGD(mod.parameters())) before you delete the gradient information by calling grad.data.zero_().
Drawing Separated Shaded Areas in R Plot
I'm trying to draw a figure of waveforms. I want to show which parts of the waveforms are statistically significant. Currently, I draw a semi-transparent polygon:
plot(wave, type = "l", col = "red", lwd = linewid, pch = 19, lty = 1,
     xlab = "Time", xaxt = "n", xlim = c(1, duration/2),
     ylab = "Difference", ylim = c(-1, 1))
abline(h = 0)
sig <- ifelse(pval < .05, wave, 0)
polygon(c(1:length(sig)), sig, col = rgb(1,0,0,0.5), border = NA)
But the polygon isn't centered on the zero line of the y-axis. I'd like to shade only the areas underneath the curve of the wave. Instead, the polygon extends above the zero line on the y-axis. Any ideas?
wave <- c(0.0484408316308237, 0.0474054439781486, 0.0467022242629086, 0.046515614318914, 0.0466686947981267, 0.0466777796491931, 0.0460966374555009, 0.0457341620230469, 0.0455045060507858, 0.0457719372614871, 0.0461446812125276, 0.0460051963987539, 0.0456347093964464, 0.0430700479769684, 0.0435837207487517, 0.0443970279017918, 0.0457508738133201, 0.0472350978374988, 0.0482361020656729, 0.0494006907171422, 0.0504508971582255, 0.0521263688769232, 0.0532433489463588, 0.0537137380543864, 0.0540428548151276, 0.0544949143122896, 0.0544225549891838, 0.0538337952743033, 0.053135984213764, 0.0523491809303349, 0.0520472332622518, 0.0517736309847163, 0.0518684887760298, 0.0514603496453925, 0.050769752723635, 0.0504389714171051, 0.0502927164308292, 0.0504354031597342, 0.0498799936275558, 0.0490825606436222, 0.0497213009991454, 0.0501938355481634, 0.0514117871384259, 0.0519380643522052, 0.0517968505801706, 0.05123157072507, 0.0520909551474945, 0.0486858357936371, 0.0493763701994425, 0.0500160784148426, 0.0505488877007248, 0.0497678090074788, 0.0480758661250716, 0.0462675525180396, 0.0453516919016191, 0.0448339366059345, 0.0445615385738649, 0.044013178561506, 0.0439648543393159, 0.0438670362724258, 0.0440913799994017, 0.0507925460506875, 0.0509727145985309, 0.0510872776847506, 0.0508104967241469, 0.0503812271559447, 0.0503631548556902, 0.0505562349708585, 0.050869650537224, 0.050115073380279, 0.0496307336460131, 0.0486946602966385, 0.0451240814629419, 0.0439636677233932, 0.0428989167765818, 0.0420026704819646, 0.0411584695778936, 0.0403788602838661, 0.040233539087147, 0.0397175149422268, 0.0389289880494877, 0.0378327839257036, 0.0360351888196015, 0.0347926091711749, 0.0341079891575494, 0.0348740749311286, 0.0349125506875405, 0.0352951033387814, 0.0344798859212136, 0.0327899391390952, 0.0303925310234825, 0.0275215845941342, 0.0265832329092289, 0.0220646495463752, 0.0122404036320984, 0.00743877530625988, 0.00181246669131438, -0.00479231506410288, -0.0117717813867494, -0.0192370027513411, -0.027223713620762, -0.0348613553743107, -0.0397127268587883, -0.045622681570717, -0.0515358709254366, -0.0568288397365667, -0.0620165857779051, -0.0669105535816898, -0.0720264470900791, -0.0766882017731929, -0.0804427064040755, -0.0815328596670379, -0.0826051939881404, -0.0879974600724217, -0.0924894198404777, -0.0949544486488778, -0.104737046247734, -0.11695750473657, -0.132892205151458, -0.15164997657278, -0.172597865364775, -0.196113512673009, -0.216646106105455, -0.244400723622597, -0.267988695108909, -0.292598978473393, -0.317086468049069, -0.342530108945073, -0.368486808868852, -0.399730966642985, -0.433385374917961, -0.469543692326107, -0.507867318915593, -0.547443797215136, -0.586749203029937, -0.625603126037644, -0.6626968183054, -0.697811797372003, -0.730226229712439, -0.760716192518167, -0.789754092566875, -0.819837732291987, -0.844265792897494, -0.865853848085839, -0.88772546496204,
-0.908008383337203, -0.926193346905058, -0.943720637018896, -0.958657012974673, -0.971195039738284, -0.981680462787076, -0.989209920087862, -0.994760927508405, -0.998179967730494, -1, -0.99985631234348, -0.998513746223383, -0.996286337260218, -0.994167673024296, -0.992029087667234, -0.98942063019129, -0.986657143470197, -0.982080217651251, -0.97535310006632, -0.967706563058861, -0.959931873177486, -0.953053148939477, -0.945050149326435, -0.936863678952672, -0.927476791110897, -0.917244708485839, -0.910092208942064, -0.898659859433262, -0.887894272800133, -0.874966302881798, -0.860464316462603, -0.843766510522863, -0.826854760379226, -0.809030674702751, -0.790830214526396, -0.773448702041121, -0.757822022781962, -0.742415284193991, -0.726650963278141, -0.708671839205669, -0.690647887135473, -0.669566331925841, -0.647484673858103, -0.625415272964118, -0.603346317669516, -0.580945690740634, -0.559174605387308, -0.537166524153666, -0.514979755494959, -0.492190905789554, -0.4688961948937, -0.445648845564897, -0.421370990246327, -0.394957034231288, -0.367257387362894, -0.339759436905685, -0.31347732732076, -0.287795514335449, -0.263477496955318, -0.240335559473844, -0.219537822868833, -0.199356394938618, -0.182724128026289, -0.162855943390834, -0.143113588174923, -0.122168088165277, -0.100967800471397, -0.0777710229443332, -0.0539643915465976, -0.029677750446946, -0.00631959566058126, 0.0169258078383277, 0.0389612599575379, 0.0609770118878408, 0.0806172669927091, 0.102073705616963, 0.122665362014863, 0.142328282171209, 0.161475578433955, 0.17913203293436, 0.199700604404855, 0.216864487908698, 0.232810273813389, 0.248031682891701, 0.262732844598723, 0.276791405782004, 0.289592381780554, 0.302904563305743, 0.315933177369042, 0.331194285957781, 0.34328787498597, 0.355317635956366, 0.37161156141851, 0.385496981280364, 0.39906005718835, 0.410609126043194, 0.424557700817611, 0.432614645845991, 0.440840405298792, 0.446859449278095, 0.451067862561763, 0.454108550491332, 0.45822766032593, 0.463669025285741, 0.468751886735504, 0.477222371161998, 0.480825169330004, 0.484778309657409, 0.490686208003411, 0.496271560877119, 0.502429910894803, 0.504627189056216, 0.509277950091572, 0.512796647131139, 0.51920303298796, 0.526246371444159, 0.530172995082546, 0.537137361380815, 0.540646539738041, 0.541350429187775, 0.538037378748107, 0.53640396369579, 0.533982169384575, 0.531715003489143, 0.537771148135836, 0.541774025400292, 0.542397661260989, 0.542734418123511, 0.541900634648552, 0.551336284191385, 0.550830246001875, 0.543935588412506, 0.54850236576883, 0.546095769955119, 0.546560717106744, 0.54862072115046, 0.549873891251691, 0.549022573851835, 0.556546139368112, 0.560110491782052, 0.563328136311772, 0.556118487214394, 0.556145005611977, 0.56177222985652, 0.56211974460993, 0.563139174079545, 0.567595800986669, 0.564911164049346, 0.55370467099592, 0.549142761248365, 0.553638360274585, 0.552297050636191, 0.551520866080127, 0.545782391084414, 0.550845817729848, 0.551708815237408, 0.552680776857495, 0.560806019030281, 0.566586972251016, 0.570035930685996, 0.57778944843747, 0.575407636591647, 0.576215607600804, 0.580856907896507, 0.58111203256702, 0.580550879830819, 0.579468018580888, 0.569996133796829, 0.571472497444831, 0.570186841693214, 0.579257568699911, 0.585984875703449, 0.592864903673866, 0.592890757109184, 0.602046235792163, 0.613343736000507, 0.61690652667541, 0.615073893877543, 0.603386282668406, 0.605140483512513, 0.602317438726602, 0.601349093084652, 0.60175903066173, 0.595748856842631, 
0.592466664315233, 0.579486755875179, 0.561987007946437, 0.542492099234415, 0.526383565525105, 0.5230428319822, 0.513300797695766, 0.515049254563791, 0.518848257099875, 0.506235765674015, 0.49998294091854, 0.506344856229246, 0.50475172054043, 0.507702294279798, 0.506348179486846, 0.517341319120319, 0.523551522223028, 0.530756340907612, 0.53032745351512, 0.533111198195776, 0.526453901436172, 0.529598066926201, 0.523624800099041, 0.516232193418245, 0.517039928536979, 0.501904786914197, 0.49713387651536, 0.505916234878408, 0.502395600515955, 0.489414472392961, 0.476329169842886, 0.485088777888902, 0.48219652186471, 0.469886812116599, 0.453441250799586, 0.441168281111148, 0.437074892507931, 0.438771226156789, 0.434582625256592, 0.434393831049118, 0.455313871944686, 0.46667176154786, 0.451614470975422, 0.446531804915084, 0.448747422127997, 0.442389961671837, 0.452345098594434) pval <- structure(c(0.0237628302370617, 0.0262165284800235, 0.0285686821840486, 0.0297087853681475, 0.0299733840361694, 0.0301202024224222, 0.0323058180308351, 0.0339316553612034, 0.0381144580106053, 0.04487408115707, 0.057804021059368, 0.085158184225468, 0.136972404452296, 0.0594954230338983, 0.059815561375559, 0.0586890211651837, 0.054984325985342, 0.0505545271596015, 0.0488952044969173, 0.0471087059136155, 0.0460746356928649, 0.0424450757403614, 0.0405444189843919, 0.0411829464282132, 0.0412927140116675, 0.0417488854396546, 0.0445117370815736, 0.0550285077721912, 0.0747425914230263, 0.111097457523042, 0.15043556390993, 0.0494268999973122, 0.0517649576685409, 0.0562362352328751, 0.0608441259158306, 0.0635243703413948, 0.0648470007376663, 0.0649911512153626, 0.0678711704637943, 0.0714853336274117, 0.0656394713851027, 0.0631287997230727, 0.0691221286169588, 0.0998026839190638, 0.149978285257545, 0.195667278177658, 0.139258740157594, 0.0641286728653938, 0.0590485245121715, 0.0549751823968685, 0.0525288256855241, 0.0566756473108664, 0.0675151254516982, 0.0797001157496837, 0.0893120076127157, 0.101040948627555, 0.113937187901608, 0.136477809610576, 0.151438694370951, 0.168116109618061, 0.167693574892715, 0.055626851340157, 0.055837522780422, 0.0561583595040639, 0.0594879473890239, 0.0632759635313892, 0.065859251440062, 0.065772893740802, 0.0639366385273011, 0.0671023312185317, 0.0694330478072044, 0.0733540770931381, 0.0927895033698356, 0.0987657684863097, 0.106752925133343, 0.113783571923594, 0.121501003290016, 0.129032949168017, 0.136116836003546, 0.151762019664018, 0.175690091657166, 0.202670406948741, 0.246692883279612, 0.29427387065597, 0.339407794720914, 0.437126245014218, 0.457917873424833, 0.481184291252777, 0.523332782522114, 0.577097846183451, 0.630184174499617, 0.685063979797142, 0.719978579352288, 0.772396520583243, 0.880445896200477, 0.930819867424807, 0.983821279666508, 0.958549578396286, 0.901224976502897, 0.843510707377769, 0.785288039341357, 0.732678879305588, 0.705605278527497, 0.668649736650829, 0.632671877729848, 0.601886683772611, 0.572995378718532, 0.547203236154384, 0.522892433560929, 0.50484004632912, 0.495167623435313, 0.501651238918971, 0.508756506196948, 0.504741256872503, 0.496754657283146, 0.483869763822611, 0.453324086883336, 0.417453905522249, 0.37213400662027, 0.324969615847784, 0.279986826269611, 0.238305111925328, 0.217603724159869, 0.18615434974012, 0.165914268934125, 0.148553539851396, 0.134089539639757, 0.120930326838334, 0.108816102552138, 0.0943727175866119, 0.0809908262000654, 0.0688676027935202, 0.057965295683139, 0.0485177042268327, 0.0406774650506777, 0.0343750114181405, 
0.0296041466961569, 0.0257754426846805, 0.0228491378540575, 0.0203762486696415, 0.0182050611089517, 0.0151389244136079, 0.0143291699625514, 0.0135272132937536, 0.0129494727394939, 0.0125413188392993, 0.0120654899893143, 0.0115593848885709, 0.0110971538277043, 0.0106563593895204, 0.0102594799209655, 0.00995618264115798, 0.00976027085065125, 0.00966493285702107, 0.0096809864774456, 0.00984417807568116, 0.0101140178609648, 0.0104536689988165, 0.0107455438459773, 0.0109879390786196, 0.0112476004827354, 0.0115064972197262, 0.0118436117195952, 0.012250241785758, 0.0127098856583327, 0.0131923546938643, 0.0135900513062996, 0.0140907822475909, 0.0146146781894138, 0.0153666265314495, 0.0163587611869966, 0.0165777793501283, 0.0179231032274303, 0.0193146107648153, 0.02112595949975, 0.023429090158608, 0.0263502163261689, 0.029668879819774, 0.0333886702034785, 0.0373851358797521, 0.0414244618086564, 0.0452421641298093, 0.0491730557898139, 0.0534544517392593, 0.0586659192358243, 0.0642835353988069, 0.0717515494462237, 0.0803329713593607, 0.089702211494325, 0.100024733160442, 0.111522126685171, 0.123611882298163, 0.136659951151313, 0.150762549445491, 0.166322127637489, 0.183448713985098, 0.201980927862526, 0.223096893280781, 0.248582577870016, 0.278233697709439, 0.310833681563501, 0.345332013645289, 0.382705067848975, 0.421376239222594, 0.460853428063773, 0.498027141394812, 0.535355843788925, 0.572293181350887, 0.611900790530155, 0.652931049846446, 0.698550246370366, 0.746816303915033, 0.801970397733016, 0.860682735611148, 0.922505922723998, 0.983357734959766, 0.955187832059461, 0.89653477564825, 0.837851886741715, 0.781781567273432, 0.723265191915832, 0.667546011138303, 0.614998358930736, 0.564647511026012, 0.519566323581666, 0.470697912789023, 0.432376848315165, 0.398409375708932, 0.367655920642225, 0.338940852965934, 0.312212428665514, 0.288459290398895, 0.264878592699687, 0.242704842685072, 0.217387439877778, 0.19835325593237, 0.180556517879591, 0.158171361155571, 0.141413156894301, 0.126554452518633, 0.114979049999799, 0.102918102342695, 0.0962216185484208, 0.0901194930496858, 0.0860130895609037, 0.0832340770284259, 0.0812259095628692, 0.078291162078237, 0.0742796112910293, 0.070046963535911, 0.0638805429841068, 0.0600325581294298, 0.0556731158844295, 0.0505107871259209, 0.045748968232586, 0.0409239605265858, 0.038056048177405, 0.0346897761994006, 0.0322017147611021, 0.0290191309178551, 0.0259412764702286, 0.0239876220554575, 0.0213928561132381, 0.0196012142768874, 0.0188687591178971, 0.0191111545249072, 0.0190313087419941, 0.019003849481473, 0.0188195734297014, 0.0172119066683942, 0.015771401215516, 0.0148547142814098, 0.0142875281889536, 0.0138706460080897, 0.0117612653636045, 0.0113802773390233, 0.0119929691882122, 0.011301932221071, 0.0113391829691033, 0.0111974712246918, 0.0109290980999376, 0.0105620733291735, 0.0102904702021988, 0.0088461881724392, 0.00730041492139247, 0.00668346613062251, 0.00766574212937, 0.00783946561969262, 0.00788512630201997, 0.00782058882264157, 0.00767516163055654, 0.00747564784065695, 0.00812968647923146, 0.0091908030084859, 0.00940025876764169, 0.0084452386928757, 0.00773239171673778, 0.00774262750329184, 0.00817566848920374, 0.00775737017229728, 0.00833705507967201, 0.00848219824583305, 0.00775531985314739, 0.00731753429028076, 0.00624994399211075, 0.00566319903650268, 0.00580078551643243, 0.00589284481572378, 0.00557777343775064, 0.00561015236859766, 0.00558045776809902, 0.0062741801741615, 0.00693752198709533, 0.00680148894969647, 0.00682650280136682, 
0.00648353640440989, 0.00559647299682892, 0.00560310591626879, 0.00630599774150389, 0.00625849640781645, 0.00496543301213065, 0.00527106355173805, 0.00606524474497961, 0.00782669446225163, 0.00804663337329233, 0.00680216943527376, 0.00549477843115955, 0.00521495965843119, 0.00547975304067046, 0.0058083233283578, 0.00752964959779626, 0.00928425608337011, 0.0121528464847978, 0.0151204119890877, 0.0167466130623839, 0.0184702177828752, 0.0190546966808364, 0.0194217711957074, 0.0172588666752732, 0.0220542021375166, 0.0191777261144048, 0.0207814209894351, 0.0194965472902094, 0.0209260124322778, 0.0170037074001148, 0.0125849197829535, 0.0116502405937885, 0.0120529295967995, 0.0116481500418768, 0.013878269349617, 0.0138614750865797, 0.0153190475349968, 0.0194121459536112, 0.017739167396571, 0.0204726597729821, 0.0245650013533451, 0.0208055307319631, 0.0210348908355722, 0.0278106725250406, 0.035960679421438, 0.0358929798768166, 0.0405937721886476, 0.0510142810649717, 0.0561354348677331, 0.0862664430860822, 0.101130560202344, 0.111832280018731, 0.0774469081405625, 0.0813608282057401, 0.063555234011225, 0.0517641089682014, 0.0663811539378655, 0.0602507226995198, 0.0626085278234381, 0.0676820928084705, 0.0566520960018359), .Dim = c(376L, 1L))
The devil is always in the details with polygon vertices! Using your data from above, here is a solution and some explanation.
Get the polygon vertices, almost:
sigy <- ifelse(pval < 0.05, wave, 0)
sigx <- 1:length(wave)
sigxy <- data.frame(sigx, sigy)
This is pretty much what you used in your original question. It doesn't quite work because while the polygon function accepts and connects x,y pairs, it also closes the polygon. Also, your approach was drawing a lot of polygons with zero area. So, some processing of sigxy is necessary to split it into separate polygons. This could be done manually, but it's more fun and useful to automate the process. First, remove the entries where sigy is zero. Then let's replot to see where things stand:
tmp <- sigxy[sigxy$sigy != 0,]
plot(wave, # removed some unnecessary items to simplify
     type = "l", col = "red",
     xlab = "Time (by index)", ylab = "Difference",
     ylim = c(-1, 1))
abline(h = 0)
lines(tmp, col = "green")
At this point, we are much closer but are still connecting the dots and not making polygons, since we don't return to the zero line when necessary. Let's create a column that will show us where the polygons should start and stop, then gather that info into a couple of vectors of indices:
tmp$diff <- c(1, diff(tmp$sigx))
st <- 1 # start indices
end <- c() # end indices
for (i in 1:nrow(tmp)) {
  if (tmp$diff[i] > 1) end <- c(end, i-1)
  if ((tmp$diff[i] > 1) & (i != nrow(tmp))) st <- c(st, i)
  if (i == nrow(tmp)) end <- c(end, i-1)
}
Now refresh the plot and add the polygons one at a time:
plot(wave, # removed some unnecessary items to simplify
     type = "l", col = "red",
     xlab = "Time (by index)", ylab = "Difference",
     ylim = c(-1, 1))
abline(h = 0)
for (i in 1:length(st)) {
  DF <- tmp[st[i]:end[i], 1:2] # Just the data to be plotted
  # Add the points needed to drop to the zero line
  DF <- rbind(c(DF$sigx[1], 0), DF, c(DF$sigx[nrow(DF)], 0))
  polygon(DF, col = "green")
}
The result:
Bryan Hanson's solution translated into a function:
drawarea <- function(wave, pval, color) {
  sigy <- ifelse(pval < 0.05, wave, 0)
  sigx <- 1:length(wave)
  sigxy <- data.frame(sigx, sigy)
  tmp <- sigxy[sigxy$sigy != 0,]
  tmp$diff <- c(1, diff(tmp$sigx))
  st <- 1 # start indices
  end <- c() # end indices
  for (i in 1:nrow(tmp)) {
    if (tmp$diff[i] > 1) end <- c(end, i-1)
    if ((tmp$diff[i] > 1) & (i != nrow(tmp))) st <- c(st, i)
    if (i == nrow(tmp)) end <- c(end, i-1)
  }
  for (i in 1:length(st)) {
    DF <- tmp[st[i]:end[i], 1:2] # Just the data to be plotted
    # Add the points needed to drop to the zero line
    DF <- rbind(c(DF$sigx[1], 0), DF, c(DF$sigx[nrow(DF)], 0))
    polygon(DF, col = color, border = NA)
  }
}
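For completeness, a hypothetical usage sketch (an illustrative addition, not from the original answer; it assumes wave and pval from the question are in scope and draws on a fresh base plot):
# Usage sketch: shade significant regions in semi-transparent green
plot(wave, type = "l", col = "red",
     xlab = "Time (by index)", ylab = "Difference", ylim = c(-1, 1))
abline(h = 0)
drawarea(wave, pval, rgb(0, 1, 0, 0.5))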
(Scilab) x = [-6,6], y = 1/(1+%e^-x): why doesn't it work?
I'm trying to draw the sigmoid function using this code in Scilab, but the result I get does not match the equation. What's wrong with my code?
x = -6:1:6;
y = 1/(1+%e^-x)
y =
0.0021340 0.0007884 0.0002934 0.0001113 0.0000443 0.0000196 0.0000106 0.0000072 0.0000060 0.0000055 0.0000054 0.0000053 0.0000053
http://en.wikipedia.org/wiki/Sigmoid_function
Thank you so much.
Try:
-->function [y] = f(x)
-->  y = 1/(1+%e^-x)
-->endfunction
-->x = -6:1:6;
-->fplot2d(x,f)
which yields:
Your approach calculates the pseudoinverse of the (1+%e^-x) vector, because / between vectors is matrix right division in Scilab, not element-wise division. You can verify this by executing:
(1+%e^-x)*y
Here are two things you could do:
x = -6:1:6;
y = ones(x)./(1+%e.^-x)
This gives the result you need; it performs element-wise division as expected. Another approach is:
x = -6:1:6
deff("z = f(x)", "z = 1/(1+%e^-x)")
// The above line is the same as defining a function -
// just as a one-liner on the interpreter.
y = feval(x, f)
Both approaches will yield the same result.
With Scilab ≥ 6.1.1, simply:
x = (-6:1:6)';
plot(x, 1./(1+exp(-x)))