I have not needed to do things like this for years and was never good at it. Below is my graph.
Looking at my artistic numbering on the graph:
I have the X and Y values: X = 7282, Y = 235
I have the X and Y values: X = 8178, Y = 173
I have the X but not the Y: X = 7882, Y = ?
I need to calculate Y, and I'm sure it is pretty simple, but I can't seem to figure it out. I've googled a lot, but my calculations never work (i.e. the new Y point is never on the line, always above or below it), so I'm clearly missing something.
Can anyone help with the formula for calculating the new Y value?
Thanks!
Try this:
Y = (Ymax - Ymin)/(Xmax - Xmin) * K + Ymin
If the line slopes upward, K = X - Xmin; if it slopes downward, K = Xmax - X.
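A minimal sketch of that formula in Python, plugging in the numbers from the question (the variable names are only for illustration):

```python
# Two known points; here Y decreases as X increases, so Ymin occurs at Xmax
x_min, x_max = 7282, 8178
y_min, y_max = 173, 235

x = 7882
k = x_max - x                                       # downward slope: K = Xmax - X
y = (y_max - y_min) / (x_max - x_min) * k + y_min
print(y)                                            # ~193.48, which lies on the line
```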
Since you know the coordinates of 2 points, the slope is
m = (y2 - y1)/(x2 - x1)
and the line is
y3 = m*x3 + c
where c is a constant (the intercept).
To calculate c, substitute x = x1 and y = y1:
c = y - m*x
Apply this c in the equation above to get y3.
So for this case:
m = (235 - 173)/(7282 - 8178)
c = 235 - m*7282
y3 = m*7882 + c
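Plugging the numbers in (a quick sketch in Python, not part of the original answer):

```python
x1, y1 = 7282, 235
x2, y2 = 8178, 173
x3 = 7882

m = (y2 - y1) / (x2 - x1)   # slope
c = y1 - m * x1             # intercept, from y1 = m*x1 + c
y3 = m * x3 + c
print(round(y3, 2))         # 193.48, which lies on the line through the two points
```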
import numpy as np
import torch

x = np.linspace(-np.pi, np.pi, 100)
x = torch.tensor(x, requires_grad=True)
y = torch.sin(x)
This is the given problem. I'm trying to find its derivative and plot the function.
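One way to do this (a sketch, assuming matplotlib is available for the plot): backpropagate a vector of ones through y so that x.grad ends up holding the elementwise derivative, which for sin(x) is cos(x).

```python
import numpy as np
import torch
import matplotlib.pyplot as plt

x = np.linspace(-np.pi, np.pi, 100)
x = torch.tensor(x, requires_grad=True)
y = torch.sin(x)

# y is a vector, so pass a gradient of ones (equivalent to y.sum().backward())
y.backward(torch.ones_like(x))

plt.plot(x.detach().numpy(), y.detach().numpy(), label="sin(x)")
plt.plot(x.detach().numpy(), x.grad.numpy(), label="derivative (cos(x))")
plt.legend()
plt.show()
```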
I'm new to R and a pretty novice programmer. I've come up with the following, but I'm stumped on this question. I don't need the answer, just a nudge in the right direction.
x <- c(1, 5, 7, 9, 10, 22)
y <- c(22.2, 33.4, 45.7, 50.2, 55.9, 89.1)

# a = B0 (intercept), b = B1 (slope)
linear_regression <- function(x, y) {
  x2 <- x * x
  y2 <- y * y   # not actually needed for the coefficients
  xy <- x * y
  a <- (sum(y) * sum(x2) - sum(x) * sum(xy)) / (length(x) * sum(x2) - sum(x)^2)
  b <- (length(x) * sum(xy) - sum(x) * sum(y)) / (length(x) * sum(x2) - sum(x)^2)
  coeff <- c(a, b)
  return(coeff)
}

linear_regression(x, y)
#2 Using the betas you generated, make a new prediction for the value of y, given the value of x is 40
Recall that the predicted Y will equal B0 + B1 * X (where B0 and B1 are the estimated coefficients) and you should be on your way
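For illustration only, here is the same closed-form calculation and the prediction step written in Python rather than R (b0 and b1 correspond to a and b in the function above):

```python
x = [1, 5, 7, 9, 10, 22]
y = [22.2, 33.4, 45.7, 50.2, 55.9, 89.1]
n = len(x)

sx, sy = sum(x), sum(y)
sxx = sum(xi * xi for xi in x)
sxy = sum(xi * yi for xi, yi in zip(x, y))

b0 = (sy * sxx - sx * sxy) / (n * sxx - sx**2)   # intercept (a / B0)
b1 = (n * sxy - sx * sy) / (n * sxx - sx**2)     # slope (b / B1)

x_new = 40
print(b0 + b1 * x_new)   # predicted y for x = 40 (roughly 148.5)
```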
After constructing a computation graph, when I use the detach method to change some tensor's value in place, I expect an error to pop up when computing the backpropagation. However, this is not always the case. Of the following two blocks of code, the first one raises an error, while the second one does not. Why does this happen?
import torch

x = torch.tensor(3.0, requires_grad=True)
y = x + 1
z = y**2
c = y.detach()
c.zero_()
z.backward(retain_graph=True)
print(x.grad) # errors pop up
x = torch.tensor(3.0, requires_grad=True)
y1 = x+1
y2 = x**2
z = 3*y1 + 4*y2
c = y2.detach()
c.zero_()
z.backward(retain_graph=True)
print(x.grad) # no errors. The printed value is 27
TL;DR: In the former example z = y**2, so dz/dy = 2*y, i.e. it is a function of y and needs y's values to be unchanged to properly compute the backpropagation, hence the error when you backpropagate after the in-place operation. In the latter, z = 3*y1 + 4*y2, so dz/dy2 = 4, i.e. y2's values are not needed to compute the gradient, so they can be modified freely.
In the former example you have the following computation graph:
x ---> y = x + 1 ---> z = y**2
        \
         \---> c = y.detach().zero_()
Corresponding code:
x = torch.tensor(3.0, requires_grad=True)
y = x + 1
z = y**2
c = y.detach()
c.zero_()
z.backward() # errors pop up
When calling c = y.detach() you effectively detach c from the computation graph, while y remains attached. However, c shares the same data as y. This means that when you call the in-place operation c.zero_(), you end up affecting y. This is not allowed, because y is part of a computation graph and its values will be needed for a potential backpropagation from variable z.
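A quick way to see the shared storage (a small check, not from the original answer):

```python
import torch

x = torch.tensor(3.0, requires_grad=True)
y = x + 1
c = y.detach()

print(c.data_ptr() == y.data_ptr())  # True: c and y share the same memory
c.zero_()
print(y)                             # y's value is now 0, modified through c
```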
The second scenario corresponds to this layout:
    /--> y1 = x + 1  \
x --                  --> z = 3*y1 + 4*y2
    \--> y2 = x**2   /
              \
               \---> c = y2.detach().zero_()
Corresponding code:
x = torch.tensor(3.0, requires_grad=True)
y1 = x + 1
y2 = x**2
z = 3*y1 + 4*y2
c = y2.detach()
c.zero_()
z.backward()
print(x.grad) # no errors. The printed value is 27
Here again we have the same setup: you detach, and then modify c (and with it the tensor it was detached from) in place with zero_.
The only difference is the operation performed on y and y2 (in the 1st and 2nd example respectively).
In the former, you have z = y**2, so the derivative is 2*y, hence the value of y is needed to compute the gradient of that operation.
In the latter example, though, z(y2) = constant + 4*y2, so the derivative with respect to y2 is just a constant, 4, i.e. the value of y2 is not needed to compute its derivative. You can check this by, for instance, defining z in the 2nd example as z = 3*y1 + 4*y2**2: it will raise an error.
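A sketch of that check, using the same setup as the 2nd example:

```python
import torch

x = torch.tensor(3.0, requires_grad=True)
y1 = x + 1
y2 = x**2
z = 3*y1 + 4*y2**2   # now dz/dy2 = 8*y2, so y2's value is needed again

c = y2.detach()
c.zero_()            # in-place modification of the data shared with y2

z.backward()         # RuntimeError: one of the variables needed for gradient
                     # computation has been modified by an inplace operation
```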
I'm having problems implementing this exercise from a quantitative economics course.
Here's my code:
using Plots   # for histogram (assuming Plots.jl here)

N = 50
M = 20
a = 0.1
b = 0.2
c = 0.5
d = 1.0
σ = 0.1

estimates = zeros(M, 5)

for i ∈ 1:M
    x₁ = Vector{BigFloat}(randn(N))
    x₂ = Vector{BigFloat}(randn(N))
    w = Vector{BigFloat}(randn(N))
    # Derive y vector (element-wise operations)
    y = a*x₁ .+ b.*(x₁.^2) .+ c.*x₂ .+ d .+ σ.*w
    # Derive X matrix
    X = [x₁ x₁ x₂ fill(d, (N, 1)) w]
    # Implementation of the formula β = inv(XᵀX)Xᵀy
    estimates[i, :] = (X'*X)\X'*y
end

histogram(estimates, layout=5, labels=["a", "b", "c", "d", "σ"])
I get a SingularException(5) error, as the matrix X'X has a determinant of 0 and has no inverse. My question is, where have I gone wrong in this exercise? I heard that a reason the determinant might be zero is floating point inaccuracy, so I made the random variables BigFloats to no avail. I know the mistake I'm making isn't very complicated but I'm lost. Thank you!
Your X should be
X = [x₁ x₁.^2 x₂ fill(d, (N, 1))]
Explanation
It looks like you are trying to test OLS by estimating the parameters of the model:
y = α₀ + α₁x₁ + α₁₁x₁² + α₂x₂ + ϵ
where α₀ is the intercept of the model, α₁, α₁₁, α₂ are the parameters for the explanatory variables, and ϵ is the random error with expected value 0 and variance σ². Hence the structure of X must match your model.
By including the x₁ column twice you introduced collinearity, and that is what caused the SingularException.
You also do not want to "estimate" a parameter for ϵ, because it represents the randomness (so drop the w column, and estimates then needs only 4 columns).
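To see why the duplicated column makes XᵀX singular, here is a small illustration, in Python/NumPy rather than Julia, purely for demonstration:

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.standard_normal(50)
x2 = rng.standard_normal(50)

# Duplicated x1 column: the columns are linearly dependent, so X'X is singular
X_bad = np.column_stack([x1, x1, x2, np.ones(50)])
# Squared term instead: the columns are (generically) independent, so X'X is invertible
X_ok = np.column_stack([x1, x1**2, x2, np.ones(50)])

print(np.linalg.matrix_rank(X_bad.T @ X_bad))  # 3  (4x4 matrix, rank deficient)
print(np.linalg.matrix_rank(X_ok.T @ X_ok))    # 4  (full rank)
```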
I have a, hopefully, simple question. I'm using Nuke to do a linear animation and I have 2 points.
point1 at frame 1 is (5, 90)
point2 at frame 10 is (346, 204)
Using a linear interpolation type, I want to figure out where the x and y point is at frame 30.
The way I tried is using the slope formula and then finding the y-intercept.
m = (204 - 90) / (346 - 5)
m = 114/341 = .3343
then I got the intercept by:
y = mx + b
90 = .3343(5) + b
90 = 1.6715 + b
88.3285 = b
So... I got the formula for my line: y = .3343x + 88.3285
Can someone help me figure out where the point is going to be at any given frame?
If you'd please refer to the attached image, you can see my graph.
I guess the problem I'm having is relating the time to the coordinate points.
Thanks
Just consider x as a function of time (t).
Here's some coordinates:
(t, x)
(1, 5)
(10, 346)
and some calculation of the line equation:
x = mt+b
m = (346-5) / (10-1)
m = 341/9
b = 5 - (341/9)*1
b = - 296/9
x = (341t - 296)/9
And using my formula (t -> x) and your formula (x -> y), I can calculate where things are at t=30
t = 30
x = 1103 + 7/9
y = 457.3214
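Putting both mappings together, here is a small sketch (plain Python, independent of Nuke) that linearly interpolates or extrapolates the point for any frame:

```python
def point_at_frame(frame, f1=1, p1=(5, 90), f2=10, p2=(346, 204)):
    # Fraction of the way from frame f1 to frame f2 (can exceed 1 for extrapolation)
    t = (frame - f1) / (f2 - f1)
    x = p1[0] + t * (p2[0] - p1[0])
    y = p1[1] + t * (p2[1] - p1[1])
    return x, y

print(point_at_frame(30))   # approx (1103.78, 457.33); 457.3214 above comes from rounding m
```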