I am trying to insert data into an Access mdb file using a list as the source for the values.
cursor.execute("select * from Components")
cursor.executemany("""
INSERT INTO Components
([Database Number],Description, [Warehouse Code],[Supplier Code], Usage, Major1, Minor1)
VALUES (?,?,?,?,?,?,?)
"""), input_list
cursor.commit()
I get the error "TypeError: function takes exactly 2 arguments (1 given)". The error refers to the line """), input_list
What am I doing wrong? Thanks in advance for your help.
Here is a print of the input_list
['7', '1/2" PVC 90° Elbow', '406-005', 'SUP2', 'Y', 'PVC FS', 'PVC FS']
['7', '3/4" PVC 90° Elbow', '406-007', 'SUP2', 'Y', 'PVC FS', 'PVC FS']
['7', '1" PVC 90° Elbow', '406-010', 'SUP2', 'Y', 'PVC FS', 'PVC FS']
['7', '1.25" PVC 90° Elbow', '406-012', 'SUP2', 'Y', 'PVC FS', 'PVC FS']
['7', '1.5" PVC 90° Elbow', '406-015', 'SUP2', 'Y', 'PVC FS', 'PVC FS']
['7', '2" PVC 90° Elbow', '406-020', 'SUP2', 'Y', 'PVC FS', 'PVC FS']
I figured it out. The last line of the cursor.executemany call should read:
""", input_list)
I had the closing parenthesis in the wrong place.
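For completeness, here is the corrected call in context (a minimal sketch; the connection and cursor setup from the question are assumed):

cursor.executemany("""
INSERT INTO Components
([Database Number], Description, [Warehouse Code], [Supplier Code], Usage, Major1, Minor1)
VALUES (?,?,?,?,?,?,?)
""", input_list)  # input_list is passed as the second argument to executemany
cursor.commit()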
I'm trying to understand the OpenMDAO error messages
RuntimeError: Singular entry found in '' for column associated with state/residual 'x'.
and
RuntimeError: Singular entry found in '' for row associated with state/residual 'y'.
Can someone explain these? For example, when running the code
from openmdao.api import Problem, Group, IndepVarComp, ImplicitComponent, ScipyOptimizeDriver, NewtonSolver, DirectSolver, view_model, view_connections
class Test1Comp(ImplicitComponent):
    def setup(self):
        self.add_input('x', 0.5)
        self.add_input('design_x', 1.0)
        self.add_output('z', val=0.0)
        self.add_output('obj')
        self.declare_partials(of='*', wrt='*', method='fd', form='central', step=1.0e-4)

    def apply_nonlinear(self, inputs, outputs, resids):
        x = inputs['x']
        design_x = inputs['design_x']
        z = outputs['z']
        resids['z'] = x*z + z - 4
        resids['obj'] = (z/5.833333 - design_x)**2

if __name__ == "__main__":
    prob = Problem()
    model = prob.model = Group()
    model.add_subsystem('p1', IndepVarComp('x', 0.5))
    model.add_subsystem('d1', IndepVarComp('design_x', 1.0))
    model.add_subsystem('comp', Test1Comp())
    model.connect('p1.x', 'comp.x')
    model.connect('d1.design_x', 'comp.design_x')
    prob.driver = ScipyOptimizeDriver()
    prob.driver.options["optimizer"] = 'SLSQP'
    model.add_design_var("d1.design_x", lower=0.5, upper=1.5)
    model.add_objective('comp.obj')
    model.nonlinear_solver = NewtonSolver()
    model.nonlinear_solver.options['iprint'] = 2
    model.nonlinear_solver.options['maxiter'] = 20
    model.linear_solver = DirectSolver()
    prob.setup()
    prob.run_model()
    print(prob['comp.z'])
I get the error message:
File "C:\Scripts/mockup_component3.py", line 46, in <module>
prob.run_model()
File "C:\Python_32\lib\site-packages\openmdao\core\problem.py", line 315, in run_model
return self.model.run_solve_nonlinear()
File "C:\Python_32\lib\site-packages\openmdao\core\system.py", line 2960, in run_solve_nonlinear
result = self._solve_nonlinear()
File "C:\Python_32\lib\site-packages\openmdao\core\group.py", line 1420, in _solve_nonlinear
result = self._nonlinear_solver.solve()
File "C:\Python_32\lib\site-packages\openmdao\solvers\solver.py", line 602, in solve
fail, abs_err, rel_err = self._run_iterator()
File "C:\Python_32\lib\site-packages\openmdao\solvers\solver.py", line 349, in _run_iterator
self._iter_execute()
File "C:\Python_32\lib\site-packages\openmdao\solvers\nonlinear\newton.py", line 234, in _iter_execute
system._linearize()
File "C:\Python_32\lib\site-packages\openmdao\core\group.py", line 1562, in _linearize
self._linear_solver._linearize()
File "C:\Python_32\lib\site-packages\openmdao\solvers\linear\direct.py", line 199, in _linearize
raise RuntimeError(format_singluar_error(err, system, mtx))
RuntimeError: Singular entry found in '' for column associated with state/residual 'comp.obj'.
I was able to resolve this error by adding - outputs['obj'] to the equation for resids['obj']. But I still have little understanding of what the two error messages mean. What matrix is it that is singular? And what does it mean to have
1) a singular entry for a column?
2) a singular entry for a row?
I realized that the cause of the singular row was that I had not declared the partial derivatives for the component. I fixed this by calling declare_partials on the top-level system. The traceback gave me the clue that the matrix in question comes from the linearization step.
The singular column seems to come from having two residual equations in apply_nonlinear but only one unknown (z): the residual for obj did not depend on the output obj itself, so the Jacobian column associated with obj was all zeros, which makes the linearized system singular.
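For reference, here is a minimal sketch of the corrected apply_nonlinear with the fix mentioned above (subtracting outputs['obj'] so the obj residual depends on its own state):

    def apply_nonlinear(self, inputs, outputs, resids):
        x = inputs['x']
        design_x = inputs['design_x']
        z = outputs['z']
        resids['z'] = x*z + z - 4
        # make the residual for 'obj' depend on the 'obj' state itself,
        # so its Jacobian column is no longer all zeros
        resids['obj'] = (z/5.833333 - design_x)**2 - outputs['obj']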
I am not sure I understand how to access the Gauss-Seidel (GS) convergence information when running a problem that contains a Group with a cycle.
To illustrate this, consider these two versions of the Sellar problem:
prob = Problem()
model = prob.model
model.add_subsystem('px', IndepVarComp('x', 1.0), promotes=['x'])
model.add_subsystem('pz', IndepVarComp('z', np.array([5.0, 2.0])), promotes=['z'])
model.add_subsystem('d1', SellarDis1.SellarDis1(), promotes=['x', 'z', 'y1', 'y2'])
model.add_subsystem('d2', SellarDis2.SellarDis2(), promotes=['z', 'y1', 'y2'])
nlgbs = model.nonlinear_solver = NonlinearBlockGS()
nlgbs.options['maxiter'] = 8
prob.setup()
A = prob.run_model()
In this version, the variable A contains convergence results such as
(False, 1.3188028447075339e-10, 3.6299074030587596e-12)
However, when defining the Sellar problem in the following form:
class SellarMDA(Group):
    def setup(self):
        indeps = self.add_subsystem('indeps', IndepVarComp(), promotes=['*'])
        indeps.add_output('x', 1.0)
        indeps.add_output('z', np.array([5.0, 2.0]))
        cycle = self.add_subsystem('cycle', Group(), promotes=['*'])
        d1 = cycle.add_subsystem('d1', SellarDis1.SellarDis1(), promotes_inputs=['x', 'z', 'y2'], promotes_outputs=['y1'])
        d2 = cycle.add_subsystem('d2', SellarDis2.SellarDis2(), promotes_inputs=['z', 'y1'], promotes_outputs=['y2'])
        nl = cycle.nonlinear_solver = NonlinearBlockGS()
        nl.options['maxiter'] = 8
prob = Problem()
prob.model = SellarMDA()
prob.setup()
prob['x'] = 2.
prob['z'] = [-1., -1.]
C = prob.run_model()
In the variable C there is no information relevant to GS convergence; there is only
(False, 0.0, 0.0)
Is it possible to get the GS convergence information in the 2nd version, as in the 1st, without using a recorder?
If you would like to see what the solvers are doing, I would definitely recommend turning on the residual printing. You can do that individually for each solver, but I find it easier to use the problem method to turn it all on:
prob.set_solver_print(level=2)
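As a minimal sketch (assuming the SellarMDA class defined in the question), the call goes on the Problem after setup and before run_model:

prob = Problem()
prob.model = SellarMDA()
prob.setup()
prob.set_solver_print(level=2)  # print residuals for all solvers, including the GS solver on 'cycle'
prob.run_model()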
I'm trying to create a basic binary classifier in PyTorch that classifies whether my player plays on the right or the left side in the game Pong. The input is a 1x42x42 image and the label is my player's side (right = 1 or left = 2). The code:
import torch
import torch.nn as nn
from torch.autograd import Variable

class Net(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

net = Net(42 * 42, 100, 2)

# Loss and Optimizer
criterion = nn.CrossEntropyLoss()
optimizer_net = torch.optim.Adam(net.parameters(), 0.001)
net.train()

while True:
    state = get_game_img()
    state = torch.from_numpy(state)

    # right = 1, left = 2
    current_side = get_player_side()
    target = torch.LongTensor(current_side)

    x = Variable(state.view(-1, 42 * 42))
    y = Variable(target)

    optimizer_net.zero_grad()
    y_pred = net(x)
    loss = criterion(y_pred, y)
    loss.backward()
    optimizer.step()
The error I get:
File "train.py", line 109, in train
loss = criterion(y_pred, y)
File "/home/shani/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 206, in __call__
result = self.forward(*input, **kwargs)
File "/home/shani/anaconda2/lib/python2.7/site-packages/torch/nn/modules/loss.py", line 321, in forward
self.weight, self.size_average)
File "/home/shani/anaconda2/lib/python2.7/site-packages/torch/nn/functional.py", line 533, in cross_entropy
return nll_loss(log_softmax(input), target, weight, size_average)
File "/home/shani/anaconda2/lib/python2.7/site-packages/torch/nn/functional.py", line 501, in nll_loss
return f(input, target)
File "/home/shani/anaconda2/lib/python2.7/site-packages/torch/nn/_functions/thnn/auto.py", line 41, in forward
output, *self.additional_args)
RuntimeError: Assertion `cur_target >= 0 && cur_target < n_classes' failed. at /py/conda-bld/pytorch_1493676237139/work/torch/lib/THNN/generic/ClassNLLCriterion.c:57
For most deep learning libraries, the target (or label) should start from 0.
That means your targets should be in the range [0, n) for n classes.
It looks like PyTorch expects zero-based labels (0/1 in your case) and you are probably feeding it one-based labels (1/2).
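A minimal sketch of that fix, assuming current_side is 1 for right and 2 for left as in the question, is to shift the label down by one before building the tensor (and to wrap it in a list, so LongTensor builds a one-element tensor of class indices rather than an uninitialized tensor of that size):

# right = 1, left = 2  ->  remap to zero-based classes: right = 0, left = 1
current_side = get_player_side()
target = torch.LongTensor([current_side - 1])  # CrossEntropyLoss expects class indices in [0, num_classes)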
I had the same error in my program, and I realized the problem was the number of output nodes in my neural network.
In my program the number of output nodes of my model was not equal to the number of labels in the dataset.
The number of outputs was 1 while the number of target labels was 10; once I changed the number of outputs to 10, the error was gone.
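In other words (a hypothetical sketch, not the poster's actual model), the size of the final layer must match the number of distinct classes in the dataset:

import torch.nn as nn

num_classes = 10                   # number of distinct labels in the dataset
model = nn.Sequential(
    nn.Linear(42 * 42, 100),
    nn.ReLU(),
    nn.Linear(100, num_classes),   # output size must equal num_classes, not 1
)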
A branch of operator theory studies the shift operator S. Basically, given a graph with a weight assigned to each vertex, the shift operator produces a new graph (B) by taking the same graph (A) and replacing the weight of each vertex with the sum of the weights of its neighbors. For example, the 3 in graph (A) is replaced by 5 + 5 + 2 + 0 = 12.
[Graph A]
[Graph B]
Does anyone know if networkx can help me automate such a process for an arbitrary graph G? Also, what are the limits on the size (vertices, edges, etc.) of the graphs I can construct?
First you need to create a graph and add the node weights.
I name the nodes with letters from a to h.
For larger graphs you'll need a different way of naming nodes (so each node has a unique name).
In the code below I also draw the node names.
Note that I manually set the node positions so I have the same example as you.
For larger graphs check out graph layouts.
import networkx as nx
from matplotlib import pyplot as plt

G = nx.Graph()

nodes = [
    ['a', {'weight': 5}],
    ['b', {'weight': 4}],
    ['c', {'weight': 2}],
    ['d', {'weight': 3}],
    ['e', {'weight': 5}],
    ['f', {'weight': 0}],
    ['g', {'weight': 0}],
    ['h', {'weight': 1}]
]

for node in nodes:
    G.add_node(node[0], node[1])  # add node and node weight from list

G.add_edges_from([
    ('a', 'd'),
    ('b', 'e'),
    ('c', 'd'),
    ('d', 'e'),
    ('d', 'g'),
    ('e', 'h'),
    ('e', 'f')
])

pos = {'a': (1, 2), 'b': (2, 2), 'c': (0, 1), 'd': (1, 1), 'e': (2, 1), 'f': (3, 1), 'g': (1, 0), 'h': (2, 0)}  # manual fixed positions

plt.figure()
nx.draw(G, pos=pos, with_labels=True, node_size=700, node_color='w')  # draw node names
plt.show()
Output:
Here is the code which draws the node weights:
plt.figure()
nx.draw(G, pos=pos, labels=nx.get_node_attributes(G, 'weight'), node_size=700, node_color='w') # draw node weights
plt.show()
And finally, the code for calculating your shift operator S.
You can get the neighbors of a node node with G[node].
The weight attribute of a neighbor can be accessed with G.node[neighbor]['weight'].
Using that and a list comprehension, I sum the weights of all neighbors of the current node. Note that the new weights are set with nx.set_node_attributes(G, 'weight', new_weights).
new_weights = {}
for node in G.nodes():
    new_weights[node] = sum([G.node[neighbor]['weight'] for neighbor in G[node]])  # sum weights of all neighbors of current node

nx.set_node_attributes(G, 'weight', new_weights)  # set new weights

plt.figure()
nx.draw(G, pos=pos, labels=nx.get_node_attributes(G, 'weight'), node_size=700, node_color='w')  # draw new node weights
plt.show()
Final graph:
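For larger graphs, the same shift can also be written (as a sketch, assuming the graph G and the 'weight' attributes defined above) as a multiplication of the adjacency matrix with the vector of node weights, which avoids the explicit Python loop:

import numpy as np

nodelist = list(G.nodes())
A = nx.adjacency_matrix(G, nodelist=nodelist)   # sparse adjacency matrix (entries 1 for each edge)
weights = nx.get_node_attributes(G, 'weight')
w = np.array([weights[n] for n in nodelist])
shifted = A.dot(w)                              # each entry is the sum of the neighbors' weights
new_weights = dict(zip(nodelist, shifted))
nx.set_node_attributes(G, 'weight', new_weights)  # same call style as above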
I have an MxNx2 array of 2D points, where each point represents the center of a measured property of a grid. The graphical representation is below, with white points being the positions:
The point structure is like this (shape: MxNx2):
[[[xij, yij], [xij, yij], ...],
 [[xij, yij], [xij, yij], ...],
 [[xij, yij], [xij, yij], ...],
 ...,
 [[xij, yij], [xij, yij], ...]]
The desired output would be like this:
[[[x1, x2], [y1, y2]],
 [[x1, x2], [y1, y2]],
 ...,
 [[x1, x2], [y1, y2]]]
So that I could plot every segment one by one (using each pair of x,y positions) like this:
I have been trying something similar to:
segments = []
for row in xrange(a.shape[0] - 1):
    for col in xrange(a.shape[1] - 1):
        here = a[row, col]
        below = a[row+1, col]
        right = a[row, col+1]
        segments.extend(((here, right), (here, below)))
but that leaves the right and bottom edges uncovered. Also, I suspect this is a somewhat "dumb", non-vectorized, brute-force way of doing it; it seems like a common enough problem that there might already be a mesh-creating function for it.
Any suggestion is welcome!
It can be done by adding the segments separately for each axis:
segments = []
for row in xrange(a.shape[0]):
    segments.extend((a[row, col], a[row, col+1]) for col in xrange(a.shape[1] - 1))
for col in xrange(a.shape[1]):
    segments.extend((a[row, col], a[row+1, col]) for row in xrange(a.shape[0] - 1))
Or with zip():
s1 = (a.shape[0]*(a.shape[1]-1), 2)
s2 = (a.shape[1]*(a.shape[0]-1), 2)
segments = list(zip(a[:, :-1].reshape(s1), a[:, 1:].reshape(s1))) + \
           list(zip(a[:-1, :].reshape(s2), a[1:, :].reshape(s2)))
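As a quick sanity check (a hypothetical example, assuming a small 2x3 grid of points), the vectorized version produces one segment per neighboring pair:

import numpy as np

# hypothetical 2x3 grid of points: a[i, j] = [j, i]
a = np.dstack(np.meshgrid(np.arange(3), np.arange(2)))  # shape (2, 3, 2)

s1 = (a.shape[0]*(a.shape[1]-1), 2)
s2 = (a.shape[1]*(a.shape[0]-1), 2)
segments = list(zip(a[:, :-1].reshape(s1), a[:, 1:].reshape(s1))) + \
           list(zip(a[:-1, :].reshape(s2), a[1:, :].reshape(s2)))

print(len(segments))  # 4 horizontal + 3 vertical = 7 segments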
In case someone is interested, I modified the code I was using and it now works, perhaps not so elegantly or efficiently, but...
pairs = []
for row in xrange(pointarray.shape[0]):
    for col in xrange(pointarray.shape[1]):
        here = pointarray[row, col]
        if row < pointarray.shape[0] - 1:
            below = pointarray[row+1, col]
            pairs.append((here, below))
        if col < pointarray.shape[1] - 1:
            right = pointarray[row, col+1]
            pairs.append((here, right))
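To actually draw the segments, one option (an assumption; the post does not say which plotting method is used) is matplotlib's LineCollection, which accepts a list of point pairs like pairs directly:

import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

fig, ax = plt.subplots()
ax.add_collection(LineCollection(pairs, colors='k'))  # each pair is ((x1, y1), (x2, y2))
ax.autoscale()  # rescale the axes to the extent of the added segments
plt.show()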