Suppose I'm optimising the number of wind turbines in a wind farm. The shape of the layout is a variable driven by the optimiser.
If I don't declare a shape or val in the parameter, I get the error:
ValueError: Shape of output 'U' must be specified because 'val' is not set
But the shape of U depends on the size of the input parameter, which is unknown.
Should I build an external module that the optimiser writes the shape to, and have my Component read the shape from there? Or is there a much simpler way? Thanks!
It can't really be unknown. I would usually handle this with an argument to the __init__ method: when you set up your class, you pass in the size of the variables you need (or some number that lets you compute those sizes, like n_turbines). I would do this in a subclass of Problem that you define.
The optimizer can't change the size of that input live; it must be fixed. If at some later time you want to change the size of the problem, just create a new instance of your Problem and pass in the new size. You'll have to re-run setup, but that shouldn't be hugely expensive.
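For example, a minimal sketch (assuming the OpenMDAO 1.x-style API that the error message suggests; the component name and the placeholder physics are made up):

import numpy as np
from openmdao.api import Component

class WindSpeedComp(Component):
    def __init__(self, n_turbines):
        super(WindSpeedComp, self).__init__()
        self.n = n_turbines
        # Sizes are known here, so shapes can be declared explicitly.
        self.add_param('positions', val=np.zeros((n_turbines, 2)))
        self.add_output('U', shape=(n_turbines,))

    def solve_nonlinear(self, params, unknowns, resids):
        # Placeholder physics: put the real wake/wind-speed model here.
        unknowns['U'] = np.ones(self.n)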
I'm programming an Ignition project for work. Part of its job is to detect when a tote has been placed on the scale, then tell the scale to tare.
The values from the scale head (gross weight, tare weight, etc.) are readable from my project and I will raise a flag telling the scale to tare when it is the right time.
So far the best I can think of is to take an average weight of the totes (which are usually the same size and shape), give a small tolerance, and any time a gross weight is within tolerance, tell the scale to tare. If there are outliers, there will probably have to be a manual tare button (as a failsafe).
Ignition uses Jython.
I was wondering if there are better methods for automatically taring a scale or if someone can approve/critique my current ideas.
I have not tried my intended method yet; I'm looking for feedback before I go in the wrong direction.
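In rough code, the idea I have in mind is something like this (the numbers are made up; Python 2 syntax, since Ignition's Jython is Python 2.7):

EXPECTED_TOTE_WEIGHT = 2.5  # average empty-tote weight, in the scale's units
TOLERANCE = 0.2             # allowed deviation from that average

def should_auto_tare(gross_weight, already_tared):
    # Only tare once per tote, and only when the gross weight
    # looks like an empty tote sitting on the scale.
    if already_tared:
        return False
    return abs(gross_weight - EXPECTED_TOTE_WEIGHT) <= TOLERANCE

Outliers would fall through to the manual tare button, and there would probably need to be a settling delay so the scale isn't tared while the reading is still moving.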
I need to grab every pixel value of a raster image (.tif, single band, with the pixel value representing elevation) and compare it with another image to see whether the pixel values are identical. I tried gdalcompare.py, but this only reports generic differences such as file name, file type, file size, etc.
I only have access to free software; it would be awesome to find out a way to do this, as my Google searches have been futile.
You can probably use ImageMagick's compare tool for this. (If the usage examples on that page aren't enough, there's more here.)
For example, this command would compare image1.tiff and image2.tiff, output the number of differing pixels (other metrics are available too) to the console and write a difference map to differing_pixels.tiff.
compare -metric AE image1.tiff image2.tiff differing_pixels.tiff
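Alternatively, since gdalcompare.py was already within reach, GDAL's Python bindings plus NumPy can do the per-pixel comparison directly. A minimal sketch (the file names are examples):

import numpy as np
from osgeo import gdal

a = gdal.Open('image1.tif').ReadAsArray()
b = gdal.Open('image2.tif').ReadAsArray()

diff = a != b  # for float elevation bands, consider ~np.isclose(a, b) instead
print('differing pixels:', int(diff.sum()))

rows, cols = np.nonzero(diff)  # row/column indices of the mismatches, if needed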
I'm trying to apply a texture to a geometry created at runtime, reading a binary asset from a remote server.
I create the geometry assigning UVs (geometry.faceVertexUvs = uvs;), normals (face.vertexNormals.push(...)) and tangents (face.vertexTangents.push(...)).
If I try to create a mesh with a basic material, there are no problems, but when I create the mesh with that geometry and try to apply my texture, WebGL doesn't display any geometry and I get this warning:
[.WebGLRenderingContext]GL ERROR :GL_INVALID_OPERATION : glDrawElements: attempt to access out of range vertices in attribute 1
Does anybody know what is going on? I think something is wrong with my geometry, because if I use a THREE.Sphere, I can apply the texture just fine.
But everyone told me that in order to apply a texture I need UVs, and I have them.
I think that my faceVertexUvs is wrong.
The real question is: should geometry.faceVertexUvs.length be equal to geometry.vertices.length, or should it be equal to geometry.faces.length?
Thank you very much.
PS: I've already read the following posts
WebGL drawElements out of range?
Three JS Map Material causes WebGL Warning
THREEjs can't use the material on the JSON Model when initializing. Gives me WebGL errors
Loading a texture for a custom geometry causes “GL_INVALID_OPERATION” error
Problem solved!
@GuyGood: you're right that every vertex needs a UV Vector2, but it's wrong to say that geometry.faceVertexUvs.length should be equal to geometry.vertices.length.
It turns out that faceVertexUvs is not a flat array but an array of arrays (not strictly a matrix). In fact, I think it can be used to handle multi-mesh objects: if faceVertexUvs.length == 3, there are 3 sub-meshes, so 3 arrays. Each of those arrays has a length equal to the number of faces in the corresponding sub-mesh, and each face entry holds the UV mapping for the 3 vertices defining that face.
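Purely as an illustration of the nesting (written Python-style to match the other sketches in this digest; the real structure is a JavaScript array holding THREE.Vector2 objects):

n_faces = 2
uv = (0.0, 0.0)  # stand-in for a THREE.Vector2

face_vertex_uvs = [                          # outer: one entry per UV set / sub-mesh
    [[uv, uv, uv] for _ in range(n_faces)]   # inner: one UV triple per face
]

assert len(face_vertex_uvs[0]) == n_faces    # matches faces.length, not vertices.length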
hope this is clear and helpful!!
I have 3d models made in Blender. I need to load them for further rendering using OpenGL ES.
I also want to implement picking, so I need to know the object dimensions. So far I have not found an option like "export dimensions" when exporting to Wavefront OBJ in Blender, so I decided to calculate them manually.
So, I have vertex and face data. How do I calculate the object dimensions? A rough result is fine. Or am I on the wrong track?
Isn't it just the minimum and maximum coordinate values over all vertices for each of the X, Y and Z axes?
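In code, that's just an axis-aligned bounding box. A minimal sketch, assuming vertices is a list of (x, y, z) tuples parsed from the OBJ file:

def bounding_box(vertices):
    xs, ys, zs = zip(*vertices)
    mins = (min(xs), min(ys), min(zs))
    maxs = (max(xs), max(ys), max(zs))
    dims = tuple(hi - lo for lo, hi in zip(mins, maxs))  # width, height, depth
    return mins, maxs, dims

verts = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.5), (-1.0, 0.5, 2.0)]
print(bounding_box(verts))  # ((-1.0, 0.0, 0.0), (1.0, 2.0, 2.0), (2.0, 2.0, 2.0))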
I was wondering what kind of model / method / technique Trendly might use to produce a fit like this:
[It tries to find the moments where significant changes set in and ignores random movements]
Any pointers very welcome! :)
I've never seen 'Trendly', and don't know anything about it, but if I wanted to produce that red line from that blue line in an algorithmic fashion, I would try the following (a rough sketch in code follows the list):
1. Fourier the whole data set.
2. Choose a block size longer than the period of the dominant frequency.
3. Divide the data up into blocks of the chosen size.
4. Compare adjacent blocks with a statistical test of some sort.
5. Where the test says two blocks belong to the same underlying distribution, merge them.
6. If any were merged, go back to 4.
7. The red trend line is the mean of each block.
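A minimal sketch of steps 3-7 (it skips the Fourier step and takes the block size as an input, and it uses a two-sample t-test as the "statistical test of some sort"; data is assumed to be a 1-D NumPy array):

import numpy as np
from scipy import stats

def merge_blocks(data, block_size, alpha=0.05):
    # Step 3: split into fixed-size blocks.
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    merged = True
    while merged:                          # step 6: repeat until nothing merges
        merged = False
        out = [blocks[0]]
        for blk in blocks[1:]:
            # Step 4: same underlying distribution as the previous block?
            _, p = stats.ttest_ind(out[-1], blk)
            if p > alpha:                  # step 5: indistinguishable -> merge
                out[-1] = np.concatenate([out[-1], blk])
                merged = True
            else:
                out.append(blk)
        blocks = out
    # Step 7: the trend line is the mean of each remaining block.
    return [(len(b), float(np.mean(b))) for b in blocks]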
A simple "median" function could produce smoother curves over a mostly un-smooth curve.
Otherwise, a brute-force or genetic algorithm could be used; attempting to find the way to split the data into sections, so that more sections = worse solution, and less accuracy of the lines = worse solution.
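A minimal sketch of the running-median idea (the window size is a free parameter):

import numpy as np

def running_median(data, window=11):
    half = window // 2
    padded = np.pad(data, half, mode='edge')  # repeat edge values at both ends
    return np.array([np.median(padded[i:i + window]) for i in range(len(data))])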
Another way would be like this: start at the beginning, and as soon as the line moves outside of some radius (3 above or 3 below the first value, for instance), set the new height to the average of the current value and the previous marker.
If you keep doing that, it will ignore small fluctuations; if a fluctuation is large enough, though, it will still affect the result.
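A minimal sketch of that idea, using the radius of 3 from the example:

def deadband_trend(data, radius=3.0):
    level = data[0]
    out = []
    for x in data:
        if abs(x - level) > radius:
            # Large move: shift the level to the average of the new
            # value and the previous marker.
            level = (x + level) / 2.0
        out.append(level)  # small fluctuations leave the level unchanged
    return out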