How can I generate a mask on a solid, or create a custom (complex) drawing on that solid, in Adobe After Effects only via scripting?

I'm making an After Effects script that generates simple shapes & animations for kids, and I'm trying to avoid importing vector shapes from Illustrator into After Effects to animate them. That works perfectly with simple shapes such as squares and circles.
Is there any solution for generating complex shapes inside the ExtendScript Toolkit, in pure code with no imports and no reading from a .txt file: just set the vertices, position, and color of the shape and apply it to a new solid as a mask by running the script inside After Effects?
If I wanted to do it manually, I would add a new solid, copy the first path from Illustrator, go back to After Effects to paste it on that solid, then add another solid, go back to Illustrator, copy another path, return to After Effects, paste it on solid 2, and repeat the process until the final result appears.
I want to end this switching between the two programs, save the drawing as arrays of [vertices], [in-tangents], and [out-tangents], and call it whenever I want!
[Screenshots: running the script, and the result]
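For illustration, the kind of script the question is after could look like this sketch (the solid's color and size and all vertex values are made-up; Shape and the mask properties are the standard After Effects scripting API):
var comp = app.project.activeItem; // assumes a comp is open and selected
var solid = comp.layers.addSolid([0.9, 0.4, 0.1], "Drawing", comp.width, comp.height, 1, comp.duration);
// the drawing stored as plain arrays
var myShape = new Shape();
myShape.vertices = [[100, 100], [500, 100], [500, 400], [100, 400]];
myShape.inTangents = [[0, 0], [0, 0], [0, 0], [0, 0]];
myShape.outTangents = [[0, 0], [0, 0], [0, 0], [0, 0]];
myShape.closed = true;
// apply it to the solid as a mask
var mask = solid.property("Masks").addProperty("Mask");
mask.property("maskShape").setValue(myShape);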

I've done it like this; it can be used to import any kind of footage:
var path = "File Path";
var input = new ImportOptions(File(path));
if (input.canImportAs(ImportAsType.FOOTAGE)) {
    input.importAs = ImportAsType.FOOTAGE;
}
Or, if you want to import an image sequence, you can do it like this:
// or if your footage is an image sequence
input.sequence = true;
input.forceAlphabetical = true;
var imageSequence = app.project.importFile(input);
imageSequence.name = "My automatically imported footage";
var theComp = app.project.activeItem; // import into the currently selected composition
theComp.layers.add(imageSequence);

I know how to create simple vector objects via script, but I'm not sure if it works the way you want.
An example of a rectangle inside a group:
var shapeLayer = newComp.layers.addShape(); // adding shape layer
shapeLayer.name = "bannerLayer"; // name the shape layer
var shapeGroup1 = shapeLayer.property("Contents").addProperty("ADBE Vector Group"); // creating group 1
shapeGroup1.name = "Banner"; // name group 1
var myRect = shapeGroup1.property("Contents").addProperty("ADBE Vector Shape - Rect"); // adding a rectangle to group 1
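The rectangle's dimensions can then be set through its properties; a one-line sketch with assumed values:
myRect.property("Size").setValue([200, 100]); // width, height in pixels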
Another example of a more complex shape: a triangle added to an existing shape layer. You can use this code as a base to create more complex shapes.
var shapeLayer = newComp.layers.addShape(); // adding shape layer
shapeLayer.name = "bannerLayer"; // name the shape layer
var shapeGroup1 = shapeLayer.property("Contents").addProperty("ADBE Vector Group"); // creating a group1
shapeGroup1.name = "Banner"; //name the group1
var myRect = shapeGroup1.property("Contents").addProperty("ADBE Vector Shape - Rect"); // adding a rectangle to group 1
// construct a Shape object that forms a triangle
var myTriShape = new Shape();
myTriShape.vertices = [[-50,50], [50,50], [0,100]];
myTriShape.closed = true;
// add a Path group to our existing shape layer
var myTriGroup = shapeLayer.property("Contents").addProperty("ADBE Vector Group"); // adding a group for the triangle
myTriGroup.name = "Triangle";
var myTri = myTriGroup.property("Contents").addProperty("ADBE Vector Shape - Group");
// set the Path property in the group to our triangle shape
myTri.property("Path").setValue(myTriShape);
You can find more information on this page, which I found via Google:
Check this link https://forums.creativecow.net/docs/forums/post.php?forumid=2&postid=1119306&univpostid=1119306&pview=t

Related

image.filter is not a function in Google Earth Engine

As a newbie to Google Earth Engine, I have been trying something (https://code.earthengine.google.com/6f45059a59b75757c88ce2d3869fc9fd) following a NASA tutorial (https://www.youtube.com/watch?v=JFvxudueT_k&ab_channel=NASAVideo). My last line (line 60) shows image.filter is not a function, while the one in the tutorial (line 34) works. I am not sure what happened or how to sort this out.
//creating a new variable 'image' from the L8 collection data imported
var image = ee.Image (L8_tier1 //the details in the data will represent that the band resolution is 30m
//.filterDate ("2019-07-01","2021-10-03") //for a specific date range. maybe good to remove it for the function.
.filterBounds (ROI) //for the region of interest we are interested in
//.sort ("CLOUD_COVER") //for sorting the data between the range with a cloud cover, the metadata property we are interested in. Other way to do this is using the function below.
//.first() //this will make the image choose the first image with the least amount of cloud cover for the area. Other way to do this is using the function below.
);
//print ("Hague and Rotterdam", image); //printing the image in the console
//console on the right hand side will explain everything from the data
//id will show the image details and date of the image, for this case 29th July 2019
//under the properties tab cloud cover can be found, this is the least we can get for this area during this period
// //visualisation of the data in the map with true color rendering
// var trueColour = {
// bands:["SR_B4","SR_B3","SR_B2"],
// min: 5000,
// max: 12000
// };
// Map.centerObject (ROI, 12); //for the centering the area in the center of the map with required zoom level
// Map.addLayer (image, trueColour, "Hague and Rotterdam"); //for adding the image with the variable of bands we made and naming the image
//Alternate way
//Function to cloud mask from the qa_pixel band of Landsat 8 SR data. In this case bits 3 and 4 are clouds and cloud shadow respectively. This can be different for different image sets.
function maskL8sr(image) {
  var cloudsBitMask = 1 << 3; //remember to check this with the source
  var cloudshadowBitMask = 1 << 4; //remember to check this with the source
  var qa = image.select ('qa_pixel'); //creating the new variable from the band of the source image
  var mask = qa.bitwiseAnd(cloudsBitMask).eq(0) //making the cloud equal to zero to mask them out
      .and(qa.bitwiseAnd(cloudshadowBitMask).eq(0)); //making the cloud shadow equal to zero to mask them out
  return image.updateMask(mask).divide(10000)
      .select("SR_B[0-9]*")
      .copyProperties(image, ["system:time_start"]);
}
// print ("Hague and Rotterdam", image);// look into the console now. How many images the code have downloaded!!!
//filtering imagery for 2015 to 2021 summer date ranges
//creating joint filter and applying to image collection
var sum21 = ee.Filter.date ('2021-06-01','2021-09-30');
var sum20 = ee.Filter.date ('2020-06-01','2020-09-30');
var sum19 = ee.Filter.date ('2019-06-01','2019-09-30');
var sum18 = ee.Filter.date ('2018-06-01','2018-09-30');
var sum17 = ee.Filter.date ('2017-06-01','2017-09-30');
var sum16 = ee.Filter.date ('2016-06-01','2016-09-30');
var sum15 = ee.Filter.date ('2015-06-01','2015-09-30');
var SumFilter = ee.Filter.or(sum21, sum20, sum19, sum18, sum17, sum16, sum15);
var allsum = image.filter(SumFilter);
Filtering is an operation you can do on ImageCollections, not individual Images, because all filtering does is choose a subset of the images. Then, in your script, you have (with the comments removed):
var image = ee.Image (L8_tier1
.filterBounds (ROI)
);
The result of L8_tier1.filterBounds(ROI) is indeed an ImageCollection. But in this case, you have told the Earth Engine client that it should be treated as an Image, and it believed you. So, then, the last line
var allsum = image.filter(SumFilter);
fails with the error you saw because there is no filter() on ee.Image.
The script will successfully run if you change ee.Image(...) to ee.ImageCollection(...), or even better, remove the cast because it's not necessary — that is,
var image = L8_tier1.filterBounds(ROI);
You should probably also change the name of var image too, since it is confusing to call an ImageCollection by the name image. Naming things accurately helps avoid mistakes, while you are working on the code and also when others try to read it or build on it.
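A minimal sketch of the corrected ending, with the collection renamed to a hypothetical summerImages for clarity:
var summerImages = L8_tier1.filterBounds(ROI); // an ImageCollection, so filter() exists
var allsum = summerImages.filter(SumFilter);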

How to add a FontAwesome icon as a layer in GeoTools using JavaFX (desktop app)?

I have been using Java 17 and I'm unable to add icons to the map as a layer. Please help me.
void drawTarget(double x, double y) {
    SimpleFeatureTypeBuilder builder = new SimpleFeatureTypeBuilder();
    builder.setName("MyFeatureType");
    builder.setCRS(DefaultGeographicCRS.WGS84); // set crs
    builder.add("location", LineString.class); // add geometry

    // build the type
    SimpleFeatureType TYPE = builder.buildFeatureType();

    // create features using the type defined
    SimpleFeatureBuilder featureBuilder = new SimpleFeatureBuilder(TYPE);

    // GeometryFactory geometryFactory = JTSFactoryFinder.getGeometryFactory();
    // Coordinate[] coords =
    //     new Coordinate[] {new Coordinate(79, 25.00), new Coordinate(x, y)};
    // line = geometryFactory.createLineString(coords);
    // ln = new javafx.scene.shape.Line();

    FontAwesomeIcon faico = new FontAwesomeIcon();
    faico.setIconName("FIGHTER_JET");
    faico.setX(76);
    faico.setY(25);
    faico.setVisible(true);

    // TranslateTransition trans = new TranslateTransition();
    // trans.setNode(faico);

    featureBuilder.add(faico);
    SimpleFeature feature = featureBuilder.buildFeature("FeaturePoint");
    DefaultFeatureCollection featureCollection = new DefaultFeatureCollection("external", TYPE);
    featureCollection.add(feature); // Add feature 1, 2, 3, etc

    Style style5 = SLD.createLineStyle(Color.YELLOW, 2f);
    Layer layer5 = new FeatureLayer(featureCollection, style5);
    map.addLayer(layer5);
    // mapFrame.getMapPane().repaint();
}
I want to add a font-awesome icon to the map
Currently, your code is attempting to use an Icon as a Geometry in your feature. I'm guessing that's what isn't working since you don't say.
If you want to use an Icon to display the location of a Feature then you will need two things.
1. A valid geometry in your feature, probably a point (since an Icon is normally a point).
2. A valid Style to be used by the Renderer to draw your feature(s) on the map. Currently, you are asking for the line in your feature to be drawn using a yellow line (style5 = SLD.createLineStyle(Color.YELLOW, 2f);).
I can't really help with step 1, since I don't know where your fighter jet currently is.
For step 2 I suggest you look at the SLD resources to give you some clues of how the styling system works before going on the manual to see how GeoTools implements that.
Since you are trying to add an Icon I suggest you'd need something like:
List<GraphicalSymbol> symbols = new ArrayList<>();
symbols.add(sf.externalGraphic(svg, "svg", null)); // svg preferred
symbols.add(sf.externalGraphic(png, "png", null)); // png preferred
symbols.add(sf.mark(ff.literal("circle"), fill, stroke)); // simple circle backup plan
Expression opacity = null; // use default
Expression size = ff.literal(10);
Expression rotation = null; // use default
AnchorPoint anchor = null; // use default
Displacement displacement = null; // use default
// define a point symbolizer of a small circle
Graphic city = sf.graphic(symbols, opacity, size, rotation, anchor, displacement);
PointSymbolizer pointSymbolizer =
sf.pointSymbolizer("point", ff.property("the_geom"), null, null, city);
rule1.symbolizers().add(pointSymbolizer);
featureTypeStyle.rules().add(rule1);
But that assumes that you can convert your FontAwesomeIcon into a static representation that the renderer can draw (png, svg). If it doesn't work like that (I don't use JavaFX) then you may need to add a new MarkFactory to handle them.
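For step 1, a sketch of giving the feature a real Point geometry instead of the icon (the coordinates and type name are assumptions; GeometryFactory, Coordinate and Point come from JTS, JTSFactoryFinder from GeoTools):
SimpleFeatureTypeBuilder builder = new SimpleFeatureTypeBuilder();
builder.setName("JetType");
builder.setCRS(DefaultGeographicCRS.WGS84);
builder.add("the_geom", Point.class); // a Point geometry, matching ff.property("the_geom") above
SimpleFeatureType TYPE = builder.buildFeatureType();

GeometryFactory gf = JTSFactoryFinder.getGeometryFactory();
Point position = gf.createPoint(new Coordinate(76, 25)); // assumed jet position (lon, lat)
SimpleFeatureBuilder featureBuilder = new SimpleFeatureBuilder(TYPE);
featureBuilder.add(position);
SimpleFeature jet = featureBuilder.buildFeature("jet-1");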

How can I merge geometries in A-Frame without losing material information?

I have a large set of block objects using a custom geometry, that I am hoping to merge into a smaller number of larger geometries, as I believe this will reduce rendering costs.
I have been following guidance here: https://aframe.io/docs/1.2.0/introduction/best-practices.html#performance which has led me to the geometry-merger component here:
https://github.com/supermedium/superframe/tree/master/components/geometry-merger/
The A-Frame docs say:
"You can use geometry-merger and then make use a three.js material with vertex colors enabled. three.js geometries keep data such as color, uvs per vertex."
The geometry-merger component also says:
"Useful if using vertex or face coloring as individual geometries' colors can still be manipulated individually since this component keeps a faceIndex and vertexIndex."
However I have a couple of problems:
1. If I set vertexColors on my material (as suggested by the A-Frame docs), then this ruins the appearance of my blocks.
2. Whether or not I set vertexColors on my material, all material information seems to be lost when the geometries are merged, and everything just ends up white.
See this glitch for a demonstration of both problems.
https://tundra-mercurial-garden.glitch.me/
My suspicion is that the A-Frame geometry-merger component just won't do what I need here, and I need to implement something myself using the underlying three.js functions.
Is that right, or is there a way that I could make this work using geometry-merger?
For the vertexColors to work, you need to have your vertices coloured :)
More specifically - the BufferGeometry expects an array of rgb values for each vertex - which will be used as color for the material.
In this bit of code:
var geometry = new THREE.BoxGeometry();
var mat = new THREE.MeshStandardMaterial({color: 0xffffff, vertexColors: THREE.FaceColors});
var mesh = new THREE.Mesh(geometry, mat);
The mesh will be black unless the geometry contains information about the vertex colors:
// create a color attribute in the geometry (one rgb triplet per vertex;
// vertices_count is assumed to be the number of vertices)
geometry.setAttribute('color', new THREE.BufferAttribute(new Float32Array(vertices_count * 3), 3));
// grab the array
const colors = geometry.attributes.color.array;
// fill the array with rgb values
const faceColor = new THREE.Color(color_hex);
for (var i = 0; i < colors.length; i += 3) {
  colors[i + 0] = faceColor.r;
  colors[i + 1] = faceColor.g;
  colors[i + 2] = faceColor.b;
}
// tell the geometry to update the color attribute
geometry.attributes.color.needsUpdate = true;
I can't make the buffer-geometry-merger component work for some reason, but its core seems to be valid:
AFRAME.registerComponent("merger", {
  init: function() {
    // replace with an event where all child entities are ready
    setTimeout(this.mergeChildren.bind(this), 500);
  },
  mergeChildren: function() {
    const geometries = [];
    // traverse the children and store all geometries
    this.el.object3D.traverse(node => {
      if (node.type === "Mesh") {
        const geometry = node.geometry.clone();
        geometry.applyMatrix4(node.parent.matrix);
        geometries.push(geometry);
        // dispose of the merged meshes
        node.parent.remove(node);
        node.geometry.dispose();
        node.material.dispose();
      }
    });
    // create a mesh from the "merged" geometry
    const mergedGeo = THREE.BufferGeometryUtils.mergeBufferGeometries(geometries);
    const mergedMaterial = new THREE.MeshStandardMaterial({color: 0xffffff, roughness: 0.3, vertexColors: THREE.FaceColors});
    const mergedMesh = new THREE.Mesh(mergedGeo, mergedMaterial);
    this.el.object3D.add(mergedMesh);
  }
})
You can check it out in this glitch. There is also an example of using the vertex colors here (source).
I agree it sounds like you need to consider other solutions. Here are two different instances of instancing with A-Frame:
https://github.com/takahirox/aframe-instancing
https://github.com/EX3D/aframe-InstancedMesh
Neither are perfect or even fully finished, but can hopefully get you started as a guide.
Although my original question was about geometry merging, I now believe that Instanced Meshes were a better solution in this case.
Based on this suggestion I implemented this new A-Frame Component:
https://github.com/diarmidmackenzie/instanced-mesh
This glitch shows the scene from the original glitch being rendered with just 19 calls using this component. That compares pretty well with > 200 calls that would have been required if every object were rendered individually.
https://dull-stump-psychology.glitch.me/
A key limitation is that I was not able to use a single mesh for all the different block colors, but had to use one mesh per color (7 meshes total).
InstancedMesh can support different colored elements, but each element must have a single color, whereas the elements in this scene had 2 colors each (black frame + face color).
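For reference, the three.js feature underneath is THREE.InstancedMesh with one color per instance via setColorAt(). A minimal sketch (not the component's actual code; counts and positions are arbitrary):
const geometry = new THREE.BoxGeometry(1, 1, 1);
const material = new THREE.MeshStandardMaterial();
const count = 100;
const mesh = new THREE.InstancedMesh(geometry, material, count); // one draw call for all instances
const dummy = new THREE.Object3D();
const color = new THREE.Color();
for (let i = 0; i < count; i++) {
  dummy.position.set(Math.random() * 10, 0, Math.random() * 10);
  dummy.updateMatrix();
  mesh.setMatrixAt(i, dummy.matrix); // per-instance transform
  mesh.setColorAt(i, color.setHSL(Math.random(), 0.7, 0.5)); // one flat color per instance
}
mesh.instanceMatrix.needsUpdate = true;
scene.add(mesh);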

Apply gradient to sphere object with JavaFX

I'm working in JavaFX for a class, and I'm trying to apply a gradient to a sphere, but (obviously) I can't figure out how to do it. I'm stuck because I know that a sphere is an object, and so it needs a material, but (as far as colors go) a PhongMaterial only takes one color, so it won't take a gradient, because a gradient is a range of colors. Basically, what I'm trying to do is the following:
Sphere sphere = new Sphere(50);
RadialGradient rg = new RadialGradient(0, 0, 0, 0, 5, true, CycleMethod.REPEAT, /* arbitrary/irrelevant color Stop objects */);
PhongMaterial pm = new PhongMaterial();
pm.setDiffuseColor(rg); // won't compile: a gradient is a Paint, not a Color
sphere.setMaterial(pm);
Now obviously this code doesn't work, but I guess it's the idea/flow of what I'm trying to do.
You are right about one thing: PhongMaterial takes a Color as its diffuse color, and that doesn't allow a gradient. For that, it would have to accept Paint, but that is not the case.
So we have to look for different alternatives.
DiffuseMap
If you check PhongMaterial, you can set the diffuse map with an image. That means that you can use an existing image with some gradient and apply it to the sphere.
Something like this:
Sphere sphere = new Sphere(100);
PhongMaterial material = new PhongMaterial();
material.setDiffuseMap(new Image("http://westciv.com/images/wdblogs/radialgradients/simpleclorstops.png"));
sphere.setMaterial(material);
will produce the following result:
Dynamic DiffuseMap
Obviously, this has the disadvantage of depending on a static image. What if you want to modify that dynamically?
You can do that, if you generate your radial gradient, render it on a secondary scene and take a snapshot of it. This snapshot returns a WritableImage that you can use directly as diffuse map.
Something like this:
Scene aux = new Scene(new StackPane(), 100, 100,
new RadialGradient(0, 0, 0.5, 0.5, 1, true, CycleMethod.REPEAT,
new Stop(0, Color.GREEN), new Stop(0.4, Color.YELLOW),
new Stop(0.6, Color.BLUE), new Stop(0.7, Color.RED)));
WritableImage snapshot = aux.snapshot(null);
Sphere sphere = new Sphere(100);
PhongMaterial material = new PhongMaterial();
material.setDiffuseMap(snapshot);
sphere.setMaterial(material);
You will now have:
Density Map
There is still another option: use a mathematical function to generate a density map, where the colors are given by a mapping of that function.
For that you can't use the built-in Sphere, but you have to either create your own TriangleMesh and play with the texture coordinates, or you can simply use FXyz, an open source JavaFX 3D library with a number of different primitives and texture options.
For this case, you can get the library from Maven Central (org.fxyz3d:fxyz3d:0.3.0), use a SegmentedSphereMesh control, and then select the texture mode Vertices3D:
SegmentedSphereMesh sphere = new SegmentedSphereMesh(100);
sphere.setTextureModeVertices3D(1530, p -> p.z);
Note the function in this case is just based on the z coordinate, but obviously you can modify that as needed.
Check the library (there is a sampler) to explore other options.

three js mirror not reflecting all meshes

Objective:
To simulate a reflective floor (like this) in three.js.
Idea:
Make the floor translucent by setting opacity to 0.5.
Place a Mirror below it to reflect the meshes above it.
Expected Output:
To be able to see reflections of the house via the floor mirror.
Obtained Output:
It doesn't reflect the meshes which are part of the house.
Instead, it reflects only the skybox, and even that only at certain angles.
Screenshots:
Mirror reflecting skybox fully - http://prntscr.com/6yn52y
Mirror reflecting skybox partially - http://prntscr.com/6yn5f7
Mirror not reflecting anything - http://prntscr.com/6yn5qy
Questions:
Why aren't the other meshes of the house reflected through the mirror?
Why is the mirror not reflecting in certain orientations of the camera?
Code Attached:
.......
.......
function getReflectiveFloorMesh(floorMesh) {
  var WIDTH = window.innerWidth;
  var HEIGHT = window.innerHeight;
  floorMirror = new THREE.Mirror(renderer, firstPerson.camera, {
    clipBias: 0.003,
    textureWidth: WIDTH,
    textureHeight: HEIGHT,
    color: 0x889999
  });
  var mirrorMesh = floorMesh.clone();
  mirrorMesh.position.y -= 10; // Placing the mirror just below the actual translucent floor; Fixme: To be tuned
  mirrorMesh.material = floorMirror.material;
  mirrorMesh.material.side = THREE.BackSide; // Fixme: Normals were flipped. How to decide on normals?
  mirrorMesh.material.needsUpdate = true;
  mirrorMesh.add(floorMirror);
  return mirrorMesh;
}
function getSkybox() {
  var urlPrefix = "/img/skybox/sunset/";
  var urls = [urlPrefix + "px.png", urlPrefix + "nx.png",
              urlPrefix + "py.png", urlPrefix + "ny.png",
              urlPrefix + "pz.png", urlPrefix + "nz.png"];
  var textureCube = THREE.ImageUtils.loadTextureCube(urls);
  // init the cube shader
  var shader = THREE.ShaderLib["cube"];
  shader.uniforms["tCube"].value = textureCube;
  var material = new THREE.ShaderMaterial({
    fragmentShader: shader.fragmentShader,
    vertexShader: shader.vertexShader,
    uniforms: shader.uniforms,
    side: THREE.BackSide
  });
  // build the skybox Mesh
  var skyboxMesh = new THREE.Mesh(new THREE.CubeGeometry(10000, 10000, 10000, 1, 1, 1, null, true), material);
  return skyboxMesh;
}
function setupScene(model, floor) {
  scene.add(model); // Adding the house which contains translucent floor
  scene.add(getSkybox()); // Adding Skybox
  scene.add(getReflectiveFloorMesh(floor)); // Adds mirror just below floor
  scope.animate();
}
....
....
this.animate = function () {
  // Render the mirrors
  if (floorMirror)
    floorMirror.render();
  renderer.render(scene, firstPerson.camera);
};
You have to attach the mirror to the mesh before doing any transformation.
So the code would be:
floorMirror = new THREE.Mirror( ... );
var mirrorMesh = floorMesh.clone();
mirrorMesh.add(floorMirror); // attach first!
mirrorMesh.position.y -= 10;
...
But another problem here is that you are cloning mirrorMesh from floorMesh, which has (probably) already been transformed.
At creation, a mirror object has the same default transform matrix as a regular Mesh with plane geometry (which is by default 'vertical').
When you attach the mirror to a floor (or any horizontal mesh), the matrix doesn't match with the mesh one and that's why you don't see the reflections, or only from a certain angle.
So, always attach a mirror to a non-transformed plane mesh, before you apply your transformations (translations or rotations).
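Putting both points together, a sketch of the safer order (plane size and offset are assumptions, using the same THREE.Mirror API as the question):
floorMirror = new THREE.Mirror(renderer, firstPerson.camera, {
  clipBias: 0.003, textureWidth: WIDTH, textureHeight: HEIGHT, color: 0x889999
});
// start from a fresh, untransformed plane instead of cloning the transformed floor
var mirrorMesh = new THREE.Mesh(new THREE.PlaneGeometry(1000, 1000), floorMirror.material);
mirrorMesh.add(floorMirror); // attach first, while the plane is untransformed
mirrorMesh.rotateX(-Math.PI / 2); // then lay it flat
mirrorMesh.position.y = floorMesh.position.y - 10; // and move it below the floor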
Idea:
"Make the floor translucent by setting opacity to 0.5.
Place a Mirror below it to reflect the meshes above it".
I suggest another way: make the floor solid, add the mirror on top, and change the alpha of the mirror instead. I think you have issues with the translucent floor restricting the mirror projection through the alpha.
If you move the mirror out from under the translucent floor, or into an empty scene with just a cube or sphere geometry with a basic material attached, does it reflect as expected?
You may need two mirrors: one for the room, assuming you want polished floorboards, and one for general outside reflection.
