I'm trying to teach myself ExtendScript for After Effects, and I've learned how to reverse engineer a layer and drill down into its properties to get each property's name and its Adobe match name. I was able to write a for loop to do this, but it only covers the first "column" of properties.
What I want: a script that displays the entire hierarchy of the selected layer, showing each property and its sub-properties (see image).
Below is the code I have, which only gives me the first level of property names:
var project = app.project;
var comp = project.activeItem;
var lName = "test";
var theLayer = comp.layer(lName);
/*-- I set a ridiculously high number in the for loop so I can figure out how many
properties there are. Unfortunately ExtendScript (to my knowledge) won't count the
length of a property group, so the try/catch stops the loop. --*/
for (var i = 1; i < 1000; i++) {
    try {
        // Throws once i passes the last property, which ends the loop.
        var probe = theLayer.property(i).name;
        $.writeln(i + " Name: " + theLayer.property(i).name + '\n' +
                  i + " MatchName: " + theLayer.matchName + '\n \n');
    } catch (e) {
        break;
    }
}
This is the output I get for the selected shape layer using the above script.
1 Name: Marker
1 MatchName: ADBE Vector Layer
2 Name: Contents
2 MatchName: ADBE Vector Layer
3 Name: Masks
3 MatchName: ADBE Vector Layer
4 Name: Effects
4 MatchName: ADBE Vector Layer
5 Name: Transform
5 MatchName: ADBE Vector Layer
6 Name: Layer Styles
6 MatchName: ADBE Vector Layer
7 Name: Geometry Options
7 MatchName: ADBE Vector Layer
8 Name: Material Options
8 MatchName: ADBE Vector Layer
9 Name: Audio
9 MatchName: ADBE Vector Layer
10 Name: Sets
10 MatchName: ADBE Vector Layer
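A minimal recursive sketch of this kind of traversal, assuming a comp is active and contains a layer named "test". Note that layers and property groups do expose numProperties (so no try/catch probing is needed), and property(i).matchName, rather than the layer's own matchName, gives each property's match name:
// Recursively print every property and sub-property of a layer.
function dumpProps(group, indent) {
    for (var i = 1; i <= group.numProperties; i++) {
        var prop = group.property(i);
        $.writeln(indent + prop.name + "  [" + prop.matchName + "]");
        // Recurse into property groups to reach their sub-properties.
        if (prop.propertyType === PropertyType.INDEXED_GROUP ||
            prop.propertyType === PropertyType.NAMED_GROUP) {
            dumpProps(prop, indent + "    ");
        }
    }
}
dumpProps(app.project.activeItem.layer("test"), "");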
I am trying to display a list of points on a map, but the latitude and longitude are given as decimals like these:
index    X             Y
1        24050.0000    123783.3333
2        24216.6667    123933.3333
3        24233.3333    123950.0000
4        24233.3333    124016.6667
...
This data is taken from a source (page). It seems I can't use it directly with the Google Maps API, so what should I do?
How can I convert it into a format the Google Maps API can display? I am using JavaScript.
It looks to me like those points are just WGS84 coordinates multiplied by 1000. The code below gives me a reasonable location in Japan, where data is the array of points in the file you reference:
var marker = new google.maps.Marker({
    map: map,
    position: {
        lat: data[i][1] / 1000,
        lng: data[i][2] / 1000
    }
});
proof of concept fiddle
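For completeness, a minimal sketch of plotting every row, assuming map is an initialized google.maps.Map and each row of data is [index, x, y] as in the table above (the sample rows here are copied from the question):
var data = [
    [1, 24050.0000, 123783.3333],
    [2, 24216.6667, 123933.3333],
    [3, 24233.3333, 123950.0000]
];
for (var i = 0; i < data.length; i++) {
    new google.maps.Marker({
        map: map,
        position: { lat: data[i][1] / 1000, lng: data[i][2] / 1000 }
    });
}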
I can't seem to get an answer that I understand. What is a collision when you have a hash table that uses linked nodes?
Is the collision +1 for every index that you must pass to get to the index needed for that node you are adding?
I know that collisions are unavoidable; I've learned that much through my research. But I haven't been able to figure out what constitutes a collision when dealing with a hash table that has linked nodes.
My program, after finding the proper place in the array (an array of pointers to nodes), sticks the new node at the front. Each element points at a node that points at another node, so I essentially have multiple linked lists. So, does the collision count only include the first node of the element where the new node belongs (because I stick it at the front), or does it include every single node in the linked list for that element?
For example, if the name "Smith" hashes to element [5], which already has 5 other nodes linked together, and I add it to the front, how do I decide what the collision count is?
Thanks for any help!
A collision is when 2 distinct entries produce the same output through the hash function.
Say your (poorly designed) hash function H consists of adding up all the digits of a number:
5312 -> 5 + 3 + 1 + 2 = 11
1220 -> 1 + 2 + 2 + 0 = 5
So H(5312) = 11 and H(1220) = 5
H has a lot of collisions (this is why you should not use it):
H(4412) = 4 + 4 + 1 + 2 = 11
H(9200) = 9 + 2 + 0 + 0 = 11
etc...
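As for your counting question: the usual convention with separate chaining (an assumption; conventions vary) is one collision per insert that lands in an already-occupied bucket, not one per node already sitting in the chain. So inserting "Smith" at the front of element [5] would add 1 to the count, not 5. A minimal JavaScript sketch:
// Chained hash table that counts one collision per insert into a
// non-empty bucket, regardless of how long that bucket's chain is.
function HashTable(size) {
    this.buckets = new Array(size); // each slot holds the head of a linked list
    this.collisions = 0;
}
HashTable.prototype.insert = function (key, value) {
    var index = hash(key) % this.buckets.length;
    if (this.buckets[index]) this.collisions++; // bucket already occupied
    // Stick the new node at the front, as in the question.
    this.buckets[index] = { key: key, value: value, next: this.buckets[index] };
};
// Toy hash for illustration only (a character-code sum collides a lot).
function hash(key) {
    var h = 0, s = String(key);
    for (var i = 0; i < s.length; i++) h += s.charCodeAt(i);
    return h;
}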
I've got a DXF (rev 10) CAD file with some 2D drawings, and I'm implementing a reader. So far I've successfully loaded everything and rasterized it with ImageMagick.
But here's the thing: I manually set the zoom on the coordinates to a number that made sense to me. How do I know what the original size of the components was, and what unit was used to draw them? Is there a specific group I have to look at?
My header is like this:
0
SECTION
2
HEADER
9
$ACADVER
1
AC1006
9
$EXTMIN
10
-14.610075
20
-14.723197
9
$EXTMAX
10
14.556421
20
15.530217
9
$LTSCALE
40
0.000394
9
$PDMODE
70
35
9
$PDSIZE
40
0.000315
0
ENDSEC
I've read what each part is about and I don't seem to find anything that helps me.
I want to know the units, because I want to be able to change the drawing accurately as it will be plotted, e.g. move a point by 2 inches.
When implementing a viewer for a DXF file, you don't actually need to know anything about the units. Unless, of course, you are going to implement a measure function in your viewer; then it gets more complicated.
Your initial 'zoom' size in your viewer can be determined from the header information that you have shown: EXTMIN and EXTMAX are the 2 key pieces of info you need. In your example the minimum coordinate used in the DXF file is -14.610075,-14.723197 and the maximum coordinate used is 14.556421,15.530217. This gives you a total drawing size of 29.166496 (width) x 30.253414 (height).
For a simple viewer, you can just assume that the units in the DXF file are equal to the units in your viewer (pixels or points or whatever you are using).
Then the base drawing size in your viewer will be 29.166496 x 30.253414, and you can scale that up (zoom) to make it fill whatever display area you have available.
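To make that concrete, here is a minimal JavaScript sketch using the values from the header above (the 800x600 viewport is a made-up example):
// Derive an initial zoom from $EXTMIN / $EXTMAX.
var extMin = { x: -14.610075, y: -14.723197 };
var extMax = { x: 14.556421,  y: 15.530217 };
var drawingWidth  = extMax.x - extMin.x;   // 29.166496
var drawingHeight = extMax.y - extMin.y;   // 30.253414
// Fit the drawing into a hypothetical 800x600 viewport, preserving aspect ratio.
var scale = Math.min(800 / drawingWidth, 600 / drawingHeight);
// A DXF point (x, y) then maps to viewer pixels as:
function toViewer(x, y) {
    return {
        x: (x - extMin.x) * scale,
        y: (extMax.y - y) * scale   // flip Y: DXF Y grows up, screen Y grows down
    };
}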
EDIT
DXF files are by no means 'unitless', so in the case where you absolutely need to know the units, you will need to read the $INSUNITS group code value, and to double-check it, you can also read the $MEASUREMENT group code value.
The R2000 DXF spec, like any of the other versions, contains all the info you need on what those values mean. If you go to the 'HEADER Section Group Codes' page and search for 'units', you will find the listing of all the unit types. For example:
$INSUNITS
70
4
indicates that the dxf file is using metric units, specifically millimeters, as the base unit. So any dimensional or coordinate value stored by the dxf file will be in millimeters.
Default drawing units for AutoCAD DesignCenter blocks:
0 = Unitless; 1 = Inches; 2 = Feet; 3 = Miles; 4 = Millimeters;
5 = Centimeters; 6 = Meters; 7 = Kilometers; 8 = Microinches;
9 = Mils; 10 = Yards; 11 = Angstroms; 12 = Nanometers; 13 = Microns;
14 = Decimeters; 15 = Decameters; 16 = Hectometers; 17 = Gigameters;
18 = Astronomical units; 19 = Light years; 20 = Parsecs
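Since you are already parsing the header, reading $INSUNITS is a small addition. A hedged JavaScript sketch, assuming the file has been split into lines of alternating group codes and values as in your excerpt:
// Scan the HEADER section's group code / value line pairs for $INSUNITS.
function readInsUnits(lines) {
    for (var i = 0; i + 3 < lines.length; i++) {
        // A header variable is introduced by group code 9 followed by its name;
        // the next pair is group code 70 and the units value.
        if (lines[i].trim() === "9" && lines[i + 1].trim() === "$INSUNITS") {
            return parseInt(lines[i + 3].trim(), 10);
        }
    }
    return 0; // not present (e.g. old R10 files): treat as unitless
}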
EDIT
I just noticed you are using a very old DXF format (R10). If I remember right, units were not introduced into the DXF spec until about R12. Before then, the actual size of the drawing entities didn't change based on which units were being assumed; only the labels on the dimensions differed between imperial and metric.
If you are set on using the old R10 format, you will just have to make an arbitrary decision about what the units are, assuming you don't have any dimension labels on your drawings that would indicate what units are being implied.
I don't understand this 64 value...
From what I've understood, we have at most 8 registers,
each one 128 bits wide (4 data32),
so we cannot access more than 32 data32?
Am I wrong?
What are the other 32 data32 that we can store in a vertex for?
Thanks
You can access this data with setVertexBufferAt and an offset. But you're right - it's not possible to use ALL the data at once, since only 8 registers are available.
This is related to the d3d caps value of MaxStreamStride, which is typically 256.
But seriously, what do you need that much stream data for?
There are two points to consider.
The first is that having 8 registers of four data32 each does not mean you can use 32 data32, because you will waste space on padding.
The second is that most of the time in a video game, the image you see is rendered in multiple passes that each use different data from the vertex.
Hence the need to put more data in each vertex than a single shader can handle.
Imagine a contrived scenario where you want to render a model with up to 10 bones per vertex.
(This is purely theoretical: Flash has a hardcoded limit of roughly 200 AGAL instructions, and 10 bones per vertex is totally useless anyway unless you are modelling an octopus.)
In each vertex you will have data like this, totalling 3*3 + 12*2 = 33 data32 per vertex:
3 data32 for position
3 data32 for normal
3 data32 for tangent
2 data32 for boneData1
2 data32 for boneData2
2 data32 for boneData3
2 data32 for boneData4
2 data32 for boneData5
2 data32 for boneData6
2 data32 for boneData7
2 data32 for boneData8
2 data32 for boneData9
2 data32 for boneData10
2 data32 for texture_uv
2 data32 for lightmap_uv
In a typical deferred shading scenario you will do the following:
1 - Render a "view space normal map" in a texture.
You will then write a shader that will need to use: position, normal, tangent, and all bonedata.
So the shader will use 29 data32.
But all registers will be full, because you need to pad the position, normal and tangent
(va0 position, va1 normal, va2 tangent, va3-7 bone data).
You will waste space on va0.w, va1.w and va2.w.
2 - Render a "view space depth map" in a texture.
You will then write a shader that will need to use: position and all bonedata.
So the shader will use 23 data32.
3 - Render a view space diffuse map in a texture
You will then write a shader that will need to use: position, texture_uv and all bone data.
So the shader will use 25 data32.
4 - Render a view space light map
You will then write a shader that will need to use: position, lightmap_uv and all bone data.
So the shader will use 25 data32.
5 - Finally, composite and do deferred lighting to build your final image.
All 33 data32 have been used.
No shader used more than 8 registers at the same time.
I have the following PathPoints and PathTypes arrays (format: X, Y, Type):
-177.477900, 11021.670000, 1
-614.447200, 11091.820000, 3
-1039.798000, 10842.280000, 3
-1191.761000, 10426.620000, 3
-1591.569000, 10493.590000, 3
-1969.963000, 10223.770000, 3
-2036.929000, 9823.960000, 3
-2055.820000, 9711.180000, 3
-2048.098000, 9595.546000, 3
-2014.380000, 9486.278000, 3
Here is what this GraphicsPath physically looks like. The 2 Arcs are very distinguishable:
I know that this GraphicsPath.PathData array was created by 2 AddArc commands. Stepping through the code in the debugger, I saw that the first 4 PathData values were added by the first AddArc command, and the remaining 6 points were added by the 2nd AddArc command.
By examining the raw PathPoints/PathTypes arrays alone (without knowing beforehand that the path was built from 2 AddArc commands, which would tell me there are 2 start and end points), I would like to determine the start and end point of each arc.
I have tried several Bezier calculations to 'recreate' the points in the array, but I am at a loss as to how to find the separate start and end points. It appears that GDI+ merges the shared point between the arcs (they are the same point and the arcs are connected), so I lose the fact that one arc is ending and another one is starting.
Any ideas?
Use the GraphicsPathIterator class in combination with the GraphicsPath.SetMarkers method.
For example:
Dim gp As New GraphicsPath()
gp.AddArc(-50, 0, 100, 50, 270, 90) 'Arc1
gp.SetMarkers()
gp.AddArc(0, 25, 100, 50, 270, 90) 'Arc2

Dim iterator As New GraphicsPathIterator(gp)
Dim i As Integer = 0
Dim MyPts(3) As PointF
Dim temp As New GraphicsPath()

Do Until i > 2
    iterator.NextMarker(temp)           'copy the next marker-delimited section into temp
    MyPts(i) = temp.PathPoints(0)       'section start point
    MyPts(i + 1) = temp.GetLastPoint()  'section end point
    i += 2
Loop

'Free system resources...
iterator.Dispose()
temp.Dispose()
Arc1 -> start: MyPts(0); end: MyPts(1)
Arc2 -> start: MyPts(2); end: MyPts(3)
Hope this helps!
Take a look at the PathPointType Enum (System.Drawing.Drawing2D).
Value Meaning
0 Start (path)
1 Line
3 Bezier/Bezier3
7 PathType Mask
16 Dash Mode
32 Path Marker
128 Close Subpath
This one was bugging me a lot too! I had a path created beyond my control, without markers, and couldn't figure out the curve endpoints.
In this case you'd expect the curve to start at [i + 1], but it does not! It turns out that GDI+ merges path points, probably to make the points array shorter. In this case the curve points are: [0], [1], [2], [3].
It seems that if a PathPointType.Start or PathPointType.Line point is followed by PathPointType.Bezier points, you have to treat that Start or Line point as the first point of your Bezier curve, so in your example it works like this:
-177.47, 11021.67, 1 // Draw line to this point AND use it as a Bezier start!
-614.44, 11091.82, 3 // Second Bezier point
-1039.79, 10842.28, 3 // Third Bezier point
-1191.76, 10426.62, 3 // Fourth Bezier point AND first point of the next Bezier!
-1591.56, 10493.59, 3 // Second Bezier point
-1969.96, 10223.77, 3 // Third Bezier point
-2036.92, 9823.96, 3 // Fourth Bezier point AND first point of the next Bezier!
-2055.82, 9711.18, 3 // Second Bezier point
-2048.09, 9595.54, 3 // Third Bezier point
-2014.38, 9486.27, 3 // Fourth Bezier point
So, when analysing the PathPoints array point by point, you also have to check the current and following type values. The docs on PathPointType might come in handy. In most cases you can probably ignore the data stored in bits other than the first three (those three encode Start, Line and Bezier). The only exception is CloseSubpath, but it's irrelevant if you follow the next piece of advice.
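To make the walk concrete, here is a minimal sketch over parallel points/types arrays (shown in JavaScript for brevity; the indexing logic ports directly to .NET):
// Split flattened PathPoints/PathTypes data into cubic Bezier segments.
// pts is an array of [x, y] pairs; types holds the raw type byte for each
// point (the low 3 bits encode Start = 0, Line = 1, Bezier = 3).
function splitBeziers(pts, types) {
    var segments = [];
    var i = 0;
    while (i + 3 < pts.length) {
        if ((types[i + 1] & 7) === 3) {
            // The current point (Start or Line) doubles as the Bezier's first
            // point; the next three Bezier-typed points complete the segment.
            segments.push([pts[i], pts[i + 1], pts[i + 2], pts[i + 3]]);
            i += 3; // the segment's last point starts the next segment
        } else {
            i += 1;
        }
    }
    return segments;
}
// For the 10 points in the question this yields 3 segments, matching the
// annotated breakdown above.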
It's also worth noting that if you have a complex path consisting of a huge number of PathPoints, it can be handy to split the path into chunks using GraphicsPathIterator. This simplifies the whole procedure, as PathPointType.CloseSubpath can then be ignored - it will always be the last point of your GraphicsPath chunk.
A quick look into Reflector or here might be helpful if you want to better understand the PathTypes array.