Exporting AVI Videos from Adobe Flash CS6 using an automated script (JSFL) - adobe

I have a bunch of .fla CS6 files that I currently export to AVI manually (after setting up the resolution, compression settings, etc.). I am wondering if this can be achieved via a JSFL script or similar.
Appreciate the help.

Have a look at the JSFL Reference: document.exportVideo() is what you're after:
from the docs:
Usage
exportVideo( fileURI [, convertInAdobeMediaEncoder] [, transparent] [, stopAtFrame] [, stopAtFrameOrTime] )
Parameters
fileURI A string, expressed as a file:/// URI, that specifies the fully qualified path to which the video is saved.
convertInAdobeMediaEncoder A boolean value that specifies whether or not to send the recorded video to Adobe Media Encoder. The default value is true, which sends the video to Adobe Media Encoder. This parameter is optional.
transparent A boolean value that specifies whether or not the background should be included in the video. The default value is false, which includes the movie background in the video. This parameter is optional.
stopAtFrame A boolean value that specifies whether the video should be recorded until it reaches a certain frame or a specific time. The default value is true, which stops when a certain frame is reached. This parameter is optional.
stopAtFrameOrTime If stopAtFrame is true, this is an int specifying the number of frames to record. If stopAtFrame is false, this is the number of milliseconds to record. The default value is 0 which, if stopAtFrame is true, will record all the frames in the main timeline. This parameter is optional.
Returns
Nothing.
Description
Method; exports a video from the document and optionally sends it to Adobe Media Encoder to convert the video.
Example
The following example illustrates the use of this method:
fl.getDocumentDOM().exportVideo("file:///C|/myProject/myVideo.mov");
I can't remember if that is in AVI format; however, you should be able to convert the video. Setting the second parameter (convertInAdobeMediaEncoder) to true should send the video to Media Encoder for conversion, but automating that step is a lot less straightforward (very convoluted, though not impossible).
Alternatively, I would recommend looking into exporting a PNG sequence that you can then convert to any video format you like with ffmpeg or similar tools, bearing in mind you could trigger the conversion process from JSFL (via FLfile.runCommandLine, for example).
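For example, a rough sketch of that trigger (assuming ffmpeg is on the PATH and you have already exported a numbered PNG sequence; the paths, frame rate and codec settings are placeholders):
// build the command line and hand it to the OS shell via JSFL
var cmd = 'ffmpeg -framerate 24 -i C:\\myProject\\frames\\frame%04d.png -c:v mjpeg -q:v 3 C:\\myProject\\myVideo.avi';
FLfile.runCommandLine(cmd);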
In short:
use the JSFL reference (e.g. document.exportVideo(), etc.)
prototype with a simple throwaway script and a short timeline (see the sketch below)
test/fix/clean/comment code for the full version of the script after all your questions are answered by prototypes
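A minimal throwaway prototype along those lines might look like this (an untested sketch; the folder URI is a placeholder, and it leaves exportVideo()'s default hand-off to Adobe Media Encoder in place):
var folderURI = "file:///C|/myProject/fla/";                  // placeholder source folder
var files = FLfile.listFolder(folderURI + "*.fla", "files");  // list the .fla files
for (var i = 0; i < files.length; i++) {
    var doc = fl.openDocument(folderURI + files[i]);          // open each document
    var outURI = folderURI + files[i].replace(/\.fla$/i, ".mov");
    doc.exportVideo(outURI);                                  // export; AME conversion is on by default
    doc.close(false);                                         // close without saving
}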

Related

Make basic Math in Draw.io (diagrams.net)

I want to make some basic math, like a sum, in Diagrams.net (the old Draw.io). Is it possible?
Example: I create a new parameter on a shape, like "Elec : T16", and make several copies of this shape. Is it possible to have a text element that gives me the total number of shapes with this parameter?
Best Regards.
I searched a lot in the Diagrams.net blog but found nothing relevant.
This is not supported.
Regards,
I also wanted to do something similar and while it doesn't seem possible to do it completely in the software (as of v20.3.0), I did find a bit of a workaround: If you add properties to the shape data, then do File > Export As > XML, the properties will be there in the XML data. You can then count them one of two ways:
Open the XML file with a text editor like Notepad++, do a find on the value you want to count. If you choose "Find All" it will tell you how many times it appears.
Use a programming language like Python to read through the file and count the instances of that value (a minimal sketch follows the example below).
Example:
I created a red circle in a new diagram, edited the text to say "RedCircle" and used Edit Data to add a property called TestValue, to which I assigned a value of 1. When I exported to XML it contained this element:
<object label="RedCircle" TestValue="1" id="6byQ5fOap-RXn7mFit_J-1">
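If you go the Python route, a minimal sketch (assuming an uncompressed export saved as diagram.xml and the TestValue property from the example above):
import xml.etree.ElementTree as ET

tree = ET.parse("diagram.xml")                 # the uncompressed XML export
count = sum(1 for elem in tree.iter()          # walk every element
            if elem.get("TestValue") is not None)
print(count)
# updating a "counter" element would follow the same pattern, e.g. elem.set(...) then tree.write(...)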
Notes
When you export, make sure you turn off the Compressed option; a compressed export is unusable for this approach.
Don't use Save As > XML either, as it also uses compression.
Diagrams.net natively saves in a compressed XML format, with only slight differences between that and the other compressed XML options, but it seems happy to read the exported uncompressed XML back in. I didn't test this, but if you go the programming route and want to take it a step further, it seems you could have the program update the value of a given "counter" element with the count, then open the XML file in diagrams.net to see the updated value, and save it as a native .drawio file or publish it in whatever format you like.
Edit: I discovered that under File > Properties you can turn off the compression on the actual .drawio file. If you do that you can just work from this file instead of exporting, but you might want to check the size of your file with and without it.
I'm sure a plugin could be created to do all of that within the app itself, but the other methods are enough for me at this point.
Hope this helps you!

exiftool—writing one tag by adding/subtracting from another

I have a directory containing many .jpg and .mov files.
Every .jpg has a DateTimeOriginal exactly three hours more than FileModifyDate and a GPSDateTime exactly eight hours more than FileModifyDate.
But every .mov has neither DateTimeOriginal nor GPSDateTime.
To support future work with these files, I'd like to make them all consistent.  After studying the man page for a while, I tried
exiftool '-DateTimeOriginal<FileModifyDate+03:00' \
'-GPSDateTime<FileModifyDate+08:00' *.MOV
but I got an error message saying I must use = instead of <. So I tried
exiftool '-DateTimeOriginal=FileModifyDate+03:00' \
'-GPSDateTime=FileModifyDate+08:00' *.MOV
and got another message:
Warning: Invalid date/time (use YYYY:mm:dd HH:MM:SS[.ss][+/-HH:MM|Z]) in ExifIFD:DateTimeOriginal (PrintConvInv)
which implies I have to do each tag with a literal time instead of computing from another.
I see that I can copy one tag to another and afterward use += or -= to change it in another step.  But is there a way to do it in one command?
First the answer, but see below for some possible complications.
If the time shift was the same for both values, then you would use the GlobalTimeShift option, but since that is not the case, you need to use the ShiftTime helper function.
Additionally, the GPSDateTime tag in a MOV file is going to be an XMP tag (specifically in the XMP-exif group) which allows for the inclusion of a time zone and does not need to be set to UTC. Since the FileModifyDate already includes a time zone and it looks like you want to shift it to UTC, you will have to strip away the time zone. You can do this by either using the -d (-dateFormat) option, which will set the date format globally, or the DateFmt helper function, which will work on individual tags. A Perl regex substitution would also be an option, but for this example, I'll use the DateFmt option.
Your command would be
exiftool '-DateTimeOriginal<${FileModifyDate;ShiftTime("3")}' '-GPSDateTime<${FileModifyDate;ShiftTime("8");DateFmt("%Y:%m:%d %H:%M:%S")}' *.MOV
For the ShiftTime function, you only need to list the hours to be shifted as this is the default in a tag that includes both date and time. See the ExifTool Date/Time Shift Module for details.
Now, there are some problems with the way you are writing this data. In a MOV file, exiftool will include a time zone in the DateTimeOriginal tag unless forced not to. If the time zone isn't included, then exiftool will default to the local time zone of the computer. So if the local time zone isn't the one that would apply to your shifted DateTimeOriginal time, then you will need to include it. You would now also have to strip away the time zone as above. The change in the above command would be
'-DateTimeOriginal<${FileModifyDate;ShiftTime("3");DateFmt("%Y:%m:%d %H:%M:%S")}±##:00'
Also, if removing the time zone from both tag copy operations, you could switch to using the -d option.
exiftool -d '%Y:%m:%d %H:%M:%S' '-DateTimeOriginal<${FileModifyDate;ShiftTime("3")}' '-GPSDateTime<${FileModifyDate;ShiftTime("8")}' *.MOV
While the specs say that the DateTimeOriginal tag in a video file does not need to include a time zone, if the time zone isn't included then Apple programs will display wildly inaccurate date/times (see this Exiftool forum thread).
Exiftool will also write to the XMP-exif:DateTimeOriginal tag in addition to the Quicktime tag.
For the GPSDateTime, there might be a better way to write that, if the file's other time stamps are accurate. In a video file, the CreateDate is supposed to be set to UTC. If that tag is correctly set, it would be easier to set the GPSDateTime like this
'-GPSDateTime<CreateDate'
It should be noted that Quicktime:DateTimeOriginal is not commonly read by most programs, the Apple photo app being an exception. Also, XMP tags in video files, in this case XMP-exif:DateTimeOriginal and XMP-exif:GPSDateTime, are usually not used by most programs, with Adobe programs being the exception. The Quicktime:CreateDate is the most widely used tag and since it is supposed to be in UTC, it also works for GPS date/time.
To break down why your original command didn't work, in the first case, you added a static string to the name of the tag. This caused exiftool to treat the whole thing as a string, which is why it told you to use an equal sign = instead of the greater/less than signs </> and the greater/less than signs are only used when copying tags. When you changed it to the equal sign, you were now writing a static string and since the DateTimeOriginal tag requires a date/time format, you receive the Invalid date/time error.
When combining a tag name with a static string, you need to prefix the tag name with the dollar sign. See the paragraph starting "A powerful redirection feature" under the -TagsFromFile option for details.
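As a contrived illustration of that syntax (UserComment and the file name are just placeholders, not part of your task), this copies the tag's value into a larger string instead of treating everything as a literal:
exiftool '-UserComment<Original file time was $FileModifyDate' FILE.jpg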

How to read DICOM private tags without reading/loading pixel data?

I would like to read DICOM private tags. These private tags are under hexadecimal tag x7fe11001.
I know of one pydicom configuration that reads only up to the start of the pixel data (so memory is not loaded up with it).
pydicom.dcmread(raw, defer_size="2 MB", stop_before_pixels=True)
But the private tags I am trying to read come after the pixel data, so I end up loading the complete file into memory, which is not optimal. What are the other ways to read it optimally?
I know there is a config param for the above method, called specific_tags. But I could not find any examples of how to use it.
Any suggestions to read DICOM metadata without loading pixel data into memory would be awesome.
You are right, specific_tags is the correct way to do this:
ds = pydicom.dcmread(raw, specific_tags=[Tag(0x7fe1, 0x1001)])
In this case, ds will contain only your private tag and the Specific Character Set tag (which is always read).
As DICOM is a sequential format, the other tags still have to be skipped over one by one, but their value is not read.
Note that you can put any number of tags into the specific_tags argument.
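Putting it together, a minimal sketch (the tag number is taken from your question; the file path is a placeholder):
from pydicom import dcmread
from pydicom.tag import Tag

private_tag = Tag(0x7FE1, 0x1001)
ds = dcmread("image.dcm", specific_tags=[private_tag])  # pixel data values are skipped, not read
if private_tag in ds:
    print(ds[private_tag].value)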

IE10 / Edge PDF not showing correct values

So I have a PDF that has been created on the fly by opening a template, modifying the values for certain fields, and then saved.
Works:
If I open the file in Chrome, I have the correct values.
If I save the file to disk and open it with Adobe Reader DC (or whatever that's called), I get the correct values.
Doesn't Work:
When I open the document in IE 10, it opens in Edge, and shows "default" values for the fields.
When I save the file to disk and open it with IE, it shows "default" values.
When I open the file using "PDF reader - Document Viewer and Manager" it shows "default" values.
I'm using Windows 10, the application I'm working on is done in ASP.Net. It works the same way whether I return a FileStreamResult, FilePathResult, or File.
And I'm now pretty much certain the problem is Microsoft's products and not my code.
Any idea why Microsoft products are incapable of opening my PDFs correctly? Do they have to be flattened in some specific way or something?
Edit (more information as requested in comments):
The documents are created using PdfSharp.
They have fields that are dynamically replaced with values (i.e. [MyFieldA] is replaced with "ActualValueA").
Once the merge fields are replaced with actual values, the file is written using File.WriteAllText(fileName,fileText); where fileText is obtained through File.ReadAllText(fileTemplateName);
[Image comparing the fields that display incorrectly]
So I found the solution to the problem, and it appears that it might be a two-fold thing.
One piece of the puzzle, I believe, was the MemoryStream being used to create the PDF. It was declared in an inner scope and used in a PdfReader.Open call whose result was assigned to a variable declared in an outer scope, so the stream's scope ended while that variable was still in use. I moved the MemoryStream declaration out to the scope where that variable was declared, so the two now live in the same scope. The MemoryStream also wasn't being closed, either with a memoryStream.Close() call or by wrapping it in a using (var memoryStream = new MemoryStream()) { } block, so I added a call to memoryStream.Close().
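In code, the change amounted to something like this (a rough sketch rather than the actual application code; templateBytes, outputPath and the open mode are assumptions):
using (var memoryStream = new MemoryStream(templateBytes))   // stream now lives as long as the document that uses it
{
    var pdfDocument = PdfSharp.Pdf.IO.PdfReader.Open(memoryStream, PdfSharp.Pdf.IO.PdfDocumentOpenMode.Modify);
    // ... set the form field values here ...
    pdfDocument.Save(outputPath);
}   // the stream is closed/disposed here instead of being left open in an inner scope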
The other piece is that the PDF template form fields had some default display values. It appears that Microsoft stuff (Edge, PDF Viewer) may not be able to get the inserted values, and is instead reading the default display values of the fields and displaying those. Once all of the default display values were removed, Edge began opening the PDF and displaying values correctly.
Since these two pieces were done in tandem, I can't say for certain that they both play an equal role in this, but these are the only two changes that were done to get values to start displaying correctly. My gut feeling is that the problem lies with the default display values and Edge.

add new headers parsing in tcpdump

I need to add support for proprietary headers that the FPGA in our design inserts into incoming Ethernet frames, between the MAC header and the payload. Obviously I will have to dig into the tcpdump and libpcap sources, but could anybody give some hints on where exactly to start, so that I can save time?
The first thing you need to do is to get a DLT_/LINKTYPE_ value for your proprietary headers. See the link-layer header types page on the tcpdump.org Web site for the existing DLT_/LINKTYPE_ link-layer header type values and information on how to either use one of the DLT_USERn values internally or get a new value assigned if you plan to have people outside your organization use this.
Once you have the value assigned, you'll have to do some work on libpcap:
If you've been assigned a DLT_ value, you'll have to modify the pcap/pcap.h file to add that link-layer type (and change the DLT_MATCHING_MAX value in that header file, and LINKTYPE_MATCHING_MAX in pcap-common.c, so that they are >= your DLT_ value), or wait for whoever at tcpdump.org (which will probably be me) assigns your DLT_ value and updates the libpcap Git repository (at which point you could use top-of-trunk libpcap). A sketch of that header change follows this list.
If you plan to do live capturing, you may have to add a module to libpcap to support live capturing on your hardware, or, if your device looks like a regular networking device to your OS, so that you can use its native capture mechanism, modify the module for that OS to map whatever link-layer header type value the OS uses (e.g., a DLT_ value on *BSD/OS X or an ARPHRD_ value on Linux) to whatever DLT_ you're using for your link-layer header type.
You'd have to modify gencode.c to be able to compile capture filters for your DLT_ value.
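For the first of those items, the pcap/pcap.h change is only a couple of lines, roughly like this (the value 294 is purely illustrative; use your DLT_USERn or assigned value):
/* pcap/pcap.h -- hypothetical new link-layer type */
#define DLT_MYPROTO       294
/* bump so it is >= the new value; do the same for LINKTYPE_MATCHING_MAX in pcap-common.c */
#define DLT_MATCHING_MAX  294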
Once that's done, libpcap should now work.
Now, for tcpdump:
Add an if_print routine that processes the proprietary headers (whether it just skips them or prints things for them), calls ether_print(), and then returns the sum of the length of your proprietary headers and the Ethernet header (ETHER_HDRLEN as defined in ether.h). See ether_if_print() in print-ether.c for an example, and the sketch after these steps.
Add a declaration of that routine to interface.h and netdissect.h, and add an entry for it, with the routine name and DLT_, to ndo_printers[] if you copied ether_if_print() (which you should) or to printers[] if you didn't (if you didn't, you'll have to pass &gndo as the first argument to ether_print()). Those arrays are in tcpdump.c.
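A skeleton for that if_print routine might look roughly like this (modeled on ether_if_print() in print-ether.c; the myproto name, the 8-byte header length, and the exact ether_print() signature are assumptions to check against your tcpdump tree):
#define MYPROTO_HDRLEN 8    /* length of the proprietary header -- an assumption */

u_int
myproto_if_print(netdissect_options *ndo, const struct pcap_pkthdr *h, const u_char *p)
{
        if (h->caplen < MYPROTO_HDRLEN) {
                ND_PRINT((ndo, "[|myproto]"));
                return (h->caplen);
        }
        /* optionally print fields from the proprietary header here, then
           hand the rest of the frame to the stock Ethernet printer */
        return (MYPROTO_HDRLEN +
                ether_print(ndo, p + MYPROTO_HDRLEN, h->len - MYPROTO_HDRLEN,
                            h->caplen - MYPROTO_HDRLEN, NULL, NULL));
}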
