I want to add MP4 and MP3 support to cefpython. I've read in many places online that you should add "proprietary_codecs=1 ffmpeg_branding=Chrome" to your GYP_DEFINES, but are these changes the same for CefSharp and cefpython? And after making the change, what do I have to do?
To have proprietary codec support you need to build both CEF and cefpython/cefsharp from source. Building CEF from source is a long process that can take several hours at best. To build cefpython with proprietary codecs you would have to modify the automate.py tool that comes with cefpython and add the proprietary codec variables to either GN_DEFINES or GYP_DEFINES (add to both to be sure):
env["GN_DEFINES"] = "use_sysroot=true use_allocator=none symbol_level=1"
Source line in automate.py: https://github.com/cztomczak/cefpython/blob/bbf3597ba47f72db66cf304ab8eb3ccfc3a7130c/tools/automate.py#L873
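For example (a sketch only; the flag names below follow the public CEF build instructions, so double-check them against the CEF branch you are building), the modified lines could look like this:

# Sketch: extending the existing defines in automate.py with the proprietary
# codec flags. Verify the exact flag names against your CEF branch.
env["GN_DEFINES"] = ("use_sysroot=true use_allocator=none symbol_level=1 "
                     "proprietary_codecs=true ffmpeg_branding=Chrome")
# Older branches that still go through GYP use the equivalent GYP syntax:
env["GYP_DEFINES"] = "proprietary_codecs=1 ffmpeg_branding=Chrome"

The same GN/GYP defines apply when building CEF for CefSharp; only the wrapper project you rebuild afterwards differs.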
After that, follow the Build-instructions.md document in cefpython and build CEF from source.
I'm experimenting with Ghidra and decompiling code intended for the MSP430FR4133 LaunchPad. I'm not sure whether this particular device is supported, but Ghidra does appear to support MSP430 processors.
For a simple test, I'm using the example code at this link for the MSP EXP430FR4133 Launchpad.
The download contains a simple source program in the directory MSP-EXP430FR4133_Software_Examples_windows\Firmware\Source\OutOfBox_MSP430FR4133. It's a simple program with a stopwatch and a temperature sensor.
I decided to load the binary that's also there in the Binary folder.
Then I selected TI MSP430 16-bit and let Ghidra do the analysis. The problem is that the decompiler doesn't provide any functions. I'm wondering if I've selected the wrong architecture or option?
UPDATE 1
I'm posting two extra images which show two functions but there's nothing of any significance.
You are not decompiling a raw binary, but a text file.
If you look at the Readme, it indicates that this is a pre-built TI-TXT image.
Basically, it contains small chunks of data encoded in hexadecimal, prefixed with the load address. See the format definition here.
Ghidra supports similar formats (Intel HEX or Motorola S-Records), but not TI-TXT. I didn't find a tool to convert it to a supported format, but this could probably be done with a small script.
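As a rough sketch (untested, and all file and function names here are mine), a small Python script could parse the TI-TXT chunks and re-emit them as Intel HEX records, which Ghidra can import directly:

# ti_txt_to_hex.py -- minimal sketch: convert a TI-TXT image to Intel HEX.
# Assumes plain 16-bit addresses (fine for the FR4133); images above 64 KB
# would need extended-address records, which this sketch does not emit.
import sys

def parse_ti_txt(path):
    # Yield (load_address, data) chunks from a TI-TXT file.
    addr, buf = None, bytearray()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.lower() == "q":
                continue
            if line.startswith("@"):
                if buf:
                    yield addr, bytes(buf)
                addr, buf = int(line[1:], 16), bytearray()
            else:
                buf.extend(int(b, 16) for b in line.split())
    if buf:
        yield addr, bytes(buf)

def ihex_record(addr, data, rectype=0):
    # :LLAAAATT<data>CC with a two's-complement checksum over all bytes.
    body = bytes([len(data), (addr >> 8) & 0xFF, addr & 0xFF, rectype]) + data
    return ":" + body.hex().upper() + "%02X" % ((-sum(body)) & 0xFF)

def convert(src, dst):
    with open(dst, "w") as out:
        for addr, data in parse_ti_txt(src):
            for off in range(0, len(data), 16):
                out.write(ihex_record(addr + off, data[off:off + 16]) + "\n")
        out.write(ihex_record(0, b"", rectype=1) + "\n")  # EOF record

if __name__ == "__main__":
    convert(sys.argv[1], sys.argv[2])

Running something like "python ti_txt_to_hex.py OutOfBox_MSP430FR4133.txt OutOfBox_MSP430FR4133.hex" should give you a file that Ghidra's Intel HEX loader accepts, with each chunk placed at its original load address.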
I'm currently working on a Qt Quick application that will provide a map viewer for a smallish area (1 square km or so), the map details for which will be provided in a single geo-referenced image file (GeoTIFF, geo-referenced PDF, ESRI shapefile, etc.), along with display of the current location, operator-identified points of interest, etc. Its primary responsibility is the display of custom maps (as opposed to generic maps retrieved from public map image service providers such as OSM, Mapbox or ESRI), and it will often be used in areas with limited connectivity.
An extensive web search has identified others who have made similar enquiries in the past (here, Qt forums etc), and the general suggestions for solutions are as follows:
ArcGIS Runtime with Qt SDK: doesn't work for me, as down the track I'm intending to target an embedded Linux device using an ARM processor, and ArcGIS doesn't make source available for cross-compilation for arbitrary targets. (They've recently produced an Android release, but nothing for ARM Linux in general.)
QGIS developer libraries: the GPL licence is not compatible with my commercial development.
Use the Qt Location Map component with a local tile server or offline tile collection (some plugins have recently introduced support for this): seems a bit of a hack; as noted, I'm primarily using custom maps rather than offline copies of public map server images, and my images won't otherwise be big enough to really warrant tiling.
It would be feasible to develop a Qt Quick component from scratch to do this, but given that the existing Qt Location Map component provides a well-defined, pre-existing front-end interface for everything my map would need to do, and has an extensible plugin-based architecture, writing a custom Qt Location GeoServices plugin seems the most sensible and elegant way forward.
I've started examining the source code of the existing plugins, but can't shake the feeling that in a world containing 8 billion people, with "nothing new under the sun", this would have been done already if it was a good idea....
Would anyone with more familiarity with the Qt Location module care to comment?
Since geo-referenced images can be arbitrarily large, it is standard practice to convert them into a tile pyramid so that they can be displayed efficiently on any hardware (at the cost of, at worst, doubling the size, depending on how many layers you want).
Even if you wrote your own geoservice plugin, you would most likely end up (directly or via third-party code) tiling your GeoTIFF.
That said, QtLocation does allow you to use custom tilesets (http://doc.qt.io/qt-5/location-plugin-osm.html, look for osm.mapping.custom.host), served in most ways (http, https, file, qrc, etc.).
So go ahead, fire up QGis, install the QTiler plugin, and convert your images.
If you need to serve these images over the net directly to the clients (so that the conversion has to happen on the client side), you can either look at what QTiler does, or build your own GDAL pipeline (gdal_translate, gdalwarp and gdal2tiles) and ship the relevant GDAL bits with your application.
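As an illustration only (file names, zoom levels and the exact GDAL options are placeholders to adapt), a minimal pipeline driving the GDAL command-line tools from Python might look like this:

# tile_geotiff.py -- sketch of a GDAL tiling pipeline; assumes gdal_translate,
# gdalwarp and gdal2tiles.py are installed and on PATH.
import subprocess

SRC = "site_map.tif"   # placeholder: your geo-referenced image

# 0. If the source isn't already a GeoTIFF (e.g. a geospatial PDF), convert it first
#    (requires a GDAL build with the relevant driver).
subprocess.check_call(["gdal_translate", "site_map.pdf", SRC])

# 1. Reproject to Web Mercator, the projection slippy-map tiles expect
subprocess.check_call(["gdalwarp", "-t_srs", "EPSG:3857", SRC, "site_map_3857.tif"])

# 2. Cut the reprojected image into a tile pyramid (zoom range is just an example;
#    --xyz needs a reasonably recent GDAL, drop it if you want TMS layout instead)
subprocess.check_call(["gdal2tiles.py", "--xyz", "-z", "14-18",
                       "site_map_3857.tif", "tiles/"])

The resulting tiles/ directory can then be referenced via the osm plugin's custom host parameter using a file:// URL.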
If you need multiple images at the same time, you can either use multiple Plugin elements with different plugin parameters, or you can fork the osm plugin and support multiple custom hosts.
Based upon Paul's response, and a couple of similar responses to the same enquiry on Qt forums and mailing lists, plus my own investigation, I'd conclude the following:
Generating a custom Qt Location GeoServices plugin to directly provide map imagery from a geo-referenced image file would not be a great idea. The implementation would be less than straightforward, and in practice any non-trivial map image would likely be large enough that an initial tiling step, followed by use of one of the standard tiled mapping plugins referencing a local tile set, would be more appropriate anyway.
I'm trying to split a *.mov file into raw audio and raw video. I have a DirectShow filter which works as a decoder for the video stream, and Windows Media Player can actually see it and use it to play this video file, but I'm having a hard time figuring out exactly how that works, since I need to compose a complex DirectShow graph. I assumed that WMP would use the WM ASF Reader, but if I try to add this filter to the graph in GraphEdit with the *.mov file as a parameter, it fails with error code 0xC00D0026, which makes sense since it's supposed to work with uncompressed formats only.
Which other DirectShow source filters can be used by WMP to split a *.mov video file into raw video and audio?
Windows Media Player (current versions, not ancient) does not use DirectShow for MOV files. Instead, it uses Media Foundation.
FYI: 0xC00D0026 is NS_E_UNRECOGNIZED_STREAM_TYPE "The specified protocol is not recognized. Be sure that the file name and syntax, such as slashes, are correct for the protocol."
I suppose you can find suitable DirectShow components to demultiplex MOV files: Haali Media Splitter and the GDCL MPEG-4 Demultiplexer are among the widely used ones.
I'm working on a DirectShow-based application that has to convert an AVI source file to an MP4 file that can be played back with QuickTime.
Since 3ivx, which according to my web research is the most popular way to fulfil this task, has become commercial (and my budget is quite limited), I decided to use a solution based on ffdshow.
I created a simple graph in GraphEdit, using LAME for audio encoding and the GDCL MPEG-4 Multiplexor for the muxing, but every time I try to play the movie with QuickTime, I get an error indicating a wrong "sample description".
Playback with Windows Media Player is working, except that there is no sound.
My guess is that there's a problem with the muxer, because every time I try to add audio encoding, GraphEdit automatically adds a decoder after the encoding unit (see picture link).
http://imageshack.us/photo/my-images/39/graphjrgr.png/
Any ideas on how to integrate ffdshow in a better way, tips for alternative MP4 muxers, or a completely different approach are appreciated!
The GDCL muxer supports only a limited number of audio formats; you should probably check the muxer's source code to see whether the formats you are using are in fact supported. Basically, you need to choose an audio encoder whose output the muxer recognizes as valid. It might also be possible to use GraphEdit to choose different properties for the encoder filter that allow things to work better.
I have had some luck with the Monogram x264 (video) and AAC (audio) encoders. See http://blog.monogram.sk/janos/directshow-filters/
Finally, try the debug version of the GDCL mp4 muxer.
Also, you must be aware of the MPEG LA licensing requirements for AVC/H.264, which apply to x264: http://www.mpegla.com/main/programs/AVC/Pages/FAQ.aspx
What are some good authoring tools for creating cross-platform help files for end-users? (Our application is using the Qt framework, if that makes any difference.)
Note: I'm not interested in internal API documentation--we're using doxygen for that.
Ideally, a solution would:
Allow us to manage all help content (text, table of contents, images, etc.) in a single location.
Output to native help formats. (CHM for Windows--or at least something we could feed directly into the HTML Help API; not sure what other platforms' "standard" help formats are.)
Decent WYSIWYG support: handle common text entry, images, cross-references, etc. easily, but we can edit the HTML when we need to.
Text-based file-format for help project (XML, etc.) so that it can be versioned in Subversion.
Any hooks that help keep it in synch with the actual code base would be great. (Perhaps somehow a help topic is associated with a code file, and can check Subversion to see if any changes have been made and flag a topic as "possibly out of date" ... am I dreaming?)
Help content can be localized.
Not opposed to commercial product, but a free option would be nice.
I'll go ahead and make this a wiki and start with a few examples. Vote 'em up or down if you have experience with them, and leave some comments. Add additional tools as well.
I just discovered Sphinx; I think I'm in love.
Better than WYSIWYG over HTML: reStructuredText
Outputs to QtHelp (among other things), so it will be easy to distribute (and integrate) in our application; see the minimal setup sketch after this list.
Not sure about localization yet, but we'll cross that bridge when we need to.
Was easy to set up and "just works"; looks professional.
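For what it's worth, the setup really is minimal; a stripped-down conf.py (all names here are placeholders) plus two commands is enough to get a Qt Help file out of Sphinx:

# conf.py -- minimal Sphinx configuration sketch; project name and author are placeholders
project = "MyApp"
author = "MyApp Team"
master_doc = "index"        # called root_doc in newer Sphinx releases
qthelp_basename = "MyApp"   # base name for the generated .qhp/.qhcp project files

# Build (run from the shell, not part of conf.py):
#   sphinx-build -b qthelp source/ build/qthelp
#   qhelpgenerator build/qthelp/MyApp.qhcp   # or qcollectiongenerator with older Qt

The generated .qch/.qhc files can then be shipped with the application and opened with Qt Assistant or the QHelpEngine API.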
I have used RoboHelp for years.
It is fine, but the core technology is very old now. Also, the way they lock to specific Word versions is a total PITA (and has forced me to avoid MS Office upgrades several times).
We are moving to MadCap Flare: http://www.madcapsoftware.com/products/flare/robohelp.aspx
I think DocBook addresses all your requirements except possibly the synchronisation hooks, which I'll think a bit further on. It's essentially a subset of XML designed for creating documentation, and is free and open source. It's just a format plus a set of XSL output transforms that convert the DocBook into more useful formats (HTML and thus CHM, JavaHelp, and PDF via XSL-FO or TeX).
This means that you still need to choose an XML authoring tool to actually edit it, so things like WYSIWYG will depend on the features of your XML authoring software. We use Syntext Serna as it has good support for WYSIWYG and inline editing of XML #includes (no-one else seems to support the latter). You may find other XML authoring tools better suit your needs; Serna is a reasonably pricey commercial offering.
DocBook provides a lot of flexibility via profiling, which allows you to include or exclude XML elements based on their attributes. Example use cases would be to have slightly different help output for OS=Windows than for OS=Linux. Localization is also supported via profiling and other mechanisms.
A fairly good introduction to Docbook can be found here.
We use DocBook for our help format, and compile it to CHM files that contain help only for the features relevant to a specific product (e.g. the Enterprise edition has features that aren't in the Standard or Demo versions). The relevant steps are as follows (a scripted sketch follows the list):
Run the profiling XSL templates on the XML source (using e.g. xsltproc).
Run the HTML Help XSL templates on the output of step 1.
Compile the output HTML files using Microsoft's HTML Help Compiler (HHC).
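A sketch of those three steps scripted in Python (the stylesheet paths, the profiling attribute and the hhc location are assumptions to adapt to your setup):

# build_chm.py -- sketch of the DocBook profiling -> HTML Help -> CHM pipeline
import subprocess

DOCBOOK_XSL = r"C:\docbook-xsl"   # assumed install location of the DocBook XSL stylesheets

# 1. Profile the source, keeping only elements tagged with condition="enterprise"
subprocess.check_call(["xsltproc", "--stringparam", "profile.condition", "enterprise",
                       "-o", "profiled.xml",
                       DOCBOOK_XSL + r"\profiling\profile.xsl", "manual.xml"])

# 2. Run the HTML Help stylesheet, which emits the HTML pages plus the htmlhelp.hhp project
subprocess.check_call(["xsltproc", DOCBOOK_XSL + r"\htmlhelp\htmlhelp.xsl", "profiled.xml"])

# 3. Compile the project with Microsoft's HTML Help compiler.
#    hhc.exe uses unusual exit codes (non-zero on success), so don't treat them as fatal.
subprocess.call([r"C:\Program Files (x86)\HTML Help Workshop\hhc.exe", "htmlhelp.hhp"])

Since everything is plain text and command-line driven, the whole pipeline versions nicely in Subversion and can run as part of the normal build.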
Help & Manual
Robohelp
The only one I know is LaTeX, plus one of the latex2html converters, and then a few adaptations to make the resulting HTML ready for the CHM archiver.
Text, HTML, CHM, PDF, PS: no problem.
Converting to Word via RTF used to be a disaster; I don't know the current status.
The various latex2html converters all have their own problems.
The PDFs look absolutely great.
WYSIWYM (via LyX) is possible.
This archive has a bunch of CHMs made that way (notably the prog, ref and user parts; the rest (rtl, fcl, lcl) are generated by our own doxygen equivalent, fpdoc):
http://www.stack.nl/~marcov/doc-chm.zip
Note that the above CHMs are made with our own (portable) CHM compiler. Yes, no more HTML Help Workshop.
A LyX document as PDF and HTML:
pdf: http://www.stack.nl/~marcov/buildfaq.pdf
html: http://www.stack.nl/~marcov/buildfaq/