So I am a gamer who loves editing videos and creating montages. I play a game called Apex Legends, and I find that when editing long videos (up to 5 hours long) I follow the same procedure every time. I skip ahead in the video until I see that I have gotten a kill (displayed in the top right of the screen, as seen in the picture below).
I then pause the video and begin the trimming process.
So my question is: is there anything out there that can "read" videos? Assuming there is, is it possible to write code that reads the screen, identifies a change in the kill count, and then trims, say, 5 seconds before and after that change? I am not asking anyone to write this code for me; I simply want to know whether it is possible (particularly with Python, as that is my strongest language). Thank you all for your time and insight.
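To make the idea concrete, here is a minimal sketch of what I imagine such a script could look like, assuming OpenCV (cv2) and pytesseract are installed and that the kill counter sits in the top-right corner of the frame; the file name, sampling rate and crop box are all placeholders I have not verified against my recordings:

import cv2
import pytesseract

VIDEO = "gameplay.mp4"      # placeholder file name
SAMPLE_EVERY_S = 1.0        # check roughly one frame per second
CLIP_PADDING_S = 5.0        # keep 5 seconds before and after each kill

cap = cv2.VideoCapture(VIDEO)
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
duration = cap.get(cv2.CAP_PROP_FRAME_COUNT) / fps

last_kills, clips, t = None, [], 0.0
while t < duration:
    cap.set(cv2.CAP_PROP_POS_MSEC, t * 1000)   # skip ahead to time t
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    # Crop the top-right corner where the kill counter is drawn;
    # the exact box depends on the resolution and HUD layout.
    roi = frame[0:int(0.10 * h), int(0.85 * w):w]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    text = pytesseract.image_to_string(gray, config="--psm 7")
    digits = "".join(ch for ch in text if ch.isdigit())
    if digits:
        kills = int(digits)
        if last_kills is not None and kills > last_kills:
            clips.append((max(0.0, t - CLIP_PADDING_S), t + CLIP_PADDING_S))
        last_kills = kills
    t += SAMPLE_EVERY_S
cap.release()

print(clips)   # (start, end) times that could then be fed to a trimmer such as ffmpeg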
I am using Veins to do some Mobile Edge Computing simulations. To improve the visual effect, I am trying to show real-time data for some nodes. For example, my application returns the communication delay in real time. In the Qtenv of Veins, I want to show the delay values in a text box just above the node icon. How can I realize that? Or in which documentation can I get some help? Any advice will be appreciated!
Setting a node's Display String (specifically, its t tag) should do exactly that. See https://omnetpp.org/doc/omnetpp/manual/#cha:display-strings for the full documentation.
I will create a biped robot soon, and I will add speech recognition and other features to it. I want it to find its way around my house. Is it possible to create a map or something where I mark the places in the house with numbers or something and then make the Arduino robot read it, so that, for example, when I say "Go to your room" (the Arduino's room = my room) it will go to its room (my room) automatically?
UPDATE:
Is there a GPS module or something that I can adapt the way I want, so my robot can find its way around my house? So I can mark where it can go and where the rooms are, and program it to go to, for example, my room when I say so, and it will find its way there.
There are many different ways of attacking this scenario. If you are talking about a map, it might be worth generating a measurement system whereby you use an array of units to let your robot navigate the house. I think you will run into issues over time with unexpected variance, so a large part of the navigation code would have to tackle calibrating against a known map.
The advantage of using this method is that you could have it "learn" a new space by mapping a new array against your units.
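As a rough illustration of that idea (sketched in Python for readability; on an Arduino you would port the same thing to C, and every name and number below is made up for the example): represent the house as a grid of cells, record which cell each named room maps to, and plan a route with a breadth-first search. The calibration problem mentioned above then becomes keeping the robot's believed cell in sync with where it actually is.

from collections import deque

# 0 = free floor, 1 = wall; a tiny, invented floor plan.
GRID = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
ROOMS = {"my room": (0, 4), "kitchen": (4, 0)}   # hypothetical room labels

def plan(start, goal):
    # Breadth-first search from start to goal; returns a list of cells or None.
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(GRID) and 0 <= nc < len(GRID[0])
                    and GRID[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

# "Go to your room" -> look up the room's target cell and plan from the current cell.
print(plan((2, 0), ROOMS["my room"]))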
My webpage displays runtime-generated FusionTables data on a Google Map.
The problem is: when you create a FusionTable with a geometry-type column and display it for the first time, Google has to load all the related map tiles into its server-side cache. This takes a while - sometimes 2-3 seconds, sometimes 15-20 seconds.
During caching, the map is supposed to display a grey overlay saying "Data may still be loading...". I'd like to avoid this screen, because it's very buggy: sometimes the overlay is displayed, sometimes not.
I'm looking for a way to detect whether all map tiles have been cached, so that I can display the map to the user.
I already tried refreshing the map tiles periodically, but this gives me no feedback on when to stop refreshing:
$("img[src*='googleapis']").each(function(){
$(this).attr("src",$(this).attr("src")+"&"+(new Date()).getTime());
});
For this reason I'm looking for other solutions.
Try simplifying your geographic data. The message appears when the server-side process fails to serve the map tile within a reasonable time. By reducing the complexity of the polygons, you're much more likely not to see the "Data may still be loading...." message tiles.
You can reduce the complexity in two ways: reduce the number of vertices (points) that define the polygons, and reduce the precision of the lat/long locations.
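As an illustration of the second point, here is a hedged sketch (not FusionTables-specific code) of precision reduction: rounding to about 5 decimal places keeps roughly 1-metre accuracy and lets you drop consecutive vertices that collapse onto each other after rounding.

# Round coordinates and drop consecutive points that become duplicates.
def simplify(points, precision=5):
    out = []
    for lat, lng in points:
        p = (round(lat, precision), round(lng, precision))
        if not out or p != out[-1]:
            out.append(p)
    return out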
Please also note, as an FYI, that as the exact same map gets requested again and again by different viewers, the processed results are cached server-side and the message becomes much less likely to appear; when it does, it is usually due to public caching.
-Rebecca
Answering my own question: there's simply no way to do it as of now.
As a side note: I'd never advise anyone to consider using FusionTables to display dynamically generated geographic data. It's not designed for that. Only use FT if you have a somewhat static dataset that changes rarely.
I am looking for advice on an application I am developing that uses Google Maps.
Summary:
A user has a list of criteria for searching for street segments that fulfill those criteria. The street segments will be colored with 3 colors showing those below average, average, and above average. The user then clicks on a street segment to see an information window showing the properties of that specific segment, hiding the segments not selected until he/she closes the window, at which point the other polylines become visible again. This looks quite like the Monopoly City Streets game Hasbro made some months ago, the differences being that I do not use Flash, I can't use OpenStreetMap because it doesn't list street segments (and even if it did, the IDs wouldn't be the same), and I do not have to show Google SketchUp buildings on top.
Information:
I have a database of street segments with IDs, polyline points and centroid.
The database has 6,000,000 street segment records in it. To narrow the generated data a bit, we focus on one city at a time. The largest city we must show has 250,000 street segments. This means 250,000 polylines to show.
Our longest polyline has 9,600 characters, which is stored across two varchar(8000) columns in SQL Server 2008.
We need to use API v3 because it is faster than API v2 and the application will be ported to the iPhone. For now it's an ASP.NET 3.5 application with SQL Server 2008.
Performance is a priority.
Problems:
Most of the demo projects that do this are made with API v2, so besides the tutorials on the Google API v3 reference page I have nothing to compare performance or technology choices against in order to achieve my goal.
There is no available .NET wrapper for the API v3 yet.
Generating 250,000 polylines creates a heavy file which takes time to transfer and parse. (I have found a demo with a single polyline of 390,000 points. I think the encoder would be far less efficient with more polylines of fewer points each, since there would be less rounding.)
Since street segments are shown based on criteria, polylines must be dynamically created and a cache can't be used.
Some thoughts:
KML/KMZ:
Pros:
Since it is a standard, we could easily load Bing Maps, Yahoo! Maps, Google Maps, and Google Earth with the same KML file. The data generation would be the same.
Cons:
LineString in KML cannot be an encoded polyline the way the Google Maps API handles them, so it would probably be bigger and slower to display. Zipping a file of that size would take more processing time and require the client side to uncompress the data, and I am not quite sure how an iPhone would handle 250,000 segments, or how a server would handle 40 users browsing at the same time.
JavaScript file:
Pros:
A JavaScript file can contain encoded polylines, which would significantly reduce the amount of data to transfer (a sketch of the encoding is shown after this Pros/Cons list).
Cons:
I would have to write my own stripped-down layer on top of API v3 to add overlays, create polylines, etc. It is more complex than just creating a KML file and pointing to its source.
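For reference, the encoding itself looks simple enough to do server-side when generating the JavaScript file. Below is a sketch of Google's documented encoded-polyline algorithm, written in Python purely for illustration; my real implementation would be in C#:

def encode_polyline(points):
    # points: list of (lat, lng) tuples; returns the encoded string.
    result = []
    prev_lat = prev_lng = 0
    for lat, lng in points:
        lat_e5, lng_e5 = int(round(lat * 1e5)), int(round(lng * 1e5))
        for delta in (lat_e5 - prev_lat, lng_e5 - prev_lng):
            # Zigzag-shift the delta, then emit it in 5-bit chunks offset by 63.
            value = ~(delta << 1) if delta < 0 else (delta << 1)
            while value >= 0x20:
                result.append(chr((0x20 | (value & 0x1F)) + 63))
                value >>= 5
            result.append(chr(value + 63))
        prev_lat, prev_lng = lat_e5, lng_e5
    return "".join(result)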
GeoRSS:
This option isn't adapted for my needs I think, but I could be wrong.
MapServer:
I saw some posts suggesting using MapServer to generate overlays. I'm not quite sure about connecting it to our database or about the performance it would give. Plus, it requires a plugin for generating KML. It seems to me that it wouldn't let me do better than creating my own KML or JavaScript file, and maintenance would be simpler without it.
Monopoly City Streets:
The game is now over, but for those who know what I am talking about, Monopoly City Streets showed, at the maximum zoom level, only the streets whose centroid was inside the bounds of the viewport. Moving the map sent a request to the server for the new streets to show. While I think this was ingenious, I have no idea how to implement something similar. The only thing I thought of was to check whether the centroid's longitude is inside the map bounds on the X axis, and likewise for the latitude on Y. While this could improve performance significantly at high zoom levels, it would give nothing when showing a whole city.
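To make that concrete, the check I have in mind is just a bounding-box comparison against the stored centroids, something like the sketch below (Python purely for illustration; the names are invented). The client would resend the viewport bounds to the server every time the map stops moving.

# Return only the segments whose centroid falls inside the current viewport.
def segments_in_viewport(segments, sw_lat, sw_lng, ne_lat, ne_lng):
    # segments: iterable of (segment_id, centroid_lat, centroid_lng)
    return [seg_id
            for seg_id, lat, lng in segments
            if sw_lat <= lat <= ne_lat and sw_lng <= lng <= ne_lng]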
Clustering:
While clustering is awesome for markers, it seems we cannot cluster polylines. I would have liked something like MarkerClusterer for polylines, with the ability to cluster by my 3 polyline colors. This will probably stay as a "would have been freaking awesome, but forget it".
Arrow:
In a future version I will have to show a direction for each polyline, with an arrow displayed at the centroid. Loading an image or marker would only double my data, so creating a custom overlay will probably be my only option. I have found a demo of something similar to what I would like to achieve. Unfortunately, the demo is very slow, but I only wish to show 1 arrow per polyline, not multiple arrows like the demo does. This functionality will depend on the format of the data, since I don't think KML supports custom overlays.
Criteria:
While the application is done with ASP.NET 3.5, the iPhone port won't use the web to show the application and will be limited in screen size for selecting the criteria. This is why I was leaning toward a service or page that generates the file based on criteria passed as parameters. The service would then generate the file I need to display the polylines on the map. I could also create an ASPX page that does this. The ASPX page approach is better documented than the service approach; there should be a reason for that.
Questions:
Should I create a web service that returns the street segments file, or an ASPX page that returns the file?
Should I create a JavaScript file with encoded polylines, or a KML file with longitude/latitude coordinates, given that the largest polyline has 9,600 characters and I have to render at most 250,000 polylines? Or should I go with MapServer to generate the overlay?
Will I be able to display a simple arrow on the polylines in the next version?
In the case of KML generation, is it faster to create the file manually with XDocument, XmlDocument, or XmlWriter, or to just serialize the street segments to the stream?
This is more a brainstorming Stack Overflow question than an actual code problem. Any answer that helps narrow down the possibilities is as good as someone having all the knowledge to point me to a better choice.
Large numbers of short GPolylines run massively slower than small numbers of long GPolylines.
The speed difference between Google Maps v2 and Google Maps v3 is not going to be significant, because most of the CPU time will be taken up by the actual graphics system of the browser. Google Maps uses the VML, SVG or Canvas graphics systems, depending on the browser. Of these, VML is by far the slowest, and that gets used whenever the browser is MSIE.
Before embarking on tackling 250,000 line segments, I suggest you take a look at this quick speed test of 200 random polylines. Try zooming and panning that map in MSIE.
Then, also consider the amount of data that needs to be sent from the server to the client to specify 250,000 line segments. The amount of data will vary depending on whether you choose KML or JSON or GeoRSS, but if you end up with 20 bytes per line segment, that's about 5 MB, which would take around 50 seconds to fetch on a 1 megabit broadband connection. Consider whether your users would be prepared to sit around for 50 seconds.
The only solution that really makes sense is to do what Google do for their traffic overlay, and draw the lines onto tiles in the server, and have those tiles be displayed as a GTileLayerOverlay in the client.
What you need is a spatially aware database, and a server-side graphics library like gd or ImageMagick. The client asks for a tile from the server. If the zoom is above a certain level, the server scans the database for line segments whose bounding boxes overlap the bounding box of the requested tile and uses the graphics library to draw them.
The zoom level limit is there to cap the amount of work that your database and server need to do. You don't want to end up drawing 250,000 line segments onto a single zoomed-out tile, because that's an awful lot of hard work for the server and isn't going to mean very much to the user.
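To sketch what that tile server boils down to (Python with Pillow here purely for illustration; any server-side language plus gd or ImageMagick does the same job, and every name below is invented):

import math
from PIL import Image, ImageDraw

TILE_SIZE = 256

def latlng_to_pixel(lat, lng, zoom):
    # Standard Web Mercator: map a lat/lng to global pixel coordinates at this zoom.
    scale = TILE_SIZE * (2 ** zoom)
    x = (lng + 180.0) / 360.0 * scale
    siny = math.sin(math.radians(lat))
    y = (0.5 - math.log((1 + siny) / (1 - siny)) / (4 * math.pi)) * scale
    return x, y

def render_tile(tx, ty, zoom, segments, min_zoom=14):
    # segments: [(colour, [(lat, lng), ...]), ...], already filtered by the
    # database to those whose bounding box overlaps this tile.
    img = Image.new("RGBA", (TILE_SIZE, TILE_SIZE), (0, 0, 0, 0))
    if zoom < min_zoom:
        return img          # zoomed out too far: draw nothing
    draw = ImageDraw.Draw(img)
    for colour, points in segments:
        pixels = []
        for lat, lng in points:
            px, py = latlng_to_pixel(lat, lng, zoom)
            pixels.append((px - tx * TILE_SIZE, py - ty * TILE_SIZE))
        draw.line(pixels, fill=colour, width=2)
    return img

The tile's own lat/lng bounding box, which is what the database query needs, comes from inverting the same projection for the tile's corner pixels.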
Regarding click handling:
The easy thing to do is to listen for clicks on the map, rather than on the objects, and send the click details to a server. The server then uses the click location to search the spatially aware database and returns the details of the clicked object if there is one. The client code does this:
GEvent.addListener(map,"click",function(overlay,point) {
var url="clickserver.php?lat=" + point.lat() + "&lng=" +point.lng();
GDownloadUrl(url, function(html) {
if (html.length) {
map.openInfoWindow(html)
}
});
});
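On the server, the lookup behind that clickserver URL is essentially a nearest-segment search around the clicked point. A rough sketch of the geometry, ignoring the spatial index for clarity (in practice you would let the spatially aware database do this):

# Distance from point p to the segment a-b, all as (lat, lng) tuples.
# Coordinates are treated as planar, which is fine at street scale.
def point_segment_distance(p, a, b):
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def find_clicked_segment(click, segments, tolerance=0.0001):
    # segments: iterable of (segment_id, [(lat, lng), ...]); returns the id
    # of the closest segment within the tolerance, or None.
    best_id, best_d = None, tolerance
    for seg_id, pts in segments:
        for a, b in zip(pts, pts[1:]):
            d = point_segment_distance(click, a, b)
            if d < best_d:
                best_id, best_d = seg_id, d
    return best_id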
The harder thing to do is to handle the changing of the cursor when the pointer is over the polylines. There's a known technique for doing cursor changes for small markers, which works like this:
Whenever a tile is fetched, the .getTileUrl() also makes a call to a server that returns a list of hotspot boxes for that tile. As the mouse moves, the client constantly calculates which tile the mouse is over, and then scans the corresponding list of hotspot boxes.
Google themselves, in their GLayer() code, add the sophistication of performing a quadtree search to speed up the search for hotspots within a tile, but other people who have implemented this strategy in their own code reckon that's not necessary, and a linear scan of the hotspot list is fast enough.
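For what it's worth, the linear scan itself is tiny. Here is a language-neutral sketch of the lookup logic, written in Python; in practice it runs in the client's mousemove handler, and the tile size and data layout are assumptions:

TILE_SIZE = 256

def hit_test(mouse_px, mouse_py, hotspots_by_tile):
    # hotspots_by_tile: {(tx, ty): [(x1, y1, x2, y2, marker_id), ...]},
    # with box coordinates relative to the tile's top-left corner.
    tx, ty = int(mouse_px // TILE_SIZE), int(mouse_py // TILE_SIZE)
    lx, ly = mouse_px - tx * TILE_SIZE, mouse_py - ty * TILE_SIZE
    for x1, y1, x2, y2, marker_id in hotspots_by_tile.get((tx, ty), []):
        if x1 <= lx <= x2 and y1 <= ly <= y2:
            return marker_id      # the cursor is over this marker's hotspot
    return None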
I've no idea how to extend that to handling cursor over polyline detection.