I am trying to load a very complex set of GLTF models in AFRAME.
My problem is simple to state: my goal is to load about 9 million glTF models into a single scene.
My idea was to combine different levels of detail for the glTF models depending on camera distance, and to load only the glTFs that are visible to the camera. Otherwise all of the assets are loaded into memory and my browser eventually hangs due to memory consumption.
Is this possible in AFRAME?
With some attention to A-Frame best practices, you should be able to make a performant scene with tens of thousands or even hundreds of thousands of polygons. But it will not be possible to load millions of distinct glTF models simultaneously in A-Frame, or any WebGL renderer for that matter.
Assuming you just want to show as many models as possible, try to take advantage of certain special cases:
If you need to render many copies of the same model, you can use a technique called "instancing". Check out aframe-instancing for some example code on how to do that. Depending on the complexity of your model, you may be able to show thousands (but probably not millions) of copies at once.
If you're making something like an RPG — which needs many things in the world, but only a few are in sight at any given time — then you can be clever about dividing your world into zones, and only loading models for the current zone.
Both of these are non-trivial to implement, and beyond the scope of a Stack Overflow question. My suggestion would be to try to get started on your own, and when you run into trouble, post new questions with the minimum amount of code necessary to see what you're trying to do. You may also find the A-Frame Slack group to be useful.
I record bird calls with two microphones. The recordings can be up to 3 hours long, and it is time-consuming to listen to the whole file in Audacity each day. What I want is a script that takes my original file and gives me a bunch of short audio files, each containing one bird call. With my microphones I can record in MP3 or WAV. The script should only keep calls whose frequency is higher than n Hz; that threshold corresponds to the background noise, which is fixed and should not be saved. I don't know which language is best for this, and I have absolutely no idea how to do it.
Thank you all,
Thomas
This should be pretty easily doable in a variety of languages but Python is a decent place to start. I'll link you some relevant resources to get you started and then you can narrow your question if you run into problems.
To read your audio file in .wav format look at this documentation.
To take the data from your audio file and put it into a numpy array see this question and answer.
Here is the documentation for computing the Fourier transform of your data (to get the frequency content).
I would suggest taking a moving window and computing the Fourier transform of the data within that window and then saving the result to a file if there's significant content above your threshold frequency. The first link should have info on saving the audio file.
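As a rough sketch of that moving-window approach in Python (the file name, window length, threshold frequency and the 50% energy test below are all placeholder assumptions you would tune for your own recordings):

```python
# Minimal sketch: scan a WAV file in fixed windows, keep windows whose
# spectral energy above a threshold frequency is significant.
# Assumes a mono (or stereo) WAV file; all parameters are placeholders.
import numpy as np
from scipy.io import wavfile

THRESHOLD_HZ = 2000        # background noise sits below this; adjust for your setup
WINDOW_SECONDS = 1.0       # length of each analysis window

rate, data = wavfile.read("recording.wav")
if data.ndim > 1:          # keep one channel if the recording is stereo
    data = data[:, 0]

window = int(rate * WINDOW_SECONDS)
clip_index = 0
for start in range(0, len(data) - window, window):
    chunk = data[start:start + window].astype(np.float64)
    spectrum = np.abs(np.fft.rfft(chunk))
    freqs = np.fft.rfftfreq(len(chunk), d=1.0 / rate)
    # Compare energy above the threshold to total energy in this window.
    high = spectrum[freqs > THRESHOLD_HZ].sum()
    if high > 0.5 * spectrum.sum():          # crude "is there a bird here?" test
        wavfile.write(f"cry_{clip_index:04d}.wav", rate, data[start:start + window])
        clip_index += 1
```

In practice you would probably overlap the windows and merge adjacent detections into one clip, but this shows the basic read / FFT / threshold / write loop.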
You can get some background on using the Fourier transform for this type of application from this Q&A and if it turns out that your problem is really difficult, I would suggest looking into some of the methods for speech detection.
For a more out-there suggestion, you could try frequency-shifting your recording by adjusting the sample rate so that bird sounds resemble human speech, and then use a black-box tool like Google's VAD to pick out the bird calls. I'm not sure how well that would work, though.
The problem of cutting a long file up into sections of interest is usually referred to as (automatic) audio segmentation. If you are willing to output fixed-length audio clips (say 10 seconds), you can also treat it as an audio classification problem.
The latter is a very well-studied problem, and it has also been applied to birds.
The DCASE2018 challenge had one task about Bird Audio Detection, and it has lots of advanced methods. Basically all the best-performing systems use a Convolutional Neural Network on log-scaled mel-spectrograms. A mel-spectrogram is 2D, so it basically becomes image classification. Many of the submissions are open source, so you can look at the code and play with them. Do note that they are mostly focused on scoring well in a research competition, not on being practical tools for splitting a few files.
If you want to build your own model for this, I would recommend taking a Convolutional Neural Network pretrained on images, training it further on the DCASE2018 data, and then testing it on your own data. That should give a very accurate system, though it will take a while to set up.
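If you want to see what these systems actually feed into the network, computing a log-scaled mel-spectrogram is only a few lines with librosa (the file name and parameters below are just examples, not values from the challenge):

```python
# Sketch: turn an audio clip into a log-scaled mel-spectrogram,
# i.e. the 2D "image" that CNN-based bird-detection systems classify.
import librosa
import numpy as np

y, sr = librosa.load("clip.wav", sr=22050)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048, hop_length=512, n_mels=64)
log_mel = librosa.power_to_db(mel, ref=np.max)   # shape: (n_mels, n_frames)
```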
I'm working on a simple game project with libGDX and I need some help to make a random infinite world. After googling for hours I learned that many games use the "chunk theory" to generate an infinite map and also implement tiles. There are many things I don't understand... for example:
What is a tile? And what is a chunk?
How can i implement this "chunk theory" in a game?
Is this the best way to generate an infinite random map?
Can someone answer my questions and help clarify this for me?
Thanks in advance
Tile-based maps are maps organized as a grid. A tile is a cell of this grid; objects are placed inside a tile/cell and can't be placed between two cells. Think of Minecraft: every block there is one tile.
A chunk is a part of the map containing many tiles. It has a fixed size and is used so that only part of an infinite map has to be loaded at a time.
Imagine a map with a size of 1600*1600 tiles. You won't be able to see all the tiles at once. Also, you don't need to update the logic for the whole map, as most of it won't affect you anyway. So you split your map into little parts, so-called chunks, which have a fixed size (for example 16*16).
Depending on your position, adjacent chunks are loaded and far-away chunks are unloaded. So if you move from south to north, chunks in the north are loaded and chunks in the south are unloaded.
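As a rough illustration of that loading/unloading logic (shown in Python for brevity since the idea is language-agnostic; the chunk size, load radius and the placeholder generator are arbitrary assumptions, not libGDX API):

```python
# Sketch of chunk loading around the player. CHUNK_SIZE, LOAD_RADIUS and
# generate_chunk() are placeholders for your own world generation.
CHUNK_SIZE = 16      # tiles per chunk side
LOAD_RADIUS = 1      # keep the current chunk plus its neighbours loaded

loaded_chunks = {}   # (cx, cy) -> chunk data

def chunk_coords(tile_x, tile_y):
    """Which chunk does a tile belong to?"""
    return tile_x // CHUNK_SIZE, tile_y // CHUNK_SIZE

def generate_chunk(cx, cy):
    # Placeholder: real code would run a noise function here.
    return [[0] * CHUNK_SIZE for _ in range(CHUNK_SIZE)]

def update_chunks(player_tile_x, player_tile_y):
    pcx, pcy = chunk_coords(player_tile_x, player_tile_y)
    wanted = {(pcx + dx, pcy + dy)
              for dx in range(-LOAD_RADIUS, LOAD_RADIUS + 1)
              for dy in range(-LOAD_RADIUS, LOAD_RADIUS + 1)}
    for key in wanted - set(loaded_chunks):          # load newly visible chunks
        loaded_chunks[key] = generate_chunk(*key)
    for key in set(loaded_chunks) - wanted:          # unload far-away chunks
        del loaded_chunks[key]
```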
I never implemented a full chunk system myself, so I can't walk you through a real implementation, but I guess there are many tutorials out there.
This is not the way to generate infinite maps, but the way to store, load and work with huge maps. The generation itself is usually done with some noise functions, but that's a different story.
Anyway, I suggest you start with something smaller and simpler. Rushing into complex things will just discourage you.
I am in the process of coding a level design tool in Qt with OpenGL (for a relevant example see Valve's Hammer, as Source games are what I'm primarily designing this for) and have currently written a few classes to represent 3D objects (vertices, edges, faces). I plan to implement an "object" class which ties the three together, keeps track of its own vertices, etc.
After having read up on rendering polygons on http://open.gl, I have a couple of questions regarding the most efficient way to render the content. Bear in mind that this is a level editor, so I am anticipating needing to render a large number of objects with arbitrary shapes and numbers of vertices/faces.
Edit: Updated to be less broad.
When would be the best time to create the VBO? The Qt OpenGL example creates a VBO when a viewport is initialized, but I'd expect it to be inefficient to create one for each viewport.
Regarding the submitted answer, would it be a sensible idea to create one VBO for geometry, another for mesh models, etc? What happens if/when a VBO overflows?
VBOs should be (re)initialized whenever there's a need for it. Think of VBOs as memory pools: you would not allocate one VBO per object, but group similar objects into a single VBO. When you run out of space in one VBO, you allocate another one.
Today's GPUs are optimized for rendering indexed triangles. So GL_TRIANGLES will suffice in 90% of all cases.
Frankly, modern OpenGL implementations largely ignore the buffer object access mode. So many programs made such ill use of that parameter that it became more efficient to profile the actual usage pattern and adjust driver behavior toward that. However, it's still a good idea to use the right mode, and in your case that's GL_STATIC_DRAW.
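As a sketch of the "VBO as a memory pool" idea, here is what sub-allocating several objects into one GL_STATIC_DRAW buffer can look like (shown with PyOpenGL for brevity; it assumes an OpenGL context is already current, e.g. inside your Qt viewport, and the pool size and helper function are made up for illustration):

```python
# Sketch: one big VBO used as a pool for several objects, filled with
# glBufferSubData. The pool size is arbitrary.
import numpy as np
from OpenGL.GL import (
    glGenBuffers, glBindBuffer, glBufferData, glBufferSubData,
    GL_ARRAY_BUFFER, GL_STATIC_DRAW,
)

POOL_BYTES = 4 * 1024 * 1024           # 4 MiB vertex pool

vbo = glGenBuffers(1)
glBindBuffer(GL_ARRAY_BUFFER, vbo)
glBufferData(GL_ARRAY_BUFFER, POOL_BYTES, None, GL_STATIC_DRAW)  # reserve space, no data yet

offset = 0
def upload_mesh(vertices):
    """Copy one object's vertex data into the shared pool; return its byte offset."""
    global offset
    data = np.asarray(vertices, dtype=np.float32)
    if offset + data.nbytes > POOL_BYTES:
        raise MemoryError("pool full: allocate another VBO")  # grow by adding a second pool
    glBufferSubData(GL_ARRAY_BUFFER, offset, data.nbytes, data)
    start = offset
    offset += data.nbytes
    return start
```

Each object then only needs to remember its offset (and index range, if you draw indexed GL_TRIANGLES) into the shared buffer.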
I'm currently working on a very large Flash platformer game (hundreds of classes) and dealing with an issue where the game slowly grinds to a halt if you leave it running for long enough. I didn't write the game, so I'm only vaguely familiar with its internals. Some of the mysterious symptoms include:
The game will run fine for a fairly consistent amount of time (on a given level), when all of a sudden it starts leaking memory exponentially.
The time it takes for the game to hit the point where it leaks exponentially is shorter when there's more sprites on the screen.
Even when nothing is being visibly rendered to the screen, the game slows down.
The game slows down faster with more frequent sprite collisions.
Completely disabling the collision code does slow down the degradation but doesn't prevent the game from eventually dropping frames.
Looking at the source and using the Flex profiler, my prime suspects are:
There are many loitering objects, especially WeakMethodClosure, taking up large amounts of memory.
The program makes extremely extensive use of weak event listeners (dozens dispatched per frame).
BitmapData is being copied every time a new sprite is created. These are 50x50 pixel sprites that spawn at about 8 sprites per second.
I know it's next to impossible to tell me the problem without seeing the source, so I'm just looking for tidbits that might help me narrow this down. Has anybody experienced this evasive performance degradation in their own projects? What was the cause in your case?
I recently completed an optimization pass on a large project.
I can give you some architectural advice:
Main principle: make as few function and event calls as possible.
Get rid of all but one onEnterFrame / onInterval / onTimer cycle. Do everything you need in one general event call.
You will probably need a lot of static arrays for storing references to the objects being processed.
Do your graphics / render work in that one main loop as well.
Try to use big (possibly prerendered) canvases instead of many small sprites / bitmaps. This is usually done for backgrounds, but it also works well for smaller objects (like trees, platforms, etc.).
Get rid of small bitmap resources: assemble them into one tile sheet and draw your objects directly from it via the source-rect property.
I hope this helps you! Memory leaks are such a headache.
P.S. Test your game in different browsers. IE leaks the most of all; sometimes it doesn't even clear memory after a refresh.
Avoid anonymous methods; change them into class-level methods.
Use weak references in addEventListener and/or make sure you remove all the listeners on an object before removing it with removeChild.
Make sure you removeChild all the sprites instead of just letting them fly off the screen. Also, if possible, reuse the sprites instead of creating new ones.
You should consider object pooling if you have a lot of creation/destruction going on, especially with heavy objects like BitmapData.
see Object Pool class
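The pattern itself is small; here is a language-agnostic sketch (written in Python for brevity, the AS3 version is the same idea wrapped in a class holding a Vector of free objects and a factory function):

```python
# Sketch of the object-pool pattern: pre-create expensive objects once,
# then hand them out and take them back instead of creating/destroying.
class ObjectPool:
    def __init__(self, factory, size):
        self._factory = factory
        self._free = [factory() for _ in range(size)]   # pre-create the heavy objects

    def acquire(self):
        # Reuse an idle object if one exists, otherwise create a new one.
        return self._free.pop() if self._free else self._factory()

    def release(self, obj):
        # Return the object to the pool instead of letting it be garbage collected.
        self._free.append(obj)
```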
Sounds like you need to profile your application to see what is happening.
This thread had a couple of suggestions, but, ultimately, you will need to just put in code to help determine what is going on.
Profiling ActionScript-3 Code
You may want to see if you can just run some smaller parts over a period of time and see if you see a slowdown.
You may want to unit test your application, so you can run various parts rapidly, looking for memory leaks.
One framework is: http://asunit.org/, and another is: http://opensource.adobe.com/wiki/display/flexunit/
Unit testing is something I use a great deal for profiling: you can test at the top level of your game, run it thousands of times looking for problems, then run each part and see which has problems, and work your way down. This is a manual process, but if the two ideas in the SO thread listed at the beginning don't help, this may be your best approach.
Are you using too much memory? Or is your cpu usage too high?
First determine whether it's a memory or a processor limit you are hitting. It sounds like the latter; it seems like there are many objects doing work, and likely those extra sprites aren't being freed properly. Look for dependencies among objects/variables/anything in those events, make sure the sprites are removed, and pay attention to any handlers of ENTER_FRAME or recurring events.
It sounds far more likely that you are capping out on processor speed than memory. You have to try extra hard to memory cap a Flash application.
Fortunately there are a lot of easy things you can do to keep CPU down...
1) Stringently manage event listeners for everything, especially mouse listeners. Do you have a ton of event listeners on all your sprite objects? That could be a problem.
2) Access arrays using int or uint indices instead of Number. This is huge, and it is one of those obscure Adobe tricks. Accessing an array with an int or uint index is far faster than with a Number, and if you do a lot of iteration (and it sounds like you do) this could shave precious milliseconds off your frame execution.
3) In the same vein as #2, monitor your math operations and what types you are using for certain operations. The slowest thing you can do in math operations for AS3 is repetitive casting (feeding ints to a function that returns a Number), or performing basic operations like add and subtract on Number instead of int.
The great thing about having a huge program like this in Flash is that even a minor optimization change can have a major impact on performance. I once toyed with a raytrace engine in AS3 where declaring one extra variable dropped my FPS from 30 to 23.
Flash has a rather infamous issue (many consider it a bug) that causes event listeners for timers and the ENTER_FRAME event to not get garbage collected, even if they were weakly referenced. So, even though it's good practice to use weakly referenced events, you should still remove all your event listeners when they are no longer needed.
The studio I work at is currently developing the Tony Hawk XI website, and I am responsible for the Flash/AS3 development. As part of the pitch, I put together an augmented reality skateboard example to be shown, which impressed the client very much.
After a few weeks of getting stronger with Papervision3D, and getting to know the Flar Toolkit, I have successfully imported md2 and dae files that load and interact with my custom marker.
Now it has come time to develop some of my own models; I will be using 3DSMAX. I want to know what the limitations are on things like poly-count, character rigging and animation, texturing, tricks for exporting and creating the proper format file and any other bits of information that may save me some serious headaches down the road.
Currently I have a Quake2 MD2 model, Ernie, pulled inside of a FlarToolkit demo here.
This is very low-poly, and I was wondering how many polys I could expect to get away with, given that today's machines are so much faster.
Brian Hodge
blog.hodgedev.com | hodgedev.com
I've heard that 2000 polys is about the threshold for good performance. In practice though, it's been hit or miss, and a lot of things can have an impact. So far I've run into performance hits when using animated MovieClip materials, animated materials with an alpha channel, and precise materials.
Having to clip objects seems to be a double-edged sword. In some cases it will increase performance by a good deal, and in others (seemingly primarily when there are a lot of polys on the edge of the viewport) it will drop the framerate by a good 10-15 fps. So I'd say the view you set up is something to think about as well.
For example, we have a model of the interior of a store with some shelves, products and customers walking around. In total we have just under 600 triangles (according to the StatsView, which you should check out if you haven't yet: org.papervision3d.view.stats.StatsView). On my computer, a new quad-core machine, it runs at a steady 30 fps (which is where we want it), but on an old Dell XPS (Pentium 4) it runs between 20 and 30 fps depending on what objects are being clipped, etc.
We try to reduce the poly count and use textures creatively to fix as many of the performance issues as possible. Unfortunately our minimum specs are really low, so we need to do a lot to get it to run well.
Edit:
Another thing we're doing is swapping out less detailed models for higher-detailed ones when zoomed in. If you aren't zooming at all, then this probably won't help.
Hope that helps a bit.