Multinote instruments and chords - ableton-live

I am new to the Ableton Live software. I would like to understand why for some instruments I can play several notes at the same time (and create chord progressions), while for others I can hear only one note of a chord.
For example, there are two guitars: 'Power Chords Guitar' and 'Please Rise For Jimi Guitar'. Both of them are based on Operator. With the first one I can press several keys on my MIDI keyboard and hear them all; with the second I can hear only one note of a chord.
I tried comparing the Operator options, but I was unable to find the setting that causes this multinote/mononote behaviour.
Thank you very much for your help.
J

Ableton's Operator is a four-oscillator FM synth, and whether a preset plays polyphonically is governed by its global voice settings rather than by the oscillators themselves. Check the Voices parameter in Operator's global section: if it is set to 1, the preset is monophonic and only one note of a chord will sound. Glide is also a legato-style, single-voice effect, so presets built around it tend to behave monophonically.
Check out Ableton's webpage for reference on how to use its instruments.

Related

What device/instrument/technology should I use for detecting objects lying on a given surface?

First off: thanks for taking the time to help me with my problem. It is much appreciated :)
I am building a natural user interface. I'd like the interface to detect several (up to 40) objects lying on it, and to detect when the objects are moved on its canvas. It is not important what the actual object on the surface is (e.g. "bottle") or what color it has; only the shape (e.g. "circle") and the placement of the object are of interest.
So far I'm using a webcam connected to my computer and Processing's blob functionality to detect the objects on the surface of the interface (see picture 1). This approach has some major disadvantages for what I am trying to accomplish:
I do not want the user to see the camera or any additional device, because it distracts the user's attention. In fact, the surface should be completely dark.
Whenever I reach in with my hand to rearrange the objects on the interface, the blob detection gets very busy and recognizes objects (my hand) that are not touching the canvas directly. This problem can hardly be tackled with a Kinect, because its depth sensing does not work through glass/acrylic glass - correct me if I am wrong.
It would be nice to install a few LEDs on the canvas, controlled by an Arduino. Unfortunately, the light from the LEDs would disturb the blob detection.
Because of the camera's focal length, the table needs to be unnecessarily high (60 cm / 23 inch).
Do you have any ideas for an alternative device/technology to detect the objects? It would be nice if it worked well with Processing and Arduino.
Thanks in advance! :)
Possibilities:
Use reflective tinted glass so that the surface appears dark or mirrored from above.
Illuminate the area where you place the webcam with an array of IR LEDs.
I would suggest colour-based detection and contouring of the objects.
If you are using colour-based detection, convert the frames to the HSV or YCrCb colour space; these are much better than RGB for segmenting the required area.
I recommend checking out https://github.com/atduskgreg/opencv-processing. It interfaces OpenCV with Processing, so you get a lot of OpenCV's functionality inside Processing.
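Not tested against your rig, but here is a minimal sketch of that pipeline using the OpenCV C++ API (the opencv-processing wrapper exposes the same calls under similar names): convert to HSV, threshold, then find contours. The cv::inRange bounds and the minimum-area cutoff are placeholders you would tune for your lighting and objects.

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::VideoCapture cam(0);               // webcam looking at the surface
    cv::Mat frame, hsv, mask;

    while (cam.read(frame)) {
        // Segment in HSV rather than RGB; hue/saturation are far more
        // stable under lighting changes than raw RGB values.
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);

        // Placeholder range - tune these bounds to whatever separates
        // your objects from the (dark) surface.
        cv::inRange(hsv, cv::Scalar(0, 0, 200), cv::Scalar(180, 60, 255), mask);

        // Contour each blob; the centroid gives the object's placement.
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        for (const auto &c : contours) {
            if (cv::contourArea(c) < 100.0) continue;   // drop small noise blobs
            cv::Moments m = cv::moments(c);
            cv::Point2f center(static_cast<float>(m.m10 / m.m00),
                               static_cast<float>(m.m01 / m.m00));
            cv::circle(frame, center, 5, cv::Scalar(0, 255, 0), -1);
        }

        cv::imshow("objects", frame);
        if (cv::waitKey(1) == 27) break;    // Esc quits
    }
    return 0;
}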
One possibility:
Use a webcam with infrared capability (such as a security camera with built-in IR illumination). Apparently some normal webcams can be converted to IR use by removing a filter; I have no idea how common that is.
Make the tabletop out of some material that is IR-transparent, but opaque or nearly so to visible light. (Look at the lens on most any IR remote control for an example.)
This doesn't help much with #2, unfortunately. Perhaps you can be a bit pickier about the size/shape of the blobs you recognize as being your objects?
If you only need a few distinct points of illumination for #3, you could put laser diodes under the table, out of the path of the camera - that should make a visible spot on top, if the tabletop material isn't completely opaque. If you need arbitrary positioning of the lights - perhaps a projector on the ceiling, pointing down?
Look into OpenCV. It's an open source computer vision project.
In addition to existing ideas (which are great), I'd like to suggest trying TUIO Processing.
Once you have the camera set up (with the right field of view/lens/etc. based on your physical constraints), you could probably get away with sticking TUIO markers to the bottom of your objects.
The software will detect the markers and you'll be able to differentiate the objects by ID, but also get their position/rotation/etc., and your hands will not be part of that.
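To give an idea of what the receiving side looks like, here is a rough listener sketch against the TUIO 1.1 C++ reference client (the Processing client is analogous). The callback names are from memory and the client/port setup is left out - wire it up the way the TuioDump example bundled with the library does - so treat this as an assumption-laden sketch rather than copy-paste code.

#include "TuioListener.h"   // from the TUIO 1.1 C++ reference implementation
#include <cstdio>

using namespace TUIO;

// Called whenever a fiducial marker on the table appears, moves or leaves.
// Hands are not fiducials, so they never show up here.
class SurfaceListener : public TuioListener {
public:
    void addTuioObject(TuioObject *obj) {
        std::printf("object %d appeared\n", obj->getSymbolID());
    }
    void updateTuioObject(TuioObject *obj) {
        std::printf("object %d at (%.2f, %.2f), angle %.2f\n",
                    obj->getSymbolID(), obj->getX(), obj->getY(), obj->getAngle());
    }
    void removeTuioObject(TuioObject *obj) {
        std::printf("object %d removed\n", obj->getSymbolID());
    }

    // Finger/blob events are not needed for this setup; empty stubs.
    void addTuioCursor(TuioCursor *) {}
    void updateTuioCursor(TuioCursor *) {}
    void removeTuioCursor(TuioCursor *) {}
    void addTuioBlob(TuioBlob *) {}
    void updateTuioBlob(TuioBlob *) {}
    void removeTuioBlob(TuioBlob *) {}
    void refresh(TuioTime) {}
};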

How do I generate a waypoint map in a 2D platformer without expensive jump simulations?

I'm working on a game (using Game Maker: Studio Professional v1.99.355) that needs to have both user-modifiable level geometry and AI pathfinding based on platformer physics. Because of this, I need a way to dynamically figure out which platforms can be reached from which other platforms in order to build a node graph I can feed to A*.
My current approach is, more or less, this:
1. For each platform, consider each other platform in the level.
2. For each of those platforms, if it is obviously unreachable (due to being higher than the maximum jump height, for example), do not form a link and move on to the next platform.
3. If a link seems possible, place an ai_character instance on the starting platform and (within the current step event) simulate a jump attempt.
3.a Repeat this jump attempt for each possible starting position on the starting platform.
4. If the attempt is successful, record the data necessary to replicate it in real time and move on to the next platform.
5. If not, do not form a link.
6. Repeat for all platforms.
This approach works, more or less, and produces a link structure that when visualised looks like this:
linked platforms (Hyperlink because no rep.)
In this example the mostly-concealed pink ghost in the lower right corner is trying to reach the black and white box. The light blue rectangles are just there to highlight where recognised platforms are, the actual platforms are the rows of grey boxes. Link lines are green at the origin and red at the destination.
The huge, glaring problem with this approach is that for a level of only 17 platforms (as shown above) it takes over a second to generate the node graph. The reason is obvious: the yellow text in the screen centre shows how much work it took to build the graph - over 24,000(!) simulated frames, each with attendant collision checks against every block. I literally just run the character's step event in a while loop, so everything it would normally do to handle platformer movement in one frame, it now does 24,000 times.
This is, clearly, unacceptable. If it scales this badly at a mere 17 platforms then it'll be a joke at the hundreds I need to support. Heck, at this geometric time cost it might take years.
In an effort to speed things up, I've focused on the other important debugging number, the tests counter: 239. If I simply tried every possible combination of starting and destination platforms, I would need to run 17 * 16 = 272 tests. By figuring out various ways to predict whether a jump is impossible I have managed to lower the number of expensive tests run by a whopping 33 (12%!). However the more exceptions and special cases I add to the code the more convinced I am that the actual problem is in the jump simulation code, which brings me at long last to my question:
How would you determine, with complete reliability, whether it is possible for a character to jump from one platform to another, preferably without needing to simulate the whole jump?
My specific platform physics:
Jumps are fixed height, unless you hit a ceiling.
Horizontal movement has no acceleration or inertia.
Horizontal air control is allowed.
Further info:
I found this video, which describes a similar problem but which doesn't provide a good solution. This is literally the only resource I've found.
You could limit the number of comparisons by only comparing nearby platforms. I would probably just check the horizontal distance between platforms and, if it is wider than the longest possible jump, not bother checking for a link between those two. But you might have done this already, since you check for the maximum height of a jump.
I glanced at the video and it gave me an idea. Instead of looking at all platforms to find which jumps are impossible, what if you did the opposite? Try placing an AI character on all platforms and see which other platforms they can reach. That's certainly easier to implement if your enemies can't change direction in midair though. Oh well, brainstorming is the key to finding something.
Several ideas you could try out:
Limit the number of comparisons you need to make by using a spatial data structure, like a quadtree. This would let you severely limit how many platforms you even try to check. It is mostly the same as what you're currently doing, but a bit more generic.
Try to pre-compute some jump trajectories ahead of time. This will not catch every case you have, since you allow full horizontal control, but it might let you handle the common cases much more quickly (a rough sketch of this idea follows the list).
Consider some kind of walkability grid instead of a link-generation scheme. When geometry is modified, compute which parts of the level are walkable and which are not, at some resolution (something close to the dimensions of your agent is a good starting point). You could also filter tiles by height, so that grid tiles that are higher than your jump height, and that you can't drop onto from a higher place, are marked as unwalkable. Then, as part of the pathfinding step itself, you can check whether a jump is actually executable ('start a jump, I can rise no more than 5 tiles, and after the peak of the jump I always fall vertically at some speed').
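Expanding on the second idea: with the physics described in the question (fixed jump height, no horizontal acceleration, full air control), the reachable region of a jump can be computed in closed form, so most platform pairs can be accepted or rejected without simulating a single frame. A rough sketch, ignoring walls and ceilings and using placeholder constants in place of the game's real gravity, jump and move speeds:

#include <cmath>

// Placeholder physics constants - substitute the game's actual values.
const double GRAVITY    = 0.5;   // downward acceleration per frame^2
const double JUMP_SPEED = 10.0;  // initial upward speed per frame
const double MOVE_SPEED = 4.0;   // horizontal speed per frame (no inertia)

// Can a character jumping from (x0, y0) land at (x1, y1)?
// Coordinates here use y increasing upwards; walls and ceilings are ignored,
// so this is a necessary-but-not-sufficient filter that prunes most pairs
// before any detailed check.
bool jumpReachable(double x0, double y0, double x1, double y1)
{
    const double apex = JUMP_SPEED * JUMP_SPEED / (2.0 * GRAVITY); // fixed jump height
    const double dy   = y1 - y0;                                   // positive is up
    if (dy > apex) return false;           // target is above the jump apex

    // Time to rise to the apex, plus time to fall from the apex down to dy.
    const double tUp    = JUMP_SPEED / GRAVITY;
    const double tDown  = std::sqrt(2.0 * (apex - dy) / GRAVITY);
    const double tTotal = tUp + tDown;

    // With full air control and nothing in the way, any horizontal offset
    // up to MOVE_SPEED * tTotal is attainable.
    return std::fabs(x1 - x0) <= MOVE_SPEED * tTotal;
}

Pairs that pass this test may still be blocked by geometry, so a cheap line-of-sight or tile check (or, only then, a full simulation) is still needed for them - but as a first filter it reduces the work per platform pair to a handful of arithmetic operations.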

Decode IR (RC5) steps

I have captured the IR signal (I believe RC5) of an HVAC remote control, like this one...
(using Saleae)
This gave me a sequence of pulses of different widths that I can make the Arduino reproduce, and the HVAC recognizes the request. An example is:
unsigned int power_ON[180] = {2888,3918,1911,1049,907,1992,903,989,1936,1023,907,1049,903,989,903,1049,903,1049,907,1992,1851,1992,1915,1049,928,963,928,1023,903,1049,907,1049,928,963,928,1023,903,1053,928,1023,928,963,928,1023,928,1027,928,1023,928,963,928,1023,907,1049,928,1023,928,1906,1941,959,2940,3866,1962,997,932,1967,929,963,1962,997,933,1019,959,933,933,1023,954,997,928,1971,1902,1941,1941,1019,958,933,958,997,954,997,933,1019,959,933,959,997,954,997,928,1023,958,933,958,997,954,997,933,1019,958,933,958,997,954,997,933,1019,958,1881,1962,937,2940,3862,1966,993,958,1941,933,959,1966,993,958,997,954,937,954,997,933,1023,954,1941,1880,1966,1962,997,954,937,928,1023,933,1023,954,997,928,963,928,1023,933,1023,929,1023,928,963,929,1023,928,1027,928,1023,928,963,928,1023,928,1027,928,1023,928,1910,1911,989,3832};
Could anyone guide me through the steps to decode the message, or to understand the different pulse widths?
I guess there must be certain defined pulse widths, each meaning something different?
My initial thought is that I need to:
1) Decode the raw data by converting the pulses to digital 1s and 0s.
2) Identify each section of the code from the digital data. I think the whole configuration is sent on every key press, so identify the sections of the code that state the temperature, fan speed, HVAC mode, clock, etc.
3) Be able to put together a full IR code based on the wanted setup, instead of just saving the whole code and reproducing it.
Any hint or guideline on how to do this?
Am I on the right track?
edit:
I have tried analysing one and the same mode and figuring out which pulses change, but I can't, as the number of pulses varies. Here you can see cooling mode with maximum fan speed and a changing temperature setting.
Here is the Excel file for anyone really into helping:
http://www.filedropper.com/analysiscoolingmodefanspeedmaximum
and the end of the message
So I put your pulse widths (?) into a diagram: http://i.imgur.com/C9k64qB.jpg
Without knowing more about what this actually represents, this does not really help much, I guess.
What buttons did you press while recording this? How did you record this?
I would try to visualize all the data you can get. Record all buttons and put what you get in diagrams. Then stare at them and maybe you will find some logic hidden in there.
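One way to start that search is to collapse each pulse width into a symbol and look for repeating blocks (the capture above looks like a header followed by the same frame sent three times). A quick sketch; the thresholds are eyeballed from the power_ON array and will need adjusting to your data:

#include <cstdio>

// Hypothetical helper: classify each captured pulse width (in microseconds)
// into a rough symbol so repeating patterns become visible. The thresholds
// are guesses from the power_ON capture (~900-1050, ~1850-2000, ~2900 and
// ~3850-3950 us); you may want finer buckets if, say, ~900 vs ~1000 matters.
char classify(unsigned int w) {
    if (w < 1200) return 's';   // short pulse
    if (w < 2400) return 'L';   // long pulse
    if (w < 3300) return 'H';   // header mark
    return 'R';                 // header space
}

int main() {
    // Paste the captured array here (truncated for brevity).
    unsigned int power_ON[] = {2888,3918,1911,1049,907,1992,903,989,1936,1023};
    const int n = sizeof(power_ON) / sizeof(power_ON[0]);

    // Print one symbol per pulse; repeated blocks and bit patterns
    // should stand out visually.
    for (int i = 0; i < n; ++i)
        putchar(classify(power_ON[i]));
    putchar('\n');
    return 0;
}

Once the pulses are reduced to symbols, lining up two captures that differ only in, say, the temperature should make the changing bits much easier to spot.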
Also, open the remote, look at what ICs are inside and look up their datasheets. Maybe you will find the protocol there and you won't have to do any reverse engineering at all.
Keep us updated!

IMediaControl::Run followed by IMediaControl::Stop followed by IMediaControl::Run doesn't switch on certain onboard cameras

I have a DirectShow webcam application. I make use of Sample Grabber to get the buffer callbacks and IVideoWindow to control the display co-ordinates for the Preview. I have Preview and Capture Streams which I run as below.
g_pBuild->RenderStream(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video, cam, g_pGrabberF, pNullRenderer2);
g_pBuild->RenderStream(&PIN_CATEGORY_PREVIEW, &MEDIATYPE_Video, cam, NULL, NULL);
On certain onboard cameras, IMediaControl::Run followed by IMediaControl::Stop followed by IMediaControl::Run doesn't switch the camera back on.
External USB cameras work properly here. How can I diagnose this further? Any pointers would help.
Maybe it's specific to a certain hardware issue in the unit.
Do a quick test by adding a sleep of one second between the calls.
If that helps, then you need to find a way to know whether the unit's state is idle or not.
There are two important parts of the question which you did not provide:
Filter graph topologies
HRESULTs of the method calls
A problem you might be having is that one of the filters in the topology does not handle state transitions well and fails somewhere between states. Presumably your second Run catches it while it is still trying to complete the Stop. You might get an HRESULT there which indicates the issue (better for you), or the filter may fail silently.
The filter graph itself is an unlikely source of the bug. Chances are high that it does everything flawlessly; however, since it internally distributes the calls between filters, one of the filters is letting you down.
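To narrow that down, it can help to log the HRESULT of every transition and to let Stop actually finish before the next Run. A minimal sketch, assuming pControl is the application's existing IMediaControl pointer (RestartGraph is just an illustrative name):

#include <dshow.h>

HRESULT RestartGraph(IMediaControl *pControl)
{
    HRESULT hr = pControl->Stop();
    if (FAILED(hr)) return hr;                 // a filter refused to stop

    // Wait (up to 5 s) until the graph has really reached State_Stopped.
    // VFW_S_STATE_INTERMEDIATE means some filter is still transitioning.
    OAFilterState state = State_Stopped;
    hr = pControl->GetState(5000, &state);
    if (hr == VFW_S_STATE_INTERMEDIATE) {
        // log it: a filter is stuck mid-transition - likely your culprit
    }

    hr = pControl->Run();                      // now restart
    if (hr == S_FALSE) {
        // graph is still cueing; wait for it to reach State_Running
        hr = pControl->GetState(5000, &state);
    }
    return hr;
}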

SoundMixer.computeSpectrum with microphone

Flex has the SoundMixer.computeSpectrum function that lets you compute an FFT from the currently playing sound. What I'd like to do is compute an FFT without playing the sound. Since Flash 10.1 lets us access the microphone bytes directly, it seems like we should be able to compute the FFT directly off of what the user is speaking.
Unfortunately this doesn't work as far as I know. As stated on the Adobe help pages:
The SoundMixer.computeSpectrum() method lets an application read the raw sound data for the waveform that is currently being played. If more than one SoundChannel object is currently playing, the SoundMixer.computeSpectrum() method shows the combined sound data of every SoundChannel object mixed together.
This implies two drawbacks:
It just works on the output (SoundChannel)
It just works on the mix of all outputs.
If you don't need the output channel at all, you could turn its volume down to zero or close to zero - I don't know whether that would work.
Other than that, I don't see any option at the moment besides implementing the FFT myself to compute a spectrum from the microphone data.
I'm not sure if there's a way to pass that data, but if all else fails, you can always compute the FFT yourself.
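For reference, a radix-2 FFT is compact enough to port by hand. Below is a minimal iterative Cooley-Tukey sketch in C++ (the structure carries over almost line for line to ActionScript); it assumes the microphone samples have been packed into the real parts of a buffer whose length is a power of two:

#include <cmath>
#include <complex>
#include <vector>

// Minimal iterative radix-2 Cooley-Tukey FFT (in place).
// 'a' must have a power-of-two length, e.g. 1024 microphone samples
// stored in the real parts. Afterwards a[k] holds the k-th frequency
// bin; the magnitude spectrum is std::abs(a[k]).
void fft(std::vector<std::complex<double>> &a)
{
    const std::size_t n = a.size();
    const double kPi = 3.14159265358979323846;

    // Bit-reversal permutation.
    for (std::size_t i = 1, j = 0; i < n; ++i) {
        std::size_t bit = n >> 1;
        for (; j & bit; bit >>= 1) j ^= bit;
        j ^= bit;
        if (i < j) std::swap(a[i], a[j]);
    }

    // Butterfly passes of doubling length.
    for (std::size_t len = 2; len <= n; len <<= 1) {
        const double ang = -2.0 * kPi / static_cast<double>(len);
        const std::complex<double> wlen(std::cos(ang), std::sin(ang));
        for (std::size_t i = 0; i < n; i += len) {
            std::complex<double> w(1.0, 0.0);
            for (std::size_t k = 0; k < len / 2; ++k) {
                const std::complex<double> u = a[i + k];
                const std::complex<double> v = a[i + k + len / 2] * w;
                a[i + k] = u + v;
                a[i + k + len / 2] = u - v;
                w *= wlen;
            }
        }
    }
}

Taking the magnitudes of the first half of the bins then gives a spectrum roughly comparable to what SoundMixer.computeSpectrum() returns in FFT mode, but computed from the microphone samples instead of the output mix.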
