When an object collides with a wall it stops, but part of its sprite can still pass into the wall. When the wall is on the right the collision works fine, but when approaching from another direction, part of the object sinks into the wall.
I have different collision code for different objects, but that is probably not the problem; the reason for the different code is that different objects react differently to the collision and also move differently.
The event for my user-controlled character:
hor = keyboard_check(ord("D")) - keyboard_check(ord("A"));
ver = keyboard_check(ord("S")) - keyboard_check(ord("W"));

if (place_meeting(x + (hor * sp), y, Object_Wall))
{
    while (!place_meeting(x + sign(hor * sp), y, Object_Wall))
    {
        x += sign(hor);
    }
    hor = 0;
}

if (place_meeting(x, y + (ver * sp), Object_Wall))
{
    while (!place_meeting(x, y + sign(ver * sp), Object_Wall))
    {
        y += sign(ver);
    }
    ver = 0;
}

x += hor * sp;
y += ver * sp;
The code for the enemy, which roams around randomly and should walk away after hitting a wall (Collision with Wall event):
move_towards_point(random(room_width), random(room_height), sp);
time = random_range(50, 150);
The problem with the enemy only occurs while he is chasing the player, so now that I think about it, the character's collision is probably the problem.
Any help on how to fix it?
You probably have a bug in your collision check, but it is hard to pinpoint the exact problem without seeing your sprite definitions and your code (post them, please, but reduce them to a Minimal, Complete, and Verifiable Example first).
Regarding your sentence, "I have different collision code for different objects, so it's probably not the problem" - well, this is a problem indeed. Why have different code for the same task? You are asking for trouble... and that may well be the reason you see the problem now (e.g. the collision-checking code actually running is not the one you think it is...).
Related
I am suddenly in a recursive language class (SML) and recursion is not yet physically sensible for me. I'm thinking about the way a floor of square tiles is sometimes a model or metaphor for integer multiplication, or Cuisenaire rods are a model or analogue for addition and subtraction. Does anyone have any such models you could share?
Imagine you're a real life magician, and can make a copy of yourself. You create your double a step closer to the goal and give him (or her) the same orders as you were given.
Your double does the same to his copy. He's a magician too, you see.
When the final copy finds itself created at the goal, it has nowhere more to go, so it reports back to its creator. Which does the same.
Eventually, you get your answer back – without having moved an inch – and can now easily create the final result from it. You get to pretend not to know about all those doubles doing the actual hard work for you. "Hmm," you say to yourself, "what if I were one step closer to the goal and already knew the result? Wouldn't it be easy to find the final answer then?" (*)
Of course, if you were a double, you'd have to report your findings to your creator.
More here.
(also, I think I saw this "doubles" creation chain event here, though I'm not entirely sure).
(*) and that is the essence of the recursion method of problem solving.
How do I know my procedure is right? If my simple little combination step produces a valid solution, under the assumption that the recursive call produced the correct solution for the smaller case, then all I need is to make sure it works for the smallest case – the base case – and by induction the validity is proven!
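To put the metaphor into code, here is a minimal Python sketch (the step-counting task and all names are just illustrative choices, not anything from the metaphor itself): each call hands the job to a double standing one step closer to the goal, and the copy created at the goal is the base case that reports back first.

def steps_to_goal(position, goal):
    # Base case: the copy created at the goal has nowhere left to go.
    if position == goal:
        return 0
    # "What if I were one step closer and already knew the result?"
    # Create a double one step closer, then add my own step to its answer.
    return 1 + steps_to_goal(position + 1, goal)

print(steps_to_goal(0, 5))  # prints 5 (assumes position <= goal)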
Another possibility is divide-and-conquer, where we split our problem into two halves, so we get to the base case much, much faster. As long as the combination step is simple (and preserves the validity of the solution, of course), it works. In our magician metaphor, I get to create two copies of myself and combine their two answers into one when they are finished. Each of them creates two copies of itself as well, so this creates a branching tree of magicians, instead of the simple line we had before.
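A hedged Python sketch of that branching tree (summing a list by halves is my own stand-in example): each call splits its work between two copies and combines their answers with a single addition.

def total(values):
    # Base cases: an empty or one-element list needs no further splitting.
    if len(values) <= 1:
        return values[0] if values else 0
    # Two copies of the magician, one per half...
    mid = len(values) // 2
    left = total(values[:mid])
    right = total(values[mid:])
    # ...and a simple, validity-preserving combination step.
    return left + right

print(total([3, 1, 4, 1, 5, 9, 2, 6]))  # prints 31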
A good example is the Sierpinski triangle, which is a figure built from three quarter-sized Sierpinski triangles, simply by stacking them up at their corners.
Each of the three component triangles is built according to the same recipe.
Although it doesn't have a base case, so the recursion is unbounded (bottomless, infinite), any finite rendering of an S.T. will presumably draw just a dot in place of any S.T. that is too small (that dot serves as the base case and stops the recursion).
There's a nice picture of it in the linked Wikipedia article.
Recursively drawing an S.T. without the size limit would never draw anything on screen! For mathematicians recursion may be great; engineers, though, should be more cautious about it. :)
Switching to corecursion / iteration (see the linked answer for that), we would first draw the outlines, and the interiors after that; so even without the size limit the picture would appear pretty quickly. The program would then keep itself busy with no further noticeable effect, but that's better than an empty screen.
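For the purely recursive version, a depth limit can stand in for the "too small to draw" size check. Here is a rough Python sketch (text output instead of pixels, and every name is my own choice):

def sierpinski(order):
    # The depth limit plays the role of the base case: below a certain
    # size we draw a single "dot" instead of recursing forever.
    if order == 0:
        return ["*"]
    smaller = sierpinski(order - 1)
    pad = " " * (2 ** (order - 1))
    top = [pad + line + pad for line in smaller]      # one half-size copy on top
    bottom = [line + " " + line for line in smaller]  # two copies along the bottom
    return top + bottom

print("\n".join(sierpinski(3)))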
I came across this piece from Edsger W. Dijkstra; he tells how his child grasped recursion:
A few years later a five-year-old son would show me how smoothly the idea of recursion comes to the unspoilt mind. Walking with me in the middle of town he suddenly remarked to me, "Daddy, not every boat has a lifeboat, has it?" I said, "How come?" "Well, the lifeboat could have a smaller lifeboat, but then that would be without one."
I love this question and couldn't resist adding an answer...
Recursion is the Russian doll of programming. The first example that comes to my mind is closer to an example of mutual recursion:
Mutual recursion everyday example
Mutual recursion is a particular case of recursion (but sometimes it's easier to understand a particular case than the generic one) where we have two functions A and B, defined so that A calls B and B calls A. You can experiment with this very easily using a webcam (it also works with two mirrors):
display the webcam output on your screen with VLC, or any software that can do it.
Point your webcam to the screen.
The screen will progressively display an infinite "vortex" of screens.
What happens?
The webcam (A) captures the screen (B).
The screen displays the image captured by the webcam (the screen itself).
The webcam captures the screen with a screen displayed on it.
The screen displays that image (now there are two screens displayed).
And so on.
You finally end up with an image like this (yes, my webcam is total crap):
"Simple" recursion is more or less the same except that there is only one actor (function) that calls itself (A calls A)
"Simple" Recursion
That's more or less the same answer as @WillNess's, but with a little code and some interactivity (using the JS snippets of SO).
Let's say you are a very motivated gold-miner looking for gold, with a very tiny mine, so tiny that you can only look for gold vertically. And so you dig, and you check for gold. If you find some, you don't have to dig anymore, just take the gold and go. But if you don't, that means you have to dig deeper. So there are only two things that can stop you:
Finding a gold nugget.
The Earth's boiling core of melted iron.
So if you want to write this programmatically -using recursion-, that could be something like this :
// Simulates one dig: returns true with a probability of 1/10
function checkForGold() {
let rnd = Math.round(Math.random() * 10);
return rnd === 1;
}
function digUntilYouFind() {
if (checkForGold()) {
return 1; // he found something, no need to dig deeper
}
// gold not found, digging deeper
return digUntilYouFind();
}
let gold = digUntilYouFind();
console.log(`${gold} nugget found`);
Or with a little more interactivity :
// Simulates one dig: returns true with a probability of 1/10
function checkForGold() {
console.log("checking...");
let rnd = Math.round(Math.random() * 10);
return rnd === 1;
}
function digUntilYouFind() {
if (checkForGold()) {
console.log("OMG, I found something !")
return 1;
}
try {
console.log("digging...");
return digUntilYouFind();
} finally {
console.log("climbing back...");
}
}
let gold = digUntilYouFind();
console.log(`${gold} nugget found`);
If we don't find any gold, the digUntilYouFind function calls itself. When the miner "climbs back" from his mine, it's actually the deepest child call returning the gold nugget up through all of its parents (the call stack) until the value can finally be assigned to the gold variable.
Here the probability is high enough to keep the miner from digging all the way to the Earth's core. The Earth's core is to the miner what the stack size is to a program: when the miner reaches the core he dies in terrible pain, and when the program exceeds the stack size it causes a stack overflow and crashes.
There are optimizations the compiler/interpreter can make to allow arbitrarily deep recursion, such as tail-call optimization.
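For instance, the first version of digUntilYouFind above is already in tail form (the recursive call is the last thing the function does), which is exactly the shape tail-call optimization needs. Where that optimization isn't available (Python doesn't do it, and in practice most JavaScript engines don't either), the safe translation is an explicit loop. A rough Python sketch of the same miner, purely illustrative:

import random

def check_for_gold():
    # Roughly a 1-in-10 chance of striking gold on each dig.
    return random.randint(1, 10) == 1

def dig_until_you_find():
    # The loop replaces the tail call: same logic, constant stack depth.
    while not check_for_gold():
        pass  # gold not found, dig deeper
    return 1  # found a nugget, no need to dig any more

print(dig_until_you_find(), "nugget found")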
Take fractals as being recursive: the same pattern gets applied each time, yet each figure differs from the others.
As natural phenomena with fractal features, Wikipedia presents:
Mountain ranges
Frost crystals
DNA
and, even, proteins.
This is odd, and not quite a physical example except insofar as dance-movement is physical. It occurred to me the other morning. I call it "Written in Latin, solved in Hebrew." Huh? Surely you are saying "Huh?"
By it I mean that encoding a recursion is usually done left-to-right, in the Latin alphabet style: "Def fac(n) = n*(fac(n-1))." The movement style is "outermost case to base case."
But (please check me on this) at least in this simple case, it seems the easiest way to evaluate it is right-to-left, in the Hebrew alphabet style: Start from the base case and move outward to the outermost case:
(fac(0) = 1)
(fac(1) = 1) * (fac(0) = 1)
(fac(2)) * (fac(1) = 1) * (fac(0) = 1)
(fac(n)) * (fac(n-1)) * ... * (fac(2)) * (fac(1) = 1) * (fac(0) = 1)
(* The easier order to calculate <<<<<<<<<<< is leftwards,
   from the base case outwards to the outermost case;
   the more difficult order to calculate >>>>>> is rightwards,
   from the outermost case to the base. *)
Then you do not have to suspend items on the left while awaiting the results of calculations further right. "Dance Leftwards" instead of "Dance rightwards"?
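A small Python sketch of the two dance directions (my own illustration): the recursive definition is written "Latin style", outermost case first, and must suspend every call until the base case answers; the loop evaluates "Hebrew style", building up from the base case so nothing is left waiting.

def fac_recursive(n):
    # Outermost case first; each call is suspended until fac(n - 1) returns.
    return 1 if n == 0 else n * fac_recursive(n - 1)

def fac_from_base(n):
    # Base case first; the running product is always complete so far.
    result = 1                    # fac(0) = 1
    for k in range(1, n + 1):
        result *= k               # extend to fac(k)
    return result

print(fac_recursive(5), fac_from_base(5))  # 120 120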
TL;DR I think I'm passing my array into a function wrongly, and thus the data that's read from it is wrong, which may be mangling the Arduino's memory.
Full code can be found >here<
After a bit of reading, I'm still a tad confused about the best way to pass an array into a function and modify its data within that function.
So far these two questions sort of helped, and allowed my code to compile; but after a bit of testing I'm having issues where the data I expect to see is not being read back correctly once I'm inside the function.
Array/pointer/reference confusion
Passing an array by reference in C?
The basic program...
It lights up 3 LED strips with a base colour of purple (after an initial fade, each lights up one by one), then makes a sort of colour trail effect (7 pixels long) trace along the strip and loop back to the beginning again.
Video can be seen here of the effect https://www.youtube.com/watch?v=S8tVfFfsiqI
I'm going for the same effect, but I have since tried to refactor my code so that it's easier for everyone to adjust the colour parameters.
Original source code can be found here >Click to View< (feel free to copy/modify/use the code however you want, it's for anyone to use really, all in good fun).
What I'm trying to do now...
So the goal now is to refactor the code from above so that it's easier to set the trail effect colour based on the user's preferences. Thus I want to define the colour of the trail elsewhere, and then have each instance of the trail passed into a function that handles updating it (this is done without using classes, just structs and arrays, as classes are confusing for the non-programmer types this code is aimed at).
//Setting up Trail effect length
#define TRAIL_LENGTH 7
typedef struct Color {
    byte r;
    byte g;
    byte b;
};

typedef struct TrailPixel {
    uint16_t position;
    Color color;
};
//Function Prototypes
void trailEffectForward (Adafruit_NeoPixel &strip, struct TrailPixel (*trailArray)[TRAIL_LENGTH] );
//Make the arrays
TrailPixel trailLeft[TRAIL_LENGTH];
TrailPixel trailBottom[TRAIL_LENGTH];
TrailPixel trailRight[TRAIL_LENGTH];
So as you can see from the above, I create two Structs, and then make 3 arrays of those structs. I then populate the "position" value of each of the trail effects with...
for (int i = 0; i < TRAIL_LENGTH; i++) {
    trailLeft[i].position   = i + 5;  //start just off the strip
    trailBottom[i].position = 15 - i; //start off screen, but timed in a way so to look like the Left and Right trails converge onto the bottom at the same time
    trailRight[i].position  = i + 5;  //start just off strip
}
Later on in the code, I call the function that I want to process the effect, and I hand off the details of the array to it. Inside this function I want the commands that update the pixel colour on the light strip and then update the position for next time.
BUT things get mangled really fast, to the point where my Arduino reboots every few seconds and the colours aren't behaving as expected.
Here's how I currently call the trail effect function...
trailEffectForward ( stripBottom , &trailBottom );
Once in there, to try and figure out what's going on, I added some serial output to check the values.
void trailEffectForward(Adafruit_NeoPixel &strip, TrailPixel (*trailArray)[TRAIL_LENGTH]) {
    Serial.println("---------------------");
    Serial.println(trailArray[0]->position);
    Serial.println(trailArray[1]->position);
    Serial.println(trailArray[2]->position);
    Serial.println(trailArray[3]->position);
    Serial.println(trailArray[4]->position);
    Serial.println(trailArray[5]->position);
    Serial.println(trailArray[6]->position);
I would EXPECT, if things worked according to plan, to see the numbers
---------------------
15
14
13
12
11
10
9
But what I end up having is this :(
---------------------
15
5
5
43
1000
0
0
The full code that is currently in a state of Work In Progress can be found http://chiggenwingz.com/quads/ledcode/quad_leds_v0.2workinprogress.ino
Note: I've commented out a lot of the meat that applies colour to the pixels while I was trying to narrow down what was going wrong. Basically, I would expect the garbled output listed above to stop happening.
Once again feel free to use any of the code in your own projects :)
Okay, it looks like I found my answer here: [Passing an array of structs in C]
So the function was this previously...
void trailEffectForward(Adafruit_NeoPixel &strip, TrailPixel (*trailArray)[TRAIL_LENGTH])
and is now this
void trailEffectForward(Adafruit_NeoPixel &strip, struct TrailPixel trailArray[TRAIL_LENGTH] )
Got rid of the whole pointer/reference fun stuff; I had to put the word "struct" there, I believe. The reason the original version misbehaved is that with a pointer-to-array parameter like TrailPixel (*trailArray)[TRAIL_LENGTH], indexing as trailArray[1] steps over a whole 7-element array at a time, so everything past trailArray[0] was reading out of bounds; the correct access would have been (*trailArray)[i].position. With the plain array parameter the argument decays to a pointer to the first element, and trailArray[i].position indexes the elements as intended.
So when I call the function, I was previously using...
trailEffectForward ( stripBottom , &trailBottom );
but now I use this
trailEffectForward ( stripBottom , trailBottom );
I no longer have mangled data, and everything appears to be working happily again.
Hopefully this helps someone out there in the years to come :)
I've looked around Stack Overflow for a solution but have not found one for a problem I have with random NPC movement. Essentially, what I have coded up to now is a simple 2D platformer game using Sprite Kit, with a separate class for the NPC object. I initialize it in my GameScene (SKScene) with no problem, and so far it behaves properly with the physicsWorld I have set up. Now I'm at the part where it simply needs to move randomly in any direction.

I've set up the boundaries and have made it move with SKActions, utilizing things like CGPointMake, so that the NPC moves randomly as needed, waits a little bit at that location, and resumes movement. BOOLs helped with this process. However, I had difficulty getting the sprite to look left when moving left and right when moving right (looking up and down is not needed at all). So I found a way in a book using vectors. I set up a method in the NPC class which is used in the GameScene:
-(void)moveToward:(CGPoint)targetPosition
{
    CGPoint targetVector = CGPointNormalize(CGPointSubtract(targetPosition, self.position));
    targetVector = CGPointMultiplyScalar(targetVector, 150); //150 is interpreted as a speed: the larger the # the faster the NPC moves.
    self.physicsBody.velocity = CGVectorMake(targetVector.x, targetVector.y); //Velocity vector measured in meters per second.

    /*SPRITE DIRECTION*/
    [self faceCurrentDirection]; //Every time NPC begins to move, it will face the appropriate direction due to this method.
}
Now all of this works. But the issue at hand is calling this moveToward method appropriately in the update method. The first thing I tried was this:
-(void)update:(NSTimeInterval)currentTime
{
    /* Called before each frame is rendered */
    if (!npcMoving)
    {
        SKAction *moving = [SKAction runBlock:^{ npcMoving = YES; }]; //THIS IS THE CULPRIT!
        SKAction *generate = [SKAction runBlock:^{ [self generateRandomDestination]; }]; //Creates a random CGFloat X & CGFloat Y.
        SKAction *moveTowards = [SKAction runBlock:^{ _newLocation = CGPointMake(fX, fY);
                                                      [_npc moveToward:_newLocation]; }]; //Moves NPC to that random location.
        SKAction *wait = [SKAction waitForDuration:4.0 withRange:2.0]; //NPC will wait a little...
        [_npc runAction:[SKAction sequence:@[moving, generate, moveTowards, wait]] completion:^{ npcMoving = NO; }]; //...then repeat process.
    }
}
The vector method 'moveToward' requires the 'update' method to be present for NPC movement to happen. I guard against re-entering the block by setting 'npcMoving = YES' at the beginning, in hopes that the NPC will move to the targeted location and then start the process again. This is not the case. If I remove the SKAction with 'npcMoving = YES', the 'update' method runs the entire sequence of SKActions above every frame, which in turn doesn't move my NPC far: it simply has it change the targeted location every frame, creating an 'ADHD' NPC. Could someone please recommend what to do? I absolutely need to retain the vector movement for the directional properties and other future things, but I am at a loss on how to properly implement this with the 'update' method.
Actions perform a task over time. If your npcMoving flag is false, you run an action sequence every frame, which means over 10 frames you will have 10 action sequences running simultaneously. That will cause undefined behavior.
Next, even if you were to stop the existing sequence and run it anew, running an action every frame where at least one action has a duration is practically pointless: the action with a duration will never be able to complete its task in the given time, because it'll be replaced the next frame.
Summary: actions with duration are unsuitable for tasks that require adjustment every frame.
Solutions:
perform tasks by changing the actor's properties (i.e. position, etc.) as and when needed (i.e. every frame)
decide on a task for the actor, then run the corresponding action sequence for that task and wait for it to end before you decide upon a new task (a rough sketch of this second pattern follows below)
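Engine aside (the code in question is SpriteKit/Objective-C), that second option boils down to a tiny state machine. Here is a hedged, engine-agnostic Python sketch of the pattern, with every name invented for illustration:

import random

class Wanderer:
    def __init__(self):
        self.busy = False  # plays the role of the npcMoving flag

    def update(self, start_task):
        # Called once per frame; only start a new task when the old one has finished.
        if not self.busy:
            self.busy = True
            destination = (random.random(), random.random())
            # start_task kicks off the move + wait and calls back when it is done.
            start_task(destination, on_done=self.task_finished)

    def task_finished(self):
        self.busy = False  # ready to pick a new destination on a later frame

def immediate_task(destination, on_done):
    # Stand-in for "run the action sequence"; finishes instantly for this demo.
    print("moving toward", destination)
    on_done()

npc = Wanderer()
for frame in range(3):
    npc.update(immediate_task)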
I'm trying to get my head around these integration methods and I'm thoroughly confused.
Here is the code:
public void update_euler(float timeDelta) {
    vPos.y += vVelocity.y * timeDelta;      // position first, using the old velocity
    vVelocity.y += gravity.y * timeDelta;
}

public void update_nsv(float timeDelta) {
    vVelocity.y += gravity.y * timeDelta;   // velocity first...
    vPos.y += vVelocity.y * timeDelta;      // ...then position, using the new velocity
}

public void onDrawFrame(GL10 gl) {
    currentTime = System.currentTimeMillis();
    float timeDelta = currentTime - startTime;
    startTime = currentTime;
    timeDelta *= 1.0f / 1000;

    // update_RK4(timeDelta);
    // update_nsv(timeDelta);
    // update_euler(timeDelta);
    // update_velocity_verlet(timeDelta);
}
Firstly, I just want to make sure I've got these right.
I am simulating a perfectly elastic ball bouncing, so on the bounce I just reverse the velocity.
With the Euler method, the ball bounces higher on each bounce. Is this due to an error in my code, or is it due to the inaccuracy of the method? I've read that with Euler integration you lose energy over time. Well, I'm gaining it and I don't know why.
The nsv method: I don't quite understand how this is different from the Euler method, but in any case the ball bounces lower on each bounce. It is losing energy, which I've read isn't meant to happen with the nsv method. Why am I losing energy?
(The velocity verlet and RK4 methods are working as I'd expect them to).
I get the impression I'm lacking a fundamental bit of information on this subject, but I don't know what.
I do realise my timestep is lacking, and updating it to run the physics using a static timestep would stop me losing/gaining energy, but I am trying to understand what is going on.
Any help would be appreciated.
To add another option to @Beta's answer: if you average the two methods, your error should disappear (except for issues around handling the actual bounce).
public void update_avg(float timeDelta) {
    vVelocity.y += gravity.y * timeDelta / 2;
    vPos.y += vVelocity.y * timeDelta;
    vVelocity.y += gravity.y * timeDelta / 2;
}
What I'm doing here is updating the velocity to the average velocity over the interval, then updating the position based on that velocity, then updating the velocity to the velocity at the end of the interval.
If you have a more complicated scenario that you want to model, consider using the Runge-Kutta method to solve differential equations of the form y' = f(x, y). (Note that here y can be a set of different variables, so in your case you'd have d(position, velocity)/dt = (velocity, -gravity). The code I gave you works out to be the same as the second-order version of that method.)
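To see the three per-step updates from this thread side by side, here is a hedged Python sketch of the bouncing ball (the step size, gravity value, and perfectly elastic bounce handling are arbitrary choices for illustration). Running it should show the explicit-Euler peak heights creeping upward, the nsv-style peaks creeping downward, and the averaged update holding roughly steady:

GRAVITY = -9.81
DT = 0.01

def step_euler(y, v):
    # Position first with the old velocity, then velocity (explicit Euler).
    return y + v * DT, v + GRAVITY * DT

def step_nsv(y, v):
    # Velocity first, then position with the new velocity (semi-implicit Euler).
    v = v + GRAVITY * DT
    return y + v * DT, v

def step_avg(y, v):
    # Half a velocity update on each side: position uses the interval-average velocity.
    v = v + GRAVITY * DT / 2
    y = y + v * DT
    return y, v + GRAVITY * DT / 2

def peak_heights(step, bounces=5):
    # Launch upward at 10 m/s and record the peak height before each bounce.
    y, v, peak, peaks = 0.0, 10.0, 0.0, []
    while len(peaks) < bounces:
        y, v = step(y, v)
        peak = max(peak, y)
        if y < 0 and v < 0:       # hit the ground: perfectly elastic bounce
            v = -v
            peaks.append(round(peak, 3))
            peak = 0.0
    return peaks

for name, step in [("euler", step_euler), ("nsv", step_nsv), ("avg", step_avg)]:
    print(name, peak_heights(step))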
In real life, the ball moves upward and decelerates, reaches the apex (apogee) where its velocity is zero for a split-second, then moves downward and accelerates. Over any time interval it is exchanging kinetic energy (being fast) with potential energy (being high).
In the Euler method, it moves with constant velocity for the duration of the interval, then at the end of the interval it suddenly changes its velocity. So on the upward journey it goes up at high speed, then slows down, having gained more altitude than it should have. On the downward leg it creeps down slowly, losing little altitude, then speeds up.
In the nsv method, the opposite happens: on the way up it loses speed "too soon" and doesn't get very high, on the way down it hurries and reaches the ground without building up much speed.
The two methods are the same in the limit as timeDelta goes to zero. (If that statement made no sense, don't sweat it, it's just calculus.) If you make timeDelta small, the effect should fade. Or you could use energy as your primary variable, not {position, velocity}, but the math would be a little more complicated.
The integration introduces artificial damping into the system. I believe you can determine how much by doing a Fourier analysis of the integration scheme, but I'd have to refresh my memory on the details.
I am building an ASP.NET web site where the users may upload photos of themselves. There could be thousands of photos uploaded every day. One thing my boss has asked a few times is whether there is any way we could detect if any of the photos show too much 'skin' and automatically flag these as 'Adults Only' before the editors make the final decision.
Your best bet is to deal with the image in the HSV colour space (see here for RGB-to-HSV conversion). The colour of skin is pretty much the same across all races; it's mostly the saturation that changes. By dealing with the image in HSV you can simply search for the colour of skin.
You might do this by simply counting the number of pixels within a colour range, or you could perform region growing around pixels to calculate the size of the areas of that colour.
Edit: for dealing with grainy images, you might want to apply a median filter first and then reduce the number of colours to segment the image. You will have to play around with the settings on a large set of pre-classified (adult or not) images and see how the values behave to get a satisfactory level of detection.
EDIT: Here's some code that should do a simple count (I haven't tested it; it's a quick mashup of some code from here and RGB-to-HSL code from here):
// Note: this needs to run in an 'unsafe' context (project built with /unsafe) because of the raw pointer access.
Bitmap bmp = new Bitmap(_image);
BitmapData bData = bmp.LockBits(new Rectangle(0, 0, _image.Width, _image.Height), ImageLockMode.ReadWrite, bmp.PixelFormat);

byte bitsPerPixel = GetBitsPerPixel(bData.PixelFormat);
byte* scan0 = (byte*)bData.Scan0.ToPointer();

int count = 0;

for (int i = 0; i < bData.Height; ++i)
{
    for (int j = 0; j < bData.Width; ++j)
    {
        byte* data = scan0 + i * bData.Stride + j * bitsPerPixel / 8;

        // Pixel data is laid out BGR(A)
        byte r = data[2];
        byte g = data[1];
        byte b = data[0];

        // Use doubles so the hue division isn't truncated to an integer
        double max = Math.Max(r, Math.Max(g, b));
        double min = Math.Min(r, Math.Min(g, b));

        double h;
        if (max == min)
            h = 0;
        else if (r > g && r > b)
            h = (60 * ((g - b) / (max - min)) + 360) % 360;
        else if (g > r && g > b)
            h = 60 * ((b - r) / (max - min)) + 120;
        else
            h = 60 * ((r - g) / (max - min)) + 240;

        if (h > _lowerThresh && h < _upperThresh)
            count++;
    }
}

bmp.UnlockBits(bData);
Of course, this will fail for the first user who posts a close-up of someone's face (or hand, or foot, or whatnot). Ultimately, all these forms of automated censorship will fail until there's a real paradigm-shift in the way computers do object recognition.
I'm not saying that you shouldn't attempt it nonetheless, but I want to point out these problems. Do not expect a perfect (or even good) solution. It doesn't exist.
I doubt that there exists any off-the-shelf software that can determine if the user uploads a naughty picture. Your best bet is to let users flag images as 'Adults Only' with a button next to the picture. (Clarification: I mean users other than the one who uploaded the picture--similar to how posts can be marked offensive here on StackOverflow.)
Also, consider this review of an attempt to do the same thing in a dedicated product: http://www.dansdata.com/pornsweeper.htm.
Link stolen from today's StackOverflow podcast, of course :).
We can't even write filters that detect dirty words accurately in blog posts, and your boss is asking for a porno detector? CLBUTTIC!
I would say your answer lies in crowdsourcing the task. This almost always works and tends to scale very well.
It doesn't have to involve making some users into "admins" and coming up with different permissions - it can be as simple as enabling an "inappropriate" link near each image and keeping a count.
See the seminal paper "Finding Naked People" by Fleck/Forsyth published in ECCV. (Advanced).
http://www.cs.hmc.edu/~fleck/naked.html
Interesting question from a theoretical/algorithmic standpoint. One approach to the problem would be to flag images that contain large skin-coloured regions (as explained by Trull).
However, the amount of skin shown is not a determinant of an offensive image; it's rather the location of the skin shown. Perhaps you can use face detection (search for algorithms) to refine the results - determine how large the skin regions are relative to the face, and whether they belong to the face (perhaps how far below it they are).
I know either Flickr or Picasa has implemented this. I believe the routine was called FleshFinder.
A tip on the architecture of doing this:
Run this as a Windows service separate from the ASP.NET pipeline; instead of analyzing images in real time, create a queue of newly uploaded images for the service to work through.
You can use the normal System.Drawing stuff if you want, but if you really need to process a lot of images, it would be better to use native code and a high performance graphics library and P/invoke the routine from your service.
As resources are available, process images in the background and flag suspicious ones for editor review. This should prune down the number of images to review significantly, while not annoying people who upload pictures of skin-coloured houses.
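A hedged, language-agnostic sketch of that pipeline (Python here for brevity; a real deployment would be a Windows service with a persistent queue, and every name below is invented for illustration):

import queue, threading

upload_queue = queue.Queue()      # filled by the upload handler
review_queue = []                 # suspicious images for editors to review

def looks_suspicious(image_path):
    # Placeholder for the actual skin-tone analysis.
    return image_path.endswith("_flagged.jpg")

def scanner_worker():
    # Runs in the background, independent of the request pipeline.
    while True:
        image_path = upload_queue.get()
        if looks_suspicious(image_path):
            review_queue.append(image_path)   # only these reach the editors
        upload_queue.task_done()

threading.Thread(target=scanner_worker, daemon=True).start()

# The upload handler just enqueues and returns immediately.
upload_queue.put("photos/12345_flagged.jpg")
upload_queue.put("photos/12346.jpg")
upload_queue.join()
print(review_queue)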
I would approach the problem from a statistical standpoint. Get a bunch of pictures that you consider safe, and a bunch that you don't (that will make for a fun day of research), and see what they have in common. Analyze them all for color range and saturation to see if you can pick out characteristics that all of the naughty photos, and few of the safe ones have.
Perhaps the Porn Breath Test would be helpful - as reported on Slashdot.
Rigan Ap-apid presented a paper at WorldComp '08 on just this problem space. The paper is allegedly here, but the server was timing out for me. I attended the presentation of the paper and he covered comparable systems and their effectiveness as well as his own approach. You might contact him directly.
I'm afraid I can't help point you in the right direction, but I do remember reading about this being done before. It was in the context of people complaining about baby pictures being caught and flagged mistakenly. If nothing else, I can give you the hope that you don't have to invent the wheel all by yourself... Someone else has been down this road!
CrowdSifter by Dolores Labs might do the trick for you. I read their blog all the time as they seem to love statistics and crowdsourcing and like to talk about it. They use Amazon's Mechanical Turk for a lot of their processing and know how to process the results to get the right answers out of things. Check out their blog at the very least to see some cool statistical experiments.
As mentioned above by Bill (and in Craig's Google quote), statistical methods can be highly effective.
Two approaches you might want to look into are:
Neural Networks
Multi Variate Analysis (MVA)
The MVA approach would be to get a "representative sample" of acceptable pictures and of unacceptable pictures. The X data would be an array of bytes from each picture, the Y would be assigned by you as a 1 for unacceptable and a 0 for acceptable. Create a PLS model using this data. Run new data against the model and see how well it predicts the Y.
Rather than this binary approach you could have multiple Y's (e.g. 0=acceptable, 1=swimsuit/underwear, 2=pornographic)
To build the model you can look at open-source software, or there are a number of commercial packages available (although they are typically not cheap).
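As a rough sketch of the PLS idea (assuming scikit-learn's PLSRegression and random stand-in data; in practice the X rows would be features extracted from the images and the labels would come from your pre-classified sets):

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Stand-in "pictures": each row is a flattened array of pixel bytes.
X_train = rng.integers(0, 256, size=(200, 64 * 64)).astype(float)
y_train = rng.integers(0, 2, size=200).astype(float)   # 1 = unacceptable, 0 = acceptable

model = PLSRegression(n_components=10)
model.fit(X_train, y_train)

# Score new images and threshold the prediction.
X_new = rng.integers(0, 256, size=(5, 64 * 64)).astype(float)
scores = model.predict(X_new).ravel()
print(["unacceptable" if s > 0.5 else "acceptable" for s in scores])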
Because even the best statistical approaches are not perfect the idea of also including user feedback would probably be a good idea.
Good luck (and worst case you get to spend time collecting naughty pictures as an approved and paid activity!)