The Open Computer Vision Library, or just OpenCV, is a cross-platform computer vision library focused on real-time image processing of video files or webcam streams.
You have two options to set up an OpenCV development environment: you can add a repository to your package manager, or you can compile it yourself.
To compile it you have to install some additional libraries first, and the instructions vary with each distribution and version. For example, between Ubuntu Linux 9.04 and 9.10 the process differs slightly. I followed the instructions in the post “Installing OpenCV 2.0 on Ubuntu 9.10 Karmic Koala”.
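If you go the package manager route, on Ubuntu releases of that era the OpenCV development files were split roughly into cv, cvaux and highgui packages; something like the following should pull them in (the package names are an assumption and may vary by release):
sudo apt-get install libcv-dev libcvaux-dev libhighgui-dev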
Once you have installed and configured the OpenCV development environment, you can compile a “source.c” file into a “program” binary like this:
gcc source.c -o program `pkg-config opencv --libs --cflags`
This is a very simple example of how to open two images and display the result of adding them.
I got two pictures from Wikimedia Commons that were highlighted as Featured Pictures. I cropped both to the same size, as I’m trying to keep this example as simple as possible.
In the simple OpenCV code below, we open the images, create a new one to hold the result, and use cvAdd to add them. We do not save the result or handle anything beyond the ordinary case of two images of the same size.
#include <stdio.h>
#include <cv.h>
#include <highgui.h>
int main( int argc, char **argv ){
IplImage *surfer, *milkyway, *result;
int key = 0;
CvSize size;
/* load images, check, get size (both should have the same) */
surfer = cvLoadImage("surfer.jpg", CV_LOAD_IMAGE_COLOR);
milkyway = cvLoadImage("milkyway.jpg", CV_LOAD_IMAGE_COLOR);
if((!surfer)||(!milkyway)){
printf("Could not open one or more images.");
exit -1;
}
size = cvGetSize(surfer);
/* create an empty image with the same size, depth and channels as the others */
result = cvCreateImage(size, surfer->depth, surfer->nChannels);
cvZero(result);
/* result = surfer + milkyway (NULL mask)*/
cvAdd(surfer, milkyway, result, NULL);
/* create a window, display the result, wait for a key */
cvNamedWindow("example", CV_WINDOW_AUTOSIZE);
cvShowImage("example", result);
cvWaitKey(0);
/* free memory and get out */
cvDestroyWindow("example");
cvReleaseImage(&surfer);
cvReleaseImage(&milkyway);
cvReleaseImage(&result);
return 0;
}
/* gcc add.c -o add `pkg-config opencv --libs --cflags` */
Compile it (in a well configured OpenCV development environment) and run it:
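Following the comment at the end of the listing, and assuming the source file is saved as add.c, the commands would look like this:
gcc add.c -o add `pkg-config opencv --libs --cflags`
./add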
#include <stdio.h>
#include <cv.h>
#include <highgui.h>
int main(int argc, char *argv[]) {
int delay = 0, key=0, i=0;
char *window_name;
CvCapture *video = NULL;
IplImage *frame = NULL;
IplImage *grey = NULL;
IplImage *edges = NULL;
/* check for video file passed by command line */
if (argc>1) {
video = cvCaptureFromFile(argv[1]);
} else {
printf("Usage: %s VIDEO_FILE\n", argv[0]);
return 1;
}
/* check file was correctly opened */
if (!video) {
printf("Unable to open \"%s\"\n", argv[1]);
return 1;
}
/* create a video window with the same name as the video file, auto sized */
window_name = argv[1];
cvNamedWindow(window_name, CV_WINDOW_AUTOSIZE);
/* Get the first frame and create grey and edges images with the same size */
frame = cvQueryFrame(video);
grey = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
edges = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
/* calculate the delay between each frame and display video's FPS */
printf("%2.2f FPS\n", cvGetCaptureProperty(video, CV_CAP_PROP_FPS));
delay = (int) (1000/cvGetCaptureProperty(video, CV_CAP_PROP_FPS));
while (frame) {
/* Find edges in the input image (which needs to be grayscale) using the Canny algorithm.
It takes two thresholds and an aperture parameter for the Sobel operator. */
cvCvtColor(frame, grey, CV_BGR2GRAY);
cvCanny( grey, edges, 1.0, 1.0, 3);
/* show loaded frame */
cvShowImage(window_name, edges);
/* load the next frame; stop when the video ends */
frame = cvQueryFrame(video);
if(!frame) break;
/* wait delay and check for the quit key */
key = cvWaitKey(delay);
if(key=='q') break;
}
/* free memory and close the window */
cvReleaseImage(&grey);
cvReleaseImage(&edges);
cvDestroyWindow(window_name);
cvReleaseCapture(&video);
return 0;
}
To compile it in a well configured OpenCV development environment:
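Assuming the source file is saved as canny_player.c (the file and binary names here are just placeholders), the commands would look like this:
gcc canny_player.c -o canny_player `pkg-config opencv --libs --cflags`
./canny_player video.avi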
Here’s code developed using examples from nashruddin.com and samples from OpenCV, including the Haar classifier XML. A more detailed explanation of the theory behind how the OpenCV face detection algorithm works can be found here.
The code:
#include <stdio.h>
#include <cv.h>
#include <highgui.h>
CvHaarClassifierCascade *cascade;
CvMemStorage *storage;
int main(int argc, char *argv[]) {
CvCapture *video = NULL;
IplImage *frame = NULL;
int delay = 0, key, i=0;
char *window_name = "Video";
char *cascadefile = "haarcascade_frontalface_alt.xml";
/* check for video file passed by command line */
if (argc>1) {
video = cvCaptureFromFile(argv[1]);
}
else {
printf("Usage: %s VIDEO_FILE\n", argv[0]);
return 1;
}
/* check file was correctly opened */
if (!video) {
printf("Unable to open \"%s\"\n", argv[1]);
return 1;
}
/* load the classifier */
cascade = ( CvHaarClassifierCascade* )cvLoad( cascadefile, 0, 0, 0 );
if(!cascade){
printf("Error loading the classifier.");
return 1;
}
/* setup the memory buffer for the face detector */
storage = cvCreateMemStorage( 0 );
if(!storage){
printf("Error creating the memory storage.");
return 1;
}
/* create a video window, auto size */
cvNamedWindow(window_name, CV_WINDOW_AUTOSIZE);
/* get a first frame; necessary before using cvGetCaptureProperty */
frame = cvQueryFrame(video);
/* calculate the delay between each frame and display video's FPS */
printf("%2.2f FPS\n", cvGetCaptureProperty(video, CV_CAP_PROP_FPS));
delay = (int) (1000/cvGetCaptureProperty(video, CV_CAP_PROP_FPS));
while (frame) {
/* show loaded frame */
cvShowImage(window_name, frame);
/* wait delay and check for the quit key */
key = cvWaitKey(delay);
if(key=='q') break;
/* load the next frame; stop when the video ends */
frame = cvQueryFrame(video);
if(!frame) break;
/* detect faces */
CvSeq *faces = cvHaarDetectObjects(
frame, /* image to detect objects in */
cascade, /* haar classifier cascade */
storage, /* resultant sequence of the object candidate rectangles */
1.1, /* increase the search window by 10% between subsequent scans */
3, /* at least 3 neighbors make up an object */
0, /* flags, e.g. CV_HAAR_DO_CANNY_PRUNING */
cvSize( 40, 40 ) /* minimum detection window size */
);
/* for each face found, draw a red box */
for( i = 0 ; i < ( faces ? faces->total : 0 ) ; i++ ) {
CvRect *r = ( CvRect* )cvGetSeqElem( faces, i );
cvRectangle( frame,
cvPoint( r->x, r->y ),
cvPoint( r->x + r->width, r->y + r->height ),
CV_RGB( 255, 0, 0 ), 1, 8, 0 );
}
}
/* free memory and get out */
cvDestroyWindow(window_name);
cvReleaseCapture(&video);
cvReleaseMemStorage(&storage);
cvReleaseHaarClassifierCascade(&cascade);
return 0;
}
Yeah, I know the code needs a few adjustments. ¬¬
To compile it in a well configured OpenCV development environment:
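Assuming the source file is saved as faceplayer.c (the source name is an assumption; the binary name matches the run command below):
gcc faceplayer.c -o faceplayer `pkg-config opencv --libs --cflags`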
To run it, you have to put the XML classifier (haarcascade_frontalface_alt.xml), which comes with the OpenCV sources under OpenCV-2.0.0/data/haarcascades/, in the same directory as the binary. And then:
./faceplayer video.avi
The results I got so far are that it works well for faces, but sometimes it also detects things other than faces. And here’s a video of it working live.
An example of a good result:
An example of a bad result:
Maybe with some adjustments it could perform even better. But it was really easy to create using OpenCV.