kickstarter.com/projects/visionai/vmx-project-computer-vision-for-everyon...

We have two weeks left in the VMX Project Kickstarter campaign!  We have been busy putting together more demos and answering questions about VMX. One common question is about the local VMX install and which backers will be able to get the local license.  We are making the local install available to backers at the $100 level and above (in exchange for trading in 100 of their Compute Hours), so if you pledged $25 and would find the local install handy, consider upgrading your pledge to $100.  Many of you can't wait to get your hands on VMX and talk to us, so remember that backers at $500 and above are entitled to a conversation with us, the creators of VMX, about the technology, your project ideas, and anything else computer vision related.  We're dedicated to giving higher pledges truly awesome rewards and enabling two-way communication with us, so this is a perfect opportunity to influence the direction of VMX development. Remember, Kickstarter is all-or-nothing.

VMX News

This past week Tom talked over Skype with the Paris Machine Learning Meetup (organized by Igor Carron of the popular Nuit Blanche compressed sensing blog) about the VMX Project. Tom gave a highlight of the underlying technology, answered questions from the machine-learning-geared audience in Paris, and came to an interesting conclusion. To make a long story short, it seems the machine learning crowd was most excited about using VMX to detect cats.  I guess there are a lot of Reddit enthusiasts out there. Let's see what VMX has to say about cats.

VMX Cat Face Detector

As a simple experiment, Tom trained a "Cat Face" detector using VMX with YouTube videos of cats as well as Google Image search results for "cat."  He then ran the detector on a popular object recognition benchmark, the PASCAL VOC dataset, to see how well VMX could detect these lovely felines.  VMX was run on 10,000 images, many of which don't actually contain any animals.
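To give a feel for what an experiment like this involves on the post-processing side, here is a small sketch. The detection format below (a bounding box plus a confidence score per detection) is an assumption for illustration only, not VMX's actual API: given per-image detections, it collects the most confident hits across the whole image set, which is also how you'd pick the crops for a "top cat detections" animation.

```python
# Hypothetical detection format: each detection is a dict with a
# bounding box [x, y, width, height] and a confidence score in [0, 1].
# VMX's real API may differ; this only illustrates the post-processing.

def top_detections(images, threshold=0.5, limit=10):
    """Collect every detection scoring at least `threshold` across all
    images, then return the `limit` highest-scoring ones with their
    source image and crop box."""
    hits = []
    for image_name, detections in images.items():
        for det in detections:
            if det["score"] >= threshold:
                hits.append((det["score"], image_name, det["bbox"]))
    hits.sort(reverse=True)  # highest confidence first
    return [{"image": name, "bbox": bbox, "score": score}
            for score, name, bbox in hits[:limit]]

# Example: two images, only one with a confident cat detection.
results = top_detections({
    "img_001.jpg": [{"bbox": [40, 30, 120, 120], "score": 0.92}],
    "img_002.jpg": [{"bbox": [10, 10, 50, 50], "score": 0.31}],
})
print(results[0]["image"])  # the most confident cat image
```

Cropping each returned bounding box out of its image and stitching the crops together is all it takes to turn raw detections into a shareable GIF.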
But the results are great: VMX had no problem finding the images with cats! Maybe a redditor wants to write a VMX app that will only show the top Reddit images that actually contain cats?  Maybe you want to find cats inside your friends' photos? The possibilities are endless. Here is an animation of the top cat detections, cropped around the detection window. For those of you interested in making all sorts of animations to share with your friends (and/or on Reddit), VMX has a lot of potential. VMX + You = one mean cat GIF making machine.

Pre-trained VMX Object Detectors

VMX will come with a set of pre-trained models.  While the fun part of using VMX is training your own specialized detectors, we understand that there are scenarios where you want access to a library of pre-existing object detectors.  You'll be able to modify those detectors and customize them for your own use (such as taking a generic face detector and making it better by giving it examples of your own face).  If there is an object you utterly love and really want a pre-trained version of in VMX, just send us a message.  We already have cats under control.

Building a VMX App Screencast

On another front, Geoff recorded a longer 27-minute screencast of himself making a "Flutter Clone" (alternatively, the VMX HandPlay App) using our VMX webapp and API.  For those of you who don't know, Flutter is an app that lets you pause and play your music and shows using hand gestures.  (Flutter was recently acquired by Google.) Since VMX makes it really easy to train visual concepts, Geoff decided to show just how quickly one can build a Flutter-like gesture-recognition and control app using VMX.

http://www.youtube.com/watch?v=sNOahnpXmR4

Here is another screencast, this time of Tom showing off some real-time face part, hand, and gesture recognition.
You can see multiple objects being detected in real time using a single computer.

http://www.youtube.com/watch?v=Hfg5I6djHbE

Finally, here is a quick demo of running VMX on top of a depth camera's output.  Some people were curious whether VMX could utilize depth data, and while this example doesn't use the raw depth data, it's a great example of the "if you can render it on your screen, VMX can use it" philosophy.  Here you can see "left bicep," "right bicep," and "head" detectors running on top of the depth camera's output.  NOTE: the gray/blue output is created by plugging in a PrimeSense depth camera.

http://www.youtube.com/watch?v=chXiOz6RZGw

Remember that the more people learn about VMX, the higher the likelihood of reaching our funding milestone.  This is where each of you can help us out!  Spread the word to your friends -- each post and tweet about our project helps more people find out about our awesome technology and brings us a step closer to success. We'll keep you guys updated, so keep the VMX demo requests and questions coming!

Geoff and Tom

