Tuesday, November 26, 2013

How I learned to mind my own business


I was walking past the mental hospital the other day,
and all the patients were shouting, '13....13...13'.
The fence was too high to see over,
but I saw a little gap in the planks,
so I looked through to see what was going on...
some idiot poked me in the eye, and then everyone started shouting, '14...14...14'.

Bionym launches SDK for identity-authenticating wristband Nymi


Toronto-based startup Bionym has launched an SDK for its forthcoming identity-authenticating wristband, Nymi. It said today that more than 6,000 developers have registered their interest in building software that hooks into the heartwave-sensing bangle. The wristband costs $79, and so far around 7,000 have been pre-sold. 
 
The Nymi wristband authenticates the wearer’s identity by matching the overall shape of their heartwave (captured via an electrocardiogram sensor). It sustains authentication as long as the wristband remains in position, reducing the need for repeated authentications during the day. 
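Bionym's actual matching algorithm is proprietary, so as a rough illustration of "matching the overall shape" of a waveform, here is a toy sketch that compares two heartbeat traces with a mean-centered cosine similarity. All names and the threshold are my own inventions, not the Nymi SDK's.

```python
import math

def normalized_similarity(template, sample):
    """Cosine similarity between two equal-length heartbeat waveforms,
    mean-centered first so only the *shape* matters, not the baseline.
    (A stand-in comparison; Bionym's real algorithm is not public.)"""
    mt = sum(template) / len(template)
    ms = sum(sample) / len(sample)
    a = [x - mt for x in template]
    b = [x - ms for x in sample]
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def authenticate(enrolled, live, threshold=0.95):
    """Accept the wearer if the live beat's shape is close enough
    to the template captured at enrollment."""
    return normalized_similarity(enrolled, live) >= threshold

# Toy traces: the same shape at a different amplitude still matches.
enrolled = [0.0, 0.1, 1.0, -0.4, 0.05, 0.0]
same_person = [x * 1.3 for x in enrolled]
print(authenticate(enrolled, same_person))  # True
```

Because the vectors are mean-centered and normalized, a louder or quieter recording of the same heartbeat still scores 1.0, which is the intuition behind shape-based matching.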
 
 

What kind of apps can be developed for Nymi?

The obvious ones simply unlock things, from doors to personal devices to payment transactions. But some people are thinking outside the box about different environments and different appliances: how could those things or machines behave differently if they knew who you are?
 
There are applications that let a coffee machine know your settings so it brews your coffee exactly the way you want it.
 
Another use could be in a restaurant or a bar, with the Nymi passing on your dietary restrictions, favorite drink, or a specific order.
 
A lot of people are interested in making Arduino or other kinds of hardware modules that can interact with the Nymi. You can monetize it or customize it to your preferences. So imagine an Arduino module that wants to be one of those people who overshares and tweets whenever you're in the room, announcing that you came home and you're here (check-in addicts and Foursquare users would benefit).
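The appliance ideas above boil down to one pattern: an authentication event arrives, and the device looks up that person's preferences. Here is a minimal sketch of that pattern; the Nymi SDK's real API is not shown in this post, so the callback name, profile table, and coffee-machine class are all hypothetical.

```python
class CoffeeMachine:
    """Hypothetical connected appliance that reacts to who is present."""
    def __init__(self):
        self.last_brew = None

    def brew(self, style):
        self.last_brew = style
        return f"Brewing a {style}"

# Stand-in for per-user settings an identity-aware appliance might hold.
PREFERENCES = {"alice": "double espresso", "bob": "flat white"}

def on_authenticated(user_id, machine):
    """Imagined handler fired when the wristband confirms an identity:
    look up that person's preference and act on it."""
    style = PREFERENCES.get(user_id, "regular coffee")
    return machine.brew(style)

machine = CoffeeMachine()
print(on_authenticated("alice", machine))  # Brewing a double espresso
```

The same handler shape would serve the door lock, the bar tab, or the oversharing Arduino; only the action inside it changes.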

Check out this video for more details.
 
 
Bionym Nymi, Gesture recognition

Sunday, November 24, 2013

Hand gestures for your Windows Phone

Sources familiar with Nokia’s plans have revealed that Goldfinger will ship with some technology that Nokia has been developing for years now. Described internally as "3D Touch," it essentially uses hardware sensors to detect interactions that happen off the glass. Nokia started experimenting publicly with similar features in an update to its Glance screen software, which let Lumia owners wave at their handsets to see the time. The Lumia 1520 will be the first Windows Phone 8.1 device to take this a step further.
Users will be able to interact with Goldfinger by making hand gestures above and to the side of the device. Gestures currently under consideration include scrolling and flipping to let Windows Phone 8.1 users navigate around a phone without touching the screen. Samsung has similar features on its Galaxy S4 handset, known as Air Gestures, to change music tracks, accept phone calls, and scroll through webpages.
Microsoft is currently readying its Windows Phone 8.1 update, and we understand the company is tentatively planning to detail parts of these new features at Mobile World Congress 2014. Windows Phone 8.1 is expected to include a notification center, improvements to multitasking, and a personal digital assistant codenamed Cortana.
This post is inspired by The Verge's article. Source 
Hand gestures for windows phone, hand gesture recognition, touchless gesture recognition

Thursday, November 21, 2013

Google launches Glass Dev Kit preview and Augmented reality app

Yesterday Google launched the Glass Development Kit (GDK) "Sneak Preview," which will finally allow developers to make real, native apps for Google Glass. While there have previously been extremely limited Glass apps that used the Mirror API, developers now have full access to the hardware.

By releasing the GDK, Google has turned Glass from a neat toy into a platform, one that is ripe for development and open to the public. The bottleneck currently holding Glass back is its limited physical distribution, with likely fewer than ten thousand headsets out in the wild.
Still, Google’s pitch to Android developers is simple: take your Android applications, reuse most of the code, get them onto Glass, and then optimize their interfaces. In short, by tying Glass to Android, the company is all but ensuring that it will be flush with applications by the time an average consumer gets near one.
Google Glass runs a heavily skinned version of Android 4.0.4, so Glass development is very similar to Android development. The GDK is downloaded through the Android SDK Manager, and Glass is just another target device in the Eclipse plugin. Developers have access to the Glass voice recognition within their app as an intent.  Apps can be totally offline and can do all their processing on Glass. They can also support background events and have full access to the camera and other hardware.
Google showed off a few of the first native Glass apps, and one of the coolest among them was Wordlens, a real-time, augmented-reality translation app. Wordlens works much like it does on the iPhone: foreign-language text targeted by the camera is translated on top of the video feed in real time. This is neat on a smartphone, but on a device like Glass it becomes much more powerful. Just by looking at text and saying "OK, Glass, translate this," the text on the Glass video feed is translated and placed above the original text. Wordlens uses the accelerometers to keep the virtual text aligned, all while working completely offline.
Some of the other applications are,
  • Allthecooks: A cooking app to help you read recipes without using your hands.
  • Spellista: A word jumble game.
  • GolfSight: A tool to calculate distances and the like as you wear silly pants and wave sticks at small round objects.
  • Strava: A run tracking app. 
I haven't used these glasses myself yet, and I was worried about the terrible frame rate Glass seemed to display, but I hear it is only the screen-mirroring functionality that is extremely slow; on real hardware, Glass runs the stock apps pretty smoothly. It remains to be seen how much power native apps can actually get out of the system, as Glass only has a 1GHz dual-core OMAP 4430 and 1GB of RAM. Native apps pushing the hardware to the limit or running in the background will strain the system even more. And battery life is a problem, too: even with the extremely conservative stock apps, Glass lasts only a few hours.

Tuesday, November 12, 2013

Touchless gesture technology using ultrasonic sound

A few days back I was talking with a good friend of mine about touchless gesture technology and how it works. I explained as much as I could, and we spoke a great deal about the factors involved: image processing, hardware, software, and so on. So I thought I should share it with you as well.
In my previous posts I wrote about hardware like Kinect, Leap Motion, and motion-detecting sensors, but mainly about gesture recognition based on image processing. In this post I am going to write about touchless gesture interaction with computers, tablets, or smartphones using ultrasonic sound.

Ultrasonic touchless gesture technology makes touchless interaction with your computer feasible and robust. The advantages ultrasonic gesture technology has distinguish it from other touchless technologies on the market, and these qualities open up endless possibilities for equipping devices with the latest and most intuitive interaction innovations. Ultrasonic sensors create a wide field of interaction space that surrounds the device and can be configured to detect a wide range of gestures and gesture combinations.


How it works
Ultrasound signals sent through the air from speakers integrated in smartphones and tablets bounce off your hand and are recorded by microphones also integrated in these devices. In this way, Elliptic Labs’ technology recognizes your hand gestures and uses them to move objects on a screen, very similar to how bats use echolocation to navigate.
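The echolocation idea reduces to one formula: the signal travels out to the hand and back, so the distance is the speed of sound times half the round-trip time. A quick sketch (the function name and example delay are mine, not Elliptic Labs'):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def echo_distance(round_trip_s):
    """Distance to the reflecting hand, given the time between emitting
    the ultrasound ping and hearing its echo. Halve the round trip,
    because the sound covers the gap twice (out and back)."""
    return SPEED_OF_SOUND * round_trip_s / 2.0

# An echo arriving 2 ms after emission puts the hand about 34 cm away.
print(round(echo_distance(0.002), 3))  # 0.343
```

With several microphones recording the same echo, the differing delays at each one also give the hand's direction, which is how gestures and not just distances can be recovered.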

Advantages with ultrasonic gesture technology:

Power saving
Because it uses ultrasonic sound sensors, it needs less power for computing compared to optical and 3D sensors like Kinect and Leap.
Wide field of view
Sensors can see both forward and to the sides, enabling an interaction zone extending over the screen and around its edges. Ultrasonic sensors enable gesturing both from a distance and very close to the screen at the same time. This, for me, is its best point. Generally with Kinect and Leap Motion, we have to be either in front of the sensor (Kinect) or above it (Leap), and when the sensor is placed away from the screen it becomes problematic. Ultrasonic sound sensors give a better degree of freedom, if not the best. Of course, as of now I am not entirely sure about its exact field of view (FOV) or region of interest (ROI).

Distributed sensing
Enables motion capture from multiple angles, avoiding occlusion of objects or parts of an object, and facilitates robust positioning by providing sufficient sensor baselines. The sensors used are MEMS microphones, which can also be used for speech enhancement and recognition.

Leading edge detection & range gating
Ability to separate the first returning echoes from other, later-arriving echoes. A filter can effectively be created that thresholds reflections in the z-direction, useful for separating finger motion from wrist or hand motion, or from movements and reflections of the user's body.
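Range gating itself is simple to picture: since nearer reflectors echo back sooner, discarding everything that arrives after a cutoff delay keeps only the closest object, typically the fingertip. A toy sketch under that assumption (the echo tuples and cutoff are illustrative, not Elliptic Labs' implementation):

```python
def range_gate(echoes, max_delay_s):
    """Keep only echoes arriving within max_delay_s of the ping.
    Each echo is (delay_seconds, amplitude). Later arrivals, such as
    the wrist or the user's body, are discarded, leaving the nearest
    reflector: the finger doing the gesturing."""
    return [e for e in echoes if e[0] <= max_delay_s]

echoes = [(0.0008, 0.9),   # finger, about 14 cm away
          (0.0025, 0.5),   # hand/wrist, farther out
          (0.0060, 0.3)]   # user's body
print(range_gate(echoes, 0.001))  # [(0.0008, 0.9)]
```

In effect the delay cutoff is a distance threshold in the z-direction, which is exactly the filter the paragraph above describes.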

Accurate time-of-flight measurements
Enables accurate tracking of the leading edges of hands and fingers, as well as accurate location in space of devices that have ultrasonic capabilities.

It is not my intention to judge, or to say that this is in any way better than Kinect or Leap Motion, or vice versa. Hey, don't kill the messenger!! I am just happy to learn there are different ways we can do gesture recognition. For more information about this technology and this product, you can visit the source.

Thank you for taking the time to read through this, and I hope you have learned something. If you have any questions or suggestions related to this article, you can send them to me. Add your comment below to start or join a conversation. If you enjoyed this article and want more, kindly subscribe to the feeds.

Sunday, November 10, 2013

Roboroach: the first commercial cyborg

There is a big row going on over a US company's new smartphone app, the "RoboRoach backpack".
The US company Backyard Brains has developed an "electronic backpack" called RoboRoach that fits onto a cockroach, allowing its movements to be controlled by a smartphone app. Backyard Brains says the device is intended to get children interested in neuroscience.


For the "electronic backpack" to work the cockroaches have to be placed in icy water to subdue them

before sandpaper is used to remove the waxy coating on the shell of the insect's head. An electrode connector and electrodes are then glued on to the insect's body and a needle is used to poke a hole in their thorax in order to insert a wire.Their antennae are then cut and electrodes are inserted. A circuit is attached to their backs, and signals are received through a mobile phone app allowing users to control the cockroaches' movements to the left and to the right. 
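From the app's point of view, the control loop is just mapping a swipe to a short pulse train on one antenna electrode. This is a toy sketch only: the real RoboRoach firmware and its protocol are not documented here, so the function name, pulse parameters, and even the left/right convention are my assumptions.

```python
def steer_command(direction, pulse_hz=55, duration_ms=300):
    """Build a hypothetical stimulation command for the backpack.
    Assumption: pulsing the RIGHT antenna mimics an obstacle on the
    right, so the roach turns LEFT, and vice versa."""
    if direction not in ("left", "right"):
        raise ValueError("direction must be 'left' or 'right'")
    antenna = "right" if direction == "left" else "left"
    return {"antenna": antenna,
            "frequency_hz": pulse_hz,
            "duration_ms": duration_ms}

print(steer_command("left"))
# {'antenna': 'right', 'frequency_hz': 55, 'duration_ms': 300}
```

A phone app would send one such command per swipe over Bluetooth; the interesting neuroscience is that the insect's own wiring turns a simple antenna pulse into a full turning maneuver.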

There is a lot of criticism going on about this. Everybody's favorite protest group, the People for the Ethical Treatment of Animals, is at it again, this time in defense of cockroaches. Specifically, PETA wants to stop the sale of Backyard Brains' RoboRoach, a science-education kit that enables kids to implant electrodes into a cockroach's brain and then control the insect with a smartphone app.
Some say the "electronic backpack" is cruel and subjects the insects to unnecessary stress. Some say modifying a living creature to make a toy is wrong. They even say this enables and encourages kids to harm animals.

I personally feel it's not time to criticize; it's time to celebrate and appreciate the innovation. I mean, come on, these are the roaches people readily kill in their apartments, and it's not as if these animals are in harm's way: Backyard Brains contends that the experiments have no lasting effect on the bugs and that dipping them in ice water anesthetizes them against pain. Personally, I think RoboRoach is a really interesting way to get people interested in science, which we need to do more of at all costs.

I don't care if you think I'm insane and cruel for supporting this idea, but imagine how cool it would be to play soccer with a remote-controlled cockroach team, or I could even create a cockroach army. I guess the time of playing video games is over; I will train my cockroach at soccer until he defeats the likes of Messi and Ronaldo :P


Roboroach, Gesture recognition, iPhone app, cyborg cockroach
