Showing posts with label Interaction.

Tuesday, July 26, 2011

mesh modeller that uses the Kinect's depth perception and homemade data gloves

 

A simple mesh modeller that uses the Kinect's depth perception and homemade data gloves for more real-world-oriented user interaction in virtual 3D space, realized only with open-source software.
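For readers who want to poke at the same idea, here is a minimal sketch of the depth-perception half only (this is not the author's code): grab one Kinect depth frame through the open-source libfreenect Python bindings and turn it into a crude grid mesh written out as an OBJ file. The intrinsics, the raw-depth conversion formula and the output file name are assumptions, and the data-glove input is left out entirely.

```python
# A minimal sketch, not the author's code: one Kinect depth frame -> crude grid mesh.
# Intrinsics and the raw-depth formula are common OpenKinect approximations;
# invalid readings (raw value 2047) are not filtered in this sketch.
import numpy as np
import freenect  # Python bindings from the open-source OpenKinect project

FX, FY = 594.21, 591.04   # approximate depth-camera focal lengths (pixels)
CX, CY = 339.5, 242.7     # approximate principal point
STEP = 8                  # sample every 8th pixel to keep the mesh small

def raw_to_meters(raw):
    """Commonly used OpenKinect approximation for raw 11-bit depth values."""
    return 1.0 / (raw * -0.0030711016 + 3.3309495161)

def depth_frame_to_obj(path="kinect_mesh.obj"):
    depth, _ = freenect.sync_get_depth()   # 480x640 array of raw depth values
    depth = depth[::STEP, ::STEP]
    rows, cols = depth.shape

    with open(path, "w") as f:
        # vertices: back-project each depth sample with a pinhole camera model
        for v in range(rows):
            for u in range(cols):
                z = raw_to_meters(depth[v, u])
                x = (u * STEP - CX) * z / FX
                y = (v * STEP - CY) * z / FY
                f.write(f"v {x:.4f} {y:.4f} {z:.4f}\n")
        # faces: connect neighbouring samples into quads (OBJ indices are 1-based)
        for v in range(rows - 1):
            for u in range(cols - 1):
                i = v * cols + u + 1
                f.write(f"f {i} {i + 1} {i + cols + 1} {i + cols}\n")

if __name__ == "__main__":
    depth_frame_to_obj()
```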

Monday, July 25, 2011

SixthSense


 

SixthSense: How It Works: A webcam captures video, including specific hand signals that the laptop reads as commands. A mini-projector then displays the relevant content — e-mail, stock charts, photos — on the nearest surface. [Bland Designs]
Imagine a wearable device that lets you physically interact with interfaces that appear in front of you on any surface, where and when you want them. You can watch a video on your newspaper's front page, navigate through a map on your dining table, and flick through photos on any wall. The "Sixth Sense" system from Pattie Maes' Fluid Interfaces Group at the MIT Media Lab does all this through a prototype built from $300 worth of off-the-shelf components. You can even take a photograph by simply holding your hand in the air and making a framing gesture.

 


SixthSense - Wikipedia, the free encyclopedia

SixthSense is a wearable gestural interface device developed by Pranav Mistry, a PhD student in the Fluid Interfaces Group at the MIT Media Lab. It is similar to Telepointer, a neck-worn projector/camera system developed by Media Lab student Steve Mann[1] (which Mann originally referred to as "Synthetic Synesthesia of the Sixth Sense").[2]

The SixthSense prototype comprises a pocket projector, a mirror and a camera contained in a pendant-like wearable device. Both the projector and the camera are connected to a mobile computing device in the user’s pocket. The projector projects visual information, enabling surfaces, walls and physical objects around us to be used as interfaces, while the camera recognizes and tracks the user's hand gestures and physical objects using computer-vision techniques.[3] The software program processes the video stream data captured by the camera and tracks the locations of the colored markers (visual tracking fiducials) at the tips of the user’s fingers. The movements and arrangements of these fiducials are interpreted into gestures that act as interaction instructions for the projected application interfaces. SixthSense supports multi-touch and multi-user interaction.
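As a rough illustration of that fiducial-tracking step (this is not the SixthSense source, just a sketch assuming OpenCV and a red marker cap on one fingertip), the loop below finds the coloured blob in each webcam frame and reports its centre, which a gesture layer could then interpret:

```python
# Minimal sketch of coloured-fiducial tracking with OpenCV (not Mistry's code).
import cv2
import numpy as np

# HSV range for a red marker cap; an assumption, tune per marker colour.
LOWER = np.array([0, 120, 120])
UPPER = np.array([10, 255, 255])

def track_marker(frame):
    """Return the (x, y) centre of the largest red blob in the frame, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    # OpenCV 4.x: findContours returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    pos = track_marker(frame)
    if pos:
        cv2.circle(frame, pos, 8, (0, 255, 0), 2)   # show the tracked fingertip
    cv2.imshow("fiducial", frame)
    if cv2.waitKey(1) & 0xFF == 27:   # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```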

Sunday, July 10, 2011

tactile pixels to create an electrical field you can feel.

Tactile Pixels Will Make It Easy To Read Braille On Touchscreens

It’s a familiar complaint. Even as companies like Apple try to tie the computer interface back to the natural world, touchscreens are still woefully flat. But recent developments, including Senseg’s new E-sense technology, could bring real-world touch into everyday computing. 

Obviously, this is nothing new. Research into programmable friction has already yielded impressive results, especially in the realm of stickiness. But this concept accomplishes the same goal by using “tixels”, or tactile pixels, to create an electrical field you can feel. Your skin responds by feeling whatever the interface wants it to feel: buttons, perhaps, or even the fur of a virtual pet.

The project sounds really promising, and Senseg already has Toshiba backing them. Imagine video chat aided by tactile pixels. Being able to gently touch the face of a newborn baby. Or the hand of a distant lover. The possibilities are as endless as ever. [The Next Web via Geekosystem]

Monday, June 27, 2011

Using The Kinect To Make Any Surface Multi-Touch - 3-point calibration

 




Today in the Department of Kinect Hacks, we’ve got an official-looking hack showing off how you can use the Kinect (and its open-source drivers, of course) to turn any flat surface into a multi-touch trackpad or projected Surface.

It’s pretty straightforward, really. The Kinect looks at the scene in 3D, you establish a plane and boundaries for the interaction area, and boom, multi-touch.
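A minimal sketch of that 3-point calibration idea (not Patten Studio's code): three points picked on the surface define a plane, and any Kinect depth point lying within about a centimetre of it is treated as a touch. All numbers below are made up for the example.

```python
# Sketch: fit a plane from three calibration points, flag near-plane points as touches.
import numpy as np

def plane_from_points(p1, p2, p3):
    """Return (unit normal, point on plane) from three 3D calibration points."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)
    return normal / np.linalg.norm(normal), p1

def touch_mask(points, normal, origin, threshold_m=0.01):
    """Boolean mask of 3D points lying within `threshold_m` of the plane."""
    distances = np.abs((points - origin) @ normal)
    return distances < threshold_m

# Example with made-up numbers: a roughly flat surface about 1 m away and two points.
normal, origin = plane_from_points([0, 0, 1.0], [0.5, 0, 1.0], [0, 0.4, 1.01])
cloud = np.array([[0.2, 0.1, 1.005],   # fingertip touching the surface
                  [0.2, 0.1, 0.90]])   # hand hovering above it
print(touch_mask(cloud, normal, origin))   # -> [ True False]
```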

This little demo was put together by a seasoned interactive surface team, Patten Studio, from whom I hope we can expect to see an open demo app of this thing.

[via Reddit, where they're getting good at catching these little experiments]

Friday, June 17, 2011

Kinect Hacks: Flash, HTML, Unity and Silverlight integration

 

Blitz, an interactive marketing agency, has released its source code and scripts for a Kinect mod that outputs data compatible with Flash, HTML, Unity and Microsoft Silverlight. The company, which helped launch Halo: Waypoint with Microsoft and 343 Industries, explains that the device's standard C++ programming language was too limiting for budding Kinect hackers.
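The usual bridge pattern behind this kind of integration (sketched below under assumptions, not Blitz's actual code) is a small server that streams tracking data as newline-delimited JSON over a socket, which Flash, HTML, Unity or Silverlight clients can all parse. The port number and the fake tracking data here are placeholders.

```python
# Sketch of a Kinect-to-anything bridge: stream tracking data as JSON lines over TCP.
import json
import socket
import time

HOST, PORT = "127.0.0.1", 9000   # assumed address; use whatever the client expects

def fake_hand_position(t):
    """Stand-in for real Kinect tracking output."""
    return {"hand": {"x": 0.5, "y": 0.5, "z": 1.2 + 0.1 * (t % 2)}}

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((HOST, PORT))
server.listen(1)
print(f"waiting for a client on {HOST}:{PORT} ...")
client, _ = server.accept()

t = 0
try:
    while True:
        message = json.dumps(fake_hand_position(t)) + "\n"   # newline-delimited JSON
        client.sendall(message.encode("utf-8"))
        time.sleep(1 / 30)                                   # ~30 updates per second
        t += 1
finally:
    client.close()
    server.close()
```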

 

http://www.joystiq.com/2011/01/13/kinect-hacks-flash-html-unity-and-silverlight-integration/

Has Anybody Downloaded The Kinect PC SDK Yet? | Gizmodo Australia

 

Has Anybody Downloaded The Kinect PC SDK Yet?

One of the most exciting things about Microsoft’s Kinect gaming peripheral (aside from Dance Central) is the awesome ways that hackers took it and used it to create really interesting and new user interfaces for engaging with technology. And now that Microsoft has officially launched an SDK that lets users create their own apps using the Kinect camera, we’re wondering if any of you have downloaded it yet.

It’s an amazing technology that opens up the doors for lots of future applications, many of which are likely to be created by people playing around with this SDK.

If you have downloaded it, tell us what you think in the comments below. What are you planning to do with it – will you be trying to create something amazing, or just stuffing around for laughs?

Friday, June 10, 2011

Sweatshoppe – Video Painting « Urban Projection

 

Sweatshoppe – Video Painting

Got a mail about a fantastic project going on in New York. I'll just post the mail here:

Multimedia performers Sweatshoppe have been wheat pasting buildings with moving images all over New York. Mapping video projections to LED-lit paint rollers, Sweatshoppe lay their projections on a surface, paint stroke by paint stroke. They call this new digital performance style “Video Painting” and have demonstrated the end result here:

SWEATSHOPPE, 4spots, the landing extras from SWEATSHOPPE on Vimeo.

How it works: The software controlling the video was written in Max. The paint roller does not use any sort of paint; it simply contains green LEDs. The software tracks the color green and outputs the x/y position, which is sent to drawing commands, and the strokes are textured with video.

Sweatshoppe is video artists Bruno Levy and Blake Shaw. They plan on eventually releasing the software, but only after it is much more refined, buffed up with features and user-friendly.
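A rough sketch of the pipeline described in the mail, written in Python/OpenCV rather than their Max patch (the HSV range, brush size and video file name are assumptions): track the green LED, accumulate its positions into a stroke mask, and show the video only where the mask has been "painted".

```python
# Sketch of the video-painting idea: green-LED tracking + a stroke mask that
# reveals the projected video. Not Sweatshoppe's software.
import cv2
import numpy as np

LOWER_GREEN = np.array([45, 100, 100])   # HSV range for the LED; tune in practice
UPPER_GREEN = np.array([75, 255, 255])
BRUSH_RADIUS = 25

camera = cv2.VideoCapture(0)             # sees the wall and the LED roller
video = cv2.VideoCapture("clip.mp4")     # assumed content to be "painted" on

ok, frame = camera.read()
mask = np.zeros(frame.shape[:2], dtype=np.uint8)

while True:
    ok_cam, cam_frame = camera.read()
    ok_vid, vid_frame = video.read()
    if not (ok_cam and ok_vid):
        break
    vid_frame = cv2.resize(vid_frame, (cam_frame.shape[1], cam_frame.shape[0]))

    # find the largest green blob and stamp a brush stroke at its centre
    hsv = cv2.cvtColor(cam_frame, cv2.COLOR_BGR2HSV)
    green = cv2.inRange(hsv, LOWER_GREEN, UPPER_GREEN)
    contours, _ = cv2.findContours(green, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        m = cv2.moments(max(contours, key=cv2.contourArea))
        if m["m00"] > 0:
            cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            cv2.circle(mask, (cx, cy), BRUSH_RADIUS, 255, -1)

    # texture the painted strokes with the video, leave the rest black
    output = cv2.bitwise_and(vid_frame, vid_frame, mask=mask)
    cv2.imshow("video painting", output)
    if cv2.waitKey(1) & 0xFF == 27:   # Esc to quit
        break
```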

SWEATSHOPPE Video Painting

 

SWEATSHOPPE Video Painting @ DIS-PATCH Festival Belgrade from SWEATSHOPPE on Vimeo.

calibrate CCV

Community Core Vision (CCV) – Calibration | Seth Sandler

 

Community Core Vision (CCV) - Calibration

 

In order to calibrate CCV for your camera and projector/LCD, you'll need to run the calibration process. Calibrating allows touch points from the camera to line up with elements on screen. This way, when touching something displayed on screen, the touch is registered in the correct place. In order to do this, CCV has to translate camera space into screen space; this is done by touching individual calibration points.
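Conceptually, the calibration boils down to estimating a mapping from camera space to screen space from the touched calibration points. A minimal sketch of that idea, not CCV's implementation, with made-up coordinates and assuming OpenCV:

```python
# Sketch: estimate a camera-to-screen homography from touched calibration points.
import cv2
import numpy as np

# Corresponding points: where the blob appeared in camera space and where the
# calibration target was drawn on screen (made-up numbers).
camera_points = np.array([[102, 80], [538, 76], [545, 410], [95, 415]], dtype=np.float32)
screen_points = np.array([[0, 0], [1280, 0], [1280, 800], [0, 800]], dtype=np.float32)

H, _ = cv2.findHomography(camera_points, screen_points)

def camera_to_screen(x, y):
    """Map a touch detected at camera pixel (x, y) to screen coordinates."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

print(camera_to_screen(320, 245))   # a touch near the camera centre lands mid-screen
```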

 

 

Optical Multitouch Techniques

Multitouch – How To | Seth Sandler

 

Optical Multitouch Techniques

Each technique utilizes 3 main components:

  1. Infrared Camera (or other optical sensor)
  2. Infrared light
  3. Visual Feedback (projector or LCD)


 

Wednesday, June 8, 2011

Perpetual Storytelling Apparatus

Perpetual Storytelling Apparatus by Julius von Bismarck & Benjamin Maus | CreativeApplications.Net

 

The “Perpetual Storytelling Apparatus” was designed by Julius von Bismarck & Benjamin Maus. It is a drawing machine that illustrates a never-ending story by translating words of text into patent drawings.

Seven million patents — linked by over 22 million references — form the vocabulary. By using references to earlier patents, it is possible to find paths between arbitrary patents. They form a kind of subtext.

The machine attempts to show how new visual connections and narrative layers emerge through the interweaving of the story with the depiction of technical developments.

Basic procedure

1. The program downloads and parses a part of the text of a recent best-selling book.

2. The algorithm eliminates all insignificant words like “I”, “and”, “to”, “for”, “the”, etc. The remaining words and their combinations are the keywords for the patent drawings.

3. Using the keywords in chronological order, it searches for the key patents.

4. The program now searches for a path connecting the found key patents (see the sketch after this list). This is possible because every patent contains several references to older patents – the so-called “prior art”.

5. All key patents and the patents connecting them semantically are arranged and printed.

6. Goto step 1.
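A toy sketch of steps 2 and 4 (not von Bismarck & Maus's software; the stopword list and patent numbers are invented): drop insignificant words, then find a chain of prior-art references between two patents with a breadth-first search.

```python
# Sketch of steps 2 and 4: keyword extraction and a path search over prior-art references.
from collections import deque

STOPWORDS = {"i", "and", "to", "for", "the", "a", "of", "in", "is", "it"}

def keywords(text):
    """Step 2: drop insignificant words, keep the rest as keywords."""
    return [w for w in text.lower().split() if w not in STOPWORDS]

def patent_path(references, start, goal):
    """Step 4: shortest chain of prior-art references from one patent to another."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for ref in references.get(path[-1], []):
            if ref not in seen:
                seen.add(ref)
                queue.append(path + [ref])
    return None

# Toy reference graph with invented patent numbers: each patent lists the older
# patents it cites.
references = {
    "US7654321": ["US6543210", "US5432109"],
    "US6543210": ["US4321098"],
    "US5432109": ["US4321098"],
    "US4321098": [],
}
print(keywords("The apparatus draws a never-ending story"))
# -> ['apparatus', 'draws', 'never-ending', 'story']
print(patent_path(references, "US7654321", "US4321098"))
# -> ['US7654321', 'US6543210', 'US4321098']
```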

 

 

Perpetual Storytelling Apparatus


 

The Works of Silke Hilsing: Impress & Virtual Gravity | Designerscouch #thecritiquenetwork

 

The Works of Silke Hilsing: Impress & Virtual Gravity

Virtual Gravity is an interface between the digital and the analog world. With the aid of analog carriers, virtual terms can be taken up and transported from a loading screen to an analog scale. The importance and popularity of these terms, output as a virtual weight, can be weighed physically and compared. In this way impalpable digital data gains an actual physical existence and becomes a sensually tangible experience.

 

Virtual Gravity [Processing]: Weight of digital data / project by Silke Hilsing | CreativeApplications.Net


Tuesday, June 7, 2011

Martin Wattenberg & Marek Walczak - Apartment

Database Aesthetics » Martin Wattenberg & Marek Walczak - Apartment

 

Saturday, 9 Feb 2008

Martin Wattenberg & Marek Walczak - Apartment


In Wattenberg and Walczak’s Apartment, “viewers are confronted with a blinking cursor. As they type, rooms begin to take shape in the form of a two-dimensional plan, similar to a blueprint. The architecture is based on a semantic analysis of the viewer’s words, reorganizing them to reflect the underlying themes they express. The apartments are then clustered into buildings and cities according to their linguistic relationships.

Each apartment is translated into a navigable three-dimensional dwelling, contrasting abstract plans/texts with experiential images/sounds.

Apartment is inspired by the idea of the memory palace. In a mnemonic technique from a pre-Post-It era, Cicero imagined inscribing the themes of a speech on a suite of rooms in a villa, and then reciting that speech by mentally walking from space to space. Establishing an equivalence between language and space, Apartment connects the written word with different forms of spatial configurations.”

Experience the project here.

Martin Wattenberg: Apartment


 

 


In some versions the computer constructs a 3D as well as 2D structure. In a few later installations, viewers can collaborate, with two people able to merge their apartments into
a combined structure.

Monday, June 6, 2011

projected onto the floor of the gallery that allows the visitor to walk ‘through’ it

Group exhibition, Ljubljana and Ribnica/Slovenia: You Own Me Now Until You Forget About Me. | CONT3XT.NET

 

Apartment (2001), by Martin Wattenberg & Marek Walczak

Martin Wattenberg and Marek Walczak

(with additional programming by Jonathan Feinberg)

Apartment (2001-2004) | Website, projection

In Apartment Martin Wattenberg and Marek Walczak were inspired by Cicero’s mnemonic technique of a memory palace. The user establishes an equivalence between language and space by typing words and phrases into the computer. After being automatically processed, language takes the form of a two-dimensional blueprint projected onto the floor of the gallery that allows the visitor to walk ‘through’ it. The semantic relationships of the written words are connected to spatial and contextual configurations and at the same time cause their architectural re-organisation.

http://www.turbulence.org/Works/apartment/#

tiles on wall arrangement

Academictips.org - Memory Techniques, Memorization Tips - The Roman Room Technique

The Roman Room technique is an ancient and effective way of remembering unstructured information where the relationship of items of information to other items of information is not important.

It functions by imagining a room (e.g. your sitting room or bedroom). Within that room are objects. The technique works by associating images with those objects.

To recall information, simply take a tour around the room in your mind, visualising the known objects and their associated images.

Expanding the Roman Room System

The technique can be expanded in one way, by going into more detail, and keying images to smaller objects. Alternatively you can open doors from the room you are using into other rooms, and use their objects to expand the volume of information stored. When you have more experience you may find that you can build extensions to your rooms in your imagination, and populate them with objects that would logically be there.

Other rooms can be used to store other categories of information.

Moreover, there is no need to restrict this information to rooms: you could use a view or a town you know well, and populate it with memory images.

Summary

The Roman Room technique is similar to the Journey method, in that it works by pegging images coding for information to known images, in this case to objects in a room or several rooms.

The Roman Room technique is most effective for storing lists of unlinked information, whereas the Journey method is most effective for storing lists of related items.

Curiosity Cabinet, Cabinet of Wonder, Wonder-room

Cabinet of curiosities - Wikipedia, the free encyclopedia

A cabinet of curiosities was an encyclopedic collection in Renaissance Europe of types of objects whose categorical boundaries were yet to be defined. They were also known by various names such as Cabinet of Wonder, and in German Kunstkammer or Wunderkammer (wonder-room).

Image: Berlin Naturkundemuseum Korallen.jpg (Wikipedia)

Modern terminology would categorize the objects included as belonging to natural history (sometimes faked), geology, ethnography, archaeology, religious or historical relics, works of art (including cabinet paintings) and antiquities. "The Kunstkammer was regarded as a microcosm or theater of the world, and a memory theater. The Kunstkammer conveyed symbolically the patron's control of the world through its indoor, microscopic reproduction."[1] Of Charles I of England's collection, Peter Thomas has succinctly stated, "The Kunstkabinett itself was a form of propaganda".[2] Besides the most famous and best documented cabinets of rulers and aristocrats, members of the merchant class and early practitioners of science in Europe also formed collections that were precursors to museums.

Several internet bloggers describe their sites as a wunderkammer, either because they are primarily made up of links to things that are interesting, or because they inspire wonder in a similar manner to the original wunderkammer (see External Links, below). Robert Gehl describes internet video sites like YouTube as modern-day Wunderkammern, although in danger of being refined into capitalist institutions, "just as professionalized curators refined Wunderkammers into the modern museum in the 18th century."[19] Playwright Jordan Harrison's Museum Play is structurally based around the cabinets, habitats and hallways of a natural history museum.

Wednesday, June 1, 2011

Content is Queen - composite collage

today and tomorrow

 

“Content is Queen” is a new project by Sergio Albiac: a generative video painting that comments on democracy and power.

At the same time, it is a paradoxical dialogue and strange marriage between the banal and the utterly majestic: to create the series, the most popular (in a truly democratic sense) internet videos of a given moment are used as the input of a generative process that “paints” with action the image of a contemporary Queen.

You might also want to check out a more static version by Sergio called “Divided Experiences”.
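Albiac hasn't published his process, but one simple way to build this kind of composite collage is a brightness-matched mosaic: tile the target portrait and fill each tile with whichever grabbed video frame best matches its average brightness. A minimal sketch, with assumed file names:

```python
# Sketch of a brightness-matched composite collage (not Albiac's actual process).
import numpy as np
from PIL import Image

TILE = 32   # size of each collage tile in pixels

def mean_brightness(img):
    return np.asarray(img.convert("L")).mean()

target = Image.open("queen_portrait.jpg").convert("L")          # assumed target image
frames = [Image.open(f"frame_{i:03d}.jpg") for i in range(100)]  # assumed grabbed video frames
frame_brightness = [mean_brightness(f) for f in frames]

w, h = target.size
collage = Image.new("RGB", (w, h))
target_px = np.asarray(target)

for y in range(0, h - TILE + 1, TILE):
    for x in range(0, w - TILE + 1, TILE):
        tile_value = target_px[y:y + TILE, x:x + TILE].mean()
        # pick the frame whose average brightness is closest to this tile
        best = min(range(len(frames)), key=lambda i: abs(frame_brightness[i] - tile_value))
        collage.paste(frames[best].resize((TILE, TILE)), (x, y))

collage.save("content_is_queen_sketch.jpg")
```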


 

Sony's "SmartAR" Augmented Reality Tech Demo

Sony's "SmartAR" Augmented Reality Tech Demo - Core77

 

Posted by hipstomp | 30 May 2011


Sony might have lost the portable music player and smartphone war, but it's too soon to count them out of the product design space. What they need is a hit or a killer app to put them back in the game, and since they've lost points on hardware, perhaps they'll make it back in software. Take a look at "SmartAR," the augmented reality technology they've been messing around with in their skunkworks:

 

Needless to say, the ability to photograph barcode-less items in the real world and get instant information on them could be huge, a sort of away-from-a-home-computer Google. What remains to be seen is whether Sony can bring it to the masses in a palatable format and, of course, what Google will counteroffer if SmartAR takes off.
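Sony hasn't published SmartAR's internals, but markerless recognition of this kind is commonly done with local feature matching. A minimal sketch using ORB features in OpenCV (the reference image name and the match thresholds are assumptions):

```python
# Sketch of markerless object recognition via ORB feature matching (not Sony's algorithm).
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

reference = cv2.imread("product_reference.jpg", cv2.IMREAD_GRAYSCALE)  # assumed file
ref_kp, ref_desc = orb.detectAndCompute(reference, None)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kp, desc = orb.detectAndCompute(gray, None)
    if desc is not None:
        matches = matcher.match(ref_desc, desc)
        good = [m for m in matches if m.distance < 40]   # crude quality cut-off
        if len(good) > 25:                               # enough matches: object found
            cv2.putText(frame, "object recognised", (20, 40),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("SmartAR-style recognition sketch", frame)
    if cv2.waitKey(1) & 0xFF == 27:   # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```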