realized only with open source software
SixthSense - Wikipedia, the free encyclopedia
SixthSense is a wearable gestural interface device developed by Pranav Mistry, a PhD student in the Fluid Interfaces Group at the MIT Media Lab. It is similar to Telepointer, a neckworn projector/camera system developed by Media Lab student Steve Mann[1] (which Mann originally referred to as "Synthetic Synesthesia of the Sixth Sense").[2]
The SixthSense prototype comprises a pocket projector, a mirror and a camera contained in a pendant-like wearable device. Both the projector and the camera are connected to a mobile computing device in the user’s pocket. The projector projects visual information, enabling surfaces, walls and physical objects around us to be used as interfaces, while the camera recognizes and tracks the user's hand gestures and physical objects using computer-vision-based techniques.[3] The software program processes the video stream data captured by the camera and tracks the locations of the colored markers (visual tracking fiducials) at the tips of the user’s fingers. The movements and arrangements of these fiducials are interpreted into gestures that act as interaction instructions for the projected application interfaces. SixthSense supports multi-touch and multi-user interaction.
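Not Mistry's actual code, but a minimal sketch of the kind of colored-marker tracking described above, assuming a webcam, OpenCV, and made-up HSV ranges for two fingertip markers; a real system would go on to map these marker centroids to gestures.

```python
import cv2
import numpy as np

# Illustrative HSV ranges for the colored fingertip markers (assumed values).
MARKER_RANGES = {
    "thumb": ((0, 120, 120), (10, 255, 255)),    # red-ish marker
    "index": ((50, 120, 120), (70, 255, 255)),   # green-ish marker
}

def track_markers(frame):
    """Return the pixel centroid of each colored marker found in the frame."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    positions = {}
    for name, (lo, hi) in MARKER_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        moments = cv2.moments(mask)
        if moments["m00"] > 0:
            positions[name] = (int(moments["m10"] / moments["m00"]),
                               int(moments["m01"] / moments["m00"]))
    return positions

cap = cv2.VideoCapture(0)
for _ in range(120):                  # track markers for a few seconds
    ok, frame = cap.read()
    if not ok:
        break
    print(track_markers(frame))       # e.g. {'index': (312, 240), 'thumb': (401, 250)}
cap.release()
```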
Created by The Light Surgeons for the National Maritime Museum in London, the installation “Voyagers” engages with England’s long-standing relationship to the sea, featuring thematic images and film from the museum’s collection animated atop a continually flowing ocean of typography across an abstract, wave-shaped structure. Together with a number of other projects, the installation opens to the public tomorrow. We got a chance to take a sneak peek earlier today and get some insight into the making, together with what we enjoy most – the debug info and some fantastic behind-the-scenes images.
James George from the New York studio Flightphase collaborated with The Light Surgeons to create custom applications to animate the content in realtime. Created using openFrameworks, the applications use a number of different tools to communicate the narratives. The ocean effect of type sweeping across the installation surface is a 3D wave simulation created using a vector field. The complete simulation is stitched and mapped across seven projectors covering the 20-metre triangulated surface. The image sets were designed by The Light Surgeons to relate to each of the six themes of the museum. openFrameworks parses the layouts and generates animations that cascade down the wave. Also, at the far end of the gallery is a Puffersphere, a spherical display with an internal projector. During the course of each cascade of images the Puffersphere collects thematic keywords that relate to the images and prints them onto the surface of the globe. Likewise, the type waves trigger projected content on the sphere as they “hit” its surface. The audio, created by Jude Greenaway, is mixed dynamically by interfacing openFrameworks to SuperCollider over OSC.
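The installation itself was built in openFrameworks across seven projectors; purely as an illustration of the vector-field idea, here is a toy Python sketch that advects a handful of glyphs through a travelling-wave velocity field. The surface size, wave shape and speeds are invented, not the project's real values.

```python
import math
import random

# The "vector field": a velocity for every point of the surface, driving the wave.
FIELD_W, FIELD_H = 2000.0, 600.0   # illustrative surface size in pixels

def field(x, y, t):
    """Velocity at (x, y) at time t: a travelling sine wave plus steady drift."""
    phase = (x / FIELD_W) * 4 * math.pi - t
    return (60.0, 40.0 * math.cos(phase))   # (vx, vy) in px/s

class Glyph:
    def __init__(self, char):
        self.char = char
        self.x = random.uniform(0, FIELD_W)
        self.y = random.uniform(0, FIELD_H)

    def step(self, t, dt):
        vx, vy = field(self.x, self.y, t)
        self.x = (self.x + vx * dt) % FIELD_W              # wrap around the surface
        self.y = max(0.0, min(FIELD_H, self.y + vy * dt))  # stay on the surface

glyphs = [Glyph(c) for c in "VOYAGERS"]
t, dt = 0.0, 1 / 60
for _ in range(180):                 # simulate three seconds at 60 fps
    for g in glyphs:
        g.step(t, dt)
    t += dt
print([(g.char, round(g.x), round(g.y)) for g in glyphs])
```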
James used Dan Shiffman‘s Most Pixels Ever library for synchronizing the application. He has also released a number of changes to the library that can be found here (github). The team also built a way to synchronize parameters over the network using MPE (github), and through developing content for the Puffersphere they created a lightweight library for animating the surface of the sphere, which can be found here.
Full credits:
Design/Direction: The Light Surgeons, Bespoke Software Design: Flightphase, Sound Design: Jude Greenaway, Additional Programming: Timothy Gfrerer, SuperCollider Programming: Michael McCrea and Exhibition Design: Real Studios
Tactile Pixels Will Make It Easy To Read Braille On Touchscreens
It’s a familiar complaint. Even as companies like Apple try to tie the computer interface back to the natural world, touchscreens are still woefully flat. But recent developments, including Senseg’s new E-sense technology, could bring real-world touch into everyday computing. Obviously, this is nothing new. Research into programmable friction has already yielded impressive results, especially in the realm of stickiness. But this concept accomplishes the same goal by using “tixels”, or tactile pixels, to create an electrical field you can feel. Your skin responds by feeling whatever the interface wants it to feel: buttons, perhaps, or even the fur of a virtual pet. The project sounds really promising, and Senseg already has Toshiba backing it. Imagine video chat aided by tactile pixels. Being able to gently touch the face of a newborn baby. Or the hand of a distant lover. The possibilities are as endless as ever. [The Next Web via Geekosystem]
Today in the Department of Kinect Hacks, we’ve got an official-looking hack showing off how you can use the Kinect (and its open-source drivers, of course) to turn any flat surface into a multi-touch trackpad or projected Surface.
It’s pretty straightforward, really. The Kinect looks at the scene in 3D, you establish a plane and boundaries for the interaction area, and boom, multi-touch.
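Patten Studio has not published this code; the sketch below only illustrates the plane-plus-threshold idea in Python with NumPy: fit a plane to the empty surface once, then flag depth pixels that sit a few millimetres in front of it as touches. The depth frame, thresholds and sample grid here are stand-ins, not their implementation.

```python
import numpy as np

# Assume `depth` is an (H, W) array of depth values in millimetres for the scene,
# e.g. one frame from libfreenect; here we only sketch the touch test itself.

def fit_plane(points):
    """Least-squares plane z = ax + by + c through N (x, y, z) sample points."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs                      # (a, b, c)

def touch_mask(depth, plane, near_mm=10, far_mm=40):
    """Pixels hovering just above the fitted surface count as touches."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    surface = plane[0] * xs + plane[1] * ys + plane[2]
    height = surface - depth           # how far in front of the surface each pixel is
    return (height > near_mm) & (height < far_mm)

# Calibration: sample the empty surface once to establish the interaction plane.
depth = np.full((480, 640), 1200.0)                  # stand-in for a real frame
samples = np.array([[x, y, depth[y, x]] for y in range(0, 480, 60)
                                        for x in range(0, 640, 80)], dtype=float)
plane = fit_plane(samples)

# A "finger" 25 mm in front of the surface shows up in the mask.
depth[200:210, 300:310] -= 25
print(touch_mask(depth, plane).sum())   # -> 100 touched pixels
```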
This little demo was put together by a seasoned interactive surface team, Patten Studio, from whom I hope we can expect to see an open demo app of this thing.
[via Reddit, where they're getting good at catching these little experiments]
Blitz, an interactive marketing agency, has released its source code and scripts for a Kinect mod that outputs data compatible with Flash, HTML, Unity and Microsoft Silverlight. The company, which helped launch Halo: Waypoint with Microsoft and 343 Industries, explains that the C++ required by the device's standard drivers was too limiting for budding Kinect hackers.
http://www.joystiq.com/2011/01/13/kinect-hacks-flash-html-unity-and-silverlight-integration/
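Blitz's actual sources are linked above; purely as a generic illustration of this kind of bridge, the sketch below forwards (fake) Kinect joint data as JSON over a local UDP socket so that a Flash, HTML/JS, Unity or Silverlight client could parse it. The port and joint values are made up, and a real bridge would read them from the Kinect driver.

```python
import json
import socket
import time

# Sketch of the "bridge" idea: a native process reads Kinect data and rebroadcasts
# it in a neutral format (JSON over UDP) for non-C++ runtimes to consume.
OUT_ADDR = ("127.0.0.1", 9000)        # assumed port for the client runtime

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for frame in range(3):
    skeleton = {
        "frame": frame,
        "joints": {                    # fake joint positions in metres
            "head":       {"x": 0.02 * frame, "y": 1.60, "z": 2.10},
            "hand_right": {"x": 0.35,         "y": 1.10, "z": 1.80},
        },
    }
    sock.sendto(json.dumps(skeleton).encode("utf-8"), OUT_ADDR)
    time.sleep(1 / 30)                # roughly the Kinect's 30 fps frame rate
sock.close()
```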
Has Anybody Downloaded The Kinect PC SDK Yet? | Gizmodo Australia
One of the most exciting things about Microsoft’s Kinect gaming peripheral (aside from Dance Central) is the awesome ways that hackers took it and used it to create really interesting and new user interfaces for engaging with technology. And now that Microsoft has officially launched an SDK that lets users create their own apps using the Kinect camera, we’re wondering if any of you have downloaded it yet.
It’s an amazing technology that opens up the doors for lots of future applications, many of which are likely to be created by people playing around with this SDK.
If you have downloaded it, tell us what you think in the comments below. What are you planning to do with it – will you be trying to create something amazing, or just stuffing around for laughs?
Museum of the personal: the souvenir and nostalgia
Essentially, I am interested in dealing with photography's capacity to hold nostalgic significance for the possessor.
Museum of the personal: the souvenir and nostalgia
Souvenirs generally fall into two distinct types: a souvenir of a place, or a souvenir of an event. These types sometimes overlap, and once purchased both aspects are considered intrinsic to the narrative of the object. For instance, someone may say 'I bought this key ring at Stonehenge last summer' or 'I bought this tee-shirt at Big Day Out last year.' As contended earlier, it is obvious that a mass-produced kitsch materiality limited to the realm of tourism does not bound the souvenir as an object. Souvenirs can also be precious objects from the start, for instance souvenir commodities from jewelry factories and gem fields. Rather, the souvenir can take any material form as long as the relationship with the possessor is intact. By this, I mean that there is no separation or rupture of the narrative cast by the possessor regarding the object. This relationship is at once fetishistic, nostalgic and above all capable of generating a narrative or discourse with the aid of the owner. Without the narrative, the object's meaning is invisible: not able to be articulated without the possessor's input, its role as a stand-in or partial object is lost.
Sweatshoppe – Video Painting « Urban Projection
Got an email about a fantastic project going on in New York. I'll just post the mail here:
Multimedia performers Sweatshoppe have been wheat pasting buildings with moving images all over New York. Mapping video projections to LED-lit paint rollers, Sweatshoppe lay their projections on a surface, paint-stroke by paint-stroke. They call this new digital performance style “Video Painting” and have demonstrated the end result here:
SWEATSHOPPE, 4spots, the landing extras from SWEATSHOPPE on Vimeo.
How it works: the software controlling the video was written in Max. The paint roller does not use any sort of paint; it simply contains green LEDs. The software tracks the color green and outputs the x and y position, which is sent to drawing commands, and the strokes are textured with video.
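Sweatshoppe's software is written in Max, so the following is only a rough Python/OpenCV sketch of the same idea: track the green LED in the camera image, accumulate strokes where it has been, and reveal the video texture through that mask. The HSV range, brush size and the texture.mp4 filename are placeholders.

```python
import cv2
import numpy as np

# Illustrative HSV range for the green LED on the roller (assumed values).
GREEN_LO = np.array([45, 100, 100])
GREEN_HI = np.array([75, 255, 255])

cap = cv2.VideoCapture(0)                      # camera watching the wall
video = cv2.VideoCapture("texture.mp4")        # hypothetical clip used as brush texture
canvas = None                                  # accumulated "painted" mask

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if canvas is None:
        canvas = np.zeros(frame.shape[:2], dtype=np.uint8)

    # Find the LED and stamp a brush stroke at its position.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, GREEN_LO, GREEN_HI)
    m = cv2.moments(mask)
    if m["m00"] > 0:
        x, y = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        cv2.circle(canvas, (x, y), 40, 255, -1)

    ok, tex = video.read()
    if not ok:
        break
    tex = cv2.resize(tex, (frame.shape[1], frame.shape[0]))
    out = cv2.bitwise_and(tex, tex, mask=canvas)   # reveal video only where "painted"
    cv2.imshow("video painting", out)
    if cv2.waitKey(1) == 27:                       # Esc to quit
        break

cap.release()
video.release()
cv2.destroyAllWindows()
```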
Sweatshoppe is video artists Bruno Levy and Blake Shaw. They plan on eventually releasing the software, but only after it is much more refined, buffed up with features and user-friendly.
Community Core Vision (CCV) – Calibration | Seth Sandler
In order to calibrate CCV for your camera and projector/LCD, you'll need to run the calibration process. Calibrating allows touch points from the camera to line up with elements on screen. This way, when touching something displayed on screen, the touch is registered in the correct place. In order to do this, CCV has to translate camera space into screen space; this is done by touching individual calibration points.
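CCV performs this calibration internally; as an illustration of what "translating camera space into screen space" amounts to, the sketch below solves for a camera-to-screen homography from four touched calibration points using OpenCV. All coordinates here are invented.

```python
import cv2
import numpy as np

# Known on-screen calibration targets (screen pixels) and where the camera saw
# the corresponding touches (camera pixels). Values are illustrative only.
screen_pts = np.float32([[100, 100], [1180, 100], [1180, 620], [100, 620]])
camera_pts = np.float32([[ 82, 130], [ 560, 118], [ 574, 430], [ 90, 442]])

# The calibration step amounts to solving for the camera -> screen homography.
H, _ = cv2.findHomography(camera_pts, screen_pts)

def camera_to_screen(x, y):
    """Map a touch blob detected in camera space to the screen position it hit."""
    pt = cv2.perspectiveTransform(np.float32([[[x, y]]]), H)
    return tuple(pt[0, 0])

print(camera_to_screen(320, 280))   # a touch near the middle of the camera image
```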
Multitouch – How To | Seth Sandler
Each technique utilizes 3 main components:
Perpetual Storytelling Apparatus by Julius von Bismarck & Benjamin Maus | CreativeApplications.Net
The “Perpetual Storytelling Apparatus” was designed by Julius von Bismarck & Benjamin Maus. It is a drawing machine that illustrates a never-ending story by translating words of text into patent drawings.
Seven million patents — linked by over 22 million references — form the vocabulary. By using references to earlier patents, it is possible to find paths between arbitrary patents. They form a kind of subtext.
The machine attempts to show how new visual connections and narrative layers emerge through the interweaving of the story with the depiction of technical developments.
Basic procedure
1. The program downloads and parses a part of the text of a recent best-selling book.
2. The algorithm eliminates all insignificant words like “I”, “and”, “to”, “for”, “the”, etc. The remaining words and their combinations are the keywords for the patent drawings.
3. Using the keywords in chronological order, it searches for the key-patents.
4. The program now searches for a path connecting the found key patents. This is possible because every patent contains several references to older patents – the so-called “prior art”.
5. All key-patents and the patents connecting them semantically are arranged and printed.
6. Go to step 1. (A minimal sketch of this loop follows below.)
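As a toy illustration of steps 2-5 (not the authors' code), the sketch below filters stopwords from a sentence, maps the surviving keywords to key patents, and breadth-first searches a miniature prior-art graph for connecting paths. The text, stopword list, citation graph and patent numbers are all made up.

```python
from collections import deque

# Toy stand-ins for the real data: a snippet of text and a tiny citation graph
# (patent -> the prior-art patents it references). The real apparatus works
# over seven million patents; everything below is invented for illustration.
TEXT = "The captain turned the wheel and the engine roared"
STOPWORDS = {"i", "and", "to", "for", "the", "a", "of"}
CITATIONS = {
    "US-ENGINE-9":  ["US-PISTON-4", "US-WHEEL-2"],
    "US-WHEEL-2":   ["US-AXLE-1"],
    "US-CAPTAIN-7": ["US-WHEEL-2"],
    "US-PISTON-4":  [],
    "US-AXLE-1":    [],
}
KEY_PATENT = {"captain": "US-CAPTAIN-7", "wheel": "US-WHEEL-2",
              "engine": "US-ENGINE-9"}        # step 3: keyword -> key patent

def keywords(text):
    """Step 2: drop insignificant words, keep the rest in order."""
    return [w for w in text.lower().split() if w not in STOPWORDS]

def neighbors(patent):
    """Treat prior-art references as undirected links for path-finding."""
    linked = list(CITATIONS.get(patent, []))
    linked += [p for p, refs in CITATIONS.items() if patent in refs]
    return linked

def path(start, goal):
    """Step 4: breadth-first search for a chain of references between patents."""
    queue, seen = deque([[start]]), {start}
    while queue:
        chain = queue.popleft()
        if chain[-1] == goal:
            return chain
        for nxt in neighbors(chain[-1]):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(chain + [nxt])
    return None

keys = [KEY_PATENT[w] for w in keywords(TEXT) if w in KEY_PATENT]
for a, b in zip(keys, keys[1:]):              # step 5: connect consecutive key-patents
    print(a, "->", b, ":", path(a, b))
```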
Perpetual Storytelling Apparatus
The Works of Silke Hilsing: Impress & Virtual Gravity | Designerscouch #thecritiquenetwork
Virtual Gravity is an interface between the digital and analog worlds. With the aid of analog carriers, virtual terms can be picked up and transported from a loading screen to an analog scale. The importance and popularity of these terms, output as a virtual weight, can be physically weighed and compared. Impalpable digital data thus gains an actual physical existence and becomes a sensually tangible experience.
Database Aesthetics » Martin Wattenberg & Marek Walczak - Apartment
Saturday, 9 Feb 2008
In Wattenberg and Walczak’s Apartment, “viewers are confronted with a blinking cursor. As they type, rooms begin to take shape in the form of a two-dimensional plan, similar to a blueprint. The architecture is based on a semantic analysis of the viewer’s words, reorganizing them to reflect the underlying themes they express. The apartments are then clustered into buildings and cities according to their linguistic relationships.
Each apartment is translated into a navigable three-dimensional dwelling, contrasting abstract plans/texts with experiential images/sounds.
Apartment is inspired by the idea of the memory palace. In a mnemonic technique from a pre-Post-It era, Cicero imagined inscribing the themes of a speech on a suite of rooms in a villa, and then reciting that speech by mentally walking from space to space. Establishing an equivalence between language and space, Apartment connects the written word with different forms of spatial configurations.”
Experience the project here.
In some versions the computer constructs a 3D as well as 2D structure. In a few later installations, viewers can collaborate, with two people able to merge their apartments into a combined structure.
Jonathan Salem Baskin's Dim Bulb: Two-Headed Babies
Curiosity cabinets were history's first happenings, or performance art pieces.
Over time, science got more rigorous, and education more common. The scientific method robbed life of myths and superstitions, and replaced them with facts and repeatable processes. We lost our experience of magic, only to have it replaced with steam engines.
Serious museums emerged to educate people on this shift.
WELCOME TO THE DALI HOUSE (PLEASE MIND YOUR HEAD)
A bit more rooting around in other people’s boxes of stuff lately. Nothing to get alarmed about though. Specifically it’s the curiosity cabinets alluded to in previous posts on Mark Ryden
Studio Bility's Curiosity Cabinet | Apartment Therapy Los Angeles
We've always loved curiosity cabinets, and this one from the Icelandic husband-and-wife design team of Studio Bility is no exception. The unit opens from all four sides, offering a myriad of storage slots for all shapes and sizes of curiosities to artfully present when the opportunity arises.
Projection mapping on the rise
As visualists have noticed already and known for some time now, projection mapping is cool. And now it seems that this technique is going mainstream in a big way, as it is beginning to be used more and more in commercial contexts. In my city I am noticing more and more outdoor video projections – some of them using mapping techniques.
For those who stumble upon this article and are not familiar with the term, projection mapping is the technique of beaming video (with a standard video projector) onto three-dimensional objects and adjusting and masking the image so that it seems to follow the shape of the target object instead of spilling out onto walls etc. The result can be surprisingly effective and eye-catching, as the video is no longer a flat square on the wall but becomes an object in space – an animated sculpture, if you will.
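As a minimal illustration of the adjusting-and-masking step (not any particular artist's pipeline), the sketch below corner-pins a flat video frame onto one face of an object as seen from the projector, using an OpenCV perspective warp. The projector resolution and corner coordinates are placeholders that would normally be tweaked by eye while looking at the projection.

```python
import cv2
import numpy as np

# One face of the physical object, as seen by the projector (pixel coordinates).
# In practice these corners are dragged into place by eye during setup.
PROJECTOR_SIZE = (1280, 720)                       # (width, height)
target_quad = np.float32([[420, 180], [830, 210],  # top-left, top-right
                          [860, 560], [400, 520]]) # bottom-right, bottom-left

def map_frame(frame):
    """Warp a flat video frame so it lands only on the target face."""
    h, w = frame.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    M = cv2.getPerspectiveTransform(src, target_quad)
    return cv2.warpPerspective(frame, M, PROJECTOR_SIZE)   # black everywhere else

test = np.full((480, 640, 3), 200, dtype=np.uint8)         # stand-in video frame
cv2.imwrite("mapped_frame.png", map_frame(test))           # what the projector shows
```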
Martin Wattenberg and Marek Walczak
(with additional programming by Jonathan Feinberg)
Apartment (2001-2004) | Website, projection
In Apartment Martin Wattenberg and Marek Walczak were inspired by Cicero’s mnemonic technique of a memory palace. The user establishes an equivalence between language and space by typing words and phrases into the computer. After being automatically processed, language takes the form of a two-dimensional blueprint projected onto the floor of the gallery that allows the visitor to walk ‘through’ it. The semantic relationships of the written words are connected to spatial and contextual configurations and at the same time cause their architectural re-organisation.
http://www.turbulence.org/Works/apartment/#
Method of loci - Wikipedia, the free encyclopedia
'the method of loci', an imaginal technique known to the ancient Greeks and Romans and described by Yates (1966) in her book The Art of Memory as well as by Luria (1969). In this technique the subject memorizes the layout of some building, or the arrangement of shops on a street, or any geographical entity which is composed of a number of discrete loci. When desiring to remember a set of items the subject literally 'walks' through these loci and commits an item to each one by forming an image between the item and any distinguishing feature of that locus. Retrieval of items is achieved by 'walking' through the loci, allowing the latter to activate the desired items.
The method of loci is also commonly called the mental walk.
In basic terms, it is a method of memory enhancement which uses visualization to organize and recall information. Many memory contest champions claim to use this technique in order to recall faces, digits, and lists of words.
This has more to do with their technique of using regions of their brain that have to do with spatial learning.
It is generally applied to encoding the key ideas of a subject. Two approaches are:
1. Link the key ideas of a subject and then deep-learn those key ideas in relation to each other, and;
2. Think through the key ideas of a subject in depth, re-arrange the ideas in relation to an argument, then link the ideas to loci in good order.
It has been found that teaching such techniques as pure memorization methods often leads students towards surface learning only. Therefore, it has been recommended that the method of loci should be integrated thoroughly with deeper learning approaches.
Example:
During the mental walk, people remember lists of words by mentally walking a familiar route and associating these objects with specific landmarks on their route. An example of this would be to remember your grocery shopping list in a mental walk from your bedroom to kitchen in your house. Let's say the first item on your list was bread; then mentally you can place a loaf of bread on your bed. As you continue mentally walking you can place the next item, assume it is eggs, on your dresser. The mental walk continues like this as you place consecutive items along a familiar route that you walk. So when you are at the grocery store, you can then think about this walk and “see” what you placed at each location. In your head you will remember bread being on your bed, and eggs being on the dresser. This can continue for as many items as you want to place on your path as long as the route continues. The more dramatic the images, the more vivid the memory. For instance: instead of "bread", try to visualize a giant loaf of bread; instead of "eggs", imagine broken eggs all over the place.
Academictips.org - Memory Techniques, Memorization Tips - The Roman Room Technique
The Roman Room technique is an ancient and effective way of remembering unstructured information where the relationship of items of information to other items of information is not important.
It functions by imagining a room (e.g. your sitting room or bedroom). Within that room are objects. The technique works by associating images with those objects.
To recall information, simply take a tour around the room in your mind, visualising the known objects and their associated images.
Expanding the Roman Room System
The technique can be expanded in one way, by going into more detail, and keying images to smaller objects. Alternatively you can open doors from the room you are using into other rooms, and use their objects to expand the volume of information stored. When you have more experience you may find that you can build extensions to your rooms in your imagination, and populate them with objects that would logically be there.
Other rooms can be used to store other categories of information. Moreover, there is no need to restrict this information to rooms: you could use a view or a town you know well, and populate it with memory images.
Summary
The Roman Room technique is similar to the Journey method, in that it works by pegging images coding for information to known images, in this case to objects in a room or several rooms.
The Roman Room technique is most effective for storing lists of unlinked information, whereas the journey method is most effective for storing lists of related items.
Cabinet of curiosities - Wikipedia, the free encyclopedia
A cabinet of curiosities was an encyclopedic collection in Renaissance Europe of types of objects whose categorical boundaries were yet to be defined. They were also known by various names such as Cabinet of Wonder, and in German Kunstkammer or Wunderkammer (wonder-room).
Modern terminology would categorize the objects included as belonging to natural history (sometimes faked), geology, ethnography, archaeology, religious or historical relics, works of art (including cabinet paintings) and antiquities. "The Kunstkammer was regarded as a microcosm or theater of the world, and a memory theater. The Kunstkammer conveyed symbolically the patron's control of the world through its indoor, microscopic reproduction."[1] Of Charles I of England's collection, Peter Thomas has succinctly stated, "The Kunstkabinett itself was a form of propaganda".[2] Besides the most famous and best documented cabinets of rulers and aristocrats, members of the merchant class and early practitioners of science in Europe also formed collections that were precursors to museums.
Several internet bloggers describe their sites as a wunderkammer, either because they are primarily made up of links to things that are interesting, or because they inspire wonder in a similar manner to the original wunderkammer (see External Links, below). Robert Gehl describes internet video sites like YouTube as modern-day Wunderkammern, although in danger of being refined into capitalist institutions, "just as professionalized curators refined Wunderkammers into the modern museum in the 18th century."[19] Playwright Jordan Harrison's Museum Play is structurally based around the cabinets, habitats and hallways of a natural history museum.
“Content is Queen” is a new project by Sergio Albiac, it is a generative video painting that comments on democracy and power.
At the same time, it is a paradoxical dialogue and strange marriage between the banal and utterly majestic: to create the series, the most popular (in a truly democratic sense) internet videos of a given moment are used as the input of a generative process that “paints” with action the image of a contemporary Queen.
You might also want to check out a more static version by Sergio called “Divided Experiences“.
Sony's "SmartAR" Augmented Reality Tech Demo - Core77
Posted by hipstomp | 30 May 2011
Sony might have lost the portable music player and smartphone war, but it's too soon to count them out of the product design space. What they need is a hit or a killer app to put them back in the game, and since they've lost points on hardware, perhaps they'll make it back in software. Take a look at "SmartAR," the augmented reality technology they've been messing around with in their skunkworks:
Needless to say, the ability to photograph barcode-less items in the real world and get instant information on them could be huge, a sort of away-from-a-home-computer Google. What remains to be seen is if Sony can bring it to the masses in a palatable format and, of course, what Google will counteroffer if SmartAR takes off.
IdN presents the Street Art Issue and Projection Mapping issue for May-June 2011. The IdN Video v18n2: Projection Mapping issue includes a DVD with 17 leading international studios and designers who play with optical perception and create imaginative illusions in real time. Take a look at the video below:
We look beyond the object itself, Inside and out // We focus on the ethos that defines the meaning of a product and its function and purpose. Using our unique approach, we are able to create products that move you, that challenge conventional wisdom, that embrace positive and meaningful experiences.
OUR FOCUS IS NOT JUST THE PERFECT DESIGN SOLUTION BUT THE CHARGED SPACE BETWEEN THE USER AND THE PRODUCT. THE INTANGIBLE SPACE THAT DEFINES THE EXPERIENCE AND ENABLES DEEPER CONNECTIONS
Cultural Memory: Forgetting to Remember/ Remembering to Forget
Cultural memory - Wikipedia, the free encyclopedia
Because memory is not just an individual, private experience but is also part of the collective domain, cultural memory has become a topic in both historiography (Pierre Nora, Richard Terdiman) and cultural studies (e.g., Susan Stewart). These emphasize cultural memory’s process (historiography) and its implications and objects (cultural studies), respectively. Memory is a phenomenon that is directly related to the present; our perception of the past is always influenced by the present, which means that it is always changing.
Stewart, for example, claims that our culture has changed from a culture of production to a culture of consumption.
These specific objects can refer to either a distant time (an antique) or a distant (exotic) place. Stewart explains how our souvenirs authenticate our experiences and how they are a survival sign of events that exist only through the invention of narrative.
Catherine Keenan explains how the act of taking a picture can underline the importance of remembering, both individually and collectively. Also she states that pictures cannot only stimulate or help memory, but can rather eclipse the actual memory – when we remember in terms of the photograph – or they can serve as a reminder of our propensity to forget. Others have argued that photographs can be incorporated in memory and therefore supplement it.
Nora pioneered connecting memory to physical, tangible locations, nowadays globally known and incorporated as lieux de mémoire
lieux de mémoire - The "site of memory" is a historical concept put forward in Les Lieux de Mémoire (Places of Memory), published under the direction of Pierre Nora between 1984 and 1992. Nora is equally well known for having directed these three volumes, whose aim was to enumerate the places and objects in which the national memory of the French is incarnated. According to Pierre Nora, a place of memory, "in every sense of the word, runs from the most material and concrete object, possibly geographically situated, to the most abstract and intellectually constructed one." It may therefore be a monument, a prominent figure, a museum, an archive, as well as a symbol, a currency, an event or an institution.
"An object becomes a place of memory when he escaped oblivion, for example with the display of commemorative plaques, and when a community reinvestment of its emotionand its emotions . "
Individuals remember events and experiences, some of which they share with a collective. Through mutual reconstruction and recounting, collective memory is reconstructed. Individuals are born into a familial discourse which already provides a backdrop of communal memories against which individual memories are shaped. A group's communal memory becomes its common knowledge, which creates a social bond, a sense of belonging and identity.
Memory work - Wikipedia, the free encyclopedia
it is perception driven by a longing for authenticity that colors memory, which is made clear by a desire to experience the real (Susan Stewart)
Why Not... A Robert Gober Nursery? - Daddy Types
I've always loved the cute-but-unsettling evocations of vestigial childhood memories that adhere to the incredible, hand-made sculptures of Robert Gober. And it continues to surprise me that no one has ever licensed Gober baby furniture--actually, that doesn't surprise me at all. But at least, why has no one created a Gober-inspired nursery?
I mean, the actual crib and playpen sculptures are problematic because a) they're sculptures, not functional objects, so b) they're often distorted or incomplete, c) they probably run about a million dollars apiece if they ever came to market at all, d) if you did manage to get one of the rare, 4-sided, right-angled works, the conservation risks of having a teething kid gnaw on the rail might keep you up at night, and the real dealbreaker, e) the spindles are spaced more than the CPSC-mandated 2 3/8 inches apart, which could pose a strangulation or entrapment hazard.
NonObject: The Phone Designed With Wrinkles - Gizmodo
a more honest representation of life and how we live it? No orange is a perfect sphere; no tree grows in a perfectly straight line.
Why then do we seek perfections in our lives and in our objects?
Tarati invites us, finally, to "reach out and touch someone" through the invisible magic of technology. It is, literally, poetry in motion, the first step toward realizing the promise of cellular technology. Technology, invisible as it is, is also magical. It is possible to penetrate the barrier of the physical.
An interesting project on show at the Computer-Human Interaction Conference in Vancouver combines two Microsoft Kinect devices and a projector with a large acrylic globe and some custom software and hardware to present a 360-degree view of an object, which can then track a viewer's movements and be controlled by gestures.
“Project Snowglobe” has been designed by students from Queen's University and has a hemispherical mirror mounted inside an acrylic sphere onto which the projector displays its image. Watch the video after the jump to see “Project Snowglobe” in action.
The image viewed within the globe is not actually in 3D, but it moves as the viewer, tracked by the two Kinect devices, walks around the globe.
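The Queen's University implementation has not been published here; the sketch below only illustrates the basic interaction loop: take the viewer's position (which would come from Kinect skeleton tracking), compute the angle at which they stand around the globe, and rotate the rendered object to face them. All numbers are illustrative.

```python
import math

# Sketch of the Snowglobe-style interaction: the rendered view of the object is
# rotated to face whoever is walking around the globe. The viewer position would
# come from Kinect skeleton tracking; here it is just an (x, z) pair in metres,
# with the globe at the origin.
def view_yaw_degrees(viewer_x, viewer_z):
    """Angle around the globe at which the viewer is standing."""
    return math.degrees(math.atan2(viewer_x, viewer_z))

def scene_rotation(viewer_x, viewer_z):
    """Rotate the displayed object so its front always faces the viewer."""
    return view_yaw_degrees(viewer_x, viewer_z) % 360.0

# The viewer walks a quarter-circle around the globe; the scene follows.
for step in range(5):
    angle = math.radians(step * 22.5)
    x, z = 1.5 * math.sin(angle), 1.5 * math.cos(angle)
    print(f"viewer at ({x:+.2f}, {z:+.2f}) m -> rotate scene {scene_rotation(x, z):6.1f} deg")
```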
Planned obsolescence - Wikipedia, the free encyclopedia
Planned obsolescence or built-in obsolescence[1] in industrial design is a policy of deliberately planning or designing a product with a limited useful life, so it will become obsolete or nonfunctional after a certain period.[1] Planned obsolescence has potential benefits for a producer because to obtain continuing use of the product the consumer is under pressure to purchase again, whether from the same manufacturer (a replacement part or a newer model), or from a competitor which might also rely on planned obsolescence.[1]
For an industry, planned obsolescence stimulates demand by encouraging purchasers to buy sooner if they still want a functioning product. Built-in obsolescence is used in many different products. There is, however, the potential backlash of consumers who learn that the manufacturer invested money to make the product obsolete faster; such consumers might turn to a producer (if any exists) that offers a more durable alternative.[citation needed]
Planned obsolescence was first developed in the 1920s and 1930s when mass production had opened every minute aspect of the production process to exacting analysis.[citation needed]
Estimates of planned obsolescence can influence a company's decisions about product engineering. Therefore the company can use the least expensive components that satisfy product lifetime projections. Such decisions are part of a broader discipline known as value engineering.
Planned obsolescence - Wikipedia, the free encyclopedia
The phrase was first popularized in 1932 with Bernard London's pamphlet Ending the Depression Through Planned Obsolescence.
Brooks Stevens - Wikipedia, the free encyclopedia
Though he is often credited[citation needed] with inventing the concept of planned obsolescence (the practice of artificially shortening product lifecycles in order to influence the buying patterns of consumers in favor of manufacturers), he did not invent it but rather coined the term and defined it. Stevens defined it as "instilling in the buyer the desire to own something a little newer, a little better, a little sooner than is necessary". His view was to always make the consumer want something new, rather than create poor products that would need replacing.[4] There is some debate over his role in this controversial business practice.[5]
A gaming experience that only requires you to use your head
Microsoft’s Kinect and Sony’s PlayStation Move started a movement that’s steadily been picking up momentum over the last year or so, bringing motion sensing into gaming. This has added a new dimension to gameplay, making it far more immersive than ever before, and opening up pathways for gaming ideas we had never even dreamed of before. In a similar vein, students from the University of Texas have put together a fantastic video-game simulation setup. It makes use of an arced gaming screen and a head-tracking camera which has a pico-projector, fitted with a rotating motor, attached to its rear end. That may be a bit technical, but it’ll all make perfect sense once you’re done reading this and have watched the video.
The function of the head-tracking camera is self-explanatory: it fixes onto the gamer’s head, thereby telling where he’s looking. The pico-projector attached to its rear end moves with the tracker and projects the simulated image in the gamer’s line of sight, giving him the illusion that he’s seeing things in the first person and is truly in control of his surroundings using just his head.
There were two games on display in the demonstration, the first of which is in the video below: a flight simulator, in which the player pilots the plane by moving around and bobbing his head appropriately to rise, descend or turn. The second was a military-based first-person-shooter simulator, in which the player really feels like a soldier, with his viewpoint shifting as it would if he moved his head in real life. Really impressive stuff, even more so because it doesn’t make use of Kinect or Move or the like.
Ladies and gentlemen, prepare to throw those gamepads away. This could well be the future of videogame technology!
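As a rough sketch of the head-coupled projection loop described above (not the students' actual code), the snippet below converts a tracked head position into a gaze yaw and steps a motorised projector toward that yaw a few degrees per frame; the game camera would be rendered from the same angle. The tracking values and motor limit are invented.

```python
import math

# Sketch of the head-coupled projection loop: estimate the player's gaze yaw from
# a tracked head offset, step the motorised pico-projector toward that yaw, and
# render the game camera from the same angle so the image stays in the line of sight.
def head_yaw(nose_x, nose_z):
    """Gaze yaw (degrees) from an assumed tracked head offset in metres."""
    return math.degrees(math.atan2(nose_x, nose_z))

def step_projector(target_deg, current_deg, max_step_deg=3.0):
    """Move the projector motor toward the gaze yaw, limited per frame."""
    error = target_deg - current_deg
    return current_deg + max(-max_step_deg, min(max_step_deg, error))

motor = 0.0
for frame, nose_x in enumerate([0.0, 0.2, 0.5, 0.9, 1.2]):   # player turns head
    yaw = head_yaw(nose_x, 1.0)
    motor = step_projector(yaw, motor)
    print(f"frame {frame}: gaze {yaw:5.1f} deg, projector at {motor:5.1f} deg")
```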
1. Set up background and camera
2. Calibrate your camera with one click
3. Start scanning by sweeping the laser line over the object
4. Gaze at the 3D window and export your result to .OBJ
5. Optional: Automatically stitch several scans/meshes with DAVID-Shapefusion and export .OBJ, .STL, or .PLY
What is Moject?
Moject stands for “Mobile Motion Projection”, a patented technology and system developed by Dave&Adie that enables devices to interact with projected content.
When can I use Moject?
Dave&Adie are making the technology available through licensing to interested parties like device manufacturers and software companies so they can Moject ‘enable’ their products. If your company would like to know how you can integrate Moject interactivity into your products then get in touch and we’ll arrange a Discovery meeting.
If you are a hardware hacker or modest gadget tinkerer you might want to try our free guide to building a simple Moject (for non-commercial use). Check back on our blog or follow us on twitter to be the first to hear when our guide is published.
What devices can be Moject enabled?
Almost any device can benefit from integrating Moject technology but primarily products such as,
Johnny Chung Lee - Projects - Thesis
The fundamental concept of my thesis is to: 1) Embed optical sensors into the projection surface. 2) Project a series of Gray-coded binary patterns. 3) Decode the location of the sensors for use in a projected application. This video demonstrates this idea in the form of a target screen fitting application. It goes on to demonstrate how this approach can be used in multi-projector applications such as stitching (creating a large display using tiled projection) or layering (multiple versions of content on the same area for view dependent displays). Additionally, it can be used to automatically register the orientation of 3D surfaces for augmenting the appearance of physical objects.
This technique is also useful for performing automatic touch calibration of interactive whiteboards or touch-tables.
Lee, J., Dietz, P., Aminzade, D., and Hudson, S. "Automatic Projector Calibration using Embedded Light Sensors", Proceedings of the ACM Symposium on User Interface Software and Technology, October 2004. [pdf][mov][ppt]
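As an illustration of the Gray-coded patterns in step 2 (a sketch, not Lee's implementation), the snippet below shows how a sensor's sequence of bright/dark readings across ten vertical-stripe patterns decodes back to its projector column; the row is recovered the same way with horizontal stripes. The column count and sensor location are illustrative.

```python
# Each projected pattern encodes one bit of every projector column's Gray code:
# a sensor at column c sees "bright" for pattern k iff bit k of gray(c) is 1.
# Decoding the recorded sequence recovers c.

NUM_PATTERNS = 10                  # 10 patterns cover 2**10 = 1024 projector columns

def gray(n):
    """Binary-reflected Gray code of n."""
    return n ^ (n >> 1)

def sensor_readings(column):
    """What a light sensor at this projector column would see, one bit per pattern."""
    g = gray(column)
    return [(g >> k) & 1 for k in reversed(range(NUM_PATTERNS))]

def decode(readings):
    """Recover the projector column from the recorded bright/dark sequence."""
    g = 0
    for bit in readings:           # reassemble the Gray code, MSB first
        g = (g << 1) | bit
    n, shift = g, 1                # convert Gray code back to binary
    while (g >> shift) > 0:
        n ^= g >> shift
        shift += 1
    return n

col = 613                          # hypothetical sensor location
print(decode(sensor_readings(col)))   # -> 613
```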
Beagle Board finds its new purpose in DIY wearable computer | Blog | ZiggyTek
Martin Magnusson, a researcher and entrepreneur, has created his own wearable computer by using a Beagle Board, a pair of Myvu Crystal video glasses, a Bluetooth keyboard and an iPhone. A DIY wearable computer is like something from a sci-fi movie! This small but terrific gadget runs Angstrom Linux, and it relies on a Bluetooth keyboard for input and a tethered iPhone for its Internet connectivity.
Wouldn’t it be great if they would release this gadget on the market?
BeagleBoard Gives New Power to Open Source Gadgets | Gadget Lab | Wired.com
Open source hardware hobbyists now have a chipset to play with that’s comparable to the powerful processors found in smartphones such as the Nexus One or HTC Incredible.
Texas Instruments has released a new version of its low-power, single-board computer called BeagleBoard-xM. It’s based on the same 1-GHz ARM Cortex A8 processor that drives the most sophisticated smartphones today. That gives it far more processing power than the leading open-source microcontroller platform, Arduino, which many hobbyists currently use to create robots, sensors, toys and other DIY devices.
The BeagleBoard-xM has multimedia features similar to the processor seen in the Palm Pre and Motorola Droid, and includes on-board ethernet, five USB 2.0 ports and 512 MB of memory.
“It’s a fully loaded, open platform that allows users to run multiple applications and embed them in devices,” says Jason Kridner, ARM software architecture manager and BeagleBoard community manager. “We wanted to offer something that’s cheap, ups the performance level and has sufficient memory.”
The first BeagleBoard debuted in 2008, targeting hardware hobbyists who wanted a powerful chipset to build home-brewed gadgets. But, so far, it has been eclipsed by the simpler open source microcontroller Arduino. Arduino has become a big hit among DIYers powering an eclectic variety of projects including electronic textiles, a fire-breathing dragon and many robots.
BeagleBoard isn’t as popular, even though it packs in more technical firepower. Some hobbyists say that could change as open source hardware hackers get more ambitious and move beyond what a simple microcontroller can do.
The 3-inch–square BeagleBoard-xM runs a full Linux operating system with desktop managers and office applications. It also includes a 2-D and 3-D graphics accelerator, a port to add a computer monitor and an S-video port for TV.
BeagleBoard will let hobbyists and open source hardware enthusiasts go where the Arduino won’t, says Justin Huynh, an open source hardware hacker.
“A lot of people complain that Arduino is not powerful enough and if you want something that’s more technical and intensive it is just not good enough,” he says. “So BeagleBoard can be a very interesting alternative.”
And at $180, the BeagleBoard-xM is inexpensive enough to be a technical toy for DIYers, says Huynh.
“What we have seen happen in the Arduino community is now happening with the BeagleBoard,” he says.
Here are four cool ideas that use the BeagleBoard:
There are at least two ways to create a large display: Buy a giant TV screen from Best Buy, or MacGyver a solution using multiple PC monitors.
The BeagleBoard Videowall tries the latter. It has six 19-inch LCD monitors networked together over USB to run high definition full-screen video.
“I enjoy the challenge of making the most out of limited resources, and the BeagleBoard is a perfect platform for doing just that,” says Måns Rullgård, an embedded software consultant based in England. “It has the power to do really cool things if you get it right, while remaining small both in physical size and power consumption.”
Rullgård and his project partners wanted to create “something spectacular” using the BeagleBoard and FFmpeg, open source multimedia libraries and programs.
The resulting Videowall project uses six BeagleBoards, where each board plays a special file containing only the corresponding segment of the video. The files were created ahead of time on a PC. To synchronize the playback across the BeagleBoards, they are interconnected with a USB-based network.
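The per-board files were prepared ahead of time on a PC; a plausible way to produce them with FFmpeg's crop filter is sketched below, though the 3x2 grid, source resolution and filenames are assumptions rather than the project's real setup.

```python
import subprocess

# Sketch of the pre-processing step: cut a source video into one crop per monitor
# so each BeagleBoard only has to play its own segment. Grid, filenames and
# resolution are assumed, not taken from the project.
SOURCE = "wall_source_1080p.mp4"
COLS, ROWS = 3, 2                                  # six tiles for six monitors
SRC_W, SRC_H = 1920, 1080
TILE_W, TILE_H = SRC_W // COLS, SRC_H // ROWS

for row in range(ROWS):
    for col in range(COLS):
        out = f"tile_{row}_{col}.mp4"
        crop = f"crop={TILE_W}:{TILE_H}:{col * TILE_W}:{row * TILE_H}"
        subprocess.run(["ffmpeg", "-y", "-i", SOURCE,
                        "-vf", crop, "-an", out], check=True)
```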
The video wall made its public debut in February in Brussels and it will be shown this week at the LinuxTag conference in Berlin.
Photo: Måns Rullgård