
Research Ideas

Here is a collection of ideas for which I seek collaborators from anywhere around the world, in particular excellent students in computer-science-related programs who want to delve into software development and experimentation. A call to lecturers and researchers: Let’s set up transnational student projects. If somebody from industry gets stranded on this page, I’m happy to hear which of these could actually be sold. Please contact me if you are interested.

I expect all projects listed on this page to lead to publications at major conferences or even in major journals, if done appropriately. I’ll be happy to help with preparing a submission to a conference and will provide and/or organize funds to pay a substantial percentage if not all of the travel expenses if the submission is accepted. For my track record of bringing graduate students and also a significant number of undergraduate students to scientific conferences, see the publications with me as the last author. In addition, if you are a student at another university, I will try to convince your university to count your work as a final thesis or to grant other credit toward your degree.

If you use one of these ideas without involving me, you will be taken into custody by Karma police. I myself tried my very best to give due credit to my sources of inspiration. If I missed that or if I overlooked that somebody already published similar work, I’ll be glad (well, or not so glad) to learn about that.

Please check back regularly. I still have tons of ideas in stock that wait to be added here. This page was updated last on Feb 27, 2012.

°// Sustainable Development
{&) Audio and Music Computing
(.) Visual Computing: Computer Graphics and Image Processing
!!, User Interfaces feat. Tangible Computing
%*| Mad Scientist

°// Facebook for Trees
Create a Facebook lookalike in which every tree hugger like us can create a page for any tree that is large enough, say, with a trunk that is more than one foot in diameter. Trees are identified through their GPS location and through images. The species, age and gender of the tree can be entered. One can report on the current state of the tree, on birds and other animals inhabiting it; and one can, of course, “like” and “friend” the tree. Registering and updating the data may be assignments for high school students. This Web site also helps to keep track of excessive uprooting commissioned by the local authorities. Somebody already created a Web site 1 to keep track of fruit trees that are public property. Prospective Venue: SAINT, conferences on sustainability or on geographic information systems

{&) !!, Saving the Lives of MP3 Addicts
Crossing the street while listening to music or to a podcast is quite a dangerous thing to do 1 2. Can one enhance an MP3 player by a sort of early-warning system for the listener? This could perhaps be based on ultrasound distance measurements, on data from a panoramic video camera or on an (extremely sensitive?) detector for the mobile phones of car drivers. A special feature would be a detector for bicycles approaching the user on the sidewalk from behind at the usual inappropriate speed. Prospective Venue: CHI, MobileCHI

°// (.) !!, Virtually Transparent Fridge Door
Samsung proposes to build the doors of fridges as transparent displays. Can we create a low-tech solution instead, using a camera that takes a picture every time the fridge is being closed? Prospective Venue: CHI, UIST

(.) %*| A Swarm of Scribblebots
Equip a number of ASURO-class robots with a mechanism to draw chalk lines. Let them write crazy equations on a blackboard that is mounted horizontally, as a table. A camera mounted above could track the robots; they could receive control signals through coded infrared light. There could be a second class of robots that do not write but have an eraser instead. The content to be written could stem from Web sites selected by the audience. Prospective Venue: SIGGRAPH Art

(.) {&) Boosted Visual Rhythm
In a given video file detect motion that fits the rhythm of the audio track, such as somebody speaking, violinists bowing, somebody directing an orchestra. Highlight, boost or exaggerate such motion. Both the detection and the transformation could be handled through motion fields. Prospective Venue: SIGGRAPH, Eurographics

!!, A Filing Assistant
Can software offer the user an educated guess about where to put a new file that he or she has just created or received? Learn from the existing structure of documents on the disk and from their content. Learn from the clusters of files that the user may have created on the desktop. Prospective Venue: IUI, UIST

(.) !!, Pseudo-haptic Perspective
Accelerating or decelerating the mouse pointer on the screen in relation to the actual motion of the hand enables an inexpensive simulation of forces acting on the mouse 1. Apply this principle to mouse-based interaction with a 3D scene: The mouse pointer becomes progressively slower when it controls an object that moves toward the horizon. If an object that is being moved with the mouse collides with another object, the mouse pointer gets stuck accordingly. Prospective Venue: CHI, UIST
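
A minimal sketch of the core mapping, assuming a pinhole-style gain that falls off with object distance; the function names and constants here are made up for illustration, not an existing API:

```python
# Sketch: depth-dependent pointer gain for pseudo-haptic perspective.
# All names and constants are illustrative assumptions.

def pointer_gain(depth, near=1.0, min_gain=0.05):
    """Scale hand motion to pointer motion: objects farther away
    (larger depth) make the pointer move more slowly."""
    if depth <= 0:
        raise ValueError("depth must be positive")
    return max(min_gain, near / depth)

def move_pointer(pos, hand_delta, depth):
    """Apply the perspective gain to a raw hand movement."""
    g = pointer_gain(depth)
    return (pos[0] + g * hand_delta[0], pos[1] + g * hand_delta[1])
```

Collision "stickiness" would then amount to temporarily forcing the gain toward zero while the controlled object presses against an obstacle.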

(.) !!, Painting Effect Parameters
Extend image processing software such as GIMP or Paint.NET by effect layers whose mask does not simply control the balance between original and processed image but controls parameters of the effect. For instance, the radius of a blur filter could be small where the mask is black and could be large where the mask is white. Or, the brightness of the mask could control by which angle the hue of the color of the image is shifted. Prospective Venue: SIGGRAPH, Eurographics
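
The per-pixel parameter mapping could look roughly like this brute-force sketch; the linear mapping from mask value to blur radius is an assumption, and a real implementation would need a much faster algorithm:

```python
import numpy as np

# Sketch: an "effect layer" whose mask controls the blur radius per pixel
# (mask 0.0 -> radius 0, mask 1.0 -> max_radius). Brute force, for clarity.

def variable_blur(image, mask, max_radius=4):
    h, w = image.shape
    out = np.empty_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            r = int(round(mask[y, x] * max_radius))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()
    return out
```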

(.) !!, GUIs Turned into Sketches
Develop software that hooks into the operating system to turn every standard GUI into a hand sketch such as this. For extra credit, don’t simply replace the static graphics of the widgets, but create animated controls. For instance, the lines may wiggle slowly or the background paper may get a coffee stain from time to time. Prospective Venue: SIGGRAPH, Eurographics

{&) Lecture Recording Cleaner
Audio recordings of talks such as lectures contain coughs, the lecturer’s “um”s, and interferences from a mobile phone’s incoming call signal. Design an audio filter that removes all of these. Prospective Venue: AES Convention, DAFX, conferences on signal processing

°// {&) PC-based Acoustic Anemometer
Use one loudspeaker and two microphones mounted some meters away in perpendicular directions. Continuously or periodically emit a signal from the loudspeaker and measure the delay to both microphones. From this, compute the direction and the speed of the wind. One could use ultrasound to not disturb people in the vicinity. The system should be built using a cheap audio interface that can handle a sampling rate of 192 kHz. The tricky but important part is robustness against environmental noise and wind noise in the microphones themselves. Some spread-spectrum technique such as an MLS sequence may be of help. Prospective Venue: AES Convention, conferences on signal processing
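
The basic time-of-flight arithmetic, ignoring all the hard signal-processing parts and assuming a constant nominal speed of sound:

```python
import math

C = 343.0  # nominal speed of sound in m/s (assumed constant here)

def wind_component(distance, delay, c=C):
    """Along-axis wind component from one loudspeaker-to-microphone delay:
    delay = distance / (c + v)  =>  v = distance / delay - c."""
    return distance / delay - c

def wind_vector(dx, tx, dy, ty, c=C):
    """Two perpendicular loudspeaker-microphone axes give the
    horizontal wind vector, its speed, and its direction in degrees."""
    vx = wind_component(dx, tx, c)
    vy = wind_component(dy, ty, c)
    return vx, vy, math.hypot(vx, vy), math.degrees(math.atan2(vy, vx))
```

The real work lies in estimating the two delays robustly from noisy recordings, which is where the spread-spectrum correlation comes in.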

(.) !!, Handwriting Beautification for Tablets
Develop an add-on for Tablet PCs to automatically correct lines of writing that slope down, to even out the arcs of all n and m, etc. Hopefully, this works without a deep analysis of the written words and can be based on simple features such as the running minimum of the y value to detect the baseline of the writing. Prospective Venue: SIGGRAPH, Eurographics, SBIM
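
The running-minimum baseline could be sketched like this; the window size and the assumption that y grows upward are illustrative:

```python
# Sketch: estimate the baseline of a line of handwriting from pen samples
# by a running minimum of the y coordinate. (On most tablets y grows
# downward, so one would use a running maximum there instead.)

def running_min(ys, window=5):
    """Running minimum of y over a centered window; a crude baseline."""
    n = len(ys)
    half = window // 2
    return [min(ys[max(0, i - half):min(n, i + half + 1)]) for i in range(n)]

def deslope(points, window=5):
    """Subtract the estimated baseline so the writing runs horizontally."""
    ys = [y for _, y in points]
    base = running_min(ys, window)
    return [(x, y - b) for (x, y), b in zip(points, base)]
```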

(.) !!, Augmented Bubble Chamber
Add image processing and a data projector to a bubble chamber to mark the traces and add information about the type and the energy of each particle that produced a trace. Prospective Venue: SIGGRAPH Emerging Technologies, conferences on AR/MR, conferences on learning technology

!!, Smart Autocompletion
Although most text editors support autocompletion, they mostly present pretty dumb choices for completing a given word fragment. In particular for program source code, much smarter choices could be generated that take the history and the context into account. In a brute-force version, look at all texts typed lately and figure out what the user has (and other users have?) written in similar contexts. Prospective Venue: ACM CHI, IUI

°// !!, Recommender Systems for Sustainable Living
Status: Pervasive 2010 workshop paper, further work in progress. Based on sensor data (solar power prognosis from the weather report, content of the fridge, etc.) one could make suggestions of what to cook, when to switch on the washing machine, etc. This may be based on former usage patterns ingested by machine learning algorithms. It may also leverage the suggestions and recipes from a social web. Prospective Venue: ACM CHI, ACM Pervasive

°// !!, User Interfaces for Demand-Side Power Management
The upcoming smart electrical grid allows the utilities to control home appliances to adapt the current power consumption to the current power production, which tends to fluctuate increasingly through the use of wind and solar power. How to balance the control between the user and the grid? How to manage the level of detail presented? (“Switch on the washing machine if the price of a kWh is below 20 cents for the next two hours with a probability of 95% or more.”) The system must plan ahead but still be flexible. The integration of locally generated power, for instance through micro combined heat and power, and the use of electric cars as buffers make things considerably more difficult. To what extent could we get rid of regular human-computer interfaces such as screens, keyboards, and mice? Could there be a “Do What I Mean” button or even no button at all, thanks to all the sensors? The classic theme of flexibility vs. complexity in the user interface is a big issue here. Prospective Venue: ACM IUI, conferences on man-machine or human-computer interfaces

°// !!, Optimizing Schedules to Minimize the Ecological Impact
Optimize schedules (meetings, classes, travel, etc.) with respect to energy use and personal preferences. E.g., arrange a meeting so that the sum of the energy spent on travel, air conditioning, etc. is minimal. Updates are needed if the estimates change. This could be a Web-based system. How would one enter personal preferences without getting lost in details? There is already an optimizer for travel alone. Prospective Venue: conferences on human-computer interfaces, conferences on Web-based systems

%*| The Un-Camera
Here is a quote from the movie One Hour Photo: “No one ever takes a photograph of something they want to forget.” Well, could we build an uncamera to untake photos of stuff we want to forget? Taryn Simon has some (regular) photographs of that sort. Wayne Martin Belger builds strange cameras. Prospective Venue: SIGGRAPH Art

{&) (.) !!, °// Waterless Virtual Sink
Create a waterless sink or an almost waterless sink to wash one’s hands: Block the user’s direct view of his or her hands, use a camera and a microphone, and process their output to create the appropriate noise of pouring water. Background: One can make hand rubbing sound wet by playing back the sound of the rubbing hands with boosted high frequencies (Jousmaki: parchment-skin illusion). Maybe a jet of cold air will help, too. Prospective Venue: SIGGRAPH Emerging Technologies, conferences on VR/AR/MR

{&) !!, Volume Control for Mobile Phones
Status: too late. Control the volume of a mobile phone through its distance from the user’s ear. When the user moves the phone away, turn the volume down. But leave it at that level as he or she brings the phone closer to the ear again. Turn up the volume when the user presses the phone onto his or her ear. Could this even be done with signals from the touchscreen alone? Prospective Venue: ACM CHI, ACM MobileHCI

{&) !!, Sounds for Painting Software
Create the right sounds for drawing tools such as Corel Painter: Chalk, pencil, and a brush loaded with oil paint should produce natural sounds, depending on the speed of the tool and the pressure being applied. For bonus points, also include cartoonish sounds. Prospective Venue: ACM CHI, SIGGRAPH Emerging Technologies

{&) !!, Future Interfaces for Computer Audio
Try to conceive of future user interfaces for audio production and audio consumption. What will an audio studio on board a starship look like? Will there still be such a thing as a studio? Will there be something that we recognize as “audio”? Or will all media eventually have melted together? Will there be “producers” and “users”? Looking back: What would a steam-powered audio studio have looked like? Prospective Venue: AES Convention, SIGGRAPH Emerging Technologies, ACM DUX

{&) (.) !!, VR Mixing Console with One Knob
A virtual reality system could be used to simulate a huge mixing desk and possibly also some additional outboard gear: The user wears a VR helmet; there is one single fader controller, one single rotary-knob controller, and one single switch; these are moved around at high speed by some mechanism so that they sit below the user’s hand whenever he or she tries to grab a virtual control displayed in the VR helmet. A remotely related system called CirculaFloor has been created to allow the user to walk around in a seemingly infinite space. Prospective Venue: SIGGRAPH Emerging Technologies, conferences on VR, AES Convention

°// Alternatives to Wireless Power
Manufacturers of mobile devices and of TV sets have started selling products that are powered wirelessly. This incurs a substantial loss of energy, for the sole purpose of reducing the clutter of cables. Can we achieve the same by less environmentally costly means? Prospective Venue: conferences on sustainability, conferences on product design

°// Cultural History of the Washing Machine
Whereas German washing machines may spend two hours on one load, U.S. appliances are done after half an hour. Why is that so? How did it come about? This may teach us something about the development of sustainable lifestyles. Prospective Venue: conferences on sustainability

{&) Test Robot for Audio Plug-Ins
Thanks to their standardized digital interfaces, audio software plug-ins such as VST instruments are amenable to automated testing, which may help in the development process. An automated plug-in testing solution could for instance execute a series of tasks that resemble unit tests. Checking the results sample by sample may not make sense, however, as tweaks in the DSP algorithms may slightly change the results. It looks more promising to employ feature extraction techniques to check if the generated output “sounds right.” Another mode of testing may be to set all controls in a monkey-like random fashion and feed generated or recorded audio signals or MIDI commands into the plug-in to be tested. Prospective Venue: DAFX, AES Convention, conferences on signal processing
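
A feature-based "sounds right" check might start from something like the following sketch; the chosen features (RMS level and spectral centroid) and the tolerances are illustrative assumptions:

```python
import numpy as np

# Sketch of a feature-based regression test for audio plug-ins: instead
# of comparing output sample by sample, compare perceptually motivated
# features within a tolerance. Features and thresholds are assumptions.

def features(signal, samplerate):
    rms = np.sqrt(np.mean(signal ** 2))
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / samplerate)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
    return rms, centroid

def sounds_alike(ref, out, samplerate, rms_tol=0.05, centroid_tol=50.0):
    r_rms, r_cen = features(ref, samplerate)
    o_rms, o_cen = features(out, samplerate)
    return abs(r_rms - o_rms) <= rms_tol and abs(r_cen - o_cen) <= centroid_tol
```

Real test signals would need windowing to keep spectral leakage from skewing the centroid; steady tones with an integer number of cycles per block sidestep that.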

°// !!, Planning Air and Heat Flow
Find a simple means to capture the 3D shape of a room. Allow the user to place different types of radiators in the room. Simulate the flow of heat and air to optimize the room for comfort and for energy efficiency. Also include thermal radiation, which is vital for specific kinds of radiators and for direct sunlight entering through a window. (However, do not include air conditioning. This work addresses the European way of living.) Actual data gained from thermometers etc. could be employed in the simulation. The computation may be accelerated by GPGPU techniques. Provide a great user interface for all of this functionality. Prospective Venue: Conferences on simulation, on user interfaces, or on computer-aided design

{&) (.) Accompanying Sound with Images
Status: Oops, this has already been done by somebody else. Find an appropriate sequence of images for a given audio track. For instance, control a virtual camera that pans over a large image (possibly a panorama shot) or create a slideshow from (Web-based?) collections of images or move through virtual worlds (in a dancing fashion?). What the user sees should fit the music he or she listens to, both in terms of rhythm and in terms of mood. (Courtesy of Frank Bildstein.) Prospective Venue: ACM Multimedia, ACM MIR

{&) !!, Drumming with a Camera
A camera, possibly even a webcam, could be a cheap but versatile interface to control drum sounds with. However, standard cameras pose two big obstacles: First, the latency, that is: the delay with which an image frame arrives in the application software, is too large for musical applications. Second, the frame rate is too low for musical applications. Can these obstacles be overcome by some kind of intelligence? Can one guess in advance at which point of time the user will strike some surface? Or could a camera be combined with some other input device such as a microphone? Prospective Venue: NIME, ICMC

{&) Phoneme-Based Equalizer
Create a novel kind of equalizer that boosts or attenuates different vowels etc., not different frequencies. As the algorithmic recognition of phonemes is not overly reliable, one could try and find coarse-grained groups of phonemes to control jointly. This “equalizer” could be used to correct pronunciation problems or to introduce or to suppress an accent. Prospective Venue: DAFX, AES Convention, conferences on signal processing

°// (.) VR to Assess the Aesthetic Impact of Wind Turbines
Google Earth could be used to show the visual impact 1 2 of wind turbines to a larger audience, for instance, to the neighbors of a prospective wind power plant site. This could include shadows and reflections and may be extended into a planning tool where (some?) users can tweak the positioning or the paint of the turbines. One could also simulate the sound emissions and send them as an accompanying audio stream. Prospective Venue: conferences on renewable energies, on geoinformatics, or on Web-based interaction

{&) Tailoring Auditory Warnings to the Acoustic Background
Many computer users listen to music or even watch movies while working (or, um, “working”). One could do a real-time analysis of the audio signal that is currently played back and then adapt warning and notification sounds to it. These sounds could blend well with the current audio signal, for instance because their key, meter, level, timbre, and surround position fit. Or they could be irritatingly different in one or more of these respects to stand out in the mix. Prospective Venue: ICAD, Audio Mostly, SMC, conferences on HCI

{&) Sound Around the Screen
Mount the satellite speakers of a surround sound system around the edges of a large computer screen, so that not only left and right can be distinguished, but also bottom and top. Use this to seemingly play notification sounds and the sounds of small videos from their position on the screen. One could also indicate acoustically the position of the mouse cursor or make typing errors hum or put the voices of people participating in a video conference where they belong. Similarly for multi-camera video surveillance. In a multitrack editor (maybe MIDI, maybe audio) everything, no matter where it is on the timeline, could be played back at the same time to produce an overview; each sound would be mapped to the position that corresponds to its position on the screen, that is: The start of track 1 plays from the upper left speaker. Once upon a time, the illustration software FreeHand was able to produce click and ping sounds when the cursor hit a grid point or two curves joined perfectly. This tended to be distracting, but maybe only so because the audio signal was detached from the screen. A first question to ask would be how well a listener can make out the intended position of the sound. This has been tested for decades with stereo and surround sound systems for standard listening situations such as home theaters, see for instance 1. Prospective Venue: ICAD, Audio Mostly, SMC, conferences on HCI
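
The mapping from screen position to speaker gains could start from something as simple as power-normalized bilinear panning; the corner ordering and the normalization are assumptions for illustration:

```python
import math

# Sketch: pan a sound between four speakers mounted at the screen
# corners. Positions are normalized to 0..1; the bilinear weights are
# power-normalized so perceived loudness stays roughly constant.

def corner_gains(x, y):
    """Gains for (top-left, top-right, bottom-left, bottom-right)."""
    w = [(1 - x) * (1 - y), x * (1 - y), (1 - x) * y, x * y]
    norm = math.sqrt(sum(g * g for g in w))
    return [g / norm for g in w]
```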

°// The Total Cost of Reading
Which is best in terms of energy efficiency and/or environmental damage: an old-fashioned printing press, printing from the personal computer, reading on-screen with a PC, notebook computer, or e-book reader? Study different cases such as reading a novel, a newspaper, a user manual, or looking something up in a dictionary. Specify value ranges and uncertainties for the results. Some observers even argue that it could be cost-effective for a newspaper to provide e-book readers to its subscribers. There is already a study, the content of which is only available to a strictly limited audience, however. Prospective Venue: conferences on human-computer interfaces

°// The Total Cost of Watching Movies
Which is best in terms of energy efficiency and/or environmental damage: disk-based media (DVDs) that you buy, disk-based media that you borrow, downloading a movie, or streaming? Specify value ranges and uncertainties for the results. Prospective Venue: conferences on human-computer interfaces

{&) Noise-Free Phone Jack
In particular at conferences, people connect phone plugs to their notebook computers without first turning down the volume of the P.A. system. This tends to cause devastating noise. Can we build jacks that remain compatible with those disastrous TRS mini connectors but connect the ground line first, through some mechanical or electronic provision? Neutrik sells a noise-free guitar plug; but that’s a plug, not a jack, and it has the bigger diameter. Prospective Venue: AES Convention

°// !!, Booking with After-Sale Flexibility
Status: ITSim 2010 Paper, further work in progress. How can one design Web-based booking systems for airlines and bus lines that allow users to specify flexibility concerning the date and time of departure and return even after they hit the “Buy” button? This would allow operators to dispatch their planes or buses in a more efficient manner. Prospective Venue: conferences on information management or on man-machine interfaces

{&) Shrinking Musical Structure by one Level
Music is essentially self-similar: Patterns such as AABA appear at the level of single notes as well as at the level of the movements of a symphony. Can one “compress” music by shifting these levels down one step: A phrase becomes a note, etc.? (Which note? Which timbre?) All of this could be done on a MIDI-only basis. Prospective Venue: ICMC, ICAD, Audio Mostly

{&) Mapping the Inaudible to the Audible
Use special microphones to sense infrasound and ultrasound, possibly in stereo. Map their output in real time to the audible frequency range and play the result back to the user to augment his or her perception. Hide the system in something that looks like an MP3 player or possibly can even simultaneously work as one because the audio signals being input are in different frequency ranges from those being output. Maybe, the headphones can be used in reverse: as microphones. Infrasound sensing and sonification has already been addressed in seismology. Prospective Venue: ICAD, conferences on human-computer interaction, Audio Mostly
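
The frequency mapping itself could be a single-sideband shift, sketched here block by block via the FFT; there is no overlap-add, windowing, or anti-aliasing here, and an even block length is assumed:

```python
import numpy as np

# Sketch: shift ultrasonic content down into the audible range via
# single-sideband modulation. Build the analytic signal in the frequency
# domain, multiply by a complex exponential, take the real part.

def freq_shift(x, shift_hz, samplerate):
    n = len(x)  # assumed even
    spectrum = np.fft.fft(x)
    spectrum[n // 2 + 1:] = 0.0   # zero the negative frequencies ...
    spectrum[1:n // 2] *= 2.0     # ... to obtain the analytic signal
    analytic = np.fft.ifft(spectrum)
    t = np.arange(n) / samplerate
    return np.real(analytic * np.exp(2j * np.pi * shift_hz * t))
```

A negative shift_hz moves, say, a 30 kHz band down to 5 kHz; infrasound would be shifted upward instead.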

(.) A Megapixel Brush: Painting and Writing with a Picture
Can one paint fluently with an ultra-large patterned brush, that is: with a complete image? This could be helpful to write with a picture: The streaks of the highlights trace out the letters written. Doing this in real time with a large image may require replacing a series of stamps of the brush by a convolution of the tool’s path with the brush picture, which can be accelerated using the Fourier transform. A completely different option, which, however, goes under the same heading, is to write with the highlights by placing a camera onto a computer-controlled pan-and-tilt head, open the shutter, move the head along the drawing path (possibly in real time, in sync with the user’s input), and then close the shutter. For both the software and the hardware variant it would be nice to control the brush size (that is, the scale of the applied image or the zoom of the camera’s lens) in real time through pen pressure. Prospective Venue: SIGGRAPH Poster, Eurographics Short Paper
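
The convolution shortcut can be sketched in a few lines; zero padding turns the FFT's circular convolution into the required linear one:

```python
import numpy as np

# Sketch: replace thousands of brush "stamps" along the stroke by one
# convolution of the stroke mask with the brush image, computed via the
# FFT (linear convolution through zero padding).

def fft_convolve2d(stroke, brush):
    h = stroke.shape[0] + brush.shape[0] - 1
    w = stroke.shape[1] + brush.shape[1] - 1
    S = np.fft.rfft2(stroke, (h, w))
    B = np.fft.rfft2(brush, (h, w))
    return np.fft.irfft2(S * B, (h, w))
```

Note that this only works as-is for a brush of constant size; pressure-controlled scaling would break the convolution model and need a more elaborate scheme.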

!!, Aiming a Thermometer for Affective Computing
Status: work in progress. Use a Web camera to steer an infrared thermometer mounted on a computer-controlled pan-and-tilt head toward the face of the user. This can hopefully be used to gain information on his or her emotional state, as has already been achieved with a (much more expensive) thermographic camera. It may be helpful to use array sensors. For comparison, the ground truth can be recorded through a thermographic camera or through thermistors taped onto the user’s face. Prospective Venue: conferences on human-computer interfaces

{&) DVI to Wave Field
Use the DVI output of a graphics card to output hundreds of audio channels in parallel. This requires setting up a DVI receiver chip, DAC chips and digital amplifiers in a compact form factor. Application: Wave field synthesis on the GPU. A preliminary study on VGA-to-Audio has already been done. Prospective Venue: AES Convention

(.) !!, Stage Lighting Equipment for Computer Applications
Computer-controlled moving lights have become very inexpensive. How can they be applied in computer graphics? For instance, to provide realistic brightness, one may direct a yellow spot light on the location of the sun in an image projected by a data projector. Or one may use a bright computer-driven spot on the floor as a virtual ball in an exertion interface. These applications require image processing to bring the moving light to the right place at the right time. Prospective Venue: SIGGRAPH, Eurographics, conferences on games, UIST, CHI

°// !!, Effective Fuel Consumption Control
Displaying the current fuel consumption of a car engine helps to improve the driver’s performance in terms of energy efficiency. What is the most effective way to display such data? Should we show miles per gallon, gallons per mile, or gallons per hour? (This is more complex than it seems at first sight.) And should the display show numbers, a bar, and/or icons? Prospective Venue: conferences on man-machine interfaces
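
The underlying arithmetic, sometimes called the MPG illusion, is easy to demonstrate: fuel saved over a fixed distance is linear in gallons per mile, not in miles per gallon.

```python
# Sketch: why "miles per gallon" misleads when comparing upgrades.

def gallons_per_100_miles(mpg):
    return 100.0 / mpg

def fuel_saved(mpg_old, mpg_new, miles=100.0):
    """Fuel saved over a fixed distance when upgrading between two
    MPG ratings."""
    return miles / mpg_old - miles / mpg_new
```

Upgrading from 10 to 20 MPG saves 5 gallons per 100 miles, while the seemingly bigger jump from 25 to 50 MPG saves only 2.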

!!, Statistical Physics of Games
There are attempts to employ the methods of statistical physics to create models in economics and the social sciences, as these methods can deal with huge numbers of interacting items. Can one carry this over to game bots? Or to the populations of massive multi-player games? What are analogies to physical quantities such as energy, enthalpy, pressure? Can one formulate a Lagrangian or a Hamiltonian? Are there equilibria and phase transitions? For simulation, one could employ frameworks such as Breve or Netlogo. Prospective Venue: game conferences

(.) Virtually Ancient Maps
Render maps from OpenStreetMap as though they were drawn by Mercator himself or by some pirates cruising the Caribbean. Don’t forget to invent Latin names for the geographic items. Handle world maps as well as country maps and city maps. Prospective Venue: SIGGRAPH, Eurographics

(.) !!, Geobrowsing Non-Geographic Data
Landscape-style visualizations are common in text retrieval and music retrieval. What about mapping these and other data such as social relationships onto earth (Techno goes to, say, South America, R&B goes to India) or the actual subway network of a city, as exemplified by 1 and 2. This would require placing data items accordingly. For instance, a SOM may be modified by introducing borders across which its update process has only limited influence. In the case of a network, one could try to identify the nodes of the given data with the nodes of the network with as little stress as possible, possibly through some hierarchical algorithm. To provide a familiar user interface, one could add layers or placemarks to Google Maps or Google Earth. Prospective Venue: visualization conferences, UIST, CHI

{&) !!, UIs for Audio: Why photorealism?
Graphical user interfaces for audio software tend to be photorealistic replicas of actual, more or less antique hardware. This is particularly prominent with audio effect plug-ins and software synthesizers. Why is this the case? In which respects is it beneficial, in which respects is it detrimental? Hedonism may play a vital role here. Prospective Venue: CHI, AES Convention

!!, UIs for Aliens
What would user interfaces for extraterrestrial beings have to look like? Imagine, for instance, fish-like creatures swimming hundreds of miles below the frozen ocean surface of Jupiter’s moon Europa. This task sounds outlandish, but we may learn something about interfaces for humans in the process. Actually, there is a paper on (pretty standard) user interfaces from science fiction. Prospective Venue: alt.chi

(.) !!, Debug Information Visualization
Create a DebuggerVisualizer for Microsoft Visual Studio that automatically applies appropriate information visualization methods, marks outliers, shows the history, etc. Prospective Venue: visualization conferences

°// (.) Poor Man’s Thermographic Camera
Status: SIGGRAPH 2009 Poster, but still needs to be turned into a robust add-on for mobile phones. Professional thermographic cameras used to detect thermal bridges are pretty costly. Mount a cheap infrared thermometer on two servo motors to scan over a horizontal and vertical range. Apply adaptive sampling to quickly build an image. Apply deconvolution methods to boost the spatial resolution. Create panoramic images and time-lapse movies from the temperature data. Alternative: Attach a camera (mobile phone?) to the thermometer, track the image, and mark the places at which measurements have been taken in the image. Can one learn from the camera’s regular RGB image how to extend the sparse data onto larger domains (an example of sensor fusion)? Do a floodfill to the nearest edges in the RGB image? Prospective Venue: SIGGRAPH, Eurographics

(.) Visualization of Animation Tracks
Most 3D animation software features a timeline editor window that displays the temporal evolution of positions and orientations through function curves. However, the three curves that depict the angles (heading/pitch/bank) are difficult to interpret. Can one create one unified display that makes it easy to read off the orientation at every instant of time? Would coloring help, possibly in Lab color space? Extending this idea: Can one show other data in the timeline editor such as masses, moments of inertia, or torques? Would that help in creating physically plausible animations? Prospective Venue: SCA, SIGGRAPH, Eurographics

(.) Making 3D Objects Look like Toys
Develop a plug-in for standard 3D software that turns (almost) any object into a cartoonish version with soft corners and bright colors that looks as though it has been manufactured by Fisher-Price. Somewhat related: 1. Prospective Venue: SIGGRAPH, Eurographics

!!, Location from Powerline Hum
Can we determine a receiver’s location inside a room from the parasitic electrical field created by the powerline emissions, possibly making use of the noise always present in it? There is scant work in forensics about similar issues. Prospective Venue: PERVASIVE, UBICOMP

°// !!, Energy-Saving UIs
Can we help the user to save energy through better (graphical? tangible? multimodal? intelligent?) interfaces for ovens, showers, etc.? Some work of that sort has already been done: 1 2 3 Prospective Venue: CHI, conferences on sustainability

(.) Taming the Space of All Icons
The mathematical set of all images of, say, 64x64 pixels is too vast to explore naively. Learn from examples which forms typically appear in a given icon design style. Use this to automatically iconify sketches or photographs. Prospective Venue: SIGGRAPH, Eurographics

{&) Multitrack Audio Coloring
Status: AES Paper (128th Convention, 2010), further work in progress. Multitrack audio editors display waveforms, which are not very expressive. Can we use colors or patterns to indicate in a better way what’s on the tracks? For instance, repeated parts should have similar colors. This is an application of existing techniques from Music Information Retrieval; many of these are available as ready-made function libraries. Automatically identify and group the parts of a multitrack recording that belong together (repeated stuff; things that mostly occur together; etc.). This could be used as such to simplify editing or could even become an intelligent “enlarge selection” function. Technically, one could base this on an error-tolerant codebook approach where increasingly longer codes are built from audio features characterizing, say, a second of audio. A lens effect can be employed to show both detail and context. Here is some related work: 1 2 3 4 5 6 Prospective Venue: AES Convention, DAFX, ICMC, Audio Mostly

{&) Multitrack Sorting
Virtually every multitrack audio editor presents a list of tracks. Can we sort them by date, level, panorama position, instrument type, possibly even by importance? Or group them by timbral and/or rhythmic similarity? Prospective Venue: AES Convention, DAFX, ICMC, Audio Mostly

%*| Xelpel: Grooming the Long Tail
Search engines always present typical results. How can we show more of the “long tail”? Simply by reversing the order of the results? Hence the name: “Googleplex” turns into “Xelpelgoog.” Prospective Venue: conferences on the Web, on data mining, or on information retrieval

!!, Joystick Gear Box
Status: Oops, this has already been done by some other people 1 2. Transform a force-feedback joystick into a gear stick that stays within the typical |-|-|-like profile. Of course, it must get stuck from time to time. Prospective Venue: UIST poster, CHI, game conferences

{&) (.) !!, Video2Sound
Create a freely configurable system that maps motion present in videos to sound, generalizing the ideas of 1 and 2. Somewhat related: 3 Prospective Venue: NIME

(.) Automated Tourist
Create a program that automatically “walks” through virtual landscapes and cities (including games and Google Earth) and takes photographs of the nicest (Define that!) sights. Prospective Venue: VR conferences, SIGGRAPH, Eurographics

(.) Debugging 1000 Bots
If there are 1000 NPCs in a game, how can we get an idea about which states they are in? Can visualization help? This is a specific issue in software visualization. Prospective Venue: software visualization conferences, game conferences

{&) Block-wise Feature Extraction
Is it quicker to extract features from audio files (or video files, while we are at it) if one does not progress file by file but in a low-level fashion: block (sector) by block on the disk? This would require, of course, storing some unfinished data for all touched but not completely scanned files in RAM. Prospective Venue: ISMIR, other MIR conferences
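A sketch of the bookkeeping this requires, with made-up names: blocks arrive in whatever order the disk delivers them, a per-file accumulator lives in RAM, and a file’s feature (here just the mean absolute amplitude) is finalized once its last block has been seen.

```python
from collections import defaultdict

class BlockwiseExtractor:
    """Consume blocks in arbitrary (disk) order; keep per-file partial sums
    in RAM; emit a file's feature once all of its blocks have arrived."""
    def __init__(self, blocks_per_file):
        self.remaining = dict(blocks_per_file)        # file id -> blocks missing
        self.partial = defaultdict(lambda: [0.0, 0])  # file id -> [sum, count]
        self.finished = {}                            # file id -> mean |amplitude|

    def feed(self, file_id, block):
        acc = self.partial[file_id]
        acc[0] += sum(abs(x) for x in block)
        acc[1] += len(block)
        self.remaining[file_id] -= 1
        if self.remaining[file_id] == 0:
            s, n = self.partial.pop(file_id)
            self.finished[file_id] = s / n
```

Order-dependent features (say, onset sequences) would need per-file reordering buffers instead of plain sums, which is where the RAM trade-off mentioned above starts to bite.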

(.) Gold and Silver on the Screen
Status: WSCG 2010 Communication Paper. Simulate gold and silver prints (matte or glossy) on a display by using a Web cam to detect the position of the user’s head and to capture the environment and the illumination. Prospective Venue: SIGGRAPH, Eurographics

(.) 3D Compression Taken Literally
Create a processor that takes in an arbitrary 3D model including textures and outputs a model that looks like the result of putting the input model into a junk press. Some parts have to be recognizable, some parts come off or stand out (cf. the eye in Monsters Inc.). Art reference: Compression by César. Prospective Venue: SIGGRAPH, Eurographics

{&) Sound Morphing
Sample many different ways of playing a single instrument or of creating a specific foley sound. Construct a simple parameter space from that, for instance by looking for a low-dimensional embedding of the samples’ acoustic features such as MFCCs or MPEG-7. There is a catch: Simply blending different sounds by changing their relative volume often does not work. In a game, one may for instance create a continuum of different engine sounds from samples taken from an engine running at different speeds (with tools such as FMOD Designer). These sounds cannot be blended correctly unless they are synchronized. Prospective Venue: DAFX, ICMC, AES Convention
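As a sketch of the embedding step, assuming each sample has already been reduced to a feature vector: a pure-Python power iteration finds the first principal component, and projecting onto it yields a one-dimensional control parameter. A real system would feed in MFCC vectors and might prefer a nonlinear embedding; the names here are mine.

```python
import math

def first_principal_component(vectors, iters=200):
    """Power iteration on the (implicitly formed) covariance matrix."""
    d = len(vectors[0])
    mean = [sum(v[j] for v in vectors) / len(vectors) for j in range(d)]
    centered = [[v[j] - mean[j] for j in range(d)] for v in vectors]
    w = [1.0] * d
    for _ in range(iters):
        proj = [sum(r[j] * w[j] for j in range(d)) for r in centered]
        w = [sum(p * r[j] for p, r in zip(proj, centered)) for j in range(d)]
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        w = [x / norm for x in w]
    return mean, w

def embed_1d(vectors):
    """Project each feature vector onto the first principal axis."""
    mean, w = first_principal_component(vectors)
    return [sum((v[j] - mean[j]) * w[j] for j in range(len(w))) for v in vectors]
```

If the samples really form a one-parameter family (engine speed, bow pressure), their projections should come out in the physically meaningful order, which is exactly what a morphing slider needs.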

{&) !!, Musical Keyboard with Bendable Keys
The keys of a standard musical keyboard can be bent left and right by a tiny amount. Detect this motion and use it to control the pitch, for instance. Can we also allow the user to push and pull the keys a little, by dragging them away or toward him or her? Maybe these types of controls are better to handle than the regular aftertouch, the use of which didn’t exactly take off (Why?). Prospective Venue: NIME, ICMC

(.) A Hair Brush, Literally Speaking
Can one lengthen or comb hair on photographs? The first step would be to recognize hair. The existing hair can serve as an exemplar for an intelligent cloning process or as a basis for the extraction of statistical features. One could include a fully automatic hair growth potion, so to speak. Extending this idea: Recognize and alter eyebrows and eyelashes as well; recognize and change the skin including the lips (effects of cosmetic make-up, intoxication, etc.); build a pluggable architecture so that the cosmetics industry can offer virtual replicas of their products; build a general clone tool that bends the source image around curved paths (like the edge brush) and/or makes use of an orientation-sensing pen. There is a lot of work concerning 3D rather than 2D hair, a more practical example being 1. Prospective Venue: SIGGRAPH, Eurographics

!!, Walking with Inverted Joysticks
Mount two joysticks upside down and use them to input gait motion, for instance for computer animation or inside a game. This is related to using two pens for the same purpose. The trigger button (now at the pinky finger) can be used to indicate whether a foot touches the ground or not. You press the button of the left or the right joystick to lift the corresponding foot. Twisting the handles induces sideway motion. Force feedback could be applied to indicate collisions or to make it harder to climb stairs. Prospective Venue: SCA, TUI, SIGGRAPH, Eurographics, CHI

%*| Cheating Tournament
All game developers seem to fight cheating but do not succeed. Turning this on its head, create a game in which cheating is a vital and indispensable element. Prospective Venue: game conferences, CHI

!!, A Mouse for Tilted Desks
Lecterns tend to have inclined surfaces and so do some ergonomically designed desks. It’s pretty hard to use a standard mouse on such a surface as the mouse will not stay where one leaves it. Build a computer mouse (a wireless one, of course) that locks its position when nobody touches it. This involves a touch sensor and some mechanics. For instance, the mouse could be a classical mechanical one (the kind with a rubber ball inside). If the rubber ball is mounted at the mouse’s barycenter, it would suffice to create a mechanism that fixes the rubber ball when the user does not touch the mouse. This mechanism can be set and reset, but otherwise must keep its state without consuming battery power. Prospective Venue: UIST

{&) !!, Waveforms per Note
Status: Audio Mostly 2009 Poster; further work as 127th AES Convention Paper. Standard editors for MIDI tracks in music production software display every note as a rectangle. If a track is to be played back with a software synthesizer, we can do better: The editor can fire up the software synthesizer (note that this can happen way faster than in real time), collect the data, and display tiny waveforms instead of the rectangles. When the sound’s parameters are edited, the waveforms should update immediately. All in all, this looks somewhat like the display of the dissected audio tracks in Celemony Melodyne. Prospective Venue: DAFX, ICMC, AES Convention
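The drawing step itself is cheap once the synthesizer output has been collected; a common reduction, sketched here, keeps one (min, max) pair per display column of the note’s rectangle:

```python
def tiny_waveform(samples, width):
    """Reduce a rendered note to `width` (min, max) pairs, one per pixel column."""
    n = len(samples)
    columns = []
    for x in range(width):
        lo = x * n // width
        hi = max(lo + 1, (x + 1) * n // width)  # at least one sample per column
        segment = samples[lo:hi]
        columns.append((min(segment), max(segment)))
    return columns
```

Re-rendering after a parameter edit then only means re-running the synthesizer for the affected notes and recomputing these few pairs, which is what makes the immediate update plausible.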

!!,  Multi-Touch Fingerprint Recognition
Status: too late. If we could recognize individual fingerprints on a multi-touch display, we could tell apart the actions of different users and could assign different actions to different fingers. Funnily enough, frustrated total internal reflection is a standard method both for multi-touch displays and for the recording of fingerprints. The demands on spatial extent and spatial resolution, however, are highly different in these two applications. I can imagine using an additional steerable high-resolution camera that quickly pans to all locations of fingertips that have been spotted by a standard multi-touch system. A second issue is that the glass panel of the display has to be matte to allow projecting images onto it. This blurs the fingerprints way too much. One option may be to create a glass panel that is matte in the visible domain but clear for infrared light. (Details upon request.) Another option may be to use an autostereoscopic 3D display several inches behind a completely clear glass pane. The 3D monitor creates the illusion of the image plane lying in the glass pane; the empty space between the pane and the display allows capturing fingerprints through cameras mounted sideways. Here are three further options: 1. Separate the input panel from the display panel; mount the former horizontally and the latter vertically; this also relieves the user’s neck and wrists; for better orientation show a shadow of the user’s hands on the display panel. 2. Add a milky liquid film on top of the panel; at positions where the user places his or her fingertips, the milky fluid is squeezed away so that the fingertips become clearly visible from below. 3. Use a panel whose state can be quickly changed from transparent to matte, such as the one employed in Microsoft SecondLight. Prospective Venue: UIST, TableTop, CHI

(.) !!, Camera as Pointer
The Nintendo Wii Remote employs an infrared-sensitive camera to track up to four beacons. Can we instead splatter the walls with (random?) patterns of special ink that reflects IR light but is invisible to humans and use a standard Web cam with an added IR filter to determine where the camera is and in which direction it points? Prospective Venue: UIST, CHI, conferences on image processing

(.)  SVG2GPU
Status: This seems to have become obsolete with the advent of Direct2D in Microsoft Windows 7. Create a framework for 2D graphics to be rendered on the graphics processor. To what extent does this have to differ from standard 2D frameworks such as Java Swing? It may be reasonable to create a retained-mode API similar to Microsoft WPF, possibly fed by SVG data structures. The output needs to be optimized for GPUs. For instance, the number of drawing calls (batches) needs to be minimal. Geometry shaders could be employed to approximate curves by tiny straight lines. One could also look into the techniques proposed to draw and fill spline curves through pixel shaders. The stencil buffer, too, may help with filling. Full-screen antialiasing may come free of charge, but what about sub-pixel effects like Microsoft ClearType? Prospective Venue: SIGGRAPH, Eurographics
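The curve-approximation step can be prototyped on the CPU before moving it into a geometry shader. A standard approach, sketched below, is adaptive de Casteljau subdivision of a cubic Bézier until its control polygon is nearly flat; the `tol` threshold and the recursion cap are arbitrary choices of mine.

```python
import math

def flatten_cubic(p0, p1, p2, p3, tol=0.25, depth=0, out=None):
    """Adaptively split a cubic Bezier at t = 0.5 (de Casteljau) until the
    inner control points lie within `tol` of the chord; returns a polyline."""
    if out is None:
        out = [p0]

    def dist(p, a, b):  # distance of point p from the line through a and b
        dx, dy = b[0] - a[0], b[1] - a[1]
        if dx == 0 and dy == 0:
            return math.hypot(p[0] - a[0], p[1] - a[1])
        return abs(dx * (a[1] - p[1]) - dy * (a[0] - p[0])) / math.hypot(dx, dy)

    if depth > 16 or max(dist(p1, p0, p3), dist(p2, p0, p3)) <= tol:
        out.append(p3)
        return out

    mid = lambda a, b: ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    p01, p12, p23 = mid(p0, p1), mid(p1, p2), mid(p2, p3)
    p012, p123 = mid(p01, p12), mid(p12, p23)
    p0123 = mid(p012, p123)
    flatten_cubic(p0, p01, p012, p0123, tol, depth + 1, out)
    flatten_cubic(p0123, p123, p23, p3, tol, depth + 1, out)
    return out
```

In a shader the recursion would become a fixed-depth loop or a screen-space tessellation estimate, but the flatness test carries over unchanged.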

!!,  Drag&Drop Save
In most software applications on most operating systems you can open a file through a drag&drop operation. Saving a file, however, rarely works like that. Note, however, that typical Web browsers already allow saving a URL through drag&drop. The lack of this functionality in general is particularly vexing when the folder to which one wants to save is already open on the desktop but one has to drill down to it again in the “Save As” dialog box. Can we extend Microsoft Windows or another operating system to allow a drag&drop save operation from all programs? (Note that there is drag & drop of files and folders to (!) the file dialog in Mac OS X, which is better than nothing, but feels like the wrong direction of motion.) The most straightforward way for doing so may be to enhance the system-wide “Save As” dialog box. It may offer an icon that can be dragged out into folders or it may offer that file icons can be dropped onto it, compare the behavior of command.exe in Microsoft Windows. Using standard Windows functionality, the former solution (dragging the icon out) can create a file in Windows Explorer. The “Save As” dialog box would not know the actual data to be written, though. It could, however, create a placeholder file, search the opened windows of Windows Explorer for that file, delete it and hand this file’s name and path to the caller. Some relevant literature: 1 2 3. Prospective Venue: CHI, UIST

(.) !!, Legibility and Display Resolution
How does the legibility of text depend on the resolution (dpi value) of the display? This can be tested with a data projector whose zoom factor is changed and/or which is placed at different distances from the projection wall, with appropriate correction of the brightness. Prospective Venue: APGV, CHI, SIGGRAPH, Eurographics

{&) !!, Squeezed-Frequency Waveform Display
Waveform displays have the issue that they have to cover a 1:1000 range of frequencies. When you zoom in to see the fastest oscillations, the slowest oscillations extend all the way across the screen. One could try and compress the frequency range for the graphical display. Extend PD to show such waveforms along the connection lines. Leverage the GPU for that. A minor first attempt has already been made. A related idea: Depict audio signals by more or less deformed spheres. The spectrum of the deformations (area and height of dents and spikes) can be derived from the short-time audio spectrum. Prospective Venue: AES Convention, CHI, Audio Mostly, ICMI

{&) !!, Alias-free Waveform Drawing
Hand-drawn audio waveforms tend to be severely aliased. How can one correct the drawing in real time to eliminate this effect? Prospective Venue: DAFX, AES Convention, ICMC
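One way to attack this, sketched below with a deliberately naive DFT: treat the drawn buffer as one cycle, keep only the harmonics that stay below Nyquist at the intended playback pitch, and resynthesize. A real implementation would use an FFT and update incrementally while the user drags; `max_harmonic` is assumed to be below half the buffer length.

```python
import math

def bandlimit_cycle(drawn, max_harmonic):
    """Rebuild a hand-drawn single-cycle waveform from its first
    `max_harmonic` harmonics, discarding content that would alias."""
    n = len(drawn)
    out = [0.0] * n
    for k in range(max_harmonic + 1):
        # Naive DFT bin k of a real signal: cosine and sine coefficients.
        c = sum(drawn[t] * math.cos(2 * math.pi * k * t / n) for t in range(n)) / n
        s = sum(drawn[t] * math.sin(2 * math.pi * k * t / n) for t in range(n)) / n
        scale = 1.0 if k == 0 else 2.0
        for t in range(n):
            out[t] += scale * (c * math.cos(2 * math.pi * k * t / n)
                               + s * math.sin(2 * math.pi * k * t / n))
    return out
```

Displaying `out` instead of the raw drawing also gives the user immediate feedback on what the band-limited oscillator will actually sound like.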

(.)  Automatic Video Dimensioning
Automatically find pairs of parallel straight lines in images and videos (time coherence!) and attach dimensioning markings and labels to them. This could be a funny effect. Prospective Venue: SIGGRAPH Posters

(.)  Video Paint by Numbers
Automatically process a video to turn it into a paint-by-numbers style. Ensure massive time coherence, that is, a smooth development of the numbered regions. For reference, see Andy Warhol’s piece “Do it yourself (Landscape)”. Prospective Venue: SIGGRAPH Posters

!!,  Gaming Mice: Fact or Fiction?
Are “gaming” computer mice actually faster and/or more precise than garden-variety ones? Conduct user tests. This involves writing not-so-complicated test software that for instance displays random click targets on the screen and records the responses (time, precision) of, say, 20 to 40 users. The test could be administered via the Web, but it may make more sense to let every participant use every mouse. A main task is the statistical evaluation, compare Fitts’ Law. Prospective Venue: HCI conferences, popular game magazines
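For the statistical evaluation, the standard model is Fitts’ law in its Shannon form, MT = a + b·log2(D/W + 1); per mouse one fits the intercept a and slope b over many (distance, target width, time) trials and compares the fits. A minimal least-squares version:

```python
import math

def fit_fitts(trials):
    """Fit movement time MT = a + b * ID with ID = log2(D/W + 1)
    over (distance, target width, time) trials; returns (a, b)."""
    xs = [math.log2(d / w + 1) for d, w, _ in trials]
    ys = [t for _, _, t in trials]
    n = len(trials)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b
```

A lower b (less extra time per bit of difficulty) for the gaming mouse would support the marketing claim; error rates and endpoint scatter need separate analysis.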

(.)  Image Idioms
Conduct a number of (semi-?)automated design studies by collecting images off the Web: Do children’s pictures tend to show the sun in the top left? Does the chimney tend to be on the right-hand side of a building? Etc. This study could be based on Google image search and could be powered by Human Computation. Prospective Venue: Computational Aesthetics, SIGGRAPH Posters

(.) !!, Graphical Tag Cloud Service
The tag clouds featured by Web-2.0-style sites consist of regular text, albeit at different font sizes. Could one instead generate a map in which related (co-occurring) topics appear close and/or overlap? Such visualization is done regularly in text retrieval and music retrieval based on methods such as SOM or MDS. The software to create such maps should be a Web-based service that is handed the raw data (topics and their frequencies) by the calling Web site. And make sure that different spellings of the same tag are not counted as different topics. Here is a different approach: Wordle forms space-filling arrangements of tags. There is already some research on the best arrangement of standard tag clouds. This service presents a spherical tag cloud. Prospective Venue: visualization conferences, Hypertext, Web Intelligence

{&)  Movie Trailer Voice Makeover
Status: initial B.Sc. thesis. Create an “In a world” plug-in that makes a voice sound like the one used in movie trailers, see 1 and 2. This involves matching equalization, boosting the unvoiced part of the sound, and lowering the pitch. This work should be informed by an acoustic analysis of typical trailers. On top of that, one could analyze the duration of pauses and the distribution of dynamics to give feedback on whether the current material to which the effect is applied conforms to the speaking style of the late Don LaFontaine. There is a lot of general work on voice transformation, such as 1 2. Prospective Venue: AES Convention, DAFX

(.)  Virtually Centered Webcam
Status: Oops, this has already been done by somebody else. Video conferences are always disturbing because the conference partners do not look straight into the camera. This is a placement problem: You cannot put a camera in front of the display screen (well, not without a mirror system). Can we improve that by attaching webcams to the left and the right side of the display screen and use both images to compute the image that a camera mounted in the middle (or at the position of the mouse cursor) would deliver? Prospective Venue: ICME, image processing conferences

{&)  Beats-per-Minute Analyzer
Write a BPM analysis plug-in for audio software in the spirit of the Rhythmic Equalizer. Could we do more? Can we display the contributions of each track of a multitrack mix? What about displaying the temporal evolution of the BPM spectrum? Prospective Venue: AES Convention
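A baseline such a plug-in could start from, sketched here with made-up names: compute an onset-strength envelope, then pick the beat period that maximizes its autocorrelation within a plausible tempo range.

```python
def estimate_bpm(envelope, fps, lo=60, hi=180):
    """Pick the beat period (in envelope frames) that maximizes the
    autocorrelation of an onset-strength envelope; returns an integer BPM.
    `fps` is the envelope's frame rate in frames per second."""
    n = len(envelope)
    mean = sum(envelope) / n
    e = [x - mean for x in envelope]
    lag_min = max(1, round(fps * 60.0 / hi))
    lag_max = min(n - 1, round(fps * 60.0 / lo))
    best_lag, best_score = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        score = sum(e[t] * e[t - lag] for t in range(lag, n)) / (n - lag)
        if score > best_score:
            best_lag, best_score = lag, score
    return round(fps * 60.0 / best_lag)
```

Note the octave problem: with a wide tempo range, metrically related candidates (60 vs. 120 BPM) score almost identically, so a serious analyzer needs extra heuristics; running this per track of a multitrack mix, or over a sliding window for the temporal evolution, is then straightforward.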

{&)  Physics2MIDI
Use a simulation done with a physics engine to fire MIDI events. Integrate that as a plug-in into standard MIDI software. The direct route from a physics engine to sound synthesis has already been taken. Prospective Venue: ICMC, NIME

(.)  Automated Location Scout
Sketch an arrangement of streets, buildings, mountains, shores, etc. resembling either a map or a picture. Automatically search for landscapes that fit to this sketch. This could be helpful to find the location where a photograph was taken; it could also help with location scouting for movies. The system would need to build on standard geographic information systems such as Google Earth. There is already research on image-only approaches. Prospective Venue: GIS conferences, conferences on information retrieval, IUI

(.)  Doomsday VFX
Write a plug-in for 3D software that produces some of the effects needed for Volker Engel’s next blockbuster movie: tornadoes, collapsing buildings, etc. For bonus points, create such effects in real time inside a game engine, leveraging the GPU. TSNStudios already addresses some of that functionality, as does PixelLux. There is academic work, too: 1. Prospective Venue: SIGGRAPH, Eurographics, Workshop on Natural Phenomena

!!,  Piles on the Desktop
Automatically learn from the user’s arrangement of the mess of items on his or her computer desktop any patterns that may be present there (topics, dates, file types, etc.). Create corresponding smart folders / search folders. Related work: 1. Prospective Venue: IUI, UIST, HCI conference

(.)  Texture Compression
Study the statistics of all 4x4 pixel subimages of standard textures. How can such a block be encoded with 16 bit or 32 bit without much error on average? Can one improve over standard methods such as DXT5? Would it help to try genetic optimization of the mapping from, say, 16 bits to 4x4 colors? Or would a classic vector-quantization-type dictionary outperform a computation? Prospective Venue: Graphics Hardware, SIGGRAPH, Eurographics, EvoStar
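To make the baseline concrete, here is a toy grayscale variant of the DXT idea: two endpoint values plus one 2-bit index per pixel of a 4x4 block, the indices pointing into four levels interpolated between the endpoints. Real DXT operates on RGB with 5:6:5 endpoints; this simplification is mine.

```python
def encode_block(block):
    """Toy DXT-style coder for a 4x4 grayscale block (16 values):
    two endpoints plus a 2-bit index per pixel into 4 interpolated levels."""
    lo, hi = min(block), max(block)
    palette = [lo + (hi - lo) * i / 3 for i in range(4)]
    indices = [min(range(4), key=lambda i: abs(v - palette[i])) for v in block]
    return lo, hi, indices

def decode_block(code):
    lo, hi, indices = code
    palette = [lo + (hi - lo) * i / 3 for i in range(4)]
    return [palette[i] for i in indices]
```

The research question above then becomes: over the statistics of real texture blocks, can a learned or genetically optimized mapping beat this min/max-endpoint heuristic at equal bit budget?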

(.)  Machine-Learned Surfaces
At Eurographics 2005, there was a paper in which the authors use a support-vector machine to learn an implicit surface from a set of data points. What about other methods of machine learning (neural nets with or without hidden layers, genetic algorithms, etc.)? Prospective Venue: SIGGRAPH, Eurographics, Symposium on Computational Geometry

(.)  A Framework for the Stylization of 3D Objects
The stylization of 2D graphics has been addressed a lot, in particular concerning line-art renderings. But what about changing 3D models themselves? I can imagine styles such as the following:

I can imagine that effects such as the above can be created by combining a handful of operators. But which ones? Prototypes of these operators should be built, for instance in Maya. (Which operators of that sort are already available in Maya or other software? How can they be leveraged?) Prospective Venue: SIGGRAPH, Eurographics

!!,  Helpers for Acoustic Instruments
Status: ICMC 2009 Short Paper; still needs to be done for instruments other than a bass guitar. Casio manufactures educational music keyboards whose keys are illuminated when they have to be pressed. Can we do similar things, for instance, with a guitar (LEDs in the neck), with a cello (track the neck visually, project visual markings through a data projector or a laser), or with a saxophone (vibrating motors or electrotactile stuff in the keys)? I like the solution for the saxophone best because it would be eyes-free. Prospective Venue: NIME, ICMC

(.)  Vector Graphics Texture Synthesis
Texture synthesis for bitmaps has long been dealt with. But what about vector graphics? Can we automatically build an infinite non-repeating texture from a given pattern of cubic curves (filled or not)? This may be interesting for instance for arabesques and for the similar patterns of Art Nouveau. It may be nice to also synthesize plausible transitions from the pattern to a given boundary such as an image frame. To avoid problems with topology, the way to go may be an implicit representation of the curves. There is already work on how to stylize curves, how to distribute patterns, and how to apply ornaments. Prospective Venue: SIGGRAPH, Eurographics

(.)  GPU Roundoff for Effect
Can we utilize roundoff errors in a GPU shader computation for complex effects with little computational load? Prospective Venue: SIGGRAPH, Eurographics

{&)  Podcast Intelligibility Optimizer
Write a sound processor that optimizes podcasts for better intelligibility in noisy surroundings. This may involve boosting spectral changes and may involve measuring the outside noise. Some techniques may be borrowed from existing approaches to build hearing aids. Basic helpers such as automatic repeat after a burst of noise may be added. It may be possible to use the standard headphones as microphones during regular usage by some subtraction mechanism or by “listening” in frequency bands (intentionally?) left silent by the output signal. Related work: 1 2. Prospective Venue: AES Convention, ACM Multimedia

!!,  Emotion-Enhanced Camera
Now that we can buy digital cameras that will take photographs only when everybody smiles, let’s go one step further and record the photographer’s affective response. A prototype based on a video camera can easily measure the user’s heart rate, skin temperature, and galvanic skin response, and record these data as one of the sound tracks, possibly after some modulation. This is somewhat reminiscent of Life Blogging. What I would find interesting is to capture a very special biosignal that usually is not taken care of: the iris reflex (1, 2, 3). When we see something that we like, our pupils widen. A camera with a classic view finder could use a second camera directed at the operator’s eye, which is illuminated by infrared light. When the pupil widens, the shutter is released. This could also be turned into some art piece: The photograph that is taken is not the regular picture of the real scene but the picture of the iris. Or we leave out the iris size recognition and always take two photos: one of the real scene and one of the iris. We’d end up with 1000 pictures of eyes that see something they like. A similar idea has already been used to classify Web search results. Prospective Venue: IUI, UIST, ICMI, SIGGRAPH Art Exhibition

%*|  Disk Printer / Random Paths on Vinyl
Can one print the groove of a vinyl disk with a standard printer and then transfer this pattern to a blank disk? (Note that the opposite has already been pursued.) Maybe by some photographic process as is used for printed circuit boards? (Could we use a circuit board as disk?) What about grooves that do not follow the standard, tightly wound spiral path but have intersections? The groove may for instance be a handwritten word. How would the record player deal with intersecting grooves? Would it simply be subject to inertia and follow its original direction? Or would it switch to the “wrong” groove with some probability? Could we exploit this to let the record player autonomously create random music? There is work on doing the reverse: playing vinyl records by image processing. Prospective Venue: ICMC

{&)  Arcimboldo in Musica
Approximate a given audio recording from a given set of samples. This is not unlike concatenative synthesis such as GUIDAGE, but here it is employed for the effect. This may be “art” and it may be a weird sort of data compression. Maybe, one can borrow some ideas from similar software for 3D graphics. Extension: Do not use audio samples but control a bunch of different electrical tools, motors, whatsoever through the computer. To feed the matching process, conduct an upfront analysis of the sounds of these devices. Maybe, a machine could even talk if you modify its major acoustic resonance by some servos. The Talking Piano is a hand-made implementation of such ideas. Prospective Venue: ICMC, DAFX, Audio Mostly
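A greedy baseline for the matching step, with toy features standing in for a proper acoustic analysis: chop the target into grains and, for each grain, paste the corpus grain with the nearest feature vector. All names are made up; concatenative systems like GUIDAGE add continuity constraints across consecutive grains.

```python
import math

def grain_features(g):
    # Toy features: RMS level and zero-crossing rate.
    rms = math.sqrt(sum(x * x for x in g) / len(g))
    zcr = sum(1 for a, b in zip(g, g[1:]) if a * b < 0) / len(g)
    return (rms, zcr)

def approximate(target, corpus, grain):
    """Rebuild `target` from `corpus` grains by greedy feature matching."""
    grains = [corpus[i:i + grain] for i in range(0, len(corpus) - grain + 1, grain)]
    feats = [grain_features(g) for g in grains]
    out = []
    for i in range(0, len(target) - grain + 1, grain):
        f = grain_features(target[i:i + grain])
        j = min(range(len(grains)),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(f, feats[j])))
        out.extend(grains[j])
    return out
```

For the power-tool extension, the corpus grains would be replaced by measured recordings of each device at each control setting, and `out` by a schedule of control commands.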

{&)  Electrical Noise Organ
One could use five or so copies of the same electrical drill, hairdryer, etc. to create a polyphonic MIDI-based instrument. Each of the devices is controlled by an independent dimmer circuit (which requires devices with asynchronous motors). In a calibration phase done upfront, the computer learns which level of the control signal leads to which pitch. In performance mode, the computer maps up to five keys pressed on a MIDI keyboard to the five devices, similar to how the key presses would be mapped to five voices of a polyphonic synthesizer. Extension: Create several “organ stops” from diverse copies of electrical equipment. There should be nice labels sounding like “8’ Diapason.” Could we also leverage the velocity and after-touch sensing of the MIDI keyboard? There are some examples of such instruments performing prepared music 1 2 rather than being played live. Prospective Venue: NIME, ICMC, SIGGRAPH Art
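The performance-mode lookup can be sketched as inverting the calibration table by linear interpolation; the function names and the assumption that pitch rises monotonically with control level are mine.

```python
def midi_to_hz(note):
    """Standard equal-temperament conversion, A4 (MIDI 69) = 440 Hz."""
    return 440.0 * 2.0 ** ((note - 69) / 12)

def level_for_pitch(calibration, target_hz):
    """Invert a measured (control level -> pitch in Hz) table, assumed
    monotonic, by linear interpolation between calibration points."""
    pts = sorted(calibration)
    if target_hz <= pts[0][1]:
        return pts[0][0]
    for (l0, f0), (l1, f1) in zip(pts, pts[1:]):
        if f0 <= target_hz <= f1:
            return l0 + (l1 - l0) * (target_hz - f0) / (f1 - f0)
    return pts[-1][0]  # clamp above the measured range
```

On each note-on event, the voice allocator would call `level_for_pitch(cal, midi_to_hz(note))` and send the resulting level to the corresponding dimmer circuit.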

(.)  Universal 3D Plug-ins
Develop a C++ framework with different wrappers to write 3D plug-ins once and compile each for several host programs (Cinema, Maya, XSI, etc.) in parallel. Prospective Venue: SIGGRAPH, Eurographics

(.)  3D-Corrected Texture Effects
Texture painting software such as Maxon BodyPaint 3D incorporates a range of effect filters in the spirit of standard 2D image editing software. These filters operate in texel space, which does not make sense in terms of 3D geometry, as every texel may be mapped to a different size and even a different elongated shape on the object. Can one for instance write a blur plug-in that does not accept the blur strength in terms of pixels (that is, texels) but in actual inches on the 3D object? Another issue is to treat texture seams correctly: A blur operation has to spill over from one region (chart) of a texture to other regions that are connected through seams. Similar ideas apply to other bitmap filters. Prospective Venue: SIGGRAPH, Eurographics