Tuesday, December 13, 2011

Super Wendy to the Rescue!

        We did it! We made it! Our suit works! And we presented it without any hiccups! Wow, it is such a huge relief to be done. For me it almost feels a bit underwhelming that, after all the hours of work I put into the project, all it does is turn a few lights on and off and vibrate every now and then. But I know that ultimately the suit is much more powerful than that. Sophie and I frequently talked about how cool we feel being able to do all the things we have now learned to do. The final prototype process was a bit of a blur that included a lot of really late nights, but there were a number of notable moments that I would like to talk about.
        I'm so glad we came up with such clean code for the water level sensor because it was very easy to adapt it to check for sleep and exercise. Coding got a bit complicated every now and then, especially when thinking about how to tell the difference between walking and exercising. But once I came up with the idea of using thresholds, checking whether the y-coordinates of the accelerometer stay above or below typical walking y-coordinates for a sustained period of time, it was very easy to implement. Sleeping was also quite simple: I only had to use the z-coordinate to check whether the person was lying down versus sitting up. Since there may be times when the wearer is reclined but not sleeping, I also added the light level sensor to check whether it's dark in the room. If the wearer is both lying down and the lights are off, I assume the user is sleeping. I know this is not the most accurate way to measure sleep, especially because someone could lie in bed unable to fall asleep for a while, but it does ensure that the wearer at least gets eight hours of rest per day, and it would be fairly easy to add additional sensors to determine whether the user is actually asleep.
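In case it helps to see the shape of that logic, here is a rough sketch in plain C++ (so it can run off the board). The threshold numbers and the 30-sample streak length below are made-up placeholders, not our calibrated values:

```cpp
#include <cassert>

// Placeholder thresholds -- stand-ins, not our real calibrated readings.
const int WALK_Y_LOW  = 300;   // typical walking y-range from the accelerometer
const int WALK_Y_HIGH = 400;
const int LYING_Z_MAX = 200;   // z-reading below this means the wearer is horizontal
const int DARK_LEVEL  = 100;   // light reading below this means the room is dark
const int SUSTAINED_SAMPLES = 30;  // how long y must stay outside the walking band

// Exercise: count consecutive y-samples outside the walking band;
// any in-band sample resets the streak to zero.
int updateStreak(int streak, int y) {
  if (y < WALK_Y_LOW || y > WALK_Y_HIGH)
    return streak + 1;
  return 0;
}

bool isExercising(int streak) {
  return streak >= SUSTAINED_SAMPLES;
}

// Sleep: lying down (low z) AND lights off.
bool isSleeping(int zReading, int lightReading) {
  return zReading < LYING_Z_MAX && lightReading < DARK_LEVEL;
}
```

On the suit itself the same checks run against live accelerometer and light-sensor readings instead of hard-coded values.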
        The more difficult aspect of the project, not surprisingly, came with the hardware. I spent a lot of time bent over the breadboard putting the correct wires in the right locations and following schematics online for each sensor and actuator. While I struggled with getting the light sensor to read in any data, Sophie tried to hack the Wii Nunchuck for its accelerometer. After our feasibility prototype presentation, Orit had encouraged us to find a more affordable way of measuring exercise level and sleep than a heart-rate monitor. We decided to use an accelerometer instead, but for some reason, even though we thought we had asked for it, we discovered only four or five days before our presentation that there had been a miscommunication and the accelerometer hadn't been ordered from SparkFun. Consuelo assured us that it is possible to hack Wii remotes for their accelerometers and that she even had a spare Nunchuck for us to use. The Nunchuck proved relatively straightforward to hack but much more difficult to actually read data from in a way we could understand.
        While Sophie was struggling to understand the remote, we also began experiencing problems with our Arduino board. Whenever we tried to upload code, the computer wouldn't recognize the serial port, or we would be reading the output on the computer and all of a sudden it would freeze. Sophie and I both hit a point of exhaustion and frustration, so I told Sophie to head to bed and called up my boyfriend (the same friend who had helped explain breadboards and resistors to me). He actually drove all the way to Wellesley to look at our circuits and try to figure out why the Arduino had stopped working.
        As it turned out, the Arduino had reached the limit of how much power it could draw from the computer, so we needed to find an additional power source for it. Once Zac unplugged the power supply from the stereo system and plugged it into our Arduino, it started working again, no problem. Sophie and I would never have thought to do that. At the same time, Zac showed me that the light sensor actually was working; it just wasn't getting enough light to show any changes. I had been working in a fairly dimly lit corner of the room and it hadn't even occurred to me that the light sensor wasn't getting enough light. So basically, in less than fifteen minutes, Zac was able to fix two of our incredibly frustrating problems. This is a picture of what my breadboard looked like at this point. It ended up getting far more cluttered by the time it was actually finished.


      After already saving the day twice, Zac explained that we needed to figure out the x, y, and z coordinates on the Nunchuck in order to understand the data. Once that was done, it was extremely easy for me to program and complete the rest of the circuits. It was also around this point that I accidentally fried our buzzer. I was following the diagram SparkFun provided online for our buzzer, and it surprised me that the schematic didn't include a resistor, but I figured maybe the buzzer needed more current in order to work. I should have followed my instincts, because once I completed the circuit I heard a terrible crackling sound, saw a tiny bit of smoke rising from the buzzer, and knew it was over. Poor buzzer. I felt so terrible, especially because I was super excited to use it and was really curious what it sounded like. The next morning Sophie told me she still had hope for our buzzer, but I had seen the smoke and I knew it was the end. Unfortunately it was too late to order a new buzzer, so we decided to use a beeping sound from the computer to simulate one instead. It would have been extremely easy to add a real buzzer to our suit, though, if we had it.
        Orit had also told us not to worry about installing an Arduino RFID reader in our suit because we only have Phidget RFID readers, and we would have already demonstrated the feasibility of our suit with our other sensors and actuators. Instead we simply needed to include an RFID tag in the suit and show that the Phidget reader could track the suit's location on the computer. Sophie decided to work on the RFID reader and was able to get it to track not only whether people had entered or exited a dining hall but also how many people were inside. This shows it would be possible for our suit to track the location of its wearer and tell whether the wearer had been close to other people or was alone all the time. If the suit had its own long-range RFID reader and each building or room had a tag in the doorway, the suit could track whether the wearer was in the library or a social area like a dorm living room. If the suit realized the wearer hadn't been to a dining hall or a social location in an extended period of time, it could give the wearer feedback.
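The entry/exit counting can be sketched like this (plain C++ rather than the Phidget library, and the names here are my own illustrative placeholders, not Sophie's actual code): each read of a suit's tag at a doorway toggles that suit in or out, and the set of suits currently inside gives the occupancy.

```cpp
#include <cassert>
#include <set>
#include <string>

// A doorway reader toggles each suit's tag in or out of the room.
class RoomTracker {
  std::set<std::string> inside;  // tag IDs currently in the room
public:
  // Returns true if this read was an entry, false if it was an exit.
  bool tagRead(const std::string& tagId) {
    if (inside.count(tagId)) {
      inside.erase(tagId);   // second read of the same tag = leaving
      return false;
    }
    inside.insert(tagId);    // first read = entering
    return true;
  }
  int occupancy() const { return (int)inside.size(); }
};

RoomTracker diningHall;  // one tracker per tagged doorway
```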
        Once all the circuits were complete we were able to solder them and install them into the suit. We had to come into the engineering lab the morning before the presentation in order to do installation because we didn't want Kat to have to wear the suit all night. That was probably the most stressful part because we knew we had a time crunch and we couldn't have any mistakes take place during the installation process. Fortunately, we were able to get the suit completely on and working just in time for the presentation to start and ultimately I think it went fairly well.
        The more I think about this suit, the more excited I get about other things that could have been added to it. We could add so many more sensors to make it more accurate and so many other kinds of feedback. It also would have been really cool if we could have LEDs in different shapes to explain the meaning behind them. For example, instead of a yellow LED it would have been cool if the LED was the shape of Zzzz's and the off switch looked like a pause button of some sort. That would make it easier for the user to understand the suit without needing a great deal of prior instruction. At any rate, it feels great to finally have the project complete! I now have much more respect for anyone who makes anything electronic work.
This is a link to the video of our final high level prototype:

Two Steps Forward, One Step Back

        Some good news, some not so good news. We were able to find solutions to both the breadboard and water bottle issues! That's a huge relief. It turns out I actually did know what I was doing when I was using the breadboard. I was right about the entire circuit; the only problem was that I had used a resistor with a much larger resistance than necessary. My friend explained Ohm's law (I = V/R) to me and that I need to use the right resistance in order for the LED to get enough current to actually turn on. I will also need to use resistors whenever I use sensors with varying values in order to get an accurate reading. Also, I found out that if I connect the Ground pin to one of the rows at the bottom and the 5V pin to the other row, then the entire first row will be ground and the other will all be 5V. That's a huge relief because I was really getting worried about how we'd fit all these wires into a single pin on the Arduino. Once I changed the resistor on the circuit I had made, I was able to get the entire system to work! Now all we had to do was use the water level sensor instead of the force sensor.
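The I = V/R point works out like this: with a 5 V supply and a typical red LED (roughly a 2 V forward drop at around 20 mA; those are textbook values, not measurements of our actual part), the resistor has to drop the remaining 3 V, which gives about 150 ohms. A tiny sanity-check function:

```cpp
#include <cassert>

// Ohm's law rearranged: R = (Vsupply - Vled) / Itarget.
// The 2 V drop and 20 mA target used below are typical textbook LED
// values, not measurements of our part.
double ledResistorOhms(double supplyV, double ledForwardV, double targetA) {
  return (supplyV - ledForwardV) / targetA;
}
```

Using a resistor much bigger than this starves the LED of current, which is exactly why mine never lit up.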
        In the meantime, Sophie had figured out how we're going to get our water bottle working. It turns out Camelbak water bottles have breathable lids that allow air in while letting water out. This way, when we drink out of the container the amount of air in the pack won't change. We bought a Nalgene water pack that has a regular water bottle lid on it, switched it with a Camelbak lid, and connected it to a long tube to drink through. We also cut a hole in the side of the pack and inserted our water level sensor. The whole system seemed to be working when we used our test code! :)
        Next, Sophie and I walked through how we wanted the programming side of the system to work. She had a rather complicated suggestion that involved using different classes for different sensors but I was convinced we could come up with something quite simple and elegant that would hopefully work just as well. I definitely understand now why Takis always told us it's better to spend hours planning out code and only thirty minutes actually writing it because if we had just written our first idea it would have been a huge headache to debug the code and figure out what was wrong. Instead, Sophie and I brainstormed how the program should run, wrote and re-wrote pseudocode on the board, and walked through every line of the pseudocode until we felt confident it would work. Once I actually sat down to write the code it took less than thirty minutes to write and it worked the first time I compiled and uploaded it! I'm quite proud of myself for that one. 
        I came up with a relatively simple way of tracking how much water has been drunk. I decided we should keep track of the initial water level and then check every five minutes whether the water level has dropped. If it has, the amount it fell is added to a cupsdrunk integer and the new water level is saved for the next comparison. Every five minutes, when the water level is tested, the program checks whether the reading changes a significant amount over a five-second interval to determine whether the value is a reliable data point or whether the wearer is likely just moving around and causing the water level to rise and fall. Drinking eight cups of water a day is commonly recommended as healthy, and we want the wearer to have a grace period for drinking their water, so every six hours the program subtracts the liquid level sensor's equivalent of two cups of water from the cupsdrunk variable. If the cupsdrunk integer goes negative, that means the wearer has not drunk two cups of water in the last six hours, and the LED changes from blue to pink. If at any time the integer is positive again, the LED changes back from pink to blue.
        Something I really like about this implementation is that it can easily be transferred from the water bottle to measuring exercise and sleep. It also allows the wearer to drink an excess amount of water and not be reminded to drink water until much later. The way I think of it, the program almost works like a bank. If you are in water-intake debt you will need to drink extra in order to make up for it. If, however, you drink a lot of water all at once, you will have a surplus and will not have to worry about drinking water again until your savings run out. Below is the code for the water intake sensor and LEDs:

    #define LSR 0              // liquid level sensor on analog pin 0
    #define blueLED 11
    #define redLED 2
    #define SliSwi 0           // slide switch pin
    #include <Time.h>
    #define TIME_MSG_LEN  11   // time sync to PC is HEADER followed by Unix time_t as ten ASCII digits
    #define TIME_HEADER  'T'   // Header tag for serial time sync message
    #define TIME_REQUEST  7    // ASCII bell character requests a time sync message

    int lsrReading;
    int lastWaterLevel;
    int cupsdrunk;
    int fivmin;      // minutes since the water level was last checked
    int sixhrs;      // minutes since two cups' worth was last debited
    int lastMinute;  // used to count each minute exactly once

    void setup(void) {
      Serial.begin(9600);
      pinMode(LSR, INPUT);
      pinMode(blueLED, OUTPUT);
      pinMode(redLED, OUTPUT);
      pinMode(SliSwi, INPUT);
      lastWaterLevel = analogRead(LSR);
      cupsdrunk = 0;
      fivmin = 0;
      sixhrs = 0;
      lastMinute = minute();
    }

    void loop(void) {
      lsrReading = analogRead(LSR);
      if (minute() != lastMinute) {  // count each minute once (second()==0 alone would fire many times per second)
        lastMinute = minute();
        fivmin = fivmin + 1;
        sixhrs = sixhrs + 1;
      }
      if (fivmin == 5) {             // check the water level every five minutes
        getLiquidChange();
      }
      if (sixhrs == 360) {           // every six hours, debit two cups' worth of sensor units
        cupsdrunk = (cupsdrunk - 90);
        sixhrs = 0;
      }
      if (cupsdrunk >= 0) {          // on schedule: blue LED; behind: pink LED
        digitalWrite(blueLED, HIGH);
        digitalWrite(redLED, LOW);
      } else {
        digitalWrite(blueLED, LOW);
        digitalWrite(redLED, HIGH);
      }
      Serial.print("Analog reading = ");
      Serial.println(lsrReading);
      Serial.print("cupsdrunk = ");
      Serial.println(cupsdrunk);
      Serial.print("lastWaterLevel = ");
      Serial.println(lastWaterLevel);
      Serial.print("fivmin = ");
      Serial.println(fivmin);
      Serial.print("sixhrs = ");
      Serial.println(sixhrs);
    }

    void getLiquidChange() {
      int level1 = analogRead(LSR);
      int lwrbnd = (level1 - 10);
      int uprbnd = (level1 + 10);
      delay(5000);                   // wait five seconds and read again
      int level2 = analogRead(LSR);
      if (level2 >= lwrbnd && level2 <= uprbnd) {  // stable reading, not just sloshing
        if (level2 < lastWaterLevel) {
          Serial.print("level2 = ");
          Serial.println(level2);
          cupsdrunk = cupsdrunk + (lastWaterLevel - level2);
        }
        lastWaterLevel = level2;
      }
      fivmin = 0;
    }

    void digitalClockDisplay() {
      // digital clock display of the time
      Serial.print(hour());
      printDigits(minute());
      printDigits(second());
      Serial.print(" ");
      Serial.println();
    }

    void printDigits(int digits) {
      // utility function for digital clock display: prints preceding colon and leading 0
      Serial.print(":");
      if (digits < 10)
        Serial.print('0');
      Serial.print(digits);
    }

        Unfortunately, in the middle of the night after Sophie and I had finally gotten everything to work, we watched tragically as the input from the water level sensor suddenly dropped from its normal reading down to zero. Whenever we tried to adjust anything, the sensor simply gave us completely random data that made no sense given the thresholds we had calculated while it was still working. It was extremely discouraging for both of us, especially because we had just gotten it to work and we had no idea what went wrong.
        The next morning I talked with Orit and she explained to me the difference between working with hardware and programming only on the computer. Software alone is fairly predictable: you can debug, and when something doesn't work it is likely something you have done wrong. When working with hardware, however, you spend a lot of your time troubleshooting and trying to determine whether you've made the error or whether your hardware is simply malfunctioning. In our case, it seems fairly clear the problem lies with the hardware, especially since it was working perfectly and then all of a sudden stopped right before our eyes even though we hadn't changed anything. We will be ordering a new water level sensor and installing it as soon as we can.
        In the meantime, I installed a flex sensor instead in order to give a demo to our class of reading input and getting feedback using the blue and pink LEDs. I only had to change a few thresholds in the code and it was good to go. Whenever I hold the flex sensor so that the reading is below the last reading, the program thinks I've drunk water. But if the flex sensor stays at the same level or gets higher for too long the LEDs turn pink and I need to drink some more water in order for them to turn blue. Here is a picture of me wearing the feasibility prototype that we presented during class. One person in the hallway asked me if I was a human time bomb. Hehe. 

Breadboard? Breadbox?

        Well, the three of us sat down to begin programming our water level sensor, but it turns out we really don't know what exactly we're doing. After some research it looks like the water level sensor needs to be connected with alligator clips to a breadboard. The problem is, we have no idea how to use a breadboard or what it actually does. I did some research and it looks like there are columns in the middle that are connected to each other and then rows at the top and bottom that are connected. I tried following a force sensor tutorial using what we think is a force sensor from the Arduino kit, but for some reason our LED won't turn on. A close friend of mine at MIT is a mechanical engineer who also has a lot of electrical experience, and he's going to give me a quick breadboard tutorial tomorrow.
        Another major issue we've run into has to do with our water level sensor. We had been planning on using a Camelbak water bottle that could be connected to our suit, but we realized that as we drink from the Camelbak, both air and water leave the pack, causing the entire container to compress. This is a major issue given our plan to use a water level sensor. We need the water level to fall at a predictable rate in order for the sensor to determine how much water has been drunk, but when the container shrinks and the sides squeeze in, the water level hardly falls at all. Sophie and I went through a rather extensive brainstorming session that included using waterproof breathable fabric, turning the entire system upside down and cutting a hole in the top, and other crazy solutions. Sophie has decided she will take point on resolving this issue and will be driving out to the closest athletic-wear store to do some more research. Hopefully tomorrow night we will be able to come up with some answers instead of just more questions.

The Future of TUIs

        I've been thinking more about what TUIs will look like in the future. I'm doing a UROP (Undergraduate Research Opportunity) at MIT, working for the D-Lab to research appropriate technology for the developing world. I think it would make a lot of sense to start thinking about TUIs that can help solve problems frequently faced in the developing world. I definitely believe that, along with good education and proper health care, access to useful technology really can help people escape the cycle of poverty. I'm also really excited about the Organic User Interfaces we read about recently. I have a hard time picturing how the actual technology will work or what it will look like, but I can't wait to see where it goes. Finally, the article "Move to Design/Design to Move" opened up an entirely new vision of TUIs for me. I tend to think about technology as a way to solve problems and to try to identify problems that can be resolved with well-designed technology. That article, however, encouraged me to open up my perspective and think about ways technology can simply be used for art and pleasure. That's a side of me that has been fairly dormant during my time at school so far, but it's something I'm also quite passionate about.

Monday, December 12, 2011


We've come to a conclusion about how to measure sleep! The process included research into what actually happens when you sleep. I found this other wearable sleep sensor made by students at MIT called Somnus Sleep Shirt (http://nyxdevices.com/product/) that measures sleep based on breathing and heart-rate. So, as it turns out, you can measure sleep using heart-rate! And that's great news for us because we will already be ordering a heart-rate monitor to measure exercise.

Sensors, Micro-controllers, and Actuators, Oh My!

        Well, we are discovering just how complex the world of sensors and actuators is. We've brainstormed what type of sensor we want to use for each activity we are tracking; some are significantly easier to find sensors for than others. We will be using a heart-rate monitor for exercise and RFID tags to track your location (if you have entered a dining hall we assume you have eaten; if you're in the same location as a friend you are likely to have made a social connection). We are still working out how to measure sleep. We thought maybe we could use an accelerometer to measure movement, and when the person is moving less they are sleeping? That still poses problems for sitting still or for when people move around in their sleep. We'll have to keep thinking.
        On top of that, actually choosing which platform to get our sensors and actuators from has proven to be a rather substantial process in and of itself. Initially we had thought we could use wireless sensors because we didn't want a large amount of wiring inside the suit making it uncomfortable. We thought that in an ideal world maybe the sensors could use Bluetooth to communicate with a computer somewhere (or, in our early stages, the mobile phone), but it turns out that would be incredibly complex and significantly more work and money than is necessary given the scope of our course.
        Next, we looked into Phidget sensors and actuators. We spent a lot of time on different electronics websites, but I don't think any of us really knew exactly what we were looking for; we were just exploring and trying to see what was out there.
       Kat struck what we thought was gold when she discovered the Lilypad Arduino, a wearable micro-controller. We got really excited about that platform, especially because it has a wonderful range of sensors and actuators that we really would want to use, including a light sensor (maybe that can help with our sleep-sensing dilemma), a vibration board, and a buzzer! Plus, we watched a video tutorial for the Lilypad, and you can even use conductive thread instead of wires which, in our eyes, helped solve our problem of excessive, uncomfortable wiring in the suit. When discussing the Lilypad with Orit and Consuelo, however, we discovered that the Lilypad as a micro-controller is actually fairly weak and has shown itself to be a serious headache for other project groups in the past. I had already been wondering about all the sewing it would require and how we would prevent wires from getting crossed in the process, so that conclusion made a lot of sense. Consuelo recommended we still use the Lilypad sensors and actuators but solder them to wires connected to an Arduino board instead.
        So, there you have it! We have finally settled on using Arduino to program our suit. And, after much debate about how to measure water intake including force sensors, pressure sensors, and weight sensors, I found a water level sensor on SparkFun! What a relief.

Aren't they pretty?


Oh! I also came up with the idea of having a button on the suit that you can push if you want to turn the feedback off for a little while. It occurred to me you might not want the buzzer to go off during class. I don't want the wearer to have to remember to turn the feedback back on, though, so maybe the button could just last for 70 minutes and then it would turn back on automatically?
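The 70-minute snooze could be as simple as comparing timestamps. A sketch of the idea (the names are my placeholders; on the Arduino the current time would come from millis()):

```cpp
#include <cassert>

const long MUTE_MS = 70L * 60L * 1000L;  // 70 minutes, in milliseconds

// Pressing the button returns the time at which feedback should resume.
long pressAt(long nowMs) {
  return nowMs + MUTE_MS;
}

// Feedback is on again once the mute window has passed.
bool feedbackEnabled(long nowMs, long muteUntil) {
  return nowMs >= muteUntil;
}
```

The nice part is that the wearer never has to remember to turn feedback back on; the comparison does it automatically once the window expires.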

Project Update

        We presented our project's low-fidelity prototype to the class and got a lot of helpful feedback. This is a picture of our initial idea, but we will be making a fairly significant change to it:


Instead of using a mobile app to give the wearer feedback, we will be using actuators to give visual, physical, and auditory feedback. I can't believe we didn't think of this idea before because it makes so much sense. We already didn't want the wearer to have to remember anything or do anything to make the suit work, but for some reason we thought we needed a mobile app to update the wearer. We're going to need to put a lot of thought into how we want that feedback to look: I, for one, wouldn't want the whole world knowing if I hadn't slept in over 24 hours.

User Defined Gestures

        After reading the article "User Defined Gestures for Surface Computing" and talking about it during class, I've been thinking a lot more about how I use technology and even everyday objects. I've been especially aware of what Wendy called the "Sandwich Consideration," or the ability to do as much as possible with one hand while eating a sandwich with the other. I remember when I upgraded my Blackberry from the Pearl to the Curve, I was disappointed that it was (and still is) so much more difficult to text with one hand. I've also noticed that I use my left hand much more to stabilize things while using my right hand to perform the more elaborate tasks. This focus on how people actually interact with their everyday surroundings is something that particularly attracts me to the TUI field. I definitely like the idea of making something comprehensible enough that you can figure out how to use it just by looking at it. While GUIs try to make computers easy to use, there is ultimately a sharp learning curve between using a mouse and actually being able to perform extensive tasks in an application. Teaching my extremely technologically savvy grandmother (she's the only person in her apartment complex with a personal computer) how to use Google Earth took a process that included note-taking and a great deal of trial and error. If Google Earth were a TUI, however, I bet she would figure out how to use it very quickly because it would be intuitive.

Breaking News!

        We have completely changed our plan for our final project! Sophie, Kat, and I spent a great deal of time brainstorming ideas for how our MUSE interface would look and work but we ended up hitting a lot of walls in the design process. The three of us all have very different ideas of what the final product should look like and what it should focus on. I'm really excited about having a private time for generating ideas as well as finding a way to very easily organize and share those ideas. Sophie is more interested in individual versus group brainstorming time and how the two can be integrated. Kat has a lot of ideas for the different mediums that people should be able to use when expressing ideas and for how they can be connected to each other. All in all, when the three of us try to combine our ideas we basically get very overwhelmed and struggle to see how they can all fit together. On top of that, we are having the hardest time figuring out what the physical object that represents each idea should look like, how it will display the idea in a way that everyone can remember what the idea is, and how the space won't get too cluttered or disorganized.
        Ultimately, during our most recent brainstorming session, we decided to brainstorm completely new problems that may not have existing solutions. We zeroed in on the problem that Wellesley students have an incredible drive for success while at the same time are often significantly lacking in the amount of effort they are willing to put into taking care of themselves. Kat, Sophie, and I realized that we all do want to take care of our bodies but we often lose track of how much water we've drunk, forget to exercise or eat, and lose track of how much we're sleeping. We also recognize that many Wellesley students have a hard time finding time to actually be with friends and don't make that a priority. As a solution to this problem we've decided we are going to design a Super Wendy Suit that tracks food and water intake, sleep and exercise time, as well as social interactions! We are planning on using sensors to track all of these activities and then designing a mobile application called "Vital Signs" that looks like a Pokemon card and will report how healthy the user is and what they need to do in order to improve their condition. We are really, really excited about this new plan and feel so much better about it than we felt about our Surface application. I can't wait to see how it turns out!

Tuesday, September 20, 2011

MUSE: Phase 0

            Katherine, Sophie, and I have decided on an initial outline for our project! We will be addressing the problem of collaboration in meetings: the potential shyness of meeting participants as a result of group dynamics, the difficulty of generating or sharing ideas, and the laboriousness of replicating and distributing the information once it has been generated. Our hope is to design a solution to this problem using an interactive surface and creating physical artifacts that represent ideas and can be organized and shared easily. We would like our solution to be both inventive and intuitive so that it can add an entirely new dimension to the way brainstorming sessions are conducted and meetings are run. We are particularly excited about this topic because we discovered that we share an enthusiasm for both organization and creative freedom, and we believe these passions will drive our design process.
            Our proposed solution contains an individual brainstorming element, a group brainstorming and sharing element, as well as a method for sorting and retrieving the ideas once they have been generated. We hope to provide users as much freedom as possible in their brainstorming process—allowing them to use whatever medium they feel most comfortable with whether it is through writing, drawing, talking, or recording videos. We also plan on allowing people the capacity to create tokens that represent their ideas once they have been generated. These ideas should be able to be manipulated by other users, connected, and sorted, at will. We have already come up with a number of phrases that we find inspiring including “let’s see what other ideas are on the table” (both literally and figuratively) as well as perhaps using a physical “train of thought” to help sort information.
We chose to use this approach because we think it is intuitive as it follows existing patterns of brainstorming including what we call a “purging” of ideas, followed by reviewing, critiquing, revising, sorting, and sharing. We want our solution to be as interactive as possible while still allowing individuals ownership over their own creative spaces and ideas. We also really like the concept of being able to physically interact with what would usually be entirely abstract ideas.
            There are a number of things I am slightly worried about as we begin to think about implementing this idea. I hope that the table does not become confusing and overwhelmed with ideas that are difficult to distinguish from each other. I also hope that we are able to come up with something that is even more intuitive and simple than using Post-It notes. Additionally, it seems difficult to find times when we can all meet as a group and I hope that we are able to overcome those scheduling difficulties.
            Ultimately, I am enthusiastic about pursuing this topic and I look forward to seeing how our idea ebbs and grows as we continue to come up with new concepts and learn how to implement them.

Friday, September 9, 2011

Tangible Message Bubbles

As an emerging field, Tangible User Interface (TUI) development is only in the early stages of standardizing a paradigm for classifying its advances. While contrasting suggestions exist, each proposed method provides a unique description of new additions to the field. Tangible Message Bubbles (TMB), for instance, can be analyzed using both Shaer et al.'s and Hornecker and Buur's frameworks. TMB is an interactive tool designed for children to be able to communicate with their friends and relatives in a fun and simple fashion. This TUI, which was developed by Ryokai, Raffle, and Brooks, allows children to record videos and sounds in either an accordion or a balloon and then manipulate the recording by simply extending or compacting the toy. Finally, their creations can be transmitted to an interactive surface and moved around it until they are dropped into designated locations representing friends and family. The computer then sends the videos directly to the desired recipients' mailboxes.
Using Shaer et al.'s framework, Tangible Message Bubbles is most closely categorized by tokens and constraints. By using commonly identifiable toys, children quickly understand to speak into the balloon in order to generate a message. Similarly, children understand the constraints of an accordion and discover they can expand and contract it in order to control the sound. While an interactive surface is used, the majority of tangible interaction with TMB takes place off the surface, rendering that a much less prominent classification. Based on Hornecker and Buur's framework, however, TMB is classified quite differently. Because users are able to grab and move important elements of the bubbles, as well as proceed in small, experimental steps, TMB should fall under the category of Tangible Manipulation. While both of these classifications are quite descriptive, it is likely that the field will soon evolve to prefer a single framework for describing Tangible User Interfaces.