FMP-2_02 ↓ The Gestural Experience or Nothing

I met with my mentor Pete Wallace; we had loads of catching up to do about our respective projects.

Prior to that meeting, I hadn’t voiced my concept to anyone, so I was pretty unsure about its clarity. It turns out Pete got exactly what I was trying to explain (aka what I previously wrote → “Bring corporality to the experience of web surfing”), and the conversation definitely helped clear up my thoughts. I got so many ideas after being stuck by myself. Because of his background in projection and moving image, he also advised me a lot on the technical part. After that, I sketched out a few ideas for the execution – as you can see below.

We also talked about my Ars project, and my interest in the relationship between sound and physical gestures. Pete e-mailed some of his colleagues whose lines of work could intertwine with mine. I exchanged a bit with Benji Fox – he advised me on binaural sound, and sent me a few references of contemporary modular synths with interesting control gestures. I particularly liked Landscape ↓

In the same vein, I also went to the Music Hackspace Artist Talk at Somerset House this Monday, with Kacper Ziemianin presenting his LightSeq ↓

In an age where pretty much everything can be done on a laptop alone, the experience of playing // watching a performance still matters.

Although my FMP might not follow that musical path – I’m better off concentrating on the visual part – I can still definitely link my interest in the gestural experience to web surfing.

Anyway – this is always useful, particularly since I’m planning to improve my Ars project. I’m trying to build a cohesive body of work, and I can’t really disregard my own interest in sound. I know it shouldn’t be forced though, so I’m stepping back a little to reflect and solve that visual content part first.

I have to admit I’m stuck on that part. 🙃 The only thing I know is that I want to re-create a web universe – which is plenty vague. I didn’t talk about it much with Pete, though he gave me a few ideas for variables I could use, such as RGB detection.

As usual, I mindmapped – and simplified everything out ↓

Here are the sketches ↓

FMP-1_05 ↓ The Cyberflâneur:

Writing a thesis is a pretty structured task. Here is mine; it might still change before the final version though ↓

Here is the Google Slides deck I used for the Formative Assessment we had last week ↓

For some reason I was really stressed out, and I definitely didn’t manage my time well. My explanations were rushed, and I missed out a lot of parts. I was caught blabbering – apparently I said “it’s like, whatever, I tried, basically, etc.” a lot ↓

FMP-2_01 ↓ The Corporalité or Nothing

After the Formal Thesis Assessment last week and a lil’ break (which mostly consisted of viewing and moving flats…), it’s time to get back on track – time for the long awaited FI-NAL GRA-DUA-TE PRO-JECT 👌

While writing, I couldn’t wait to get to this part. I’ve actually been ideating a rather firm concept for it since I finished my Ars Electronica project, which happened during the same time span – remember the meditative scroll box?

Since I used to ideate with the screen, I’d like to keep conceptualizing an Internet-inspired type of design – but with tangible objects. It was actually my main objective during this MA: to get out of my screen-comfort-zone and do something actually physical(-computing). For some reason, I’ve always been intimidated by designing actual objects, but I’m starting to do better now… Why not keep going at it then?

My pitch is rather simple → “Bring corporality to the experience of web surfing”.

What I mean by the word corporality is actually taken from the French word corporalité. The usual translation would be physicality, but I like the semantics of corporalité. Why is that? Well, corps translates to body, hence corporalité literally ties physicality to our body, to our human senses.

Which is what I’m aiming for, bringing web surfing closer to us. I started mindmapping as you can see below ↓

Notes ↓ Ars Electronica

It was such an amazing week that I need to retreat from the IRL world a bit to reflect – and finish off my FMP. So now is the time to write a few words about it.

In every festival – especially at a scale such as Ars Electronica – you are bound to go “Huh?” at some works, get impressed by others, eventually dislike a few, and finally get simply struck by particular ones. The energy of Ars Electronica itself was very good, and I’m glad we had the opportunity to experience this. I definitely feel it brought the cohort together, and I hope we will use that energy to pull off an amazing graduate show in a little over 2 months. I’m starting to get a little sad about graduating, although I have long waited for that moment 🙃

Anyway – I can’t detail the whole trip, but here are my two biggest impressions ↓


Nyloïd is a sound sculpture made by the brothers André and Michel Décosterd under the name Cod.Act. I first saw it while the artists were setting it up, and it’s definitely mesmerizing. The simplicity of the execution – at least, that’s the impression the output gives – got me.

Even if you don’t particularly like it, it definitely gets a reaction out of you. It’s pretty interesting to see how it differs from one person to another, and to discuss your different impressions. In my case, I found it simply soothing for reasons I can’t explain. It’s also a case where the technology manages to get past its wow factor and open up to its own materiality – I really appreciate that.

Robot, Doing Nothing

(I like the title.) Robot, Doing Nothing is an installation directed by Emmanuel Gollob and Johannes Braumann, with the collaboration of Michael Schweiger and Chris Noelle.

It looks supra complicated, but it isn’t doing anything useful. If I remember correctly – this is a speculative scenario – it encourages the act of doing nothing as a new way to be efficient in society, by meditating with the installation. Here, the wow factor is definitely exploited and hijacked from a critical pov.

I kind of relate, since my project also takes a stance on meditation, although mine is particularly focused on un-addiction. It brings me to the question of meditative technology – and whether it’s intentional or not would be my biggest question mark.

But that’s not the coolest part. The coolest part is that both were exhibited just a step away from the Campus section we were at. It was super cool to casually walk by them every day to reach my own project. It almost normalized how amazing they were.

Social Things_11 ↓ Ars Electronica

Here is my project at Ars Electronica ↓

At the beginning, a bad habit re-surfaced – my low-confidence self took over and I was really sick of my project. I guess it’s because I hadn’t had any feedback on the updated version of the project, since the version I submitted was a prototype finalized in concept but not in form. Those last steps were done during the summer, and I didn’t get to meet or talk to many people during that span of time.

Seeing everyone pretty confident about their own projects and happily talking about them forced me to reach out to the audience – despite my hesitations. My cohort was also a good support in that – it’s always scary to put yourself out there, at least for me.

Turns out my project was more easily understood than expected, and I got some good feedback. It seems it’s definitely relaxing, and most of the audience got the concept of the scroll gesture right away, or at least why I did it. I guess the title “DO IT RIGHT, DO IT SLOW” helped, as well as the actual shape of the object…? At least, that’s what I gathered.

I hope there will be opportunities to keep exhibiting my object – and definitely to take it further. I think I already said this in a previous post, but I want to turn it into a modular synth instead of simply playing samples. Also, get rid of the Mac Mini to make it self-sufficient. It’s too bad MAX/MSP isn’t supported on Raspberry Pi, but I guess Pure Data would take over then.

It’s my last blog entry as a student for that project, but it is to be continued 🙂

Social Things_10 ↓ Making

I had a great time this summer crafting my object! I’m posting it all in one go, but I recall I spent maybe 1-2 weeks in total on the final making.

My main obstacle was figuring out how to make the wavy shape. Thankfully, Nathalie from the 3D workshop is full of tricks. She told me that bending wood is a whole other level of difficulty, but that I could take a different approach to it. We started by laser-cutting the sides in their wavy shapes, then forced a very thin layer of wood over the top to bend around the cut shapes.

I don’t know if it makes sense, so here are some explanatory pictures ↓

My first plan, vastly corrected by Nathalie. I’m definitely not a 3D person, but I try.


Nathalie’s sketches. It’s basically a box in two parts. She recommended MakerCase to generate my plans and then adjusting them in Illustrator for the wavy part – super useful, I didn’t have to think about the joints.


All the parts are cut and glued together. I unfortunately didn’t take any in-between pictures, but you can see the thin layer of wood glued to the thicker sides – hence the shape.
Pipe helping me with the taping so that the glue stays put.


After deliberation (Nicolas, Betty and Pipe being the jurors), I decided to get rid of the bottom half. It was way too big compared to my expectations.


Thus, I made another bottom part to be able to close the box.


I re-tested the electronic part, since I changed the aluminium tape to copper tape – much more stable.


Time to sand, sand, sand.


I applied 2-3 coats of black satin paint, and another satin finish coat on top.

I also changed the Arduino and MAX/MSP parts. Gareth helped me with the Arduino part – instead of just reading the pin number, we adjusted it to read the speed of the gesture between two pins. Thus, I can use that number within MAX/MSP to play different samples according to the speed.

It’s all ready and set for Ars Electronica 😎


Social Things_09 ↓ PE

I just handed in my Portfolio of Evidence for this Physical Computing unit. Here is the demonstration video I made, with Betty’s hands scrolling down through my prototype ↓↓↓

I used this song made by my musician friend Sima Kim, and tweaked it a little to demonstrate the type of effects I want to produce. Indeed, I’m working towards having the sound evolve with the speed of the gesture. Thus, I still have some work to do on the MAX/MSP patch, but it should be a lot of fun. Since discovering MAX/MSP, I’ve always wanted to find the time to actually compose music with it. Should be cool summer homework!

First, though, I should take the next weeks to finish the object’s design as a priority.

FMP-1_04 ↓ Proposal

Just handed in my Final Major Project and Thesis proposal, here it is ↓↓↓



This isn’t a random wordplay but an actual statement. Take cyber and add flâneur; you get the verb, the term and the noun I want to dedicate my thesis study to. What do I mean by cyberflâner, cyberflânerie and the cyberflâneur – and how do I relate them to surfing? Wait, web surfing I meant.

Indeed, the area of my research is specifically the World Wide Web and the act of surfing – and its relationship with the flâneur. This is the French name for a man of leisure, picked up by the scholar Walter Benjamin in the 20th century, which thus became the symbol of the modern explorer. I aim to do the same here, with the cyberflâneur as the symbol of the digital explorer.

Fig 1. Windows 95 Commercial by Microsoft.

The World Wide Web has undoubtedly changed since its invention in 1991 by Tim Berners-Lee. This can be broadened to the Internet as a whole, although the difference between the two has to be marked. If the Internet – first ARPANET – was mainly brought about by the U.S. Department of Defense to facilitate both communication and surveillance through a global networking infrastructure, the World Wide Web beamed with hope towards infinite exploration.

What exactly is the act of surfing? Here is the definition, dated 2004, found on Urban Dictionary:

  • Usually involves an individual browsing through the Internet, whilst not looking for anything in particular.

I particularly like the last bit: whilst not looking for anything in particular. This is how I relate it to the act of flâner. You put your time into that aimless stroll, mindfully observing the city and its surroundings; the self-awareness of this act is very important, and I believe the act of surfing encouraged that same self-awareness. We click from one hyperlink to another, surfing through web pages as if they were waves. Now, this isn’t much the case anymore.

Fig 2. Questionary by Facebook.

New (inter)actions have since grown out of the known gestures: the click and the scroll. The first now quantifies actions – such as like and follow – while the latter has transformed the way the World Wide Web is thought, as it has brought about the feed.

Indeed, the hyperlink has been overtaken by the feed, infinitely bringing us content – personalized yet automated content, through algorithms. Recommendation systems keep getting more and more accurate by gathering data through our feeds. Therefore, the act of surfing is now of undermined importance. This is precisely what has been “damaged” by those algorithms: how relevant is the act of surfing if it is influenced by my location, my previous searches and my data? That’s why I’m referring to the cyberflâneur instead.

With it, the act of reading has also subsequently changed: short(er), fast(er), and linear. The risk underlying the infinite feed is a trap of time and attention. I believe there isn’t much satisfaction to be found in the feed: you can never get enough, precisely because you’ll always get more. This linearity impacts the act of reading, and I believe personal development is at risk here – the development of oneself. That’s why the concept of individuation is important – as I understand it from the works of Bernard Stiegler [1] – against the hegemony exerted by big corporations over the Internet.

Nevertheless, I still don’t believe that the Internet as a medium is specifically making us stupid: it’s about the way the (inter)actions are designed and how we use them. The development of cognitive skills happens through the act of reading – and writing, though I’m choosing to focus exclusively on the former here. I want to demonstrate that the cyberflâneur is very much alive: he/she is aimless, absolutely not mindless – and yet certainly unaware of his/her own status.

Fig 3. “Paris Street, Rainy Day” by Gustave Caillebotte (1877).

Rather than provide an actual solution – which would definitely put my work back into the realm of the screen – I aim to create a debate around the act of cyberflânerie. That’s why I want to transcribe the act of cyberflâner into the physical world, through the production of an interactive installation.

Before that, a completed literature review is my first step into the writing part: my main routes are Walter Benjamin and his “flâneur”, Guy Debord and his “dérive”, and Marshall McLuhan and his “global village”. I believe a systematic review is needed to reach the figure of the cyberflâneur through an understanding of concepts thought up in different eras. Expert research on the cognitive aspects of the Internet is also much needed, to add physical substance to my theoretical research.

My main framework is experimentation: I will definitely cyberflâne myself, and might ask individuals to do the same – with the possibility of using brain sensors to track any changes, in addition to the expert research. The writing part will thus definitely overlap with the production part at the beginning. This won’t be the end result of my project, as I only intend to use it as a way to gather data. I’m also inspired by Kenneth Goldsmith and his concept of “Wasting Time on the Internet” [2] – finding creativity in the act of procrastination. I plan to make use of this reflective practice by producing observations from its outcomes.

My other framework is prototyping and research through it: which interactions best represent the act of cyberflâner? I can’t find out by putting all my energy into the end production; I first have to test things out and choose the best fit. For that, I also plan to conduct field research using surveys – both online and in my physical environment – to gather thoughts and opinions about the gestures used and envisioned. Contextual research will also help me define existing practices in the use of the Internet as a medium.

Lastly, here are my two criteria for success: I want to get individuals to critically reflect on their actions on the World Wide Web, and hopefully encourage the act of cyberflânerie.


Bell, D. (2008). Cyberculture Theorists: Manuel Castells and Donna Haraway. London: Routledge.

Carr, N. (2011). The Shallows: What the Internet Is Doing to Our Brains. New York: W.W. Norton & Company.

(1998). The ‘Cyberflaneur’ – Spaces and Places on the Internet II – Ceramics: 05/19/98. [online] Available at: [Accessed 16 Jun. 2017].

[2] Goldsmith, K. (2014). Why I Am Teaching a Course Called “Wasting Time on the Internet”. The New Yorker. [online] Available at: [Accessed 16 Jun. 2017].

Hendel, J. (2012). The Life of the Cyberflâneur. The Atlantic. [online] Available at: [Accessed 16 Jun. 2017].

Morozov, E. (2012). The Death of the Cyberflâneur. The New York Times. [online] Available at: [Accessed 16 Jun. 2017].

[1] Spatial Machinations. (2013). Bernard Stiegler, “the Net blues”. [online] Available at: [Accessed 16 Jun. 2017].

Van Honk, J. (2016). The Web and its Wanderers. Institute of Network Cultures. [online] Available at: [Accessed 16 Jun. 2017].

Social Things_08 ↓ Final Crit

We had the Project Final Crit this morning, in the presence of Rania Svaronou and Riccie Janus from IBM again. We organized it as P2P feedback, as you can see below. Pretty cool to see everyone’s projects going through their last iterations!

Here is my (5th) prototype ↓↓↓

(I wish I had taken a self-explanatory picture before I glued everything, instead of the long paragraph coming up 😅)

I made a very DIY case to ensure the foil was secured: a plastic sheet for the touch, and a colored paper sheet to hide it. I’m considering simply using a colored plastic sheet for the last version, as I don’t need to see the BTS that much anymore.

Compared to the 4th prototype, I didn’t use copper tape but simply switched back to foil to get bigger strips: I cut them around 3 cm wide, compared to 5 mm for the tapes. I also left around 3 cm of space between each strip, whereas the tapes were placed too closely and created confusion for the MPR121. And I only used 3 strips compared to the 6 I previously had. I think that’s plenty, considering the interactions I actually need from them – not that many.

Pretty simple, as instructed: the person has to hold on to the first strip, then slide over the two other strips. I noticed the foil strips sometimes went “off” or got confused with one another despite the space between them – forcing me to restart the circuit. It didn’t happen before; I’m not sure if it’s because foil is less stable than copper, or simply down to the tape format? Well, what’s needed: bigger tapes!

The technical part didn’t change much from the 4th prototype: I used the same wiring + code for the Arduino part, and simplified the MAX/MSP patch. Note: the first strip is wired to pin 0, the second strip to pin 6, and the third strip to pin 11.


While the pin 0 part didn’t change, I used select to bang each time pin 6 is detected, plus counter to bang once it has effectively counted from 6 to 11. Both select and counter are linked to timer, to know how many milliseconds pass between the finger hitting the second strip (aka the first bang) and the third strip (aka the second bang). I then linked that to a gain function: the faster the gesture, the lower the volume.


(Here is Pipe interacting with my prototype; you can also see the title I’m settling on: LET’S DO IT RIGHT, LET’S DO IT SLOW.)

I wrote down the main feedback I got + my thoughts on it:

  • Audrey: “When moving fast, not aware of the reaction or the idea ‘slow down’.”

Agreed, the sound effects definitely need to be more obvious than gain, otherwise it looks like it’s broken. I re-linked it to a feedback function right away, so it distorts the sound instead.

  • Rania: “Loves the idea. Thinking from a UX perspective, better to use a vertical scroll instead. Match the speed of the gesture to the content, and that’s all it needs.”

It was great to see the idea understood rapidly, with straightforward advice. Plus, it seems the vertical scroll definitely comes off as more familiar and matches the infinite scrolling we do on our social apps.

  • Gareth: “Loves the concept, and it definitely gets through: that’s the most important part, the technical part comes later. Mentioned psychological studies on the scroll gesture, and the dissatisfaction we get from our never-ending feeds. Doesn’t think the scroll needs to be vertical.”

Interesting thoughts – and also related to what I’m looking at for my FMP. Maybe the gesture could work in both cases, depending on how people prefer to handle the object – offering both horizontal and vertical?

  • Stephanie: “Advised a strong reminder of the context of the Slow Movement – a more high-tech approach with the phone, and the use of fabrics to tone down that approach.”

I’m not into the phone direction, but I got where she was coming from, and it actually gave me an idea: maybe I can ask people to put their smartphone beside my touch pad, so the action feels like substituting their smartphone for my device?

  • Nicolas: “Something is happening: a trust relationship with the object. It needs an evolution of the content now: for example, if you scroll right enough to reach a good volume, the next step would be to maintain a good sound effect? The gesture is good as it is: one hand rests while holding, while the other hand scrolls? The last step is the object design – also think about where I want this object to be used. On the question of fabrics, it could be filled up with cotton and such: take inspiration from toy stores, and look up kinetic sound.”

I’m digging that “evolution” idea. It’s definitely a home object, acting as a substitute for the smartphone, as I just ideated. To be honest, I don’t think I will use any fabrics except silky ones: 1/ I want a slick touch reminiscent of the screen. 2/ I don’t want my object’s design to be playful. Since I view it as therapy for the infinite scrolling gesture – aka it won’t be a toy – my aim is definitely an adult (teenagers included) audience.

The object’s design will also definitely shape the gesture – I mentioned the wave idea to Nicolas. In a previous blog post, I mentioned that I ordered a plastic ball in order to prototype with its wavy shape; well, I don’t know where my package is – hence the flat prototype…

Now, looking into kinetic sound, my prototyping process is taking me deeper into the sound part – which is why I think I might drop the light part; I don’t think it adds much to the interaction. I will still consider it for my final sketches, more as a bonus aesthetic. I’m still thinking about those flashes you see when you close your eyes after looking at lights. Well, it’ll depend on the shape, but it would need to be transparent at least on that part for the light to come through; hiding the strips would be extra work – and I’d have to make sure the MPR121 would still be reliable at the distance I’d need.

Though I got my concept across – which I’m feeling pretty relieved about – I still have a few mostly technical steps left: the object design, and the sound part of the MAX/MSP patch.

It might be better to hand in the sketches for the PE, and aim for an actual delivery with Ars Electronica as the objective (I didn’t mention it before, but the class is going to Ars this September, and I’m bringing this Social Things project in my suitcase).

Social Things_07 ↓ Fourth Prototype

Here is the 4th prototype, where things are finally starting to come together ↓↓↓

Wiring: I wired both the MPR121 and the RGB LED to a prototype shield + a small breadboard to minimize the size of the circuit. I wired the RGB LED as usual – look up my previous post and/or this tutorial directly. I wired the MPR121 as instructed on Sparkfun, to 6 copper tapes.

(Note: the MPR121 from Sparkfun has been discontinued, but you can find the same on Adafruit)

Code: I first used the original library, but I wasn’t able to change the threshold so the tapes would still work through a plastic sheet, as you can see on top. I asked Gareth and he advised me to use Bare Conductive’s library – indeed, it was pretty easy to change the values there. Here is the code with my values + the RGB LED part implemented ↓↓↓

#include <MPR121.h>
#include <Wire.h>

#define numElectrodes 12

const int redPin = 5;
const int greenPin = 6;
const int bluePin = 7;

void setup()
{
  Serial.begin(9600);
  while (!Serial); // only needed if you want serial feedback with the
                   // Arduino Leonardo or Bare Touch Board

  Wire.begin();

  // 0x5A is the default MPR121 I2C address (0x5C on the Bare Touch Board)
  if (!MPR121.begin(0x5A)) {
    Serial.println("error setting up MPR121");
    switch (MPR121.getError()) {
      case NO_ERROR:
        Serial.println("no error");
        break;
      case ADDRESS_UNKNOWN:
        Serial.println("incorrect address");
        break;
      case READBACK_FAIL:
        Serial.println("readback failure");
        break;
      case OVERCURRENT_FLAG:
        Serial.println("overcurrent on REXT pin");
        break;
      case OUT_OF_RANGE:
        Serial.println("electrode out of range");
        break;
      case NOT_INITED:
        Serial.println("not initialised");
        break;
      default:
        Serial.println("unknown error");
        break;
    }
    while (1);
  }

  // pin 4 is the MPR121 interrupt on the Bare Touch Board
  MPR121.setInterruptPin(4);

  // this is the touch threshold - setting it low makes it more like a proximity trigger
  // default value is 40 for touch (I lowered mine so the tapes register through the plastic sheet)
  MPR121.setTouchThreshold(40);

  // this is the release threshold - must ALWAYS be smaller than the touch threshold
  // default value is 20 for touch
  MPR121.setReleaseThreshold(20);

  // initial data update
  MPR121.updateTouchData();

  pinMode(redPin, OUTPUT);
  pinMode(greenPin, OUTPUT);
  pinMode(bluePin, OUTPUT);
}

// common-anode RGB LED, so the written value is inverted
void setColor(int red, int green, int blue)
{
  analogWrite(redPin, 255 - red);
  analogWrite(greenPin, 255 - green);
  analogWrite(bluePin, 255 - blue);
}

void loop()
{
  if (MPR121.touchStatusChanged()) {
    MPR121.updateTouchData();
    for (int i = 0; i < numElectrodes; i++) {
      if (MPR121.isNewTouch(i)) {
        // Serial.print("electrode ");
        Serial.println(i, DEC); // MAX/MSP reads this pin number over serial
        //Serial.println(" was just touched");
        setColor(255, 0, 0); // light up red on touch
      } else if (MPR121.isNewRelease(i)) {
        //Serial.print("electrode ");
        Serial.println(i, DEC);
        //Serial.println(" was just released");
        setColor(0, 0, 0); // LED off on release
      }
    }
  }
}
Time to re-create the scrolling gesture: the thumb has to hold the first tape while the other fingers scroll down through the rest of the tapes. This one-hand gesture is pretty similar to what you do on your trackpad or smartphone.

I spent quite some time in MAX/MSP figuring out how to make sure the fingers pass through all the tapes – you could cheat by holding on to one tape only and it would still work. After trying out select, clocker and such, I used counter, and it does count only after the full action, from the moment I hit the first tape to the last one! Still, I need to figure out how to measure the speed of that action.

I asked Nicolas for advice and we tried things such as thresh or select combined with timer. It didn’t quite work the way I wanted – aka no cheating allowed – but it gave me insight into how I can make it work for the next prototype. Hint: I’m thinking of using counter with timer.

For now, the only thing working is using select on the first tape to activate the sound (I chose an ambient track made by my friend Sima Kim in his debut days) through a comb function whose effects I intend to make full use of.

It’s a bit messy, but here is the actual MAX/MSP patch ↓↓↓

Also, this is the MAX30100, the heart rate sensor I intended to use for the other hand to rest on ↓↓↓

I decided not to use it anymore – not because it didn’t work… After discussing it with Nicolas and saying my aim was to parody tracking technology, he said my point wouldn’t come across, as it would only be perceived as technologically intrusive – in fact, exactly why I wanted to get away from tracking data in the first place. Well, I tried; it’s out for good now!

The light also doesn’t have any real use for now. I’m still struggling to sketch an object design that makes the most of it while hiding all the wires. I’m thinking of a wavy kind of shape, though. I ordered this plastic ball to use one half of it to cover the LED, in order to envision the wavy part, since I can make the flat part myself – next prototype, if it gets safely delivered.