If fashion is a language, Ashish’s Spring Summer 17 collection at London Fashion Week spoke of multi-cultural defiance in reaction to the toxic, post-Brexit anti-immigrant sentiment and violence reverberating throughout Britain and, more broadly, much of the western world. In this collection Ashish celebrated his Indian heritage and proudly declared his immigrant status in Great Britain.
#LondonIsOpen was the closing mood, following a luscious procession of adorned Hindu-god-like models, men and women, swept along by an elegantly ambiguous sexuality, at least in this western context.
It was a celebration of craft and colour and a reminder that fashion is most powerful when it has something to say. In the spirit of that, I’ll let Ashish’s SS17 collection do the talking.
We are fully versed in the realm of our physical world and increasingly dipping into the virtual world through virtual reality experiences, but what of the space in between? What of the transition realm – a corridor, if you like, that lies next to the real world and through which we transition before arriving at a state of VR immersion? Think about the experience of entering the virtual world and the need for all of our senses to be stimulated in order for the virtual experience to feel real. Drill down even further to consider the organ through which we physically feel – the skin. Herein lies the connection and transition area between real and virtual.
The Matrix corridor – a representation of the space between the real and virtual worlds
Skin is a powerful tool that allows us to communicate on a highly intricate level. It also communicates who we are biologically and culturally, making it a potent social and physical organ. What happens when a team of curious minds considers the meaning of skin and how skin can transition us from a physical to a virtual experience? Skinterface is born.
Skinterface is the work of RCA students Andre McQueen (footwear designer and trend forecaster), George Wright (engineer), Ka Hei Suen (kitchen product designer) and Charlotte Furet (architect), who embarked on their MSc / MA Innovation Engineering Design course out of curiosity and a desire for collaboration outside of their immediate professional realms. An admiration for each other’s individual project work led them to work together as a team of ‘sensory architects’. The initial exploration for the Skinterface project was broad, posing months of questions about the sensory experience and perception of touch, but it began with a very simple test: wearing a plastic bag on the hand, immersing it in water and noting the sensory experience. Although the water doesn’t touch the skin, it is still felt – the sensation of water on the hand is experienced. This underpins the working nature of the very human and very wearable piece of tech that is Skinterface.
Mood board images and the initial plastic bag test
The first question posed at the beginning of the project related not to creating a defined product, but to creating something very human, integrated with technology. Touch is a powerful human tool, and relaying it using technology seems a powerful new dimension of communication in a digital age. Skinterface is a one-way communication tool – the sensory experience is delivered according to the location of the Skinterface garment within a 3D-mapped space, by tracking its coloured surface details and delivering the sensory experience accordingly. An extension of this is a dual tool using the same tech, but allowing pressure on one part of the tool to affect the sensation delivered by the other. The implication is that you could potentially touch someone in another location, even in another country.
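In software terms, the one-way delivery described above amounts to a lookup: the tracked position of the garment decides which sensation plays. Here is a minimal sketch of that idea – the region names, coordinates and function are all illustrative assumptions, not details from the Skinterface project:

```python
# Hypothetical sketch: the suit's tracked position in a 3D-mapped
# space selects the sensation to deliver. Regions are modelled as
# axis-aligned boxes ((min corner), (max corner)) in metres.
REGIONS = {
    "waterfall": ((0, 0, 0), (2, 2, 3)),
    "wind_tunnel": ((2, 0, 0), (5, 2, 3)),
}

def sensation_for(position):
    """Return the sensation whose mapped region contains the
    tracked garment position, or None if outside all regions."""
    for name, (lo, hi) in REGIONS.items():
        if all(l <= c <= h for c, l, h in zip(position, lo, hi)):
            return name
    return None
```

A tracking loop would call `sensation_for` each frame with the garment’s estimated coordinates and drive the actuators with whatever it returns.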
Skinterface at Milan Design Week, 2016
The set of garments created by the team deliver sensory pressure by essentially using a speaker in reverse, so that sounds create a varying electromagnetic field, which in turn is calibrated to produce varying sensations on the skin. These sensations are delivered via a coil and magnets encased in 3D printed caps, created at Imperial College London and adhered to the garments, which require close skin contact to accurately deliver the sensation.
Imagine the sound of a bird flying past you and the sensory experience induced by the change in air pressure caused by the bird’s movement – that’s what Skinterface delivers. In a virtual world, the sound of all manner of objects can be programmed and delivered via the coil and magnet-driven modules that apply just the right amount of pressure to mimic that same sensory experience as though it had happened in the real world. This skin beyond skin is poetically demonstrated in the video below, from beginning to end.
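Since the modules are essentially speakers run in reverse, the strength of the pressure on the skin tracks the loudness of the driving sound. A minimal sketch of that mapping – an amplitude envelope computed over a synthetic “fly-by” tone – might look like this (the window size and signal are illustrative assumptions; the project’s actual calibration is not public):

```python
import math

def amplitude_envelope(samples, window=64):
    """RMS loudness per window; each value would be a coil drive
    level, so louder passages press harder on the skin."""
    env = []
    for i in range(0, len(samples), window):
        chunk = samples[i:i + window]
        env.append(math.sqrt(sum(s * s for s in chunk) / len(chunk)))
    return env

# Synthetic "bird fly-by": a tone whose loudness swells then fades.
n = 1024
samples = [math.sin(2 * math.pi * 440 * t / 8000)   # carrier tone
           * math.sin(math.pi * t / n)              # swell/fade shape
           for t in range(n)]

drive_levels = amplitude_envelope(samples)
peak = max(drive_levels)  # strongest pressure, mid fly-by
```

The pressure would build and fall with the sound, peaking as the virtual bird passes closest to the wearer.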
When asked about the aesthetic component of the design, Andre cited the current athletic lifestyle (or athleisure) sportswear evolution and brainstorming about what clothing will look like 40 years from now. Andre is a Cordwainers graduate who launched a streetwear fashion label then moved on to fashion forecasting, working extensively with global brands to evolve their trend-driven products. His curiosity for exploring the technical side of fashion and design led him to the Innovation Engineering Design Masters at the RCA, but he still has a firm grip on where the fashion market is headed.
When I ask the team how this exciting innovation could be used, they mention the sex industry, gaming, entertainment and fashion. The sex industry is an obvious one, as are gaming and entertainment, but fashion? Andre sees an opportunity to translate the sensation of wearing a multitude of different fabrics into a sensory ‘digital library’ that can be felt by wearing Skinterface. Wonder what your cotton trench coat would feel like in felted wool? Skinterface can give you that sensation. There is as much scope here for customer-led retail experiences as for fashion designers considering the weight and drape of various fabrics when designing garments.
A library of sounds could be created to induce all manner of sensory experiences through the Skinterface suit. The team talks about a dream open source library of thousands of compositions, and even whole scores for feature films that could be felt while they are watched. Theoretically, the score for each character could be written according to what they experience in the film, and as a Skinterface-wearing viewer you could experience it too. The thought of experiencing a film dozens of times from a different character’s point of view is mind-blowing.
I leave the team with just five weeks remaining before they complete their studies and exhibit the work arising from two intense years of exploration, research and experimentation. On my way out of the Darwin Building at the RCA, Andre and I muse about a common paradox in fashion design – final design decisions are often made at the beginning of the design process, leaving little room for curiosity, exploration and design evolution. Educational institutions including the RCA are a unique breeding ground for such curiosity and I look forward to seeing where this has taken the IED students, both physically and virtually.
To chat to Paul Sohi is to geek out over all things 3D printed. He takes me on a journey from 3D printed mannequins (the subject of his PhD) to a new polycarbonate composite prosthetic leg he is developing with a team spanning half a dozen countries but centred at Autodesk in San Francisco, for an Olympic cyclist bound for Rio later this year. It’s a helluva ride, so buckle up!
What initially prompted me to get in contact with Paul was a question I’ve been pondering whilst working at the fringe of fashion and technology for some time. Why aren’t there robot models? And why don’t I create the first robot modelling agency? It makes sense for so many reasons, but more on that in a later post.
Paul’s research and development at the Royal College of Art, in conjunction with the Makerversity at Somerset House, centres on solving an immense problem in mannequin manufacturing. Mannequins are currently sculpted by hand before being moulded and cast – a time-consuming process which imposes mass standardisation. As someone who has hired mannequins for London Fashion Week, I can attest to the limited offer currently on the market. Consider a museum requiring a custom-sized mannequin to display historic clothing, and then consider a new technology allowing such a mannequin to be 3D printed in days rather than laboriously handmade over months. Then consider that currently the best way of creating mannequins to display such costumes is to 3D scan the clothing to determine the volume inside it when worn, and to base a mannequin shape on that – reverse-engineering the mannequin to mimic someone who actually lived and wore those clothes at a point in history. On considering these weird truths, it becomes possible to see the benefit of Paul’s algorithm, designed to transform actual body (or garment) measurements into 3D printed mannequins rather than relying on artistic creations inspired by – but anatomically untrue to – the human body. The key here is that measurements entered into Paul’s program are manipulated and represented visually in line with actual anthropometric landmarks. For example, height has an impact on body proportions. It is incorrect to simply scale a mannequin up or down proportionately – there are intricacies in height ratios that Paul’s rigorous algorithm takes into account, so that the mannequins he 3D prints are true to the human form rather than a sculpted representation of an imagined ideal.
Shorter people’s legs are proportionately longer than their torso compared to taller people, for example, but you would not detect this by looking at them – both body proportions simply look ‘right’. Herein lies the difficulty in artistically interpreting the human form where size and fit are concerned.
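The idea that body segments do not all scale by the same factor can be sketched in a few lines. This is purely illustrative – the coefficients and exponents below are invented for the example and are not Paul’s actual algorithm – but it shows how a leg-to-torso ratio can shift with height rather than staying fixed:

```python
def mannequin_proportions(height_cm):
    """Hypothetical non-uniform scaling: segment lengths grow at
    different rates with height, so the leg-to-torso ratio changes
    rather than staying constant. Illustrative numbers only."""
    base_height = 170.0
    scale = height_cm / base_height
    # Legs scale slightly sub-linearly, torso slightly super-linearly,
    # so shorter figures keep proportionately longer legs.
    leg = 0.47 * base_height * scale ** 0.9
    torso = 0.30 * base_height * scale ** 1.1
    return {"leg": round(leg, 1), "torso": round(torso, 1)}

short, tall = mannequin_proportions(155), mannequin_proportions(190)
# The shorter figure ends up with a higher leg-to-torso ratio.
```

Naive uniform scaling would keep that ratio identical at every height, which is exactly the artefact a measurement-driven algorithm avoids.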
Paul’s 3D printed scale mannequins being printed in parts for later assembly
The motivation behind Paul’s PhD was to find out if he could create a 3D printed mannequin using mass customisation algorithms built upon an immense amount of research underpinned by the International Standards. These standards provided all the necessary body measurements to create a digital mannequin which can then be 3D printed.
An important point made by Paul during our conversation is that mass customisation via 3D printing is now possible on a production scale – it has evolved beyond prototyping. This means that standardisation of mannequins is no longer necessary and the skilled work required for each fashion retail market does not have to be localised. Since a ‘standard’ size small in Asia is nothing like a ‘standard’ size small in Europe, mass customisation shatters geographical boundaries and means standardisation – at best badly sized and limited in terms of body shape, and at worst pushing damagingly unrealistic body ideals – is no longer necessary. The mannequins Paul is developing can be tailored according to cultural specificity. Regional cuisine radically affects body shape, size and proportion, and genetics also has a considerable impact. These factors can be taken into account in Paul’s algorithm.
A complete 3D printed scale mannequin
From an aesthetic point of view, every fashion brand has its own ideal mannequin which in some cases may be seasonal. These are made from master moulds and if done by hand using current methods, take months. 3D printing takes a fraction of the time, allowing greater flexibility and mannequin diversity.
Components of a scale model printing whilst Paul and I chatted at the Makerversity
Paul describes his work as creating avatars and body forms. He is currently working with the Victoria & Albert Museum, London to find rapid solutions for mannequin making for the display of historic costumes. As an extension of this revolutionary development for display mannequins, Paul is looking at how the current mass standardisation of garment-making mannequins relates to sizing within the fashion industry. There is no datum on mannequins – no system for sizing and no standard approach to it across the industry. When creating clothing, we have anatomical landmarks (nape to waist, for example) but the way these are measured is still variable. Paul is determined to standardise measurement taking and sizing to put an end to what is a slow, laborious and repetitive process. He makes the point that, for example, three people in the fashion industry will measure the same dress and get three completely different sets of measurements. Compare that to architecture – or any other creative industry – and you would be laughed at for not having and applying a set of standards. He makes a strong point, and I have personally dealt with this often painful aspect of sampling and production in the fashion industry. Paul is confident that a set of standards can be extrapolated from the points mapped in his algorithm.
Interestingly, Paul tells me that the standard nape to waist measurement of garment-making blocks used routinely in the fashion industry came from 1920s military uniforms. Today’s approach to garment sizing and pattern proportions has only marginally evolved since then. ‘Standard sizes’ are in truth specific to each individual fashion house and are not related to any actual standard, which to me makes sense because each fashion house/brand has its own silhouettes and ‘fit’ which are part of its aesthetic, but I can see how this isn’t customer friendly and how, in an increasingly e-commerce driven industry, sizing standardisation would reduce returns and help consumers make better style choices.
Returning to museum mannequins, the Alexander McQueen exhibition Savage Beauty at the Metropolitan Museum of Art in New York was one of the most successful exhibitions of all time but despite this, when it ended it was not picked up immediately by another institution. The hand-sculpted mannequins, made specifically for the garments they displayed, were destroyed. Shortly after, the V&A took on the exhibition and set about hand-making the mannequins all over again. Almost a year later they were complete. If these had been created using Paul’s 3D printing method, the process would have been simpler, quicker and less expensive.
The exception within the Savage Beauty exhibition was the Plato’s Atlantis 3D printed mannequins which closed the exhibition. See the 3D rendering by Asylums FX and a photo I took at the exhibition below:
Plato’s Atlantis, Savage Beauty, Victoria and Albert Museum, London
When I ask Paul about the response so far to his work, he says it has been met with distrust and caution from a number of museum curators and fashion designers who feel things are working just fine as they are. The fashion industry is famously and paradoxically resistant to change (the out-of-sync seasonal cycles and some luxury brands still refusing to sell online are just two examples) but why isn’t the way things are done being challenged? Why can’t we do things better? Why can’t we explore technology to do things in a better way? As long as we pose the questions, it appears technology will provide the answers.
Paul and I leave the Makerversity disagreeing over the recent Batman v Superman film (he’s a fan, I’m not) and agreeing on the amazingness of The Hulk. I wish him well on his bumpy but worthwhile journey to fashion mannequin disruption.
Header image: Paul Sohi
For more about Paul’s work, click here and follow him on Twitter