The Dawning of a New Age of Augmented Reality-Led Fashion

If the title of this article has conjured up images of LED light-embedded bags and swathes of technophiles in VR headsets, prepare yourself for an altogether more sophisticated and integrated use of tech hardware and augmented reality, where the result isn’t ‘in yer face geekery’ but more tech-enabled emotional brand experiences.  Leading this charge across fashion design and brand experiences is the London College of Fashion-based Fashion Innovation Agency, whose 5th birthday last week acted as a summary of five years of giant leaps in tech and bold experimentation that began with a smartphone skirt (seems rudimentary now, right?!) and culminated most recently in a live CGI fashion show.  How and why such big leaps, and why does it matter?  Is the fashion industry really ready and open to placing a digital layer over the physical world?  Yes, and here’s why…

Top: Nokia smartphone skirt in collaboration with Fyodor Golan – Image: BT.com  Middle and above: Steven Tai x ILMxLab at London Fashion Week – Images: Techstyler

Against the backdrop of recent calamitous downturns at House of Fraser and Topshop, it seems fashion retailers have lost sway with consumers, who are increasingly shopping online, led by Instagram-connected e-commerce that allows single-swipe shopping, delivery within hours and outstanding customer service.  Why go to a store?  Stores are impersonal, and finding the right style in the right size can be slow and frustrating due to outdated and inaccurate inventory systems.  Zara is a regular disappointment in this area: Zara.com boasts stock at specified stores which isn’t actually there when you visit.  On a recent trip to Zara the staff admitted to me that their inventory is wildly inaccurate and the online stock check is not up to date.  Wasted trips equal dissatisfied customers and further back the case for shopping online instead.

“Retail on the high street is incredibly boring” were the frank words of Matthew Drinkwater, Head of the FIA during a panel discussion at their 5th birthday event recently.  As the matchmaker and orchestrator of five years of projects spanning the aforementioned smartphone skirt, the Sabinna X Pictofit Hololens mixed reality shopping experience and Steven Tai’s Live CGI presentation transporting the audience to Macau, the FIA are well versed in breaking new ground and facilitating fashion and technology collaborations for the benefit of both industries.

Images: SABINNA x Pictofit

The outcomes and learnings ultimately filter down to London College of Fashion students, arming them with next-generation tech skills in their fashion toolkit, helping them push the boundaries of fashion design and retail and shape the future of the industry.

LCF students have plenty of ideas on how to improve the retail shopping experience.  Most of these hinge on bridging the physical and digital realms, essentially draping a digital layer over physical stores to enhance and personalise them for individual customers.  The fruits of these ideas were presented at the Future of Fashion Incubator launch event, part of an ongoing partnership between Microsoft and the FIA.  LCF students teamed up with Microsoft experts to experiment with new technologies including Hololens, IoT and AI.  The students were mentored by the Microsoft team in their chosen technology in order to transform their ambitious ideas into (often mixed) reality, harnessing what Maruschka Loubser, Senior Global Marketing Manager at Microsoft, called the ‘inspirational and exciting’ vision of the students.  Their mission was to unlock the students’ innovation and here are the results…

One team of students created Hololux, a shopping platform experienced via the untethered Hololens Mixed Reality headset which presents 3D renders of products in online stores, bridging the 2D e-commerce experience with the 3D physical instore shopping experience.  Hololens headsets can be networked so that groups of people can shop together regardless of their individual locations.  Want your friend abroad’s opinion on an outfit?  Simply link up and shop together.  The team identified airport lounges as an ideal location for this experience, where travellers may want to experience luxury shopping while waiting for their plane, but at the same time avoid the chaos of the crowded and busy airport.  Totally imaginable.

Hololux in creation – behind the scenes  – Image: Microsoft

Augmenta also made use of the mixed reality Hololens, but this time for visual merchandising using holograms to simulate interior store layouts.  Their platform allows visual merchandisers to map the interior space with digital objects (furniture, fittings and clothing) via hand gestures and voice input quickly, cheaply and with less waste.  Colleagues can co-create by networking their Hololens headsets, again, regardless of location.  The team also identified an opportunity to enhance the platform with AI to provide integrated heat mapping to show the flow of people through the store and further refine and target the visual merchandising based on that.

Augmenta present at the Future of Fashion event – Image: Microsoft

Team DiDi created a garment label that allows lifecycle tracking and transparency.  By using RFID and NFC technology the label can be scanned to access details of the materials and manufacturing of the garment, providing a more current and broader version of the FIA’s previous collaboration with Martine Jarlgaard.  What’s exciting about DiDi’s concept is their consideration of brand storytelling as part of their platform, which is tailored to help brands celebrate their back-story and share it with their consumers, really making it part of the overall brand experience rather than a cumbersome ethics and supply chain document delivered up via the CSR section of their website, as with many brands currently.

AI and neural networks are exciting technical tools which allow a piece of software to be trained to recognise images and objects, based on processing a huge number of images and developing a visual ‘memory’ of them.  This is a powerful tool for visually identifying consumers wearing certain brands, styles and silhouettes – for example shoppers in a mall walking past a camera connected to this software, which can then be used to target appropriate advertising to the passing consumer.  This is the principle of Smart Signs, created by another of the LCF teams.  This tool also allows trend analysis of passers-by, which the creators say could help retailers create more targeted clothing for local markets and reduce the mass-production waste of low-demand styles.  They say the next step is facial recognition for personalisation of the Smart Signs experience.  You may find it comforting to know that this platform is much like our human brain in that it ‘sees’ passers-by, identifies their style and then dumps that data – meaning personal data is not stored.

Smart Signs demo at the Future of Fashion event – Image: Microsoft
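For the technically curious, here is a minimal sketch of how a Smart Signs-style loop could work.  The `classify_style` stand-in, the ad table and the anonymous trend tally are my own illustration of the principle described above, not the LCF team’s actual implementation.

```python
import cv2                      # pip install opencv-python
from collections import Counter

ADS = {"athleisure": "trainers_promo.png", "tailoring": "suit_promo.png"}

def classify_style(frame):
    """Stand-in for a neural network trained on labelled outfit images."""
    return "athleisure"         # a real model would infer this from the frame

def run_sign(camera_index=0):
    trend_counts = Counter()    # anonymous tallies only, for trend analysis
    cap = cv2.VideoCapture(camera_index)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        style = classify_style(frame)            # e.g. "athleisure"
        trend_counts[style] += 1
        print("showing ad:", ADS.get(style))     # target the signage to the passer-by
        del frame               # the raw image is dumped – nothing personal is stored
    cap.release()
    return trend_counts
```

The key design point is in the last lines of the loop: only the aggregate style counts survive each frame, which is the ‘see it, use it, dump it’ behaviour the team describes.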

‘Janet’ is a smart-phone-based instore shopping buddy that scans your outfit while you are trying it on in the changing room and suggests alternative styles and other garments to style with it.  It can also tell you where to get a similar style for a better price, or your size in an alternative location.  I love Janet’s everyday name, and I guess that suggests the team wants you to think of her as a really helpful and insightful shopping friend – she’s not judging, just helping.  I can imagine Janet being very helpful in a multi-brand or department store, but in a single brand store I guess Janet won’t be so welcome as she’s likely to recommend rival brands for the benefit of the consumer’s choice.

Casting my mind back to the FIA 5th Anniversary event and panel discussion, I remember the input of Mohen Leo of ILMxLab, the team which created the Live CGI for Steven Tai’s London Fashion Week presentation.  “You can achieve ‘stickiness’ in retail by adding a digital layer, providing a different experience each time”, he said.  Clearly this gives shoppers a reason to visit a physical store.  “Shopping is only about emotions and emotional connection”, said Matthew Drinkwater.  He then went on to say that technology affords an opportunity to enhance this connection and emotion.

Images: FIA

For those attending Fashion Week, the all too familiar break-neck speed of the shows and presentations often leaves the audience with a feeling of visual overload.  Each show blends into the next, as there is rarely an experiential layer – just the immediate visual presentation of the clothes.  This used to be enough, but not anymore.  The success of Steven Tai’s Live CGI show was its engaging combination of digital and physical worlds, which transported the audience to Macau in an ever-changing landscape that drew them into its subtle evolution as the models meandered around the stage alongside a digital counterpart.  To quote Vicki Dobbs-Beck of ILMxLab, this was “storyliving, not just storytelling”.  In this recent BBC article, the House of Fraser team said it “urgently needs to adapt” to “fundamental changes” in the retail industry.  An emotional and engaging experience is what retailers and brands need to offer, and the tools with which to do this lie in augmented reality and artificial intelligence.

For a snapshot explanation of the difference between VR, AR and MR, click here

Header Image: Microsoft

Follow Techstyler on Instagram and Twitter

Wired 2016 Presents What’s New in AI, VR and Tech Activism

Despite an absence of fashion tech at Wired 2016, the annual conference (too dull a word for the excitement served up) demonstrates fashions in tech, of a sort.

It’s that time of year, pre-Christmas, when many a head is full of ideas, swarming with information from dozens of conferences, meet-ups, launches, talks and exhibitions, and when it’s time to cut through the noise and find out what to focus on – some of which you may have heard of and some you definitely won’t have.  Welcome to Wired 2016.

Those with true passion for innovation know that the most exciting ideas and creations arise from special situations involving special people, whether they be from tech, medicine, art, music, engineering or social sciences.  Ideally, they’ll be a mix of these fields.  I look forward to seeing fashion added to this mix as the fashion tech sector grows on the back of the launch of Plexal and other cross-disciplinary hubs.


Before the talks kicked off I browsed the demo area and was struck by the COLLAPSE SCULPTURES (above) by ScanLAB Projects, who gave a fascinating talk at Wired 2015 and are a team specialising in large-scale 3D scanning for architecture and the creative industries.  COLLAPSE is a collaboration between ScanLAB Projects, dance company New Movement Collective and composer/cellist Oliver Coates.  This series of sculptures features digitally fabricated fragments of dancers’ limbs which are suspended, lingering where their performer once created them.  Traces of movement are solidified and stand as physical echoes.

From art to tech, standout talks at Wired 2016 included Hike, the Indian messaging app that works offline (useful in a country where connectivity is patchy and data is bought in packages) and that transcends India’s dozens of languages and avoids complex keyboards by using digital stickers as tools of communication.  50% of households in India share smartphones, so the privacy feature that allows selected conversations to be hidden is a hit with young family members.


Mustafa Suleyman co-founded DeepMind, now owned by Google, and is forging ahead with the application of AI to solve some of the world’s biggest problems.  The use of AI diagnosis in medical imaging can speed up treatment times for cancer and improve patient prognosis.  DeepMind are attempting to solve the problem of most NHS data currently being written on paper, and therefore being largely inaccessible.  Mustafa says “In life, data is pushed to us.  In the NHS it’s passive”.


Syrian human-rights activist Abdulaziz Alhamza is the co-founder of RBSS – Raqqa Is Being Slaughtered Silently – a defiant broadcasting platform that exposes the devastation and brutality caused by ISIS in his home town.  RBSS covertly captures images and videos, sharing them on social media and acting as a source for news organisations.


Philip Rosedale is CEO and co-founder of High Fidelity – a shared VR platform whose global users meet in VR “locations” around the virtual world.  It’s like creating your own avatar, hanging out with other avatars and socialising with them, like you might do in real life.


Adding to the Indian flavour running through the two days of Wired, Gingger Shankar told a beautiful story of her experience in the musical family made famous by Ravi Shankar, and of the plight of her mother, who broke out of domesticity to sing on a global stage.  The gems of Wired are in the unexpected, and I captured her playing the ten-string double violin and sharing with us her five-octave voice.  Enjoy, and stand by for part two of my coverage of Wired 2016.


Follow Techstyler on Twitter and Instagram

The Athletic-Tech Garment Making the Virtual Real : Introducing Skinterface

We are fully versed in the realm of our physical world and increasingly dipping into the virtual world through virtual reality experiences, but what of the space in between?  What of the transition realm – a corridor, if you like, between the real world and the virtual, through which we pass before arriving at a state of VR immersion?  Think about the experience of entering the virtual world and the need for all of our senses to be stimulated in order for the virtual experience to feel real.  Drill down even further to consider the organ through which we physically feel – the skin.  Herein lies the connection and transition area of real to virtual.



The Matrix corridor – a representation of the space between the real and virtual worlds

Skin is a powerful tool that allows us to communicate on a highly intricate level.  It also communicates who we are biologically and culturally, making it a potent social and physical organ.  What happens when a team of curious minds consider the meaning of skin and how skin can transition us from a physical to a virtual experience?  Skinterface is born.

Skinterface is the work of RCA students Andre McQueen (footwear designer and trend forecaster), George Wright (engineer), Ka Hei Suen (kitchen product designer) and Charlotte Furet (architect), who embarked on their MSc / MA Innovation Engineering Design course out of curiosity and a desire for collaboration outside of their immediate professional realms.  An admiration for each other’s individual project work led them to work together as a team of ‘sensory architects’.  The initial exploration for the Skinterface project was broad and posed months of questions about the sensory experience and perception of touch, but it began with a very simple test: wearing a plastic bag on the hand, immersing it in water and noting the sensory experience.  Although the water doesn’t touch the skin it is still felt – the sensation of water on the hand is experienced.  This underpins the working nature of the very human and very wearable piece of tech that is Skinterface.


Mood board images and the initial plastic bag test

The first question posed at the beginning of the project related not to creating a defined product, but to how to create something very human, integrated with technology.  Touch is a powerful human tool, and relaying it using technology seems a powerful new dimension of communication in a digital age.  Skinterface is a one-way communication tool – the sensory experience is delivered according to the location of the Skinterface garment within a 3D-mapped space, by tracking its coloured surface details and delivering the sensory experience accordingly.  An extension of this is a dual tool using the same tech, but allowing pressure on one part of the tool to affect the sensation delivered by the other.  The implication of this is the potential to touch someone in another location, even in another country.


Skinterface at Milan Design Week, 2016

The set of garments created by the team deliver sensory pressure by essentially using a speaker in reverse, so that sounds create a varying electromagnetic field, which in turn is calibrated to produce varying sensations on the skin.  These sensations are delivered via a coil and magnets encased in 3D printed caps, created at Imperial College London and adhered to the garments, which require close skin contact to accurately deliver the sensation.
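As a rough illustration of that principle (sound in, calibrated skin pressure out), here is a minimal sketch: a windowed RMS envelope maps an audio waveform onto 0–1 drive levels for a coil actuator.  The window size, the normalisation and the synthetic ‘whoosh’ are my assumptions for the example, not the team’s actual calibration.

```python
import numpy as np

SAMPLE_RATE = 44_100          # Hz
WINDOW = 441                  # 10 ms envelope window

def drive_levels(audio):
    """Map an audio waveform to 0-1 drive levels for a coil actuator.

    Illustrative only: a windowed RMS stands in for whatever mapping the
    Skinterface team actually uses between sound and skin pressure.
    """
    pad = (-len(audio)) % WINDOW
    frames = np.pad(audio, (0, pad)).reshape(-1, WINDOW)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    return np.clip(rms / rms.max(), 0.0, 1.0)   # normalised coil amplitude

# A bird-like "whoosh": a low tone swelling and fading over one second
t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
whoosh = np.sin(2 * np.pi * 80 * t) * np.hanning(len(t))
print(drive_levels(whoosh)[:10])   # per-10ms pressure levels sent to a module
```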


Imagine the sound of a bird flying past you and the sensory experience induced by the change in air pressure caused by the bird’s movement – that’s what Skinterface delivers.  In a virtual world, the sound of all manner of objects can be programmed and delivered via the coil and magnet-driven modules that apply just the right amount of pressure to mimic that same sensory experience as though it had happened in the real world. This skin beyond skin is poetically demonstrated in the video below, from beginning to end.

When asked about the aesthetic component of the design, Andre cited the current athletic lifestyle (or athleisure) sportswear evolution and brainstorming about what clothing will look like 40 years from now.  Andre is a Cordwainers graduate who launched a streetwear fashion label then moved on to fashion forecasting, working extensively with global brands to evolve their trend-driven products.  His curiosity for exploring the technical side of fashion and design led him to the Innovation Engineering Design Masters at the RCA, but he still has a firm grip on where the fashion market is headed.


A selection of Andre McQueen‘s design work

When I ask the team how they envisage this exciting innovation being used, they mention the sex industry, gaming, entertainment and fashion.  The sex industry is an obvious one, as are gaming and entertainment, but fashion?  Andre sees an opportunity to translate the sensation of wearing a multitude of different fabrics into a sensory ‘digital library’ that can be felt by wearing Skinterface.  Wonder what your cotton trench coat would feel like in felted wool?  Skinterface can give you that sensation.  There is as much scope here for customer-led retail experiences as for fashion designers considering the weight and drape of various fabrics when designing garments.


A library of sounds could be created to induce all manner of sensory experiences through the Skinterface suit.  The team talks about a dream open-source library of thousands of compositions, and even whole scores for feature films that could be felt while they are watched.  Theoretically, the score for each character could be written according to what they experience in the film, and as a Skinterface-wearing viewer you could experience it too.  The thought of experiencing a film dozens of times from different characters’ points of view is mind-blowing.

I leave the team with just five weeks remaining before they complete their studies and exhibit the work arising from two intense years of exploration, research and experimentation.  On my way out of the Darwin Building at the RCA, Andre and I muse about a common paradox in fashion design – final design decisions are often made at the beginning of the design process, leaving little room for curiosity, exploration and design evolution.  Educational institutions including the RCA are a unique breeding ground for such curiosity and I look forward to seeing where this has taken the IED students, both physically and virtually.

The RCA MA/MSc Innovation Engineering Design exhibition is in June at the Royal College of Art

Follow me:  Twitter @Thetechstyler  and  Instagram @techstyler

Noma’s Food Language, GPS-Tracked Garbage and the World Exclusive of Pussy Riot’s Dismaland Music Video. It must be WIRED 2015, Part 2!

Continuing the theme of Artificial Intelligence (covered in part 1), Carlo Ratti presented some exciting work from the MIT Senseable Cities Lab.  A standout was a trackable waste project in which members of the public brought in 3,000 pieces of garbage, which were then fitted with GPS sensors and tracked, with alarming results.  Some objects made it across America and were still on the move after two months.  The project increased awareness of recycling and changed participants’ garbage disposal behaviours.

Carlo then demonstrated that the implications of self-driving cars in the future will extend beyond safety (eliminating human error – the cause of over 90% of road traffic accidents) to reducing congestion.  Traffic lights would be eliminated and a slot system used, with interconnected cars sensing each other’s position to avoid collision.  The potential use of one self-driving car to drive all the members of a family (and their friends/neighbours) to their respective destinations, blurring the lines between private and public transport, is an interesting prospect too.  With such a system, cities could meet all residents’ current transport requirements with only 20% of the number of cars currently on the road, based on journey research done in New York.  Imagine the reduction in traffic and congestion, and the improvement to air quality!  Lastly, Carlo showed us a video of SkyCall, a UAV-enabled app created to respond to requests from MIT visitors to guide them to whichever room number they tap into the app.  The UAV talks the visitor through the sights on campus along the way.  On a campus with hundreds of rooms scattered over a vast area it’s a cool idea.  I wonder if it could be used for events: directing people to the correct seat at a fashion show, or the theatre, perhaps.  Check it out here:

The afternoon session was opened by Rene Redzepi and brought culinary adventure to proceedings with a detailed foray into the scientific experiments of the famous Danish restaurant he co-founded, Noma.

Noma create flavours.  They seek not to use ingredients for dishes but to make building blocks of flavour that can then be combined to create either a sauce or a dish.  It’s akin to writing a new food language, with the building blocks being new words (new flavours) and the combination of them (the sentence) being the dish, as explained by Noma R&D chef Lars Williams, accompanied by Arielle Johnson, flavour scientist.  Fermentation is what underpins this language, and through a series of taste tests they demonstrate that fermentation is a cooking utensil at Noma, rather than simply a technique.  It is their chief mode of experimentation and gives rise to new and complex flavours that develop over time.  Lars explains that these new flavours cannot be manufactured.  The ingredients are controlled precisely in DIY vessels the team made out of shipping containers, with a temperature range of minus 50 to plus 60 degrees and exacting humidity control.  It’s a fascinating insight, and given that it’s a non-profit initiative serving only 45 covers for each of their lunch and dinner services, it’s admirable.



Cute illustrations by flavour scientist Arielle Johnson

We were given a goodie bag of vials from the Noma kitchen to taste during Lars and Arielle’s talk, one of which contained fermented grasshopper garum, which had a fishy miso-like flavour.  Here’s how Lars makes it:

Here’s Noma’s story in the words of Rene Redzepi:

The food-inspired architects responsible for the following imagery, Christopher Pierce and Christopher Matthews, take food and turn it into materials, shapes and schematics for buildings and landscapes.  Their imagination runs riot and gives rise to a world of food-based experiments that result in fantastical architectural plans.

The moving urban farm plan, Copenhagen

Architectural plan for a city decomposing left to right

The images below are from a project to create architectural plans from leeks.  The leeks were dehydrated in an oven and porcelain was then poured into them to reveal the dried layers inside, to stunning effect.



For more information about the two Christophers’ projects, click here.

The perfect punctuation for these meaty (sorry, I couldn’t resist) talks was live music curated by Denzyl Feigelson, founder of AWAL, including breakthrough Brit artist Izzy Bizu, who is currently touring with Rudimental.  She came over to say hi after her stunning performance of White Tiger to let me know my knitted outfit and multi-coloured platform trainers had caught her eye whilst she was singing.  I promptly dispensed my card so keep an eye out for Izzy Bizu in Brooke Roberts Knitwear.


A medical imaging treat came in the form of neuroscientist Sophie Scott’s MRI scan of Reeps One beatboxing.  It’s a fascinating insight into the function of the brain during the execution of complex sounds and the areas of the brain most active while generating them.  It satisfied my craving for medical images – I left radiography only two months ago.


The theme of intelligence runs throughout all the talks at Wired 2015; whether derived from the application of AI or from the analysis of Big Data, the future looks set to provide better healthcare, safer roads, cleaner cities and a more connected global community.  The day wraps up with a softly-spoken but defiant instalment by Nadezhda Tolokonnikova of Pussy Riot notoriety.  Although she didn’t speak on digital activism as billed, she lent a voice of independent thought and personal struggle.  She spoke of her belief in a life without borders and declared herself a citizen of the world, opening with a joke that went something like: “When travelling, at border control the officer asks ‘Occupation?’ I say no, just a holiday”.

Wearing a dress inscribed “Non Stop Feminist” and “My body is a battleground”, apparently echoing the words in artist Barbara Kruger‘s 1989 piece ‘Untitled (Your Body is a Battleground)’ – a visual commentary on women’s rights over reproduction and the continued feminist struggle – she had an aura of rebellion and delivered a mandate based on living and acting outside the parameters of what she described as slow, inert governments.  She cited technology as a tool for rebellion and free speech, which seems refreshingly simple after a day of high-concept tech and rampant futurism (not a complaint, it’s just a welcome grounding moment to think of the here and now and reflect on where we are).

Reflecting on the talks and ideas shared today by the world’s game changers and askers of questions most of us have never come close to contemplating (until now), it strikes me how readily members of the art, architecture, science and technology communities collaborate across disciplines.  A chemist working with a chef at Noma, artist Eyal Gever collaborating with NASA on art in outer space, architects working with chefs to develop concepts for buildings and landscapes based on food – and the greatest thing is, the sum of the parts is richer, more surprising and arguably more legitimate for having sought beyond the boundaries of its own discipline.  It makes me wonder: why isn’t the fashion industry more eager to collaborate with those from other disciplines – scientists, architects, engineers?  The idea of legitimacy is a complex one, but it does seem that having outside input, and therefore sharing credit for the design and presenting a collective rather than singular ‘vision’, is something fashion designers and perhaps the industry at large are very uncomfortable with.  In fashion, the Creative Director / Design Director is seen to possess the singular vision and takes all credit, making collaboration from an equal (and opposite) professional seemingly unwelcome.  I feel fashion’s the poorer for it.

You only have to look at the discomfort and awkwardness of the current FashTech/wearables offering to see that the fashion and technology industries are not really collaborating yet.  The most innovative strides in the fashion industry are being made by sportswear brands like Nike, with their groundbreaking “Flyknit” trainers – a fusion of shoe design, creative coding, digital industrial knitting and engineering.  Another example is retailer Marks and Spencer, with their technical textiles and antibacterial, non-iron, machine-washable, Teflon-coated finishes that withstand years of abuse.  For all the hype about ‘creativity and innovation’ in the fashion industry, the visible and lauded designers aren’t leaping into the future, innovating or really questioning what is possible and trying something new.  They are not ready, or willing, to take the collaborative steps to do so, it appears.  The open-source aspect of science and technology is a practice fashion could benefit from.  How do we learn and grow if we don’t share and question what we do?  How do we solve complex problems when only looking within the realms of what we already know, rather than seeking the perspective and skills of other professions?

As I finish writing this article I get confirmation that Hussein Chalayan has agreed to an interview with me to discuss his contemporary dance collaboration at Sadler’s Wells, Gravity Fatigue, and his work as a fashion designer.  So I’ll prepare to eat all of the last paragraph’s words, thankfully!

Nadezhda closes Wired 2015 with a world exclusive of the new Pussy Riot music video for “Refugee In” shot at Banksy’s Dismaland.


And with a hop and a skip, she’s off!


Follow me:  Twitter @Thetechstyler  and  Instagram @techstyler 

Fashion’s Robots: McQueen vs Plein

Just a day after publishing my first blogpost on AI, robotics and fashion came a runway show featuring all three. Designer Philipp Plein showed his SS16 collection at Milan Fashion Week featuring robot band Compressorhead with Courtney Love on vocals and robot arms ‘styling’ the models with sunglasses and bags as they travelled along a conveyor belt. There were also drones flying overhead, whose function I’m not sure of. Robots in fashion suddenly seem topical.


Header image by NYTimes. Image above by Wonderland Magazine

Dazed Digital reported that Philipp Plein’s shows are not normal shows (previous shows have included rappers on jet skis in customised Plein pools), but never had he taken a show this far in terms of concept and techno-grandeur. Each season he aims to outdo himself, apparently leaving the crowd wondering “what next?” I wasn’t at the show, but watched it in full here.

What interested me (beyond the impressive robot theatrics and investment in spectacle) was the reaction. A mixture of wonder, horror, anger and derision. I was lecturing today and played the show video for my fashion students and a handful replied in horror “but they’re just copying McQueen!” “McQueen did it better/first!” The rest were silent/amazed. Instagram is littered with similar comments. Well, McQueen did show industrial robots in his disturbingly beautiful S/S 1999 show, but with a very different theme. His concept was inspired by an installation by artist Rebecca Horn of two machine guns firing blood-red paint at each other.


Hear model Shalom Harlow speak about her experience being painted by the industrial robots in McQueen’s show.

Philipp Plein’s show was a rock and roll glamour extravaganza in which the robots were cool props but not, apparently, integral to the delivery of the clothing and the collection’s narrative. It looked like one hell of a show though, and hugely entertaining. Nothing wrong with that, surely? Ellie Pithers at the Telegraph said “If it was just entertainment – and the clutch of blondes jiggling along next to me in their Plein studded boots and slashed jersey dresses certainly enjoyed themselves – then it was spot on.”

I think the integration of technology, AI, robotics in the fashion product and its delivery is key if such a show is to convince.  At the whiff of gimmickry, maybe the audience recoils a little? Authenticity is paramount in delivering a show that people connect with and truly buy into (on a deeper than commercial level). So, impressive as it is, maybe it’s not entirely convincing? Hashtag showoff?


Images: Wonderland Magazine

The only connection between the McQueen and Plein shows is robots. The use of robots as technical tools, and the motivation behind them, is entirely different. With McQueen’s use of technology, the story he was telling and the clothes themselves were still always the main event and the driving force. It was reported that the industrial robot arms that painted Shalom Harlow’s dress #13 took a week to program in order to ensure a choreographed connection with Shalom and the expression of McQueen’s vision. In contrast, at the Plein show, Compressorhead were doing what they already do (see them in action below) and the industrial robot arms, which are designed to perform the action of moving objects to and from a production line, appeared to be doing just that. There’s no apparent integration of the technology and the reason for the show in the first place – the clothes. The show reviews I have read of Plein’s SS16 collection barely mention the clothes. Indeed, the show began with Courtney Love and Compressorhead rocking out. It seemed to be a declaration of entertainment first, fashion second.

As a (cool, but slightly disturbing) aside, Compressorhead were created by Robocross machines, who have machines for all occasions. Check out Stickboy’s fascinating CV including date of birth (2007), specifics (four arms, two legs and one head) and playlist…



For those concluding the Plein show was a “ripoff” of McQueen’s, by that definition, any designer using the same machine/object/material as another designer would be copying too. However, it’s apparently fine to use the same materials/colours/silhouettes as other designers (indeed that helps to create a trend, so that’s quite useful for the industry) but it’s less acceptable to use the same technology/theme to present a show. Most designers wouldn’t go anywhere near a show theme utilised by McQueen. Since McQueen used the Pepper’s Ghost technique to project a flutteringly angelic Kate Moss, who’d go there? I’m not suggesting designers shouldn’t go there, I’m merely illustrating the point that they generally don’t. Except Plein.

Fashionista quoted Plein as saying he feels he’s an industry outsider, that he doesn’t have support from the industry and that he has a deep-rooted fear that people won’t turn up to his shows, as they didn’t in the beginning. He insists that as a self-funded, investor-less concern competing with the likes of Gucci and Chanel, his spectacular shows are par for the course and that he is spending less than his rivals (who aren’t criticised for such excess). There’s a definite tinge of derision in the article as they go on to claim he “rants” about robots taking over. The Elle headline below is also derisory. Fashionista do, however, admit that Milan Fashion Week would be a lot less fun without his spectacular shows.



Following her comment that the entertainment factor was spot on, Ellie Pithers went on to say of the Plein show: “If it was meant to be a parody of the fashion industry – the conveyor belt demands of the schedule, the robotic nature of trends, the deliberate mechanics behind product placement – it was even better.” I’m not convinced there were metaphorical intentions and this, her final statement, seems a fairly cold conclusion. The choice of Kraftwerk’s “Robots” and “The Model” further supports the literal nature of Plein’s message.

I think there’s a lesson to learn here about future conversations involving those in technology and fashion seeking to fuse disciplines. Integrate or irritate. The Apple Watch has been met with a tepid response for failing to create a genuine, desirable, aesthetic fusion. “Just because you put a strap on it doesn’t make it a watch”. Watch out, Wearables.

Follow me:  Twitter @Thetechstyler  and  Instagram @techstyler

The Real and Imagined World of Robotics and Artificial Intelligence In Fashion

I am a hybrid. I began my career as a radiographer (x-raying and scanning patients) before studying fashion design and pattern-cutting. Upon graduation I worked simultaneously in hospitals and fashion studios, leading me to launch a fashion label in 2009, creating digital knitwear inspired by CT and MRI scans, named Brooke Roberts. I am now a designer, consultant, lecturer and blogger with a fascination for combining science, technology and fashion, both in a real and imagined sense. Anything is possible. 

ARTIFICIAL INTELLIGENCE AND ROBOTICS are generally met with a mixture of fascination and fear.  They’re certainly hot topics – the Google car is the most recent high-profile example, telling us that AI and robotics are developing quickly and will soon become part of our everyday lives.

Twitter fed me an intriguing tweet about a robotics event called TAROS, hosted by The University of Liverpool Computer Science Department, headed by Professor Katie Atkinson.  TAROS was open to the public and aimed at demystifying AI and robotics and sharing advances and potential uses for them.  I couldn’t make it to TAROS, but Professor Atkinson and the team agreed to an interview on another day, and so begins Techstyler, a blog about science, technology and style.



My pre-reading led me to this summary of AI: it is aimed at having computers make decisions for us by analysing many factors (more than we could digest), in a quicker timeframe than we could analyse them, and coming up with an answer or series of solutions.  In other words, it streamlines the decision-making process – so AI is not such a scary concept.

When it comes to robotics, it’s a common misconception that all robots are human-like, or at least that that’s the end goal of robotics.  In truth, many robots are built and programmed to do one very specific task and do not have any interactive or independent processing capability.  For example, they might pick up a pen from a conveyor belt and put it in a box.  They will repeat the exact same action over and over –  even if a human moves into their path, they will continue their programmed action.  Since killing people is NOT the aim of robots, this limitation is something that the team in the SmartLab at The University of Liverpool are working to solve, amongst many others.

Professor Atkinson (who introduced herself as Katie and is informal and very approachable) took me behind the scenes at the SmartLab to meet the team, which is headed up by Professor Karl Tuyls and includes PhD students Bastian Boecker, Joscha-David Fossel and Daniel Claes.  The SmartLab team are exploring how we can take robots away from the static, limited industrial type to the type that move around and interact with their environment; they can be programmed to work in groups, based on biological models – like the way bees and ants work together – and they can use laser and infrared technology to understand where they are, detect objects and sense the presence of humans.

The guys in the SmartLab

The SmartLab team awards including the RoboCup Championship trophy, 2014

I spend an afternoon chatting to the guys in the SmartLab about their work and messing about with their robots, leaving me inspired and full of wonder (and them further behind in their PhD work).  They were really generous not just with their time, but in sharing their knowledge.  This is something that always humbles and inspires me about science – it exists to share knowledge and enlighten.  By contrast, the creative industries, and fashion in particular, can be very secretive and exclusive, which may explain why, paradoxically, fashion is in many ways resistant to change and evolution.  I talk about this in another upcoming blogpost with designer Sarah Angold, so stay tuned.

Daniel begins by telling me about RoboCup@work and the robots they’re developing with the aim of helping humans in a factory and advancing away from the static pre-programmed robot arms (which are really huge, heavy, limited to one action and housed in cages surrounded by lasers) to robots that can reason and react to their environment.  Total re-programming of static robots is required when the production or task changes within a factory, so this is an inconvenient and expensive limitation.

The robot prototype (featured in the video below and as yet unnamed – I ask why?!) is not static but on wheels, and is able to analyse its environment and carry out a given task in response to it.  In this case, Daniel types in commands and the robot then responds to a specific request to pick up a bolt and nut from a table and drop them in the corresponding shaped holes at another table nearby.  It uses a camera to scan its environment (it has a set of environmental objects which it has been programmed to recognise), collects the correct objects, then moves and drops them.  Here it is in action:

I took a couple of snaps of the pieces the robot picks up and the respective shapes the robot drops them into (Literal footnote: My boots are Dr Martens X Mark Wigan):

The items the robot chooses from

The respective shapes the robot drops the items into

My thoughts turn to the use of robots in fashion. Perhaps the most beautiful and arresting use of robots in fashion so far has been Alexander McQueen’s haunting SS 1999 show:

The choreography of the robot arms excites my imagination, like snakes priming to attack. The industrial robots in the McQueen show are the type used to paint cars in a manufacturing plant. I start wondering how robots could impact the fashion industry once the development the SmartLab guys are doing becomes accessible, particularly since biological algorithms are being used to develop robots that could potentially interact as a ‘choreographed group’.

Bastian is working on swarm robotics, which means having a bunch of robots that control themselves using really simple (I laugh and remind Bastian that’s a relative term) algorithms based on attraction and repulsion – just the way a flock of birds forms a swarm as a group, rather than being directed from a centralised source.  They are really robust: for example, you can take a couple of robots out of the system and the flock will still work.  Bastian imagines his flock being used for planetary exploration with more localised connectivity, rather than for broadband-based military-type operations.  Katie further explains that having many small robots that are interlinked means that if one robot breaks down the others can continue exploring – the mission isn’t scuppered.  Motion sensor-based communication between robots allows them to spread out and cover a large area – for example an earthquake disaster zone – and scan the terrain for movement or people.
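To make the idea concrete, here is a toy attraction/repulsion step in Python – my own simplification of the flocking principle Bastian describes, not the SmartLab code.  Each robot drifts towards the group and pushes away from neighbours that get too close, with no central controller.

```python
import numpy as np

N, DIM = 12, 2
ATTRACT_GAIN, REPEL_GAIN, REPEL_RADIUS, STEP = 0.05, 0.08, 0.5, 0.1

def swarm_step(positions):
    """One update of a simple attraction/repulsion swarm - no central controller."""
    new = positions.copy()
    for i, p in enumerate(positions):
        offsets = positions - p                       # vectors to every other robot
        dists = np.linalg.norm(offsets, axis=1)
        attract = ATTRACT_GAIN * offsets.sum(axis=0)  # drift towards the group
        close = (dists > 0) & (dists < REPEL_RADIUS)  # neighbours that are too near
        repel = (-REPEL_GAIN * (offsets[close] / dists[close, None] ** 2).sum(axis=0)
                 if close.any() else 0.0)
        new[i] = p + STEP * (attract + repel)
    return new

positions = np.random.rand(N, DIM) * 5  # scatter 12 robots over a 5 x 5 area
for _ in range(100):                    # delete a row (a robot) and the rest keep
    positions = swarm_step(positions)   # flocking - that is the robustness Katie describes
print(positions.round(2))
```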

The main source of inspiration for Bastian’s algorithms that “drive” the robots is bees and ants. Both bees and ants use pheromone communication.  Bees use a dance for navigation which involves going back and forth to the beehive once they have found food, to demonstrate to other bees the vector pathway to the food source.  Bastian shows me a really cool video of how he and the SmartLab team created “bee robots” to perform this dance:

“Bee robots” in the SmartLab

Imagine the use of swarm algorithms to create choreographed robot runway models. The swarm algorithms would be supported by the development Joscha is doing on “state estimation” – the goal that a robot should know exactly where it is in relation to other things.

The runway robot models would know where they are in relation to each other and could use lasers to interact with the crowd, making ‘eye contact’ with viewers and giving them a wink or cocking a hip. I’d love to see that. I’m re-imagining the Thierry Mugler show in George Michael’s music video “Too Funky” with robots as the models.

**Incidentally, this is the video that made me realise I loved fashion and wanted to work in the industry, back in 1992**

Robot Naomi, anyone??

Naomi Campbell by Seb Janiak

Maybe I’ll start the first ever robot model agency? It may at first seem bizarre, but it would solve the problem of real people trying to achieve unreal proportions. I am not for a second saying I think the modelling profession is doomed or needs replacing (maybe just disrupting), I just think there’s merit in an alternative. If you think about the way the industry currently works, editorial images are often manipulated to extreme non-human proportions (again, this can be creatively interesting, so I’m not dismissing it entirely) but the human aspect is significantly diminished.

Another area of biological algorithms Bastian is working on is replication of ants’ passive technique of dropping pheromones to communicate with each other. This could be applied as digital pheromones designed to mimic this communication, dividing labour between robots and solving complex problems. Robots can’t create biological pheromones, but there is development being done in this area. Imagine adding the ability to exude pheromones to runway robot models and a whole new heady mix of interaction and attraction is possible. Here’s a demo of the digital pheromone communication concept:
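Alongside that demo, here is a toy sketch of the idea in code – my own illustration of digital pheromones, not the SmartLab implementation.  A shared grid stands in for the pheromone trail: robots mark the cells they have covered, the marks evaporate over time, and each robot moves to the least-marked neighbouring cell, so the group naturally divides the space between them.

```python
import numpy as np

GRID = (20, 20)
DEPOSIT, EVAPORATION = 1.0, 0.05

class PheromoneField:
    """A shared 'digital pheromone' grid for dividing labour between robots."""
    def __init__(self):
        self.field = np.zeros(GRID)

    def deposit(self, cell):
        self.field[cell] += DEPOSIT        # mark a visited / completed cell

    def evaporate(self):
        self.field *= (1.0 - EVAPORATION)  # old marks fade, freeing cells for revisits

    def least_marked_neighbour(self, cell):
        r, c = cell
        neighbours = [((r + dr) % GRID[0], (c + dc) % GRID[1])
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
        return min(neighbours, key=lambda n: self.field[n])

field = PheromoneField()
robots = [(5, 5), (15, 15)]                # two robots sharing the same space
for _ in range(50):
    for i, cell in enumerate(robots):
        field.deposit(cell)                            # leave a digital pheromone
        robots[i] = field.least_marked_neighbour(cell) # head for uncovered ground
    field.evaporate()
print(field.field.round(1))
```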

The discussion moves to flying robots and Joscha explains further that state estimation currently works well on ground robots, but with flying robots it gets more difficult because of the extra dimension of tilt/pitch.  The Xbox Kinect has a 3D camera that can be used on this type of robot prototype.  Laser scanners can also be used to give a plane of distance measurements: based on the time it takes for the laser light to reflect back off the surfaces of the objects surrounding it, the robot’s distance from those objects can be calculated.  It makes me think of medical imaging and OCT (Optical Coherence Tomography) technology, where it’s possible to use a laser inside an artery to measure its dimensions – this is exactly the same principle, so I immediately get what Joscha is on about.  Using the laser scanners, Joscha’s flying robots generate 3D maps:
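As an aside, the distance calculation behind those laser scans boils down to one line of arithmetic.  A quick sketch of the principle (my own illustration, not SmartLab code): the pulse travels out and back, so the round-trip time is halved.

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_seconds):
    """Laser time-of-flight: the pulse travels out and back, so halve the round trip."""
    return C * round_trip_seconds / 2.0

# e.g. a reflection arriving 20 nanoseconds after the pulse was emitted
print(round(tof_distance(20e-9), 2))  # ~3.0 metres
```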

I mention drones at this point and Katie explains that they avoid the term because of a general perception of military use, and instead use UAVs (unmanned aerial vehicles).  It occurs to me again at this point that it must be a difficult and uphill battle to promote their advances in AI and robotics when such misconceptions exist.  Joscha shows me his flying UAV, which uses silver reflective 3M tape as markers (which, coincidentally, I have used in yarn form for knitting) to understand its exact location.  It is connected to a large geometric pitchfork-like object with silver balls on the end, which allows me to direct it through the air with sweeping arm movements.  Here are the guys in action:

Joscha’s UAV with silver 3M balls

A jumper from my AW11 knitwear collection with 3M yarn and camel hair

The flying UAV makes me think of a video I saw recently of the brilliant Disney illustrator Glen Keane, who wore a VR headset and stepped into his illustrations, using sweeping arm movements with a paddle to draw in the virtual space around him.  He immersed himself in his drawing and saw his illustrations in real-time 3D.  The video of him ‘stepping into the page’ is well worth a look.  So I guess I made an invisible air drawing with a robot.

I ask Katie whether there are barriers to spreading an accurate message about the work being done in AI and she cites “science fiction stories of dystopian societies where the robots take over” as a big source of misconception.  She is a passionate ambassador for AI, having worked on the research topic of ‘AI and law’ for over 10 years.  She has been working with a local law firm for the past year to expand their use of AI and enable huge amounts of legal information to be analysed quickly, providing accurate and fast solutions.  The whole impetus behind AI is problem solving.  It’s about making our world a better and safer place, for example in disaster relief situations and with self-driving cars.  Katie shares some data on road traffic accidents: 97% of accidents are caused by human error.  That’s a compelling statistic.  Who doesn’t want to be safer on the road?  For the record, Katie tells me that self-driving cars, like those being developed by Google and Bosch, are expected to be launched in stages, from partial self-driving initially to 100% self-drive in the final stage.  So it won’t mean jumping into a vehicle and immediately relinquishing control.

Katie also shared with me some cool robots created by her Computer Science undergraduate students using Lego Mindstorms that demonstrate the concept of the self-driving car.


The ‘car’ has sensors on each side and at the front.  The sensors detect the lines on the board (the road) and the car adjusts its direction and speed accordingly in order to stay within the black lines (stay on the road), speed up when it passes over the green lines and slow down or stop when it passes over the red.  The cars are programmed with code written by the students and uploaded directly from a PC into the basic computer box that comes with the Lego Mindstorms kit.  Pretty ingenious.
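The control logic is simple enough to sketch.  Below is my own reconstruction of the behaviour described, written in Python rather than the students’ actual Mindstorms code, with hypothetical `read_colour`, `steer` and `set_speed` helpers standing in for the real sensor and motor calls.

```python
# Hypothetical sensor/motor helpers stand in for the Mindstorms API the students used.
def read_colour(sensor_name):
    """Return 'black', 'green', 'red' or 'white' for the named sensor."""
    ...

def steer(direction):
    """-1 = left, 0 = straight, 1 = right."""
    ...

def set_speed(level):
    """0 = stop, 1 = cruise, 2 = fast."""
    ...

def drive_loop():
    speed = 1
    while True:
        left, right, front = read_colour("left"), read_colour("right"), read_colour("front")
        # Stay on the road: if a side sensor sees the black edge, steer away from it
        if left == "black":
            steer(1)
        elif right == "black":
            steer(-1)
        else:
            steer(0)
        # Speed rules painted onto the board
        if front == "green":
            speed = 2        # speed up over green lines
        elif front == "red":
            speed = 0        # slow down / stop over red lines
        set_speed(speed)
```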


I’m writing up this post while attending London Fashion Week, which has just relocated from Somerset House to the Brewer Street Car Park in Soho, on a pokey corner of a one-way street, causing a crammed carnival/circus atmosphere and serious logistical issues. I wonder if the Google car could solve this problem? Maybe editor-sized Google self-driving pods could locate their editor by GPS and navigate them quickly from one show to another while they focus on updating their Twitter and Instagram rather than worrying about traffic jams.  It would be a kind of cross between the Google car and GM Holden’s Xiao EN-V electric pod, charging itself in between shows. There’s no way those big old Mercedes cars that have been the long-term ferrying service for LFW editors and buyers are going to get in and out of town fast enough with the new LFW location. The frustration at LFW was palpable outside the shows.

Google Car

GM Holden’s Xiao EN-V

So what can’t AI do?  I ask Katie about the use of AI and robotics in the creative industries.  Her response, “people regard this as the great separation between man and machine – the ability to be creative”, leaves me feeling relieved my job as a designer is safe.  Interestingly, a large part of Katie’s research work has involved categorising objective versus subjective information and determining how that information should be used in AI.

In terms of creative use of AI, examples exist in the gaming and film industries.  In the Lord of the Rings movies, programmers used AI-based crowd algorithms to animate CGI characters in groups that move and interact together, rather than programming individual characters which then need to be programmed to interact with each other (that’s a lot of programming!).

So where are AI and robot technology headed? What’s next? Joscha pipes up and says he would like a robot at the supermarket that can collect everything he wants and bring it back to him.  That sounds like a weird and wonderful space to be in – robots and humans shopping alongside each other.  Katie then mentions the garment-folding robot – this technology strikes me as useful for garment manufacturing (and at home, obviously).  The current incarnation folds one item quite well, but the SmartLab team are all about robots being able to not only fold the item, but then move across the warehouse and pack it. I personally would love a pattern-cutting robot to work alongside me, tracing off and cutting out clean, final versions of patterns I have created and digitising them as it goes.

As I finish writing up this blog post, sipping a cup of coffee that (I’m ashamed to admit) I just reheated in the microwave, I wonder how people felt about microwave ovens when they were first introduced.  Maybe there’s a similarity with AI, or any new industry-changing technology?  People fear it because they don’t understand it, but once they see how useful it is that fear subsides.  Elementary, my dear Watson.

Follow me:  Twitter @Thetechstyler  and  Instagram @techstyler