Can Artificial Intelligence Combat Oversupply and Minimise Deadstock in Fashion?

Originally published on Eco-Age.

Fashion tech innovator, writer and public speaker Brooke Roberts-Islam investigates the role that AI could play in reducing the environmental impact of unsold fashion. 

Artificial Intelligence, for all its futuristic and sci-fi connotations, is a way of analysing data that helps to make smarter decisions. It’s true that AI can be much more – it can ‘learn’ and evolve based on data, but in the case of fashion, its most common application is to find out what’s selling, what’s not and to do something about it. 

Why do we need artificial intelligence to solve such problems? What is wrong with the current system? Followers of fashion and sustainability will be no strangers to the news that brands, both big and small, fast and slow, are grappling with a business model that relies on predicting what styles will be popular in several months’ time, and in what volumes. Additionally, ensuring that the product is available in the right place, at the right price, at the right time is increasingly difficult. Add to that the influence of social media on fast-changing trends, and it is hard to keep up and to operate in a financially viable, let alone sustainable, manner. The very cycle of fashion that requires predictions months ahead of the product being available is outdated and slow, and it leads to product sitting unsold in warehouses and eventually being discounted or, worse, landfilled or incinerated. 


The McKinsey Notes From The AI Frontier report quantifies the benefits of AI for retail as being mostly in marketing and sales (targeting customers with the right products and boosting conversion) and supply chain and manufacturing (manufacturing the right product in the right quantity and making it available at the right time). While this may sound basic, it’s the current fast fashion business model’s inability to get this right that is causing overstock and catastrophic resource and waste consequences. 

How can AI help to cut down on the oversupply of stock and thereby reduce the environmental impact of unsold fashion? One of the key ways is by capturing data through online sales based on geographical areas to determine the types of products that are in demand (and those that aren’t) in specific locations. A prime example of this is the Nike Live store ‘Nike by Melrose’ in West Hollywood, which is a fusion of online and offline stores. The first of its kind, the store requires shoppers to sign up to the Nike Plus app in order to unlock the services and perks available in the store. In signing up, the shopping behaviour and preferences of each customer become known to Nike, allowing it to stock only what local shoppers want. No overstock, and no need to hold regular sales to get rid of stock that wasn’t right for the local customers. This data feeds back to the manufacturing systems and influences the styles and quantities manufactured. 
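
To make that concrete, here is a minimal sketch of the underlying idea, with invented store locations and product names (an illustration of location-based demand analysis, not Nike’s actual system):

```python
from collections import Counter

# Illustrative sales records: (store_location, product_id)
sales = [
    ("west_hollywood", "runner-blue"),
    ("west_hollywood", "runner-blue"),
    ("west_hollywood", "court-white"),
    ("nyc_soho", "court-white"),
    ("nyc_soho", "trail-black"),
]

def local_demand(sales, location, top_n=2):
    """Rank products by units sold in a single location."""
    counts = Counter(product for loc, product in sales if loc == location)
    return counts.most_common(top_n)

print(local_demand(sales, "west_hollywood"))
# [('runner-blue', 2), ('court-white', 1)] -> stock more of what sells locally
```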


Casting the net wider to take in global shopping preferences and real-time purchasing behaviour, The Trending Store, which opened in London’s Westfield shopping centre this week, is using AI to analyse social media data and extrapolate the ‘top 100 fashion items’. The trending products, which span high-street and designer looks across fashion, accessories and footwear, are then collected each morning by a team of stylists from the retail stores at Westfield London. The data analytics are performed by NextAtlas, who track 400,000 early adopters spread across 1,000 cities worldwide to determine the top 100 items. Screens in The Trending Store show exactly where each trend originates, so shoppers can see which city or country is influencing the popularity of the item. This is perhaps the first foray into shopping centres providing AI-driven, multi-brand solutions that help get trending styles and colours in front of customers at the right time. 
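
As a rough illustration of this kind of trend extraction, here is my own toy sketch with invented posts and a simple mention count plus origin city (it is not NextAtlas’s methodology):

```python
from collections import Counter, defaultdict

# Hypothetical posts from tracked early adopters: (item, city)
posts = [
    ("neon-crossbody-bag", "Seoul"),
    ("neon-crossbody-bag", "Seoul"),
    ("neon-crossbody-bag", "London"),
    ("chunky-white-trainer", "Milan"),
]

def trending(posts, top_n=100):
    """Rank items by mention count and attach the city mentioning them most."""
    mentions = Counter(item for item, _ in posts)
    origins = defaultdict(Counter)
    for item, city in posts:
        origins[item][city] += 1
    return [(item, count, origins[item].most_common(1)[0][0])
            for item, count in mentions.most_common(top_n)]

print(trending(posts, top_n=2))
# [('neon-crossbody-bag', 3, 'Seoul'), ('chunky-white-trainer', 1, 'Milan')]
```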

It bears noting that these are reactive solutions that require linked-up data and transparency across the supply chain to genuinely reduce overstock and deadstock – that is to say, the data should ultimately drive the design and manufacturing process to ensure that only the product that is ‘needed’ is manufactured. As fast fashion retailers struggle to manage the vast quantity of products they manufacture globally, this data could allow them to make smarter and more sustainable decisions. However, digitalisation of the entire supply chain (allowing the capture of data at all points, from initial design and manufacturing to global sales) is necessary in order to fully harness and act on this powerful data. Watch this space.

The Real and Imagined World of Robotics and Artificial Intelligence In Fashion

I am a hybrid. I began my career as a radiographer (x-raying and scanning patients) before studying fashion design and pattern-cutting. Upon graduation I worked simultaneously in hospitals and fashion studios, which led me to launch my own label, Brooke Roberts, in 2009, creating digital knitwear inspired by CT and MRI scans. I am now a designer, consultant, lecturer and blogger with a fascination for combining science, technology and fashion, both in a real and imagined sense. Anything is possible. 

ARTIFICIAL INTELLIGENCE AND ROBOTICS are generally met with a mixture of fascination and fear. They’re certainly hot topics – the Google car is the most recent high-profile example, telling us that AI and robotics are developing quickly and will soon become part of our everyday lives.

Twitter fed me an intriguing tweet about a Robotics event called TAROS being hosted by The University of Liverpool Computer Science Department, headed by Professor Katie Atkinson.  TAROS was open to the public and aimed at de-mystifying AI and robotics and sharing advances and potential uses for it.  I couldn’t make it to TAROS but Professor Atkinson and the team agreed to an interview on another day and so begins Techstyler, a blog about science, technology and style.


My pre-reading led me to this summary of AI: it aims to have computers make decisions for us by analysing many factors (more than we could digest), in a quicker timeframe than we could analyse them, and coming up with an answer or a series of solutions. In other words, it streamlines the decision-making process – so AI is not such a scary concept.

When it comes to robotics, it’s a common misconception that all robots are human-like, or at least that that’s the end goal of robotics.  In truth, many robots are built and programmed to do one very specific task and do not have any interactive or independent processing capability.  For example, they might pick up a pen from a conveyor belt and put it in a box.  They will repeat the exact same action over and over –  even if a human moves into their path, they will continue their programmed action.  Since killing people is NOT the aim of robots, this limitation is something that the team in the SmartLab at The University of Liverpool are working to solve, amongst many others.

Professor Atkinson (who introduced herself as Katie and is informal and very approachable) took me behind the scenes at the SmartLab to meet the team, which is headed up by Professor Karl Tuyls and includes PhD students Bastian Boecker, Joscha-David Fossel and Daniel Claes. The SmartLab team are exploring how we can take robots away from the static, limited industrial type to the type that move around and interact with their environment: they can be programmed to work in groups based on biological models – like the way bees and ants work together – and they can use laser and infrared technology to understand where they are, detect objects and sense the presence of humans.

The guys in the SmartLab

The SmartLab team awards, including the RoboCup Championship trophy, 2014

I spent an afternoon chatting to the guys in the SmartLab about their work and messing about with their robots, leaving me inspired and full of wonder (and them further behind in their PhD work). They were really generous not just with their time, but in sharing their knowledge. This is something that always humbles and inspires me in science – it exists to share knowledge and enlighten. By contrast, the creative industries, and fashion in particular, can be very secretive and exclusive, which may, paradoxically, explain why in many ways it is resistant to change and evolution. I talk about this in another upcoming blogpost with designer Sarah Angold, so stay tuned.

Daniel begins by telling me about RoboCup@Work and the robots they’re developing with the aim of helping humans in a factory, advancing away from static, pre-programmed robot arms (which are really huge, heavy, limited to one action and housed in cages surrounded by lasers) towards robots that can reason and react to their environment. Total re-programming of static robots is required when the production or task changes within a factory, so this is an inconvenient and expensive limitation.

The robot prototype (featured in the video below and as yet unnamed – I ask why?!) is not static but on wheels, and is able to analyse its environment and carry out a given task in response to it. In this case, Daniel types in commands and the robot then responds to a specific request to pick up a bolt and nut from a table and drop them in the corresponding shaped holes at another table nearby. It uses a camera to scan its environment (it has a set of environmental objects which it has been programmed to recognise), then collects, moves and drops the correct objects. Here it is in action:

I took a couple of snaps of the pieces the robot picks up and the respective shapes the robot drops them into (Literal footnote: My boots are Dr Martens X Mark Wigan):

The items the robot chooses from

The respective shapes the robot drops the items into

My thoughts turn to the use of robots in fashion. Perhaps the most beautiful and arresting use of robots in fashion so far has been Alexander McQueen’s haunting SS 1999 show:

The choreography of the robot arms excites my imagination, like snakes priming to attack. The industrial robots in the McQueen show are the type used to paint cars in a manufacturing plant. I start wondering how robots could impact the fashion industry once the development the SmartLab guys are doing becomes accessible, particularly since biological algorithms are being used to develop robots that could potentially interact as a ‘choreographed group’.

Bastian is working on swarm robotics, which means having a bunch of robots that control themselves using really simple (I laugh and remind Bastian that’s a relative term) algorithms based on attraction and repulsion – just the same way a flock of birds forms a swarm as a group, rather than being directed from a centralised source. They are really robust: for example, you can take a couple of robots out of the system and the flock will still work. Bastian imagines his flock being used for planetary exploration with more localised connectivity, rather than for broadband-based, military-type operations. Katie further explains that having many small robots that are interlinked means that if one robot breaks down the others can continue exploring – the mission isn’t scuppered. Motion sensor-based communication between robots allows them to spread out and cover a large area – for example an earthquake disaster zone – and scan the terrain for movement or people.
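
Here is a minimal sketch of the attraction/repulsion idea Bastian describes (my own illustration, not the SmartLab code): each robot drifts towards its neighbours but backs away from any that get too close, so flock-like behaviour emerges with no central controller.

```python
import random

ATTRACT = 0.01   # pull towards neighbours
REPEL = 0.05     # push away from very close neighbours
TOO_CLOSE = 1.0  # repulsion radius

def step(positions):
    """One update of a 2D point swarm using only local attraction/repulsion."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        dx = dy = 0.0
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            dx += (ox - x) * ATTRACT
            dy += (oy - y) * ATTRACT
            dist = ((ox - x) ** 2 + (oy - y) ** 2) ** 0.5
            if dist < TOO_CLOSE:          # too close: back off
                dx -= (ox - x) * REPEL
                dy -= (oy - y) * REPEL
        new_positions.append((x + dx, y + dy))
    return new_positions

swarm = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(5)]
for _ in range(100):
    swarm = step(swarm)   # the group drifts together without a central controller
```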

The main sources of inspiration for Bastian’s algorithms that “drive” the robots are bees and ants. Both bees and ants use pheromone communication. Bees use a dance for navigation, which involves going back and forth to the beehive once they have found food to demonstrate to other bees the vector pathway to the food source. Bastian shows me a really cool video of how he and the SmartLab team created “bee robots” to perform this dance:

“Bee robots” in the SmartLab

Imagine the use of swarm algorithms to create choreographed robot runway models. The swarm algorithms would be supported by the development Joscha is doing on “state estimation” – the goal that a robot should know exactly where it is in relation to other things.

The runway robot models would know where they are in relation to each other and could use lasers to interact with the crowd, making ‘eye contact’ with viewers and giving them a wink or cocking a hip. I’d love to see that. I’m re-imagining the Thierry Mugler show in George Michael’s music video “Too Funky” with robots as the models.

**Incidentally, this is the video that made me realise I loved fashion and wanted to work in the industry, back in 1992.**

Robot Naomi, anyone??

Naomi Campbell by Seb Janiak

Maybe I’ll start the first ever robot model agency? It may at first seem bizarre, but it would solve the problem of real people trying to achieve unreal proportions. I am not for a second saying I think the modelling profession is doomed or needs replacing (maybe just disrupting), I just think there’s merit in an alternative. If you think about the way the industry currently works, editorial images are often manipulated to extreme, non-human proportions (again, this can be creatively interesting, so I’m not dismissing it entirely), but the human aspect is significantly diminished.

Another area of biological algorithms Bastian is working on is replicating ants’ passive technique of dropping pheromones to communicate with each other. Digital pheromones designed to mimic this communication could be used to divide labour between robots and solve complex problems. Robots can’t create biological pheromones, but there is development being done in this area. Imagine adding the ability to exude pheromones to runway robot models and a whole new heady mix of interaction and attraction is possible. Here’s a demo of the digital pheromone communication concept:
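
For a sense of how digital pheromones can work in code, here is a toy sketch of my own (not the SmartLab demo): robots deposit a value on a shared grid, the values evaporate over time, and a robot simply moves towards the neighbouring cell with the strongest trail.

```python
# Toy digital-pheromone grid: deposit, evaporate, and follow the strongest trail.
GRID_SIZE = 5
EVAPORATION = 0.9  # fraction of pheromone that survives each time step

grid = [[0.0] * GRID_SIZE for _ in range(GRID_SIZE)]

def deposit(x, y, amount=1.0):
    grid[y][x] += amount            # a robot marks the cell it just visited

def evaporate():
    for row in grid:
        for x in range(GRID_SIZE):
            row[x] *= EVAPORATION   # old trails fade, so stale information disappears

def strongest_neighbour(x, y):
    """Where a robot at (x, y) would move next: the adjacent cell with most pheromone."""
    candidates = [(nx, ny) for nx, ny in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
                  if 0 <= nx < GRID_SIZE and 0 <= ny < GRID_SIZE]
    return max(candidates, key=lambda c: grid[c[1]][c[0]])

deposit(2, 2)
evaporate()
print(strongest_neighbour(1, 2))  # (2, 2): the robot follows the trail
```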

The discussion moves to flying robots, and Joscha explains further that state estimation currently works well on ground robots, but with flying robots it gets more difficult because of the extra dimension of tilt/pitch.  The Xbox Kinect has a 3D camera that can be used on these types of robot prototypes.  Laser scanners can also be used to give distance measurements across a plane.  Based on the time it takes for the laser light to reflect back off the surface of the surrounding objects, the robot’s distance from those objects can be calculated.  It makes me think of medical imaging and OCT (Optical Coherence Tomography) technology, where it’s possible to use a laser inside an artery to measure its dimensions – this is exactly the same principle, so I immediately get what Joscha is on about.  Using the laser scanners, Joscha’s flying robots generate 3D maps:
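
The underlying time-of-flight sum is simple: the light travels out and back, so the distance is the speed of light multiplied by the round-trip time, divided by two. A tiny worked example (my own, with a made-up echo time):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_echo(round_trip_seconds):
    """Time-of-flight: the light travels out and back, so halve the round trip."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

print(distance_from_echo(20e-9))  # a 20 ns echo is roughly 3.0 m to the reflecting surface
```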

I mention drones at this point and Katie explains that they avoid the term because of a general perception of military use, and instead say UAVs (unmanned aerial vehicles).  It occurs to me again at this point that it must be a difficult and uphill battle to promote their advances in AI and robotics when such misconceptions exist.  Joscha shows me his flying UAV, which uses silver reflective 3M tape as markers (which, coincidentally, I have used in yarn form for knitting) to understand its exact location.  It is connected to a large geometric pitchfork-like object with silver balls on the end which allow me to direct it through the air with sweeping arm movements.  Here are the guys in action:

Joscha’s UAV with silver 3M balls

A jumper from my AW11 knitwear collection with 3M yarn and camel hair

The flying UAV makes me think of a video I saw recently of the brilliant Disney illustrator Glen Keane, who wore a VR headset and stepped into his illustrations using sweeping arm movements with a paddle to draw in the virtual space around him.  He immersed himself in his drawing and saw his illustrations in real time, 3D.  The video of him ‘stepping into the page’ is well worth a look.  So I guess I made an invisible air drawing with a robot.

I ask Katie whether there are barriers to spreading an accurate message about the work being done in AI and she cites “science fiction stories of dystopian societies where the robots take over” as a big source of misconception.  She is a passionate ambassador for AI, having worked on the research topic of ‘AI and law’ for over 10 years. She has been working with a local law firm for the past year to expand their use of AI and enable huge amounts of legal information to be analysed quickly, providing accurate and fast solutions.  The whole impetus behind AI is problem solving.  It’s about making our world a better and safer place, for example in disaster relief situations and with self-driving cars.  Katie shares some data on road traffic accidents: 97% of accidents are caused by human error.  That’s a compelling statistic.  Who doesn’t want to be safer on the road?  For the record, Katie tells me that self-driving cars, like those being developed by Google and Bosch, are expected to be launched in stages, from partial self-driving initially to 100% self-drive in the final stage.  So it won’t mean jumping into a vehicle and immediately relinquishing control.

Katie also shared with me some cool robots created by her Computer Science undergraduate students using Lego Mindstorms that demonstrate the concept of the self-driving car.


The ‘car’ has sensors on each side and at the front.  The sensors detect the lines on the board (the road), and the car adjusts its direction and speed accordingly in order to stay within the black lines (stay on the road), speed up when it passes over the green lines and slow down or stop when it passes over the red.  It is programmed with code written by the students and uploaded directly from a PC into the basic computer box that comes with the Lego Mindstorms kit.  Pretty ingenious.
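
As a rough sketch of the kind of control loop involved (my own illustration – the colours, speeds and structure are assumptions, not the students’ actual code):

```python
def drive_step(left_colour, right_colour, front_colour, speed):
    """One control-loop tick for the toy car: returns (steering, new_speed).

    Steering: -1 = turn left, 0 = straight, 1 = turn right.
    Colours and speed changes are illustrative only.
    """
    steering = 0
    if left_colour == "black":      # drifting over the left edge: steer right
        steering = 1
    elif right_colour == "black":   # drifting over the right edge: steer left
        steering = -1

    if front_colour == "green":     # green line: speed up
        speed = min(speed + 10, 100)
    elif front_colour == "red":     # red line: slow down / stop
        speed = max(speed - 20, 0)

    return steering, speed

print(drive_step("white", "black", "green", speed=50))  # (-1, 60)
```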


I’m writing up this post while attending London Fashion Week, which has just relocated from Somerset House to the Brewer Street Car Park in Soho, on a pokey corner of a one-way street, causing a crammed carnival/circus atmosphere and serious logistical issues. I wonder if the Google car could solve this problem? Maybe editor-sized Google self-driving pods could locate their editor by GPS and navigate them quickly from one show to another while they focus on updating their Twitter and Instagram rather than worrying about traffic jams.  It would be a kind of cross between the Google car and GM Holden’s Xiao EN-V electric pod, charging itself in between shows. There’s no way those big old Mercedes cars that have been the long-term ferrying service for LFW editors and buyers are going to get in and out of town fast enough with the new LFW location. The frustration at LFW was palpable outside the shows.

Google Car

GM Holden’s Xiao EN-V

So what can’t AI do?  I ask Katie about the use of AI and robotics in the creative industries.  Her response, “people regard this as the great separation between man and machine – the ability to be creative”, leaves me feeling relieved my job as a designer is safe.  Interestingly, a large part of Katie’s research work has involved categorising objective versus subjective information and determining how that information should be used in AI.

In terms of creative use of AI, examples exist in the gaming and film industries.  In the Lord of the Rings movies, programmers used AI-based crowd algorithms to program groups of CGI characters to move and interact together, rather than programming individual characters which then need to be programmed to interact with each other (that’s a lot of programming!).

So where are AI and robot technology headed? What’s next? Joscha pipes up and says he would like a robot at the supermarket that can collect everything he wants and bring it back to him.  That sounds like a weird and wonderful space to be in – robots and humans shopping alongside each other.  Katie then mentions the garment-folding robot – this technology strikes me as useful for garment manufacturing (and at home, obviously).  The current incarnation folds one item quite well, but the SmartLab team are all about robots being able to not only fold the item, but then move across the warehouse and pack it. I personally would love a pattern-cutting robot to work alongside me, tracing off and cutting out clean, final versions of patterns I have created and digitising them as it goes.

As I finish writing up this blog post, sipping a cup of coffee that (I’m ashamed to admit) I just reheated in the microwave, I wonder how people felt about microwave ovens when they were first introduced.  Maybe there’s a similarity with AI, or with any new industry-changing technology?  People fear it because they don’t understand it, but once they see how useful it is, that fear subsides.  Elementary, my dear Watson.

Follow me:  Twitter @Thetechstyler  and  Instagram @techstyler