How Fashion Graduate Mathilde Rougier Is Using AI and AR To Eliminate Fashion Textile Waste


Mathilde Rougier is a recent graduate of Central Saint Martins, where she studied on the womenswear pathway. For her final collection, she wanted to create new designs from old garments and fabric samples she had acquired during past internships. This circular approach to fashion – where textile resources remain in repeated use rather than going to landfill or being incinerated – is one the industry as a whole must move towards, given the amount of textile waste it produces. Like Mathilde, many students from Techstyler’s Digi Fashion Class of 2020 showcase have incorporated sustainable, waste-reducing practices into their collections, showing an acute awareness of the climate emergency and the incredibly wasteful, polluting industry they will inherit. It’s a dilemma many fashion students have tried to come to terms with: loving fashion creation while knowing that the world does not need more clothes. To overcome this problem, Mathilde turned to innovative techniques.

She took photos of the old garments she wanted to use, and these became the basis of the augmented designs in her final collection, which is a tessellation of leftover textiles and existing garments, re-imagined virtually. Although she was interested in 3D design before and planned to use it in her final collection to some extent, Mathilde is one of the many fashion design graduates for whom digital fashion took on a much larger role in their projects due to COVID-19.

 


It must have been bittersweet to have your last year cut short; how did you feel?

At the beginning, when lockdown was announced, I had a couple weeks of uncertainty. I didn’t know what to expect or how long lockdown would actually be. So for that period I was a bit of an emotional mess. Then I became very determined to make the situation better for UAL students: with petitions, meetings with university staff, demanding to cut the cost of tuition… This eventually fizzled out because we were getting nowhere. Finally, after that, I began to feel more acceptance. I came to realise that I have the opportunity to really throw myself into something which I had wanted to explore before. I was in a good position as the main equipment I needed was a computer. I already had the squares of second-hand materials (from sample books I had collected over internships to avoid textile waste) I was going to use to map the pixels of the digital garment. The only machinery I needed was a heat press for the recycled plastic I was using.

How did you overcome the obstacles of designing in lockdown? Was it difficult to stay motivated and in touch with your creativity?

Actually, I thought it was quite exciting in a sense. I was more productive. It may have been the sense that it was all in my hands, no-one was going to push me. The only real problem at the beginning of lockdown was shipping all my stuff back to France, since I was going to spend lockdown with my family. I needed to ship all my squares, all my materials. I couldn’t replace them because the goal was for them to be recycled materials in order to minimise waste.

What led you to choose digital fashion design over traditional methods? Was it an unexpected solution to the limitations of COVID-19 or had you been interested in 3D design before?

I was interested in it before, but it took on a larger role because of the physical limitations of COVID. Digital fashion had already been part of the plan, but not to that extent. What I do is augmented reality, so a middle ground between physical and entirely digital fashion (with 3D avatars). I still want to dress people but want the option to do so without producing waste. You can update your designs constantly without waste.

 


What was the inspiration behind your collection?

It didn’t actually come from an aesthetic inspiration, more a problem-solving approach. My approach was technical: I wanted to address the issue of sustainability. I used different tech systems (whether that’s pixelation*, convolutional neural networks**, etc.) to solve problems to do with sustainability.

* The images of the old garments were pixelated in Photoshop, or sections of the garment were 3D scanned. This allowed Mathilde to translate the real-life textiles into a virtual medium for use as the basis of new 3D designs (i.e. the old garments became the foundation for the Augmented Reality digital layer of Mathilde’s designs).
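As a rough illustration of the pixelation step (Mathilde worked in Photoshop; this Pillow sketch simply shows the idea of reducing a garment photo to flat blocks that could map onto fabric squares):

```python
# Minimal pixelation sketch with Pillow. The file names are placeholders.
from PIL import Image

def pixelate(path: str, block: int = 16) -> Image.Image:
    """Downsample then upsample so each block becomes one flat 'pixel'."""
    img = Image.open(path).convert("RGB")
    small = img.resize(
        (max(1, img.width // block), max(1, img.height // block)),
        Image.Resampling.NEAREST,
    )
    # Scale back up without smoothing so the blocks stay crisp.
    return small.resize(img.size, Image.Resampling.NEAREST)

if __name__ == "__main__":
    pixelate("old_garment.jpg", block=24).save("garment_pixelated.png")
```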

** A convolutional neural network is a type of deep neural network that analyses images to determine and categorise their visual characteristics – in effect recognising details of the garment, such as edges or hard components like buttons – and allowing the designer to adjust the pixel patterns within these garment details. Being able to adjust the pixel patterns matters in two ways: first, it ensures that everything looks as intended, making the transition from physical to digital more seamless; second, it gives the designer the option to infinitely rearrange and “play with” the building blocks of their design.
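For readers curious what such a network looks like in code, here is a toy PyTorch sketch that classifies small garment-photo patches into hypothetical detail classes (“edge”, “button”, “plain fabric”). The architecture and class labels are illustrative assumptions, not Mathilde’s actual model:

```python
# Toy CNN for labelling garment-detail patches. Purely illustrative.
import torch
import torch.nn as nn

class GarmentDetailNet(nn.Module):
    def __init__(self, n_classes: int = 3):  # e.g. edge / button / plain fabric
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local edge/texture filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = GarmentDetailNet()
patch = torch.randn(1, 3, 64, 64)   # one 64x64 RGB patch of a garment photo
print(model(patch).argmax(dim=1))   # predicted detail class for the patch
```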

 

 

 


What software did you use in creating your designs? Were there any challenges?

I used a convolutional neural network (AI), Spark AR and Blender (3D animation software). I learnt how to 3D model during the project, so it took some time before I was comfortable. It was a challenge to get the AR right and plane-track* the clothes. The thing with 3D design is that you need to make sure the design looks good from different angles. This means blocking out the body and using occluders** to make sure that patterns on the back don’t show up from the front and vice versa. It took a lot of adjustment to make sure it worked.

*Mathilde used plane-tracking on second-hand garments to add an Augmented Reality digital layer on top, using software such as Spark AR and Blender. Plane-tracking involves taking a photo of a recognisable, high-contrast area of the garment, which then acts as the trigger for the Augmented Reality interaction.
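Spark AR handles plane tracking internally, but the underlying idea can be sketched with OpenCV: detect features of the trigger photo inside a camera frame and recover the homography that would anchor the digital layer. The file names below are placeholder assumptions:

```python
# Sketch of marker-style plane tracking with OpenCV feature matching.
import cv2
import numpy as np

orb = cv2.ORB_create(1000)
trigger = cv2.imread("trigger_patch.jpg", cv2.IMREAD_GRAYSCALE)  # high-contrast garment area
frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)     # current camera view

kp1, des1 = orb.detectAndCompute(trigger, None)
kp2, des2 = orb.detectAndCompute(frame, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:50]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# H maps trigger-image coordinates into the frame: the plane onto which
# the AR layer would be projected.
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(H)
```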

 


** In 3D graphics, occluders are objects that block other objects from view. In this context, an occluder stops a pattern on one side of the garment from bleeding into the other side. Mathilde had to give the software explicit instructions to ensure that everything was visualised in the correct location and orientation, according to her design.
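In 2D terms, the effect of an occluder can be sketched as masked compositing: a body-silhouette mask punches a hole in the back-of-garment layer so it cannot bleed through to the front view. This Pillow sketch is purely illustrative – file names are placeholders, and real AR engines do this in 3D with depth:

```python
# 2D illustration of occlusion: the body mask blocks the back layer.
from PIL import Image, ImageChops

frame = Image.open("camera_frame.png").convert("RGBA")    # background view
front = Image.open("front_pattern.png").convert("RGBA")   # AR layer facing the camera
back = Image.open("back_pattern.png").convert("RGBA")     # AR layer on the far side
body_mask = Image.open("body_mask.png").convert("L")      # white where the body is

# Occlude: zero the back layer's alpha wherever the body is (all images same size).
back.putalpha(ImageChops.subtract(back.getchannel("A"), body_mask))

# Composite the back layer first, then the front layer, over the frame.
result = Image.alpha_composite(Image.alpha_composite(frame, back), front)
result.save("composited_view.png")
```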

 

This is an example of what could happen if occluders are not used: you can see the details of one side of the 3D object bleeding into the other.

 

What is your plan after graduation? How do you feel about going into the industry during this uncertain time?

I don’t really know yet. At the moment I’ve had lots of freelance jobs which I don’t think I would have gotten six months ago. All of a sudden people are interested in digital fashion; there’s a lot of hype. COVID accelerated things because everyone was in front of their screens. But people have been experiencing fashion digitally for a long time without really realising it: e.g. all the people that watch shows online rather than being there in real life. Fashion enthusiasts have been consuming fashion from their screens for ages, but now, since everyone was in lockdown, industry professionals and the fashion system have had to adapt a bit more. I’m going on to MA Accessories at the Institut Français de la Mode in Paris, so I’ll carry on freelancing on the side, but I’m lucky to not have to think about a full-time career yet.

How do you want to see the fashion industry change? Is it already changing or is this evolution merely superficial (for example in terms of sustainability)?

Fashion needs to move towards a circular model of production, i.e. moving away from producing new textiles and making use of existing garments and materials. In terms of sustainability I think we’re still quite far behind. Sustainability is such a large word; there are so many ways to go about making the industry more sustainable. In a sense that’s a good thing, as there are many small ways to help, but the risk is focussing too much on the small things rather than the big problem at hand. We see a lot of greenwashing and superficial fixes to a problem that is fundamentally wrong. I’m curious to see more systemic change.

 


Is 3D fashion design the future?

It’s quite revolutionary waste-wise for smaller brands, since their proportion of sampling to product output is bigger than for large companies, meaning it would be extremely worthwhile for them to move to virtual sampling. With this, you reduce waste at the design and sampling stage and can make many alterations on a virtual toile.

What about digital fashion from end-to-end (meaning no physical product is made)?

This would work for brands doing digital showcases. I have a preference for augmented reality (meaning real people wearing digital clothes, rather than a 3D avatar). I still want to dress real people, but people have increasingly developed a digital aspect to their personality. For example, influencers on Instagram who wear a whole new outfit for every new post: augmented reality could solve the overconsumption this causes. The medium is digital, so why not the clothes? It can be integrated into the way we already live our lives.

 


Lastly, what do you think digital fashion means for sustainability?

It allows design without physical production, therefore less waste. Though you can’t totally ignore the carbon footprint of technology, the electrical energy used in powering a computer would be less than that needed to recycle all the waste material and mixed fibres at recycling plants. Digital fashion at least streamlines the waste and carbon footprint caused by the industry. Also, there’s visibly “tech-y” fashion (virtual fashion) which will certainly affect the zeitgeist and our opinions on owning physical clothes, but I think the core change in terms of sustainability will be about incorporating technologies into making physical products. You might look at a T-shirt and not think there’s anything particularly innovative or futuristic about it, but the truth is it was made using digital processes that aid sustainability.

 


With the goal of a more sustainable fashion industry in mind, Mathilde wanted to address both our physical and digital identities. Our digital identities play an increasingly large role in our lives, and these identities (or rather the social media platforms where they live) are ruled by algorithms that constantly call for new content. She answers this in the physical realm through the fabric squares she used for her designs, which can constantly be repositioned, creating new designs without creating more waste. This translates to the digital realm too, where her Augmented Reality virtual outer shells (each of which corresponds to a different garment) can continuously be updated.

Mathilde enjoys being able to design without creating any physical waste and strongly believes the waste issue in fashion is one the industry needs to address. However, she doesn’t want this to come at the cost of design, whether aesthetically or in terms of speed. This is what her collection tried to remedy, showing that zero-waste, repurposing and recycling techniques do not have to produce the “crafty” results they are often associated with. Like much of fashion’s “new guard”, Mathilde is an advocate for the circular mindset, where things can constantly be re-designed. This allows full creative expression by the designer and satisfies the fashion crowd’s love of newness without creating more textile waste or adding to the devastating climate impact that comes with fashion production and consumption. In this way, the designer can fully explore the best of fashion – unbridled creativity – whilst avoiding its destructive impact.

 

By Anastasia Vartanian

Can Artificial Intelligence Combat Oversupply and Minimise Deadstock in Fashion?

Originally published on Eco-Age.

Fashion tech innovator, writer and public speaker Brooke Roberts-Islam investigates the role that AI could play in reducing the environmental impact of unsold fashion. 

Artificial Intelligence, for all its futuristic and sci-fi connotations, is a way of analysing data that helps to make smarter decisions. It’s true that AI can be much more – it can ‘learn’ and evolve based on data, but in the case of fashion, its most common application is to find out what’s selling, what’s not and to do something about it. 

Why do we need artificial intelligence to solve such problems? What is wrong with the current system? Followers of fashion and sustainability will be no strangers to the news that brands, both big and small, fast and slow, are grappling with a business model that relies on predicting what styles will be popular in several months’ time, and in what volumes. Additionally, ensuring that the product is available in the right place, at the right price, at the right time is increasingly difficult. Add to that the influence of social media on fast-changing trends, and it is hard to keep up, let alone operate in a financially viable or sustainable manner. The very cycle of fashion that requires predictions months ahead of the product being available is outdated and slow, and leads to product sitting unsold in warehouses and eventually being discounted or, worse, landfilled or incinerated.


The McKinsey Notes From The AI Frontier report quantifies the benefits of AI for retail as being mostly in marketing and sales (targeting customers with the right products and boosting conversion) and supply chain and manufacturing (manufacturing the right product in the right quantity and making it available at the right time). While this may sound basic, it’s the current fast fashion business model’s inability to get this right that is causing overstock and catastrophic resource and waste consequences.

How can AI help to cut down on the oversupply of stock and thereby reduce the environmental impact of unsold fashion? One of the key ways is by capturing data through online sales, based on geographical areas, to determine the types of products that are in demand (and those that aren’t) in specific locations. A prime example of this is the Nike Live store ‘Nike by Melrose’ in West Hollywood, which is a fusion of online and offline stores. The first of its kind, the store requires shoppers to sign up to the Nike Plus app in order to unlock the services and perks available in the store. In signing up, the shopping behaviour and preferences of each customer become known to Nike, allowing it to stock only what local shoppers want: no overstock, and no need for regular sales to get rid of stock that wasn’t right for the local customers. This data feeds back to the manufacturing systems and influences the styles and quantities manufactured.
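The mechanics behind that kind of location-level stocking decision can be sketched in a few lines of pandas – the CSV file, column names and thresholds here are assumptions for illustration, not Nike’s systems:

```python
# Aggregate recent sales per store and product, then flag restocks and cuts.
import pandas as pd

sales = pd.read_csv("sales.csv")  # assumed columns: store, product, units, week

demand = (
    sales.groupby(["store", "product"])["units"]
    .sum()
    .reset_index(name="units_sold")
)

# Per store, rank products: restock the top sellers, cut the long tail.
demand["rank"] = demand.groupby("store")["units_sold"].rank(ascending=False)
restock = demand[demand["rank"] <= 10]
discontinue = demand[demand["units_sold"] < 5]  # assumed cutoff

print(restock.head())
print(discontinue.head())
```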


Casting the net wider to take in global shopping preferences and real-time purchasing behaviour, The Trending Store, which opened in London’s Westfield shopping centre this week, is using AI to analyse social media data and extrapolate the ‘top 100 fashion items’. The trending products, which span high-street and designer looks across fashion, accessories and footwear, are then collected each morning by a team of stylists from the retail stores at Westfield London. The data analytics are performed by NextAtlas, who track 400,000 early adopters spread across 1,000 cities worldwide to determine the top 100 items. Screens in The Trending Store show exactly where each trend originated, so shoppers can see which city or country is influencing the popularity of an item. This is perhaps the first foray into shopping centres providing AI-driven, multi-brand solutions that help get trending styles and colours in front of customers at the right time.
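Stripped of NextAtlas’s early-adopter weighting and scale, the core of a “top trending items” ranking is counting mentions and noting where they come from. A toy sketch:

```python
# Rank items by mention count and record which city drives each trend.
from collections import Counter

posts = [  # stand-in for a feed of social media posts
    {"city": "Seoul", "items": ["chunky trainers", "bucket hat"]},
    {"city": "Milan", "items": ["bucket hat", "slip dress"]},
    {"city": "London", "items": ["chunky trainers"]},
]

mentions = Counter(item for post in posts for item in post["items"])
top_items = mentions.most_common(100)  # the 'top 100' list

origins: dict[str, Counter] = {}
for post in posts:
    for item in post["items"]:
        origins.setdefault(item, Counter())[post["city"]] += 1

for item, count in top_items:
    # Like the in-store screens: each trend with its loudest city.
    print(item, count, origins[item].most_common(1)[0][0])
```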

It bears noting that these are reactive solutions that require linked-up data and transparency across the supply chain to genuinely reduce overstock and deadstock – that is to say, the data should ultimately drive the design and manufacturing process to ensure that only the product that is ‘needed’ is manufactured. As fast fashion retailers struggle to manage the vast quantity of products they manufacture globally, this data could allow retailers to make smarter and more sustainable decisions. However, digitalisation of the entire supply chain (allowing the capture of data at all points, from initial design and manufacturing through to global sales) is necessary in order to fully harness and act on this powerful data. Watch this space.

The Real and Imagined World of Robotics and Artificial Intelligence In Fashion

I am a hybrid. I began my career as a radiographer (x-raying and scanning patients) before studying fashion design and pattern-cutting. Upon graduation I worked simultaneously in hospitals and fashion studios, leading me to launch a fashion label, Brooke Roberts, in 2009, creating digital knitwear inspired by CT and MRI scans. I am now a designer, consultant, lecturer and blogger with a fascination for combining science, technology and fashion, both in a real and imagined sense. Anything is possible.

ARTIFICIAL INTELLIGENCE AND ROBOTICS are generally met with a mixture of fascination and fear. They’re certainly hot topics – the Google car is the most recent high-profile example, telling us that AI and robotics are developing quickly and will soon become part of our everyday lives.

Twitter fed me an intriguing tweet about a robotics event called TAROS, hosted by The University of Liverpool’s Computer Science Department, headed by Professor Katie Atkinson. TAROS was open to the public and aimed at demystifying AI and robotics and sharing advances and potential uses for them. I couldn’t make it to TAROS, but Professor Atkinson and the team agreed to an interview on another day – and so begins Techstyler, a blog about science, technology and style.


My pre-reading led me to this summary of AI: it aims to have computers make decisions for us by analysing more factors than we could digest, in a quicker timeframe than we could manage, and coming up with an answer or series of solutions. In other words, it streamlines the decision-making process – so AI is not such a scary concept.

When it comes to robotics, it’s a common misconception that all robots are human-like, or at least that that’s the end goal of robotics. In truth, many robots are built and programmed to do one very specific task and do not have any interactive or independent processing capability. For example, they might pick up a pen from a conveyor belt and put it in a box. They will repeat the exact same action over and over – even if a human moves into their path, they will continue their programmed action. Since killing people is NOT the aim of robots, this limitation is one of the many problems the team in the SmartLab at The University of Liverpool are working to solve.

Professor Atkinson (who introduced herself as Katie and is informal and very approachable) took me behind the scenes at the SmartLab to meet the team, which is headed up by Professor Karl Tuyls and includes PhD students Bastian Boecker, Joscha-David Fossel and Daniel Claes. The SmartLab team are exploring how to take robots away from the static, limited industrial type towards robots that move around and interact with their environment: they can be programmed to work in groups based on biological models – like the way bees and ants work together – and they can use laser and infrared technology to understand where they are, detect objects and sense the presence of humans.
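A toy control loop gives a feel for that reactive behaviour: keep moving until a laser reading reports an obstacle (or person) too close, then stop. The sensor interface here is an invented stand-in, not the SmartLab’s actual software:

```python
# One tick of a reactive controller driven by laser range readings.
SAFETY_DISTANCE_M = 0.5

def control_step(laser_ranges_m: list[float]) -> str:
    """Return a motor command given one scan of laser distances (metres)."""
    if min(laser_ranges_m) < SAFETY_DISTANCE_M:
        return "STOP"      # unlike a caged industrial arm, halt for obstacles
    return "FORWARD"

print(control_step([2.1, 1.8, 0.3, 2.5]))  # -> STOP: something is 30 cm away
```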

The guys in the SmartLab

The SmartLab team awards, including the RoboCup Championship trophy, 2014

I spend an afternoon chatting to the guys in the SmartLab about their work and messing about with their robots, leaving me inspired and full of wonder (and them further behind in their PhD work).  They were really generous not just with their time, but in sharing their knowledge.  This is something that always humbles and inspires me in science – it exists to share knowledge and enlighten.  By contrast, the creative industries, and fashion in particular, can be very secretive and exclusive which may explain why, in many ways, it is resistant to change and evolution, paradoxically.  I talk about this in another upcoming blogpost with designer Sarah Angold, so stay tuned.

Daniel begins by telling me about RoboCup@Work and the robots they’re developing with the aim of helping humans in a factory, advancing away from static, pre-programmed robot arms (which are really huge, heavy, limited to one action and housed in cages surrounded by lasers) towards robots that can reason and react to their environment. Total re-programming of static robots is required when the production line or task changes within a factory, so this is an inconvenient and expensive limitation.

The robot prototype (featured in the video below and as yet unnamed – I ask why?!) is not static but on wheels, and is able to analyse its environment and carry out a given task in response to it. In this case, Daniel types in commands and the robot then responds to a specific request to pick up a bolt and nut from a table and drop them in the corresponding shaped holes at another table nearby. It uses a camera to scan its environment (it has a set of environmental objects which it has been programmed to recognise), then collects, moves and drops the correct objects. Here it is in action:

I took a couple of snaps of the pieces the robot picks up and the respective shapes the robot drops them into (Literal footnote: My boots are Dr Martens X Mark Wigan):

The items the robot chooses from

The respective shapes the robot drops the items into

My thoughts turn to the use of robots in fashion. Perhaps the most beautiful and arresting use of robots in fashion so far has been Alexander McQueen’s haunting SS 1999 show:

The choreography of the robot arms excites my imagination, like snakes priming to attack. The industrial robots in the McQueen show are the type used to paint cars in a manufacturing plant. I start wondering how robots could impact the fashion industry once the development the SmartLab guys are doing becomes accessible, particularly since biological algorithms are being used to develop robots that could potentially interact as a ‘choreographed group’.

Bastian is working on swarm robotics, which means having a bunch of robots that control themselves using really simple (I laugh and remind Bastian that’s a relative term) algorithms based on attraction and repulsion – the same way a flock of birds forms a swarm as a group, rather than being directed from a centralised source. Swarms are really robust: you can take a couple of robots out of the system and the flock will still work. Bastian imagines his flock being used for planetary exploration with more localised connectivity, rather than for broadband-based military-type operations. Katie further explains that having many small, interlinked robots means that if one robot breaks down, the others can continue exploring – the mission isn’t scuppered. Motion sensor-based communication between robots allows them to spread out and cover a large area – for example an earthquake disaster zone – and scan the terrain for movement or people.
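For a sense of how simple those attraction/repulsion rules can be, here is a bare-bones flocking sketch in NumPy. The constants and the update rule are illustrative guesses, not the SmartLab’s algorithms:

```python
# Decentralised swarm step: attraction to the group, repulsion from neighbours.
import numpy as np

rng = np.random.default_rng(0)
positions = rng.uniform(0, 10, size=(8, 2))  # 8 robots on a 10x10 field

def step(pos: np.ndarray, repel_radius: float = 1.0, dt: float = 0.1) -> np.ndarray:
    attraction = pos.mean(axis=0) - pos             # pull toward the flock centre
    diff = pos[:, None, :] - pos[None, :, :]        # pairwise offsets
    dist = np.linalg.norm(diff, axis=-1) + 1e-9
    close = dist < repel_radius                     # neighbours that are too near
    repulsion = (diff / dist[..., None] * close[..., None]).sum(axis=1)
    return pos + dt * (attraction + 2.0 * repulsion)

for _ in range(100):        # no central controller: each update uses only local rules
    positions = step(positions)
print(positions.round(2))
```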

The main source of inspiration for Bastian’s algorithms that “drive” the robots is bees and ants. Both bees and ants use pheromone communication.  Bees use a dance for navigation which involves going back and forth to the beehive once they have found food, to demonstrate to other bees the vector pathway to the food source.  Bastian shows me a really cool video of how he and the SmartLab team created “bee robots” to perform this dance:

“Bee robots” in the SmartLab

Imagine the use of swarm algorithms to create choreographed robot runway models. The swarm algorithms would be supported by the development Joscha is doing on “state estimation” – the goal that a robot should know exactly where it is in relation to other things.

The runway robot models would know where they are in relation to each other and could use lasers to interact with the crowd, making ‘eye contact’ with viewers and giving them a wink or cocking a hip. I’d love to see that. I’m re-imagining the Thierry Mugler show in George Michael’s music video “Too Funky” with robots as the models.

(Incidentally, this is the video that made me realise I loved fashion and wanted to work in the industry, back in 1992.)

Robot Naomi, anyone??

Naomi Campbell by Seb Janiak

Maybe I’ll start the first ever robot model agency? It may at first seem bizarre, but it would solve the problem of real people trying to achieve unreal proportions. I am not for a second saying the modelling profession is doomed or needs replacing (maybe just disrupting); I just think there’s merit in an alternative. If you think about the way the industry currently works, editorial images are often manipulated to extreme non-human proportions (again, this can be creatively interesting, so I’m not dismissing it entirely), but the human aspect is significantly diminished.

Another area of biological algorithms Bastian is working on is the replication of ants’ passive technique of dropping pheromones to communicate with each other. Digital pheromones designed to mimic this communication could be used to divide labour between robots and solve complex problems. Robots can’t create biological pheromones, but there is development being done in this area. Imagine adding the ability to exude pheromones to runway robot models and a whole new heady mix of interaction and attraction is possible. Here’s a demo of the digital pheromone communication concept:
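In code, the idea boils down to a shared grid that robots deposit values onto and read from, with trails that evaporate unless reinforced. This toy sketch uses an arbitrary grid size, deposit amount and evaporation rate:

```python
# Toy digital-pheromone field: deposit, evaporate, and bias movement to trails.
import numpy as np

rng = np.random.default_rng(1)
grid = np.zeros((20, 20))   # shared pheromone field
ants = [(10, 10)] * 5       # all robots start at the 'nest'
EVAPORATION, DEPOSIT = 0.95, 1.0

for _ in range(50):
    grid *= EVAPORATION                  # trails fade unless reinforced
    moved = []
    for x, y in ants:
        grid[x, y] += DEPOSIT            # passive drop, as with real ants
        nbrs = [(x + dx, y + dy)
                for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]
                if 0 <= x + dx < 20 and 0 <= y + dy < 20]
        # Prefer neighbouring cells with stronger pheromone; keep some exploration.
        weights = np.array([grid[n] for n in nbrs]) + 0.1
        moved.append(nbrs[rng.choice(len(nbrs), p=weights / weights.sum())])
    ants = moved

print(grid.round(1))
```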

The discussion moves to flying robots, and Joscha explains that State Estimation currently works well on ground robots, but with flying robots it gets more difficult because of the extra dimensions of tilt and pitch. The X-box Kinect has a 3D camera that can be used on this type of robot prototype. Laser scanners can also be used to give a plane of distance measurements: based on the time it takes for the laser light to reflect back off the surface of the surrounding objects, the robot’s distance from those objects can be calculated. It makes me think of medical imaging and OCT (Optical Coherence Tomography) technology, where it’s possible to use a laser inside an artery to measure its dimensions – exactly the same principle, so I immediately get what Joscha is on about. Using the laser scanners, Joscha’s flying robots generate 3D maps:
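The calculation behind those laser measurements is simple time-of-flight: distance is the speed of light multiplied by the round-trip time, divided by two (the pulse travels out and back).

```python
# Time-of-flight range calculation, the principle behind laser scanners and OCT.
C = 299_792_458.0  # speed of light, m/s

def laser_distance_m(round_trip_seconds: float) -> float:
    return C * round_trip_seconds / 2.0  # halve it: the pulse goes there and back

print(laser_distance_m(2e-8))  # a 20 ns echo -> roughly 3 metres
```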

I mention drones at this point and Katie explains that they avoid the term because of a general perception of military use, preferring UAVs (unmanned aerial vehicles). It occurs to me again at this point that it must be a difficult, uphill battle to promote their advances in AI and robotics when such misconceptions exist. Joscha shows me his flying UAV, which uses silver reflective 3M tape as markers (which, coincidentally, I have used in yarn form for knitting) to understand its exact location. It is connected to a large geometric pitchfork-like object with silver balls on the end which allows me to direct it through the air with sweeping arm movements. Here are the guys in action:

Joscha’s UAV with silver 3M balls

A jumper from my AW11 knitwear collection with 3M yarn and camel hair

The flying UAV makes me think of a video I saw recently of the brilliant Disney illustrator Glen Keane, who wore a VR headset and stepped into his illustrations, using sweeping arm movements with a paddle to draw in the virtual space around him. He immersed himself in his drawing and saw his illustrations in real time, in 3D. The video of him ‘stepping into the page’ is well worth a look. So I guess I made an invisible air drawing with a robot.

I ask Katie whether there are barriers to spreading an accurate message about the work being done in AI and she cites “science fiction stories of dystopian societies where the robots take over” as a big source of misconception.  She is a passionate ambassador for AI, working on the research topic of ‘AI and law’ for over 10 years. She has been working with a local law firm for the past year to expand their use of AI and enable huge amounts of legal information to be analysed quickly and provide accurate and fast solutions.  The whole impetus behind AI is problem solving.  It’s about making our world a better and safer place, for example in disaster relief situations and with self-driving cars.  Katie shares some data on road traffic accidents: 97% of accidents are caused by human error.  That’s a compelling statistic.  Who doesn’t want to be safer on the road?  For the record, Katie tells me that self-driving cars, like those being developed by Google and Bosch, are expected to be launched in stages from partial self-driving initially, to 100% self-drive in the final stage.  So it won’t mean jumping into a vehicle and immediately relinquishing control.

Katie also shared with me some cool robots created by her Computer Science undergraduate students using Lego Mindstorms that demonstrate the concept of the self-driving car.


The ‘car’ has sensors on each side and at the front. The sensors detect the lines on the board (the road), and the car adjusts its direction and speed accordingly: it stays within the black lines (on the road), speeds up when it passes over green lines and slows down or stops when it passes over red. The cars are programmed with code written by the students and uploaded directly from a PC into the basic computer box that comes with the Lego Mindstorms kit. Pretty ingenious.
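A toy Python version of that control logic might look like this (the sensor readings and speed scale are invented for illustration):

```python
# One control tick of the Mindstorms-style line follower described above.
def drive(left_on_black: bool, right_on_black: bool, front_colour: str,
          speed: float) -> tuple[str, float]:
    """Return (steering command, new speed) from the three sensor readings."""
    if front_colour == "green":
        speed = min(speed + 0.1, 1.0)   # speed up over green lines
    elif front_colour == "red":
        speed = 0.0                     # slow down/stop on red
    if left_on_black:
        return "steer_right", speed     # drifting over the road's left edge
    if right_on_black:
        return "steer_left", speed
    return "straight", speed

print(drive(False, True, "green", 0.5))  # -> ('steer_left', 0.6)
```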


I’m writing up this post while attending London Fashion Week, which has just relocated from Somerset House to the Brewer Street Car Park in Soho, on a pokey corner of a one-way street causing a crammed carnival/circus atmosphere and serious logistical issues. I wonder if Google car could solve this problem? Maybe editor-sized Google self-driving pods could locate their editor by GPS and navigate them quickly from one show to another while they focus on updating their Twitter and Instagram rather than worrying about traffic jams.  It would be a kind of cross between Google car and GM Holden’s Xiao – EN-V electric pod, charging itself in between shows. There’s no way those big old Mercedes cars that have been the long-term ferrying service for LFW editors and buyers are going to get in and out of town fast enough with the new LFW location. The frustration at LFW was palpable outside the shows.

Google Car

GM Holden’s Xiao EN-V

So what can’t AI do?  I ask Katie about the use of AI and robotics in the creative industries.  Her response, “people regard this as the great separation between man and machine – the ability to be creative”, leaves me feeling relieved my job as a designer is safe.  Interestingly, a large part of Katie’s research work has involved categorising objective versus subjective information and determining how that information should be used in AI.

In terms of creative use of AI, examples exist in the gaming and film industries. In the Lord of the Rings films, programmers used AI-based crowd algorithms to program groups of CGI characters to move and interact together, rather than programming individual characters which would then need to be programmed to interact with each other (that’s a lot of programming!).

So where is AI and robot technology headed? What’s next? Joscha pipes up and says he would like a robot at the supermarket that can collect everything he wants and bring it back to him. That sounds like a weird and wonderful space to be in – robots and humans shopping alongside each other. Katie then mentions the garment-folding robot – this technology strikes me as useful for garment manufacturing (and at home, obviously). The current incarnation folds one item quite well, but the SmartLab team are all about robots being able not only to fold the item, but then to move across the warehouse and pack it. I personally would love to have a pattern-cutting robot to work alongside me, tracing off and cutting out clean, final versions of patterns I have created and digitising them as it goes.

As I finish writing up this blog post, sipping on a cup of coffee that (I’m ashamed to admit) I just reheated in the microwave, I wonder how people felt about microwave ovens when they were first introduced. Maybe there’s a similarity with AI, or any new industry-changing technology: people fear it because they don’t understand it, but once they see how useful it is, that fear subsides. Elementary, my dear Watson.

Follow me:  Twitter @Thetechstyler  and  Instagram @techstyler