Continuing on from my previous article, a major theme of WIRED 2016 was humanitarianism and the refugee crisis.
Roya Mahboob is an Afghan tech entrepreneur whose eyes were opened on a trip to an internet cafe in Herat. She talked passionately about how the internet offered her a life outside domesticity via a tech career. She became the first tech CEO in Afghanistan, hiring female employees (many of whom worked from home). She spoke of the challenge of first obtaining clients, owing to a lack of confidence in the abilities of women in her country, who are largely deemed fit only for domestic life, and then, once she did obtain clients, of the battle to be paid, because as a woman her work was not valued. Tension arose, and she and her family received death threats from the Taliban due to her breakout career and her creation of local centres to teach girls computing. She was forced to leave Kabul, where she had moved from Herat. She found an Italian-American investor (via LinkedIn), is now based in the US and declares herself a “global digital citizen”, opening a door to the world for women and girls in Afghanistan. For more information about Roya’s work, follow her on Twitter and see the Digital Citizen Fund.
Regina Catrambone, along with her family, launched the first private search and rescue boat for those fleeing danger and persecution on the journey to southern Italy from neighbouring countries. So devastated was she that hundreds of children and adults were being left to die on this treacherous passage that she co-founded MOAS – Migrant Offshore Aid Station. Since 2014 MOAS has saved more than 30,000 people, the youngest being four days old. Regina says “you cannot stop the might and the will of those looking for a chance to live. It is impossible. You can’t stop them. You have to help them”. Her speech was incredibly moving and showed how harnessing compassion and empathy can create powerful solutions, and how it can implore governments and other agencies to help solve the refugee crisis.
Brooklyn-based Jessica O. Matthews presented an ingenious creation – a football that stores energy from kinetic movement and then provides electricity for devices and appliances. A game-changing (I couldn’t help the pun) and simple piece of technology, it allows kids in off-grid areas to kick a football around during the day and then read books at night, continuing their studies and affording them a better chance in life. Jessica is extending her invention to other objects, such as wheeled suitcases into which you can plug your mobile phone to charge whilst on the go. See Uncharted Play for more information.
Jessica O. Matthews
Psychiatrist and aviator Bertrand Piccard piloted the Solar Impulse aircraft and declared that the “old world and new world are a state of mind”. Elaborating on this, he gave a thought-provoking talk that explained how a boat building company, Alinghi, created an aircraft, and how the coming together of teams from diverse disciplines allowed them to solve problems never before tackled. “If you want to innovate you have to get out of the system. What you know is a handicap”, says Bertrand. He and his team completed an around-the-world journey, travelling 40,000 km without fuel, proving that the capabilities of solar power are far beyond our current usage. He provided inspiration, and a challenge, to those dismissing renewable energies, and highlighted the current work of Elon Musk in bringing solar power into the transportation industry on a commercial level.
Wired has come to a close, leaving an echo that says I can’t keep doing things the same way. Knowing what I now know, and looking at how I have done things in the past, it’s time to adjust and apply new ways of thinking and creating. The talks catalyse new trains of thought and ignite the will to try new technologies, or apply existing ones in new ways.
Wired joins some of the biggest global moving dots with speakers from all over the world giving us a picture of where we are right now in terms of advancing new medical technologies, solving environmental issues, achieving universally acceptable levels of education, battling the refugee crisis, reaching space commercially, using AI to diagnose diseases, fighting hate, racial discrimination and sexism, and connecting people using VR to solve social issues – and it provides the inspiration to contribute to solving these problems.
I’m going to stop talking and start doing. The effects of the above paragraph will be revealed over the coming weeks and months on these pages, my Huffington Post blog and in a soon to be launched new venture.
What will you do today?
Watch snippets and read summaries of all the speakers at Wired here
A recent visit to Ravensbourne has catalysed a shift in my opinion of ‘fashion tech’ as a discipline and led to an animated discussion around the reasons for the aesthetic gulf between fashion design and technology. The reason for my visit was the European Space Agency initiative ‘Couture in Orbit’ – a fashion show at the Science Museum in May featuring the work of five fashion colleges in Europe: ESMOD Paris, ESMOD Berlin, Fashion Design Akademiet Copenhagen, Politecnico di Milano and Ravensbourne London. The initiative set about planting creative seeds for what will become a necessity – fashion in space. The colleges worked to a brief set by the ESA to present ideas and prototypes for fashion and accessories in the coming age of space travel. In response to a number of nasty and aggressive comments under a YouTube video of the initiative, the ESA wrote this:
Couture in Orbit is a student outreach project. The students are using materials and technology in their designs that are a spin-off from the space industry. Each school had a theme linked to an astronaut’s mission, such as environment, health, sustainability, and their final designs had to have practical benefits for life on Earth. No funds were exchanged and material and technical support was provided by Tech startups.
Yes, the designs could be seen as somewhat ‘amateurish’ and ‘costumey’ in concept and presentation, and describing them as ‘couture’ and ‘fashion’ is not strictly accurate; however, the idea here is key. Fashion’s robust approach to the design and creation of cohesive, refined collections does not allow for this kind of playful theatrics, but if fashion and tech are to advance there has to be some latitude where the end result is concerned. It makes no sense to judge this by the same standards as a show at London Fashion Week, for example, which exists for an entirely different purpose and is part of a totally different creative and commercial conversation. The YouTube comments demonstrate an attitude that demeans the validity and power of fashion, an attitude I have previously seen hinder cooperation between the fashion, science and tech sectors, but we will forge forward regardless.
‘Couture in Orbit’ designs
‘It is inevitable’, said Ravensbourne students Farid Bin Karim and Sam Martin-Harper of the fusion of fashion and technology in clothing to come. They took the same view of space travel – we know for certain there will be inhabitation of other planets and commercial journeys to space, so we need to design clothing fit for space life. The brief provided to the students by the ESA included an array of materials for them to use in their garments and accessories, including Sympatex and woven fabrics by Bionic Yarn and 37.5. Being presented with a fixed set of materials is challenging from a design perspective, as fashion design often begins with the selection of fabrics to complement an aesthetic or theme held by the designer. Removing this from the designer’s creative point of view throws up further challenges and provides experimental opportunities. Karim leads me into a discussion about Design Fiction, a framework based on critical design which is the foundation of his speculative design approach on the Wearables MA course at Ravensbourne. The modelling of future scenarios using Design Fiction provides a robust outline for predicting what fashion design could be in an age of commercial space travel, for example. Karim selects three modes of technology – one that exists but he can’t access, one that exists that he can access, and one that we can reasonably deduce will exist in the future – with which to begin to form a fashion tech product design scenario. This Design Fiction framework and critical design, attributed to Julian Bleecker and to Dunne and Raby respectively, and adopted widely in London as a modelling tool, begin to give me insight into how design for a future that we can’t yet imagine is conceivable and believable.
Farid explains that his self-closing helmet and kilt are inspired by sojourners travelling to space and creating their own exoplanet. His concept hinged on the sojourners creating protective barriers around themselves that responded to atmospheric changes to give visual notifications allowing them to react and adapt. His self-closing helmet is powered by muscle wires and his kilt, printed in collaboration with print designer and MA fashion student Laura Perry, has heat responsive ink which disappears at certain temperatures – a useful visual notification when things are hotting up. Farid also used a UV responsive pigment – another useful visual alert. Karim’s work is inspired by an array of creatives including artist Lucy McRae, writer HG Wells and movement artist and coder Nicola Plant.
Heat applied by the palm of the hand causes the ink to temporarily disappear
UV source applied to printed fabric
Visual alert to excessive UV rays
Farid Bin Karim’s self-closing helmet and reactive ink kilt, in collaboration with Laura Perry
Sam Martin-Harper presented an altogether more nostalgic proposition in which she expressed her belief (and hope) that we will always remain rooted to earth. Her love of biology and particular interest in the techniques for growing plants on the International Space Station, including the work of astronaut Tim Peake, drove her to create a 3D printed neck piece containing plant life. Admitting this is a conceptual piece, Sam explained how she used inspiration from the ingenious folding joint sections of space suits to inform the shapes and details of her design. Sam is completing her BA and is still exploring career options. One thing is for sure, she cannot see a future of fashion without the integration of tech.
Sam Martin-Harper’s 3D printed plant-filled neckpiece at ‘Couture in Orbit’
A discussion on Farid’s future work centres on his passion for data as a tool for creating responsive and adaptive design. He has been learning coding and electronics as part of his Wearables MA and sees future fashion as an extension of the individual – as ‘body centric’. On graduation, Karim hopes to work with a multi-disciplinary research facility to conduct collaborative research and design. When I ask if he would consider a traditional design job (he is a fashion graduate, after all), he reflects on how he has had to unlearn and relearn aspects of his design approach through his Wearables MA training in order to realise his part-industrial, part-fashion creations. It’s clear he’s happier in uncharted territory.
The discussion turned to couture and obsolescence. Karim is curious about the possible inclusion of technology in couture techniques in order to aid their survival, but this is completely at odds with the fact that couture means made by hand. This meaning of couture would therefore need to change for this to happen. I ponder a possible alternative in the form of technologies so specialised, rare and unique that they create a techno-couture instead. Here we begin to think about fashion and design being driven by technology, rather than the other way around.
In these discussions, as Alexa Pollmann, Course Leader of the MA Wearable Futures course, points out, it is important to consider the designs of Sam, Farid and the other students from Ravensbourne as proposals and prototypes – not final ‘fashion products’ per se. Ask any fashion designer working in the industry today their opinion of fashion tech and they will overwhelmingly tell you that it is gimmicky, ugly and not desirable. Herein lies the chasm between tech and fashion. Looks really count, and so does magic. Fashion designers bring an ephemeral quality to their creations, says Alexa. Fashion designers dream up and articulate experiences better than any other design discipline. They create magic in a way that is often so difficult to define it just feels ‘right’. Fashion is entirely subjective and indisputably powerful. For these reasons, Clive Van Heerden, co-founder of vHM Design Futures studio in London, which develops materials and technologies for a host of Wearable Electronic business propositions in the areas of electronic apparel, conductive textiles, physical gaming, medical monitoring and entertainment, insists on having a fashion designer in his creative team on all projects.
Designs by students from Politecnico di Milano
Designs by students from Ravensbourne
But why are fashion designers resistant to incorporating tech into their designs, and what is slowing down the advancement of the fashion tech fusion? One factor is that the development of tech-enabled or collaborative products takes considerable research and development, and therefore time. It requires dedication to solving specific problems for a single concept or product. That is at odds with designing, sampling and creating whole fashion collections which are visually cohesive, within a strict time frame of weeks or months at most, after which each collection has a finite sales period before the next one is created (making the current one obsolete, for want of a better word), and the cycle continues. The traditional cycle of two main collections per year for high-end fashion labels has switched to four in recent years, meaning there is even less time for research and development. Knowing this, it is easy to see why the work of fashion designers is at odds with the research and development required to incorporate tech, and vice versa. In a previous interview, designers Fyodor Golan pointed out that fashion tech collaborations often have a required fixed outcome within a tight time frame, limiting the amount of integration possible. This goes some way to explaining why fashion tech sometimes looks more ‘stuck on’ than cohesively and meaningfully designed and produced.
Read more about the technologies involved in the Couture in Orbit project here
Header image: Farid Bin Karim’s self-closing helmet and adaptable ink kilt at ‘Couture in Orbit’
Jam packed. Back-to-back nuggets of our tech-driven future being dispensed by the world’s game changers. That’s how it feels to sit in the front row at Wired 2015.
Advances in technology are set to drive healthcare, music and art. Artificial Intelligence (AI) is enabling us to begin to explore healthcare based on an individual’s biology and behaviour: sensors record this information and use it to create a personal health map, which is then used to assess how multiple factors combine to tell us, for example, whether or not a drug will work for that individual. When you look at this information on a large scale (and using AI, we can effectively process this big data), it can be used to shape drug trials and cut down the usual 10-15 years and 20+ billion dollars it takes to develop a drug which, for many people, may not even work. This is powerful stuff and it’s what BERG is all about. Speaker Niven Narain had me at the Salvador Dali-like graphics, but once I understood the implications for not only treating but preventing disease, I was fully inspired.
Gabor Forgacs is taking this concept a step further and asking what a world of bio-printing would be like, where we can 3D print tissues and organs. To be clear, the printing of organs is not yet possible (there is currently no way to provide nutrients and a blood supply to 3D printed organs, so they can’t be kept “alive” whilst they are being printed). What is possible is the printing of small areas of tissue that can then be used to perform drug (or other) trials and determine the effectiveness of a drug on an individual (in the most specific sense), or on humans (in the broader sense), rather than on animals. This would far improve current drug testing methods which, partly because they are carried out on animals, unsurprisingly result in hit-and-miss effectiveness for humans. It’s the dawn of a new and exciting era for healthcare, bringing cost savings (maybe it can help slash our NHS bills and slow down privatisation? Wishful thinking, perhaps) and improvements in treatment. I am willing Gabor to develop this technology – and fast.
In the creative space, Eyal Gever is an artist whose “palette is code”. He develops visual representations of sound, as demonstrated by animated graphics responding to beatboxer Reeps One’s voice. It’s a blast of auditory and visual stimulation, making it hard to imagine the sound without the graphics and vice versa. Eyal has also created a water simulation piece that captures the movement and physiology of a dancer in real time. The result is a digital rendering of a liquified dancer exploding into droplets and reforming as if orchestrated by the sound. It’s enthralling.
Eyal’s work extends to 3D printing moments in time captured digitally then created physically and installed amongst great and revered artworks including those of William Turner and Matisse.
His next project is a collaboration with NASA aimed at 3D printing in microgravity on the International Space Station. This continues nicely from a talk I attended the night before, held by Pint of Science at King’s College. The headline speaker was NASA astronaut Ken Bowersox, and Eyal’s work helps complete a narrative Ken began about collaboration, cross-disciplinary work and how creative the teams are while working on the International Space Station. They spend a huge amount of time devising and performing informal experiments and photographing space and the earth below.
Eyal’s earlier sound sculpture work
The mandate for Eyal is to create art in space. He decided to capture sound, create its shape digitally (an extension of his previous work) and 3D print it before releasing it into space. The sound he felt best captured the human spirit was the laugh. Want to send your laugh to space? The Laugh app is available to download (but is apparently well hidden) so you can record your laugh, or the laugh of those around you, submit it for selection and potentially send your laugh in physical form to space!
One of the day’s most highly anticipated speakers was Martha Lane Fox. The mild irritation I felt as she sat a few seats from me and chatted to her assistant during the talks before hers (no doubt doing last-minute prep – sorry, couldn’t help the pun!) turned to disappointment when she failed to offer any solutions to the myriad problems we face as women in tech (or rather, resulting from not enough women being in tech), which she duly reminded us of during her talk. Yes, we know there aren’t enough women working in tech. Yes, we know women are just as capable, intelligent and resilient as men. We also know that people like Sue Black, Emma Mulqueeny and Anne-Marie Imafidon are doing great work to encourage girls to study STEM subjects and enter STEM careers, yet the number of women entering the professional world of science and technology has been declining for years.
Martha’s impassioned analogy of wanting to create a warrior horde of women, inspired by her journey to the Altai mountains in Mongolia, where she studied women who removed their breasts so as not to impede their use of a bow and arrow (and therefore perform their duty equally alongside men), is a poetic one, but does it help our current debate? Martha’s rhetoric sticks in my throat. I wish she had come at this topic from a current standpoint, with a little more reality and a little less fantasy. On the ground, London-based Founders and Coders are training women to code in four months, for free. Two of them (Michelle and Claire) are currently in Poland competing with their ModeForMe colleagues to win investment for their ‘kickstarter for fashion’ app. In their spare time they run a group called Prosecco JS (girls who love Prosecco and JavaScript) and teach coding to other women, for free, at Founders and Coders. I wrote a full blog post on Michelle and ModeForMe, and they are just one example, but you can see there is movement in the right direction and it’s inspiring. Martha’s talk does such startups and groups a disservice. Offer solutions, Martha. You’re preaching to the converted.
I am a hybrid. I began my career as a radiographer (x-raying and scanning patients) before studying fashion design and pattern-cutting. Upon graduation I worked simultaneously in hospitals and fashion studios, leading me to launch a fashion label in 2009, creating digital knitwear inspired by CT and MRI scans, named Brooke Roberts. I am now a designer, consultant, lecturer and blogger with a fascination for combining science, technology and fashion, both in a real and imagined sense. Anything is possible.
ARTIFICIAL INTELLIGENCE AND ROBOTICS are generally met with a mixture of fascination and fear. They’re certainly hot topics – the Google self-driving car is the most recent high-profile example, telling us that AI and robotics are developing quickly and will soon become part of our everyday lives.
Twitter fed me an intriguing tweet about a Robotics event called TAROS being hosted by The University of Liverpool Computer Science Department, headed by Professor Katie Atkinson. TAROS was open to the public and aimed at de-mystifying AI and robotics and sharing advances and potential uses for it. I couldn’t make it to TAROS but Professor Atkinson and the team agreed to an interview on another day and so begins Techstyler, a blog about science, technology and style.
My pre-reading led me to this summary of AI: it aims to have computers make decisions for us by analysing many factors (more than we could digest), in a quicker time frame than we could analyse them, and coming up with an answer or series of solutions. In other words, it streamlines the decision-making process – so AI is not such a scary concept.
When it comes to robotics, it’s a common misconception that all robots are human-like, or at least that that’s the end goal of robotics. In truth, many robots are built and programmed to do one very specific task and do not have any interactive or independent processing capability. For example, they might pick up a pen from a conveyor belt and put it in a box. They will repeat the exact same action over and over – even if a human moves into their path, they will continue their programmed action. Since killing people is NOT the aim of robots, this limitation is something that the team in the SmartLab at The University of Liverpool are working to solve, amongst many others.
Professor Atkinson (who introduced herself as Katie and is informal and very approachable) took me behind the scenes at the SmartLab to meet the team, which is headed up by Professor Karl Tuyls and includes PhD students Bastian Boecker, Joscha-David Fossel and Daniel Claes. The SmartLab team are exploring how we can take robots beyond the static, limited industrial type to robots that move around and interact with their environment; they can be programmed to work in groups based on biological models – like the way bees and ants work together – and can use laser and infrared technology to understand where they are, detect objects and sense the presence of humans.
The guys in the SmartLab
The SmartLab team awards including the RoboCup Championship trophy, 2014
I spend an afternoon chatting to the guys in the SmartLab about their work and messing about with their robots, leaving me inspired and full of wonder (and them further behind in their PhD work). They were really generous not just with their time, but in sharing their knowledge. This is something that always humbles and inspires me in science – it exists to share knowledge and enlighten. By contrast, the creative industries, and fashion in particular, can be very secretive and exclusive which may explain why, in many ways, it is resistant to change and evolution, paradoxically. I talk about this in another upcoming blogpost with designer Sarah Angold, so stay tuned.
Daniel begins by telling me about RoboCup@work and the robots they’re developing with the aim of helping humans in a factory and advancing away from the static pre-programmed robot arms (which are really huge, heavy, limited to one action and housed in cages surrounded by lasers) to robots that can reason and react to their environment. Total re-programming of static robots is required when the production or task changes within a factory, so this is an inconvenient and expensive limitation.
The robot prototype (featured in the video below and as yet unnamed – I ask why?!) is not static but on wheels and is able to analyse its environment and carry out a given task in response to its environment. In this case, Daniel types in commands and the robot then responds to a specific request to pick up a bolt and nut from a table and drop them in the corresponding shaped holes at another table nearby. It uses a camera to scan its environment (it has a set of environmental objects which it has been programmed to recognise) and collects then moves and drops the correct objects. Here it is in action:
I took a couple of snaps of the pieces the robot picks up and the respective shapes the robot drops them into (Literal footnote: My boots are Dr Martens X Mark Wigan):
The items the robot chooses from
The respective shapes the robot drops the items into
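The task Daniel demonstrates boils down to a lookup from recognised object to target hole. As a thought experiment, here is my own illustrative Python sketch (the names and structure are invented, not the SmartLab’s code): the robot only acts on objects from the fixed set it has been programmed to recognise.

```python
# A thought-experiment sketch (invented names, not the SmartLab code):
# the robot recognises objects from a fixed, pre-programmed set and
# plans to drop each one into the hole registered for its shape.

HOLE_FOR_SHAPE = {"bolt": "round_hole", "nut": "hex_hole"}

def plan_placements(detected_objects):
    """Map each recognised object to its target hole, skipping unknowns."""
    plan = []
    for obj in detected_objects:
        hole = HOLE_FOR_SHAPE.get(obj)
        if hole is not None:
            plan.append((obj, hole))
    return plan
```

An unrecognised object (a stray pen, say) is simply left alone, which mirrors how the prototype only responds to objects in its programmed set.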
My thoughts turn to the use of robots in fashion. Perhaps the most beautiful and arresting use of robots in fashion so far has been Alexander McQueen’s haunting SS 1999 show:
The choreography of the robot arms excites my imagination, like snakes priming to attack. The industrial robots in the McQueen show are the type used to paint cars in a manufacturing plant. I start wondering how robots could impact the fashion industry once the development the SmartLab guys are doing becomes accessible, particularly since biological algorithms are being used to develop robots that could potentially interact as a ‘choreographed group’.
Bastian is working on swarm robotics: a bunch of robots that control themselves using really simple (I laugh and remind Bastian that’s a relative term) algorithms based on attraction and repulsion – the same way a flock of birds forms a swarm as a group, rather than following a centralised source. Swarms are really robust: you can take a couple of robots out of the system and the flock will still work. Bastian imagines his flock being used for planetary exploration with more localised connectivity, rather than for broadband-based military-type operations. Katie further explains that having many small, interlinked robots means that if one robot breaks down the others can continue exploring – the mission isn’t scuppered. Motion sensor-based communication between robots allows them to spread out and cover a large area – for example an earthquake disaster zone – and scan the terrain for movement or people.
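The attraction/repulsion idea Bastian describes can be sketched in a few lines. This is a minimal, hypothetical Python illustration (the thresholds and names are my own, not the SmartLab’s algorithm): each robot looks only at its neighbours, steering toward those that are too far away and away from those that are too close, with no central controller.

```python
# Hypothetical sketch of a decentralised attraction/repulsion swarm rule.
# Each robot inspects only its neighbours' positions: it steers toward
# robots beyond NEIGHBOUR_RADIUS and away from robots inside
# SEPARATION_RADIUS, so the flock forms without central coordination.

NEIGHBOUR_RADIUS = 5.0   # beyond this distance, attract
SEPARATION_RADIUS = 1.0  # within this distance, repel
STEP = 0.1               # how far a robot moves per update

def update(positions):
    """One decentralised update step; positions is a list of (x, y)."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        dx = dy = 0.0
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            vx, vy = ox - x, oy - y
            dist = (vx * vx + vy * vy) ** 0.5
            if dist > NEIGHBOUR_RADIUS:      # too far: move closer
                dx += vx / dist
                dy += vy / dist
            elif dist < SEPARATION_RADIUS:   # too close: back off
                dx -= vx / dist
                dy -= vy / dist
        new_positions.append((x + STEP * dx, y + STEP * dy))
    return new_positions
```

Remove a robot from the list and the rule still works for the rest, which is exactly the robustness Katie describes.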
The main source of inspiration for Bastian’s algorithms that “drive” the robots is bees and ants. Both bees and ants use pheromone communication. Bees use a dance for navigation which involves going back and forth to the beehive once they have found food, to demonstrate to other bees the vector pathway to the food source. Bastian shows me a really cool video of how he and the SmartLab team created “bee robots” to perform this dance:
“Bee robots” in the SmartLab
Imagine the use of swarm algorithms to create choreographed robot runway models. The swarm algorithms would be supported by the development Joscha is doing on “state estimation” – the goal that a robot should know exactly where it is in relation to other things.
The runway robot models would know where they are in relation to each other and could use lasers to interact with the crowd, making ‘eye contact’ with viewers and giving them a wink or cocking a hip. I’d love to see that. I’m re-imagining the Thierry Mugler show in George Michael’s music video “Too Funky” with robots as the models.
**Incidentally, this is the video that made me realise I loved fashion and wanted to work in the industry, back in 1992.**
Robot Naomi, anyone??
Naomi Campbell by Seb Janiak
Maybe I’ll start the first ever robot model agency? It may at first seem bizarre, but it would solve the problem of real people trying to achieve unreal proportions. I am not for a second saying I think the modelling profession is doomed or needs replacing (maybe just disrupting), I just think there’s merit in an alternative. If you think about the way the industry currently works, editorial images are manipulated often to extreme non-human proportions (again, this can be creatively interesting, so I’m not dismissing it entirely) but the human aspect is significantly diminished.
Another area of biological algorithms Bastian is working on is the replication of ants’ passive technique of dropping pheromones to communicate with each other. Digital pheromones designed to mimic this communication could be used to divide labour between robots and solve complex problems. Robots can’t create biological pheromones, but development is being done in this area. Imagine adding the ability to exude pheromones to runway robot models, and a whole new heady mix of interaction and attraction is possible. Here’s a demo of the digital pheromone communication concept:
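To make the digital pheromone idea concrete, here is a toy Python sketch (entirely my own illustration, with invented names, not the SmartLab’s implementation): robots communicate indirectly by depositing values into a shared grid, the values evaporate each tick so fresh trails dominate, and a robot follows the strongest neighbouring cell.

```python
# Toy illustration of digital pheromone communication (invented sketch).
# Robots write values into a shared grid instead of exchanging messages;
# evaporation makes old trails fade so the swarm favours fresh routes.

EVAPORATION = 0.9  # fraction of pheromone kept after each time step

def deposit(grid, cell, amount=1.0):
    """A robot drops pheromone in the cell it occupies."""
    grid[cell] = grid.get(cell, 0.0) + amount

def evaporate(grid):
    """Called once per time step: all trails decay, faint ones vanish."""
    for cell in list(grid):
        grid[cell] *= EVAPORATION
        if grid[cell] < 1e-3:
            del grid[cell]

def strongest_neighbour(grid, cell):
    """Greedy trail-following: move to the adjacent cell holding the
    most pheromone."""
    x, y = cell
    neighbours = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return max(neighbours, key=lambda c: grid.get(c, 0.0))
```

The appeal of this stigmergic style is that no robot needs to know about any other robot, only about the grid, which is what lets the labour divide itself.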
The discussion moves to flying robots, and Joscha explains that state estimation currently works well on ground robots, but with flying robots it gets more difficult because of the extra dimensions of tilt and pitch. The Xbox Kinect has a 3D camera that can be used on these types of robot prototypes. Laser scanners can also be used to take distance measurements across a plane: based on the time it takes for the laser light to reflect back off the surfaces of surrounding objects, the robot’s distance from those objects can be calculated. It makes me think of medical imaging and OCT (Optical Coherence Tomography), where it’s possible to use a laser inside an artery to measure its dimensions – exactly the same principle, so I immediately get what Joscha is on about. Using the laser scanners, Joscha’s flying robots generate 3D maps:
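The time-of-flight principle behind both the laser scanners and OCT reduces to one line of arithmetic: the pulse travels to the surface and back, so the distance is half the round-trip time multiplied by the speed of light. A minimal sketch:

```python
# Laser time-of-flight: the pulse makes a round trip out to the
# surface and back, so the object sits at half the total path length.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def distance_from_echo(round_trip_seconds):
    """Distance to a surface from the round-trip time of a reflected pulse."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```

A 20-nanosecond round trip, for example, corresponds to roughly three metres, which gives a sense of the timing precision these scanners need.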
I mention drones at this point and Katie explains that they avoid the term because of a general perception of military use, preferring UAVs (unmanned aerial vehicles). It occurs to me again that it must be a difficult and uphill battle to promote their advances in AI and robotics when such misconceptions exist. Joscha shows me his flying UAV, which uses silver reflective 3M tape as markers (which, coincidentally, I have used in yarn form for knitting) to understand its exact location. It is connected to a large geometric pitchfork-like object with silver balls on the end, which allow me to direct it through the air with sweeping arm movements. Here are the guys in action:
Joscha’s UAV with silver 3M balls
A jumper from my AW11 knitwear collection with 3M yarn and camel hair
The flying UAV reminds me of a video I saw recently of the brilliant Disney illustrator Glen Keane, who wore a VR headset and stepped into his illustrations, using sweeping arm movements with a paddle to draw in the virtual space around him. He immersed himself in his drawing and saw his illustrations in real-time 3D. The video of him ‘stepping into the page’ is well worth a look. So I guess I made an invisible air drawing with a robot.
I ask Katie whether there are barriers to spreading an accurate message about the work being done in AI and she cites “science fiction stories of dystopian societies where the robots take over” as a big source of misconception. She is a passionate ambassador for AI, having worked on the research topic of ‘AI and law’ for over 10 years. For the past year she has been working with a local law firm to expand their use of AI, enabling huge amounts of legal information to be analysed quickly and accurate solutions to be delivered fast. The whole impetus behind AI is problem solving. It’s about making our world a better and safer place, for example in disaster relief situations and with self-driving cars. Katie shares some data on road traffic accidents: 97% of accidents are caused by human error. That’s a compelling statistic. Who doesn’t want to be safer on the road? For the record, Katie tells me that self-driving cars, like those being developed by Google and Bosch, are expected to be launched in stages, from partial self-driving initially to 100% self-drive in the final stage. So it won’t mean jumping into a vehicle and immediately relinquishing control.
Katie also shared with me some cool robots created by her Computer Science undergraduate students using Lego Mindstorms that demonstrate the concept of the self-driving car.
The ‘car’ has sensors on each side and at the front. The sensors detect the lines on the board (road), and the car adjusts its direction and speed accordingly in order to stay within the black lines (stay on the road), speed up when it passes over the green lines and slow down/stop when it passes over the red. The cars are programmed with code written by the students and uploaded directly from a PC into the basic computer box that comes with the Lego Mindstorms kit. Pretty ingenious.
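The control logic the students wrote might look something like this Python sketch – the sensor names and actions are my own invented stand-ins, not the actual Mindstorms code:

```python
# Hypothetical control step for the line-following 'car':
# stay between the black lane edges, speed up on green, stop on red.

def next_action(left: str, front: str, right: str) -> tuple[str, str]:
    """Return (steering, speed) for one step, given three sensor colours."""
    if left == "black":        # drifting over the left lane edge
        steering = "right"
    elif right == "black":     # drifting over the right lane edge
        steering = "left"
    else:
        steering = "straight"

    if front == "green":       # green line: accelerate
        speed = "fast"
    elif front == "red":       # red line: brake
        speed = "stop"
    else:
        speed = "normal"
    return steering, speed

print(next_action("white", "green", "black"))
```

Run in a loop many times a second, simple rules like these are enough to keep the car on the road – the same sense-decide-act cycle that full-sized self-driving cars perform with far richer sensors.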
I’m writing up this post while attending London Fashion Week, which has just relocated from Somerset House to the Brewer Street Car Park in Soho, on a pokey corner of a one-way street, causing a crammed carnival/circus atmosphere and serious logistical issues. I wonder if the Google car could solve this problem? Maybe editor-sized Google self-driving pods could locate their editor by GPS and navigate them quickly from one show to another while they focus on updating their Twitter and Instagram rather than worrying about traffic jams. It would be a kind of cross between the Google car and GM Holden’s Xiao EN-V electric pod, charging itself in between shows. There’s no way those big old Mercedes cars that have been the long-term ferrying service for LFW editors and buyers are going to get in and out of town fast enough with the new LFW location. The frustration at LFW was palpable outside the shows.
So what can’t AI do? I ask Katie about the use of AI and robotics in the creative industries. Her response, “people regard this as the great separation between man and machine – the ability to be creative”, leaves me feeling relieved my job as a designer is safe. Interestingly, a large part of Katie’s research work has involved categorising objective versus subjective information and determining how that information should be used in AI.
In terms of creative use of AI, examples exist in the gaming and film industries. In the Lord of the Rings movies, programmers used crowd algorithms based on AI to animate CGI characters in groups that move and interact together, rather than programming individual characters which then need to be programmed to interact with each other (that’s a lot of programming!).
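The trick behind crowd algorithms is that each character only follows a few simple local rules, and the group behaviour emerges on its own. Here is a deliberately tiny Python illustration of one such rule (cohesion: drift toward the average position of the flock) – purely my own sketch, not the film’s actual software:

```python
# Minimal flocking-style rule: every agent moves a fraction of the way
# toward the centre of the group. No agent is scripted individually,
# yet the whole crowd visibly pulls together over repeated steps.

def cohesion_step(positions: list[float], strength: float = 0.1) -> list[float]:
    """Move each 1-D agent a fraction of the way toward the flock centre."""
    centre = sum(positions) / len(positions)
    return [p + strength * (centre - p) for p in positions]

print(cohesion_step([0.0, 10.0]))  # both agents drift toward the centre at 5.0
```

Real crowd systems layer several such rules (separation, alignment, obstacle avoidance) in three dimensions, which is why thousands of orcs can charge convincingly without anyone animating them one by one.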
So where are AI and robot technology headed? What’s next? Joscha pipes up and says he would like a robot at the supermarket that can collect everything he wants and bring it back to him. That sounds like a weird and wonderful space to be in – robots and humans shopping alongside each other. Katie then mentions the garment folding robot – this technology strikes me as useful for garment manufacturing (and at home, obviously). The current incarnation folds one item quite well, but the SmartLab team are all about robots being able to not only fold the item, but then move across the warehouse and pack it. I personally would love to have a pattern-cutting robot to work alongside me, tracing off and cutting out clean, final versions of patterns I have created and digitising them as it goes.
As I finish writing up this blog post, sipping on a cup of coffee that (I’m ashamed to admit) I just reheated in the microwave, I wonder how people felt about microwave ovens when they were first introduced. Maybe there’s a similarity with AI or any new industry-changing technology? People fear it because they don’t understand it, but once they see how useful it is, that fear subsides. Elementary, my dear Watson.