TIME technology

How 3D Printing Helps Robots Tackle Their Greatest Obstacle


One of the main challenges for robots is still traveling efficiently over rugged surfaces

We’ve long attempted to recreate living creatures in robot form. From the earliest days of robotics, there have been attempts to reproduce systems similar to human arms and hands. This has been extended to flexible and mobile platforms reproducing different animals, from dogs to snakes to climbing spider octopods, and even entire humanoids.

One of the key actions performed by animals from mantises to kangaroos is jumping. But incorporating a jumping mechanism into autonomous robots requires much more effort from designers. One of the main challenges for robots is still traveling efficiently over rugged surfaces and obstacles. Even the simple task of going up or down a staircase has proven rather difficult for robot engineers.

A jumping robot could provide access to areas that are inaccessible to traditional mobile wheeled or legged robots. In the case of some search-and-rescue or exploration missions, in collapsed buildings for example, such a robot might even be preferable to unmanned aerial vehicles (UAVs) or quadcopter “drones.”

There has been increasing research in the robotics field to take on the challenges of designing a mobile platform capable of jumping. Different techniques have been implemented for jumping robots such as using double jointed hydraulic legs or a carbon dioxide-powered piston to push the robot off the ground. Other methods include using “shape memory alloy” – metal that alters its shape when heated with electrical current to create a jumping force – and even controlled explosions. But currently there is no universally accepted standard solution to this complex task.

A new approach explored by researchers at the University of California San Diego and Harvard University uses a robot with a partially soft body. Most robots have largely rigid frames incorporating sensors, actuators and controllers, but a specific branch of robotic design aims to make robots that are soft, flexible and compliant with their environment – just like biological organisms. Soft frames and structures help to produce complex movements that could not be achieved by rigid frames.

The new robot was created using 3D printing technology to produce a design that seamlessly integrates rigid and soft parts. The main segment comprises two hemispheres nestled one inside the other to create a flexible compartment. Oxygen and butane are injected into the compartment and ignited, causing it to expand and launching the robot into the air. Pneumatic legs tilt the robot body in the intended jump direction.

Unlike many other mechanisms, this allows the robot to jump continuously without a pause between each movement as it recharges. For example, a spring-and-clutch mechanism would require the robot to wait for the spring to recompress and then release. The downside is that this mechanism would be difficult to mass-manufacture because of its reliance on 3D printing.

The use of a 3D printer to combine the robot’s soft and hard elements in a single structure is a big part of what makes the design possible. There are now masses of different materials for different purposes in the world of 3D printing, from flexible NinjaFlex to high-strength nylon and even traditional materials such as wood and copper.

The creation of “multi-extrusion” printers with multiple print heads means that two or more materials can be used to create one object using whatever complex design the engineer can come up with, including animal-like structures. For example, NinjaFlex, with its high flexibility, could be used to create a skin- or muscle-like outer material, combined with nylon near the core to protect vital inner components, just like a rib cage.

In the new robot, the top hemisphere is printed as a single component but with nine different layers of stiffness, from rubber-like flexibility on the outside to full rigidity on the inside. This gives it the necessary strength and resilience to survive the impact when it lands. By 3D printing and trialing multiple versions of the robot with different material combinations, the engineers realized that a fully rigid model would jump higher but would be more likely to break, and so went with the more flexible outer shell.

Once robots are capable of performing more tasks with the skill of humans or animals, such as climbing stairs, navigating on their own and manipulating objects, they will start to become more integrated into our daily lives. This latest project highlights how 3D printing can help engineers design and test different ideas along the road to that goal.

This article originally appeared on The Conversation


TIME Ideas hosts the world's leading voices, providing commentary and expertise on the most compelling events in news, society, and culture. We welcome outside contributions. To submit a piece, email ideas@time.com.

TIME Science

The Cloud Will Be the Key to Our Robotic Future


The Aspen Institute is an educational and policy studies organization based in Washington, D.C.

These are the 5 elements of cloud robotics

Robots haven’t yet lived up to their potential, but they’re about to benefit enormously from the Cloud. I’ll describe how a new generation of robots can use wireless networking, big data, machine learning, open-source software, and the Internet of Things to improve how they assist us in tasks from driving to housekeeping to surgery.

The roots of this trend go back to the early 1990s, when the World Wide Web was first introduced. I was a young professor at the University of Southern California directing my grad students in a robotics research lab. One afternoon they came to my office and showed me something that blew my mind. On my desktop computer, they launched Mosaic, the first web browser. We explored some of the first websites, including one where anyone could view a live camera pointed at a coffee pot in a college student lounge. The camera had been rigged up by Cambridge University grad students to check when there was a fresh pot available: what you might call “disruptive caffeination.”

That night, my students and I stayed up late in the lab brainstorming about how we could take that idea further. Instead of passively watching things with webcams, could we use the Web to let remote visitors actively move and change the environment in our lab using our robot?

Rather than having the robot do something boring like stacking blocks, what if it could tend a garden filled with living plants? We took an industrial robot arm and fitted it with a digital camera, an irrigation system, and a pneumatic nozzle to pick up seeds. We installed it at the center of a custom circular aluminum planter three meters across and filled it a half-meter deep with potting soil and some starter plants. We designed a graphical web interface that anyone could access to view, water, and plant seeds in the garden by moving the robot.

The Telegarden went online in the summer of 1995. Word got around, and within weeks, thousands of people were visiting this community garden. Many came back regularly to water their plants. There was a chat room where people would post requests saying they were going on vacation and asking if someone could water their plants. Thousands of seedlings began to sprout, and the Telegarden quickly became overgrown. It turned into a study of the Tragedy of the Commons.

We were surprised; we’d been concerned that gardening might be the last thing people would want to do online (this was ten years before Farmville). After a year we were invited to install the Telegarden in a museum in Austria, where it remained online, 24 hours a day, for nine years.

To our knowledge, more people operated that robot than any robot in history.

The Telegarden was the first active device on the web, MIT Press published two books about it, and soon many other devices and systems were connected to the web.

Since then, the field of robotics has advanced considerably. There are now hundreds of research labs and over a dozen journals. There are over 5 million service robots like the Roomba vacuuming homes and offices, and over 3,000 robots assisting surgeons in operating rooms around the world. There have also been major advances in digital cameras, inertial motion sensors, and other sensors. When Microsoft introduced the Kinect 3D camera for gaming, it was a major breakthrough for robotics, providing a low-cost way to obtain 3D point clouds that can help robots navigate and manipulate. In 2012, President Obama announced the U.S. National Robotics Initiative with over $70 million in new funding for research.

But robots are not yet folding our laundry or loading our dishes into the dishwasher. These mundane chores are fiendishly difficult for robots. The essential problem is uncertainty. Put yourself in the position of being a robot: everything around you is blurry and unsteady, low-resolution and jittery. You can’t really tell what things are, where they are, or how they are moving. You can’t perfectly control your own hands; it’s as if you’re wearing huge oven mitts and goggles smeared with Vaseline. I hope you’re having some sympathy now for robots.

I believe the Cloud is the key to a new generation of robots. Take Google’s robot car. The car uses the network to access Google’s enormous database of maps and satellite and Street View imagery and combines it with streaming data from GPS, cameras, and 3D sensors to monitor its own position within centimeters, and with past and current traffic patterns to avoid collisions. This gives Google an enormous advantage over automobile companies like Toyota and General Motors.

Why is Google interested in Robots? Because Google understands the Internet.

BTW, I like Brad Templeton’s observation: “A robot will be truly autonomous when you instruct it to go to work and it decides to go to the beach instead.”

Although robots have been on the Internet for 20 years, in 2010 James Kuffner, a brilliant researcher at Google, coined the term “Cloud Robotics.” The Cloud isn’t just a new name for the Internet. It’s a new paradigm that uses the Internet in new ways. Think of Google Docs. Anyone can send Microsoft Word documents over the Internet, but Google Docs is different: The document and the software don’t live on your computer. Everything is stored in the Cloud using remote server farms with shared memory and processors. This is helpful because you don’t have to worry about your disk crashing or maintaining and updating your software or hardware. The Cloud also provides economies of scale and makes sharing data across applications and users easier than ever. (Of course, it also raises huge privacy and security concerns.)

Here’s what I see as the Five Elements of Cloud Robotics.

1. The first element concerns memory. The 2012 indie film Robot & Frank is a masterpiece that offers a unique and I think quite realistic glimpse into the future (like the 2013 film Her by Spike Jonze). A man is growing older and becoming forgetful. His kids send him a robot to help him around the house. It cleans up for him and reminds him to eat healthy and to water his garden. When I saw this film, I could imagine myself wanting a robot like this to remind me to eat kale, take my meds, and do my situps, but also to keep me company, reminding me of relevant memories, maybe even telling me jokes based on current events and what’s happening in my environment. But such household robots aren’t available yet. One problem is that there are thousands of objects in a typical house.

Consider designing a robot to declutter your house. This is very important for anyone who has kids and especially for those of us who are getting older. When a senior citizen drops something on the floor, they may not notice it because of poor eyesight. Even if they see it, it’s not easy to reach down and pick all these things up. But the consequence of slipping and falling can be catastrophic; if you break your hip, for example, it could lead to being bedridden, which leads to loss of circulation and depression. After a certain age, things left on the floor can be fatal.

What if a robot could work quietly while you are sleeping or at work, picking things up off the floor and putting them where they belong? The problem is that no matter how well you program the robot, no matter how many objects it stores in its onboard memory, there will always be something that it hasn’t seen before. Like this new remote control I bought to advance my slides. If it fell on the floor, my robot may not know what to do with it. Is it a chocolate bar? Does it belong on my desk? In the fridge? Or in the garbage?

Fortunately, any robot that’s working in your house will be connected to a wifi network. So it has access to a vast library of information on the Internet, where there’s information on almost every object imaginable. It’s an enormous amount of information, and it’s constantly growing. The problem is that keeping all that memory onboard the robot in your house isn’t feasible. But the Cloud makes that information available on demand. The first element of Cloud Robotics is Big Data.

2. The second element of Cloud Robotics addresses limitations in onboard computer processing. Robots can carry at most a few computers, and there are many problems that require far more computation than those can provide.

Robots are starting to use a statistical approach known as Belief Space.

One exciting approach is to model the environment, sensors, and actions using probability distributions. The mathematical term for this is Belief Space. I know that sounds like something spiritual or from science fiction, but it’s shorthand for Partially Observable Markov Decision Processes (POMDPs). Solving them, finding the optimal action in this context, requires taking the convolution of several distributions. This quickly becomes infeasible as the probability distributions become more complex, multi-modal, and non-parametric.

Finding solutions requires an enormous amount of computing power. Belief Space was considered intractable until very recently, when we gained access to clusters of computers on demand through the Cloud. Such computing power also facilitates statistical optimization, machine learning, and planning motions in high dimensions with many simultaneously moving robots. The second element of Cloud Robotics is Cloud Computing.
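As a rough illustration of the belief-space idea, and not any production system, here is a tiny histogram filter: the robot’s belief is a probability distribution over corridor cells, each motion step convolves it with a noise kernel, and each sensor reading reweights it. The corridor, door positions, and noise numbers are all invented for this sketch.

```python
def normalize(belief):
    total = sum(belief)
    return [p / total for p in belief]

def motion_update(belief, kernel):
    """Convolve the belief with a motion-noise kernel (wrapping corridor)."""
    n = len(belief)
    new = [0.0] * n
    for i, p in enumerate(belief):
        for shift, weight in kernel:
            new[(i + shift) % n] += p * weight
    return new

def sensor_update(belief, likelihood):
    """Reweight each cell by how well it explains the sensor reading."""
    return normalize([p * l for p, l in zip(belief, likelihood)])

# Start fully uncertain: uniform belief over a 10-cell corridor.
belief = [0.1] * 10

# Doors at cells 1, 4 and 8; the sensor reports "door" (10% false positives).
door_likelihood = [0.9 if i in (1, 4, 8) else 0.1 for i in range(10)]
belief = sensor_update(belief, door_likelihood)

# The robot tries to move one cell right: 80% exact, 10% under/overshoot.
belief = motion_update(belief, [(0, 0.1), (1, 0.8), (2, 0.1)])

print(max(range(10), key=lambda i: belief[i]))  # most probable cell
```

Even in this toy, each motion step multiplies the belief by the kernel at every cell; with richer, multi-modal distributions that convolution is exactly the cost that pushes the computation into the Cloud.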

3. The third element of Cloud Robotics uses the fact that humans are increasingly connected over the web, exchanging information with each other. Here’s an example. I was born in Nigeria, and I went back there a few years ago. I was surprised to find an enormous interest in robotics among students. In Africa, as everywhere, when kids see a robot, it’s very physical, it’s something they can identify with, and it catches their interest. Robots are a “gateway drug” for getting students interested in Science, Technology, Engineering, and Math (STEM). I met students in different areas of Ghana who were excited about robots, but in many cases they didn’t know about each other.

Robots are a “gateway drug” for getting students interested in Science, Technology, Engineering, and Math.

I met a wonderful professor, Ayorkor Korsah, at Ashesi University. We decided to start the African Robotics Network. The idea was to connect all these groups interested in robots, not only around Ghana but all over Africa, and let them learn from each other and develop new tools and new ideas.

Existing educational robots are still relatively expensive. We saw the need for an ultra-affordable robot for education, so we created a website and announced a worldwide design competition. We set the price target at US $10. We figured that was completely ludicrous, but it would get people thinking. After four months we received 28 submissions. Many were elegant designs, but the grand prize winner really surprised us.

It was a robot made out of a Sony game controller. A designer named Tom Tilley modified a game controller, which anyone can buy surplus for about $3 or $4. Two motors are built into it to create vibration as you play the game. He took them out and attached a wheel to each one, and the body of the game controller became the body of a mobile robot. He also added lights, but he wanted something that would detect when it bumped into an obstacle.

He realized that he could use the two thumb switches on the top. The problem was that when he tested it, the thumb switches wouldn’t react; they needed more leverage. So he needed a counterweight for the thumb switches. He thought about what might work that would be available at fairly low cost and came up with a brilliant idea: lollipops.

What kid could resist a robot with two lollipops on top of it?

Tom calls it the “Lollibot,” and you can find detailed instructions on how to make your own on the web. The total price for all the parts comes to $8.96 (and that includes the lollipops)!

Another exciting development is ROS, the Robot Operating System. It’s an open source software library that has changed the field of robotics. It’s the Linux for robots. When someone comes up with a new algorithm, he or she can immediately load it into ROS and make it accessible to other researchers around the world. The third element of Cloud Robotics is Open-Source: shared access to human ingenuity: code, data, and designs.

4. The fourth element of Cloud Robotics is based on robots communicating directly with each other. Consider Amazon, which has to rapidly fill thousands of orders for books and other products, packing boxes with items that can be located all over huge warehouses. A company called Kiva Systems designed a new type of robot to address this problem. The robots move around storage racks filled with boxes of items. There can be hundreds or even thousands of them in a warehouse, and they dramatically increase efficiency. What makes it all work is that these robots are constantly talking to each other. They work together to coordinate traffic patterns, and if conditions change, say one robot finds a bit of grease on the floor, it instantly alerts the others to avoid the spot. The fourth element of Cloud Robotics is Collective Robot Learning: robots sharing data and code to collectively improve their performance.
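A toy sketch of that fleet-coordination idea, with an invented grid, class names, and routing rule: one robot’s hazard report lands in shared state that every other robot consults before moving.

```python
# Shared hazard state; in a real fleet this would live in the cloud,
# not in a module-level set. Everything here is illustrative.
shared_hazards = set()

class WarehouseRobot:
    def __init__(self, name):
        self.name = name

    def report_hazard(self, cell):
        """Flag a grid cell (e.g. grease) for the whole fleet at once."""
        shared_hazards.add(cell)

    def next_step(self, current, options):
        """Pick the first candidate cell not flagged by any robot."""
        safe = [c for c in options if c not in shared_hazards]
        return safe[0] if safe else current  # wait in place if all blocked

a = WarehouseRobot("A")
b = WarehouseRobot("B")
a.report_hazard((3, 4))                       # A finds grease at cell (3, 4)
print(b.next_step((3, 3), [(3, 4), (2, 3)]))  # B routes around it: (2, 3)
```

The point is only the data flow: robot B never sensed the grease itself, yet its very next routing decision reflects robot A’s discovery.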

5. The fifth element of Cloud Robotics concerns error modes, where the robot can’t determine what to do. If you work in robotics, you realize there are always going to be situations where the robot doesn’t know what to do.

When a robot gets stuck, it could ask a human for help.

When all else fails, the Cloud could provide access to call centers with humans standing by to diagnose the robot’s data and video and suggest steps for recovery. I realize this is the opposite of what happens today, when you call technical support and a robot voice answers. One key to making this work is for the robot to monitor its confidence levels, so it can recognize when it is sufficiently uncertain that it needs to stop and request help. So the fifth element of Cloud Robotics is Crowdsourcing.
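The confidence-monitoring idea can be sketched in a few lines. Everything here, including the object labels, the probabilities, and the 0.8 threshold, is invented for illustration: the robot acts only when its best hypothesis is confident enough, and otherwise escalates to a human.

```python
CONFIDENCE_THRESHOLD = 0.8  # invented cutoff for this sketch

def decide(hypotheses):
    """hypotheses: dict mapping an object label to the robot's probability."""
    label, confidence = max(hypotheses.items(), key=lambda kv: kv[1])
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"act: put away the {label}"
    # Too uncertain: stop and send sensor data and video to a human.
    return f"escalate: ask a human (best guess: {label} at {confidence:.0%})"

print(decide({"remote control": 0.95, "chocolate bar": 0.05}))
print(decide({"remote control": 0.55, "chocolate bar": 0.45}))
```

The first call acts on its own; the second stops and requests help, which is exactly the moment a call center with a human in the loop would take over.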

Until now, robots have been viewed as self-contained systems with limited computation and memory. Cloud Robotics suggests an exciting alternative in which robots access and exchange data and code via wireless networking.

The Five Elements of Cloud Robotics are:

  1. Big Data: indexing a global library of images, maps, and object data,
  2. Cloud Computing: grid computing on demand for statistical learning and motion planning,
  3. Open-Source: humans sharing robot code, data, algorithms, and hardware designs,
  4. Collective Robot Learning: robots sharing trajectories, control policies, and outcomes that can be analyzed with statistical machine learning methods, and
  5. Crowdsourcing and call centers: offline and on-demand human guidance for evaluation, learning, and error recovery.

Cloud Robotics will build on related efforts including the “Internet of Things,” IBM’s “Smarter Planet,” General Electric’s vision of the “Industrial Internet,” and Siemens’ concept of “Industry 4.0.” These approaches have enormous potential but also open a Pandora’s box of issues related to privacy and security. But when robots have their heads in the clouds, the sky’s the limit.

Why We Love Robots is a five-minute short documentary film (co-directed with my wife, award-winning filmmaker Tiffany Shlain) that combines cultural references and found footage to explore our human fascination with robots and emerging research in Cloud Robotics. It was nominated for a 2014 Emmy Award and won a “Botscar” (robot film Oscar) at the 2014 Robot Film Festival.

This article was originally published by The Aspen Institute on Medium


TIME robotics

Robot Kills Man at Volkswagen Plant

The machine grabbed and crushed the technician

A robot crushed a worker at a Volkswagen production plant in Germany, the company said Wednesday.

A 22-year-old man was helping to assemble the stationary robot, which grabs and configures auto parts, on Monday when the machine grabbed him and pushed him against a metal plate, the Associated Press reported. He later died of his injuries. Volkswagen did not release the man’s name.

A spokesperson for the car company told the Associated Press that the robot can be programmed for specific tasks and that the company believes the malfunction was due to human error.

Though the company uses some lightweight robots to work on the production line next to humans, a spokesperson told the Financial Times that this was not one of those robots. The type of robot that crushed the employee is usually kept in a cage. The man was working on the robot inside the cage when he was grabbed.

Prosecutors are still deciding whether to bring charges and whom they would pursue.

[AP]

Read next: Driverless Car Maker Denies Claim of Near-Miss With Google Car

TIME robotics

South Korean Team Wins DARPA Robotics Challenge

Challenge was launched in response to the 2011 nuclear disaster in Japan

A South Korean team took home the top prize on Saturday at the DARPA Robotics Challenge Finals in California.

Team Kaist from Daejeon and its robot, DRC-Hubo, beat out 22 competing robots to win the $2 million grand prize, according to DARPA, the military technology arm of the Defense Department. The humanoid robot performed the best across a series of eight tasks relevant to natural disaster response, including tripping circuit breakers, climbing stairs and walking through rubble.

Team IHMC Robotics of Pensacola, Fla., and its Running Man robot took second place and $1 million, and the CHIMP robot of Pittsburgh’s Tartan Rescue came in at No. 3, earning $500,000.

The competition, launched after Japan’s earthquake, tsunami and nuclear disaster in 2011, was intended to accelerate development in robotics with the ability to one day enter disaster-stricken areas that are too dangerous for humans.

“This is the end of the DARPA Robotics Challenge,” DARPA Director Arati Prabhakar said, “but only the beginning of a future in which robots can work alongside people to reduce the toll of disasters.”

TIME Tech

Watch This Robot Perfectly Mimic a Master Japanese Swordsman

This robot can cut through anything

Robots these days can play the violin, send us Amazon packages, even adapt if they get hurt. Now, one of them can slice us all to bits.

Yaskawa Electric Corporation recently tracked the movements of Isao Machii, a renowned Japanese swordsman, and taught a robot called Motoman-MH24 how to mimic his use of a katana (also known as a scary Japanese sword). In a video released by the robotics company, the sword-wielding robot can be seen slicing through flowers and fruit with ease and precision. It even battles Machii, who holds several world records and has sliced through a fried shrimp hurtling toward him at 80 m.p.h., in a 1,000-cut challenge.

Hopefully, Yaskawa’s next project is a robot that’s really good at dulling swords so it can protect all of humanity.

TIME robots

Watch the Scariest Robot in the World Jump Over Stuff Automatically

Please don't become self-aware

It’s bad enough that Boston Dynamics has made a robotic cheetah that can run nearly 30 m.p.h. (48 km/h). Now MIT has its own cheetah robot that can autonomously leap tall obstacles in a single bound. The robot uses lasers to see its environment, and the onboard computer uses a three-part algorithm to detect an obstacle, adjust its approach, then calculate the appropriate jump trajectory. The entire process takes about 100 milliseconds. Right now the cheetah can clear hurdles as high as 18 in. (46 cm) at an average running speed of 5 m.p.h. (8 km/h).
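For a sense of the trajectory-calculation step, here is some back-of-the-envelope ballistics using the article’s figures. The function structure and the 5 cm clearance margin are assumptions for illustration, not MIT’s actual algorithm.

```python
import math

G = 9.81  # gravity, m/s^2

def jump_plan(run_speed, hurdle_height, clearance=0.05):
    """Return (vertical takeoff velocity, flight time, ground covered)
    needed to clear the hurdle with a margin, at constant forward speed."""
    apex = hurdle_height + clearance       # required peak height, m
    v_up = math.sqrt(2 * G * apex)         # from v^2 = 2*g*h at the apex
    t_flight = 2 * v_up / G                # time up plus time back down
    return v_up, t_flight, run_speed * t_flight

# The article's figures: 5 m.p.h. ~ 2.24 m/s run, 18 in. ~ 0.46 m hurdle.
v_up, t_flight, dist = jump_plan(2.24, 0.46)
print(f"launch {v_up:.2f} m/s upward, {t_flight:.2f} s airborne, "
      f"{dist:.2f} m covered")
```

Under these assumptions the robot needs a vertical launch of roughly 3.2 m/s and is airborne for about two-thirds of a second, which makes the 100-millisecond planning budget feel appropriately tight.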

MIT researchers are planning to demonstrate their cheetah’s abilities at the DARPA Robotics Challenge in June.

TIME Careers

These Jobs Are Most Likely To Be Taken by a Computer

Gerard Julien—AFP/Getty Images A man moves his finger toward the SVH (Servo Electric 5 Finger Gripping Hand), an automated hand made by Schunk, during the 2014 IEEE-RAS International Conference on Humanoid Robots in Madrid on November 19, 2014.

Great news, dentists!

Telemarketers’ jobs have the highest chance of being automated, according to a recent report. Other positions with huge potential for being overtaken by robots? Cashiers, tellers and drivers, among others, according to a new NPR interactive.

While telemarketers have a 99% chance of one day being totally replaced by technology (it’s already happening), cashiers, tellers and drivers all have over a 97% chance of being automated. Many positions within the “production” category put together by NPR, including packaging and assembly jobs, tend to rank highly as well.

The job with the lowest shot at being overtaken by technology in the future? Mental health and substance abuse social workers. They have a 0.3% chance, according to the data. Occupational therapists also rank at 0.3%, while dentists, surgeons and nutritionists appear pretty safe at just 0.4%.

Per NPR:

The researchers admit that these estimates are rough and likely to be wrong. But consider this a snapshot of what some smart people think the future might look like. If it says your job will likely be replaced by a machine, you’ve been warned.

To play around with the complete data, check here. But beware, it’s pretty addictive.

TIME robotics

Watch This Drone’s Unfortunate Encounter With a Goose

Both appear to be okay

In a video posted by RTV NH, a Dutch regional broadcaster, a goose appears to smash into a drone (potentially on purpose). Thankfully, both the drone and the goose seemingly end the encounter unscathed.

In case you were wondering, the bird is an Egyptian Goose, which is known for being territorial. The drone was capturing images of Oudorperpolder, in the city of Alkmaar, according to The Daily Mail.

Unfortunately for drones and fowl alike, this could become an increasingly common occurrence, especially as the drone industry is expected to keep booming in coming years. Business Insider reports that commercial and civilian drone sales will grow 19% annually between 2015 and 2020, while military models will see 5% growth.

The drone industry overall is expected to be worth over $8.4 billion by 2019, CNBC reports, citing ABI Research estimates.

This article originally appeared on Fortune.

TIME robotics

A Drug-Buying Robot Has Been Freed From Police Custody

!Mediengruppe Bitnik Items purchased on the darknet by the Random Darknet Shopper

The bot, programmed to buy illegal goods online, was part of an art exhibition

A robot programmed to buy drugs from illegal online markets has been freed by Swiss police. The shopping bot, called the “Random Darknet Shopper,” was created last fall by a Swiss art group called !Mediengruppe Bitnik to purchase illicit goods online using a weekly allowance of $100 worth of Bitcoin. The various items the bot bought at random, including counterfeit sneakers and ecstasy, would be delivered to the art group’s gallery for an exhibition.

Swiss police captured the robot back in January and confiscated its purchases. However, last week, the art group announced that the police had returned Random Darknet Shopper as well as all of the goods it bought, except for the ecstasy. A Swiss police official told CNBC that the makers of the robot wouldn’t be charged for programming the robot to buy illegal items.

“This is a great day for the bot, for us and for freedom of art!” the art group wrote in a blog post.

[CNBC]

TIME Innovation

Meet The Robot Chef That Can Prepare Your Dinner

Moley Robotics An automated kitchen by Moley Robotics.

It can do a very pleasant crab bisque in less than 30 minutes

Ever since Americans were introduced to Rosie, the beloved robot maid on The Jetsons, way back in the 1960s, robotic household help has been the ultimate in futuristic dream products.

A new product from Moley Robotics might bring that future one step closer, as the company unveiled a robot chef on Tuesday at Hannover Messe, a trade fair for industrial technology in Germany. Consisting of two robotic arms in a specially designed kitchen, which includes a stovetop, utensils and a sink, the device is able to reproduce the movements of a human chef in order to create a meal from scratch. The robot learns the movements after they are performed by a human chef, captured on a 3D camera and uploaded to the computer.

A few weeks before the robot chef was unveiled, Moley invited TIME to check out the robot and test its fare. In less than half an hour, the robot made a crab bisque, based on the recipe and technique of Tim Anderson, winner of the 2011 season of MasterChef in the U.K., who is working with Moley to develop the kitchen. From selecting the right heat level on the stovetop to adding the pre-arranged ingredients at just the right moment to operating a small mixer, the robot arms made the soup from scratch. It even plated the soup, scraping the bottom of the ladle against the rim of the saucepan to prevent drips.

Why crab bisque? “Crab bisque is a challenging dish for a human chef to make, never mind a robot,” explains Anderson. “If it can make bisque, it can make a whole lot of other things.” When asked if he feels threatened by seeing a machine expertly recreate one of his recipes, Anderson is somewhat surprisingly on the side of the technology. “Some people ask if this is going to put me out of a job. This has already given me a job.”

Comparing the robot to cookbooks and YouTube tutorials by professional chefs, Anderson says, “I think it’s going to help people build brands.” The aim is to have professional chefs record themselves cooking their own recipes so that the robot will be able to mimic the techniques and replicate the dish. Anderson envisions people learning how to make a variety of dishes by watching their robots in action. “It’s changed the way I think about cooking,” he says.

Moley, which was founded by computer scientist Mark Oleynik, has partnered with the London-based Shadow Robot Company, which developed the kitchen’s hands. Twenty motors, two dozen joints and 129 sensors are used to mimic the movements of human hands. The robotic arms and hands are capable of grasping utensils, pots, dishes and various bottles of ingredients. Oleynik says that the robot hands are also capable of powering through cooking tasks quickly, though they’ve been designed to move quite slowly, so as not to alarm anyone watching them work.

Sadly for vegetarians, like Shadow Robot’s managing director Rich Walker, crab bisque is the only dish the robot is currently able to make. However, the company plans to build a digital library of 2,000 recipes before the kitchen is available to the wider public. Moley ambitiously aims to scale the robot chef for mass production and begin selling them as early as 2017. The robotic chef, complete with a purpose-built kitchen, including an oven, hob, dishwasher and sink, will cost £10,000 (around $15,000). Yet that price point will depend on a relatively high demand for the kitchen and it’s still unclear how large the market is for such a product at the moment.

Dan Kara, a robotics analyst for U.S.-based ABI Research, a market intelligence company that specializes in emerging technology, tells TIME that the household robots that have found a market tend to be smaller devices that tackle tedious chores. “Successful products for the home that I’ve seen have been floor-cleaning [devices] — sweepers and vacuums — and pool cleaners and lawnmowers,” he says, noting that people tend to favor robots that tackle tasks they don’t want to do “because it’s boring or repetitive.” Another key factor of a product’s success is affordability. “Once [robots] get above a certain price, the number of people using them drops right off.”

A robotic chef, however, “just seems like a bridge too far at this time,” though Kara points out that he isn’t familiar with Moley’s kitchen or its specific technology.

Which isn’t to say that a robot chef wouldn’t have interested buyers. The robotics industry is growing and the Boston Consulting Group has estimated that spending on robots could “jump from just over $15 billion in 2010 to about $67 billion by 2025.”

But there is still work to be done on Moley’s kitchen before it would be an even remotely practical, albeit pricey, purchase. As the robot doesn’t have any way to see, it’s unable to locate an ingredient or utensil that has been moved or knocked out of place. It also can’t chop or prep food yet, so it must use prepared ingredients that are meticulously laid out. The company is working on improving the robot’s functions and expanding its capabilities, but as Oleynik admits, “it will have some limits because nothing can replace human touch.”
