
MIT breaks new ground in AI with ‘deep’ knitting, yes, knitting

What’s the point of teaching a computer how to knit? MIT researchers found that it goes beyond nice gloves and scarves: it offers a window into deep learning forms of artificial intelligence.
Written by Tiernan Ray, Senior Contributing Writer

A hot trend in artificial intelligence in recent years has been the rise of impressive fakes -- fake headshots, fake videos, fake text. Deep learning techniques, part of machine learning, have gotten better and better at taking real-world data and using it to make something artificial, such as a picture, seem incredibly convincing. 

Researchers at the Massachusetts Institute of Technology on Monday announced an AI approach that goes in the opposite direction: it takes something real and makes it artificial. The application is somewhat surprising: knitted garments that need to be reproduced. The system studies a picture of a garment and computes a series of stitches to give to an automated knitting machine. 

Under the curious title, "Neural Inverse Knitting: From Images to Manufacturing Instructions," the paper describes how to take images of knitted garments and use convolutional neural networks, or CNNs, and a generative adversarial network, or GAN, to produce a floor plan or blueprint that specifies at each point in a garment which of seventeen different stitch types should be used. The paper is authored by Alexandre Kaspar, Tae-Hyun Oh, Liane Makatura, Petr Kellnhofer, and Wojciech Matusik of MIT. 
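To make the output concrete, the blueprint amounts to a two-dimensional grid in which each cell holds one of the 17 stitch types. Here is a minimal sketch of that representation in Python, with a hypothetical, abbreviated stitch vocabulary (the names are illustrative, not the paper's actual instruction set):

```python
import numpy as np

# Hypothetical subset of the 17 stitch types; the names are
# illustrative, not the paper's actual instruction vocabulary.
STITCH_TYPES = ["knit", "purl", "tuck", "miss", "front_cross", "back_cross"]

# A stitch instruction map is just a 2D grid of class indices,
# one per stitch location -- here a tiny 4x6 swatch.
instruction_map = np.array([
    [0, 0, 1, 1, 0, 0],
    [0, 2, 1, 1, 2, 0],
    [0, 0, 4, 5, 0, 0],
    [0, 0, 1, 1, 0, 0],
])

# The network's job is to predict such a map from a photograph;
# a knitting machine then executes it row by row.
for row in instruction_map:
    print(" ".join(STITCH_TYPES[i] for i in row))
```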

The research is published in conjunction with a second paper, "Knitting Skeletons: A Computer-Aided Design Tool for Shaping and Patterning of Knitted Garments," in which Kaspar, Makatura, and Matusik introduce a software program to easily lay out the stitch patterns for a garment. 

The idea of the second paper is a software tool that makes it super easy for a person who has no experience with knitting to produce instructions for an automated knitting machine. Automated knitting machines have proliferated, such as systems by Shima Seiki. But they typically require some domain expertise to program, so Kaspar and colleagues wanted to offer a way for novices to get in on the action by simplifying the design pipeline. It's rather like what has happened with "additive manufacturing," where people can upload their designs to a 3-D printer in the cloud. Here, the MIT authors want to advance "knitting as a service," as they call it. (The world just got another acronym, "KaaS.")

[Image: The deep knitting pipeline, from real-world photos of garments and ground-truth labels of stitches, on the left, to inferred stitch patterns produced by the network, on the right. Credit: MIT]

That spares the user the "machine coding" of the knitting machine, but it still leaves lots of work to specify all the stitches in a garment. And that's where it gets interesting for machine learning: the authors came up with an algorithm to produce a pattern of machine-intelligible stitches automatically.

Also: Nvidia's fabulous fakes unpack the black box of AI

"During its operation, a whole garment knitting machine executes a custom low-level program to manufacture each textile object," they explain in "Neural Inverse Knitting."

"Typically, generating the code corresponding to each design is a difficult and tedious process requiring expert knowledge." 

[Image: The neural inverse knitting network, with a "refiner" that combines real photos and synthetic photos of garments, and a program generator that outputs machine-readable stitch instructions. Credit: MIT]

Hence the need for machine learning: an ability to figure out such machine instructions automatically, starting from a sample garment, in what you could call "computational knitting."

As detailed in the paper and its supplementary materials, the neural network has to compute two different things: first, an idealized representation of the garment in question, and second, the stitches involved.

For the first part, the neural net is fed two kinds of samples: real photographs of garments that the authors knitted from scratch and then photographed, and synthetic images of garments generated by their design software. The synthetic images are cleaner than the real-world photographs, and so they are used to clean up, in a sense, the real-world images.
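In training terms, that means each batch can draw from two pools of labeled examples. Here is a rough sketch of such a mixed dataset in PyTorch, assuming the images and their stitch labels have already been prepared elsewhere (all names here are hypothetical):

```python
import random
from torch.utils.data import Dataset

class MixedKnitDataset(Dataset):
    """Draws from real photographs and synthetic renders of knits.

    `real_pairs` and `synth_pairs` are assumed to be lists of
    (image_tensor, instruction_map) tuples prepared elsewhere.
    """
    def __init__(self, real_pairs, synth_pairs, real_fraction=0.5):
        self.real_pairs = real_pairs
        self.synth_pairs = synth_pairs
        self.real_fraction = real_fraction

    def __len__(self):
        return len(self.real_pairs) + len(self.synth_pairs)

    def __getitem__(self, idx):
        # Sample the domain first, then an example from it, so each
        # batch mixes clean synthetic renders with messy real photos.
        pool = (self.real_pairs if random.random() < self.real_fraction
                else self.synth_pairs)
        image, labels = random.choice(pool)
        return image, labels
```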

Also: MIT ups the ante in getting one AI to teach another

A "refiner" module, using a generative adversarial network, or GAN, melds the real and synthesized images, and cleans up the real images by transforming them according to the idealized, regularized synthetic images of the knit patterns. 

As the authors describe it, they're reversing the typical use of GANs, which try to "map" something fake to something real, to produce a convincing facsimile. Here, they instead want to simplify and clarify the messiness of the real world with something simulated. 

"The previous methods have investigated generating realistic-looking images," they write. "We instead learn to neutralize the real-world perturbation by mapping from real data to synthetic looking data." Through an ablation study, they show that the mix of real and synthetic images does better when computing the stitches involved compared to just using the real photos of knit items. 

[Image: The authors knitted samples from scratch and stretched them on metal rods so they could photograph them, building a dataset of real-world images to train the neural network. Credit: MIT]

The second part, called "Img2Prog," derives the stitches from the mix of real and synthetic images. Both sets of images carry "ground truth" labels indicating which of the 17 stitch types appears at each point in the image. By optimizing the cross-entropy loss between the ground-truth labels and the output of the neural network, the neural net learns the statistical pattern of the 17 different stitches across thousands of example images.
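In code, that objective is the standard per-location cross-entropy used by segmentation-style models. A minimal sketch, assuming a hypothetical `img2prog` network that outputs one 17-way score per stitch location:

```python
import torch.nn.functional as F

NUM_STITCH_TYPES = 17

def img2prog_loss(img2prog, images, target_maps):
    """Cross-entropy between predicted and ground-truth stitch maps.

    images:      (batch, 3, H, W) refined or synthetic photos
    target_maps: (batch, h, w) integer stitch labels in [0, 17)
    """
    # Network output: one score per stitch type at each grid cell.
    logits = img2prog(images)  # shape (batch, 17, h, w)
    # cross_entropy handles the per-location multi-class comparison.
    return F.cross_entropy(logits, target_maps)
```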

All this has some fascinating connections to other areas of machine learning. The authors write that what they're doing is akin to "semantic segmentation," where a machine learning algorithm is trained to pick out objects in a photo. "This resembles semantic segmentation which is a per-pixel multi-class classification problem," they write, with an important difference: "semantic segmentation is an easy -- although tedious -- task for humans, whereas parsing knitting instructions requires vast expertise or reverse engineering."

More provocative is the assertion by the authors that their work is akin to "program synthesis," whereby a neural network is used to construct a computer program. 

"In terms of returning explicit interpretable programs, our work is closely related to program synthesis, which is a challenging, ongoing problem," they write.  

"Our task would have potentials to extend the research boundary of this field, since it differs from any other prior task on program synthesis in that: 1) while program synthesis solutions adopt a sequence generation paradigm, our type of input-output pairs are 2D program maps, and 2) the domain-specific language is newly developed and applicable to practical knitting."

And there you have it: from differentiable knit caps and gloves to differentiable computer code.
