Assembling building blocks
Li Zheng, a doctoral student in Kochmann’s group, trained an AI model on a dataset of one million structures and their simulated responses. “Imagine a huge box of Lego bricks – you can arrange them in countless ways and over time learn design principles. The AI does this extremely efficiently and learns essential design features and how to assemble the building blocks of metamaterials to give them a particular softness or hardness,” says Zheng. Unlike prior approaches, which relied on a small catalogue of building blocks as the basis for design, the new method gives the AI the freedom to add, remove, or rearrange building blocks almost arbitrarily. Together with Sid Kumar, an assistant professor at TU Delft and a former member of Kochmann’s team, the researchers showed in a recently published paper that the AI model can even go beyond its training and predict structures that outperform anything generated before.
Learning from the movies
Jan-Hendrik Bastek, also a doctoral student in Kochmann’s group, took a different route to a similar goal. He used a method originally introduced for AI-based video generation, which has since become commonplace: type in ‘an elephant flying over Zurich’, and the AI generates a realistic video of an elephant circling the Fraumünster Church. Bastek trained his AI system on 50,000 video sequences of deforming 3D-printable structures. “I can specify the trajectory of how I want the structures to deform, and the AI produces a video of the optimal structure and its complete deformation response,” explains Bastek. Most previous approaches have focused on predicting only a single image of the optimal structure. Giving the AI videos of the entire deformation process, however, is crucial to retaining accuracy in such complex scenarios. Based on the video sequences, the AI can then create blueprints for new materials.