Researchers at Stony Brook University have developed an artificial intelligence system that creates realistic three-dimensional videos of the Martian surface. The project, called Martian World Models, aims to help space agencies simulate exploration and prepare for future missions.
Chenyu You, assistant professor in the Departments of Applied Mathematics and Statistics and of Computer Science at Stony Brook University, led the effort to address a long-standing challenge in planetary research. According to You, most AI models are trained on Earth imagery, which makes it difficult for them to interpret Mars’s unusual lighting, textures, and geometry. “Mars data is messy,” said You. “The lighting is harsh, textures repeat, and rover images are often noisy. We had to clean and align every frame to make sure the geometry was accurate.”
To solve these issues, the team created a specialized data engine named M3arsSynth. This tool reconstructs physically accurate 3D models of Martian terrain using photographs taken by NASA rovers. By processing pairs of images from the rovers, M3arsSynth calculates depth and scale with precision to build detailed digital landscapes that reflect Mars’s actual structure.
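The article does not detail M3arsSynth's internals, but the step it describes, recovering depth from pairs of rover images, rests on the standard stereo triangulation relation: depth equals focal length times camera baseline divided by disparity. Below is a minimal sketch of that relation; the function name and the focal-length and baseline values are illustrative, not the actual rover camera parameters or the project's code.

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Convert a stereo disparity map (in pixels) to metric depth (in meters).

    Uses the triangulation relation: depth = focal_length * baseline / disparity.
    Pixels with zero or negative disparity (no stereo match) are set to NaN.
    """
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full_like(disparity, np.nan)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Toy 2x2 disparity map standing in for a matched rover stereo pair.
disp = np.array([[20.0, 10.0],
                 [ 5.0,  0.0]])  # 0.0 marks a pixel with no match
depth = depth_from_disparity(disp, focal_px=1000.0, baseline_m=0.4)
# Nearer surfaces produce larger disparities and therefore smaller depths:
# depth[0, 0] = 1000 * 0.4 / 20 = 20 m
```

Because the baseline and focal length are known physical quantities, depth computed this way carries real-world scale, which is what lets a reconstruction reflect Mars's actual structure rather than an arbitrary-scale model.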
These reconstructions are then used as training material for MarsGen, an AI model that generates new videos of Mars from a single frame, a text prompt, or a specified camera path. The system can produce smooth video sequences that capture both the appearance and the physical realism of Martian landscapes.
“We’re not just making something that looks like Mars,” You said. “We’re recreating a living Martian world on Earth — an environment that thinks, breathes, and behaves like the real thing.”
Further details about this work can be found in a full story by Ankita Nagpal on the AI Innovation Institute website.