Google rolled out Project Genie on February 4, giving AI Ultra subscribers in the United States a text-to-world generator. Users type a description or upload an image, and the system instantly creates a playable environment with real-time physics. The launch caps six months of closed-beta testing and answers creators' demand for easier world-building tools.
Users pick a character and a control scheme, then the system assembles an interactive space in seconds. Behind the scenes, the Genie model works with Gemini and Nano Banana Pro to predict how objects will react as users move through the scene, Google said in its February 4 announcement.
Project Genie shrinks hours of manual world-building into seconds of AI-driven generation. Creators who lack technical expertise can now prototype interactive spaces that once demanded game-engine knowledge and scripting skills. In December, beta testers created more than 50,000 unique environments, averaging 45 minutes per session to fine-tune their worlds, according to Google's AI product documentation.
Moving from static AI-generated images to dynamic, physics-aware worlds expands what a solo creator can build. Users can grab a world someone else made, tweak the generation parameters, and spin out variations. The workflow resembles forking code on GitHub—only the "repo" is a game environment.
Google hasn't said whether Project Genie can export to Unity, Unreal, or other production pipelines. Without announced engine integration, the tool stays in experimental territory. Access costs $19.99 per month through Google AI Ultra and is limited to the United States. Google also has not disclosed any timeline for expansion or details on content-moderation policies for generated worlds.
The real test will come when Google reveals whether these generated worlds can feed into production engines or remain isolated experiments.