Build a web application in 10 minutes with Google AI Studio: real vibe coding, where what you think is what you see!

Written by
Audrey Miles
Updated on: June 18, 2025

Google AI Studio lets non-professional developers get started easily and experience the appeal of vibe coding.

Core content:
1. An introduction to Google AI Studio and its four modes
2. Hands-on experience with the Build App feature and the official demos
3. Using the Gemini model to develop a "lazy writing companion" application

Yang Fangxian
Founder of 53A; Tencent Cloud Most Valuable Expert (TVP)


There are plenty of new buzzwords this year. "AI programming" already sounds dated; the fashionable term now is vibe coding.


I recently tried the new Build App feature of Google AI Studio. I wouldn't call it magical, but it is genuinely impressive: the best what-you-see-is-what-you-get way of programming I can imagine, especially for a non-programmer like me who makes a living by talking and needs to mock up product prototypes.


Google AI Studio ( https://aistudio.google.com ) is a cloud AI development platform from Google. It integrates the latest generative AI models (such as the Gemini series) and provides a one-stop path from prototyping to production deployment. It currently offers four modes: Chat, Stream, Generate Media, and Build App.
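All four modes are backed by the Gemini API, which you can also call directly with an API key from AI Studio. Below is a minimal sketch using the official `google-generativeai` Python SDK; the model name and prompt are illustrative, and the API call only runs if a key is present.

```python
import os

# Illustrative model name -- check AI Studio for the models your key can use.
MODEL_NAME = "gemini-1.5-flash"

def ask_gemini(prompt: str) -> str:
    """Send a single-turn prompt to Gemini and return the text reply.

    Requires `pip install google-generativeai` and a GOOGLE_API_KEY
    environment variable (obtainable from AI Studio).
    """
    import google.generativeai as genai  # imported lazily so the sketch loads without the SDK
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel(MODEL_NAME)
    return model.generate_content(prompt).text

if __name__ == "__main__":
    if os.environ.get("GOOGLE_API_KEY"):
        print(ask_gemini("Say hello in one short sentence."))
    else:
        print("Set GOOGLE_API_KEY to run this sketch.")
```

The same pattern (configure a key, pick a model, send a prompt) underlies everything the Build App mode generates for you.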


Chat mode works much like the familiar chat interfaces of mainstream LLM products, and supports analyzing multimodal files.


Stream (Live) mode lets you talk to the model through voice or video chat; answers are read aloud, and you can choose the voice. Major Chinese vendors have done well in this area too.


Generate Media covers text-to-image and text-to-video. Here you can try Veo, the predecessor of the recently popular Google Veo 3.



After all this preamble, it's time for today's main event: building apps with Gemini!



Let's first take a look at the official Demo example.


The first example is a small app in which cartoon animals talk to themselves with matching expressions. It's delightful, and it also shows how the Build App interface is divided into three panes: the chat pane, the code pane, and the live preview pane. Buttons in the upper-left corner show or hide each pane, so if you'd rather not see the code pane, you can simply hide it.




The second example is an app that creates GIFs from a prompt; interestingly, it generates every individual frame of the GIF. The generated app can be run and previewed online immediately. As far as I know, Cursor can't do this; you'd have to wire up the preview yourself.



Next, let's build a web application. My idea is a "lazy writing companion": a text editor with LLM-powered assist panels on both sides. Here is the prompt I entered:


Application name: Lazy Writing Companion
Key features:
(1) After I write a paragraph in the central editing area, the panel on the right offers two functions. Based on the paragraph's content, the model should: 1. suggest the theme of the next paragraph and the main content of the two paragraphs after it; 2. give revision suggestions for the current paragraph and continue writing about 200 words.
(2) In the panel on the left, clicking a Refresh button uses web search to recommend article titles and summaries related to what I'm writing; clicking a title opens the corresponding link.
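To make the first feature concrete, here is a hedged sketch of the kind of prompt the right-hand panel could send to Gemini for each paragraph. The function name and the exact wording are my own illustrations, not code from the generated app.

```python
def build_suggestion_prompt(paragraph: str) -> str:
    """Build a prompt for the right-hand assist panel (illustrative only).

    Mirrors feature (1) of the Lazy Writing Companion: ask for the next
    paragraph's theme, the gist of the two paragraphs after it, revision
    suggestions, and a ~200-word continuation.
    """
    return (
        "You are a writing companion. Based on the paragraph below:\n"
        "1. Give the theme of the next paragraph and the main content "
        "of the two paragraphs after it.\n"
        "2. Suggest revisions to the current paragraph and continue "
        "writing about 200 words.\n\n"
        f"Paragraph:\n{paragraph}"
    )

# Example: inspect the prompt the panel would send for a short paragraph.
print(build_suggestion_prompt("Vibe coding lets non-programmers build apps."))
```

The returned string would then be sent to the model with whatever client the generated app uses; the left-hand panel's search feature would follow the same pattern with a search-oriented prompt.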


The following video shows the complete generation process. The Gemini model analyzes the prompt, organizes all the requirements, generates a corresponding code architecture, and then builds the app on that architecture with the React front-end framework. At the moment, only React seems to be supported. (The interesting part starts around 1:30.)



What really captures the vibe-coding feel is that you can keep iterating on the generated app in the chat window, evolving new versions or improving features. For example, I asked it to beautify the app's buttons and watched the changes appear in the preview pane in real time, which feels very cool. You can also edit the generated code directly in the code pane.



From there, you simply keep talking to the model: look at what it generated, tell it your ideas, and iterate. The whole process is very simple, but the vibe is strong!


You play the product manager, and it plays a very obedient full-stack programmer! Below is my final result, which basically meets my needs.



That sums up my first experience developing applications with Google AI Studio. There are already many products like this; I have used Poe's App Creator before, but it cannot generate a real project structure, only one very large HTML file, and the overall experience is not as good as Google AI Studio's.