The latest Instagram scroll is all chiffon, sun flares, and swooping camera angles — and most of it isn’t shot on film. It’s AI. The trend flooding feeds right now takes a regular selfie and spits out a 1990s Bollywood-style portrait in a saree, complete with dreamy lighting and dramatic poses. It’s powered by Google’s Gemini 2.5 Flash Image model, which the internet has nicknamed Nano Banana.
What’s driving the hype is how fast these edits render and how good they look. You upload a clear photo, describe the vibe — think flowing chiffon, golden-hour backlight, grainy film texture — and the system builds a retro scene that feels straight out of a music montage. Thousands of users are posting side-by-sides: basic mirror selfie on the left, cinematic poster on the right.
At the core is image-to-image generation. Gemini 2.5 Flash Image takes your original photo as a visual anchor and applies the style you describe in text. The more specific you are, the better. People are asking for “90s Bollywood studio look,” “soft backlight with lens flare,” “chiffon saree in cherry red,” “film grain and light leak on edges,” and “medium-format depth of field.” The model reads those cues and rebuilds the frame in that direction.
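If you’d rather script the process than click through an app, the same image-to-image pattern is exposed through the Gemini API. Below is a minimal sketch using the google-genai Python SDK; the API key, file names, and prompt are placeholders, and the preview model ID is an assumption that may have changed since this was written.

```python
# Minimal image-to-image sketch (pip install google-genai pillow).
# Assumptions: a valid API key, and "gemini-2.5-flash-image-preview"
# still being the public ID for Gemini 2.5 Flash Image.
from io import BytesIO

from google import genai
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

source = Image.open("selfie.jpg")  # a clear, well-lit photo of your face
prompt = (
    "Restyle this portrait as an early-90s Bollywood film still: "
    "flowing chiffon saree in banana red, soft golden-hour backlight "
    "with lens flare, 35mm film grain, light leak on the edges."
)

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents=[source, prompt],  # photo anchors identity, text steers style
)

# The response can mix text and image parts; save any image bytes returned.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save("saree_portrait.png")
    elif part.text:
        print(part.text)
```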
You don’t need to know color science or photography jargon to get a decent result. But details help. Mention fabric type (chiffon, georgette), color (banana red, emerald), camera feel (Kodak Portra, 35mm grain), and era (early 90s vs late 90s). Ask for “minimal jewelry,” “wind-swept drape,” or “matte makeup” to steer styling. If the first render misses, tweak the prompt — small changes can fix skin tone, background clutter, or pose.
The tool lives inside Google’s AI Studio and the Gemini app. Once you sign in, you pick the image feature, upload a high-quality photo where your face is clear, and enter a prompt. The model returns a few variations; you can regenerate, adjust the prompt, or switch fabrics and color palettes until the look lands.
Why do these edits look so cinematic? The model leans hard into a familiar visual stack: long lens compression, glowy highlights, saturated reds and teals, vignette, and a shallow depth of field. Add a saree drape with motion and you’re halfway to a movie still. The nostalgia hits because the cues mirror 90s music videos — soft filters, clean framing, and that perfect just-before-sunset light.
People aren’t stopping at sarees. The same workflow is pumping out anime figurines, 3D model-style portraits, plush-toy edits, and superhero collectible looks. It’s the same engine, just a different set of prompts: “vinyl toy texture,” “ABS plastic sheen,” “cell-shaded anime lines,” or “resin statue on pedestal.” The appeal is clear — you get studio-grade aesthetics without a studio or a retoucher.
If you’re trying it for the first time, here’s the simple flow many users follow:

1. Sign in to Google AI Studio or the Gemini app and open the image feature.
2. Upload a high-quality photo where your face is clear and well lit.
3. Describe the look you want: fabric, color, lighting, camera feel, era.
4. Review the variations the model returns.
5. Tweak the prompt and regenerate until the look lands.
Want prompts that tend to work? Try these starters and make them your own:

- “90s Bollywood studio look, flowing chiffon saree in cherry red, soft backlight with lens flare”
- “late-90s film still, georgette saree in emerald, Kodak Portra tones, 35mm grain”
- “golden-hour portrait, wind-swept drape, minimal jewelry, matte makeup”
- “retro poster frame, medium-format depth of field, film grain and light leak on edges”
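If you iterate a lot, it helps to treat those cues as building blocks instead of retyping full prompts. Here’s a throwaway Python sketch of that idea; the function, its defaults, and every cue string are illustrative, lifted from the examples above.

```python
# Hypothetical prompt-builder: mix and match the cues mentioned in the article.
def build_prompt(fabric="chiffon", color="banana red", era="early 90s",
                 film="Kodak Portra, 35mm grain",
                 extras=("wind-swept drape", "minimal jewelry")):
    cues = [
        f"{era} Bollywood studio portrait",
        f"flowing {fabric} saree in {color}",
        "soft golden-hour backlight with lens flare",
        f"shot on {film}",
        "film grain and light leak on the edges",
    ]
    cues.extend(extras)
    return ", ".join(cues)

print(build_prompt())
print(build_prompt(fabric="georgette", color="emerald", era="late 90s"))
```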
The fun has a shadow. As the trend spread, users started flagging weird artifacts and body details that didn’t exist in their original photos. One Instagram user, Jhalakbhawani, described an AI-rendered mole that wasn’t on her face in the source image — unsettling, to say the least. Others have seen warped hands, extra earrings, odd skin textures, or distorted backgrounds that only show up after a closer look.
Why does this happen? Generative models don’t “copy and paste” your photo. They reconstruct it based on patterns learned from training data. When the system guesses wrong — say, about where fabric folds sit on a shoulder, how jewelry should align, or how light should wrap around cheekbones — you get hallucinations: realistic-looking details that aren’t real. In portrait edits, those can cross personal boundaries fast.
There’s also the privacy side. Uploading personal images to any cloud tool means you’re trusting how that platform stores and processes your data. Google says it applies safety layers such as metadata tags and SynthID, its invisible watermarking tech, to signal AI-generated content. Helpful, yes — but watermarks don’t prevent someone from reusing your image, and metadata can be stripped out before reposting.
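You can at least peek at the visible metadata a generated file carries. Here’s a minimal sketch with Pillow; note that SynthID is a pixel-level watermark with no standard public reader, so this surfaces only the metadata layer, which is exactly the part that’s easy to strip. The file path is a placeholder.

```python
# Inspect visible metadata on a generated image (pip install pillow).
# This does NOT detect SynthID; that watermark lives in the pixels.
# Metadata survives honest reposts but is trivially removed by re-encoding.
from PIL import Image

img = Image.open("saree_portrait.png")  # placeholder path

# Format-specific info (e.g. PNG text chunks), where generators may leave notes
for key, value in img.info.items():
    print(f"{key}: {value}")

# EXIF tags, if present (more common on JPEGs)
for tag_id, value in img.getexif().items():
    print(tag_id, value)
```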
So how do you keep the creativity without handing over too much? A few habits help:

- Upload only photos you’d be comfortable seeing reposted, and skip shots with sensitive backgrounds.
- Strip location data from your source photo before uploading.
- Skim the platform’s terms so you know how your images are stored and processed.
- Never restyle someone else’s face without permission.
- Give every render an artifact check before posting (more on that below).
Creators are also asking bigger questions: Who owns an AI-styled portrait of your face? Most platforms say you own the user-generated content you upload, but they also retain broad licenses to process it. That’s normal for cloud services, though it makes some people uneasy. If you’re using these images in a commercial context — ads, merchandise, brand collabs — read the terms first and get model releases where needed.
Another hot spot is consent. It’s not okay to upload someone else’s face and restyle it without permission, even if you’re doing it “for fun.” That applies to public figures too. Many AI platforms prohibit deepfakes or misleading edits of real people. The line gets blurry with stylized art, so treat consent as a hard rule.
If you’re wondering how this moment compares to earlier AI fads, think back to Lensa’s Magic Avatars wave in late 2022. Back then, the look was painterly and exaggerated. Today’s wave aims for photorealism filtered through nostalgia — less cartoon, more camera. Gemini’s Flash Image model is optimized for speed, so you can iterate fast and lock a look that feels real enough to pass a quick glance.
And the aesthetics aren’t random. The 90s Bollywood palette leans warm, saturated, and romantic — colors pop, skin tones stay golden, and light feels soft. Sarees read beautifully on camera because fabric movement creates instant drama. Add film grain and a faint light leak and your brain fills in the rest: you’ve seen this scene before, in a song sequence or a magazine spread.
Of course, the model isn’t perfect at cultural nuance. Jewelry styles, drape patterns, and regional saree traditions are complex. Prompts like “minimalist styling,” “classic Nivi drape,” or “Bengali drape with traditional red and white palette” can help steer authenticity. If something looks off, call it out and reprompt; more specific cues tend to produce results that handle the source material with more care.
For people who want to keep things clean, here’s a quick QC checklist before you post: scan hands and ears for doubled shapes; zoom in on eyes for asymmetry; watch for fabric clipping through arms; and check backgrounds for smeared geometry. If a detail is wrong, regenerate or simplify the prompt. Sometimes asking for less — “reduce jewelry,” “plain background,” “no heavy flare” — reduces errors.
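A crop-and-enlarge pass makes that zoom-in easier than squinting at a phone screen. Here’s a small helper sketch with Pillow; the region coordinates are placeholders you’d adjust to wherever hands, ears, or eyes sit in your render.

```python
# Crop a suspect region and blow it up for inspection (pip install pillow).
from PIL import Image

def zoom_region(path, box, scale=4):
    """Crop `box` (left, upper, right, lower) and enlarge it `scale`x."""
    img = Image.open(path)
    crop = img.crop(box)
    return crop.resize((crop.width * scale, crop.height * scale), Image.LANCZOS)

# Placeholder coordinates: e.g. the area around a hand.
zoom_region("saree_portrait.png", (400, 600, 560, 760)).show()
```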
The community side of this is what keeps it rolling. Users trade prompt recipes in comments, remix color palettes for wedding looks, or recreate frames that look like they belong to specific movie eras. It’s collaboration without a studio, and it’s giving a lot of people a low-friction way to play with portraiture, costume, and cinema language.
Expect the tech to evolve fast. Better control over inpainting (editing only parts of an image), fine-tuned style locks to prevent odd skin changes, and clearer on-image watermarks are all on the industry’s near-term roadmap. The sweet spot is obvious: let people have their nostalgic moment while keeping identity and consent protected.
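Until mask-level inpainting is broadly exposed, the practical workaround is a scoped follow-up edit: feed the previous render back with a narrow instruction instead of re-describing the whole scene. A sketch under the same SDK assumptions as the earlier example; no mask is involved, so this relies on the model honoring the instruction.

```python
# Scoped follow-up edit: same google-genai assumptions as the earlier sketch.
from io import BytesIO

from google import genai
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

previous = Image.open("saree_portrait.png")  # the render you want to fix
response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents=[
        previous,
        "Keep the face, pose, and saree exactly as they are. "
        "Only replace the background with a plain, warm studio backdrop.",
    ],
)

for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save("saree_portrait_v2.png")
```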
Until then, treat it like any other powerful camera tool: experiment, save drafts, and don’t skip the safety pass. The banana-red saree glow is just a prompt away — and so is the responsibility that comes with pressing generate.
Written by Griffin Callahan