
Glassmorphism, AI, and the Thing You Can't Prompt For

  • Writer: Neelasaraswathi Venkataraman
  • Mar 4
  • 4 min read

I've been meaning to play with glassmorphism since Apple dropped the Liquid Glass update. Not to build a product. Just to see what the aesthetic actually demands of a designer.

Turns out — quite a lot. And AI tools, for all their speed, can't supply the most important thing.

The style first, features never

My usual instinct when exploring something new is to start with the problem. What does the user need? What's the complexity? Where does the friction live?

I deliberately ignored all of that here.

This was a pure aesthetic experiment. Liquid Glass and Glassmorphism 2.0 are about depth, soft refractive edges, frosted transparency that feels organic rather than clinical. For a personal finance app — which is what I landed on as a canvas — this style creates a sense of premium security, modern elegance. It has a feeling. And I wanted to understand how to build that feeling intentionally, not accidentally.

The first thing I noticed in research: glassmorphism is deeply context-dependent in ways that aren't obvious until you start layering. It needs a background. A real one — gradient, depth, something for the glass to refract against. On a white enterprise dashboard, it dies. On a dark, atmospheric consumer app, it breathes.

That's also why I'd be cautious about anyone pitching this for B2B. It looks spectacular in a VC deck. It falls apart on the actual product.



What happens when you just… prompt

I started where most people start: Figma Make, a prompt, and optimism.

Let me be clear about what came back. Without AI, this — a full app, ideation, feature definition, style, components — would have been weeks of work. With it, I had something to look at in minutes. That part is genuinely remarkable and I don't want to gloss over it.

But it was boring. Half-complete. The glass effects existed but had no weight. Spacing felt like it had been distributed rather than decided. The whole thing had the energy of a template that hadn't quite committed to being anything yet. Glassmorphism the way a stock photo is "emotion" — technically present, experientially absent.

I started questioning the prompts. More detail? Different phrasing? Maybe I just hadn't found the right words yet.


The design system as a point of view

Here's the thing about prompts: they describe. A design system decides.

When you write "use glassmorphism with a dark background and purple accents," you're describing a direction. When you build a design system, you're making hundreds of tiny decisions — this border radius for cards, this one for modals, this exact opacity for the glass layer, this spacing between elements in a tight stack. Each decision reflects a judgment about what looks right, what feels considered, what holds together under pressure.
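If it helps to see the contrast in code, here's a hypothetical sketch in TypeScript; every token name and value is invented for illustration, not pulled from my actual file:

    // A prompt describes a direction.
    const prompt =
      "use glassmorphism with a dark background and purple accents";

    // A design system decides. Hypothetical tokens, illustrative values.
    const tokens = {
      radius: { card: "16px", modal: "24px", button: "12px" },   // per component, decided
      spacing: { tight: "4px", stack: "8px", section: "32px" },  // a scale, not a vibe
      accent: { primary: "#8b5cf6", pressed: "#6d28d9" },        // "purple", pinned down
    } as const;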

Hiring managers in India talk about design systems constantly, and sometimes it feels like the conversation is about the artifact rather than the thinking. The system isn't the point. The decisions inside it are.

I built one anyway. Color tokens, spacing scale, radius system, component states — Primary, Secondary, Ghost, Destructive, Solid. The glass tokens specifically: not "make it glassy" but "this blur, this opacity, this background color, this border treatment."
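As a sketch of what "this blur, this opacity" means in practice, in the same hypothetical TypeScript shape as above (the state names come from the post; the values are mine):

    // Glass tokens: exact decisions instead of "make it glassy".
    const glass = {
      blur: "16px",                                   // this blur
      fill: "rgba(255, 255, 255, 0.08)",              // this opacity, this background color
      border: "1px solid rgba(255, 255, 255, 0.18)",  // this border treatment
    } as const;

    // The component states, as a closed set rather than a mood.
    type Variant = "primary" | "secondary" | "ghost" | "destructive" | "solid";

    const buttonFill: Record<Variant, string> = {
      primary: "rgba(139, 92, 246, 0.85)",
      secondary: glass.fill,
      ghost: "transparent",
      destructive: "rgba(239, 68, 68, 0.85)",
      solid: "#8b5cf6",
    };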

It took time. It was worth it. Not because it looks good in a portfolio — though it does — but because building it forced me to have an opinion about every single thing.



Feeding taste back in

I fed the design system back into Figma Make alongside the same prompt.

Better. Noticeably. The AI had something to reference that wasn't just words. It could see decisions rather than descriptions. The output was more consistent, and the glass effects had more intention.

Still not all the way there. I was in "maybe needs more work" territory. There was a gap between what I'd built manually and what the tool was generating, and I wasn't sure if that was a prompt problem, a tooling limitation, or just the nature of the last mile.



Then I tried Lovable. Same prompt with the design system screenshot.

That's where it landed. The output came back and I was genuinely impressed — the kind of impressed that makes you stop and look at it for a moment before moving on. The system I'd built was being interpreted, not just referenced. The decisions I'd made were showing up in the output.



The thing you can't prompt for

AI tools are fast, genuinely capable, and getting better at a pace I find both exciting and worth paying attention to. This isn't me being cautious about them; they're part of my workflow now.

But here's what I kept coming back to as I ran through this experiment: the quality of the output was directly correlated with the quality of the input — not the words of the input, but the taste behind it.

When I gave the tool a prompt, I got a description rendered. When I gave it a design system — a distilled record of considered decisions — I got something closer to a design.

The AI didn't develop taste between those two attempts. I brought it.

And that, I think, is where the work is now for designers. Not faster prompting. Not resisting the tools. It's building something worth feeding in — a system, a point of view, a set of decisions that reflect genuine judgment about what works and why.

The generation is infinite. The taste is still scarce.

A small thing I keep noticing: the gap between good and great output is almost always in the details — micro-spacing, how a shadow falls, whether a radius is 8px or 12px. That's worth a separate piece. Soon.

— Neela

