
Google Labs: The Secret Experimental Playground Nobody Talks About

Google Labs is where Google tests its most experimental AI tools before the world sees them — including Whisk, Stitch, Project Astra, Flow, and a rotating roster of AI experiments that often become the features everyone uses a year later. This complete guide shows you how to access Google Labs, what is currently available, and how to get early access to the tools that matter.


Almost every major Google AI feature you use today was experimental first.

Gemini’s conversational interface, AI Overviews in Search, NotebookLM’s Audio Overviews, the multimodal capabilities of Lens — all of these spent time as obscure Google Labs experiments before most users knew they existed. The people who found them early got weeks or months of advantage, learned the tools when they were less crowded, and shaped how those tools developed through their feedback.

Google Labs (labs.google.com) is the front door to this experimental ecosystem. It is where Google DeepMind and Google Research release tools in various stages of development — sometimes polished, sometimes rough, always interesting — for public testing before (or instead of) a full product launch.

This guide covers everything about Google Labs: how to access it, what is currently available, how to get early access to waitlisted experiments, and deep dives into the most significant current tools — including Whisk (image generation from images), Stitch (AI UI design), Project Astra (Google’s multimodal AI agent), and Flow (AI-assisted filmmaking).

🔗 This is Post #12 in our Google AI series. Google Labs is where the next generation of tools covered throughout this series begins. For the current mainstream tools, see Google Gemini Masterclass, Google AI Studio, and NotebookLM. Labs is where those tools came from — and where you find what is coming next.


What Is Google Labs? A Clear Map

Google Labs is not one product — it is an umbrella for multiple distinct experimental programs across Google’s research and product teams.

labs.google.com (The Main Hub)

The primary entry point. Here you find experiments from Google DeepMind, Google Research, and various Google product teams. Experiments appear as cards with brief descriptions. Some are immediately accessible; others require joining a waitlist.

What you find here: Creative AI tools, multimodal experiments, generative media tools, research interfaces.

Search Labs (search.google.com/search-labs)

A separate Labs interface specifically for experimental Google Search features. This is where you opt into enhanced AI Overviews, experimental AI modes in search, and new search interface tests.

What you find here: Enhanced AI Overviews, conversational search experiments, new search result formats.

Google Workspace Labs

Within Google Workspace apps (Docs, Sheets, Gmail, etc.), there are Labs-style experimental feature toggles accessible through Settings → Labs in each app. These are less publicized but often very useful.

What you find here: Early access to Gemini features in Workspace apps before general availability.

AI Studio Experiments

Within Google AI Studio, new model capabilities and features often appear as experimental toggles before becoming standard. Developers working with the Gemini API get early access to new model versions and capabilities here.
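
For developers, a quick way to spot experimental capabilities is to list the models your API key can see. Below is a minimal sketch, assuming the google-genai Python SDK and an API key from Google AI Studio; the "-exp" filter is a naming heuristic, not a guarantee.

```python
# Minimal sketch: list Gemini API models and flag experimental ones.
# Assumes the google-genai SDK (pip install google-genai) and an API
# key from Google AI Studio. The "-exp" suffix is a naming convention
# Google has used for experimental releases, not a guarantee.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

for model in client.models.list():
    if "-exp" in model.name:  # e.g. "models/gemini-2.0-flash-exp"
        print(model.name, "->", model.display_name)
```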


How to Access Google Labs

Step 1: Visit labs.google.com

  1. Go to labs.google.com in any browser
  2. Sign in with your Google account
  3. Browse available experiments

No payment required. Most experiments are free during the testing phase.

Step 2: Enable Experiments

For available experiments:

  1. Click the experiment card
  2. Click “Enable” or “Try it”
  3. The experiment is added to your account

For waitlisted experiments:

  1. Click the experiment card
  2. Click “Join waitlist”
  3. Provide your email and any requested information
  4. Google sends an invitation when your access is approved (timing varies from days to months)

Step 3: Search Labs Specifically

For Google Search experimental features:

  1. Go to google.com and perform a search
  2. Look for the “Labs” icon or link in the top-right area of search results
  3. Alternatively, go directly to search.google.com/search-labs
  4. Enable the experiments you want to try

Step 4: Stay Updated

Google Labs experiments change frequently — new tools appear, existing ones graduate to general availability or are discontinued, and access expands or contracts.

How to stay current:

  • Check labs.google.com monthly
  • Follow @GoogleDeepMind and @Google on social platforms for announcements
  • Subscribe to Google’s official blog (blog.google) for experiment launches
  • Check the 2025 Year in Review and 2026 Outlook guide for broader context on Google’s AI direction

Current Major Experiments: Deep Dives

Google Whisk — Image Generation From Images, Not Just Text

What it is: Whisk is Google’s image generation tool that uses other images as prompts — not just text descriptions. You provide: a subject image (what or who you want in the image), a scene image (the environment or setting), and a style image (the visual aesthetic). Whisk combines these three inputs to generate a new image.

Why this is different: Every other mainstream image generation tool is primarily text-to-image. Whisk is image-to-image-to-image — you communicate visually rather than verbally, which produces more precise results for users who know what they want but struggle to describe it in words.

How to use Whisk:

  1. Go to labs.google.com/whisk or find it via labs.google.com
  2. You see three input boxes: Subject, Scene, Style
  3. For each, either upload an image or type a text description
  4. Whisk generates a batch of images combining all three inputs
  5. Click any result to regenerate variations or adjust inputs
  6. Download results directly

The Subject + Scene + Style workflow in detail:

Subject: Upload or describe the main focus of your image — a product, a character, an object, a person (avoid uploading images of real identifiable people you do not have rights to), a concept.

Scene: Upload or describe the environment — a forest, a city street, a minimalist studio, an abstract background, a specific architectural setting.

Style: Upload or describe the visual aesthetic — a specific art style, a photographic style, a color palette reference, a painterly technique.

Example combinations:

For product photography mockups:

  • Subject: product photo
  • Scene: upscale kitchen countertop
  • Style: clean, commercial photography aesthetic

For concept art:

  • Subject: a sketch or description of a character
  • Scene: a cyberpunk cityscape photo
  • Style: an anime illustration you admire

For social media content:

  • Subject: your brand’s product
  • Scene: seasonal environment (autumn leaves, snowy background)
  • Style: a warm editorial photography aesthetic

Practical use cases for Whisk:

Product and e-commerce: Generate product photography mockups in multiple settings and styles without a physical photoshoot. Show your product in a summer context, an autumn context, and a minimalist context — all from the same product photo.

Content creation: Generate diverse visual content for social media, blog posts, and presentations using reference images for style consistency.

Concept visualization: Turn rough sketches or reference boards into polished visuals for client presentations or creative direction.

Personalized art and gifts: Combine a pet’s photo (Subject) with a fantasy setting (Scene) and a favorite art style (Style) to create unique personalized artwork.

Important ethical and legal considerations:

  • Do not use photos of real, identifiable people as Subject images without their explicit consent
  • Do not use images of trademarked characters (Disney, Marvel, etc.) as Style or Subject references
  • The outputs you generate are yours to use — but the inputs must be images you have rights to use
  • Be transparent about AI-generated images in commercial contexts

Free tier limits: Whisk currently has a daily generation limit during the Labs phase. Generate in batches and download your preferred outputs immediately.


Google Stitch — AI UI/UX Design From Text Prompts

What it is: Stitch is Google Labs’ AI tool for generating user interface (UI) components, app screens, and web layouts from text descriptions. Describe what you want a screen to look like, and Stitch generates a functional, visually polished UI design.

Who it is for: Designers prototyping initial concepts, developers needing quick UI mockups, founders validating product ideas, non-designers who need to communicate interface ideas to development teams.

How to use Stitch:

  1. Access Stitch via labs.google.com/stitch (it may also be surfaced through Google AI Studio)
  2. Describe the UI you want in plain text
  3. Stitch generates multiple design options
  4. Select and iterate with follow-up prompts
  5. Export to HTML/CSS, component code, or design formats

Prompt examples that work well:

Mobile app screen:

Design a mobile app onboarding screen for a meditation app.
The screen should include: a calming gradient background (soft 
blue to purple), a centered illustration of a person meditating, 
a large headline "Find Your Calm", a two-line subheadline 
explaining the app's value, and two buttons: "Get Started" 
(primary, full width) and "Already have an account" (text only).
Clean, modern, minimal aesthetic.

Dashboard component:

Design a data dashboard card for a sales analytics application.
Show: monthly revenue figure prominently at top, a small 
sparkline chart below it, percentage change vs last month 
(green for positive, red for negative), and a "View Details" 
link at the bottom. Light theme, card style with subtle shadow.

Website hero section:

Design a landing page hero section for a B2B SaaS product 
that automates invoice processing. Include: navigation bar 
with logo and three nav links plus a CTA button, headline 
"Invoice Processing on Autopilot", subheadline (one line), 
primary CTA button "Start Free Trial", secondary CTA "See Demo",
and a placeholder for a product screenshot. 
Professional, trustworthy aesthetic. Light background.

Settings screen:

Design a mobile settings screen for a fitness tracking app.
Include sections: Account, Notifications, Privacy, Health Data,
and App Preferences. Use a grouped list style with section headers,
icons for each setting, and toggle switches for on/off settings.
iOS-style design language.

Stitch vs. other design tools:

| Feature | Stitch | Figma | v0 by Vercel | Canva |
| --- | --- | --- | --- | --- |
| AI from text prompt | ✅ | Limited | ✅ | Limited |
| Code export | ✅ | Limited | ✅ | — |
| Design system integration | Developing | ✅ Full | Limited | Limited |
| Free tier | ✅ (Labs) | ✅ | ✅ (limited) | Limited |
| Collaboration | Limited | ✅ Full | Limited | ✅ Full |
| Google ecosystem | ✅ Native | — | — | — |

When to use Stitch: Early-stage concept generation, communicating design ideas to stakeholders, quick prototyping for validation, generating starting-point designs for further refinement in Figma. Not yet ready to replace a full design workflow, but excellent for the ideation phase.


Project Astra — Google’s Multimodal AI Agent

What it is: Project Astra is Google DeepMind’s research prototype of a universal AI agent — one that can see, hear, and reason about the world in real time through a device’s camera and microphone.

Unlike a chatbot that processes text, Astra processes continuous video and audio input. You point your phone at something, and Astra understands what it is seeing, remembers what it saw earlier in the conversation, and can answer complex questions that combine visual and verbal reasoning.

Current status: Project Astra is in research preview and limited testing. It is not a fully available consumer product as of early 2026, but demonstrations and early access experiences have shown capabilities that significantly exceed current Gemini Live.

What Astra can do (demonstrated capabilities):

Continuous environmental understanding: Remember what was shown to the camera earlier in the session. Ask “Where did I leave my keys?” and Astra can reference what it observed earlier.

Real-time visual reasoning: Look at a complex diagram, a codebase on screen, or a physical environment and reason about it conversationally.

Multimodal memory: Connect what it sees with what it hears with what it has been told — maintaining a coherent understanding of context across modalities.

Proactive assistance: Observe what you are doing and offer relevant assistance without being asked.

The practical significance: Project Astra represents what Gemini Live is evolving toward — a genuinely ambient AI assistant that understands your physical context, not just your typed text. The demonstrations have shown it helping with tasks that currently require multiple separate apps: identifying what you are looking at, remembering prior context, and providing relevant assistance in natural conversation.

How to get access: Check labs.google.com for the current Astra waitlist or early access program. Google is expanding access gradually.


Google Flow — AI-Assisted Filmmaking

What it is: Flow is Google’s AI tool for video creation that goes significantly beyond simple video generation. It is designed to help filmmakers, content creators, and storytellers create coherent video narratives using AI generation tools — including scene continuity, character consistency, and narrative structure.

Built on: Veo (Google’s video generation model) and Imagen (image generation).

Key capabilities that differentiate Flow from generic video generation:

Scene continuity: Generate multiple shots with the same character, setting, and visual style — maintaining consistency across scenes that is notoriously difficult in AI video generation.

Camera control: Specify camera movements (pan, zoom, dolly, aerial) as part of your generation prompts.

Storyboard to video: Build a visual storyboard and have Flow generate video clips for each panel.

Current access: Flow is in limited access via Google Labs. Check labs.google.com/flow for current waitlist status.

Who Flow is for: Content creators, marketers producing video content, filmmakers exploring AI-assisted production, educators creating visual learning materials, and entrepreneurs who need professional video content without production budgets.

The creative opportunity: Traditional video production requires equipment, locations, actors, editing software, and significant time. For many content types — explainers, concept visualizations, mood boards, advertisement concepts — AI video generation is approaching “good enough for purpose.” Flow represents Google’s serious investment in making this accessible.
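
Flow itself is a web app without a public API, but the Veo model underneath it is exposed through the Gemini API. Here is a hedged sketch of that path, assuming the google-genai Python SDK and a Veo-enabled API key; the model ID below is the one documented at the time of writing, so verify the current name at ai.google.dev.

```python
# Hedged sketch: generate a short clip with Veo, the model behind Flow,
# via the Gemini API. Assumes the google-genai SDK; video generation is
# asynchronous, so we poll the long-running operation until it is done.
import time

from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

operation = client.models.generate_videos(
    model="veo-2.0-generate-001",  # documented ID at time of writing
    prompt="Slow dolly shot through a rain-soaked neon city street at night",
)

# Poll until generation completes (typically a minute or more).
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

# Download and save each generated clip.
for i, generated in enumerate(operation.response.generated_videos):
    client.files.download(file=generated.video)
    generated.video.save(f"clip_{i}.mp4")
```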


Google ImageFX — Text-to-Image with Fine-Grained Control

What it is: ImageFX is Google’s text-to-image generation tool, available through labs.google.com/imagefx. Unlike Whisk (which uses images as prompts), ImageFX is a text-to-image tool powered by Imagen 3.

What makes it notable:

  • Imagen 3 quality is among the best in the industry for photorealistic and artistic generation
  • “Expressive Chips” — clickable modifiers that let you quickly adjust style, mood, composition
  • Direct comparison of multiple variations side by side
  • Strong handling of text within images (a historically difficult challenge for AI image generators)

How to use ImageFX:

  1. Go to labs.google.com/imagefx
  2. Type your image description
  3. Use Expressive Chips to modify style and mood
  4. Generate and compare variations
  5. Download your preferred result

Best uses: Concept visualization, social media imagery, marketing materials, presentation graphics, creative exploration.
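
ImageFX itself is browser-only, but the Imagen model behind it is available programmatically through the Gemini API. A minimal sketch, assuming the google-genai Python SDK; Imagen model IDs rotate as new versions ship, so check ai.google.dev for the current one.

```python
# Hedged sketch: batch-generate image variations with Imagen via the
# Gemini API, mirroring ImageFX's generate-and-compare workflow.
# Assumes the google-genai SDK and a key with image generation access.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_images(
    model="imagen-3.0-generate-002",  # documented ID at time of writing
    prompt=(
        "Product photo of a ceramic mug on an upscale kitchen countertop, "
        "clean commercial photography aesthetic, soft morning light"
    ),
    config=types.GenerateImagesConfig(number_of_images=4),
)

# Save every variation for side-by-side comparison before the daily
# quota resets.
for i, generated in enumerate(response.generated_images):
    with open(f"variation_{i}.png", "wb") as f:
        f.write(generated.image.image_bytes)
```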


NotebookLM in Labs Context

While NotebookLM has graduated from Labs to a full product, it began as a Labs experiment — and some of its most exciting features (including Audio Overviews and newer multimodal capabilities) still cycle through a Labs-style experimental phase before full release.

Keep an eye on Labs for NotebookLM feature expansions, including:

  • Video source support beyond YouTube transcripts
  • Enhanced Audio Overview customization
  • Team and organizational collaboration features

The Labs Strategy: Getting Maximum Value From Experimental Tools

Strategy 1: Enable Everything Relevant, Use What Sticks

When you visit Google Labs, enable every experiment that seems potentially relevant to your work. Most are free during the Labs phase. Using an experimental tool for a week is the fastest way to understand whether it adds genuine value to your workflow. The tools that save you time or unlock new capabilities deserve continued use; the rest can be disabled.

Strategy 2: Join Waitlists Early

For tools with waitlists (Project Astra, Flow), join immediately even if you are not certain you want access. Waitlist position matters — early applicants get access before later ones. You can always ignore the access when it arrives; you cannot retroactively join a waitlist.

Strategy 3: Treat Labs Tools as Research Investments

Tools in Labs are not finished products. They have rough edges, missing features, and inconsistent behavior. The right mindset is: you are contributing to Google’s feedback loop (your usage and behavior shape development) and you are getting early access to capabilities that will matter later. Patience with imperfect tools is part of the Labs user contract.

Strategy 4: Document What You Find Useful

Keep a simple note of which Labs features you have enabled and what you have found genuinely useful. When features graduate from Labs, you will already know which ones deserve a place in your permanent workflow.

Strategy 5: Use Labs as a Competitive Intelligence Signal

What Google is testing in Labs today is a meaningful signal about where professional and creative tools are heading. If you create content, design products, or make technology decisions professionally, the Labs experiments tell you what capabilities will be mainstream within 12–18 months. Adapting your skills and workflows ahead of that curve is a genuine advantage.


How Google Decides What Graduates From Labs

Not every Labs experiment becomes a full product. Understanding how this selection works helps you prioritize which tools to invest time in.

Signals that lead to graduation:

  • High and sustained engagement from Labs users
  • Strong user feedback indicating genuine value
  • Clear commercial or strategic fit with Google’s product portfolio
  • Technical readiness for scale

Signals that lead to discontinuation:

  • Limited user adoption
  • Technical challenges that prove difficult to resolve
  • Overlap with a product that Google acquires or builds separately
  • Strategic direction changes

Historical pattern: Tools that solve clear, common professional problems tend to graduate. Tools that are technically impressive but address niche needs tend to either stay in Labs or be discontinued. NotebookLM, Gemini’s multimodal features, and AI Overviews in Search all followed the graduation path. Many earlier experiments were quietly retired.

The practical implication: Invest more deeply in Labs tools that solve a problem you personally experience. Your engagement with tools that genuinely help you is the most authentic signal — and if it helps you, it almost certainly helps millions of others in similar situations.


Experimental Search Features Worth Enabling Now

Beyond the creative and productivity tools, Google Search Labs has several features worth enabling immediately.

AI Mode

A conversational interface overlaid on Google Search — more like a Gemini conversation than a traditional search. Available in Search Labs for qualifying regions and accounts.

Best for: Complex research questions, multi-step information gathering, and queries where the first answer naturally prompts follow-up questions.

Enhanced AI Overviews

More detailed AI summaries with better sourcing, more nuanced answers to complex questions, and improved citation transparency. Enable in Search Labs to get the more sophisticated version of the AI Overview experience.

Search with Video Understanding

Experimental feature that allows Google to understand and index the content of videos — not just their titles and descriptions, but what is actually shown and said. This makes video content significantly more discoverable and creates new search result types.

Personalized Search Experiments

Various experiments in how Google personalizes search results based on your history, preferences, and context. Worth exploring, but also worth reviewing from a privacy perspective — understand what you are enabling before doing so.


Free Tier Optimization for Labs

Generating Images Efficiently in Whisk and ImageFX

Both tools have generation limits during the Labs phase. Optimize your prompts before generating to avoid wasting your daily quota:

  1. Spend time refining your text description before hitting generate
  2. For Whisk, curate your input images carefully — quality inputs produce better outputs
  3. Generate in batches and download all variations you might use before your session ends

Maximizing Stitch for Non-Designers

If you are not a designer, Stitch’s value is primarily in communication — generating a visual that shows developers, clients, or stakeholders what you have in mind. You do not need pixel-perfect output for this purpose. Use Stitch for rapid concept generation (5–10 minutes) and then spend your time communicating about the concept, not refining the tool’s output.

Labs as a Free Trial of Paid Features

Many Labs features eventually become paid features. Using them during the Labs free phase is one of the best ways to evaluate whether they are worth paying for. Whisk and ImageFX, for example, may transition to paid tiers as they mature. Use them now while free access exists.


Common Mistakes to Avoid

Mistake 1: Treating Labs Features as Production-Ready

Labs experiments are explicitly not production-ready. They change without warning, go offline for maintenance, have rate limits that shift, and may be discontinued entirely. Do not build critical business workflows that depend on Labs features still in the experimental phase.

Mistake 2: Skipping Labs Because the Interface Is Rough

Many Labs experiments have deliberately minimal interfaces — Google prioritizes getting the core capability in users’ hands quickly over polishing the UX. Do not judge a Labs tool by its interface. Evaluate it by whether the underlying capability is useful.

Mistake 3: Not Checking Labs After Major Google AI Announcements

After Google I/O, Google DeepMind demos, and major Google AI announcements, new experiments typically appear in Labs within days to weeks. Make checking labs.google.com part of your post-announcement routine.

Mistake 4: Missing Workspace Labs Features

Google Workspace Labs (the Labs settings within individual Workspace apps) is separate from labs.google.com and often contains the most immediately practical experimental features for professional users. Check the Labs section in Settings within Google Docs, Sheets, Gmail, and Slides separately.

Mistake 5: Not Providing Feedback

Labs experiments are development tools. Google explicitly wants feedback — the thumbs up/down buttons, the feedback forms, and the usage patterns you generate all contribute to how tools develop. If a Labs feature is almost right but misses something important for your use case, use the feedback mechanisms. Your edge case might be the one that becomes a feature.


FAQ: Google Labs

Q: Is Google Labs free to use? A: Yes. Most Google Labs experiments are free during the experimental phase. Some may require a Google account to access. Paid features are typically clearly labeled.

Q: How long do Labs experiments stay available? A: There is no standard timeline. Some experiments run for months, some for years, some are discontinued quickly. Graduated features move to their permanent homes (Gemini, Google Search, Google Workspace). Check labs.google.com regularly for current availability.

Q: How do I get on the waitlist for Project Astra or Flow? A: Visit labs.google.com and look for the specific experiment’s waitlist link. Some waitlists also appear on the Google DeepMind website (deepmind.google). Early sign-up is the best strategy.

Q: Can I use Labs experiments for commercial work? A: Check the terms of each specific experiment. Most Labs tools allow personal and commercial use of outputs during the experimental phase, but terms vary. Image generation tools typically allow commercial use of generated images; always verify the specific terms.

Q: How does Google Labs relate to Google Bard and Gemini? A: Google Bard was the consumer product name for an early version of Gemini — it was not a Labs experiment; it was a product that was later rebranded. Google Labs is the pre-product experimental platform where new tools are tested before becoming products or features.

Q: Can developers access Labs experiments via API? A: Some Labs capabilities are accessible via the Gemini API through Google AI Studio. Check ai.google.dev for current API availability of specific capabilities. Not all Labs features have API access.
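
As a concrete illustration of that path, here is a minimal sketch of calling an experimental model through the Gemini API, assuming the google-genai Python SDK. The model ID is illustrative; substitute whichever experimental model your key can actually see.

```python
# Minimal sketch: call an experimental Gemini model through the API.
# Assumes the google-genai SDK; the model ID below is illustrative and
# may have been retired or renamed by the time you read this.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.0-flash-exp",  # example experimental model ID
    contents="In two sentences, what is image-to-image generation?",
)
print(response.text)
```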


Conclusion

Google Labs is where the future of Google AI is being built — in public, for free, one experiment at a time.

The tools available in Labs right now — Whisk’s image-from-image generation, Stitch’s AI UI design, Project Astra’s ambient multimodal AI, Flow’s coherent video creation — represent capabilities that will be mainstream within one to two years. The people building workflows around these tools today will have significant advantages when they become widely available.

The Labs approach to using these tools is different from using a finished product. You are not just getting a feature — you are participating in a development process. Your usage patterns, feedback, and exploration of edge cases contribute to how these tools evolve. That is a different and more interesting relationship with AI tools than simply using a finished product.

Your next step: Open labs.google.com right now. Enable one experiment you have not tried before. Spend 20 minutes exploring it genuinely. If it solves a real problem for you, integrate it. If not, note what is missing and submit feedback.

The most valuable thing you can do in the Labs ecosystem is be a thoughtful early user — not just a passive tester.


📚 Continue the Series:


Last updated: April 2026. Google Labs experiments change frequently — features appear, graduate, and are discontinued on an ongoing basis. The specific experiments available at labs.google.com at the time you read this will differ from those described here. Verify current availability at labs.google.com and deepmind.google.

⚠️ Google Labs experiments are pre-production software. Do not use Labs features for mission-critical workflows without backup plans. Terms of service, data handling, and availability may change without notice. Always review the specific terms for each experiment before using it commercially.

