What is Compose?
Compose is LitmusCheck’s interactive test creation mode that lets you build tests from natural language instructions while watching them execute in real time. It provides a side-by-side view where you can see your test steps performed on a live browser, making test creation intuitive and visual. Compose mode bridges the gap between test design and execution, enabling you to create reliable end-to-end tests without writing code manually. You describe what you want to test in plain English, and LitmusCheck translates your instructions into executable test steps.
Key Features
1. Live Visual Execution
- Watch your test steps execute in real time as you add them
- See exactly how each action interacts with your application
- Validate that your test behaves correctly before saving
2. Natural Language Instructions
- Write test steps in plain English (e.g., “Click on the login button”)
- No need to write code or memorize complex selectors
- AI-powered element detection understands your intent
3. Interactive Test Building
- Add, edit, delete, and reorder steps with drag-and-drop
- Stop execution at any time to make changes
- Rerun tests from the beginning to verify changes
4. AI-Powered Element Detection
- Use AI prompts to find elements on the page
- Choose between generating stable selectors or dynamic AI execution
- Save detected elements for reuse across tests
5. Immediate Feedback
- See step-by-step execution status (success, failure, pending)
- Identify issues immediately during test creation
- Debug and fix problems in real time
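As a rough mental model of the features above (purely illustrative, not LitmusCheck's actual internals or API), each instruction can be thought of as a step record that carries its plain-English text and an execution status:

```python
from dataclasses import dataclass

# Hypothetical sketch of how a Compose step might be represented.
# The class and field names here are assumptions, not LitmusCheck's real API.

@dataclass
class ComposeStep:
    action: str              # e.g. "Click", "Type", "Verify"
    instruction: str         # plain-English description of the step
    status: str = "pending"  # "pending" | "success" | "failure"

steps = [
    ComposeStep("Click", "Click on the login button"),
    ComposeStep("Type", "Type 'alice@example.com' into the email field"),
    ComposeStep("Verify", "Verify the dashboard heading is visible"),
]

# Immediate feedback: every step starts as pending until it runs.
assert all(s.status == "pending" for s in steps)
```

This captures the per-step status feedback (success, failure, pending) described under Immediate Feedback.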
How Compose Works
1. Enter a URL: Load your application in the Compose browser
2. Add Instructions: Select actions from the dropdown (Click, Type, Verify, etc.)
3. Watch Execution: Each step executes immediately after being added
4. Refine Steps: Edit, reorder, or delete steps as needed
5. Save Test: Once satisfied, save your test for future runs
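The workflow above can be sketched as a simple loop that executes each instruction as it is added and records its status before the finished test is saved. This is a conceptual sketch only; `run_step`, `compose_test`, and the result fields are hypothetical names, not part of the product:

```python
# Illustrative sketch of the Compose flow: execute each instruction as it
# is added, record the result, then return the finished test for saving.
# All function and field names here are assumptions.

def run_step(instruction: str) -> bool:
    """Stand-in for the live browser executing one instruction."""
    return True  # assume every step succeeds in this sketch

def compose_test(url: str, instructions: list[str]) -> dict:
    test = {"url": url, "steps": []}
    for text in instructions:
        ok = run_step(text)  # Watch Execution: each step runs immediately
        test["steps"].append({"instruction": text,
                              "status": "success" if ok else "failure"})
    return test              # Save Test: persist once you are satisfied

saved = compose_test("https://example.com/login",
                     ["Click on the login button",
                      "Type 'alice' into the username field"])
```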
Use Cases
- Rapid Test Creation: Build tests quickly without writing code
- Visual Validation: Ensure tests work correctly before committing them
- Exploratory Testing: Discover and document test scenarios interactively
- Test Maintenance: Update existing tests by watching them execute and making adjustments
Best Practices
- Use Stable Selectors: When AI finds an element, save it as a reusable element rather than relying on dynamic AI execution; saved selectors are more reliable across runs
- Test Incrementally: Add a few steps at a time and verify they work before adding more
- Use Rerun: Regularly rerun your test from the beginning to catch regressions
- Save Frequently: Save your work periodically to avoid losing progress
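To illustrate the first best practice, a saved element can be looked up directly on later runs, while dynamic AI execution re-resolves the element every time. The sketch below is purely conceptual; the cache, `detect_with_ai`, and `find_element` are invented names, not LitmusCheck's implementation:

```python
# Conceptual sketch of the stable-selector vs. dynamic-AI trade-off.
# Everything here is hypothetical, not LitmusCheck code.

saved_elements: dict[str, str] = {}  # prompt -> stable selector

def detect_with_ai(prompt: str) -> str:
    """Stand-in for AI element detection; returns a selector string."""
    return "button#login"  # assume the AI resolves the prompt to this

def find_element(prompt: str, use_saved: bool = True) -> str:
    if use_saved and prompt in saved_elements:
        return saved_elements[prompt]  # deterministic reuse across tests
    selector = detect_with_ai(prompt)  # re-resolved on every run otherwise
    saved_elements[prompt] = selector  # save the detected element for reuse
    return selector

first = find_element("the login button")   # AI detection, then saved
second = find_element("the login button")  # reused stable selector
```

Reusing the saved selector avoids re-running detection on every execution, which is why stored elements tend to be the more reliable choice.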