
Testing Smart Contracts

This document describes a single, pragmatic strategy to test on Flow. Use layers that are deterministic and isolated by default, add realism with forks when needed, and keep a minimal set of live network checks before release.

At a glance

  • Unit & Property — Test Framework: Hermetic correctness and invariants.
  • Integration — Fork Testing: Real contracts and data; mutations stay local.
  • Local integration sandbox (interactive, flow emulator --fork): Drive apps/E2E against production-like state.
  • Staging (testnet): Final plumbing and config checks.
  • Post-deploy (read-only): Invariant dashboards and alerts.

Layers

Unit and property — test framework

  • Use flow test
  • Use when: You validate Cadence logic, invariants, access control, error paths, footprint.
  • Why: Fully deterministic and isolated; the highest regression signal.
  • Run: Every commit/PR; wide parallelism.
  • Notes: Write clear success and failure tests, add simple “this should always hold” rules when helpful, and avoid external services.

See also: Running Cadence Tests.
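As a sketch, a minimal hermetic test file for flow test might look like this (it assumes the Cadence testing framework's Test API; the arithmetic and the error message are placeholders, not real contract logic):

```cadence
import Test

// Success path: a deterministic assertion with no external services
access(all) fun testAddition() {
    Test.assertEqual(4, 2 + 2)
}

// Failure path: assert that a code path aborts with the expected message
access(all) fun testExpectedFailure() {
    Test.expectFailure(fun(): Void {
        panic("insufficient balance")
    }, errorMessageSubstring: "insufficient balance")
}
```

In a real suite, the bodies would exercise your contract's functions and invariants rather than literals.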

Integration — fork testing

  • Use when: You interact with real on-chain contracts or data (FT and NFT standards, AMMs, wallets, oracles, bridges), upgrade checks, historical repro.
  • Why: Real addresses, capability paths, and resource schemas; catches drift early.
  • Run: On Pull Requests (PRs), run the full forked suite if practical (pinned), or a small quick set; run more cases nightly or on main.
  • How: Configure with #test_fork(network: "mainnet", height: nil) in your test file, or use flow test --fork CLI flags.
  • Notes:
    • Pin with height: 85432100 in the pragma (or --fork-height CLI flag) where reproducibility matters.
    • Prefer local deployment + impersonation over real mainnet accounts.
    • Mutations are local to the forked runtime; the live network is never changed.
    • Be mindful of access-node availability and rate limits.
    • External oracles and protocols: forked tests do not call off-chain services or other chains; mock these or run a local stub.

See also: Fork Testing with Cadence, Fork Testing Flags.
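A minimal forked test file might look like the following sketch (the pragma values mirror the ones above; the inline script is a placeholder rather than a call into a real mainnet contract):

```cadence
#test_fork(network: "mainnet", height: 85432100)

import Test

// Reads execute against forked mainnet state; any writes stay local
access(all) fun testScriptAgainstFork() {
    let result = Test.executeScript(
        "access(all) fun main(): UFix64 { return 1.0 }",
        []
    )
    Test.expect(result, Test.beSucceeded())
}
```

In practice the script body would read real mainnet state, such as a vault balance or an NFT collection, at the pinned height.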

Local integration sandbox — flow emulator --fork

  • Use when: You drive dApps, wallets, bots, indexers, or exploratory debugging outside the test framework.

  • Why: Production-like state with local, disposable control; great for end-to-end (E2E) testing and migrations.

  • Run: Dev machines and focused E2E CI jobs.

  • Notes:

    • Pin height; run on dedicated ports; impersonation is built-in; mutations are local; off-chain and oracle calls are not live, so mock them or run local stubs.
    • What to run: Manual exploration and debugging of flows against a forked state; frontend connected to the emulator (for example, npm run dev pointed at http://localhost:8888); automated E2E/FE suites (for example, Cypress or Playwright) against the local fork; headless clients, wallets/bots/indexers, and migration scripts.
    • Not for the canonical Cadence test suite—prefer fork testing with flow test for scripted Cadence tests (see Fork Testing Flags and Running Cadence Tests)

    Quick start example:


    # Start a fork (pinning height recommended for reproducibility)
    flow emulator --fork mainnet --fork-height <BLOCK>

    // In your root component (e.g., App.tsx)
    import { FlowProvider } from '@onflow/react-sdk';

    function App() {
      return (
        <FlowProvider config={{ accessNodeUrl: 'http://localhost:8888' }}>
          {/* Your app components */}
        </FlowProvider>
      );
    }

    # Run app
    npm run dev

    # Run E2E tests
    npx cypress run

See also: Flow Emulator.

Staging — Testnet

  • Use when: Final network plumbing and configuration checks before release.

  • Why: Validates infra differences you cannot fully simulate.

  • Run: Pre-release and on infra changes.

  • Notes:

    • Keep canaries minimal and time-boxed; protocol and partner support may be limited on testnet (not all third-party contracts are deployed or up to date).
    • What to run: Minimal app smoke tests (login and auth, key flows, mint and transfer, event checks); frontend connected to Testnet with a small Cypress/Playwright smoke set; infra or config checks (endpoints, contract addresses or aliases, env vars, service or test accounts)
    • Not for the canonical Cadence test suite — prefer fork testing with flow test for scripted tests (see Fork Testing Flags and Running Cadence Tests)

    Quick start example:


    // In your root component (e.g., App.tsx)
    import { FlowProvider } from '@onflow/react-sdk';

    function App() {
      return (
        <FlowProvider
          config={{ accessNodeUrl: 'https://rest-testnet.onflow.org' }}
        >
          {/* Your app components */}
        </FlowProvider>
      );
    }

    # Run app
    npm run dev

    # Run smoke tests
    npx cypress run --spec "cypress/e2e/smoke.*"

See also: Flow Networks.

Post-deploy monitoring (read-only)

  • Use when: After releases to confirm invariants and event rates.
  • Why: Detects real-world anomalies quickly.
  • Run: Continuous dashboards and alerts tied to invariants.

Reproducibility and data management

  • Pin where reproducibility matters: Use --fork-height <block> for both flow test --fork and flow emulator --fork. Pins are per‑spork; historical data beyond spork boundaries is unavailable. For best results, keep a per‑spork stable pin and also run a "latest" freshness job.
  • Named snapshots: Maintain documented pin heights (for example, in CI vars or a simple file) with names per dependency or protocol
  • Refresh policy: Advance pins via a dedicated “freshness” PR; compare old vs. new pins
  • Goldens: Save a few canonical samples (for example, event payloads, resource layouts, key script outputs) as JSON in your repo, and compare them in CI to catch accidental schema/shape changes. Update the samples intentionally as part of upgrades.
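The goldens idea can itself be expressed as an ordinary Cadence test, so CI catches shape drift automatically. The sketch below assumes the testing framework's Test.readFile and Test.executeScript APIs; the file path "goldens/supply.txt" and the inline script are hypothetical placeholders:

```cadence
import Test

// Sketch: compare a canonical script output against a golden sample
// stored in the repo. Update the golden intentionally during upgrades.
access(all) fun testSupplyMatchesGolden() {
    let golden = Test.readFile("goldens/supply.txt")
    let result = Test.executeScript(
        "access(all) fun main(): String { return \"1000.00000000\" }",
        []
    )
    Test.expect(result, Test.beSucceeded())
    Test.assertEqual(golden, result.returnValue! as! String)
}
```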

CI tips

  • PRs: Run emulator unit or property and forked integration (pinned). Full suite is fine if practical; otherwise a small quick set.
  • Nightly/Main: Add a latest pin job and expand fork coverage as needed.
  • E2E (optional): Use flow emulator --fork at a stable pin and run your browser tests.
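Putting the PR-time tips together, a pinned CI job might look like the following GitHub Actions sketch (the workflow layout, directory names, and the FORK_PIN_MAINNET variable are assumptions; install the Flow CLI however your org prefers):

```yaml
name: tests
on: [pull_request]
jobs:
  unit-and-fork:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Hypothetical install step for the Flow CLI
      - run: sh -ci "$(curl -fsSL https://raw.githubusercontent.com/onflow/flow-cli/master/install.sh)"
      # Hermetic unit and property tests
      - run: flow test tests/unit/
      # Forked integration tests at a pinned height kept in a CI variable,
      # so a "freshness" PR only has to bump one value
      - run: flow test --fork mainnet --fork-height ${{ vars.FORK_PIN_MAINNET }} tests/fork/
```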

Test selection and tagging

  • Optional naming helpers: Use simple suffixes in test names like _fork, _smoke, _e2e if helpful.
  • Pass files and directories to run the tests you care about: flow test FILE1 FILE2 DIR1 ... (most common).
  • Optionally, use --name <substring> to match test functions when it’s convenient.
  • Defaults: PRs can run the full fork suite (pinned) or a small quick set; nightly runs broader coverage (and optional E2E).

Troubleshooting tips

  • Re-run at the same --fork-height, then at latest
  • Compare contract addresses/aliases in flow.json
  • Diff event or resource shapes against your stored samples
  • Check access-node health and CI parallelism or sharding

Dos and Don’ts

  • Do: Keep a fast, hermetic base; pin forks; tag tests; maintain tiny PR smoke sets; document pins and set a simple refresh schedule (for example, after each spork or monthly).
  • Don't: Make "latest" your default in CI; create or rely on real mainnet accounts; conflate fork testing (flow test) with the emulator's fork mode (flow emulator --fork).