Security PLAYBOOK · 2026

Bug Bounty on TON: Programs, Payouts and Realism (2026)

Catalog of active TON bug-bounty programs in 2026: TON Foundation up to $100k, Tonkeeper up to $30k, STON.fi, Tonstakers, EVAA.


Bug bounty on TON is a narrow but real niche. The stated ceilings look impressive: TON Foundation pays up to $100,000, Tonstakers the same on their programme, Tonkeeper up to $30,000. But between “stated” and “actually received” there is a wide gap. This playbook breaks down who fits TON bug hunting, which programmes are active in 2026, which bug classes actually pay, what tools the hunter uses, and why realism matters: most accepted reports are medium severity worth $100–5,000, not six-figure criticals.

TL;DR — what TON actually pays

| Programme | Stated ceiling | Realistic median payout | Submission channel |
|---|---|---|---|
| TON Foundation core | $100,000 | $500–5,000 (medium) | @ton_bugs_bot + GitHub |
| Tonstakers | $100,000 | $1,000–10,000 | via Tonstakers site |
| STON.fi DEX v2.2 | est. $50,000 critical | $500–5,000 | HackenProof |
| Tonkeeper wallet | $30,000 | $1,000–5,000 | security@tonkeeper.com |
| EVAA Protocol | undisclosed (est. $10–100k critical) | $500–5,000 | HackenProof |
| Bridges (TAC and others) | TAC case showed $2.5M+ loss | variable | via the bridge team |

Most TON payouts are four-figure. Six-figure payouts are isolated events, a few per year, usually tied to critical findings in bridges or core protocol. That is a normal picture for a young ecosystem with TVL around $300–500M.

Who fits TON bug bounty

Profile of a successful TON researcher in 2026:

  • Confident reading unfamiliar code — that is 90% of the work.
  • Understands async models (actor model, event-driven, message-passing). TON is closer to Erlang/Akka than to synchronous Solidity.
  • Has patience: one hunt on a project — 2–6 weeks of reading before the first candidate finding emerges.
  • Ready to spend 3–6 months learning with no guaranteed payout.

Wrong fit if:

  • Chasing quick money. There is none on TON.
  • Expecting Solidity-style bugs (reentrancy via call.value, uint256 integer overflow). They exist here too, but in different form.
  • Not ready to read academic work. Without “From Paradigm Shift to Audit Rift” (analysis of 34 public TON audits and 233 vulnerabilities) you will not understand the ecosystem.

Good news: competition on TON is lower than on EVM. By the same paper, there are only ~34 public audits and fewer than 20 well-documented bug classes. On Ethereum these categories number in the hundreds and each has hundreds of specialists. On TON — dozens.

Catalog of active programmes (May 2026)

TON Foundation Core Protocol

Platform — GitHub ton-blockchain/bug-bounty plus the @ton_bugs_bot Telegram bot. Frontend at hackenproof.com/programs/ton-society.

Max payout: $100,000 in Toncoin. Large amounts come with a 1-year Toncoin lock-up.

In scope:

  • Blockchain core (C++): Simplex, Catchain, validator/full/DHT nodes, TonLib, FunC and Fift compilers.
  • Standard contracts: Wallet V4/V5, multisig, nominator pools, jetton contracts, DNS, elector, network config.
  • Services: MyTonCtrl, HTTP API, Python SDK.

Out of scope: bugs requiring local host control, client-SDK misuse without privilege escalation, input hygiene issues, debug-only crashes, known design peculiarities (e.g., undefined async delivery order across shards — that is a feature, not a bug).

Tonkeeper

Site — tonkeeper.com/bug-bounty. Submission — email security@tonkeeper.com.

Max payout: $30,000. Tiers:

  • $15,000–30,000 — reliable loss of funds or confidential data without user interaction (unauthorized tx signing, private key leak).
  • $5,000–10,000 — limited access to funds, substantial user interaction required.
  • $1,000–2,000 — unauthorized personal data access, limited fund loss.
  • +25% bonus for bugs in beta versions.

Out of scope: brute-force, DoS, social engineering, third-party services embedded in Tonkeeper (NFT marketplace, swap providers).

STON.fi DEX (contracts v2.2.0)

Platform — HackenProof + GitHub ston-fi/bug-bounty. Scope — v2.2.0 contracts on mainnet (router, pools).

Strict PoC requirement: the PoC must run on a mainnet fork, not a clean-room local setup. Submission is strictly via HackenProof within 24 hours of discovery. Amounts are not stated in the Terms of Participation; we estimate $5,000–50,000 for medium to critical findings and up to $50,000–100,000 for criticals with a confirmed drain.

Out of scope: precision issues (LP rounding), slippage, frontrun, backrun, sandwich attacks. These are AMM mechanics, not bugs.

Tonstakers (LST protocol)

Site — tonstakers.com. A $100,000 programme is mentioned on their business page.

In scope: stake pool contracts, jetton minter, mTGV pool, validator management. Protocol TVL — $70M+, any fund-loss bug is automatically critical.

EVAA Protocol (lending)

Platform — HackenProof (via TON Foundation partnership).

In scope: EVAA master contract, user-SCs (sharded), isolated lending pools. Assets — TON, USDT, jUSDT, NOT, tsTON, stTON, hTON. TVL per blog.ton.org in 2026 is approaching $1B.

Payouts are not publicly disclosed — estimate $10,000–100,000 for criticals (standard for lending protocols). Parallel opportunity — the Liquidator Hackathon via EVAA’s Telegram bot, a grants programme.

Bridges and DEX outside top-3

This is both the richest payout category and the most dangerous. The TAC bridge drain in May 2026 (~$2.5M+ lost in wrapped jettons + 384k newly-minted TAC) showed bridge validator logic is still fragile. The bridge admin contract — 2,399 TASM lines — contained no CHKSIGN opcode at all. Validator signature was never verified on-chain. Detailed breakdown — in our TAC bridge attack analysis.

Programmes:

  • TON Diamonds (NFT) — on HackenProof, amounts undisclosed.
  • DeDust — programme mentioned, status to be verified on HackenProof.
  • Storm Trade (perpetuals) — potentially a large programme in the future (dvAMM logic, omni-vault).
  • TONCO (V3 CLMM) — new startup, Algebra Labs fork.

Main action: go to hackenproof.com/programs and filter by TON. The list rotates monthly.

Platforms — HackenProof and Immunefi

HackenProof — the main platform for TON programmes in 2026. Free registration, no KYC until payouts of $10,000+. Submission via the programme’s form, status tracking in the dashboard.

Immunefi historically worked more with EVM, but in 2026 several TON projects connected (TON Foundation hosts part of its programme there). Registration and submission similar.

Bug classes on TON that actually pay

The list is based on the CertiK Tact mistakes and SlowMist Toncoin Smart Contract Security Best Practices catalogs, verified in 2025.

Reentrancy via async messaging

TON has no synchronous call.value like Ethereum. But it has long message chains between contracts, and intermediate state is often stored between steps. If a parallel transaction overwrites that state before it is used, the result is a reentrancy-like situation.

Developer mitigation — the carry-value pattern (pass data in the message body, not storage). If you see pending state held in storage between two messages, that is a suspect.
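The difference between the two designs is easiest to see in a toy model. This Python sketch (all names hypothetical, not real contract code) simulates two interleaved withdraw flows over TON's async message hops: the storage-based variant loses the first user's amount to an interleaved transaction, the carry-value variant does not.

```python
# Toy model of TON's async message flow. Illustrative only: real contracts
# are FunC/Tact; this models "storage between hops" vs "carry-value".

class VaultStorageState:
    """Anti-pattern: remembers the pending amount in contract storage."""
    def __init__(self):
        self.pending_amount = 0

    def start_withdraw(self, amount):
        self.pending_amount = amount   # hop 1: stash in storage

    def finish_withdraw(self):
        amount = self.pending_amount   # hop 2: read it back later
        self.pending_amount = 0
        return amount

class VaultCarryValue:
    """Carry-value pattern: the amount travels in the message body."""
    def start_withdraw(self, amount):
        return {"op": "finish", "amount": amount}  # message body

    def finish_withdraw(self, msg):
        return msg["amount"]

# Interleaving: a second withdraw lands between hop 1 and hop 2.
bad = VaultStorageState()
bad.start_withdraw(100)
bad.start_withdraw(5)                 # parallel tx overwrites pending state
assert bad.finish_withdraw() == 5     # first user's 100 is silently lost

good = VaultCarryValue()
msg = good.start_withdraw(100)
good.start_withdraw(5)                # parallel message cannot clobber it
assert good.finish_withdraw(msg) == 100
```

The takeaway for review: any field written in one handler and read in a later handler of the same logical flow is a candidate for this class.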

TVM exotic cell attacks

A TVM cell has 5 types: ordinary, pruned_branch, library_ref, merkle_proof, merkle_update. Each exotic type is handled specially. A forged proof (a claimed root that does not match the hash of the referenced data) is caught by TVM validation when built via the ENDXC opcode, which fails with exit code 8. But when an external message arrives carrying a serialised exotic cell, the contract must itself check that the cell is relevant and not forged.

Attacks: substituting library_ref with your own library, slipping a pruned branch where a full tree is expected, fake merkle_proof for claim-style contracts (drop, airdrop). All of them have been seen in TON protocol audits.

Validator signature bypass

The bug class that produced 2026’s loudest drain — TAC. Bridge contracts often implement multisig via off-chain validator signatures that are then checked on-chain via CHKSIGNU. If any handler branch skips CHKSIGN* or calls it with the wrong pubkey, the attacker can pass auth via cherry-picked sender derivation.

Check: run grep -E 'CHKSIGN|CHKSIGNU|CHKSIGNS' over the bridge contract’s TASM disassembly. If the admin handler does not contain any of these opcodes, that is either by design (suspicious) or a bug.
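The same check can be scripted if you want to scan many disassemblies at once. A minimal Python sketch; the handler strings below are invented examples, not real TAC disassembly:

```python
import re

def signature_checks(tasm: str) -> list[str]:
    """Return every CHKSIGN-family opcode found in a TASM disassembly."""
    return re.findall(r"\bCHKSIGN[US]?\b", tasm)

admin_handler = """
    PUSHSLICE x{0}
    SDEQ
    THROWIFNOT 401
"""  # sender comparison only, no signature verification anywhere

transfer_handler = """
    HASHSU
    CHKSIGNU
    THROWIFNOT 402
"""

assert signature_checks(admin_handler) == []              # red flag
assert signature_checks(transfer_handler) == ["CHKSIGNU"]
```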

Jetton replay and double-spend

A class of errors tied to receiving TransferNotification without validating sender() == expected_jetton_wallet. The attacker deploys a fake jetton master, creates a fake jetton wallet, sends TransferNotification with arbitrary amount — and the contract credits the attacker.

Mitigation: always verify the TransferNotification sender is a legitimate jetton wallet for the known jetton master, via calculate_user_jetton_wallet_address(master, owner).
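A sketch of that check in Python, with a toy hash standing in for real TON address derivation. KNOWN_MASTER, WALLET_CODE and the function names are illustrative assumptions, not the real API:

```python
import hashlib

# Hypothetical stand-in for calculate_user_jetton_wallet_address: on TON
# the wallet address is derived from its state-init (code + master + owner).
def jetton_wallet_address(master: str, owner: str, wallet_code: str) -> str:
    return hashlib.sha256(f"{wallet_code}|{master}|{owner}".encode()).hexdigest()

KNOWN_MASTER = "EQ_usdt_master"   # assumption: fixed in contract storage
WALLET_CODE = "jetton_wallet_v1"  # assumption: loaded from storage, not msg

def on_transfer_notification(sender: str, owner: str, amount: int) -> bool:
    """Credit the transfer only if it came from our real jetton wallet."""
    expected = jetton_wallet_address(KNOWN_MASTER, owner, WALLET_CODE)
    return sender == expected

owner = "EQ_alice"
real_wallet = jetton_wallet_address(KNOWN_MASTER, owner, WALLET_CODE)
fake_wallet = jetton_wallet_address("EQ_fake_master", owner, WALLET_CODE)

assert on_transfer_notification(real_wallet, owner, 100) is True
assert on_transfer_notification(fake_wallet, owner, 100) is False  # rejected
```

The point of the model: a fake master produces a different wallet address, so the equality check alone is sufficient, but only if the master and wallet code come from trusted storage.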

State-init derivation bypass

When a contract derives the address of another contract via contracts::from_sources(idata, code) — that is a critical point. Every field of the state_init that affects the address must be checked. If even one field is attacker-controlled, they can produce a fake contract with a different address and pass equal_slices(vault~address(), ctx.at(SENDER)).

Subtlety: the code cell must be loaded from storage, not from the message body, otherwise the attacker can supply their own code and get an arbitrary address.
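That subtlety can be modelled in a few lines. The Python below uses a toy hash as address derivation (real TON derives the address from the state-init hash); function names are illustrative:

```python
import hashlib

def derive_address(code: str, data: str) -> str:
    """Toy stand-in for TON address derivation: hash of the state-init."""
    return hashlib.sha256(f"{code}|{data}".encode()).hexdigest()

STORED_VAULT_CODE = "vault_code_v1"   # assumption: kept in contract storage

def is_our_vault_unsafe(sender: str, msg_code: str, idata: str) -> bool:
    # BUG: code cell taken from the message body. The attacker controls it,
    # so any contract they deploy can pass this equality check.
    return sender == derive_address(msg_code, idata)

def is_our_vault_safe(sender: str, idata: str) -> bool:
    # Fix: code loaded from storage; only idata varies.
    return sender == derive_address(STORED_VAULT_CODE, idata)

evil = derive_address("evil_code", "idata_1")
assert is_our_vault_unsafe(evil, "evil_code", "idata_1")   # auth bypassed
assert not is_our_vault_safe(evil, "idata_1")              # fix holds
```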

Highload V3 race conditions

Highload wallets are used by projects for mass payouts (staking rewards, drop lists). They have their own specifics — query_id with TTL, replay protection via bitmap. Implementation errors in highload logic in new contracts (when a team copies the pattern but forgets about nonce uniqueness) are a frequent medium-tier finding.
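A simplified model of the replay protection in Python. The real Highload V3 contract uses rotating bitmaps and on-chain timestamps; this sketch only captures the TTL logic and the subtlety that expired query_ids become reusable:

```python
class HighloadReplayGuard:
    """Sketch of Highload-V3-style replay protection: each query_id may be
    used once within its TTL window. Simplified model, not the real logic."""

    def __init__(self, ttl_seconds: int):
        self.ttl = ttl_seconds
        self.seen: dict[int, float] = {}   # query_id -> first-seen time

    def accept(self, query_id: int, now: float) -> bool:
        # purge expired entries (the on-chain version rotates bitmaps)
        self.seen = {q: t for q, t in self.seen.items() if now - t < self.ttl}
        if query_id in self.seen:
            return False        # replay within TTL: reject
        self.seen[query_id] = now
        return True

g = HighloadReplayGuard(ttl_seconds=60)
assert g.accept(42, now=0.0) is True
assert g.accept(42, now=10.0) is False   # replayed inside the TTL: blocked
assert g.accept(42, now=100.0) is True   # after expiry the id is reusable
```

That last line is exactly where copied implementations go wrong: if the sender reuses query_ids across TTL windows, an old signed payload can be replayed.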

Bounce-handler misuse

A bounced message carries only 256 bits, 224 useful. Complex state recovery is impossible. If a bounce-handler is meant to “return jettons to the user if the outbound failed” but the bounced msg lacks data to reconstruct context — jettons remain in the contract permanently.

Anti-pattern: recv_internal without an explicit if ctx.at(IS_BOUNCED) {...} block. All outbound messages must be sent with bounceable=true (flag 0x18) and the contract must explicitly handle bounce.
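One workable pattern is to store the context locally, keyed by query_id, before sending, so the tiny bounced body is enough to recover it. A Python sketch with illustrative names, not a real contract API:

```python
# "Pending map" pattern: a bounced TON message carries at most ~224 usable
# bits, so full context must be saved in storage before sending, keyed by
# query_id, and looked up in the bounce handler.

class JettonForwarder:
    def __init__(self):
        self.pending: dict[int, tuple[str, int]] = {}  # query_id -> (user, amount)

    def send_jettons(self, query_id: int, user: str, amount: int) -> dict:
        self.pending[query_id] = (user, amount)   # save context BEFORE sending
        return {"query_id": query_id, "bounceable": True}  # flag 0x18

    def on_bounce(self, query_id: int):
        # The bounced body fits only the query_id; the rest comes from storage.
        return self.pending.pop(query_id, None)

f = JettonForwarder()
f.send_jettons(7, "EQ_alice", 500)
assert f.on_bounce(7) == ("EQ_alice", 500)   # refund is reconstructable
assert f.on_bounce(7) is None                # and can only be claimed once
```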

Tact-specific traps

From the CertiK pitfalls:

  • amount: Int instead of Coins or uint256 — allows negative values, attacker sends amount = -100, bypasses balance check.
  • response_destination: Address instead of Address? — transactions with zero-address (addr_none) fail.
  • Serialising index: Int as int257 instead of uint256 — messages become undecodable at the receiver.
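The first pitfall can be modelled in a few lines of Python: a signed int stands in for Tact's Int, and the explicit negativity check stands in for what a Coins type enforces at decode time.

```python
# Toy model of the "amount: Int" pitfall: Int is signed, so a negative
# amount passes `balance >= amount` and then INCREASES the balance on
# subtraction. A Coins (unsigned) type rejects it before any logic runs.

def withdraw_unsafe(balance: int, amount: int) -> int:
    if balance < amount:                  # 50 < -100 is False: check passes
        raise ValueError("insufficient")
    return balance - amount

def withdraw_safe(balance: int, amount: int) -> int:
    if amount < 0:                        # what a Coins type enforces
        raise ValueError("negative amount")
    if balance < amount:
        raise ValueError("insufficient")
    return balance - amount

assert withdraw_unsafe(50, -100) == 150   # attacker minted 100 from nothing
try:
    withdraw_safe(50, -100)
    assert False, "should have raised"
except ValueError:
    pass
```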

The hunter’s toolkit

Acton Foundry

A 2025–2026 toolchain that is becoming the de-facto standard for TON auditors. Three key capabilities:

  • Mutation testing (shows which checks in the code are “dead” — no mutation breaks them).
  • Retrace (reconstruct full transaction context from mainnet or fork).
  • Lint against CertiK and SlowMist rules.

Deeper coverage — in our Acton guide.

Misti — Tact static analyser

Repo — github.com/nowarp/misti. Install — npm i -g @nowarp/misti. Run — misti /path/to/contracts/*.tact.

Built-in detectors: CellOverflow, StringReceiversOverlap, SendInLoop, UnboundMap, DivideBeforeMultiply, SuspiciousMessageMode, ZeroAddress, InheritedStateMutation. Every warning is a candidate bug to check manually.

EVM counterpart — Slither. Coverage is smaller but growing.

TSA (TON Symbolic Analyzer)

Symbolic execution of every code path — proves unreachable states, finds assert violations. Deeper than Misti, slower. Use after Misti as a deep-check on selected handlers.

EmulatorEx and pytoniq

pytoniq_core.contract.emulator.EmulatorEx — a local TVM emulator. Emulates exploit scenarios without gas, without deploy, without network. This is gold for PoCs: confirming a bug via EmulatorEx is the fastest path.

TypeScript alternative — @ton/sandbox (part of @ton/blueprint). More convenient for those who write tests in TS.

Tonscan and TonAPI

For on-chain forensics. Tonscan — UI for reading transactions and contracts, TonAPI — programmatic access. Used at the recon stage: see how contracts actually interact in mainnet, who sends what to whom, what TVL is at risk.

The 8-phase audit workflow

The method is built from STON.fi v2 and funcbox audit practice in spring 2026. It applies to any contract written in FunC or Tact.

Phase 1 — Recon (1–2 hours). Understand why this contract exists and what it protects. Read README, whitepaper, audit reports (if any), Twitter/blog of the last 3 months for post-mortems. Answer: which assets are held, who pays gas, where the admin privileges sit, what TVL is at risk.

Phase 2 — Mapping (2–4 hours). Map of entry points and storage. Line counts (find ... -name "*.fc" -o -name "*.tact" | xargs wc -l), sorted by size (large = complex = bigger attack surface). Opcode catalog (grep -r "op::"). Each recv_internal is an entry point, each branch inside is a separate attack surface.

Phase 3 — Trust Model (1–2 hours). For each operation — who can trigger it. Find every equal_slices(ctx.at(SENDER), ...), write out the trust boundaries. Check state-init derivation: are all address-affecting fields checked, is the code cell loaded from storage, is the workchain correct.

Phase 4 — Bug-class checklist. Run a systematic checklist over each entry point. TVM/FunC: impure modifiers, bounce handlers, gas checks, set_code under admin+timelock, force_chain() for critical addresses, end_parse(). Tact: amount is not Int, jetton sender is validated, bounce mode is correct. Domain specifics: slippage and reserves overflow for DEX, health factor and oracle freshness for lending, rate update atomicity for LST.

Phase 5 — State Invariants. For each operation — what should remain true after it executes. This is the most valuable part of the audit. A bug = violation of an invariant that the attacker can trigger. DEX examples: reserve0 * reserve1 >= k_before, total_supply_lp == sum(lp_balances). LST: pool.ton_balance / jetton.total_supply >= rate_at_last_rebase.
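One way to make Phase 5 concrete is to encode each invariant as an executable predicate and assert it after every simulated operation. A toy fee-less constant-product pool in Python, illustrative only:

```python
# Invariant-as-predicate sketch for Phase 5. Toy x*y=k pool; integer
# division rounds output down, which keeps rounding in the pool's favour.

class Pool:
    def __init__(self, r0: int, r1: int):
        self.r0, self.r1 = r0, r1

    def k(self) -> int:
        return self.r0 * self.r1

    def swap0_for_1(self, amount_in: int) -> int:
        out = (amount_in * self.r1) // (self.r0 + amount_in)  # rounds down
        self.r0 += amount_in
        self.r1 -= out
        return out

def invariant_holds(pool: Pool, k_before: int) -> bool:
    return pool.k() >= k_before   # reserve0 * reserve1 >= k_before

p = Pool(1_000_000, 1_000_000)
for trade in (1, 337, 50_000, 999_999):
    k_before = p.k()
    p.swap0_for_1(trade)
    assert invariant_holds(p, k_before)   # a violation here = a finding
```

The same shape works for the LST rate invariant: wrap it in a predicate and hammer it with operation sequences until it breaks or you trust it.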

Phase 6 — Bounce Path Analysis. For each outbound message — what happens if it bounces. The most common bug source on TON. Real example from STON.fi: vault sends vault_pay_to to the router and self-destructs (DESTROY_IF_ZERO); if the target jetton wallet is not deployed — bounce back to the router; the router tries to refund the vault — but the vault is already destroyed; fees lost.

Phase 7 — PoC Construction. Turn a hypothesis into a reproducible exploit. Set up in @ton/sandbox or via pytoniq EmulatorEx. The PoC must show: initial state, crafted message, final state, quantified damage (how much TON / jetton lost). Without quantified damage the report is rejected.

Phase 8 — Reporting. Structure: Severity (per the programme’s table) → Impact → Affected Component (contract, file, line numbers) → Root Cause → Reproduction Steps → PoC → Suggested Fix → References (CertiK pitfall #N, SlowMist guide §N, similar bugs in other projects).

Realistic expectations

The main thing to understand before starting: expect 1–2 valid findings over 3–6 months of solo hunting, with payouts typically $100–5,000. That is the normal picture. Six-figure payouts are isolated events, a few per year, usually tied to critical findings in bridges or core protocol.

What that means in practice:

  • In the first year at 10–15 hours per week, realistic earnings are $3,000–15,000. Not a salary, a side income.
  • For full-time TON bug hunting (income $50,000+ per year) you need to either work 5–10 programmes in parallel or specialise in a niche (e.g., bridges only, or DeFi lending only).
  • Many drop out in month 2 once it becomes clear there is no easy money. Those who stick it out can hit $100k+ annually after 12–18 months, plus access to a full-time audit role.

Alternative growth paths after the first finding:

  • Top-tier auditing: Certora, Trail of Bits, OpenZeppelin pay $200–500k/year full-time.
  • Code4rena and Sherlock contests: after TON, add Solidity, top-1% earn $200k–1M/year.
  • Your own audit service: after 5–10 public findings — own clients, freelance audits at $5–50k each.
  • TON Foundation grants: up to $50k for interesting security tools.

Disclosure path: how to write a report that gets accepted

What belongs in the report:

  1. Severity strictly per the programme’s table. Do not inflate — it annoys reviewers and lowers your odds. An inflated severity usually gets downgraded with a smaller payout.
  2. Impact in one or two sentences. What the attacker gets, how much they can extract, under what conditions.
  3. Affected Component — exact lines in the code. Not “the swap function” but “contracts/router.fc, line 247–268, handle_swap handler”.
  4. Root Cause — the precise technical reason. Which check is missing or which logic is wrong. Not “the contract is unsafe” but “missing validation sender == expected_jetton_wallet before processing transfer_notification”.
  5. Reproduction Steps — step by step, with concrete values.
  6. PoC — link to a gist or embedded code. Must run on a clean environment (git clone && npm install && npm test).
  7. Quantified Damage — mandatory. “10,000 USDT extracted in one transaction” or “the entire pool TVL is permanently locked”. Without numbers it is a theoretical vulnerability, which usually does not pay.
  8. Suggested Fix — concrete change in the code. Shows you understand the root cause, not just the symptom.
  9. References — which pitfall from CertiK / SlowMist / arxiv this illustrates. If it is a known class — name it. If it is new — explain how it differs from known ones.

What NOT to do — anti-patterns

Exploit on mainnet without authorisation. Any transaction that demonstrates the bug on mainnet establishes the elements of a criminal offence. Even if you immediately returned the funds, unauthorised access is on-chain forever. PoCs only on forked mainnet or testnet, never real mainnet.

Publishing the PoC before the fix. Disclosure breach = zero payout + ban from the programme + risk of payout clawback for other researchers using the same channel. Do not publish the bug on Twitter/Telegram/GitHub Issues before the disclosure window ends (typically 90 days or post-confirmed fix).

Retaliation over a low payout. If the team paid $500 instead of the expected $5,000 — it may sting, but a public reaction (“team X stiffed me on bug bounty”) guarantees you are out of the industry. Security teams talk to each other. Better — quietly move on to the next programme.

Insider trading on undisclosed vulnerability knowledge. If you found a bug in a DEX and pre-shorted its token — that is the crypto analogue of insider trading. Regulators do not yet reach into TON, but reputationally it is career suicide.

Contacting the team via non-recommended channels. Only the official submission form. The Telegram of a personal developer is not a security disclosure channel. It creates a grey zone where responsibility for handling the report is diffuse.

The future of TON bug bounty

By end of 2026 expect:

  • More programmes from promising DeFi projects. As ecosystem TVL grows (May 2026 — about $300–400M, forecast end of year $500M–1B), more teams will be able to allocate bounty budgets.
  • TON-specific audit firms appearing. Currently TON audit is dominated by SlowMist, CertiK, Trail of Bits — all primarily EVM teams with TON practice. Native teams expected by 2027.
  • HackenProof standardising as the main TON platform. Immunefi remains mostly EVM, and its pace of TON integration lags behind.
  • Bridge programmes category will grow. After TAC drain and other 2026 incidents, all bridges are reviewing their security programmes.

The window for solo researchers on TON is still open because tooling is young, bug classes are still being cataloged, FunC and Tact are known by an order of magnitude fewer people than Solidity. In 2–3 years competition will grow, payouts will normalise with EVM, and entry will be harder. Now is the best time.

Frequently asked

How much can a researcher realistically earn?

Stated ceilings are up to $100k at TON Foundation and Tonstakers, up to $30k at Tonkeeper. In practice most accepted reports are medium severity, paying $100–5,000. Criticals (drain of a contract with $10M+ TVL) happen a handful of times per year, with $20–100k payouts. A realistic goal for a solo researcher — 1–2 medium findings over 3–6 months.

Do I need to know FunC and Tact?

Reading them — mandatory. Writing — desirable but not critical. FunC is close to assembler, Tact closer to Rust/Solidity. Basic syntax takes a week. What matters more is understanding the TVM model: async messaging, bounce, exotic cells, transaction phases. Without that, even strong syntax knowledge is useless.

Can I submit anonymously, and is KYC required?

TON Foundation accepts reports via the `@ton_bugs_bot` Telegram bot — anonymously, but payout needs a TON wallet (large sums have a 1-year lock-up). HackenProof and Immunefi require KYC at $10k+. For small payouts (under $1k) email and wallet are typically enough.

What gets accepted and what gets rejected?

Accepted: reproducible PoC on a forked mainnet with quantified damage (X TON lost, Y jettons drained). Rejected: theoretical vulnerabilities without exploit, out-of-scope bugs, known design peculiarities, frontrun/sandwich on DEX (that is AMM behaviour, not a bug), DoS via message spam, social engineering.

How long until a first paid finding?

By our estimate — 2–4 months with discipline of 10–15 hours per week. Month 1 — foundation: TVM, FunC/Tact, tooling. Month 2 — deep dive into one protocol's code. Month 3 — systematic checklist run. Month 4 — submission and reviewer feedback. Many drop out in month 2 once it becomes clear there is no easy money.

What is the risk of testing on mainnet?

Criminal prosecution. Under US law — Computer Fraud and Abuse Act, up to 10 years. Under EU directives — equivalent unauthorised-access statutes. Run PoCs only on forked mainnet via `@ton/sandbox` or a local TVM emulator. Even 1 nanoTON sent in an exploit transaction on mainnet establishes intent.

Where should a newcomer look for bugs?

New DeFi protocols with $1–10M TVL that have not yet been publicly audited. Shared libraries (funcbox, awesome-libs) — fewer eyes. Bridges — the richest payout category (the TAC drain in May 2026 showed bridge validator logic is still fragile). Lending markets (EVAA, Storm Trade) — complex oracle/liquidation mechanics. Old multisig and nominator pools are already well studied.

What if the team does not respond?

Wait 7–14 days — security teams are often overloaded. Then escalate via an alternative channel (Twitter DM to a core developer, HackenProof support). Do not publish the PoC. If 90 days pass with no fix and no communication — you can reach out to TON Foundation security as an arbiter. Public disclosure only after a fix or on clear user-harm risk, and even then through a CERT-style channel.
