Muxify
Technical assessment, rebuilt

Find candidates who can build

Coding puzzles can't predict who ships with AI. Watch candidates build real projects, then review the full session before you hire.

~/sessions/take-home-chat git:(main) · session active
Claude Code v2.1.132 · Opus 4.7 (1M context), xhigh effort · Claude Max
Found 2 settings issues · /doctor for details
For engineering leaders

How hiring works on Muxify.

Three steps from setup to a hiring decision you can trust. On your repo, in your stack, with the AI tools your team uses.

configure

Pick a repo. Pick a problem.

Connect your GitHub repo, set the time limit, and you're live in two minutes.

  • repository · acme-co/checkout-api
  • stack · TypeScript, Postgres, +2
  • level · L3, L4, L5, L6
  • time box · 60m
  • prompt · Add idempotency keys to /charge
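The configuration fields above could be expressed as a small typed object. A sketch with illustrative field names (Muxify's actual schema is not shown on this page):

```typescript
// Illustrative session config; field names are assumptions, not Muxify's API.
interface SessionConfig {
  repository: string;     // GitHub repo the candidate works in
  stack: string[];        // languages and services preinstalled
  levels: string[];       // target seniority bands
  timeBoxMinutes: number; // hard session limit
  prompt: string;         // the task shown to the candidate
}

const config: SessionConfig = {
  repository: "acme-co/checkout-api",
  stack: ["TypeScript", "Postgres"],
  levels: ["L3", "L4", "L5", "L6"],
  timeBoxMinutes: 60,
  prompt: "Add idempotency keys to /charge",
};
```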
invite

Send a single magic link.

One link, zero setup. Candidates work in a cloud terminal with their AI tool of choice, exactly how they would on the job.

muxify.so/s/a8f-92k1-7d · expires in 7 days · single use

  • Riya K. · opened 2m ago
  • Theo M. · submitted yesterday
  • Jamie L. · invited 3 days ago
review

Watch the session, not the score.

See every prompt, edit, and decision in a full session replay. Read the final PR and hire with confidence.

replay · candidate-7821 · 00:42:18 / 01:00:00

  • 14:02 · opened README.md
  • 22:41 · prompt → Claude
  • 33:08 · edit client.ts (+24 −2)
  • 56:11 · ran tests · 9 / 9 passing
For candidates

What the candidate actually sees.

No webcam, no invigilator, no memorized tricks. Just a repo, a terminal, and the AI tools they already use.

before

Algorithm grinder

  • Solve algorithm puzzles under a stopwatch
  • No internet, no AI, no real tools
  • A pass/fail score with no signal on craft
  • The same canned problems every other platform asks
on muxify

Real build session

  • Open the repo, read the README, start shipping
  • Use Claude Code, Cursor, or Copilot. Their choice
  • Working code at the end, reviewed like a real PR
  • Real problems sourced from real products
Session replay

Watch the work, not the résumé.

Every terminal command, file edit, and AI prompt is recorded. Scrub the timeline, comment inline, and share with your hiring panel.

Muxify · session active · candidate-7821 · take-home-chat · 00:42:18 · Share replay

Files
  • src
    • App.tsx
    • components
      • MessageList.tsx
      • Composer.tsx
    • lib
  • package.json
  • README.md

src/components/MessageList.tsx

import { useEffect, useState } from "react";
import { fetchMessages } from "../lib/client";

export function MessageList() {
  const [items, setItems] = useState([]);
  useEffect(() => {
    fetchMessages().then(setItems);
  }, []);
  return <ul>{items.map(...)}</ul>;
}

~/sessions/take-home-chat git:(main) claude-code

Claude Code v2.1.132 · Opus 4.7 · ~/sessions/take-home-chat

> Wire MessageList to /api/messages with optimistic updates.
· Reading src/components/MessageList.tsx
Adding fetch + optimistic queue. Two edits.
· Edit src/components/MessageList.tsx (+24 −2)
Brewed for 4s

Timeline · 00:00:00–01:00:00
Replay length 00:42:18 · Files changed 7 · Diff +148 −22 · Agent claude-code · Tests 9 / 9 passing
vs. legacy assessment platforms

The same problem, solved differently.

LeetCode tests for a world without AI. Muxify tests for the one your team actually works in.

What gets measured
  • Muxify: Shipping working code on a real repo
  • Legacy: Algorithm puzzle pass/fail

Tools the candidate uses
  • Muxify: Real terminal with their AI tool of choice
  • Legacy: Pen, paper, no internet

Session length
  • Muxify: 30–90 minutes
  • Legacy: 45 minutes of stress

What you review
  • Muxify: Full session replay + final PR
  • Legacy: A score and a leaderboard

AI usage
  • Muxify: Encouraged. The whole point.
  • Legacy: Strictly forbidden

Signal for senior ICs
  • Muxify: Direct. They did the job.
  • Legacy: Indirect. A proxy for tenacity.
Pricing

Priced per session. Not per seat.

You only pay when a candidate actually starts a session. Volume discounts kick in automatically. No sales call required.

Starter

$49 / session

For teams hiring fewer than 10 engineers a quarter. Pay-as-you-go.

  • Up to 25 sessions per month
  • Replay retention · 90 days
  • Single workspace
  • Email support
Request a demo

Enterprise

Custom

For Fortune 500 and regulated industries. SOC 2 Type II, data residency, on-prem runners.

  • Unlimited sessions
  • Self-hosted runners
  • Custom retention & compliance
  • Dedicated solutions engineer
Talk to sales
Questions

The questions engineering leaders ask first.

Is using AI during the session allowed?

Yes, that's the whole point. Muxify shows you how candidates work with AI, not whether they can avoid it. Every prompt is captured so you can evaluate how they direct the agent.

What happens to the candidate's code?

It stays in the candidate's session repo. You get the final state, the diff, and the full replay. Export to a private GitHub repo your team owns, or discard after a retention window you set.

Can candidates cheat by using a friend?

The replay shows every keystroke and every prompt. Outsourcing is obvious in the timeline: long idle gaps followed by sudden, out-of-character commits.
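The idle-gap signal described above is easy to picture in code. A minimal sketch (the event shape is hypothetical, not Muxify's actual replay format) that flags suspiciously long gaps between recorded events:

```typescript
// Hypothetical replay event shape; Muxify's real format is not documented here.
interface ReplayEvent {
  at: number;    // seconds from session start
  label: string; // e.g. "edit client.ts", "prompt → Claude"
}

// Return each pair of consecutive events separated by more than thresholdSec.
function findIdleGaps(
  events: ReplayEvent[],
  thresholdSec: number
): Array<{ before: ReplayEvent; after: ReplayEvent }> {
  const sorted = [...events].sort((a, b) => a.at - b.at);
  const gaps: Array<{ before: ReplayEvent; after: ReplayEvent }> = [];
  for (let i = 1; i < sorted.length; i++) {
    if (sorted[i].at - sorted[i - 1].at > thresholdSec) {
      gaps.push({ before: sorted[i - 1], after: sorted[i] });
    }
  }
  return gaps;
}
```

A long gap alone proves nothing; a reviewer would still scrub the timeline and read the surrounding edits.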

What languages and stacks are supported?

TypeScript, Python, Go, Rust, Ruby, Java, and their surrounding tooling. The cloud terminal is a full Linux environment, so anything that runs in CI runs on Muxify.

How long does a typical assessment take to set up?

Two minutes. Connect a GitHub repo, write a one-paragraph prompt, and set a time limit. Or start from our library of pre-built problems.

Do you integrate with our ATS?

Greenhouse, Ashby, and Lever on the Team plan. Custom and on-prem integrations on Enterprise. Webhooks for everyone.
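The webhook signing scheme isn't specified in this overview. A common pattern is an HMAC-SHA256 hex digest of the raw request body, sketched here with a hypothetical shared secret and signature header (the header name and scheme are assumptions, not Muxify's documented API):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify a webhook body against an HMAC-SHA256 hex signature, e.g. from a
// hypothetical X-Muxify-Signature header. Scheme is an assumption.
function verifyWebhook(
  rawBody: string,
  signatureHex: string,
  secret: string
): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const received = Buffer.from(signatureHex, "hex");
  // timingSafeEqual requires equal lengths and prevents timing attacks.
  return received.length === expected.length && timingSafeEqual(received, expected);
}
```

Comparing digests with a constant-time check matters here: a plain string comparison can leak how many leading bytes matched.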