Last weekend, my team placed 2nd in the DevTools Track at Carnegie Mellon University's NexHacks 2026, the largest hackathon on the East Coast with over 1,500 competitors. We built Pyra, a runtime security agent that provides transaction safety for autonomous crypto operations.
We treated this hackathon with the same engineering rigor I apply to telemetry systems at Knights Experimental Rocketry. We spent the first few hours purely on system architecture: defining functional requirements, mapping data flows, and reading documentation before writing a single line of code.
That discipline paid off in stability. Our demo worked flawlessly. But it came at a cost: time.
We spent hours reading Agent Developer Kit documentation just to figure out the boundaries of what was possible. In a 24-hour sprint, that is a luxury you cannot afford.
Reflecting on the win, I realized our workflow was fundamentally backwards for a hackathon environment. We won, but we almost didn't finish. Here is the bottleneck we faced, and the workflow we will use next time to cut setup time dramatically.
The Traditional Workflow
In traditional software engineering, the flow is linear. First you define the premise: what are we building? Then you write the requirements: what must it do? Then you choose the tools and architecture: what tech stack fits these requirements?
This works when you know the tools. But in a hackathon, you are often using new APIs for the first time. We were working with LiveKit and Kairo, neither of which we had used before.
We lost hours trying to write requirements for tools we didn't fully understand yet. We were guessing at the architecture, reading docs to verify, adjusting the architecture, and reading more docs. It was a slow feedback loop. Write a functional requirement. Read the docs. Realize it's not possible the way we wrote it. Rewrite the requirement. Read more docs. Repeat.
The problem is that we were letting requirements dictate the tools before we understood what the tools could do.
The Inverted Workflow: Tools First, Requirements Second
The solution is to invert the process. Instead of letting requirements dictate the tools, let the tools inform the requirements. Use AI as the bridge.
This is where MCP servers come in. MCP stands for Model Context Protocol. LiveKit and many modern dev tools now offer MCP servers: connectors that let LLMs like Claude query their documentation and APIs in real time, instead of relying on stale training data.
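Wiring one up is usually just a client-side config entry. Here is a minimal sketch in the Claude Desktop-style `mcpServers` format; the server name and launch path are placeholders, not LiveKit's actual distribution:

```json
{
  "mcpServers": {
    "livekit-docs": {
      "command": "node",
      "args": ["path/to/livekit-mcp-server.js"]
    }
  }
}
```

Once the client restarts, the model can call the server's tools directly, which is what makes the "interview the docs" step below possible.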
Here is what the new workflow looks like.
First, define the premise. Keep it high-level. For Pyra, the premise was: a voice agent that refuses to execute dangerous crypto transactions.
Second, before writing any requirements, connect your IDE to the sponsor's MCP server. Ask the AI to audit your premise against the documentation. For example: "I want to build a runtime security voice agent. Scan the LiveKit documentation via MCP and tell me: What is the lowest-latency path for interrupting a speaker mid-sentence? Does the Python SDK support 'barge-in' natively, or do I need to implement my own VAD loop? Give me the architectural constraints."
Instead of spending two hours reading docs to find out if something is possible, the MCP server tells you in ten seconds. Yes, use the VoiceAgent class with function calling. Here are the constraints. Here is sample code.
Third, now write the functional requirements and system architecture. But this time, you are not guessing. You are writing requirements that you know are technically feasible because the MCP just gave you the implementation path.
Fourth, implement. Because the architecture was pre-validated against the tool's own documentation via MCP, the coding phase becomes assembly, not discovery.
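To make "assembly, not discovery" concrete, here is a minimal, framework-free sketch of the kind of transaction gatekeeper Pyra centers on. Every name and threshold here is illustrative, not Pyra's real policy; in the actual system this logic would sit behind the voice agent's function-calling layer.

```python
from dataclasses import dataclass

# Illustrative policy -- placeholder thresholds, not Pyra's real rules.
MAX_AMOUNT = 1_000.0
ALLOWED_TOKENS = {"ETH", "USDC"}

@dataclass
class Transaction:
    token: str
    amount: float
    to_address: str

def gate(tx: Transaction, allowlist: set[str]) -> tuple[bool, str]:
    """Return (approved, reason). The agent only executes approved transactions."""
    if tx.token not in ALLOWED_TOKENS:
        return False, f"token {tx.token} is not on the allowed list"
    if tx.amount > MAX_AMOUNT:
        return False, f"amount {tx.amount} exceeds the {MAX_AMOUNT} cap"
    if tx.to_address not in allowlist:
        return False, f"address {tx.to_address} is not allowlisted"
    return True, "approved"

if __name__ == "__main__":
    allowlist = {"0xabc"}
    print(gate(Transaction("USDC", 50.0, "0xabc"), allowlist))    # approved
    print(gate(Transaction("USDC", 5_000.0, "0xabc"), allowlist)) # refused: over cap
```

The point is that once MCP has confirmed the SDK gives you a function-calling hook, the only code you write is domain logic like this, and the rest is wiring.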
The old way: write requirements, read docs, realize it's impossible, rewrite requirements. The new way: ask MCP, write requirements based on the answer, implementation is already mapped.
As engineers, we are trained to be implementation agnostic. Define the what before the how. But in the age of AI and 24-hour builds, the how is changing so fast that it dictates the what. By using MCP servers to interview our tools before we design our system, we don't just build faster. We build things that actually work within the constraints of the technology.
Winning Without Complacency
We didn't win the sponsor tracks. When the results came in, we felt bummed. We had read Kairo's whitepaper, used their core API, and built something that addressed a gap they explicitly identified. It wasn't enough.
Then we won 2nd place in the general DevTools track.
The emotional swing was intense. Suddenly we were celebrating. But on the drive home, I started thinking: what if we hadn't won the general track? I would have been restless, trying to figure out what our competitors did better. I would have dissected every decision we made.
But because we won, it would have been easy to overlook all of that. To get complacent. To assume we had it figured out.
The truth is the work doesn't change based on the outcome. Whether you win or lose, the gaps are still there. The improvements are still waiting.
So even after winning, I went out of my way to figure out what could have been better. How could we have aligned ourselves more with the sponsors? We realized a critical architectural distinction: Passive vs. Generative. We used Kairo as a Gatekeeper—a security layer that validates transactions after they're proposed. The winners used Kairo as a Generator—the core engine that creates transaction proposals in the first place. We built a product that used Kairo; they built products where Kairo was the only reason the product could exist. That distinction is the difference between a functional implementation and a winning narrative.
If you didn't place this time, the work is the same. Study what the winners did differently. The gaps you find are your roadmap.
The Takeaway
We built Pyra with a security first mindset. Next time, we build with a discovery first mindset. Define the premise. Interview the tools with MCP. Then write the requirements.
Winning is a scoreboard outcome. Learning is an engineering outcome.
It's not about needing failure to learn. It's about being a lifelong learner and standing firm on that regardless of the wins and losses.