Prompt engineering gets genuinely interesting when you use one AI conversation to write prompts for another AI conversation, because the result is a workflow where plain-language ideas turn into working code without needing deep Python skills.
- A prompt generator chat writes a technical prompt that extracts data and codes a ranking tool.
- Newer language models tend to write better code, so using a stronger model is a real advantage if one is available.
- Iterating through versioned chats, like RankingToolV1 through V3, keeps the workflow organized even as prompts evolve.
This lesson is a preview from our AI Workflows for AEC Professionals Course Online. Enroll in a course for detailed lessons, live instructor support, and project-based training.
The sandbox here is forgiving by design. There is no single right answer, and even running through all the steps might not produce a perfect chart. The point is to understand the technique and how to apply it to real project work. A newer model will usually handle the code better than the free tier, so if a stronger model is available, this is the right time to use it. That said, the core workflow holds up on any model that supports file uploads and multi-turn prompts.
Set up the Prompt Generator
Start a new chat that will act as a prompt generator. Its only job is to produce a prompt that the next chat will actually run. Begin with the role: an expert prompt engineer. Then state the goal clearly. The goal is for the chatbot to write a prompt that does two things. First, it extracts data from the attached PDF. Second, it codes a Google Colab tool that ranks contractors.
Attach the PDF while the data source is fresh in mind. Forgetting to attach the file is one of the most common mistakes, and it weakens every subsequent step. Next, add the features the ranking tool should include. Describe an interactive chart that defaults to price, with sliders that re-rank contractors based on weighted scores for safety, schedule, project manager experience, and technical specifications. Call out the two rooms that matter most for this project, B1.1 and L2.1, so the prompt stays grounded in the right scope.
Chatbots tend to be chatty, so cap the length. A line like "as short as possible, provide prompt only" keeps the output clean and easier to copy into the next chat.
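Assembled from the pieces above, the generator prompt might look something like the following sketch. The exact wording, features, and room IDs are illustrative, not the verbatim prompt from the lesson; adapt them to your own project.

```python
# A sketch of the meta-prompt sent to the prompt generator chat.
# Wording is illustrative; swap in your own project's features and rooms.
role = "You are an expert prompt engineer."
goal = (
    "Write a single prompt that (1) extracts contractor data from the "
    "attached PDF and (2) codes a Google Colab tool that ranks contractors."
)
features = (
    "The tool shows an interactive chart that defaults to price, with "
    "sliders that re-rank contractors by weighted scores for safety, "
    "schedule, project manager experience, and technical specifications. "
    "Focus on rooms B1.1 and L2.1."
)
constraint = "Be as short as possible. Provide the prompt only."

meta_prompt = "\n".join([role, goal, features, constraint])
print(meta_prompt)
```

Keeping the role, goal, features, and length cap as separate pieces makes it easy to tweak one part without retyping the rest.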
Extract the Prompt and Run It
When the generator chat responds, it should produce a single prompt with no surrounding commentary. If there is extra text above or below, ignore it and manually select just the prompt itself. Copy the selected text with CTRL+C. Rename the generator chat to something obvious, like PromptGenerator, so it is easy to find later.
Start a new chat and paste the prompt inside. Upload the PDF again, because the new chat has no memory of the previous session. Run the prompt. When it finishes, the chat should show extracted data and Python code that produces a ranking tool. Rename this chat as RankingToolV1 to mark it as the first version. Back in Colab, add a text cell and label it RankingToolV1 as a heading, then add a code cell and paste the generated code inside. Run it.
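The generated code will vary from run to run, but its core is usually a weighted-score ranking along these lines. This is a minimal sketch: the contractor names and scores below are made up for illustration, since the real values come from the extracted PDF data.

```python
# Minimal ranking core of the kind the generated code typically contains.
# Contractor names and scores are made up for illustration.
contractors = {
    "Contractor A": {"price": 0.9, "safety": 0.6, "schedule": 0.7,
                     "pm_experience": 0.5, "technical": 0.6},
    "Contractor B": {"price": 0.6, "safety": 0.9, "schedule": 0.8,
                     "pm_experience": 0.9, "technical": 0.8},
    "Contractor C": {"price": 0.8, "safety": 0.7, "schedule": 0.9,
                     "pm_experience": 0.6, "technical": 0.7},
}

def rank(weights):
    """Return contractor names sorted by weighted score, best first."""
    def score(row):
        return sum(weights[k] * row[k] for k in weights)
    return sorted(contractors,
                  key=lambda name: score(contractors[name]),
                  reverse=True)

# Default view: price dominates, matching the chart's default state.
print(rank({"price": 1.0, "safety": 0.0, "schedule": 0.0,
            "pm_experience": 0.0, "technical": 0.0}))
# → ['Contractor A', 'Contractor C', 'Contractor B']
```

Everything else in the generated notebook, including the chart and the sliders, is a wrapper around a function like `rank`.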
Handle the Expected Problems
Sometimes the first run works cleanly. Sometimes it does not. A common failure is that the sliders appear but the chart does not, usually because the code uses advanced charting components that Colab does not render natively. If that happens, the cleanest path is not to debug the code directly, but to go back to the PromptGenerator chat and ask it to update the prompt.
A useful update reads, "Update the prompt to not use advanced charting tools. Instead, build it so the sliders simply redraw an image of the graph every time they are moved, ensuring it works immediately in a standard Colab notebook." That constraint tells the generator to produce a prompt that leads to simpler, more reliable code.
Select only the new prompt, copy it, and paste it into a fresh chat. Attach the PDF again and run the prompt. If everything goes well, the ranking tool now renders a chart that updates when the sliders move. Add a new text cell in Colab labeled RankingToolV2 as a heading, paste the new code, and run it. Adjust the sliders to see how contractors reorder as weights change. When experience and technical scores are prioritized, certain contractors win. When price and schedule matter more, different contractors rise to the top. That kind of interactivity is exactly what makes the tool useful in client conversations.
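In Colab, the "redraw an image" approach usually comes back as ipywidgets sliders driving a plain matplotlib bar chart that is rebuilt from scratch on every change. The sketch below shows the pattern with made-up scores and a reduced set of weights; the real generated code will use the full criteria and the extracted PDF data.

```python
import matplotlib.pyplot as plt

# Made-up scores for illustration; real values come from the extracted PDF.
contractors = {
    "Contractor A": {"price": 0.9, "safety": 0.6, "technical": 0.6},
    "Contractor B": {"price": 0.6, "safety": 0.9, "technical": 0.8},
    "Contractor C": {"price": 0.8, "safety": 0.7, "technical": 0.7},
}

def redraw(price=1.0, safety=0.0, technical=0.0):
    """Recompute weighted scores and draw the whole bar chart again."""
    weights = {"price": price, "safety": safety, "technical": technical}
    scores = {name: sum(weights[k] * row[k] for k in weights)
              for name, row in contractors.items()}
    ordered = sorted(scores, key=scores.get, reverse=True)
    fig, ax = plt.subplots()
    ax.bar(ordered, [scores[n] for n in ordered])
    ax.set_ylabel("Weighted score")
    plt.show()
    plt.close(fig)
    return ordered  # returned so the current ranking is easy to inspect

# Inside a notebook, sliders call redraw() on every movement.
try:
    from ipywidgets import interact, FloatSlider
    get_ipython()  # raises NameError outside a notebook session
    interact(redraw,
             price=FloatSlider(min=0, max=1, step=0.1, value=1.0),
             safety=FloatSlider(min=0, max=1, step=0.1, value=0.0),
             technical=FloatSlider(min=0, max=1, step=0.1, value=0.0))
except (ImportError, NameError):
    redraw()  # fall back to a single static draw outside Colab
```

Redrawing the whole figure is less elegant than updating a live chart object, but it avoids the rendering components that Colab does not support natively, which is exactly why the updated prompt asks for it.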
Polish the Appearance with Another Iteration
With the mechanics working, the next round can focus on visual polish. Head back to the PromptGenerator chat for one more update. Prompt generators can get confused after several rounds, so this is usually the last safe iteration before starting a new generator from scratch. A short update in plain language is fine. Ask for the bars to be shades of pastel colors with rounded edges, kept simple overall.
Run the update and copy the new prompt exactly as before. Open a new chat, paste the prompt, attach the PDF, and run it. When it finishes, copy the new code back into Colab. Add a text cell labeled RankingToolV3, paste the code, and run it. The chart should now use pastel colors, show rounded edges on the bars, and still respond to slider adjustments. The ranking updates as weights change, which gives users a hands-on way to compare contractors without having to rewrite the underlying logic.
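The styling request tends to come back as matplotlib code along these lines. The pastel hex values and the rounded-corner trick, which swaps each bar's rectangle for a `FancyBboxPatch`, are one plausible implementation rather than the exact code the chat will produce.

```python
import matplotlib.pyplot as plt
from matplotlib.patches import FancyBboxPatch

# Made-up names and scores for illustration.
names = ["Contractor B", "Contractor C", "Contractor A"]
scores = [0.85, 0.74, 0.62]
pastels = ["#aec6cf", "#ffb3ba", "#b5ead7"]  # illustrative pastel palette

fig, ax = plt.subplots()
bars = ax.bar(names, scores, color=pastels)

# Replace each rectangle with a round-cornered patch of the same geometry.
for bar in list(bars):
    fc = bar.get_facecolor()
    x, y = bar.get_x(), bar.get_y()
    w, h = bar.get_width(), bar.get_height()
    bar.remove()
    ax.add_patch(FancyBboxPatch((x, y), w, h,
                                boxstyle="round,pad=0,rounding_size=0.05",
                                facecolor=fc, edgecolor="none"))
ax.set_ylabel("Weighted score")
```

In the generated tool this styling sits inside the redraw function, so the pastel, round-edged bars reappear every time a slider moves.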
Staying Organized Is Half the Game
Organization is what keeps this workflow sustainable. A few habits go a long way when iterating across chats and code cells:
- Rename every chat to describe what it does, like PromptGenerator, RankingToolV1, RankingToolV2, and RankingToolV3.
- Use text cells in Colab as headings so each version of the tool has a clear label in the notebook.
- Start a fresh chat whenever the generator starts producing confused or contradictory responses.
- Keep the PDF attachment step explicit in every new chat, because files do not carry between sessions.
Those simple conventions make it much easier to return to the workflow weeks later, share it with a teammate, or adapt it for a different project.
Why This Technique Matters
Even if the final chart is not exactly what you pictured, something bigger has happened. You wrote working Python code without ever touching the code directly. You used AI to generate a technical prompt, iterated on that prompt in plain language, and produced a tool your clients can actually use. That is a meaningful shift in what a non-developer can build in an afternoon.
Layered prompting turns a general-purpose chatbot into a builder of technical tools. Set up a prompt generator with a clear goal and features, extract and run the generated prompt in a fresh chat, and iterate by updating the generator in plain language when something does not work. Stay organized with versioned chat names and labeled Colab cells, and know when to start fresh rather than push a tired conversation further. The result is an interactive ranking chart that would once have required a developer, produced instead by a well-structured prompt and a little patience.