Writing Python chart code with AI is one of the most approachable entry points into coding for AEC professionals, because the goal is concrete, the feedback is immediate, and the troubleshooting process itself becomes a useful learning experience.
- A role, task, and format prompt produces cleaner Colab-compatible code than a generic request.
- Explicit constraints, such as rendering a visual chart rather than a data table, prevent common drift.
- Iterative error handling with the AI turns frustrating syntax problems into short, solvable exchanges.
This lesson is a preview from our AI Workflows for AEC Professionals Course Online. Enroll in a course for detailed lessons, live instructor support, and project-based training.
The goal of this workflow is to generate an interactive stacked bar chart that compares contractor costs from a bid tabulation PDF, with the highest guaranteed maximum price on the left and the lowest on the right. Getting there requires a careful prompt, a few constraints, and a bit of patience for the inevitable error messages that show up the first time someone runs AI-generated code.
Write the Initial Prompt
Start a new chat in your chatbot and structure the prompt using role, task, and format. The role is an AEC data expert. The task is to write clean Google Colab Python code. Calling out Colab is useful because Colab ships with a large set of pre-installed Python libraries, and specifying it encourages the model to choose functions that will run without extra installation steps.
Next, name the library you want the code to use. Plotly is a strong choice for interactive charts, and it is already available in Colab. Ask for an interactive stacked bar chart that compares contractor costs, ordered from highest guaranteed maximum price to the lowest. Then tell the model where to find the data. Attach the APH bid tabulation PDF, and instruct the code to extract the financial tabulation section directly from the file. Finally, add a constraint: ensure the code explicitly renders a visual chart, not just a data table. Without that line, the model sometimes defaults to returning raw numbers, which is not what you asked for.
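Assembled, the full prompt might read something like the following. The exact wording is flexible, and the file name is whatever your bid tabulation PDF is called:

```text
You are an AEC data expert. Write clean Google Colab Python code that uses
Plotly to create an interactive stacked bar chart comparing contractor
costs, ordered from the highest guaranteed maximum price on the left to
the lowest on the right. Extract the financial tabulation section directly
from the attached APH bid tabulation PDF. Ensure the code explicitly
renders a visual chart, not just a data table.
```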
Run the Generated Code
Once the prompt is submitted, the chatbot produces Python code that imports Plotly, pulls in supporting libraries like pandas when needed, and extracts the required data from the attached PDF. The comments in the code, marked with the hash symbol, explain what each section is doing. Those comments are helpful for anyone who wants to understand the code rather than just run it.
Copy the code using the chatbot's copy button. If there is no copy button, select the whole block carefully, making sure to capture any leading whitespace and punctuation. A missing space can cause a syntax error that is hard to track down. Paste the code into a Colab cell and hit play. If the run succeeds, scrolling down reveals the stacked bar chart, ordered largest to smallest. Colors and stacking might vary between runs, but the shape is reliable.
Intentionally Break the Code
Errors are part of the process, so it helps to practice troubleshooting before they appear in real work. Ask the chatbot to rewrite the code and purposefully include some syntax errors. The chatbot will happily comply, which gives you a safe sandbox for the troubleshooting loop. Copy the broken code, select everything in your Colab cell with CTRL+A, delete it, and paste the new version. Run it, and Colab will error out immediately.
Troubleshoot with the Chatbot
The key to working with AI-generated code is treating errors as prompts. When Colab shows an error message, the important part is usually the error type and the line number. Copy the relevant portion, go back to the chatbot, and use a short script that keeps the conversation productive. A useful pattern looks like this:
- Thank the chatbot for the previous code, which keeps the context polite and positive.
- Paste the error message in quotes so the chatbot can parse it accurately.
- Ask the chatbot to rewrite the code so it runs error free.
- Run the new code in Colab and repeat if a new error appears.
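A follow-up message built from that pattern might look like this, with the quoted error text being whatever Colab actually reported:

```text
Thanks, that code was helpful! When I run it in Google Colab I get this
error: "SyntaxError: invalid syntax (line 14)". Please rewrite the code
so it runs error free in Colab.
```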
Sometimes the chatbot resolves the problem in one pass. Other times it produces a version that fixes the first error and surfaces a second one, because Python only reports the first syntax error before stopping. That is why troubleshooting can take a few rounds. Copy each new error, return to the chatbot, and ask for a clean fix. After a couple of cycles, the code usually runs cleanly again.
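You can see this first-error-only behavior directly. Python's parser stops at the first syntax problem it reaches, so a snippet with several mistakes still produces a single report, and fixing that one surfaces the next on a later run. A small contrived demonstration:

```python
import ast

# Two separate syntax problems, but the parser reports only the first
# one it reaches -- repairing it will reveal the second on the next run.
broken_code = """
prices = [4200000, 3900000   # missing closing bracket
print("total:" total)        # missing comma as well
"""

try:
    ast.parse(broken_code)
except SyntaxError as err:
    # Only one error is reported, no matter how many exist.
    print(f"line {err.lineno}: {err.msg}")
```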
When to Start Fresh
Iteration has limits. The more times a chat asks for fixes, the more context accumulates, and eventually the chatbot starts producing confused or contradictory output. If the fixes stop working, the cleanest move is to start a new chat entirely. Paste the original prompt, re-upload the PDF, and begin again with a clean slate. This is the same principle from earlier lessons, and it applies to code just as much as to written work.
When asking for a final repair, add a bit of emphasis. A phrase like "the code still has errors, such as:" followed by the quoted error message, then "please rewrite the code and repair all errors so it runs in Google Colab error free!", exclamation point included, tends to produce a more careful response. The resulting code usually renders the intended chart without further intervention.
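Put together, that final repair message might read as follows, where the quoted error is whatever Colab last reported:

```text
The code still has errors, such as: "NameError: name 'fig' is not
defined". Please rewrite the code and repair all errors so it runs in
Google Colab error free!
```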
Live with the Tradeoffs
Even when the code works, the output may not be exactly what you wanted. The troubleshooting passes might drop some of the labels or change the colors, which is a tradeoff. That is fine for a first pass. The chart exists, it ranks contractors from largest GMP to smallest, and it can be refined with additional prompts later. For a first introduction to AI-assisted coding, the goal is not a polished deliverable. It is a working chart that came from a clear prompt, a set of constraints, and a few rounds of productive troubleshooting.
Generating chart code with AI works best when the prompt names the environment, the library, and the data source, and when it includes a constraint that forces a visual output. Expect errors on the first run and treat them as new prompts rather than roadblocks. When the chat accumulates too much context, start fresh rather than pushing through. Used this way, AI becomes a genuinely useful partner for building data visualizations in Colab, and the troubleshooting loop itself teaches the fundamentals of working with code.