
I use the free versions of LLMs such as ChatGPT, Claude, and Gemini, which impose limits on a per-session basis. These services calculate the limits through some combination of request counts and token usage.

In an earlier post, I offered some general guidelines for increasing prompt efficiency with the goal of conserving tokens. For this project, I take token conservation further by building an HTML form that helps organize the initial LLM prompt. The tool generates a JSON file to use as input for the LLM. The goal is to reduce the overall number of input tokens by streamlining the query into a clear set of prompt instructions.
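
As a rough before-and-after comparison (an invented example, not actual output from the tool), here is the same request as free-form prose and as the organizer's structured fields:

"Hi! I have some sales data and I was wondering if you could maybe take a look and tell me, as if you were a data analyst, what the main trends are? Keep it short, like bullet points, please."

{
    "role": "You are a data analyst.",
    "task": "Identify the main sales trends.",
    "format": "Short bullet points."
}

In the structured version, every field carries signal; the conversational filler never reaches the model.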

Initially, I used highly structured XML as the output container but thought better of it. Instead, I went with semi-structured JSON because its syntax uses fewer characters, and therefore fewer tokens. This project also gave me another opportunity to use vibe coding as a building aid, which I'll write about in a future post.
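
To make the size difference concrete, here is the same field expressed both ways (a rough illustration; actual token counts depend on each model's tokenizer):

<prompt>
  <role>You are an experienced data analyst.</role>
</prompt>

{ "role": "You are an experienced data analyst." }

XML repeats every tag name as an opening and closing pair, while JSON states each key once, and those saved characters add up across a long prompt.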

For your own use, you'll find the LLM Prompt Organizer on GitHub.

LLM Session Limits

This table contains approximate usage limits for non-subscription users.

| LLM     | Key Usage Limits                                     | Approx. Token / Context Window      |
|---------|------------------------------------------------------|-------------------------------------|
| ChatGPT | Limited GPT-4o usage per ~5-hour window              | ~128k tokens (input + output)       |
| Claude  | Message/session limit, resets ~every 5 hrs           | ~200k tokens (input + output)       |
| Gemini  | ~2 req/min, ~50 req/day (API); ~5 prompts/day (app)  | Up to ~1M input / 65K output tokens |

Why Use This Tool?

The Problem with Unstructured Prompts

A free-form prompt buries the role, context, task, and constraints in conversational prose. That spends tokens on filler words and makes it easy to leave out elements the LLM needs to produce a useful answer.

The Solution: Semi-structured Prompts

The LLM Prompt Organizer helps you create reusable prompts by:

  1. Separating concerns - Breaks your prompt into logical components (role, context, task, format, etc.).
  2. Ensuring completeness - The visual interface reminds you of all the elements that make prompts effective.
  3. Enabling reusability - Saves JSON files as templates for similar tasks (see the sketch after this list).
  4. Improving clarity - JSON's semi-structured format helps the LLM understand the request.
  5. Supporting iteration - You can easily modify specific sections without rewriting everything.
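
For example, a saved template can serve a follow-up request by changing only the "task" value and leaving everything else intact (an illustrative sketch; the field names match the sample output later in this post):

{
    "role": "You are an experienced data analyst specializing in retail performance.",
    "task": "Compare Q4 2024 sales against Q3 2024 and flag any regional declines.",
    "format": "Present your findings as three concise bullet points."
}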

Quick Start

  1. Download the prompt-form-output json-1.html file from GitHub.

  2. Open the form in a browser.

    [Screenshot: the Prompt Organizer form]

  3. If you don't need customizations, just fill in the default fields.

  4. Export your prompt by clicking Copy JSON or Download JSON.

  5. Enter the JSON into the chatbot's query field, either by pasting the text or by dragging in the downloaded JSON file.

Customization Options

JSON Output Sample

Your generated JSON will be similar to this:

{
    "instructions": "Process and respond to this prompt.",
    "role": "You are an experienced data analyst specializing in retail performance.",
    "context": "You are analyzing Q4 2024 sales data for a mid-sized retailer operating in North America.",
    "task": "Identify the top three sales trends and provide insight into what factors may have contributed to these patterns.",
    "format": "Present your findings as three concise bullet points, each with one supporting sentence.",
    "examples": "- Example Trend: Seasonal spike in apparel sales due to holiday promotions.",
    "constraints": "Focus only on quantitative insights supported by data. Avoid speculation not backed by sales metrics.",
    "data": "region,sales,transactions North America,1200000,32000 Europe,900000,28000 Asia,1100000,30000",
    "audience": "Retail operations managers looking for actionable insights to guide Q1 2025 planning."
}

Note: The instructions key/value pair tells the LLM what to do with the JSON content. It is included in the JSON output automatically. If you'd like to revise the value, change line 502 of the prompt-form-output json-1.html file.
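
I haven't reproduced the form's source here, but the change amounts to finding the hard-coded default string and replacing it, along these lines (a hypothetical sketch, not the file's actual contents):

// Hypothetical sketch -- the real code at line 502 of
// prompt-form-output json-1.html will differ.
const DEFAULT_INSTRUCTIONS = "Process and respond to this prompt.";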

