werfish/Context-Lang

Context: AI-Powered Code Generation

Context is an AI code preprocessor and generator designed to automate a chat-driven coding workflow, enabling in-code prompting and output via ContextLang tags embedded in code comments.

With Context, you can:

  1. Streamline web searches during development: No more pausing every few minutes to look up minor details you don't remember. Writing code with ContextLang allows surgical changes to your code based on descriptions of code steps. Let the AI work out the details, and your time spent on web searches during development will drop significantly.

  2. Automate chat-based code modifications: Say goodbye to the time-consuming task of copying and pasting code between your editor and an LLM chat UI. Save descriptions as context variables and prompts with ContextLang, and reuse both just like you would code.

  3. Seamlessly integrate and use: Context is easy to install, and ContextLang is non-invasive, residing within code comments. ContextLang is language-agnostic and fits into any programming language and codebase, large or small. Getting started with Context is easy, and integrating it into your existing work requires minimal effort.

  4. Save on cost and time: Context keeps generation scoped to explicit prompt/output tags, reducing repetitive manual loops and keeping model usage focused.

The Developer is dead, long live the AI augmented cyberpunk developer. The CyDeveloper :)

The inspiration for this quote comes from Angel AI.

Table of Contents

  1. Introduction
  2. Context Description
  3. Prerequisites
  4. Installation
  5. CLI Reference
  6. Getting Started
  7. File Template
  8. Best Practices
  9. Features/Specification V1 (CURRENT)
  10. Features/Specification V2 (TO DO)
  11. Features/Specification V3 (TO DO)
  12. Documentation
  13. Contributing
  14. License
  15. ROADMAP

Context Description

Context is a programming tool designed to harness the capabilities of AI to enhance software development. It parses ContextLang tags, treats contextual descriptions as variables and prompts as functions, orders prompt execution by dependencies, and sends prompt payloads to an OpenRouter-backed generation pipeline. This design allows for effective use of AI capabilities, provided the instructions are clear and context is well-described.

Context aims to bridge the gap between manual copy/paste prompt workflows and practical in-editor generation. It focuses on making day-to-day coding tasks and code refinement faster while keeping changes scoped to explicit output locations.

It's important to clarify that Context is not an AI Assistant or an Autonomous Cognitive Entity. It won't ask any questions, work out a design for a project, or carry out reasoning tasks. These elements are left to the human. Context strictly adheres to generating or changing small pieces of code. It effectively replicates the workflow of a chat frontend, eliminating the need for developers to manually copy code back and forth and rewrite details or descriptions. As a result, it significantly reduces the manual workload by automating code generation for defined sections.

Goal

The ambition behind the Context project is to integrate AI into the coding process to make it more efficient and less time-consuming. Context's primary aim is to streamline the initial development phase and make quick fixes more efficient, using the power of AI to achieve these goals. While Context is a powerful tool, it is not a substitute for in-depth knowledge of programming. Developers must still understand their code's logic and thoroughly review all AI-generated code to ensure its correctness and efficiency.

ContextLang

ContextLang is a domain-specific, declarative language created to interact with the Context tool. Inspired by HTML in its syntax, it's mainly used within code comments but isn't strictly limited to them. ContextLang is employed to structure AI prompts, define code context, and mark areas designated for AI-generated code. Its flexibility and simplicity make it a convenient tool that can be adapted across different programming languages and methodologies.

Prerequisites

Before installing ContextLang, ensure you have the following:

  • Python (3.10+) installed on your system.
  • An account on OpenRouter.
  • An OpenRouter API key.
  • Supported platforms: Windows, macOS, Linux.

Installation

Follow these steps to install ContextLang:

  1. Install the ContextLang package using pip:
pip install ContextLang
  2. In the root folder of your project, create a .env file with the following content:
CONTEXT_CONFIG_Open_Router_Api_Key=<Your OpenRouter API Key here>

Remember to replace <Your OpenRouter API Key here> with your actual API key.

  3. To use Context in your project, navigate to the base folder of your project and run the Context command:
Context

If there are no errors, Context is working correctly and is ready for use in your project.

You can also provide a key at runtime with:

Context --openrouter_key <YOUR_OPENROUTER_KEY>

Remember, sensitive information like API keys should not be committed to version control. Be sure to add your .env file to .gitignore (or the equivalent for your VCS).

CLI Reference

Context [--filepath <path>] [--parser] [--mock-llm] [--openrouter_key <key>] [--model <model>] [--debug] [--log]
Context clear [--filepath <path>] [--debug] [--log]
  • --filepath <path>: process only a specific file or directory.
  • --parser: run parser/validation only (skip generation).
  • --mock-llm: mock prompt responses (test mode; skips API key requirement).
  • --openrouter_key <key>: override the key from .env.
  • --model <model>: choose a supported OpenRouter model.
  • --debug: enable debug logging output.
  • --log: write logs to timestamped files in Context_Logs/.

Models

  • Default model: openai/gpt-5.2
  • Currently supported models:
    • openai/gpt-5.2
    • openai/gpt-3.5-turbo

Use --model to select a supported model explicitly.

Clear Command

Use Context clear to clear generated code inside output tags while keeping the tags themselves:

Context clear
Context clear --filepath src

Behavior:

  • Clears bodies in output tags like <SomeTag> ... <SomeTag/>.
  • Preserves the opening/closing output tags.
  • Supports clearing a single file or an entire directory via --filepath.
  • Skips files with parser errors and reports them.
  • Prints a summary: files cleared, tag blocks cleared, and files skipped.
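The clearing behavior can be sketched with a regular expression. This is a simplified illustration, not the tool's actual implementation (for example, it does not preserve comment characters that sit inside the cleared block):

```python
import re

def clear_output_tags(text: str) -> tuple[str, int]:
    """Clear the body of each <Name> ... <Name/> output block,
    keeping the opening and closing tags themselves.
    Returns the cleared text and the number of blocks cleared."""
    # \w+ never matches colon-prefixed tags like <prompt:X>,
    # so only plain output tags are touched.
    pattern = re.compile(r"(<(\w+)>)(.*?)(<\2/>)", re.DOTALL)
    cleared = 0

    def repl(m: re.Match) -> str:
        nonlocal cleared
        cleared += 1
        return m.group(1) + "\n" + m.group(4)

    return pattern.sub(repl, text), cleared
```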

Getting Started

The example provided below demonstrates the usage of ContextLang in a Python file to write a simple calculator module.

First, we'll provide a global context to describe the purpose of the file. Global Context will be injected into every prompt of the file (EXPERIMENTAL):

#<context>This file should contain basic arithmetic functions for a and b.<context/>

Next, we will utilize a context variable called "TEMPLATE" to simplify the process of instructing the AI on what we want to achieve. For instance, we'll create a template for an 'addition' function, which the AI can reference when generating other arithmetic functions.

#<context:TEMPLATE>
def add(a,b):
	return a + b
#<context:TEMPLATE/>

Now, using a prompt tag, we instruct the AI to create three additional functions for multiplication, division, and subtraction. We provide the previously created template by using the TEMPLATE context variable as a context to guide the AI's output. Keep in mind that global context will also be injected into the prompt.

#<prompt:Functions>Write subtraction, division, and multiplication functions for a and b based on the template function. {TEMPLATE} <prompt:Functions/>

To indicate where the Functions prompt should write its output, we use the name of the prompt enclosed in {}. This is a one-off operation: the output from the prompt will overwrite the following comment in the Python file when we run Context:

#{Functions}

For a more scratchpad-like experience, where a prompt can be iterated on, the Functions output variable can also be used as a tag. The tag itself will not be overwritten; only the content between the opening and closing tags is replaced, allowing you to build up the prompt until it produces the wanted code.

#<Functions>
#<Functions/>

The Functions variable can also be used as context in the prompt while it serves as an output tag, allowing modification of the produced or existing code.

#<prompt:Functions>Please correct the multiply function in the code, it should multiply instead of subtract. {Functions} <prompt:Functions/>
#<Functions>
def multiply(a,b):
	return a - b
#OTHER FUNCTIONS....
#<Functions/>

Please note that ContextLang tags do not need to start with #. ContextLang works with most programming languages' comment characters. Context variables and prompts can also be used in .txt files.

Next, we run ContextLang by executing the command Context in the base directory of our project. If no errors are thrown, then ContextLang is working as expected, and the #{Functions} line in our Python file has been replaced with the output from the prompt.

Context

If you want to run Context on a specific file or directory, use the --filepath argument.

The command below works across Windows, macOS, and Linux:

Context --filepath src

Importing another file as a context variable

The contents of a file can be used like any other context variable. Relative paths are resolved from the directory of the file containing the tag; absolute paths are used unchanged.

Syntax Example:

<file:TABLE_SCHEMA>schema_example.csv<file:TABLE_SCHEMA/>

Python file example:

#<file:TABLE_SCHEMA>schema_example.csv<file:TABLE_SCHEMA/>
# <prompt:PANDAS_CODE>
# Please write a function which takes in a path to a csv file and name as arguments 
# and filters the csv file by the name and then prints the dataframe. Use the provided schema with several rows of data as examples.
# {TABLE_SCHEMA}
# <prompt:PANDAS_CODE/>

# <PANDAS_CODE>
#
# <PANDAS_CODE/>

Importing a specific context variable existing inside another file

Context variables can even be declared in a .txt file. The name of the context variable inside the file must be specified. Relative paths are resolved from the directory of the file containing the tag; absolute paths are used unchanged.

Syntax:

<import:INDEX_HTML>index.html<import:INDEX_HTML/>

CSS code example:

/*<import:INDEX_HTML>index.html<import:INDEX_HTML/>*/
/*<import:NAVBAR_STYLE>Style_Context.txt<import:NAVBAR_STYLE/>*/

/* <prompt:NavbarStyle>
Please generate css styles for the navbar based on the description according to the ids provided in the HTML code.
{INDEX_HTML}
{NAVBAR_STYLE}
<prompt:NavbarStyle/>*/

/*<NavbarStyle>*/
/*Css code will be generated between the tags*/
/*<NavbarStyle/>*/

Importing all context variables from a file

They can be imported by using the import statement without a context variable name. Relative paths are resolved from the directory of the file containing the tag. Absolute paths are supported unchanged.

/*<import>index.html<import/>*/
/*<import>Style_Context.txt<import/>*/

CSS code example:

/*<import:INDEX_HTML>index.html<import:INDEX_HTML/>*/
/*<import:NAVBAR_STYLE>Style_Context.txt<import:NAVBAR_STYLE/>*/

/* <prompt:NavbarStyle>
Please generate css styles for the navbar and the list of links based on the description according to the ids provided in the HTML code.
{INDEX_HTML}
{NAVBAR_STYLE}
{LINKS_LIST}
<prompt:NavbarStyle/>*/

/*<NavbarStyle>*/
/*Css code will be generated between the tags*/
/*<NavbarStyle/>*/

/* <prompt:FooterStyle>
Please generate css styles for the footer based on the description according to the ids provided in the HTML code.
{INDEX_HTML}
{FOOTER_STYLE}
<prompt:FooterStyle/>*/

/*<FooterStyle>*/
/*Css code will be generated between the tags*/
/*<FooterStyle/>*/

Prompt Dependency Order and Output Targets

Prompt execution order is based on prompt dependencies, not only declaration order.

  • If prompt A uses {B} and {C}, then B and C are generated before A.
  • Circular prompt dependencies are invalid and will fail parsing/ordering.
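This dependency-based ordering amounts to a topological sort of the prompt reference graph. A minimal sketch using Python's standard library (illustrative, not the tool's actual code):

```python
from graphlib import TopologicalSorter, CycleError

def prompt_order(deps: dict[str, set[str]]) -> list[str]:
    """Return a generation order in which every prompt runs after the
    prompts it references; raises CycleError on circular references."""
    return list(TopologicalSorter(deps).static_order())

# Prompt A references {B} and {C}; B references {C}.
order = prompt_order({"A": {"B", "C"}, "B": {"C"}, "C": set()})
```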

You can also direct a prompt to write into a different output tag with output-target syntax:

#<prompt:C->A>Please refactor and improve this code. {A}<prompt:C->A/>
#<A>
def old_code():
    pass
#<A/>

In this example, prompt C writes into output tag A, and the current contents of <A> ... <A/> are provided as code to modify.

Output-target constraints:

  • The target output tag must exist in the same file.
  • Cross-file output writes are not supported.

Tag Validation and Path Resolution Rules

ContextLang currently supports these colon-prefixed tags only:

  • <context:...>
  • <prompt:...>
  • <import:...>
  • <file:...>

Unknown colon-prefixed tags (for example <inline:...>) raise parser errors.
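A minimal sketch of this validation rule (illustrative only, not the parser's actual code):

```python
# Tag types the parser accepts; anything else with a colon is an error.
KNOWN_TAGS = {"context", "prompt", "import", "file"}

def validate_tag(tag_name: str) -> None:
    """Raise on unknown colon-prefixed tags such as 'inline:Foo'.
    Plain output tags (no colon) pass through untouched."""
    if ":" in tag_name:
        base = tag_name.split(":", 1)[0]
        if base not in KNOWN_TAGS:
            raise ValueError(f"Unknown tag type: {base}")
```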

Path rules for <import...> and <file:...> payloads:

  • Relative paths are resolved from the directory of the file that contains the tag.
  • Absolute paths are used as-is.
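The path rules above can be sketched as follows (the function name is illustrative):

```python
from pathlib import Path

def resolve_payload_path(containing_file: str, payload: str) -> Path:
    """Resolve an <import ...> or <file:...> payload path.
    Relative paths resolve against the tag's containing file's
    directory; absolute paths pass through unchanged."""
    p = Path(payload)
    if p.is_absolute():
        return p
    return Path(containing_file).parent / p
```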

File Template

Here is a template that you can copy into your project file. Don't forget to comment it out if you are using it in a code file. ContextLang does not yet support comments of its own, so if you do not want to use a tag, you need to delete it.

Template

<file:FileContext>example.txt<file:FileContext/>
<import:VarName>file.txt<import:VarName/>
<import>Descriptions.txt<import/>

<context:SomeContext>Some description<context:SomeContext/>
<context:MultiLine>
Some description 
or a piece of code.
<context:MultiLine/>

<prompt:Main>
Prompt Goes here
{SomeContext}
{MultiLine}
<prompt:Main/>

<Main>
Code will be generated here.
<Main/>

Best Practices

  1. Be able to code it yourself: If you do not know how to code something manually, you will usually not be able to describe it accurately, and generating code in that situation is gambling.

  2. Know CTRL-Z and use git: Sometimes the generated code will be far from what is needed. You need to be able to undo with CTRL-Z or roll back to an earlier commit if a mistake is made.

  3. Use 1 instruction per prompt: Use prompts like functions and context like variables. Try to split tasks into more prompts and larger context variables into smaller ones. Deciding how much context a model needs is an art that takes practice. Smaller/cheaper models usually follow fewer detailed instructions than stronger models.

  4. Use precise domain oriented language: Be precise in descriptions. The less space AI has to make up the details, the better. If you develop a web frontend use words that you would use while communicating with other frontend developers.

  5. Split big context variables into smaller ones: If there are too many instructions in a context variable, the AI will fail to deliver all of them. It is better to have 10 context variables and 5 prompts than 3 context variables and 2 prompts.

  6. Use the programming language and library lingo: This is similar to using precise domain-oriented language, and it connects back to practice 1. Use the same terminology as the official documentation for the language and libraries you are using.

  7. Avoid coding with Context using unfamiliar libraries: This also connects back to practice 1. If you do not know the library, you will usually not be able to describe the code you want accurately, and model output quality will drop.

  8. Experiment with describing the code rather than the effect: Instead of writing that you want to "generate a blue modern navigation bar styling", try describing how to write the code: "write an id for a navbar; it should be blue, with list elements floating to the left".

Features/Specification V1 (CURRENT)

Context Variables

This feature allows users to specify a block of code as a context variable. This context can be used in subsequent prompts.

If a context is declared without a name, it is added as part of the global context for the file and will be used in all subsequent prompts.

Syntax:

#<context:ExampleFunction>
# Code block here
#<context:ExampleFunction/>

or for global context:

#<context>
# Code block here
#<context/>
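Conceptually, building a prompt payload means substituting {Name} references with context variable values and prepending the file's global context. A hedged sketch of that assembly (illustrative names, not the actual implementation):

```python
def build_payload(prompt_text: str, variables: dict[str, str],
                  global_context: str = "") -> str:
    """Assemble the text sent to the model: substitute {Name}
    references from context variables, then prepend any global
    (unnamed) context declared in the file."""
    body = prompt_text
    for name, value in variables.items():
        body = body.replace("{" + name + "}", value)
    return (global_context + "\n" + body) if global_context else body
```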

Prompts

These can be used much like functions. Context variables can only be referenced inside prompts.

Multiple context variables can be used in a single prompt. The prompt's output can then be used anywhere in the file after the declaration. Depending on the tags used, the output is either overwritten on each run or fed back in as context for refinement.

Prompt execution order is dependency-based:

  • If prompt A references {B}, prompt B is generated before A.
  • Cycles are invalid (for example A depends on B and B depends on A) and fail before generation.

Here are four examples that demonstrate the different use cases:

  1. Using output variable with {} syntax: This denotes the use of the generated code in the subsequent lines of the script. The output variable {Functions} can be referenced throughout the code after its declaration. The line containing the output variable will be overwritten with the output of the prompt. The {ExampleFunction} in the prompt is a reference to the context variable specified in feature 1.

    Syntax:

    #<prompt:Functions>Please write 3 functions for this calculator file. {ExampleFunction}<prompt:Functions/>
    #{Functions}
  2. Using output variable with <> syntax: This denotes that the code block enclosed within the <> tags will be replaced with the generated code each time the tool is run.

    Syntax:

    #<prompt:NewFunction>Please write a function that squares a number.<prompt:NewFunction/>
    #<NewFunction>
    # Code block here
    #<NewFunction/>
  3. Using output variable with <> syntax and existing code as context: This allows the existing code block to be used as context for the prompt, enabling more nuanced code modification. The existing code block within the <> tags forms part of the context for the prompt and gets replaced with the generated code each time the tool is run.

    Syntax:

    #<prompt:Code_Piece>Please modify this code to calculate the square of a number. {Code_Piece}<prompt:Code_Piece/>
    #
    #<Code_Piece>
    # Code block here
    #<Code_Piece/>
  4. Using output-target syntax with ->: This maps a prompt name to a different output tag in the same file.

    Syntax:

    #<prompt:RefineLogin->LoginHandler>Please improve this login handler. {LoginHandler}<prompt:RefineLogin->LoginHandler/>
    #<LoginHandler>
    # Existing code here
    #<LoginHandler/>

Notes:

  • Prompt outputs are only applied where one of these exists: {PromptName}, <PromptName> ... <PromptName/>, or <prompt:PromptName->TargetTag>....
  • For -> syntax, TargetTag must exist in the same file.
  • Cross-file output writes are not supported.

Features/Specification V2 (TO DO)

Support for shortened aliases

Status: Planned (not implemented in the current CLI runtime).

All tags should get a shortened two-letter version. All closing tags should use the "</>" syntax. The previous syntax will still be supported for compatibility and for users who want more readability. Syntax:

prompt declaration -            <pr:PromptName></>
context variable declaration -  <cn:VarName></>
file import -                   <fl></>
import context variable -       <im:VarName></>
import all context variables -  <im></>

Features/Specification V3 (TO DO)

Comment Code Generation

Status: Planned (not implemented in the current CLI runtime).

This feature allows users to generate code based on comments. Every block of comments (one or multiple lines without a break line) within the comment_code tags is treated as a separate prompt. Context variables can be referenced in the comments using the {} syntax. The lines below a comment until another comment or the closing tag are replaced by the code generated from the prompt.

Syntax:

#<comment_code>
# Function to calculate the sum of two numbers
# Function to calculate the division of two numbers
# Function to return a "Hello World" string
#<comment_code/>
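Splitting the region into per-block prompts could look like the sketch below. This is purely illustrative: the feature is planned and not yet implemented, and the function name is hypothetical.

```python
def split_comment_blocks(lines: list[str], comment_char: str = "#") -> list[str]:
    """Split the body of a <comment_code> region into prompts: each run
    of consecutive comment lines (no blank line between) is one prompt."""
    prompts, current = [], []
    for line in lines:
        stripped = line.strip()
        if stripped.startswith(comment_char):
            # Strip the comment marker and accumulate into the current block.
            current.append(stripped.lstrip(comment_char).strip())
        else:
            if current:
                prompts.append(" ".join(current))
                current = []
    if current:
        prompts.append(" ".join(current))
    return prompts
```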

Inline Prompts

This feature allows users to place one-off prompts inline within the code. The output from these prompts will overwrite the inline prompt itself, making it a convenient tool for single-use, immediate code generation tasks.

Syntax:

#<inline:Please write a function that squares a number.>

Documentation

A dedicated documentation site has not yet been created.

Contributing

For contributor setup, checks, and test workflow, use CONTRIBUTE.md. You can also find project plans and learning references in the ROADMAP.

License

Apache 2.0 License

